
Andrzejczak Liu Usability Stress location testing 2010

Andrzejczak & Liu: The effect of testing location on usability testing performance, participant stress levels, and subjective testing experience, pp. 1258–1266

1258 testing on-site and in-lab showed no significant difference in user-reported stress levels

Sources

Andreasen 2007: various remote testing methods


Andreasen, M.S., 2007. What happened to remote usability testing? An empirical study of three
methods. In: CHI 2007 Proceedings, Usability Evaluation, April 28–May 3, 2007.

Bruun et al. 2009: remote, asynchronous testing identifies about half the problems found by traditional usability testing;
remote asynchronous usability testing does have a time-saving element
Bruun, A., Gull, P., Hofmeister, L., Stage, J., 2009. Let your users do the testing: a comparison of three
remote asynchronous usability testing methods. In: Proceedings of the 27th International Conference
on Human Factors in Computing Systems, pp. 1619–1628.

Campbell 1988 discussion and listing of 4 elements that increase complexity of a task
Campbell, D., 1988. Task complexity: a review and analysis. Academy of Management Review 13,
40–52.

Petrie et al. 2006: remote usability testing for people with disabilities; quantitative feedback quality for
asynchronous and in-lab testing was the same, but qualitative feedback for asynchronous was poorer
Petrie, H., Hamilton, F., King, N., Pavan, P., 2006. Remote usability evaluation with disabled people.
In: Proceedings of CHI 2006. ACM Press, pp. 1133–1141.

1258
Abstract
“The present study investigated two groups of users in remote and traditional settings. Within each
group participants completed two tasks, a simple task and a complex task. The dependent measures
were task time taken, number of critical incidents reported, and user-reported anxiety score. Task
times differed significantly between the physical location condition; this difference was not meaningful
for real world application, and likely introduced by overhead regarding synchronous remote testing
methods. Critical incident reporting counts did not differ in any condition. No significant differences
were found in user reported stress levels. Subjective assessments of the study and interface also did
not differ significantly. Study findings suggest a similar user testing experience exists for remote and
traditional laboratory usability testing.”

importance of stress within the context of usability testing


“Though many studies have been performed regarding usability study performance regarding task
times, few focus on ‘‘softer” elements such as stress levels and subjective appraisals. It is possible
that altered stress states can lead to incorrect or less applicable test results. This decrease in
applicability occurs because the user does not perform his or her tasks in the same manner during the
test as they would in a familiar work environment. In addition, the presence of an observer or the
feeling of being tested may create unnecessary anxiety or pressure to perform. This is especially true
in military environments, where enlisted personnel may feel they are being assessed. The reverse
may be true if this observer or performance pressure is not present, as users in comfortable
environments may not perform the task as convincingly or with the same level of effort.”

1259
usability literature claims that usability is measurable, as does this article; this goes against the notion of
complex systems/systems thinking.

“Usability is measurable. Specifying the metrics is easy, the
measurement process is not (Nielsen, 1997). Success rates, task times, error rates, and subjective
satisfaction scores all contribute towards understanding and qualifying a usability level.”
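The four metric families named in the quote can be aggregated into a single summary per study. A minimal Python sketch of that aggregation, where the data, field names, and scoring are hypothetical illustrations rather than anything from the article:

```python
# Minimal sketch: summarizing the usability metrics named in the quote
# (success rate, task time, error rate, subjective satisfaction).
# All sample data and field names here are hypothetical.
from statistics import mean

# One record per participant-task (hypothetical sample data)
sessions = [
    {"success": True,  "time_s": 41.0, "errors": 0, "satisfaction": 6},
    {"success": True,  "time_s": 58.5, "errors": 2, "satisfaction": 4},
    {"success": False, "time_s": 90.0, "errors": 5, "satisfaction": 2},
]

def summarize(sessions):
    """Aggregate the four metric families into one summary dict."""
    return {
        "success_rate": mean(1 if s["success"] else 0 for s in sessions),
        "mean_time_s": mean(s["time_s"] for s in sessions),
        "mean_errors": mean(s["errors"] for s in sessions),
        "mean_satisfaction": mean(s["satisfaction"] for s in sessions),
    }

summary = summarize(sessions)
print(summary)
```

As the quote notes, specifying the metrics is the easy part; the hard part is the measurement process that produces records like these.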

critical incident
“Critical incidents (CIs) occur when expectations of an outcome are violated. If a user performs an
action whose actual consequences do not match that user’s expected consequences, a critical
incident has occurred. Critical incidents are context-specific. For usability study purposes, a critical
incident is an indicator of a usability issue that is either positive or negative. Designers can use this
critical incident data to make appropriate changes. The critical incident method described by Flanagan
(1954)”

perhaps critical incidents could be considered or used as a label for when tourists' expectations are not
met, or when the VX deviates from expectations. In some ways the name seems a bit over the top;
however, this could be one way to introduce some usability-based models into tourism VX work.

1260
Task Complexity
Task complexity is related to the psychological state of the individual performing the task (Campbell,
1988). User perceptions of the task directly influence perceived task complexity. However, there are
several task elements that, when present, add to perceived task complexity. Campbell lists the
following four elements that increase the complexity of a task: multiple paths to the goal; multiple
desired goals; conflicting interdependent attributes between paths; uncertainty or ambiguity of paths.
Ideally, none of these elements would be present in a perfectly usable interface. However, a lack of any
complexity would limit functionality. Attempting to provide the user with a sensible mix of related tasks
of varying complexity makes for interesting trade-off studies, and necessitates usability testing.
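Campbell's four elements can be read as a simple presence/absence checklist. A minimal Python sketch of that reading, where the one-point-per-element scoring is a hypothetical illustration, not part of Campbell's model:

```python
# Sketch: Campbell's (1988) four complexity elements as a checklist.
# The scoring scheme (one point per element present) is a hypothetical
# illustration for comparing tasks, not something Campbell proposes.
from dataclasses import dataclass

@dataclass
class Task:
    multiple_paths: bool     # multiple paths to the goal
    multiple_goals: bool     # multiple desired goals
    conflicting_paths: bool  # conflicting interdependent attributes between paths
    uncertain_paths: bool    # uncertainty or ambiguity of paths

def complexity_score(task: Task) -> int:
    """Count how many of the four elements are present (0 = simplest)."""
    return sum([task.multiple_paths, task.multiple_goals,
                task.conflicting_paths, task.uncertain_paths])

simple = Task(False, False, False, False)
complex_task = Task(True, True, False, True)
print(complexity_score(simple), complexity_score(complex_task))  # 0 3
```

A score of zero matches the "perfectly usable interface" case in the notes above; real interfaces trade some complexity for functionality.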

1262 Task complexity has significant effect on number of critical incidents reported
“Task complexity had a significant effect on number of critical incidents reported. As predicted, users
reported experiencing more critical incidents during the complex task. Critical incidents were reported
at the same task steps in both conditions; participants experienced the same difficulties regardless of
testing condition. Only 7 critical incidents were reported for the simple task, and 23 were reported for
the complex task across all participants in both conditions. Fig. 3 is the main effects plot for critical
incidents reported, depicting visually the effect of testing location and task type on CIs reported.”

1265 no significant difference between lab and remote testing on stress


“The data suggests user experience does not differ between remote and traditional testing
environments in terms of subjective ratings and stress levels. Synchronous remote and traditional
usability testing do not differ in the quality of their results. Participants seem to treat both testing
conditions with the same level of seriousness, and put forth a genuine effort to complete the usability
study.”
