
THE GEORGE WASHINGTON UNIVERSITY ISTM 6215 GROUP PROJECT

Human-Computer Interaction
UI & UX Evaluation: The DaRT Project
The Art of Science and the Science of Art: A Scientific Evaluation of Human-Centered Design

Alexander J. Singleton

May 7, 2017

The George Washington University


School of Business
A Cognitive Walkthrough for Human-Centered Design
What is the DaRT App?
Required Equipment and Platforms
Design Usability Goals (Illustration of defining usability goals)

Analysis of Design Usability Evaluation
Design = 1 | Aesthetic and Minimalist Design
Effectiveness = 1 | Match between system and the real world
Flexibility = 1 | User Control and Freedom
Efficiency = 1 | Consistency and Standards
Attitude = 1 | User Control and Freedom
Learnability = 1 | Prevention, Recognition and Recovery
Error Prevention & Visibility of System Status
Help Users Recognize, Diagnose and Recover from Errors
Recognition Rather than Recall

The Art of Science and the Science of Art: A Scientific Approach to UX Design

Appendix
User Experience Evaluation
User Experience Scenario Tests
User-Experience Scoring

Works Cited
A Cognitive Walkthrough for Human-Centered
Design
This study measures the usability of DaRT, a vehicle-maintenance application designed to deliver an efficient and effective user experience for managing vehicle maintenance electronically. The evaluation takes a hybrid, summative approach drawn from several testing methodologies examined in graduate research: multiple user types are observed across a set of base-case scenarios created to identify and benchmark the consistencies that promote ideal human-centered design, as well as any inconsistencies that would otherwise produce a less-than-ideal user experience for electronic management of vehicle maintenance. The model is defined by six qualitative attributes of measure and fourteen mathematical constraints that together determine the overall quality of the design.

What is the DaRT App?

According to the documentation, the goal of DaRT is to establish a working relationship based on trust between the customer and the auto-repair shop, accomplished by making repair information transparent and presenting car owners with all of the necessary data upfront. The product appeals to those who are otherwise unable to manage their maintenance history electronically by providing an application platform for market intelligence and knowledge-sharing that promotes transparency for the consumer in the automotive service industry.

Required Equipment and Platforms

The application prototype was generated with proto.io, a web application specializing in rapid prototyping for proof of concept; the web application generated a robust deliverable accessible via web browser. The test environment was simulated with the Google Chrome web browser (Version 58.0.3029.96) operated on a MacBook Pro, with sessions captured as screencasts. Although the subject study was operated in the aforementioned environment, the application is apparently scoped for a mobile platform, specifically iOS.
Design Usability Goals (Illustration of defining usability goals):

According to the international standard ISO 9241, Part 11, usability is "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."1 The following factors will be examined for a qualitative assessment of the application (summarized as a simple mapping in the sketch after the list):2 3

1.1.1. Design
1.1.1.1. Aesthetic and Minimalist Design
1.1.2. Efficiency
1.1.2.1. Consistency and Standards
1.1.3. Flexibility
1.1.3.1. Flexibility and Efficiency of Use
1.1.4. Effectiveness
1.1.4.1. Match between Lab and Real World
1.1.5. Attitude
1.1.5.1. User Control and Freedom
1.1.6. Learnability
1.1.6.1. Error Prevention & Visibility of System Status
1.1.6.2. Help Users Recognize, Diagnose and Recover from Errors
1.1.6.3. Recognition Rather than Recall
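
The factor list above can also be represented as a simple mapping from the Shackel-style usability attributes to the heuristics used to evaluate them, as referenced before the list. The sketch below is only an illustration of the evaluation's structure; the attribute and heuristic names are taken directly from the list.

    # Mapping of usability attributes (Shackel, 1966) to the heuristics used to
    # evaluate them in this study. Illustrative only; names mirror the list above.
    USABILITY_GOALS = {
        "Design": ["Aesthetic and Minimalist Design"],
        "Efficiency": ["Consistency and Standards"],
        "Flexibility": ["Flexibility and Efficiency of Use"],
        "Effectiveness": ["Match between Lab and Real World"],
        "Attitude": ["User Control and Freedom"],
        "Learnability": [
            "Error Prevention & Visibility of System Status",
            "Help Users Recognize, Diagnose and Recover from Errors",
            "Recognition Rather than Recall",
        ],
    }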

Analysis of Design Usability Evaluation


Usability Evaluation Goals (Evaluation of Interactive Systems / Illustration of defining usability goals): the defining characteristic of usability testing is that the test environment and the format of the testing are controlled by the evaluator.4

1. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
2. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
3. The Design Usability Goals were derived from the models examined by Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7 and "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
4. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.

In order to achieve the aforementioned, the selected users are required to register, presumably authenticate, and then sign in to a personal account (please see section 4.1 for a comprehensive examination of the DaRT user-story). Recorded observations will reveal any consistencies that promote ideal human-centered design and any inconsistencies that may otherwise result in a less-than-ideal user experience. Each of the following qualities is rated with a binary score of 1 or 0 (1 = Achieved | 0 = Not Achieved); a short sketch aggregating the six ratings appears after the Learnability section:

Design5 = 1 | Aesthetic and Minimalist Design

A tasteful but optimally efficient user experience, judged by Occam's Razor as a design pattern: all things being equal, the simplest, most elegant solution tends to be the correct one.6

The DaRT application is logically and tastefully organized according to conventional standards of design. Upon launching the app, the user is presented with Login and Register buttons, and both branches of the registration user-story flowed precisely as expected. The overall feel of DaRT is familiar but unique, and the selected color scheme can be appreciated even by color-blind subjects.

Effectiveness7 = 1 | Match between system and the real world

Did the subject complete the assigned task without assistance? If not, how much help was required in order to achieve the assigned task?8

Yes. DaRT is logically architected and systemically sound, allowing each user-story for electronically managing vehicle maintenance to be completed with the application instead of physical files and folders. Inherently, prototypes require additional development of certain features; however, test-subjects could easily predict the logic of the tasks and subtasks defining each user-story scenario.

5. Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7
6. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
7. Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7
8. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
Flexibility9 = 1 | User Control and Freedom

According to Don Norman's The Design of Everyday Things, the best solution to the problem of designing for everyone is flexibility.10 If a test-user expressed trouble, could they independently resolve the matter without guided assistance?11

In all three scenarios examined, the test-subjects could easily navigate back to the original landing page or home screen to re-orient and re-approach a task if need be.

Efficiency12 = 1 | Consistency and Standards

Regardless of the web-application framework or programming language, a quality user-interface requires modularity and consistency, from the front-end to the back-end, for an optimal user-experience. The elapsed time recorded to complete tasks is the best unit of measure.13

All three scenarios are architected and sequenced according to conventional design standards and logic. All icons are logical and predictably signify the anticipated action, supporting timely completion of the tasks in every user-story scenario.

Attitude14 = 1 | User Control and Freedom

Facial expressions, verbal comments, and questions, including tone, are recorded and captured via screencast. The user is required to perform the experiment via remote screen-share for review upon completion of the assigned experiment.15

Although DaRT is a prototype and understandably requires additional development, all user-story tasks and subtasks provided sufficient notification of available actions and easily identifiable options for alternative actions. Flexibility is an important point, and worth reiterating for error prevention: in all three scenarios, the test-subjects could easily navigate back to the original landing page or home screen and re-orient toward alternative approaches for a given user-story scenario.

9. Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7
10. Norman, D. A. (2013). The Design of Everyday Things. New York: Basic Books.
11. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
12. Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7
13. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
14. Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7
15. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.

Learnability16 = 1 | Prevention, Recognition and Recovery

If a mistake is observed or reported, the evaluator kindly asks the subject to attempt a resolution before providing additional guidance; in either case, the attempt is noted.17

Error Prevention & Visibility of System Status

The conventional use and placement of navigation signifiers afford the user a degree of familiarity to explore unknown routes within the app and the comfort of easily backtracking to the landing page if there is any sense of confusion or frustration.

Help Users Recognize, Diagnose and Recover from Errors18

A bad user-experience occurs when the outcome of an attempted action does not match the user's expectations. Reiterating the observations noted for Attitude, DaRT is indeed a prototype and understandably requires additional development.

Recognition Rather than Recall19

All user-story stages of tasks and subtasks provided sufficient notification and easily identifiable options for alternative actions. If the user experienced a different outcome than was initially anticipated before initiating a given action, the situation was easily understood and backtracking was readily available; this is very impressive.

16. Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7
17. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
18. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
19. "The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
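
Each of the six attributes above received a rating of 1. As referenced earlier, these binary ratings aggregate into the overall usability figure reported in the Appendix; the short sketch below illustrates only that arithmetic and is not part of the DaRT deliverable.

    # Binary usability ratings for the six attributes (1 = Achieved, 0 = Not Achieved),
    # mirroring the User-Experience Scoring table in the Appendix.
    attribute_scores = {
        "Design": 1,
        "Effectiveness": 1,
        "Flexibility": 1,
        "Efficiency": 1,
        "Attitude": 1,
        "Learnability": 1,
    }

    usability_score = sum(attribute_scores.values()) / len(attribute_scores)
    print(f"Overall usability score: {usability_score:.2%}")  # -> 100.00%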
The Art of Science and the Science of Art: A
Scientific Approach to UX Design
The 14 constraints below define a multiple linear regression yielding a value that summarizes the overall quality of the user-interface design; a short scoring sketch follows the list and its notes.

1. Balance: the distribution of optical weight in a picture; optical weight refers to the
perception that some objects appear heavier than others: larger objects appear heavier,
whereas small objects are lighter.20

a. Score = 1 | DaRT tastefully exhibits balance in multiple areas- from the landing
page to the icons contained within the landing-page/home-screen of the
application.

2. Equilibrium: the stabilization, a midway center of suspension. Equilibrium on a screen is accomplished through centering the layout itself; the center of the layout coincides with that of the frame.21

a. Score = 1 | The iOS environment layout ensures optimal equilibrium throughout the app, so each page renders a precise presentation of forms and icons.

3. Symmetry: the extent to which the screen is symmetrical in three directions: vertical,
horizontal, diagonal.22

a. Score = 0 | The application's equilibrium inherently respects vertical symmetry throughout each user-story; however, a few pages do not respect horizontal symmetry. Consider adding a footer of some type to create horizontal symmetry.

4. Sequence: a measure of how information in a display is ordered in relation to a reading pattern that is common in Western cultures. Sequence in design refers to the arrangement of objects in a layout in a way that facilitates movement through the information displayed.23

a. Score = 1 | The DaRT application elegantly arranges objects in a layout that facilitates movement through the information displayed; there are no surprises encountered throughout the logical flow of each user-story.

20. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002). Evaluating Interface Esthetics. Knowledge and Information Systems, 4(1), 46-79. doi:10.1007/s10115-002-8193-6
21. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
22. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
23. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)

5. Cohesion: a measure of how cohesive the screen is according to aspect ratios, which refer to the relationship of width to height.24

a. Score = 1 | The application exhibits similar aspect ratios across all visual fields displayed for each user-story.

6. Proportion: the comparative relationships that should be considered for major components of the screen, including windows and groups of data and text. What constitutes beauty in one culture is not necessarily considered beautiful by another culture; although beauty may very well be in the eye of the beholder, some proportionalities transcend culture, space and time, many of which are enjoyed in abundance today.25

a. Score = 1 | The DaRT app exhibits square and double-square elements positioned relative to the center of the application.

7. Simplicity: directness and singleness of form, a combination of elements that results in ease of comprehending the meaning of a pattern. Simplicity in screen design is achieved by optimizing the number of elements on a screen and minimizing the alignment points.26

a. Score = 1 | DaRT is a utilitarian application, one obviously built with simplicity in mind; no noteworthy confusion was observed during any of the user-story scenarios tested.

8. Density: the extent to which the screen is covered with objects. Density is achieved by restricting screen density levels to an optimal percentage.27

a. Score = 1 | No excessive density or crowding of objects was noted within the application; well done!

9. Regularity: a uniformity of elements based on some principle or plan. Regularity in screen design is achieved by establishing standard and consistently spaced horizontal and vertical alignment points for screen elements, and by minimizing the alignment points.28

a. Score = 1 | A consistent header displays the navigation signifiers throughout each user-story of the DaRT app.

24. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
25. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
26. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
27. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
28. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)

10. Economy: the careful and discreet use of display elements to get the message across as simply as possible. Economy is achieved by using as few sizes as possible.29

a. Score = 1 | Economy is the very essence of Occam's Razor as a design pattern: all things being equal, the simplest solution tends to be the right one, and so it is with DaRT; the design pattern maintains a consistent discipline throughout the application.

11. Homogeneity: a measure of how evenly the objects are distributed among the
quadrants.30

a. Score = 0 | Homogeneity is apparent on the landing page only, although there does appear to be accommodation for any additional objects or elements that may be added in the next development iteration.

12. Rhythm: the extent to which the objects are systematically ordered and arranged, referring to regular patterns of changes in the elements; this order with variation helps to make the appearance exciting through variation of the arrangement, dimension, number and form of the elements. The extent to which rhythm is introduced into a group of elements depends on the complexity (the number and dissimilarity of the elements).31

a. Score = 1 | The static positioning of all elements relative to the navigation bar creates a feeling of rhythmic flow while toggling through each stage of a user-story.

13. Order & 14. Complexity: the measure of order is written as an aggregate of the above measures for a layout. The opposite pole on the continuum is complexity; the scale created may also be considered a scale of complexity, with extreme complexity at one end and minimal complexity (order) at the other.32

a. Score = 1 | The aforementioned units of measure dictate the scoring for this measure, which means, in this case, the application was appropriately ordered.

29. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
30. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
31. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
32. Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002)
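
As promised above, the thirteen aesthetic metrics are scored 1 or 0 and averaged into the figure reported in the Appendix (Order & Complexity counted once). The sketch below shows only that arithmetic; Ngo, Teo, and Byrne (2002) define continuous formulas for each measure, which are not reproduced here.

    # Binary scores assigned to the Ngo, Teo & Byrne (2002) aesthetic measures
    # in this evaluation (see the User-Experience Scoring table in the Appendix).
    aesthetic_scores = {
        "Balance": 1, "Equilibrium": 1, "Symmetry": 0, "Sequence": 1,
        "Cohesion": 1, "Proportion": 1, "Simplicity": 1, "Density": 1,
        "Regularity": 1, "Economy": 1, "Homogeneity": 0, "Rhythm": 1,
        "Order & Complexity": 1,
    }

    aesthetic_score = sum(aesthetic_scores.values()) / len(aesthetic_scores)
    print(f"Aesthetic design score: {aesthetic_score:.2%}")  # -> 84.62%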
Appendix

User Experience Evaluation

1. Observe
a. Create goal via user-story.
b. Itemize the tasks required to achieve the user-story goal.
c. Establish Task-Time-Target Benchmark (T^3)

2. Orient
a. Observe and record the elapsed time required for the user to attempt each task.
b. Regardless of success or failure, record the time, and then ascertain user feedback for each task: mark 1 for Not Frustrated, 0 for Frustrated.
c. Record and compile the user data from the test session into a spreadsheet to calculate average completion times and to total the Frustration column (see the sketch following this procedure).

3. Decide
a. Compare study findings to T^3:
b. Is the goal-benchmark practical?
c. Should tasks be revised?

4. Act (To iterate or not to iterate, that is the question)
a. Iterate: if observations are deemed unacceptable, iterate.
i. Deductively begin with the identified user-story goal deemed
unacceptable.
ii. Diagnose and assess the task precluding an optimal experience.
iii. Examine and determine if there is UI optionality in order to facilitate
satisfactory completion of user-story.
iv. Map all UI-options to identify critical-path to complete user-story and test
each option via scoring methodology.
1. If there are no UI-options available to facilitate satisfactory
completion of task, consider overhauling user-story goal.
2. Compare results- iterate or proceed to ship.
b. Ship!
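
The recording and comparison steps of the procedure above can be expressed as a short script, as referenced in step 2c. The sketch below is hypothetical: the T^3 benchmark values and the simple iterate-or-ship rule are illustrative assumptions, while the elapsed times mirror the scoring table later in the Appendix.

    # Hypothetical record of one test session: per-scenario elapsed time (seconds)
    # and a binary frustration flag (1 = Not Frustrated, 0 = Frustrated), compared
    # against an assumed Task-Time-Target (T^3) benchmark for each scenario.
    T3_BENCHMARK_SECONDS = {"Scenario 1": 160, "Scenario 2": 180, "Scenario 3": 210}  # assumed targets

    observations = [
        # (scenario, elapsed_seconds, not_frustrated)
        ("Scenario 1", 150, 1),
        ("Scenario 2", 180, 1),
        ("Scenario 3", 200, 1),
    ]

    for scenario, elapsed, not_frustrated in observations:
        within_target = elapsed <= T3_BENCHMARK_SECONDS[scenario]
        decision = "ship" if within_target and not_frustrated else "iterate"
        print(f"{scenario}: {elapsed}s (target {T3_BENCHMARK_SECONDS[scenario]}s) -> {decision}")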
User Experience Scenario Tests

1. Test: User-Story 1: Easy - Capture Consumer Data


a. Register by entering the following data:
i. First Name
ii. Last Name
iii. Address
iv. Age
v. Male/Female
vi. Username
vii. Password
viii. Register Vehicle
ix. Vehicle Nickname
x. Year
xi. Make
xii. Model

2. Test: User-Story 2: Capture Maintenance Report


a. Capture Maintenance Report
i. Select Maintenance Option
ii. Select Desired Option
iii. View Historic Report
iv. Enter Newly Completed Maintenance Task
v. Print Maintenance Report as Needed

3. Test: User-Story 3: Run Diagnostic Report


a. Select Diagnostic Option
i. Pair On-Board Diagnostic (OBD) Device to the Vehicle
ii. View Report.
iii. Select Repair Shops/Dealers Option
iv. View Detailed Repair Shops/Dealers Listing with Cost and Rating
Information.
v. Select Desired Repair Shop/Dealer and Make Appointment
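
For session note-taking or future test automation, the scenario steps above can be captured as simple fixtures. The structure below is purely illustrative; the step wording is taken from the lists above, and nothing here is part of the DaRT prototype itself.

    # Illustrative test fixtures mirroring the three user-story scenarios above.
    SCENARIOS = {
        "User-Story 1: Capture Consumer Data": [
            "Register (First Name, Last Name, Address, Age, Male/Female, Username, Password)",
            "Register Vehicle (Vehicle Nickname, Year, Make, Model)",
        ],
        "User-Story 2: Capture Maintenance Report": [
            "Select Maintenance Option",
            "Select Desired Option",
            "View Historic Report",
            "Enter Newly Completed Maintenance Task",
            "Print Maintenance Report as Needed",
        ],
        "User-Story 3: Run Diagnostic Report": [
            "Select Diagnostic Option",
            "Pair On-Board Diagnostic (OBD) Device to the Vehicle",
            "View Report",
            "Select Repair Shops/Dealers Option",
            "View Detailed Repair Shops/Dealers Listing with Cost and Rating Information",
            "Select Desired Repair Shop/Dealer and Make Appointment",
        ],
    }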
User-Experience Scoring

User-Performance Attribute   Rating   Time (Seconds)   Scenario
Design                       1        150              Scenario 1
Effectiveness                1        180              Scenario 2
Flexibility                  1        200              Scenario 3
Efficiency                   1        153              Scenario 1
Attitude                     1        176              Scenario 2
Learnability                 1        190              Scenario 3
Score: 100.00%

Metric               Score
Balance              1
Equilibrium          1
Symmetry             0
Sequence             1
Cohesion             1
Proportion           1
Simplicity           1
Density              1
Regularity           1
Economy              1
Homogeneity          0
Rhythm               1
Order & Complexity   1
Score: 84.62%
Works Cited
Eason, K. D. (1984). Towards the experimental study of usability. Behaviour & Information
Technology, 3(2), 133-143. doi:10.1080/01449298408901744

Frank, S., Pothireddy, L., Singer, W., & Zhao, R. (n.d.). The DaRT Project.

Hollingsed, T., & Novick, D. G. (2007). Usability inspection methods after 15 years of research and
practice. Proceedings of the 25th annual ACM international conference on Design of communication
- SIGDOC '07. doi:10.1145/1297144.1297200

Ngo, D. C., Teo, L. S., & Byrne, J. G. (2002). Evaluating Interface Esthetics. Knowledge and
Information Systems, 4(1), 46-79. doi:10.1007/s10115-002-8193-6

Norman, D. A. (2013). The Design of Everyday Things. New York: Basic Books.

Shackel, B. (1966). Ergonomics and Design. The Design Method, 49-57. doi:10.1007/978-1-4899-6331-4_7

"The Evaluation of Interactive Systems." (n.d.): n. pag. Web.
