
RESEARCH REPORT

The Anatomy of E-Learning Tools: Does Software Usability Influence Learning Outcomes?
Sonya E. Van Nuland, Kem A. Rogers*
Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western
Ontario, London, Ontario, Canada

Reductions in laboratory hours have increased the popularity of commercial anatomy e-learning tools. It is critical to understand how the functionality of such tools can influence the mental effort required during the learning process, also known as cognitive load. Using dual-task methodology, two anatomical e-learning tools were examined to determine the effect of their design on cognitive load during two joint learning exercises. A.D.A.M. Interactive Anatomy is a simplistic, two-dimensional tool that presents like a textbook, whereas Netter's 3D Interactive Anatomy has a more complex three-dimensional usability that allows structures to be rotated. It was hypothesized that longer reaction times on an observation task would be associated with the more complex anatomical software (Netter's 3D Interactive Anatomy), indicating a higher cognitive load imposed by the anatomy software, which would result in lower post-test scores. Undergraduate anatomy students from Western University, Canada (n = 70) were assessed using a baseline knowledge test, Stroop observation task response times (a measure of cognitive load), mental rotation test scores, and an anatomy post-test. Results showed that reaction times and post-test outcomes were similar for both tools, whereas mental rotation test scores were positively correlated with post-test values when students used Netter's 3D Interactive Anatomy (P = 0.007), but not when they used A.D.A.M. Interactive Anatomy. This suggests that a simple e-learning tool, such as A.D.A.M. Interactive Anatomy, is as effective as more complicated tools, such as Netter's 3D Interactive Anatomy, and does not academically disadvantage those with poor spatial ability. Anat Sci Educ 9: 378–390. © 2015 American Association of Anatomists.

Key words: gross anatomy education; undergraduate education; cognitive load; dual-task; mental rotation test; instructional design; e-learning; computer-assisted instruction

INTRODUCTION

E-Learning in Anatomical Education

Competing interests and responsibilities of today's learners have propelled the movement of post-secondary courses into the online environment, known as e-learning (Toynton, 2005; Burguillo, 2010; Dahlstrom, 2012). E-learning is defined by the National Centre for Education Statistics (Waits and Lewis, 2003) as the process of extending learning and/or delivering instructional materials to individuals or groups that are physically separated, via any type of electronic media, such as the Internet or satellite broadcast. In the anatomical sciences, computer-aided instruction and online learning tools have become a critical component of teaching the intricacies of the human body when physical classroom space and cadaveric resources are limited. In terms of commercial anatomical e-learning tools, there is extensive variability in the design of such programs from manufacturer to manufacturer, particularly in terms of the images each program provides (see Attardi and Rogers, 2015 for a review of products). Yet, as e-learning tools gain popularity, questions remain: Does the design of an e-learning tool influence learning? For example, does a complex interface, which has high interactivity, benefit a learner more so than if they were to use a simplistic e-learning tool?

*Correspondence to: Dr. Kem Rogers, Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, Medical Sciences Building, Room 443, University of Western Ontario, London, Ontario N6A 5B9, Canada. E-mail: kem.rogers@schulich.uwo.ca

Grant sponsor: Ontario Graduate Scholarship, Government of Ontario, Canada.

Received 14 October 2015; Revised 11 November 2015; Accepted 13 November 2015.

Published online 15 December 2015 in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/ase.1589

© 2015 American Association of Anatomists

Anat Sci Educ 9:378–390 (2016) JULY/AUGUST 2016 Anatomical Sciences Education
An important characteristic to consider when assessing the impact of e-learning tool design is the inherent ability of the learner to mentally manipulate an object and its spatial relationships, in either a two- or three-dimensional (3D) manner, and then perform a mental transformation of that representation (Carpenter and Just, 1986; Mayer and Sims, 1994; Luursema et al., 2006). This is known as visuospatial ability. Within the study of human anatomy, visuospatial ability can be further defined as the knowledge of the names, 3D properties and relative positions, and spatial relationships of different tissues, anatomical structures, and organs (Garg, 1998). It is not surprising, then, that visuospatial ability has been positively correlated with anatomy learning and performance (Rochford, 1985), but what is unknown is the extent to which learning is impacted by a student's spatial ability when learning from an e-learning tool, and whether student learning is differentially influenced by spatial ability based on the e-learning tool interface (e.g., complex versus simplistic).

Cognitive Load and Learning

Cognitive load theory (CLT) has been used by educational researchers to inform instructional designs with the goal of modulating the cognitive load placed on a student's working memory during e-learning software use. Cognitive load is defined as the "total load that performing a particular task imposes on a learner's cognitive system," and is a function of both the mental resources required by a specific task and a learner's ability to meet the resource demand (Moray, 1979; O'Donnell and Eggemeier, 1986; Paas and van Merriënboer, 1994; Paas et al., 2003, 2004; Oviatt, 2006).

In terms of the cognitive process required for learning, novel information must be processed through a temporary storage site, known as the working memory, before it can be organized into a learner's long-term memory (Mayer and Sims, 1994; Mayer, 2001). Although the working memory has distinct subsystems dedicated to visual (visuospatial sketch pad) and auditory (phonological loop) information processing, it has a restricted capability to process and store more than 7 ± 2 informational elements (Miller, 1956; Brünken et al., 2004; Mayer, 2005). Learning is defined by CLT as the increase in and transfer of knowledge from the working memory into the long-term memory (Paas et al., 2003; Hessler and Henderson, 2013). In a learning situation where the information elements presented simultaneously are greater than the working memory resources one holds (i.e., 7 ± 2), learners may experience an impaired ability to transmit the information into long-term memory (Mayer and Sims, 1994; Josephsen, 2015). Novel interface designs that include complex navigational functions or confusing menu bars have the potential to overload a naive user's limited working memory resources, impairing their ability to learn (Sweller et al., 1998; Eva et al., 2000; Mayer, 2008; van Merriënboer and Sweller, 2010).

Cognitive Load Theory

Cognitive load theory emphasizes that the presentation of information within an e-learning tool can have substantial effects on student working memory resources and thus the learning process (Mayer, 2001, 2002; Paas et al., 2003, 2004; van Merriënboer and Sweller, 2005; Oviatt, 2006). The tenets of CLT recapitulate the limitations of working memory capacity, the processing of novel information through the working memory, and the impact of overloading these memory resources (Chandler and Sweller, 1996; Baddeley, 1986, 2003; Sweller et al., 1998). E-learning interfaces that require a learner to search for information or arbitrarily test possibilities (e.g., having to press numerous buttons) to uncover information required for task completion can cause learners to engage in extraneous processing (Anderson, 1987; van Merriënboer and Sweller, 2010). By using their working memory-processing capacity to attend to and process material that is not essential, learners have fewer cognitive resources available to devote to the domain-specific knowledge being learned (Anderson, 1987; Mayer and Sims, 1994; Mayer, 2008; van Merriënboer and Sweller, 2010). Studies have shown that overloading the working memory in this way impairs student academic performance (Sweller et al., 1998; Mayer, 2002; Lahtinen et al., 2007; DeLeeuw and Mayer, 2008). Proponents of CLT utilize it to guide the design of e-learning tool interfaces, permitting the cognitive load associated with the organization and presentation of information, known as extraneous cognitive load, to be mitigated (see Sweller et al., 1998; van Merriënboer and Sweller, 2005, 2010; Mayer, 2008). Yet researchers argue that e-learning tools created by commercial and educational companies have been developed without attention to, or consideration of, working memory limitations, and are instead based on the intuitive beliefs of the designers (Chandler and Sweller, 1991; Park and Hannafin, 1994; Sweller et al., 1998; Grunwald and Corsbie-Massay, 2006; Moreno and Mayer, 2007). If the information displayed to educational software users, and the way in which it is presented, has tangible effects on the working memory and the learning process, there is a need for more empirical research concerning the effectiveness of commercially produced e-learning tools to ensure that the educational experiences in these learning environments remain effective for all students.

Dual-Task Assessment of Cognitive Load in Education

Of the numerous methods used to assess learner cognitive load, task- and performance-based techniques, called dual-task paradigms, are the most promising approaches for evaluating mental resource investment during educational software use (Brünken et al., 2003; de Jong, 2010; Gwizdka, 2010). Dual-task paradigms are predicated on CLT, which states that if two tasks require the same cognitive resources and are performed simultaneously, then a learner's available working memory resources will be distributed between both tasks accordingly (Brünken et al., 2002). Commonly, a learning task, performed within an e-learning tool interface, and an observation task are selected, and performance measures of the latter are utilized to assess the relative amounts of mental resources consumed by the former (Kerr, 1973; Fisk et al., 1986). Learning task performance measures include comparisons between knowledge acquisition scores before and after the learning task is completed using an e-learning tool. Conversely, observation tasks are often simple, continuous activities that require a response to a visual or auditory stimulus; performance variables such as reaction time and error rate are recorded in real time.

The use of a simple observation task in a dual-task paradigm is predicated on the capacity sharing approach to



explaining dual-task interference (Pashler, 1994). Under this theory, tasks that share the same working memory resources (e.g., if the tasks must be interpreted visually) and are performed concurrently can interfere with each other; that is, if two tasks are being performed simultaneously, then there is less processing capacity (i.e., fewer working memory resources) available for each individual task than there would be if each task were completed in isolation (Pashler, 1994). As a result, performance on the concurrently running tasks would be impaired in comparison with performance when said tasks are completed in isolation (Stroop, 1935; Pashler, 1994). The relative degree of this interference, called dual-task interference, can be investigated through the use of a probe reaction time method (Pashler, 1994). During this testing method, a clear distinction is made to participants regarding the importance of the primary task (i.e., learning task) and the secondary task (i.e., observation task). Investigators emphasize that effort and attention should be directed to the primary task for its entire duration. A secondary task, or "probe," is then introduced, and participants are asked to respond to the task as quickly as possible when they notice the task cue (e.g., a noise, a change in color, or a vibration) while also continuing to attend to the primary task (i.e., learning task) (Posner and Boies, 1971; Pashler, 1994). The assumption inherent to the probe reaction time method is that the speed of response to the probe (i.e., the observation task) provides a relative estimate of spare cognitive capacity (i.e., working memory resources) left unoccupied by the primary task (i.e., learning task) (Posner and Boies, 1971).

In a classic dual-task paradigm, said tasks are organized into two distinct conditions: (1) a dual-task condition, where a learning task is performed simultaneously with an observation task; and (2) single-task conditions, which require the completion of only one task, either the observation task or the learning task, in isolation. Single-task conditions provide baseline measures for when all working memory resources are available to devote to a single task. If the learning task and the cognitive resources it requires are held constant, dual-task conditions that use dissimilar e-learning tool interfaces should provide a measure of the cognitive load associated with the design of each software package. To achieve this, learning and observation task performance variables (e.g., knowledge acquisition scores, observation task reaction time and error rate, etc.) are compared across single-task and dual-task conditions. Through the extension of Kerr's (1973) reasoning, and in consideration of the capacity sharing model of dual-task interference, reaction times would be slower, or responses more inaccurate, when the observation task is performed simultaneously with a cognitively demanding e-learning tool interface than when the observation task is performed in isolation or in combination with a noncognitively demanding e-learning tool (Posner and Boies, 1971; Pashler, 1994; Brünken et al., 2003; de Jong, 2010).

Dual-task methodology has been empirically shown to be a suitable approach for evaluating cognitive load during library system/Web searches and multimedia learning in the subject areas of mathematics, physiology, and arts and culture. However, no studies to date, with the exception of our previous work (Van Nuland and Rogers, 2016), have attempted to use this technique to assess cognitive load during educational, commercial anatomical e-learning tool use (Brünken et al., 2002, 2003; Kim and Rieh, 2005; Oviatt, 2006; DeLeeuw and Mayer, 2008; Gwizdka and Lopatovska, 2009). Based on our previous findings (Van Nuland and Rogers, 2016), here investigators have utilized a novel dual-task methodology involving a modified Stroop task (see later) to examine two commercial anatomical e-learning tools (Netter's 3D Interactive Anatomy and A.D.A.M. Interactive Anatomy) to determine the effect of their design on learner cognitive load during two joint learning exercises (elbow and knee). It was hypothesized that longer reaction times on a visual observation task (VOT) would be associated with the more complex anatomical software (Netter's 3D Interactive Anatomy), which would indicate a higher cognitive load imposed by the anatomy software and interfere with learning, thus resulting in lower post-test scores.

METHODOLOGY

Participants and Recruitment

All students in the Fall 2014 undergraduate anatomy course Systemic Human Anatomy (ANATCELL 3319) at The University of Western Ontario were eligible to participate in the study. Of those students in the course, 25.49% (n = 78) consented to participate. Students were recruited through an in-class announcement and e-mails from the first author. In accord with The University of Western Ontario Research Ethics Board (case number: 104961), the first author had no relationship with the students and was not associated with the course.

Course Description

The course from which participants were recruited in this study, Systemic Human Anatomy, is a full-credit, third-year undergraduate course offered by the Department of Anatomy and Cell Biology at The University of Western Ontario, Canada. It comprises biweekly, 50-minute didactic lectures (50 hours total) in conjunction with a weekly 1-hour laboratory demonstration (23 hours total) over a 26-week period. The first 13 weeks of the course cover the central and peripheral nervous systems, special senses, and the musculoskeletal system of the head and thorax. The second 13 weeks cover the musculoskeletal system of the upper and lower limbs as well as the circulatory, respiratory, digestive, urinary, and reproductive systems. The course is offered in both face-to-face (F2F) and online formats. Every didactic lecture is streamed live and then archived using Blackboard Collaborate 12 for use by both the online and F2F students (Blackboard Inc., Washington, DC). The audio/video equipment required for simultaneous delivery of F2F lectures using Blackboard Collaborate has been previously described by Barbeau et al. (2013). Additionally, a description of the development and delivery of the online anatomy laboratories has been detailed by Attardi and Rogers (2015).

Students who participated in the study did so in the final week of the first half of the course. At the time of testing, participants had not yet received a formal lecture on the joints (wrist, elbow, and knee) used in this study; however, participants had received a formal lecture on anatomical terminology (e.g., anterior, posterior, supination, pronation, etc.). Participants can be considered to have a low level of experience in anatomy, and thus possess a small amount of domain-specific knowledge (Mayer and Sims, 1994).
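The capacity-sharing prediction outlined above, in which observation-task responses slow as a primary task consumes shared working memory resources, reduces to a comparison of mean reaction times across single- and dual-task conditions. The following minimal Python sketch illustrates that comparison; the reaction times are fabricated for illustration and are not study data:

```python
from statistics import mean

def dual_task_cost(baseline_rts_ms, dual_task_rts_ms):
    """Relative slowing of the observation task when it is paired with a
    learning task, expressed as a fraction of the baseline reaction time.
    A larger positive value suggests greater dual-task interference."""
    baseline = mean(baseline_rts_ms)
    return (mean(dual_task_rts_ms) - baseline) / baseline

# Fabricated reaction times (milliseconds) for illustration only.
vot_alone = [610, 655, 598, 640, 620]       # VOT performed in isolation
vot_with_tool = [720, 760, 705, 745, 730]   # VOT while using an e-learning tool

print(f"Relative dual-task cost: {dual_task_cost(vot_alone, vot_with_tool):.1%}")
```

Holding the learning task constant, a tool whose interface imposes more extraneous load should produce a larger relative cost than a simpler tool.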



Figure 1. Screenshots of commercial e-learning tools used in this study. A.D.A.M. Interactive Anatomy (A) displays anatomical content in a textbook-like fashion with limited buttons for student interaction. Conversely, Netter's 3D Interactive Anatomy (B) has a 3D appearance and 20 buttons through which users can interact with the program.

Experimental Design Overview

Participants were asked to complete two pre-test measures, inclusive of a demographic questionnaire and a mental rotations test (MRT). The demographic questionnaire asked participants to declare whether they had used anatomy e-learning tools in the past, and if so, which products. The revised Vandenberg and Kuse MRT-A was administered to participants as a measure of spatial ability prior to the study (Peters et al., 1995). Each of the 24 MRT items was presented individually on a computer screen, and participants were given 6 minutes to answer as many questions as possible. One point was awarded per question if both stimulus figures that matched the target image were identified correctly; thus, the maximum score obtainable on the MRT was 24.

To assess the cognitive load that each e-learning tool's design placed on the learner, participants were involved in four separate testing conditions, including two single-task conditions, where a VOT and then a joint learning task were completed in isolation, followed by two dual-task conditions, where a VOT and a joint learning task were completed simultaneously. In both dual-task conditions, participants were randomly assigned to explore the anatomy of two joints, the elbow and the knee, using two dissimilar e-learning tools.

Commercial Anatomical E-Learning Tools Examined

In an effort to understand how different educational e-learning tool interfaces and interactive features influence learner cognitive load levels, and by extension the learning process, two interactive educational e-learning software packages were selected: A.D.A.M. Interactive Anatomy (Ebix, Atlanta, GA) and Netter's 3D Interactive Anatomy (Elsevier, Philadelphia, PA). They were chosen because (1) they contained the relevant structures, anatomical landmarks, and labels required for a participant to meet the objectives outlined for each joint learning exercise (see later) and (2) their interface designs were significantly dissimilar (Fig. 1). A.D.A.M. Interactive Anatomy is a two-dimensional (2D) tool that presents static anatomical images in a textbook-like fashion using a navigational tool set containing nine buttons that can be used to manipulate the image, including extracting, highlighting, and changing the magnification of a selected structure. Within A.D.A.M. Interactive Anatomy, users have the ability to access four key anatomical views, including anterior, posterior, lateral, and medial. Rollover labels are used to identify structures, and users manipulate a sliding depth bar to examine different image layers. However, A.D.A.M. Interactive Anatomy does not enable body systems, such as the skeletal and digestive systems, to be viewed independently from the rest of the body (Fig. 1A).

Conversely, Netter's 3D Interactive Anatomy has a 3D usability that is generated through the compilation of static images whose 3D coordinates are stored on a server (Garg, 1998; Jain and Getis, 2003; Vogel-Walcutt et al., 2010). These coordinates are used to generate animations, where depth cues are conveyed to the learner by the differing speeds of near and far structures when the objects are rotated (Cohen and Hegarty, 2007). This gives the illusion of depth to a 2D structure even though no stereopsis is required. In addition, users are able to rotate a structure along any axis point, enabling viewing from all angles (Attardi and Rogers, 2015). Netter's 3D Interactive Anatomy has 20 buttons contained within its navigational tool set that can be used to execute a wide range of functions including, but not limited to, magnifying and peeling away individual structures, as well as translating and rotating images. Rollover labels are only visible for larger structures, such as individual bones like the femur; however, to access more specific labeling (e.g., bony landmarks), users must choose two different buttons in succession. Within Netter's 3D Interactive Anatomy, students can dissect the cadaveric images by removing specific, individual structures, a function that is not possible within A.D.A.M. Interactive Anatomy. Furthermore, Netter's 3D Interactive


Figure 2. Experimental station design. During the visual observation task (VOT) in isolation, participants responded to a modified Stroop test that appeared on Monitor 2; no images were shown on Monitor 1 during this time. In the single-task condition where the wrist learning task was completed in isolation, an image was displayed on Monitor 1, and no images were shown on Monitor 2 positioned behind. In the dual-task testing conditions where participants completed both a VOT and joint learning exercise in tandem, the VOT was displayed on Monitor 2, and at the same time, the assigned e-learning tool was displayed on Monitor 1.

Anatomy enables users to toggle on and off different categories of structures (e.g., skeletal, circulatory, and muscular systems) (Fig. 1B). In accord with The University of Western Ontario Research Ethics Board, the authors had no significant ties with the e-learning tools used, or the companies that created them.

Previous research suggests that e-learning tools that enable high navigational control through extensive tool sets and multiple viewing angles can disorient novice users and increase extraneous cognitive load (Dias et al., 1999; Garg et al., 1999a,b; Khalil et al., 2005; Levinson et al., 2007; Dindar et al., 2015). The extensive tool set of Netter's 3D Interactive Anatomy may result in elevated extraneous cognitive load levels, as learners are forced to spend more time arbitrarily testing possibilities when learning to navigate the program (Sweller et al., 1998; Schnotz, 2002; Wong et al., 2003; Ardito et al., 2006; Grunwald and Corsbie-Massay, 2006; van Merriënboer and Sweller, 2010). A.D.A.M. Interactive Anatomy, which has significantly fewer tools available in the menu bar, limits how a user can interact with the anatomical structures, which may reduce confusion when using the program. In terms of viewing angles, Netter's 3D Interactive Anatomy is considered to be a high-interactivity system, as it allows structures to be rotated on any axis and studied from multiple views, whereas A.D.A.M. Interactive Anatomy only allows anatomical structures to be studied from four key views (e.g., anterior, posterior, lateral, and medial; Luursema et al., 2006).

Condition 1: Visual Observation Task in Isolation (Single-Task Condition)

The research station used in this study consisted of a 15.6-inch Lenovo laptop (ThinkPad E531, Lenovo, Morrisville, NC; Monitor 1) with an Intel dual-core processor running Windows 7 (Microsoft, Redmond, WA), connected to an external mouse. The laptop was used to display all pre-tests and post-tests in addition to the e-learning tools. A 26-inch LG Flatron Wide monitor (LG Electronics, Seoul, Korea; Monitor 2) was placed directly behind the 15.6-inch Lenovo laptop and was connected to a second Lenovo laptop, which was used to run E-Prime® 2.0 Professional (Psychology Software Tools, Sharpsburg, PA) in the background while the external monitor (i.e., Monitor 2) was used to display the VOT to participants (Fig. 2).

The VOT single-task condition was used to estimate participants' average response times when 100% of their visual working memory resources were available. The VOT used in this study was a modified Stroop task. Based on the classic color-naming test (Stroop, 1935), a modified Stroop test is a simplified paradigm that makes the test less cognitively demanding and a better fit as a "probe" task (i.e., observation task) under the dual-task paradigm (Van Nuland and Rogers, 2016). In this study, the Stroop task was modified to its simplest form, in which participants were asked to respond to a color word (red, blue, green, or yellow) that appeared on Monitor 2 (Fig. 2). The color word appeared in lower case and was shown in one of four ink colors (red, blue, green, or yellow). Participants were asked to press the up arrow if the ink color was the same as the word, known as a congruent stimulus, or the down arrow if the ink color was different from the word, known as an incongruent stimulus. Each image was presented for 5–9 seconds and was followed by a blank interval of 15–29 seconds. Research has suggested that such a range would be short enough for a primary (learning) task to affect performance on the VOT but still allow for close to normal performance during the learning task (Wästlund et al., 2008; Gwizdka, 2010). E-Prime 2.0 Professional was used to program the image changes and record time lapses between each image change and participant response.
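The stimulus structure and response rule of the modified Stroop task can be sketched as a small Python toy model. This is an illustration only; the study itself used E-Prime 2.0 Professional, and stimulus timing and on-screen display are omitted here:

```python
import random

COLORS = ["red", "blue", "green", "yellow"]

def make_trial(rng, congruent):
    """Build one modified-Stroop stimulus: a color word whose ink color
    either matches the word (congruent) or differs from it (incongruent)."""
    word = rng.choice(COLORS)
    ink = word if congruent else rng.choice([c for c in COLORS if c != word])
    return {"word": word, "ink": ink}

def correct_response(trial):
    """The study's response rule: up arrow for a congruent stimulus,
    down arrow for an incongruent one."""
    return "up" if trial["ink"] == trial["word"] else "down"

# A 10-stimulus baseline block with equal proportions of congruent and
# incongruent stimuli, mirroring the 50/50 split described above.
rng = random.Random(42)
trials = [make_trial(rng, congruent=(i % 2 == 0)) for i in range(10)]
rng.shuffle(trials)

for t in trials:
    print(f'{t["word"]} shown in {t["ink"]} ink -> press {correct_response(t)}')
```

A fuller simulation would also present each stimulus for 5–9 seconds, impose the 15–29 second blank interval, and log the response latency for each trial.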



Participants completed a practice round of the modified Stroop task to ensure that they were acclimatized to the display set-up and response characteristics of the task (MacLeod, 1998, 2005). Eight color-word-based trials were utilized during the practice session because previous research indicates that the Stroop task is relatively robust in the face of practice, and thus task familiarity is less likely to impact the total mental resources invested in the VOT over time (Brünken et al., 2002; Kim and Rieh, 2005; MacLeod, 1998, 2005; DeLeeuw and Mayer, 2008). Following the practice round, participants were exposed to 10 additional modified Stroop stimuli (50% congruent stimuli and 50% incongruent stimuli), which were used to establish a stable recording of participants' baseline reaction times on the VOT. Equal proportions of congruent and incongruent stimuli were utilized for all VOTs, as is common in cognitive research (MacLeod, 2005). Each participant's average baseline reaction time, which is a measure of how a participant responds to the VOT when 100% of their working memory resources are available, was compared with their reaction time on the VOT during the dual-task testing conditions. In the case of the dual-task testing (discussed later), participants divided their working memory resources between the VOT and learning from the e-learning tool. Using the logic of the capacity sharing model of dual-task interference, as more working memory resources are consumed by the primary task (e.g., learning from a novel e-learning tool), fewer resources are available to devote to the secondary task (Posner and Boies, 1971; Kerr, 1973). Thus, in this study, reaction times on the VOT would be slower, or responses more inaccurate, when the primary task is performed using the cognitively demanding e-learning tool (Netter's 3D Interactive Anatomy) than when it is performed using a simpler e-learning tool (A.D.A.M. Interactive Anatomy).

A visual stimulus, instead of an auditory stimulus, was chosen for this study because its optical nature makes it ideal to estimate the impact of a visually based e-learning tool on the resources of the visual subsystem of the working memory (Fisk et al., 1986; Brünken et al., 2002; Kim and Rieh, 2005; DeLeeuw and Mayer, 2008). In regards to the VOT, the use of a modified rather than a classic Stroop test was intentional, as research indicates that the modified version requires fewer cognitive resources than its classical counterpart, therefore reducing the risk that it will impact learning task performance artificially (Kerr, 1973; Brünken et al., 2003; for further discussion on VOT design see Van Nuland and Rogers, 2016).

Condition 2: Joint Learning Task in Isolation (Single-Task Condition)

Following completion of the VOT in isolation (Condition 1), participants were asked to complete a second single-task condition involving learning the anatomical structure of the bony wrist joint. Prior to the start of this learning task, participants were given 10 minutes to complete 10 multiple-choice questions, each with its own cadaveric image, to assess their baseline level of anatomical knowledge of the wrist joint. Questions were related to identification of the carpal bones and whether it was an anterior or posterior view of the wrist joint. To ensure the quality and scope of the questions utilized in this study, a qualified anatomist, unrelated to the study, validated the questions used to assess participant knowledge of the bony wrist.

Once the baseline was established, participants were given the opportunity to learn the anatomy of the bony wrist joint and were provided with an instruction page that detailed the two objectives of the wrist exercise: (1) Identify the bones of the wrist, known as the carpal bones (of which there are eight), in both the anterior and posterior view; and (2) Distinguish between an anterior versus a posterior view of the joint. Participants were presented with a labeled, static 2D image of the bony wrist joint on Monitor 1 (Fig. 2), which they were permitted to study for 12 minutes.

Upon completion of the wrist learning exercise, participants were asked to complete a 1-minute math worksheet containing addition/subtraction problems consistent with a Grade 5 math level. The math sheet was designed to erase memorized information held in the learner's short-term memory (working memory) (Fisk et al., 1986). Participants then completed a post-knowledge test to assess how studying from a static 2D image impacted their anatomical knowledge of the bony wrist. The 10-minute post-knowledge quiz was based on the same questions as the anatomical knowledge baseline quiz; however, the questions were presented in a scrambled format.

Conditions 3 and 4: Visual Observation Task and Joint Learning Exercise Combined (Dual-Task Condition)

The final testing conditions involved the completion of a joint learning task and a VOT simultaneously using a dual-task paradigm (Fig. 2). Participants were randomly assigned to explore both the elbow and knee joints, using different commercial e-learning software packages (Table 1). Participants studied the first joint using one e-learning tool during the first dual-task condition, and the second joint and e-learning tool during the following dual-task condition. The elbow and knee joints were chosen because they are both complex hinge joints with unique ligaments. The knee is arguably the more complex hinge joint; however, it is more commonly understood by students prior to their exposure to a formal anatomy course.

Prior to the beginning of each dual-task condition, participants were asked to complete a multiple-choice baseline anatomy knowledge test based on the respective joint they were to learn about. Each baseline test consisted of 10 questions, each associated with a cadaveric picture, and was designed to assess their level of anatomical knowledge of the elbow or knee joint prior to e-learning tool exposure. Questions related to the identification of articulating bones at each joint, associated bony landmarks, and whether the joint originated from the left or right side of the body. Again, to ensure the quality and scope of the questions utilized in this study, a qualified anatomist, unrelated to the study, validated the questions used to assess the bony elbow and knee.

Following each baseline knowledge test, participants were given the opportunity to learn about a joint from a pre-assigned e-learning tool (Table 1). Participants were provided with an instruction page that briefly outlined how to manipulate the basic functions of the e-learning tool and included three guiding objectives: (1) Identify the bones that articulate at the (elbow or knee) joint; (2) Identify which side (left or right) a joint origi-
lized in this study, a qualified anatomist, unrelated to the nates from; and (3) Identify the bony landmarks on the bones

Anatomical Sciences Education JULY/AUGUST 2016 383


Table 1.
E-Learning Tool and Joint Learning Exercise Assignment

Participant   Visual observation     Wrist learning       E-learning tool and joint learning exercise
              task in isolation      task in isolation    (first dual-task condition; second dual-task condition)
1             ✓                      ✓                    Elbow in A.D.A.M. Interactive Anatomy; Knee in Netter's 3D Interactive Anatomy
2             ✓                      ✓                    Knee in A.D.A.M. Interactive Anatomy; Elbow in Netter's 3D Interactive Anatomy
3             ✓                      ✓                    Elbow in Netter's 3D Interactive Anatomy; Knee in A.D.A.M. Interactive Anatomy
4             ✓                      ✓                    Knee in Netter's 3D Interactive Anatomy; Elbow in A.D.A.M. Interactive Anatomy
immediately surrounding the (elbow or knee) joint. Using dual-task parameters proposed in our previous study (Van Nuland and Rogers, 2016), participants were given 12 minutes to explore their pre-assigned joint (elbow or knee) using the e-learning tool. At the same time, participants were also asked to respond to the VOT described earlier (Fig. 2).

Upon completion of each combined dual-task exercise, participants were asked to complete another 1-minute math worksheet, followed by a scrambled version of the knowledge questions administered during the baseline test; however, only questions relevant to the joint the participant had just learned were displayed. Again, participants were given 10 minutes to complete the post-test questions in order to assess how studying from a different e-learning tool impacted a user's anatomical knowledge of the knee and elbow joints. This sequence was repeated for the second dual-task condition, which involved the remaining joint and the second e-learning tool (e.g., if, during the first dual-task condition, a student studied the elbow joint using A.D.A.M. Interactive Anatomy, then they would study the knee joint using Netter's 3D Interactive Anatomy; Table 1).

Analysis

Participant VOT reaction times in the single-task condition were averaged and compared with reaction times when the VOT was paired with a joint learning task in the dual-task condition. The reaction times gathered during the single-task and dual-task conditions were compared by e-learning tool (A.D.A.M. Interactive Anatomy or Netter's 3D Interactive Anatomy), by joint learning exercise (knee or elbow), and by dual-task condition (first or second dual-task condition) using one-way repeated measures analyses of variance (ANOVAs). Reaction times were also compared by joint learning task and e-learning tool using independent t-tests. Participant pre-test and post-test scores, as well as gain scores, were also compared by e-learning tool, joint learning exercise, and dual-task condition using one-way repeated measures ANOVAs. Performance scores were also compared with participants' spatial ability scores using linear regression analysis.

Exclusion Criteria

Participants were excluded from analysis if: (1) they failed to complete all tests, (2) they self-identified as color blind, or (3) their performance on the VOT was more than two standard deviations away from the mean.

RESULTS

Demographics

Of the 78 Systemic Human Anatomy students who participated in the study, 70 individuals had usable data sets that passed the exclusion criteria (n = 70, M:F 30:40, mean age = 21.5 years).
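The third exclusion criterion (VOT performance beyond two standard deviations from the mean) can be sketched as follows. This is an illustrative reconstruction with invented sample values; the authors do not specify whether the population or sample standard deviation was used:

```python
import statistics

def apply_vot_exclusion(mean_rts, k=2.0):
    """Keep participants whose mean VOT reaction time lies within
    k standard deviations of the group mean (exclusion criterion 3)."""
    grand_mean = statistics.mean(mean_rts)
    sd = statistics.pstdev(mean_rts)  # population SD; sample SD also defensible
    return [rt for rt in mean_rts if abs(rt - grand_mean) <= k * sd]

# Invented per-participant mean reaction times (msec); 3000 is an outlier.
rts = [850, 900, 875, 860, 880, 3000]
print(apply_vot_exclusion(rts))
```

Applied to the sample list, only the artificial 3,000 msec value falls outside the two-SD band and is dropped.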

Table 2.
Visual Observation Task Reaction Times Assessed by Joint Learning Exercise and E-Learning Tool

Joint learning task   Condition                          Number of participants   VOT reaction time (msec) (mean ± SD)
Elbow                 Netter's 3D Interactive Anatomy    38                       1,533 ± 382
                      A.D.A.M. Interactive Anatomy       32                       1,550 ± 475
Knee                  Netter's 3D Interactive Anatomy    32                       1,500 ± 328
                      A.D.A.M. Interactive Anatomy       38                       1,513 ± 360

No significant differences were found.
VOT, visual observation task.
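The baseline versus dual-task reaction time comparison described in the Analysis section can be illustrated with a paired test on per-participant means. This sketch substitutes a simple paired t statistic for the repeated measures ANOVA actually used in the study, and the data are invented:

```python
import math
import statistics

def paired_t(cond_a, cond_b):
    """Paired t statistic across participants measured in both conditions.
    A simplified stand-in for the study's repeated measures ANOVA
    (with only two conditions, F equals t squared)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the paired differences
    return mean_d / (sd_d / math.sqrt(len(diffs)))

# Invented per-participant mean VOT reaction times (msec).
baseline  = [870, 900, 850, 880, 860, 890, 910, 875]          # VOT alone
dual_task = [1500, 1550, 1480, 1520, 1490, 1530, 1560, 1510]  # VOT + learning task

t = paired_t(dual_task, baseline)
print(f"paired t = {t:.1f}, df = {len(baseline) - 1}")
```

The paired form matters here: each participant serves as their own control, so between-subject differences in baseline speed do not inflate the error term.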



Of those who participated, 64.3% were third-year undergraduate students, 32.9% were fourth-year undergraduate students, whereas two participants chose not to declare their standing. Approximately 55.7% of the participants were registered in the online portion of the Systemic Human Anatomy course, whereas the remaining students were registered in the F2F section. No participant involved in the study had received formal anatomy lectures on the joints tested in this study.

When participants were asked about past e-learning tool use, 48.6% indicated they had not used e-learning tools in the past; however, 45.7% indicated they had (four students either could not remember or did not answer). Of those participants that had utilized e-learning tools in the past, 71.9% reported having used Netter's 3D Interactive Anatomy to study anatomy. This was not surprising, as those students enrolled in the online section of the course had been using Netter's 3D Interactive Anatomy as part of their online laboratory experience. Approximately 19% of participants reported having used other anatomy e-learning tools, including Anatomy TV (Primal Pictures), Anatomy One, and Visible Body. The remaining 9.3% of those participants who had used e-learning tools in the past could not remember which anatomical e-learning tools they had used. When participants were questioned about their comfort level with computer technology, 47.1% indicated they were somewhat comfortable, whereas the majority (52.9%) indicated that they were extremely comfortable.

Visual Observation Task Reaction Times Assessed by E-Learning Tool

In the visual observation single-task scenario, where the VOT was completed in isolation (VOT baseline; Condition 1), the mean reaction time (mean ± standard deviation (SD)) across all participants (n = 70) was 875 ± 210 msec. Mean VOT reaction times in the dual-task scenario were virtually identical when participants used Netter's 3D Interactive Anatomy (1,518 ± 356 msec) and A.D.A.M. Interactive Anatomy (1,530 ± 414 msec). A one-way repeated measures ANOVA with a Greenhouse–Geisser correction determined that VOT reaction times were significantly slower for each e-learning tool compared with baseline values [F(1.83, 126.53) = 163.37, P < 0.001].

Visual Observation Task Reaction Times Assessed by Joint Learning Task

Using a one-way repeated measures ANOVA, VOT baseline values were also compared with dual-task VOT reaction times (mean ± SD) when participants learned the elbow joint (1,540 ± 424 msec) and knee joint (1,507 ± 343 msec). Again, although reaction times for each joint learning exercise were significantly slower than baseline values [F(2, 138) = 164.32, P < 0.001], there was no significant difference between elbow and knee VOT reaction times.

When reaction times were compared by dual-task scenario, similar results were found. Baseline reaction times were significantly faster than reaction times in both dual-task scenarios 1 and 2; however, when the dual-task scenarios were compared with each other, no significant difference was found (data not shown).

Visual Observation Task Reaction Times Assessed by Joint Learning Task and E-Learning Tool

Dual-task VOT reaction times were further broken down by joint learning task and e-learning tool (Table 2). Independent samples t-tests did not reveal any significant differences between VOT reaction times when the elbow and knee were studied within Netter's 3D Interactive Anatomy or A.D.A.M. Interactive Anatomy.

Baseline, Post-Test, and Gain Scores Assessed by E-Learning Tool

Table 3.
Pre-Test, Post-Test, and Gain Scores Assessed by E-Learning Tool (+Visual Observation Task)

Condition                               Pre-test score (mean ± SD)   Post-test score (mean ± SD)   Gain scores (mean ± SD)
2D textbook image                       3.10 ± 2.21                  7.67 ± 2.14                   4.57 ± 2.86
Netter's 3D Interactive Anatomy + VOT   3.19 ± 2.33                  7.80 ± 1.81                   4.61 ± 2.89
A.D.A.M. Interactive Anatomy + VOT      3.20 ± 2.24                  7.86 ± 1.86                   4.66 ± 2.51

No significant differences were found; n = 70. All pre-tests and post-tests were out of 10 marks total.
VOT, visual observation task.

Table 4.
Pre-Test, Post-Test, and Gain Scores Assessed by Joint Learning Task (+Visual Observation Task)

Condition     Pre-test score (mean ± SD)   Post-test score (mean ± SD)   Gain scores (mean ± SD)
Wrist         3.10 ± 2.21 (a)              7.67 ± 2.14                   4.57 ± 2.86 (b)
Elbow + VOT   2.43 ± 1.70 (c)              8.23 ± 1.93                   5.80 ± 2.70 (d)
Knee + VOT    3.96 ± 2.53 (a,c)            7.43 ± 1.63                   3.47 ± 2.15 (b,d)

(a,c) One-way repeated measures ANOVA with Greenhouse–Geisser corrections and post hoc tests with Bonferroni corrections revealed that knee pre-test scores were significantly higher than wrist and elbow pre-test scores (P = 0.001 and P < 0.001, respectively).
(b,d) One-way repeated measures ANOVA with Greenhouse–Geisser corrections and post hoc tests with Bonferroni corrections revealed that knee gain scores were significantly lower than wrist and elbow gain scores (P = 0.011 and P < 0.001, respectively).
All pre-tests and post-tests were out of 10 marks total. Note that identical superscripts are significantly different; n = 70.
VOT, visual observation task.

The anatomical knowledge baseline scores, post-test scores, and overall gain scores (mean ± SD) in single-task and dual-task scenarios were compared using a one-way repeated measures ANOVA with a Greenhouse–Geisser correction



(Table 3). No significant difference among participants' baseline, post-test, and gain scores was found when students used the 2D textbook image, Netter's 3D Interactive Anatomy, or A.D.A.M. Interactive Anatomy.

When baseline, post-test, and gain scores were compared by single- and dual-task scenarios (single-task scenario and dual-task scenario 1 or 2), again no significant differences were found between related anatomical assessment scores (data not shown).

Baseline, Post-Test, and Gain Scores Assessed by Joint Learning Exercise

Baseline, post-test, and gain scores were also analyzed by joint learning task (e.g., wrist, elbow, and knee), regardless of e-learning tool (Table 4). A one-way repeated measures ANOVA with a Greenhouse–Geisser correction indicated a significant difference between joint learning exercise pre-test scores [F(1.79, 123.49) = 16.15, P < 0.001], with knee pre-test scores significantly higher than the wrist and elbow (P ≤ 0.001; post hoc test with Bonferroni correction). No significant difference was found among wrist, knee, and elbow post-test scores; however, there was a significant difference between joint learning exercise gain scores [F(1.57, 108.42) = 14.05, P < 0.001], with higher differences seen with the wrist and elbow than the knee (post hoc with Bonferroni corrections; P = 0.011 and P < 0.001, respectively). However, no significant difference was found between wrist and elbow gain scores.

Mental Rotations Test Scores Versus Post-Test Scores Assessed by E-Learning Tool and Joint

The mean MRT score (±SD) across all participants was 9.84 ± 4.17. No significant difference between genders was detected by means of a Student's t-test (female MRT score: 10.30 ± 4.08 and male MRT score: 9.23 ± 4.27). When MRT scores were plotted against A.D.A.M. Interactive Anatomy and Netter's 3D Interactive Anatomy post-test scores, linear regression analysis established that, although MRT scores did not significantly predict A.D.A.M. Interactive Anatomy post-test scores, they did predict Netter's 3D Interactive Anatomy post-test scores [F(1,68) = 7.81, P = 0.007], with an R2 of 0.103 (Fig. 3). When Netter's 3D Interactive Anatomy post-test scores were separated by joint and assessed using linear regression, the correlational significance was isolated to the elbow joint only [F(1,35) = 4.90, P = 0.034], with an R2 of 0.123 (Fig. 4). No such correlation was found when Netter's 3D Interactive Anatomy knee post-test scores were compared with participant spatial ability. When A.D.A.M. Interactive Anatomy post-test scores were separated by joint, linear regression established that MRT scores did not significantly predict specific joint post-test outcomes (Fig. 4).

DISCUSSION

Dual-task paradigms have been used in previous studies to evaluate cognitive load differences between learning interfaces in Web searches as well as physiology and cultural learning platforms (Brünken et al., 2002, 2003; Kim and Rieh, 2005). The aim of this study was to examine two commercial anatomical e-learning tools (Netter's 3D Interactive Anatomy and A.D.A.M. Interactive Anatomy) to determine the effect of their design on learner cognitive load during two joint learning exercises (elbow and knee), using a novel dual-task paradigm design, described by Van Nuland and Rogers (2016). It was hypothesized that the design of the Netter's 3D Interactive Anatomy interface would impose higher cognitive load than the A.D.A.M. Interactive Anatomy interface, thus resulting in longer VOT reaction times and lower post-test scores.

The dual-task paradigm utilized in this study was designed to limit three important confounding variables: response automation, task performance tradeoff, and the inherent cognitive load of the learning task itself. The assessment of the reaction times on the VOT demonstrated no significant difference between the first and second dual-task conditions. This suggests that the VOT required consistent and effortful processing in the working memory throughout the experiment, reducing response automation to negligible levels and improving the quality of the dual-task paradigm (Fisk et al., 1986; MacLeod, 2005). Furthermore, this novel paradigm was designed to detect task performance trade-off, a process through which participants sacrifice learning task performance in order to improve their performance on the VOT (Fisk et al., 1986). The pre-test, post-test, and gain scores of the single-task learning condition involving the wrist were not significantly different from the pre-test, post-test, or gain scores of the dual-task conditions, suggesting that the mental effort invested during both the single- and dual-task scenarios was equivalent (Fisk et al., 1986). In other words, participants maintained a consistent usage of working memory resources during each learning task across all conditions, suggesting that no performance trade-off occurred, enabling VOT reaction times to be interpreted as a sensitive measure of cognitive load (Fisk et al., 1986; Hegarty et al., 2000).

Using this novel dual-task design, cognitive load during learner interactions was investigated with two commercial anatomical e-learning tools, Netter's 3D Interactive Anatomy and A.D.A.M. Interactive Anatomy, through the usage of a VOT. Not surprisingly, when learners performed the VOT simultaneously with a learning task, reaction times were significantly slower than when the VOT was completed in isolation. These results, which are consistent with previous dual-task studies using VOTs, suggest that the learning task and VOT use the same working memory resources and that the VOT is a sensitive measure of cognitive load during processing of the joint learning task (Brünken et al., 2002).

Confirming the sensitivity of the VOT, reaction times during Netter's 3D Interactive Anatomy and A.D.A.M. Interactive Anatomy usage were then compared. CLT indicates that e-learning tools that contain animations may impose extraneous cognitive load on the student because animations only provide transient information, meaning that once the animation has advanced beyond a specific frame, that frame is no longer available to the viewer and must be mentally reconstructed (Chandler and Sweller, 1991; Hegarty, 2004). Numerous studies have shown that multiple views of an animated structure in a 2D environment do not provide an instructional or performance advantage over that of key views represented by static images (Garg et al., 1999a,b; Jain and Getis, 2003; Levinson et al., 2007; Vogel-Walcutt et al., 2010). Furthermore, research suggests that systems with high interactivity may increase extraneous cognitive load levels in novice users, particularly if a learner has a poor ability to



Figure 3.
Correlation between participant mental rotation test (MRT) scores and post-test performance scores according to e-learning tool. Participant MRT scores were compared with their post-test performance scores when they learned using A.D.A.M. Interactive Anatomy (A) and Netter's 3D Interactive Anatomy (B). Post-test scores following Netter's 3D Interactive Anatomy utilization were significantly and positively correlated with a student's spatial ability, as determined by the MRT (P = 0.007). All pre-tests and post-tests were out of 10 marks total.

mentally rotate structures in space (Garg et al., 1999a,b; Levinson et al., 2007). Our initial hypothesis, based on e-learning tool usability studies, predicted that the Netter's 3D Interactive Anatomy interface would be associated with longer VOT reaction times than A.D.A.M. Interactive Anatomy because of its high navigational control and multiple viewing angles, which require more working memory resources. However, our results demonstrated no significant difference between VOT reaction times when using Netter's 3D Interactive Anatomy compared with A.D.A.M. Interactive Anatomy.

Although these results may lead some readers to suggest that the cognitive load imposed by these two e-learning tools is similar, e-learning tool usability studies and studies comparing static with animated images do not support such a conclusion (Dias et al., 1999; Garg et al., 1999a,b; Mayer,

Figure 4.
Correlation between participant mental rotation test (MRT) scores and post-test performance scores according to e-learning tool and joint learning task. Participant MRT scores were compared with post-test performance scores when they learned about (1) the elbow joint using A.D.A.M. Interactive Anatomy (A) or Netter's 3D Interactive Anatomy (B) and (2) the knee joint using A.D.A.M. Interactive Anatomy (C) or Netter's 3D Interactive Anatomy (D). Students' post-test scores when using Netter's 3D Interactive Anatomy to study the elbow were significantly and positively correlated with their spatial ability, as determined by the MRT.



2001; Wong et al., 2003; Khalil et al., 2005; Ardito et al., 2006; Grunwald and Corsbie-Massay, 2006; Levinson et al., 2007; van Merriënboer and Sweller, 2010; Dindar et al., 2015; Borycki et al., 2015; Deraniyagala et al., 2015). It is the opinion of the authors that this result may be attributable to our study population. The reality is that today's university students, typically born between 1982 and 2002, are part of the digital generation (Howe and Strauss, 2000) and as such generally have more exposure to technological interfaces during their education than their predecessors (Brown, 2000; Oblinger and Oblinger, 2005; Oblinger et al., 2005; Skiba and Barton, 2006). This exposure can increase user comfort navigating within and "reading" from starkly different yet novel multimedia interfaces (Brown, 2000; Skiba and Barton, 2006). This ability to adapt to new learning interfaces may explain why no differences in VOT reaction times were seen between the dissimilar e-learning tools used in this study.

Although current students may possess skills that enable them to navigate across different learning interfaces, the anatomy e-learning tools used in this study did not confer any instructional advantage over the static 2D image of the wrist. This finding is consistent with earlier studies that demonstrate similar learning outcomes between students who use static images and those who use dynamic ones (Garg et al., 1999a,b; Levinson et al., 2007; Vogel-Walcutt et al., 2010; Dindar et al., 2015).

In terms of basic anatomy knowledge, the joint pre-test scores of this study demonstrate that naive anatomy students are more familiar with the general anatomy of the knee joint than either the wrist or elbow joint. This may be due, in part, to the increased athletic activity in pivoting sports (e.g., soccer, football, and basketball) among the pediatric population and the resultant incidence of knee injuries (Soprano, 2005; Micheli and Purcell, 2007). Approximately 25% of primary care office visits by adolescents are sports related, with the knee being among the most common injuries and the most frequent complaint when seeking medical attention (Ziv et al., 1999; Rice, 2000; Hambidge et al., 2002). These incidences of knee injury and the consequent discussions with medical professionals may explain why naive anatomy students are more familiar with knee anatomy than other joints. Interestingly, joint post-test scores demonstrate that the higher pre-test scores in the knee did not result in higher post-test scores; rather, students improved to the same level in the post-test, regardless of the joint studied. This suggests that higher pre-test knowledge of the knee did not impact the post-test knowledge results of the study.

The effect of user visuospatial ability on post-test outcomes is perhaps one of the more critical results in this study. To understand the role spatial ability plays in acquiring knowledge from two dissimilar e-learning tools, an MRT was utilized to quantify students' spatial ability (Shepard and Metzler, 1978; Vandenberg and Kuse, 1978; Peters et al., 1995). When MRT scores were plotted against e-learning tool-specific post-test scores, our results suggested that visuospatial ability was positively correlated with performance outcomes in Netter's 3D Interactive Anatomy (specifically with the elbow), but when the same students used A.D.A.M. Interactive Anatomy, visuospatial ability did not confer an academic advantage. These results may be attributable to the design of each e-learning tool. Previous research has shown that using highly interactive e-learning software such as Netter's 3D Interactive Anatomy, which enables the rotation, transformation, and manipulation of unfamiliar virtual objects, is guided by mental rotation and, as such, is a spatially demanding task related to a user's spatial ability (Wohlschläger and Wohlschläger, 1998; Ruddle and Jones, 2001; Stull, 2009). Although some studies have found that external animations and high-interactivity tools benefit students, they often depict systems that contain an element of dynamism (e.g., mechanical systems, computer algorithms, or geological and astronomical phenomena; see, for example, Narayanan and Hegarty, 2002 and Rebetez et al., 2004; see Höffler and Leutner (2007) for review). In line with this, numerous studies have indicated that not only do animations fail to confer an academic advantage over static images, but they may also hinder the learning processes of those users with lower spatial abilities (Large et al., 1996; Garg et al., 1999a,b, 2001, 2002; Narayanan and Hegarty, 2002; Jain and Getis, 2003; Yang et al., 2003; Cohen and Hegarty, 2007; Levinson et al., 2007; Keehner et al., 2008; Stull et al., 2009; Vogel-Walcutt et al., 2010; see also Tversky et al., 2002 for review). It has been suggested that low-spatial individuals may have difficulty manipulating and interpreting virtual models and as a result achieve lower knowledge acquisition scores compared with their highly spatial colleagues (Narayanan and Hegarty, 2002; Levinson et al., 2007; Stull et al., 2009). Consistent with the studies mentioned earlier, our results suggest that virtual anatomical animations within a high-interactivity software package may be more cognitively and perceptually demanding for lower spatial ability users than for those with high spatial ability (Stull, 2009).

Study Limitations

Although the research presented is an important step in understanding the impact of commercial anatomical e-learning tools on students' working memory systems, it is imperative that the limitations of this study also be discussed. As described in the methodology, post-tests were administered the same day students utilized the e-learning tools, meaning that immediate learning and recall were assessed; however, long-term retention and learning were not. Beyond the methodological design, this study should be repeated with the ankle joint substituted for the knee joint, to completely eliminate pre-test knowledge as a confounding variable.

CONCLUSION

In an educational era where the rapid adoption of e-learning tools to facilitate online learning has been propelled, in part, by rising annual enrollments and an influx of new learners from the digital generation, these findings are noteworthy. The body of research suggests that as highly interactive software products are utilized to depict subjects that intrinsically favor higher spatial ability learners over lower spatial ability learners, educators may be unintentionally placing students with low spatial ability at a significant academic disadvantage. Nevertheless, the results of our study leave a number of questions unanswered. As Hegarty (2004) highlighted, too often investigators assume that although learners may use dissimilar interactive displays differently, they nonetheless use each display in an efficient, productive, and effective manner. Yet, research on user interactions with interactive displays contradicts this assumption, suggesting that not all individuals have the metacognitive skills to manipulate said software and animations to learn effectively (Morrison et al., 2000; Hegarty


et al., 2002, 2008; Hegarty, 2004; Lowe, 2004; Rieber et al., Deraniyagala R, Amdur RJ, Boyer AL, Kaylor S. 2015. Usability study of the
EduMod eLearning program for contouring nodal stations of the head and
2004; Cohen and Hegarty, 2007). Future research involving neck. Pract Radiat Oncol 5:169–175.
cursor and gazing tracking as well as fixation patterns may Dias P, Gomes MJ, Correia AP. 1999. Disorientation in hypermedia environ-
provide valuable insights about how learners with different ments: Mechanisms to support navigation. J Educ Comput Res 20:93–117.
Dindar M, Kabakçı Yurdakul I, Inan _ D€
onmez F. 2015. Measuring cognitive
visuospatial abilities use various interactive functions and
load in test items: Static graphics versus animated graphics. J Comput Assist
allow us to elucidate strategy differences between successful Lear 31:148–161.
and unsuccessful learners in a virtual environment. Eva KW, MacDonald RD, Rodenburg D, Regehr G. 2000. Maintaining the
characteristics of effective clinical teachers in computer assisted learning envi-
ronments. Adv Health Sci Educ Theory Pract 5:233–246.
NOTES ON CONTRIBUTORS Fisk AD, Derrick WL, Schneider W. 1986. A methodological assessment and
evaluation of dual-task paradigms. Curr Psychol 5:315–327.
SONYA E. VAN NULAND, M.Sc., is a Ph.D. student (Clini- Garg AX. 1998. Learning anatomy from rotating three dimensional virtual
models. Master of Science Dissertation. Toronto, ON, Canada: University of
cal Anatomy Program) in the Department of Anatomy and Toronto. 51 p.
Cell Biology, Schulich School of Medicine and Dentistry at Garg A, Norman GR, Spero L, Maheshwari P. 1999a. Do virtual computer
Western University, London, Canada. She is a teaching assist- models hinder anatomy learning. Acad Med 74:S87–S89.
ant in the undergraduate histology laboratory, and her Garg A, Norman GR, Spero L, Taylor I. 1999b. Learning anatomy: Do new
computer models improve spatial understanding? Med Teach 21:519–522.
research interest is in the anatomical e-learning tool Garg AX, Norman G, Sperotable L. 2001. How medical students learn spatial
effectiveness. anatomy. Lancet 357:363–364.
KEM A. ROGERS, Ph.D., is a professor and Chair of the Garg AX, Norman GR, Eva KW, Spero L, Sharan S. 2002. Is there any real
virtue of virtual reality?: The minor role of multiple orientations in learning
Department of Anatomy and Cell Biology, Schulich School of
anatomy from computers. Acad Med 77:S97–S99.
Medicine and Dentistry at Western University, London, Can- Grunwald T, Corsbie-Massay C. 2006. Guidelines for cognitively efficient mul-
ada. He is the coordinator and instructor for the undergradu- timedia learning tools: Educational strategies, cognitive load, and interface
ate histology course, and his research interests range from design. Acad Med 81:213–223.
Gwizdka J. 2010. Distribution of cognitive load in web search. J Am Soc Inf
models of cardiovascular disease to educational scholarship. Sci Tech 61:2167–2187.
Gwizdka J, Lopatovska I. 2009. The role of subjective factors in the informa-
LITERATURE CITED tion search process. J Am Soc Inf Sci Tech 60:2452–2464.
Anatomical Sciences Education JULY/AUGUST 2016 389


