Anda di halaman 1dari 22

JID: CAG

ARTICLE IN PRESS [m5G;January 25, 2018;9:57]


Computers & Graphics xxx (2018) xxx–xxx

Contents lists available at ScienceDirect

Computers & Graphics


journal homepage: www.elsevier.com/locate/cag

Survey Paper

A survey of virtual human anatomy education systemsR


Q1 Bernhard Preim∗, Patrick Saalfeld
Department of Simulation and Graphics, University of Magdeburg, Germany

a r t i c l e i n f o a b s t r a c t

Article history: This survey provides an overview of visualization and interaction techniques developed for anatomy ed-
Received 20 October 2017 ucation. Besides individual techniques, the integration into virtual anatomy systems is considered. Web-
Revised 5 January 2018
based systems play a crucial role to enable learning independently at any time and space. We consider
Accepted 11 January 2018
the educational background, the underlying data, the model generation as well as the incorporation of
Available online xxx
textual components, such as labels and explanations. Finally, stereoscopic devices and first immersive VR
Keywords: solutions are discussed. The survey comprises also evaluation studies that analyze the learning effective-
Medical visualization ness.
Virtual anatomy
© 2018 Elsevier Ltd. All rights reserved.

1 1. Introduction active systems as valuable learning resources and, moreover, they 26


made substantial progress in understanding spatial relations [2,3]. 27
2 Anatomy education aims at providing medical students with Anatomy education was one of the driving applications of med- 28
3 an in-depth understanding of the morphology of anatomical struc- ical visualization techniques with the VOXEL-MAN as the outstand- 29
4 tures, their position and spatial relations, e.g. connectivity and in- ing example [4]. Early medical visualization papers discussed high- 30
5 nervation. Students should be able to locate anatomical structures, quality volume renderings of segmented volume data [5] or effi- 31
6 which is an essential prerequisite for surgical interventions. They cient realization of clipping and cutting techniques, such as using 32
7 should also be aware of the variability of the morphology and lo- the virtual scalpel [6], all aiming at anatomy education. Another 33
8 cation, e.g. of branching patterns of vascular structures. The tra- development motivated by anatomy education is labeling, where 34
9 ditional methods for anatomy education involve lectures, the use sophisticated algorithms were developed to ensure an efficient and 35
10 of text books and atlases as well as cadaver dissections. Cadaver pleasant label placement. Although the focus of most medical vi- 36
11 dissection plays an essential role for many reasons, e.g. the train- sualization research shifted towards diagnosis, treatment planning 37
12 ing of manual dexterity and communication skills [1]. The realism and intraoperative support, anatomy education made progress as 38
13 of this experience, however, is limited, since the color and texture well. The advent of web-based standards, in particular the intro- 39
14 of anatomical structures in cadavers differ strongly from living pa- duction of WebGL, and the recent progress of affordable VR glasses 40
15 tients. Other disadvantages associated with the use of cadavers, in- triggered the development of new systems in research and in com- 41
16 clude their high cost, the difficulty of supply and short time they mercial settings. 42
17 can be used only. Besides technically motivated developments, progress was also 43
18 While cadaver dissection is often part of anatomy education in made w.r.t. motivational design partially inspired by serious games 44
19 medicine, anatomy education in related disciplines, e.g. sport and and w.r.t. a proper balance between self-directed learning and 45
20 dentistry do not benefit from dissections. guidance. The essential questions from an application point of 46
21 Interactive 3D visualizations, in particular when combined with view are, of course, how the learning is affected and which stu- 47
22 specific learning tasks, have a great potential to add to these tra- dents will benefit from virtual anatomy systems. This survey ar- 48
23 ditional methods and even partially to replace them. The latter is ticle also pays attention to studies where the learning effects are 49
24 desirable due to a shortage of cadavers and available teaching time analyzed. 50
25 in anatomy [1]. Many studies indicate that students perceive inter- So far, there is no survey article on this issue. However, in the 51
chapter “Computer-Assisted Medical Education” (with a focus on 52
surgery training) of the book “Visual Computing for Medicine” [7], 53
anatomy education is discussed in some detail. Also in the pre- 54
R
This article was recommended for publication by Stefan Bruckner.

vious book by Preim and Bartz [8] anatomy education is part of 55
Corresponding author.
one chapter. The latter is outdated, since many systems were in- 56
E-mail addresses: bernhard.preim@ovgu.de, preim@isg.cs.uni-magdeburg.de
(B. Preim). troduced later. We pay attention to avoid large overlaps to these 57

https://doi.org/10.1016/j.cag.2018.01.005
0097-8493/© 2018 Elsevier Ltd. All rights reserved.

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

2 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

58 chapters, focusing on more recent developments and on evalua- even be tailored to the individual user by managing their learn- 122
59 tions that were barely discussed at all there. ing progress and adapting the presentation of tasks and material. 123
60 Scope of the survey. Anatomy education may be discussed from The potential of e-learning is only realized with highly motivated 124
61 a teacher perspective and from a learner perspective. The teacher self-disciplined users — a prerequisite that is fulfilled in case of 125
62 perspective involves the tools necessary to author interactive 3D vi- medical students. Many studies indicate that medical students en- 126
63 sualizations and relate them to the symbolic information (anatomic joy online anatomy resources. The survey of Jastrow and Hollinder- 127
64 names, categories, relations) as well as self-assessment tools and bäumer [14] shows that students consider them as motivating and 128
65 other methods to directly support learning. These tools comprise appreciate high-quality visualizations, keyword searches and up- 129
66 segmentation and model reconstruction as well as hypertext facil- to-date information in particular. 130
67 ities to organize textual information. In this survey, however, we All general principles of e-learning, for example w.r.t. instruc- 131
68 restrict to the learners’ perspective, i.e. we focus on the character tion design and use of media, apply to anatomy education as well. 132
69 and quality of the available information and the techniques to ex- The special situation in anatomy is the extremely complex struc- 133
70 plore this information space and acquire knowledge. We do not ture of the human body, characterized by various systems that are 134
71 discuss how the information space was created. We also restrict to closely related to each other. The level of required spatial under- 135
72 anatomy education for medical students and exclude related disci- standing in anatomy is extraordinary. Anatomic structures have to 136
73 plines, such as dentistry or studies of veterinarians. be recognized from different perspectives, understood in relation 137
74 There is wide variety of computer systems that support to adjacent structures and finally integrated in an understanding 138
75 anatomy education, including those that use only static 2D images of regions and systems [15]. The potential of interactive 3D visual- 139
76 and associated text or additional media, such as audio. Multimedia ization is that students actively explore anatomical structures and 140
77 versions of anatomic atlases are popular examples. We restrict our their spatial relations in a direct way [16]. 141
78 discussion to systems that support the exploration of anatomical
79 structures in 3D and refer to these systems as virtual anatomy (VA)
80 systems. Such VA systems enable to display anatomical structures 2.1. Learning theories 142
81 from any viewpoint and are not restricted to a few familiar view-
82 ing directions such as a text book. This includes desktop solutions, In the following, we discuss major learning theories that are 143
83 semi-immersive and virtual reality systems. We do, however, con- mentioned in VA system descriptions to justify design decisions. 144
84 sider systems that integrate 2D slice-based visualizations and 3D These theories have in common that they go beyond learning of 145
85 visualizations, since this integration is important for many clinical mere facts, as supported by drill-and-practice programs—the earli- 146
86 disciplines. est type of e-learning software. Pedagogical background informa- 147
87 Recently, considerable effort was spent on using augmented tion is provided by Dev et al. [17], including a history of e-learning 148
88 reality for anatomy education [9,10] including a survey on aug- in medicine since the 1960s. 149
89 mented reality in medicine [11]. To restrict this survey to a man- Constructivism and embodied cognition. According to construc- 150
90 ageable size, we do not consider these systems since a number of tivists’ theories, active learning favors spontaneous knowledge con- 151
91 specific topics would be involved. Another recent development in struction and thus reduces cognitive load compared to tradi- 152
92 anatomy education is the creation of physical models with 3D print- tional learning experiences. Moreover, the handling of anatom- 153
93 ing [12,13]. These models offer tactile cues when explored and can ical structures—the interactive rotation—resembles the way stu- 154
94 be generated in a cost-effective manner and thus can potentially be dents would explore such structures in reality when they, e.g. ro- 155
95 used widely. In this survey, we do not discuss these developments. tate real bones in their hands. The advantage of this similarity is 156
96 A related medical education topic is surgery simulation where vir- due to the phenomenon of embodied cognition, i.e. learning bene- 157
97 tual reality solutions are advanced but since target users, goals and fits from the involvement of body movements. Based on increasing 158
98 systems are strongly different, we do not discuss them. evidence, embodied cognition considers the perceptual and mo- 159
99 Selection strategy. This survey is based on a comprehensive tor systems as essential ingredients of cognitive processes [16]. As 160
100 search in various digital libraries (Sept. 2017). A search for the key- an example, Kosslyn et al. found that physical rotation leads to a 161
101 word “anatomy” in the Eurographics digital library resulted in 292 stronger activation of the motor cortex in a subsequent mental ro- 162
102 publications and a search for “anatomy education” in the IEEE dig- tation task [18]. Of course, the similarity between movements in 163
103 ital library to a further 202 publications with “anatomy” being part the real world and in the virtual world is higher with a 3D in- 164
104 of the title or abstract. Moreover, Google Scholar and PubMed were put device or with head tracking in VR compared to a 2D mouse. 165
105 used for searching. The selection of the papers to actually consider But even a 2D input device provides the experience how the view 166
106 was based on relevance, originality and visibility. Papers, where changes depending on the speed and direction of movement. A 167
107 anatomy education is not central, were discarded. This relates to 3D input device has the added advantage of a good stimulus- 168
108 papers where anatomy is considered with the goal of character an- response compatibility; a close correspondence between movements 169
109 imation and also papers with focus on education in surgery. Jour- to steer an object and the resulting changes in the visualization. 170
110 nal publications were favored instead of, e.g. local EG chapter pub- Jang et al. [19] highlight the additional learning effect due to in- 171
111 lications. Short papers were excluded, except they present a highly teractive manipulation of 3D content compared to passive view- 172
112 innovative concept. As a minor criterion, citation numbers were ing of prepared video sequences showing the same content. Thus, 173
113 also considered to ensure that highly visible papers are included. anatomy learning requires more than passive viewing even if the 174
presented material is of high didactic quality. The relation be- 175
114 2. Educational background tween constructivism and virtual reality learning environments is 176
discussed by Huang et al. [20]. 177
115 E-learning in general is an essential trend in many disciplines. There are different links between constructivism and anatomy 178
116 Students may use e-learning resources at any time and adapt education. Murray and Stewart [21], for example, use a physical 179
117 the system to their learning goals. Self-assessment facilities, such 3D model of the skeletal system and learners should place wires 180
118 as multiple choice questions, careful guidance, and motivational as models for nerves to learn about the origin and path of nerves. 181
119 strategies, are essential components. Communication functions to This active engagement may generate knowledge in the construc- 182
120 share thoughts and questions with other students and teachers tivists’ sense. A similar experience may be provided in a virtual 183
121 further enhance the learning experience. E-learning systems may anatomy system. Ma et al. [22] argue that students learn anatomy 184

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 3

target region. A general distinction can be made in non-spatial and 214


spatial anatomy knowledge. Non-spatial knowledge includes ter- 215
minology, taxonomy, and functions of structures, whereas spatial 216
anatomy knowledge includes the position, orientation, extent and 217
shape of structures [25]. Our focus is on the acquisition of spatial 218
knowledge that may be enhanced by 3D visualizations. Anatomy 219
is studied under the following aspects: 220

• clinical anatomy: the study of anatomy that is most relevant to 221


the practice of medicine. 222
• comparative anatomy: the study of anatomies of different or- 223
ganisms, related to the shape and size of anatomic structures, 224
or the variants of the topology in case of branching structures, 225
such as arteries, veins and nerves. 226
• cross-sectional anatomy: anatomy viewed in the transverse 227
plane of the body. 228
• radiographic anatomy: the study of anatomy as observed with 229
imaging techniques such as conventional X-ray, MRI, and CT. 230
• regional anatomy: the study of anatomy by parts of the body, 231
e.g. thorax, heart, and abdomen. In regional anatomy, all bio- 232
logical systems, e.g., skeletal, circulatory, are studied with an 233
emphasis on the interrelation of the systems and their regional 234
function. Dissection supports learning of regional anatomy. 235
• systemic anatomy: the study of anatomy by biological systems, 236
e.g., skeletal, muscular, digestive, and circulatory system. In sys- 237
temic anatomy, a single biological system is studied across all 238
body regions. Dissection is less relevant for systematic anatomy 239
learning. 240
• macroscopic anatomy: the study of anatomy with the un- 241
aided eye, essentially visual observation. Typically, macroscopic 242
Fig. 1. The correlation between real structures of a cadaver and radiological image anatomy is explored using dissected cadavers. 243
data is explained in a VA system (From: [24]). • microscopic anatomy: the study of anatomy with the aid of the 244
light microscope and with electron microscopes that provide 245
subcellular observations. Microscopic anatomy is based on very 246
185 by developing a serious game. Provided that the authoring systems
high resolution images and provides insight at a level that is 247
186 are very powerful and easy to use, this may be possible.
neither possible with tomographic image data nor dissecting 248
187 Problem-based learning. A major trend in medical education is
cadavers. 249
188 problem-based learning (PBL). Instead of isolated learning of facts, • surgical anatomy: the application and study of anatomy as it re- 250
189 with PBL, small groups of students should cooperate and solve
lates to surgery, e.g. the relevance of anatomical structures for 251
190 problems, e.g. to locate a deep anatomic structure and describe
avoiding complications or guiding the surgeon during an inter- 252
191 the regional surroundings or to treat a particular injury. A tutor
vention. 253
192 facilitates problem solving by making necessary resources avail-
193 able [1]. Virtual anatomy systems may employ carefully chosen
194 tasks to fit in a problem-based curriculum. This educational back-
These different aspects represent different “views” on the anatomy. 254
195 ground is carefully discussed in a number of publications, e.g. by
For example, the kidney is part of the abdominal viscera in the 255
196 Chittaro and Ranon [16].
regional anatomy and part of the urogenital system in the systemic 256
197 Blended learning. VA systems will be particularly efficient if
anatomy [26]. 257
198 they are linked to other forms of learning, supported by anatomy
Ideally, anatomy education systems integrate all aspects, for ex- 258
199 teachers explaining in which stages and how to use them as a
ample by smoothly blending in data in different resolutions. Cur- 259
200 kind of blended learning. VA systems cannot replace other types of
rently, such an ideal system is not fully realized due to many tech- 260
201 anatomy education. Brenton et al. [1], for example, discuss the ad-
nical problems such as acquiring and analyzing a large variety of 261
202 vantages of dissection that are also valid in the presence of VA sys-
image data and adapting visualization techniques. Very few VA 262
203 tems. Also lectures that are used to introduce important concepts,
systems integrate microscopic anatomy. An example is the digital 263
204 remain indispensable. Some VA systems are not primarily intended
pelvic atlas focusing on anatomical relations that are essential for 264
205 for (isolated) self-study, but for use in the classroom or dissection
surgery [27]. Clinical anatomy and radiographic anatomy can be 265
206 room. As an example, Philips et al. [23] show how a VA system
explored with medical volume data and derived information. As 266
207 with radiological image data of a cadaver is operated in the dissec-
an early example for combining different aspects of anatomy, the 267
208 tion room to explain students the structures of the corresponding
DigitalAnatomist [28] provides 3D overviews, where the position 268
209 real cadaver (see Fig. 1).
of certain slabs is marked. For each slab, the related information is 269
shown as CT slice and as photographic data. 270
210 2.2. Aspects of anatomy education The study of comparative anatomy requires a variety of differ- 271
ent datasets that represent at least the typical anatomic variants. 272
211 Anatomy education is an essential basis for many clinical Most systems do not support this important aspect of anatomy. 273
212 disciplines. Prevention of surgical complications and successful Many VA systems focus on regional anatomy and represent the re- 274
213 interventions requires an in-depth anatomical knowledge of the lation between labels and segmentation results. 275

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

4 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

VH datasets are employed for a variety of virtual anatomy systems, 321


e.g. the VOXEL-MAN [32,33] (see Fig. 2), the “Internet Atlas of Hu- 322
man Gross Anatomy” [34] and the “Anatomic VisualizeR” [35]. The 323
VH datasets were also used for functional anatomy, e.g. for biome- 324
chanical animations of the musco-skeletal system [36]. Juanes 325
et al. [37] provide a comprehensive survey of early applications of 326
the VH data for anatomy teaching. The website https://www.nlm. 327
nih.gov/research/visible/products.html currently (September 2017) 328
lists 25 products based on the VH dataset including three re- 329
gional and radiological anatomy resources of the VOXEL-MAN fam- 330
ily (Brain and Skull, Inner Organs and Upper Limbs) and the “Com- 331
plete 3D model of the human body (male and female)” from Primal 332
Pictures. 333
Visible Korean and Chinese Visible Human. Datasets from people 334
Fig. 2. Photographic data from the Visible Human Male dataset. Left: a slice view, of other regions in the world were also needed for medical educa- 335
right: a volume rendering with clipping enabled (From: [38]). tion. In the Visible Korean Human project, a dataset of a 65-year- 336
old patient was provided [39]. The Visible Korean dataset was em- 337
ployed, e.g. for the Online Anatomical Human [40]. The Chinese 338
276 3. Datasets
Visible Human avoided some problems of the VH project [41]. In 339
particular, the image acquisition was performed with a very large 340
277 Radiological imaging plays an important role in anatomy ed-
milling machine that did not require to cut the body avoiding any 341
278 ucation, since the relation between radiological image data and
section loss. 342
279 live patients is important not only for radiologists, but also for
The Chinese Visible Human dataset contains healthy female and 343
280 many physicians. VA systems are often based on image data and
male adults and exhibits a superior spatial resolution (in-plane res- 344
281 segmentation results representing anatomic structures. Based on
olution: 170 μm). The raw data (photographs, CT and MRI data) 345
282 these data, high-quality renderings can be generated and interac-
were carefully segmented (869 structures of the male dataset and 346
283 tively explored. Direct volume rendering as well as surface render-
860 of the female dataset) [42]. 347
284 ing techniques may be employed. While clinical applications re-
Further anatomical data. The University of Colorado, Center for 348
285 quire fast segmentation and visualization for the current patient,
Human Simulation continued its efforts to acquire anatomical data 349
286 educational systems require primarily high quality results that are
at a very high-quality. Spitzer and Ackerman [43] describe datasets 350
287 typically achieved after careful image analysis and modeling. Accu-
acquired between 20 03 and 20 08. The dataset with the highest 351
288 racy of visualizations is less important in educational settings since
quality represents the foot and ankle region with 50 0 0 visible 352
289 the visualization does not serve to plan surgery on this patient. The
light photographs. This increased resolution resembles high mag- 353
290 visualization should convey a plausible instance of the spatial re-
nification images from arthroscopy. Other datasets represent the 354
291 lations. Therefore, smooth surfaces without distracting features are
prostate and pelvis region, the wrist and hand as well as the tho- 355
292 preferred.
rax and heart region. Spitzer and Ackerman [43] report on ongoing 356
293 As a source for anatomy education, commercial 3D model
work to simulate the movement of ligaments and spinal nerves de- 357
294 catalogs can be employed. Leading vendors are Digimation
pending on the movement of the vertebrae. 358
295 (formerly Viewpoint Datalabs), providing almost 600 excellent
296 anatomic models (http://www.digimation.com/the-archive/), and
297 TurboSquid where even 9.0 0 0 anatomy models in very good qual- 3.2. Clinical image data 359
298 ity are available https://www.turbosquid.com/3d-model/anatomy.
299 Some systems, such as ZygoteBody [29] and the BioDigital Hu- Although the image quality of clinical data is lower than that 360
300 man [30] rely on artistically created 3D models. of cadaver data, they provide an alternative image source. As an 361
example, the first versions of the VOXEL-MAN were based on a 362
cerebral MRI dataset [44]. Datasets used for anatomy education are 363
301 3.1. Cadaver data
from carefully selected healthy persons with normal anatomic re- 364
lations. 365
302 For diagnostic purposes, the image quality is only as good as
303 necessary, thus reducing the time of image acquisition or radiation
304 in case of CT data. Imaging data of cadavers, however, aims at a 3.3. Segmentation 366
305 high resolution and low level of noise. Therefore, VA systems are
306 often based on anatomic models extracted from imaging data of The segmentation of anatomic structures can be accom- 367
307 cadavers. plished with general segmentation techniques, such as threshold- 368
308 The most widely used cadaver datasets are the Visible Human ing, region-growing, live wire or watershed segmentation depend- 369
309 (VH) datasets from the National Library of Medicine (see Fig. 2). ing on the shape and contrast of the target structure. A special case 370
310 These 3D datasets originate from two bodies that were given to is the segmentation of the photographic datasets of the VH project. 371
311 science, frozen, and digitized into horizontally spaced slices. A total These datasets represent colored voxels (24 bits representing a red, 372
312 number of 1871 cryosection slices was generated for the Visible green, and blue component). Thus, regions in RGB-space have to be 373
313 Man (1 mm slice distance) and even more for the Visible Woman separated [45]. 374
314 (0.33 mm slice distance) [31]. Besides photographic cryosectional For anatomy education, all anatomic structures, which can be 375
315 images, fresh and frozen CT data have been acquired. derived from the image data, are relevant. Therefore, consider- 376
316 The quality of the VH datasets was unprecedented at that time. ably more objects are segmented compared to clinical applications 377
317 CT data were acquired with high radiation—resulting in an excel- where the time pressure is high. The VOXEL-MAN, for example, is 378
318 lent signal-to-noise ratio—and without breathing and other motion based on the segmentation of 650 objects. A challenge is the iden- 379
319 artifacts. However, the frozen body was cut in four blocks prior tification and delineation of functional areas, for example in the 380
320 to image acquisition leaving some noticeable gaps in the data. The brain, since these areas are not represented as recognizable objects 381

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 5

used. Saalfeld et al. [49] use tri-planar texture mapping to tex- 423
ture arbitrary vascular structures. Here, the normal of each ver- 424
tex is mapped to a texture on the x, y, and z plane to get three 425
colors. The final color is calculated by a weighted combination 426
of these colors. This weight depends on the direction of the nor- 427
mal. For example, if the normal is facing exactly to the z-direction, 428
only the texture color of the x-y-plane would be taken into ac- 429
count. This approach is sufficient for structures, where the orien- 430
tation of the texture does not matter. This is not the case for mus- 431
cles and tendons, where the orientation give hints about the ori- 432
gin and insertion. For these models, an approach from Gasteiger 433
et al. [50] can be used. They calculate the direction for a hatch- 434
ing texture by using the principle curvature direction for each ver- 435
tex and the skeleton of the model. By deriving the skeleton’s di- 436
rection of the nearest skeleton point and combining this with the 437
principle curvature direction, an orientation for the whole model is 438
calculated. 439
Fig. 3. Reconstruction of anatomical structures from clinical image data. Objects are
segmented (a–b), transformed into a mesh (c), remeshed to reduce model size and
improve smoothness (d), shaded (e) and optionally textured (f) (Inspired by: [1]).
4.2. Volume visualization 440

382 in the image data. Considerable expert knowledge is necessary to Volume data provides more detail and enables relevant explo- 441
383 delineate these features. ration facilities beyond surface visualizations. In particular clipping 442
and cutting surfaces (virtual dissection) makes only sense if infor- 443
384 4. Visualization techniques mation on the inner portions of the objects is available, which is 444
not the case when only the outer wall is represented by triangles. 445
385 In the following, visualization techniques used in VA systems The use of volume rendering is also challenging. Whole volume 446
386 are described. In addition, we briefly discuss illustrative visualiza- data have a larger memory consumption than polygonal meshes 447
387 tion techniques since they are inspired by anatomic illustrations of selected objects. This may cause bandwidth-related problems 448
388 and have potential for enhancing VA systems. in web-based systems. For VA systems, where typically all essen- 449
tial structures are segmented, the high-quality rendering of these 450
389 4.1. Surface visualization tagged volumes is essential. Tiede et al. [5] introduced an appropri- 451
ate algorithm where the object borders are displayed at subvoxel 452
390 Most VA systems rely on surface rendering. Azkue [46] argues accuracy to avoid terracing artifacts. A useful variant of volume 453
391 that surface representations with their solid appearance resemble rendering for VA is the simulation of X-ray images to teach learn- 454
392 real anatomic structures, such as bones. Either segmented objects ers how an anatomic region would look like as X-ray. With GPU 455
393 are transformed into polygonal meshes or the meshes are con- support, the X-ray attenuation through complex objects can be ef- 456
394 structed with a modeling tool. A hybrid approach is also possi- ficiently simulated [51]. 457
395 ble: structures that are only partially segmented in the image data, In the last two decades, the performance of GPUs improved 458
396 such as small blood vessels and nerves, are used as input for mod- drastically. This enables to add a number of visual effects that en- 459
397 eling where they are “completed”, e.g. by fitting a spline to the ex- hance the realism of volume-rendered images. Boundary empha- 460
398 tracted cross-sections. A corresponding “tube editor” is described sis may be achieved with curvature-based transfer functions [52]. 461
399 by Pommert et al. [26]. The direct transformation of binary seg- Deferred shading enables to incorporate advanced lighting ef- 462
400 mentation results to surfaces via the Marching Cubes algorithm is fects [53]. Screen-space ambient occlusion further improves depth 463
401 not recommended since artifacts occur. Mesh smoothing is typi- perception [54]. A realistic display of colors to enhance recogni- 464
402 cally applied afterwards. Constrained elastic surface nets [47] is a tion of structures in surgery was developed and evaluated for dis- 465
403 particularly method useful for meshes resulting from binary seg- playing the brain [55]. Recent developments lead to photorealistic 466
404 mentation results on a regular grid. With this method, considerable rendering systems, such as Exposure Render [56] and the Cine- 467
405 smoothing is achieved while limiting the displacement of vertices maticRenderer (see Fig. 4) developed at SIEMENS [57,58]. These 468
406 to half of the diagonal size of a voxel. Also mesh simplification may systems are based on the Monte-Carlo path tracing of volumetric 469
407 be essential to reduce the number of polygons and to enable fast data [59] instead of simple raycasting. With this method, thou- 470
408 transfer and rendering (see the pipeline in Fig. 3). Mesh simplifica- sands of photon paths per pixel are traced by a stochastic pro- 471
409 tion is particularly important for web-based applications to reduce cedure and the complex interactions of photons with the human 472
410 the download speed. For elongated branching structures, such as anatomy are faithfully simulated [58]. Effects, such as soft shadows 473
411 vessels, dedicated reconstruction methods are available based on and volumetric scattering, are represented well. Besides advanced 474
412 splines, convolution surfaces or MPU implicits (see [48] for a sur- lighting, also the camera model is extended: it has variable aper- 475
413 vey). The polygonal models are typically shaded and may also be ture and focal length and thus enables to simulate depth-of-field 476
414 textured to enhance realism. and motion blur. The additional power and flexibility comes with 477
415 Texturing surface models. Applying textures to 3D surface mod- additional parameters for the renderer, e.g. illumination parame- 478
416 els can enhance the representation of medical structures by mak- ters. Templates, e.g. for bones and muscles, are used to summarize 479
417 ing them more realistic or illustrative, which eventually supports viewing parameters and reuse them [57]. 480
418 learning. Texturing is mostly done manually using 3D modeling The advantage of the increased realism for anatomy education 481
419 applications. This approach works for single surface models, but seems obvious. Shape perception indeed benefits from increased 482
420 is too time-consuming if the process from image acquisition to a realism [60]. However, no study was carried out to actually demon- 483
421 textured model should be automated. To automatically apply tex- strate differences in knowledge gains due to improved visualiza- 484
422 tures to models, procedural terrain modeling techniques can be tions. 485

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

6 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

Fig. 5. The complex anatomy of the pelvic region displayed with an illustrative ren-
dering style. The large skeletal structures serve primarily as orientation and are thus
rendered in black and white, while finer anatomical structures are emphasized with
colors (From: [67]).

Fig. 4. Comparison of traditional volume rendering (left) and cinematic rendering


(right). The arrows point to portions where the differences become obvious. Cine-
matic rendering provides more shape cues (From: [57]).

486 4.3. Illustrative visualization

487 Illustrations play a crucial role in knowledge dissemination


488 since centuries. Before the advent of photography there was no
489 alternative means to visually convey the morphology and struc-
490 tural relations, but even after photography was widely available,
491 anatomy text books and atlases primarily employ manual illustra-
492 tions where colors are chosen to optimally convey relations and
493 provide sufficient contrast and where the level of detail is carefully
494 adapted to the educational goal. Thus, some structures might be
495 shown without details serving as anatomic context, whereas oth-
496 ers employ prominent colors and more detail. The development of
497 anatomical illustrations is summarized in a recent article [61].
498 Anatomical illustration is an interdisciplinary field where both
499 the expert knowledge in anatomy and in art is required. Whereas
500 in earlier centuries, anatomy illustrations were often shown along
Fig. 6. Surface rendering of the hepatic anatomy embedded in a silhouette render-
501 with plants and animals leading to a dramatized display, later il- ing of the skeletal structures to provide anatomic context (From: [63]).
502 lustrators, such as Henry Gray and Frank Netter, focussed conse-
503 quently on the human anatomy.
504 Since the 1960s, also radiological image data was used as a
505 source for anatomy education [23]. More recently, computer sup- 3D visualization may be disadvantageous compared with shaded 522

506 port was employed to create 3D models to be displayed with il- surfaces and semi-transparent rendering. Probably, in the era of 523

507 lustrative styles. Pioneering work in this field was carried out by computer graphics, stippling may be replaced by other and more 524

508 Corl et al. [62].1 While they focussed on specific medical scenar- efficient techniques. 525

509 ios, e.g. the position of the kidney in the pelvic region, recent re- The ConFis method was recently refined and combined with an 526

510 search in computer graphics and visualization focused on general illustrative color scheme, thus emphasizing different categories of 527

511 methods that generate feature lines and hatchings from polygonal anatomic structures (see Fig. 5). 528

512 surfaces. These 3D illustrations combine the advantages of illus- Also volume rendering-based VA systems may benefit from il- 529

513 trations and those of interactive visualizations. Notable develop- lustrative visualization styles. Two-level volume rendering [68] and 530

514 ments include the combination of surface, volume and line render- the Volume Shop [69] as well as focus-context renderings [70] are 531

515 ings with a consistent visibility computation introduced by Tietjen essential examples. 532

516 et al. [63], the automatic generation of stippling images [64], and Whether anatomy education systems indeed benefit from illus- 533

517 the automatic computation of hatching to reflect lightings [65]. La- trative visualization techniques is not clear, since none of these 534

518 wonn et al. introduced the ConFis method that combines feature techniques were actually used for anatomy education and sub- 535

519 lines and hatchings. Thus, depending on the local curvature either ject to rigorous testing for learning effects. In VA systems, pure 536

520 features are emphasized, or, if no features are available, principal line-based renderings certainly are not successful, since lines can- 537

521 curvature directions [66]. In particular the use of stippling for a not convey shape information as efficiently as shading [71]. How- 538
ever, the level of detail in the display of an anatomical structure 539
can be adapted well with these rendering styles. Thus, illustrative 540
1
Meanwhile a gallery of illustrations in twelve regions is available at http: line renderings may have a role to display structures that serve as 541
//www.ctisus.com/redesign/learning/illustrations anatomic context (see Fig. 6). 542

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 7

543 4.4. Viewpoint selection animations can be interrupted by the learner, i.e. they may rotate 606
the geometric model themselves or zoom in and then return to the 607
544 When a 3D model of a certain part of the body is loaded, e.g. animation mode where the system returns gradually to the original 608
545 the inner ear, the question arises which viewpoint should be ini- position and continues the animation. Muehler et al. [81] extended 609
546 tially chosen. Stull et al. [15] argue that a canonical orientation, this work and incorporated also clipping planes and optimized 610
547 such as a frontal or lateral view, should be selected. Even if the viewpoints. Thus, to emphasize an object, a camera position is cho- 611
548 virtual camera can be controlled to adjust any possible viewpoint, sen that is optimal w.r.t. the visibility of that object. Their scripting 612
549 a carefully chosen initial viewpoint is helpful to reduce the men- language also allowed to specify measures, e.g. distances or angles, 613
550 tal load of the learner. The same situation occurs when the name that should be displayed. It is essential that computer-generated 614
551 of an anatomic structure is selected and this structure should be animations consider motion perception. In particular, the path of 615
552 emphasized in a 3D view. Emphasis, obviously, requires visibil- the virtual camera should change smoothly. The same holds for 616
553 ity. Vazquez et al. developed methods to automatically select good the speed; sudden accelerations, for example, should be avoided. 617
554 viewpoints for an object in a polygonal model using viewpoint en- Humans can follow a few movements simultaneously, but not too 618
555 tropy as measure [72]. Later, they incorporated this technique in an many. Otherwise, change blindness occurs. 619
556 anatomy education framework [73]. Manually created 3D animations. Hoyek et al. [77] created 3D an- 620
imations of the upper limb (a part of the muscoskeletal system) 621
557 4.5. Animations and provided these animations to students by means of a Quick- 622
Time VR player enabling learners to slow-down and repeat seg- 623
558 Video sequences are an established resource for anatomy edu- ments of the animation. Compared to a control group, where stu- 624
559 cation [74]. In the last decade, such video sequences are primar- dents employed Powerpoint slides with static images, the “anima- 625
560 ily available in digital form, e.g. at YouTube. Several attempts were tion group” could significantly better answer spatial questions. Ani- 626
561 made to test the effectiveness of videos to assist anatomy students mations may also be effective when applied to cross-sectional data. 627
562 in dissection tasks [75,76]. Such videos are perceived as useful Jastrow and Vollrath [34] provide animations that show adjacent 628
563 learning resources, although they do not always improve the ex- slices of the VH dataset comprising a region, such as the thorax. 629
564 amination score [75]. The successful use of video sequences gave Discussions with radiologists reveal that they infer additional in- 630
565 rise to research attempts to provide computer-generated anima- formation in such movements compared to the analysis of static 631
566 tions using geometric models and a related knowledge base. Such images. Therefore, any radiology workstation provides a cine mode 632
567 animations involve rotations of anatomic models and provide mo- where the slices are animated. Mueller-Stich and colleagues em- 633
568 tion parallax as an essential depth cue. However, they are not su- ployed such animations in a VA system for studying liver anatomy 634
569 perior to a series of static images under all circumstances. The in- [82]. 635
570 formation provided is of transient nature, it may be very complex
571 and can lead to information overload. Animations may also lead 4.6. Modeling and visualization for functional anatomy 636
572 to the illusion of understanding [77,78]. A number of studies in-
573 dicate that high spatial ability learners benefit significantly more Functional anatomy comprises, e.g. the range of motion of 637
574 than low spatial ability learners from animations, which supports joints, the tension of muscles, and tendons. In addition to such 638
575 the hypothesis that a certain level of spatial ability is a prerequi- biomechanics-based models, functional anatomy serves to better 639
576 site for understanding movements of anatomic models (see [25] for understand the physiology of certain anatomical systems, such as 640
577 a discussion of such studies). To enable low spatial ability learners the respiratory tract. VA systems for functional anatomy require a 641
578 to benefit from animations, they should be able to slow down, stop modeling of the underlying movements. This modeling is typically 642
579 and repeat segments of the animation. carried out by motion capture, i.e. healthy persons walk, climb or 643
580 Animations that introduce an anatomic region, e.g. by slowly ap- perform other movements. These movements are tracked either 644
581 proaching it from outside, removing outer layers of information based on markers associated with the persons or marker-less using 645
582 and rotating it around a canonical axis may be very effective to depth cameras, such as the Kinect. Albrecht et al. [83] described 646
583 give a first impression of an anatomic region. Animated emphasis, the creation of a dynamic model representing limb movements 647
584 where a target object is focused, e.g. by gradually reducing trans- based on motions captured from six man and four women, all se- 648
585 parency or saturation of nearby objects may be a useful component lected to be normal w.r.t. body weight and height. The underlying 649
586 of such animations. Passive viewing of predefined animations is models do also consider constraints, such as non-compressibility 650
587 less effective than active exploration (recall Section 2 and [19]), but and collision constraints [84]. The movement of skin and other soft 651
588 may prepare learners for this active stage. Such animations can be tissue typically follows from modifying a parameter, such as mus- 652
589 manually created with 3D modeling software as it is used for the cle contraction. Thus, other structures move according to the mus- 653
590 film industry. Since this is very expensive and strongly tailored to cle movement. In biomechanics and computer graphics, there are 654
591 a very specific learning scenario, some researchers developed sys- a number of detailed and anatomically plausible models that can 655
592 tems to generate animations based on high-level textual descrip- be used for functional anatomy support [84,85]. Such models can 656
593 tions. Graphics libraries, such as Open Inventor and Unity, with be employed to generate animations (recall Section 4.5). So far, we 657
594 their support for interpolations may be used to realize the actual discussed animations to present the anatomy from different view- 658
595 movements. points. Functional anatomy benefits from animations that display 659
596 Computer-generated animations. Inspired by computer-generated the dynamic character of processes, such as breathing and heart 660
597 animations for technical illustrations [79] and by available anatomy beats. 661
598 teaching videos, Preim et al. [80] introduced an extendable script-
599 ing language, where authors can specify which objects should be 5. Knowledge representation and labeling 662
600 emphasized and optionally which techniques should be employed
601 (otherwise, automatically generated suggestions apply). The high Although interactive visualizations are the primary components 663
602 level statements, such as show spine with focus on lumbar spine, of anatomy education systems, they are not sufficient to effec- 664
603 are translated via decomposition rules in a low-level script that tively support learning processes. The mental integration of visual 665
604 may be executed. Moreover, the display of labels and further tex- elements and related symbolic knowledge is an essential learning 666
605 tual explanations, like subtitles in a video, can be specified. The goal. 667

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

8 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

668 5.1. Knowledge representation

669 Symbolic knowledge relates to concepts, names, and functions


670 of anatomic objects and to various relations between them. Knowl-
671 edge representation schemes employ segmentation information
672 and allow to add various relations between individual objects, for
673 example, an object is part of a system. In the spirit of PBL, it is also
674 useful to add comments on the clinical relevance of an anatomi-
675 cal structure, e.g. complications that would arise if this structure is
676 damaged.
677 A sophisticated representation of symbolic anatomic knowledge
678 effectively builds an ontology composed of different views, e.g. dif-
679 ferent kinds of relations between anatomic objects. The first ad-
680 vanced (digital) knowledge representation for anatomy has been
681 developed by Schubert et al. [86] as a semantic net. It serves as a
682 basis for interactive interrogation. Among others, they represent:

683 • part of relations (one object belongs to a larger object, for ex-
Fig. 7. Interactive labeling with the VOXEL-MAN (From: [26]).
684 ample a functional brain area),
685 • is a relations which group anatomic objects to categories, and
686 • supplied by relations, which characterize the blood supply. visual objects, external labels are more promising. They have bet- 729
ter legibility, since they are placed on top of a background in uni- 730
687 The relation between labeled volumes and the symbolic knowledge
form color. Moreover, they do not disturb the perception of com- 731
688 is referred to as intelligent voxel. Similar concepts for knowledge
plex shapes. 732
689 representation have been used for the Digital Anatomist [28] and
690 the AnatomyBrowser [87]. The last notable development in this
691 area was presented by Rosse and Mejino [88]. They suggested a 5.3. Placement of external labels 733
692 standardized ontology of anatomical knowledge that can be used
693 by any system that requires an anatomy knowledge base. The con- We discuss several labeling strategies based on the following 734
694 nection of a VA system to a standardized ontology is meanwhile labeling requirements (LR) (cf. [69,92,93]): 735
695 typical [89].
LR1. Anchor points should be visible. 736
LR2. Labels must not hide any graphical representation. 737
696 5.2. Labeling anatomical models LR3. A label should be placed as close as possible to its reference 738
object. 739
697 Labels are used to establish a bidirectional link between visual LR4. Labels must not overlap. 740
698 representations of anatomic structures and their textual names. LR5. Connection lines should not cross. 741
699 Students should learn how anatomical structures are named and LR6. Label placement should be coherent in dynamic and interac- 742
700 where they are, as well as their shape features. tive illustrations. “Coherent” basically means that strong and 743
701 Almost all of the presented labeling techniques were inspired sudden changes of anchor point and label positions should 744
702 by anatomic illustrations and most of them were integrated be avoided. 745
703 in anatomy education frameworks used as research prototypes
704 [73,90]. However, none of these techniques were integrated in The list above represents a minimal subset of requirements dis- 746
705 wide-spread virtual anatomy systems. cussed by various authors. For the moment, we assume one anchor 747
706 Interactive labeling. The simplest and most general solution is point only. This point might be the center of gravity (typically ap- 748
707 to let the labeling task to the user: she might select a position, proximated as the average coordinate of all vertices) or the bound- 749
708 initiate a “label” command and then interactively place that label. ing box center. Since these points are often hidden, a nearby vertex 750
709 The label name corresponds to the object visible at the position should be chosen that is actually visible (LR1). For this purpose, 751
710 selected by the user. A line is generated to connect the label with the projection of an object in screen space should be analyzed to 752
711 the selected position. If more labels should be included, the user select a point of a visible segment of that object. In case an ob- 753
712 tries to avoid overlapping labels (see Fig. 7). This simple strategy ject has several visible segments, it is often appropriate to select a 754
713 is not appropriate in interactive scenarios where the objects are point from the largest segment [94]. 755
714 rotated and zoomed in, since the labels might simply disappear. In order to fulfill LR2-LR5, some general layout strategy is nec- 756
715 In addition to placing predefined labels, users may add labels essary. Observations in medical illustrations reveal two typical 757
716 and textual descriptions as personal notes, flexibly move and save strategies: either labels are arranged in vertical columns to the left 758
717 such labels. The BioDigital Human provides such advanced inter- and right of the drawing or they are placed closely to the silhou- 759
718 active labeling [30]. ette of the model. With both strategies, LR2 is fulfilled. Since it is 760
719 Automatic labeling. Automatic labeling strategies can be classi- difficult to place labels in concave notches, Hartmann et al. [93] as 761
720 fied into internal and external [91]. Internal labels overlap with the well as Bruckner and Gröller [69] suggest to use the projection 762
721 objects to which they refer, whereas external labels are arranged at of the convex hull of the drawing to guide label placement (see 763
722 the background and connected to the visual representation of their Fig. 8). Closeness of labels and graphical objects (LR3) is known to 764
723 reference object. The relation between a visual object and its label be essential for cognition [95]. Convex hull-guided placement (see 765
724 is easy to perceive with internal labels. Fig. 8) is the better choice according to LR3. 766
725 The use of external labels requires anchor points and connection LR4 and LR5 relate to the distribution of labels in the area that 767
726 lines. The selection of anchor points is crucial to establish a clear is “reserved” for labels. Once the areas are defined that are used 768
727 correlation between a label and its reference object. Despite the for labeling, a layout algorithm is necessary that manages occu- 769
728 problems of providing a correlation between a label and related pied space and ensures that each label has a separate space (and 770

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 9

Fig. 10. Labeling anatomic objects of the knee region with potential fields. Overlap-
ping labels are effectively avoided (From: [93])..

Anchor point determination. While simple strategies are suffi- 787


cient for labeling compact and visible objects, they may fail for 788
Fig. 8. External labeling of an anatomic illustration of a foot. Labels are arranged other objects. In case of branching objects, or objects that are 789
close to the convex hull of the whole model. Labels do not overlap each other and only partially visible, more anchor points are needed to convey 790
connection lines do not cross (From: [69]).
the relation between a label and a graphical object. This requires 791
to analyze the shape of graphical objects, e.g. by means of skele- 792
tonization. Since the skeleton represents the medial axis, points 793
on the skeleton are good candidates for being anchor points. Since 794
branching objects may be partially hidden, the visibility of anchor 795
point candidates has to be considered as well [92]. 796
Using potential fields for label placement. An advanced labeling 797
algorithm is based on potential fields and their applications in robot 798
motion planning [94]. The requirements for a good configuration of 799
labels are translated to the definition of attractive forces (between 800
a label and a related anchor point) and repulsive forces (between 801
different labels and between a label and anchor points that do not 802
belong to the graphical object). This algorithm starts from an initial 803
configuration of labels and moves them according to the forces. 804
Fig. 10 shows an automatically generated result. 805
Generation of intermediate connection points. Once the anchor 806
points for an object have been defined, the label may be connected 807
Fig. 9. Initial layout of labels arranged in columns left and right of an illustration
with each of them via straight lines. If the angle between two con- 808
(From: [90])..
nection lines is very small and the lines are hard to distinguish, 809
the resulting layout is confusing. The edge bundling principle in- 810
troduced in the visualization of graphs and networks [97] is useful 811
771 a small margin). As an example, the algorithm described by Hart- to improve the label layout. 812
772 mann et al. [93] proceeds as follows. Starting from initial labels
773 close to the silhouette of the whole model, labels are exchanged 5.4. Placement of internal labels 813
774 or slightly translated to prevent overlaps and crossing connection
775 lines. A maximum number of iterations ensures that the algorithm Internal labels are placed on top of the visual representation 814
776 terminates. Bruckner and Gröller [69] also employed this algorithm of an object. The use of internal labels is only feasible for objects 815
777 (Fig. 8). large enough to accommodate a label and that are not partially ob- 816
778 During interaction the area occluded by the illustration changes scured [98]. With respect to legibility, it is preferred to present la- 817
779 considerably and therefore the label positions have to be adapted. bels horizontally. If this is not possible, Götzelmann et al. [91] sug- 818
780 The layout in Fig. 9 considers the 3D bounding box of the underly- gest to place a label along the centerline. The skeleton path is 819
781 ing model and places labels outside which can be done with fewer smoothed to gently accommodate a label. Fig. 11 illustrates the 820
782 changes (LR6). However, even more important is that the label po- principle approach. The readability of internal labels has been im- 821
783 sitions are not automatically updated after every small rotation. proved by incorporating thresholds on the allowable curvature of 822
784 Updating may be performed after a user initiated it or after the the path. In interactive scenarios, internal labels have a role when 823
785 user stopped rotating the model. The placement of external labels graphical representations are strongly zoomed in. An alternative 824
786 w.r.t. a balanced layout is described by Preim et al. [96]. strategy by Azkue [46] uses numbers instead of textual labels and 825

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

10 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

Fig. 12. Rotation of anatomical structures is strongly improved with handles de-
Fig. 11. Hybrid labeling of heart anatomy with internal and external labels. Internal rived by an anatomy-oriented coordinate system (Inspired by: [15]).
labels smoothly fit to the 3D shapes, but legibility is reduced compared to external
labels (From: [98])..
base (recall Section 5.1) and requires an appropriate layout strat- 863
egy where the annotation is clearly visible and does not hamper 864
826 accompanies a legend with the associated names. The placement the recognition of other structures. 865
827 of numbers is easier due to their smaller space requirements, but
828 the use of the legend increases the cognitive load. 6. Interaction techniques 866

829 5.5. Labeling of interactive illustrations Before discussing interaction techniques, we briefly mention re- 867
quirements that we derived from a number of publications, e.g. the 868
830 Labeling interactive 3D illustrations poses some challenges. cognitive task analysis by Berney et al. [102] and our own experi- 869
831 When a 3D model is zoomed in, the visibility of objects and their ence. 870
832 relative positions do not change. With an external labeling strategy,
833 only the connection lines need to be updated. Labels may, however, 6.1. Basic interaction techniques 871
834 occlude part of the geometry. Therefore, it is reasonable to switch
835 to internal labeling for such objects. There are some basic functions that are available in almost all 872
836 When the 3D model is rotated, the visibility of objects as well VA systems. A hierarchical list widget often allows to show/hide 873
837 as their relative position in screen space will change. Different individual anatomical structures or whole categories, such as mus- 874
838 strategies are viable to account for this situation: Labels relating cles or tendons. Some systems support a layer-based exploration, 875
839 to objects that are no longer visible are hidden or rendered more i.e. outer structures may be removed and subsequently more and 876
840 transparent to indicate that they are “invalid”. The labels may be more inner structures are removed as well. This layer-based explo- 877
841 rearranged in vertical and horizontal position to reflect the new ration better fits to volume-rendering based systems, where struc- 878
842 screen space position of their anchor points. As a consequence, tures may be clipped instead of being always completely removed. 879
843 connection lines may get long and cross each other. “Rearranging” It is possible to select objects with a pick on a rendered im- 880
844 labels yields consistency between labels and related objects. How- age resulting in some emphasis and often a label displayed, e.g. as 881
845 ever, it is highly distracting when labels “fly” around a 3D model a tooltip. Some systems provide advanced selection support, such 882
846 while the user’s focus is on perceiving the movement of anatomi- as snapping, that is useful for small structures, e.g. vascular struc- 883
847 cal structures. Better strategies are to rearrange labels after a rota- tures. Probing tools that can be freely placed in a 3D model and 884
848 tion is finished or to provide a command that triggers a new com- that list all anatomical structures along the ray defined by the tool 885
849 putation of the label layout [96]. Mühler and Preim [99] discussed are helpful for learning. Probing and selection within large geomet- 886
850 more options, e.g. to change the style of the connection line when ric models may lead to a delay if not efficiently implemented. Hier- 887
851 the anchor point becomes hidden. archic data structures, such as octrees, accelerate the computation 888
852 More details on labeling, in particular on internal labeling, pre- of polygons hit by a selection ray. As an alternative with reduced 889
853 sentation variables, such as line width and fonts as well as imple- preprocessing, Smit et al. [40] suggested to use an offscreen ren- 890
854 mentation of labeling algorithms, are discussed in a survey arti- derer where for every object a unique color is assigned so that the 891
855 cle [100]. In this article also labeling of 2D images for radiographic color in the render buffer reveals the object visible at that pixel. 892
856 and cross-sectional anatomy is discussed. Rotation is the core function of any VA system. It is typically 893
857 Other annotations. In addition to textual labels, graphical anno- unrestricted, i.e. an anatomic surface model can be rotated on any 894
858 tations may be useful for educational purposes. Fairen et al. [101], axis, e.g. with a trackball widget. Panning and zooming are other 895
859 for example, add red and blue arrows to represent arterial and essential features. Mouse-based rotation of 3D models is not very 896
860 venous blood flow. Anatomical structures may also be encircled intuitive. An improved solution is possible with handles displayed 897
861 or pointed at with arrows to emphasize some relations between (see Fig. 12). These handles should convey an object-specific ref- 898
862 them. This needs to be prepared in the design of a knowledge erence frame. This enhancement is motivated by vision research 899

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 11

900 that indicates that mental rotation benefits from a clear reference
901 frame. The integration of an orientation widget, e.g. a mannequin,
902 to convey the current viewing direction and the facility to return
903 to a “home” position improve the experience during rotations.

904 6.2. Advanced interaction techniques

905 In the following, we briefly discuss additional interaction tech-


906 niques that are provided by a few systems only but seem justified
907 by the education goals of virtual anatomy. Some systems provide
908 also measurement facilities, e.g. a virtual ruler to assist students
909 in understanding absolute and relative sizes of anatomical struc-
910 tures [35].
911 An advanced feature of VA systems is to let learners dupli-
912 cate a selected object to rotate it in a separate view. This ghost
913 copy [103] enables a detailed inspection of that object. Clipping
914 and cutting are typically restricted to VA systems based on volume
915 rendering.
916 In addition to planar clipping, more advanced cutting, such Fig. 13. A teacher may intuitively load and sketch branching structures, such
as vessels of the liver, to explain vascular structures (created with an approach
917 as with a virtual scalpel (recall [6]), a deformable cutting plane
from [112]).
918 [104] or illustrative membrane clipping [105] may provide a sub-
919 stantial learning experience. The virtual scalpel and the deformable
920 cutting plane are inspired by surgical approaches and thus provide
921 a link to clinical medicine. The illustrative membrane clipping is a
922 modified type of clipping where the clipping plane is locally de-
923 formed to avoid that structures are cut if there border is close
924 to the plane. The resulting visualizations often better follow the
925 user’s intents. We also want to mention peel-away visualizations
926 [106]. Here the user is provided with peel-away templates for sep-
927 arating anatomical regions to be rendered with a different trans-
928 fer function. This technique is realized in a view-dependent man- Fig. 14. A simple model of the Aortic arch is sketched (left) and converted into a
929 ner and enables a focus-context-type of visualization. Magic Lenses 3D model (right) (From: [111]).
930 [107] is another potential useful interaction technique for explor-
931 ing 3D anatomical models. As an example, a volumetric lens, e.g. a
932 spherical lens, may be moved over a model and show a different w.r.t. longterm experience of learners. Thus, it is currently not clear 961

933 layer, such as the nerves. It must be noted that only the virtual whether a VA system would benefit from incorporating gesture- 962

934 scalpel was indeed used for anatomy education. based interaction. 963

935 Facilities to enable drawing on surfaces to mark regions, com- Sketching. An innovative and not yet carefully evaluated in- 964

936 ment on them, and eventually share with others are also use- teraction for education is to employ free-hand sketching. Pihuit 965

937 ful. Self-assessment exercises and real-time feedback are essential et al. [111] developed a first system strongly inspired by chalkboard 966

938 components of VA systems [1]. illustrations performed by anatomy teachers. Anatomy teachers are 967

939 Haptic interaction. Haptics would greatly enhance the experi- used to draw, explain, and refine the illustration with further ob- 968

940 ence of learners exploring and interacting with anatomical mod- jects and explain how they relate to the overall system or organ. 969

941 els. In particular, interactions, such as cutting would benefit from Students reproduce such sketches and thus favor memorization 970

942 tactile feedback that conveys the different stiffness and elasticity of the spatial relations. Chalkboard illustrations, however, are re- 971

943 of tissue structures. Basic strategies of incorporating haptics in the stricted to 2D. 972

944 exploration of anatomical models were developed by Petersik et al. Pihuit et al. [111] present a system where teachers provide 973

945 [108]. Tactile feedback needs a higher frame rate than the visual- sketches of anatomic structures, containing hatching lines and sil- 974

946 ization that is challenging if complex geometric structures are in- houettes. This sketch is then transformed in a smooth implicit sur- 975

947 volved. This is feasible meanwhile, however, we are not aware of face representation. The process of reconstructing a 3D geometry 976

948 virtual anatomy systems that really incorporate haptics. Haptic is from 2D sketches involves a number of constraints. Vascular sur- 977

949 however used in a variety of surgical simulation systems. face models arise by applying convolution (see Fig. 13). The mod- 978

950 Gesture and touch input. Multi-touch interaction, pioneered with els are displayed with an illustrative rendering technique, recreat- 979

951 the iPhone, is very familiar among medical students. The Medical ing the appearance of chalkboard drawings (recall Section. 4.3 and 980

952 Visualization Table provides a great user experience when explor- Fig. 14). 981

953 ing medical volume data with gestures for pan, zoom and rotation Saalfeld et al. [112] came up with a similar technique using the 982

954 [109]. This system, provided by SECTRA, was recently enhanced to semi-immersive zSpace and pen-based input to make the drawing 983

955 support collaborative medical education.2 process more convenient. In addition to completely sketched 3D 984

956 Gesture-based input, as a natural interaction style, may improve models, an existing model may be enhanced by drawing further 985

957 the user experience and contribute to the motivation of learners. structures, such as nerves, that may not be part of the original 986

958 Hochmann et al. [110] employed gestures with the Microsoft Kinect model due to their small size. 987

959 and [40] employed the Leap Motion Controller for 3D interaction Guidance. Instead of a powerful highly flexible system that lets 988

960 with anatomic models. However, both systems were not evaluated users do everything but without any support for efficient learn- 989
ing, users may be supported by various forms of guidance. Prede- 990
fined learning paths may be available as an optional orientation. 991
2
https://www.sectra.com/medical/sectra_table/ Halle et al. [89] suggest an anatomic tour consisting of bookmarks 992

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

12 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

993 created by a group of learners. Like a guided tour for tourists, an


994 anatomic tour could be a good starting point for the navigation and
995 exploration of a learner.

996 7. Virtual anatomy systems

997 In the following three sections, we describe selected examples


998 of virtual anatomy systems. We put emphasis on the new con-
999 cepts introduced in these systems and classify them w.r.t. the as-
10 0 0 pects of anatomy that are supported (recall Section 2.2). Also, we
1001 mention the underlying data for all systems. Most system descrip-
1002 tions do not mention an educational concept or are clearly related
1003 to a learning theory (recall Section 2.1). Instead, they explore the
1004 space of possible solutions to provide access to 3D anatomy mod-
1005 els and related textual descriptions. In case, an educational concept
1006 is mentioned or obvious from the paper, it is explicitly mentioned.
1007 Most of the VA systems were described as digital atlas, thus us-
Fig. 15. Different viewing modes used in the VOXEL-MAN. Left: simulated X-ray.
1008 ing the anatomy atlas as a metaphor. This is a good choice, since Right: CT slices combined with surface rendering of selected objects. (From: [26]).
1009 students of medicine are familiar with the term “atlas” and have
1010 rich associations with it. Unfortunately, the term “atlas” does not
1011 encourage 3D interaction, such as rotation and cutting. We will see Primal pictures. Primal Pictures is a series of individual pack- 1055
1012 that other metaphors are discussed as well. ages that support regional, functional, cross-sectional, and system 1056
1013 After a short discussion of requirements, we start with early anatomy. Later versions are web-based. With the PrimalPictures 1057
1014 systems, while in Section 8 we discuss systems that employ 3D packages, users can add or remove layers. Some packages pro- 1058
1015 web standards. Basically, this is a distinction in early and recent vide biomechanics, such as muscle flexion and hip range of mo- 1059
1016 systems, since most recent VA systems employ Web 3D technol- tion, and other functional anatomy animations including respira- 1060
1017 ogy. tion. The packages follow a guided approach to learning and pro- 1061
vide many functions to support the authoring process of anatomy 1062
1018 System requirements. teachers. Primal Pictures is a company where anatomists and 1063

1019 SR1. Provide a dataset where the target anatomy is represented graphic artists have been working together since 1991 to create 1064

1020 in sufficient level of detail to recognize structures, such as high-quality 3D surface models based on clinical data.3 Since it is 1065

1021 smaller vessels, nerves and the lymphatic system. a commercial product, its design is not discussed in scientific pub- 1066

1022 SR2. Provide facilities to flexibly explore a 3D dataset from arbi- lications. 1067

1023 trary perspectives and in a user-selected scaling factor. Digital Anatomist. The Digital Anatomist was another early 1068

1024 SR3. Provide techniques to explore the related semantic informa- and influential system [28,117]. Its focus was the knowledge rep- 1069

1025 tion, e.g. membership of a structure to a certain category, a resentation that was both deep and comprehensive. Already in 1070

1026 group of adjacent objects, and connectivity. 1999 the system incorporated 26.0 0 0 anatomical concepts con- 1071

1027 SR4. Provide direct support for rehearsal, e.g. a quiz function, nected by 28.0 0 0 links. The system incorporates 3D renderings and 1072

1028 puzzle options. predefined movies. Some of these movies show exploded views 1073

1029 SR5. Provide clipping and cutting tools to enable users to expose that may help to understand spatial relations. A group of these 1074

1030 inner structures and study their location in relation to outer movies presents neuroanatomical structures in relation to essential 1075

1031 structures. anatomic landmarks. 1076


The Digital Anatomist supports regional and cross-sectional 1077
1032 SR1 is not always fulfilled. In particular, nerves and the lym- anatomy in the brain, including major fiber tracts and functional 1078
1033 phatic system are often considered by learners as missing. Hein- areas. The system is based on cadaver scans and clinical MRI data. 1079
1034 rich et al. and Kraima et al. discuss the creation of anatomically The educational aspects are considered as well: a tutorial and a 1080
1035 complete models in the sense of SR1 [27,113]. SR2-SR4 are typically question-and-answer mode are available. The DigitalAnatomist is 1081
1036 fulfilled to some extent. VA systems differ in their specific realiza- still available, but restricted to rendered images of 3D models. In- 1082
1037 tion. Clipping and cutting (SR5) is only fulfilled in systems that are teractive exploration is not supported. Following the neuroanatom- 1083
1038 based on volumetric data. ical examples, a knee and a thorax atlas with many detailed 2D 1084
1039 VOXEL-MAN. The VOXEL-MAN is a pioneering work that in- and 3D views were added. 1085
1040 spired many later developments [114,115]. The first version was Anatomy Browser. The AnatomyBrowser is also focussed on 1086
1041 based on clinical data; later versions employed the VH datasets. neuroanatomy. It is based on a multitude of brain MRI data 1087
1042 Two aspects of the development are crucial: substantial visual- and thus supports comparative anatomy (as well as regional and 1088
1043 ization research was carried out, e.g. to achieve high-quality and cross-sectional anatomy) [87,118]. The cases also include patient 1089
1044 efficient rendering of tagged volume data and cut surfaces [5,6], data providing a link to clinical medicine. Again, a comprehensive 1090
1045 and the image data was connected to a sophisticated knowledge knowledge base is used. Interactive labeling is available and quite 1091
1046 base prepared by an anatomist [86] (recall Section 5.1). The knowl- flexible: users may adjust colors, line styles, fonts and font sizes. 1092
1047 edge base was designed to be extendable, e.g. to add biomechan- Although this seems a minor aspect, it may be very helpful for 1093
1048 ical properties. A virtual endoscopy mode was essential to better learners, in particular if they exhibit vision deficiencies. 1094
1049 understand air-filled or fluid-filled structures, such as the colon or Anatomic VisualizeR. In contrast to previous VA systems, the 1095
1050 the bronchial tree. Anatomic VisualizeR is inspired by the virtual dissection metaphor 1096
1051 The VOXEL-MAN supports cross-sectional, regional, and system- and thus puts emphasis on clipping tools. The Anatomic Visual- 1097
1052 atic anatomy (see Fig. 15). Later developments added the correla-
1053 tion of X-ray images with 3D anatomy, gastrointestinal endoscopy,
1054 and the correlation of ultrasound images with 3D anatomy [116]. 3
This information is based on the website https://primalpictures.com/

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 13

Fig. 16. Muscles, ligaments and bones of the foot. The user selected a label on the
left and one on the right. The ZoomIllustrator lets the labels grow so that it can
accommodate explanations. Other objects mentioned in the explanations are high-
lighted (From: [90]).
Fig. 17. A 3D model of the bones and tendons is correctly assembled in the left
view. In the right view, the user has models of all muscles in the foot region. Her
task is to move them in the left viewer and dock them at the right position (Cour-
tesy of Felix Ritter, University of Magdeburg).
1098 izeR was developed to provide a flexible and extensible tool. It
1099 may be linked to images, textual information, video and anima-
1100 tions [35]. In addition to detailed renderings based on the VH name triggers an animation that shows and emphasizes the related 1144
1101 dataset, schematic 3D renderings are available for an introduction. anatomical structure. Fig. 17 gives an example of the system’s use. 1145
1102 The schematic renderings may also be compared with the detailed Educational concept. The 3D puzzle is designed to provide im- 1146
1103 renderings in the same view. plicit guidance by the design of the tasks [123]. Enabling learners to 1147
1104 Educational concept. In contrast to other systems that empha- compose or decompose anatomical structures is mentioned as an 1148
1105 size flexibility, the Anatomic VisualizeR provides guidance. A study anatomy learning strategy by Dev et al. [17] and associated with 1149
1106 guide provides an organized presentation of key concepts, exer- constructivism. In a similar vein, Ma et al. discuss constructivism 1150
1107 cises and recommended actions for each learning module. and anatomy learning [22]. Today, the 3D Puzzle could be consid- 1151
1108 ZoomIllustrator. The ZoomIllustrator [90] focuses on the inter- ered a serious game. However, it does not fully use the motivational 1152
1109 active exploration of the information space consisting of geomet- potential of serious games, e.g. it misses a reward strategy. 1153
1110 ric models, labels, brief explanations, more detailed explanations Virtual ear anatomy. A variety of systems were developed for 1154
1111 and figure captions [119]. It is a zoomable user interface that flex- ear anatomy education. These developments are motivated by the 1155
1112 ibly allocates space for the currently needed textual explanations filigrane structure of the inner and middle ear that need to be fully 1156
1113 (see Fig. 16). Geometric models were acquired from a commercial understood in case of surgery. Cadaver dissection is not effective 1157
1114 provider (Viewpoint Datalabs) where these models were created in the ear region due to the extremely small size of the involved 1158
1115 manually. Figure captions are automatically generated and reflect, structures and their complex relation [3]. 1159
1116 e.g. the visible objects, the viewing directions and the colors em- As an example for such systems, we discuss the system of 1160
1117 ployed. To enable comparisons of different views, two renditions Nicholson et al. [3]. They employ a high-resolution MRI scan of 1161
1118 can be displayed side by side and structures visible in both views a cadaver. The extracted geometric model is comprehensive, care- 1162
1119 are labeled in-between. fully smoothed and stored as a VRML file. Instead of building a 1163
1120 Automatic labeling with external labels is incorporated (recall complete VA system, they used the CosmoPlayer that supports the 1164
1121 Section 5.1). Also animations can be generated based on a scripting exploration of VRML models incorporating pan and zoom as well 1165
1122 language [80] (recall Section 4.5). The evaluation of the ZoomIllus- as rotation. No type of clipping or virtual dissection is supported. 1166
1123 trator was focussed on acceptance and usability [120]. A major This VA system did not pioneer any visualization or interac- 1167
1124 result was that students used the available 3D interaction facilities tion technique. The contribution of Nicholson et al. [3] is to pro- 1168
1125 quite rarely. vide a high-quality model of such a complex anatomic region and 1169
1126 3D Puzzle. The 3D Puzzle [121] was developed at the same in- the evaluation. In the anatomy community, this is often considered 1170
1127 stitution (using the same geometric data) and was partly a con- the first convincing example that VA systems may indeed improve 1171
1128 sequence of the ZoomIllustrator evaluation. The hypothesis un- anatomy education. We come back to this system in Section 10, 1172
1129 derlying the 3D Puzzle was that another metaphor is needed that since the evaluation of this VA system followed high standards. 1173
1130 encourages 3D interaction. To compose a 3D model of many indi- Several VA systems for ear anatomy have been developed later, 1174
1131 vidual objects using predefined docking positions requires to grab e.g. Venail et al. [124] employed an ear model derived from the 1175
1132 objects, rotate them and zoom into the regions around the docking VH datasets as well as a model they created based on serial- 1176
1133 positions. A snapping mechanism supports the docking task. Learn- histological slices, i.e. with a very high spatial resolution. The VA 1177
1134 ers have to think about the sequence of docking actions, since in- system built around these models was also assessed very posi- 1178
1135 ner structures can only be placed correctly if not too many outer tively. 1179
1136 structures are already connected. This thinking about anatomical Integrated pelvis model. The integrated 3D pelvic model pre- 1180
1137 layers corresponds well to relevant anatomic knowledge. sented by Kraima et al. [27] is focused on surgical anatomy, i.e. it is 1181
1138 To enable users to compose an anatomic model is challenging. driven by possible major complications, such as damage of nerves. 1182
1139 Thus, the 3D puzzle incorporates depth cues, such as shadow pro- Thus, increased knowledge of these nerves and adjacent lymphatic 1183
1140 jection, shadow volumes, stereo rendering and collision detection structures is essential. Kraima et al. argue that surgically vulner- 1184
1141 (see Wanger et al. [122] for a discussion of depth cues). 3D in- able tracts need to be identified. For creating a complete pelvic 1185
1142 put with a SpaceBall and the facility to bimanually use the sys- model, Kraima et al. argue that immunochemistry and fluoroscopy 1186
1143 tem further support the 3D exploration. The selection of an object need to be integrated in one unified model. The data structures 1187

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

14 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

Table 1
Major early VA systems.

System Data Key Features Key Publications

VOXEL-MAN VH High-quality rendering, cutting, rich knowledge base [5,26]


Primal Pictures Artistic Functional anatomy (range of motion) –
Digital Anatomist Clinical Rich knowledge base, neuroanatomy [28,117]
Open Anatomy Browser Clinical datasets Neuroanatomy, comparative anatomy [87,118]
Anatomic VisualizeR VH Provides guidance, exercises [35]
ZoomIllustrator Artistic Zoomable UI, automatic labeling [80,90]
3D Puzzle Artistic 3D interaction, support for depth perception [121,123]
Virtual Ear Cadaver Realistic display of filigrane ear anatomy [3]

1188 for a unified model are discussed by Smit et al. [125] with a focus objects, e.g. using vessels, nerves and other categories as group 1239
1189 on a joint coordinate system for structures derived from data with nodes. For individual objects and groups, transformations, mate- 1240
1190 different resolution. rial properties, animations, e.g. by interpolation, and predefined 1241
1191 VarVis. As last VA system in this section, we briefly discuss a viewpoints can be specified. The integration of a volume rendering 1242
1192 system aiming at comparative anatomy teaching [126]. The VarVis node in VRML, essential for VA systems, was carried out by Behr 1243
1193 system is focused on branching structures, where the differences and Alexa [131]. VRML (1995) and its extension VRML2 (1997) re- 1244
1194 between variants relate to topological differences, e.g. missing quire plugins to be installed. In hospitals this is often not possi- 1245
1195 branches or additional branches. As an example, there are eight ble and the students at that time were often not technical savvy. 1246
1196 major variants of arterial supply of the liver. The variant that oc- This problem persisted with the X3D standard [132], an eXtensible 1247
1197 curs in a patient is relevant for avoiding complications in surgery. 3D standard that adds capabilities relevant for VA systems, such 1248
1198 Major learning goals are: as nodes that support programmable shaders and multi-texturing. 1249
X3D provides an XML-like structure [16]. Clipping planes were 1250
1199 • the knowledge of occurring variants,
added in X3D version 3.2. 1251
1200 • the frequency of variance, and
WebGL, a JavaScript graphics API based on OpenGL ES 2.0, 1252
1201 • similarities and differences.
finally enables 3D graphics displayed in modern web browsers 1253
1202 Smit et al. employ a text book illustration of the relevant branching without any plugin. It is widely available on mobile and desktop 1254
1203 structures and enhance it with further information. A global view platforms. The current version, WebGL 2.0, extended WebGL 1255
1204 indicates the similarity between variants, whereas a local view en- with support for flexible and powerful rendering, including 3D 1256
1205 ables an in-depth analysis of differences between a pair of variants. texture mapping. Because WebGL exploits the graphics card at the 1257
1206 Branching structures are modeled as graphs with nodes represent- client, it is not necessary to perform rendering at a server. Thus, 1258
1207 ing branching points. Based on the graph representation, similarity WebGL-based applications may be simultaneously used by many 1259
1208 is assessed based on graph matching (the better two graphs can be learners with sufficient performance. For VA systems, there is only 1260
1209 matched with each other, the more similar they are). one major drawback of WebGL: volume rendering is still not part 1261
1210 While variations in anatomy are frequently studied, e.g. in ba- of the standard. 1262
1211 sic research, few attempts were made to support educational sce- The Web3D consortium’s Medical Working Group4 is continuing 1263
1212 narios in comparative anatomy. Besides the VarVis system, only a to develop X3D and related technologies for medical use, particu- 1264
1213 framework by Handels and Hacker is notable [127]. It conveys the larly to integrate the technology with the DICOM standard, such 1265
1214 variability of organ shapes, such as the kidney. as the ability to drag and drop DICOM files on a web browser 1266
1215 Summary. We have discussed a variety of VA systems that dif- to view medical volumes. MedX3D is an implementation of the 1267
1216 fer in the underlying data and in the viewing tools used to en- volume rendering component of X3D consisting of a volume data 1268
1217 able learners to explore them. As Halle et al. [89] point out, a node, a segmented volume node, a volume style node, e.g. boundary 1269
1218 large drawback of most developments is that atlas data and view- or silhouette enhancement. The default volume renderer is imple- 1270
1219 ing tools are not standardized and cannot be exchanged. None of mented using a GPU-based raycaster. Polygonal surfaces and vol- 1271
1220 the discussed major VA systems support any type of cooperation; umes may be rendered together with correct depth sorting [133]. 1272
1221 they are all focused on the (isolated) learner. The presented sys- So far, there are no essential web-based VA systems based on vol- 1273
1222 tems are summarized in Table 1. umetric data. 1274
In addition to the open standards, QuickTime VR was fre- 1275
1223 8. 3D web-based anatomy education quently used to provide animations as part of web-based VA sys- 1276
tems [116,134]. 1277
1224 The promise of e-learning is that resources for learning are Teaching of the nervous system. Brenton et al. [1] discuss gen- 1278
1225 available at any time and any place. This promise can only be fully eral concepts of web-based VA systems and use the specific ex- 1279
1226 realized with web-based systems. Web-based VA systems may ample of the nervous system of the brachial plexus (a network 1280
1227 reach a large number of learners and may support collaborative of nerves extending from the neck to the shoulder, arm and fin- 1281
1228 learning. The design of web-based VA systems needs to consider gers) to refine their concepts. The nerves are modeled in form 1282
1229 general experiences and recommendations related to web-based of a diagrammatic representation with labels, measurements and 1283
1230 learning environments, as discussed by Cook and Dupras [128]. A clearly visible branchings. As anatomical context, the vertebrae are 1284
1231 special challenge in anatomy education is to handle large geomet- required. Brenton et al. [1] argue that high-quality surface models 1285
1232 ric models. are needed and show how they may be generated with subdivision 1286
1233 Open web-based standards. Since 3D visualizations are the es- modeling (see Fig. 18). 1287
1234 sential components of anatomy learning systems, 3D standards for EVA. A prominent example for a web-based anatomy learning 1288
1235 the web are essential. The first web-based anatomy education sys- tool was EVA (educational virtual anatomy) presented by Peters- 1289
1236 tems, e.g. [129,130], were built on top of the VRML standards de-
1237 veloped by the Web3D consortium. The virtual reality markup lan-
1238 guage enables to represent a complex 3D model as a hierarchy of 4
http://www.web3d.org/working-groups/medical/charter

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 15

Fig. 18. The surface model of a vertebrae is considerably smoothed with subdivi- Fig. 19. Hepatic anatomy is shown in enhanced slice-based views and in a 3D vi-
sion surfaces to avoid distracting details (From: [1]). sualization as part of the LiverAnatomyExplorer. Labels are shown as tooltip. In-
cremental rotation and zooming is possible (Courtesy of Steven Birr, University of
Magdeburg).

1290 son et al. [135]. Their system was focussed on arteries extracted
1291 from clinical CT angiography data comprising nine regions of the face and pathologies, e.g. liver metastasis. Besides 3D visualization, 1340
1292 body. They used QuickTime VR in combination with the medical 2D slice images are available and can be integrated with the 3D 1341
1293 image viewer Osirix as technical basis following a recommenda- visualization (see Fig. 19). The SVG standard is employed to real- 1342
1294 tion from [136]. The 3D visualizations were provided in three lev- ize the visualization of cross-sectional slices. Interactive labeling is 1343
1295 els of detail resulting in more or less finer branches. Color-coding supported as well as emphasis of selected objects with a focus- 1344
1296 was employed to separate major vessels, such as Arteria carotis in- context visualization. The 3D models were simplified to some 10% 1345
1297 terna, Arteria vertebralis, and Arteria basilaris with clearly different of their original size to make interactive exploration possible. 1346
1298 hues. The artery visualization was shown along with visualizations Surgical anatomy of the liver is a worthwhile topic for 1347
1299 of the skeletal anatomy, e.g. the skull or the pelvis. Besides the VA systems. Other examples are presented by Crossingham 1348
1300 3D visualization a panel with textual information is shown, where et al. [139] and Mueller-Stich et al. [82]. The VirtualLiver system 1349
1301 the arteries of the selected region are explained with hyperlinks [139] employs older web technologies. 1350
1302 to other arteries that reflect branchings. The texts are taken from Online anatomical human. The Online Anatomical Human pro- 1351
1303 Gray’s Anatomy [137]. The only link between the 3D visualization vides also access to slice-based and 3D visualizations. It provides 1352
1304 and the textual components of EVA is provided by the use of col- advanced functions to annotate 3D models (derived from clinical 1353
1305 ors. An artery is colored in the textual description in the same way image data). Anatomical landmarks, lines, e.g. incision lines for 1354
1306 like in the 3D visualization. EVA is comprehensive, comprising nine surgery, and region annotations may be added [40]. Region anno- 1355
1307 regions of the body. tation requires special functions to draw directly on meshes. These 1356
1308 Google body and ZygoteBody. The first VA system based on We- functions may be used by learners and teachers, e.g. for exercises 1357
1309 bGL was Google Body introduced in 2010 using beta versions of where learners should annotate structures. Gesture-based interac- 1358
1310 current web browsers. The system is based on artificially gener- tion with the Leap Motion controller is another speciality of this 1359
1311 ated data and provides an easy-to-use zoomable interface inspired VA system. 1360
1312 by other Google applications, such as Google Maps and Google Open anatomy browser. The most recent and substantial devel- 1361
1313 Earth. Free exploration, search for anatomical structures and dif- opment was published by Halle et al. [89], again using WebGL. 1362
1314 ferent layers that could be shown or hidden were available. When Their Open Anatomy Browser is based on a longterm experience 1363
1315 objects are selected (and labeled), all other objects are rendered at the Surgical Planning Lab in Boston to create digital atlas data 1364
1316 transparently to emphasize the selected object. Realistic textures, and viewing tools (recall [87,118]). The Open Anatomy Browser em- 1365
1317 in particular of muscles, are employed. From the beginning, the ploys clinical MRI data. Two contributions are essential: 1366
1318 system also aims at mobile use from smartphones and tablets. Af-
• The cooperation between users is directly supported by en- 1367
1319 ter Google finished this project, Zygote continued with their Zy-
abling to share bookmarks and dynamic views and 1368
1320 goteBody project [29]. The system design is explicitly linked to
• an architecture that decouples atlas data and viewing tools. 1369
1321 constructivist theory.
1322 BioDigital human. The development of the BioDigital Human We want to elaborate on the second aspect: When atlas data 1370
1323 started at the School of Medicine’s Division of Educational Infor- is represented based on official and widely used standards, a 1371
1324 matics [30] at the New York university and turned into a com- whole community may focus on creating viewing tools. The atlas 1372
1325 mercial product of BioDigital in 2013. It is a comprehensive web- data comprises image and geometry data, annotation and is man- 1373
1326 based system that incorporates the normal anatomy and more aged by a Google Database. Moreover, the atlas data may be up- 1374
1327 than 1.0 0 0 health condition models. It provides dissection facilities dated, enhanced with further information, or corrected if neces- 1375
1328 and a collapsable browser of the hierarchy of anatomic structures. sary. This reflects a new perspective on anatomy atlas data: they 1376
1329 BioDigital Human can be highly personalized for both teachers are no longer considered as complete and authoritative but sub- 1377
1330 and learners and has over 2 million registered users. ject to interpretation and revision. While all previous systems em- 1378
1331 The geometric models used in ZygoteBody and the BioDigital ployed static atlas data that was not meant to be changed, like a 1379
1332 Human were created by artists. printed encyclopedia, the Open Anatomy Browser is inspired by 1380
1333 Liver Anatomy Explorer. One of the first anatomy education sys- Wikipedia: a team effort to maintain the knowledge source based 1381
1334 tems that were built on top of WebGL is the LiverAnatomyEx- on mechanisms that check for consistency and support versioning 1382
1335 plorer [138]. This system was designed to rehearse the vascular promises high-quality and up-to-date information. Both the view- 1383
1336 anatomy that is crucial for liver surgery. Thus, the system provides ing tools and the atlas data are freely available as open source. In 1384
1337 a large variety of cases (derived from clinical CT data) to repre- the same vein, Azkue [46] argue that surface meshes should be re- 1385
1338 sent the variability of the arterial supply and venous drainage as viewed “for anatomic accuracy and educational relevance” before 1386
1339 well as the portal vein along with a visualization of the liver sur- being widely used. Halle et al. describe interesting implementation 1387

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

16 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

Table 2
Major web-based VA systems.

System Data Key Features Key Publications

EVA CT angiography Three levels of detail, Osirix integration [135]


ZygoteBody Artistic Link to constructivism, WebGL support [29]
BioDigital Human Artistic Anatomy + many health conditions, personalization [30]
LiverAnatomyExplorer Clinical CT WebGL, SVG, focus on surgical anatomy [138]
Online Anatomical Human Clinical Advanced interaction, drawing, gestures [40]
Open Anatomy Browser Clinical MRI Cooperation support, standardization [89]

1388 details, e.g. how to store the application state and use this infor- A stereoscopic system that tracks the user’s head position is the 1439
1389 mation to enable other viewers to share the view. zSpace. The Visible Body system was ported to the zSpace to em- 1440
1390 Mobile VA systems. Apps for mobile anatomy education are po- ploy its benefits. Another example of the usage of the zSpace is 1441
1391 tentially useful although the small display clearly is a disadvan- the mentioned system from Saalfeld et al. [112] to sketch anatomi- 1442
1392 tage for understanding the complex human anatomy. According to cal structures. Although primarily aiming at neurosurgery training, 1443
1393 Lewis et al. [140] Visible Body, 3D4Medical, and PocketAnatomy the system presented by John et al. [147] provides concepts useful 1444
1394 are leading examples. Checking their websites and Youtube videos for anatomy education as well. Surprisingly, it demonstrates that 1445
1395 reveals that they exhibit convenient touch-based interaction and for the chosen tasks stereoscopy (provided by the zSpace again) 1446
1396 the full range of interactions that we discussed earlier. Detailed, did not add any value. 1447
1397 textured surface models are the basis for such systems. These are
1398 commercial systems where no scientific publications describe and 9.2. VR systems 1448
1399 discuss their design and evaluation.
1400 Summary. The availability of VA systems in web browsers, in Immersive VR systems “provide an interactive environment that 1449
1401 particular if no plugins are needed, spread the use of virtual reinforces the sensation of an immersion into computer-specified 1450
1402 anatomy. Performance aspects, such as providing a high frame rate, tasks” [20]. The following aspects are often discussed as essential 1451
1403 are more important when web-based systems are designed. Sim- for VR learning environments as they are related to a high motiva- 1452
1404 plification of geometric models may be necessary. Web browsers tion [20]: 1453
1405 enable cooperative working. The cooperative features, however, are
• intuitive interaction, 1454
1406 not adequately used so far. The presented web-based VA systems
• the sense of physical imagination, and 1455
1407 are summarized in Table 2.
• the feeling of immersion. 1456

1408 9. Stereoscopic and VR systems While previous VR systems were expensive and fragile, the recent 1457
development of VR glasses has the potential for a wide adoption 1458
1409 Although many papers in medical journals are entitled “VR- also for anatomy education. The design of VR-based systems aims 1459
1410 based” and imply some form of stereoscopic or head-worn display, at exploiting the potential of VR for an enhanced understanding 1460
1411 the majority of VA systems is “only” a desktop-based interactive and at avoiding motion sickness, that, as well as fatigue, may occur 1461
1412 3D visualization. when such systems are used for a longer time. 1462
SpectoVive. A yet unpublished system is the SpectoVive VR 1463

1413 9.1. Stereoscopic systems room5 that combines a powerful volume rendering framework 1464
with the exploration by means of the HTC Vive. In addition to 1465

1414 The recent survey by Hackett et al. [141] lists 38 VA systems transforming a volume-rendered image, users may select a cutting 1466

1415 using stereoscopic viewing (mostly with shutter glasses) or au- plane to cut in the volume data and reveal the clipping surface. 1467

1416 tostereoscopic viewing where the stereo rendering is perceived Although the system is described as a tool for diagnosis and treat- 1468

1417 without glasses. The learners reported a number of perceived ad- ment planning, it would be clearly useful in teaching. 1469

1418 vantages related to the understanding of spatial structures, but also 3D Puzzle in VR. The idea of a 3D puzzle for anatomy educa- 1470

1419 state problems related to perception conflicts and artifacts that tion was also explored by Messier et al. [148] who compared the 1471

1420 are more severe with autostereoscopic displays [142]. In reality, use of a puzzle on a 2D monitor, on a stereo monitor and with 1472

1421 the pupil’s accommodation length coincides with the focal plane the Oculus Rift. In a recent master thesis, the idea of a 3D puz- 1473

1422 where everything is seen sharply. This coincidence is violated zle for learning anatomy (recall [121]) was transformed in a VR 1474

1423 with stereoscopic displays. The larger the difference between these system with HTC Vive [149]. The student can choose between sev- 1475

1424 lengths, the more disturbing is this perceptual conflict. Complex eral anatomical structures, and scale them freely from realistic to 1476

1425 vascular structures are among the anatomical structures where the large scales. Beside assembling a puzzle (see Fig. 20), the VR puz- 1477

1426 benefit of stereo perception is particularly large [143]. zle allows the disassembly in a specific order, which mimics the 1478

1427 Luursema et al. [144] found that low spatial ability learners dissection course. The additional depth cues of the fully immersive 1479

1428 benefit stronger from stereoscopic displays. These results correlate environment support the spatial understanding and learning. 1480

1429 to a reduced perceived cognitive load compared to a display with- Cinematic rendering in VR. Cinematic rendering (recall 1481

1430 out stereo. Brown et al. [145] visualized CT data and interactive la- Section 4.2) was recently used in a 3D projection space for 1482

1431 bels as 3D stereoscopic images. A study with 183 medical students an audience of up to 150 people [58]. Users wear 3D shutter 1483

1432 showed that the 3D system aided their understanding of anatomy. glasses to get a stereo impression and may control the display 1484

1433 Weber et al. [146] used real-life photos of dissected brains to cre- with a game controller. This is an impressive setting for virtual 1485

1434 ate stereoscopic content. These were compared with static images, anatomy. However, the hardware requirements to render the 1486

1435 interactive 3D models, and interactive stereoscopic models. Their high-resolution images efficiently are huge. 1487

1436 study showed significantly higher scores for the interactive groups.
1437 However, no difference was shown for the stereoscopic and non- 5
https://www.unibas.ch/en/News-Events/News/Uni-Research/
1438 stereoscopic group. Virtual- Reality- in- Medicine.html

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 17

• knowledge gain, i.e. whether learners indeed acquire new 1533


knowledge, 1534
• usability, i.e. whether a VA system is easy to learn and efficient 1535
to use, and 1536
• behavioral change, i.e. whether the use of educational software 1537
changes the way learners practice medicine. 1538

Although the last category is most important, evaluations of VA 1539


systems focus on knowledge gain. Behavioral change is just too dif- 1540
ficult to assess in anatomy teaching, since students are far from ac- 1541
tually practicing medicine. Often, the gain of knowledge achieved 1542
with a VA system is compared to another conventional learning 1543
scheme. We use the term intervention group for the group of learn- 1544
ers who employed the new technique and the term control group 1545
for the other group. Some studies also report on the time learn- 1546
ers take to complete a test. We mention time differences of major 1547
Fig. 20. The anatomy of the skull is learned in a VR environment by assembling
studies, but consider time as less important compared to the gain 1548
the skull from an unsolved (left) to a solved (right) state (Inspired by: [149]).
of knowledge. A high usability is largely a prerequisite for accept- 1549
ability and knowledge acquisition. However, a VA system may be 1550
1488 Cooperative VR systems. Cooperative systems come in two fla- very usable and engage learners but does not lead to a substan- 1551
1489 vors: co-located cooperative VR systems enable that different users tial knowledge gain. In most evaluations, usability is not explicitly 1552
1490 inspect a virtual environment at the same place, e.g. in a CAVE or discussed and therefore also omitted in our survey. 1553
1491 dome projection, whereas remote cooperative VR systems allow dis- In addition to the measureable knowledge gain, the subjective 1554
1492 tantly located students and teachers to learn together in a “shared assessment of the learners is considered. Do they trust a VA sys- 1555
1493 reality” [150]. In co-located collaborative VR, a group of users are tem to provide an efficient way to learn anatomy? This and related 1556
1494 in a VR room, seeing and thus being aware of each other. As an ex- questions are clearly essential. 1557
1495 ample, Fairen et al. [101], introduced a VR system for anatomy ed- Systematic and in-depth evaluations of VA systems have 1558
1496 ucation that was run on a Power wall and a in a CAVE (a four-sided been primarily conducted in the last decade. While Nicholson 1559
1497 CAVE where virtual anatomy is projected on four walls. In both et al. [3] only found four randomized controlled studies before 1560
1498 settings, small groups of up to ten students could follow a VR ex- 2005 in a systematic search, Azer and Azer in 2016 [2] reported of 1561
1499 ploration of complex anatomical structures, such as the heart with 30 evaluation studies. This strong increase is likely due to availabil- 1562
1500 ventricles, atriums and valves. One student is tracked and guides ity of comprehensive commercial systems that enable anatomists 1563
1501 the inspection whereas the others are more passive. This system to carry out such studies without having to build models and sys- 1564
1502 is quite promising. However, the hardware is very expensive and tems themselves. 1565
1503 typically not available in medical faculties. Thus, the use of such a
1504 system beyond a research project is difficult to accomplish. In re- 10.1. Evaluation strategies 1566
1505 mote collaboration, cheaper VR glasses may be used. Special care
1506 must be taken by representing the students and teachers in the After introducing virtual anatomy systems in Sections 7–9 we 1567
1507 virtual environment. Avatars can be used to mimic movement, in- briefly discuss selected evaluation studies in the following. We put 1568
1508 teractions and looking directions of the users. A teacher can ben- emphasis on evaluation criteria and strategies. 1569
1509 efit from knowing the location of all students. However, it can be Assessment of knowledge gain. All evaluations described in the 1570
1510 cluttering and distracting for students to know where the fellow following employ a pre-test to analyze the knowledge available be- 1571
1511 students are. fore using the VA or an alternative system. Multiple choice ques- 1572
1512 There exist several possibilities for remote collaborative set- tions are frequently used, but some authors use more open inter- 1573
1513 tings, such as group instructions or one-on-one scenarios. These views, video tape them, and let a group of anatomical instructors 1574
1514 settings differ in effectiveness of learning, applicability and costs. assess them later. A post-test is performed ideally in the same way 1575
1515 Moorman [151] showed a one-on-one scenario where an anatomist and the difference between the two test results is the knowledge 1576
1516 in her office supports a learner via video conferencing. A recent gain. The questions in the pre- and post-test should be focused 1577
1517 master thesis uses a one-on-one tutoring system to teach the hu- on the spatial aspects of anatomy. Various questionnaires were de- 1578
1518 man skull base [152]. Here, an asymmetric hardware setup is used. signed for this purpose. In addition to verbal questions related to 1579
1519 The teacher uses a zSpace to be able to see the surroundings and the shape and position of objects, drawings may be used. As an 1580
1520 easily input text via a keyboard. The student uses a HTC Vive example Nguyen et al. [25] show for several 3D models a horizon- 1581
1521 and explores a scaled up skull from inside. For collaboration, the tal line and ask learners to select from a series of cross-sectional 1582
1522 teacher may set landmarks and sketch 3D annotations in front of images the one that fits to the depicted line. Another type of ques- 1583
1523 the student or reposition the student to direct her attention to tions shows again 3D models now with several horizontal lines 1584
1524 something. Additionally, the student’s first person view is shared together with a cross-sectional image. Learners should answer to 1585
1525 with the teacher. which line the cross-sectional image fits [25]. 1586
The role of spatial ability. Since not all learners are likely to 1587
1526 10. Evaluation of VA systems
benefit in the same way, their spatial ability is assessed typi- 1588
cally based on the mental rotation test (MRT) according to Van- 1589
1527 In this section, we will discuss some general principles how VA
denberg and Kuse [153]. It may be used to correlate knowledge 1590
1528 systems are evaluated and continue with selected examples. There
gain to MRT results or to classify learners in low and high spatial 1591
1529 are four typical categories of evaluation in the area of educational
ability learners to be analyzed separately. As an example, Nguyen 1592
1530 software [17]:
et al. [25] showed that high spatial ability learners using animated 1593
1531 • preference, i.e. whether learners prefer the e-learning system rotations solved spatial anatomy tasks significantly faster and 1594
1532 compared to other learning resources, more correctly than low spatial ability learners [25]. While most 1595

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

18 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

1596 studies consider the interaction between spatial ability and knowl- viewpoints actually taken by the learners: They spent much more 1657
1597 edge gain, there are other learner-related features that influence time close to the key viewpoints. An interesting conjecture of Garg 1658
1598 the results [2,154]: et al. [156] is that anatomy education benefits from 3D models, 1659
probably more from physical models than virtual ones. 1660
1599 • the experience of students with 3D applications, such as games,
3D Puzzle. The 3D Puzzle (recall [121]) was evaluated with 16 1661
1600 • the experience with graphics modeling,
participants (students of physiotherapy with already existing ba- 1662
1601 • familiarity with computers, and
sic anatomic skills) [123]. To understand the effect of the puzzle 1663
1602 • active experience with painting and sculpting.
on the knowledge gain, two systems were used for learning. Both 1664
1603 Thus, evaluations of VA systems should analyze individual differ- employ exactly the same geometric models of the foot anatomy 1665
1604 ences and relate them to the performance of learners. comprising 28 bones, 11 muscles and 14 ligaments. Both systems 1666
1605 Integration in the curriculum. An essential aspect of any evalua- also contain the same textual information. They differ only in the 1667
1606 tion of a VA system is whether and how it is integrated in the cur- support for the 3D puzzle. 1668
1607 riculum (recall the discussion of blended learning in Section 2.1) . In a between-subject study the knowledge gain was evaluated 1669
1608 What do the students know already? Is the use of the VA system for the two groups of 8 students. The pre-test anatomic knowl- 1670
1609 guided by anatomy teachers? Do anatomy teachers provide feed- edge was on average 52% of all questions and increased by 11 % 1671
1610 back or may answer questions related to the use of a VA systems? in the intervention group and only by 8% in the control group. 1672
1611 Azer and Azer [2] require that evaluations should be explicit about This difference was observed for all questions that relate to spatial 1673
1612 these questions. knowledge. The participants took part in a subset of an intelligence 1674
test focused on mental rotation, figurative classification and recog- 1675
1613 10.2. Overview nition. The differences in the post-test knowledge cannot be ex- 1676
plained by different spatial abilities. Students in the puzzle group 1677
1614 A structured overview of evaluation studies for VA systems is were also more confident in their abilities. They liked to use the 1678
1615 given by Yammine et al. [154]. They identified 36 high-quality eval- 3D mouse for model rotation although it took them some time to 1679
1616 uations (peer-reviewed full papers with convincing study design become familiar with it. To improve the system, it was suggested 1680
1617 and a high number of participants). Most studies were random- that the correct solution of the puzzle should be shown first before 1681
1618 ized, that is neither teachers nor the learners influenced the deci- the students solve the task. 1682
1619 sion whether a VA system was used or alternative non-interactive Use of supportive handles. In Section 6 we briefly mentioned 1683
1620 resources. This randomization avoids a selection bias; otherwise it the idea of providing orientation handles to support rotation of 1684
1621 is likely that students voluntarily using VA systems differ in their anatomical structures (recall [15] and Fig. 12). The handles repre- 1685
1622 capabilities from other students. sent a coordinate system for the selected object and can be based 1686
1623 Almost all systems that were evaluated focus on a limited (re- either on anatomical knowledge or on an analysis of the object 1687
1624 gional) aspect of anatomy. Frequently, they were focussed on neu- shape with a principal component analysis. For testing the effec- 1688
1625 roanatomy, vascular anatomy, the anatomy of the middle ear, the tiveness of these models, Stull et al. [15] assembled a number of 1689
1626 pelvis, or the abdominal area. difficult tasks that require the users to rotate the object in all di- 1690
rections until the view matches a given target view. The tasks are 1691
1627 10.3. Selected evaluations designed to test whether students can recognize anatomical struc- 1692
tures from different directions. 75 participants were assigned to an 1693
1628 The selected examples are described in a chronological order. intervention group (38 participants, among them 17 with high and 1694
1629 All evaluations are designed as comparisons between a VA sys- 21 with low spatial ability) and a control group without orienta- 1695
1630 tem and more classic form of teaching, either conventional learn- tion handles (in total 37 students among them 19 with high and 1696
1631 ing or a VA system with reduced functions. The major motiva- 18 with low spatial ability). Learners could use a 3D input device 1697
1632 tion to include an evaluation is, whether new and relevant evalua- which eases 3D rotation tasks compared to mouse input or touch 1698
1633 tion methods were employed. We focus also on these new aspects gestures on the screen. 1699
1634 which may lead to the wrong impression that evaluation methods The results indicate that both low and high spatial ability learn- 1700
1635 strongly differ. ers benefit from the supportive handles, whereas low spatial abil- 1701
1636 Carpal bone anatomy. In a series of publications Garg ity learners benefit stronger. In other words, the difference be- 1702
1637 et al. [155,156] analyzed VA systems focussed on the carpal bone tween low and high spatial ability learners gets reduced, proba- 1703
1638 anatomy. In the first study, Garg et al. found no advantage of pre- bly because the cognitive load for low spatial ability learners was 1704
1639 senting the carpal bone in multiple views (15 degree steps) com- strongly reduced. At the same time, students in the intervention 1705
1640 pared to presenting three key viewpoints only. group completed the tasks even faster. The handles obviously en- 1706
1641 In a second experiment [156], the intervention group could con- abled them to rotate an object more goal-directed. 1707
1642 trol multiple views of the carpal bone, whereas the control saw Virtual ear anatomy. We briefly discussed the evaluation of a vir- 1708
1643 only key views. The questions were divided in two groups: one tual ear anatomy system (recall [3]). Here, we add some essential 1709
1644 half relates to questions where key viewpoints are sufficient to an- facts. 57 students took part in the evaluation divided in an inter- 1710
1645 swer them, whereas the other half requires to understand unfa- vention and a control group that used a web tutorial for learning 1711
1646 miliar views that strongly differ from these key viewpoints. High without any interactive 3D model. The authors aimed at 60 par- 1712
1647 spatial ability learners (determined with the mental rotation test ticipants, since this would enable them to identify significant dif- 1713
1648 [153]) in the intervention group improved their spatial understand- ferences (at least two points in a quiz with 15 questions) with a 1714
1649 ing according to both types of questions. Low spatial ability learn- 95% level. The resulting difference between the intervention group 1715
1650 ers could not benefit. This result was later theoretically explained (mean score 12.5/15) and the control group (9.8/15) was indeed 1716
1651 with the connection between mental rotation tasks and manual significant. Learners in the intervention group were not only better, 1717
1652 rotation. Low spatial ability learners are less effective using man- but also faster taking only 16 min. to complete the quiz compared 1718
1653 ual rotation of a virtual object. Garg et al. argued that key view- to 21 min. in the control group. 1719
1654 points are indeed important but the active control of the viewing EVA evaluation. In Section 8, we described the EVA system (re- 1720
1655 direction in particular close to the key viewpoints adds a sense call [135]) which was carefully evaluated. The questions were cho- 1721
1656 of the third dimension. This is also based on an analysis of the sen such that unusual views, not present in an anatomy atlas, 1722

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 19

1723 are essential to answer them. In the knowledge assessment part, only a surrogate for the actual imagination. Thus many authors, e.g. 1787
1724 90 second semester students (41 using EVA, 49 using alternatives Nicholson et al. [3] require studies that analyze the external valid- 1788
1725 resources) and 77 fifth semester students (51 using EVA, 26 us- ity of such results, for example by analyzing the resulting compe- 1789
1726 ing alternatives) were compared. The mean score for the second tence in real cadaver dissection tasks. More research is necessary 1790
1727 semester students was significantly higher in the EVA group (mean to better understand how to optimally integrate VA systems in the 1791
1728 score 13.5 vs. 11.1), where as in the fifth semester group the scores overall process of anatomy teaching. This research has to consider 1792
1729 were almost the same and no significant difference was observed. constraints, e.g. some institutions do not provide any cadaver dis- 1793
1730 In addition to this quantitative assessment, students’ prefer- section at all. This situation likely changes the optimal use of a VA 1794
1731 ences were analyzed w.r.t. a comparison of EVA to anatomy text system. 1795
1732 books, anatomy lectures, and dissections as the major conventional
1733 forms of education. On average, EVA was considered as similar. 11. Concluding remarks 1796
1734 EVA was assessed as slightly better compared to text books (16/37
1735 considered it as equal and 12/37 as better). Also in comparison to Many resources are available for students of medicine or related 1797
1736 lectures, the largest group (14/37) considers EVA as equal. Only subjects for anatomy learning. Virtual anatomy systems with their 1798
1737 compared to dissections, 19 students assessed EVA as worse and emphasis on exploring shapes and spatial relations play an es- 1799
1738 only 7 as better. The individual differences are interesting: some sential role. Powerful visualizations that support shape and depth 1800
1739 students do not prefer the interactive visualization and may also perception, labeling and interaction techniques are the technical 1801
1740 perform worse with them, whereas another group considers the ingredients of virtual anatomy systems. Learning scenarios to be 1802
1741 VA system as the best teaching resource. The authors stated the supported, learning and motivational strategies are the educational 1803
1742 limitations of their work: EVA was not integrated in the curricu- foundations. A large diversity of software libraries was used to 1804
1743 lum, the QuickTime plugin was not easily available for some stu- create virtual anatomy systems. Currently, game engines, such as 1805
1744 dents and there was no reward for using the software. In partic- Unity, that support a wide range of input devices and interaction 1806
1745 ular, it was not clear whether the case of the software is directly techniques, may be considered and were already used for a num- 1807
1746 relevant for the examinations. The integration of EVA in a PBL sce- ber of VA systems, e.g. [25]. 1808
1747 nario, was considered the most important extension. In the scientific literature, there is no in-depth analysis of re- 1809
1748 Evaluation of a VR system for anatomy teaching. In Section 9 we sources that are actually used by students, how they integrate the 1810
1749 briefly discussed a system for co-located collaborative VR. This sys- use of these resources with more traditional learning and which 1811
1750 tem was used for a comprehensive evaluation with 254 partici- exploration features are particularly desired. Most virtual anatomy 1812
1751 pants (students of nursing). The participants represented two full systems are not carefully integrated in the curriculum; their use 1813
1752 cohorts, i.e. there is no selection bias where only students with is optional and the relevance of using them is not clear. The wide 1814
1753 high affinity for new technology voluntarily took part. The students availability of VA systems indicates that at least a substantial por- 1815
1754 took a two hour lecture; one hour in a CAVE and another hour us- tion of students employs them. 1816
1755 ing the Powerwall to experience in total ten models, all represent- Outlook. The information space provided in VA systems is often 1817
1756 ing complex spatial structures, such as the heart, the eye, and the limited to 3D models and related text. Diagrams, photos of dissec- 1818
1757 ear. The knowledge gain, unfortunately, was not directly assessed. tions and video clips may be added and need to be carefully inte- 1819
1758 Instead, the students were asked whether they consider learning grated. VA systems support explorative usage typically with only 1820
1759 with the CAVE and the Powerwall appropriate and motivating and limited guidance. Based on experiences in many e-learning sys- 1821
1760 they had the impression that they improved their knowledge. On tems, powerful search and retrieval facilities need to be added. VA 1822
1761 a scale from one to ten, the average answers are between 7.3 and systems may have a long life-time and thus need to enable easy 1823
1762 8.8. An interesting detail was that students taking part in the ex- modification and extension of the content. This means, that the 1824
1763 periment in the morning consistently rated the VR system better content is clearly separated from the software to display and navi- 1825
1764 (on average by 0.3) compared to those using it in the afternoon. gate within that content. 1826
We have seen that few systems support variational and func- 1827
1765 10.4. Discussion tional anatomy. A strong demand for future research in these areas 1828
exists. The usefulness of virtual anatomy resources depends on the 1829
1766 We discussed selected studies that showed a positive effect of learner, e.g. her spatial ability. It may be investigated whether VA 1830
1767 virtual anatomy. Indeed, most of the recent studies had this posi- systems may adapt themselves to these abilities. 1831
1768 tive effect. This may be due to the fact that the underlying models True virtual reality systems may enhance anatomy education, 1832
1769 are meanwhile more realistic and the amount of interactivity in- but are in its infancy. Considerable research is needed to explore 1833
1770 creased. We also mentioned a study that showed no significant dif- the design space more systematically and evaluate possible so- 1834
1771 ference in the learning effectiveness between the virtual anatomy lutions w.r.t. motivational aspects, and problems, such as motion 1835
1772 group and a control group, e.g. [155]. To summarize the findings sickness. Currently, a number of obstacles restrict the use of such 1836
1773 from evaluations within anatomy, it is very likely that for com- systems. Since the development of VR glasses w.r.t. spatial resolu- 1837
1774 plex spatial regions and systems, such as the ear and the vascular tion, field of view and comfort is still very rapid, it is expected that 1838
1775 system, interactive virtual anatomy systems based on very detailed many opportunities arise for VR-based anatomy education. 1839
1776 anatomic models favor learning and deep understanding of spatial There is a synergetic effect between patient education and 1840
1777 relations. However, not all students will benefit in the same man- anatomy education. As an example, the system by Saalfeld et al. 1841
1778 ner and only a test with emphasis on spatial knowledge reveals the [112] was originally intended to inform patients about their vas- 1842
1779 advantage. In summary, “evidence shows that learning anatomy us- cular disease and possible treatment options. More effort should 1843
1780 ing computer-based learning can enhance learning by supplement- be spent to directly support patient education as well, including 1844
1781 ing rather than replacing the traditional teaching methods” [157]. substantial evaluations of their effect. A fresh new idea that may 1845
1782 While there is an impressive number of studies assessing the be beneficial for patient education was presented by Stoppel et al. 1846
1783 general effects of interactive 3D visualization, the specific value of [158] who produce hardcopies of volume visualizations and pre- 1847
1784 techniques, such as clipping, virtual dissection endoscopic views, serve a certain degree of interactivity. 1848
1785 and focus-context views was not assessed. As a limitation of al- An integration of realistic haptic feedback that conveys the dif- 1849
1786 most all evaluations, the knowledge gain is assessed in a test is ferent mechanical properties of soft tissue, vascular structures and 1850

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

20 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

1851 bones would be a highly useful addition to VA systems. Several [23] Phillips AW, Smith S, Straus C. The role of radiology in preclinical anatomy: a 1927
1852 studies indicated that cadaver dissection is still perceived as more critical review of the past, present, and future. Acad Radiol 2013;20(3):297– 1928
304. 1929
1853 important than VA-based learning. This is likely due to the haptic [24] Phillips AW, Smith SG, Ross CF, Straus CM. Direct correlation of radiologic and 1930
1854 interaction. While haptics was used in surgery simulation systems, cadaveric structures in a gross anatomy course. Med Teach 2012;34(12):e779– 1931
1855 anatomy education is based on different learning goals and con- 84. 1932
[25] Nguyen N, Nelson AJ, Wilson TD. Computer visualizations: factors that influ- 1933
1856 straints. VA systems need to be cheaper due to their larger audi- ence spatial anatomy comprehension. Anat Sci Educ 2012;5(2):98–108. 1934
1857 ence. Thus, considerable effort is necessary to adapt haptic feed- [26] Pommert A, Höhne KH, Pflesser B, Richter E, Riemer M, Schiemann T, et al. 1935
1858 back to VA systems. Creating a high-resolution spatial/symbolic model of the inner organs based 1936
on the Visible Human. Med Image Anal 2001;5(3):221–8. 1937
[27] Kraima A, Smit N, Jansma D, Wallner C, Bleys R, van de Velde C, et al. Toward 1938
1859 Acknowledgments a highly-detailed 3D pelvic model: approaching an ultra-specific level for sur- 1939
gical simulation and anatomical education. Clin Anat 1996;26(3):333–8. 1940
1860 This work was partially funded by the German Federal Ministry [28] Brinkley JF, Rosse C. The digital anatomist distributed framework and its 1941
applications to knowledge based medical imaging. J Am Med Inf Assoc 1942
1861 for Economic Affairs and Energy (grant number ’ZF4028201BZ5’). 1997;4(3):165–83. 1943
1862 We thank the Virtual Anatomy team (Wolfgang D’Hanis, Lars Dorn- [29] Kelc R. Zygote body: a new interactive 3-dimensional didactical tool for 1944 Q3
1863 heim, Kerstin Kellermann, Alexandra Kohrmann, Philipp Pohlenz, teaching anatomy. WebmedCentral ANATOMY 2012;3(1). 1945
[30] Qualter J, Sculli F, Oliker A, Napier Z, Lee S, Garcia J, et al. The biodigital 1946
1864 Thomas Roskoden and Hermann-Josef Rothkötter) for many discus- human: A web-based 3D platform for medical visualization and education. 1947
1865 sions about anatomy teaching, Nigel John, Benjamin Köhler, Maria Stud Health Technol Inf 2012;173:359–61. 1948
1866 Luz, Timo Ropinski and Noeska Smit for fruitful discussions on the [31] Spitzer VM, Ackerman MJ, Scherzinger AL, Whitlock D. The visible human 1949
male: a technical report. J Am Med Inform Assoc 1996;3(2):118–30. 1950
1867 manuscript as well as Steven Birr, Konrad Mühler, Daniel Pohlandt, [32] Höhne KH, Petersik A, Pflesser B, Pommert A, Priesmeyer K, Riemer M, et al. 1951
Q2
1868 Felix Ritter and Anna Schmeier for their implementations. VOXEL-MAN 3D navigator: brain and skull. regional, functional and radiolog- 1952
ical anatomy. Heidelberg: Springer Electronic Media; 2001. 1953
1869 References [33] Höhne KH, Pflesser B, Pommert A, Priesmeyer K, Riemer M, Schiemann T, 1954
et al. VOXEL-MAN 3D navigator: inner organs. Regional, systemic and radi- 1955
1870 [1] Brenton H, Hernandez J, Bello F, Strutton P, Purkayastha S, Firth T, et al. ological anatomy. Heidelberg: Springer Electronic Media; 2003. 1956
1871 Using multimedia and web3D to enhance anatomy teaching. Comput Educ [34] Jastrow H, Vollrath L. Anatomy online: Presentation of a detailed WWW at- 1957
1872 2007;49(1):32–53. las of human gross anatomy - reference for medical education. Clin Anat 1958
1873 [2] Azer SA, Azer S. 3D anatomy models and impact on learning: A review of the 2002;15(6):402–8. 1959
1874 quality of the literature. Health Prof Educ 2016;2(2):80–98. [35] Hoffman H, Murray M, Curlee R, Fritchle A. Anatomic visualizer: teaching 1960
1875 [3] Nicholson DT, Chalk C, Funnell W, Daniel S. Can virtual reality improve and learning anatomy with virtual reality. In: Information Technologies in 1961
1876 anatomy education? A randomised controlled study of a computer-generated Medicine: Medical Simulation and Education, Volume I. John Wiley & Sons, 1962
1877 three-dimensional anatomical ear model.. Med Educ 2006;40(11):1081–7. Inc.; 2001. p. 205–18. 1963
1878 [4] Höhne KH, Pflesser B, Pommert A, Priesmeyer K, Riemer M, Schiemann T, [36] Garner BA, Pandy MG. Musculoskeletal model of the upper limb based on 1964
1879 et al. VOXEL-MAN 3D-Navigator: inner organs: regional, systemic and radi- the visible human male dataset. Comput Methods Biomech Biomed Eng 1965
1880 ological anatomy. Springer; 20 0 0. 2001;4(2):93–126. 1966
1881 [5] Tiede U, Schiemann T, Höhne KH. High quality rendering of attributed volume [37] Juanes J, Prats A, Lagándara M, Riesco J. Application of the visible human 1967
1882 data. In: Proceedings of the visualization; 1998. p. 255–62. project in the field of anatomy: a review. Eur J Anat 2003;7(3):147–60. 1968
1883 [6] Pflesser B, Tiede U, Höhne KH. Specification, modeling and visualization of [38] Tiede U, Schiemann T, Höhne KH. Visualization blackboard: visualizing the 1969
1884 arbitrarily shaped cut surfaces in the volume model. In: Proceedings of the visible human. IEEE CG&A 1996;16(1):7–9. 1970
1885 MICCAI; 1998. p. 853–60. [39] Park JS, Chung MS, Hwang SB, Shin BS, Park HS. Visible korean human: its 1971
1886 [7] Preim B, Botha C. Visual computing for medicine. Morgan Kaufman; 2013. techniques and applications. Clin Anat 2006;19(3):216–24. 1972
1887 [8] Preim B, Bartz D. Visualization in medicine: theory, algorithms, and applica- [40] Smit NN, Hofstede CW, Kraima AC, Jansma D, Deruiter MC, Eisemann E, et al. 1973
1888 tions. Morgan Kaufmann; 2007. The online anatomical human: web-based anatomy education. In: Proceed- 1974
1889 [9] Meng M, Fallavollita P, Blum T, Eck U, Sandor C, Weidert S, et al. Kinect for in- ings of the Eurographics - Education Papers; 2016a. p. 37–40. 1975
1890 teractive AR anatomy learning. In: Proceedings of the Mixed and Augmented [41] Zhang SX, Heng PA, Liu ZJ. Chinese visible human project. Clin Anat 1976
1891 Reality (ISMAR); 2013. p. 277–8. 2006;19(3):204–15. 1977
1892 [10] Stefan P, Wucherer P, Oyamada Y, Ma M, Schoch A, Kanegae M, et al. An AR [42] Wu Y, Tan LW, Li Y, Fang BJ, Xie B, Wu TN, et al. Creation of a female and 1978
1893 edutainment system supporting bone anatomy learning. In: Proceedings of male segmentation dataset based on chinese visible human (CVH). Comput 1979
1894 the IEEE Virtual Reality; 2014. p. 113–14. Med Imaging Gr 2012;36(4):336–42. 1980
1895 [11] Chen L, Day T, Tang W, John NW. Recent developments and future challenges [43] Spitzer VM, Ackerman MJ. The visible human at the university of colorado 15 1981
1896 in medical mixed reality. In: Proceedings of the ISMAR; 2017. years later. Virtual Reality 2008;12(4):191–200. 1982
1897 [12] McMenamin P, Quayle M, McHenry C, Adams J. The production of anatomical [44] Höhne KH, Bomans M, Riemer M, Schubert R, Tiede U, Lierse W. A 3D 1983
1898 teaching resources using three-dimensional (3D) printing technology. Anat Sci anatomical atlas based on a volume model. IEEE CG&A 1992a;12:72–8. 1984
1899 Educ 2014;7(6):479–86. [45] Schiemann T, Tiede U, Höhne KH. Segmentation of the visible human for high 1985
1900 [13] Lim K, Loo Z, Goldie S, Adams J, Mcmenamin P. Use of 3D printed models quality volume based visualization. Med Image Anal 1997;1:263–71. 1986
1901 in medical education: a randomized control trial comparing 3D prints ver- [46] Azkue JJ. A digital tool for three-dimensional visualization and annotation in 1987
1902 sus cadaveric materials for learning external cardiac anatomy. Anat Sci Educ Anatomy and Embryology learning. Eur J Anat 2013;17(3):146–54. 1988
1903 2015;9(3):213–21. [47] Gibson SFF. Constrained elastic surface nets: generating smooth surfaces from 1989
1904 [14] Jastrow H, Hollinderbäumer A. On the use and value of new media and how binary segmented data. In: Proceedings of the MICCAI; 1998. p. 888–98. 1990
1905 medical students assess their effectiveness in learning anatomy. Anat Rec Part [48] Preim B, Oeltze S. 3D visualization of vasculature: an overview. In: Visualiza- 1991
1906 B 2004;280B(1):20–9. tion in Medicine and Life Sciences. Springer Berlin Heidelberg; 2008. p. 19– 1992
1907 [15] Stull A, Hegarty M, Mayer R. Getting a handle on learning anatomy with in- 39. 1993
1908 teractive three-dimensional graphics. J Educ Psychol 2009;101(4):803–16. [49] Saalfeld P, Glaßer S, Beuing O, Preim B. The faust framework: Free-form an- 1994
1909 [16] Chittaro L, Ranon R. Web3D technologies in learning, education and training: notations on unfolding vascular structures for treatment planning. Comput Gr 1995
1910 motivations, issues, opportunities. Comput Educ 2007;49(1):3–18. 2017;65:12–21. 1996
1911 [17] Dev P, Hoffer EP, Barnett GO. Computers in medical education. In: Medical [50] Gasteiger R, Tietjen C, Baer A, Preim B. Curvature- and model-based surface 1997
1912 informatics. Springer; 2001. p. 610–37. hatching of anatomical structures derived from clinical volume datasets. In: 1998
1913 [18] Kosslyn SM, Thompson WL, Wrage M. Imaging rotation by endoge- Proceedings of the smart graphics. Springer; 2008. p. 255–62. 1999
1914 neous versus exogeneous forces: distinct neural mechanisms. NeuroReport [51] Vidal FP, Garnier M, Freud N, Létang J, John NW. Simulation of x-ray attenu- 20 0 0
1915 2001;12(11):2519–25. ation on the GPU. In: Proceedings of the EG UK theory and practice of com- 2001
1916 [19] Jang S, Vitale JM, Jyung RW, Black JB. Direct manipulation is better than pas- puter graphics; 2009. p. 25–32. 2002
1917 sive viewing for learning anatomy in a three-dimensional virtual reality envi- [52] Kindlmann GL, Whitaker RT, Tasdizen T, Möller T. Curvature-based transfer 2003
1918 ronment. Comput Educ 2017;106:150–65. functions for direct volume rendering: Methods and applications. In: Proceed- 2004
1919 [20] Huang HM, Rauch U, Liaw SS. Investigating learners’ attitudes toward virtual ings of the visualization; 2003. p. 513–20. 2005
1920 reality learning environments: based on a constructivist approach. Comput [53] Hadwiger M, Sigg C, Scharsach H, Bühler K, Gross MH. Real-time ray- 2006
1921 Educ 2010;55(3):1171–82. casting and advanced shading of discrete isosurfaces. Comput Gr Forum 2007
1922 [21] Murray R, Stewart I. Modelling the somatic peripheral nervous system. In: 2005;24(3):303–12. 2008
1923 Proceedings of the anniversary meeting of the anatomical society; 2012. [54] Hernell F, Ljung P, Ynnerman A. Local ambient occlusion in direct volume 2009
1924 [22] Ma M., Bale K., Rea P.. Constructionist learning in anatomy education. In: Pro- rendering. IEEE Trans Vis Comput Gr 2010;16(4):548–59. 2010
1925 ceedings of the international conference on serious games development and [55] ap Cenydd L, John NW, Bloj M, Walter A, Phillips NI. Visualizing the surface 2011
1926 applications,Springer; 2012, 43–58. of a living human brain. IEEE CG&A 2012;32(2):55–65. 2012

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx 21

2013 [56] Kroes T, Post FH, Botha C. Exposure render: an interactive photo-realistic vol- [88] Rosse C, Mejino JLV. Anatomy ontologies for bioinformatics: principles and 2099
2014 ume rendering framework. PlOS ONE 2012;7(7):e38586. practice. Springer London; 2008. p. 59–117. The Foundational Model of 2100
2015 [57] Fellner FA. Introducing cinematic rendering: a novel technique for post- Anatomy Ontology 2101
2016 processing medical imaging data. J Biomed Sci Eng 2016;10(8):170–5. [89] Halle M, Demeusy V, Kikinis R. The open anatomy browser: a collaborative 2102
2017 [58] Fellner FA, Engel K, Kremer C. Virtual anatomy: the dissecting theatre of the web-based viewer for interoperable anatomy atlases. Front Neuroinf 2017;11. 2103
2018 future implementation of cinematic rendering in a large 8 k high-resolution [90] Preim B, Raab A, Strothotte T. Coherent zooming of illustrations with 3D- 2104
2019 projection environment. J Biomed Sci Eng 2017;10(8):367–75. graphics and text. In: Proceedings of the graphics interface; 1997. p. 105–13. 2105
2020 [59] Rezk-Salama C. GPU-based monte-carlo volume raycasting. In: Proceedings of [91] Götzelmann T, Ali K, Hartmann K, Strothotte T. Adaptive labeling for illustra- 2106
2021 the pacific graphics; 2007. p. 411–14. tions. In: Proceedings of the pacific graphics; 2005. p. 64–6. 2107
2022 [60] Lindemann F, Ropinski T. About the influence of illumination models on im- [92] Preim B, Raab A. Annotation von topographisch komplizierten 3D-modellen. 2108
2023 age comprehension in direct volume rendering. IEEE Trans Vis Comput Gr In: Proceedings of the simulation and visualization; 1998. p. 128–40. 2109
2024 2011;17(12):1922–31. [93] Hartmann K, Ali K, Strothotte T. Floating labels: applying dynamic potential 2110
2025 [61] Ghosh SK. Evolution of illustrations in anatomy: a study from the classical fields for label layout. In: Proceedings of the smart graphics; 2004. p. 101– 2111
2026 period in europe to modern times. Anat Sci Educ 2015;8(2):175–88. 113. 2112
2027 [62] Corl FM, Garland MR, Fishmann EK. Role of computer technology in medical [94] Hartmann K, Götzelmann T, Ali K, Strothotte T. Metrics for functional and 2113
2028 illustration. Am J Roentgenol 20 0 0;175:1519–24. aesthetic label layouts. In: Proceedings of the smart graphics; 2005. p. 115– 2114
2029 [63] Tietjen C, Isenberg T, Preim B. Combining silhouettes, surface, and volume 26. 2115
2030 rendering for surgery education and planning. In: Proceedings of the EuroVis; [95] Moreno R, Mayer RE. Cognitive principles of multimedia learning. J Educ Psy- 2116
2031 2005. p. 303–10. chol 1999;91:455–60. 2117
2032 [64] Baer A, Tietjen C, Bade R, Preim B. Hardware-accelerated stippling of surfaces [96] Preim B, Ritter A, Forsey DR, Bartram L, Pohle T, Strothotte T. Consistency of 2118
2033 derived from medical volume data. In: Proceedings of the EuroVis; 2007. rendered images and their textual labels. In: Proceedings of the compuGraph- 2119
2034 p. 235–42. ics; 1995. p. 201–10. 2120
2035 [65] Tietjen C, Pfisterer R, Baer A, Gasteiger R, Preim B. Hardware-accelerated il- [97] Holten D. Hierarchical edge bundles: visualization of adjacency relations in 2121
2036 lustrative medical surface visualization with extended shading maps. In: Pro- hierarchical data. IEEE Trans Vis Comput Gr 2006;12(5):741–8. 2122
2037 ceedings of the smart graphics; 2008. p. 166–77. [98] Ropinski T, Praßni J, Roters J, Hinrichs KH. Internal labels as shape cues for 2123
2038 [66] Lawonn K, Mönch T, Preim B. Streamlines for illustrative real-time rendering. medical illustration. In: Proceedings of the VMV; 2007. p. 203–12. 2124
2039 Comput Gr Forum 2013;32(3):321–30. [99] Mühler K, Preim B. Automatic textual annotation for surgical planning. In: 2125
2040 [67] Lichtenberg N, Smit N, Hansen C, Lawonn K. Sline: seamless line illustration Proceedings of the VMV; 2009. p. 277–84. 2126
2041 for interactive biomedical visualization. In: Proceedings of the VCBM; 2016. [100] Oeltze-Jafra S, Preim B. Survey of labeling techniques in medical visualiza- 2127
2042 p. 133–42. tions. In: Proceedings of the VCBM; 2014. p. 199–208. 2128
2043 [68] Hadwiger M, Berger C, Hauser H. High-quality two-level volume rendering of [101] Fair n M, Farrs M, Moys J, Insa E. Virtual reality to teach anatomy. In: Pro- 2129
2044 segmented data sets on consumer graphics hardware. In: Proceedings of the ceedings of the eurographics - education papers. The Eurographics Associa- 2130
2045 visualization; 2003. p. 301–8. tion; 2017. p. 51–8. 2131
2046 [69] Bruckner S, Gröller E. Volumeshop: an interactive system for direct volume [102] Berney S, Bétrancourt M, Molinari G, Hoyek N. How spatial abilities and dy- 2132
2047 illustration. In: Proceedings of the visualization; 2005. p. 671–8. namic visualizations interplay when learning functional anatomy with 3D 2133
2048 [70] Viola I, Feixas M, Sbert M, Gröller E. Importance-driven focus of attention. anatomical models. Anat Sci Educ 2015;8(5):452–62. 2134
2049 IEEE Trans Vis Comput Gr 2006;12(5):933–40. [103] Tan DS, Robertson GG, Czerwinski M. Exploring 3D navigation: combining 2135
2050 [71] Cole F, Sanik K, DeCarlo D, Finkelstein A, Funkhouser TA, Rusinkiewicz S, et al. speed-coupled flying with orbiting. In: Proceedings of the ACM SIGCHI con- 2136
2051 How well do line drawings depict shape? ACM Trans Gr 2009;28(3). ference on human factors in computing systems (CHI); 2001. p. 418–25. 2137
2052 [72] Vázquez P, Feixas M, Sbert M, Heidrich W. Viewpoint selection using view- [104] Konrad-Verse O, Littmann A, Preim B. Virtual resection with a deformable 2138
2053 point entropy. In: Proceedings of the VMV; 2001. p. 273–80. cutting plane. In: Proceedings of the simulation and visualization (SimVis); 2139
2054 [73] Vázquez P, Götzelmann T, Hartmann K, Nürnberger A. An interactive 3D 2004. p. 203–14. 2140
2055 framework for anatomical education. Int J Comput Assist Radiol Surg [105] Birkeland A, Bruckner S, Brambilla A, Viola I. Illustrative membrane clipping. 2141
2056 2008;3(6):511–24. Comput Gr Forum 2012;31(3):905–14. 2142
2057 [74] Jaffar AA. YouTube: An emerging tool in anatomy education. Anat Sci Educ [106] Birkeland A, Viola I. View-dependent peel-away visualization for volumetric 2143
2058 2012;5(3):158–64. data. In: Proceedings of the spring conference on computer graphics, SCCG; 2144
2059 [75] Mahmud W, Hyder O, Butt J, Aftab A. Dissection videos do not improve 2009. p. 121–8. 2145
2060 anatomy examination scores. Anat Sci Educ 2011;4(1):16–21. [107] Viega J, Conway MJ, Williams G, Pausch R. 3d magic lenses. In: Proceedings 2146
2061 [76] DiLullo C, Coughlin P, D’Angelo M, McGuinness M, Bandle J, Slotkin EM, of the UIST; 1996. p. 51–8. 2147
2062 et al. Anatomy in a new curriculum: facilitating the learning of gross [108] Petersik A, Pflesser B, Tiede U, Höhne KH, Leuwer R. Haptic volume inter- 2148
2063 anatomy using web access streaming dissection videos. J Vis Commun Med action with anatomic models at sub-voxel resolution. In: Proceedings of the 2149
2064 2006;29(3):99–108. symposium on haptic interfaces for virtual environment and teleoperator sys- 2150
2065 [77] Hoyek N, Collet C, Rienzo F, Almeida M, Guillot A. Effectiveness of three- tems; 2002. p. 66–72. 2151
2066 dimensional digital animation in teaching human anatomy in an authentic [109] Ynnerman A, Rydell T, Persson A, Ernvik A, Forsell C, Ljung P, et al. Multi- 2152
2067 classroom context. Anat Sci Educ 2014;7(6):430–7. touch table system for medical visualization. In: Proceedings of the euro- 2153
2068 [78] Tversky B, Morrison JB, Betrancourt M. Animation: can it facilitate? Int J graphics 2015 - Dirk Bartz Prize; 2015. p. 9–12. 2154
2069 Human-Comput Stud 2002;57(4):247–62. [110] Hochman JB, Unger B, Kraut J, Pisa J, Hombach-Klonisch S. Gesture-controlled 2155
2070 [79] Karp P, Feiner SK. Automated presentation planning of animation using task interactive three dimensional anatomy: a novel teaching tool in head and 2156
2071 decomposition with heuristic reasoning. In: Proceedings of the graphics in- neck surgery. J Otolaryngol-Head Neck Surg 2014;43(1):P135–6. 2157
2072 terface; 1993. p. 118–27. [111] Pihuit A, Cani MP, Palombi O. Sketch-based modeling of vascular systems: a 2158
2073 [80] Preim B, Ritter A, Strothotte T. Illustrating anatomic models: a semi- first step towards interactive teaching of anatomy. In: Proceedings of the EG 2159
2074 interactive approach. In: Proceedings of the visualization in biomedical com- sketch-based interfaces and modeling symposium; 2010. p. 151–8. 2160
2075 puting; 1996. p. 23–32. [112] Saalfeld P, Stojnic A, Preim B, Oeltze-Jafra S. Semi-immersive 3D sketching of 2161
2076 [81] Mühler K, Bade R, Preim B. Adaptive script-based design of animations for vascular structures for medical education. In: Proceedings of the VCBM; 2016. 2162
2077 medical education and therapy planning. In: Proceedings of the MICCAI. p. 123–32. 2163
2078 Springer; 2006. p. 478–85. [113] Heinrichs WL, Srivastava S, Dev P, Chase RA. LUCY: a 3-D pelvic model for 2164
2079 [82] Müller-Stich BP, Löb N, Wald D, Bruckner T, Meinzer HP, Kadmon M, et al. surgical simulation. J Am Assoc Gynecol Laparosc 2004;11(3):326–31. 2165
2080 Regular three-dimensional presentations improve in the identification of sur- [114] Höhne KH, Pommert A, Riemer M, Schiemann T, Schubert R, Tiede U, et al. 2166
2081 gical liver anatomy–a randomized study. BMC Med Educ 2013;13(1):131–8. Anatomical atlases based on volume visualization. In: Proceedings of the vi- 2167
2082 [83] Albrecht I, Haber J, Seidel HP. Construction and animation of anatomically sualization; 1992b. p. 115–23. 2168
2083 based human hand models. In: Proceedings of the ACM SIGGRAPH/EG sym- [115] Höhne KH, Pflesser B, Pommert A, Riemer M, Schiemann T, Schubert R, et al. 2169
2084 posium on computer animation; 2003. p. 98–109. A new representation of knowledge concerning human anatomy and function. 2170
2085 [84] Bauer A, Paclet F, Cahouet V, Dicko AH, Palombi O, Faure F, et al. Interactive Nat Med 1995;1(6):506–11. 2171
2086 visualization of muscle activity during limb movements: towards enhanced [116] Pommert A, Höhne KH, Burmester E, Gehrmann S, Leuwer R, Petersik A, et al. 2172
2087 anatomy learning. In: Proceedings of the VCBM; 2014. p. 191–8. Computer-based anatomy: a prerequisite for computer-assisted radiology and 2173
2088 [85] Kähler K. A head model with anatomical structure for facial modeling surgery. Acad Radiol 2006;13(1):104–12. 2174
2089 and animation. University of Saarbrücken, Naturwissenschaftliche-technische [117] Rosse C, Shapiro LG, Brinkley JF. The digital anatomist foundational model: 2175
2090 Fakultät I; 2003. Ph.D. Thesis. principles for defining and structuring its concept domain. In: Proceedings of 2176
2091 [86] Schubert R, Höhne KH, Pommert A, Riemer M, Schiemann T, Tiede U. Spa- the American medical informatics association fall symposium; 1998. p. 820– 2177
2092 tial knowledge representation for visualization of human anatomy and func- 4. 2178
2093 tion. In: Proceedings of the information processing in medical imaging; 1993. [118] Golland P, Kikinis R, Umans C, Halle M, Shenton M, Richolt J. Anatomy- 2179
2094 p. 168–81. browser: a framework for integration of medical information. In: Proceedings 2180
2095 [87] Kikinis R, Shenton ME, Iosifescu DV, McCarley RW, Saiviroonporn P, of the MICCAI; 1998. p. 720–31. 2181
2096 Hokama HH, et al. A digital brain atlas for surgical planning, model [119] Preim B, Michel R, Hartmann K, Strothotte T. Figure captions in visual in- 2182
2097 driven segmentation and teaching. IEEE Trans Vis Comput Gr 1996;2(3):232– terfaces. In: Proceedings of the advanced visual interfaces; 1998. p. 235– 2183
2098 241. 246. 2184

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005
JID: CAG
ARTICLE IN PRESS [m5G;January 25, 2018;9:57]

22 B. Preim, P. Saalfeld / Computers & Graphics xxx (2018) xxx–xxx

2185 [120] Pitt I, Preim B, Schlechtweg S. An evaluation of interaction techniques for the [139] Crossingham JL, Jenkinson J, Woolridge N, et al. Interpreting three- 2235
2186 exploration of 3D-illustrations. In: Proceedings of the software-ergonomie; dimensional structures from two-dimensional images: a web-based interac- 2236
2187 1999. p. 275–86. tive 3D teaching model of surgical liver anatomy. HPB 2009;11(6):523–8. 2237
2188 [121] Ritter F, Preim B, Deussen O, Strothotte T. Using a 3D puzzle as a metaphor [140] Lewis T, Burnett B, Tunstall R, Abrahams P. Complementing anatomy educa- 2238
2189 for learning spatial relations. In: Proceedings of the graphics interface. Mor- tion using three-dimensional anatomy mobile software applications on tablet 2239
2190 gan Kaufmann Publishers; 20 0 0. p. 171–8. computers. Clin Anat 2014;27(3):313–20. 2240
2191 [122] Wanger L, Ferwerda JA, Greenberg DP. Perceiving spatial relationships in [141] Hackett M, Proctor M. Three-dimensional display technologies for anatomical 2241
2192 computer-generated images. IEEE CG&A 1992;12(3):44–58. education: a literature review. J Sci Educ Technol 2016;25:641–54. 2242
2193 [123] Ritter F, Berendt B, Fischer B, Richter R, Preim B. Virtual 3D jigsaw puzzles: [142] Tourancheau S, Sjstrm M, Olsson R, Persson A, Ericson T, Rudling J, et al. Sub- 2243
2194 studying the effect of exploring spatial relations with implicit guidance. In: jective evaluation of user experience in interactive 3D visualization in a med- 2244
2195 Proceedings of the Mensch & Computer; 2002. p. 363–72. ical context. In: Proceedings of the medical imaging; 2012. 2245
2196 [124] Venail F, Deveze A, Lallemant B, Guevara N, Mondain M. Enhancement of [143] Abildgaard A, Witwit AK, Karlsen JS, Jacobsen EA, Tenn e B, Ringstad G, et al. 2246
2197 temporal bone anatomy learning with computer 3D rendered imaging soft- An autostereoscopic 3D display can improve visualization of 3D models from 2247
2198 wares. Med. Teach 2010;32(7):e282–8. intracranial MR angiography. Int J Comput Assist Radiol Surg 2010;5(5):549– 2248
2199 [125] Smit NN, Kraima AC, Jansma D, Ruiter MCd, Botha CP. A unified represen- 54. 2249
2200 tation for the model-based visualization of heterogeneous anatomy data. In: [144] Luursema JM, Verwey WB, Kommers PA, Annema JH. The role of stereopsis in 2250
2201 Proceedings of the EuroVis - short papers; 2012. p. 85–9. virtual anatomical learning. Interact Comput 2008;20(4–5):455–60. 2251
2202 [126] Smit NN, Kraima AC, Jansma D, Deruiter MC, Eisemann E, Vilanova A. Varvis: [145] Brown PM, Hamilton NM, Denison AR. A novel 3D stereoscopic anatomy tu- 2252
2203 Visualizing anatomical variation in branching structures. In: Proceedings of torial. Clin Teach 2012;9(1):50–3. 2253
2204 the EuroVis - short papers; 2016b. p. 49–53. [146] de Faria JWV, Teixeira MJ, de Moura SJL, Otoch JP, Figueiredo EG. Virtual and 2254
2205 [127] Handels H, Hacker S. A framework for representation and visualization of 3D stereoscopic anatomy: when virtual reality meets medical education. J Neu- 2255
2206 shape variability of organs in an interactive anatomical atlas. Methods Inf rosurg 2016;125(5):1105–11. 2256
2207 Med 2009;48(3):272–81. [147] John NW, Phillips NI, ap Cenydd L, Pop SR, Coope D, Kamaly-Asl I, et al. The 2257
2208 [128] Cook DA, Dupras DM. A practical guide to developing effective web-based use of stereoscopy in a neurosurgery training virtual environment. Presence 2258
2209 learning. J Gen. Intern Med 2004;19(6):698–707. 2016;25(4):289–98. 2259
2210 [129] Warrick PA, Funnell WRJ. VRML-based anatomical teaching (VAT): work in [148] Messier E, Wilcox J, Dawson-Elli A, Diaz G, A LC. An interactive 3d virtual 2260
2211 progress. In: Proceedings of the IEEE information technology applications in anatomy puzzle for learning and simulation - initial demonstration and eval- 2261
2212 biomedicine; 1998. p. 71–5. uation. Stud Health Technol Inf 2016;20:233–40. 2262
2213 [130] Lu J, Pan Z, Lin H, Zhang M, Shi J. Virtual learning environment for medical [149] Pohlandt D.. Supporting anatomy education with a 3D puzzle in a virtual real- 2263
2214 education based on VRML and VTK. Comput Gr 2005;29(2):283–8. ity environment. 2017. Master’s Thesis; Dept. of Computer Science, University 2264
2215 [131] Behr J, Alexa M. Volume visualization in VRML. In: Proceedings of the Web of Magdeburg. 2265
2216 3D conference; 2001. p. 23–7. [150] Brown JR. Enabling educational collaboration ? a new shared reality. Comput 2266
2217 [132] Behr J, Eschler P, Jung Y, Zöllner M. X3DOM: a DOM-based HTML5/x3d inte- Gr 20 0 0;24(2):289–92. 2267
2218 gration model. In: Proceedings of the web 3D technology; 2009. p. 127–35. [151] Moorman SJ. Prof-in-a-box: using internet-videoconferencing to assist stu- 2268
2219 Web3D dents in the gross anatomy laboratory. BMC Med Educ 2006;6(1):55. 2269
2220 [133] John NW, Aratow M, Couch J, Evestedt D, Hudson AD, Polys N, et al. [152] Schmeier A.. Student and teacher meet in a shared virtual environment: a 2270
2221 Medx3d: standards enabled desktop medical 3D. Stud Health Technol Inf VR one-on-one tutoring system to support anatomy education. 2017. Master’s 2271
2222 2008;132:189–94. thesis; Dept. of Computer Science, University of Magdeburg. 2272
2223 [134] Trelease R, Nieder G, Dorup J, Hansen M. Going virtual with quicktime VR [153] Vandenberg SG, Kuse AR. Mental rotations a group test of three-dimensional 2273
2224 new methods and standardized tools for interactive dynamic visualization of spatial visualization. Percept Motor Skills 1978;47(2):599–604. 2274
2225 anatomical structures. Anat rec 20 0 0;261:64–77. [154] Yammine K, Violato C. A meta-analysis of the educational effectiveness of 2275
2226 [135] Petersson H, Sinkvist D, Wang C, Smedby O. Web-based interactive 3D visual- three-dimensional visualization technologies in teaching anatomy. Anat Sci 2276
2227 ization as a tool for improved anatomy learning. Anat Sci Educ 2009;2(2):61– Educ 2015;8(6):525–38. 2277
2228 8. [155] Garg A, Norman G, Spero L, Maheshwari P. Do virtual computer models hin- 2278
2229 [136] Trelease R, Rosset A. Transforming clinical imaging data for virtual reality der anatomy learning. Acad Med 1999;74:87–9. 2279
2230 learning objects. Anat Sci Educ 2008;1(2):50–5. [156] Garg A, Norman G, Spero L. How medical students learn spatial anatomy. 2280
2231 [137] Gray H. Gray’s anatomy: the anatomical basis of medicine and surgery. 39th Lancet 2001;357(3):363–4. 2281
2232 edition. Churchill-Livingstone; 2004. [157] Estai M, Bunt S. Best teaching practices in anatomy education: a critical re- 2282
2233 [138] Birr S, Mönch J, Preim U, Preim B. The web3D liveranatomyexplorer. IEEE view. Ann Anat 2016;208:151–7. 2283
2234 CG&A 2013;33:48–58. [158] Stoppel S, Bruckner S. Vol2velle: printable interactive volume visualization. 2284
IEEE Trans Vis Comput Gr 2017;23(1):861–70. 2285

Please cite this article as: B. Preim, P. Saalfeld, A survey of virtual human anatomy education systems, Computers & Graphics (2018),
https://doi.org/10.1016/j.cag.2018.01.005

Anda mungkin juga menyukai