See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/323152911

Poster · February 2018
DOI: 10.13140/RG.2.2.24044.67207

All content following this page was uploaded by Jan Egger on 13 February 2018.


Lower Jawbone Data Generation for Deep Learning Tools under
MeVisLab
B. Pfarrkirchner, C. Gsaxner, L. Lindner, N. Jakse, J. Wallner, D. Schmalstieg, J. Egger
Graz University of Technology, Institute for Computer Graphics and Vision, Graz, Austria; BioTechMed-Graz, Graz, Austria
Medical University of Graz, Department of Maxillofacial Surgery, Graz, Austria; Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria

INTRODUCTION

Segmentation is an important branch of medical image processing and the basis for further detailed investigations of computed tomography (CT), magnetic resonance imaging (MRI), X-ray, ultrasound (US) or nuclear images. Through segmentation, an image is divided into various connected areas that correspond to certain tissue types. A common aim is to delineate healthy and pathologic tissue. A frequent example in medicine is the identification of a tumor or pathological lesion and its volume to evaluate treatment planning and outcome. In the clinical routine, segmentation is necessary for planning specific treatment tasks, for example in radiation therapy or for the creation of three-dimensional (3D) visualizations and models to simulate a surgical procedure. Segmentation techniques can be classified into several families, such as thresholding, region growing, watershed, edge-based approaches, active contours and model-based algorithms. Recently, deep learning with neural networks has become important for automatic segmentation applications.

METHODS

All implementations of this work were accomplished with the MeVisLab platform. MeVisLab is medical image processing software with a graphical user interface. In addition, it provides built-in modules for basic image processing operations, such as low-pass filtering. These modules can be connected to form image processing networks.

We developed a MeVisLab module network that converts segmentation contours into ground truth images and a depiction of the patients' CT images. We also developed a SaveAsSingleSlices macro module, which saves image data as separate and, optionally, modified slices: it automatically stores all slices of one image data stack as separate TIFF or PNG files, and it can export selected slices and augment the dataset with geometric transformations and noise. Geometric transformations may use any combination of rotation, scaling and mirroring, while noise can be of the uniform, Gaussian or salt-and-pepper variety. All parameters can be specified interactively in a custom user interface panel.

RESULTS

The number of exportable images depends on the storage parameters defined by the user. If the default values for transformation and noise are applied to a single slice, eleven slices can be exported (the original slice and ten artificially generated slices). We chose rotation angles of ±8° and a scale of 1±0.04 in the x- and y-direction. The uniform noise has an amplitude of 800 gray values; the Gaussian noise has a mean of zero and a standard deviation of 300 gray values. The salt-and-pepper amplitudes are set to ±2000 gray values, and the density is set to 0.05.

Fig. 3 Examples of exported CT slices in the lower jawbone area. The left image shows the originally acquired CT slice of a patient. In the middle, a flipped version of the left image is visible. The right CT slice displays an image with added salt-and-pepper noise (amplitude ±2000 and density 0.05).
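The augmentation operations and the noise parameters reported above can be sketched outside MeVisLab. The following minimal NumPy sketch is an assumption-laden illustration, not the released macro module: the nearest-neighbor resampling, the uniform-noise range of ±800, and the particular set of generated variants are choices made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

def affine_nn(img, angle_deg=0.0, scale=1.0):
    """Rotate and scale a 2D slice about its center (nearest-neighbor)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find its source pixel
    yr = ((ys - cy) * np.cos(a) - (xs - cx) * np.sin(a)) / scale + cy
    xr = ((ys - cy) * np.sin(a) + (xs - cx) * np.cos(a)) / scale + cx
    yr = np.clip(np.rint(yr).astype(int), 0, h - 1)
    xr = np.clip(np.rint(xr).astype(int), 0, w - 1)
    return img[yr, xr]

def add_uniform(img, amplitude=800):
    # uniform noise; the symmetric +/- range is an assumption
    return img + rng.uniform(-amplitude, amplitude, img.shape)

def add_gaussian(img, sigma=300):
    # zero-mean Gaussian noise, sigma in gray values
    return img + rng.normal(0.0, sigma, img.shape)

def add_salt_pepper(img, amplitude=2000, density=0.05):
    # perturb a random fraction of pixels by +/- amplitude
    out = img.astype(float).copy()
    mask = rng.random(img.shape) < density
    out[mask] += rng.choice([-amplitude, amplitude], size=int(mask.sum()))
    return out

# stand-in slice in Hounsfield-like gray values (not patient data)
slice_ct = rng.integers(-1000, 2000, size=(64, 64)).astype(float)
augmented = [
    slice_ct,                          # original
    np.fliplr(slice_ct),               # mirrored
    affine_nn(slice_ct, angle_deg=8),  # rotated +8 deg
    affine_nn(slice_ct, angle_deg=-8), # rotated -8 deg
    affine_nn(slice_ct, scale=1.04),   # scaled up
    affine_nn(slice_ct, scale=0.96),   # scaled down
    add_uniform(slice_ct),
    add_gaussian(slice_ct),
    add_salt_pepper(slice_ct),
]
```

Each variant keeps the slice shape, so the generated images can be stored alongside the original in the same training set.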
Neural networks are constructed of neurons that are organized into input layers, output layers, and hidden layers, which are located between the input and output layers. Neural networks with a large number of layers are known as deep networks. The neurons are connected via weights, which can be trained with a training dataset to solve specific problems. For efficient training, neural networks require large amounts of training data.

To train deep learning networks for lower jawbone (mandible) segmentation, we used anonymized CT datasets of the complete head and neck region from ten patients. All CT acquisitions had been performed in the clinical routine for diagnostic reasons at a maxillofacial surgery department and were placed at our disposal. Each mandible in these CT images was segmented manually by two specialized doctors (slice-by-slice in axial direction) to generate the ground truth contours for the segmentation task.

Fig. 2 Various representations created with View2D modules. No. 1 shows the original CT, No. 2 the CT with an overlaid contour, No. 3 the binary mask and No. 4 the CT with the overlaid binary mask.

CONCLUSIONS

In this work, a MeVisLab network and a macro module have been developed to provide a convenient way to handle data preparation and augmentation for the segmentation of the lower jawbones with deep learning networks. The macro module provides a convenient interface for tuning the training data set by selecting slices and applying freely configurable data augmentation. This approach makes it easy to systematically vary the data set before training.

REFERENCES

1. Zhou, K. et al. “Deep Learning for Medical Image Analysis,” Elsevier, pp. 1-458 (2017).
2. Egger, J. et al. “Integration of the OpenIGTLink network protocol for image guided therapy with the medical platform MeVisLab,” The International Journal of Medical Robotics and Computer Assisted Surgery, 8(3):282-290 (2012).
3. Egger, J. et al. “HTC Vive MeVisLab integration via OpenVR for medical applications,” PLoS ONE, 12(3):e0173972 (2017).
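The conversion of segmentation contours into binary ground-truth images, as performed by the module network, can be illustrated with a plain even-odd (ray-casting) polygon rasterizer. This NumPy sketch is only a stand-in for the MeVisLab network; the square contour and the image size are placeholders, not patient data.

```python
import numpy as np

def contour_to_mask(contour, shape):
    """Rasterize a closed polygon contour, given as a list of (x, y)
    vertices, into a binary ground-truth mask using the even-odd rule."""
    h, w = shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    px, py = xs.ravel() + 0.5, ys.ravel() + 0.5  # test pixel centers
    inside = np.zeros(px.shape, dtype=bool)
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        # a horizontal ray from the pixel crosses this edge iff the edge
        # spans the pixel's y coordinate
        crosses = (y1 > py) != (y2 > py)
        with np.errstate(divide="ignore", invalid="ignore"):
            # x coordinate where the edge intersects the ray's height
            x_at = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            inside ^= crosses & (px < x_at)  # odd number of crossings = inside
    return inside.reshape(h, w).astype(np.uint8)

# hypothetical square contour standing in for a mandible outline
mask = contour_to_mask([(2, 2), (9, 2), (9, 9), (2, 9)], shape=(12, 12))
```

The resulting mask pairs one-to-one with the CT slice it was drawn on, which is exactly the ground-truth format a segmentation network is trained against.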

The work received funding from BioTechMed-Graz in Austria (“Hardware accelerated intelligent medical imaging”) and the 6th Call of the Initial Funding Program from the Research & Technology House (F&T-Haus) at the Graz University of Technology (PI: Dr. Dr. habil. Jan Egger). The corresponding macro module and Python source code are freely available at:
https://github.com/birgitPf/Data_Generation
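The slice-export behavior of a SaveAsSingleSlices-style module can be imitated outside MeVisLab with the Python standard library alone. The following sketch is not the released code: the numbered file names, the min-max windowing to 8-bit gray values, and the minimal PNG writer are all assumptions made for this illustration.

```python
import os
import struct
import tempfile
import zlib

import numpy as np

def _chunk(tag: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, tag, data, CRC-32."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def write_png_gray8(path: str, img: np.ndarray) -> None:
    """Write a 2D uint8 array as an 8-bit grayscale PNG (stdlib only)."""
    h, w = img.shape
    # width, height, bit depth 8, color type 0 (gray), default flags
    ihdr = struct.pack(">IIBBBBB", w, h, 8, 0, 0, 0, 0)
    # each scanline is prefixed with filter type 0 (no filtering)
    raw = b"".join(b"\x00" + img[y].tobytes() for y in range(h))
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")            # PNG signature
        f.write(_chunk(b"IHDR", ihdr))
        f.write(_chunk(b"IDAT", zlib.compress(raw)))
        f.write(_chunk(b"IEND", b""))

def export_stack(stack: np.ndarray, out_dir: str, prefix: str = "slice"):
    """Save every axial slice of a (slices, h, w) stack as a numbered PNG,
    windowing the gray values to the 0-255 range first."""
    lo, hi = float(stack.min()), float(stack.max())
    paths = []
    for i, sl in enumerate(stack):
        u8 = ((sl - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)
        p = os.path.join(out_dir, f"{prefix}_{i:04d}.png")
        write_png_gray8(p, u8)
        paths.append(p)
    return paths

# stand-in CT stack in Hounsfield-like gray values (not patient data)
stack = np.random.default_rng(1).integers(-1000, 2000, size=(3, 32, 32))
paths = export_stack(stack.astype(float), tempfile.mkdtemp())
```

In practice a TIFF export would preserve the full gray-value range, which is why the macro module offers both formats.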
