
Automatic Discovery of Connections Between Vietnamese Anthropometric Features

Dinh Quang Huy


Faculty of Information Technology, College of Technology, Vietnam National University, Hanoi

Supervised by Associate Professor Bui The Duy

A thesis submitted in fulfillment of the requirements for the degree of Master of Information Technology
December, 2010

ORIGINALITY STATEMENT
I hereby declare that this submission is my own work and, to the best of my knowledge, it contains no materials previously published or written by another person, or substantial proportions of material which have been accepted for the award of any other degree or diploma at the University of Engineering and Technology or any other educational institution, except where due acknowledgement is made in the thesis. I also declare that the intellectual content of this thesis is the product of my own work, except to the extent that assistance from others in the project's design and conception or in style, presentation and linguistic expression is acknowledged.

Signed ........................................................................

Abstract
When a skeleton is found, it is often hard to determine who the victim was. Nevertheless, people keep trying to solve this problem because of its demands and importance. Several methods have been introduced for identifying deceased persons, some more effective than others. Facial reconstruction is one of them: the work of recreating the face of a person from his skeletal remains. In the early days, facial reconstruction was done using clay, requiring skillful experts who understand the structure of the skull and skin very well to build up, in clay, the depth of tissue on the skull to that of a living individual. Later, this method was computerized, and people turned to developing 3D facial reconstruction systems. In these systems, the most important issue is to predict the soft tissue depths at every location, or at some locations. Most studies try to obtain a database of soft tissue thicknesses at facial landmarks and store the average thickness for every landmark. When performing the reconstruction, these thicknesses are referenced, and the face is built based on the skull model. These approaches have some problems in data collection, and they do not make use of the discovered skull to predict the thicknesses. Therefore, the accuracy is very low, and most of the time the model generated by the system needs a lot of manual modification to arrive at a suitable face. Realizing that the soft tissue thickness and some other anthropometric features may have relationships with the skull shape, we propose a method for automatic discovery of these connections. We first collect data using the CT technique, which is the most accurate method at the moment. After that, we try several machine learning techniques on the data and measure their performance. Evaluations and comparisons with other approaches are also given in the thesis.

Table of Contents
1 Introduction
  1.1 Overview and Motivation
  1.2 Our Contributions
  1.3 Thesis Organization

2 Background
  2.1 Previous Work in Facial Reconstruction From Skulls
    2.1.1 2D Reconstruction
    2.1.2 Manual 3D Reconstruction
    2.1.3 Computer-Aided Reconstruction
  2.2 Facial Reconstruction Systems
    2.2.1 System developed by Bjorn Anderson, Martin Valfridsson in 2005
    2.2.2 System developed by Kolja Kähler and Jörg Haber
    2.2.3 FACES - software developed by Salerno University, Italy
  2.3 Facial Landmarks
  2.4 Important Facial Features
    2.4.1 Ears
    2.4.2 Eyes
    2.4.3 Nose
    2.4.4 Lips
  2.5 Soft tissue thickness studies
  2.6 Available Soft Tissue Thickness Data

3 Automatic discovery of connections between Vietnamese anthropometric features
  3.1 Data description
  3.2 Data collecting
  3.3 Discovery of anthropometric relationships using linear regression
  3.4 Discovery of anthropometric relationships using neural networks
    3.4.1 Select network structure
    3.4.2 Initialize and train the network

4 Evaluation and Result

5 Conclusions and Future Work

List of Figures
2.1 Matching skull into drawing portrait
2.2 Matching skull into a picture
2.3 Successful clay reconstruction by LSU Faces Lab
2.4 Process of Reconstruction using volumetric data
2.5 Result of Bjorn Anderson and Martin Valfridsson's reconstruction
2.6 Facial Reconstruction Diagram by FACES
2.7 Facial landmarks location
3.1 Phillip MX8000D CT Scanner
3.2 CT images
3.3 Head CT image taken with sagittal plane
3.4 Head CT image taken with vertical plane that goes through the middle of the left eye socket
3.5 Head CT image taken with vertical plane that goes through the forehead
3.6 Head CT image taken with horizontal plane that goes through the zygion landmarks
3.7 Head CT image taken with horizontal plane that goes through the gonion landmarks
3.8 Example of linear regression
3.9 A feed-forward network with a single output layer (a) and with one hidden layer and one output layer (b)
3.10 A recurrent network with hidden neurons
3.11 Neural network structure used in the study
4.1 Regression results obtained by ten-fold cross validation for pronasale thickness using (a) neural network model and (b) linear regression model
4.2 Regression results obtained by ten-fold cross validation for nose length using (a) neural network model and (b) linear regression model
4.3 Regression results obtained by ten-fold cross validation for upper lip border using (a) neural network model and (b) linear regression model
4.4 Regression results obtained by ten-fold cross validation for nose height using (a) neural network model and (b) linear regression model
4.5 Regression results obtained by ten-fold cross validation for pupil-pupil distance using (a) neural network model and (b) linear regression model
4.6 Regression results obtained by ten-fold cross validation for lower lip border using (a) neural network model and (b) linear regression model
4.7 Facial Reconstruction Result Using Linear Regression Equations
4.8 Matching the face and the skull

List of Tables
2.1 List of Facial Landmarks
3.1 Input Data Fields
3.2 Target Data Fields
4.1 MSE values for average method (AVG), Linear Regression (LR), and Neural Network (NN). The best performance is in boldface.
4.2 Equations for linear correlation between input and output, with the corresponding MSE when applied to the whole data set. In the equations, x is the input and y is the output.

Abbreviations
3D   Three-dimensional
CT   Computed Tomography
MRI  Magnetic Resonance Imaging
2D   Two-dimensional
RBF  Radial Basis Functions
MSE  Mean Square Error

Chapter 1 Introduction
1.1 Overview and Motivation

Facial reconstruction is the work of recreating the face of an individual from his discovered skull. This process is mainly used in criminal investigations to facilitate victim identification when no other means are available. Facial reconstruction is also used in archaeology to verify the remains of historic figures, and in anthropology to approximate the appearance of prehistoric hominids. People have been recreating the faces of unidentified individuals from their discovered skulls for nearly a hundred years. In the early days, facial reconstruction was done using clay. This method requires skillful experts who understand the structure of the skull and skin very well, building up clay on the skull to the tissue depth of a living individual. The experts first place landmark dowels on the pre-defined craniofacial landmarks on the skull. After that, clay is applied, and the expert interpolates with clay between the landmark dowels to build up the skin. This method is called the Krogman method [Kro46] and is still used in non-automatic forensic facial reconstruction today. The expert skill and amount of time required have motivated researchers to computerize the technique. A well-designed computer-aided facial reconstruction system has many advantages, including a great reduction in time consumption. Using such a system, we can produce several possible facial models from a given skull by varying parameters such as the person's age, weight, and gender. Recently, the rapid development of 3D equipment and technology has enabled us to advance into this field of research. A lot of computerized methods for 3D facial


reconstruction have been proposed and developed, which make use of computer programs to transform 3D scanned models of skulls into faces. Many studies follow the manual approach, but use the computer to speed up the reconstruction process. In these methods, the discovered skull is scanned to obtain its 3D model. After that, the soft tissue thicknesses at every location on the skull surface need to be calculated to build the final facial model, and the most critical issue turns out to be discovering these thicknesses. To solve this problem, researchers tend to collect a database of information about skin-skull models and then make use of it, but differ in the way of collecting and processing. Mostly, they average the thicknesses over every record in the database and use these averages for every discovered skull. A review of these related works is given in the next chapter. The simplicity of the methods used for calculating these thicknesses leads to low performance in facial reconstruction systems. In addition, some important facial features such as nose and eye shapes are often reconstructed without any information from the skull shape. Realizing this drawback in current facial reconstruction systems, we aim to propose a method for better prediction of facial information. This study is based on Vietnamese data, but can be applied to any other population with proper data collection.

1.2 Our Contributions

In the scope of a Master thesis, we propose a method for automatic discovery of connections between anthropometric features, such as tissue thicknesses, the distance between the two pupils, and nose height, and the skull shape. This work includes the method for collecting the database, and how we apply machine learning to discover the relationships. The connection model can then be used to solve the problem of reconstructing the face from the skull. Our evaluation results also show that there are relationships between facial soft tissue thickness and the skull shape.

1.3 Thesis Organization

The rest of this thesis is organized as follows. Chapter 2 presents the relevant background knowledge, including previous work in facial reconstruction and current facial reconstruction systems. We also review the work


on soft tissue thickness studies and available soft tissue thickness data. Chapter 3 describes the proposed method of automatic discovery of connections between Vietnamese anthropometric features. Our method of data organization, how we collect the data, and how we make use of it are provided in this chapter. Chapter 4 presents our results and a comparison with other approaches. Chapter 5 concludes our work and gives future research directions based on the results obtained so far.

Chapter 2 Background
2.1 Previous Work in Facial Reconstruction From Skulls
Reconstructing a face from a skull means predicting a face as accurately as possible from the discovered skull. There are many different approaches to this problem. They can be divided into three main techniques: 2D reconstruction, manual 3D reconstruction, and 3D reconstruction using computers.

2.1.1 2D Reconstruction

This technique requires a forensic artist who draws a picture of the face being reconstructed. There are two main 2D approaches: matching the skull into a drawn portrait, and matching the skull into pictures or videos. In the first case, the artist draws a simple portrait based on the skull's metrics. After that, he matches this drawing against the skull image and inspects the overlap. He then redraws or fixes some features until the skull image and the drawing fit. The process of this technique is shown in Figure 2.1. The other technique, matching the skull into a picture or video, is used when people want to compare the face with the skull to identify correlations, or to determine whether the skull belongs to the person in the picture or video (Figure 2.2).


Figure 2.1: Matching skull into drawing portrait

Figure 2.2: Matching skull into a picture


Figure 2.3: Successful clay reconstruction by LSU Faces Lab

2.1.2 Manual 3D Reconstruction

The most common manual approach to facial reconstruction is the clay modeling approach. In this method, people first place landmark dowels on the pre-defined craniofacial landmarks on the skull. The lengths of these dowels are initially defined using one of the available soft tissue thickness datasets. After that, they attach clay to the skull so that the clay covers the dowels while keeping a continuous surface. This method is also known as the Krogman method [Kro46], and it gives good results without any help from a computer. However, it requires an expert with very good skill, and each reconstruction still takes a lot of time. The Faces Laboratory at Louisiana State University, USA has some successful examples of clay reconstruction, as shown in Figure 2.3.


2.1.3 Computer-Aided Reconstruction

Many researchers have worked on the facial reconstruction problem and have provided different computer-based solutions. Mark Jones [Jon01] uses volumetric data and cross-correlation techniques. Matthew Cairns [Cai00] uses statistical tools such as Generalized Procrustes Analysis and Principal Components Analysis. Katrina Archer [Arc97] tries to computerize the manual facial reconstruction process. Another approach is presented by Kähler [KHS03], in which he uses a physics-based head model with skin surfaces, virtual muscles, a mass-spring system, and landmarks to reconstruct the face. Bullock [Bul99] uses the Krogman method, placing virtual dowels on the digitized skull with emission-based implicit modeling. In this modeling, every polygon of the skull model emits a real value, which is the interpolated soft tissue thickness at the landmarks associated with the polygon. There are many other techniques that share the same sequence of steps. The first step is generating the 3D model of the skull. We can do that with the help of digital equipment such as a CT scanner or MRI scanner. The CT technique enables accurate measurement of facial soft tissue thickness and is widely used in collecting soft tissue thickness data. In addition, we can conveniently generate the 3D model of the skull from CT images. The second step is to locate the landmarks on the skull surface and the tissue depths at the landmarks. Based on these landmarks, we can apply interpolation techniques such as RBF, B-spline, and non-uniform rational B-spline to generate the 3D grid of the skin surface. The last step is to refine the reconstructed face by tuning features such as the eyes, nose, ears, and lips. This is hard work because these features cannot be predicted from the skull shape alone. Figure 2.4 shows the process of the facial reconstruction proposed by Mark Jones [Jon01], in which he compares volumetric data of the remains with that of a reference head.
Firstly, the discovered skull is scanned using a CT scanner to obtain volumetric data. After that, a reference head having the same sex, racial and age characteristics as the discovered skull is chosen. Then a correspondence is created between the two heads using correlation techniques. Finally, using this correspondence, the soft tissue from the reference head is mapped onto the discovered skull to produce the face of the unknown person.
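The landmark-based interpolation step mentioned above (RBF and related techniques) can be illustrated with a toy example. The landmark coordinates and depths below are made up for demonstration only; a real system interpolates over the full 3D skull surface rather than a 2D patch.

```python
import numpy as np

# Toy Gaussian-RBF interpolation of soft tissue depth between landmarks.
# Landmark positions are 2D surface coordinates; depths are in mm.
# All values here are illustrative, not measured data.
landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
depths = np.array([4.0, 5.5, 5.0, 6.0, 3.0])

def kernel(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)      # Gaussian radial basis function

# Solve Phi @ w = depths so the surface passes exactly through each landmark
phi = kernel(np.linalg.norm(landmarks[:, None] - landmarks[None, :], axis=-1))
w = np.linalg.solve(phi, depths)

def depth_at(p):
    """Interpolated tissue depth at an arbitrary surface point p."""
    return float(kernel(np.linalg.norm(landmarks - p, axis=-1)) @ w)

print(depth_at(np.array([0.25, 0.25])))  # a point between the landmarks
```

The interpolant reproduces the stored depth at each landmark and blends smoothly in between; B-splines and NURBS play the same role with different basis functions.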


Figure 2.4: Process of Reconstruction using volumetric data

2.2 Facial Reconstruction Systems

2.2.1 System developed by Bjorn Anderson, Martin Valfridsson in 2005

This system uses the 3ds Max software and consists of 9 steps. The first step is to collect data from CT: the skull is scanned in a CT scanner and the CT slices are stored in DICOM format. The second step is data segmentation, in which the CT slices are edited using segmentation software to remove artifacts such as metal cavity fillings. This segmentation software is also used to produce a 3D model to import into 3ds Max. In the third step, the model is imported into 3ds Max and some pre-processing such as normalization and rotation is performed. In the fourth


Figure 2.5: Result of Bjorn Anderson and Martin Valfridsson's reconstruction

step, the landmark dowels are placed by the user via the GUI. In the fifth step, the holes in the cranium are covered. Step 6 performs the mesh calculations: based on the tissue depths at the landmarks, the tissue depths at other locations are calculated. The chin and neck are also constructed at this step. Step 7 is the creation of the nose, eyes, ears, and lips. Step 8 is post-processing, in which the model is altered based on knowledge about human faces. The last step is to add texture and render the final images. A sample result of this system is shown in Figure 2.5.

2.2.2 System developed by Kolja Kähler and Jörg Haber

In December 2003, Kolja Kähler defended his PhD thesis, titled "A Head Model with Anatomical Structure for Facial Modeling and Animation". In the



thesis, he studied the facial muscles, how they work and deform, and built facial reconstruction software. This software was then developed [KHS03] by him and his colleagues at the University of Saarland into a complete system. Besides fast reconstruction, the system also enables changing the emotion of the face, based on 24 types of muscle changes.

2.2.3 FACES - software developed by Salerno University, Italy

This software performs facial reconstruction from the skulls of prehistoric hominids. The reconstruction is based on warping and deforming a template face chosen from a set of models of different sexes and races. Warping algorithms are applied to carry out this work. The software uses two databases: a craniometrical database of skulls and a pictorial physiognomic database of faces. Figure 2.6 shows the software's diagram.

2.3 Facial Landmarks

As described, most methods use pre-defined points on the head, called facial landmarks. There are a number of different landmark configurations with various numbers of landmarks. The most widely used are the 32 landmarks described by Rhine [Rhi84]. The list of these landmarks and their positions are given in Table 2.1 and Figure 2.7. In our research, we make use of some extra landmarks, such as exocanthion, endocanthion, alare, pronasale, basion, subnasale, and stomion. There are two types of landmarks: craniofacial and cephalometric. They are the marks on the skull and the corresponding marks on the skin surface, respectively. In most data measurements, the distances between these pairs are measured and stored.



Figure 2.6: Facial Reconstruction Diagram by FACES

Figure 2.7: Facial landmarks Location


Table 2.1: List of Facial Landmarks

1. Supraglabella
2. Glabella
3. Nasion
4. Rhinion
5. Mid Philtrum
6. Upper Lip Margin
7. Lower Lip Margin
8. Chin-Lip Fold
9. Mental Eminence
10. Beneath Chin
11. Frontal Eminence
12. Supraorbital
13. Suborbital
14. Inferior Malar
15. Lateral Orbit
16. Zygomatic Arch, midway
17. Supraglenoid
18. Gonion
19. Supra M2
20. Occlusal Line
21. Sub M2

2.4 Important Facial Features

The aim of facial reconstruction is to produce a model of an individual's face that can be recognized by close friends or family members. Research presented by Wilkinson [Wil04] shows that hair, face outline, eyes, nose, and mouth are the important features for the recognition of faces.



2.4.1 Ears

Wilkinson [Wil04] stated that we cannot estimate the shape of the ears, including their size, form, and projection, from the skull alone, because there is no underlying bone that describes the ear's appearance. However, the ear shape is not an important feature of the face.

2.4.2 Eyes

Some research describes relationships involving the eyes. For the depth placement of the eyeball in the socket, Wilkinson [Wil04] indicates that a straight line between the superior and inferior orbital margins will touch the front of the cornea. He also states that the opening of the eye is 60 to 80 percent of the width of the orbit.

2.4.3 Nose

We can reconstruct the nose by looking at the shape of the nasal bones and cartilage. Wilkinson [Wil04] shows a way to calculate the shape of the nose based on the angle of the nasal bone. However, the cartilage part of the nose is hard to predict.

2.4.4 Lips

This is an important feature and it is said that the lips are determined by the structure of the underlying bones. For example, a person with big teeth is likely to have thick lips, while a person with small teeth often has thin lips.

2.5 Soft tissue thickness studies

As mentioned, soft tissue thickness data plays an important role in facial reconstruction, whatever method we use. The issue of collecting soft tissue thickness data to clarify the relationship between soft tissue and the underlying bony structure of the skull has been discussed by forensic medicine experts for more than a hundred years. In 1883, Welcker [Wel83] obtained a database of soft tissue thicknesses by inserting a thin blade into the facial skin of cadavers at selected anatomical landmarks. After that, he measured the depth of the blade's penetration. Until the mid-1980s, all studies that



needed to collect soft tissue thickness data at anatomical landmarks used cadaverous populations and this needle technique. However, this type of approach has some problems. First of all, a dead person's tissues are not the same as in life, due to drying and embalming. Secondly, the skin can be deformed by the penetration of the needle. Lastly, it is hard to locate the landmarks correctly through soft tissue when performing the needle insertion. Since we need to produce a model as accurate as possible, all these matters must be taken into consideration. The needle technique also cannot be used on living subjects, which leads to errors in measurement. After 1980, with the development of technology, non-invasive medical systems became popular. A variety of methods have been used to measure tissue depth in living subjects, including ultrasound, MRI, and CT. In 1987, George [Geo87] used lateral craniographs to record the depths of tissue at midline anthropometric points. In 2000, Manhein et al. [MLB+ 00] used ultrasound to collect information for a sample of children and adults of both sexes, varying ages, and different ancestries. El-Mehallawi and Soliman [EMS01] and De Greef et al. [DGPV+ 06] also used ultrasound to conduct their studies. In 2002, Sahni et al. [SJG+ 02] used MRI to obtain tissue depth data of Indians. The most accurate measurement can be obtained using CT. This technique is faster and more accurate, as it gives high-quality images. With the help of the computer, we can also construct the 3D model from the CT images. In 1996, Phillips and Smuts [PS96] used the CT technique to obtain data from a mixed population of South Africa. There is much more related research that collects soft tissue thicknesses for study. However, most measurements are collected from rather small populations, due to the harm the tests may cause.
Ultrasound techniques seem to be the most accurate and safe, as they can be used without any considerable threat to the subject [Wil04]. MRI has the advantage of collecting data in 3D format. Soft tissue visualization is excellent, but bony tissue is not as well visualized as on a CT scan [VPST+ 07]. In addition, these studies just gather tissue depth data at anthropometric landmarks, giving no information about any relationship between these depths and the skull shape. Therefore, most facial reconstruction systems just use the average thicknesses calculated from the database for every landmark. Some studies are available for Vietnamese subjects. They were made by Le Viet Vung (2005), Xu Xuan Khoi (1996), Le Gia Vinh (2005), and Pham Huu Phung and Nguyen Trong Toan (2007), and are provided in the form of average values and variations. They have also drawn some conclusions about Vietnamese



facial characteristics. For example, they conclude that Vietnamese faces are of a short and wide type, and Vietnamese noses are of a normal type. These studies are meaningful when we want to verify a predicted model of a face, or when we manually tune the facial features. However, this information is far from enough for an automatic facial reconstruction system with accurate soft tissue thickness prediction.

2.6 Available Soft Tissue Thickness Data

Several soft tissue thickness data collections have been published. Datasets of American Blacks and American Whites are provided by Rhine [Rhi84]. These datasets are divided into groups of different sex and weight, and show the average soft tissue thicknesses at Rhine's landmarks for each population in each group. Many later facial reconstruction systems used these data collections to define the tissue depths. However, these datasets were obtained from cadavers, so they suffer from the disadvantages described above. In 2000, Manhein et al. [MLB+ 00] published a study of American Blacks and Caucasian Americans using the ultrasound technique. This data is divided into age groups, with landmarks similar to Rhine's. The correctness of this data is considered higher than Rhine's due to the ultrasound method used to obtain it. The latest dataset seems to be the T-tables (Tallied Facial Soft Tissue Depth Data) provided by Stephan [SC10]. The T-tables represent pooled soft tissue depth means from many previously published studies. They were started in 2008 and are regularly updated. The T-tables provide three sets of soft tissue thickness data with different age ranges: 0 to 11 years, 12 to 17 years, and 18 years and beyond. In contrast to any single soft tissue depth study, which typically includes fewer than 40 individuals, each of the T-tables reports values for more than 3000 individuals. Therefore, the T-tables have the advantage of tolerating the measurement error of any single study. However, some studies [HLW85] [Dum86] have shown that race, sex, age, and weight have moderate impacts on soft tissue thickness. The T-tables divide data into age groups only, which makes them hard to use. In addition, the provided data is already averaged, so we cannot separate it into different groups.
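As a small illustration of how pooled means like those in the T-tables arise, a sample-size-weighted mean over several studies can be computed as follows. The study values below are invented for the example, not taken from [SC10].

```python
# Each entry: (mean tissue thickness in mm at one landmark, number of subjects).
# The numbers are illustrative only.
studies = [(3.1, 40), (3.4, 25), (2.9, 60)]

# Pooled mean weighted by sample size, as used when tallying across studies
pooled = sum(m * n for m, n in studies) / sum(n for _, n in studies)
print(round(pooled, 3))  # 3.064
```

This averaging smooths out single-study measurement error, but it also discards the race, sex, and weight groupings that moderate soft tissue thickness.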

Chapter 3 Automatic discovery of connections between Vietnamese anthropometric features


We treat the soft tissue thickness prediction issue as a missing data problem, so that the solution is straightforward. We need to build a database of input and target sets, where the input is the skull and the target is the soft tissue thicknesses. After the database is ready, data processing begins. We try two approaches to find the relationships: one using simple linear regression, and one using a neural network. This chapter is organized as follows. Firstly, we describe our database, including the features we store. Secondly, we show how to collect this information using the CT technique. Finally, we present the data processing stage using linear regression and neural networks.
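The comparison between the population-average baseline and a fitted model can be sketched as below. The data is synthetic (17 random "skull measurements" and one linearly dependent "tissue thickness" target), standing in for the real database described in this chapter; ten-fold cross validation and MSE are used in the same way as in the evaluation of Chapter 4.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 17                              # candidates x skull measurements
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(scale=0.1, size=n)  # stand-in target

def mse(pred):
    return float(np.mean((pred - y) ** 2))

# Baseline used by earlier systems: one population-average value for everyone
mse_avg = mse(np.full(n, y.mean()))

# Ten-fold cross validation of a least-squares linear model on the skull inputs
pred = np.empty(n)
for test_idx in np.array_split(rng.permutation(n), 10):
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    A = np.c_[X[train_idx], np.ones(train_idx.size)]     # intercept column
    coef, *_ = np.linalg.lstsq(A, y[train_idx], rcond=None)
    pred[test_idx] = np.c_[X[test_idx], np.ones(test_idx.size)] @ coef

mse_lr = mse(pred)
print(mse_lr < mse_avg)  # regression on skull metrics beats the flat average
```

A neural network predictor would replace the least-squares fit inside the loop; the relative MSE values of the three predictors on the real data are reported in Chapter 4.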

3.1 Data description

Our database is the means of storing information about the candidates. For each candidate, the information is divided into two sets: an input set and a target set. Since we cannot store the entire skull model in the database, only the important distances are stored. These are the distances between landmark points on the skull, and some skull metrics such as cranial height and cranial length. The target set consists mostly of the thicknesses at landmark locations. In the case of facial reconstruction, only the input set is known.
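A hypothetical record for one candidate, following the field lists of Tables 3.1 and 3.2, could be laid out as in the sketch below. The field names and values are illustrative, not actual database contents.

```python
# One database record: skull measurements (input) and soft tissue targets.
# Only a few of the 17 input and 38 target fields are shown; values are made up.
candidate = {
    "input": {
        "cranial_height": 13.2,      # basion-bregma distance
        "cranial_length": 17.8,      # glabella-opisthocranion distance
        "facial_width_zy_zy": 13.5,  # between zygomatic arch landmarks
    },
    "target": {
        "pronasale": 2.1,            # soft tissue thickness at pronasale
        "nose_length": 4.7,          # nasion to cephalometric pronasale
        "pupil_pupil": 6.2,          # distance between the two pupils
    },
}

# At reconstruction time only candidate["input"] is known; the models in this
# chapter predict candidate["target"] from it.
print(sorted(candidate))  # ['input', 'target']
```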

Table 3.1: Input Data Fields

1. cranial height: distance from the midpoint of the anterior border of the foramen magnum (basion) to the intersection of the coronal and sagittal sutures (bregma).
2. n-rh: distance between the nasion and rhinion landmarks.
3. base nose length: distance between the nasion and mid philtrum landmarks.
4. cranial length: distance in the midsagittal plane from the most anterior point on the frontal (glabella) to the most posterior point on the occipital (opisthocranion).
5. cranial breadth: greatest width between the parietal eminences (euryon).
6. ex-ex: distance between the left and right exocanthion landmarks.
7. en-en: distance between the left and right endocanthion landmarks.
8. molar-molar: distance between two molars.
9. al-al: distance between the left and right alare landmarks.
10. nose socket width: the largest horizontal distance of the nose socket.
11. forehead width: distance between the left and right frontal eminence landmarks.
12. facial width (zy-zy): distance between the left and right zygomatic arch landmarks.
13. jaw width (go-go): distance between the left and right gonion landmarks.
14. upper face height (n-pr): distance between the nasion and pronasale landmarks.
15. n-ba: distance between the nasion and basion landmarks.
16. ba-pr: distance between the basion and pronasale landmarks.
17. facial height: distance from the most anterior inferior point of the mandible in the median sagittal plane (gnathion) to the point of intersection of the internasal suture with the nasofrontal suture (nasion).

Chapter 3. Automatic discovery of connections between Vietnamese anthropometric features

Table 3.2: Target Data Fields

| N# | Name | Description |
|----|------|-------------|
| 1 | vertex | soft tissue thickness at vertex landmark. |
| 2 | trichion | soft tissue thickness at trichion landmark. |
| 3 | glabella | soft tissue thickness at glabella landmark. |
| 4 | nasion | soft tissue thickness at nasion landmark. |
| 5 | rhinion | soft tissue thickness at rhinion landmark. |
| 6 | pronasale | soft tissue thickness at craniofacial pronasale landmark. |
| 7 | nose length | distance from nasion to cephalometric pronasale landmark. |
| 8 | subnasale | soft tissue thickness at subnasale landmark. |
| 9 | upper lip border | soft tissue thickness at upper lip margin landmark. |
| 10 | lower lip border | soft tissue thickness at lower lip margin landmark. |
| 11 | stomion | soft tissue thickness at stomion landmark. |
| 12 | mental | soft tissue thickness at mental landmark. |
| 13 | menton | soft tissue thickness at menton landmark. |
| 14 | opisthocranion | soft tissue thickness at opisthocranion landmark. |
| 15 | exocanthion (R) | soft tissue thickness at right exocanthion landmark. |
| 16 | exocanthion (L) | soft tissue thickness at left exocanthion landmark. |
| 17 | endocanthion (R) | soft tissue thickness at right endocanthion landmark. |
| 18 | endocanthion (L) | soft tissue thickness at left endocanthion landmark. |
| 19 | pupil-pupil | distance between the two pupils. |
| 20 | supraorbital (R) | soft tissue thickness at right supraorbital landmark. |
| 21 | supraorbital (L) | soft tissue thickness at left supraorbital landmark. |
| 22 | infraorbital (R) | soft tissue thickness at right infraorbital landmark. |
| 23 | infraorbital (L) | soft tissue thickness at left infraorbital landmark. |
| 24 | zygomatic arch (R) | soft tissue thickness at right zygomatic arch landmark. |
| 25 | zygomatic arch (L) | soft tissue thickness at left zygomatic arch landmark. |
| 26 | zygomatic (R) | soft tissue thickness at right zygomatic landmark. |
| 27 | zygomatic (L) | soft tissue thickness at left zygomatic landmark. |
| 28 | porion (R) | soft tissue thickness at right porion landmark. |
| 29 | porion (L) | soft tissue thickness at left porion landmark. |
| 30 | gonion (R) | soft tissue thickness at right gonion landmark. |
| 31 | gonion (L) | soft tissue thickness at left gonion landmark. |
| 32 | alare (R) | soft tissue thickness at right alare landmark. |
| 33 | alare (L) | soft tissue thickness at left alare landmark. |
| 34 | lateral nasal (R) | soft tissue thickness at right lateral nasal landmark. |
| 35 | lateral nasal (L) | soft tissue thickness at left lateral nasal landmark. |
| 36 | nose height | projection of cephalometric pronasale landmark to the skull surface. |
| 37 | buccal (R) | soft tissue thickness at right buccal landmark. |
| 38 | buccal (L) | soft tissue thickness at left buccal landmark. |


The input fields are shown in Table 3.1 and the target fields are shown in Table 3.2. Our data is collected from 220 candidates, of whom 98 are male and 122 are female. The ages range from 17 to 82 and the weights range from 38 kg to 75 kg.

3.2 Data collecting

The CT images used for our database are captured using a Philips MX8000D CT scanner (Figure 3.1). The CT technique is very convenient because it is fast and accurate, and it produces high-quality images. It avoids the disadvantages of [Kro46] because it can capture the skin and skull models of living subjects. Besides, we can build 3D models from CT images.

Figure 3.1: Philips MX8000D CT Scanner

Figure 3.2 shows some samples of the CT images that we are using. We use specialized software to measure the distances visually by selecting the start point and end point of each distance in the CT image. The image is at a known scale relative to the real person, so distances can be converted from pixels to millimetres.


Figure 3.2: CT images

First, we take the CT image of the sagittal plane (Figure 3.3) to measure the soft tissue thicknesses at the following landmarks: supraglabella, glabella, nasion, rhinion, mid philtrum, upper lip margin, lower lip margin, chin-lip fold, mental eminence, and beneath chin.

Figure 3.3: Head CT image taken with sagittal plane

After that, we take the CT image of the vertical plane that goes through the middle of the left eye socket (Figure 3.4). From this image, we can measure the soft tissue thicknesses at the left frontal eminence, left supraorbital, and left inferior molar landmarks. It works the same way for the right landmarks.

Figure 3.4: Head CT image taken with the vertical plane that goes through the middle of the left eye socket

The CT image taken for the vertical plane that goes through the forehead (Figure 3.5) can be used to measure the soft tissue thicknesses at the left and right porion landmarks. The CT image taken for the horizontal plane that goes through the zygion landmark (Figure 3.6) is used to measure the soft tissue thicknesses at the left and right zygion landmarks, and the CT image taken for the horizontal plane that goes through the gonion landmark (Figure 3.7) is used to measure the soft tissue thicknesses at the left and right gonion landmarks.


Figure 3.5: Head CT image taken with vertical plane that goes through the forehead

Figure 3.6: Head CT image taken with horizontal plane that goes through the zygion landmarks


Figure 3.7: Head CT image taken with horizontal plane that goes through the gonion landmarks

3.3 Discovery of anthropometric relationships using linear regression

Linear regression is a method to model the relationship between two variables by fitting a linear equation to the observed data. Linear regression is popular in practical applications because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters. In prediction problems, linear regression is used to fit a predictive model to an observed data set of y and X values. Afterwards, this model can be used to predict the value of y for any given value of X. There are numerous methods to estimate the a and b parameters in the linear equation, such as ordinary least squares, generalized least squares, least absolute deviation, or maximum likelihood estimation. These methods differ in terms of computational simplicity. We use simple linear regression with the ordinary least squares estimator.


Figure 3.8: Example of linear regression

A linear regression line has an equation of the form Y = a + bX, where X is one of the distances in the input set and Y is one of the thicknesses in the output set. The slope of the line is b, and a is the intercept. Suppose there are n data points (x_i, y_i). The linear regression equation gives the best fit for the data points in terms of least square error. This error can be calculated using the following formula:

$$\mathrm{lse} = \sum_{i=1}^{n} (y_i - a - b x_i)^2 \qquad (3.1)$$

Since we treat the soft tissue thickness calculation as a prediction problem, we need to obtain such a linear equation for every output field. In order to determine which distance in the input data should be the X parameter, we try all the possible fields in the input data. With each field, we apply linear regression and choose the one with the best performance. This method can discover one-to-one linear relationships effectively. Figure 3.8 shows an example of our process.
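As a concrete sketch of this selection procedure (the function names below are ours, not the thesis's), an ordinary least squares fit can be computed in closed form and every input field tried in turn, keeping the one with the lowest least-square error of Eq. (3.1):

```python
import numpy as np

def fit_ols(x, y):
    """Closed-form ordinary least squares fit of y = a + b*x."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    a = y.mean() - b * x.mean()
    return a, b

def best_single_predictor(X, y, field_names):
    """Try every input field as the regressor X and keep the one whose
    fitted line has the lowest least-square error (Eq. 3.1)."""
    best = None
    for j, name in enumerate(field_names):
        a, b = fit_ols(X[:, j], y)
        lse = np.sum((y - a - b * X[:, j]) ** 2)
        if best is None or lse < best[0]:
            best = (lse, name, a, b)
    return best  # (lse, field name, intercept a, slope b)
```

In the thesis's setting, the columns of X would hold the Table 3.1 distances and y one of the Table 3.2 thicknesses.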


3.4 Discovery of anthropometric relationships using neural networks

Artificial neural networks have seen a rapid increase of interest over the last few years, and are being successfully applied to a wide range of domains such as character and speech recognition and signal processing. Neural networks have several advantages. The most important is the ability to learn from data, and thus the potential to produce an acceptable output for previously unseen input data. Neural networks can even work when the input series contains low-quality or missing data. Another advantage is their non-linear nature. In addition, the network is very flexible to changes in the environment; we only have to retrain the system in such cases. The process of neural network design for prediction problems contains six primary steps: collect data, select the network structure, initialize the weights and biases, train the network, validate the network, and use the network.

3.4.1 Select network structure

The step of selecting the network structure is not to be underestimated. There is a tight relationship between the learning algorithm and the network structure which makes the design suitable for the problem [Hay94]. This step ensures the network is compatible with the problem we are going to solve, as defined by the sample data. Two different types of neural networks can be distinguished: feed-forward and recurrent networks. A feed-forward network is a typical neural network consisting of layers, where connections between the units do not form a directed cycle. In a single-layered network there is an input layer and an output layer of neurons. A multi-layer network has one or more hidden layers of hidden neurons. Extra hidden neurons increase the ability to extract higher-order statistics from the data. However, using too many hidden neurons might lead to overfitting. Figure 3.9 shows the feed-forward network structure. A recurrent network, on the other hand, is a network where connections between units form a directed cycle, and is shown in Figure 3.10. This structure is believed to be more effective in tasks such as unsegmented connected handwriting recognition, where it has achieved the best known results [GLF+09].

Figure 3.9: A feed-forward network with a single output layer (a) and with one hidden layer and one output layer (b)

Figure 3.10: A recurrent network with hidden neurons

We select the two-layer feed-forward network, with a tan-sigmoid transfer function in the hidden layer and a linear transfer function in the output layer, because this structure can represent any functional relationship between inputs and outputs if the hidden layer has enough neurons [HDB96]. The design of this neural network structure is shown in Figure 3.11.
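As a minimal sketch of the chosen structure (the function and variable names here are ours), the forward pass computes a tanh hidden layer followed by a linear output:

```python
import numpy as np

def two_layer_forward(x, W1, b1, W2, b2):
    """Two-layer feed-forward network: tan-sigmoid (tanh) transfer
    function in the hidden layer, linear transfer in the output layer."""
    hidden = np.tanh(W1 @ x + b1)  # hidden layer activations
    return W2 @ hidden + b2        # linear output layer
```

Note that with all-zero weights the output equals the output bias, since tanh(0) = 0.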

3.4.2 Initialize and train the network

Before training the network, the weights and biases are randomly initialized. The performance differs between training runs because of this random initialization. The training process requires a set of inputs p and targets t, and begins afterward. This process tunes the values of the network's weights and biases to optimize the network performance defined by the MSE function. The MSE between the network outputs a and the target outputs t is defined as follows:

$$\mathrm{mse} = \frac{1}{N}\sum_{i=1}^{N} e_i^2 = \frac{1}{N}\sum_{i=1}^{N} (t_i - a_i)^2 \qquad (3.2)$$

Figure 3.11: Neural network structure used in the study
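For illustration only, this MSE objective can be minimized by plain steepest descent with hand-derived backpropagation gradients. This is a sketch under our own assumptions, not the thesis's actual setup (which uses the Levenberg-Marquardt training function), and all names are ours:

```python
import numpy as np

def train_two_layer(X, t, n_hidden=5, lr=0.02, epochs=5000, seed=0):
    """Steepest-descent training of a two-layer net (tanh hidden layer,
    linear output), minimizing the MSE of Eq. (3.2); the update follows
    Eq. (3.3) with the search direction d_k = -gradient."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (n_hidden, X.shape[1]))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, n_hidden)
    b2 = 0.0
    N = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1.T + b1)   # hidden activations, N x n_hidden
        e = H @ W2 + b2 - t          # output errors a_i - t_i
        # backpropagation: gradients of the MSE w.r.t. each parameter
        gW2 = 2.0 * (e @ H) / N
        gb2 = 2.0 * e.mean()
        dH = (2.0 / N) * np.outer(e, W2) * (1.0 - H ** 2)
        gW1 = dH.T @ X
        gb1 = dH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    """Network outputs a for a batch of inputs X."""
    return np.tanh(X @ W1.T + b1) @ W2 + b2
```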

In order to train the network, any optimization algorithm can be used to optimize the performance function. However, some algorithms are believed to have better performance. These methods use the gradient of the network performance with respect to the network weights. The gradient is calculated using the backpropagation algorithm, which is an efficient way to calculate the partial derivatives of the network error function with respect to the weights [Gro02]. There are many training algorithms which make use of the gradient information supplied by the backpropagation algorithm. In these algorithms, a weight update from iteration k to k + 1 may look like

$$w_{k+1} = w_k + \alpha \, d_k \qquad (3.3)$$

where d_k is the search direction and α is the learning rate. The training algorithms differ in the ways they determine the search direction and the learning rate. Different algorithms might also produce different performances.

The fastest training functions are the Levenberg-Marquardt function and the Quasi-Newton function. However, these two methods are less efficient for large networks due to their huge resource consumption. In these cases, the Scaled Conjugate Gradient function and the Resilient Backpropagation function are better choices. Since our network model is small and the Levenberg-Marquardt function performs best on nonlinear regression problems, we decide to choose this function as our training function.

For each thickness in the output, we need to obtain a prediction neural network model. This can be done by letting this thickness be the target, and all the input data be the input for the training process. However, as most fields in the input do not have any relationship with the output thickness, the performance might be very bad. We apply a simple method to increase the performance. We start the training process with all the input data. We train the network and record the performance over the validation set. After that, we try removing one field from the input data and retrain the network. If the performance over the validation set is worse this time, we return the removed field. Otherwise, the removed input field stays out. We continue this process until every input field has been tried. By this time, we have the set of input fields that relate well to the output thickness, and the model that contains this relationship.
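The field-removal loop just described can be sketched as follows; `train_and_validate` is a caller-supplied stand-in (our assumption, not a function from the thesis) that trains a network on the given fields and returns its validation MSE:

```python
def greedy_field_selection(fields, train_and_validate):
    """Backward elimination: try removing each input field in turn and
    keep the removal whenever the validation error does not get worse."""
    selected = list(fields)
    best_mse = train_and_validate(selected)
    for f in list(selected):
        trial = [g for g in selected if g != f]
        if not trial:
            continue  # never train on an empty input set
        mse = train_and_validate(trial)
        if mse <= best_mse:  # removal did not hurt: keep the field out
            selected, best_mse = trial, mse
    return selected, best_mse
```

In practice `train_and_validate` would wrap the Levenberg-Marquardt training run and the validation-set MSE computation described above.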

Chapter 4 Evaluation and Result


We perform the evaluation on the dataset of males, which contains 98 samples. In our evaluation, we use ten-fold cross-validation to compute the output MSE for the two approaches, linear regression and neural network. For the neural network, the training is done several times, with the number of neurons ranging from 10 to 20 and randomly initialized weights each time. The network with the best performance over the validation set is chosen to generate output for the test set. We then compare these MSE values with the average method, in which the output thickness for all tests is simply the average of all the outputs in the training set. This average method is what is used in almost every facial reconstruction system so far. Table 4.1 shows our results and their comparison with the average. It can be seen from the table that linear regression always gives better results than the average. Most of the time, neural networks generate the best result overall. However, there are cases when the neural network gives an even worse result than the average, such as the results for zygomatic arch (R), zygomatic (L), gonion (L), and nose height. In order to analyse more deeply, we try plotting the results for some random outputs. Figures 4.1, 4.2, 4.3, 4.4, 4.5, and 4.6 show the experimental results. In these figures, predicted distances are plotted against the true values. For a perfect prediction, the data should fall along a 45 degree line (the Y=T line), where the outputs are equal to the targets. The neural network's values for pronasale thickness, nose length, and pupil-pupil distance are close to the diagonal, indicating that the prediction was good. For linear regression, the predictions for nose length and pupil-pupil distance seem to have good performance. The other predictions are not as good, but acceptable.
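As a sketch of the baseline we compare against (the function name is ours), the ten-fold cross-validated MSE of the "average" method can be computed by predicting each test fold with the mean of its training fold:

```python
import numpy as np

def cross_val_mse_average(y, k=10, seed=0):
    """K-fold cross-validated MSE of the baseline 'average' method:
    every test sample is predicted by the mean of the training folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = y[train].mean()
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))
```

The same fold split would be reused for the linear regression and neural network models so that the three MSE columns of Table 4.1 are comparable.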



Table 4.1: MSE values for average method (AVG), Linear Regression (LR), and Neural Network (NN). The best performance is in boldface.

| N# | Output | AVG | LR | NN |
|----|--------|-----|----|----|
| 1 | vertex | 1.1914 | 1.0625 | **0.8928** |
| 2 | trichion | 1.2945 | 1.0877 | **1.0664** |
| 3 | glabella | 1.2074 | **1.0110** | 1.0706 |
| 4 | nasion | 0.9699 | 0.7571 | **0.7220** |
| 5 | rhinion | 0.3886 | **0.3400** | 0.3797 |
| 6 | pronasale | 7.9621 | 6.0558 | **5.2456** |
| 7 | nose length | 21.8621 | 10.8344 | **8.7059** |
| 8 | subnasale | 6.3008 | **4.3927** | 4.6878 |
| 9 | upper lip border | 4.9468 | 4.3581 | **3.7205** |
| 10 | lower lip border | 3.1674 | 2.7312 | **2.4167** |
| 11 | stomion | 2.2193 | 1.8766 | **1.8168** |
| 12 | mental | 4.1007 | 3.4298 | **3.3625** |
| 13 | menton | 2.3685 | **1.9901** | 2.0885 |
| 14 | opisthocranion | 1.8909 | 1.5124 | **1.1001** |
| 15 | exocanthion (R) | 0.7884 | **0.6635** | 0.7084 |
| 16 | exocanthion (L) | 0.8609 | **0.7121** | 0.8459 |
| 17 | endocanthion (R) | 2.5804 | 2.0950 | **1.7213** |
| 18 | endocanthion (L) | 2.6779 | 2.0706 | **2.0099** |
| 19 | pupil-pupil | 10.8380 | **4.4587** | 4.9687 |
| 20 | supraorbital (R) | 0.6689 | 0.5533 | **0.4556** |
| 21 | supraorbital (L) | 0.6859 | 0.5340 | **0.4986** |
| 22 | infraorbital (R) | 1.4038 | 1.2479 | **1.0475** |
| 23 | infraorbital (L) | 1.1147 | **0.9573** | 1.1920 |
| 24 | zygomatic arch (R) | 0.8485 | **0.7432** | 1.6805 |
| 25 | zygomatic arch (L) | 0.8857 | **0.7400** | 0.7982 |
| 26 | zygomatic (R) | 0.8326 | 0.6982 | **0.5635** |
| 27 | zygomatic (L) | 0.9557 | **0.7722** | 1.3729 |
| 28 | porion (R) | 3.3546 | **2.7241** | 2.9786 |
| 29 | porion (L) | 2.5552 | 2.0471 | **1.7367** |
| 30 | gonion (R) | 1.0521 | 0.9333 | **0.8245** |
| 31 | gonion (L) | 0.9360 | **0.8330** | 1.5443 |
| 32 | alare (R) | 2.0965 | 1.6396 | **1.5934** |
| 33 | alare (L) | 2.0342 | 1.5304 | **1.4494** |
| 34 | lateral nasal (R) | 1.9751 | **1.4220** | 1.5541 |
| 35 | lateral nasal (L) | 2.0908 | 1.3537 | **1.3495** |
| 36 | nose height | 4.1012 | **3.5995** | 4.5687 |
| 37 | buccal (R) | 13.6992 | **11.2034** | 12.2837 |
| 38 | buccal (L) | 13.9451 | **11.6959** | 11.7598 |


(a) Neural network model (MSE=5.2456)

(b) Linear regression model (MSE=6.0558)

Figure 4.1: Regression results obtained by ten-fold cross validation for pronasale thickness using (a) neural network model and (b) linear regression model.

(a) Neural network model (MSE=8.7059)

(b) Linear regression model (MSE=10.8344)

Figure 4.2: Regression results obtained by ten-fold cross validation for nose length using (a) neural network model and (b) linear regression model.

(a) Neural network model (MSE=3.7205)

(b) Linear regression model (MSE=4.3581)

Figure 4.5: Regression results obtained by ten-fold cross validation for upper lip border using (a) neural network model and (b) linear regression model.



(a) Neural network model (MSE=4.5687)

(b) Linear regression model (MSE=3.5995)

Figure 4.3: Regression results obtained by ten-fold cross validation for nose height using (a) neural network model and (b) linear regression model.

(a) Neural network model (MSE=4.9687)

(b) Linear regression model (MSE=4.4587)

Figure 4.4: Regression results obtained by ten-fold cross validation for pupil-pupil distance using (a) neural network model and (b) linear regression model.

(a) Neural network model (MSE=2.4167)

(b) Linear regression model (MSE=2.7312)

Figure 4.6: Regression results obtained by ten-fold cross validation for lower lip border using (a) neural network model and (b) linear regression model.


Figure 4.7: Facial Reconstruction Result Using Linear Regression Equations

Figure 4.8: Matching the face and the skull

The complete linear equations for the one-to-one correlations between input and output are shown in Table 4.2. These equations are used in our facial reconstruction system. A visual result of our work is given in Figure 4.7. In this figure, the face on the left is the result of facial reconstruction from the skull on the right. The facial landmarks are also shown on the skull. Figure 4.8 shows how the face and skull are matched.
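For instance, the Table 4.2 equation for pronasale thickness, y = 0.34191x + 5.6906, can be evaluated directly; the helper below and the 20 mm input value are just an illustration, not measurements from the thesis:

```python
def predict_thickness(x, a, b):
    """Evaluate a Table 4.2 linear equation y = a + b*x (all in mm)."""
    return a + b * x

# pronasale soft tissue thickness for a hypothetical base nose length
# of 20 mm (Table 4.2, row 6: y = 0.34191x + 5.6906)
thickness = predict_thickness(20.0, 5.6906, 0.34191)
```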



Table 4.2: Equations for linear correlation between input and output, with the corresponding MSE when applied to the whole data set. In the equations, x is the input and y is the output.

| N# | Output | Input | Linear Equation | MSE |
|----|--------|-------|-----------------|-----|
| 1 | vertex | cranial breadth | y = -0.038041x + 10.8251 | 1.1230 |
| 2 | trichion | cranial height | y = 0.07284x - 4.6447 | 1.1500 |
| 3 | glabella | forehead width (ft-ft) | y = 0.073272x - 2.2482 | 1.0456 |
| 4 | nasion | cranial height | y = 0.070439x - 5.022 | 0.8300 |
| 5 | rhinion | molar-molar | y = -0.036784x + 4.1817 | 0.3635 |
| 6 | pronasale | base nose length | y = 0.34191x + 5.6906 | 6.7684 |
| 7 | nose length | n-rh | y = 1.1274x + 27.5733 | 11.8050 |
| 8 | subnasale | bn-bn | y = -0.3371x + 22.5646 | 4.7101 |
| 9 | upper lip border | molar-molar | y = 0.12137x + 5.0732 | 4.5644 |
| 10 | lower lip border | cranial height | y = 0.086747x + 1.7837 | 2.9479 |
| 11 | stomion | base facial length (ba-pr) | y = -0.072432x + 10.843 | 2.0073 |
| 12 | mental | al-al | y = 0.18113x + 3.5632 | 3.7307 |
| 13 | menton | al-al | y = 0.13342x + 1.0783 | 2.1684 |
| 14 | opisthocranion | al-al | y = 0.14536x - 0.019573 | 1.5853 |
| 15 | exocanthion (R) | cranial height | y = 0.052174x - 3.2308 | 0.7072 |
| 16 | exocanthion (L) | cranial height | y = 0.054596x - 3.4705 | 0.7609 |
| 17 | endocanthion (R) | cranial height | y = 0.10877x - 8.8266 | 2.2855 |
| 18 | endocanthion (L) | cranial height | y = 0.13074x - 11.5992 | 2.1918 |
| 19 | pupil-pupil | ex-ex | y = 0.71282x - 5.9472 | 4.6317 |
| 20 | supraorbital (R) | en-en | y = 0.083979x + 2.6781 | 0.5880 |
| 21 | supraorbital (L) | al-al | y = 0.097903x + 1.0674 | 0.5861 |
| 22 | infraorbital (R) | cranial height | y = 0.051155x - 2.4011 | 1.2943 |
| 23 | infraorbital (L) | cranial height | y = 0.055695x - 2.9922 | 1.0309 |
| 24 | zygomatic arch (R) | n-rh | y = 0.070673x + 3.49 | 0.8013 |
| 25 | zygomatic arch (L) | base cranial length (n-ba) | y = -0.049215x + 9.3495 | 0.7973 |
| 26 | zygomatic (R) | base facial length (ba-pr) | y = -0.042995x + 8.556 | 0.7596 |
| 27 | zygomatic (L) | base facial length (ba-pr) | y = -0.051253x + 9.3569 | 0.8386 |
| 28 | porion (R) | bn-bn | y = 0.1512x + 4.5823 | 3.0310 |
| 29 | porion (L) | bn-bn | y = 0.16467x + 4.1336 | 2.1702 |
| 30 | gonion (R) | al-al | y = 0.088104x + 0.53022 | 0.9684 |
| 31 | gonion (L) | nasal projection | y = 0.10235x + 3.5032 | 0.8845 |
| 32 | alare (R) | al-al | y = 0.18611x + 1.0906 | 1.7071 |
| 33 | alare (L) | al-al | y = 0.2063x + 0.33053 | 1.6000 |
| 34 | lateral nasal (R) | al-al | y = 0.23816x - 2.0783 | 1.4152 |
| 35 | lateral nasal (L) | al-al | y = 0.2648x - 3.1083 | 1.3845 |
| 36 | nose height | base nose length | y = 0.17297x + 15.4149 | 3.7506 |
| 37 | buccal (R) | bn-bn | y = 0.31662x + 6.9254 | 12.2240 |
| 38 | buccal (L) | bn-bn | y = 0.31206x + 7.0209 | 12.3088 |

Chapter 5 Conclusions and Future Work


Facial reconstruction is an interesting research field as it helps in many cases. Researchers have been developing facial reconstruction systems to speed up the manual process and also to produce better results. In this general problem, one of the most important issues is to determine the soft tissue thicknesses at landmarks on the skull. However, most facial reconstruction systems neglect this issue and use the average thickness for simplicity. Our research has pointed out that this average method performs worse than our linear regression method in every case, and worse than our neural network method in most cases. Our research also shows that there are relationships between the skull shape and the tissue depths, and that these relationships deserve further investigation. However, our research has some limitations which can be improved upon to obtain better results. The following are our future work directions.

The first possible development is to improve the measurement process. As can be seen from the experiments, our results show good performance for long distances such as pronasale thickness or nose length, and bad performance for short distances, due to errors introduced in the measurement process. This is because the longer the distance, the less it is affected by measurement error. In addition, the thin soft tissues do not depend much on the skull shape; in other words, they do not have much relationship with the metrics. Furthermore, defining the landmarks on the CT images depends heavily on the skill and judgment of the people who perform the measurement, even though this technique is the most accurate. This method also requires a lot of time to measure and collect data. We plan to apply image processing to automatically discover these metrics. This would save a lot of time in measurement and might give better accuracy.

Another thing that needs to be noted is that, in 2009, Pascal Paysan et al. [PLA+09] proposed a method to reconstruct the face from the skull, with the capability of tuning the weight and age attributes. From this research, we know that weight and age affect the facial shape greatly. Our candidates' ages and weights lie within the wide ranges of 18 to 82 and 43kg to 75kg, respectively. Separating candidates into groups is very important because the relationships between features differ from one age and weight range to another, and missing this step leads to moderate error in training and validation. However, in our experiment, we could not separate the candidates into groups because the number of entries was not sufficient; separating would give even worse results. In the future, we will collect more data for each group of weight and age. This will improve the prediction performance significantly.

In addition, because our data and problem are straightforward, many other machine learning techniques can be applied, such as the decision stump, support vector machines, or boosting. Given the satisfactory results from the neural network approach, it is possible that better results can be obtained from other techniques. We plan to implement and analyze results using different techniques.

Lastly, as different landmark configurations might lead to different results and performances, trying different landmark configurations is worthwhile. However, this work requires obtaining additional data from CT images.

Publications list
Quang Huy Dinh, Thi Chau Ma, The Duy Bui, Trong Toan Nguyen, Dinh Tu Nguyen (2011). Facial soft tissue thicknesses prediction using anthropometric distances. Studies in Computational Intelligence, Springer. Proceedings of the 3rd Asian Conference on Intelligent Information and Database Systems. 2011 (to appear).


Bibliography

[Arc97] Katrina Marie Archer. Craniofacial reconstruction using hierarchical B-spline interpolation. The University of British Columbia, 1997.

[Bul99] David William Bullock. Computer Assisted 3D Craniofacial Reconstruction. The University of British Columbia, 1999.

[Cai00] Matthew James Francis Cairns. An Investigation into the use of 3D Computer Graphics for Forensic Facial Reconstruction. Glasgow University, 2000.

[DGPV+06] S. De Greef, P. Claes, D. Vandermeulen, W. Mollemans, P. Suetens, and G. Willems. Large-scale in-vivo Caucasian facial soft tissue thickness database for craniofacial reconstruction. Journal of Forensic Sciences, 159:126–146, 2006.

[Dum86] E. R. Dumont. Mid-facial tissue depths of white children: An aid to facial feature reconstruction. J Forensic Sci, 1986.

[EMS01] I. H. El-Mehallawi and E. M. Soliman. Ultrasonic assessment of facial soft tissue thicknesses in adult Egyptians. Journal of Forensic Sciences, 117(1-2):99–107, 2001.

[Geo87] R. M. George. The lateral craniographic method of facial reconstruction. Journal of Forensic Sciences, 32(5):1305–1330, 1987.

[GLF+09] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for improved unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), 2009.

[Gro02] R. Grothmann. Multi-Agent Market Modeling based on Neural Networks. Ph.D. thesis, Faculty of Economics, University of Bremen, Germany, 2002.

[Hay94] S. Haykin. Neural Networks: A Comprehensive Foundation. Macmillan College Publishing, New York, 1994.

[HDB96] Martin T. Hagan, Howard B. Demuth, and Mark H. Beale. Neural Network Design. PWS Publishing Company, Boston, Massachusetts, 1996.

[HLW85] G. Hodson, L. S. Lieberman, and P. Wright. In vivo measurement of facial thickness in American Caucasoid children. J Forensic Sci, 1985.

[Jon01] Mark W. Jones. Facial reconstruction using volumetric data. Proceedings of the Vision Modeling and Visualization Conference 2001, pages 135–150, November 2001.

[KHS03] Kolja Kähler, Jörg Haber, and Hans-Peter Seidel. Reanimating the dead: reconstruction of expressive faces from skull data. ACM Transactions on Graphics (TOG), 22(3):554–561, 2003.

[Kro46] Wilton Marion Krogman. The reconstruction of the living head from the skull. FBI Law Enforcement Bulletin, 15(7):18, July 1946.

[MLB+00] M. H. Manhein, G. A. Listi, R. E. Barsley, R. Musselman, N. E. Barrow, and D. H. Ubelaker. In vivo facial tissue depth measurements for children and adults. Journal of Forensic Sciences, 45(1):48–60, 2000.

[PLA+09] Pascal Paysan, Marcel Lüthi, Thomas Albrecht, Anita Lerch, Brian Amberg, Francesco Santini, and Thomas Vetter. Face reconstruction from skull shapes and physical attributes. 5748:232–241, 2009.

[PS96] V. M. Phillips and N. A. Smuts. Facial reconstruction: Utilization of computerized tomography to measure facial tissue thickness in a mixed racial population. Forensic Sci Int., 83:51–59, 1996.

[Rhi84] Stanley Rhine. Tissue thickness measures: American Caucasoids, American Blacks, Southwestern Indians. Physical Anthropology Laboratories, Maxwell Museum of Anthropology, University of New Mexico, 1984.

[SC10] C. N. Stephan and J. Cicolini. Tallied facial soft tissue depth data (TFSTDD), 2010.

[SJG+02] D. Sahni, I. Jit, M. Gupta, P. Singh, S. Suri, Sanjeev, and H. Kaur. Preliminary study on facial soft tissue thickness by magnetic resonance imaging in Northwest Indians. Forensic Science Communications, 4, 2002.

[VPST+07] J. Vander Pluym, W. W. Shan, Z. Taher, C. Beaulieu, C. Plewes, A. E. Peterson, O. B. Beattie, and J. S. Bamforth. Use of magnetic resonance imaging to measure facial soft tissue depth. Cleft Palate-Craniofacial Journal, 44:52–57, 2007.

[Wel83] H. Welcker. Schillers Schädel und Todenmaske, nebst Mittheilungen über Schädel und Todenmaske Kants. 1883.

[Wil04] Caroline Wilkinson. Forensic facial reconstruction. Cambridge University Press, 2004.
