
Computerized Medical Imaging and Graphics xxx (2007) xxx–xxx

Segmentation and grading of brain tumors on apparent diffusion
coefficient images using self-organizing maps

C. Vijayakumar a,∗, Gharpure Damayanti b, R. Pant a, C.M. Sreedhar a

a Department of Radiodiagnosis and Imaging, Armed Forces Medical College, Pune, India
b Department of Electronic Science, University of Pune, Pune, India
Received 5 May 2006; received in revised form 17 April 2007; accepted 25 April 2007

Abstract
An accurate computer-assisted method to segment brain tumors on apparent diffusion coefficient (ADC) images and evaluate their grade (malignancy state) has been designed using a combination of an unsupervised artificial neural network (ANN) and hierarchical multiresolution wavelet analysis. Firstly, the ADC images are decomposed by multiresolution wavelets, which are subsequently selectively reconstructed to form wavelet filtered images. These wavelet filtered images, along with FLAIR and T2 weighted images, have been utilized as features for an unsupervised neural network, the self-organizing map (SOM), to segment tumor, edema, necrosis, CSF and normal tissue and to grade the malignant state of the tumor. A novel segmentation algorithm based on the number of hits experienced by best matching units (BMU) on SOM maps is proposed. The results show that the SOM performs well in differentiating the tumor, edema, necrosis, CSF and normal tissue pattern vectors on ADC images. Using the trained SOM and the proposed segmentation algorithm, we are able to identify high or low grade tumor, edema, necrosis, CSF and normal tissue. The results are validated against manually segmented images, and the sensitivity and specificity are observed to be 0.86 and 0.93, respectively.
© 2007 Elsevier Ltd. All rights reserved.
Keywords: Self-organizing maps; Artificial neural networks; Multiresolution analysis; Apparent diffusion coefficients; Brain tumors; Image segmentation; Tumor
grading; Magnetic resonance imaging

1. Introduction
The diagnosis of brain tumors is an important application for
various medical imaging techniques. Magnetic resonance imaging is the preferred
technique for detecting and characterizing brain tumors. Correct identification
of the tumor grade determines the subsequent treatment.
Several approaches have been proposed to effectively segment and grade brain tumors. Fletcher-Heath et al. [1] proposed
an automatic segmentation technique which separates non-enhancing brain tumors using an unsupervised fuzzy clustering
method. Clark et al. [2,3] reported a knowledge-based technique
for brain tumor segmentation. In their method, conventional MR
images (T1-weighted, proton density, and T2-weighted) were
utilized in a system that integrated knowledge-based techniques
with multispectral analysis. Liu et al. [4] proposed a system
based on the fuzzy connectedness method for quantifying high grade
tumors such as glioblastoma. In this method, T1 weighted images
with gadolinium enhancement have been utilized to gather
information about different aspects of the tumor and its vicinity.
Nattkemper and Wismüller [5] reported a novel method to evaluate the
malignancy state of breast tumors with an unsupervised method,
self-organizing maps, using dynamic contrast-enhanced
magnetic resonance imaging. The SOM technique has been extensively
utilized to handle and cluster extremely complex data, but very
little attention has been given to utilizing it for segmentation.
Reddick et al. [6] developed a pixel-based two-stage approach in
which a SOM is trained to segment multispectral MR images, which are
subsequently classified into white matter, gray matter, etc., by a
feed-forward ANN. Hierarchical network architectures have been developed
for optical character recognition [7] and for segmentation of range images [8].

∗ Corresponding author. Tel.: +91 2026306204/6033.
E-mail address: vijayafmc@gmail.com (C. Vijayakumar).
Despite the research activities cited above, a single technique
that both segments and grades brain tumors has not yet been
reported. In this study, a method based on the SOM is
proposed to segment and grade brain tumors using magnetic
resonance images. The reason for choosing the unsupervised
SOM technique for this work is its ability to cluster extremely

complex data [9]. It creates a set of prototype vectors representing the data set and carries out a topology preserving projection
of the prototypes from the high dimensional input space onto a
low-dimensional grid. This ordered grid can be used as a convenient visualization surface for showing different patterns of the
SOM and thus of the data.
Conventionally, magnetic resonance images such as T1, T2
and FLAIR weighted images have been used in evaluating brain
tumors. Nevertheless, evaluation of tumors solely on the
basis of T1, T2 and FLAIR weighted images is not successful in every case [10–12]. Recent advances in medical imaging
suggest that values obtained from apparent diffusion coefficient (ADC) maps of diffusion-weighted images may play a
role in the evaluation of tumors [13–15]. Diffusion-weighted
magnetic resonance imaging provides image contrast through
measurement of the diffusion properties of water within tissues.
Application of diffusion-sensitizing gradients to the MR pulse
sequence allows water molecular displacements over distances
of 1–20 μm to be recognized. Combining images obtained with
different amounts of diffusion weighting provides an apparent
diffusion coefficient (ADC) map. In brain tumor imaging, ADC
maps have been successfully used to distinguish brain tumors
from oedema. They are also increasingly exploited to differentiate low grade from high grade regions, where the increased cellularity
of high grade lesions restricts water motion in a reduced extracellular space. Thus, characterization of tumors by ADC maps
may not only help to differentiate tumors based on cellularity,
but also to demarcate tumors from surrounding tissue and to
grade the malignancy.
In this study, we sought to determine whether SOM is capable
of segmenting and grading brain tumors using the information
provided by ADC maps, wavelet filtered images of ADC maps,
along with conventional images such as FLAIR and T2 weighted
images. The cellularity of tissues represented in ADC maps is
resolved by employing a multiresolution wavelet technique. The
wavelet is a promising technique for medical image segmentation tasks [16–18] due to its ability to provide information at
the different resolution levels. The multiresolution approach of
wavelet has been identified as the ideal tool for extracting local
textural features from the images [19,20].
2. Methods
2.1. Data acquisition
Using a 1.5-T MR unit, we obtained axial T2 weighted images
with imaging parameters of 4200/99 (TR/TE), a slice thickness
of 5 mm, FLAIR images with imaging parameters of 8200/114
(TR/TE), a slice thickness of 5 mm and DWIs with imaging
parameters of 3800/107 (TR/TE), a slice thickness of 5 mm.
The DWIs were acquired with b values of 0 and 1000 s/mm² in three perpendicular directions using echo-planar imaging (EPI).
The number of slices acquired in every protocol was six and the number
of averages employed was two. Analysis of diffusion changes
was performed by calculating the ADC values as the negative
slope of the linear regression line best fitting the points of b
versus ln(SI), where SI is the signal intensity from the diffusion
images acquired at the two b values. These calculations have
been carried out at the MR workstation and the ADC maps
have been constructed for every slice. Each ADC map is the
average of the three maps constructed from the three diffusion directions.
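As an illustration of the ADC calculation described above, the following minimal sketch computes a two-point ADC map in Python; the array names (s_b0, s_b1000, dwi_x, dwi_y, dwi_z) are hypothetical, and the actual calculation in this study was performed on the MR workstation.

```python
import numpy as np

def adc_map(s_b0, s_b1000, b0=0.0, b1=1000.0, eps=1e-6):
    """Two-point ADC estimate: the negative slope of ln(SI) versus b."""
    s_b0 = np.maximum(s_b0.astype(float), eps)        # avoid log(0)
    s_b1000 = np.maximum(s_b1000.astype(float), eps)
    return (np.log(s_b0) - np.log(s_b1000)) / (b1 - b0)

# Average the per-direction maps, as described in the text
# (dwi_x, dwi_y, dwi_z are hypothetical b = 1000 images in three directions):
# adc = np.mean([adc_map(s_b0, d) for d in (dwi_x, dwi_y, dwi_z)], axis=0)
```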
2.2. Preprocessing
As a preliminary step in the preprocessing, registration
of all the images has been carried out to ensure pixel-to-pixel correspondence. All the images have been registered by bringing
them to a common matrix size of 256 × 256 with a field of
view of 23 cm × 23 cm. Subsequently, all the images have been
standardized to have normalized pixel values. This standardization has been implemented to overcome the variation that exists in
the pixel values of images of different patients and to bring all the
images to common pixel levels [21]. Then, the cranium has
been removed from the standardized images by employing a histogram threshold method. This has been applied specifically at
the periphery of the images to avoid removing intracranial regions
with pixel values similar to those of the cranium.
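A minimal sketch of these preprocessing steps is given below. The min-max normalization and the peripheral threshold are illustrative assumptions (the threshold value and border width are not specified in the text); reference [21] describes the intensity standardization actually used.

```python
import numpy as np

def standardize(img):
    """Map pixel values to [0, 1]; a simple stand-in for the standardization of [21]."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def strip_cranium(img, thresh=0.8, border=20):
    """Zero out cranium-like bright pixels, but only within a peripheral band,
    so that intracranial regions with similar intensities are not removed."""
    out = img.copy()
    peripheral = np.zeros(img.shape, dtype=bool)
    peripheral[:border, :] = peripheral[-border:, :] = True
    peripheral[:, :border] = peripheral[:, -border:] = True
    out[peripheral & (img > thresh)] = 0.0
    return out
```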
2.3. Wavelet decomposition and selective reconstruction
In the wavelet transform, a basis filter, known as the mother wavelet, is dilated and translated to provide frequency information
as a function of time or location [22–24]. Eq. (1) shows how
the wavelet coefficients are obtained given a wavelet basis filter
that is scaled and translated by factors of 2. In this equation, a
dot product is formed between f(x), the function to encode, and ψ,
the mother wavelet. The parameter j is the scaling factor, k is the
translation factor, and the multiplicative factor normalizes the dot product:

W(j, k) = \frac{1}{\sqrt{2^{j}}} \left\langle f(x),\; \psi\!\left(\frac{x - k}{2^{j}}\right) \right\rangle \quad (1)
In order to process the image, the scaling function and the wavelet filter are applied in several different orientations.
These orientations are constructed by multiplying a 1D scaling function φ and the corresponding wavelet ψ. As given
below, Eq. (2) responds to variations in the vertical direction,
Eq. (3) responds to variations in the horizontal direction and Eq. (4)
responds to variations along the diagonals. Finally, Eq. (5) describes
the scaling function, which is used to generate the approximation
coefficients:

\psi^{1}(x, y) = \phi(x)\,\psi(y) \quad (2)

\psi^{2}(x, y) = \psi(x)\,\phi(y) \quad (3)

\psi^{3}(x, y) = \psi(x)\,\psi(y) \quad (4)

\phi(x, y) = \phi(x)\,\phi(y) \quad (5)

After each decomposition process, the resulting coefficient


image is decimated by half in each coordinate. As a result, four
different images containing the mentioned coefficients are generated, each being 1/4 the size of the original images. Since every
wavelet coefficient carries unique information, each has been
preserved separately by a selective reconstruction method. This


has been implemented by reconstructing each set of wavelet coefficients with the others set to zero. This results in four reconstructed
images for a level-one wavelet decomposition.
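The selective reconstruction can be sketched, for example, with the PyWavelets package (not used in the original work, shown here only to make the procedure concrete): decompose once with db2, then reconstruct four times, each time keeping a single sub-band and zeroing the others.

```python
import numpy as np
import pywt

def selective_reconstructions(img, wavelet="db2"):
    """Level-1 2D DWT followed by four selective reconstructions, one per sub-band."""
    cA, (cH, cV, cD) = pywt.wavedec2(img, wavelet, level=1)
    zA, zH, zV, zD = (np.zeros_like(c) for c in (cA, cH, cV, cD))
    return {
        "approximation": pywt.waverec2([cA, (zH, zV, zD)], wavelet),
        "horizontal":    pywt.waverec2([zA, (cH, zV, zD)], wavelet),
        "vertical":      pywt.waverec2([zA, (zH, cV, zD)], wavelet),
        "diagonal":      pywt.waverec2([zA, (zH, zV, cD)], wavelet),
    }
```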
2.4. Self-organizing maps
2.4.1. The self-organizing map model
A self-organizing map (SOM) is a neural network [25] model
developed in the early 1980s by Teuvo Kohonen. In contrast with other
neural network models, it has a strong physiological inspiration, as it is based on the topological map that exists in the brain
cortex. The cortex is organized so that topologically closer neurons tend to produce answers to the same kind of stimulus; this
is one of the reasons why it is largely employed in visual pattern
recognition.
2.4.2. SOM training
The self-organizing map training method is based on competitive learning, which is a type of neural network unsupervised
learning. The SOM consists of a regular, usually 2D, grid of map units
(also called neurons), which is defined as H = {w_ij; 1 ≤ i ≤ L, 1 ≤ j ≤ M},
where L and M are the numbers of rows and columns
and w_ij is the weight (also called the prototype vector) assigned
to the (i,j)th unit of the SOM architecture. The weight of a unit
is defined as w_ij = [v_1, v_2, ..., v_d], where d is the dimension of the
weight vector and v_1, v_2, ... are its components. In
this study, the components of the weight vectors have been initialized
with random numbers and subsequently updated during the
training stage of the SOM, and the dimension of the weight vector
chosen is seven.
The training pattern vectors used for the SOM training are defined
as T = {b_m; 1 ≤ m ≤ n}, where b_m is the mth training pattern vector and n is the number of training pattern vectors.
In this study, the total number of training pattern vectors chosen
was 700. The individual training pattern vector b_m is defined
as b_m = [f_1, f_2, ..., f_d], where f_1, f_2, ... are the components of the
training pattern vector. In this study, the components of each
training pattern vector (b_m) are the wavelet filtered values of the ADC
image and the pixel values of the ADC, FLAIR and T2 weighted
images.
During the training stage, the SOM is searched to find the
unit in its architecture whose weight is closest to the training
pattern vector (b_m). The similarity criterion [26,27] chosen is
the Euclidean distance. The unit in the SOM architecture whose
weight is most similar to the training pattern vector (b_m) is called the best
matching unit (BMU) of that training pattern vector (b_m) and is
denoted w_m. The BMU of b_m is determined as

\| b_m - w_m \| = \min_{i,j} \| b_m - w_{ij} \| \quad (6)

The BMU and the neighbors around it have their weights w_ij
updated at step t, becoming better representatives of b_m,
as Eq. (7) shows:

w_{ij}(t+1) = \begin{cases} w_{ij}(t) + \alpha(t)\, h_{b,ij}(t)\, (b_m - w_{ij}(t)), & (i,j) \in N_b(t) \\ w_{ij}(t), & \text{otherwise} \end{cases} \quad (7)

where N_b(t) denotes the neighborhood of the BMU at step t.

In the above expression, α(t) is the learning rate and h_{b,ij}(t) is
the neighborhood function centered on the BMU:

h_{b,ij}(t) = \exp\!\left( -\frac{\| r_b - r_{ij} \|^{2}}{2\sigma^{2}(t)} \right) \quad (8)

where r_b and r_ij are the positions of the best matching unit and the (i,j)th
unit on the SOM map, respectively. The width σ(t) is allowed to
decrease monotonically with time. Fig. 1 shows a schematic
display of the hexagonal grid SOM utilized for training.
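For concreteness, a minimal NumPy sketch of one training update implementing Eqs. (6)–(8) follows; it uses a rectangular grid with a Gaussian neighborhood, whereas the actual study used a hexagonal grid, and the map size and dimensions follow Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, d = 20, 20, 7                              # map size and weight dimension (Table 1)
W = rng.random((L, M, d))                        # random initial weights in [0, 1)
grid = np.stack(np.meshgrid(np.arange(L), np.arange(M), indexing="ij"), axis=-1)

def train_step(W, x, alpha, sigma):
    """One SOM update: locate the BMU (Eq. (6)), then pull the BMU and its
    neighbours towards the pattern x (Eqs. (7) and (8))."""
    dist = np.linalg.norm(W - x, axis=-1)              # distance of x to every unit
    b = np.unravel_index(np.argmin(dist), dist.shape)  # best matching unit index
    h = np.exp(-np.sum((grid - grid[b]) ** 2, axis=-1) / (2.0 * sigma ** 2))
    W += alpha * h[..., None] * (x - W)                # Eq. (7) with Gaussian h (Eq. (8))
    return W, b
```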
After the complete training, the best matching unit set
(W_bmu) of the training pattern vectors on the SOM map, which consists
of the BMUs of every training pattern vector, has been determined along with its class label set (C). Thus, W_bmu =
{w_1, w_2, ..., w_m, ..., w_n}, where w_m is the BMU corresponding to the training pattern vector b_m.
Fig. 1. The training pattern vector (b_m) consists of wavelet filtered images, FLAIR and T2 weighted images. The BMU (w_m) of the training pattern vector (b_m) is shown
in squared shade on the self-organizing map with hexagonal grid. The first and second neighborhood units of w_m are shown in black and gray shades, respectively.
The weights of the individual units (w_11, w_1M, w_L1, w_LM) can also be seen on the SOM map.


Then, a class label is assigned to each element in the set W_bmu to form a class label set C = {c_1,
c_2, ..., c_m, ..., c_n}, where c_m is the class associated with the BMU
w_m and the pattern vector b_m.
Subsequently, W_bmu and C have been utilized to determine the
class index C_ij of every unit in the SOM map as

C_{ij} = \begin{cases} c_m & \text{if } w_{ij} = w_m \in W_{\mathrm{bmu}} \\ 0 & \text{otherwise} \end{cases} \quad (9)

where w_ij is the weight of the (i,j)th neural unit.
2.4.3. SOM visualization
An initial idea of the number of clusters in the SOM, as well as
their spatial relationships, is usually acquired by visual inspection of the map. The most widely used methods for visualizing
the cluster structure of the SOM are distance matrix techniques,
especially the unified distance matrix (U-matrix) [28,29]. In our
study, the hexagonal SOM structure has been utilized and hence
U-matrix has been defined on hexagonal planar map space. The
U-matrix provides the values of U-heights which are the distances between weight vectors of neighboring map units on
SOM map. The U-height value uh(w_ij) associated with w_ij is
calculated as follows:

uh(w_{ij}) = \sum_{(k,l) \in N(w_{ij})} \mathrm{distance}(w_{ij}, w_{kl})

where N(w_ij) is the set of immediate neighbors of w_ij on the SOM


map. A single U-height value represents local distance structure.
U-matrix is usually visualized using gray shade. The U-matrix
has become the standard tool for the display of the distance
structures of SOM.
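A small sketch of the U-height computation is given below, under the assumption of a rectangular grid with 4-connected neighbors (the study used a hexagonal grid, so the neighbor set differs).

```python
import numpy as np

def u_heights(W):
    """U-height of each unit: summed weight-space distance to its immediate
    (4-connected) neighbours; shown as a gray-scale image it gives the U-matrix."""
    L, M, _ = W.shape
    U = np.zeros((L, M))
    for i in range(L):
        for j in range(M):
            nbrs = [(i + di, j + dj)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < L and 0 <= j + dj < M]
            U[i, j] = sum(np.linalg.norm(W[i, j] - W[k, l]) for k, l in nbrs)
    return U
```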
2.4.4. SOM segmentation
Fig. 2 shows an overview of the segmentation method based on
the self-organizing map implemented in this study for differentiating pathological regions of the tumor. The segmentation has been
carried out by the fully trained SOM. Initially, a testing pattern vector
(b′_m) consisting of selectively reconstructed wavelet coefficients
of the ADC image and pixel values of the ADC image, FLAIR and T2
weighted images has been organized. Subsequently, the testing
pattern vectors are fed into the trained self-organizing map to
identify their respective best matching units (w′_ij).
The class (C_feat) to which b′_m belongs is determined by the nature
of w′_ij. If w′_ij ∈ W_bmu, the class label of the class index
C_ij has been considered as the class of b′_m. If w′_ij ∉ W_bmu, then
C_feat has been determined from the nature of the neighborhood
units around w′_ij on the SOM map. Thus, the class (C_feat) of b′_m is
determined as

C_{\mathrm{feat}} = \begin{cases} C_{ij} & \text{if } w'_{ij} \in W_{\mathrm{bmu}} \\ \max(\Lambda) & \text{otherwise} \end{cases} \quad (10)

where Λ = {λ_1, λ_2, ..., λ_r, ..., λ_h} is the set of probable
class labels of b′_m. Here h represents the total number of
classes. In our study, there are seven classes and they are low

Fig. 2. Overview of the implementation of SOM segmentation algorithm.

grade tumor, edema, cystic tumor, normal tissue, CSF, necrosis and high grade tumor. Hence, the components of Λ are
{λ_low grade, λ_edema, λ_cyst, λ_nt, λ_csf, λ_nec, λ_high grade}. The components of Λ represent the likelihood of b′_m belonging to each
class; for example, λ_r is the component representing the likelihood of b′_m belonging to the rth class. These components have been
determined as

\lambda_{r} = \sum_{s=1}^{m} \sum_{t=1}^{n_{s}} \eta_{st}\, \nu_{rt}, \qquad 1 \le r \le h \quad (11)

Here m represents the neighborhood distance from w′_ij and is
generally taken as five. For each s, t ranges from 1 to n_s, where n_s
is the number of units in the sth neighborhood ring of w′_ij. η_st is the number
of hits experienced by the tth unit of the sth neighborhood ring of w′_ij. The hit count is
the number of times that particular unit on the SOM map
has been selected as a BMU during the determination of C_ij after
the training phase. The indicator ν_rt determines whether the unit
t belongs to the class r or not: if the unit t on the SOM map has
been identified as a BMU of class r during the training phase,
ν_rt is one, otherwise zero. Thus, the probable class label


Fig. 3. The registered images of high grade tumors: (a, d and g) apparent diffusion coefficient (ADC) images; (b, e and h) T2 weighted images; (c, f and i) FLAIR
images.

set Λ indicates the nature of the neighborhood units of the BMU of
the testing pattern vector (b′_m). Finally, the class represented by
the element λ_l = max(Λ) is selected as the class of the
testing pattern vector (b′_m).
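The decision rule of Eqs. (10) and (11) can be sketched as follows; `hits` and `unit_class` are assumed to be L × M arrays holding, for every unit, the training hit count η and the class index C_ij (0 for unlabelled units), `grid` is the array of unit coordinates from the earlier sketch, and Chebyshev rings on a square grid stand in for the hexagonal neighborhood rings of the paper.

```python
import numpy as np

def classify_pattern(W, grid, hits, unit_class, x, max_ring=5, n_classes=7):
    """Return the class of a test pattern x: the BMU's own label if the BMU was
    hit during training (Eq. (10), first case); otherwise a hit-weighted vote
    over labelled units within max_ring neighbourhood rings (Eq. (11))."""
    dist = np.linalg.norm(W - x, axis=-1)
    b = np.unravel_index(np.argmin(dist), dist.shape)
    if unit_class[b] > 0:                              # BMU belongs to W_bmu
        return int(unit_class[b])
    ring = np.abs(grid - grid[b]).max(axis=-1)         # neighbourhood ring index
    near = (ring >= 1) & (ring <= max_ring) & (unit_class > 0)
    votes = [hits[near & (unit_class == c)].sum() for c in range(1, n_classes + 1)]
    return int(np.argmax(votes)) + 1                   # class with the largest lambda_r
```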
3. Results and discussions
To quantitatively evaluate the segmentation results, several
measures based on Receiver Operating Characteristic (ROC) analysis [30]
and on the similarity of segmented regions have been applied. Let GT denote the segmented volume provided as ground truth, that is, the manually segmented regions,
GT^c its complement and Seg the segmented region obtained
by the self-organizing map approach. The considered measures, inspired by ROC analysis, are:
The True Positive Fraction (TPF), which gives a measure
of the sensitivity of the method, corresponding to the probability of detection:

\mathrm{TPF} = \text{sensitivity} = \frac{|Seg \cap GT|}{|GT|}

Fig. 4. The registered images of low grade tumor: (a) apparent diffusion coefficient (ADC) image; (b) T2 weighted image; (c) FLAIR image.


The False Positive Fraction (FPF), which is related to the
probability of false alarm and gives a measure of the specificity
(specificity = 1 − FPF):

\mathrm{FPF} = \frac{|Seg \cap GT^{c}|}{|GT^{c}|}
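Both fractions are straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def tpf_fpf(seg, gt):
    """True positive fraction (sensitivity) and false positive fraction
    computed from binary segmentation and ground-truth masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tpf = (seg & gt).sum() / gt.sum()
    fpf = (seg & ~gt).sum() / (~gt).sum()
    return tpf, fpf            # specificity = 1 - fpf
```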

The proposed method has been validated on the MR images
of 10 patients (6 high grade + 4 low grade) who were diagnosed
as having brain tumors. The original images of both high and
low grade brain tumors are registered to have pixel-to-pixel
correspondence. This has been achieved by bringing all the images
to the same field of view. Figs. 3 and 4 show the registered
images of both high and low grade brain tumors. The cranium
has been subsequently removed by the method described in
Section 2.2. Fig. 5 shows the images after the removal of the
cranium.
In this study, the Daubechies db2 wavelet was used as
the basis for the wavelet decomposition. The method proposed in this paper, like most other wavelet-based methods,
does not use the wavelet decomposition explicitly. Rather, it
directly manipulates the two-dimensional wavelet coefficients.
The manipulation was carried out by performing reconstruction

Fig. 5. The images after removing the cranium. High grade tumors: (a, d and g) apparent diffusion coefficient (ADC) images; (b, e and h) T2 weighted images; (c, f
and i) FLAIR images. Low grade tumor: (j) apparent diffusion coefficient (ADC) image; (k) T2 weighted image; (l) FLAIR image.


with only one set of sub-band coefficients, with all other
coefficients set to zero. This approach resulted in four
reconstructed images, each containing the information of one set of
wavelet coefficients. Fig. 6 shows the selectively reconstructed
wavelet images of high grade and low grade tumors.

The training pattern vectors comprising the above-mentioned reconstructed wavelet images along with the ADC, FLAIR
and T2 weighted images have been used to train the self-organizing map. After the removal of the cranium, the images
with brain tumors have been assumed to contain seven different

Fig. 6. Selectively reconstructed wavelet images. High grade tumor: (a) diagonal; (b) horizontal; (c) vertical; (d) low resolution. Low grade tumor: (e) diagonal; (f)
horizontal; (g) vertical; (h) low resolution.


Fig. 7. U-matrix visualization of SOM cluster results.

Fig. 8. Illustration of best matching units of training pattern vectors. Light red: low grade tumor, dark red: high grade tumor, green: cystic tumor, violet: edema, dark
blue: necrosis, yellow: normal tissue (white + gray matter), light blue: CSF. (For interpretation of the references to colour in this figure legend, the reader is referred
to the web version of the article.)

pathology classes: high grade tumor, low grade tumor,
cystic region of the tumor, necrosis, edema, CSF and normal
tissue (white + gray matter). Thus, the training pattern vectors of
those seven classes have been chosen carefully from MR images
of different patients. In total, 700 such pattern vectors (100 per
class) have been collected.
As with any SOM, the initial weights of the network are critical; in this study, small random values between 0 and 1
have been set as the initial weight values. The map is a two-dimensional hexagonal grid with a 20 × 20 neuron matrix, i.e.,
400 neurons. Table 1 summarizes
the parameters used in this study. The number of training steps
was 10,000. During the training, the function defined in Eq.

Table 1
Parameters and their values used in this study

SOM parameters              Values
SOM map dimension           20 × 20
Training steps              10,000
Initial α(t)                0.1
Final α(t)                  0.01
Weight vector dimension     7
Pattern vector dimension    7
Training pattern size       700


(8) has been utilized as the neighborhood function h_{b,ij}(t) centered on the BMU. The value of σ(t) has been chosen such that
the neighborhood function covers the entire SOM
map in the initial 2000 iterations of the training and then slowly
narrows down to the BMU as training progresses. The
initial value of the learning rate α(t) has been chosen as 0.1. As the
training progressed, the value of α(t) has been annealed to 0.01.
The training is repeated several times using the same number
of training steps to obtain better results, as explained in Section
2.4.2. After achieving the best clusters in the training, the training pattern vectors have been presented once, in a cyclic way, to the
trained SOM to evaluate the final BMU set (W_bmu), the class label
set (C), the class index (C_ij) and the number of hits (η_st) encountered
by each best matching unit.
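The annealing schedules described above could, for instance, be realized as follows; the exact functional forms are not stated in the text and are assumptions.

```python
import numpy as np

steps = 10_000
alpha = np.linspace(0.1, 0.01, steps)        # learning rate annealed from 0.1 to 0.01
# sigma starts wide enough to cover the whole 20 x 20 map for roughly the first
# 2000 iterations, then shrinks towards the BMU; exponential decay is one choice.
sigma = np.maximum(20.0 * np.exp(-np.arange(steps) / 2000.0), 1.0)
```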
The U-matrix visualization of the SOM cluster results is
shown in Fig. 7. As explained in Section 2.4.3, the U-matrix
visualizes the distances between neighboring units on the SOM map,
and thus shows the cluster structure of the map. The distance
between neighboring units on the SOM map varied from
0.00586 to 0.27380. Since the U-matrix visualizes these distances through gray scale maps, high values of the U-matrix
(larger distances) indicate cluster borders (white regions), while uniform areas of low values (smaller distances) indicate the clusters
themselves (gray and black regions). The patterns of classes
such as cystic tumor, CSF, normal tissue and necrosis have formed
strong clusters and the inter-cluster distances between them are
found to be high. In contrast, high grade tumor, low
grade tumor and edema formed clusters with short inter-cluster
distances. Fig. 8 illustrates the best matching units of the training
pattern vectors on the SOM hexagonal grid. The units which
have not been hit by any training pattern vector, and are therefore not
part of W_bmu, are not shown, for better visualization of the BMUs

Fig. 9. Number of hits of training pattern vector on fully trained SOM: (a) low grade tumor; (b) cystic part of tumor; (c) necrosis; (d) normal tissue (white + gray
matter); (e) CSF; (f) edema; (g) high grade tumor.


Fig. 10. The segmentation results of SOM: (a–c) high grade tumor; (d) low grade tumor. Dark red: high grade tumor, light red: low grade tumor, light green: cystic
part of tumor, violet: edema, blue: necrosis, dark green: normal tissue (white + gray matter), white: CSF. (For interpretation of the references to colour in this figure
legend, the reader is referred to the web version of the article.)

of the training pattern vectors. The BMUs of classes such as cystic tumor, CSF, normal tissue and necrosis formed distinct
isolated clusters. On the other hand, the BMUs of classes
such as high grade tumor, edema and low grade tumor formed
mixed clusters. Fig. 9 shows the number of hits encountered by each BMU on the fully trained SOM. The number of hits
of the BMUs of every class is shown separately with a
color code varying from yellow to red, where red represents
the maximum number of hits and yellow represents the minimum number (one) of hits. The units which are not part of the
BMUs of any class are shown as red circles. As is clearly
evident in these figures, the BMUs of cystic tumor, low
grade tumor, high grade tumor and normal tissue form well-defined clouds on the SOM map, and the hits were found to
concentrate around certain specific BMUs within their clouds. On the other hand, the hits
experienced by the BMUs of edema, CSF and necrosis were
distributed fairly evenly within their BMU clouds on the SOM
map.

Fig. 10 illustrates the segmentation results achieved by the
method proposed in Section 2.4.4. The normal tissue and CSF
were assigned dark green and white, respectively, for
better visual separation from the other pathological regions. To validate
these results, the sensitivity and the specificity of the SOM were
determined against the manually segmented regions. Two experienced MR specialists segmented the images manually, and these segmentations were taken
as the ground truth. Table 2 summarizes the sensitivity and specificity values of the proposed SOM method. The overall sensitivity
and specificity of the proposed method have been observed to be
0.86 and 0.93, respectively. The tumors (low grade, high grade,
cystic) have been observed to have higher sensitivity and specificity in comparison with the other pathological regions. The overall segmentation of normal tissue and CSF has been found to have
good sensitivity and specificity values; however, the false prediction of some normal tissue as CSF in the boundary regions
has also been seen in the segmentation images. Necrosis has
been identified to have the lowest sensitivity values due to its frequent
misclassification as CSF.

Table 2
Sensitivity (true positive fraction) and specificity (1 − false positive fraction) values of the self-organizing map

Patterns            Sensitivity    Specificity
Low grade tumor     0.88           0.98
Cystic tumor        0.87           0.95
Necrosis            0.79           0.94
Normal tissue       0.83           0.89
CSF                 0.82           0.93
Edema               0.94           0.91
High grade tumor    0.87           0.93

4. Conclusion
The self-organizing map is an unsupervised competitive neural
network that uses a neighborhood interaction set to approximate lateral neural interaction and discovers the topological
structure hidden in the pattern vectors for visual display in one- or
two-dimensional space. The results of our study show that
self-organizing maps might be used to segment tumor, necrosis, cyst, edema and normal tissue and to grade the tumors
simultaneously on ADC images and their wavelet filtered images
along with FLAIR and T2 weighted images. However, the sample size of our study is very small, and a larger sample size is needed
to evaluate the full strength of the proposed method. Since ADC
values have been correlated with the degree of cellularity, the
segmentation and grading of tumors based on the ADC images
and their wavelet images may depend strongly on the
cellularity of the region. Further studies are being carried out to evaluate the usefulness of perfusion weighted images
[31,32] along with ADC maps to segment and grade
the tumors in magnetic resonance images.
References
[1] Fletcher-Heath LM, Hall LO, Goldgof DB, Murtagh FR. Automatic segmentation of non-enhancing brain tumors in magnetic resonance images. Artif Intell Med 2001;21:43–63.
[2] Clark MC, Hall LO, Goldgof DB, Velthuizen R, Murtagh FR, Silbiger MS. Automatic tumor segmentation using knowledge-based techniques. IEEE Trans Med Imag 1998;17:187–201.
[3] Clark MC, Hall LO, Goldgof DB, Clark LP, Silbiger MS, Li C. MRI segmentation using fuzzy clustering techniques: integrating knowledge. IEEE Eng Med Biol Mag 1994;13:730–42.
[4] Liu J, Udupa JK, Odhner D, Hackney D, Moonis G. A system for brain tumor volume estimation via MR imaging and fuzzy connectedness. Comput Med Imaging Graphics 2005;29:21–34.
[5] Nattkemper TW, Wismüller A. Tumor feature visualization with unsupervised learning. Med Image Anal 2005;9(4):344–51.
[6] Reddick WE, Glass JO, Cook EN, Elkin TD, Deaton RJ. Automated segmentation and classification of multispectral magnetic resonance images of brain using artificial neural networks. IEEE Trans Med Imaging 1997;16(6):911–8.
[7] Koh J, Suk MS, Bhandarkar SM. A multilayer self-organizing feature map for range image segmentation. Neural Netw 1995;8(1):67–86.
[8] Kung SY, Taur JS. Decision-based neural networks with signal/image classification applications. IEEE Trans Neural Netw 1995;6(1):170–81.
[9] Vesanto J, Alhoniemi E. Clustering of the self-organizing map. IEEE Trans Neural Netw 2000;11(5):586–600.
[10] Maier SE, Bogner P, Bajzik G, Mamata H, Mamata Y, Repa I, et al. Normal brain and brain tumor: multicomponent apparent diffusion coefficient line scan imaging. Radiology 2001;219:842–9.
[11] Hasso AN, Kortman KE, Bradley WG. Supratentorial neoplasms. In: Stark DD, Bradley WG, editors. Magnetic resonance imaging, vol. 1, 2nd ed. St. Louis, MO: Mosby-Year Book; 1992. p. 770–817.
[12] Hicks RJ. Supratentorial brain tumors. In: Edelman RR, Hesselink JR, Zlatkin MB, editors. Clinical magnetic resonance imaging, vol. 1, 2nd ed. Philadelphia, PA: Saunders; 1996. p. 533–56.
[13] Bulakbasi N, Kocaoglu M, Ors F, Tayfun C, Ucoz T. Combination of single-voxel proton MR spectroscopy and apparent diffusion coefficient calculation in the evaluation of common brain tumors. AJNR Am J Neuroradiol 2003;24:225–33.
[14] Yamasaki F, Kurisu K, Satoh K, Arita K, Sugiyama K, Ohtaki M, et al. Apparent diffusion coefficient of human brain tumors at MR imaging. Radiology 2005;235:985–91.
[15] Calli C, Kitis O, Yunten N, Yurtseven T, Islekel S, Akalin T. Perfusion and diffusion MR imaging in enhancing malignant cerebral tumors. Eur J Radiol 2006;58(3):394–403.
[16] Chaplot S, Patnaik LM, Jagannathan NR. Classification of magnetic resonance brain images using wavelets as input to support vector machine and neural network. Biomed Signal Process Contr 2006;1(1):86–92.
[17] van Ginneken B, Stegmann MB, Loog M. Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database. Med Image Anal 2006;10(1):19–40.
[18] Zhou Z, Ruan Z. Multicontext wavelet-based thresholding segmentation of brain tissues in magnetic resonance images. Magn Reson Imag 2007;25(3):381–5.
[19] Tsai DM, Chiang CH. Automatic band selection for wavelet reconstruction in the application of defect detection. Image Vision Comput 2003;21(5):413–31.
[20] Tsai DM, Hsiao B. Automatic surface inspection using wavelet reconstruction. Pattern Recogn 2001;34(6):1285–305.
[21] Nyul LG, Udupa JK. On standardizing the MR image intensity scale. Magn Reson Med 1999;42:1072–81.
[22] Daubechies I. Ten lectures on wavelets. Philadelphia, PA: SIAM; 1992.
[23] Mallat S. A wavelet tour of signal processing. New York: Academic Press; 1998.
[24] Choi H, Baraniuk RG. Image segmentation using wavelet-domain classification. Rice University preprint; 1999.
[25] Kohonen T, Kaski S, Lagus K, Salojärvi J, Paatero V, Saarela A. Self-organization of a massive document collection. IEEE Trans Neural Netw 2000;11:574–85.
[26] Kohonen T. Self-organizing maps. 3rd ed. Berlin: Springer; 2001.
[27] Cottrell M, Fort JC, Pagès G. Theoretical aspects of the SOM algorithm. Neurocomputing 1998;21:119–38.
[28] Ultsch A. Knowledge extraction from self-organizing neural networks. In: Information and classification. Berlin: Springer; 1993.
[29] Ultsch A. Maps for the visualization of high-dimensional data spaces. In: Proceedings of the workshop on self-organizing maps. 2003. p. 225–30.
[30] Obuchowski NA. Receiver operating characteristic curves and their use in radiology. Radiology 2003;229:3–8.
[31] Mills SJ, Patankar TA, Haroon HA, Baleriaux D, Swindell R, Jackson A. Do cerebral blood volume and contrast transfer coefficient predict prognosis in human glioma? Am J Neuroradiol 2006;27(4):853–8.
[32] Wintermark M, Sesay M, Barbier E, Borbely K, Dillon WP, Eastwood JD, et al. Comparative overview of brain perfusion imaging techniques. Stroke 2005;36(9):e83–99.
C. Vijayakumar was born in Coimbatore, India, in 1979. He received his Master's degree in Physics from Bharathiyar University, Coimbatore, India, in 2001.
In 2002, he joined the Defence Research and Development Organisation (DRDO)
as a Scientist and is currently working in the Department of Radiodiagnosis and Imaging, Armed Forces Medical College, Pune, India. His research interests are
artificial neural networks, image segmentation, image vision and multiresolution
analysis.
Gharpure Damayanti received the MSc and PhD degrees from Pune University in 1984 and 1992, respectively. She joined the Department of Electronic
Science, University of Pune, India, in 1988. Since then she has worked on a
number of research and consultancy projects related to machine vision
applications, the design of an embedded E-nose, the hardware implementation of an automatic trajectory tracking system, etc. Her current research interests include


applications of neural networks for pattern recognition, image analysis and


segmentation, odor analysis and embedded system design.
Rochan Pant was born in Dhanbad, India, in 1966. He completed his Bachelor's
degree in Medicine (MBBS) in 1988 at the Armed Forces Medical College,
Pune, India. He was commissioned into the Indian Navy. He completed his
postgraduate studies and received a Master's degree (MD) in Radiodiagnosis in
1995 from the Armed Forces Medical College, Pune, India. He completed a 2-year fellowship in Interventional Radiology at the Bombay Institute of Medical
Sciences, Mumbai, India. He is at present Associate Professor in the Department
of Radiodiagnosis and Imaging, Armed Forces Medical College, Pune, India.
His research interests are neurovascular interventions, cerebral perfusion studies,
and renovascular disease.

C.M. Sreedhar was born in Coimbatore, India, in 1965. He completed his Bachelor's degree in Medicine (MBBS) in 1986 at the Armed Forces Medical
College, Pune, India. He was commissioned into the Indian Army. He completed
his postgraduate studies and received a Master's degree (MD) in Radiodiagnosis in 1993 from the Armed Forces Medical College, Pune, India. He received
a year's training in MRI and CT techniques at Calcutta, India. He is at present
Associate Professor in the Department of Radiodiagnosis and Imaging, Armed
Forces Medical College, Pune, India. His research interests are imaging of cerebral tumours, diffusion and perfusion imaging of the brain, imaging of CNS
infections in AIDS, MRI in congenital heart disease, and peripheral vascular
imaging.
