
2017 8th International Conference of Information and Communication Technology for Embedded Systems (IC-ICTES)

Optic Disk Segmentation in Retinal Images Using Active Contour Model based on Extended Feature Projection
Tin Tin Khaing1 and Pakinee Aimmanee2
Sirindhorn International Institute of Technology
Thammasat University
Pathum Thani 12121, Thailand
1tin.tin@student.siit.tu.ac.th, 2pakinee@siit.tu.ac.th

Abstract—Accurate localization and segmentation of an optic disk (OD) is an important problem in the analysis of abnormal conditions such as optic disk shrinking/swelling, pale optic disk, and glaucoma. Hence, this paper proposes an automated, fast, and accurate OD localization and segmentation technique. In this work, OD localization is performed using the extended feature projection (EFP) method based on retinal vessel orientation and average intensity variance. Multiple OD candidate locations, obtained from the OD localization technique (EFP), are used as the initialization points of the active contour model to detect the OD boundaries. Next, we use a decision tree based on OD features such as the area of vessels, the brightness, and the entropy to select the final OD candidate. The proposed technique has been tested on the STARE dataset to evaluate comparative studies on the localization and segmentation of the OD in retinal images. The accuracy of the OD localization is 90.12% with an average computing time of 13 seconds per image. The performance of the OD segmentation in terms of sensitivity is 74.62% and positive predictive value is 60.22%, with an average computing time of 20 seconds per image. The proposed approach improves the accuracy of the conventional feature projection method by 12.34% and runs as quickly as the conventional one.

Keywords—OD, OD Localization, OD Segmentation, Extended Feature Projection Method (EFP), Active Contour

I. INTRODUCTION

The optic nerve head or optic disk (OD) carries more than one million neurons from the eyes towards the brain. Detection of an optic disk in retinal fundus images is important as it can be used to differentiate between the symptoms of diseases that are similar to one another in terms of contrast, color, and brightness. The OD is a significant landmark that appears in the investigation of a human retina. It usually presents as a bright yellowish region, which is circular (oval), is interfered with by the main retinal vessels, and occupies about one sixth or one seventh of the entire retinal image. Any change in the OD appearance is a sign of abnormalities or eye diseases such as diabetic retinopathy, glaucoma, etc.

Even though the OD characteristics are quite unique, automatic OD localization and segmentation is quite challenging. Therefore, many algorithms have been proposed to obtain the location and the edge of the OD in retinal image processing. For segmenting the OD boundary, OD localization is an elementary step because it can reduce the processing time and help improve the success rate and the accuracy. Most OD localization approaches rely heavily on OD features such as color, intensity, size, brightness, gray-level variation (entropy), shape, and vessel information. Lalonde et al. [1] performed a pyramidal decomposition using the Haar discrete wavelet transform to localize the OD and Hausdorff-based template matching to segment the OD. Similarly, GeethaRamani et al. [2] and Lu [3] localized the OD by applying a template matching method, and the localized OD is segmented through a morphological procedure. Nugroho et al. [4] localized the OD using a circular average filter and segmented the OD area by combining mathematical morphology and the active contour technique. The active contour model is used in [5] to detect the OD boundary in retinal fundus images, and a random forest classifier is applied to utilize the OD cup edge pixels in [6]. Singh et al. [7] and Omid et al. [8] segmented the OD edge using a region growing technique. Moreover, the OD boundary is detected using feature-based detection combined with a scale space algorithm in [9], and by morphological operations combined with edge detection techniques based on the circular Hough transform in [10]. These approaches suffer from low accuracy when some OD features are missing from the images.

The fact that the OD is the place where all retinal vessels converge is used in some research works [11]-[15] to localize the OD. The following are a few examples of works that employ this fact. Mahfouz et al. [11] and Cao et al. [12] considered the problem as one 2-D localization problem that is reduced into two 1-D localization problems. The OD contains many vertical vessels and few horizontal vessels, and has a high variance in intensity because of the dark blood vessels within it. Thus, Mahfouz et al. [11] and Cao et al. [12] used the difference between the number of vertical vessels and the number of horizontal vessels as the key feature to detect the X-location of the OD. This fact makes the projection method very fast while achieving very high accuracy. However, when the vessel structure is unbalanced, having many vertical subvessels, and/or the OD is not bright due to some eye abnormalities, the approach may not work well. Muangnak et al. [16] proposed an OD localization technique, namely the Vessel Transform (VT), based on vessel clustering and a vessel transform. For each focused pixel, the VT algorithm measures a score, the so-called VT score, which is the sum of the shortest distances from the focused pixel to the vessel clusters. Pixels that obtain the smallest scores are assumed to be the location of the OD. To segment the OD, the VT scores are used together with the other OD features in the OD selection process of the scale space algorithm (SSVT). The SSVT method works very well even when the vessel structure is unbalanced; however, it suffers very much from its slow running time. Another disadvantage of SSVT is that it requires at least 3 vessel clusters.

In this paper, a fast and accurate OD segmentation technique is proposed based on the extended feature projection and active contour approach. The proposed technique works fast and very well even when the vessel structure is unbalanced and/or the OD is pale. The rest of the paper is organized as follows. Section II describes the methodology of the proposed OD localization and segmentation process and Section III shows the experimental results. Finally, the discussion is given in Section IV and the conclusion is drawn in Section V.

II. METHODOLOGY

This section describes how we extend the feature projection method of [11] into the extended feature projection (EFP) method to improve the accuracy of localization of the OD, and how we integrate it with the active contour technique to obtain the boundaries of the OD.

The following is the EFP algorithm that produces many candidates of the OD locations.

Input:
IMG     The input gray scale image
VSSL    The vessel image of IMG
HVIMG   The horizontal vessel image of VSSL
VVIMG   The vertical vessel image of VSSL
H       The height of IMG
W       The width of IMG
T       The thickness of the vessel
D       The approximate diameter of the OD
Sx      A thin sliding window of dimension H × 2T centered at x
Sy      A square sliding window of dimension D × D centered at y
h       Increment step length
k       The number of required candidates in each dimension
F1      A collection of ordered pairs for X-locations
F2      A collection of ordered pairs for Y-locations
ComputeArgMax(P, k)
        A function that takes a collection of ordered pairs P and an integer k, and returns the set of first members of the k ordered pairs in P whose rear members hold the k largest values among all the rear members
ComputeIntensityVariance(S, IMG)
        A function that takes a domain S and an image IMG as inputs and returns the intensity variance of IMG within the input domain

Output:
C       A collection of points containing the candidates of the OD locations

Algorithm: Extended Feature Projection (EFP)

Step 1: Detecting the candidates of X-locations
    F1 = { }
    for x = 0 : W
        SV = sum of the number of non-zero pixels of VVIMG in Sx
        SH = sum of the number of non-zero pixels of HVIMG in Sx
        Diff = SV - SH
        F1 = F1 ∪ {(x, Diff)}
        x = x + h
    end
    X = ComputeArgMax(F1, k)

Step 2: Detecting the candidates of Y-locations at each x in X
    F2 = { }
    C = { }
    for i = 1 : k
        for y = 0 : H
            MIV = ComputeIntensityVariance(Sy, IMG)
            F2 = F2 ∪ {(y, MIV)}
            y = y + h
        end
        Y = ComputeArgMax(F2, k)
        for j = 1 : k
            C = C ∪ {(X(j), Y(j))}
        end
    end

Fig. 1 shows the input images and describes how the sliding windows are used to determine the possible OD locations. Fig. 2 shows the graph signals of the feature projection onto the x-axis and the y-axis that localize the OD candidates. At each X-location candidate, the EFP algorithm scans along the y-axis and finds the y-locations with the largest intensity variance, each of which yields one possible OD candidate.
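To make the two projection steps above concrete, the following is a minimal NumPy sketch of the EFP candidate search. It is only a sketch under our own assumptions: the function name efp_candidates, the default parameter values, and the choice of keeping a single Y-location per candidate column are illustrative and are not taken from the paper's Matlab implementation.

```python
# Minimal sketch of the EFP candidate search (illustrative, not the authors'
# code): vvimg/hvimg are binary vertical/horizontal vessel maps and img is
# the gray-scale retina, all 2-D NumPy arrays of the same shape.
import numpy as np

def efp_candidates(img, vvimg, hvimg, T=5, D=80, h=4, k=3):
    """Return k (x, y) OD candidate locations."""
    H, W = img.shape

    # Step 1: score each x by the difference between the number of vertical-
    # and horizontal-vessel pixels inside a thin H x 2T window centered at x.
    xs = np.arange(0, W, h)
    diff = []
    for x in xs:
        x0, x1 = max(0, x - T), min(W, x + T)
        diff.append(np.count_nonzero(vvimg[:, x0:x1]) -
                    np.count_nonzero(hvimg[:, x0:x1]))
    x_cands = xs[np.argsort(diff)[::-1][:k]]       # k best X-locations

    # Step 2: along each candidate column, score each y by the intensity
    # variance inside a D x D window centered at (x, y). The paper keeps up
    # to k Y-locations; this sketch keeps only the best one per column.
    candidates = []
    for x in x_cands:
        ys = np.arange(0, H, h)
        miv = []
        for y in ys:
            y0, y1 = max(0, y - D // 2), min(H, y + D // 2)
            x0, x1 = max(0, x - D // 2), min(W, x + D // 2)
            miv.append(img[y0:y1, x0:x1].var())
        candidates.append((int(x), int(ys[int(np.argmax(miv))])))
    return candidates
```

In this sketch, the argsort/argmax calls play the role of ComputeArgMax, and the windowed .var() call plays the role of ComputeIntensityVariance.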
Fig. 1: (a) Input gray scale image (IMG); (b) Vessel image (VSSL); (c) Vertical vessel image (VVIMG); (d) Horizontal vessel image (HVIMG); (e) A thin sliding window over the vertical vessel image centered at each x; (f) A square sliding window centered at each y.

Fig. 2: (a) Two Y-locations corresponding to the 1st X-location (Peak-1); (b) Retinal image showing the three OD candidate locations; (c) One Y-location corresponding to the 2nd X-location (Peak-2); (d) X-location signal.

After obtaining a collection of points that are candidates of the OD from the proposed EFP algorithm, each candidate location is used as an initialization point for the active contour model to obtain the candidate boundaries of the OD. The active contour model is flexible and powerful and can segment the OD accurately and quickly [17], [18]. For convenience, we call the active contour model based on EFP "ACEFP". Fig. 3 shows how ACEFP segments each OD candidate from the EFP algorithm.

Fig. 3: Example of OD outputs from the ACEFP model based on EFP initialization points.
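As an illustration only (not the authors' implementation), the step of seeding a region-based active contour at each EFP candidate might be sketched as follows, using scikit-image's morphological Chan-Vese routine as a stand-in for the contour model of [17], [18]. The initial disk radius and the iteration count are assumed values.

```python
# Sketch: evolve one region-based active contour per EFP candidate point.
# morphological_chan_vese is used as a convenient stand-in for the active
# contour model of [17], [18]; radius and iteration count are assumptions.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_candidates(img, candidates, init_radius=30, iterations=150):
    """Return one binary OD mask per (x, y) candidate location."""
    H, W = img.shape
    rows, cols = np.ogrid[:H, :W]
    masks = []
    for x, y in candidates:
        # Small disk around the candidate point as the initial level set.
        init = ((rows - y) ** 2 + (cols - x) ** 2) <= init_radius ** 2
        level_set = morphological_chan_vese(img, iterations,
                                            init_level_set=init.astype(np.int8))
        masks.append(level_set.astype(bool))
    return masks
```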
Then, an automatic decision tree generator available from the Waikato Environment for Knowledge Analysis (WEKA) [19] is applied to the OD features. In this work, the area of vessels inside each OD candidate region, the entropy, and the brightness of each OD candidate region are collected as features. These features, together with a set of ground truths, are used to construct the decision tree for the OD selection process. Fig. 4 shows the resulting decision tree with the specified threshold values of the applied features.
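As a hedged illustration of this selection step: the paper derives its tree and thresholds with WEKA [19], whereas the sketch below uses scikit-learn only as a substitute, and the feature extraction helper, the function names, and the tree depth are our own assumptions.

```python
# Sketch of the candidate-selection step: three region features (vessel area,
# brightness, gray-level entropy) and a small decision tree. This mirrors the
# described procedure but is NOT the WEKA tree of Fig. 4.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def region_features(img, vessel_img, mask):
    """Vessel area, mean brightness and gray-level entropy inside one candidate mask."""
    region = img[mask]                                   # assumes 8-bit gray values
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([np.count_nonzero(vessel_img[mask]), region.mean(), entropy])

# X: one feature row per candidate region from the training images;
# y: 1 if that region overlaps the ground-truth OD, 0 otherwise (both assumed given).
def fit_od_selector(X, y):
    return DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
```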
III. EXPERIMENTAL RESULTS

The proposed method is evaluated in terms of the accuracy and the computing time for both the OD localization and the OD segmentation using the STARE dataset, which is a standard, publicly available dataset. It was implemented using Matlab (MathWorks, Inc.), and the localization and segmentation results shown in Tables I and II are obtained by running the developed Matlab code on a PC (Intel(R) Core(TM) i7-4500U CPU @ 1.8 GHz, 2.4 GHz). We consider that the OD is a candidate location if it is located within the ground truth contour. Table I presents the accuracy and the computing time of the OD localization performed by the EFP method, feature projection (FP) [12] and VT [16] on the STARE dataset. STARE is categorized into fair and poor groups. Images that contain a bright and obvious boundary of the OD are considered 'fair' and the rest 'poor'. There are 31 fair quality images and 50 poor quality images in the STARE dataset (81 images in total, 605 × 700 pixels).

Fig. 4: Decision tree for the STARE dataset using OD features (area, brightness and entropy).

The results from EFP are compared against the original feature projection (FP) method and the Vessel Transform (VT) method in Table I. In terms of accuracy, the EFP method considerably outperforms FP for both fair and poor groups without computation time overhead. However, VT is slightly better for poor images, although the computing time of VT is expected to be slower than that of EFP. The EFP method performs 100% on the fair group and 84% on the poor group, whereas FP performs 90.32% for fair and 70% for poor, and VT performs 96.77% for fair and 94% for poor. Fig. 5 shows examples of the OD localization results of EFP, VT and FP.

Fig. 5: Examples of the OD localization results of EFP, VT and FP; EFP - triangle, VT - rectangle, FP - circle, ground truth contour - black solid line.

Then, the OD segmentation performance is evaluated using two standard schemes: sensitivity and positive predictive value (PPV). In this work, ACEFP is compared against SSVT [16]. The sensitivity indicates the accuracy of the proposed method and is defined as the ratio of the number of pixels detected correctly as OD to the total number of pixels detected as OD. The PPV reveals the completeness of the obtained result and is the ratio of the number of pixels correctly detected as OD to the total number of OD pixels in the ground truth image. Table II shows the sensitivity and the PPV of the OD segmentation performed by ACEFP and SSVT.

The advantage of our OD segmentation method (ACEFP) over SSVT in terms of sensitivity is up to 8.15% for fair images and 32.91% for poor images. ACEFP outperforms SSVT in average sensitivity for both fair and poor images. However, ACEFP underperforms SSVT in average PPV by 16.12% for fair and by 12.18% for poor images. Fig. 6 shows the results of the OD segmentation of ACEFP and SSVT.

Fig. 6: Examples of the outputs of OD segmentation; green - ACEFP, blue - SSVT, red - ground truth.

IV. DISCUSSION

In some cases, the EFP algorithm gives incorrect OD location candidates in retinal images where the OD is very faint and the vessel structures are not clear enough. In such cases (see Fig. 7), ACEFP cannot segment the OD boundaries well and the sensitivity and PPV values decrease. The EFP algorithm requires clear vessel orientation to determine the correct X-locations and Y-locations. As future work, we will improve such cases to yield the correct OD candidates and accurate OD boundaries.
TABLE I: Accuracy and average computing time of OD localization

  Dataset   Image quality   Accuracy of OD localization (%)   Average time (seconds)
                            FP       VT       EFP             FP      VT      EFP
  STARE     fair            90.32    96.77    100             10      –       13
            poor            70.00    94.00    84.00

TABLE II: Average sensitivity, PPV and computing time of OD segmentation

  Dataset   Image quality   Average sensitivity (%)   Average PPV (%)   Average time (seconds)
                            SSVT     ACEFP             SSVT     ACEFP    SSVT     ACEFP
  STARE     fair            62.4     70.55             74.14    58.02    290      20
            poor            45.78    78.69             74.59    62.41
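For clarity, the pixel-wise scores reported in Table II can be restated in code form, following the definitions given in Section III above. This is our own sketch of the evaluation with boolean masks assumed; it is not the authors' evaluation script.

```python
# Pixel-wise scores as defined in Section III: sensitivity is the fraction of
# detected-OD pixels that are correct; PPV compares the correct pixels to the
# ground-truth OD area. Masks are assumed to be boolean arrays of equal shape.
import numpy as np

def od_segmentation_scores(pred_mask, gt_mask):
    tp = np.count_nonzero(pred_mask & gt_mask)     # pixels correctly detected as OD
    detected = np.count_nonzero(pred_mask)         # all pixels detected as OD
    gt_area = np.count_nonzero(gt_mask)            # OD pixels in the ground truth
    sensitivity = tp / detected if detected else 0.0
    ppv = tp / gt_area if gt_area else 0.0
    return sensitivity, ppv
```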

Fig. 7: Example of failed OD localization results from EFP.

V. CONCLUSION

An automated, fast, and accurate technique for OD localization and segmentation in retinal images is proposed. In this work, several OD features such as vessel orientation, average intensity variance, brightness, entropy, and vessel information are selected to determine the OD candidates. In addition, the OD localization approach is integrated with the active contour model to improve the OD segmentation quality. The results over the STARE dataset show that our proposed method is quite fast. The comparison of different OD localization methods and different OD segmentation methods on the STARE dataset is presented in Tables I and II, respectively. By comparison, our proposed hybrid method achieves a good success rate in a very short computing time compared to the other works.

ACKNOWLEDGMENT

This work is partially supported by the National Research Universities (NRU) and the Sirindhorn International Institute of Technology, Thammasat University, Thailand. The authors wish to thank Mr. Pongsate Tangseng, who greatly assisted this research.

REFERENCES

[1] M. Lalonde, M. Beaulieu and L. Gagnon, "Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching," IEEE Transactions on Medical Imaging, vol. 20, no. 11, pp. 1193-1200, Nov. 2001. doi: 10.1109/42.963823.
[2] R. GeethaRamani and C. Dhanapackiam, "Automatic localization and segmentation of Optic Disc in retinal fundus images through image processing techniques," 2014 International Conference on Recent Trends in Information Technology, Chennai, 2014, pp. 1-5. doi: 10.1109/ICRTIT.2014.6996090.
[3] S. Lu, "Accurate and Efficient Optic Disc Detection and Segmentation by a Circular Transformation," IEEE Transactions on Medical Imaging, vol. 30, no. 12, pp. 2126-2133, Dec. 2011. doi: 10.1109/TMI.2011.216426.
[4] H. A. Nugroho, L. Listyalina, N. A. Setiawan, S. Wibirama and D. A. Dharmawan, "Automated segmentation of optic disc area using mathematical morphology and active contour," 2015 International Conference on Computer, Control, Informatics and its Applications (IC3INA), Bandung, 2015, pp. 18-22. doi: 10.1109/IC3INA.2015.7377739.
[5] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," International Journal of Computer Vision, pp. 321-331, 1987.
[6] I. Fondón, J. F. Valverde, A. Sarmiento, Q. Abbas, S. Jiménez and P. Alemany, "Automatic optic cup segmentation algorithm for retinal fundus images based on random forest classifier," IEEE EUROCON 2015 - International Conference on Computer as a Tool (EUROCON), Salamanca, 2015, pp. 1-6. doi: 10.1109/EUROCON.2015.7313693.
[7] A. Singh, M. K. Dutta, M. Parthasarathi, R. Burget and K. Riha, "An efficient automatic method of Optic disc segmentation using region growing technique in retinal images," 2014 International Conference on Contemporary Computing and Informatics (IC3I), Mysore, 2014, pp. 480-484.
[8] S. Omid, J. Shanbehzadeh, Z. Ghassabi and S. S. Ostadzadeh, "Optic disc detection in high-resolution retinal fundus images by region growing," 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI), Shenyang, 2015, pp. 101-105. doi: 10.1109/BMEI.2015.7401481.
[9] C. Duanggate, B. Uyyanonvara, S. S. Makhanov, S. Barman and T. H. Williamson, "Parameter-free optic disc detection," Computerized Medical Imaging and Graphics, vol. 35, pp. 51-63, 2011.
[10] A. Aquino, M. E. Gegundez-Arias and D. Marin, "Detecting the Optic Disc Boundary in Digital Fundus Images Using Morphological, Edge Detection, and Feature Extraction Techniques," IEEE Transactions on Medical Imaging, vol. 29, no. 11, pp. 1860-1869, Nov. 2010. doi: 10.1109/TMI.2010.2053042.
[11] A. E. Mahfouz and A. S. Fahmy, "Fast Localization of the Optic Disk Using Projection of Image Features," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3285-3289, Dec. 2010.
[12] Q. Cao, J. Liu and Q. Zhao, "Fast Automatic Optic Disc Localization in Retinal Images," 2013 Seventh International Conference on Image and Graphics, Qingdao, 2013, pp. 827-831. doi: 10.1109/ICIG.2013.166.
[13] D. Zhang, Y. Yi, X. Shang and Y. Peng, "Optic disc localization by projection with vessel distribution and appearance characteristics," Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, 2012, pp. 3176-3179.
[14] A. M. Mendonça, S. António, L. Mendonça and A. Campilho, "Automatic localization of the optic disc by combining vascular and intensity information," Computerized Medical Imaging and Graphics, vol. 37, pp. 409-417, 2013.
[15] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images," British Journal of Ophthalmology, vol. 83, no. 8, pp. 902-910, 1999.
[16] N. Muangnak, P. Aimmanee, S. Makhanov and B. Uyyanonvara, "Vessel transform for automatic optic disk detection in retinal images," IET Image Processing, vol. 9, no. 9, pp. 743-750, 2015. doi: 10.1049/iet-ipr.2015.0030.
[17] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266-277, Feb. 2001. doi: 10.1109/83.902291.
[18] S. Lankton and A. Tannenbaum, "Localizing Region-Based Active Contours," IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2029-2039, Nov. 2008. doi: 10.1109/TIP.2008.2004611.
[19] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann and I. H. Witten, "The WEKA data mining software: an update," SIGKDD Explorations, vol. 11, no. 1, pp. 10-18, 2009.
[20] R. Bock, J. Meier, L. G. Nyúl, J. Hornegger, and G. Michelson, "Glaucoma risk index: Automated glaucoma detection from color fundus images," Medical Image Analysis, vol. 14, no. 3, pp. 471-481, 2010.
[21] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11, no. 285-296, pp. 23-27, 1975.
