
Detection of Microaneurysms in Retinal Images through Local Binary Patterns
A dissertation
submitted in partial fulfilment of the requirements
for the award of the degree
of
MASTER OF TECHNOLOGY
in
INSTRUMENTATION
by
RITIKA
(Roll No. 31507113)
Under the esteemed guidance of
Dr. ASHAVANI KUMAR
(Professor)

DEPARTMENT OF PHYSICS
NATIONAL INSTITUTE OF TECHNOLOGY
KURUKSHETRA-136119
SESSION 2015-2017
DEPARTMENT OF PHYSICS
NATIONAL INSTITUTE OF TECHNOLOGY
KURUKSHETRA – 136119

CANDIDATE DECLARATION
I hereby declare that the work presented in this dissertation entitled “Detection of
Microaneurysms in Retinal Images through Local Binary Patterns” towards partial
fulfillment of the requirements for the award of the Degree of M. Tech in
Instrumentation Engineering is an authentic record of my own work under the
supervision of Dr. Ashavani Kumar.

The work presented in this dissertation has not been submitted elsewhere for the award
of any other degree or in any other institute.

Date: (RITIKA)
Place: Kurukshetra Roll no: 31507113

CERTIFICATE
This is to certify that Ms. Ritika completed her M.Tech Dissertation under my
supervision and the above statement made by the student is correct to the best of my
knowledge.

Date: (Dr. Ashavani Kumar)


Supervisor
Place: Kurukshetra Department of Physics
National Institute of Technology
Kurukshetra-136119

ACKNOWLEDGEMENT

I acknowledge with great pleasure my deep sense of gratitude to my


supervisor Dr. ASHAVANI KUMAR, Professor, Department of Physics, NIT
Kurukshetra for his constant encouragement, endless support, motivation, guidance
and for sparking my interest in “Detection of Microaneurysms in Retinal Images
through Local Binary Patterns” during my masters and giving me the
opportunity to do my master’s dissertation on this important and interesting topic.
His professional knowledge, contagious enthusiasm, and faith in me were very
important and gave me the strength to conclude this work.

I would like to acknowledge my thanks to the entire faculty and supporting


staff of Department of Physics in general and particularly for their help, directly or
indirectly during my dissertation work. I would like to show my gratitude to all my
colleagues and friends for giving me stimulating suggestions and endless support. I
am also thankful to the volunteers who collaborated for the experimental
procedures implemented in this work.

Finally, I express my deep sense of reverence to my dearest parents, my


sister, and my friends for their unconditional support, patience and encouragement
through all these years. This work would not have been possible without their
support. Thank you one and all who directly or indirectly had an influence in this
work.

Place: Kurukshetra, (RITIKA)


Date: Roll No: 31507113
M. Tech (Instrumentation)

LIST OF FIGURES
FIGURE NO DESCRIPTION PAGE NO

Figure1. 1 Eye anatomy and retinal layers ............................................................................ 3


Figure1. 2 Electromagnetic spectrum ................................................................................... 3
Figure1. 3 Fundus Image with optic nerve bead.................................................................... 4
Figure1. 4 Schematic view of retina layers organization ...................................................... 5
Figure1. 5 Examples of retinal images; (a) healthy, (b) diabetic retinopathy, ...................... 6
Figure1. 6 Typical fundus images; (a) normal, (b) hemorrhages, and hard Exudates ........... 7
Figure1. 7 (c) soft exudates, (d) neovascularization .............................................................. 8
Figure1. 8 (e) microaneurysms (f) drusen. ........................................................................... 8
Figure1. 9 Normal vision and the same scene viewed by a person with diabetic retinopathy
(a) normal vision, (b) scene viewed by a person with diabetic retinopathy. ......................... 8
Figure2. 1 Circular Hough Transform ................................................................................. 16
Figure2. 2 Relationship of the camera angle and the area of the imaged retina. ................ 19
Figure2. 3 The seven standard 30-degree fields of view. .................................................... 19
Figure 3. 1 Illustration of the basic LBP operator. .............................................................. 23
Figure 3. 2 An example of new LBP operator computation ................................................ 23
Figure 3. 3 Feature extraction flowchart. ............................................................................. 29
Figure4. 1 Original Fundus Image ....................................................................................... 33
Figure4. 2 (a) Red channel Image (b) Green Channel Image (c) Blue Channel image ...... 34
Figure4. 3 (a) Red channel median filtered Image (b) Blue channel median filtered Image
............................................................................................................................................. 34
Figure4. 4 (a) Optic Disc Removal-Red Channel (b) Optic Disc Removal-Blue Channel
(c) Optic Disc Removal-Green Channel .............................................................................. 35
Figure4. 5 (a) Vessel Removed Image-Red Channel (b) Vessel Removed Image-Blue
Channel ................................................................................................................................ 36
Figure4. 6 (a) Linear binary pattern-red channel (b) Histogram of LBP-red channel (c) Linear binary pattern-green channel (d) Histogram of LBP-green channel (e) Linear binary pattern-blue channel (f) Histogram of LBP-blue channel (Radius-1) .......... 37

Figure4. 7 (a) Linear binary pattern-red channel (b) Histogram of LBP-red channel (c) Linear binary pattern-green channel (d) Histogram of LBP-green channel (e) Linear binary pattern-blue channel (f) Histogram of LBP-blue channel (Radius-5) ................................. 38
Figure4. 8 (a) Variance Feature Image- Red Channel (b) Variance Feature Image- Blue
Channel ................................................................................................................................ 38
Figure4. 9 ROC Curve ......................................................................................................... 40

LIST OF TABLES

TABLE NO. DESCRIPTION PAGE NO

Table 4.1 Testing set (Healthy Images) …………………………………………………………………. 48

Table 4.2 Testing set (MA Images) …………………………………………………………………..…….48

Table 4.3 Training set (Healthy Images) …………………………………………………………………49

Table 4.4 Training set (MA Images) ………………………………………. 49

ABSTRACT

Diabetes occurs when the level of sugar (glucose) in the blood is not properly regulated by insulin in the human body. The effect of diabetes on the eye causes diabetic retinopathy (DR), which can lead to blindness. The presence of microaneurysms is the first sign of DR. Because the disease is metabolic, affected patients notice no symptoms until it reaches a late stage; for this reason, early detection and proper medication are needed. For this purpose, an automated system has been designed to recognize and detect retinal lesions in fundus images. The present study shows the potential to identify differences in texture between pathological and healthy fundus images. For this objective, Local Binary Patterns (LBP) and variance (VAR) are used as texture descriptors for fundus images. The main aim is to detect the presence of microaneurysms in retinal images automatically. Preprocessing techniques such as Contrast Limited Adaptive Histogram Equalization (CLAHE) are used to enhance the contrast of the input image, and candidate extractors such as the Circular Hough Transform are utilized to improve red lesion detection. Finally, the output images have been classified, and an average sensitivity and specificity higher than 0.92 have been attained. These results suggest that the algorithm used in this work for describing fundus image texture can be useful in a diagnostic aid system for screening various retinal diseases.

Table of Contents

CANDIDATE DECLARATION...................................................................................................... i
ACKNOWLEDGEMENT ................................................................................................................ii
LIST OF FIGURES ......................................................................................................................... iii
LIST OF TABLES ..............................................................................................................v
ABSTRACT ...................................................................................................................................... vi
CHAPTER-I ..................................................................................................................................... 1
INTRODUCTION ............................................................................................................................ 1
1.1 Introduction ............................................................................................................................ 1
1.2 Anatomy of the Retina Image ............................................................................................... 2
1.3 Retina ...................................................................................................................................... 4
1.4 Retinal Imaging Techniques.................................................................................................. 5
1.5 Lesions of the Retina .............................................................................................................. 7
1.5.1 Diabetic Retinopathy ......................................................................................................... 8
1.5.2 Age related Macular Degeneration ................................................................................... 9
1.6 Treatment of DR .................................................................................................................... 9
1.7 Motivation ............................................................................................................................... 9
1.8 Aim and objective................................................................................................................. 10
1.9 Thesis Structure ................................................................................................................... 10
CHAPTER-II .................................................................................................................................. 12
2.1 Literature Review................................................................................................................. 12
2.2 Image processing techniques ............................................................................................... 13
2.2.1 Retinal Imaging ............................................................................................................... 13
2.3 Overview of Retinal Image Analysis Research .................................................................. 14
2.3.1 Structure segmentation .................................................................................................... 14
2.3.2 Image classification ......................................................................................................... 14
2.3.3 Image registration............................................................................................................ 14
2.3.4 Image restoration or correction ....................................................................................... 14
2.3.5 Content based image retrieval ........................................................................ 15
2.3.6 Retina image feature evaluation ...................................................................................... 15
2.4 Optic Disc Segmentation...................................................................................................... 15
2.5 Circular Hough Transform ................................................................................................. 15
2.6 Image Quality Verification .................................................................................................. 17
2.6.1 Biological Factors ........................................................................................................... 17
2.6.2 External Factors .............................................................................................................. 18
2.7 Retinal Image Acquisition ................................................................................................... 18
2.8 Methods for color normalization ........................................................................................ 20
2.8.1 Color Transformation ...................................................................................................... 20
2.9 Histogram equalization ........................................................................................................ 20
CHAPTER-III ................................................................................................................................ 22
METHODOLOGY......................................................................................................................... 22
3.1 Introduction .......................................................................................................................... 22
3.2 Proposed Method ................................................................................................................. 22
3.2.1 Pre processing ................................................................................................................. 22
3.2.2 Local binary patterns ....................................................................................................... 23
3.2.3 Methods for Contrast Enhancement ................................................................................ 25
3.4 Retinal Vasculature Segmentation ..................................................................................... 26
3.5 Feature Extraction ............................................................................................................... 27
3.6 Classifier ............................................................................................................................... 28
3.7 Masking ................................................................................................................................. 30
3.10 Median Filter ...................................................................................................................... 31
3.10 Experimental Setup............................................................................................................ 32
CHAPTER-IV ................................................................................................................................ 33
RESULTS AND DISCUSSION .................................................................................................... 33
4.1 Original Image...................................................................................................................... 33
4.2 RGB Images .......................................................................................................................... 33
4.3 Median Filtered Images ....................................................................................................... 34
4.4 Optic Disc Removal Images................................................................................................. 35
4.5 Vessel Removed Images ....................................................................................................... 35
4.6 Local Binary Pattern Images and Histogram .................................................................... 36
4.7 Variance Feature Images ..................................................................................................... 38
4.9 TPR and TNR Results ......................................................................................................... 39
4.10 ROC Curve ......................................................................................................................... 40

5.1 Conclusions ........................................................................................................................... 41
5.2 Scope for future work .......................................................................................................... 41
REFERENCES ................................................................................................................. 42
List of Publications......................................................................................................................... 45

CHAPTER-I

INTRODUCTION

1.1 Introduction

The current scenario of medical development provides extremely professional and precise responses to illnesses that may occur, but the structure is basically clinic oriented. For prolonged well-being, these technologies can be carried forward into the home environment, making the detection and prevention of diseases less time consuming. The most recent WHO
estimates on the global magnitude and causes of visual impairments confirm a major
opportunity for change in the lives of millions of people: 80% of all causes of visual
impairment are preventable or curable. WHO estimates that in 2010 there were 285
million people visually impaired, of which 39 million were blind [1]. But there is
possibility to reduce the number of visual impairment patients by providing them
medication at earlier stage. This situation appears to be fairly easy to recognize, but for
multiple reasons both the aforementioned eye diseases remain major items on the
unfinished agenda of public eye care.
An automatic monitoring system like this can offer convenient inspection of patients during long-term treatment and can also track their recovery; such a system is therefore needed for a better quality of life. Digital retinal images are amenable to the application of sophisticated image
analysis techniques that can assist in the screening process by creating more efficiency
in the grading/reading process and consequently increasing the throughput as the
demand simultaneously increases.
The aim of the automatic screening algorithms is to detect lesions of DR and refer the
patients to an eye care provider when abnormalities which are considered high risk are
found in the retinal fundus images. In this way two purposes can be achieved: 1)
Evaluation and timely referral for treatment in the DR population, and 2) Increased
efficiency in the use of healthcare resources for those that require the timeliest care.
Diabetic retinopathy and age-related macular degeneration are the foremost causes of visual impairment all over the world. Diabetic retinopathy is normally observed in the age group of 19-75 years and is indicated by the presence of red lesions and exudates, small white or yellowish-white deposits in the external layer of the retina; their identification is required for diabetic retinopathy screening systems. Age-related macular degeneration is generally observed in the age group above 50 years [2]. On the other hand, in some patients bright lesions may be the indication of early diabetic retinopathy. Therefore, computer-based software systems have been developed to find these lesions at a very early stage.

1.2 Anatomy of the Retina Image


The retina consists of photochemical and neurological interconnections that enable it to process light. The retina has two kinds of photoreceptors: rods and cones. These photoreceptors convert light into neural signals that are subsequently passed to the visual cortex of the brain through the optic nerve. These structures allow us to see approximately a 140-degree field horizontally. The retina, which receives all the visual information, has an extension of about 32 mm. The area of the human retina is about 1094 mm², estimated under the assumption that the average length of the human eye is 22 mm from the anterior to the posterior pole; the retina also occupies about 72% of the area inside the globe. Fig. 1.1 shows the most relevant features of the retina: arteries, veins, optic nerve head, and fovea. The dimensions of the optic nerve head, or disc, are on average 1.86 mm vertically by 1.75 mm horizontally [9], with a reported range of 0.96 to 2.88 mm for the vertical axis and 0.92 to 2.70 mm for the horizontal axis. An extremely important landmark of the retina is the fovea, since it has the highest concentration of photoreceptors in the retina.

The normal macula appears darker than the rest of the retina because of an increased amount of melanin at this location, a pigment that helps absorb light. Any disease which affects the macula will directly affect visual resolution, clinically known as visual acuity. The fovea sits in the middle of the macula and occupies about 1.5 mm of its central part. The fovea provides the highest visual resolution and is the location of all colour vision in the retina.



Figure1. 1 Eye anatomy and retinal layers

The human eye is capable of detecting the visible range of light; as shown in Figure 1.2, this corresponds to wavelengths in the electromagnetic spectrum from about 400 to 700 nm. The colour of visible light perceived by the human eye depends on its wavelength; for example, green light has wavelengths of roughly 495-570 nm.

Figure1. 2 Electromagnetic spectrum

Most people would agree that vision is more valuable than any other sense of the human body, because the eye is used in almost every activity a human performs [6]. The human eye is comparable to a man-made camera: both are used for capturing images, and just as the eye has a pupil at its centre, a camera has a lens at its centre.



Age-related macular degeneration is a relatively widespread cause of acquired central visual impairment in the over-60 age group. In this disorder, the macula is affected either by degeneration of one of the posterior layers of the retina or by aberrant new blood vessels which grow and bleed under the retina, destroying fine central vision.

Figure1. 3 Fundus Image with optic nerve bead

Figure 1.3 shows a fundus image with the optic nerve head, referred to here as a bead because of its bead-like structure. It can also be easily observed how the arteries and veins differ from each other.

1.3 Retina
The human eye is enclosed in three layers of tissue: the sclera, choroid and retina. The sclera is the white, opaque, protective outer layer. The middle layer, the choroid, is the vascular layer of the eye; it provides the oxygen that nourishes the retina. The final layer is the retina, a thin layer of tissue at the back of the human eye that forms its inner surface. This tissue consists of photoreceptor cells known as rods and cones. Rods function in dim light and are responsible for black and white vision; cones support vision in bright light and colour perception. Incoming light is converted into a neural signal and sent to the brain via the optic nerve, which passes through the eye at the optic disc. The three main features of the normal retina are the optic disc, the blood vessels and the macula. The optic disc is identified as a white or yellowish circular object in the retina; it is the entrance of the optic nerve head and the blood vessels. Blood vessels are elongated structures in the retina and are responsible for nourishing the retinal tissue. The macula is an oval spot in the centre of the retina where the highest concentration of the smallest cones is situated, and so it is the area of most acute vision. The centre of the macula is an area known as the fovea, responsible for daylight vision. This area is free from blood vessels and thus is exclusively nourished by the choroidal layer. The following diagram shows the different layers of the retina:

Figure1. 4 Schematic view of retina layers organization

1.4 Retinal Imaging Techniques


Retinal photographs are obtained using a fundus camera, a specialized low-power microscope with an attached camera that is capable of simultaneously illuminating and imaging the retina. Early versions of fundus cameras were difficult to set up and required a trained professional to operate them. Over the years, however, several developments have made fundus imaging more approachable and less dependent on trained personnel. A fundus camera is able to produce several imaging modalities of retinal images, such as:

 Color fundus images: The retina is examined in full color under the illumination of
white light. The reflected R, G, and B wavebands are represented by the image
intensities.
 Red-free photography: In this image the imaging light is filtered to remove the red
colors and the contrast of the vessels and other structures is improved.
 Fluorescent angiograms: Angiograms are obtained using a fundus camera with
additional narrow band filters. This image is acquired using the dye tracing method
whereby a subject’s circulation is first injected with sodium fluorescein or indocyanine
green, before the retina is photographed.
 Stereo image: In stereo image, the image intensities show the amount of reflected light
from two or more different view angles for depth resolution.

Figure1. 5 Examples of retinal images; (a) healthy, (b) diabetic retinopathy,

Due to its safety and cost effectiveness, fundus imaging remains the primary method of retinal imaging, especially for population screening applications. However, fundus imaging only obtains a 2-D representation of the 3-D semi-transparent retinal tissues. Some retinal measurements, such as retinal rim loss, are better performed in 3-D since the cupping is more obvious. The first attempt to depict the 3-D shape of the retina combined multiple stereo images into a 3-D shape by a human observer [15]. Later, Confocal Scanning Laser Ophthalmoscopy (CSLO), Scanning Laser Polarimetry (SLP) and topographic imaging techniques such as Optical Coherence Tomography (OCT) were developed for 3-D retinal imaging. They make use of different properties of light and different characteristics of retinal tissue to obtain measurements which may be used to analyze the retinal image.



1.5 Lesions of the Retina
 Soft exudates occur as white, feathery spots with fuzzy borders. They are swellings of the retinal nerve fiber layer and are present on the inner side of the retina. Fig. 1.7(c) shows the presence of soft exudates in the retina.
 Hard exudates are lipoprotein deposits that come into existence after leakage from microaneurysms. They have no particular shape and are normally yellowish white in color. Fig. 1.6(b) shows hard exudates.
 Hemorrhages come into existence after the outflow from weak capillaries, that is, blood leaking from the outer layer of the retina. They take numerous forms and are mostly red in color. Fig. 1.6(b) shows the presence of a hemorrhage.
 Neovascularization is the abnormal development of new blood vessels, mainly in the form of functional microvascular networks; Fig. 1.7(d) shows the presence of neovascularization in the retina.
 Microaneurysms are tiny, dark red dots that can be detected at an early stage. They are small compared to the optic veins crossing the optic disc. They are caused by physical deterioration of the capillary walls, which makes them prone to leakage. Fig. 1.8(e) presents an example of MA lesions.
 Drusen are yellowish lesions of inconsistent size. Generally, drusen do not lead to vision impairment; however, if their size increases, there is a chance of age-related macular degeneration. Fig. 1.8(f) presents drusen in the retina.

(a) (b)
Figure1. 6 Fundus images; (a) normal, (b) hemorrhages, and hard Exudates



(c) (d)

Figure1. 7 (c) soft exudates, (d) Neovascularization

(e) (f)
Figure1.8 (e) microaneurysms (f) drusen

1.5.1 Diabetic Retinopathy


Diabetes is a chronic disease affecting a large proportion of the population worldwide. It occurs when the amount of glucose in the blood is too high because the body cannot use it properly. This can be caused by insufficient insulin being produced by the pancreas or by ineffectiveness of the insulin produced. Hyperglycemia is known to damage both small and large blood vessels. The complication of diabetes in the eye is called diabetic retinopathy.

(a) (b)

Figure1. 9 Normal visualization and the same image seen by a person with diabetic retinopathy
(a) normal visualization, (b) image seen by a person with diabetic retinopathy.



1.5.2 Age related Macular Degeneration
Another disease affecting the eye is age-related macular degeneration (AMD), a condition which causes loss of vision in the centre of the visual field (the macula). It affects older adults (aged 50 years and above). AMD is characterized by the appearance of drusen: small, round, white-yellow deposits in the retina.

1.6 Treatment of DR
If the patient has retinal lesions but does not have macular edema (DME), no treatment is needed. The recommendation for these patients is to control their blood sugar levels and other risk factors such as blood pressure and lipids, as this has been shown to slow the progression of DR. If the patient develops proliferative retinopathy (PDR), laser treatment will be recommended. In order to shrink the new vessels which appear in PDR, laser treatment is applied: the surgeon makes 1000 to 2000 laser burns with an argon laser in the retina, avoiding the macular area. In theory, this laser deliberately damages parts of the healthy peripheral retina (a sacrifice) in order to cause regression of the new fragile vessels. This treatment is most ideally applied in cases where the new blood vessels show little or no bleeding, so that the retina can be visualized well. A surgical procedure may also be required, in which the vitreous gel that is mixed with blood is extracted from the eye and replaced with a special saline solution. In the case of retinal detachment, surgery is the only solution; the doctor may apply laser to try to fix the part of the retina that is torn, or perform another procedure called cryopexy. In some people, vision can take some time to recover after surgery, and in rare cases vision does not improve at all.

1.7 Motivation
According to the World Health Organization, in 2010 there were 285 million people visually impaired and 39 million blind [1]. "Bad habits" in the population, such as lack of exercise and excess intake of oily food, contribute to the growth of the overweight population, which leads to type II diabetes. For this reason, the number of diabetes cases is increasing each year, making it difficult for the health care system to screen all diabetic people for retinopathy. As an alternative to help cover the high demand, automatic Diabetic Retinopathy (DR) screening algorithms have been developed using fundus images.



The motivation of this work is to apply different image analysis techniques for the
detection of pathologies associated with DR in the retina with more emphasis on the
detection of two sight threatening diseases that can lead to severe vision damage. First,
we develop an image analysis algorithm to detect abnormalities in the macula, in
which high visual resolution is obtained, to look for macular edema. Second, we
propose to develop an algorithm to detect Neovascularization. Neovascularization in
the optic disc can produce significant problems since it can produce hemorrhages that
can cause significant vision impairment. We expect that by developing and applying
these algorithms to actual patients, we can achieve early detection of critical diseases.
Then, this early detection will lead to timely treatment that can prevent blindness.

1.8 Aim and objective


The main aim of the research work is to propose a technique capable of differentiating between pathological and healthy fundus images using local binary patterns. The work is based on this technique for identifying the lesions. The general idea of the work is outlined as follows:

• Image preprocessing is done with scaling and channel separation to correct the irregular illumination and boost the distinction of lesions in the fundus image.

• Masking is considered significant for the texture analysis.

• Histogram equalization is performed, a classifier is trained, and the best parameters are obtained through classification with 10-fold cross-validation.

1.9 Thesis Structure


The work is organized as explained below:

1. Chapter 1 concisely explains the goal and purpose of the research work. It describes the human eye anatomy, the general composition of the retinal layers, and the types of lesions present in fundus images.

2. Chapter 2 presents a literature review of previously used methods and briefly describes various retinal imaging methods.

3. Chapter 3 describes the methodology used for identifying the lesions in the fundus image with the help of local binary patterns. MATLAB software is used for this purpose.

4. Chapter 4 presents the results, which include the processed images and the histogram graphs.

5. Chapter 5 presents the conclusions and the scope for future work.



CHAPTER-II

LITERATURE SURVEY AND IMAGE PROCESSING


METHODS USING FUNDUS IMAGES

2.1 Literature Review


Several methods have been developed for the automatic detection of red lesions in color fundus images. Computer-aided diagnosis (CAD) has become one of the most active research areas in medical image analysis. The varying contrast and non-uniform shading of retinal images make lesion detection a challenging task. A number of approaches based on mathematical morphology for the detection of red lesions in color fundus photographs are described below:

 In more recent work, Mookiah et al. (2015), in "Local configuration pattern features for age-related macular degeneration characterization and classification," present a methodology in which AMD characterization is performed through local configuration patterns (LCP) rather than LBP [23]. Linear configuration coefficients and pattern occurrence features are extracted, and a linear SVM is used after feature selection.
 Spencer et al. proposed a framework to detect microaneurysms in fluorescein angiographic images, where lesions have better visibility than in optical images. Shade-correction pre-processing and a basic bilinear top-hat morphological transformation with a specific-size structuring element are employed to enhance MAs and suppress blood vessels [24]. The authors reported a sensitivity of 82% and a specificity of 86% in comparison with a clinician. However, only four images were included, and the method of comparing the computed output to the manual selection was not described.
 A. D. Fleming, S. Philip, and K. A. Goatman, (2006) presented “Automated
microaneurysm detection using local contrast normalization and local vessel
detection” [25]. It improved the method of Spencer et al. by locally normalizing the
contrast around candidate lesions, and eliminating candidates detected on vessels
using a local vessel segmentation step.



 Sinthanayothin et al. (2002) presented "Automated detection of diabetic retinopathy on digital fundus images," which applied a recursive region-growing procedure in combination with a neural network to detect the vessel segments, which were then removed to identify microaneurysms [26].
 T. Walter, P. Massin, A. Arginay, R. Ordonez, C. Jeulin, and J. C. Klein (2007) presented "Automatic detection of microaneurysms in color fundus images." The proposed approach is also based on mathematical morphology [27]. It recommends contrast enhancement and shade correction as preprocessing steps, and the candidate extraction is accomplished by diameter closing. As can be seen, most of the above-mentioned techniques rely upon binary morphological processing of thresholded images for the segmentation of microaneurysms. It is very hard for researchers to determine which algorithms are the best to employ at each step, especially for images coming from an inhomogeneous population.

2.2 Image processing techniques

2.2.1 Retinal Imaging


Retinal photographs are obtained using a fundus camera, a specialized low-power microscope with an attached camera that is capable of simultaneously illuminating and imaging the retina. Early versions of fundus cameras were difficult to set up and required a trained professional to operate them [3]. Over the years, however, several developments have made retinal imaging more convenient and less dependent on trained personnel. A fundus camera is able to produce several imaging modalities of retinal images, such as:

 Color fundus images: The fundus is observed in full color under the illumination of white light. The reflected red, green and blue wavebands are represented by the image intensities.

 Red-free photography: In this image the imaging light is filtered to remove the red colors. The contrast of the vessels and other structures is also enhanced.



 Fluorescein angiograms are obtained by using a fundus camera with narrow-band filters. The image is acquired using the dye tracing method, whereby a subject's circulation is first injected with sodium fluorescein or indocyanine green before the retina is photographed.

2.3 Overview of Retinal Image Analysis Research


Numerous research efforts have been made in the area of retinal image analysis. Broadly, they can be categorized into structure segmentation, retinal image classification, retinal image registration, retinal image restoration or correction, content-based image retrieval and image quality assessment.

2.3.1 Structure segmentation


Structure segmentation in retina image normally involves the identification of
common retinal objects like optic disc, blood vessels and fovea or the identification of
retinal pathology such as exudates. Our work is on the segmentation of the optic disc.

2.3.2 Image classification


Retina image classification refers to the process of assigning a class to a retina image, such as a normal retina or a retina with pathology, i.e. a retina affected by diabetic retinopathy (DR), AMD or glaucoma. This area of research is another of our research foci; in our case, the classification made is either healthy or containing microaneurysms.

2.3.3 Image registration


It is the method of spatially aligning two or more images. It is useful for expanding the field of view of retinal images and for integrating information from different images; it also helps in analyzing changes over time, i.e. longitudinal monitoring of retinal appearance. Retinal image registration approaches normally rely on the identification and extraction of features derived from the blood vessel tree, such as branching points, on the separate individual images.

2.3.4 Image restoration or correction


Image restoration or correction involves the process of reversing damage to the retinal images from a known cause. This damage may be caused by blurring, uneven illumination or other interference patterns, which may alter the local statistics of the image intensities or introduce artifacts into the retinal images. Because of this, image restoration or correction is often needed to ensure that the images are suitable for further processing. Some examples of work in this area are retinal image de-blurring using multichannel blind deconvolution and retinal image illumination correction.

2.3.5 Content based image retrieval


Content-based image retrieval refers to the concept of retrieving an image from a library based on the similarities of the image contents. The image contents may be
described using either low level features such as those based on color or texture using
features based on the segmented object or using abstract attributes that characterize the
segmented object. With regards to retinal images, most of the content based image
retrieval applications focused on retrieval and classification of diseased retina for
diagnosis and study purposes.

2.3.6 Retina image feature evaluation


Image feature evaluation involves the process of evaluating retinal images so that their
quality exceeds the threshold required for an automated grading system.

2.4 Optic Disc Segmentation


A number of studies have reported work on optic disc segmentation. Among the existing techniques, deformable or active contours and circular or elliptical template matching are the ones most commonly used. One application area in which the optic disc is used as a landmark is retinal image registration. Approaches based on deformable contours attempt to identify the optic disc contour as precisely as possible, which is possible because the deformable contour has the ability to change form; such work typically starts by performing retinal blood vessel segmentation on the retinal images. For automatic retinal lesion detection applications, optic disc segmentation and removal are considered one of the pre-processing steps.

2.5 Circular Hough Transform


A generalized Hough transform is used to detect a shape which has no predefined analytical form. The Hough Transform works by transforming a set of characteristic points from the image space into a parameter space. The Circular Hough Transform uses the equation of a circle given by:

(x − a)² + (y − b)² = r²



where (a, b) are the coordinates of the centre of the circle, r is the radius and (x, y) is a point on the circle boundary.

The parametric representation of the circle is thus:

x = a + r cos(θ)

y = b + r sin(θ)

as θ varies from 0° to 360°. With the Circular Hough Transform, the aim is to find the sets of (a, b, r) that are highly likely to correspond to circles in the image.

Figure2. 1 Circular Hough Transform

Figure 2.1 illustrates the Circular Hough Transform process.

With the Circular Hough Transform, each pixel (x, y) in the edge image is taken to be the centre of a circle with radius r. For a given r value, the Circular Hough Transform works by considering the circles centred at every edge feature at once and identifying the coordinates in Hough space that these circles intercept. The frequency of these intercepts at each coordinate is totalled in an accumulator space. The coordinate in the accumulator with the highest frequency of interception is selected as the circle centre. Since the optic disc has a circular or elliptical shape, the Circular Hough Transform is used for optic disc segmentation.



For optic disc detection, r is normally not known, so a range of r values is used. The process of identifying the circle is similar to the one described above, except that it is repeated for the whole range of r values and the accumulator space becomes 3-D. An example of optic disc detection using the Hough Transform is illustrated in Figure 2.1. The major benefit of the Circular Hough Transform technique is that it is relatively unaffected by noise and gaps in the edge features.
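As an illustration, this kind of circle search can be carried out with MATLAB's built-in imfindcircles, which implements a circular Hough transform variant. The following is only a minimal sketch: the file name, radius range and sensitivity are assumed values chosen for illustration, not the settings used in this work.

I = imread('fundus.jpg');                      % hypothetical input file name
gray = rgb2gray(I);
radiusRange = [40 90];                         % assumed optic-disc radius range in pixels
[centres, radii] = imfindcircles(gray, radiusRange, ...
    'ObjectPolarity', 'bright', 'Sensitivity', 0.95);   % the disc is brighter than its surroundings
figure, imshow(I);
if ~isempty(centres)
    viscircles(centres(1, :), radii(1));       % draw the strongest circle candidate
end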

2.6 Image Quality Verification


It is important for images to be of good quality in order to provide a trustworthy diagnosis. In a typical screening environment, studies have found that 10 to 20% of the images were inadequate for an accurate diagnosis. In general, two kinds of factors affect retinal image quality: factors biologically inherent to the human retina, and external factors. The external factors include camera artifacts, low resolution, image compression, unfocused images and low contrast. The biological and external factors are listed below.

2.6.1 Biological Factors


These are some biological factors described in the literature that explain to a certain
extent the variability in the retinal images:

 Lens coloration: As an individual ages, the lens becomes yellowish. This coloration changes the appearance of structures in the retina.
 Lesion composition: Lesions that appear in the retina have different compositions, which change their response to light.
 Lesion density: The amount of light being reflected or transmitted depends on the density of the lesions.
 Lesion position: Since the retina is composed of different layers, lesions can develop in different positions. Their position is another factor that changes their appearance in the fundus images.
 Pigmentation and iris color: In people with low pigmentation, the retinal images appear reddish. Since lower pigmentation implies less reflectance, the choroidal vessels, which lie in a layer under the retina, are usually more noticeable in these subjects.



2.6.2 External Factors
 Camera: The camera sensors do not adapt to changes in illumination as well as the human visual system does. The exposure must be controlled in order to obtain a good quality image; for example, the intensity of the camera flash needs to be adjusted in order to capture a good image.
 Loss of contrast: Unexpected movements of the patient's eyes can make the image blurry and cause low contrast.

 Eye diseases: The healthy cornea is a transparent medium through which the light reflected from the retina passes without distortion. In a patient with cataracts, the crystalline lens becomes opaque and distorts the light passing through it, so the image appears hazy overall.

 Blurry images: Movement of the patient or poor focus of the camera generates blurry images.

 Compression: Images compressed to formats like JPEG show blocking artifacts. A study performed by Conrath et al. showed how the compression ratio affects the grading of abnormalities such as microaneurysms, hemorrhages and IRMA [21]. They also showed that the detection of pathologies is affected by both types of JPEG compression studied.

2.7 Retinal Image Acquisition


In the past, fundus images were captured on film. To analyze DR on 35 mm film, two 30-degree field of view (FOV) images were required to capture lesions of the retinal capillaries such as microaneurysms and IRMA. Nowadays retinal images are acquired in digital format. They are usually acquired over a large FOV of 45, 50 or 60 degrees for the detection of DR. The images have a rectangular shape and their dimensions vary from hundreds to thousands of pixels. The photographs of the retina are usually acquired in one of two ways: mydriatic (with pupil dilation) or non-mydriatic. Fig. 2.2 shows the area of the retina that is usually captured by a camera for a specific degree setting, where R is the radius of the sphere of the eye, with a length of about 11 mm. The external angle, which is the one always quoted by the camera, is greater than the internal angle depicted in Fig. 2.2, while the angle subtended at the eye is considered to be twice the internal angle.



θ_Eye ≈ 0.74 × 2 × θ_External
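For example, assuming this relation, an external camera angle of 45 degrees would correspond to an eye angle of roughly 0.74 × 2 × 45° ≈ 67°.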

Since the area of the retina is bigger than the area that is actually captured, several pictures covering different fields of view of the retina are required. The gold standard consists of seven 30-degree fields using stereoscopic pairs. Since two images are necessary in stereo (usually used to detect the presence of glaucoma), 14 images are needed for each eye. It has been shown that this can be reduced to two fields, which also reduces cost, complexity and time. Fig. 2.3 shows the 7-field standard for the right eye.

Figure2. 2 Relationship of the camera angle and the area of the imaged retina.

The most common images obtained for screening purposes are from fields 1 and 2, in which the macula is in the middle of the image.

Figure2. 3 The seven standard 30-degree fields of view.



In addition to retinal fundus images, other methods of image acquisition are used to analyze the retina. One of them is fluorescein angiography, which examines the circulation of blood in the retinal vessels by means of a dye tracing method: patients are injected with sodium fluorescein, and an image is then captured by detecting the fluorescence emitted by the retina after it has been illuminated. By using this method, microaneurysms, which appear as hyperfluorescent dots, can be clearly distinguished from dot-blot hemorrhages.

2.8 Methods for color normalization


This step is very important in the case of lesion detection using pixel intensity information. The color of each of the lesions can help to discriminate them, which is why many approaches have been developed to detect abnormalities in the retina using color information. Different color channels provide different kinds of information. For example, authors such as Shin et al. and Soares et al. used the green channel, since it has the highest contrast in the RGB space; this color plane is considered the most informative one and is less subject to heterogeneous illumination. Other approaches used the red channel for the identification and segmentation of the optic disc, since the vessels appear very smooth in it and do not interfere with the segmentation. Finally, the blue channel contains little to no useful information and for that reason is almost never considered in monochromatic implementations. Even for color or grayscale implementations, normalizing the range of pixel intensity values is very useful in order to extract features which are consistent across images.

2.8.1 Color Transformation


This technique deals with the processing of the components of a color image. Some of
the automated screening systems prefer to use color spaces that are different from
RGB. By weighting each channel and combining them with different mathematical
operations, some transformations are generated. The most commonly used
normalization approach is grey world. This normalization procedure models changes
in illumination spectrum, which are applied to RGB channels. This normalization
divides each color component by its mean value within the image.
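A minimal sketch of the grey world normalization described above is given below; the file name is a placeholder and the final rescaling step is only one possible way to bring the result back into a displayable range.

I = im2double(imread('fundus.jpg'));           % hypothetical input file name
out = zeros(size(I));
for c = 1:3
    out(:, :, c) = I(:, :, c) / mean2(I(:, :, c));   % divide each channel by its own mean
end
out = out / max(out(:));                       % rescale into [0, 1] for display
figure, imshow(out);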

2.9 Histogram equalization


It has been shown that histogram equalization performed individually in red, green and
blue channels achieves a better performance than the grey world approach.



This method is based on maintaining the pixel rank order for each RGB plane under different illuminants; histogram equalization is a monotonically increasing mapping. One issue with this method is that it magnifies the intensity of the blue channel, when in fact the retina reflects only a small amount of blue light. This can be seen in images that take on a blue tone when the contrast is low. Although the red lesions are enhanced by the method, it can also be seen that the optic disc and its surroundings are affected. The problem with this method is that it depends on the global statistics of the image, whereas equalization is sometimes needed only on a part of the image so that it does not affect important landmarks of the retina such as the optic disc.
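As a minimal sketch of the per-channel equalization discussed here (the file name is assumed), each RGB plane can be equalized independently in MATLAB:

I = imread('fundus.jpg');                      % hypothetical input file name
eq = cat(3, histeq(I(:, :, 1)), ...            % equalize red, green and blue planes separately
            histeq(I(:, :, 2)), ...
            histeq(I(:, :, 3)));
figure, imshowpair(I, eq, 'montage');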



CHAPTER-III

METHODOLOGY

3.1 Introduction

The material of the research work is images. The dataset used consists of healthy images and pathological images. There are some quality standards that the dataset must satisfy; the criteria below were considered the basis for segregation:

1. Images with various artifacts, such as intense, round spots produced by dirt on the camera lens.

2. Images strongly affected by salt-and-pepper noise.

3. Images in which the vascular network is highly over-segmented.

3.2 Proposed Method

The proposed method includes various techniques, each of which is explained below together with its algorithm:

3.2.1 Pre processing

1. Read the input fundus image.

2. The size of the input image varies (e.g. 2544 x 1696 pixels).

3. The image is resized to a standard size of 512 x 512 pixels.

4. Bicubic interpolation is used for the resizing.

Algorithm for pre processing is as follow:

I = imread('fundus.jpg');                 % input fundus image (file name assumed)
RI = imresize(I, [512, 512], 'bicubic');  % resize to the standard 512 x 512 size

figure, imshow(RI);
title('resized input image');

impixelinfo;                              % interactive pixel value inspection



3.2.2 Local binary patterns

Local Binary Patterns (LBP) is a technique which transforms an image into an array of integer labels that encode the pixel-wise detail of the image texture. These labels can be summarized as a histogram that serves as the texture descriptor of the analyzed image. Two important properties of LBP which make it attractive for characterizing textures are its invariance to any monotonic grey-level change, such as those caused by illumination changes, and its computational simplicity.

Figure3. 1 Illustration of the basic LBP operator

Consider a 3x3 neighborhood around a pixel. Pixels in the neighborhood with a grey value less than or equal to that of the centre pixel are given the value 0, and those with a higher value are given the value 1. The 8 binary digits associated with the 8 neighbors are then read consecutively in a clockwise direction to form a binary number (LBP pattern) or a decimal number (LBP code). This number is assigned to the centre pixel and used to characterize the local texture.
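As a small worked example (the patch values are invented for illustration and follow the thresholding convention described above, with the neighbors read clockwise from the top-left and the first bit taken as the most significant):

patch = [6 5 2;
         7 6 1;
         9 8 7];
centre = patch(2, 2);
neigh  = patch([1 4 7 8 9 6 3 2]);      % 8 neighbours, clockwise from the top-left
bits   = neigh > centre;                % 1 only for neighbours strictly brighter than the centre
code   = sum(bits .* 2.^(7:-1:0));      % bits = [0 0 0 0 1 1 1 1], so code = 15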

Figure 3. 2 An example of new LBP operator computation



In general form, the operator is written as LBP_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, with s(x) = 1 if x > 0 and s(x) = 0 otherwise, where g_p and g_c are the grey values of the neighborhood samples and the centre pixel, and P is the number of samples on a symmetric circular neighborhood of radius R. Because the local binary pattern is used for texture description, it is common to complement it with a contrast measure, the rotation-invariant local variance:

VAR_{P,R} = (1/P) Σ_{p=0}^{P−1} (g_p − µ)²,   where µ = (1/P) Σ_{p=0}^{P−1} g_p

The LBP and VAR operators are used to describe the texture of the retinal background with different radii. The following MATLAB code computes the LBP of the RGB components:

RED CHANNEL—RADIUS 1

filtred = roired;                % red-channel region of interest
nFiltSize = 8;                   % number of neighbourhood samples (P)
nFiltRadius = 1;                 % neighbourhood radius (R)
filtR1 = generateRadialFilterLBP(nFiltSize, nFiltRadius);
effLBPR1 = efficientLBP(filtred, 'filtR1', filtR1, 'isRotInv', ...
    false, 'isChanWiseRot', false);
effLBPR1 = imcomplement(effLBPR1);
figure, imshow(effLBPR1);
title('LBP feature extracted Red channel image-1');

fontSize = 14;
[pixelCounts1, GLs1] = imhist(uint8(effLBPR1));
figure, bar(GLs1, pixelCounts1);
title('Histogram of LBP Red channel image-1', 'FontSize', fontSize);

GREEN CHANNEL--RADIUS 1

filtgreen=ROIgreen;
nFiltSize=8;
nFiltRadius=1;
filtG1=generateRadialFilterLBP(nFiltSize, nFiltRadius);
effLBPG1=efficientLBP(filtgreen, 'filtG1', filtG1, 'isRotInv',
false, 'isChanWiseRot', false);
effLBPG1=imcomplement(effLBPG1);
figure,imshow(effLBPG1);
title('LBP feature extracted Green channel image-1');

[pixelCounts2, GLs2] = imhist(uint8(effLBPG1));


figure,bar(GLs2, pixelCounts2);
title('Histogram of LBP Green channel image-1 ', 'FontSize',
fontSize);



BLUE CHANNEL--RADIUS 1

filtblue=ROIblue;
nFiltSize=8;
nFiltRadius=1;
filtB1=generateRadialFilterLBP(nFiltSize, nFiltRadius);
effLBPB1=efficientLBP(filtblue, 'filtB1', filtB1, 'isRotInv',
false, 'isChanWiseRot', false);
effLBPB1=imcomplement(effLBPB1);
figure,imshow(effLBPB1);
title('LBP feature extracted Blue channel image-1');

[pixelCounts3, GLs3] = imhist(uint8(effLBPB1));


figure,bar(GLs3, pixelCounts3);
title('Histogram of LBP Blue channel image-1 ', 'FontSize',
fontSize)
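The listings above produce the LBP images for each channel. As a complementary, minimal sketch (not part of the original scripts), the VAR_{P,R} measure for P = 8 and R = 1 could be computed for a grey-level image as follows; borders wrap around because circshift is used.

function varMap = localVar8(I)
% Local variance over the 8 immediate neighbours of every pixel (P = 8, R = 1).
I = im2double(I);
offsets = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];   % 8-neighbourhood offsets
P = size(offsets, 1);
neigh = zeros([size(I) P]);
for p = 1:P
    neigh(:, :, p) = circshift(I, offsets(p, :));   % g_p for every pixel at once
end
mu = mean(neigh, 3);                                 % mu = (1/P) * sum of g_p
varMap = mean((neigh - mu).^2, 3);                   % VAR = (1/P) * sum of (g_p - mu)^2
end

Calling, for example, varRed = localVar8(roired); would then give a variance feature image for the red-channel region of interest used above.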

3.2.3 Methods for Contrast Enhancement


These techniques are developed in order to make the structures in the image more distinguishable from the background or from other structures. For this particular purpose, many approaches have been developed, or have used existing techniques, to enhance the contrast between the lesions or vasculature and the retinal background. There is no quantitative measurement to evaluate contrast enhancement; visual inspection, or improvements in the results of a specific algorithm, are the only metrics used. Given the shape of the retina, conventional methods such as contrast stretching or others based on global histogram information are not useful, since the retinal image is characterized by higher contrast in the centre and lower contrast at the edges of the image [15]. For that reason, local enhancement methods are preferred. The methods found in the literature are usually followed by a noise reduction technique, since the noise is also enhanced. In some publications, only noise reduction, without contrast enhancement, was applied to the images. A technique used to address the limitations of global enhancement, called adaptive local contrast enhancement, was proposed by Sinthanayothin et al. and used by Park et al. [31]. In this technique, contrast enhancement is applied to local areas depending on their mean and variance. An alternative approach to increase image contrast was proposed by Rapantzikos et al. to detect drusen. This method, called multilevel histogram equalization (MLE), is based on applying histogram equalization sequentially to small non-overlapping neighborhoods, enhancing the drusen without being affected by small brightness variations.



In [21], the contrast of the images was improved by the use of CLAHE to detect hard
exudates (HE). This method, which is an extended version of adaptive histogram
equalization (AHE), allows adjustment of the amount of enhancement required. By using
this method, the amplification of noise can be reduced by selecting the clipping level of
the histogram. Other pre-processing methods are more specific, being designed for a
particular application.
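
A possible MATLAB sketch of CLAHE using adapthisteq is shown below; the clip limit, the tile grid and the file name are illustrative assumptions, not the settings used in [21].

% Illustrative CLAHE on the green channel with MATLAB's adapthisteq.
% 'fundus_green.png' is a hypothetical file name; ClipLimit controls how much
% the histogram is clipped and therefore how strongly noise is amplified.
green = imread('fundus_green.png');
greenCLAHE = adapthisteq(green, 'ClipLimit', 0.01, 'NumTiles', [8 8]);
figure, imshowpair(green, greenCLAHE, 'montage');
title('green channel before and after CLAHE');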

3.4 Retinal Vasculature Segmentation


This research area presents a large variety of algorithms to segment the retinal
vasculature. Among the purposes for developing such algorithms, the most important are:
to reduce the number of false positive red lesions in DR screening systems, to calculate
the width of the vessels in order to detect cardiovascular diseases (CVD), and to measure
geometrical attributes at vessel bifurcations, such as angles, in order to find features that
capture other diseases. For instance, it has been shown that in patients with hypertension
the angle between the two daughter arterioles is reduced. For that reason, accurate
segmentation of the retinal blood vessels is very important, not only to detect parts of the
retina such as the fovea, but also to assist in the detection of other diseases such as high
blood pressure, stroke, etc. Barker et al. concluded that hypertensive retinopathy signs
such as focal retinal arteriolar narrowing and arteriovenous nicking, together with
common signs of DR like microaneurysms and HE, are considered markers of risk of
stroke. A more complete review of the association with incident stroke was presented by
Doubal et al. In this comparison, ratios of abnormalities in the vasculature are presented
and compared with previous studies. Despite the large variation among the studies, it was
demonstrated that retinal microvascular abnormalities and stroke are associated. In
addition, Patton et al. state that the retinal microvasculature reflects cerebral
microvascular changes in aging, vascular dementia and stroke. This is because the brain
and the retina share similar vascular regulatory processes. Therefore, retinal image
analysis can be an alternative to other techniques to detect those diseases. In addition to
the already mentioned diseases, the Blue Mountains Eye Study indicates that large retinal
venular caliber can also predict the incidence of obesity over approximately a five-year
period. Before explaining the developed approaches for vessel segmentation, a
summarized list of the characteristics of the retinal vasculature is presented below.



 The vasculature presents lower reflectance, which is the reason why it appears dark in
the fundus image.
 Biological factors in the vessels such as variations in their wall thickness or their
refraction index do not affect the appearance of their width in the retinal images.
 The width of the vessels varies from 36-180 μm.

3.5 Feature Extraction


The local binary pattern and variance operators explained above are applied to
characterize the texture of the retina background. They are computed for every pixel
of the RGB images using P = 8 and different values of R = {1, 2, 3, 5}. The LBP and
VAR values that correspond to pixel locations of the optic disc, of the vessels, or outside
the retina are not considered. The RGB components of each image are analyzed
independently. The resulting images provide a description of the image texture. After
masking the optic disc and the vessel segments, the LBP and VAR values within the
external mask of the fundus are accumulated into histograms, one for each component.

The algorithms for feature extraction are as follows:

RED CHANNEL--RADIUS 2

filtred=ROIred;
nFiltSize=8;
nFiltRadius=2;
filtR2=generateRadialFilterLBP(nFiltSize, nFiltRadius);
effLBPR2=efficientLBP(filtred, 'filtR2', filtR2, 'isRotInv',
false, 'isChanWiseRot', false);
effLBPR2=imcomplement(effLBPR2);

figure,imshow(effLBPR2);
title('LBP feature extracted Red channel image-2');
[pixelCounts4, GLs4] = imhist(uint8(effLBPR2));
figure,bar(GLs4, pixelCounts4);
title('Histogram of LBP Red channel image-2 ', 'FontSize',
fontSize);

GREEN CHANNEL--RADIUS 2

filtgreen=ROIgreen;
nFiltSize=8;
nFiltRadius=2;
filtG2=generateRadialFilterLBP(nFiltSize, nFiltRadius);
effLBPG2=efficientLBP(filtgreen, 'filtG2', filtG2, 'isRotInv',
false, 'isChanWiseRot', false);
effLBPG2=imcomplement(effLBPG2);
figure,imshow(effLBPG2);
title('LBP feature extracted Green channel image-2');



[pixelCounts5, GLs5] = imhist(uint8(effLBPG2));
figure,bar(GLs5, pixelCounts5);
title('Histogram of LBP Green channel image-2 ', 'FontSize',
fontSize);

BLUE CHANNEL--RADIUS 2

filtblue=ROIblue;
nFiltSize=8;
nFiltRadius=2;
filtB2=generateRadialFilterLBP(nFiltSize, nFiltRadius);
effLBPB2=efficientLBP(filtblue, 'filtB2', filtB2, 'isRotInv',
false, 'isChanWiseRot', false);
effLBPB2=imcomplement(effLBPB2);
figure,imshow(effLBPB2);
title('LBP feature extracted Blue channel image-2');

[pixelCounts6, GLs6] = imhist(uint8(effLBPB2));
figure,bar(GLs6, pixelCounts6);
title('Histogram of LBP Blue channel image-2 ', 'FontSize', fontSize);

3.6 Classifier
Support Vector Machines (SVM) were used as the classifier in the approach of Ricci et al.
In their algorithm, linear operators were used to extract the features. These operators
follow the same principle as matched filters and directional filters. In that work, just as in
the previous method, the green channel was inverted to make the vessels appear brighter,
and no contrast enhancement was applied, in order to avoid loss of vessel information.
Niemeijer et al. define a pixel-classification-based method in which features are extracted
for each pixel of the green channel. These features consist of the outputs of Gaussian
filters and their derivatives up to order 2 at 5 scales (1, 2, 4, 8 and 16 pixels). The features
are then normalized to zero mean and unit variance.
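
A rough sketch of this kind of multi-scale Gaussian filtering is given below; the scales follow the text, while the kernel-size rule and the file name are assumptions.

% Sketch of multi-scale Gaussian filtering of the green channel, in the spirit
% of the pixel features described above. 'fundus_green.png' is a hypothetical
% file name; a kernel of roughly 6*sigma is a common rule of thumb.
green = double(imread('fundus_green.png'));
scales = [1 2 4 8 16];                            % sigma in pixels
responses = cell(1, numel(scales));
for s = 1:numel(scales)
    h = fspecial('gaussian', 2*ceil(3*scales(s)) + 1, scales(s));
    responses{s} = imfilter(green, h, 'symmetric');   % smoothed image at scale sigma
end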

The data set is divided into two subsets, one for training (model set) and another for
testing (validation set). Once the features are extracted, the data of the model set are
preprocessed using data normalization and data sampling. Data normalization is carried
out by scaling each feature to zero mean and unit variance. Data sampling is done using
the synthetic minority oversampling technique (SMOTE) [9]. External cross-validation is
executed on the training set so that the dimensionality of the data is reduced by feature
selection before being passed on to the classifier.
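
A minimal sketch of this normalisation step is shown below, assuming hypothetical feature matrices trainFeatures and testFeatures with one row per image; bsxfun is used so the code also runs on older MATLAB releases such as 2013a.

% Zero-mean / unit-variance normalisation; statistics come from the training
% set only and are then applied unchanged to the test set.
mu    = mean(trainFeatures, 1);
sigma = std(trainFeatures, 0, 1);
sigma(sigma == 0) = 1;                                   % guard against constant features
trainNorm = bsxfun(@rdivide, bsxfun(@minus, trainFeatures, mu), sigma);
testNorm  = bsxfun(@rdivide, bsxfun(@minus, testFeatures,  mu), sigma);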



Figure 3. 3 Feature extraction flowchart

To classify that information, a k-NN classifier was selected among others since its
performance was superior. Classification can be categorized into two approaches:
supervised learning and unsupervised learning. Supervised learning exploits some
prior labeling information to decide whether a pixel belongs to the optic disc or not,
while unsupervised learning performs the optic disc segmentation without any prior
knowledge: pixels are assigned to classes without knowing the classes' identities or
their properties. A k-Nearest Neighbor (k-NN) classifier has also been applied to optic
disc and optic cup segmentation on stereo fundus images. An optimal subset of several
features was utilized as the pixel features for the segmentation: eight of the features are
based on the output of median filter banks in color-opponency space, two of the features
are based on the output of a simulation of binocular complex cells, and the last two
features are the a priori probabilities of a pixel belonging to the cup.
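
As an illustration only, a k-NN classifier can be trained on the normalised features with fitcknn (Statistics and Machine Learning Toolbox, R2014a or later); the variable names, the number of neighbours and the numeric labels are assumptions.

% Hypothetical k-NN classification of the LBP/VAR feature vectors.
% trainNorm/testNorm come from the normalisation sketch above; labels are
% assumed numeric (e.g. 0 = healthy, 1 = microaneurysm).
mdl = fitcknn(trainNorm, trainLabels, 'NumNeighbors', 5, 'Distance', 'euclidean');
predicted = predict(mdl, testNorm);
accuracy  = sum(predicted == testLabels) / numel(testLabels) * 100;
fprintf('k-NN accuracy: %.3f %%\n', accuracy);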



3.7 Masking
A mask is a binary image that encapsulates the pixel-selection logic in a single block and
simplifies the structure of the processing chain. Only the pixels of the retina background
are taken into account for the texture analysis, so the major structures (optic disc and
blood vessels) must be excluded before the texture is analyzed.

The steps and the corresponding code are as follows:

1. Computing the mean

rmu = mean2(red);

2. Subtracting the mean from the input image

rmeansub=red-rmu;

3. Displaying the difference image (Ai = Ti - m)

figure,imshow(rmeansub);
title('mean subtracted image-red channel');
impixelinfo;

rROI1=rmeansub(:,:,1)<100;
figure,imshow(rROI1);
title('ROI1 MASK');

Masking with the input image

ROIred=ROIvr;
ROIred(~rROI1)=0;
figure,imshow(ROIred);
title('optic disk removed image-red channel');
impixelinfo;

Disc removal process for blue channel

bROI1=bmeansub(:,:,1)<30;
figure,imshow(bROI1);
title('ROI1 MASK');

ROIblue=ROIvb;
ROIblue(~bROI1)=0;
figure,imshow(ROIblue);
title('optic disk removed image-blue channel');
impixelinfo;

Disc removal process for green channel


ROI2=gmeansub(:,:,1)<60;
figure,imshow(ROI2);
title('ROI1 MASK-GREEN');



ROIgreen=ROIvg;
ROIgreen(~ROI2)=0;
figure,imshow(ROIgreen);
title('optic disk removed image-green channel');
impixelinfo;

Extract Blood Vessels


Threshold=10;
bbloodVesselsb = VesselExtract(filtblue, Threshold);
figure,imshow(bbloodVesselsb);
title('Extracted Blood Vessels');
bloodVesselsb=im2bw(bbloodVesselsb);
figure,imshow(bloodVesselsb);
title('Extracted Blood Vessels');

se=strel('disk',1);
bloodVesselsb=imdilate(bloodVesselsb,se);
figure,imshow(bloodVesselsb);
title('holes filled image');

Masking with the input image


bloodVesselsbi=imcomplement(bloodVesselsb);
figure,imshow(bloodVesselsbi);
title('Vessel mask');

ROIvb=filtblue;
ROIvb(~bloodVesselsbi)=0;
figure,imshow(ROIvb);
title('vessel removed image-blue channel');
impixelinfo;

3.10 Median Filter


The median filter was applied to the images of all three channels (RGB components) for
noise reduction, using a 3-by-3 neighborhood.

Median filter-red channel


filtred=medfilt2(red,[3 3]);
[rr rc]=size(filtred);

for ri=1:rr
for rj=1:rc
if(filtred(ri,rj)<4)
filtred(ri,rj)=0;

else
end
end
end

figure,imshow(filtred,[]);
title('red channel median filtered image');
impixelinfo;
[rr rc rd]=size(filtred);
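
The loop above can also be written in vectorised form with identical behaviour; a short sketch:

% Vectorised equivalent of the red-channel loop above: median filter with a
% 3x3 neighbourhood, then set pixels below the threshold 4 to zero.
filtred = medfilt2(red, [3 3]);
filtred(filtred < 4) = 0;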



Median filter-green channel
filtgreen=medfilt2(green,[3 3]);
[gr gc]=size(filtgreen);
for gi=1:gr
for gj=1:gc
if(filtgreen(gi,gj)<4)
filtgreen(gi,gj)=0;
else
end
end
end
figure,imshow(filtgreen,[]);
title('green channel median filtered image');
impixelinfo;

Median filter-blue channel


filtblue=medfilt2(blue,[3 3]);
[br bc]=size(filtblue);
for bi=1:br
for bj=1:bc
if(filtblue(bi,bj)<4)
filtblue(bi,bj)=0;
else
end
end
end
figure,imshow(filtblue,[]);
title('blue channel median filtered image');
impixelinfo;
3.11 Experimental Setup
The system performance is evaluated using the accuracy measure, which is computed as
follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN) x 100%
         = (number of correctly classified images / total number of images) x 100%

where TP is the number of true positives, FP false positives, TN true negatives, and FN
false negatives.
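
A small sketch of how these quantities translate into code; TP, TN, FP and FN are assumed to be scalar counts obtained from the classifier output.

% Accuracy, sensitivity and specificity from hypothetical confusion counts.
accuracy    = (TP + TN) / (TP + TN + FP + FN) * 100;   % percentage of correctly classified images
sensitivity = TP / (TP + FN);                          % true positive rate (section 4.9)
specificity = TN / (TN + FP);                          % true negative rate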

The k-means clustering method is utilized for various values of K in order to verify the
classification outcomes. Two types of experiments are carried out: one on the training set
and the other on the testing set.



CHAPTER-IV

RESULTS AND DISCUSSION

In this work, we have used several datasets of healthy and affected fundus images. The
software used for processing these images is MATLAB 2013a. The results of the
processing are as follows:

4.1 Original Image
All the images of the dataset are digitized slides captured with a Canon EOS 20D camera.
One of the images from the dataset is shown below:

Figure4. 1 Original Fundus Image

This image has dimensions of 2544×1696 pixels, a horizontal and vertical resolution of
72 dpi, and is digitized with 24 bits per pixel.

4.2 RGB Images


An RGB image stores the red, green, and blue color components for each individual
pixel. In MATLAB, the array may be of class double, uint8, or uint16.

Figure4. 2 (a) Red channel Image (b) Green Channel Image (c) Blue Channel image

4.3 Median Filtered Images


The median filter was applied to the images of all three channels (red, green and blue
components) for noise reduction, using a 3-by-3 neighborhood. The results of the median
filtered images are as follows:


Figure4. 3 (a) Red channel median filtered Image (b) Blue channel median filtered Image

(c) Green channel median filtered Image



4.4 Optic Disc Removal Images
Retinal blood vessels originate from the centre of the optic disc, so the disc location can
be used as an initial seed point for vessel tracking methods.

Figure4. 4 (a) Optic Disc Removal-Red Channel (b) Optic Disc Removal-Blue Channel (c) Optic
Disc Removal-Green Channel

4.5 Vessel Removed Images


Vessels are eliminated from the retina in order to compute more accurate features for the
detection of lesions. The vessel-removed images of the three channels are shown below:

Figure4. 5 (a) Vessel Removed Image-Red Channel (b) Vessel Removed Image-Blue Channel (c) Vessel Removed Image-Green Channel

4.6 Local Binary Pattern Images and Histogram


The LBP and VAR operators are used to characterize the texture of the retina background
with different radii. The results of the LBP images and their histograms for the RGB
components are shown below.


Figure4. 6 (a) Local binary pattern-red channel (b) Histogram of LBP-red channel
(c) Local binary pattern-green channel (d) Histogram of LBP-green channel (e) Local binary
pattern-blue channel (f) Histogram of LBP-blue channel (Radius-1)


Figure4. 7 (a) Local binary pattern-red channel (b) Histogram of LBP-red channel (c) Local
binary pattern-green channel (d) Histogram of LBP-green channel (e) Local binary pattern-
blue channel (f) Histogram of LBP-blue channel (Radius-5)

4.7 Variance Feature Images


Similarly, the variance (VAR) features are used to characterize the texture of the retinal
image. The results of the variance images of the RGB components are shown below.

Figure4. 8 (a) Variance Feature Image-Red Channel (b) Variance Feature Image-Blue Channel (c) Variance Feature Image-Green Channel (Radius-3)



4.9 TPR and TNR Results
The evaluation of the algorithms depends on two measures: sensitivity, or true positive
rate, and specificity, or true negative rate. Sensitivity and specificity are calculated on
both the training and the testing sets throughout the experiments. The following tables
show the accuracy obtained with a linear kernel and 100 iterations for the training and
testing set images:

4.9.1 Testing set

Image no.   Accuracy
1 96.742%
2 95.161%
3 96.774%
4 96.772%
5 93.548%
6 95.161%
7 95.162%
8 93.548%
9 96.774%
10 96.775%
11 95.161%
12 95.161%
13 96.774%
15 93.548%
16 96.776%
Table 4.1: Testing set (Healthy Images)

Image no.   Accuracy
1 95.161%
2 96.774%
3 95.445%
4 93.548%
5 95.161%
6 96.774%
7 95.162%
8 96.770%
9 97.451%
Table 4.2: Testing set (MA Images)



4.9.2 Training set

Image no.   Accuracy
1           96.774%
2           100%
3           96.774%
4           96.774%
5           95.161%
6           96.774%
7           96.772%
8           95.161%
9           96.774%
10          95.161%
11          96.774%
12          93.548%
13          96.774%
14          96.115%
15          96.161%
16          95.155%
Table 4.3: Training set (Healthy Images)

Image no.   Accuracy
1           93.548%
2           96.774%
3           95.161%
4           95.161%
5           96.773%
6           95.884%
7           93.162%
8           96.770%
9           95.451%
Table 4.4: Training set (MA images)

4.10 ROC Curve


The ROC curve plots the true positive rate against the false positive rate; the higher the
true positive rate for a given false positive rate, the better the accuracy of the classifier.
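
A hedged sketch of how such a curve can be generated with perfcurve (Statistics Toolbox); the score and label variables are hypothetical.

% ROC curve from hypothetical classifier scores for the MA (positive) class.
[fpr, tpr, ~, auc] = perfcurve(testLabels, scores, 1);
figure, plot(fpr, tpr, 'LineWidth', 1.5);
xlabel('False positive rate'); ylabel('True positive rate');
title(sprintf('ROC curve (AUC = %.3f)', auc));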

ROC Curve of the proposed system (true positive rate versus false positive rate; TPR and TNR curves shown)

Figure4. 9 ROC Curve



CHAPTER-V
CONCLUSIONS

5.1 Conclusions
In this research work, the main algorithm detects the presence of microaneurysms,
hemorrhages, or hard exudates away from the fovea. The performance of the algorithm is
better than that of previous methods. The advantage of local binary patterns is clearly
demonstrated, as the identification of abnormalities in the retina is improved. The system
developed with local binary patterns can also be very helpful in the management of
diabetic retinopathy. This software detection algorithm is based on a generalized
optimization scheme for image classification and segmentation. Because of the flexibility
of the system, these algorithms can be extended to the identification of different types of
lesions. The algorithm showed robustness, as it did not require retraining. Instead of using
contrast enhancement methods that may increase the noise level in the images, or using
classification to perform candidate extraction, this procedure provides a consistent
process for extracting candidates using only the normalized output of the RGB color
space.

5.2 Scope for future work


In the research presented in this work, the emphasis was on the detection of sight-
threatening pathologies. These algorithms gave good performance on limited data sets.
However, the number of people to be screened is very high, in the order of millions.
Therefore, to establish the clinical relevance of the proposed algorithms, their
performance would need to be validated on a very large dataset. This is because the
prevalence of sight-threatening conditions is low. In the near future, given the increasing
number of clinical screening research centers, an evaluation of this type will be possible.
Computer-aided diagnosis systems are evaluated in terms of accuracy, sensitivity and
specificity. In terms of clinical value, it would be interesting to consider the use of the
predictive value to evaluate performance. The predictive value is a better clinical metric
in the sense that it also takes into account the prevalence of the disease.



REFERENCES
1. World Health Organization (WHO), “Universal eye health: a global action plan
2014-2019,” 2013.
2. S. Zabihi, M. Delgir, and H.-R. Pourreza, “Retinal vessel segmentation using
color image morphology and local binary patterns,” in Machine Vision and Image
Processing (MVIP), 6th Iranian, 2010.
3. M. Garnier, T. Hurtut, H. Ben Tahar, and F. Cheriet, “Automatic multiresolution
age-related macular degeneration detection from fundus images,” in SPIE,
Proceedings, vol. 9035, 2014, pp. 903532–903532–7.
4. M. R. K. Mookiah, U. R. Acharya, H. Fujita, J. E. Koh, J. H. Tan, K. Noronha, S.
V. Bhandary, C. K. Chua, C. M. Lim, A. Laude, and L. Tong, “Local
configuration pattern features for age-related macular degeneration
characterization and classification,” Computers in Biology and Medicine, vol. 63,
pp. 208 – 218, 2015.
5. S. Morales, V. Naranjo, J. Angulo, J. J. Fuertes, and M. Alcañiz, “Segmentation
and analysis of retinal vascular tree from fundus images processing,” in
International Conference on Bio-inspired Systems and Signal Processing
(BIOSIGNALS 2012), 2012, pp. 321 – 324.
6. T. Ojala, M. Pietikäinen, T. Mäenpää, “Multiresolution grayscale and
rotation invariant texture classification with local binary patterns,” IEEE Trans.
Pattern Anal. Machine Intelligence, vol. 24, no. 7, 2002.
7. S. Morales, V. Naranjo, J. Angulo, and M. Alcañiz, “Automatic detection of
optic disc based on pca and mathematical morphology,” Medical Imaging, IEEE
Transactions on, vol. 32, no. 4, pp. 786–796, April 2013.
8. Agurto, Carla, Honggang Yu, Victor Murray, Marios S. Pattichis, Sheila Nemeth,
Simon Barriga, and Peter Soliz. "A multiscale decomposition approach to detect
abnormal vasculature in the optic disc", Computerized Medical Imaging and
Graphics, 2015.
9. N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE:
Synthetic minority over-sampling technique,” Artificial Intelligence Research,
Journal of, vol. 16, pp. 321–357, 2002.
10. http://www.pertanika.upm.edu.my/journals.



11. T. Scheffer, “Error estimation and model selection,” Ph.D. dissertation,
Technische Universität Berlin, School of Computer Science, 1999.
12. S. Dudoit and M. J. van der Laan, “Asymptotics of cross-validated risk estimation
in estimator selection and performance assessment,” Statistical Methodology, vol.
2, no. 2, pp. 131 – 154, 2005.
13. R. Kohavi and G. H. John, “Wrappers for feature subset selection,” Artificial
Intelligence, vol. 97, no. 12, pp. 273 – 324, 1997.
14. M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten,
“The weka data mining software: An update,” SIGKDD Explor. Newsl., vol. 11,
no. 1, pp. 10–18, Nov. 2009.
15. L. S. Cessie and J. C. van Houwelingen, “Ridge Estimators in Logistic
Regression,” Applied Statistics, vol. 41(1), pp. 191–201, 1992.

16. T. Ahonen, A. Hadid, and M. Pietikäinen. Face recognition with Local Binary
Patterns. Computer Vision-ECCV 2004, 469–481, 2004.
17. A. Alahi, R. Ortiz, and P. Vandergheynst. FREAK: Fast retina keypoint. In
Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on.
IEEE, 510–517, 2012.
18. M. Alsheh Ali, T. Hurtut, T. Faucon, and F. Cheriet. Glaucoma detection based
on Local Binary Patterns in fundus photographs. Proc. SPIE, Medical Imaging,
Computer Aided Diagnosis, 9035:903531–903531–7, 2014.
19. A. Aquino, M. E. Gegúndez-Arias, and D. Marín. Detecting the optic disc
boundary in digital fundus images using morphological, edge detection, and
feature extraction techniques. Medical Imaging, IEEE Transactions on,
29(11):1860–1869, 2010.
20. F. Bianconi and A. Fernández. Rotation invariant co-occurrence features based on
digital circles and Discrete Fourier transform. Pattern Recognition Letters,
48(0):34 – 41, 2014. Celebrating the life and work of Maria Petrou.
21. J. D. Cavallerano, L. P. Aiello, A. A. Cavallerano and et al., "Nonmydriatic
digital imaging alternative for annual retinal examination in persons with
previously documented no or mild diabetic retinopathy," Am J Ophthalmol, vol.
140, no. 4, pp. 667-673, 2005.



22. V. Murray, P. Rodriguez and M. Pattichis, "Multi-scale AM-FM Demodulation
and Reconstruction Methods with Improved Accuracy," IEEE Transactions on
Image Processing, vol. 19, no. 5, pp. 1138-1152, 2010.
23. M. Mookiah, U. R. Acharya, R. J. Martis, C. K. Chua, C. Lim, E. Ng, and A.
Laude, “Evolutionary algorithm based classifier parameter tuning for automatic
diabetic retinopathy grading: A hybrid feature extraction approach,” Knowledge-
Based Systems, vol. 39, no. 0, pp. 9 – 22, 2013.

24. T. Spencer, J. A. Olson, K. C. McHardy, P. F. Sharp and J. V. Forrester, "An


image-processing strategy for the segmentation and quantification of
microaneurysms in fluorescein angiograms of the ocular fundus," Computers and
Biomedical Research , vol. 29, pp. 284-302, 1996.
25. A. D. Fleming, S. Philip, K. Goatman, J. Olson and P. Sharp, "Automated
assessment of diabetic retinal Image quality based on clarity and field definition,"
Invest. Ophthalmol. Vis. Sci. , vol. 47, pp. 1120-1125, 2006.
26. C. Sinthanayothin, J. F. Boyce, H. L. Cook and T. Williamson, "Automated
localisation of the optic disc, fovea,and retinal blood vessels from digital colour
fundus images," Br. J. Ophthalmol., vol. 83, pp. 231-238, 1999.
27. T.Walter and J. C. Klein, "Segmentation of color fundus images of the human
retina: Detection of the optic disc and the vascular tree using morphological
techniques," in Proc. 2nd Int. Symp. Med. Data Anal., pp. 282-287, 2001.
28. F. H. Jelinek and M. J. Cree., Automated image detection of retinal pathology,
Boca Raton: CRC Press, 2010.
29. M. Lalonde, L. Gagon and M. C. Boucher, "Automatic visual quality assessment
in optical fundus images," Proceedings of Vision Interface 2001, Ottawa, pp. 259-
264, June 2001.
30. H. Davis, S. R. Russell, E. S. Barriga, M. D. Abramoff and P. Soliz, "Vision-
based, real-time retinal image quality assessment," CBMS 2009, pp. 1-6, 2009.
31. D. B. Usher, M. Himaga, M. J. Dumskyj and et al., "Automated assessment of
digital fundus image quality using detected vessel area," Proceedings of Medical
Image Understanding and Analysis. British Machine Vision Association (BMVA)
Sheffield, UK:, pp. 81-84, 2003.



List of Publications

1. Ritika and Ashavani Kumar, “Retinal lesion detection through local binary
patterns,” NCNIT, 5-6 March 2017, NIT Kurukshetra, p. 78.

2. Ritika and Ashavani Kumar, “Detection of microaneurysms in retinal images
through local binary patterns,” ESHM, 26 March 2017, National Institute of
Technical Teachers Training and Research, Chandigarh.

3. Ritika and Ashavani Kumar, “Detection of microaneurysms in retinal images
through local binary patterns,” published in IJATES (ISSN 2348-7550),
Volume 05, Issue 03, March 2017.
