ABSTRACT
magnetic resonance (MR) images and solving a wide range of problems in medical
imaging. The Fuzzy C-Means (FCM) clustering algorithm performs well in the absence of
noise, but it considers only a pixel's own attributes and not those of its neighbours. This
degrades segmentation accuracy. The Generalized Spatial Fuzzy C-Means (GSFCM)
clustering algorithm addresses this by utilizing both the given pixel attributes and the
distance attributes of its neighbours. Although GSFCM gives good output, its main
drawback is that it reaches only local minima of the objective function. To improve the
efficiency of clustering MR images, this paper proposes a genetic algorithm (GA) based
GSFCM. By using the GA, the global minimum of the clustering
CHAPTER II
INTRODUCTION
Image segmentation is one of the first and most important tasks in image analysis
and computer vision. It remains one of the major challenges in image analysis, since
image analysis tasks are often constrained by how well previous
provide satisfactory results when the boundaries of the desired objects are not clearly
defined by the image intensity information. Good segmentations benefit clinicians and
patients, as they provide important information for 3-D visualization, surgical planning
and early disease detection. However, the design of robust and efficient segmentation
algorithms is still a very challenging research topic, due to the variety and complexity of
images.
Many image processing techniques have been proposed for brain MRI
appropriate method in medical image segmentation. Its applications are very successful in
the area of image processing as well as medical imaging. The field of medicine has
become a very attractive domain for the application of fuzzy set theory. FCM is one of the
important clustering methods used to segment images. Fuzzy C-Means (FCM) is a data
clustering technique in which a dataset is grouped into c clusters, with every data point in
the dataset belonging to every cluster to a certain degree. For example, a data point that
lies close to the center of a cluster will have a high degree of membership to that cluster,
while a data point that lies far away from the center of a cluster will have a low degree of
membership. FCM starts with an initial guess for the cluster centers, which are intended
to mark the mean location of each cluster. This initial guess is most likely incorrect. Next,
the fuzzy logic function assigns every data point a membership grade for each cluster. By
iteratively updating the cluster centers and the membership grades for each data point,
the FCM moves the cluster centers to the right location within the data set.
This iteration is based on minimizing an objective function that represents the distance
from any given data point to a cluster center, weighted by that data point's membership
grade. Membership values of the FCM are renewed by considering the resistance of
clustering. In the possibilistic approach that corresponds to the intuitive concept of degree
the Generalized Spatial Fuzzy C-Means (GSFCM) algorithm improves on this. This
method takes into account the properties of local neighbourhoods, because the
membership of each pixel is influenced by its own membership and the memberships of
its neighbouring pixels, which depend on their distances to the considered pixel. The
resulting GSFCM membership is a weighted sum of the pixel's own membership and the
memberships of the pixels in its neighbourhood, together with the center pixel.
randomly choosing cluster centers from a uniform distribution over the data space.
In this approach, the binary strings representing the cluster centers undergo
mutation. The incorporation of mutation enhances the ability of the genetic algorithm to
find near-optimal solutions. The role of the mutation operator is to introduce new genetic
material into the gene pool, thus preventing the inadvertent loss of useful genetic material
in earlier phases of evolution. The creation of new genomes from existing ones during
reproduction is the process of crossover. Parent genomes are selected with a probability
The genetic algorithm does not depend on any initial conditions; it efficiently
escapes the sensitivity to initial values and improves the accuracy of clustering. It
proceeds in an incremental way, attempting to optimally add one new cluster center at
each stage. This approach is also very efficient at removing noise in the image. It is
LITERATURE SURVEY
Paper 1
Segmentation
Problem Description
these C partitions, let the unlabelled data set X = {x1, x2, …, xn} be the pixel intensities,
where n is the number of image pixels whose memberships are to be determined. The
FCM algorithm tries to partition the dataset X into C clusters. The standard FCM
objective function is defined as follows.
Jm(U, V) = Σ_{i=1}^{c} Σ_{k=1}^{n} uik^m d²(xk, vi) --------------------(1)

where d²(xk, vi) represents the distance between the pixel xk and the centroid vi, with the
constraint Σ_{i=1}^{c} uik = 1, and the degree of fuzzification m ≥ 1. A data point xk
belongs to the specific cluster vi that is given by the membership value uik of the data
point to that
cluster. Local minimization of the objective function Jm(U, V) is accomplished by
repeatedly updating the membership values and the cluster centers:

uik = [ Σ_{j=1}^{c} ( d²(xk, vi) / d²(xk, vj) )^{1/(m−1)} ]^{−1} --------------------(2)

vi = Σ_{k=1}^{n} uik^m xk / Σ_{k=1}^{n} uik^m --------------------(3)
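As a sketch (not the paper's code), the FCM update equations (2) and (3) above can be written with NumPy; the data layout (points as rows, a c × n membership matrix) is an assumption for illustration:

```python
import numpy as np

def fcm_update(X, V, m=2.0, eps=1e-12):
    """One iteration of the FCM updates, equations (2) and (3).

    X : (n, p) data points, V : (c, p) current cluster centers.
    Returns the new membership matrix U of shape (c, n) and new centers.
    """
    # squared distances d^2(x_k, v_i), shape (c, n)
    d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(-1) + eps
    # membership update, equation (2): u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
    ratio = (d2[:, None, :] / d2[None, :, :]) ** (1.0 / (m - 1))
    U = 1.0 / ratio.sum(axis=1)                       # (c, n), columns sum to 1
    # center update, equation (3): v_i = sum_k u_ik^m x_k / sum_k u_ik^m
    Um = U ** m
    V_new = (Um @ X) / Um.sum(axis=1, keepdims=True)  # (c, p)
    return U, V_new
```

Repeating this update until the centers stop moving realizes the local minimization described above.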
In the traditional FCM algorithm, the clustering of a pixel xk with class i depends
only on the membership value uik. Since a neighbouring pixel xj also influences the
clustering of xk through its own membership value uij, ignoring this influence degrades
accuracy. To overcome this problem, we take into account the spatial information of
correlated neighbouring pixels, whose impact on the pixel xk belonging to cluster i is
expressed by a total influence function Pik:
Pik = Σ_{j=1}^{Nk} h(xk, xj) g(uij) --------------------(4)

where Σ_{j=1}^{Nk} h(xk, xj) = 1 and g(uij) ranges in [0, 1], with j ∈ Sk.
If all pixels inside Sk completely belong to cluster i, the function value Pik = 1. This
implies that the pixel xk is mostly impacted by its neighbours. Since the function g(uij)
depends on the membership value uij (the probability of pixel xj belonging to cluster i),
the effective rate of g(uij) acts between the neighbour xj and the pixel xk. To determine
the function
As a result, Σ_{j=1}^{Nk} h(xk, xj) = 1 when both the function value Pik and the function
g(uij) are equal
center pixel xk. Moreover, the function h(xk, xj) should satisfy that the longer the
distance between xk and xj, the smaller the value of h(xk, xj). This leads to the following
equations:
h(xk, xj) = (1/d²(xk, xj)) / Σ_{i=1}^{Nk} (1/d²(xk, xi)) --------------------(5)

Pik = Σ_{j=1}^{Nk} g(uij) (1/d²(xk, xj)) / Σ_{i=1}^{Nk} (1/d²(xk, xi)) --------------------(6)

Pik = [ Σ_{i=1}^{Nk} 1/d²(xk, xi) ]^{−1} Σ_{j=1}^{Nk} g(uij)/d²(xk, xj) --------------------(7)
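The spatial influence of a neighbourhood on one pixel, equations (4) to (7), can be sketched as follows; the choice g(u) = u (used later in the document) and the inverse-squared-distance weighting of equation (5) are taken as given, while the array layout is an assumption:

```python
import numpy as np

def spatial_influence(u_i, d2, eps=1e-12):
    """Spatial influence P_ik of a neighbourhood on pixel x_k.

    u_i : memberships u_ij of the Nk neighbouring pixels to cluster i.
    d2  : squared distances d^2(x_k, x_j) to those neighbours.
    Uses g(u) = u and h(x_k, x_j) proportional to 1/d^2, normalised so
    that the weights sum to 1, as in equation (5).
    """
    w = 1.0 / (d2 + eps)           # inverse-squared-distance weights
    h = w / w.sum()                # h(x_k, x_j), sums to 1
    return float((h * u_i).sum())  # P_ik, equations (4)/(7)
```

Note that when every neighbour fully belongs to cluster i (all u_ij = 1), P_ik = 1, matching the property stated above.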
3.2.1 Advantages of the GSFCM:
Year : 2008
Problem Description
FCM is the most famous clustering algorithm. However, one of the greatest
disadvantages of this method is its sensitivity to noise and outliers in the data. Since
the membership values of FCM for outlier data are the same as for real data, outliers
There exist different methods to overcome this problem. Among them, three
Means (DWFCM) were proposed in this paper. This paper decreased the noise
function, in order to decrease the effect of noisy samples and outliers on the centroids.
3.3.1. Fuzzy Possibilistic C-Means Clustering (FPCM):
membership and typicality values for each vector in the dataset. FPCM minimizes the
objective function

Jfpcm = Σ_{i=1}^{c} Σ_{k=1}^{N} (uik^m + tik^η) dik², with dik = ||xk − vi|| --------------------(9)
where η is a parameter controlling the effect of typicality on clustering, subject to the
constraints

Σ_{i=1}^{c} uik = 1 and Σ_{k=1}^{N} tik = 1. --------------------(10)
This algorithm provides the matrices U, T and V. The conditions for local minima

tik = [ Σ_{j=1}^{N} (dik/dij)^{2/(η−1)} ]^{−1} --------------------(11) and
Due to the constraint (10), if the number of input samples (N) in a dataset is large,
the typicality of the samples will degrade, and the FPCM will no longer be insensitive to
outliers. So a modified version of FPCM was used, known as MFPCM, in which the sum
of the typicality values of a cluster i over all input samples equals the number of data
points that belong to that cluster.
vi = Σ_{k=1}^{N} (uik^m + tik^η) xk / Σ_{k=1}^{N} (uik^m + tik^η) --------------------(12)
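The FPCM updates (11) and (12) can be sketched as follows; the (c, N) distance layout and the default exponents are assumptions for illustration, not the paper's code:

```python
import numpy as np

def fpcm_typicality(d2, eta=2.0, eps=1e-12):
    """Typicality update, a sketch of equation (11).

    d2 : (c, N) squared distances d_ik^2 between centers and samples.
    Each t_ik is normalised across the samples, so that
    sum_k t_ik = 1 for every cluster i (constraint (10)).
    """
    d2 = d2 + eps
    # element [i, k, j] = (d_ik / d_ij)^(2/(eta-1)), then sum over j
    ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (eta - 1))
    return 1.0 / ratio.sum(axis=2)    # (c, N)

def fpcm_centers(X, U, T, m=2.0, eta=2.0):
    """Center update, a sketch of equation (12)."""
    W = U ** m + T ** eta             # combined weights (c, N)
    return (W @ X) / W.sum(axis=1, keepdims=True)
```

With equal distances the typicality spreads uniformly over the samples, which is exactly the large-N degradation the text describes.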
the probabilistic constraint Σ_{i=1}^{c} uik = 1, so that the algorithm generates low
memberships for outliers. To distinguish an outlier from a non-outlier, a new variable
called credibility was introduced. The credibility of a vector represents its typicality to
the whole data set, not to any particular cluster. If a vector has a low credibility value, it
is atypical to the data set
Here αk is the distance of the vector xk from its nearest centroid. The parameter θ
controls the minimum value of ψk, so that the noisiest vector gets credibility equal to θ.
So the

Σ_{i=1}^{c} uik = ψk --------------------(14)

uik = ψk / Σ_{j=1}^{c} (dik/djk)^{2/(m−1)} --------------------(15)
3.3.3. The Density Weighted Fuzzy C-Means Clustering (DWFCM):
equation of DWFCM is

J = Σ_{i=1}^{c} Σ_{k=1}^{n} uik^m d²(xk, vi) wk --------------------(16)
where wk = Σ_{y=1}^{n} exp(−h ||xk − xy|| / σ), in which h is a resolution parameter.
Likewise, the update equations for U and V in DWFCM are

vi = Σ_{k=1}^{n} wk uik^m xk / Σ_{k=1}^{n} wk uik^m --------------------(17)
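The density weight w_k used in (16) and (17) can be sketched as follows, under the assumption that the weight is a sum of exponentially decayed pairwise distances as written above:

```python
import numpy as np

def density_weights(X, h=1.0, sigma=1.0):
    """Density weight of each sample, a sketch of the w_k in (16)-(17):
    w_k = sum_y exp(-h * ||x_k - x_y|| / sigma).

    Points in dense regions accumulate large weights, while isolated
    points (noise, outliers) get small weights, so they contribute
    less to the centroids in equation (17).
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-h * d / sigma).sum(axis=1)
```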
3.3.4 Advantages of the FPCM, CFCM and DWFCM:
3. The FPCM algorithm attempts to minimize the objective function further than FCM.
5. Using CFCM may result in oscillations for noise-free data and for
3. All these 3 algorithms are not particularly suited for medical image
segmentation.
Paper 3
Year : 2006
Problem Description
points. Its constrained fuzzy C-partition can be briefly described as follows: given
that the membership function of the ith (i = 1, …, N) vector to the jth (j = 1, 2, …, C)

∀i, Σ_{j=1}^{C} uij = 1; ∀i, j, uij ∈ [0, 1]; ∀j, Σ_{i=1}^{N} uij > 0
The most widely used clustering criterion is the weighted within-group sum of
squared errors:

Jm = Σ_{i=1}^{C} Σ_{k=1}^{N} uik^m d²(xk, vi) --------------------(18)
where V is the vector of the cluster centers and m is the weighting exponent.
The FCM is a local search procedure with respect to this clustering criterion. Its
local minimum. To find the global minimum value, this paper proposed the global fuzzy
More specifically, we start with the fuzzy 1-partition and find its optimal position,
which corresponds to the centroid of the data set X. For the fuzzy 2-partition problem, the
first initial cluster center is placed at the optimal position for the fuzzy 1-partition, while
the second initial center at execution n is placed at the position of the data point xn
(n = 1, …, N). Then we perform the FCM algorithm from each of these initial partitions
to obtain the best solution for the fuzzy 2-partition. In general, let (V1(c), …, Vc(c))
denote the final solution for the fuzzy c-partition. If we have found the solution for the
fuzzy (c−1)-partition problem, we perform the FCM algorithm with c clusters from each
of these
The main advantage of the algorithm is that it does not depend on any initial
3.4.1 Algorithm:
1. Perform the FCM algorithm to find the optimal clustering centers V(1) of the
fuzzy 1-partition problem, and let obj_1 be the corresponding value of the objective
function found by (18).
2. Perform N runs of the FCM algorithm with c+1 clusters, where each run n starts
from the initial state (V1(c), …, Vc(c), xn), and obtain their corresponding values of the
3. Find the minimal value of the objective function obj_(c+1) and its corresponding
clustering centers V(c+1) from step 2. Let V(c+1) be the final clustering centers for the
fuzzy (c+1)-partition.
algorithm for each value of c (c = 1, 2, …, C). In order to improve the convergence speed
of the global fuzzy C-means algorithm, we propose the fast global FCM clustering
FCM to obtain the final clustering error Jm. Instead, we directly compute the
value of the objective function for every initial state, find the center corresponding to the
minimum value of the objective function to be the initial center, and then execute the
FCM algorithm to obtain the solution with c clusters. The steps of the fast global fuzzy
C-means algorithm are as follows.
Step 1:
Perform the FCM to find the optimal clustering center V(1) of the Fuzzy 1-
partition and let obj_1 be the corresponding value of the objective function found by 18.
Step 2:
Compute the value of the objective function for all initial states
(V1(c−1), …, V(c−1)(c−1), xn) by using

Jm = Σ_{i=1}^{N} ( Σ_{c=1}^{C} ||xi − vc||^{2/(1−m)} )^{1−m}, with ||xi − vc|| ≠ 0
(i = 1, …, N; c = 1, 2, …, C)
Step 3:
Find the minimal value of the objective function obj_(c+1) and the corresponding
initial state V0(c+1).
Step 4:
Perform FCM algorithm with c+1 clusters from the initial state V0(c+1) and
obtain the final clustering center V(c+1) for fuzzy c+1 partition.
Step 5:
C × N executions of the FCM algorithm, the fast global FCM clustering algorithm only
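Steps 2 and 3 of the fast global FCM can be sketched as follows; the closed-form objective (which holds at the optimal memberships) and the default m = 2 are assumptions based on the formula in Step 2:

```python
import numpy as np

def candidate_objective(X, V, m=2.0, eps=1e-12):
    """Closed-form FCM objective for a candidate set of centers V:
    J_m = sum_i ( sum_c ||x_i - v_c||^{2/(1-m)} )^{1-m},
    the quantity evaluated for every initial state in Step 2.
    """
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps
    inner = (d2 ** (1.0 / (1.0 - m))).sum(axis=1)   # sum over centers
    return float((inner ** (1.0 - m)).sum())

def best_initial_state(X, V_prev):
    """Steps 2-3: try every data point x_n as the added center and keep
    the candidate state with the smallest objective value."""
    objs = [candidate_objective(X, np.vstack([V_prev, x[None, :]]))
            for x in X]
    n = int(np.argmin(objs))
    return np.vstack([V_prev, X[n][None, :]]), objs[n]
```

Step 4 would then run ordinary FCM from the returned state, so only one FCM run is needed per cluster count instead of N.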
3.4.3 Advantages of the GFCM & FGFCM:
4. The faster convergence of the FGFCM does not significantly affect the solution
quality.
Paper 4
Author : M.A.Egan
Year : 1998
Most fuzzy C-means variants minimize an objective (fitness) function. The FCM is an
iterative technique that refines the cluster centers, sizes and weights at each iteration. The
genetic fuzzy c-means algorithm (GFCM) begins by randomly choosing cluster
centers from a uniform distribution over the data space. In GFCM, the binary strings
representing the cluster centers undergo mutation, which
enhances the ability of the genetic algorithm to find near-optimal solutions. The role of
the mutation operator is to introduce new genetic material into the gene pool, thus
preventing the inadvertent loss of useful genetic material in earlier phases of evolution.
The mutation operator in GFCM flips each bit of the bit string with a small probability
Pmut (Pmut = 0.01).
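The bit-flip mutation described above can be sketched in a few lines; the list-of-ints representation of the bit string is an assumption:

```python
import random

def mutate(bits, p_mut=0.01):
    """Bit-flip mutation: flip each bit of the string independently
    with probability P_mut (the text uses P_mut = 0.01)."""
    return [b ^ 1 if random.random() < p_mut else b for b in bits]
```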
The creation of new genomes from existing ones during reproduction is the process of
crossover. Parent genomes are selected with a probability Pcross (Pcross = 0.8) using the
roulette-wheel selection scheme. After a partner string is
chosen randomly, the two-point crossover operator is applied to these two parents.
The implementation of two-point crossover is as follows: two integer random numbers, i
and j, between 1 and 2c are generated. Both strings are cut into three portions at positions
i and j, and the portions between these crossover points are mutually interchanged. The
representation. It shows the two parent genomes, the crossover points i and j, and their
resulting offspring.
Parent 2: 67 98 76 53 76 86 54 34 65 45
Offspring 2: 67 23 35 53 76 81 54 34 65 45
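The cut-and-swap operation described above can be sketched generically (the parent values here are hypothetical, not the table's genomes):

```python
def two_point_crossover(p1, p2, i, j):
    """Two-point crossover: cut both parents at positions i and j and
    mutually interchange the middle portions, producing two offspring."""
    c1 = p1[:i] + p2[i:j] + p1[j:]
    c2 = p2[:i] + p1[i:j] + p2[j:]
    return c1, c2
```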
CHAPTER IV
PROBLEM DEFINITION:
recognized quite early, in the mid-1970s. Within this field, it is uncertainty found in the
process of diagnosis of disease that has most frequently been the focus of applications of
fuzzy set theory. With the increased volume of information available to physicians from
new medical technologies, the process of classifying different sets of symptoms under a
difficult. A single disease may manifest itself quite differently in different patients and at
different diseases, and the presence of several diseases in a single patient may disrupt the
constitutes one source of imprecision and uncertainty in the diagnostic process, the
knowledge concerning the state of the patient constitutes another. The physician
generally gathers knowledge about the patient from the past history, physical
examination, laboratory test results, and other investigative procedures such as X-rays
and ultrasonics. The knowledge provided by each of these sources carries with it varying
degrees of uncertainty. The past history offered by the patient may be subjective,
laboratory tests are often of limited precision, and the exact borderline between normal
and pathological is often unclear. X-rays and other similar procedures require correct
interpretation of the results. Thus, the state and symptoms of the patient can be known by
the physician with only a limited degree of precision. In the face of the uncertainty
concerning the observed symptoms of the patient as well as the uncertainty concerning
the relation of the symptoms to a disease entity, it is nevertheless crucial that a physician
determine the diagnostic label that will entail the appropriate therapeutic regimen. The
desire to better understand and teach this difficult and important process of medical
diagnostics has prompted attempts to model it with the use of fuzzy sets.
These models vary in the degree to which they attempt to deal with different
themselves, and the stages of hypothesis formation, preliminary diagnosis, and final
diagnosis within the diagnostic process itself. These models also form the basis for
computerized medical expert systems, which are usually designed to aid the physician in
A fuzzy set framework has been utilized in several different approaches to
modeling the diagnostic process. In the approach formulated by Sanchez (1979), the
and diseases.
problem of clustering in X is to find several cluster centers that can properly characterize
relevant classes of X. In classical cluster analysis, these classes are required to form a
partition of X such that the degree of association is strong for data within blocks of the
partition and weak for data in different blocks. However, this requirement is too strong in
many practical applications, and it is thus desirable to replace it with a weaker
requirement. When the requirement of a crisp partition of X is replaced with this weaker
emerging problem area as Fuzzy Clustering. Fuzzy pseudopartitions are often called
Fuzzy C-Partitions, where c designates the number of fuzzy classes in the partition. Both
There are two basic methods of fuzzy clustering. One of them, which is based on
Fuzzy C-Partitions, is called the Fuzzy C-Means clustering method. The other method,
degradation with segmentation, because medical images have limited spatial resolution,
poor contrast, noise, and non-uniform intensity variation. To overcome this, GSFCM was
introduced, which incorporates both the given pixel attributes and spatial local
information, weighted correspondingly to neighbour elements based on their distance
attributes.

[Flowchart: FCM clustering of a medical image]
Step 1: Input: medical image
Step 2: Determine the number of clusters and set the m value (m >= 1)
Step 3: Generate the fuzzification matrix Um (uniform distribution)
Step 4: Compute the distance between the current pixel and the centroid value, and
cluster the image according to the value U
Step 5: Centroid value is calculated by sum((U)m X) / sum((U)m)
Step 6: Assume e = difference between the previous center and the current center
Step 7: If e is greater than 0.01, then repeat steps 4 to 6
Step 8: Otherwise, calculate the J objective function
Output: Clustered image
The problem architecture of the FCM and GSFCM is given in the following
diagram.
FCM Algorithm
1. Distribute the pixels of the input image into the data set X and initialize the centers
V(0) = (v1(0), v2(0), …, vc(0))
fuzzification matrix.
for which we used g(uij) = uij and f(Pik) = 1/Pik for an efficient trade-off among the
cluster validity functions.

vi = Σ_{k=1}^{n} wik^m xk / Σ_{k=1}^{n} wik^m
[Flowchart: GSFCM algorithm]
Step 6: Find the difference between the new centroid and the old centroid
Output: Clustered image
GSFCM algorithm
This paper takes advantage of the genetic algorithm, which is used to find the
global optimum solution, and mitigates the disadvantages of the traditional FCM and
GSFCM algorithms. These two algorithms produce gradient-based, locally optimal
solutions, but they are not successful at finding the global optimum. This paper therefore
proposes applying a genetic approach to the two algorithms, FCM and GSFCM, for
medical image segmentation. The following diagram shows the architecture
CHAPTER V
ALGORITHM
There are basically two ways of fuzzifying classical genetic algorithms. One way
is to fuzzify the gene pool and the associated coding of chromosomes. The other one is to
example of determining the maximum of the function f(x) = 2x − x²/16 within the
domain [0, 31]. Numbers in this domain are represented by chromosomes whose
components are numbers in [0, 1]. For example, the chromosome {0.1, 0.5, 0, 1, 0.9}
represents a number
in [0, 31]. It turns out that this reformulation of classical genetic algorithms tends to
converge faster and is more reliable in obtaining the desired optimum. To employ it,
To illustrate this issue, let us consider a travelling salesman problem with four
cities C1, C2, C3 and C4. The alternative routes that can be taken by the salesman may be
characterized by chromosomes {x1, x2, x3, x4}, in which xi corresponds to city Ci
(i ∈ N4) and represents the degree to which the city should be visited early.
Thus, for example, {0.1, 0.9, 0.8, 0} denotes the route C2, C3, C1, C4, C2. Although the
extension of the gene pool from {0, 1} to [0, 1] may be viewed as a fuzzification of the
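The decoding rule in the example above, where genes express the degree to which each city should be visited early, can be sketched as follows:

```python
def decode_route(x, cities):
    """Decode a fuzzy TSP chromosome: gene x_i in [0, 1] is the degree
    to which city i should be visited early, so the route visits the
    cities in decreasing gene order (the salesman then returns home)."""
    order = sorted(range(len(x)), key=lambda i: -x[i])
    return [cities[i] for i in order]
```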
are taken from a given gene pool. Then, the simple crossover with crossover position i
corresponds to the template t = ⟨tj | tj = 1 for j ∈ Ni and tj = 0 for j ∈ {i+1, …, n}⟩, by
the formulas

x′ = (x ∧ t) ∨ (y ∧ t̄),
y′ = (x ∧ t̄) ∨ (y ∧ t),

where ∧ and ∨ are the min and max operations on tuples, and t̄ = ⟨t̄j = 1 − tj⟩.
We can see that the template t defines an abrupt change at the crossover
y′ = (x ∧ f̄) ∨ (y ∧ f),
x′ = ⟨max[min(xi, fi), min(yi, f̄i)] | i ∈ Nn⟩,
y′ = ⟨max[min(xi, f̄i), min(yi, fi)] | i ∈ Nn⟩,
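A sketch of this crossover with a fuzzy template, using the component-wise max/min form; the exact complement placement follows the standard formulation and is reconstructed here, since the original formulas were garbled in extraction:

```python
def fuzzy_crossover(x, y, f):
    """Crossover with a fuzzy template f (genes in [0, 1]):
    x' = <max(min(x_i, f_i), min(y_i, 1 - f_i))>,
    y' = <max(min(x_i, 1 - f_i), min(y_i, f_i))>.
    With a crisp template (f_i in {0, 1}) this reduces to the simple
    crossover x' = (x AND t) OR (y AND not-t)."""
    xp = [max(min(xi, fi), min(yi, 1 - fi)) for xi, yi, fi in zip(x, y, f)]
    yp = [max(min(xi, 1 - fi), min(yi, fi)) for xi, yi, fi in zip(x, y, f)]
    return xp, yp
```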
algorithms seems to indicate that they are efficient, robust, and better attuned to
[Flowchart: architecture of the proposed GA-based segmentation]
Input MRI image
Generate the chromosome for the fitness function
Find the centroid value and set the centroid value as a chromosome
Cross over the chromosome matrix
Compare and find the optimal solution
Segmentation using GSFCM and FCM
Find the segmented image
(The fitness function uses distance terms of the form d²(xk, xi).)
Chapter VI
EXPECTED OUTCOME
This section evaluates the performance of the proposed GSFCM algorithm and
Table 1: Cluster validity function values

Image    Clusters  Method  Vpc       Vpe       Vxb       Vfs [x10^6]
Image 1  4         FCM     0.940804  0.104015  0.161454  -322.930972
Image 1  4         MFCM    0.953824  0.087526  0.165691  -321.925050
Image 1  4         GSFCM   0.965434  0.030430  0.096950  -329.502357
Image 1  5         FCM     0.916844  0.137128  0.172825  -335.425575
Image 1  5         MFCM    0.784089  0.175582  0.296020  -307.483034
Image 1  5         GSFCM   0.967765  0.028267  0.076965  -353.473182
Image 2  3         FCM     0.833683  0.103956  0.037772  -316.408642
Image 2  3         MFCM    0.909601  0.082992  0.037856  -318.056959
Image 2  3         GSFCM   0.950089  0.037706  0.033957  -335.892399

[Figure: segmentation results for Image 1 and Image 2]
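The validity functions Vpc and Vpe in Table 1 are presumably Bezdek's partition coefficient and partition entropy; under that assumption they can be sketched as:

```python
import numpy as np

def partition_coefficient(U):
    """V_pc = (1/n) * sum_i sum_k u_ik^2.
    Closer to 1 means a crisper, better partition."""
    return float((U ** 2).sum() / U.shape[1])

def partition_entropy(U, eps=1e-12):
    """V_pe = -(1/n) * sum_i sum_k u_ik * log(u_ik).
    Smaller means less fuzziness, i.e. a better partition."""
    return float(-(U * np.log(U + eps)).sum() / U.shape[1])
```

These trends match the table: GSFCM's higher Vpc and lower Vpe indicate crisper partitions than FCM and MFCM.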
Chapter VII
IDL TECHNOLOGICAL MAP
-To generate the random numbers for the given input matrix
Reform(Array,size):
– Reforms the matrix, i.e., changes the rows and columns of the matrix
Chapter VIII
MILESTONES
May, last week: problem of the proposed system defined and the problem understood
Chapter IX
REFERENCES
[1] Huynh Van Lung and Jong-Myon Kim, "A Generalized Spatial Fuzzy C-Means
24, 2009, pp. 409-414.
[2] M. A. Egan, "Locating Clusters in Noisy Data: A Genetic Fuzzy C-Means Clustering
[3] Weina Wang, Yunjie Zhang, Yi Li and Xiaona Zhang, "The Global Fuzzy C-Means
Clustering Algorithm," Proceedings of the 6th World Congress on Intelligent Control and
pp. 305-311.
[5] Dong-Chul Park, "Intuitive Fuzzy C-Means Algorithm," IEEE, 2009, pp. 83-87.
[6] Wu Jian and Feng GuoRui, "Intrusion Detection Based on Simulated Annealing and
[7] Jiang-She Zhang and Yiu-Wing Leung, "Improved Possibilistic C-Means Clustering
[8] Stelios Krinidis and Vassilios Chatzis, "A Robust Fuzzy Local Information C-Means
IEEE, pp. 61-65.
[10] Chang Wen Chen, Jiebo Luo and Kevin J. Parker, "Image Segmentation via Adaptive
Applications," IEEE Transactions on Image Processing, vol. 7, no. 12, Dec. 1998,
pp. 1673-1683.
[11] George J. Klir and Bo Yuan, "Fuzzy Sets and Fuzzy Logic," Prentice-Hall India
Publications.
[12] Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing," 3rd edition.