COVER PAGE
PREFACE
INTRODUCTION
TABLE OF CONTENTS
1. TUTORIALS
1.6 Halftoning
1.8 Convolution
2. PROJECT
1 TUTORIALS
This subsection describes the related process or formula used, e.g. how to read TIFF tag values.
1.1.2 Input
1.1.3 Output
This subsection describes the related process or formula used, e.g. how to write a RAW image file.
1.2.2 Input
1.2.3 Output
This subsection describes and explains brightening and darkening of an image.
The related process or formula is described, e.g. brightening by adding a constant value to every pixel.
1.3.2 Input
1.3.3 Output
This subsection describes and explains bit-plane slicing.
The related process is described, e.g. the steps to perform bit-plane slicing.
1.4.2 Input
1.4.3 Output
1.5.2 Input
1.5.3 Output
1.6 Halftoning
This subsection describes and explains halftoning, including patterning and dithering.
1.6.1 Patterning
1.6.1.2 Input
1.6.1.3 Output
1.6.2 Dithering
1.6.2.2 Input
1.6.2.3 Output
1.7.2 Input
1.7.3 Output
1.8 Convolution
The grey levels taken from the neighbourhood are weighted by coefficients that come from a matrix known as the convolution kernel. Since a neighbourhood is centred on a pixel, the kernel must have odd dimensions, such as 3x3 or 5x5.
Figure 1.1: A 3x3 convolution kernel (h) and the corresponding image neighbourhood (f)
The figure above shows the 3x3 kernel used in this tutorial and the corresponding 3x3 neighbourhood of pixels from the original image. The kernel is centred on the shaded pixel. During convolution, each kernel coefficient is multiplied by a value from the neighbourhood of the image lying under the kernel, in such a way that the value at the top-left corner of the kernel is multiplied by the value at the bottom-right corner of the neighbourhood. The entire calculation is shown in the figure below.
Figure 1.2: The convolution calculation to get a new value (g) for the pixel
In addition, the convolution process sets all border pixels, where the kernel does not fit entirely within the image, to black.
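The steps above can be sketched as follows. This is a minimal illustration, not the tutorial's actual source code; it assumes an 8-bit greyscale image stored as a 2D int array:

```java
public class Convolution {
    // Convolve a greyscale image f with a 3x3 kernel h.
    // The kernel is rotated 180 degrees relative to the neighbourhood, so the
    // kernel's top-left coefficient meets the neighbourhood's bottom-right pixel.
    // Border pixels, where the kernel does not fit, are left at 0 (black).
    static int[][] convolve(int[][] f, int[][] h) {
        int rows = f.length, cols = f[0].length;
        int[][] g = new int[rows][cols];
        for (int y = 1; y < rows - 1; y++) {
            for (int x = 1; x < cols - 1; x++) {
                int sum = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        sum += h[1 - j][1 - i] * f[y + j][x + i];
                g[y][x] = sum;
            }
        }
        return g;
    }

    public static void main(String[] args) {
        int[][] image = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
        int[][] identity = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
        // The identity kernel leaves interior pixels unchanged.
        System.out.println(convolve(image, identity)[1][1]); // 5
    }
}
```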
1.8.2 Result
Figure 1.4: The result of convolution process: (left) the original image, lenna.tif; (right)
result image, Convolution.raw
Low pass filtering is a process that smooths or blurs the image. It tends to reduce noise, but also obscures fine detail.
The mean filter is a kernel whose coefficients are all positive and equal; it is used in the low pass filtering process, as shown in the figure below.
Low pass filtering follows the same process as convolution, in that the grey levels taken from the neighbourhood are weighted by the coefficients of the mean filter kernel. During convolution, the kernel sums all the pixel values from the neighbourhood, and the sum is then divided by the number of pixels in the neighbourhood. Convolution with these kernels is therefore equivalent to computing the mean grey level over the neighbourhood defined by the kernel.
A high degree of smoothing can be achieved through the use of larger kernels, such as a 5x5 kernel, or through repeated application of a small kernel, such as a 3x3 kernel, to an image.
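The mean filtering process can be sketched as follows. This is a minimal illustration, not the tutorial's actual source code, assuming a 3x3 mean kernel over an 8-bit greyscale image:

```java
public class LowPassFilter {
    // Low pass (mean) filter: each interior pixel becomes the mean of its 3x3
    // neighbourhood, i.e. convolution with a kernel of nine equal coefficients
    // followed by division by 9. The result is rounded to the nearest grey level.
    static int[][] meanFilter(int[][] f) {
        int rows = f.length, cols = f[0].length;
        int[][] g = new int[rows][cols];
        for (int y = 1; y < rows - 1; y++) {
            for (int x = 1; x < cols - 1; x++) {
                int sum = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        sum += f[y + j][x + i];
                g[y][x] = Math.round(sum / 9.0f); // round off the averaged value
            }
        }
        return g;
    }

    public static void main(String[] args) {
        int[][] image = {{10, 10, 10}, {10, 100, 10}, {10, 10, 10}};
        // The isolated spike (100) is smoothed towards its neighbours:
        // (8 * 10 + 100) / 9 = 20
        System.out.println(meanFilter(image)[1][1]); // 20
    }
}
```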
1.9.2 Result
Figure 1.7: The result of low pass filtering: (left) the original image, lenna_noise.tif; (right)
result image, LowPassFilter.raw
Figure 1.8: The source code of round off pixel value function
1.10.1 Process and Formula Used: High Pass Filter and Mapping Function
The high pass filter is a kernel containing a mixture of positive and negative coefficients and is used in the high pass filtering process, as shown in the figure below.
The sum of the coefficients in this kernel is zero. So, when the kernel is over an area of constant or slowly varying grey level, the result of convolution is zero or some very small number. However, when the grey level varies rapidly within the neighbourhood, the result of the convolution can be a large positive or negative number, as the kernel contains both positive and negative coefficients.
After that, the pixel values generated must be mapped onto a 0 to 255 range,
in order to display or print the filtered image. The mapping formula is as shown in
the figure below.
By using the above formula, a filter response of 0 maps onto the middle of the range. Thus, negative filter responses show up as dark tones, whereas positive responses are represented by light tones.
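The mapping step can be sketched as follows. This is an illustration rather than the report's actual code; since the exact formula from Figure 1.10 is not reproduced here, the symmetric form below, which pivots on the largest absolute filter response, is an assumption:

```java
public class HighPassMapping {
    // A common 3x3 high pass kernel; its coefficients sum to zero.
    static final int[][] KERNEL = {{-1, -1, -1}, {-1, 8, -1}, {-1, -1, -1}};

    // Map a signed filter response onto 0..255 so that a response of 0 lands
    // in the middle of the range. maxMagnitude is the largest absolute filter
    // response in the image (found by a min/max scan over all responses).
    static int mapToRange(int response, int maxMagnitude) {
        return Math.round(255f * (response + maxMagnitude) / (2f * maxMagnitude));
    }

    public static void main(String[] args) {
        System.out.println(mapToRange(0, 2040));     // 128: zero response -> mid grey
        System.out.println(mapToRange(-2040, 2040)); // 0:   most negative -> darkest
        System.out.println(mapToRange(2040, 2040));  // 255: most positive -> lightest
    }
}
```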
1.10.2 Result
Figure 1.12: The result of high pass filtering: (left) the original image, cameraman.tif; (right)
result image, HighPassFile.raw
Figure 1.13: The source code of get minimum and maximum filter function
The high boost filter is almost the same as the high pass filter: it emphasizes sharpening of the edges, although fine details may be lost.
Like high pass filtering, high boost filtering uses a kernel, known as the high boost filter, containing a mixture of positive and negative coefficients. The only difference is that the high boost filter has a variable central coefficient, c, as shown in the figure below.
When the central coefficient, c, is large, convolution has little effect on the image. As c gets closer to 8, the degree of sharpening increases. If c = 8, the kernel becomes the high pass filter described earlier.
High boost filtering undergoes the same process as high pass filtering, involving the convolution process as well as the mapping process. The mapping formula is as shown in Figure 1.10.
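A small sketch of the high boost kernel follows. This is an illustration, not the report's code; it assumes the kernel is -1 everywhere except the central coefficient c:

```java
public class HighBoostKernel {
    // Build the 3x3 high boost kernel: -1 everywhere except the centre, c.
    static int[][] kernel(int c) {
        return new int[][]{{-1, -1, -1}, {-1, c, -1}, {-1, -1, -1}};
    }

    // The coefficients sum to c - 8, so the kernel degenerates to the
    // zero-sum high pass kernel exactly when c = 8.
    static int coefficientSum(int[][] h) {
        int sum = 0;
        for (int[] row : h)
            for (int v : row)
                sum += v;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(coefficientSum(kernel(8))); // 0: the high pass case
        System.out.println(coefficientSum(kernel(9))); // 1: c = 9, as in Figure 1.16
    }
}
```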
1.11.2 Result
Figure 1.16: The result of high boost filtering, with c = 9: (left) the original image,
cameraman.tif; (right) result image, HighBoostFilter.raw
When the median filter is used, the grey level of each pixel is replaced by the median of the grey levels in a neighbourhood of that pixel, instead of by the average. First, the values of the pixel and its neighbourhood are sorted. Next, the median is determined; for example, in a 3x3 neighbourhood, the median is the 5th largest value. Lastly, this value is assigned to the pixel.
Thus, the principal function of median filtering is to force points with distinct intensities to be more like their neighbours, by eliminating the intensity spikes that appear isolated in the area of the filter mask. Clearly, median filtering can eliminate impulse noise only if the noisy pixels occupy less than half the area of the neighbourhood.
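The sort-and-select steps above can be sketched as follows. This is an illustration, not the tutorial's actual source code; the library sort stands in for the insertion sort shown in Figure 1.19:

```java
import java.util.Arrays;

public class MedianFilter {
    // Median of a 3x3 neighbourhood centred on (y, x): collect the nine
    // values, sort them, and take the middle (5th) one.
    static int median3x3(int[][] f, int y, int x) {
        int[] values = new int[9];
        int k = 0;
        for (int j = -1; j <= 1; j++)
            for (int i = -1; i <= 1; i++)
                values[k++] = f[y + j][x + i];
        Arrays.sort(values); // the tutorial uses insertion sort; any sort works
        return values[4];
    }

    public static void main(String[] args) {
        int[][] noisy = {
            {10, 12, 11},
            {13, 255, 10}, // 255 is an isolated impulse ("salt") pixel
            {11, 10, 12}
        };
        // The spike is replaced by the median of its neighbourhood, not the mean.
        System.out.println(median3x3(noisy, 1, 1)); // 11
    }
}
```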
1.12.2 Result
Figure 1.18: The result of median filtering: (left) the original image, lenna_noise.tif; (right)
result image, MedianFilter.raw
Figure 1.19: The source code of insertion sort function to sort the neighbourhood values
Edge detection is one of the major applications of convolution. Edges can be defined loosely as locations in an image where there is a sudden variation in the grey level of pixels. The contours of solid objects, surface markings, shadows, etc., all generate intensity or colour edges.
Edge detection follows the same process as convolution, in that the grey levels taken from the neighbourhood are weighted by coefficients that come from the kernel, or mask. In edge detection, there are two types of masks, namely Prewitt masks and Sobel masks. They are as shown in the figure below.
Assume that the masks are applied to the grey level values in a 3x3 region of an image, as shown in the figure below.
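Applying the Sobel masks can be sketched as follows. This is an illustration, not the tutorial's code; the masks are applied directly, without the 180-degree rotation, which only flips the sign of the response and does not affect its magnitude:

```java
public class SobelEdge {
    // Sobel masks for horizontal (x) and vertical (y) grey level changes.
    static final int[][] SOBEL_X = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] SOBEL_Y = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    // Weighted sum of the 3x3 neighbourhood centred on (y, x) under a mask.
    static int response(int[][] f, int[][] mask, int y, int x) {
        int sum = 0;
        for (int j = -1; j <= 1; j++)
            for (int i = -1; i <= 1; i++)
                sum += mask[j + 1][i + 1] * f[y + j][x + i];
        return sum;
    }

    public static void main(String[] args) {
        // A vertical step edge: dark on the left, bright on the right.
        int[][] f = {{0, 0, 100}, {0, 0, 100}, {0, 0, 100}};
        int gx = response(f, SOBEL_X, 1, 1);
        int gy = response(f, SOBEL_Y, 1, 1);
        // |gx| + |gy| is a cheap approximation of the gradient magnitude.
        System.out.println(Math.abs(gx) + Math.abs(gy)); // 400
    }
}
```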
1.13.2 Result
Figure 1.26: The result of Prewitt edge detection: (left) the original image, cameraman.tif;
(right) result image, Prewitt.raw
Figure 1.27: The result of Sobel edge detection: (left) the original image, cameraman.tif;
(right) result image, Sobel.raw
As for shrinking, an image can be shrunk by an integer factor, n, by finding the mean grey level of each nxn block in the input image; each block then yields a single pixel to put in the output image. The grey level of the output pixel is defined as the sum of the grey levels in the nxn block divided by the number of pixels in the block.
Assume that the mean shrinking is applied to the grey level values in a 3x3 region of an image, as shown in the figure below.
Figure 1.33: 3x3 blocks of an image, where the shaded region is the centre pixel values for
each block
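The block-averaging process can be sketched as follows. This is an illustration, not the tutorial's actual source code; leftover rows and columns that do not fill a whole block are simply discarded, which matches the 256x256 input shrinking to an 85x85 output in Figure 1.37:

```java
public class MeanShrink {
    // Shrink an image by an integer factor n: each output pixel is the mean
    // grey level of the corresponding n x n block of the input, rounded to
    // the nearest integer grey level.
    static int[][] shrink(int[][] f, int n) {
        int rows = f.length / n, cols = f[0].length / n;
        int[][] g = new int[rows][cols];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                int sum = 0;
                for (int j = 0; j < n; j++)
                    for (int i = 0; i < n; i++)
                        sum += f[y * n + j][x * n + i];
                g[y][x] = Math.round((float) sum / (n * n));
            }
        }
        return g;
    }

    public static void main(String[] args) {
        int[][] f = {
            {1, 2, 3, 4},
            {5, 6, 7, 8},
            {9, 10, 11, 12},
            {13, 14, 15, 16}
        };
        int[][] g = shrink(f, 2);
        System.out.println(g[0][0]); // mean of {1, 2, 5, 6} = 3.5, rounded to 4
        System.out.println(g.length + "x" + g[0].length); // 2x2
    }
}
```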
1.14.2 Result
Figure 1.36: The result of enlargement by factor 3: (top) the original image, Cameraman.tif
(256x256); (bottom) result image, Enlarge.raw (768x768)
Figure 1.37: The result of mean shrinking by factor 3: (left) the original image,
Cameraman.tif (256x256); (right) result image, shrinking.raw (85x85)
2. PROJECT
Our final project for the Image Processing class, Gambare, is derived from the word gambar (picture) and ganbare (Japanese for good luck). This application was the result of a task given by our lecturers to develop a photo editing application for mobile phones. We chose to use Android Studio to develop the Gambare application, as it is the official integrated development environment (IDE) for Android platform development.
In order to complete this project, the application must satisfy the following functional requirements:
2.3.1 AndroidManifest.xml
2.3.2 Layout
A layout defines the visual structure of a user interface (UI) for an activity or app widget. The three layouts used in this application are described below:
1. splash.xml:
This is the layout for the splash screen UI. It is the first screen viewed by the user when the user opens the application. The splash screen stays for 3 seconds before moving on to the next screen, the main screen. The splash screen UI is as shown in Figure 2.2.
2. activity_main.xml:
This is the layout for the main screen UI. It is the second screen viewed by the user, after the splash screen. The main screen consists of two buttons, namely the GALLERY button and the TAKE PHOTO button, as shown in Figure 2.3. The GALLERY button allows the user to choose an image from the phone's gallery, whereas the TAKE PHOTO button allows the user to take a photo using the phone's camera. When an image is chosen, that image appears at the centre of the screen and the START EDIT button appears at the bottom, as shown in Figure 2.4.
3. activity_edit.xml:
This is the layout for the edit image screen UI. The edit image screen consists of a save image button at the top and a horizontal scroll view at the bottom. The horizontal scroll view contains eight image buttons, namely sharpening, Gaussian blur, invert colour, greyscale, rotate, flip, brightness and contrast. The chosen image is located at the centre of the screen. The edit image screen UI is as shown in Figure 2.6.
Figure 2.3: The main screen UI Figure 2.4: The main screen UI when an image is chosen
This function allows the user to choose an image from the phone's gallery. The figures below show the variables and functions required to choose an image from the phone's gallery.
When the Sharpening icon is selected in the editing screen, the application applies this source code to the image. The weight in the source code can be adjusted by the user through the application.
When the Gaussian Blur icon is selected in the editing screen, the application applies this source code to the image. The result is shown in Figure 2.6.
The source code below is applied to the picture when the Invert Colour icon is selected in the editing screen. The result is shown in Figure 2.6.
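The inversion can be sketched on a single pixel as follows. This is a minimal illustration, not the app's actual source code; it assumes the standard Android packed-int ARGB pixel layout:

```java
public class InvertColour {
    // Invert one ARGB pixel: each colour channel v becomes 255 - v;
    // the alpha channel is left untouched.
    static int invert(int argb) {
        int a = (argb >> 24) & 0xFF;
        int r = 255 - ((argb >> 16) & 0xFF);
        int g = 255 - ((argb >> 8) & 0xFF);
        int b = 255 - (argb & 0xFF);
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int white = 0xFFFFFFFF;
        System.out.printf("%08X%n", invert(white)); // FF000000: white -> black
    }
}
```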
This is the source code that converts an image to greyscale. The result is shown in Figure 2.8.
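A greyscale conversion can be sketched per pixel as follows. This is an illustration, not the app's actual source code; the standard luminance weights 0.299R + 0.587G + 0.114B are an assumption, as the app's exact weights are not reproduced in the report:

```java
public class Greyscale {
    // Convert one ARGB pixel to grey: compute a weighted luminance from the
    // colour channels and write it back into all three channels, keeping alpha.
    static int toGrey(int argb) {
        int a = (argb >> 24) & 0xFF;
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        int grey = Math.round(0.299f * r + 0.587f * g + 0.114f * b);
        return (a << 24) | (grey << 16) | (grey << 8) | grey;
    }

    public static void main(String[] args) {
        int pureGreen = 0xFF00FF00;
        // Green contributes 0.587 * 255, which rounds to grey level 150 (0x96).
        System.out.printf("%08X%n", toGrey(pureGreen)); // FF969696
    }
}
```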
In this source code, the user is allowed to control the value variable through a slide bar, as shown in Figure 2.10. The intensity of each picture channel (alpha, red, green, blue) is increased or decreased based on the value variable that the user defines. The result is shown in Figure 2.10.
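The per-channel adjustment can be sketched as follows. This is an illustration, not the app's actual source code: adding a positive value brightens, a negative value darkens, and the result is clamped to the valid 0 to 255 range:

```java
public class Brightness {
    // Adjust one channel by a user-chosen value, clamping to 0..255.
    static int adjustChannel(int channel, int value) {
        return Math.min(255, Math.max(0, channel + value));
    }

    public static void main(String[] args) {
        System.out.println(adjustChannel(100, 40));  // 140: brightened
        System.out.println(adjustChannel(100, -40)); // 60:  darkened
        System.out.println(adjustChannel(240, 40));  // 255: clamped at the top
    }
}
```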
As with the change brightness code, the user is allowed to control the value variable through a slide bar. The contrast of each picture channel (alpha, red, green, blue) can be either increased or decreased. The result is shown in Figure 2.11.
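A contrast adjustment can be sketched as follows. This is an illustration, not the app's actual source code; pivoting the scaling on the mid grey level 128 is an assumption, as the app's exact formula is not reproduced in the report:

```java
public class Contrast {
    // Scale a channel about mid grey (128): values above 128 move further up,
    // values below move further down, with clamping to 0..255.
    static int adjustChannel(int channel, float factor) {
        int v = Math.round((channel - 128) * factor + 128);
        return Math.min(255, Math.max(0, v));
    }

    public static void main(String[] args) {
        System.out.println(adjustChannel(200, 1.5f)); // 236: bright tones pushed brighter
        System.out.println(adjustChannel(56, 1.5f));  // 20:  dark tones pushed darker
        System.out.println(adjustChannel(128, 1.5f)); // 128: mid grey unchanged
    }
}
```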
Figure 2.10: The normal brightness (left) and the brightness after being increased by the user (right)