

COVER PAGE

PREFACE

INTRODUCTION

TABLE OF CONTENTS
PREFACE

INTRODUCTION

TABLE OF CONTENTS

1. TUTORIALS

1.1 Extract TIFF Tag Values

1.2 Conversion from TIFF to RAW

1.3 Change Brightness

1.4 Bit-plane Slicing

1.5 Inverse Colour

1.6 Halftoning

1.7 Greyscale Enhancement: Histogram Equalization

1.8 Convolution

1.9 Low Pass Filtering

1.10 High Pass Filtering

1.11 High Boost Filtering

1.12 Median Filtering

1.13 Edge Detection

1.14 Geometric Operations

2. PROJECT

2.1 Introduction to Gambare

2.2 Functional Requirements

2.3 Implementation of Gambare Editor Mobile Application

2.4 Functional Requirements of Gambare Editor Mobile Application



1. TUTORIALS

1.1 Extract TIFF Tag Values

The description and explanation about this subsection.

1.1.1 Related Process / Formula

The description of related process or formula used, e.g. how to read TIFF tag
values, etc.
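Since the tag-reading code itself belongs in section 1.1.4, the core idea can be sketched in Python. This is an illustrative sketch, not the tutorial's actual code: it parses only the first IFD of a little-endian TIFF and returns each entry's raw 4-byte value/offset field.

```python
import struct

def read_tiff_tags(data: bytes):
    """Parse the first IFD of a little-endian ("II") TIFF byte string and
    return {tag_id: raw value/offset field}. Big-endian files and value
    fields that are really offsets are not handled in this sketch."""
    if data[:2] != b"II":
        raise ValueError("this sketch only handles little-endian TIFFs")
    magic, ifd_offset = struct.unpack_from("<HI", data, 2)
    if magic != 42:
        raise ValueError("not a TIFF file")
    (num_entries,) = struct.unpack_from("<H", data, ifd_offset)
    tags = {}
    for i in range(num_entries):
        entry = ifd_offset + 2 + i * 12      # each IFD entry is 12 bytes
        tag, dtype, count, value = struct.unpack_from("<HHII", data, entry)
        tags[tag] = value
    return tags

# A hand-built one-entry TIFF: header, then an IFD whose single entry is
# ImageWidth (tag 256, type SHORT, count 1) with value 64.
tiff = (b"II" + struct.pack("<HI", 42, 8)        # magic 42, IFD at offset 8
        + struct.pack("<H", 1)                    # one IFD entry
        + struct.pack("<HHII", 256, 3, 1, 64)     # ImageWidth = 64
        + struct.pack("<I", 0))                   # no next IFD
print(read_tiff_tags(tiff))   # {256: 64}
```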

1.1.2 Input

Put the input TIFF image file here.

1.1.3 Output

Put the extracted information here.

1.1.4 Program Source Code

Put the related functions from source code here.

1.2 Conversion from TIFF to RAW

The description and explanation about this subsection.



1.2.1 Related Process / Formula

The description of related process or formula used, e.g. how to write RAW
image file, etc.
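A RAW file, as used throughout these tutorials, is just headerless pixel bytes. Assuming the TIFF has already been decoded into rows of 8-bit grey levels (the decoding itself belongs to section 1.1), writing the RAW data can be sketched as follows; this is an illustration, not the tutorial's actual code.

```python
def to_raw_bytes(pixels):
    """Flatten a greyscale image (rows of 0-255 ints) into the headerless
    byte stream a .raw file stores: one byte per pixel, row by row.
    Width and height are not recorded, so the reader must know them."""
    return bytes(p for row in pixels for p in row)

image = [[0, 128], [255, 64]]
raw = to_raw_bytes(image)
print(list(raw))   # [0, 128, 255, 64]
# Writing the file is then just: open("output.raw", "wb").write(raw)
```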

1.2.2 Input

Put the input TIFF image file here.

1.2.3 Output

Put the RAW image file here.

1.2.4 Program Source Code

Put the related functions from source code here.

1.3 Change Brightness

The description and explanation about this subsection. Include brightening and
darkening.

1.3.1 Related Process / Formula

The description of related process or formula used, e.g. brightening by adding a
value to every pixel, etc.
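The add-a-value idea can be sketched in Python (illustrative only; the actual code belongs in section 1.3.4). The essential detail is clamping, so results never leave the 0-255 range:

```python
def change_brightness(pixels, delta):
    """Add delta to every pixel of an 8-bit greyscale image and clamp to
    0-255. A positive delta brightens; a negative delta darkens."""
    return [[max(0, min(255, p + delta)) for p in row] for row in pixels]

image = [[10, 250], [100, 0]]
print(change_brightness(image, 20))    # [[30, 255], [120, 20]]
print(change_brightness(image, -20))   # [[0, 230], [80, 0]]
```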

1.3.2 Input

Put the input TIFF image file here.

1.3.3 Output

Put the brightened RAW image file here.

Put the darkened RAW image file here.

1.3.4 Program Source Code

Put the related functions from source code here.

1.4 Bit-plane Slicing

The description and explanation about this subsection.

1.4.1 Related Process / Formula

The description of related process or formula used, e.g. steps to perform
bit-plane slicing, etc.
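As a sketch of the slicing step (not the tutorial's own code): bit-plane k of an 8-bit image keeps only bit k of every pixel, and scaling set bits to 255 makes the plane visible when saved as a RAW image.

```python
def bit_plane(pixels, k):
    """Extract bit-plane k of an 8-bit image (0 = least significant,
    7 = most significant). Set bits become 255 so the plane is visible
    when the result is saved as a RAW image."""
    return [[255 if (p >> k) & 1 else 0 for p in row] for row in pixels]

image = [[5, 2], [255, 0]]        # 5 = 0b101, 2 = 0b010
print(bit_plane(image, 0))         # [[255, 0], [255, 0]]
print(bit_plane(image, 1))         # [[0, 255], [255, 0]]
# Repeating for k = 0..7 yields the eight output images of section 1.4.3.
```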

1.4.2 Input

Put the input TIFF image file here.



1.4.3 Output

Put the 8 RAW image files here: bit-plane 0 to bit-plane 7.

1.4.4 Program Source Code

Put the related functions from source code here.

1.5 Inverse Colour

The description and explanation about this subsection.

1.5.1 Related Process / Formula

The description of related process or formula used, e.g. steps to perform
inverse, formula used, etc.
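For 8-bit greyscale the inverse is simply new = 255 - old, which can be sketched as (an illustration, not the code of section 1.5.4):

```python
def invert(pixels):
    """Inverse colour for 8-bit greyscale: every level g becomes 255 - g,
    so dark and light shades swap ends of the range."""
    return [[255 - p for p in row] for row in pixels]

print(invert([[0, 100], [255, 200]]))   # [[255, 155], [0, 55]]
```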

1.5.2 Input

Put the input TIFF image file here.

1.5.3 Output

Put the result RAW image file here.

1.5.4 Program Source Code

Put the related functions from source code here.



1.6 Halftoning

The description and explanation about this subsection. Include patterning and
dithering.

1.6.1 Patterning

The detailed description and explanation about patterning.

1.6.1.1 Related Process / Formula

The description of related process or formula used, e.g. steps to perform
patterning, formula used, etc.

1.6.1.2 Input

Put the input TIFF image file here.

1.6.1.3 Output

Put the result RAW image file here.

1.6.1.4 Program Source Code

Put the related functions from source code here.



1.6.2 Dithering

The detailed description and explanation about dithering.

1.6.2.1 Related Process / Formula

The description of related process or formula used, e.g. steps to perform
dithering, formula used, matrices used, etc.
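As an illustrative sketch of ordered dithering (the threshold matrix used here is the standard 2x2 Bayer matrix, which may differ from the one used in the tutorial): each pixel is compared against a position-dependent threshold, so flat grey regions turn into visible dot patterns.

```python
# Standard 2x2 Bayer matrix (an assumption; the tutorial's matrix may differ).
BAYER_2X2 = [[0, 2],
             [3, 1]]

def ordered_dither(pixels):
    """Binarize an 8-bit greyscale image by tiling a 2x2 Bayer threshold
    matrix over it: pixel (x, y) becomes white only if it exceeds the
    threshold assigned to its position."""
    n = 2
    out = []
    for y, row in enumerate(pixels):
        out_row = []
        for x, p in enumerate(row):
            # Scale the matrix entry to the 0-255 grey range.
            threshold = (BAYER_2X2[y % n][x % n] + 0.5) * 255 / (n * n)
            out_row.append(255 if p > threshold else 0)
        out.append(out_row)
    return out

flat = [[100] * 4 for _ in range(4)]   # a flat mid-grey patch
for r in ordered_dither(flat):
    print(r)                           # alternating 255/0 dot pattern
```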

1.6.2.2 Input

Put the input TIFF image file here.

1.6.2.3 Output

Put the result RAW image file here.

1.6.2.4 Program Source Code

Put the related functions from source code here.

1.7 Greyscale Enhancement: Histogram Equalization

The description and explanation about this subsection.

1.7.1 Related Process / Formula

The description of related process or formula used, e.g. steps to perform
histogram equalization, formula used, etc.
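The standard formulation maps each grey level g to round((L - 1) * cdf(g)), where cdf is the cumulative histogram normalized by the pixel count. The sketch below assumes that formulation, which may differ in detail from the tutorial's formula:

```python
def histogram_equalize(pixels, levels=256):
    """Histogram equalization: build the histogram, accumulate it into a
    CDF, and map each grey level g to round((levels - 1) * cdf(g))."""
    flat = [p for row in pixels for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for g in range(levels):
        running += hist[g]
        cdf[g] = running
    total = len(flat)
    lut = [round((levels - 1) * cdf[g] / total) for g in range(levels)]
    return [[lut[p] for p in row] for row in pixels]

# A dark, low-contrast 2x2 image spreads over the full range afterwards.
dark = [[50, 50], [51, 52]]
print(histogram_equalize(dark))   # [[128, 128], [191, 255]]
```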

1.7.2 Input

Put the input TIFF image file here.

1.7.3 Output

Put the result RAW image file here.

1.7.4 Program Source Code

Put the related functions from source code here.

1.8 Convolution

Neighbourhood operations perform their calculation over areas of an image: a
pixel's new value is computed from its old value and the values of pixels in its
vicinity. Examples of neighbourhood operations are convolution, linear filtering,
and edge detection. These operations are invariably more costly than simple point
processes, but allow us to achieve a whole range of interesting and useful effects.

Convolution is the fundamental neighbourhood operation of image processing.
In convolution, the calculation performed at a pixel is a weighted sum of grey
levels from a neighbourhood surrounding that pixel.

1.8.1 Process and Formula Used: Convolution kernel

The grey levels taken from the neighbourhood are weighted by coefficients
that come from a matrix known as the convolution kernel. Since a neighbourhood is
centred on a pixel, the kernel must have odd dimensions, such as 3x3 or 5x5.

Figure 1.1: A 3x3 convolution kernel (h) and the corresponding image neighbourhood (f)

The figure above shows the 3x3 kernel used in this tutorial and the
corresponding 3x3 neighbourhood of pixels from the original image. The kernel is
centred on the shaded pixel. During convolution, each kernel coefficient is
multiplied by a value from the neighbourhood of the image lying under the kernel,
in such a way that the value at the top-left corner of the kernel is multiplied by
the value at the bottom-right corner of the neighbourhood. The entire calculation
is shown in the figure below.

Figure 1.2: The convolution calculation to get a new value (g) for the pixel

Figure 1.3: Algorithm used for convolution

In addition, the convolution process described here sets all the border pixels
to black.
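The algorithm of Figure 1.3 can be sketched in Python (an illustration, not the code of Figure 1.5). Note the kernel rotation described above: kernel coefficient (j, i) meets the pixel mirrored through the centre of the neighbourhood, and border pixels are set to black:

```python
def convolve(pixels, kernel):
    """3x3 convolution as described in the tutorial: the kernel is rotated
    180 degrees relative to the neighbourhood (the top-left coefficient
    meets the bottom-right pixel), and border pixels, which lack a full
    neighbourhood, are set to 0 (black)."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = 0
            for j in range(3):
                for i in range(3):
                    # kernel[j][i] pairs with the pixel mirrored through the centre
                    total += kernel[j][i] * pixels[y + 1 - j][x + 1 - i]
            out[y][x] = total
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(convolve(image, identity))   # [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
```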

1.8.2 Result

Figure 1.4: The result of convolution process: (left) the original image, lenna.tif; (right)
result image, Convolution.raw

1.8.3 Program Source Code



Figure 1.5: The source code of convolution function

1.9 Low Pass Filtering

Low pass filtering smooths or blurs the image. This tends to reduce noise,
but also obscures fine detail.

1.9.1 Process and Formula Used: Mean Filter

The mean filter is a kernel whose coefficients are all positive; it is the
kernel used in the low pass filtering process, shown in the figure below.

Figure 1.6: A 3x3 mean filter kernel

Low pass filtering follows the same process as convolution, in that the grey
levels taken from the neighbourhood are weighted by coefficients that come from
the mean filter kernel. During convolution, all the pixel values from the
neighbourhood are summed, and the sum is then divided by the number of pixels in
the neighbourhood. Convolution with these kernels is therefore equivalent to
computing the mean grey level over the neighbourhood defined by the kernel.

A high degree of smoothing can be achieved through the use of larger kernels,
such as a 5x5 kernel, or by repeated application of a small kernel, such as a 3x3
kernel, to an image.
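A sketch of the mean filter (illustrative, not the code of Figure 1.9): each interior pixel becomes the rounded average of its n x n neighbourhood, and borders are left black, matching the convolution behaviour described in section 1.8.

```python
def mean_filter(pixels, n=3):
    """Low pass (mean) filter: each interior pixel becomes the rounded
    average of its n x n neighbourhood; border pixels are set to black."""
    h, w, r = len(pixels), len(pixels[0]), n // 2
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            total = sum(pixels[y + j][x + i]
                        for j in range(-r, r + 1) for i in range(-r, r + 1))
            out[y][x] = round(total / (n * n))
    return out

# A single bright noise spike is spread out, and so suppressed, by the mean.
image = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
print(mean_filter(image))   # [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
```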

1.9.2 Result

Figure 1.7: The result of low pass filtering: (left) the original image, lenna_noise.tif; (right)
result image, LowPassFilter.raw

1.9.3 Program Source Code

Figure 1.8: The source code of round off pixel value function

Figure 1.9: The source code of low pass filtering function

1.10 High Pass Filtering

High pass filtering is a neighbourhood operation that preserves sudden
variations in grey level, such as those that occur at the boundaries of objects,
but suppresses more gradual variations. It can have the adverse effect of making
noise more prominent, because noise has a strong high frequency component.

1.10.1 Process and Formula Used: High Pass Filter and Mapping Function

The high pass filter is a kernel containing a mixture of positive and negative
coefficients and is involved in the high pass filtering process. It is as shown in the
figure below.

Figure 1.10: A 3x3 high pass filter

The sum of the coefficients in this kernel is zero. So, when the kernel is over
an area of constant or slowly varying grey level, the result of convolution is zero or
some very small number. However, when grey level is varying rapidly within the
neighbourhood, the result of the convolution can be a large positive or negative
number, as the kernel contains both positive and negative coefficients.

After that, the pixel values generated must be mapped onto a 0 to 255 range,
in order to display or print the filtered image. The mapping formula is as shown in
the figure below.

Figure 1.11: The mapping formula

By using the above formula, the filter response of 0 maps onto the middle of
the range. Thus, negative filter responses will show up as dark tones, whereas
positive responses will be represented by light tones.
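The whole pipeline can be sketched in Python (an illustration, not the code of Figure 1.14). The exact mapping formula of Figure 1.11 is an image here, so the sketch assumes g' = g/2 + 128 clamped to 0-255, chosen only because it has the stated property that a zero response maps to mid-grey:

```python
def high_pass(pixels):
    """High pass filtering with the zero-sum kernel
        -1 -1 -1
        -1  8 -1
        -1 -1 -1
    followed by a mapping onto 0-255. The mapping g' = g/2 + 128 is an
    assumption standing in for the tutorial's formula (Figure 1.11); it
    sends a zero filter response to mid-grey (128)."""
    kernel = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
    h, w = len(pixels), len(pixels[0])
    out = [[128] * w for _ in range(h)]   # borders: treat as zero response
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g = sum(kernel[j][i] * pixels[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, g // 2 + 128))
    return out

# Over a flat area the response is zero, so everything maps to mid-grey.
flat = [[10] * 3 for _ in range(3)]
print(high_pass(flat))   # [[128, 128, 128], [128, 128, 128], [128, 128, 128]]
```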

1.10.2 Result

Figure 1.12: The result of high pass filtering: (left) the original image, cameraman.tif; (right)
result image, HighPassFile.raw

1.10.3 Program Source Code

Figure 1.13: The source code of get minimum and maximum filter function

Figure 1.14: The source code of high pass filtering function.

1.11 High Boost Filtering

The high boost filter is almost the same as the high pass filter: it
emphasizes and sharpens edges, though fine detail might be lost.

1.11.1 Process and Formula Used: High Boost Filter

Like high pass filtering, high boost filtering uses a kernel, known as the
high boost filter, containing a mixture of positive and negative coefficients. The
only difference is that the high boost filter has a variable central coefficient,
c. It is as shown in the figure below.

Figure 1.15: A 3x3 high boost filter

When the central coefficient, c, is large, convolution will have little effect on
an image. As c gets closer to 8, the degree of sharpening increases. If c = 8, the
kernel becomes the high pass filter as described earlier.

High boost filtering undergoes the same process as high pass filtering,
which involves the convolution process as well as the mapping process. The mapping
formula is as shown in Figure 1.11.
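A sketch with the variable central coefficient c (illustrative, not the code of Figure 1.17; the mapping g' = g/2 + 128 is an assumption standing in for the tutorial's formula, chosen so a zero response maps to mid-grey):

```python
def high_boost(pixels, c=9):
    """High boost filtering: the high pass kernel with a variable central
    coefficient c (c = 8 reduces it to the pure high pass filter), then a
    mapping onto 0-255 assumed here to be g' = g/2 + 128, clamped."""
    kernel = [[-1, -1, -1], [-1, c, -1], [-1, -1, -1]]
    h, w = len(pixels), len(pixels[0])
    out = [[128] * w for _ in range(h)]   # borders: treat as zero response
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g = sum(kernel[j][i] * pixels[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, g // 2 + 128))
    return out

flat = [[10] * 3 for _ in range(3)]
print(high_boost(flat, c=8)[1][1])   # 128: with c = 8 this is the high pass filter
print(high_boost(flat, c=9)[1][1])   # 133: c = 9 also keeps a copy of the original
```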

1.11.2 Result

Figure 1.16: The result of high boost filtering, with c = 9: (left) the original image,
cameraman.tif; (right) result image, HighBoostFilter.raw

1.11.3 Program Source Code

Figure 1.17: The source code of high boost filter function

1.12 Median Filtering

Median filtering is an alternative way to reduce noise in an image without
blurring. This method is particularly effective when the noise pattern consists of
strong, spike-like components and the characteristic to be preserved is edge
sharpness.

1.12.1 Process and Formula Used: Median Filter

With a median filter, the grey level of each pixel is replaced by the median
of the grey levels in a neighbourhood of that pixel, instead of by the average.
First, the values of the pixel and its neighbourhood are sorted. Next, the median
is determined; for example, in a 3x3 neighbourhood, the median is the 5th largest
value. Lastly, this value is assigned to the pixel.

Thus, the principal function of median filtering is to force points with distinct
intensities to be more like their neighbours, by eliminating the intensity spikes
that appear isolated in the area of the filter mask. Clearly, median filtering can
eliminate impulse noise only if the noisy pixels occupy less than half the area of
the neighbourhood.
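The three steps above (sort, take the middle value, assign) can be sketched as follows (illustrative, not the code of Figure 1.20; border pixels are left black, as in the other filters):

```python
def median_filter(pixels):
    """3x3 median filter: sort the 9 neighbourhood values and take the
    5th largest, i.e. the middle one. Border pixels are set to black."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbourhood = sorted(pixels[y + j][x + i]
                                   for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = neighbourhood[4]   # median of 9 sorted values
    return out

# A lone 255 spike among 10s is removed entirely, not merely blurred.
image = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(median_filter(image)[1][1])   # 10
```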

1.12.2 Result

Figure 1.18: The result of median filtering: (left) the original image, lenna_noise.tif; (right)
result image, MedianFilter.raw

1.12.3 Program Source Code

Figure 1.19: The source code of insertion sort function to sort the neighbourhood values

Figure 1.20: The source code of median filter function



1.13 Edge Detection

Edge detection is one of the major applications of convolution. Edges can
be defined loosely as locations in an image where there is a sudden variation in
the grey level of pixels. The contours of solid objects, surface markings, shadows,
etc., all generate intensity or colour edges.

1.13.1 Process and Formula Used: Prewitt and Sobel Operation

The most common method of edge detection is based on estimating the grey
level gradient at a pixel. The gradient is used frequently in industrial
applications, either to aid humans in the detection of defects or as a
pre-processing step in automated inspection.

Edge detection follows the same process as convolution, in that the grey
levels taken from the neighbourhood are weighted by coefficients that come from a
kernel, or mask. In edge detection there are two types of masks: Prewitt masks and
Sobel masks, shown in the figures below.

Figure 1.21: The Prewitt masks

Figure 1.22: The Sobel masks



Assume that the masks are applied to the grey level values in a 3x3 region of
an image, as shown in the figure below.

Figure 1.23: A 3x3 region of an image

For the Prewitt operation, the gradient magnitude, g, is given by:

Figure 1.24: The Prewitt operation

For Sobel operation, g is given by:

Figure 1.25: The Sobel operation
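A sketch of Sobel edge detection (illustrative, not the code of Figure 1.30). The gradient-magnitude formulas of Figures 1.24 and 1.25 are images here, so the sketch assumes the common approximation g = |gx| + |gy|, clamped to 255; swapping in the Prewitt masks gives the Prewitt version.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient mask
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient mask

def sobel(pixels):
    """Sobel edge detection using the assumed gradient magnitude
    g = |gx| + |gy|, clamped to 255. Border pixels are left black."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * pixels[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * pixels[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = min(255, abs(gx) + abs(gy))
    return out

# A vertical step edge produces a strong response along the boundary.
step = [[0, 0, 200, 200]] * 4
print(sobel(step)[1])   # [0, 255, 255, 0]
```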



1.13.2 Result

Figure 1.26: The result of Prewitt edge detection: (left) the original image, cameraman.tif;
(right) result image, Prewitt.raw

Figure 1.27: The result of Sobel edge detection: (left) the original image, cameraman.tif;
(right) result image, Sobel.raw

1.13.3 Program Source Code

Figure 1.28: The source code of calculate gradient magnitude, g, function

Figure 1.29: The source code of Prewitt edge detection function



Figure 1.30: The source code of Sobel edge detection function

1.14 Geometric Operations

Geometric operations change image geometry by moving pixels around in a
carefully constrained way. This is done to remove distortions inherent in the
imaging process, or to introduce a deliberate distortion that matches one image
with another. Scaling, such as enlargement and shrinking, is one of the geometric
operations.

1.14.1 Process and Formula Used: Enlargement and Mean Shrinking

An image can be enlarged by an integer factor, n, simply by copying each
pixel to an nxn block of pixels in the output image.


Figure 1.31: The enlargement process, by factor 3

As for shrinking, an image can be shrunk by an integer factor, n, by finding
the mean grey level of each nxn block in the input image to produce a single pixel
in the output image. The output pixel value is defined as:

Figure 1.32: The calculation to find mean in a nxn block of an image

Assume that the mean shrinking is done on the grey level values, in a 3x3
region of an image, as shown in the figure below.

Figure 1.33: 3x3 blocks of an image, where the shaded region is the centre pixel values for
each block

Then, the calculation of mean for each block is defined as:



Figure 1.34: The calculation to find mean in a 3x3 block of an image


Figure 1.35: The shrinking process, by factor 3
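Both operations can be sketched as follows (illustrative, not the code of Figures 1.38 and 1.39):

```python
def enlarge(pixels, n):
    """Enlarge by integer factor n: copy every pixel into an n x n block
    of the output image."""
    return [[p for p in row for _ in range(n)]
            for row in pixels for _ in range(n)]

def shrink(pixels, n):
    """Shrink by integer factor n: each output pixel is the rounded mean
    grey level of an n x n block of the input image."""
    h, w = len(pixels) // n, len(pixels[0]) // n
    return [[round(sum(pixels[y * n + j][x * n + i]
                       for j in range(n) for i in range(n)) / (n * n))
             for x in range(w)]
            for y in range(h)]

image = [[1, 2], [3, 4]]
big = enlarge(image, 2)
print(big)              # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
print(shrink(big, 2))   # [[1, 2], [3, 4]] -- mean shrinking undoes the enlargement
```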

1.14.2 Result

Figure 1.36: The result of enlargement by factor 3: (top) the original image, Cameraman.tif
(256x256); (bottom) result image, Enlarge.raw (768x768)

Figure 1.37: The result of mean shrinking by factor 3: (left) the original image,
Cameraman.tif (256x256); (right) result image, shrinking.raw (85x85)

1.14.3 Program Source Code

Figure 1.38: The source code of enlargement function



Figure 1.39: The source code of mean shrinking function

2. PROJECT

2.1 Introduction to Gambare

Our final project for the Image Processing class, Gambare, takes its name
from the words "gambar" (picture) and "ganbare" (Japanese for good luck). This
application is the result of a task given by our lecturers to develop a photo
editing application for mobile phones. We chose Android Studio to develop the
Gambare application, as it is the official integrated development environment
(IDE) for Android development.

2.2 Functional Requirements

In order to complete this project, the application must meet these functional
requirements:

1. Import a photo from the phone's gallery.
2. Capture an image using the phone's camera.
3. Perform sharpening on an image.
4. Perform Gaussian blur on an image.
5. Perform colour inversion on an image.
6. Perform greyscale conversion on an image.
7. Rotate (left, right) an image.
8. Flip (horizontal, vertical) an image.
9. Change the brightness (brighten, darken) of an image.
10. Change the contrast (high contrast, low contrast) of an image.

2.3 Implementation of Gambare Editor Mobile Application

2.3.1 AndroidManifest.xml

Every application must have an AndroidManifest.xml file in its root directory.
The manifest file presents essential information about the app to the Android
system: information the system must have before it can run any of the app's code.
In the Gambare Editor mobile application, we added permissions to read the
gallery, use the camera, and write new image files to the gallery.

Figure 2.1: The source code of AndroidManifest.xml

2.3.2 Layout

A layout defines the visual structure of a user interface (UI) for an activity
or app widget. The three layouts used in this application are described below:

1. splash.xml:
This is the layout for the splash screen UI. This is the first screen
the user sees when opening the application. The splash screen stays
for 3 seconds before moving to the next screen, the main screen. The
splash screen UI is shown in Figure 2.2.

Figure 2.2: The splash screen UI

2. activity_main.xml:
This is the layout for the main screen UI, the second screen viewed
by the user, after the splash screen. The main screen consists of two
buttons, a GALLERY button and a TAKE PHOTO button, as shown in
Figure 2.3. The GALLERY button allows the user to choose an image
from the phone's gallery, whereas the TAKE PHOTO button allows the
user to take a photo using the phone's camera. When an image is
chosen, that image appears at the centre of the screen and the START
EDIT button appears at the bottom, as shown in Figure 2.4.

3. activity_edit.xml:
This is the layout for the edit image screen UI. The edit image screen
consists of a save image button at the top and a horizontal scroll
view at the bottom. The horizontal scroll view contains eight image
buttons: sharpening, Gaussian blur, invert colour, greyscale, rotate,
flip, brightness and contrast. The chosen image is displayed at the
centre of the screen. The edit image screen UI is shown in Figure 2.5.

Figure 2.3: The main screen UI

Figure 2.4: The main screen UI when an image is chosen

Figure 2.5: The edit image screen UI

2.4 Functional Requirements of Gambare Editor Mobile Application

2.4.1 Open a Photo from the Phone's Gallery

This function allows the user to choose an image from the phone's gallery.
The figures below show the variables and functions required to choose an image
from the phone's gallery.

2.4.2 Capture an Image Using the Phone's Camera

This function is activated when the ImageButton is pressed. It activates the
camera by requesting the camera permission, as the source code shows.

2.4.3 Perform Sharpening on an Image

When the Sharpening icon is selected in the editing screen, the application
applies this source code to the image. The weight in the source code can be
adjusted by the user through the application.

2.4.4 Perform Gaussian Blur on an Image

When the Gaussian Blur icon is selected in the editing screen, the
application applies this source code to the image. The result is shown in
Figure 2.6.

Figure 2.6: The picture after Gaussian Blur has been applied

2.4.5 Perform Invert Colour on an Image

The source code below is applied to the picture when the Invert Colour
icon is selected in the editing screen. The result is shown in Figure 2.7.

Figure 2.7: The colour of the picture has been inverted

Figure 2.8: The picture has been converted to greyscale

2.4.6 Perform Greyscale on an Image

This is the source code to perform greyscale conversion on an image. The
result is shown in Figure 2.8.

2.4.7 Rotate (Left, Right) on an Image

2.4.8 Flip (Horizontal, Vertical) an Image

Figure 2.9: (from left) a picture flipped horizontally, flipped vertically,
rotated right, and rotated left

2.4.9 Change Brightness (Brighten, Darken) of an Image

In this source code, the user controls the value variable through a slide bar,
as shown in Figure 2.10. The intensity of each picture channel (alpha, red, green,
blue) is increased or decreased based on the value the user sets. The result is
shown in Figure 2.10.

2.4.10 Change Contrast (High Contrast, Low Contrast) of an Image

As with the change brightness code, the user controls the value variable
through a slide bar. The contrast of each picture channel (alpha, red, green,
blue) can be increased or decreased. The result is shown in Figure 2.11.
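The app itself is written in Java for Android, and its exact formula lives in the source-code figure. As an assumption-laden sketch of the usual per-channel contrast arithmetic only: values are scaled about mid-grey (128) by a user-controlled factor, then clamped.

```python
def change_contrast(value, factor):
    """Per-channel contrast adjustment (a common formula, assumed here,
    not necessarily the app's): scale the distance from mid-grey 128 by
    factor, then clamp. factor > 1 raises contrast; 0 < factor < 1
    lowers it."""
    return max(0, min(255, round((value - 128) * factor + 128)))

print(change_contrast(200, 2.0))   # 255 (272 before clamping)
print(change_contrast(60, 2.0))    # 0   (-8 before clamping)
print(change_contrast(200, 0.5))   # 164
```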

Figure 2.10: The normal brightness (left) is controlled and increased (right)

Figure 2.11: The contrast of the picture is increased
