IMAGE PROCESSING SUITE

Submitted in partial fulfillment of the requirements


For the award of the degree of

Bachelor of Technology
In
Computer Science
(2007-2011)

Guide Name:
Mrs. Tanvi

Submitted by:
Ekta Agarwal (083)
Naina Verma (064)
Gaurav Jain ()
Mayank Kimothi ()

CERTIFICATE

This is to certify that the project report entitled Image Processing Suite done by Ekta
Agarwal, Gaurav Jain, Naina Verma and Mayank Kimothi is an authentic work carried
out by them at Northern India Engineering College under my guidance. The matter
embodied in this project work has not been submitted earlier for the award of any
degree or diploma to the best of my knowledge and belief.

Date: 05/05/2011

Signature of the guide:


(Mrs. Tanvi )
CS Department (Senior
Faculty)
NIEC

ACKNOWLEDGEMENT

We would like to give our hearty thanks to Mr. Saurabh Gupta, HOD of
Computer Science.
We are also very grateful to all the teachers who have always helped us enhance our
software skills and given us their precious time.
We are also very grateful to our friends, without whose support we could never have
completed this project.
We would also like to thank our institution and our family members, without whom this
project would have been a distant reality. We also extend our thanks to our well-wishers.
With immense pleasure, we present this project report to Northern India
Engineering College, Delhi. Developing this major project has been an enriching
experience and an opportunity to enhance our skills, which would not have been
possible without the goodwill and support of all the people around us. We would like to
express our sincere thanks to all those who helped us during our project
completion.
Words are insufficient to express our gratitude towards Mrs. Tanvi, our mentor for this
project.

ABSTRACT

Image Processing Suite is an image editor that offers all the standard editing and paint
tools, as well as image layers and several other features that are usually not found in
free image editors. The program also includes screen capture tools, an image browser,
photo frames, channel adjustments and more. The product belongs to the same family
of image processing software as Adobe Photoshop and CorelDRAW.
The project is an effort to build a Photoshop-like application on a smaller scale, with an
easy-to-use user interface and an easy-to-understand user manual that helps the user
play with image pixels, colors and other parameters. The captured image can be loaded
into the internal image editor for further editing, saved as an image file (JPG, PNG, GIF,
BMP) or automatically uploaded to your FTP server (the image is uploaded and its
URL copied to the clipboard).
The objective of this project is to create desktop-based software for general PC users
and imaging professionals to play with the parameters of an image by using the following
features (the features are subject to change):

Upload an image

Save an image

Sharpening

Edge Detection

Contrast Enhancement

Negative

Grayscale

Sketching

Glass View

Darkening

Flipping

Embossing

Blurring

Histogram Equalization

User Friendly Manual to understand the functionality of each feature.

The above-mentioned features will be discussed in detail in the following sections.

TABLE OF CONTENTS
1. INTRODUCTION ........................................................... 1
   1.1 Purpose ............................................................ 1
   1.2 Objective .......................................................... 1
   1.3 Special Features ................................................... 2
   1.4 Product Scope ...................................................... 5
   1.5 Assumptions and Dependencies ....................................... 7
2. SOFTWARE DESIGN METHODOLOGY ........................................... 10
   2.1 Object Model ...................................................... 10
   2.2 Use Case .......................................................... 11
   2.3 Class Association ................................................. 16
3. FUNCTIONAL REQUIREMENTS ............................................... 17
   3.1 Software Requirements ............................................. 17
   3.2 Hardware Requirements ............................................. 25
4. GRAPHICAL USER INTERFACE DESIGN ....................................... 26
   4.1 The Main Window ................................................... 26
   4.2 Upload an Image ................................................... 27
   4.3 Save an Image ..................................................... 27
   4.4 Sharpening ........................................................ 27
   4.5 Edge Detection .................................................... 28
   4.6 Contrast Enhancement .............................................. 29
   4.7 Grayscale ......................................................... 30
   4.8 Sketching ......................................................... 31
   4.9 Glass View ........................................................ 31
   4.10 Darkening ........................................................ 32
   4.11 Flipping ......................................................... 32
   4.12 Embossing ........................................................ 33
   4.13 Blurring ......................................................... 34
   4.14 Histogram Equalization ........................................... 35
   4.15 Negative ......................................................... 37
5. CODING ................................................................ 39
6. OTHER FUNCTIONAL REQUIREMENTS ......................................... 71
7. CONCLUSION ............................................................ 73
8. BIBLIOGRAPHY .......................................................... 74

TABLE OF FIGURES
1. INTRODUCTION
   1.1 Convolution Kernel Filter .......................................... 2
   1.2 Convolution Kernel Matrix .......................................... 3
   1.3 Edge Detection Filter .............................................. 4
   1.4 Mean Filter ........................................................ 5
   1.5 RGB Color Space .................................................... 9
2. SOFTWARE DESIGN METHODOLOGY
   2.1 Use Case Diagram .................................................. 12
   2.2 Class Diagram ..................................................... 14
3. FUNCTIONAL REQUIREMENTS
   3.1 Java Architecture ................................................. 19
   3.2 Java Virtual Machine .............................................. 19
   3.3 Metal Motif Windows ............................................... 21
4. GRAPHICAL USER INTERFACE DESIGN
   4.1 The Main Window ................................................... 26
   4.2 Sharpening ........................................................ 27
   4.3 Edge Detection .................................................... 28
   4.4 Contrast Enhancement .............................................. 29
   4.5 Grayscale ......................................................... 30
   4.6 Sketching ......................................................... 31
   4.7 Glass View ........................................................ 31
   4.8 Darkening ......................................................... 32
   4.9 Flipping .......................................................... 32
   4.10 Embossing ........................................................ 33
   4.11 Blurring ......................................................... 34
   4.12 Histogram Graph .................................................. 36
   4.13 Histogram Equalization ........................................... 36
   4.14 Negative ......................................................... 37

Chapter-1
INTRODUCTION

1.1 Purpose
The purpose of this SRS document is to provide a detailed overview of our software
product, its parameters and goals. This document describes the project's target audience
and its user interface, hardware and software requirements.

1.2 Objectives

To explore and implement a basic image-processing program, with the aim of
providing the user with a basic knowledge of the fundamental techniques of
image filtering.

To provide the user with an easy-to-use graphical user interface (GUI) with
which the user can filter images using ready-loaded filters or custom filters
created by the user.

To gain experience in the Java programming language.

To create the project as an applet for use on the web, so users can log on to our
home page and use this program.

Image processing is a highly processor-intensive activity. There are thousands of
calculations to be completed when filtering an image with a simple 3x3 convolution
matrix. There are many commonly available image-processing libraries which
implement many of the functions within this project, such as DirectX, WinG and Intel's
own Image Processing Library (IPL), which uses hardware functions on the Intel CPUs.

This document is intended for professionals or general PC users with a keen interest
in image processing and photography, who would like to change certain parameters
and the composition of their images.

1.3 Special Features


1.3.1 Filtering
There are two main types of image filtering, linear and non-linear. The method
implemented in our project is the linear method, which is achieved by the use of a
convolution kernel. Image filtering is generally used to clean up or enhance images,
for example by removing the effects of noise. There are many different filters available,
from edge detection and enhancement to blur and emboss filters. Each of the filters is
implemented as a convolution mask or convolution kernel.

O1  O2  O3
O8  O9  O4
O7  O6  O5

Figure 1.1 Convolution Kernel

The above mask is a simple 3x3 mask/kernel, where O9 is the pixel in question. This
project will be using different filters, from 3x3 kernels (above) to larger kernels such as
9x9 and 11x11.
Most off-the-shelf image packages like Paint Shop Pro do not support the larger
masks due to the massive number of calculations required to check/correct each pixel.
The larger the mask, the larger the number of calculations, and ultimately the longer it
takes to filter an image.
1.3.2 Standard Convolution Algorithm
The basic operation of a convolution kernel filtering algorithm is based around a kernel
or NxN matrix where N is an odd number. The matrix represents the filter coefficients,
which will be applied to the image. The matrix is shifted over the image a pixel at a
time and the middle value of the matrix is calculated during each iteration. This
involves getting the pixel value in the centre of the matrix as well as, in the case of a
3x3 kernel, the values of its eight neighboring pixels. Each pixel is multiplied by its
value or weight in the kernel, and then these values are added together. This result is
divided by some divisor and finally a biasing factor is added. The final result becomes
the new value of the centre pixel, and the matrix slides over to the next pixel and the
process is repeated.
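The sliding-window procedure just described can be sketched in Java as follows. This is an illustrative sketch, not the project's actual code: the names (Convolution, apply) and the copy-the-border edge handling are assumptions.

```java
// Sketch of the standard convolution algorithm described above, applied to a
// grayscale image stored as a 2D int array of 0..255 values.
public class Convolution {
    // Applies an NxN kernel (N odd). Each result is divided by `divisor`
    // and offset by `bias`, as in the text. Border pixels, for which the
    // kernel would fall off the image, are simply copied unchanged.
    public static int[][] apply(int[][] image, double[][] kernel,
                                double divisor, double bias) {
        int h = image.length, w = image[0].length;
        int n = kernel.length, r = n / 2; // kernel radius
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (y < r || y >= h - r || x < r || x >= w - r) {
                    out[y][x] = image[y][x]; // border: copy as-is
                    continue;
                }
                double sum = 0;
                for (int ky = 0; ky < n; ky++)
                    for (int kx = 0; kx < n; kx++)
                        sum += image[y + ky - r][x + kx - r] * kernel[ky][kx];
                int v = (int) Math.round(sum / divisor + bias);
                out[y][x] = Math.min(255, Math.max(0, v)); // clamp to 0..255
            }
        return out;
    }
}
```

With an identity kernel (centre 1, rest 0) the image is unchanged; with an all-ones kernel and divisor 9 the centre pixel becomes the neighbourhood average.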
Figure 1.2 Convolution Kernel (3x3 kernel over the image pixels)


The convolution kernel passes across the image from left to right. It then moves down
to the next row of pixels and continues until the full image has been processed.

1.3.3 Common Filters

Edge Detection

Psychophysical experiments indicate that a photograph or visual signal with crisper
edges is often more subjectively pleasing than an exact photometric reproduction, in
which edges are not as sharp to the eye. By applying a discrete convolution filter such
as one of the following to an image, edges become much clearer.

H = |  0 -1  0 |      H = | -1 -1 -1 |      H = | -1 -2 -1 |
    | -1  4 -1 |          | -1  8 -1 |          | -2 12 -2 |
    |  0 -1  0 |          | -1 -1 -1 |          | -1 -2 -1 |

Figure 1.3 Edge Detection Filter

Mean Filter

The average filter is also known as a mean, box or blur filter. The most common use
for the average filter is to reduce noise in an image. The averaging filter does
exactly as its name suggests: it moves over each pixel in the image and assigns the
middle pixel in the kernel the average value of its eight adjacent neighbors. The
kernel coefficients can be seen below. There are, however, a few drawbacks to using
this filter. The first is that edges within the image may become blurred because
of the changing pixel values in that area. Also, if one of the neighboring values is
highly unrepresentative of the area, the computed average won't be a true
average.

Figure 1.4 Mean Filter
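The mean filter above maps directly onto Java 2D's built-in ConvolveOp class, which the project's platform provides. A minimal sketch (the class name MeanFilter is illustrative): all nine coefficients are 1/9, so the centre pixel becomes the average of itself and its eight neighbours.

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class MeanFilter {
    public static BufferedImage blur(BufferedImage src) {
        float ninth = 1f / 9f;
        float[] coeffs = {
            ninth, ninth, ninth,
            ninth, ninth, ninth,
            ninth, ninth, ninth
        };
        Kernel kernel = new Kernel(3, 3, coeffs);
        // EDGE_NO_OP leaves border pixels unchanged instead of zeroing them.
        ConvolveOp op = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }
}
```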

1.4 Product Scope

Our project goal is to create an image-processing suite that will enable the user
to enhance or lessen the quality of an image by changing certain parameters, or
adding new ones.

Java is used as the basic programming platform for the software. Java provides a vast
library of tools for developing the required user interface of our software, and also the
various imaging algorithms necessary.

Java has become increasingly popular over the last few years due to its ease of
programming, its cross-platform capabilities and its ability to be used on the Internet as
an applet in a web page. One of the downfalls of Java is that it is much slower than
native code. But as time goes by, Java is becoming faster and faster, and now isn't far
behind the speed of C or C++.
Key benefits of Java:

Platform-independent code (Write once, run anywhere)

Object oriented

Ease of coding (no pointers or standalone functions)

We decided we could gain more knowledge by learning Java than we would if we
created the project in any other language. Using Java, we could also experiment with
the ability of platform-independent code, so that this package would run on multiple
operating systems such as Windows NT/95/98 and Linux, with the possibility of creating
an applet for use on the web.
Many people believe that in the near future, all software products such as word
processors will be run centrally through a web site and not installed on the local client
computer. When people need to use these products, they simply log onto the web site
and start to work away. There are many advantages for both parties. On the software
producer's side it means it is easier to create software releases, updates and patches, as
the bulk of the program will be centralized on the server.
The client/user can benefit by using a cheaper computer. Payment can be based on an
hourly/monthly charge. Users will be able to try out a product without purchasing it. If
the user doesn't like the product, they don't have to use or pay for it. With the ability to
create our project as an applet for use over the web, people can log onto our homepage
and use the program without having to download and install it onto their computers.
A large amount of time will be spent becoming familiar with programming in the Java
programming language, and also researching and analyzing the methods by which this
project could be implemented. Computer graphics is a highly processor-intensive
activity. As will become clear in our discussion of filtering techniques, even a simple
algorithm can require so many computations that it often takes quite some time.

1.5 Assumptions and Dependencies


This being a software development project using new technologies, a sizeable proportion
of time was spent researching strategies for implementation. The project was therefore
developed using Java and UML. Time spent in analysis and development of a logical
model can highlight many pitfalls prior to coding, and will therefore reward us with an
easily maintainable, modularized application which lends itself to upgrades more easily
in the future. UML gives these benefits to application development, and this
was one of the main reasons it was chosen. UML gives us object models of how the
application should work before coding takes place, hence removing design problems
before we start coding.
1.5.1 Image Filtering

Signals that are transferred over almost all forms of communication can be open to
noise; an image may be subject to this noise and interference from several sources.
These noise effects can be minimized by statistical filtering techniques or by application
of ad hoc spatial processing techniques.
Image noise arising from noisy sensors or channel transmission errors usually appears
as discrete isolated pixel variations that are not spatially correlated. Pixels that are in
error often appear markedly different from their neighbors. Many noise-cleaning
algorithms make use of this fact. By examining a pixel and checking to see if the
brightness of this pixel is greater than the average brightness of its immediate neighbors
by some threshold level, we can see if this pixel is valid or if it may be noise. If the
pixel is noise then we replace it with the average of the neighbors. Noisy images
have a higher spatial frequency spectrum than normal images. Hence a simple low-pass
filter can smoothen out the noise.
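The neighbourhood-averaging noise check just described can be sketched as follows; the grayscale 2D array representation, the class name NoiseClean, and the copy-the-border handling are assumptions for illustration, not the project's code.

```java
// If a pixel differs from the mean of its eight neighbours by more than a
// threshold, treat it as noise and replace it with that mean, as described
// in the text above. Border pixels are copied unchanged.
public class NoiseClean {
    public static int[][] clean(int[][] img, int threshold) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                out[y][x] = img[y][x];
                if (y == 0 || y == h - 1 || x == 0 || x == w - 1) continue;
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if (dy != 0 || dx != 0) sum += img[y + dy][x + dx];
                int mean = sum / 8; // average of the eight neighbours
                if (Math.abs(img[y][x] - mean) > threshold)
                    out[y][x] = mean; // outlier: replace with the mean
            }
        return out;
    }
}
```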
1.5.2 Color Space
A color space is a mathematical representation of a set of color. I will be dealing with
two fundamental color models, RGB (Red, Green, and Blue) used in color computer
graphics and color television and YUV used in broadcast and television. Color spaces
can be converted between each other, but video quality is lost with each conversion.
Care should be taken to minimize the number of color space conversions used in the
video encoding and decoding path, so as to loss as little as possible quality.

1.5.3 RGB Color Space


Three fundamental color models are RGB (used in color computer graphics and color
television); YIQ, YUV, or YCbCr (used in broadcast and television systems); and HSV
(Hue Saturation Value). All of the color spaces in common use can be derived from the
RGB information supplied by devices like cameras and scanners.
When light refracts through a prism, its color components separate to create a rainbow.
This rainbow is a spectrum particular to white light and the color range that the human
eye can perceive. The colors proceed across the spectrum in the order red, orange,
yellow, green, blue, indigo and violet, giving the acronym ROYGBIV. Of these colors,
the primaries are red, green and blue, and the color model for light is referred to as the
RGB model.
The red, green, and blue (RGB) color space is widely used throughout computer
graphics and imaging. Red, green and blue are three primary additive colors (individual
components that are added together to form a desired color) and are represented by a
three-dimensional, Cartesian co-ordinate system (the color cube).
A diagonal from one corner of the cube (Black) to the other (White) represents various
grey levels. The RGB color space is the most prevalent choice for graphic frame buffers
because color CRTs use red, green, and blue phosphors to create the desired color.
Therefore, the choice of the RGB color space for a graphics frame buffer simplifies the
architecture and design of the system. Also, a system that is designed using the RGB
color space can take advantage of a large number of existing software routines, since
this color space has been around for a number of years.

However, RGB is not very efficient when dealing with "real-world" images. All three
RGB components need to be of equal bandwidth to generate any color within the RGB
color cube. The result of this is a frame buffer that has the same pixel depth and display
resolution for each RGB component. Also, processing an image in the RGB color space
is not the most efficient method. For example, to modify the intensity of a given pixel,
the three RGB values must be read from the frame buffer, the intensity or color
calculated, the desired modifications performed, and the new RGB values calculated
and written back to the frame buffer.
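The read-modify-write cycle just described looks like this for a single packed ARGB pixel; the scale() helper, its name and its clamping behaviour are illustrative, not part of any standard API.

```java
// Brighten or darken one packed ARGB pixel by scaling each 8-bit channel,
// illustrating the read / calculate / write-back cycle described above.
public class RgbPixel {
    public static int scale(int argb, double factor) {
        int a = (argb >>> 24) & 0xFF;                         // alpha untouched
        int r = (int) Math.min(255, ((argb >> 16) & 0xFF) * factor);
        int g = (int) Math.min(255, ((argb >> 8) & 0xFF) * factor);
        int b = (int) Math.min(255, (argb & 0xFF) * factor);
        return (a << 24) | (r << 16) | (g << 8) | b;          // repack
    }
}
```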

Figure 1.5 RGB Color Cube.

Chapter-2
SOFTWARE DESIGN METHODOLOGIES
Before starting to code the project, a decision was made to use OOD (Object Oriented
Design) to design the main points of the project, as the language that was to be used was
an Object-Oriented language.


Object-Oriented Design (OOD) practice in software engineering is essential for keeping
down the cost of the ever-increasing amount of software being developed now, and that
will have to be maintained in the future.
The project was designed by using different parts of the OMT (Object Modeling Technique)
methodology. OMT is a methodology used by many software engineers to design the
interaction between different parts of a project before starting the work of creating the
project. Object Oriented Analysis (OOA) is concerned with generating a problem
statement and investigating what needs to be done, while Object Oriented Modeling
(OOM) addresses the needs of the analysis model and is based on three different views
of the system: the object, dynamic and functional models.
Many problems can be avoided by using good design practices before the software
development/coding stage.

2.1 Object Model


The Object Model shows the static data structure of a system. The model describes
object classes and their relationships to each other. From the general description of the
project problem statement above, a list of objects and classes can be derived.
The list of important classes is shown below:

o Image Processing Suite (creates the user interface)

o Picture (contains methods to load and display the image)

o Filter (responsible for filtering images and various processes)

o Histogram (generates the histogram of the image)

o Help (displays the help options to the user)


2.2 Use Case

Figure 2.1 Use-Case Diagram


2.2.1 User Opens Image (Buffered Image RGB color model)
o Actors

User

o Preconditions


The program is running

The image exists on a hard drive or network drive that the user
has access and read permission to.

o Trigger

The image is loaded from the file and displayed on the screen. An
image object is created.

o Actions

A BufferedImage is created with a string (path) of the image that is
to be opened.

The RGB color model is loaded and the program creates a
SplitImageComponent that will display the image.

This frame is now set as being the latest frame to be opened by the
desktop manager.

o Post conditions

The user can now see the image on the screen.

The user can now use different filters on the image.

The user can now create a histogram of the image.

2.2.2 Display a Histogram


o Actors

User

o Preconditions


A Buffered image in RGB color model is open on the desktop.

o Trigger

The Histogram option is selected from the menu bar.

o Actions

The user is asked which image he/she wants the histogram to be


created for, either the original image or the filtered one.

The Histogram is now displayed on the desktop.

o Post conditions

If there is an image on the screen, then the histogram for this


image is now displayed on the desktop.

2.2.3 Filter an image


o Actors

User

o Preconditions

The program is running and the user has an image open and
displayed on the screen.

The user has selected which filter to apply.

The user has checked the accumulated box on or off, depending


on whether the filter is to be accumulated with the previously
filtered image.

o Trigger

User clicks on the selected filter button


o Actions

The filter is applied to either the previously filtered image or the
original image, depending on whether the accumulated checkbox is
selected. If there is no previously filtered image, then the original
image is filtered.

o Post conditions

The filtered image(s) will now be loaded in the output frame. The
user can click on the screen to display the filtered image side by side
with the original image to compare the difference.

2.2.4 Distortion filters


o Actors

User

o Preconditions

The program is running.

o Trigger

User clicks on the distortion filters menu item from the menu bar.

o Actions

The image is distorted by the user as selected.

o Post conditions

The user can see the changes in the initial image on output frame.


2.2.5 Save an Image


o Actors

User

o Preconditions

The program is running.

An image is present in the output frame.

o Trigger

User clicks on the save icon in the file menu of the internal
frame.

o Actions

The image is assigned a name by the user.

o Post conditions

The image is saved at the desired location selected by the user.

If there is no image in the output frame, an error message is displayed.


2.3 Class Associations

Figure 2.2 Class Diagram


Chapter-3
FUNCTIONAL REQUIREMENTS
3.1 Software Requirements
Java is a programming language expressly designed for use in the distributed
environment of the Internet. It was designed to have the "look and feel" of the C++
language, but it is simpler to use than C++ and enforces a completely object-oriented
view of programming. Java can be used to create complete applications that may run on
a single computer or be distributed among servers and clients in a network. It can also
be used to build small application modules or applets for use as part of a Web page.
Applets make it possible for a Web page user to interact with the page.
3.1.1 Java History and Development
Java Milestones
1990: Programmer Patrick Naughton starts "Project Green" at Sun Labs.
1991: Programmer James Gosling created new language ("Oak"), based on C++.
Mid 1993: Release of Mosaic WWW browser from NCSA.
1994: WWW Rise in Popularity. Oak renamed "Java", prototype Java WWW browser.
January 1995: Hot Java/Java Development Kit released for Solaris.
Summer 1995: Linux and Windows 95 ports of Java available.
Autumn 1995: Java Beta 1 released. Java applet support announced for Netscape
Navigator 2.0.
December 1995: Sun/Netscape announces JavaScript. Microsoft and IBM announce
intention to license Java technology.
23 January 1996: Java 1.0 released
James Gosling, an employee of Sun, started a new project. He created a new language
based on C++, but which eliminated many of that language's shortcomings. This
language was designed with many goals in mind. It was to be fast, portable and safe to
use for embedded systems. The name of this language was Oak (later renamed Java due
to a trademark clash). In 1993 the Mosaic browser was released and started one of the
biggest global infatuations, the World-Wide-Web (WWW). Later that same year, a web
browser was written in Oak. With the release of this browser, Oak's potential for Internet
programming became apparent. It took C++ 10 years to become as popular as Java
became in 20 months.
Java differs from C++ in several main ways:
o No functions (being an entirely object-oriented language)
o No pointers
o No global methods or variables
o No operator overloading
o No multiple inheritance
o No pre-processor
o No header files
When you compile a Java program, you don't get an .exe file, you get a class file. This
class file is highly portable binary code. Pure Java binaries are dependent only on the


Java Virtual Machine (JVM). Once this interpreter machine has been ported to the target
architecture, the Java binaries will run unmodified. The JVM is a software interpreter
that presents (through either hardware or software) a set of defined features upon which
Java code relies.
[Diagram: Java bytecode and the Java API run on top of the Java Virtual Machine,
which is hosted by a web browser or the operating system and hardware, alongside
native methods.]
Figure 3.1 Java Architecture.

[Diagram: inside the JVM, a class loader, bytecode verifier and libraries feed the
interpreter/JIT compiler and security manager, which run on the host platform.]
Figure 3.2 Java Virtual Machine.


3.1.2 What is Swing and the JFC
The Abstract Window Toolkit (AWT) was the original toolkit that was packaged with
Java for developing User Interfaces (UIs). This toolkit was not originally designed to be
anything like a high-powered user interface toolkit to be used by more than half a
million developers. It was designed to support the development of simple user
interfaces for simple applets used in web pages. A lot of the 'normal' components found
today in almost all UI toolkits weren't included in the AWT, such as scroll panes,
printing support, keyboard navigation and popup menus. The AWT was badly designed
and was based on a peer-based architecture. Peers are native user interface components
delegated to by wafer-thin AWT objects, which left the AWT classes a mere shell
around somewhat complex native peers. This design allowed the Java creators to turn
out components in record time (six weeks). These components are called heavyweight
components, as they are associated with a native peer component and are rendered in
their own native window.
Realizing that if something was not done with the UI toolkit the Java community was
likely to split over a standard user interface toolkit, JavaSoft struck a deal with Netscape,
which had been working on a set of lightweight classes based on concepts from
NEXTSTEP's user interface toolkits. This deal brought into place the Java Foundation
Classes (JFC), which include the Swing toolkit: a collection of over 40 lightweight
components, four times the number of components provided by the AWT. In addition
to providing lightweight replacements for the AWT's heavyweights, Swing also provides
a wealth of additional components to facilitate the development of graphical user
interfaces.
Lightweight components do not have native peers and aren't rendered in their own
heavyweight container windows. Because these lightweight components are rendered in
their container's window and not a window of their own, lightweight components must
ultimately be contained in a heavyweight container. As a result, Swing frames, applets
and dialogs must be heavyweight to provide a window into which lightweight Swing
components can draw.
Swing supports the concept of a pluggable look and feel. By modifying the look and
feel of an application, users can obtain the Windows, Motif, Macintosh or Metal
(Java's own) look and feel on platforms other than the one that look and feel was
designed for.

Figure 3.3 Two different look and feels of Java (Metal, Motif, Windows)
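Switching the pluggable look and feel is a single call to Swing's UIManager. A minimal sketch: the cross-platform ("Metal") look and feel ships with every JRE, so this particular call is safe on any platform, while system-specific ones may not be installed.

```java
import javax.swing.UIManager;

public class LookAndFeelDemo {
    public static void main(String[] args) {
        try {
            // Select Java's own cross-platform (Metal) look and feel.
            UIManager.setLookAndFeel(
                UIManager.getCrossPlatformLookAndFeelClassName());
            System.out.println(UIManager.getLookAndFeel().getName());
        } catch (Exception e) {
            // Thrown if the requested look and feel is not available.
            e.printStackTrace();
        }
    }
}
```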
The downside to Swing is its speed. Due to the fact that the components are lightweight
and are entirely written in Java code, they are rendered much more slowly than their
AWT counterparts. Applications that use Swing can run notably slower than if they used
the AWT toolkit. However, this does not pose any big problems for today's standard
desktop computer of 600 MHz and up.
3.1.3 What is Java 2D
Also included in the Java Foundation Classes (JFC) is the Java 2D API, used for
developing and manipulating graphics.
The Java 2D Application Programming Interface (API) is a set of classes that can be
used to create high quality graphics. It includes features like geometric transformation,


alpha compositing, bi-directional text layout, image processing, antialiasing and many
more classes.
Before Java 2D, the AWT's graphics toolkit had some serious limitations:
o Few fonts were supported.
o Rotation and scaling weren't included.
o All lines were drawn with a single-pixel thickness.
o Gradients, special fills and patterns weren't included.
Along with the other Java Media APIs, the Java 2D API was developed to empower
developers to create applications that incorporate advanced user interfaces. The design
goals for the Java 2D API include:
o Supporting high-quality, platform-independent graphics, text, and images
o Delivering a simple and compact 2D graphics and imaging solution
o Leveraging Java's "Write Once, Run Anywhere" paradigm in order to
provide consistent access to 2D graphics across major Java platforms
o Complementing other Java technologies, thus providing an integrated
media solution for Java

3.1.4 Java 2D Features

Graphics
o Antialiased rendering
o Bezier paths
o Transforms
o Compositing
o Arbitrary fill styles
o Stroking parameters for lines and curves
o Transparency

Text
o Extended font support
o Advanced text layout

Images
o Flexible in-memory image layouts
o Extended imaging operations, such as convolution, lookup tables, and affine
  transformations

Devices
o Hooks for supporting arbitrary graphics devices such as printers and screens

Color Management
o ICC profile support
o Color conversion from different color spaces
o Arbitrary color spaces

Figure 3.4 Java 2D classes

Benefits for Developers

The Java 2D API provides many benefits to developers who want to incorporate
graphics, text, and images into their applications and applets. In other words, the Java
2D API benefits virtually all Java developers. By enabling the incorporation of
sophisticated graphics and text, the Java 2D API makes it possible to create Java
programs that provide a richer end-user experience. With the Java 2D API, you have the
necessary support to create real-world applications that meet the expectations of today's
users for font, graphics, and image capabilities.


The Java 2D API is part of a set of class libraries that are designed to enable you to
develop full-featured Java programs. With these libraries, developers have the essential
tools to build applications that meet market needs. They make it possible to reach a
broad audience running applications on any Java enabled platform.
3.1.5 Images
The Java 2D API provides a full range of features for handling images by
supplementing the image-handling classes in java.awt and java.awt.image with several
new classes, including: BufferedImage, Tile, Channel, ComponentColorModel and
ColorSpace.
These classes give us greater control over images. They allow us to create images in
color spaces other than RGB and characterize colors for accurate reproduction. The Java
2D API BufferedImage class allows us to specify exactly how pixels are laid out in an
in-memory image.
Like all other graphic elements, images are altered by the Transform object associated
with the Graphics2D object when they are drawn. This means that images can be scaled,
rotated, skewed, or otherwise transformed just like text and paths. However, images
maintain their own color information, represented by a color model for interpreting
color data, rather than using the current color. Images can also be used as the rendering
target of a Graphics2D.
We make extensive use of BufferedImage, BufferedImageOp, Transform, ConvolveOp,
Kernel, Graphics2D, and LookupOp in our project, all of which are classes in the Java 2D
package.
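As an example of the LookupOp class mentioned above, the Negative feature listed in Chapter 1 can be expressed as a 256-entry lookup table mapping each channel value v to 255 - v. The class name NegativeFilter is illustrative, not the project's own.

```java
import java.awt.image.BufferedImage;
import java.awt.image.LookupOp;
import java.awt.image.ShortLookupTable;

public class NegativeFilter {
    public static BufferedImage invert(BufferedImage src) {
        short[] table = new short[256];
        for (int i = 0; i < 256; i++)
            table[i] = (short) (255 - i); // invert each channel value
        // A single table is applied to every band (R, G and B).
        LookupOp op = new LookupOp(new ShortLookupTable(0, table), null);
        return op.filter(src, null);
    }
}
```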
3.1.6 Naming Conventions


The Java language has standard naming conventions. These conventions exist so that
people can read code more easily and understand it more quickly than they could if the
conventions were not in place. Class names start with a capital letter, and each
subsequent word in the name is also capitalized: the class for a buffered image is named
BufferedImage, the file input stream is named FileInputStream, and so on. Object names
start with a lowercase letter, with the first letter of each following word capitalized,
so an object named "bits per pixel" becomes bitsPerPixel. Packages are groups of classes
and are written entirely in lowercase. So java.awt.image is a package and
java.awt.Image is a class. I have used these naming conventions throughout my code.
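As a short, hypothetical illustration of these conventions (the class and field names below are invented for the example, not taken from the project's code):

```java
// Hypothetical class illustrating the standard Java naming conventions:
// class names in CapitalizedWords, fields and methods in camelCase.
class PixelCounter {
    private int bitsPerPixel;          // field: "bits per pixel" -> bitsPerPixel

    PixelCounter(int bitsPerPixel) {
        this.bitsPerPixel = bitsPerPixel;
    }

    int getBitsPerPixel() {            // methods follow the same camelCase rule
        return bitsPerPixel;
    }
}
```

A package holding this class would be written entirely in lowercase, e.g. `imagesuite.filters`.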

3.2 Hardware Requirements


The user will use the mouse as a selecting device, and the keyboard will be used to name
the image that is to be uploaded. There are no communicating interfaces.

Chapter-4
GRAPHICAL USER INTERFACE DESIGN
This chapter discusses the user interface design of our software.
4.1 The Main Window


4.1.1 Description
On running the software, the main window appears as shown in the figure below. There
are two basic features of the window:

The menu bar

An area divided into two portions: the input image and the output image.

4.1.2 Illustration

Figure 4.1 The Main Window
4.2Upload an Image
4.2.1 Description
The user can upload an image in any compatible format for further processing.
4.2.2 Stimulus/Response Sequences
The feature can be found under the File menu. On clicking the Open File item, a new
window opens for the user to specify the path of the image (the location of the
image) to be uploaded.

4.3 Saving an Image


4.3.1 Description
The processed image can be saved to any directory on the PC. The image will be
saved in the same format as it was uploaded.
4.3.2 Stimulus/Response Sequences
The feature can be found under the File menu. On clicking the Save As item, a new
window opens for the user to set the path where the image is to be saved.
4.4 Image Sharpening


4.4.1 Description
The main aim in image sharpening is to highlight fine detail in the image, or to
enhance detail that has been blurred (perhaps due to noise or other effects, such as
motion).
4.4.2 Illustration

Input Image

Output Image
Figure 4.2 Image Sharpening

4.5 Edge Detection


4.5.1 Description

The purpose of edge detection is to highlight the edges of different objects in
an image, where color changes and shadows produce rapid changes in the color
and/or intensity of the image. Other circumstances, such as the edges of letters
on a sign, will also trigger edge detection on the basis of the strong color
contrast between the letters and the background.
Our edge detection feature detects the pixel changes in a picture and displays the
picture's edges by applying a suitable algorithm to it.

4.5.2 Illustration

Input Image

Output Image
Figure 4.3 Edge Detection

4.6 Contrast Enhancement


4.6.1 Description
The filter changes the brightness and contrast of an image. Contrast enhancement
involves changing the original values so that more of the available range is used,
thereby increasing the contrast between targets and their backgrounds.
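One simple way to realize this idea (a sketch only, not necessarily the exact algorithm used by the suite's filter) is a linear contrast stretch, which maps the darkest value present in a channel to 0 and the brightest to 255:

```java
// Minimal linear contrast stretch for one 8-bit channel (illustrative sketch).
class ContrastStretch {
    static int[] stretch(int[] values) {
        int min = 255, max = 0;
        for (int v : values) {                   // find the range actually used
            if (v < min) min = v;
            if (v > max) max = v;
        }
        if (max == min) return values.clone();   // flat image: nothing to stretch
        int[] out = new int[values.length];
        for (int i = 0; i < values.length; i++) {
            // remap [min, max] onto the full [0, 255] range
            out[i] = (values[i] - min) * 255 / (max - min);
        }
        return out;
    }
}
```

For example, a channel whose values lie only between 100 and 200 would be spread over the full 0 to 255 range.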
4.6.2 Illustration

Input Image

Output Image

Figure 4.4 Contrast Enhancement

4.7 Image Grayscale
4.7.1 Description

The filter converts an image to a grayscale image. To do this it finds the
brightness of each pixel and sets the red, green and blue components of the output
pixel to that brightness value.

The weighting used by the grayscale filter is

Luma = (77 Red + 151 Green + 28 Blue) / 256
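These weights are a standard fixed-point approximation of the common luma coefficients (0.299, 0.587, 0.114) scaled by 256, so in code the weighted sum is divided by 256 (a right shift by 8) to bring the result back into the 0-255 range. A minimal sketch:

```java
// Integer luma computation using the 77/151/28 weights quoted above.
// 77 + 151 + 28 = 256, so shifting right by 8 normalizes the result.
class Luma {
    static int luma(int r, int g, int b) {
        return (77 * r + 151 * g + 28 * b) >> 8;
    }
}
```

White (255, 255, 255) maps to 255 and black maps to 0, as expected.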
4.7.2 Illustration

Input Image

Output Image
Figure 4.5 Gray Scale

4.8 Image Sketching


4.8.1 Description
Sketching is the loose drawing of the main features of an object. The software produces
the same effect as drawing with a pencil.
4.8.2 Illustration

Input Image

Output Image
Figure 4.6 Image Sketching
4.9 Image Glass View


4.9.1 Description
This feature gives the effect of the image being seen through glass.
4.9.2 Illustration

Input Image

Output Image
Figure 4.7 Glass View

4.10 Image Darkening


4.10.1 Description
This feature allows the user to increase or decrease the inherent brightness of the image.
4.10.2 Illustration

Input Image

Output Image
Figure 4.8 Darkening

4.11 Image Flipping


4.11.1 Description
This feature flips the image laterally as well as vertically.
4.11.2 Illustration (Flip laterally)

Input Image

Output Image
Figure 4.9 Flipping
4.12 Image Embossing

4.12.1 Description
Image embossing is a computer graphics technique in which each pixel of an image is
replaced either by a highlight or a shadow, depending on light/dark boundaries in the
original image. Low-contrast areas are replaced by a gray background.
4.12.2 Illustration

Input Image

Output Image
Figure 4.10 Embossing

4.13 Image Blurring


4.13.1 Description
Blurring is what happens when a camera is out of focus: the captured image appears
hazy. What should be seen as a sharp point gets smeared out, usually into a disc
shape. In image terms, this means that each pixel in the source image gets spread over
and mixed into surrounding pixels. Another way to look at this is that each pixel in
the destination image is made up of a mixture of surrounding pixels from the source
image.
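For a single channel, this mixing can be sketched as a 3x3 box blur, in which each destination pixel is the average of its neighbourhood (the `int[][]` channel array and the omitted edge handling are simplifications for this sketch; the program itself uses a ConvolveOp, shown in Chapter 5):

```java
// Average a pixel with its 3x3 neighbourhood (a simple box blur).
// Edge pixels are not handled here; real filters treat them separately.
class BoxBlur {
    static int blurredPixel(int[][] channel, int x, int y) {
        int sum = 0;
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                sum += channel[x + dx][y + dy];
        return sum / 9;                  // each of the 9 pixels weighted 1/9
    }
}
```

This is exactly the effect of convolving with a kernel whose nine entries are all 1/9.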
4.13.2 Illustration

Input Image

Output Image
Figure 4.11 Blurring

4.14 Histogram Equalization


4.14.1 Description

Histogram equalization is a technique for increasing the detail of an image that is
lacking in contrast. The technique changes the intensity levels in the image to make
the image conform to some desired histogram. Histogram equalization also helps the
quality of dithered images in the one-bit mode. Sometimes, this technique increases
the contrast too much.

By clicking on the histogram button in the toolbar, a dialog appears asking the user
whether to draw the histogram of the original image or of the filtered image. If the
YUV frame is open, only histograms for the selected channels (checked check boxes)
will be drawn. The user can open as many histogram windows as wanted, so histograms
of different images can be compared side by side.
There are four radio buttons (RGB, R, G and B) at the top of the histogram
window. By selecting different buttons, four different graphs can be drawn,
one for the RGB value and one for each of the R, G and B values.

Figure 4.12 Histogram Graph

The maximum count of pixels sharing a single color value is displayed in the top
left corner of the histogram window. This number gives the user an idea of the
scale of the histogram plot.

My program uses 32 bits to store each pixel of an image. The top byte is the value
of the alpha component, the next byte is the value of the red component, then the
green and finally the blue. So the number of possible RGB colors is greater than
16 million (24-bit). When I display a histogram of an image, I only look at 256
different values. This works fine when displaying the histogram of the R, G or B
channels. But when I wish to display the histogram of the combined RGB values, I
convert all the values from 24-bit down to 8-bit (256 colors) and then display
them. This explains why the x-axis runs over 256 values.
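Counting one channel into 256 bins can be sketched as follows (the `int[][]` input is a stand-in for the per-pixel channel values, which the program itself reads from a BufferedImage):

```java
// Count how often each value 0-255 occurs in one channel of an image.
class ChannelHistogram {
    static int[] histogram(int[][] channel) {
        int[] hist = new int[256];       // one bin per possible 8-bit value
        for (int[] row : channel)
            for (int v : row)
                hist[v]++;               // tally this pixel's channel value
        return hist;
    }
}
```

The equalization step then builds the cumulative distribution over these bins and uses it to remap each intensity, as the HistogramListener in Chapter 5 does.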
4.14.2 Illustration

Input Image

Output Image
Figure 4.13 Histogram Equalization

4.15 Image Negative
4.15.1 Description

This operation creates an effect that looks like a color negative in conventional
film. Note that applying this operation twice will restore the original image;
you are basically taking a negative of the negative.

To invert an image, we simply subtract each color component from 255. There are no
filter parameters.
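For a single packed RGB pixel, the inversion can be sketched as follows (assuming the standard 0xRRGGBB packing; this mirrors the bit arithmetic used by the NegativeListener in Chapter 5):

```java
// Invert a packed RGB pixel by subtracting each component from 255.
class Negative {
    static int invert(int rgb) {
        int r = 255 - ((rgb >> 16) & 0xff);   // red lives in bits 16-23
        int g = 255 - ((rgb >> 8) & 0xff);    // green lives in bits 8-15
        int b = 255 - (rgb & 0xff);           // blue lives in bits 0-7
        return (r << 16) | (g << 8) | b;      // repack into 0xRRGGBB
    }
}
```

Applying `invert` twice returns the original pixel, which is why taking the negative of a negative restores the image.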
4.15.2 Illustration

Input Image
Output Image
Figure 4.14 Negative
4.16 Help
4.16.1 Description
This feature lets the user know about the functionality of each feature in our project.
It will add to the knowledge of the user regarding the terminology of image
processing, its functions and techniques.
Chapter-5
CODING
5.1 Image Editor Interface
import java.awt.*;
import javax.swing.*;
import javax.swing.border.*;
import java.io.File;
import java.awt.event.*;
import java.awt.event.KeyEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
import java.awt.image.Kernel;
import java.awt.image.ConvolveOp;
import java.awt.image.BufferedImageOp;
@SuppressWarnings("serial")
public class ImageEditor extends JFrame
{
// Variables declaration
private JLabel jLabel4;
private JLabel jLabel5;
private JSplitPane jSplitPane2;
private JPanel contentPane;
private JFileChooser chooser = new JFileChooser();
private Picture pic = new Picture(256,256);
private Picture pic1 = new Picture(256,256);
private Picture pic3 = new Picture(256,256);
// End of variables declaration
public ImageEditor()
{
super();
initializeComponent();
this.setJMenuBar(createMenuBar());
this.setVisible(true);
}
private void initializeComponent()
{
jLabel4 = new JLabel();
jLabel5 = new JLabel();
jSplitPane2 = new JSplitPane();
contentPane = (JPanel)this.getContentPane();
jLabel4.setHorizontalAlignment(SwingConstants.CENTER);
jLabel4.setText("Load Input...");
jLabel5.setHorizontalAlignment(SwingConstants.CENTER);
jLabel5.setText("See Output...");
jSplitPane2.setLeftComponent(jLabel4);
jSplitPane2.setRightComponent(jLabel5);
jSplitPane2.setDividerLocation(300);
//
// contentPane
//
contentPane.setLayout(new GridLayout(1,1));
contentPane.add(jSplitPane2);
contentPane.setBorder(new TitledBorder(""));
contentPane.setBackground(new Color(100 , 149, 237));
contentPane.setForeground(new Color(239, 19, 19));


contentPane.setFocusable(false);
//
// ImageEditor
//
this.setTitle("ImageEditor... ");
this.setLocation(new Point(550,250));
this.setSize(new Dimension(730,500));
this.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
}
public JMenuBar createMenuBar()
{
JMenu menu,submenu;
JMenuItem menuItem;
// create the menu bar
JMenuBar menuBar = new JMenuBar();
// build the File menu
menu = new JMenu("File");
menu.setMnemonic(KeyEvent.VK_F);    // only needed for the Alt-F keyboard shortcut
menuBar.add(menu);
menuItem = new JMenuItem("Open File...");
menuItem.addActionListener(new OpenFileListener());
menuItem.setToolTipText("Open an Image File...");
menu.add(menuItem);
menuItem = new JMenuItem("Save As...");
menuItem.addActionListener(new SaveAsListener());
menuItem.setToolTipText("Save an Image File...");
menu.add(menuItem);
// build the Process menu
menu = new JMenu("Process");
menu.setMnemonic(KeyEvent.VK_P);    // only needed for the Alt-P keyboard shortcut
menuBar.add(menu);
menuItem = new JMenuItem("Brighten");
menuItem.addActionListener(new BrightenListener());
menuItem.setToolTipText("Brighten the image...");
menu.add(menuItem);
menuItem = new JMenuItem("Darken");
menuItem.addActionListener(new DarkenListener());
menuItem.setToolTipText("Darken the image...");
menu.add(menuItem);
menuItem = new JMenuItem("Negative");
menuItem.addActionListener(new NegativeListener());
menuItem.setToolTipText("Invert the image colors...");
menu.add(menuItem);
menuItem = new JMenuItem("Grayscale");
menuItem.addActionListener(new GrayscaleListener());
menuItem.setToolTipText("Turn the image into black-n-white image...");
menu.add(menuItem);
menuItem = new JMenuItem("Blur");
menuItem.addActionListener(new BlurListener());
menuItem.setToolTipText("Blur the image...");
menu.add(menuItem);
submenu = new JMenu("Edge Detection");
menuItem = new JMenuItem("SobelGX Filter");
menuItem.addActionListener(new SobelGXListener());
menuItem.setToolTipText("Edge Detection using Sobel Filter...");
submenu.add(menuItem);
menuItem = new JMenuItem("SobelGY Filter");
menuItem.addActionListener(new SobelGYListener());
menuItem.setToolTipText("Edge Detection using Sobel Filter...");
submenu.add(menuItem);
menuItem = new JMenuItem("Laplacian Filter");
menuItem.addActionListener(new LaplasianListener());
menuItem.setToolTipText("Edge Detection using Laplacian Filter...");
submenu.add(menuItem);
menu.add(submenu);
menuItem = new JMenuItem("Histogram Equalisation");
menuItem.addActionListener(new HistogramListener());
menuItem.setToolTipText("Enhance the contrast by Histogram Equalisation Technique...");
menu.add(menuItem);
menuItem = new JMenuItem("Sharpen");
menuItem.addActionListener(new SharpenListener());
menuItem.setToolTipText("Sharpen the image...");
menu.add(menuItem);
menuItem = new JMenuItem("Sketching");
menuItem.addActionListener(new SketchingListener());
menuItem.setToolTipText("Pencil sketch impression...");
menu.add(menuItem);
menuItem = new JMenuItem("Emboss");
menuItem.addActionListener(new EmbossListener());
menuItem.setToolTipText("Emboss impression...");
menu.add(menuItem);

// build the Distortion menu
menu = new JMenu("Distortion");
menu.setMnemonic(KeyEvent.VK_D);    // only needed for the Alt-D keyboard shortcut
menuBar.add(menu);
menuItem = new JMenuItem("Flip Horizontal");
menuItem.addActionListener(new FlipHorizontalListener());
menuItem.setToolTipText("Get the mirror image of the original picture...");
menu.add(menuItem);
menuItem = new JMenuItem("Glass");
menuItem.addActionListener(new GlassListener());
menuItem.setToolTipText("Makes image look like it's being seen through glass...");
menu.add(menuItem);
// build the Help menu
menu = new JMenu("Help");
menu.setMnemonic(KeyEvent.VK_H);    // only needed for the Alt-H keyboard shortcut
menuBar.add(menu);
menuItem = new JMenuItem("Help");
menuItem.addActionListener(new HelpListener());
menuItem.setToolTipText("Get help on how to run this software...");
menu.add(menuItem);
return menuBar;
}
private class OpenFileListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
if (chooser.showOpenDialog(ImageEditor.this) ==
JFileChooser.APPROVE_OPTION)
{
File file = chooser.getSelectedFile();
pic = new Picture(file);
pic1 = new Picture(file);
pic3 = new Picture(file);
jSplitPane2.setLeftComponent(pic.getJLabel());
jSplitPane2.setRightComponent(pic1.getJLabel());
pack();
}
}
}
// open a save dialog when the user selects "Save As" from the menu
private class SaveAsListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
if (chooser.showSaveDialog(ImageEditor.this) ==
JFileChooser.APPROVE_OPTION)
{
File file = chooser.getSelectedFile();
pic1.save(file);
}
}
}

private class BrightenListener implements ActionListener


{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
float [] sharpenKernel =
{
0.0f, 0.0f, 0.0f,
0.0f, 1.2f, 0.0f,
0.0f, 0.0f, 0.0f
};
ConvolveOp sharpenOp = new ConvolveOp (new Kernel (3, 3,
sharpenKernel), ConvolveOp.EDGE_NO_OP, null);
sharpenOp.filter(pic1.retImage(),pic3.retImage());
for (int x = 0; x < w; x++)
{
for (int y = 0; y < h; y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
repaint();
}
}
private class DarkenListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
float [] sharpenKernel =
{
0.0f, 0.0f, 0.0f,
0.0f, 0.8f, 0.0f,
0.0f, 0.0f, 0.0f
};
ConvolveOp sharpenOp = new ConvolveOp (new Kernel (3, 3,
sharpenKernel), ConvolveOp.EDGE_NO_OP, null);
sharpenOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x <w; x++)
{
for (int y = 0; y < h; y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
repaint();
}
}
//Image Inversion
// Using BufferedImage Raw Data
private class NegativeListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int[][] r = new int[pic1.width()][pic1.height()];


int[][] g = new int[pic1.width()][pic1.height()];
int[][] b = new int[pic1.width()][pic1.height()];
int argb;
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
argb = pic1.getrgb(x,y);
r[x][y] = 255-((argb >> 16) & 0xff);
g[x][y] = 255-((argb >> 8) & 0xff);
b[x][y] = 255-(argb & 0xff);
}
}
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
argb = (r[x][y] << 16) | (g[x][y] << 8) | b[x][y];
pic1.set(x,y,argb);
}
}
repaint();
}
}
// flip the image horizontally
private class FlipHorizontalListener implements ActionListener
{
public void actionPerformed(ActionEvent e)


{
int width = pic1.width();
int height = pic1.height();
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width / 2; x++)
{
Color c1 = pic1.get(x, y);
Color c2 = pic1.get(width - x - 1, y);
pic1.set(x, y, c2);
pic1.set(width - x - 1, y, c1);
}
}
repaint();
}
}
private class GrayscaleListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int r, g, b,gray;
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
Color c = pic1.get(x, y);
r = c.getRed(); g = c.getGreen(); b = c.getBlue();
gray = (r+g+b)/3;
Color c1 = new Color(gray,gray,gray);
pic1.set(x,y,c1);
}
}
repaint();
}
}
private class SharpenListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
float [] sharpenKernel =
{
0.0f, -1.0f, 0.0f,
-1.0f, 5.0f, -1.0f,
0.0f, -1.0f, 0.0f
};
ConvolveOp sharpenOp = new ConvolveOp (new Kernel (3, 3,
sharpenKernel), ConvolveOp.EDGE_NO_OP, null);
sharpenOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x < w; x++)
{
for (int y = 0; y < h; y++)
{
Color c = pic3.get(x, y);


pic1.set(x,y,c);
}
}
repaint();
}
}
private class LaplasianListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
float [] sharpenKernel =
{
-1.0f, -1.0f, -1.0f,
-1.0f, 8.0f, -1.0f,
-1.0f, -1.0f, -1.0f
};
ConvolveOp sharpenOp = new ConvolveOp (new Kernel (3, 3,
sharpenKernel), ConvolveOp.EDGE_NO_OP, null);
sharpenOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x < w; x++)
{
for (int y = 0; y < h; y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
repaint();
}
}
private class SketchingListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
float [] sharpenKernel =
{
-1.0f, -1.0f, -1.0f,
-1.0f, 8.0f, -1.0f,
-1.0f, -1.0f, -1.0f
};
ConvolveOp sharpenOp = new ConvolveOp (new Kernel (3, 3, sharpenKernel), ConvolveOp.EDGE_NO_OP, null);
sharpenOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x < w; x++)
{
for (int y = 0; y < h; y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
float ninth = 1.0f / 9.0f;
float [] blurKernel =
{
ninth, ninth, ninth,
ninth, ninth, ninth,
ninth, ninth, ninth
};
BufferedImageOp blurOp = new ConvolveOp (new Kernel (3, 3,
blurKernel));
blurOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
int r, g, b,gray;
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
Color c = pic1.get(x, y);
r = c.getRed(); g = c.getGreen(); b = c.getBlue();
gray = (r+g+b)/3;
Color c1 = new Color(gray,gray,gray);
pic1.set(x,y,c1);
}
}
int[][] r1 = new int[pic1.width()][pic1.height()];
int[][] g2 = new int[pic1.width()][pic1.height()];
int[][] b2 = new int[pic1.width()][pic1.height()];
int argb;
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
argb = pic1.getrgb(x,y);
r1[x][y] = 255-((argb >> 16) & 0xff);
g2[x][y] = 255-((argb >> 8) & 0xff);
b2[x][y] = 255-(argb & 0xff);
}
}
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
argb = (r1[x][y] << 16) | (g2[x][y] << 8) | b2[x][y];
pic1.set(x,y,argb);
}
}
repaint();
}
}
private class BlurListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
@SuppressWarnings("unused")
int w = pic1.width();
@SuppressWarnings("unused")
int h = pic1.height();
float ninth = 1.0f / 9.0f;
float [] blurKernel =
{
ninth, ninth, ninth,
ninth, ninth, ninth,
ninth, ninth, ninth
};
BufferedImageOp blurOp = new ConvolveOp (new Kernel (3, 3, blurKernel));
blurOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
repaint();
}
}
private class GlassListener implements ActionListener
{
public int random(int a, int b)
{
return a + (int) (Math.random() * (b-a+1));
}
public void actionPerformed(ActionEvent e)
{
int width = pic1.width();
int height = pic1.height();
for (int i = 0; i < width; i++)
{
for (int j = 0; j < height; j++)
{
int ii = (width + i + random(-5, 5)) % width;
int jj = (height + j + random(-5, 5)) % height;
Color c = pic1.get(ii, jj);
pic1.set(i, j, c);
}
}
repaint();
}
}

private class HistogramListener implements ActionListener


{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
int[] hist = new int[256];
int gray,newgray;
int r,g,b;
for (int x = 0; x < pic1.width(); x++)
{
for (int y = 0; y < pic1.height(); y++)
{
Color c = pic1.get(x, y);
r = c.getRed(); g = c.getGreen(); b = c.getBlue();
gray = (r+g+b)/3;
Color c1 = new Color(gray,gray,gray);
pic1.set(x,y,c1);
}
}
repaint();
for(int i = 0; i < 256; i++) hist[i] = 0;
for(int i = 0; i < w; i++)
{
for(int j = 0; j < h; j++)
{
Color c = pic1.get(i,j);
gray = c.getRed();
hist[gray]++;
}
}
int totalPixel = w*h;
double[] pr = new double[256];
double[] s = new double[256];
for(int i=0;i<256;i++) pr[i] = ((double)hist[i]/totalPixel);
for(int j=0;j<256;j++)
for(int i=0;i<=j;i++)
s[j] = s[j] + pr[i];
for(int i=0;i<w;i++)
for(int j=0;j<h;j++)
{
Color c = pic1.get(i,j);
gray = c.getRed();
newgray = (int)(s[gray]*255);
Color c1 = new Color(newgray,newgray,newgray);
pic1.set(i,j,c1);
}
repaint();
}
}
private class SobelGXListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
float [] sharpenKernel =
{
-1.0f, 0.0f, 1.0f,
-2.0f, 0.0f, 2.0f,
-1.0f, 0.0f, 1.0f
};
ConvolveOp sharpenOp = new ConvolveOp (new Kernel (3, 3,
sharpenKernel), ConvolveOp.EDGE_NO_OP, null);
sharpenOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x < w; x++)
{
for (int y = 0; y < h; y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
repaint();
}
}
private class SobelGYListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
float [] sharpenKernel =
{
-1.0f, -2.0f, -1.0f,
0.0f, 0.0f, 0.0f,
1.0f, 2.0f, 1.0f
};
ConvolveOp sharpenOp = new ConvolveOp (new Kernel (3, 3, sharpenKernel), ConvolveOp.EDGE_NO_OP, null);
sharpenOp.filter(pic1.retImage(), pic3.retImage());
for (int x = 0; x < w; x++)
{
for (int y = 0; y < h; y++)
{
Color c = pic3.get(x, y);
pic1.set(x,y,c);
}
}
repaint();
}
}
private class EmbossListener implements ActionListener
{
public void actionPerformed(ActionEvent e)
{
int w = pic1.width();
int h = pic1.height();
BufferedImage src = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
src = pic1.retImage();
for (int x = 0; x < pic1.height(); x++)
{
for (int y = 0; y < pic1.width(); y++)
{
int current=src.getRGB(y, x);
int upperLeft=0;
if (x > 0 && y > 0)
upperLeft = src.getRGB(y-1, x-1);
int rDiff = ((current >> 16) & 255) - ((upperLeft >> 16) & 255);
int gDiff = ((current >> 8) & 255) - ((upperLeft >> 8) & 255);
int bDiff = (current & 255) - (upperLeft & 255);
int diff = rDiff;
if (Math.abs(gDiff) > Math.abs(diff))
diff = gDiff;
if (Math.abs(bDiff) > Math.abs(diff))
diff = bDiff;
int grayLevel = Math.max(Math.min(128 + diff, 255), 0);
pic1.set(y, x, (grayLevel << 16) + (grayLevel << 8) + grayLevel);
}
}
repaint();
}
}

private class HelpListener implements ActionListener


{
public void actionPerformed(ActionEvent e)
{
try
{
Desktop.getDesktop().open(new File("help.pdf"));
}
catch (Exception ex)
{
ex.printStackTrace();
}
}
}
5.2 Testing from main program
public static void main(String[] args)
{
JFrame.setDefaultLookAndFeelDecorated(true);
JDialog.setDefaultLookAndFeelDecorated(true);
try
{
UIManager.setLookAndFeel("javax.swing.plaf.metal.MetalLookAndFeel");
// other look and feels that could be used instead:
// com.sun.java.swing.plaf.motif.MotifLookAndFeel
// javax.swing.plaf.metal.MetalLookAndFeel
}
catch (Exception ex)
{
System.out.println("Failed loading Look & Feel: ");
System.out.println(ex);
}
new ImageEditor();
}
5.3 Picture
import java.awt.*;
import javax.swing.*;
import java.io.File;
import java.awt.event.*;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.*;
import java.net.URL;
public final class Picture implements ActionListener {
private BufferedImage image; // the rasterized image
private JFrame frame;            // on-screen view

// create a blank w-by-h image
public Picture(int w, int h) {
image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
// set to TYPE_INT_ARGB to support transparency
}
// create an image by reading in the PNG, BMP, GIF, or JPEG from a filename
public Picture(String filename) {
try {
// try to read from file in working directory
File file = new File(filename);
if (file.isFile()) {
image = ImageIO.read(file);
}
// now try to read from file in same directory as this .class file
else {
URL url = getClass().getResource(filename);
if (url == null) { url = new URL(filename); }
image = ImageIO.read(url);
}
}
catch (IOException e) {
// e.printStackTrace();
JOptionPane.showMessageDialog(null,"Incompatible file format! Please select an image file.","ERROR !",JOptionPane.ERROR_MESSAGE);
throw new RuntimeException("Could not open file: " + filename);
}
// check that image was read in
if (image == null) {
JOptionPane.showConfirmDialog(null, "Select correct image format file");
throw new RuntimeException("Invalid image file: " + filename);
}
}
// create an image by reading in the PNG,BMP,GIF, or JPEG from a file
public Picture(File file) {
try { image = ImageIO.read(file); }
catch (IOException e) {
JOptionPane.showMessageDialog(null,"Incompatible file format! Please select an image file.","ERROR !",JOptionPane.ERROR_MESSAGE);
e.printStackTrace();
throw new RuntimeException("Could not open file: " + file);
}
if (image == null) {
JOptionPane.showMessageDialog(null,"Incompatible file format! Please select an image file.","ERROR !",JOptionPane.ERROR_MESSAGE);
throw new RuntimeException("Invalid image file: " + file);
}
}
// to embed in a JPanel, JFrame or other GUI widget
public JLabel getJLabel() {
if (image == null) { return null; }   // no image available
ImageIcon icon = new ImageIcon(image);
return new JLabel(icon);
}
public BufferedImage retImage()
{
return image;
}

// accessor methods
public int height() { return image.getHeight(null); }
public int width() { return image.getWidth(null); }
// return Color of pixel (i, j)
public Color get(int i, int j) {
return new Color(image.getRGB(i, j));
}
// Method Called in NegativeListener


public int getrgb(int i,int j)
{
return image.getRGB(i,j);
}
// change color of pixel (i, j) to c
public void set(int i, int j, Color c) {
image.setRGB(i, j, c.getRGB());
}
// Method called in NegativeListener
public void set(int i,int j,int rgb)
{
image.setRGB(i,j,rgb);
}
// save to given filename - suffix must be png, bmp, jpeg, jpg, or gif
public void save(String filename)
{ save(new File(filename)); }
// save to given filename - suffix must be png,bmp,jpeg, jpg, or gif
public void save(File file) {
String filename = file.getName();
String suffix = filename.substring(filename.lastIndexOf('.') + 1);
suffix = suffix.toLowerCase();
if (suffix.equals("jpg") || suffix.equals("png") || suffix.equals("bmp") ||
suffix.equals("jpeg") || suffix.equals("gif")) {
try { ImageIO.write(image, suffix, file); }
catch (IOException e) { e.printStackTrace(); }


}
else {
JOptionPane.showMessageDialog(null,"Incompatible file name extension! Please select a valid file name extension. (.jpg, .gif, .jpeg, .png, .bmp)","ERROR !",JOptionPane.ERROR_MESSAGE);
System.out.println("Error: filename must end in .jpg,.gif,.bmp,.jpeg, .png");
}
}
// open a save dialog when the user selects "Save As" from the menu
public void actionPerformed(ActionEvent e) {
FileDialog chooser = new FileDialog(frame,
"Use a .png,.gif,.bmp,.jpeg,.jpg extensions", FileDialog.SAVE);
chooser.setVisible(true);
String filename = chooser.getFile();
if (filename != null) {
save(chooser.getDirectory() + File.separator + chooser.getFile());
}
}

// test client: read in input file and display


public static void main(String[] args)
{
@SuppressWarnings("unused")
Picture pic = new Picture(args[0]);
@SuppressWarnings("unused")
Picture pic1 = new Picture(args[0]);
}
Chapter-6
OTHER FUNCTIONAL REQUIREMENTS
6.1 Performance Requirements

In order to see better results, use the software on a color monitor.

It is recommended to use JDK 1.5 or above so that all features run smoothly.

It is better to have the Java Runtime Environment version 2.0 or above installed.

This system does not run on compressed image formats.

6.2 Safety Requirements

Do not try to load compressed images; doing so can result in malfunctioning of the
system.

Some features are for colored images only; for details, see the performance
dependencies.

6.3 Security Requirements
There are no security issues as it is a desktop-based project.

6.4 Software Quality Attributes

The software works well in the Java Runtime Environment version 2.0 and above.

Try to use images of given or specified formats only for better results.


This software is written in Java, so it has minimal configuration requirements and
can run easily on any system, irrespective of operating system and environment.

Try to use the software on a color monitor for better-quality results.

Our product is easy to use; its user-friendly interface makes it so simple that any
person with slight knowledge of such software can easily run it.
Chapter-7
CONCLUSION
At the start of this project, our main aim was to learn the Java programming language.
But after working with some image filtering techniques, we became increasingly
interested in the different areas of image processing. We have learnt a great deal
about image filtering/processing, different image file formats, creating easy-to-use
user interfaces, different color spaces (RGB, YUV) and also Java. At every step along
the way, we met a new and interesting challenge.
Java programs are relatively slow compared to C and C++. But by using good
programming practices, Java can be used to create applications with acceptable speeds.
Image processing is a highly processor-intensive activity, and this project proves that
Java's APIs are indeed of high quality. Not only have we become familiar with the image
processing APIs, but also with many other areas of the Java APIs.
Chapter-8
BIBLIOGRAPHY
1. http://www.jhlabs.com/
2. www.google.com/images
3. www.javaworld.com
4. http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWENS/LECT5/node2.html#SECTION00020000000000000000
5. http://rkb.home.cern.ch/rkb/AN16pp/node1.html
6. www.developer.com
7. http://homepages.inf.ed.ac.uk/rbf/HIPR2/histeq.htm
8. www.java2.com
9. http://www.rspa.com/docs/Reqmspec.html : for SRS template
10. www.processimpact.com/process_assets/srs_template.doc : for SRS template
11. Herbert Schildt, Java: The Complete Reference.
12. Gonzalez and Woods, Digital Image Processing.