
COMPUTER ORIENTED PROJECT
OC 303

A Project Report
on

AUTOMATIC IMAGE INPAINTING


by

Akash Khaitan
08DDCS547

FST
THE ICFAI UNIVERSITY
DEHRADUN
2ND SEMESTER-2010-11
CERTIFICATE

Certified that the project work entitled AUTOMATIC IMAGE INPAINTING has been carried out by Mr. Akash Khaitan, I.D. No. 08DDCS547, during the II Semester, 2010-2011. It is also certified that all the modifications suggested have been incorporated in the report. The project report partially fulfills the requirement in respect of Computer Oriented Project - OC 303.

Signature of the Instructor Signature of the Student

Date :

Place: FST, ICFAI University, Dehradun
Acknowledgement

I would like to thank my project guide Prof. Laxman Singh Sayana whose constant guidance,
suggestions and encouragement helped me throughout the work.

I would also like to thank Prof. Ranjan Mishra, Prof. Rashid Ansari and Prof. Sudeepto Bhatacharya for their help in understanding some of the concepts.

I would also like to thank my family and friends, who have been a source of encouragement and
inspiration throughout the duration of the project. I would like to thank the entire CSE family for
making my stay at ICFAI University a memorable one.

Table of Contents

Abstract

1 Introduction
2 Image Processing Basics
  2.1 Digital Image
  2.2 Pixel
    2.2.1 Pixel Resolution
  2.3 Image Types
  2.4 Point Operations
  2.5 Convolution Operations
    2.5.1 Convolution Kernels
3 Inpainting Techniques
  3.1 Partial Differential Equations for Inpainting
  3.2 Total Variation (TV) Inpainting Model
  3.3 Curvature-Driven Diffusion (CDD) Model
  3.4 Telea's Inpainting Algorithm
  3.5 Exemplar-Based Methods
  3.6 Convolution-Based Method (Oliveira's Algorithm)
  3.7 Color Match Inpainting
  3.8 Right-Left Shift Blur
4 Source Code
  4.1 Creating the GUI
  4.2 Image Panel Creation
  4.3 Load New Image to Panel
  4.4 Color Match Algorithm
  4.5 Oliveira's Algorithm
  4.6 Right-Left Shift Blur Algorithm
  4.7 Convolving a Region
  4.8 Selecting/Sketching an Area
5 Results
  5.1 Experiment 1
  5.2 Experiment 2
  5.3 Experiment 3
  5.4 Experiment 4
  5.5 Experiment 5
6 Future Improvements
7 Discussion and Conclusion
8 References
Abstract

The Automatic Image Inpainting project removes unwanted objects from an image once the user selects them, thereby reducing manual retouching work. It is based on interpolating the pixels to be removed from their neighboring pixels. The entire work has been implemented and tested in Java, as it provides appropriate image libraries for processing an image.

1. Introduction

Image inpainting provides a means to restore damaged regions of an image so that the image looks complete and natural after the inpainting process. Inpainting traditionally refers to the restoration of cracks and other defects in works of art, for which a wide variety of materials and techniques are used.

Automatic/digital inpainting is used to restore old photographs to their original condition. The purpose of image inpainting is the removal of damaged portions of a scratched image by completing the area with the surrounding (neighboring) pixels. The techniques used include the analysis and use of pixel properties in the spatial and frequency domains.

Image inpainting techniques are also used for object removal (or image completion) in symmetrical images.

2. Image Processing Basics

To understand image inpainting clearly, one should first go through this section, which covers the basic ideas of image processing that inpainting builds on.

This chapter briefly describes the following topics:


• Digital Image
• Pixel
• Image Types
• Point Operations
• Convolution operations

2.1 Digital Image

The projection formed at the camera is a two-dimensional, time-dependent, continuous distribution of light energy.

In order to convert this continuous image into a digital image, three steps are necessary:

• The continuous light distribution must be spatially sampled.
• The resulting function must then be sampled in the time domain to create a single image.
• The resulting values must be quantized to a finite range of integers so that they are representable within a computer.

Fig 2.1: (a) continuous image; (b) discrete image; (c) finite range of integers (pixel values)

2.2 Pixel

In digital imaging, a pixel (or picture element) is a single point in a raster image. The pixel is
the smallest addressable screen element; it is the smallest unit of picture that can be
controlled. Each pixel has its own address. The address of a pixel corresponds to its
coordinates. Pixels are normally arranged in a two-dimensional grid, and are often
represented using dots or squares. Each pixel is a sample of an original image; more samples
typically provide more accurate representations of the original. The intensity of each pixel is
variable. In color image systems, a color is typically represented by three or four component
intensities such as red, green, and blue, or cyan, magenta, yellow, and black.

2.2.1 Pixel Resolution


The term resolution is often used for the pixel count in digital imaging. When pixel counts are referred to as resolution, the convention is to describe the pixel resolution as a pair of positive integers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height), for example 640 by 480. Another popular convention is to cite resolution as the total number of pixels in the image, typically given as a number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million.
Below is an illustration of how the same image might appear at different pixel resolutions, if
the pixels were poorly rendered as sharp squares (normally, a smooth image reconstruction
from pixels would be preferred, but for illustration of pixels, the sharp squares make the
point better).

Fig 2.2: The same image shown at a series of decreasing pixel resolutions
An image that is 2048 pixels in width and 1536 pixels in height has a total of 2048×1536 = 3,145,728 pixels, or 3.1 megapixels. One could refer to it as 2048 by 1536 or as a 3.1-megapixel image.
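The same arithmetic in Java, as a quick sketch (the variable names are illustrative):

int width = 2048, height = 1536;
int totalPixels = width * height;             // 3,145,728 pixels
double megapixels = totalPixels / 1000000.0;  // about 3.1 megapixels
System.out.println(totalPixels + " pixels = " + megapixels + " MP");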

2.3 Image Types
Bit Depth          Colours Available
1-bit              black and white
2-bit              4 colours
4-bit              16 colours
8-bit              256 colours
8-bit greyscale    256 shades of grey
16-bit             32,768 colours
24-bit             16.7 million colours
32-bit             16.7 million colours + 256 levels of transparency

The number of colours in an image is determined by the number of bits per pixel: with n bits, 2^n colours are available.

An illustration for a 24 -bit image is described below

2.3.1 24-bit image - 16 million colours


With a 24-bit image you have 16 million colours, made up from 256 shades of red, 256 shades of green and 256 shades of blue. All the colours are made from varying amounts of these primary colours: 0,0,0 is black and 255,255,255 is white; 255,0,0 is red, 0,255,0 is green and 0,0,255 is blue; 255,255,0 makes yellow, 255,0,255 makes magenta and 0,255,255 makes cyan.

Fig 2.3: 24-bit colour combinations

Each value of 0-255 takes up 8 bits, so the total amount of space needed to define the colour of each pixel is 24 bits.
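This packing is exactly what the int returned by BufferedImage.getRGB holds: three 8-bit channels side by side. A minimal sketch of packing and unpacking them with shifts and masks (the same operations used by the convolution code in Chapter 4):

int red = 255, green = 255, blue = 0;         // yellow
int rgb = (red << 16) | (green << 8) | blue;  // pack into one 24-bit value

int r = (rgb >> 16) & 0xFF;                   // unpack each channel again
int g = (rgb >> 8) & 0xFF;
int b = rgb & 0xFF;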

2.4 Point Operations

Point operations modify the pixels of an image independently of the neighboring pixels, and they make it possible to pick out a particular pixel on the basis of its colour alone. The main reason for discussing point operations here is that the inpainting code described in a later chapter selects the image coordinates that carry a particular marker colour.

Some of the operations which can be performed by point operations are:

• RGB Image conversion to grey image


• RGB Image to single color image conversion
• Inversion of Image
• Modifying some pixels on the basis of colours

Each of the operations mentioned above, performed on every pixel, gives a resultant image with the required effect (see the sketch below).
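As a sketch of one such point operation, the method below inverts an image; each pixel is modified using only its own value (img is assumed to be an already loaded java.awt.image.BufferedImage):

static void invert(BufferedImage img) {
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y) & 0xFFFFFF;           // drop the alpha byte
            img.setRGB(x, y, 0xFF000000 | (0xFFFFFF - rgb)); // 255 minus each channel, alpha restored
        }
    }
}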

2.5 Convolution Operations

Convolution is a common image processing technique that changes the intensities of a pixel
to reflect the intensities of the surrounding pixels. A common use of convolution is to create
image filters. Using convolution, you can get popular image effects like blur, sharpen, and
edge detection.

2.5.1 Convolution Kernels

The height and width of the kernel do not have to be the same, though they must both be odd numbers. The numbers inside the kernel are what determine the overall effect of the convolution: the kernel (or, more specifically, the values held within it) determines how to transform the pixels of the original image into the pixels of the processed image.

Fig 2.4: A convolution kernel

Convolution is a series of operations that alter pixel intensities depending on the intensities of
neighboring pixels. The kernel provides the actual numbers that are used in those operations.
Using kernels to perform convolutions is known as kernel convolution.

Convolutions are per-pixel operations—the same arithmetic is repeated for every pixel in the
image. Bigger images therefore require more convolution arithmetic than the same operation
on a smaller image. A kernel can be thought of as a two-dimensional grid of numbers that
passes over each pixel of an image in sequence, performing calculations along the way. Since
images can also be thought of as two-dimensional grids of numbers, applying a kernel to an
image can be visualized as a small grid (the kernel) moving across a substantially larger grid
(the image).

The numbers in the kernel represent the amount by which to multiply the number underneath
it. The number underneath represents the intensity of the pixel over which the kernel element
is hovering. During convolution, the center of the kernel passes over each pixel in the image.
The process multiplies each number in the kernel by the pixel intensity value directly
underneath it. This should result in as many products as there are numbers in the kernel (per
pixel). The final step of the process sums all of the products together, divides them by the
amount of numbers in the kernel, and this value becomes the new intensity of the pixel that
was directly under the center of the kernel.

Fig 2.5 Convolution kernel modifying a pixel

Even though the kernel overlaps several different pixels (or in some cases, no pixels at all),
the only pixel that it ultimately changes is the source pixel underneath the center element of
the kernel. The sum of all the multiplications between the kernel and image is called the
weighted sum. Since replacing a pixel with the weighted sum of its neighboring pixels can
frequently result in much larger pixel intensity (and a brighter overall image), dividing the
weighted sum can scale back the intensity of the effect and ensure that the initial brightness
of the image is maintained. This procedure is called normalization. The optionally divided
weighted sum is what the value of the center pixel becomes. The kernel repeats this
procedure for each pixel in the source image.

The data type used to represent the values in the kernel must match the data used to represent
the pixel values in the image. For example, if the pixel type is float, then the values in the
kernel must also be float values.
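Java's standard library provides this mechanism directly through java.awt.image.ConvolveOp (the same class imported by the source code in Chapter 4). A minimal sketch applying a normalized 3x3 box-blur kernel, assuming source is an already loaded BufferedImage:

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

float[] blur = {
    1/9f, 1/9f, 1/9f,
    1/9f, 1/9f, 1/9f,
    1/9f, 1/9f, 1/9f   // the nine weights sum to 1, so overall brightness is preserved
};
Kernel kernel = new Kernel(3, 3, blur);
ConvolveOp op = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
BufferedImage blurred = op.filter(source, null); // null destination: a new image is created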

3. Inpainting Techniques
Restoration can be done using two approaches: image inpainting and texture synthesis. The first approach restores the missing and damaged parts of an image in such a way that an observer who does not know the original image cannot detect the difference between the original and the restored image. It is called inpainting after the process of painting over, or filling in, holes and cracks in an artwork.

The second approach fills the unknown area of the image using surrounding texture information, or texture from an input sample.

This chapter is dedicated to a discussion of several inpainting techniques, along with their benefits and drawbacks.

3.1 Partial Differential Equations for Inpainting

Bertalmio et al. [1, 4] pioneered a digital image-inpainting algorithm based on partial differential equations (PDEs). A user-provided mask specifies the portions of the input image to be retouched, and the algorithm treats the input image as three separate channels (R, G and B). For each channel, it fills in the areas to be inpainted by propagating information from outside the masked region along level lines (isophotes). Isophote directions are obtained by computing, at each pixel along the inpainting contour, a discretized gradient vector (which gives the direction of largest spatial change) and rotating the resulting vector by 90 degrees. The intent is to propagate information while preserving edges. A 2-D Laplacian is used to locally estimate the variation in colour smoothness, and this variation is propagated along the isophote direction. After every few steps of the inpainting process, the algorithm runs a few diffusion iterations to smooth the inpainted region; anisotropic diffusion is used in order to preserve boundaries across the inpainted region.
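The isophote direction itself is straightforward to compute: take the discretized gradient by central differences and rotate it by 90 degrees. A sketch for a grayscale intensity array I (illustrative names, not the authors' actual code):

double gx = (I[y][x + 1] - I[y][x - 1]) / 2.0; // horizontal central difference
double gy = (I[y + 1][x] - I[y - 1][x]) / 2.0; // vertical central difference
double isoX = -gy; // rotating (gx, gy) by 90 degrees yields the isophote
double isoY = gx;  // direction, along which information is propagated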

Bertalmio et al [2] have introduced a technique for digital inpainting of still images that
produces very impressive results. Their algorithm, however, usually requires several minutes
on current personal computers for the inpainting of relatively small areas.

3.2 Total Variation (TV) Inpainting Model

Chan and Shen proposed two image-inpainting algorithms. The Total Variation (TV) inpainting model [4] uses an Euler-Lagrange equation, and inside the inpainting domain the model simply employs anisotropic diffusion based on the contrast of the isophotes. This model was designed for inpainting small regions, and while it does a good job of removing noise, it does not connect broken edges.

3.3 Curvature-Driven Diffusion (CDD) Model

The Curvature-Driven Diffusion (CDD) model [4] extended the TV algorithm to also take
into account geometric information of isophotes when defining the “strength” of the diffusion
process, thus allowing the inpainting to proceed over larger areas. CDD can connect some
broken edges, but the resulting interpolated segments usually look blurry.

3.4 Telea’s Inpainting Algorithm

Telea [4] proposed a fast marching method (FMM) that can be viewed as a PDE-based approach without the computational overhead. It is considerably faster and simpler to implement than other PDE-based methods, while producing very similar results.

The algorithm propagates an estimate of image smoothness along the image gradient, which simplifies computation of the flow. The smoothness of a pixel to be inpainted is calculated as a weighted average over a known neighborhood of that pixel. The FMM inpaints the pixels nearest to the known region first, which is similar to the manner in which manual inpainting is carried out, and it maintains a narrow band of pixels that separates known pixels from unknown pixels and indicates which pixel will be inpainted next.

The limitation of this method is that it produces blur in the result when the region to be inpainted is thicker than about 10 pixels.

3.5 Exemplar-Based Methods

Exemplar based methods are becoming increasingly popular for problems such as denoising,
super resolution, texture synthesis, and inpainting. The common theme of these methods is
the use of a set of actual image blocks, extracted either from the image being restored, or
from a separate training set of representative images, as an image model. In the case of
inpainting, the approach is usually to progressively replace missing regions with the best
matching parts of the same image, carefully choosing the order in which the missing region is
filled to minimize artifacts. One such method represents missing regions as sparse linear combinations of other regions in the same image (in contrast to approaches in which sparse representations over standard dictionaries, such as wavelets, are employed), computed by minimizing a simple functional.

3.6 Convolution-Based Method (Oliveira's Algorithm [4])

Images may contain textures with arbitrary spatial discontinuities, but the sampling theorem constrains the spatial frequency content that can be automatically restored. Thus, for the case of missing or damaged areas, one can only hope to produce a plausible rather than an exact reconstruction. Therefore, in order for an inpainting model to be reasonably successful for a large class of images, the regions to be inpainted must be locally small. As the regions become smaller, simpler models can be used to locally approximate the results produced by more sophisticated ones. Another important observation used in the design of the algorithm is that the human visual system can tolerate some amount of blurring in areas not associated with high-contrast edges.

Thus, let Ω be a small area to be inpainted and let ∂Ω be its boundary. Since Ω is small, the inpainting procedure can be approximated by an isotropic diffusion process that propagates information from ∂Ω into Ω. A slightly improved algorithm reconnects edges reaching ∂Ω, removes the new edge pixels from Ω (thus splitting Ω into a number of smaller sub-regions), and then performs the diffusion process as before. The simplest version of the algorithm consists of initializing Ω by clearing its colour information and repeatedly convolving the region to be inpainted with a diffusion kernel. ∂Ω is a one-pixel-thick boundary, and the number of iterations is controlled independently for each inpainting domain by checking whether none of the pixels belonging to the domain changed by more than a certain threshold during the previous iteration. Alternatively, the user can specify the number of iterations. As the diffusion process is iterated, the inpainting progresses from ∂Ω into Ω.

Convolving an image with a Gaussian kernel (i.e., computing weighted averages of pixels' neighborhoods) is equivalent to isotropic diffusion (the linear heat equation). The algorithm uses a weighted-average kernel that only considers contributions from the neighboring pixels (i.e., it has a zero weight at the centre of the kernel). The pseudocode of this algorithm and the two diffusion kernels are shown below.

Fig 3.1: Pseudocode for the fast inpainting algorithm, together with the two diffusion kernels used with it: a = 0.073235, b = 0.176765, c = 0.125.
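In code form, the core loop amounts to the sketch below. The kernel values are exactly those used in Section 4.5; convolveMaskedRegion is a hypothetical helper that convolves only the pixels inside the masked region, and the iteration count is an assumption (Section 4.5 uses 100):

float a = 0.073235f, b = 0.176765f, c = 0.125f;
float[] kernel1 = { a, b, a,
                    b, 0f, b,
                    a, b, a };  // 4a + 4b = 1: normalized, with zero weight at the centre
float[] kernel2 = { c, c, c,
                    c, 0f, c,
                    c, c, c };  // 8c = 1: the second, uniform diffusion kernel

for (int iter = 0; iter < iterations; iter++) {
    image = convolveMaskedRegion(image, kernel1); // hypothetical helper: diffuse only inside the mask
    image = convolveMaskedRegion(image, kernel2);
}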

Limitations:

• Applicable only to small scratches
• Many iterations are required

3.7 Color Match Inpainting

It is basically used for removing scratches from old images by marking the scratch with a colour that is not used elsewhere in the image.

The algorithm is as follows:

• The area to be inpainted is coloured using a pencil tool.
• Every pixel is tested against the pencil colour.
• If the colour matches, the eight pixels surrounding that pixel are examined.
• The centre pixel is replaced by any of the surrounding pixels that does not have the pencil colour (a condensed sketch follows below).
The algorithm works well for small scratches.

Drawbacks:

• Not applicable to large-area inpainting
• Cannot remove objects

3.8 Right-Left Shift Blur

This is applicable to symmetric images and can be used to remove scratches and objects from the image.

The algorithm is as follows:

• The object/scratch to be removed is selected with a rectangular tool.
• Half of the selected area is filled with pixels copied from beyond its right edge, and half from beyond its left edge (see the sketch below).
• Finally, the convolution is applied two or three times in order to produce the inpainted image.
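A condensed sketch of the copy step for the left half, mirroring the full implementation in Section 4.6 (the right half is filled symmetrically from beyond the right edge, after which the region is blurred with an averaging kernel):

// mirror pixels from just outside the left edge into the left half of the
// selected rectangle [ix..fx] x [iy..fy]
for (int i = iy; i <= fy; i++) {
    int m = 1;
    for (int j = ix; j <= (ix + fx) / 2; j++) {
        img.setRGB(j, i, img.getRGB(j - m, i)); // j - m steps leftward, away from the hole
        m = m + 2;
    }
}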

The technique works well for symmetric images such as sceneries.

Its drawbacks are:

• A blurred area is produced
• It fails for non-symmetric images

4. Source Code

The entire implementation is written in Java, as it is platform independent and provides appropriate image libraries for manipulating images.

This chapter provides the complete source code of the project.

package imageprocessing;
import java.awt.Graphics2D;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.*;
import java.io.File;
import javax.imageio.*;
import java.awt.event.*;
import java.awt.*;
import java.lang.Integer;

class image implements ActionListener


{
JFrame f;
JMenuBar jb;
JMenu jm,jm1; // the File and Image menus created in creategui()
imageplugins plugins;
Convolution filters;
JMenuItem p1,p2,p3,p4;
JFileChooser jf,jf1;
File file,file1;
static BufferedImage loadImg;
Graphics2D g;
JPanel jp;
Dimension dm;
static JImagePanel panel;
int zm=0,temp=20;
Zoom z1;
JScrollBar hbar,vbar;

4.1 Creating the GUI
public void creategui()
{
dm = Toolkit.getDefaultToolkit().getScreenSize();
f=new JFrame();
jb=new JMenuBar();
jm=new JMenu("File");
jm1=new JMenu("Image"); //Jmenu Image

f.setJMenuBar(jb);
jb.add(jm);
jb.add(jm1);
p1=new JMenuItem("New");
p2=new JMenuItem("Open");
p3=new JMenuItem("Save");
p4=new JMenuItem("Exit");
jm.add(p1);
jm.add(p2);
jm.add(p3);
jm.add(p4);
p2.addActionListener(this);
p3.addActionListener(this);
p4.addActionListener(this);
f.setTitle("Image Processing");
f.setSize((int)dm.getWidth(),(int)dm.getHeight());
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
hbar = new JScrollBar(JScrollBar.HORIZONTAL, 30, 20, 0, 300);
vbar = new JScrollBar(JScrollBar.VERTICAL, 30, 40, 0, 300);
f.add(hbar, BorderLayout.SOUTH);
f.add(vbar, BorderLayout.EAST);
hbar.setUnitIncrement(2);
hbar.setBlockIncrement(1);
plugins=new imageplugins("Transformations",f);
filters=new Convolution("Filters",f);
jb.add(plugins);
jb.add(filters);
f.setVisible(true);
}
public BufferedImage loadImage()
{
BufferedImage bimg = null;
try
{
bimg = ImageIO.read(file);
}
catch (Exception e)
{
e.printStackTrace();
}
return bimg;
}

void saveImage(BufferedImage img, String ref)


{
try
{
String format = (ref.endsWith(".png")) ? "png" : "jpg";
ImageIO.write(img, format, new File(ref));

}
catch (Exception e)
{
e.printStackTrace();
}
}
public void actionPerformed(ActionEvent e)
{
if(e.getSource()==p2)
{
zm=0;
jf=new JFileChooser();
int returnVal = jf.showOpenDialog(f);
if(returnVal== JFileChooser.APPROVE_OPTION)
{
file = jf.getSelectedFile();
loadImg=loadImage();
if(panel!=null) // in order to remove the previous content of the panel
{
panel.setVisible(false);
}
int x=(int)(dm.getWidth()/2)-(loadImg.getWidth()/2);
int y=(int)(dm.getHeight()/2)-(loadImg.getHeight()/2);
panel=new JImagePanel(loadImg,x,y);
f.add(panel);
f.setVisible(true);

}
}
if(e.getSource()==p3)
{
jf1=new JFileChooser();
int returnVal1 = jf1.showSaveDialog(f);
if(returnVal1== JFileChooser.APPROVE_OPTION)
{
file1=jf1.getSelectedFile();
String s1=file1.getAbsolutePath();
saveImage(loadImg,s1);
}
}
if(e.getSource()==p4) // "Exit" menu item; assumed handler, since the original listing is truncated here
{
System.exit(0);
}
}
}
4.2 Image Panel Creation


File: JImagePanel.java
Creates the panel and draws the loaded image for the first time.

package imageprocessing;

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

import javax.swing.JPanel;

class JImagePanel extends JPanel


{
/**
*
*/
private static final long serialVersionUID = 1L;
private BufferedImage image;
int x, y;
Graphics2D g;
public JImagePanel(BufferedImage image, int x, int y)
{
super();
this.image = image;
this.x = x;
this.y = y;
}
protected void paintComponent(Graphics g)
{
super.paintComponent(g);
Graphics2D g2d = (Graphics2D)g;
g2d.drawImage(image,x,y, null);
}
}

4.3 Load New Image to Panel


File: Loadimage.java
A new panel is created and added to the frame.
package imageprocessing;

import java.awt.Dimension;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

import javax.swing.JFrame;

public class Loadimage { // generates a new panel with the image and adds it to the frame
JImagePanel panel;
Loadimage(BufferedImage tempimage,JFrame f)
{
Dimension dm = Toolkit.getDefaultToolkit().getScreenSize();
if(image.panel!=null) // in order to remove the previous content of the panel
image.panel.setVisible(false);
if(panel!=null)
panel.setVisible(false);
int x=(int)(dm.getWidth()/2)-(tempimage.getWidth()/2);
int y=(int)(dm.getHeight()/2)-(tempimage.getHeight()/2);
panel=new JImagePanel(tempimage,x,y);
image.panel=panel;
f.add(image.panel);
}
} // end of class Loadimage

package imageprocessing;

import java.awt.Dimension;
import java.awt.Toolkit;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.awt.event.MouseMotionListener;
import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuItem;

public class Convolution extends JMenu implements ActionListener, MouseListener, MouseMotionListener
{
private static final long serialVersionUID = 1L;
JMenuItem destruct,inpaint,oliveria,interpulate,shiftmap,pencil;
public static BufferedImage tempimage,tempimage1;
JImagePanel panel;
JFrame fr;
image im;
int val,temp1=0;
int[] colors= new int[100000];
int temp=0,teval=16646144,ix=0,iy=0,fx=0,fy=0; // teval == 0xFE0000, the marker colour; (ix,iy)-(fx,fy) is the selected rectangle
public Convolution(String s1,JFrame fr)
{
setText(s1);
destruct=new JMenuItem("Destruct");
pencil=new JMenuItem("Pencil");
inpaint=new JMenuItem("Inpaint");
oliveria= new JMenuItem("Oliveria");
interpulate=new JMenuItem("InterPulate");
shiftmap=new JMenuItem("ShiftMap");
add(inpaint);add(oliveria);add(shiftmap);add(destruct);add(pencil);
this.fr=fr;
destruct.addActionListener(this);
pencil.addActionListener(this);
inpaint.addActionListener(this);
oliveria.addActionListener(this);
interpulate.addActionListener(this);
shiftmap.addActionListener(this);
}
@Override
public void actionPerformed(ActionEvent e2)
{
tempimage=image.loadImg;
if(e2.getSource()==destruct)
{
temp1=0;
image.panel.addMouseListener(this);
}
if(e2.getSource()==pencil)
{
image.panel.addMouseMotionListener(this);

image.panel.addMouseListener(this);
}

4.4 Color Match Algorithm

if(e2.getSource()==inpaint) // Color Match algorithm
{
int i=0,j=0;
int y=tempimage.getHeight();
int x=tempimage.getWidth();
for(i=0;i<y;i++)
{
for(j=0;j<x;j++)
{
int value =tempimage.getRGB(j,i) & 0xFFFFFF;
if(value==16646144)
{
fillcolor(j,i);
}
}
}
new Loadimage(tempimage,fr);
}

4.5 Oliveira's Algorithm

if(e2.getSource()==oliveria) // Oliveira's algorithm
{
float[] elements = { 0.073235f,0.176765f, 0.073235f, 0.176765f,
0.f, 0.176765f, 0.073235f,0.176765f, 0.073235f};
float[] elements1 = { 0.125f,0.125f, 0.125f, 0.125f, 0.f,
0.125f, 0.125f,0.125f, 0.125f};
int x=0;
while(x<100)
{
tempimage=convolveregion(tempimage,elements,ix,iy,fx,fy);
tempimage=convolveregion(tempimage,elements1,ix,iy,fx,fy);
x++;
}
new Loadimage(tempimage,fr);
}
4.6 Right-Left Shift Blur Algorithm
if(e2.getSource()==shiftmap) //Right-Left Shift Blur
{
float[] elements = { 0.111111111f, 0.111111111f, 0.111111111f,
0.111111111f, 0.111111111f, 0.111111111f, 0.111111111f,
0.111111111f,0.111111111f };
for(int i=iy;i<=fy;i++)
{
int m=1;
for(int j=ix;j<=((ix+fx)/2);j++)
{
int value =tempimage.getRGB(j-m, i);
tempimage.setRGB(j, i, value);
m=m+2;
}
}
for(int i=iy;i<=fy;i++)
{

int m=1;
for(int j=fx;j>((ix+fx)/2);j--)
{
int value =tempimage.getRGB(j+m, i);
tempimage.setRGB(j, i, value);
m=m+2;
}
}
new Loadimage(tempimage,fr);
int x=0;
while(x<2)
{
tempimage=convolveregion(tempimage,elements,ix,iy,fx,fy);
x++;
}
new Loadimage(tempimage,fr);
}
}

void fillcolor(int x,int y) // replace the marker pixel at (x,y) with the first non-marker neighbour


{
int val1=tempimage.getRGB(x, y+1)& 0xFFFFFF;
int val2=tempimage.getRGB(x+1, y)& 0xFFFFFF;
int val3=tempimage.getRGB(x, y-1)& 0xFFFFFF;
int val4=tempimage.getRGB(x-1, y)& 0xFFFFFF;
if(val1!=teval)
{
tempimage.setRGB(x, y, val1);
}
else if(val2!=teval)
{
tempimage.setRGB(x, y, val2);
}
else if(val3!=teval)
{
tempimage.setRGB(x, y, val3);
}
else if(val4!=teval)
{
tempimage.setRGB(x, y, val4);
}
}

4.7 Convolving a Region


BufferedImage convolveregion(BufferedImage tempimage,float[]
elements,int ix,int iy,int fx,int fy)
{
BufferedImage tempimage1 = new BufferedImage(tempimage.getWidth(),
tempimage.getHeight(), tempimage.getType());
int[] val = {0,0,0,0,0,0,0,0,0};
float sum=0,sum1=0,sum2=0;
for(int i = iy;i<fy;i++)
{
for(int j=ix;j<fx;j++)
{
val[0]=tempimage.getRGB(j-1, i-1)& 0xFFFFFF;
val[1]=tempimage.getRGB(j, i-1)& 0xFFFFFF;

val[2]=tempimage.getRGB(j+1, i-1)& 0xFFFFFF;
val[3]=tempimage.getRGB(j-1, i)& 0xFFFFFF;
val[4]=tempimage.getRGB(j, i)& 0xFFFFFF;
val[5]=tempimage.getRGB(j+1, i)& 0xFFFFFF;
val[6]=tempimage.getRGB(j-1, i+1)& 0xFFFFFF;
val[7]=tempimage.getRGB(j, i+1)& 0xFFFFFF;
val[8]=tempimage.getRGB(j+1, i+1)& 0xFFFFFF;
int k=0;
sum=0;
sum1=0;
sum2=0;
for(k=0;k<9;k++)
{
int red=((val[k]>>16) & 0xFF);
int green= ((val[k]>>8) & 0xFF);
int blue=((val[k]>>0)& 0xFF);
sum = sum+(elements[k]*blue);
sum1=sum1+(elements[k]*green);
sum2=sum2+(elements[k]*red);
}
int sum3=0;
sum3=0xFF000000+((int)sum2<<16)+((int)sum1<<8)+((int)sum);
tempimage1.setRGB(j, i,(int)sum3);
}
} // end of the first pass over the selection
// second pass: copy the convolved values from tempimage1 back into tempimage
for(int i = iy;i<fy;i++)
{
for(int j=ix;j<fx;j++)
{
int value=tempimage1.getRGB(j, i)& 0xFFFFFF;
tempimage.setRGB(j, i,value);
}
}
return tempimage;
}
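For reference, the Oliveira branch in Section 4.5 drives this method as follows (a usage recap under the same kernel values, not new functionality):

float[] kernel1 = { 0.073235f, 0.176765f, 0.073235f,
                    0.176765f, 0f,        0.176765f,
                    0.073235f, 0.176765f, 0.073235f };
tempimage = convolveregion(tempimage, kernel1, ix, iy, fx, fy); // one diffusion pass over the selection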

4.8 Selecting/Sketching an Area


public void mouseClicked(MouseEvent arg0) {
// TODO Auto-generated method stub
}
@Override
public void mouseEntered(MouseEvent arg0) {
// TODO Auto-generated method stub
}
@Override
public void mouseExited(MouseEvent arg0) {
}
@Override
public void mousePressed(MouseEvent arg0)
{
if(arg0.getSource()==image.panel)
{
Dimension dm = Toolkit.getDefaultToolkit().getScreenSize();
// translate the mouse position (screen coordinates) into image coordinates:
// the panel centres the image, so subtract half the screen size and add half the image size
ix=arg0.getX()-(int)(dm.getWidth()/2)+(tempimage.getWidth()/2);
iy=arg0.getY()-(int)(dm.getHeight()/2)+(tempimage.getHeight()/2);
System.out.println(ix+" "+iy);
}
}

@Override
public void mouseReleased(MouseEvent arg0) {
BufferedImage tempimagesel = new
BufferedImage(tempimage.getWidth(), tempimage
.getHeight(), tempimage.getType());
for(int i = 0;i<tempimage.getHeight();i++)
{
for(int j=0;j<tempimage.getWidth();j++)
{
int value=tempimage.getRGB(j, i)& 0xFFFFFF;
tempimagesel.setRGB(j, i,value);
}
}
if(arg0.getSource()==image.panel && temp1!=1)
{
Dimension dm =
Toolkit.getDefaultToolkit().getScreenSize();
fx=arg0.getX()-
(int)(dm.getWidth()/2)+(tempimage.getWidth()/2);
fy=arg0.getY()-
(int)(dm.getHeight()/2)+(tempimage.getHeight()/2);
for(int i=iy;i<=fy;i++)
{
for(int j=ix;j<=fx;j++)
{
if(i==iy || i==fy || j==ix || j==fx) // draw only the outline of the selected rectangle
tempimagesel.setRGB(j, i, 16646144); // in the marker colour 0xFE0000
}
}
}
new Loadimage(tempimagesel,fr);
}

@Override
public void mouseDragged(MouseEvent arg0)
{

if(arg0.getSource()==image.panel)
{
temp1=1;
Dimension dm =
Toolkit.getDefaultToolkit().getScreenSize();
int x=arg0.getX()-
(int)(dm.getWidth()/2)+(tempimage.getWidth()/2);
int y=arg0.getY()-
(int)(dm.getHeight()/2)+(tempimage.getHeight()/2);
System.out.println(x+" "+y);
tempimage.setRGB(x, y, 16646144);
}

}
@Override
public void mouseMoved(MouseEvent arg0) {
}
}

5. Results

This chapter presents the results obtained by applying some of the inpainting techniques.

5.1 Experiment 1:

Fig 5.1 Original Sea Boat Image

Objective: To remove the boat completely

Algorithm to be applied: Right-Left Shift Blur

Fig 5.2 Boat Selection

Fig 5.3 Boat Removed


5.1.1 Results
• Boat Successfully Removed

5.2 Experiment 2

Fig 5.4 Original trees Image

Objective: To remove the last tree

Algorithm to be applied: Right-Left Shift Blur

The image is of a symmetric type, so we proceed with Right-Left Shift Blur.

Fig 5.5 Tree Selection

Fig 5.6 Tree removed


5.2.1 Result:
• Tree removed successfully

5.3 Experiment 3

Fig 5.7 Original Sea Beach Image

Objective: To remove the people sitting on the beach

Algorithm to be applied: Right-Left Shift Blur

The image is of a symmetric type, so we proceed with Right-Left Shift Blur.

Fig 5.8 People selected

Fig 5.9 People Removed


5.3.1 Results:
• People removed
• Accuracy: 100%

5.4 Experiment 4

Fig 5.10 Lincoln Photo with Crack

Objective: Crack removal

Algorithm applied: Color Match Inpainting

This can be applied to any image that has small cracks/scratches.

Fig 5.11 Crack Selected

Fig 5.12 Crack Removed


5.4.1 Results:
• Crack Removed
• Accuracy: 80%

5.5 Experiment 5

Fig 5.13 Akash Original Image

Fig 5.14 Manual Scratching Done Fig 5.15 Scratches Removed

Objective: To remove Scratches


Algorithm applied: Color Match Algorithm

Results:
• Scratches Removed
• Accuracy: 100%

Future Improvements

The interpolation technique for a 2D matrix can be used to detect scratches/noise in the image and remove it automatically:

• The value of each pixel of the image is read.

• That value is then interpolated from the nearby values.

• The error range of the interpolated value is calculated, and is added to and subtracted from the interpolated value in order to obtain the limits of the safe region.

• If the value from the first step lies within this range, it is kept unchanged; otherwise it is replaced by the interpolated value (a sketch of this idea follows below).
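A sketch of this proposal for a single grayscale channel; the method name and the fixed tolerance band are assumptions, since the technique is only proposed here:

static void autoRepair(BufferedImage img, int tolerance) {
    for (int y = 1; y < img.getHeight() - 1; y++) {
        for (int x = 1; x < img.getWidth() - 1; x++) {
            int actual = img.getRGB(x, y) & 0xFF; // one channel, for simplicity
            int interpolated = ((img.getRGB(x, y - 1) & 0xFF) + (img.getRGB(x, y + 1) & 0xFF)
                    + (img.getRGB(x - 1, y) & 0xFF) + (img.getRGB(x + 1, y) & 0xFF)) / 4;
            // safe region: [interpolated - tolerance, interpolated + tolerance]
            if (Math.abs(actual - interpolated) > tolerance) {
                img.setRGB(x, y, 0xFF000000 | (interpolated << 16) | (interpolated << 8) | interpolated);
            }
        }
    }
}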

The use of artificial intelligence could be combined with image processing in order to produce more accurately inpainted images.

Another direction is the design of a convolution matrix that would detect scratches and remove them automatically.

Discussion and Conclusion

In this report we have described and implemented inpainting algorithms that remove unwanted objects from an image. Different inpainting algorithms were used for the same purpose, and the step common to all of them is the selection of the region where the inpainting is to be done.

The shift-map algorithm (Right-Left Shift Blur) removed unwanted objects from symmetrical images such as sceneries, whereas Oliveira's algorithm is applicable to inpainting scratches, which in practice cover a smaller area.

The point (Color Match) algorithm was used to remove scratches in small areas, with the scratch marked in a red colour.

References

[1] Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C. Image Inpainting. SIGGRAPH 2000, pages 417-424.

[2] Chan, T., Shen, J. Mathematical Models for Local Deterministic Inpaintings. UCLA CAM TR 00-11, March 2000.

[3] Chan, T., Shen, J. Non-Texture Inpainting by Curvature-Driven Diffusions (CDD). UCLA CAM TR 00-35, September 2000.

[4] Manuel M. Oliveira, Brian Bowen, Richard McKenna, Yu-Sung Chang. Fast Digital Image Inpainting. Proceedings of VIIP 2001, September 3-5, 2001.

[5] Gonzalez, R., Woods, R. Digital Image Processing, 2nd Edition. Prentice Hall, Englewood Cliffs, N.J.

[6] Wilhelm Burger, Mark J. Burge. Digital Image Processing: An Algorithmic Introduction Using Java, First Edition. Springer, 2008.

