
MIPSCCON – 2011

Hybrid Image Compression using DWT and Neural Networks

S. Sri Durga Kameswari #1, Prof. B. I. Neelgar #2
# Dept. of ECE, GMRIT, Rajam, India
1 sridurga426@gmail.com
2 hod_ece@gmrit.org

Abstract— An image requires a huge amount of data to store, so there is a pressing need to limit image data volume for transmission over communication links. An ideal image compression system must yield good-quality compressed images at a good compression ratio while keeping the time cost minimal. The goal of image compression techniques is to remove the redundancy present in the data in a way that still allows the image to be reconstructed. For still digital images or video, lossy compression is preferred. Wavelet-based image compression provides substantial improvements in picture quality at higher compression ratios. In contrast to traditional techniques for image compression, neural networks can also be used for data or image compression. The proposed work combines both of these methods to compress images with better quality.

Keywords— Image Compression, DWT, Neural Networks

I. INTRODUCTION

With the increasing growth of technology and the entrance into the digital age, a vast amount of image data must be handled and stored in a proper way; efficient methods usually succeed in compressing images while retaining high image quality and reducing image size. Image compression using wavelet transforms is a powerful method preferred by scientists for obtaining compressed images at higher compression ratios, and it is a popular transform used in several lossy image compression standards. Unlike the discrete cosine transform, the wavelet transform is not Fourier-based, and therefore wavelets do a better job of handling discontinuities in data. With wavelet-transform-based compression the quality of compressed images is usually high, but the choice of an ideal compression ratio is difficult to make because it varies with the content of the image. It is therefore of great advantage to have a system that can determine an optimum compression ratio when presented with an image. Artificial neural network (ANN) implementations in image processing applications have increased in recent years. Image compression using a wavelet transform and a neural network was suggested recently [1], and different image compression techniques have been combined with a neural network classifier for various applications [1]. A neural network can be trained to establish the non-linear relationship between image intensity and compression ratio in the search for an optimum ratio. The wavelet transform, and more particularly the discrete wavelet transform (DWT), is a relatively recent and computationally efficient technique for analyzing and extracting information from image signals [2]. The DWT gives a good representation of a signal in both frequency and time scale, and it has recently been adopted by international organizations in image compression standards such as JPEG2000 and MPEG-4.

II. PROBLEM DEFINITION

One of the major difficulties encountered in image processing is the huge amount of data used to store an image; thus, there is a pressing need to limit the resulting data volume. Image compression techniques aim to remove the redundancy present in the data in a way that makes image reconstruction possible. It is necessary to find the statistical properties of the image in order to design an appropriate compression transformation: the more correlated the image data are, the more data items can be removed.

One approach that successfully utilizes neural networks in transform-based image compression is so-called auto-associative transform coding. This work was started by Cottrell and Munro and was restricted to two-layer networks; the technique has since been extended to multi-layer networks.

A wavelet transform combines low-pass and high-pass filtering in the spectral decomposition of signals. One-stage filtering separates approximations from details: for many signals, the low-frequency content is the most important part, since it gives the signal its identity, while the high-frequency content imparts flavor or nuance.

III. INTRODUCTION TO DWT AND NEURAL NETWORK

In wavelet analysis, we often speak of approximations and details. The approximations are the high-scale, low-frequency components of the signal; the details are the low-scale, high-frequency components.

The original signal, S, passes through two complementary filters and emerges as two signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with. By using down-sampling, we can reduce the number of samples. The filtering part of the reconstruction process also bears some discussion, because it is the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The down-sampling of the signal components performed during the decomposition phase introduces a distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and reconstruction phases that are closely related (but not identical), we can "cancel out" the effects of aliasing.

A. Multiple-Level Decomposition

The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower-resolution components. This is called the wavelet decomposition tree. In wavelet analysis, a signal is split into an approximation and a detail; the approximation is then itself split into a second-level approximation and detail, and the process is repeated. For an n-level decomposition, there are n+1 possible ways to decompose or encode the signal.

A three-level decomposition of an image is shown in Fig 1 below.

Fig 1. Three level image decomposition
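To make the multi-level decomposition concrete, the following minimal sketch (not from the paper; it assumes the PyWavelets package, `pywt`, a random array as a stand-in for the test image, and the biorthogonal wavelet 'bior4.4' as a stand-in for the paper's biorthogonal filter) performs a three-level 2-D DWT and reconstructs the image:

```python
# Sketch only: assumes PyWavelets (pywt) and NumPy are available.
import numpy as np
import pywt

img = np.random.rand(64, 64)               # stand-in for a 64x64 grayscale image

# Three-level 2-D decomposition:
# coeffs = [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)]
coeffs = pywt.wavedec2(img, wavelet='bior4.4', level=3)
cA3 = coeffs[0]                             # coarsest approximation (low-frequency) subband
print('level-3 approximation:', cA3.shape)
for level, (cH, cV, cD) in zip((3, 2, 1), coeffs[1:]):
    print('level', level, 'detail subbands:', cH.shape)

# Reconstruction from the full coefficient set is perfect up to numerical precision
rec = pywt.waverec2(coeffs, wavelet='bior4.4')
print('max reconstruction error:', float(np.abs(rec[:64, :64] - img).max()))
```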

In wavelet packet analysis, the details as well as the approximations can be split. The wavelet decomposition tree is a part of this complete binary tree.

Fig 2. Wavelet Packet analysis
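As a concrete illustration (again a sketch assuming PyWavelets, not the authors' code), one level of the full packet tree can be built by applying the 2-D DWT to every subband, not only to the approximation; the ordinary wavelet decomposition tree is the subset in which only the approximation branch is split further:

```python
# Sketch only: repeated application of pywt.dwt2 illustrates wavelet-packet splitting.
import numpy as np
import pywt

img = np.random.rand(64, 64)                                   # stand-in gray image

cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')                      # first-level split

# Second level, packet style: split the details as well as the approximation
packets = {name: pywt.dwt2(band, 'haar')
           for name, band in (('a', cA), ('h', cH), ('v', cV), ('d', cD))}

print('leaf subbands at level 2:', 4 * len(packets))           # 16 for a 2-D packet tree
print('approximation-of-approximation shape:', packets['a'][0].shape)
```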

This work suggests that a neural network can be trained to recognize an optimum ratio for biorthogonal wavelet compression of an image upon presenting the image to the network. Two neural networks receiving different input image sizes are developed in this work, and a comparison of their performance in finding the optimum biorthogonal-based compression is presented.

The main goal of this project is to investigate which neural network structure, combined with different learning algorithms, gives the best results compared with standard transform coding techniques. Various feed-forward neural networks with back-propagation algorithms were applied directly to image compression coding.

A simple model would be a two-layer back-propagation network with n input and n output neurons. The hidden layer contains m neurons, where m is smaller than n. To compress the image, the input is divided into small blocks, each containing n pixels. The network is then trained to produce outputs identical to the input vectors; this is done by randomly sampling the data and feeding it into the network. Once the network is trained, the activation values of the internal neurons are enough to recreate the output. Given a piece of input, the activation values of the hidden layer are computed; these values are stored as the compressed image data, which can be decompressed using the same network structure. The hidden activations are clamped onto the hidden layer of the network and the output is computed.

Fig 3. Simple compressor model.

The basic computational unit, often referred to as a "neuron," consists of a set of "synaptic" weights, one for every input, plus a bias weight, a summer, and a nonlinear function referred to as the activation function.

Fig 4. Model of neuron.

In a multilayer configuration, the outputs of the units in one layer form the inputs to the next layer. The weights of the network are usually computed by training the network using the back-propagation algorithm. The back-propagation algorithm is a supervised learning algorithm which performs a gradient descent on a squared-error energy surface to arrive at a minimum.
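The sketch below (plain NumPy, not the authors' implementation) trains such a simple compressor model by back propagation: n input/output units, m < n hidden units, and gradient descent on the squared reconstruction error; the hidden activations serve as the compressed code.

```python
# Sketch only: random data stands in for real image blocks scaled to [0, 1].
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 8                                    # 8x8-pixel blocks, 8 hidden units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.random((1000, n))                       # stand-in training blocks

W1 = rng.normal(0.0, 0.1, (m, n)); b1 = np.zeros(m)   # hidden-layer weights (m x n)
W2 = rng.normal(0.0, 0.1, (n, m)); b2 = np.zeros(n)   # output-layer weights (n x m)
lr = 0.5

for epoch in range(200):
    H = sigmoid(X @ W1.T + b1)                  # hidden activations = compressed code
    Y = sigmoid(H @ W2.T + b2)                  # reconstructed blocks
    dY = (Y - X) * Y * (1 - Y)                  # gradient at the output layer
    dH = (dY @ W2) * H * (1 - H)                # gradient back-propagated to the hidden layer
    W2 -= lr * dY.T @ H / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * dH.T @ X / len(X); b1 -= lr * dH.mean(axis=0)

print('mean squared reconstruction error:', float(np.mean((Y - X) ** 2)))
```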

Back-propagation networks are among the neural networks that are applied directly to image compression coding. Three layers are designed: one input layer, one output layer and one hidden layer. The input layer and output layer are fully connected to the hidden layer. Compression is achieved by making K, the number of neurons at the hidden layer, smaller than the number of neurons at both the input and the output layers. The input image is split up into blocks or vectors of 8 x 8, 4 x 4 or 16 x 16 pixels.

Fig 5. Back-propagation neural network.

When the input vector is N-dimensional, where N equals the number of pixels in each block, all the coupling weights connected to each neuron at the hidden layer can be represented by {wji, j = 1, 2, …, K; i = 1, 2, …, N}, which can also be described by a matrix of order K x N. From the hidden layer to the output layer, the connections can be represented by another weight matrix, of order N x K.
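Written out in standard notation (a restatement for clarity, not an equation taken from the paper; f is the activation function and b, b' are the bias terms), the hidden code and the reconstruction are:

```latex
h_j = f\Big(\sum_{i=1}^{N} w_{ji}\,x_i + b_j\Big), \quad j = 1,\dots,K, \qquad
\hat{x}_i = f\Big(\sum_{j=1}^{K} w'_{ij}\,h_j + b'_i\Big), \quad i = 1,\dots,N .
```

The K hidden activations h form the stored code for an N-pixel block, so the raw reduction in the number of values per block is N/K.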
Back-propagation-based neural networks have provided good alternatives for image compression in the framework of the K-L transform. Although most of the networks developed so far use linear training schemes, and experiments support this option, it is not clear why non-linear training leads to inferior performance or how non-linear transfer functions can be exploited to achieve further improvement. Generally speaking, the coding scheme used by neural networks tends to keep the same neuron weights, once trained, throughout the whole process of image compression.
IV. DESIGN AND IMPLEMENTATION

In this architecture the network is trained in stages: there are two stages for compression and two for decompression. At each compression stage the original image is scaled down, and in the second stage it is scaled down further to obtain the compressed image. This means that each layer can be trained separately, and the block size can also be increased for better performance. The compressed output is then passed through the two decompression stages to obtain the decompressed image, with reduced error compared to its two-layer counterpart.

Inputs: Pixel values are chosen as the inputs to the neural network. A [64 x 64] gray image is taken as an example. Because the network can be fed 64 pixel values at a time, the whole image is subdivided into [8 x 8] blocks and each block is fed in turn; each [8 x 8] block can easily be converted into a [64 x 1] vector and fed to the network.

Fig 6. Image divided into blocks

Training: After defining the network and its layers, the network has to be trained with various pictures. In doing so, the network forms a weight matrix of size [8 x 64] and a bias matrix of size [8 x 1] by the method of back propagation.
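The bookkeeping above can be made explicit. In this sketch (random placeholders stand in for the trained weight and bias matrices), a 64 x 64 image is cut into sixty-four 8 x 8 blocks, each flattened to 64 values and mapped to an 8-value code:

```python
# Sketch only: random arrays stand in for the trained [8 x 64] weight matrix and
# [8 x 1] bias vector obtained by back propagation.
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))                      # stand-in 64x64 gray image

# Non-overlapping 8x8 blocks, each flattened to a 64-element vector
blocks = np.stack([img[r:r + 8, c:c + 8].ravel()
                   for r in range(0, 64, 8) for c in range(0, 64, 8)])

W = rng.normal(0.0, 0.1, (8, 64))               # placeholder for the trained weights
b = np.zeros(8)                                 # placeholder for the trained biases

codes = 1.0 / (1.0 + np.exp(-(blocks @ W.T + b)))    # one 8-value code per block
print(blocks.shape, '->', codes.shape)               # (64, 64) -> (64, 8): 12.5% of the values
```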

Output: The output thus obtained is an [8 x 1] matrix per block. From this it is evident that compression has taken place: 64 input values produce an output with only 8 values, so the input is reduced to 12.5% of its original size.

Fig 7. Compression stage

Fig 8. Decompression stage

From the above two figures it is evident how compression and decompression take place. The mean square error can be found by comparing the input and output values.
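The error measure mentioned above can be computed directly; a small helper (assuming 8-bit pixels for the PSNR peak value) might look like:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between the input image and the reconstructed output."""
    diff = np.asarray(original, dtype=np.float64) - np.asarray(reconstructed, dtype=np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the assumed maximum pixel value."""
    m = mse(original, reconstructed)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```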
V. BLOCK DIAGRAM

The basic architecture for image compression using a neural network is shown in Figure 9. Most image compression techniques use either neural networks for compression or DWT (Discrete Wavelet Transform) based transformation for compression.

Figure 9. Block Diagram

The following are the phases of implementation.

PHASE 1: The original image undergoes the discrete wavelet transformation. The DWT is a transform which can map a block of data in the spatial domain into the frequency domain; it returns information about the localized frequencies in the data set.

PHASE 2: A compressed image is obtained using the neural network.

PHASE 3: The results are saved and the quantization step from the compression is reversed. The parts of the image are then prepared for the inverse discrete wavelet transform; the wavelet packets are subjected to the inverse DWT. Afterwards, the final reconstructed image may be displayed.
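One possible shape of the three phases is sketched below (again assuming PyWavelets; `nn_compress` and `nn_decompress` are hypothetical hooks standing in for the trained back-propagation network, implemented here as identities so the sketch runs end to end):

```python
import numpy as np
import pywt

def nn_compress(subband):
    # Hypothetical hook: the trained encoder half of the network would go here.
    return subband

def nn_decompress(code):
    # Hypothetical hook: the trained decoder half of the network would go here.
    return code

img = np.random.rand(64, 64)                              # stand-in gray image

# PHASE 1: forward DWT into approximation and detail subbands
cA, (cH, cV, cD) = pywt.dwt2(img, 'bior4.4')

# PHASE 2: neural-network coding of each subband
codes = [nn_compress(band) for band in (cA, cH, cV, cD)]

# PHASE 3: undo the coding, then apply the inverse DWT
cA2, cH2, cV2, cD2 = [nn_decompress(c) for c in codes]
rec = pywt.idwt2((cA2, (cH2, cV2, cD2)), 'bior4.4')
print('reconstruction error with a lossless coding stage:', float(np.abs(rec - img).max()))
```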

A comparison of the compression ratio achieved by different techniques is given in Table 1.

S.No   Method used                               Compression Ratio
1      Discrete Cosine Transform                  3
2      Discrete Wavelet Transform                 2
3      Hybrid Image Compression (DWT and NN)     12

Table 1. Comparison of CR

VI. RESULTS

a) Original image                b) One level decomposition
c) Two level decomposition       d) Training waveform
e) Neural Network output         f) Reconstruction from NN
g) One level reconstruction      h) Final reconstructed image

VII. CONCLUSION

All the disadvantages of the Joint Photographic Experts Group (JPEG) technique are overcome in the neural-network-based hybrid image compression, which demonstrates the hybrid concept of combining wavelets and neural networks.

The implementation of the proposed method used biorthogonal image compression, where the quality of the compressed images degrades at higher compression ratios due to the nature of lossy wavelet compression. Noise on the compressed data samples does not influence retrieval of the original image using Neural Network (NN) techniques, whereas in the Joint Photographic Experts Group (JPEG) technique noise affects decompression.

The hybrid image compression combines the Discrete Wavelet Transform (DWT) and a Neural Network (NN) with biorthogonal wavelets; a high compression ratio with good-quality compressed images and optimized silicon area and power is observed.

VIII. REFERENCES

[1] Adnan Khashman and Kamil Dimililer, "Image Compression Using Neural Networks and Haar Wavelet", WSEAS Transactions on Signal Processing, ISSN 1790-5022, Issue 5, Vol. 4, May 2008.

[2] Noman Junejo, Naveed Ahmed and Muktiar Ali Unar, "Speech and Image Compression Using Discrete Wavelet Transform", IEEE Trans. Image Processing.

[3] Weihuang Fu, "Discrete Wavelet Transform IP Core Design for Image Compression", ICSP'04 Proceedings, 0-7803-8406-7/04, 2004 IEEE.

[4] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Edition, Pearson, 2004.

