
# IMAGE COMPRESSION

## PRESENTATION OF UNIT 8 OF THE BOOK:

DIGITAL IMAGE PROCESSING USING MATLAB BY RAFAEL C.
GONZALEZ, RICHARD E. WOODS, STEVEN L. EDDINS

What does image compression do?
It reduces the amount of data needed to represent an image.

How?
Image compression is achieved by reducing one or more of three basic
redundancies:
- Coding redundancy
- Interpixel redundancy
- Psychovisual redundancy

## COMPRESSION

Encoder:  f(x, y) → Mapper → Quantizer → Symbol coder → compressed data
Decoder:  compressed data → Symbol decoder → Inverse mapper → f̂(x, y)

And the compression ratio is given by: C_R = n1 / n2

where n1 and n2 are the numbers of bits in the original and compressed
representations, respectively.
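The ratio above can be sketched in a few lines. The image dimensions and compressed size below are illustrative values, not taken from the book:

```python
def compression_ratio(n1_bits: int, n2_bits: int) -> float:
    """Return C_R = n1 / n2, the compression ratio."""
    return n1_bits / n2_bits

# Hypothetical example: a 512x512 8-bit image compressed to 65,536 bytes.
n1 = 512 * 512 * 8   # bits in the original representation
n2 = 65_536 * 8      # bits in the compressed representation
print(compression_ratio(n1, n2))  # → 4.0
```

A ratio of 4 means the compressed representation needs a quarter of the original bits.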

## LOSSLESS OR LOSSY?

Lossless compression
The compressed image contains all the information needed for the exact
reconstruction of the initial image.

Lossy compression
The compressed image does not contain the information needed for an exact
reconstruction of the initial image. Thus, some level of distortion is
present in the reconstructed image, so we can define an error between
f(x, y) and the reconstruction f̂(x, y).

Depending on the importance of the data, one could choose lossless or
lossy compression.

## LOSSY COMPRESSION

In lossy compression, information is discarded during the compression
procedure which cannot be restored. So there will be an error between the
initial image f(x, y) and the reconstructed image f̂(x, y):

e(x, y) = f̂(x, y) − f(x, y)

This is the error at each pixel. Thus, the total error over the image is:

E = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f̂(x, y) − f(x, y)]

And the rms error between the two images is the square root of the squared
error averaged over the M×N image:

e_rms = [ (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} (f̂(x, y) − f(x, y))² ]^{1/2}
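The rms error can be computed directly from the definition. A minimal sketch, assuming the two images are given as equally sized nested lists of gray levels (the sample values are invented for illustration):

```python
import math

def rms_error(f, f_hat):
    """e_rms = sqrt( (1/MN) * sum of (f_hat - f)^2 over all pixels )."""
    m, n = len(f), len(f[0])
    total = sum((f_hat[x][y] - f[x][y]) ** 2
                for x in range(m) for y in range(n))
    return math.sqrt(total / (m * n))

f     = [[10, 20], [30, 40]]   # hypothetical original image
f_hat = [[11, 19], [31, 39]]   # hypothetical reconstruction, off by ±1
print(rms_error(f, f_hat))     # → 1.0
```

An rms error of 0 would indicate lossless reconstruction.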

## 1. CODING REDUNDANCY

Examples:
- An intuitive approach: words such as "yes" and "no", which are used
  frequently, are just one syllable.
- The International Morse Code, which was designed to statistically
  minimize the amount of data needed to transmit messages.

Let the discrete random variable r_k for k = 1, 2, …, L and the associated
probabilities p_r(r_k) represent the gray levels of an L-gray-level image.
If the number of bits used to represent each value of r_k is l(r_k), then
the average number of bits required to represent each pixel is

L_avg = Σ_{k=1}^{L} l(r_k) p_r(r_k)

Thus the number of bits required to code an M×N image is M·N·L_avg.
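The formula for L_avg translates directly into code. The probabilities and code lengths below are hypothetical, chosen only to illustrate how a variable-length code beats a fixed-length one:

```python
def average_code_length(probs, lengths):
    """L_avg = sum over k of l(r_k) * p_r(r_k), in bits per pixel."""
    return sum(p * l for p, l in zip(probs, lengths))

probs    = [0.5, 0.25, 0.125, 0.125]  # hypothetical p_r(r_k), sums to 1
fixed    = [2, 2, 2, 2]               # fixed-length code word lengths
variable = [1, 2, 3, 3]               # variable-length code word lengths

print(average_code_length(probs, fixed))     # → 2.0
print(average_code_length(probs, variable))  # → 1.75
```

Frequent gray levels get short code words, so the variable-length code lowers L_avg and therefore the total bit count M·N·L_avg.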

Example:

The table from the book (not reproduced here) compares two codes, Code 1
and Code 2. In Code 1 the number of bits coding each value r_k is the same
regardless of its probability. In Code 2 the code length is variable. So
the first code has L_avg = 2 bits, while for Code 2, L_avg = 1.8125 bits,
and the resulting compression ratio is C_R = 2/1.8125 = 1.103.

## HUFFMAN ALGORITHM

The Huffman algorithm is a greedy algorithm. Greedy algorithms make
locally optimal choices in the hope of eventually reaching a globally
optimal solution; in general they do not guarantee this, but Huffman's
construction provably does.

Thus a gray-scale image encoded with the Huffman algorithm is assigned
Huffman codes which contain the smallest possible number of code symbols
(bits) per source symbol (gray level).

## HUFFMAN ENCODING USING HUFFMAN TREES (+EXAMPLE)

In order to implement the Huffman algorithm we use Huffman trees.
The algorithm followed to build this tree:

1. Sort the list of probabilities and make the two lowest elements into
   leaves, creating a parent node with a probability that is the sum of
   the two lower elements' probabilities.
2. Then repeat the loop, combining the two lowest elements, until the
   total probability is 1.
3. For every left turn put 1 and for every right turn put 0.
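The steps above can be sketched with a heap that always surfaces the two lowest-probability nodes. The symbol probabilities are hypothetical, and the convention of giving bit 1 to the lower-probability branch is one arbitrary choice (the slides' left/right convention is equivalent):

```python
import heapq
import itertools

def huffman_codes(prob_by_symbol):
    """Build Huffman code words by repeatedly merging the two lowest nodes."""
    counter = itertools.count()  # tie-breaker so equal probabilities compare
    heap = [(p, next(counter), {s: ""}) for s, p in prob_by_symbol.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)  # lowest probability subtree
        p2, _, codes2 = heapq.heappop(heap)  # second-lowest subtree
        # Prefix 1 onto the lower branch, 0 onto the other, and merge.
        merged = {s: "1" + c for s, c in codes1.items()}
        merged.update({s: "0" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

codes = huffman_codes({"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.125})
print(codes)
```

With these probabilities the resulting code lengths are 1, 2, 3, and 3 bits: the most probable symbol gets the shortest code word.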

## HUFFMAN DECODING USING HUFFMAN TREES

The Huffman code is an instantaneous, uniquely decodable block code.
- It is instantaneous because each code word in a string of code symbols
  can be decoded without referencing succeeding symbols.
- It is uniquely decodable because a string of code words can be decoded
  in only one way.

To decode a bitstream that has been encoded with the Huffman algorithm we
need the Huffman code (dictionary).

(EXAMPLE)

The Huffman dictionary is:

| Symbol | Code |
|--------|------|
| a1     | 0    |
| a2     | 10   |
| a3     | 110  |
| a4     | 111  |

Encoding a3 a4 a4 a2 a1 gives: 110 111 111 10 0 → 110111111100
Decoding 110111111100 gives back: a3 a4 a4 a2 a1
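The decoding walk can be sketched directly from the dictionary in the example above: accumulate bits and emit a symbol each time the buffer matches a code word, which is unambiguous because the code is prefix-free (instantaneous):

```python
def huffman_decode(bits, code):
    """Decode a bitstring using a prefix-free symbol -> code word mapping."""
    reverse = {v: k for k, v in code.items()}  # code word -> symbol
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in reverse:          # a complete code word has been read
            out.append(reverse[buf])
            buf = ""
    return out

code = {"a1": "0", "a2": "10", "a3": "110", "a4": "111"}
print(huffman_decode("110111111100", code))  # → ['a3', 'a4', 'a4', 'a2', 'a1']
```

This reproduces the decoding from the example: 110 → a3, 111 → a4, 111 → a4, 10 → a2, 0 → a1.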

## 3. PSYCHOVISUAL REDUNDANCY

Unlike coding and interpixel redundancy, psychovisual redundancy is
associated with real or quantifiable visual information.

The elimination of psychovisually redundant data results in a loss of
quantitative information, so it is called quantization. This terminology
is consistent with the normal usage of the word, which generally means the
mapping of a broad range of input values to a limited number of output
values.

Quantization is an irreversible operation and results in lossy
compression.
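A minimal sketch of such a mapping, assuming 8-bit input gray levels (0–255) reduced to a hypothetical 16 output levels; the irreversibility is visible in that many distinct inputs collapse onto one output value:

```python
def quantize(level, n_levels=16, max_level=255):
    """Map an input gray level onto one of n_levels reconstruction values."""
    step = (max_level + 1) / n_levels  # width of each quantization bin
    bin_index = int(level // step)     # which bin the input falls into
    return int(bin_index * step)       # reconstruction value for that bin

print(quantize(0))    # → 0
print(quantize(100))  # → 96
print(quantize(255))  # → 240
```

Every level from 96 through 111 maps to 96, so the original values cannot be recovered: this is exactly why quantization yields lossy compression.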
