UNIT 2
Fundamental Concepts in Video
Types of Video Signals:
1) Component Signal:
Higher-end video systems use three separate wires for the red, green, and blue channels, with each
color channel sent as a separate video signal. Most computer systems use component video, with separate
signals for R, G, and B. For any color-separation scheme, component video gives the best color reproduction,
since there is no crosstalk between the three channels. However, component video requires more bandwidth
and good synchronization of the three components.
2) Composite Signal:
Color (chrominance) and intensity (luminance) signals are mixed into a single carrier wave.
Chrominance is a composition of two color components (I and Q, or U and V). In NTSC TV, for example, I and Q
are combined into a chroma signal, and a color subcarrier is used to place the chroma signal at the
high-frequency end of the band shared with the luminance signal. The chrominance and luminance components
can be separated at the receiver, and the two color components can then be recovered. When connecting to TVs
or VCRs, composite video uses only one wire; the video color signals are mixed, not sent separately, and the
audio and sync signals are added to this one signal. Since color and intensity are wrapped into the same
signal, some interference between the luminance and chrominance signals is inevitable.
3) S-Video Signal:
Separated video (or Super-video) uses two wires: one for luminance and another for the chrominance
signal. As a result, there is less crosstalk between the color information and the crucial gray-scale
information. The reason for placing luminance into its own part of the signal is that black-and-white
information is the most crucial for visual perception.

Analog Video:
An analog signal f(t) samples a time-varying image signal. Progressive scanning traces through a
complete picture (a frame) row-wise for each time interval. In TV, and in some monitors and multimedia
standards as well, another system called interlaced scanning is used: the odd-numbered lines are traced first,
and then the even-numbered lines are traced. The solid lines traced from P to Q form the odd field; similarly,
the dotted lines in the frame form the even field. Two fields, odd and even, make up one frame. The odd lines
start at the top-left corner of the frame (P) and end at the bottom-center point (T); the even lines start at
the center point of the frame (U) and end at the lower-right corner (V). First the solid (odd) lines are
traced, then the even field starts at U and ends at V. The jump from Q to R is called horizontal retrace,
during which the electron beam in the CRT is blanked. The jump from T to U is called vertical retrace, during
which the electron beam generates a sync signal. In general, the electron beam generates the sync signal for
horizontal retrace in 10.9 μs, and the active scan-line signal is then generated in 52.7 μs. Different voltage
levels identify the different parts of the signal:
o Blanking signal: 0 Volts
o Sync signal: -0.286 Volts
o White level: 0.714 Volts
o Black level: 0.055 Volts

[Figures: scan lines of a frame; electron-beam signals for tracing scan lines]

Analog Video Representations:


There are three analog video representation techniques as follows:
1) NTSC Video:
The NTSC (National Television System Committee) TV standard is mostly used in North America and
Japan. It uses the familiar 4:3 aspect ratio (the ratio of picture width to height) and 525 scan lines per
frame at 30 frames per second (fps). NTSC follows the interlaced scanning system, and each frame is divided
into two fields with 262.5 lines/field. A pixel clock divides each horizontal line of video into samples. NTSC
uses the YIQ color model and quadrature modulation to combine the I and Q signals into a single chroma signal:
C = I cos(Fsc·t) + Q sin(Fsc·t), where Fsc = 3.58 MHz
This modulated chroma signal is known as the color subcarrier.
Its magnitude is sqrt(I^2 + Q^2) and its phase is tan^-1(Q/I).
[Figures: NTSC scanning system; NTSC frequency spectrum]

NTSC uses a bandwidth of 6.0 MHz, within which the various carriers are placed as follows:
The picture carrier is at 1.25 MHz
The audio subcarrier is at 4.5 MHz (above the picture carrier)
The NTSC composite signal is a composition of the luminance signal (Y) and the chroma signal (C):
Composite = Y + C = Y + I·cos(Fsc·t) + Q·sin(Fsc·t)
Decoding the composite signal at the receiver:
To separate Y and C, a low-pass filter can be used to extract the Y signal.
To extract the I signal, multiply the signal C by 2cos(Fsc·t):
C·2cos(Fsc·t) = (I·cos(Fsc·t) + Q·sin(Fsc·t))·2cos(Fsc·t)
= 2I·cos^2(Fsc·t) + 2Q·sin(Fsc·t)·cos(Fsc·t)
= I·(1 + cos(2Fsc·t)) + Q·sin(2Fsc·t)
= I + I·cos(2Fsc·t) + Q·sin(2Fsc·t)
Apply a low-pass filter to obtain I and discard the two higher-frequency 2Fsc terms.
Similarly, C is multiplied by 2sin(Fsc·t) to get Q.
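The following short sketch (Python with NumPy) demonstrates this modulation and demodulation numerically; the
simulation sampling rate, the toy I and Q values, and the moving-average low-pass filter are illustrative
assumptions, not broadcast-accurate parameters:
```python
import numpy as np

# Sketch: quadrature (de)modulation of the NTSC chroma signal.
FSC = 3.58e6            # color subcarrier (Hz)
FS = 8 * FSC            # simulation sampling rate (assumption)
t = np.arange(0, 200e-6, 1 / FS)

I = 0.3 * np.ones_like(t)          # toy baseband I component
Q = -0.1 * np.ones_like(t)         # toy baseband Q component

# Modulation: C = I cos(2*pi*Fsc*t) + Q sin(2*pi*Fsc*t)
C = I * np.cos(2 * np.pi * FSC * t) + Q * np.sin(2 * np.pi * FSC * t)

def lowpass(x, n=64):
    """Crude moving-average low-pass to discard the 2*Fsc terms."""
    return np.convolve(x, np.ones(n) / n, mode="same")

# Demodulation: multiply by 2cos / 2sin, then low-pass filter
I_rec = lowpass(C * 2 * np.cos(2 * np.pi * FSC * t))
Q_rec = lowpass(C * 2 * np.sin(2 * np.pi * FSC * t))

print(I_rec[len(t) // 2], Q_rec[len(t) // 2])   # ~0.3 and ~-0.1
```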
2) PAL Video:
The PAL (Phase Alternating Line) TV standard was originally invented by German scientists. It uses
625 scan lines per frame at 25 fps with a 4:3 aspect ratio. PAL uses the YUV color model with an 8 MHz channel,
allocating a bandwidth of 5.5 MHz to Y and 1.8 MHz to each of U and V. The color subcarrier frequency is
Fsc = 4.43 MHz. The chroma signals have alternating signs (+U and -U) in successive scan lines, hence the name
Phase Alternating Line. The signals in consecutive lines are averaged so as to cancel the chroma signal when
separating Y and C; a comb filter is used at the receiver.
3) SECAM Video:
SECAM (Systeme Electronique Couleur Avec Memoire) was invented in France for TV broadcast. It uses
625 scan lines per frame at 25 fps, with a 4:3 aspect ratio and interlaced fields. SECAM and PAL are similar,
differing slightly in their color-coding scheme. In SECAM, the U and V signals are modulated using separate
color subcarriers at 4.25 MHz and 4.41 MHz, and they are sent in alternate lines; that is, only one of the
U or V signals is sent on each scan line.

Digital Video:
The advantages of digital representation for video:
Video can be stored on digital devices or in memory, ready to be processed and integrated into various
multimedia applications
Direct access is possible, which makes nonlinear video editing simple
Repeated recording does not degrade image quality
Ease of encryption and better tolerance to channel noise
Digital Video Representation:
The key digital video representation technique is as follows:
Chroma Sub-sampling:
Human vision resolves color detail much less sharply than black-and-white (intensity) detail, so
chroma can be sampled less densely than luminance without visible loss. Sub-sampling uses the YCbCr color
model, where the Y (luminance) signal gives the gray image and the combined Cb, Cr signal, known as chroma,
carries the color information. The Y signal is present in all pixel samples, but chroma need not be present in
every sample.
Chroma sub-sampling: the number of chroma values transmitted per four original pixels defines the chroma
sub-sampling scheme. There are four such schemes, as follows:
4:4:4 Scheme: no chroma sub-sampling is used
o Each pixel's Y, Cb, and Cr values are transmitted
o All four of every four pixels have Y, Cb, Cr values, which gives the highest image quality
4:2:2 Scheme: horizontal sub-sampling of the Cb, Cr signals by a factor of 2
o With four pixels horizontally labeled 0 to 3, all four Ys are sent, and every two Cb's and two Cr's are
sent as (Cb0, Y0) (Cr0, Y1) (Cb2, Y2) (Cr2, Y3) (Cb4, Y4)
o Only two pixels out of four have complete Y, Cb, Cr values, which gives medium image quality
4:1:1 Scheme: horizontal sub-sampling by a factor of 4
o With four pixels horizontally labeled 0 to 3, all four Ys are sent, but only one Cb and one Cr are sent
o Only one pixel out of four has complete Y, Cb, Cr values, which gives poorer image quality
4:2:0 Scheme: sub-samples in both the horizontal and vertical dimensions by a factor of 2


Scheme 4:2:0, along with the other schemes, is commonly used in JPEG and MPEG; a sketch of 4:2:0 sub-sampling
follows.
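As an illustration, the following sketch (Python/NumPy) performs 4:2:0 sub-sampling of the chroma planes; the
plane shapes and the choice to average each 2x2 block are assumptions made for the example:
```python
import numpy as np

# Sketch: 4:2:0 chroma sub-sampling on a YCbCr image given as
# separate Y, Cb, Cr planes of even dimensions.
def subsample_420(cb, cr):
    """Keep one Cb and one Cr per 2x2 block (average of the block)."""
    h, w = cb.shape
    cb420 = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr420 = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return cb420, cr420

def upsample_420(cb420, cr420):
    """Replicate each chroma sample over its 2x2 block for display."""
    return cb420.repeat(2, 0).repeat(2, 1), cr420.repeat(2, 0).repeat(2, 1)

y = np.random.rand(4, 4)       # luminance: kept at full resolution
cb = np.random.rand(4, 4)
cr = np.random.rand(4, 4)
cb420, cr420 = subsample_420(cb, cr)
print(cb.size, "->", cb420.size)   # 16 -> 4 chroma samples per plane
```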
CIF Standards:
CIF stands for Common Intermediate Format, specified by the CCITT. The idea of CIF is to specify a
format for lower bit rates. It uses a progressive (non-interlaced) scan. QCIF stands for Quarter-CIF. All the
CIF/QCIF resolutions are evenly divisible by 8, and all except 88 are divisible by 16; this provides
convenience for block-based video coding in H.261 and H.263.

HDTV (High Definition TV):


The first generation of HDTV was based on analog technology developed by Sony and NHK in Japan in
the late 1970s. MUSE (MUltiple sub-Nyquist Sampling Encoding) was an improved NHK HDTV system with hybrid
analog/digital technologies that was put into use in the 1990s. It has 1,125 scan lines, interlaced (60 fields
per second), and a 16:9 aspect ratio. Since uncompressed HDTV can easily demand more than 20 MHz of bandwidth,
which will not fit in the current 6 MHz or 8 MHz channels, various compression techniques have been
investigated. For video, MPEG-2 was chosen as the compression standard; for audio, AC-3 is the standard.
The salient differences between conventional TV and HDTV:
HDTV has a much wider aspect ratio of 16:9 instead of 4:3. HDTV moves toward progressive (non-interlaced)
scan, since interlacing introduces serrated edges on moving objects and flicker along horizontal edges. The
FCC planned to replace all analog broadcast services with digital TV broadcasting by 2006.
SDTV (Standard Definition TV): the current NTSC TV or higher
EDTV (Enhanced Definition TV): 480 active lines or higher
HDTV (High Definition TV): 720 active lines or higher

Basics of Digital Audio:


Digitization of Sound:
Sound is a wave phenomenon like light. Without air there is no sound, since sound is a pressure
wave. Even though pressure waves are longitudinal, they have ordinary wave properties. Sound is
one-dimensional in nature: its amplitude changes over time. Digitization means conversion to a stream of
numbers, and preferably these numbers should be integers for efficiency. To digitize, the signal must be
sampled in each dimension: in time and in amplitude. Sampling means measuring the quantity we are interested
in, usually at evenly spaced intervals.
Sampling: the first kind of sampling uses measurements only at evenly spaced time intervals. The rate at
which it is performed is called the sampling frequency. For audio, typical sampling rates range from 8 kHz
(8,000 samples per second) to 48 kHz. This range is determined by the Nyquist theorem.
Quantization: sampling in the amplitude or voltage dimension. The number of divisions along the
amplitude axis gives the number of quantization levels. Quantization levels are usually stored as integer
values, represented with 8 bits, 16 bits, etc.
[Figures: sampling; quantization]

To decide how to digitize audio data we need to answer the following questions:
What is the sampling rate?
How finely is the data to be quantized?
Is quantization uniform?
How is audio data formatted? (file format)

Nyquist Theorem:
Signals can be decomposed into a sum of sinusoids; weighted sinusoids can build up quite a complex
signal. Frequency is an absolute measure, while pitch is a perceptual, subjective quality of sound. A harmonic
is any of a series of musical tones whose frequencies are integer multiples of the frequency of a fundamental
tone. If we allow non-integer multiples of the base frequency, we allow non-A notes and obtain a more complex
resulting sound.
If the sampling rate just equals the actual frequency, a false signal is detected that is simply constant,
with zero frequency. If we sample at 1.5 times the actual frequency, we obtain an incorrect (alias) frequency
that is lower than the correct one; it is half the correct one.
Nyquist rate: for correct sampling we must use a sampling rate equal to at least twice the maximum frequency
content in the signal.
Nyquist Theorem: if a signal is band-limited, with a lower limit f1 and an upper limit f2 on its frequency
components, then the sampling rate should be at least 2(f2 - f1).

[Figures: sampling a single frequency; sampling at 1.5 times the actual frequency]

If we have a fixed sampling rate, it is impossible to recover frequencies higher than the Nyquist
frequency in any event. Most systems therefore have an anti-aliasing filter that restricts the frequency
content of the sampler's input to a range at or below the Nyquist frequency.
Nyquist frequency: the frequency equal to half the Nyquist rate.
Folding frequency: another name for the Nyquist frequency associated with a given sampling frequency. The
relationship among the sampling frequency, the true frequency, and the alias frequency is as follows:
f_alias = f_sampling - f_true, for f_true < f_sampling < 2·f_true
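A small numerical check of this relationship (Python/NumPy; the 8 kHz sampling rate and 5.5 kHz tone are
arbitrary example values chosen to satisfy f_true < f_sampling < 2·f_true):
```python
import numpy as np

# Sketch: aliasing of a 5.5 kHz tone sampled at 8 kHz.  Here
# f_alias = f_sampling - f_true = 2.5 kHz.
fs, f_true = 8000.0, 5500.0
n = np.arange(64)
samples = np.cos(2 * np.pi * f_true * n / fs)

# The identical samples are produced by the alias frequency:
alias = np.cos(2 * np.pi * (fs - f_true) * n / fs)
print(np.allclose(samples, alias))   # True
```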

Signal to Noise Ratio (SNR):


The ratio of the power of the correct signal to that of the noise is called the Signal to Noise
Ratio (SNR), a measure of the quality of the signal. The SNR is usually measured in decibels (dB). The SNR
value, in units of dB, is defined in terms of base-10 logarithms of squared voltages:
SNR = 10 log10 (V^2_signal / V^2_noise) = 20 log10 (V_signal / V_noise)
The power in a signal is proportional to the square of its voltage. Decibels are always defined in terms of a
ratio; when decibels are applied to sounds in our environment, the comparison is usually to a just-audible
sound at a frequency of 1 kHz.

Signal to Quantization Noise Ratio (SQNR):


The quality of quantization is characterized by the SQNR. Quantization noise is defined as the
difference between the value of the analog signal, at the particular sampling time, and the nearest
quantization interval value. For a quantization accuracy of N bits per sample, the range of the digital signal
is -2^(N-1) to 2^(N-1)-1. If the actual analog signal is in the range -Vmax to +Vmax, each quantization level
represents a voltage of Vmax/2^(N-1). The ratio of the power of the correct signal to that of the quantization
noise is called the Signal to Quantization Noise Ratio (SQNR):
SQNR = 20 log10 (V_signal / V_quan_noise) = 20 log10 (2^(N-1) / (1/2))
     = 20·N·log10(2) = 6.02·N (dB)
Each bit adds about 6 dB of resolution, so 16 bits provide a maximum SQNR of 96 dB. If we assume that the
input signal is sinusoidal, that the quantization error is statistically independent, and that its magnitude
is uniformly distributed between 0 and half the interval, the expression for SQNR becomes:
SQNR = 6.02·N + 1.76 (dB)
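This rule of thumb can be checked numerically. The sketch below (Python/NumPy; the test tone frequency and
sample count are arbitrary choices) measures the SQNR of uniform N-bit quantization of a full-scale sine wave:
```python
import numpy as np

# Sketch: measured SQNR of uniform N-bit quantization of a sine wave,
# compared with the 6.02*N + 1.76 dB rule of thumb.
def sqnr_db(n_bits, num_samples=100000):
    t = np.linspace(0, 1, num_samples, endpoint=False)
    signal = np.sin(2 * np.pi * 7 * t)            # amplitude-1 sine
    step = 2.0 / (2 ** n_bits)                     # range -1..1
    quantized = np.round(signal / step) * step
    noise = signal - quantized
    return 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))

for n in (8, 16):
    print(n, "bits:", round(sqnr_db(n), 1), "dB vs",
          round(6.02 * n + 1.76, 1), "dB predicted")
```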

Linear & Nonlinear Quantization:


In linear format, samples are stored as uniformly quantized values. Linear quantization uses
uniformly spaced quantization levels with the limited number of bits available. Weber's law states that
equally perceived differences have values proportional to absolute levels:
dResponse ∝ dStimulus / Stimulus
Inserting a constant of proportionality k, we have the differential equation
dr = k(1/s) ds
with response r and stimulus s. Integrating, we arrive at the solution
r = k·ln(s) + C  =>  r = k·ln(s/s0), where s0 is the lowest level of stimulus
Non-uniform quantization schemes take advantage of this logarithmic characteristic. Non-linear quantization
works by first transforming the analog signal from the raw s space into the theoretical r space, then
uniformly quantizing the resulting values. The result is that for steps near the low end of the signal,
quantization steps are effectively more concentrated on the s axis, whereas for large values of s, one
quantization step in r encompasses a wide range of s values. Such a law for audio is called the μ-law; a very
similar rule is called the A-law.
Equations for these similar encodings are as follows:
μ-law:
r = sgn(s) · ln(1 + μ·|s/sp|) / ln(1 + μ),          for |s/sp| ≤ 1
A-law:
r = (A / (1 + ln A)) · (s/sp),                      for |s/sp| ≤ 1/A
r = sgn(s) · (1 + ln(A·|s/sp|)) / (1 + ln A),       for 1/A ≤ |s/sp| ≤ 1
where sgn(s) = 1 if s > 0, and -1 otherwise
The parameter of the μ-law encoder is usually set to μ = 100 or μ = 255, while the parameter for the
A-law encoder is usually set to A = 87.6. Here sp is the peak signal value and s is the current signal value,
so we deal simply with s/sp in the range -1 to 1. The idea behind these laws is that s/sp is first transformed
to a value r, and then r is quantized uniformly before the signal is transmitted or stored. Consider a small
change in |s/sp| near the value 1.0 on the curve: clearly the change in s has to be much larger in this flat
area than near the origin to register as a change in the quantized r value. The logarithmic steps thus
represent low-amplitude, quiet signals with more accuracy than loud, high-amplitude ones.
Compressor: the logarithmic transform is applied to the analog signal before it is sampled and
converted to digital form by the analog-to-digital converter. The amount of compression increases as the
amplitude of the input signal increases. After transmission, the signal is converted back using a
digital-to-analog converter and then passed through an expander circuit that reverses the logarithm. The
overall transformation is called companding. The μ-law in audio is used to develop a non-uniform quantization
rule for sound: put the available bits where there is the most perceptual acuity, allocating bit levels to
intervals for which a small change in stimulus produces a large change in response. A sketch follows.
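A minimal sketch of μ-law companding (Python/NumPy), assuming μ = 255 as mentioned above and an 8-bit uniform
quantizer in r space; the function names are illustrative:
```python
import numpy as np

# Sketch: mu-law companding (compress, uniformly quantize, expand).
# sp is the peak signal value, as in the text.
MU = 255.0

def compress(s, sp=1.0):
    x = s / sp
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(r, sp=1.0):
    return sp * np.sign(r) * np.expm1(np.abs(r) * np.log1p(MU)) / MU

def quantize(r, n_bits=8):
    step = 2.0 / (2 ** n_bits)
    return np.clip(np.round(r / step) * step, -1, 1)

s = np.array([0.001, 0.01, 0.1, 0.9])        # quiet to loud samples
s_hat = expand(quantize(compress(s)))
print(np.round(s_hat, 4))   # quiet samples keep good relative accuracy
```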

Audio Filtering:
Before AD conversion, the audio signal is usually filtered to remove unwanted frequencies; for
speech, typically the band from 50 Hz to 10 kHz is retained. A band-pass filter is used to pass the
frequencies of a specified range and reject unwanted frequencies. After DA conversion, even though we have
removed the high frequencies that are likely just noise in any event, they reappear in the output, due to
sampling and quantization. At the decoder side, a low-pass filter is therefore used after the DA circuit, with
the same cutoff as the high-frequency end of the coder's band-pass filter.

Audio Quality vs. Data Rate:


Bandwidth refers to part of the response or transfer function of a device. Half-power bandwidth
refers to the bandwidth between the points where the power falls to half its maximum. In analog devices,
bandwidth is measured in Hertz; in digital devices, in bits per second (bps). Telephony uses μ-law encoding
(A-law in Europe). Other formats use linear quantization; nonlinear quantization effectively improves the
dynamic range of 8 bits to the equivalent of 12 or 13.

Synthetic Sounds:
Digitized sound must eventually be converted back to analog for playback. There are two
fundamentally different approaches to generating stored, sampled audio:
1) Frequency Modulation:
The carrier sinusoid is changed by adding another term involving a modulating frequency. A more
interesting sound is created by changing the argument of the main cosine term, putting a second cosine inside
the argument itself. A time-varying amplitude envelope function multiplies the whole signal, and another
envelope applies to the inner cosine to account for overtones. Consider the resulting, more complex signal:
x(t) = A(t) · cos[ωc·t + I(t)·cos(ωm·t + φm) + φc]
where ωc is the basic carrier frequency, ωm is the modulating frequency, A(t) is the amplitude envelope, I(t)
controls the harmonic content, and φc, φm are phase constants. A sketch follows.
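A minimal FM-synthesis sketch in Python/NumPy; the carrier and modulator frequencies, the envelope, and the
modulation-index choices below are illustrative assumptions:
```python
import numpy as np

# Sketch: FM synthesis of a two-second tone,
#   x(t) = A(t) * cos(2*pi*fc*t + I(t) * cos(2*pi*fm*t))
FS = 44100
t = np.linspace(0, 2, 2 * FS, endpoint=False)

fc, fm = 440.0, 110.0                # carrier and modulating frequency
A = np.exp(-1.5 * t)                 # decaying amplitude envelope A(t)
I = 5.0 * np.exp(-1.0 * t)           # decaying modulation index I(t)

x = A * np.cos(2 * np.pi * fc * t + I * np.cos(2 * np.pi * fm * t))

# x can be written out as 16-bit PCM samples for playback
pcm = np.int16(32767 * x / np.max(np.abs(x)))
```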
2) Wave Table Synthesis:
This is a more accurate way of generating sounds from digital signals. Digital samples of sounds
from real instruments are stored. Sound reproduction is better with wave tables than with FM synthesis. To
save memory space, a variety of special techniques such as sample looping, pitch shifting, mathematical
interpolation, and polyphonic digital filtering can be applied. Wave tables often include samples at various
notes of the instrument, so that a key change need not stretch a sample too far. Wave-table synthesis is,
however, more expensive than FM.

Musical Instrument Digital Interface (MIDI):


MIDI Overview:
The MIDI standard is supported by most synthesizers. A PC must have a MIDI interface, which is
usually incorporated into the sound card; the sound card must also have DA and AD converters. MIDI is a
scripting language: it codes events that stand for the production of certain sounds, so MIDI files are very
small.
MIDI Components:
1) MIDI Synthesizer:
A standalone sound generator that can vary pitch, loudness, and tone
It can also change additional characteristics such as attack and delay time
2) MIDI Sequencer:
Used for storing and editing a sequence of musical events
3) MIDI Controllers:
Produces no sound itself; it generates MIDI messages
For comparison: 3 minutes of keyboard music can be stored in about 3 KB of MIDI data, whereas a
wave table stores 1 minute of music in about 10 MB
MIDI Concept:
Music is organized into tracks in a sequencer. A particular instrument is associated with a MIDI
channel, which is used to separate messages. There are 16 channels; the last four bits of a message's status
byte identify the channel.
Multi-voice: an instrument can play more than one note at once
Multi-timbre: the capability of playing many different sounds at the same time (timbre is the quality of a sound)
Polyphony: the number of voices that can be produced at the same time
Patch: a set of control settings that define a particular timbre; patches are organized into banks
General MIDI: a mapping specifying which instrument is associated with which channel. In General MIDI, 128
patches are defined, and channel 10 is reserved for percussion instruments.
[Figures: MIDI data stream showing time-varying amplitude modulation and multi-sampling; frequency changes on
Note ON/OFF]

Hardware Aspects of MIDI:


MIDI hardware consists of a 31.25 kbps serial connection, transmitting 10-bit bytes (8 data bits
plus start and stop bits)
Modulation wheel: adds vibrato
Pitch-bend wheel: alters the frequency
Physical MIDI ports consist of 5-pin connectors labeled:
IN: the connector via which the device receives all MIDI data
OUT: the connector through which the device transmits all MIDI data it generates itself
THRU: the connector by which the device echoes the data it receives from MIDI IN
Structure of MIDI Messages:
MIDI messages are mainly classified into two types:
1) Channel Messages: Voice, Mode messages
2) System Messages: Common, Real time, Exclusive messages
1) Channel Messages:
A channel message contains up to 3 bytes, the first of which is the status byte. There are two types: voice
messages and mode messages.
Voice Messages:
Used to control a voice: they send note on or note off and key pressure, and specify effects such as sustain,
vibrato, tremolo, and pitch wheel. In the op-codes, n denotes the channel number; a sketch of how such
messages are assembled follows.
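As an illustration, the sketch below builds raw Note On/Note Off voice messages as bytes (Python; the helper
names are hypothetical, but the byte layout - op-code in the high nibble, channel in the low nibble, followed
by two 7-bit data bytes - is the standard MIDI channel voice message format):
```python
# Sketch: building raw MIDI channel voice messages.
# Note On is op-code 0x9, Note Off is op-code 0x8.
def note_on(channel, key, velocity):
    return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

def note_off(channel, key, velocity=0):
    return bytes([0x80 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

msg = note_on(channel=0, key=60, velocity=100)   # middle C on channel 1
print(msg.hex())   # "903c64"
```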

Mode Messages:
These form a special case of the control-change message; therefore all mode messages have op-code B.
Mode messages determine how an instrument processes MIDI voice messages. Poly means a device will play back
several notes at once if requested to do so; the default mode is POLY ON. Omni means that the device responds
to messages from all channels; the default is OMNI OFF, i.e., pay attention to your own messages only.
2) System Messages:
These have no channel number and are meant for commands that are not channel-specific, such as
timing signals for synchronization and positioning information in a pre-recorded MIDI sequence. The op-codes
for all system messages start with &HF.
Common Messages: relate to timing or positioning; positioning is measured in beats
Real Time Messages: related to synchronization
Exclusive Messages: included so that manufacturers can extend the MIDI standard; after an initial code, they
can insert a stream of any specific messages that apply to their own product

Quantization and Transmission of Audio:


Coding of Audio:
Quantization and transformation of data are collectively known as coding of the data. Taking
differences between a signal's present and past values can reduce the size of the signal values and also
concentrate the histogram of values into a much smaller range. The result of reducing the variance of the
values is that lossless compression methods produce a bit stream with shorter bit lengths for the more likely
values. In general, producing quantized sampled output for audio is called PCM (Pulse Code Modulation). The
differences version is called DPCM (and a crude but efficient variant is called DM). The adaptive version is
called ADPCM.
Pulse Code Modulation:
The basic techniques for creating digital signals from analog ones are sampling and quantization.
Quantization consists of selecting breakpoints (boundary levels) in magnitude and then re-mapping any value
within an interval to one of the representative output levels. The set of interval boundaries is called the
decision boundaries, and the representative values are called reconstruction levels.
Coder mapping: maps the quantizer input intervals whose values will all be mapped into the same output level
Decoder mapping: maps each codeword back to the representative value output by the quantizer
Every compression scheme has three stages:
Transformation: The input data is transformed to a new representation that is easier or more efficient to
compress.
Loss: we may introduce loss of information; quantization is the main lossy step, since we use a
limited number of reconstruction levels, fewer than in the original signal
Coding: assign a codeword (thus forming a binary bit stream) to each output level or symbol. This
could be a fixed-length code or a variable-length code such as Huffman coding.
For audio signals, we first consider PCM as the digitization method. This leads to Lossless
Predictive Coding as well as the DPCM scheme; both methods use differential coding. The adaptive version,
ADPCM, can provide better compression. PCM is the formal term for sampling and quantization; the term pulse
comes from an engineer's point of view that the resulting digital signals can be thought of as infinitely
narrow vertical pulses.
Assuming a bandwidth for speech from about 50 Hz to about 10 kHz, the Nyquist rate would dictate a
sampling rate of 20 kHz. Three cases are then of interest (see the sketch after the list):
1) Using uniform quantization, the minimum sample size we could get away with would likely be about 12
bits. Hence, for mono speech transmission, the bit rate would be 240 kbps
2) With companding, we can reduce the sample size to about 8 bits with the same perceived level of quality,
and thus reduce the bit rate to 160 kbps
3) The standard approach to telephony in fact assumes that the highest-frequency audio signal we want to
reproduce is only about 4 kHz. Therefore the sampling rate is only 8 kHz, and the companded bit rate thus
reduces to 64 kbps
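The arithmetic of the three cases can be expressed as a small sketch (Python; the helper name is illustrative):
```python
# Sketch: the bit-rate arithmetic from the three cases above.
def bitrate_kbps(sampling_rate_hz, bits_per_sample, channels=1):
    return sampling_rate_hz * bits_per_sample * channels / 1000

print(bitrate_kbps(20000, 12))   # 240.0 kbps, uniform quantization
print(bitrate_kbps(20000, 8))    # 160.0 kbps, with companding
print(bitrate_kbps(8000, 8))     # 64.0 kbps, telephony
```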
[Figures: original signal for digitization; PCM signal; reconstructed signal]

PCM signal encoding:
Remove high-frequency content from the analog input signal using a band-limiting filter
A compressor is used to change the amplitudes of the signal
Finally, perform linear PCM to get the digital signal
PCM signal decoding:
Perform digital-to-analog conversion to reconstruct the output analog signal
An expander is used to restore the amplitudes of the signal
The output of the digital-to-analog converter is in turn passed to a low-pass filter, which retains
only frequencies up to the original maximum

Differential coding of audio:
Audio is often stored not in simple PCM but in a form that exploits differences. In general,
differences are small numbers that require fewer bits to store. An advantage of forming differences is that
the histogram of the difference signal is usually highly peaked, with its maximum around zero. Generally, a
time-dependent signal has some consistency over time, so the difference signal, formed by subtracting the
previous sample from the current one, has a more peaked histogram with its maximum around zero.
1) Lossless Predictive Coding:
It simply transmits differences: we predict the next sample as being equal to the current sample,
and send not the sample itself but the error involved in making this assumption. If integer sample values are
in the range 0 to 255, the error is just the difference between the predicted and actual values, and the
differences can range from -255 to 255. The simplest prediction is:
f̂n = fn-1,  en = fn - f̂n
We want the prediction f̂n to be as close as possible to the actual signal fn. Some function of a few of the
previous values fn-1, fn-2, fn-3, etc. can produce a better prediction. A linear predictor has the form:
f̂n = Σ (k = 1 to 2..4) an-k · fn-k
Such a predictor can use truncating or rounding operations so that the result is an integer value.
Exceptionally large differences can be handled by introducing Shift-Up (SU) and Shift-Down (SD) codes: if
samples are in the range 0 to 255 and differences in -255 to 255, SU and SD are defined as shifts by 32, so
that the remaining differences fall in a small range such as -15 to 16. A simple predictor is:
f̂n = floor((fn-1 + fn-2) / 2),  en = fn - f̂n
Ex: consider 5 sample values f1=21, f2=22, f3=27, f4=25, f5=22, with f̂1 = 21 and e1 = 0:
n   fn   f̂n                          en
1   21   21                           21-21 = 0
2   22   21                           22-21 = 1
3   27   floor((22+21)/2) = 21        27-21 = 6
4   25   floor((27+22)/2) = 24        25-24 = 1
5   22   floor((25+27)/2) = 26        22-26 = -4
Decoding these error values reproduces exactly the same input; a sketch follows.
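A runnable sketch of this scheme (Python; the helper names are illustrative), using the averaging predictor
from the example and transmitting the first sample as-is:
```python
# Sketch: lossless predictive coding with the two-sample predictor
# f_hat[n] = (f[n-1] + f[n-2]) // 2 used in the example above.
def encode(samples):
    errors = [samples[0]]                  # first sample sent as-is
    for n in range(1, len(samples)):
        prev2 = samples[n - 2] if n >= 2 else samples[n - 1]
        pred = (samples[n - 1] + prev2) // 2
        errors.append(samples[n] - pred)
    return errors

def decode(errors):
    samples = [errors[0]]
    for n in range(1, len(errors)):
        prev2 = samples[n - 2] if n >= 2 else samples[n - 1]
        pred = (samples[n - 1] + prev2) // 2
        samples.append(pred + errors[n])
    return samples

f = [21, 22, 27, 25, 22]
e = encode(f)
print(e)                # [21, 1, 6, 1, -4]
print(decode(e) == f)   # True: perfectly lossless
```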
2) Differential PCM (DPCM):
DPCM is exactly the same as Lossless Predictive Coding, except that it incorporates a quantizer
step. After the error en is calculated from the reconstructed values f̃n, a quantizer step produces
ẽn = Q[en], the quantized error. DPCM is described by the equations:
f̂n = function_of(f̃n-1, f̃n-2, ...), e.g. f̂n = floor((f̃n-1 + f̃n-2) / 2)
en = fn - f̂n
ẽn = Q[en]
f̃n = f̂n + ẽn
A uniform 16-level quantizer for en in the range -255 to 255 can be defined as:
ẽn = Q[en] = 16 · floor((255 + en) / 16) - 256 + 8
Transmit the codeword for ẽn; reconstruct with f̃n = f̂n + ẽn.
For every block of signal starting at time i, we could take a block of N values fn and try to minimize the
quantization error:
min Σ (n = i to i+N-1) (fn - Q[fn])^2
Signal differences closely follow a Laplacian probability distribution. The box labeled "symbol coder" in the
block diagram simply denotes a Huffman coder.
Ex: consider f1=130, f2=150, f3=140, f4=200, f5=230, with f̂1 = 130, e1 = 0, ẽ1 = 0:
n   fn    f̂n                         en             ẽn = 16·floor((255+en)/16) - 248   f̃n = f̂n + ẽn
1   130   130                         0              0                                  130
2   150   130                         150-130 = 20   16·17 - 248 = 24                   130+24 = 154
3   140   floor((130+154)/2) = 142    140-142 = -2   16·15 - 248 = -8                   142-8 = 134
4   200   floor((154+134)/2) = 144    200-144 = 56   16·19 - 248 = 56                   144+56 = 200
5   230   floor((134+200)/2) = 167    230-167 = 63   16·19 - 248 = 56                   167+56 = 223
When these codewords are decoded, the receiver reproduces the same f̃n sequence; a sketch follows.
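The same example as a runnable sketch (Python; the helper names are illustrative):
```python
# Sketch: DPCM with the averaging predictor and the uniform
# 16-level quantizer from the text:
#   e_q = 16 * ((255 + e) // 16) - 256 + 8
def quantize(e):
    return 16 * ((255 + e) // 16) - 256 + 8

def dpcm_encode(samples):
    recon = [samples[0]]                    # f~ values at the encoder
    codes = [0]                             # first sample sent as-is
    for n in range(1, len(samples)):
        prev2 = recon[n - 2] if n >= 2 else recon[n - 1]
        pred = (recon[n - 1] + prev2) // 2
        eq = quantize(samples[n] - pred)
        codes.append(eq)
        recon.append(pred + eq)
    return codes, recon

f = [130, 150, 140, 200, 230]
codes, recon = dpcm_encode(f)
print(codes)   # [0, 24, -8, 56, 56]
print(recon)   # [130, 154, 134, 200, 223]
```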
3) Delta Modulation (DM):
DM is a much-simplified version of DPCM, often used as a quick analog-to-digital converter. In
uniform delta modulation, the idea is to use only a single quantized error value, either positive or negative,
giving a 1-bit coder:
f̂n = f̃n-1
en = fn - f̂n = fn - f̃n-1
ẽn = +k if en > 0, -k otherwise (k a constant)
f̃n = f̂n + ẽn
Ex: consider f1=10, f2=11, f3=13, f4=25 with k = 4, f̃1 = 10, e1 = 0:
n   fn   f̂n   en             ẽn    f̃n
1   10   10    0              0     10
2   11   10    11-10 = 1      +4    10+4 = 14
3   13   14    13-14 = -1     -4    14-4 = 10
4   25   10    25-10 = 15     +4    10+4 = 14
Note how the reconstruction falls far behind on the jump to 25: when the input changes faster than k per
sample, DM suffers slope overload. A sketch follows.
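A runnable sketch of uniform DM (Python; the helper names are illustrative), reproducing the example with k = 4:
```python
# Sketch: uniform delta modulation with step k = 4, as in the example.
def dm_encode(samples, k=4):
    recon = [samples[0]]
    bits = []
    for n in range(1, len(samples)):
        e = samples[n] - recon[-1]          # e_n = f_n - f~_{n-1}
        eq = k if e > 0 else -k
        bits.append(1 if eq > 0 else 0)
        recon.append(recon[-1] + eq)
    return bits, recon

f = [10, 11, 13, 25]
bits, recon = dm_encode(f)
print(bits)    # [1, 0, 1]
print(recon)   # [10, 14, 10, 14] -- slope overload on the jump to 25
```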

4) Adaptive DPCM (ADPCM):
ADPCM takes the idea of adapting the coder to suit the input much further. It uses an adaptively
modified quantizer, changing the step size as well as the decision boundaries of a non-uniform quantizer.
Making the predictor coefficients adaptive as well is called Adaptive Predictive Coding (APC). Adaptive
predictors can be of two types: forward-adaptive or backward-adaptive. The number of previous values used is
called the order of the predictor:
f̂n = Σ (i = 1 to M) ai · f̃n-i
Changing the prediction coefficients that multiply previous quantized values leads to a complicated set of
equations to solve for the coefficients. Suppose instead we decide to use a least-squares approach to the
minimization:
min Σ (n = 1 to N) (fn - f̂n)^2
Writing this explicitly in terms of the coefficients ai:
min Σ (n = 1 to N) (fn - Σ (i = 1 to M) ai · fn-i)^2
A least-squares sketch follows.
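A sketch of the least-squares fit (Python/NumPy; the sample block and the order M = 2 are illustrative
assumptions):
```python
import numpy as np

# Sketch: least-squares fit of order-M predictor coefficients a_i,
# minimizing sum_n (f_n - sum_i a_i * f_{n-i})^2 over a sample block.
def fit_predictor(f, M=2):
    f = np.asarray(f, dtype=float)
    # Each row of X holds the M previous samples for one prediction
    X = np.column_stack([f[M - i - 1: len(f) - i - 1] for i in range(M)])
    y = f[M:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

f = [130, 150, 140, 200, 230, 210, 250]
print(fit_predictor(f, M=2))   # coefficients for f_{n-1}, f_{n-2}
```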

Reference(s):
1. Ze-Nian Li & Mark S. Drew, Fundamentals of Multimedia, PHI, [Chapters 5 & 6]
