
UNIT-3

FUNDAMENTALS OF EQUALISATION & EQUALISERS IN


COMMUNICATION RECEIVER

In telecommunication, equalization is the reversal of distortion incurred by a signal transmitted


through a channel. Equalizers are used to render the frequency response (for instance, of a telephone line) flat from end to end. When a channel has been equalized, the frequency-domain attributes of the signal at the input are faithfully reproduced at the output. Telephones, DSL lines and television cables use equalizers to prepare data signals for transmission.
Equalizers are critical to the successful operation of electronic systems such as analog broadcast
television. In this application the actual waveform of the transmitted signal must be preserved, not
just its frequency content. Equalizing filters must cancel out any group delay and phase
delay between different frequency components.
Audio lines
Early telephone systems used equalization to correct for the reduced level of high frequencies in long
cables, typically using Zobel networks. These kinds of equalizers can also be used to produce a
circuit with a wider bandwidth than the standard telephone band of 300 Hz to 3.4 kHz. This was
particularly useful for broadcasters who needed "music" quality, not "telephone" quality on landlines
carrying program material. It is necessary to remove or cancel any loading coils in the line before
equalization can be successful. Equalization was also applied to correct the response of the
transducers, for example, a particular microphone might be more sensitive to low frequency sounds
than to high frequency sounds, so an equalizer would be used to increase the volume of the higher
frequencies (boost), and reduce the volume of the low frequency sounds (cut).
Television lines
A similar approach to audio was taken with television landlines with two important additional
complications. The first of these is that the television signal is a wide bandwidth covering many more
octaves than an audio signal. A television equalizer consequently typically requires more filter
sections than an audio equalizer. To keep this manageable, television equalizer sections were often
combined into a single network using ladder topology to form a Cauer equalizer.
The second issue is that phase equalization is essential for an analog television signal. Without
it dispersion causes the loss of integrity of the original wave shape and is seen as smearing of what
were originally sharp edges in the picture.
Analog equalizer types
Zobel network
Lattice phase equaliser
Bridged T delay equaliser

Digital telecommunications
Modern digital telephone systems have less trouble in the voice frequency range as only the local
line to the subscriber now remains in analog format, but DSL circuits operating in the MHz range on

PINKU MAURYA B.TECH (E&C.E) ~ 1 ~


those same wires may suffer severe attenuation distortion, which is dealt with by automatic
equalization or by abandoning the worst frequencies. Picturephone circuits also had equalizers.
In digital communications, the equalizer's purpose is to reduce intersymbol interference to allow
recovery of the transmit symbols. It may be a simple linear filter or a complex algorithm.
Digital equalizer types
Linear equalizer: processes the incoming signal with a linear filter
MMSE equalizer: designs the filter to minimize E[|e|²], where e is the error signal, which is the filter output minus the transmitted signal.[1]
Zero forcing equalizer: approximates the inverse of the channel with a linear filter.
Decision feedback equalizer: augments a linear equalizer by adding a filtered version of previous
symbol estimates to the original filter output.[2]
Blind equalizer: estimates the transmitted signal without knowledge of the channel statistics,
using only knowledge of the transmitted signal's statistics.
Adaptive equalizer: is typically a linear equalizer or a DFE. It updates the equalizer parameters
(such as the filter coefficients) as it processes the data. Typically, it uses the MSE cost function;
it assumes that it makes the correct symbol decisions, and uses its estimate of the symbols to
compute e, which is defined above.
Viterbi equalizer: Finds the maximum likelihood (ML) optimal solution to the equalization
problem. Its goal is to minimize the probability of making an error over the entire sequence.
BCJR equalizer: uses the BCJR algorithm (also called the Forward-backward algorithm) to find
the maximum a posteriori (MAP) solution. Its goal is to minimize the probability that a given bit
was incorrectly estimated.
Turbo equalizer: applies turbo decoding while treating the channel as a convolutional code.
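The linear and zero-forcing entries above can be made concrete with a short sketch. The example below (a hypothetical NumPy illustration, not from the original text) designs an FIR equalizer whose convolution with an assumed channel approximates a delayed unit spike, which is the zero-forcing idea:

```python
import numpy as np

def zero_forcing_equalizer(channel, num_taps, delay):
    """Least-squares approximation of the (delayed) channel inverse."""
    n = len(channel) + num_taps - 1
    # Convolution matrix: C @ e equals np.convolve(channel, e).
    C = np.zeros((n, num_taps))
    for i in range(num_taps):
        C[i:i + len(channel), i] = channel
    d = np.zeros(n)
    d[delay] = 1.0                    # desired combined response: one spike
    e, *_ = np.linalg.lstsq(C, d, rcond=None)
    return e

channel = np.array([1.0, 0.5, 0.2])   # assumed multipath channel
eq = zero_forcing_equalizer(channel, num_taps=8, delay=4)
combined = np.convolve(channel, eq)   # approximately a delayed unit spike
```

In practice a pure channel inverse can amplify noise near spectral nulls, which is why the MMSE design above is often preferred.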
LINEAR EQUALISATION TECHNIQUES
When all is well in the receiver, there is no interaction between successive symbols; each symbol
arrives and is decoded independently of all others. But when symbols interact, when the waveform of
one symbol corrupts the value of a nearby symbol, then the received signal becomes distorted. It is
difficult to decipher the message from such a received signal. This impairment is called intersymbol
interference and was discussed in Chapter [link] in terms of non-Nyquist pulse shapes overlapping
in time. This chapter considers another source of interference between symbols that is caused by
multipath reflections (or frequency-selective dispersion) in the channel.

When there is no intersymbol interference (from a multipath channel, from imperfect pulse shaping,
or from imperfect timing), the impulse response of the system from the source to the recovered
message has a single nonzero term. The amplitude of this single spike depends on the transmission
losses, and the delay is determined by the transmission time. When there is intersymbol interference
caused by a multipath channel, this single spike is scattered, duplicated once for each path in the
channel. The number of nonzero terms in the impulse response increases. The channel can be
modeled as a finite-impulse-response, linear filter C, and the delay spread is the total time interval
during which reflections with significant energy arrive. The idea of the equalizer is to build (another)
filter in the receiver that counteracts the effect of the channel. In essence, the equalizer must
unscatter the impulse response. This can be stated as the goal of designing the equalizer E so that
the impulse response of the combined channel and equalizer CE has a single spike. This can be cast
as an optimization problem, and can be solved using techniques familiar from Chapters [link], [link],
and [link].



The transmission path may also be corrupted by additive interferences such as those caused by other
users. These noise components are usually presumed to be uncorrelated with the source sequence and
they may be broadband or narrowband, in band or out of band relative to the band limited spectrum
of the source signal. Like the multipath channel interference, they cannot be known to the system
designer in advance. The second job of the equalizer is to reject such additive narrowband interferers
by designing appropriate linear notch filters on-the-fly based on the received signal. At the same
time, it is important that the equalizer not unduly enhance the broadband noise.

The baseband linear (digital) equalizer is intended to (automatically) cancel unwanted effects of the
channel and to cancel certain kinds of additive interferences.

The signal path of a baseband digital communication system is shown in [link], which emphasizes
the role of the equalizer in trying to counteract the effects of the multipath channel and the additive
interference. As in previous chapters, all of the inner parts of the system are assumed to operate
precisely: thus, the up-conversion and down-conversion, the timing recovery, and the carrier synchronization (all the parts of the receiver that are not shown) are assumed to be flawless and unchanging. Modelling the channel as a time-invariant FIR filter, the next section focuses on the task
of selecting the coefficients in the block labelled linear digital equalizer, with the goal of removing
the intersymbol interference and attenuating the additive interferences. These coefficients are to be
chosen based on the sampled received signal sequence and (possibly) knowledge of a prearranged
training sequence. While the channel may actually be time varying, the variations are often much
slower than the data rate, and the channel can be viewed as (effectively) time invariant over small
time scales.

This chapter suggests several different ways that the coefficients of the equalizer can be chosen. The
first procedure, in "A Matrix Description", minimizes the square of the symbol recovery error over a block of data, which can be done using a matrix pseudo-inversion. Minimizing the (square of the)
error between the received data values and the transmitted values can also be achieved using an
adaptive element, as detailed in "An Adaptive Approach to Trained Equalization". When there is no
training sequence, other performance functions are appropriate, and these lead to equalizers such as
the decision-directed approach in "Decision-Directed Linear Equalization" and the dispersion
minimization method in "Dispersion-Minimizing Linear Equalization". The adaptive methods
considered here are only modestly complex to implement, and they can potentially track time
variations in the channel model, assuming the changes are sufficiently slow.
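The block-based "A Matrix Description" approach can be sketched as follows; the training sequence, channel taps, and equalizer length below are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: BPSK training symbols through a mild FIR channel.
s = rng.choice([-1.0, 1.0], size=500)        # known training sequence
channel = np.array([1.0, 0.3, 0.1])
r = np.convolve(s, channel)[:len(s)]         # received (noise-free here)

num_taps, delay = 6, 2
# Each row of R is a sliding window of the received signal.
R = np.array([r[k - np.arange(num_taps)] for k in range(num_taps, len(r))])
d = s[np.arange(num_taps, len(r)) - delay]   # delayed training symbols
f = np.linalg.pinv(R) @ d                    # least-squares equalizer taps

y = R @ f
symbol_errors = int(np.sum(np.sign(y) != d))
```

The pseudo-inverse solves the whole block at once; the adaptive methods discussed next reach a similar answer one sample at a time.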

Multipath Interference

The villains of this chapter are multipath and other additive interferers. Both should be familiar from earlier chapters.



The distortion caused by an analog wireless channel can be thought of as a combination
of scaled and delayed reflections of the original transmitted signal. These reflections
occur when there are different paths from the transmitting antenna to the receiving
antenna. Between two microwave towers, for instance, the paths may include one along
the line-of-sight, reflections from nearby hills, and bounces from a field or lake between
the towers. For indoor digital TV reception, there are many (local) time-varying
reflectors, including people in the receiving room, and nearby vehicles. The strength of
the reflections depends on the physical properties of the reflecting objects, while the delay
of the reflections is primarily determined by the length of the transmission path.
Let u(t) be the transmitted signal. If the N delays are represented by τ_1, τ_2, ..., τ_N, and the strengths of the reflections by a_1, a_2, ..., a_N, then the received signal is

y(t) = a_1 u(t - τ_1) + a_2 u(t - τ_2) + ... + a_N u(t - τ_N) + η(t),

where η(t) represents the additive interferences. This model of the transmission channel has the form of a finite impulse response filter, and the total length of time τ_N - τ_1 over which the impulse response is nonzero is called the delay spread of the physical medium.

This transmission channel is typically modelled digitally assuming a fixed sampling period Ts. Thus, the received signal is approximated by

y(kTs) = a_1 u(kTs) + a_2 u((k-1)Ts) + ... + a_n u((k-n)Ts) + η(kTs).

In order for the model to closely represent the system, the total time over which the impulse response is nonzero (the time nTs) must be at least as large as the maximum delay τ_N. Since the delay is not a function of the symbol period Ts, smaller Ts require more terms in the filter (i.e., larger n).
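The tapped-delay-line model above can be sketched directly; the pulse and tap values below are illustrative assumptions:

```python
import numpy as np

def multipath_channel(u, taps, noise_std=0.0, rng=None):
    """Apply an FIR multipath model: each tap is one delayed, scaled echo."""
    rng = rng or np.random.default_rng(0)
    y = np.convolve(u, taps)[:len(u)]          # scaled and delayed copies
    return y + noise_std * rng.standard_normal(len(u))

u = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # a single transmitted pulse
taps = np.array([1.0, 0.0, 0.6])               # direct path + one echo
echoed = multipath_channel(u, taps)            # the spike is duplicated
```

A single transmitted spike comes out once per path, which is exactly the "scattered impulse response" the equalizer must undo.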

For example, consider a sampling interval of Ts = 40 nanoseconds (i.e., a transmission rate of 25 MHz). A delay spread of approximately 4 microseconds would correspond to one hundred taps in the model. Thus, at any time instant, the received signal would be a combination of (up to) one hundred data values. If Ts were increased to 0.4 microsecond (i.e., 2.5 MHz), only 10 terms would be needed, and there would be interference with only the 10 nearest data values. If Ts were larger than 4 microseconds (i.e., 0.25 MHz), only one term would be needed in the discrete-time impulse response. In this case, adjacent sampled symbols would not interfere. Such finite-duration impulse response models can also be used to represent the frequency-selective dynamics that occur in the wired local end-loop in telephony, and other (approximately) linear, finite-delay-spread channels.
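The tap-count arithmetic in this example reduces to dividing the delay spread by the sampling interval; a minimal sketch (durations in nanoseconds to keep the arithmetic exact):

```python
import math

def taps_needed(delay_spread_ns, Ts_ns):
    """FIR taps needed so the model spans the full delay spread."""
    return max(1, math.ceil(delay_spread_ns / Ts_ns))

wideband = taps_needed(4000, 40)      # Ts = 40 ns (25 MHz): 100 taps
medium = taps_needed(4000, 400)       # Ts = 0.4 us (2.5 MHz): 10 taps
narrow = taps_needed(4000, 4000)      # Ts >= delay spread: a single tap
```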

The design objective of the equalizer is to undo the effects of the channel and to remove the interference. Conceptually, the equalizer attempts to build a system that is a delayed inverse of the channel, removing the intersymbol interference while simultaneously rejecting additive interferers uncorrelated with the source. If the interference η(kTs) is unstructured (for instance, white noise) then there is little that a linear equalizer can do to remove it. But



when the interference is highly structured (such as narrowband interference from another
user), then the linear filter can often notch out the offending frequencies.

As shown earlier, the optimal sampling times found by the clock recovery algorithms depend on the ISI in the channel. Consequently, the digital model formed by sampling an analog transmission path depends on when the samples are taken within each period Ts. To see how this can happen in a simple case, consider a two-path transmission channel

δ(t) + 0.6 δ(t - ε),

where ε is some fraction of Ts. For each transmitted symbol, the received signal will contain two copies of the pulse shape p(t), the first undelayed and the second delayed by ε and attenuated by a factor of 0.6. Thus, the receiver sees

c(t) = p(t) + 0.6 p(t - ε).

This is shown for ε = 0.7Ts. The clock recovery algorithms cannot separate the individual copies of the pulse shapes. Rather, they react to the complete received shape, which is their sum. The power maximization will locate the sampling times at the peak of this curve, and the lattice of sampling times will be different from what would be expected without ISI. The effective (digital) channel model is thus a sampled version of c(t), depicted by the small circles that occur at Ts-spaced intervals.

The optimum sampling times (as found by the energy maximization algorithm) differ when there is
ISI in the transmission path, and change the effective digital model of the channel.
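The two-path example can be reproduced numerically. Assuming an idealized sinc (Nyquist) pulse for p(t), which is not specified in the text, sampling c(t) at Ts-spaced instants shows how a single transmitted pulse becomes several nonzero channel taps:

```python
import numpy as np

Ts = 1.0                                  # normalized symbol period
eps = 0.7 * Ts                            # echo delay from the example above

def p(t):
    return np.sinc(t / Ts)                # assumed idealized Nyquist pulse

def c(t):
    return p(t) + 0.6 * p(t - eps)        # received shape: direct path + echo

# Sampling c(t) at Ts-spaced instants gives the effective digital channel.
k = np.arange(-2, 5)
taps = c(k * Ts)                          # several nonzero taps => ISI
```

With no echo the sampled sinc would give a single nonzero tap; the delayed copy smears energy into the neighboring symbol instants.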



In general, an accurate digital model for a channel depends on many things: the
underlying analog channel, the pulse shaping used, and the timing of the sampling
process. At first glance, this seems like it might make designing an equalizer for such a
channel almost impossible. But there is good news. No matter what timing instants are chosen, no matter what pulse shape is used, and no matter what the underlying analog channel may be (as long as it is linear), there is an FIR linear representation of the form above that closely models its behaviour. The details may change, but it is always a sampling of a smooth curve (like c(t)) that defines the digital model of the channel. As long as the digital model of this channel does not have deep nulls (i.e., a frequency response that practically zeroes out some important band of frequencies), there is a good chance that the equalizer can undo the effects of the channel.

ALGORITHMS FOR ADAPTIVE EQUALIZATION


An adaptive equalizer is an equalizer that automatically adapts to time-varying properties of
the communication channel.[1] It is frequently used with coherent modulations such as phase shift
keying, mitigating the effects of multipath propagation and Doppler spreading.
Many adaptation strategies exist. They include:

LMS (least mean squares): Note that the receiver does not have access to the transmitted signal when it is not in training mode; if the probability that the equalizer makes a mistake is sufficiently small, the symbol decisions made by the equalizer may be substituted for the training symbols.
RLS (recursive least squares)

A well-known example is the decision feedback equalizer, a filter that uses feedback of detected symbols in addition to conventional equalization of future symbols.[5] Some systems use predefined training sequences to provide reference points for the adaptation process.
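A minimal trained LMS sketch, with an assumed channel, step size, and BPSK training sequence (none of these values come from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

s = rng.choice([-1.0, 1.0], size=2000)        # BPSK training symbols
r = np.convolve(s, [1.0, 0.3, 0.1])[:len(s)]  # assumed mild multipath channel
r += 0.01 * rng.standard_normal(len(s))       # a little additive noise

num_taps, delay, mu = 6, 2, 0.01              # equalizer length, delay, step
f = np.zeros(num_taps)                        # tap weights start at zero

for k in range(num_taps, len(r)):
    x = r[k - np.arange(num_taps)]            # current window of received data
    e = s[k - delay] - f @ x                  # error vs. delayed training symbol
    f += mu * e * x                           # LMS coefficient update

# After convergence, decisions should match the delayed training symbols.
late_errors = sum(
    int(np.sign(f @ r[k - np.arange(num_taps)]) != s[k - delay])
    for k in range(len(r) - 200, len(r))
)
```

In decision-directed mode the same loop runs with np.sign(f @ x) substituted for the training symbol s[k - delay].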
ORIGINAL BASEBAND MESSAGE:
Baseband refers to analog or digital data before it is modulated onto a carrier or intermixed with other data. For example, the output of an analog microphone is a baseband signal. When an FM station's carrier frequency is stripped away in the radio (demodulated), the original audio signal that you hear is the baseband signal.
MODULATOR:
A radio frequency modulator (or RF modulator) takes a baseband input signal and then outputs a
radio frequency modulated signal. This is often a preliminary step in signal transmission, either by
antenna or to another device such as a television.
A demodulator is a circuit used in amplitude modulation and frequency modulation receivers to separate the information that was modulated onto the carrier from the carrier itself. It is the counterpart of the modulator: a modulator puts the information onto a carrier wave at the transmitter end, and a demodulator extracts it so that it can be processed and used at the receiver end.



TRANSMITTER:
A transmitter is an electronic device used in telecommunications to produce radio waves in order to
transmit or send data with the aid of an antenna. The transmitter is able to generate a radio frequency
alternating current that is then applied to the antenna, which, in turn, radiates this as radio waves.
There are many types of transmitters depending on the standard being used and the type of device;
for example, many modern devices that have communication capabilities have transmitters such as
Wi-Fi, Bluetooth, NFC and cellular.
A transmitter is also known as a radio transmitter.
Transmitters are devices that are used to send out data as radio waves in a specific band of the
electromagnetic spectrum in order to fulfill a specific communication need, be it for voice or for
general data. In order to do this, a transmitter takes energy from a power source and transforms this
into a radio frequency alternating current that changes direction millions to billions of times per
second depending on the band that the transmitter needs to send in. When this rapidly changing
energy is directed through a conductor, in this case an antenna, electromagnetic or radio waves are
radiated outwards to be received by another antenna that is connected to a receiver that reverses the
process to come up with the actual message or data.
A transmitter is composed of:

Power supply: The energy source used to power the device and create the energy for broadcasting
Electronic oscillator: Generates a wave called the carrier wave, onto which the data is imposed and carried through the air
Modulator: Adds the actual data to the carrier wave by varying some aspect of the carrier wave



RF amplifier: Increases the power of the signal in order to extend the range the waves can reach
Antenna tuner or impedance matching circuit: Matches the impedance of the transmitter to that of the antenna so that the transfer of power to the antenna is efficient, and prevents a condition called standing waves, in which power is reflected from the antenna back to the transmitter, wasting power or damaging the transmitter

RADIO CHANNEL:
An assigned band of frequencies sufficient for radio communication. The bandwidth of a radio channel depends upon the type of transmission and the frequency tolerance. A channel is usually assigned for a specified radio service to be provided by a specified transmitter.
Megahertz means "millions of cycles per second," so "91.5 megahertz" means that the transmitter at the radio station is oscillating at a frequency of 91,500,000 cycles per second. All FM radio stations transmit in a band of frequencies between 88 megahertz and 108 megahertz.
RF RECEIVER FRONT END:

In a radio receiver circuit, the RF front end is a generic term for all the circuitry between
the antenna up to and including the mixer stage.[1] It consists of all the components in the receiver
that process the signal at the original incoming radio frequency (RF), before it is converted to a
lower intermediate frequency (IF). In microwave and satellite receivers it is often called the low-
noise block (LNB) or low-noise downconverter (LND) and is often located at the antenna, so that the
signal from the antenna can be transferred to the rest of the receiver at the more easily handled
intermediate frequency.
For most superheterodyne architectures, the RF front end consists of:[2]

A band-pass filter (BPF) to reduce image response. This removes any signals at the image
frequency, which would otherwise interfere with the desired signal. It also prevents strong out-
of-band signals from saturating the input stages.
An RF amplifier, often called the low-noise amplifier (LNA). Its primary responsibility is to
increase the sensitivity of the receiver by amplifying weak signals without contaminating them
with noise, so that they can stay above the noise level in succeeding stages. It must have a very
low noise figure (NF). The RF amplifier may not be needed and is often omitted (or switched off)
for frequencies below 30 MHz, where the signal-to-noise ratio is defined by atmospheric and
man-made noise.
A local oscillator (LO) which generates a radio frequency signal at an offset from the incoming
signal, which is mixed with the incoming signal.
The mixer, which mixes the incoming signal with the signal from the local oscillator to convert
the signal to the intermediate frequency (IF).
In digital receivers, particularly those in wireless devices such as cell phones and Wi-Fi receivers, the intermediate frequency is digitized (sampled and converted to binary digital form), and the rest of the processing (IF filtering and demodulation) is done by digital filters (digital signal processing, DSP), as these are smaller, use less power and can have more selectivity.[3] In this type of receiver the RF
front end is defined as everything from the antenna to the analog to digital converter (ADC) which
digitizes the signal.[3] The general trend is to do as much of the signal processing in digital form as
possible, and some receivers digitize the RF signal directly, without down-conversion to an IF, so
here the front end is merely an RF filter.



IF STAGE:
In communications and electronic engineering, an intermediate frequency (IF) is a frequency to
which a carrier wave is shifted as an intermediate step in transmission or reception.[1] The
intermediate frequency is created by mixing the carrier signal with an oscillator signal in a process called heterodyning, resulting in a signal at the difference or beat frequency. Intermediate frequencies are used in superheterodyne radio receivers, in which an incoming signal is shifted to an IF for amplification before final detection is done.
Conversion to an intermediate frequency is useful for several reasons. When several stages of filters
are used, they can all be set to a fixed frequency, which makes them easier to build and to tune.
Lower frequency transistors generally have higher gains so fewer stages are required. It's easier to
make sharply selective filters at lower fixed frequencies.
There may be several such stages of intermediate frequency in a superheterodyne receiver; two or three stages are called double (alternatively, dual) or triple conversion, respectively.
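The heterodyning arithmetic reduces to simple sums and differences; a small sketch using the common (assumed) 10.7 MHz FM broadcast IF:

```python
# Heterodyne arithmetic: mixing the RF signal with the local oscillator
# produces sum and difference frequencies; the IF filter keeps the
# difference ("beat") frequency.
def intermediate_freq(f_rf_hz, f_lo_hz):
    return abs(f_rf_hz - f_lo_hz)

def image_freq(f_rf_hz, f_lo_hz):
    # The image is the other RF frequency that mixes to the same IF.
    return 2 * f_lo_hz - f_rf_hz

f_rf, f_lo = 98.1e6, 108.8e6          # FM station and an LO 10.7 MHz above it
f_if = intermediate_freq(f_rf, f_lo)  # ~10.7 MHz, the common FM broadcast IF
f_img = image_freq(f_rf, f_lo)        # ~119.5 MHz, removed by the RF filter
```

The image frequency computed here is exactly what the front-end band-pass filter described earlier must reject.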
DETECTOR MATCHED FILTER:
In signal processing, a matched filter is obtained by correlating a known signal, or template, with an
unknown signal to detect the presence of the template in the unknown signal.[1][2] This is equivalent
to convolving the unknown signal with a conjugated time-reversed version of the template. The
matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the
presence of additive stochastic noise.
Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected
signal is examined for common elements of the out-going signal. Pulse compression is an example of
matched filtering. It is so called because the impulse response is matched to the input pulse signal. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray images. Matched filtering is a demodulation technique with LTI (linear time-invariant) filters to
maximize SNR.[3] It was originally also known as a North filter.
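A minimal matched-filter detection sketch, with an assumed template and noise level:

```python
import numpy as np

rng = np.random.default_rng(2)

template = np.array([1.0, 1.0, -1.0, 1.0])   # known pulse (the "template")
x = 0.3 * rng.standard_normal(200)           # noisy observation
x[90:94] += template                         # pulse buried at offset 90

# Matched filtering: convolve with the time-reversed (conjugated) template.
h = template[::-1]                           # real template, so no conjugate
y = np.convolve(x, h, mode="valid")          # equals the cross-correlation
detected = int(np.argmax(y))                 # peak marks the pulse position
```

Convolving with the time-reversed template is the same operation as sliding a correlator over the signal, which is why the peak lands where the pulse was hidden.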
EQUIVALENT NOISE:
In telecommunication, an equivalent noise resistance is a quantitative representation, in resistance units, of the spectral density of a noise-voltage generator, given by

R_n = W_n / (4 k T_0),

where W_n is the spectral density, k is Boltzmann's constant, and T_0 is the standard noise temperature (290 K), so that k T_0 = 4.00 × 10⁻²¹ W·s.

Note: The equivalent noise resistance, in terms of the mean-square noise-generator voltage e² within a frequency increment Δf, is given by

R_n = e² / (4 k T_0 Δf).
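A small numerical check of the relation R_n = e² / (4 k T_0 Δf); the resistor value and bandwidth below are illustrative:

```python
# Equivalent noise resistance computed from the mean-square noise voltage.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T0 = 290.0                  # standard noise temperature, K

def noise_resistance(e2_mean_square, delta_f_hz):
    """R_n = e^2 / (4 k T0 delta_f)."""
    return e2_mean_square / (4 * k_B * T0 * delta_f_hz)

# Sanity check: a real resistor R generates e^2 = 4 k T0 R delta_f of
# thermal noise, so its equivalent noise resistance comes back as R itself.
R, df = 50.0, 1e6
e2 = 4 * k_B * T0 * R * df
Rn = noise_resistance(e2, df)
```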
EQUALIZER:

Equalizer (communications), a device or circuit for correction of frequency dependent distortion


in telecommunications
DECISION MAKER:
The decision maker is the block that examines the equalizer output and decides which symbol was sent, typically by choosing the nearest point in the symbol alphabet.
RECONSTRUCTED MESSAGE DATA:
The recovered data at the output of the system which, ideally, is identical to the transmitted message data.
EQUALIZER PREDICTION ERROR:
The difference between the equalizer output and the desired (training or decided) symbol. This error is fed back to adjust the equalizer coefficients, and is called the equalizer prediction error.



DIVERSITY TECHNIQUES

In telecommunications, a diversity scheme refers to a method for improving the reliability of a


message signal by using two or more communication channels with different characteristics.
Diversity is mainly used in radio communication and is a common technique for
combating fading and co-channel interference and avoiding error bursts. It is based on the fact that
individual channels experience different levels of fading and interference. Multiple versions of the
same signal may be transmitted and/or received and combined in the receiver. Alternatively, a
redundant forward error correction code may be added and different parts of the message transmitted
over different channels. Diversity techniques may exploit multipath propagation, resulting in a diversity gain, often measured in decibels.
The following classes of diversity schemes can be identified:

Time diversity: Multiple versions of the same signal are transmitted at different time instants.
Alternatively, a redundant forward error correction code is added and the message is spread in
time by means of bit-interleaving before it is transmitted. Thus, error bursts are avoided, which
simplifies the error correction.
Frequency diversity: The signal is transmitted using several frequency channels or spread over
a wide spectrum that is affected by frequency-selective fading. Middle-late 20th
century microwave radio relay lines often used several regular wideband radio channels, and one
protection channel for automatic use by any faded channel. Later examples include:
OFDM modulation in combination with subcarrier interleaving and forward error correction
Spread spectrum, for example frequency hopping or DS-CDMA.
Space diversity: The signal is transmitted over several different propagation paths. In the case of
wired transmission, this can be achieved by transmitting via multiple wires. In the case of
wireless transmission, it can be achieved by antenna diversity using multiple transmitter antennas
(transmit diversity) and/or multiple receiving antennas (reception diversity). In the latter case,
a diversity combining technique is applied before further signal processing takes place. If the
antennas are far apart, for example at different cellular base station sites or WLAN access points,
this is called macro diversity or site diversity. If the antennas are at a distance in the order of
one wavelength, this is called micro diversity. A special case is phased antenna arrays, which can also be used for beamforming, MIMO channels and space-time coding (STC).
Polarization diversity: Multiple versions of a signal are transmitted and received via antennas
with different polarization. A diversity combining technique is applied on the receiver side.
Multiuser diversity: Multiuser diversity is obtained by opportunistic user scheduling at either
the transmitter or the receiver. Opportunistic user scheduling is as follows: at any given time, the
transmitter selects the best user among candidate receivers according to the qualities of each
channel between the transmitter and each receiver. A receiver must feed back the channel quality information to the transmitter using limited levels of resolution, in order for the transmitter to implement multiuser diversity.
Cooperative diversity: Achieves antenna diversity gain by using the cooperation of distributed
antennas belonging to each node.



RAKE RECEIVER
A rake receiver is a radio receiver designed to counter the effects of multipath fading. It does this by
using several "sub-receivers" called fingers, that is, several correlators each assigned to a
different multipath component. Each finger independently decodes a single multipath component; at
a later stage the contributions of all the fingers are combined in order to make the most use of the
different transmission characteristics of each transmission path. This could very well result in
higher signal-to-noise ratio (or Eb/N0) in a multipath environment than in a "clean" environment.
The multipath channel through which a radio wave transmits can be viewed as transmitting the
original (line of sight) wave pulse through a number of multipath components. Multipath
components are delayed copies of the original transmitted wave traveling through a different echo
path, each with a different magnitude and time-of-arrival at the receiver. Since each component
contains the original information, if the magnitude and time-of-arrival (phase) of each component is
computed at the receiver (through a process called channel estimation), then all the components can
be added coherently to improve the information reliability.

Rake receiver

If, in a mobile radio channel, reflected waves arrive with small relative time delays, self-interference occurs. Direct Sequence (DS) Spread Spectrum is often claimed to have particular properties that make it less vulnerable to multipath reception. In particular, the rake receiver architecture allows an optimal combining of energy received over paths with different delays. It avoids wave cancellation (fades) if delayed paths arrive with phase differences, and appropriately weighs signals coming in with different signal-to-noise ratios.

The rake receiver consists of multiple correlators, in which the receive signal is multiplied by time-
shifted versions of a locally generated code sequence. The intention is to separate signals such that
each finger only sees signals coming in over a single (resolvable) path. The spreading code is chosen
to have a very small autocorrelation value for any nonzero time offset. This avoids crosstalk between
fingers. In practice, the situation is less ideal. It is not the full periodic autocorrelation that
determines the crosstalk between signals in different fingers, but rather two partial correlations, with
contributions from two consecutive bits or symbols. It has been attempted to find sequences that
have satisfactory partial correlation values, but the crosstalk due to partial (non-periodic) correlations
remains substantially more difficult to reduce than the effects of periodic correlations.
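A single correlator branch (finger) can be sketched as follows; the code length, echo delay, and path amplitudes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
code = rng.choice([-1.0, 1.0], size=256)     # one symbol's spreading code

# Received chips: direct path plus an echo delayed by 3 chips (amplitude 0.6).
r = np.zeros(256 + 3)
r[:256] += code                              # path 0: delay 0, amplitude 1.0
r[3:] += 0.6 * code                          # path 1: delay 3 chips

def finger(received, code, offset):
    """One rake finger: correlate with a time-shifted copy of the code."""
    return received[offset:offset + len(code)] @ code / len(code)

z0 = finger(r, code, 0)                      # locks onto the direct path
z1 = finger(r, code, 3)                      # locks onto the echo
```

Each finger recovers roughly the amplitude of "its" path; the residual deviation is the finger crosstalk due to the code's nonzero partial correlations, as described above.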

The rake receiver is designed to optimally detect a DS-CDMA signal transmitted over
a dispersive multipath channel. It is an extension of the concept of the matched filter.

Figure: Matched filter receiver for AWGN channel.

In the matched filter receiver, the signal is correlated with a locally generated copy of the signal
waveform. If, however, the signal is distorted by the channel, the receiver should correlate the incoming signal with a copy of the expected received signal, rather than with a copy of the transmitted

PINKU MAURYA B.TECH (E&C.E) ~ 11 ~


waveform. Thus the receiver should estimate the delay profile of the channel, and adapt its locally
generated copy according to this estimate.

In a multipath channel, delayed reflections interfere with the direct signal. However, a DS-CDMA
signal suffering from multipath dispersion can be detected by a rake receiver. This receiver optimally
combines signals received over multiple paths.

Figure: Rake receiver with 5 fingers

Like a garden rake, the rake receiver gathers the energy received over the various delayed
propagation paths. According to the maximum ratio combining principle, the SNR at the output is the
sum of the SNRs in the individual branches, provided that

we assume that only AWGN is present (no interference), and
codes with a time offset are truly orthogonal.

Different reflected waves arrive with different delays. A rake receiver can detect these different
signals separately. These signals are then combined using the diversity technique called maximum
ratio combining.
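As a minimal numeric sketch of maximum ratio combining under those assumptions (the branch SNR values below are hypothetical finger SNRs, not measured figures):

```python
import math

# Hypothetical SNRs seen by three rake fingers, in dB:
branch_snr_db = [3.0, 0.0, -4.0]
branch_snr_lin = [10 ** (s / 10) for s in branch_snr_db]

# Under the stated assumptions, the MRC output SNR is the sum of the
# branch SNRs on a linear scale:
combined_lin = sum(branch_snr_lin)
combined_db = 10 * math.log10(combined_lin)
print(round(combined_db, 2))     # better than any single branch
```

Even the weak −4 dB branch adds usefully to the total, which is why gathering all resolvable paths pays off.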

Signals arriving with the same excess propagation delay as the time offset in the receiver are
retrieved accurately, because the code autocorrelation has its peak at zero offset.

This reception concept is repeated for every delayed path that is received with relevant power.
Considering a single correlator branch, multipath self-interference from other paths is attenuated
here, because the codes can be chosen to have a very small correlation value for any nonzero
time offset.

Rake Performance
A spread spectrum receiver with rake outperforms a simple receiver with a single correlator.

Mathematical definition
A rake receiver utilizes multiple correlators to separately detect the M strongest multipath components.
The output of each correlator may be quantized using 1, 2, 3 or 4 bits.
The outputs of the correlators are weighted to provide a better estimate of the transmitted signal than
is provided by any single component. Demodulation and bit decisions are then based on the weighted
outputs of the M correlators.

Use
Rake receivers are common in a wide variety of CDMA and W-CDMA radio devices such as mobile
phones and wireless LAN equipment.
Rake receivers are also used in radio astronomy. The CSIRO Parkes radio telescope and the Jodrell Bank
telescope have 1-bit filter bank recording formats that can be processed, in real time or offline,
by software-based rake receivers.
QUANTISATION TECHNIQUES
Quantization, in mathematics and digital signal processing, is the process of mapping input values
from a large set (often a continuous set) to output values in a (countable) smaller
set. Rounding and truncation are typical examples of quantization processes. Quantization is
involved to some degree in nearly all digital signal processing, as the process of representing a signal
in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy
compression algorithms.
The difference between an input value and its quantized value (such as round-off error) is referred to
as quantization error. A device or algorithmic function that performs quantization is called
a quantizer. An analog-to-digital converter is an example of a quantizer.
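A uniform quantizer and its round-off error can be sketched in a few lines (the step size and input value here are arbitrary):

```python
def quantize(x, step):
    """Uniform quantizer: map x to the nearest multiple of the step size."""
    return step * round(x / step)

x = 0.37
q = quantize(x, 0.1)          # nearest quantization level with step 0.1
err = x - q                   # round-off (quantization) error
# |err| is bounded by step/2 = 0.05 for a uniform quantizer
print(q, err)
```

Many different inputs (here, everything in roughly [0.35, 0.45)) map to the same output 0.4, which is exactly the many-to-few, irreversible mapping described above.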

Mathematical properties of quantization


Because quantization is a many-to-few mapping, it is an inherently non-linear and irreversible
process (i.e., because the same output value is shared by multiple input values, it is impossible, in
general, to recover the exact input value when given only the output value).
The set of possible input values may be infinitely large, and may possibly be continuous and
therefore uncountable (such as the set of all real numbers, or all real numbers within some limited
range). The set of possible output values may be finite or countably infinite. The input and output
sets involved in quantization can be defined in a rather general way. For example, vector
quantization is the application of quantization to multi-dimensional (vector-valued) input data.[1]


Analog-to-digital converter
An analog-to-digital converter (ADC) can be modeled as two processes: sampling and quantization.
Sampling converts a time-varying voltage signal into a discrete-time signal, a sequence of real
numbers. Quantization replaces each real number with an approximation from a finite set of discrete
values. Most commonly, these discrete values are represented as fixed-point words. Though any
number of quantization levels is possible, common word-lengths are 8-bit (256 levels), 16-
bit (65,536 levels) and 24-bit (16.8 million levels). Quantizing a sequence of numbers produces a
sequence of quantization errors which is sometimes modeled as an additive random signal
called quantization noise because of its stochastic behavior. The more levels a quantizer uses, the
lower is its quantization noise power.
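The "more levels, less noise" relationship can be made concrete: each extra bit halves the step size, which quarters the noise power (step²/12), a gain of about 6 dB per bit; for a full-scale sine wave the standard rule of thumb is sketched below:

```python
def sqnr_db(bits):
    """Theoretical signal-to-quantization-noise ratio of a full-scale sine
    through an ideal uniform quantizer: about 6.02*bits + 1.76 dB.
    Each extra bit halves the step, quartering the noise power (step**2/12),
    i.e. roughly a 6 dB improvement per bit."""
    return 6.02 * bits + 1.76

for b in (8, 16, 24):
    print(b, "bits ->", round(sqnr_db(b), 2), "dB")
```

This is why the common word lengths above correspond to roughly 50 dB, 98 dB and 146 dB of theoretical dynamic range.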

Rate–distortion optimization



Rate–distortion optimized quantization is encountered in source coding for "lossy" data compression
algorithms, where the purpose is to manage distortion within the limits of the bit rate supported by a
communication channel or storage medium. In this setting, the amount of introduced
distortion may be managed carefully by sophisticated techniques, and introducing some significant
amount of distortion may be unavoidable. A quantizer designed for this purpose may be quite
different and more elaborate in design than an ordinary rounding operation. It is in this domain that
substantial rate–distortion theory analysis is likely to be applied. However, the same concepts
actually apply in both use cases.
The analysis of quantization involves studying the amount of data (typically measured in digits or
bits or bit rate) that is used to represent the output of the quantizer, and studying the loss of precision
that is introduced by the quantization process (which is referred to as the distortion). The general
field of such study of rate and distortion is known as rate–distortion theory.

VOCODERS
A vocoder is an audio processor that captures the characteristic elements of an audio signal and
then uses this characteristic signal to affect other audio signals. The technology behind the vocoder
effect was initially used in attempts to synthesize speech. The effect called vocoding can be
recognized on records as a "talking synthesizer", made popular by artists such as Stevie Wonder. The
basic component extracted during the vocoder analysis is called the formant. The formant describes
the fundamental frequency of a sound and its associated noise components.
The vocoder works like this: The input signal (your voice saying "Hello, my name is Fred") is fed
into the vocoder's input. This audio signal is sent through a series of parallel signal filters that create
a signature of the input signal, based on the frequency content and level of the frequency
components. The signal to be processed (a synthesized string sound, for example) is fed into another
input on the vocoder. The filter signature created above during the analysis of your voice is used to
filter the synthesized sound. The audio output of the vocoder contains the synthesized sound
modulated by the filter created by your voice. You hear a synthesized sound that pulses to the tempo
of your voice input with the tonal characteristics of your voice added to it.
A vocoder (/ˈvoʊkoʊdər/, a portmanteau of voice encoder) is a category of voice codec that analyzes
and synthesizes the human voice signal for audio data compression, multiplexing, voice encryption,
voice transformation, etc.
The earliest type of vocoder, the channel vocoder, was originally developed as a speech
coder for telecommunications applications in the 1930s, the idea being to code speech in order to



reduce bandwidth (i.e. audio data compression) for multiplexing transmission. In the channel
vocoder algorithm, among the two components of an analytic signal, considering only
the amplitude component and simply ignoring the phase component tends to result in an unclear
voice; on methods for rectifying this, see phase vocoder.
In the encoder, the input is passed through a multiband filter, then each band is passed through
an envelope follower, and the control signals from the envelope followers are transmitted to the
decoder. The decoder applies these (amplitude) control signals to corresponding filters for re-
synthesis. Since these control signals change only slowly compared to the original speech waveform,
the bandwidth required to transmit speech can be reduced. This allows more speech channels to share
a single communication channel, such as a radio channel or a submarine cable (i.e. multiplexing).
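The "slowly changing control signal" idea can be sketched for one band of the analysis filterbank. Below, a leaky integrator stands in for the envelope follower; the sample rate, modulation rate and smoothing constant are illustrative choices, not values from any particular vocoder:

```python
import numpy as np

fs = 8000                                    # assumed sample rate, Hz
t = np.arange(fs) / fs                       # one second of signal
# One "band": a 440 Hz tone whose amplitude varies slowly (3 Hz modulation):
band = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))

def envelope_follower(x, alpha=0.995):
    """Rectify, then smooth with a one-pole lowpass (leaky integrator).
    The output tracks the slowly varying amplitude, not the waveform."""
    env = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(np.abs(x)):
        acc = alpha * acc + (1 - alpha) * v
        env[i] = acc
    return env

env = envelope_follower(band)
# env changes far more slowly than band, so it can be sampled and sent
# at a much lower rate -- this is where the vocoder saves bandwidth.
```

Only the envelope, not the 440 Hz carrier, needs to reach the decoder, which re-imposes it on a locally generated excitation.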
By encrypting the control signals, voice transmission can be secured against interception. Its primary
use in this fashion is for secure radio communication. The advantage of this method of encryption is
that none of the original signal is sent, only envelopes of the bandpass filters. The receiving unit
needs to be set up in the same filter configuration to re-synthesize a version of the original signal
spectrum.
The vocoder has also been used extensively as an electronic musical instrument (see Uses in music).
The decoder portion of the vocoder, called a voder, can be used independently for speech synthesis.

Applications

Terminal equipment for Digital Mobile Radio (DMR) based systems.


Digital Trunking
DMR TDMA
Digital Voice Scrambling and Encryption
Digital WLL
Voice Storage and Playback Systems
Messaging Systems
VoIP Systems
Voice Pagers
Regenerative Digital Voice Repeaters
Cochlear Implants
Musical and other artistic effects

LINEAR PREDICTIVE CODERS

Linear predictive coding (LPC) is a tool used mostly in audio signal processing and speech
processing for representing the spectral envelope of a digital speech signal in compressed form,
using the information of a linear predictive model.[1] It is one of the most powerful speech analysis
techniques and one of the most useful methods for encoding good-quality speech at a low bit rate,
and it provides extremely accurate estimates of speech parameters.
Envelope Calculation

The LPC envelope serves a purpose similar to an FFT-derived spectrum, but it is calculated from a
number of formants, or poles, specified by the user.



The formants are estimated by removing their effects from the speech signal and estimating the
intensity and frequency of the remaining buzz. The removal process is called inverse
filtering, and the remaining signal is called the residue.
The speech signal is synthesized from the buzz parameters and the residue: the source is run
through the formant filter, resulting in speech.
The process is iterated several times a second, in "frames". A rate of 30 to 50 frames per
second yields intelligible speech.
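The predictor coefficients behind those poles can be computed with the autocorrelation method and the Levinson–Durbin recursion. The sketch below is a simplified, illustrative implementation (not a production analyzer), checked against a synthetic first-order autoregressive signal:

```python
import numpy as np

def lpc(x, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns coefficients [1, a1, ..., a_order] and the prediction error energy."""
    n = len(x)
    r = [float(np.dot(x[:n - k], x[k:])) for k in range(order + 1)]
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                       # reflection coefficient
        a_new = a[:]
        for j in range(1, i + 1):
            a_new[j] = a[j] + k * a[i - j]
        a = a_new
        err *= 1 - k * k
    return a, err

# Synthetic test signal: AR(1) process x[n] = 0.9*x[n-1] + e[n]
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = np.empty_like(e)
for i in range(len(e)):
    x[i] = e[i] if i == 0 else 0.9 * x[i - 1] + e[i]

a, err = lpc(x, 1)
print(a[1])   # close to -0.9: predictor x_hat[n] = -a[1]*x[n-1] = 0.9*x[n-1]
```

Inverse filtering the signal with these coefficients yields the residue described above; the error energy `err` measures how much of the signal the predictor could not explain.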
Limits

The LPC performance is limited by the method itself and by the local characteristics of the signal.
The harmonic spectrum sub-samples the spectral envelope, which produces spectral
aliasing. These problems are especially manifest in voiced and high-pitched signals,
affecting the first harmonics of the signal, which matter most for perceived speech quality and
formant dynamics.
A correct all-pole model for the signal spectrum can hardly be obtained.
The desired spectral information, the spectral envelope, is not always well represented: the fit can
stay too close to the original spectrum. LPC follows the curve of the spectrum down to the residual
noise level in the gap between two harmonics, or between partials spaced too far apart. Since the
goal is to fit the spectral envelope as closely as possible, not the original spectrum, this is
undesirable: the spectral envelope should be a smooth function passing through the prominent peaks
of the spectrum, not dipping into the "valleys" between the harmonic peaks.

Comparing several envelope estimation methods

Sample Quality

LPC usually requires a very good speech sample to work with, which is not always the case with
recordings made using omnidirectional microphones.



Applications
LPC is generally used for speech analysis and resynthesis. It is used as a form of voice compression
by phone companies, for example in the GSM standard. It is also used for secure wireless, where
voice must be digitized, encrypted and sent over a narrow voice channel; an early example of this is
the US government's Navajo I.
LPC synthesis can be used to construct vocoders where musical instruments are used as the excitation
signal for the time-varying filter estimated from a singer's speech. This is somewhat popular
in electronic music. Paul Lansky made the well-known computer music
piece notjustmoreidlechatter using linear predictive coding.[1] A 10th-order LPC was used in the
popular 1980s Speak & Spell educational toy.
LPC predictors are used in Shorten, MPEG-4 ALS, FLAC, SILK audio codec, and other lossless
audio codecs.
LPC is receiving some attention as a tool for use in the tonal analysis of violins and other stringed
musical instruments

MULTIPLE ACCESS TECHNIQUES FOR WIRELESS COMMUNICATION
Multiple access schemes are used to allow many mobile users to share simultaneously a finite
amount of radio spectrum.

Multiple Access Techniques


In wireless communication systems, it is often desirable to allow the subscriber to send information
to the base station while simultaneously receiving information from the base station.

A cellular system divides any given area into cells where a mobile unit in each cell communicates
with a base station. The main aim in the cellular system design is to be able to increase the
capacity of the channel, i.e., to handle as many calls as possible in a given bandwidth with a
sufficient level of quality of service.

There are several different ways to allow access to the channel. These mainly include the following:

Frequency division multiple-access (FDMA)


Time division multiple-access (TDMA)
Code division multiple-access (CDMA)
Space division multiple access (SDMA)
Depending on how the available bandwidth is allocated to the users, these techniques can be
classified as narrowband and wideband systems.



Narrowband Systems
Systems operating with channels substantially narrower than the coherence bandwidth are called
narrowband systems. Narrowband TDMA allows users to share the same channel but allocates a
unique time slot to each user on the channel, thus separating a small number of users in time on a
single channel.

Wideband Systems
In wideband systems, the transmission bandwidth of a single channel is much larger than the
coherence bandwidth of the channel. Thus, multipath fading doesn't greatly affect the received
signal within a wideband channel, and frequency selective fades occur only in a small fraction of the
signal bandwidth.

Frequency Division Multiple Access (FDMA)


FDMA is the basic technology for advanced mobile phone services. The features of FDMA are as
follows.

FDMA allots a different sub-band of frequency to each user to access the network.
If an FDMA channel is not in use, it is left idle rather than being allotted to another user.
FDMA is implemented in narrowband systems and is less complex than TDMA.
Tight filtering is done to reduce adjacent channel interference.
The base station (BS) and mobile station (MS) transmit and receive simultaneously and
continuously in FDMA.

Time Division Multiple Access (TDMA)


Where continuous transmission is not required, TDMA is used instead of FDMA.
The features of TDMA include the following.

TDMA shares a single carrier frequency among several users, where each user makes use of
non-overlapping time slots.
Data transmission in TDMA is not continuous, but occurs in bursts. Hence the hand-off process
is simpler.
TDMA uses different time slots for transmission and reception, thus duplexers are not
required.
TDMA has the advantage that it is possible to allocate different numbers of time slots per frame
to different users.
Bandwidth can be supplied on demand to different users by concatenating or reassigning time
slots based on priority.
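The slot arithmetic behind on-demand allocation can be sketched with hypothetical numbers (they do not correspond to any particular standard):

```python
# Hypothetical TDMA frame (illustrative numbers, not from any standard):
frame_ms = 40.0            # frame duration in milliseconds
slots_per_frame = 8        # users sharing one carrier
channel_rate_kbps = 270.0  # gross channel bit rate

slot_ms = frame_ms / slots_per_frame           # duration of one time slot
per_slot_kbps = channel_rate_kbps / slots_per_frame

# A user holding one slot per frame gets 1/8 of the gross rate; assigning
# that user a second slot doubles its throughput.
print(slot_ms, per_slot_kbps)   # 5.0 ms slots, 33.75 kbit/s per slot
```

Reassigning slots is therefore just a change in how many of the `slots_per_frame` shares a user receives, with no retuning of the carrier.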



Code Division Multiple Access (CDMA)
Code division multiple access technique is an example of multiple access where several transmitters
use a single channel to send information simultaneously. Its features are as follows.

In CDMA every user uses the full available spectrum instead of being allotted a separate
frequency.
CDMA is well suited for both voice and data communications.
While multiple codes occupy the same channel in CDMA, only users sharing the same code can
communicate with each other.
CDMA offers more air-interface capacity than TDMA.
Hand-off between base stations is handled very well by CDMA.

Space Division Multiple Access (SDMA)


Space division multiple access, or spatial division multiple access, is a technique based on MIMO
(multiple-input multiple-output) architecture and is used mostly in wireless and satellite
communication. It has the following features.

All users can communicate at the same time using the same channel.
SDMA is, ideally, free from interference.
A single satellite can communicate with multiple receivers on the same frequency.
Directional spot-beam antennas are used, and hence the base station in SDMA can track a
moving user.
It controls the radiated energy for each user in space.

Spread Spectrum Multiple Access


Spread spectrum multiple access (SSMA) uses signals which have a transmission bandwidth whose
magnitude is greater than the minimum required RF bandwidth.

There are two main types of spread spectrum multiple access techniques:

Frequency hopped spread spectrum (FHSS)


Direct sequence spread spectrum (DSSS)
Frequency Hopped Spread Spectrum (FHSS)
This is a digital multiple access system in which the carrier frequencies of the individual users are
varied in a pseudo-random fashion within a wideband channel. The digital data is broken into
uniform-sized bursts, which are then transmitted on different carrier frequencies.

Direct Sequence Spread Spectrum (DSSS)



This is the most commonly used technology for CDMA. In DS-SS, the message signal is multiplied
by a Pseudo Random Noise code. Each user is given his own code word, which is orthogonal to the
codes of other users, and in order to detect the user, the receiver must know the code word used by
the transmitter.
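A minimal sketch of DS spreading with orthogonal code words (two hypothetical users with length-4 Walsh codes; the data bits are made up for illustration):

```python
import numpy as np

# Two users spread with orthogonal length-4 Walsh code words:
w1 = np.array([1.0, 1.0, 1.0, 1.0])
w2 = np.array([1.0, -1.0, 1.0, -1.0])
assert np.dot(w1, w2) == 0.0                 # orthogonality between users

bits1 = np.array([1.0, -1.0, 1.0])           # user 1 data (antipodal bits)
bits2 = np.array([-1.0, -1.0, 1.0])          # user 2 data

# Spread each bit over one code word and sum both users on the channel:
tx = np.concatenate([b1 * w1 + b2 * w2 for b1, b2 in zip(bits1, bits2)])

def despread(rx, code):
    """Correlate each code-word period of rx with the user's code word."""
    n = len(code)
    return np.array([np.dot(rx[i:i + n], code) / n
                     for i in range(0, len(rx), n)])

print(despread(tx, w1))   # recovers user 1's bits: [ 1. -1.  1.]
print(despread(tx, w2))   # recovers user 2's bits: [-1. -1.  1.]
```

Because the code words are orthogonal, each correlator rejects the other user entirely; with real codes and asynchronous users the rejection is only partial, which is why code design matters.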

Combinations of these techniques, known as hybrid spread spectrum, are also used. Time
hopping is another type, though it is rarely mentioned.

Since many users can share the same spread spectrum bandwidth without interfering with one
another, spread spectrum systems become bandwidth efficient in a multiple user environment.

