
Volumetric flow rate

In physics and engineering, in particular fluid dynamics and hydrometry, the volumetric flow
rate (also known as volume flow rate, rate of fluid flow or volume velocity) is the volume of
fluid which passes per unit time; usually represented by the symbol Q. The SI unit is m³/s (cubic
metres per second). Another unit used is sccm (standard cubic centimetres per minute).
In US Customary Units and British Imperial Units, volumetric flow rate is often expressed as ft³/s
(cubic feet per second) or gallons per minute (either U.S. or imperial definitions).
Volumetric flow rate should not be confused with volumetric flux, as defined by Darcy's law and
represented by the symbol q, with units of m³/(m²·s), that is, m·s⁻¹. The integration of a flux over
an area gives the volumetric flow rate.
Volumetric flow rate is defined by the limit:[1]

Q = lim (Δt→0) ΔV/Δt = dV/dt

i.e., the flow of volume of fluid V through a surface per unit time t.
Since this is only the time derivative of volume, a scalar quantity, the volumetric flow rate is
also a scalar quantity. The change in volume is the amount that flows after crossing the
boundary for some time duration, not simply the initial amount of volume at the boundary
minus the final amount at the boundary, since the change in volume flowing through the area
would be zero for steady flow.
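As a worked sketch of the time-derivative definition, Q = dV/dt can be estimated numerically from sampled readings of the cumulative volume that has crossed a surface. The volume values and sample spacing below are hypothetical, chosen only for illustration:

```python
# Sketch: estimating the volumetric flow rate Q = dV/dt from sampled
# cumulative volume readings (hypothetical values, 1 s apart).
volumes = [0.0, 2.0, 4.1, 6.0, 8.2]   # cumulative volume crossing the surface, m^3
dt = 1.0                               # sample spacing, s

# Forward-difference estimate of Q between successive samples.
flow_rates = [(v2 - v1) / dt for v1, v2 in zip(volumes, volumes[1:])]
mean_q = sum(flow_rates) / len(flow_rates)
print(mean_q)  # average Q over the record, m^3/s
```

For steady flow all the forward differences would be equal, which matches the remark above that the instantaneous amount at the boundary is not what Q measures.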

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of
echography to new fields of study such as the functional imaging of the brain, cardiac
electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to
name a few, non-invasively and in real time. In this study, we present the first implementation of
Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating
from a sparse virtual array located behind the probe. It achieves high contrast and resolution while
maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound
system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array
probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single
ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging,
and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves
was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to
obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single
acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns
occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo
interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This
study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness,
tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with
reduced intra- and inter-observer variability.

Beamforming
From Wikipedia, the free encyclopedia


Beamforming or spatial filtering is a signal processing technique used in sensor arrays for
directional signal transmission or reception.[1] This is achieved by combining elements in a phased
array in such a way that signals at particular angles experience constructive interference while
others experience destructive interference. Beamforming can be used at both the transmitting
and receiving ends in order to achieve spatial selectivity. The improvement compared
with omnidirectional reception/transmission is known as the receive/transmit gain (or loss).
Beamforming can be used for radio or sound waves. It has found numerous applications
in radar, sonar, seismology, wireless communications, radio astronomy, acoustics,
and biomedicine. Adaptive beamforming is used to detect and estimate the signal-of-interest at
the output of a sensor array by means of optimal (e.g., least-squares) spatial filtering and
interference rejection.
Beamforming techniques[edit]
To change the directionality of the array when transmitting, a beamformer controls the phase and
relative amplitude of the signal at each transmitter, in order to create a pattern of constructive
and destructive interference in the wavefront. When receiving, information from different sensors
is combined in a way where the expected pattern of radiation is preferentially observed.
For example, in sonar, to send a sharp pulse of underwater sound towards a ship in the distance,
simply transmitting that sharp pulse from every sonar projector in an array simultaneously fails,
because the ship will first hear the pulse from the speaker that happens to be nearest the ship,
then later pulses from speakers that happen to be further from the ship. The beamforming
technique involves sending the pulse from each projector at slightly different times (the projector
closest to the ship last), so that every pulse hits the ship at exactly the same time, producing the
effect of a single strong pulse from a single powerful projector. The same thing can be carried out
in air using loudspeakers, or in radar/radio using antennas.
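The transmit-delay scheme just described can be sketched numerically: fire the projector closest to the target last, so every pulse arrives at the same instant. The sound speed, projector positions, and target location below are hypothetical:

```python
import math

# Sketch: transmit delays for a sonar projector line array so that every
# pulse reaches a distant target simultaneously (hypothetical values).
c = 1500.0                        # speed of sound in water, m/s
positions = [0.0, 1.0, 2.0, 3.0]  # projector positions along the array, m
target = (200.0, 50.0)            # target location (x, y), m

# Travel time from each projector to the target.
times = [math.hypot(target[0] - x, target[1]) / c for x in positions]

# Fire the closest projector (shortest travel time) last: delay each one
# by the difference between the longest travel time and its own.
delays = [max(times) - t for t in times]
print(delays)
```

With these numbers the projector at x = 3 m is nearest the target, so it receives the largest delay, while the farthest projector fires immediately.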
In passive sonar, and in reception in active sonar, the beamforming technique involves
combining delayed signals from each hydrophone at slightly different times (the hydrophone
closest to the target will be combined after the longest delay), so that every signal reaches the
output at exactly the same time, making one loud signal, as if the signal came from a single, very
sensitive hydrophone. Receive beamforming can also be used with microphones or radar
antennas.
With narrow-band systems the time delay is equivalent to a "phase shift", so in this case the
array of antennas, each one shifted a slightly different amount, is called a phased array. A narrow
band system, typical of radars, is one where the bandwidth is only a small fraction of the centre
frequency. With wide band systems this approximation no longer holds, which is typical in
sonars.
In the receive beamformer the signal from each antenna may be amplified by a different "weight."
Different weighting patterns (e.g., Dolph-Chebyshev) can be used to achieve the desired
sensitivity patterns. A main lobe is produced together with nulls and sidelobes. As well as
controlling the main lobe width (the beam) and the sidelobe levels, the position of a null can be
controlled. This is useful to ignore noise or jammers in one particular direction, while listening for
events in other directions. A similar result can be obtained on transmission.
For the full mathematics on directing beams using amplitude and phase shifts, see the
mathematical section in phased array.
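A minimal narrow-band delay-and-sum sketch for a uniform linear array, in which the phase-shift weights play the role of the time delays described above. The element count, spacing, and steering angle are hypothetical:

```python
import cmath, math

# Sketch: narrow-band delay-and-sum beamformer for a uniform linear
# array (hypothetical parameters: 8 elements at half-wavelength spacing).
N = 8                  # number of antennas
d_over_lambda = 0.5    # element spacing in wavelengths
steer_deg = 30.0       # direction the beam is steered toward

def array_gain(theta_deg, steer_deg):
    """Normalized magnitude of the array response toward theta_deg."""
    psi = 2 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    psi0 = 2 * math.pi * d_over_lambda * math.sin(math.radians(steer_deg))
    # Each element's weight cancels the phase expected from the steer
    # direction, so signals from that direction add constructively.
    s = sum(cmath.exp(1j * n * (psi - psi0)) for n in range(N))
    return abs(s) / N  # main-lobe peak normalized to 1.0

print(array_gain(30.0, steer_deg))   # on the main lobe
print(array_gain(-40.0, steer_deg))  # well off the main lobe
```

Sweeping theta_deg over all angles traces out the main lobe, sidelobes, and nulls discussed above; tapering the per-element weights (e.g., Dolph-Chebyshev) would lower the sidelobes at the cost of a wider main lobe.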
Beamforming techniques can be broadly divided into two categories:

conventional (fixed or switched beam) beamformers

adaptive beamformers or phased arrays, operating in either a desired signal maximization
mode or an interference signal minimization (cancellation) mode

Conventional beamformers use a fixed set of weightings and time-delays (or phasings) to
combine the signals from the sensors in the array, primarily using only information about the
location of the sensors in space and the wave directions of interest. In contrast, adaptive
beamforming techniques generally combine this information with properties of the signals
actually received by the array, typically to improve rejection of unwanted signals from other
directions. This process may be carried out in either the time or the frequency domain.
As the name indicates, an adaptive beamformer is able to automatically adapt its response to
different situations. Some criterion has to be set up to allow the adaption to proceed such as
minimising the total noise output. Because of the variation of noise with frequency, in wide band
systems it may be desirable to carry out the process in the frequency domain.
Beamforming can be computationally intensive. Sonar phased array has a data rate low enough
that it can be processed in real-time in software, which is flexible enough to transmit and/or
receive in several directions at once. In contrast, radar phased array has a data rate so high that
it usually requires dedicated hardware processing, which is hard-wired to transmit and/or receive
in only one direction at a time. However, newer field programmable gate arrays are fast enough
to handle radar data in real-time, and can be quickly re-programmed like software, blurring the
hardware/software distinction.

Sonar beamforming requirements[edit]


Sonar itself has many applications, such as wide-area search and ranging, and underwater
imaging sonars such as side-scan sonar and acoustic cameras.
Sonar beamforming implementation is similar in general technique but varies significantly in
detail compared to electromagnetic system beamforming implementation. Sonar applications
vary from 1 Hz to as high as 2 MHz, and array elements may be few and large, or number in the
hundreds yet very small. This will shift sonar beamforming design efforts significantly between
demands of such system components as the "front end" (transducers, preamps and digitizers)
and the actual beamformer computational hardware downstream. High frequency, focused beam,
multi-element imaging-search sonars and acoustic cameras often implement fifth-order spatial
processing that places strains equivalent to Aegis radar demands on the processors.
Many sonar systems, such as on torpedoes, are made up of arrays of up to 100 elements that
must accomplish beam steering over a 100 degree field of view and work in both active and
passive modes.
Sonar arrays are used both actively and passively in 1-, 2-, and 3-dimensional arrays.

1-dimensional "line" arrays are usually in multi-element passive systems towed behind
ships and in single or multi-element side scan sonar.

2-dimensional "planar" arrays are common in active/passive ship hull mounted sonars
and some side-scan sonar.

3-dimensional spherical and cylindrical arrays are used in 'sonar domes' in the
modern submarine and ships.

Sonar differs from radar in that in some applications such as wide-area-search all directions often
need to be listened to, and in some applications broadcast to, simultaneously. Thus a multibeam
system is needed. In a narrowband sonar receiver the phases for each beam can be
manipulated entirely by signal processing software, as compared to present radar systems that
use hardware to 'listen' in a single direction at a time.
Sonar also uses beamforming to compensate for the significant problem of the slower
propagation speed of sound as compared to that of electromagnetic radiation. In side-look
sonars, the towing system or vehicle carrying the sonar moves at sufficient speed to move the
sonar out of the field of the returning sound "ping". In addition to focusing algorithms intended
to improve reception, many side-scan sonars also employ beam steering to look forward and
backward to "catch" incoming pulses that would have been missed by a single side-looking beam.

Beamforming schemes[edit]

A conventional beamformer can be a simple beamformer, also known as a delay-and-sum
beamformer. All the weights of the antenna elements can have equal magnitudes. The
beamformer is steered to a specified direction only by selecting appropriate phases for each
antenna. If the noise is uncorrelated and there are no directional interferences, the signal-to-noise
ratio of a beamformer with N antennas receiving a signal of power σ² is

SNR = N σ² / σn²

where σn² is the noise variance or noise power.
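The N-fold SNR gain can be checked with a short sketch: signal amplitudes add coherently (output power grows as N²), while uncorrelated noise powers simply add (growing as N). The per-antenna powers below are hypothetical:

```python
# Sketch: the N-fold SNR gain of a delay-and-sum beamformer when the
# per-antenna noises are uncorrelated (all numbers hypothetical).
N = 16
signal_power = 2.0   # per-antenna signal power (sigma^2)
noise_power = 0.5    # per-antenna noise power (sigma_n^2)

input_snr = signal_power / noise_power
out_signal = (N ** 2) * signal_power   # amplitudes add coherently: power ~ N^2
out_noise = N * noise_power            # uncorrelated noise powers add: power ~ N
output_snr = out_signal / out_noise    # = N * sigma^2 / sigma_n^2
print(output_snr / input_snr)          # SNR gain equals N
```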

Null-steering beamformer

Frequency domain beamformer

Beamforming history in cellular standards[edit]


Beamforming techniques used in cellular phone standards have advanced through the
generations to make use of more complex systems to achieve higher density cells, with higher
throughput.

Passive mode: (almost) non-standardized solutions

Wideband Code Division Multiple Access (WCDMA) supports direction of arrival (DOA)
based beamforming

Active mode: mandatory standardized solutions

2G: transmit antenna selection as an elementary form of beamforming

3G WCDMA: transmit antenna array (TxAA) beamforming

3G evolution LTE/UMB: multiple-input multiple-output (MIMO) precoding-based
beamforming with partial Space-Division Multiple Access (SDMA)

Beyond 3G (4G, 5G, ...): more advanced beamforming solutions to support SDMA, such as
closed-loop beamforming and multi-dimensional beamforming, are expected

Beamforming for speech audio[edit]


Beamforming can be used to try to extract sound sources in a room, such as multiple speakers in
the cocktail party problem. This requires the locations of the speakers to be known in advance,
for example by using the time of arrival from the sources to mics in the array, and inferring the
locations from the distances.
Compared to carrier-wave telecommunications, natural audio contains a variety of frequencies. It
is advantageous to separate frequency bands prior to beamforming because different
frequencies have different optimal beamform filters (and hence can be treated as separate
problems, in parallel, and then recombined afterward). Properly isolating these bands involves
specialized non-standard filter banks. In contrast, for example, the standard FFT band-filters
implicitly assume that the only frequencies present in the signal are exact harmonics; frequencies
which lie between these harmonics will typically activate all of the FFT channels (which is not
what is wanted in a beamform analysis). Instead, filters can be designed in which only
local frequencies are detected by each channel (while retaining the recombination property to be
able to reconstruct the original signal), and these are typically non-orthogonal unlike the FFT
basis.

See also[edit]
Beamforming solutions[edit]

Aperture synthesis

Inverse synthetic aperture radar (ISAR)

Phased array antennas, which use beamforming to steer the beam

Sonar, side-scan sonar

Synthetic aperture radar

Synthetic aperture sonar

Thinned array curse

Window function

Synthetic aperture magnetometry (SAM)

Microphone array

Zero-forcing precoding

Multibeam echosounder

Pencil (optics)

Related issues[edit]

MIMO

Spatial multiplexing

Antenna diversity

Channel state information

Spacetime code

Spacetime block code

Precoding

Dirty paper coding (DPC)

Smart antennas

Space-division multiple access

Wideband Space Division Multiple Access

Golomb ruler

Audio Surveillance

Reconfigurable antenna

Pulse-Doppler signal processing


From Wikipedia, the free encyclopedia

Pulse-Doppler signal processing is a radar performance enhancement strategy that allows
small high-speed objects to be detected in close proximity to large slow-moving objects.

Detection improvements on the order of 1,000,000:1 are common. Small fast moving objects can
be identified close to terrain, near the sea surface, and inside storms.
This signal processing strategy is unique for pulse-Doppler radar and multi-mode radar, which
can be pointed into regions containing a large number of slow-moving reflectors without
overwhelming computer software and operators. Other signal processing strategies, like moving
target indication, are more appropriate for benign clear blue sky environments.

Environment[edit]

Pulse-Doppler signal processing begins with samples taken between multiple transmit pulses. Sample
strategy expanded for one transmit pulse is shown.

Pulse-Doppler begins with coherent pulses transmitted through an antenna or transducer.

There is no modulation on the transmit pulse. Each pulse is a perfectly clean slice of a perfect
coherent tone. The coherent tone is produced by the local oscillator.
There can be dozens of transmit pulses between the antenna and the reflector. In a hostile
environment, there can be millions of other reflections from slow moving or stationary objects.
Transmit pulses are sent at the pulse repetition frequency.
Energy from the transmit pulses propagates through space until it is disrupted by reflectors.
This disruption causes some of the transmit energy to be reflected back to the radar antenna or
transducer, along with phase modulation caused by motion. The same tone that is used to
generate the transmit pulses is also used to down-convert the received signals to baseband.
The reflected energy that has been down-converted to baseband is sampled.
Sampling begins after each transmit pulse is extinguished. This is the quiescent phase of the
transmitter.
The quiescent phase is divided into equally spaced sample intervals. Samples are collected until
the radar begins to fire another transmit pulse.
The pulse width of each sample matches the pulse width of the transmit pulse.
Enough samples must be taken to act as the input to the pulse-Doppler filter.

Sampling[edit]

Pulse-Doppler signal processing begins with I and Q samples.

In the diagram, the top shows pieces of the wave-front from the reflector as it arrives at the radar
receiver. The wave-front forms a spiral pattern as time passes. The detectors in the receiver
convert this spiral into two electrical samples called I and Q.
All of the disks (samples) shown in this diagram represent a single sample period taken from
multiple transmit pulses, like sample 1. Each of these samples is separated by transmit period
(1/PRF). This is the ambiguous range.

Sample 2 through sample N would be similar but delayed by one or more pulse widths behind
those that are shown. The signals in each sample are composed of signals from reflections at
multiple ranges.
The diagram shows a counterclockwise spiral, which corresponds with inbound motion. This is
up-Doppler. Down-Doppler would produce a clockwise spiral.
The local oscillator is split into two signals that are offset by 90 degrees, and each goes to the
two different detectors along with the receive signal. One detector produces I(t) and the other
produces Q(t). This is crucial for pulse-Doppler operation.
I(t) and Q(t) are the real and imaginary component of a complex number.
A spinning wheel, mirror and strobe-light can be used to visualize I and Q. The mirror is placed at
a 45 degree angle above the wheel so that you can see the front and top of the wheel at the
same time. The strobe-light is attached to the wheel so that you can see the wheel spin when the
room lights are turned off. You sit directly in front of the wheel so that you view the wheel as a
vertical line while a friend spins the wheel. The view of the front of the wheel (I) and the top of the
wheel (Q) tell you whether your friend has spun the wheel clockwise or counterclockwise.
Counterclockwise is like inbound Doppler. Clockwise is like outbound Doppler.
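The wheel analogy can be sketched in code: the sign of the phase rotation between successive I/Q samples distinguishes the counterclockwise (inbound) spiral from the clockwise (outbound) one. The Doppler shift and PRF below are hypothetical:

```python
import cmath, math

# Sketch: the rotation direction of successive I/Q samples distinguishes
# inbound (up-Doppler) from outbound motion (hypothetical values).
prf = 1000.0    # pulse repetition frequency, Hz
f_d = 50.0      # Doppler shift, Hz; positive models inbound motion

# I(t) and Q(t) as the real and imaginary parts of a complex sample.
iq = [cmath.exp(2j * math.pi * f_d * n / prf) for n in range(4)]

# Phase step between consecutive samples: positive = counterclockwise.
dphi = cmath.phase(iq[1] * iq[0].conjugate())
print("inbound" if dphi > 0 else "outbound")
```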

Windowing[edit]
The process of digital sampling causes ringing in the filters that are used to remove reflected
signals from slow moving objects. Sampling causes frequency sidelobes to be produced
adjacent to the true signal for an input that is a pure tone. Windowing suppresses sidelobes
induced by the sampling process.
The window is the number of samples that are used as an input to the filter.
The window process takes a series of complex constants and multiplies each sample by its
corresponding window constant before the sample is applied to the filter.
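The window multiply can be sketched directly. A Hann window is used here as a simple stand-in for the Dolph-Chebyshev window named below, since it has a closed form; the sample values are hypothetical:

```python
import math

# Sketch of the window process: each slow-time sample is scaled by its
# window coefficient before entering the filter. A Hann window stands in
# for the Dolph-Chebyshev window (all values hypothetical).
M = 8
samples = [complex(1.0, 0.0)] * M   # hypothetical slow-time samples

window = [0.5 - 0.5 * math.cos(2 * math.pi * m / (M - 1)) for m in range(M)]
windowed = [w * s for w, s in zip(window, samples)]
print([round(w, 3) for w in window])   # tapers to zero at both ends
```

The taper toward zero at the ends of the sample run is what suppresses the sampling-induced sidelobes, at the cost of a slightly wider main filter response.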

Detailed explanation of windowing

Dolph-Chebyshev windowing provides optimal processing sidelobe suppression.

Filtering[edit]

Pulse-Doppler signal processing. The Range Sample axis represents individual samples taken in between
each transmit pulse. The Range Interval axis represents each successive transmit pulse interval during
which samples are taken. The Fast Fourier Transform process converts time-domain samples into
frequency domain spectra. This is sometimes called the bed of nails.

Pulse-Doppler signal processing separates reflected signals into a number of frequency filters.
There is a separate set of filters for each ambiguous range. The I and Q samples described
above are used to begin the filtering process.
These samples are organized into the m x n matrix of time domain samples shown in the top
half of the diagram.
Time domain samples are converted to frequency domain using a digital filter. This usually
involves a fast Fourier transform (FFT). Side-lobes are produced during signal processing, and a
side-lobe suppression strategy, such as a Dolph-Chebyshev window function, is required to reduce
false alarms.[1]
All of the samples taken from the Sample 1 sample period form the input to the first set of filters.
This is the first ambiguous range interval.
All of the samples taken from the Sample 2 sample period form the input to the second set of
filters. This is the second ambiguous range interval.
This continues until samples taken from the Sample N sample period form the input to the last
set of filters. This is the furthest ambiguous range interval.
The outcome is that each ambiguous range will produce a separate spectrum corresponding with
all of the Doppler frequencies at that range.
The digital filter produces as many frequency outputs as the number of transmit pulses used for
sampling. Production of one FFT with 1024 frequency outputs requires 1024 transmit pulses for
input.
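The filter bank for one ambiguous range gate can be sketched as follows, using a plain M-point DFT in place of the FFT for clarity. The pulse count, PRF, and target Doppler are hypothetical:

```python
import cmath, math

# Sketch: Doppler filter bank for one ambiguous range gate. M pulses give
# M slow-time I/Q samples; an M-point DFT (the slow equivalent of the
# FFT) separates them into M Doppler filters. All numbers hypothetical.
M = 64          # transmit pulses used for sampling
prf = 10_000.0  # pulse repetition frequency, Hz
f_d = 1_250.0   # simulated target Doppler shift, Hz

# Slow-time samples at one range gate: a complex tone at the Doppler shift.
samples = [cmath.exp(2j * math.pi * f_d * m / prf) for m in range(M)]

spectrum = [abs(sum(s * cmath.exp(-2j * math.pi * k * m / M)
                    for m, s in enumerate(samples)))
            for k in range(M)]

peak_bin = spectrum.index(max(spectrum))
print(peak_bin, peak_bin * prf / M)   # filter index and its Doppler frequency
```

Repeating this for every range sample produces the range-Doppler "bed of nails" described in the figure caption above.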

Detection[edit]
Detection processing for pulse-Doppler produces an ambiguous range and ambiguous velocity
corresponding to one of the FFT outputs from one of the range samples. The reflections fall into
filters corresponding to different frequencies that separate weather phenomenon, terrain, and
aircraft into different velocity zones at each range.
Multiple simultaneous criteria are required before a signal can qualify as a detection.

Constant False Alarm Rate detection performed on FFT output.

Constant false alarm rate processing is used to examine each FFT output to detect signals. This
is an adaptive process that adjusts automatically to background noise and environmental
influences. There is a cell under test, where the surrounding cells are added together, multiplied
by a constant, and used to establish a threshold.
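The cell-averaging form of this threshold test can be sketched over one row of FFT magnitudes. The cell values, training-cell count, and scale factor are hypothetical, and guard cells are omitted for brevity:

```python
# Sketch of cell-averaging CFAR over one row of FFT output magnitudes
# (hypothetical data; guard cells omitted for brevity).
cells = [1.1, 0.9, 1.0, 1.2, 9.5, 1.0, 0.8, 1.1, 1.05, 0.95]
k = 2          # training cells on each side of the cell under test
scale = 3.0    # threshold multiplier set by the desired false-alarm rate

detections = []
for i in range(k, len(cells) - k):
    # Surrounding cells are averaged and scaled to form the threshold.
    neighbours = cells[i-k:i] + cells[i+1:i+1+k]
    threshold = scale * sum(neighbours) / len(neighbours)
    if cells[i] > threshold:
        detections.append(i)
print(detections)
```

Because the threshold follows the local background level, the false-alarm rate stays roughly constant as noise and clutter vary across the spectrum.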

The area surrounding the detection is examined to determine where the sign of the slope
changes from positive to negative, which is the location of the detection (the local maximum).
Detections for a single ambiguous range are sorted in order of descending amplitude.

Detection only covers the velocities that exceed the speed rejection setting. For
example, if speed rejection is set to 75 miles per hour, then hail moving at 50 miles per
hour inside a thunderstorm will not be detected, but an aircraft moving at 100 miles per
hour will be detected.

For monopulse radar, signal processing is identical for the main lobe and sidelobe
blanking channels. This identifies if the object location is in the main lobe or if it is
offset above, below, left or right of the antenna beam.

Signals that satisfy all of these criteria are detections. These are sorted in order
of descending amplitude (greatest to smallest).
The sorted detections are processed with a range ambiguity resolution algorithm
to identify the true range and velocity of the target reflection.

Ambiguity resolution[edit]

Pulse-Doppler ambiguity zones. Each blue zone with no label represents a velocity/range
combination that will be folded into the unambiguous zone. Areas outside the blue zones
are blind ranges and blind velocities, which are filled in using multiple PRF and frequency
agility.

Pulse Doppler radar may have 50 or more pulses between the radar and the
reflector.
Pulse Doppler relies on medium pulse repetition frequency (PRF) from about
3 kHz to 30 kHz. Each transmit pulse is separated by 5 km to 50 km distance.
Range and speed of the target are folded by a modulo operation produced by
the sampling process.
True range is found using the ambiguity resolution process.
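The modulo folding and its resolution can be sketched with two hypothetical PRFs. The brute-force search over fold counts below is a simple stand-in for a real ambiguity resolution algorithm:

```python
# Sketch: resolving range ambiguity with two PRFs (hypothetical values).
# Each PRF folds the true range modulo its unambiguous range; comparing
# the folded measurements recovers the true range.
c = 3e8
prfs = [12_000.0, 15_000.0]                  # Hz
unamb = [c / (2 * p) for p in prfs]          # unambiguous range per PRF, m

true_range = 31_000.0                        # metres (unknown to the radar)
measured = [true_range % u for u in unamb]   # what each PRF actually reports

def resolve(measured, unamb, max_folds=20, tol=1.0):
    """Search fold counts until both PRFs agree on one unfolded range."""
    for n0 in range(max_folds):
        r0 = measured[0] + n0 * unamb[0]
        for n1 in range(max_folds):
            r1 = measured[1] + n1 * unamb[1]
            if abs(r0 - r1) < tol:
                return r0
    return None

print(resolve(measured, unamb))
```

The same comparison idea, applied to Doppler measurements at multiple PRFs, resolves the folded velocity as well.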

Ambiguity resolution process explanation

The received signals from multiple PRF are compared using the range ambiguity
resolution process.

Range ambiguity resolution process explanation

The received signals are also compared using the frequency ambiguity
resolution process.

Frequency ambiguity resolution process explanation

Lock[edit]
The velocity of the reflector is determined by measuring the change in the range
of reflector over a short span of time. This change in range is divided by the time
span to determine velocity.
The velocity is also found using the Doppler frequency for the detection.
The two are subtracted, and the difference is averaged briefly.

If the average difference falls below a threshold, then the signal is a lock.
Lock means that the signal obeys Newtonian mechanics. Valid reflectors
produce a lock. Invalid signals do not. Invalid reflections include things like
helicopter blades, where Doppler does not correspond with the velocity that
the vehicle is moving through the air. Invalid signals include microwaves
made by sources separate from the transmitter, such as radar jamming and
deception.
Reflectors that do not produce a lock signal cannot be tracked using the
conventional technique. This means the feedback loop must be opened for
objects like helicopters because the main body of the vehicle can be below
the rejection velocity (only the blades are visible).
Transition to track is automatic for detections that produce a lock.
Transition to track is normally manual for non-Newtonian signal sources, but
additional signal processing can be used to automate the process. Doppler
velocity feedback must be disabled in the vicinity of the signal source to
develop track data.
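The lock comparison above can be sketched numerically: derive one velocity from the change in measured range, another from the Doppler frequency, and declare lock when they agree. The carrier, range samples, and Doppler measurement are hypothetical:

```python
# Sketch of the lock test: velocity from range change vs. velocity from
# the Doppler frequency; a small difference means "lock". Hypothetical
# carrier, range samples, and Doppler measurement throughout.
c = 3e8            # speed of light, m/s
fc = 10e9          # carrier frequency, Hz

ranges = [30_000.0, 29_985.0, 29_970.0]   # measured range, m, 0.1 s apart
dt = 0.1
v_range = (ranges[0] - ranges[-1]) / (2 * dt)   # closing speed from range change

f_doppler = 10_000.0                            # measured Doppler shift, Hz
v_doppler = f_doppler * c / (2 * fc)            # closing speed from Doppler

locked = abs(v_range - v_doppler) < 5.0         # threshold, m/s
print(v_range, v_doppler, locked)
```

A helicopter body below the rejection velocity would fail this test: its blade Doppler does not match the range-rate of the vehicle, so no lock is produced.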

Track[edit]
Main article: Track algorithm
Track mode begins when a detection is sustained in a specific location.
During track, the XYZ position of the reflector is determined using
a Cartesian coordinate system, and the XYZ velocity of the reflector is
measured to predict future position. This is similar to the operation of
a Kalman filter. The XYZ velocity is multiplied by the time between scans to
determine each new aiming point for the antenna.
The radar uses a polar coordinate system. The track position is used to
determine the left-right and up-down aiming point for the antenna position in
the future. The antenna must be aimed at the position which will paint the

target with maximum energy and not dragged behind it, otherwise the radar
will be less effective.
The estimated distance to a reflector is compared with the measured
distance. The difference is the distance error. Distance error is a feedback
signal used to correct the position and velocity information for the track data.
Doppler frequency provides an additional feedback signal similar to the
feedback used in a phase-locked loop. This improves the accuracy and
reliability of the position and velocity information.
The amplitude and phase for the signal returned by the reflector is
processed using monopulse radar techniques during track. This measures
the offset between the antenna pointing position and the position of object.
This is called angle error.
Each separate object must have its own independent track information. This
is called track history, and this extends back for a brief span of time. This
could be as much as an hour for airborne objects. The timespan for
underwater objects may extend back a week or more.
Tracks where the object produces a detection are called active tracks.
The track is continued briefly in the absence of any detections. Tracks with
no detections are coasted tracks. The velocity information is used to
estimate antenna aiming positions. These are dropped after a brief period.
Each track has a surrounding capture volume, approximately the shape of
a football. The radius of the capture volume is approximately the distance
the fastest detectable vehicle can travel between successive scans of that
volume, which is determined by the receiver band pass filter in pulseDoppler radar.
New tracks that fall within the capture volume of a coasted track are cross
correlated with the track history of the nearby coasted track. If position and
speed are compatible, then the coasted track history is combined with the
new track. This is called a join track.
A new track within the capture volume of an active track is called a split
track.
Pulse-Doppler track information includes object area, errors, acceleration,
and lock state, which are part of the decision logic involving join tracks and
split tracks.
Other strategies are used for objects that do not satisfy Newtonian physics.
Users are generally presented with several displays that show information
from track data and raw detected signals.

Plan position indicator

Scrolling notifications for new tracks, split tracks, and join tracks

Range amplitude display

Range height indicator

Angle error display

The plan position indicator and scrolling notifications are automatic and
require no user action. The remaining displays activate to show additional
information only when a track is selected by the user.

What is a Doppler ultrasound?


Answers from Sheldon G. Sheps, M.D.

A Doppler ultrasound is a noninvasive test that can be used to estimate your blood
flow through blood vessels by bouncing high-frequency sound waves (ultrasound) off
circulating red blood cells. A regular ultrasound uses sound waves to produce
images, but can't show blood flow.
A Doppler ultrasound may help diagnose many conditions, including:

Blood clots

Poorly functioning valves in your leg veins, which can cause blood or other fluids to
pool in your legs (venous insufficiency)

Heart valve defects and congenital heart disease

A blocked artery (arterial occlusion)

Decreased blood circulation into your legs (peripheral artery disease)

Bulging arteries (aneurysms)

Narrowing of an artery, such as in your neck (carotid artery stenosis)

A Doppler ultrasound can estimate how fast blood flows by measuring the rate of
change in its pitch (frequency). During a Doppler ultrasound, a technician trained in
ultrasound imaging (sonographer) presses a small hand-held device (transducer),
about the size of a bar of soap, against your skin over the area of your body being
examined, moving from one area to another as necessary.
This test may be done as an alternative to more invasive procedures, such as
arteriography and venography, which involve injecting dye into the blood vessels so
that they show up clearly on X-ray images.
A Doppler ultrasound test may also help your doctor check for injuries to your
arteries or to monitor certain treatments to your veins and arteries.
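The pitch-change measurement rests on the Doppler equation, v = Δf · c / (2 f₀ cos θ), where the factor of two reflects the round trip and cos θ accounts for the angle between the beam and the vessel. A sketch with hypothetical probe values:

```python
import math

# Sketch: blood speed from the Doppler equation
# v = delta_f * c / (2 * f0 * cos(theta)). All probe values hypothetical.
c = 1540.0                   # speed of sound in soft tissue, m/s
f0 = 5e6                     # transducer frequency, Hz
theta = math.radians(60.0)   # angle between beam and flow direction
delta_f = 3247.0             # measured Doppler shift, Hz

v = delta_f * c / (2 * f0 * math.cos(theta))
print(v)                     # estimated blood speed, m/s
```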

Doppler spectrum

Doppler shifts
Motion of an antenna produces Doppler shifts of incoming
received waves. We consider a signal received over a multipath
channel, with many incoming waves. Let the n-th reflected wave,
with amplitude cn and phase φn, arrive from an angle αn relative to
the direction of the motion of the antenna.

The Doppler shift of this wave is

fn = (v/c) fc cos(αn),

where v is the speed of the antenna.
The maximum Doppler shift occurs for a wave coming from the opposite direction
to the direction the antenna is moving in. It has a frequency shift of magnitude

fm = (v/c) fc,

where fc is the carrier frequency and c the velocity of light.
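A quick numeric check of the maximum-shift formula, with a hypothetical vehicular speed and the 1800 MHz carrier used in the measurements referenced below:

```python
# Quick check of the maximum Doppler shift fm = (v/c) * fc
# (hypothetical vehicular speed; 1800 MHz carrier as in the measurements).
c = 3e8        # speed of light, m/s
fc = 1.8e9     # carrier frequency, Hz
v = 30.0       # antenna speed, m/s (108 km/h)

fm = v / c * fc
print(fm)      # maximum Doppler shift, Hz
```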


Such motion of the antenna leads to (time varying) phase shifts of individual
reflected waves. It is not so much this minor shift that bothers radio system
designers, as a receiver oscillator can easily compensate for it. Rather, it is the fact
that many waves arrive, all with different shifts. Thus, their relative phases change
all the time, and so it affects the amplitude of the resulting composite signal. So the
Doppler effects determine the rate at which the amplitude of the resulting
composite signal changes.
Doppler Power Spectrum

The models behind Rayleigh or Rician fading assume that many waves arrive, each with its
own random angle of arrival (thus with its own Doppler shift), uniformly distributed within
[0, 2π], independently of the other waves. This allows us to compute a probability density
function of the frequency of incoming waves. Moreover, we can obtain the Doppler spectrum
of the received signal.

Figure: Distribution of angle of arrival in a typical urban propagation situation. Measurement at
1800 MHz. Source: Research group of Prof. Paul Walter Baier, U. of Kaiserslautern, Germany.

This leads to the U-shaped power spectrum for isotropic scattering,

S(f) = 1 / ( 4 fm sqrt( 1 − ((f − fc)/fm)² ) ),

where we assumed a unity local mean power.
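The U shape can be seen directly by Monte Carlo sampling: a uniformly distributed angle of arrival α maps to a Doppler shift f = fm·cos(α), which piles probability up near the band edges ±fm. The maximum Doppler shift below is hypothetical, and the simulation is only a stand-in for the analytic density:

```python
import math, random

# Sketch: the U-shaped spectrum arises because a uniform angle of
# arrival alpha maps to Doppler shift f = fm * cos(alpha), concentrating
# probability near the band edges +/- fm (hypothetical fm value).
random.seed(1)
fm = 100.0
shifts = [fm * math.cos(random.uniform(0.0, 2.0 * math.pi))
          for _ in range(100_000)]

edge = sum(1 for f in shifts if abs(f) > 0.9 * fm)     # near the band edges
centre = sum(1 for f in shifts if abs(f) < 0.1 * fm)   # near the carrier
print(edge, centre)   # far more samples land near the edges: the "U"
```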

Figure: Power density spectrum of a sine wave suffering from a Doppler spread.

Figure: Measured Doppler spread at 1800 MHz. Doppler spread = 60.3 Hz.
Source: Research group of Prof. Paul Walter Baier, U. of Kaiserslautern, Germany.
See also: full scatter plot.

If a sinusoidal signal is transmitted (represented by a spectral line in the frequency
domain), then after transmission over a fading channel we will receive a power
spectrum that is spread according to the above figure. The frequency range where
the power spectrum is nonzero defines the Doppler spread.
The Doppler spread is relevant, for instance, to compute threshold crossing
rates and average fade durations.
More details

An animation of fading
Samples of a Rayleigh channel.
Slides for a derivation of the spectrum
Scatter Function
How do systems handle Doppler spreads? The table below pairs each system with its
countermeasure:

Analog: Doppler causes random FM modulation which may be audible. The carrier frequency
is low enough to avoid problems.

GSM: The channel bit rate is well above the Doppler spread. With TDMA, the channel is fairly
constant during each bit/burst transmission. Receiver training/updating takes place during
each transmission burst, together with feedback frequency correction.

DECT: Intended for pedestrian use, so only small Doppler spreads are to be anticipated, and
the bit rate is very large compared to the Doppler spread.

IS-95 Cellular CDMA: Downlink: a pilot signal for synchronization and channel estimation.
Uplink: continuous tracking of each signal.

Wireless LANs: Mobility is slow, thus the Doppler spread is only a few hertz.

Effect of angle of arrival and excess delay

There are two different forms of multipath scattering, according to
the excess time delay of the given channel tap:

1. Small excess time delays. The channel tap may be modelled as the accumulation of
multipath components received from scatterers close to the mobile. This gives rise to the
classical Doppler power spectrum of the received multipath components.

2. Large excess time delays. The classical Doppler model does not provide a satisfactory
geometric model for this type of scattering. Instead, multipath energy is more likely to have
a narrow Doppler spread, having arisen from reflections off isolated obstacles such as
buildings or hills. The instantaneous variation of signal power in space for a channel depends
on the angles of arrival of the multipath components.
