
The FFT Analyzer

Background

This section will cover the operation and theory of the FFT analyzer,
which is the most commonly used piece of signal analysis
equipment in the vibration field. Many workers think of the FFT
analyzer as a "magic box," into which you put a signal and out of
which comes a spectrum. The assumption usually is that the
spectrum tells the truth -- the box cannot lie. We will see that this
assumption is valid in many cases, but we will also see that we can
be misled, for there are several pitfalls in the process of digital
signal analysis. One of the purposes of this section is to help you
avoid falling into any of the pitfalls -- and, if you do fall in, to show
you how to crawl out smelling like a rose.
FFT analysis is but one type of digital spectrum analysis, but we will
not concentrate on the other types because they do not apply
directly to the VMS program.
Spectrum Analysis

Spectrum analysis, which is defined as the transformation of a
signal from a time-domain representation into a frequency-domain
representation, has its roots in the early 19th century, when several
mathematicians were working on it from a theoretical basis. But it
took a practical man, an engineer with a good mathematical
background, to develop the rationale upon which almost all our
modern spectrum analysis techniques are based. That engineer was
Jean Baptiste Fourier, and he was working for Napoleon during his
invasion of Egypt on a problem of overheating cannons when he
derived the famous Fourier Series for the solution of heat
conduction. It may seem a far cry from overheating cannons to
frequency analysis, but it turns out that the same equations apply
to both cases. Fourier later generalized the Fourier series into the
Fourier Integral Transform. The advent of digital signal analysis
naturally led to the so-called Discrete Fourier Transform and the
Fast Fourier Transform, or FFT.
The remainder of this section covers the following topics:
Forms of the Fourier Transform


The Fourier Series
The Fourier Integral Transform
The Discrete Fourier Transform
The Fast Fourier Transform
Analog to Digital Conversion
Aliasing
Leakage
Windows
The Hanning Window
Overlap Processing
The Picket Fence Effect
Averaging
Time Synchronous Averaging
Pitfalls in the FFT

Forms of the Fourier Transform

There are four forms of the Fourier Transform, as follows:


Fourier Series -- Transforms an infinite periodic time signal into an
infinite discrete frequency spectrum.
Fourier Integral Transform -- Transforms an infinite continuous time
signal into an infinite continuous frequency spectrum.
Discrete Fourier Transform (DFT) -- Transforms a discrete periodic
time signal into a discrete periodic frequency spectrum.
Fast Fourier Transform (FFT) -- A computer algorithm for calculating
the DFT.
These will be discussed in more detail in the next section.
The Fourier Series

The Fourier Series operates on a time signal that is periodic, i.e., a
time signal whose waveform repeats over and over again out to
infinite time. Fourier showed that such a signal is equivalent to a
collection of sine and cosine functions whose frequencies are
multiples of the reciprocal of the period of the time signal. The
rather unexpected result is that any wave shape whatsoever, as
long as it is of finite length, can be represented as the sum of
a collection of harmonic components, and the fundamental
frequency of the harmonic series is 1 divided by the length of the
wave shape. The amplitudes of the various harmonics are called the
Fourier coefficients, and their values can be calculated easily if the
equation for the wave shape is known. They can also be calculated
graphically from the wave shape itself. A certain physics class is
known to have done this with the silhouette of Marilyn Monroe.
They posted the MM coefficients on the bulletin board as an "in"
joke.
Fourier Coefficients

The calculation of the Fourier coefficients is defined as a
mathematical transformation from the time domain to the frequency
domain. One important fact emerges from the Fourier Series, and
that is that the original waveform can be reconstructed from the
frequency coefficients; in other words it is possible to transform
from the frequency domain back to the time domain without loss of
information. The Fourier series is perfectly adequate for performing
frequency analysis on periodic waveforms; that is to say on
deterministic signals.
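As an illustration, the Fourier coefficients of a simple wave shape can be estimated numerically. The following sketch (not part of the original text; the square wave and sample count are arbitrary choices made for the demonstration) sums the products of the waveform with sine and cosine functions over one period:

```python
import numpy as np

# Sketch: numerically estimate Fourier coefficients of a unit square
# wave over one period T = 1. The classic result for a square wave is
# b_n = 4/(n*pi) for odd n, and 0 for even n.
T = 1.0
N = 4096                             # samples over one period
t = np.arange(N) * (T / N)
x = np.where(t < T / 2, 1.0, -1.0)   # square wave: +1 then -1

def fourier_coeffs(x, t, T, n):
    """a_n and b_n estimated by Riemann sums over one period."""
    dt = t[1] - t[0]
    a = (2.0 / T) * np.sum(x * np.cos(2 * np.pi * n * t / T)) * dt
    b = (2.0 / T) * np.sum(x * np.sin(2 * np.pi * n * t / T)) * dt
    return a, b

a1, b1 = fourier_coeffs(x, t, T, 1)
a2, b2 = fourier_coeffs(x, t, T, 2)
print(b1, 4 / np.pi)   # fundamental: close to 4/pi
print(b2)              # even harmonic: close to zero
```

For the square wave, the fundamental sine coefficient works out to 4/pi and the even harmonics vanish, which the numerical sums confirm.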
The Fourier Integral Transform

The natural extension of the Fourier series to encompass time
signals of infinite length, i.e., non-repetitive continuous signals, is
the Fourier Integral Transform, or more simply the Fourier
Transform. This integral will transform any continuous time signal of
arbitrary shape into a continuous spectrum extending to infinite
frequency. An interesting characteristic of the Fourier Transform is
that an event encompassing a short time interval will be spread out
over a wide frequency range and vice versa. This was seen in the
Introduction to Vibration chapter where a spectrum of a short
impulse is shown.
The Discrete Fourier Transform

Neither the Fourier Series nor the Fourier Transform lends itself
easily to calculation by digital computers. To overcome this hurdle,
the so-called Discrete Fourier Transform, or DFT was developed.
Probably the first person to conceive the DFT was Carl Friedrich
Gauss, the famous 19th century German mathematician,
although he certainly did not have a digital computer on which to
implement it. The DFT operates on a sampled, or discrete, signal in
the time domain, and generates from this a sampled, or discrete,
spectrum in the frequency domain. The resulting spectrum is an
approximation of the Fourier Series, an approximation in the sense
that information between the samples of the waveform is lost. The
key to the DFT is the existence of the sampled waveform, i.e., the
possibility of representing the waveform by a series of numbers. To
generate this series of numbers from an analog signal, a process of
sampling and analog to digital conversion is required. The sampled
signal is a mathematical representation of the instantaneous signal
level at precisely defined time intervals. It contains no information
about the signal between the actual sample times.
If the sampling rate is high enough to ensure a reasonable
representation of the shape of the signal, the DFT does produce a
spectrum very close to a theoretically true spectrum. This spectrum
is also discrete, and there is no information between the samples,
or "lines" of the spectrum. In theory, there is no limit to the number
of samples that can be used, or the speed of the sampling, but
there are practical limitations we must live with. Most of these
limitations are the result of using a digital computer as the
calculating agent.
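The DFT itself is a straightforward computation. The sketch below (illustrative only; in practice an FFT library routine would always be used instead) evaluates the DFT directly from its definition and checks it against an FFT routine:

```python
import numpy as np

# Sketch: the DFT computed directly from its definition,
# X[k] = sum_n x[n] * exp(-2j*pi*k*n/N), compared with numpy's FFT.
def dft(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)                  # column of frequency indices
    return np.exp(-2j * np.pi * k * n / N) @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)               # a sampled (discrete) time signal
X_direct = dft(x)                         # O(N^2) direct evaluation
X_fft = np.fft.fft(x)                     # same result, computed much faster
print(np.allclose(X_direct, X_fft))       # → True
```

The two agree to within rounding error; the FFT simply reorganizes the same arithmetic so it takes on the order of N log N operations instead of N squared.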
The Fast Fourier Transform

In order to adapt the DFT for use with digital computers, the so-
called Fast Fourier Transform (FFT) was developed. The FFT is
simply an algorithm for calculating the DFT in a fast and efficient
manner.

Cooley and Tukey are credited with the discovery of the FFT in
1965, but it existed much earlier, although without the digital
computers needed to exploit it. The FFT algorithm places certain
limitations on the signal and the resulting spectrum. For instance,
the sampled signal to be transformed must consist of a number of
samples equal to a power of two. Most FFT analyzers allow 512,
1024, 2048, or 4096 samples to be transformed. The frequency
range covered by FFT analysis depends on the number of samples
collected and on the sampling rate, as will be explained shortly.

Analog to Digital Conversion

The first step in performing an FFT analysis is the actual sampling
process, which is illustrated here:

[Figure: Analog to Digital Conversion]

The sampling is an analog, not digital, process and is accomplished
with a "sample and hold" circuit. The output of this circuit is a
sequence of voltage levels that are fed into an analog to digital
converter (ADC). Here the voltage levels are converted into digital
words representing each sampled level. The accuracy of the
sampled levels depends in part on the number of bits in the digital
words. The greater the number of bits, the lower the noise level and
the greater the dynamic range will be. Most FFT analyzers use 12-
bit words and this produces a dynamic range of about 70 dB
(3,100:1). Fourteen bit words can achieve 80 dB (10,000:1)
dynamic range.
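The roughly 6 dB-per-bit relationship behind these figures can be sketched as follows. This is an idealized calculation; real converters fall a few dB short of the theoretical value, which is consistent with the approximately 70 dB quoted above for 12-bit words:

```python
import math

# Sketch: ideal ADC dynamic range is about 20*log10(2**bits),
# roughly 6 dB per bit. Practical analyzers achieve somewhat less.
def ideal_dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(ideal_dynamic_range_db(12), 1))  # → 72.2
print(round(ideal_dynamic_range_db(14), 1))  # → 84.3
```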

It can be seen here that the sampling rate determines the highest
frequency in the signal that can be encoded. The sampled waveform
cannot know anything about what happens in the signal between
the sampled times. Claude Shannon, the developer of the branch of
mathematics called information theory, determined that to encode
all the information in a signal being sampled, the sampling
frequency must be at least double the highest frequency present in
the signal. This fact is sometimes called the Nyquist criterion.
Aliasing

It is important that there is no information in the sampled waveform
near the sampling frequency to avoid a problem called aliasing.

[Figure: Aliasing]
Here the actual signal is represented in black and the sampled
representation of it is in gray. The vertical lines represent the
sampling frequency. Note that if the sampling frequency is the same
as the sampled frequency, each sample has the same value, and the
output of the sampling circuit will be a constant direct voltage --
obviously having no relation to the frequency of the input signal.
Now note what happens if the actual signal is higher in frequency
than the sampling frequency. The sampler output looks like a very
low frequency, and again it is not a correct representation of the
actual signal. This phenomenon is called aliasing, and it can lead to
gross errors unless it is avoided. The best way to avoid aliasing is to
pass the input signal through an analog low-pass filter whose cut-off
frequency is less than one-half the sampling frequency. In most
modern FFT analyzers, the sampling frequency is set to 2.56 times
the filter cut-off frequency. The filter must have a very sharp cut-off
characteristic, or roll-off, and this means it will also have phase shift
that can affect the data if one needs phase information near the
upper end of the frequency span of the analyzer. To avoid this,
select a frequency span so the frequency in question is in the lower
half of the frequency range. This is important when performing
balancing with an FFT analyzer, where the phase of the 1X vibration
signal is needed.
Aliasing also occurs in other media, such as motion pictures. For
instance, sometimes in western movies the wagon wheel spokes
may appear stopped, or rotating backward. This is optical aliasing,
for a movie is a sampled representation of the original motion.
Another example of optical aliasing is the stroboscope, which is set
to flash at a rate equal to or near the rotation rate of the object
being observed, making it appear stationary or slowly turning.
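A numerical illustration of aliasing, using a hypothetical 90 Hz tone and a 100 Hz sampling rate chosen only for the demonstration:

```python
import numpy as np

# Sketch: a 90 Hz tone sampled at 100 Hz violates the Nyquist
# criterion (fs/2 = 50 Hz) and shows up at its alias frequency,
# fs - f = 10 Hz, in the spectrum.
fs = 100.0                         # sampling frequency, Hz
t = np.arange(100) / fs            # 1 second of samples
x = np.cos(2 * np.pi * 90.0 * t)   # tone well above fs/2
spectrum = np.abs(np.fft.rfft(x))
apparent_hz = np.argmax(spectrum)  # bin spacing is 1 Hz here
print(apparent_hz)                 # → 10, not 90
```

An anti-aliasing filter ahead of the sampler would have removed the 90 Hz tone before it could masquerade as 10 Hz.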
Sampling Rules for Digital Signal Analysis

The data path must contain an analog anti-aliasing low-pass filter.
You must sample at least twice as fast as the highest frequency to
be analyzed.
The frequency range of the analysis depends on the sampling
frequency.
These rules apply to all FFT analysis, and the analyzer automatically
takes care of them. The anti-aliasing filter is internally set to the
appropriate value for each frequency range of the analyzer. The
total sampling time is called the time record length and the nature
of the FFT dictates that the spacing between the frequency
components in the spectrum (also called the frequency resolution)
is 1 divided by the record length. For instance, if the frequency
resolution is one Hz, then the record length is one second, and if
the resolution is 0.1 Hz, then the record length is 10 seconds, etc.
From this it can be seen that in order to perform high resolution
spectrum analysis relatively long times are required to collect the
data. This has nothing to do with the speed of the calculations in
the analyzer; it is simply a natural law of frequency analysis.
Leakage

The FFT analyzer is a batch processing device; that is, it samples the
input signal for a specific time interval, collecting the samples in a
buffer, after which it performs the FFT calculation on that "batch"
and displays the resulting spectrum.
If a sinusoidal signal waveform is passing through zero level at the
beginning and end of the time record, i.e., if the time record
encompasses exactly an integral number of cycles of the waveform,
the resulting FFT spectrum will consist of a single line with the
correct amplitude and at the correct frequency. If, on the other
hand, the signal level is not at zero at one or both ends of the time
record, truncation of the waveform will occur, resulting in a
discontinuity in the sampled signal. This discontinuity is not handled
well by the FFT process, and the result is a smearing of the
spectrum from a single line into adjacent lines. This is called
"leakage"; it is as if the energy in the signal "leaks" from its proper
location into the adjacent lines.
The shape of the "leaky" spectrum depends on the amount of signal
truncation, and is generally unpredictable for real signals.
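Leakage is easy to demonstrate numerically. In this sketch (the tone frequencies and record size are illustrative values), a tone with an exact integer number of cycles in the record produces a single line, while a tone with 32.5 cycles smears across many lines:

```python
import numpy as np

# Sketch: an exact integer number of cycles in the time record gives
# a single spectral line; a non-integer number leaks into many lines.
N = 1024
n = np.arange(N)
exact = np.sin(2 * np.pi * 32.0 * n / N)      # 32 whole cycles
truncated = np.sin(2 * np.pi * 32.5 * n / N)  # 32.5 cycles -> leakage

S_exact = np.abs(np.fft.rfft(exact)) / (N / 2)
S_leaky = np.abs(np.fft.rfft(truncated)) / (N / 2)

# Energy confined to one line versus smeared over many:
print(np.sum(S_exact > 0.01))   # → 1
print(np.sum(S_leaky > 0.01))   # dozens of lines
```

The 32.5-cycle signal ends mid-cycle, so the implied discontinuity spreads its energy over a broad skirt of adjacent lines.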

Windows

In order to reduce the effect of leakage, it is necessary to see to it
that the signal level is zero at the beginning and end of the time
record. This is done by multiplying the data samples by a so-called
"windowing" or "weighting" function, which can have several
different shapes. The most common forms of windows and their uses
are considered next.

If there is no windowing function used, this is called "Rectangular",
"Flat", or "Uniform" windowing. In the figure above, the effect of the
data truncation can be seen as discontinuities in the windowed
waveform. The FFT analyzer only knows what is in the time window,
or time record. It assumes the actual signal contains the
discontinuities, and they are the cause of the leakage seen in the
previous figure. Leakage could be avoided if the input waveform
zero crossings were synchronized with the sampling times, but this
is impossible to achieve in practice.
Windowing for Transient Signals

In the case where the input signal is a transient, it will by definition
begin and end at zero level, and as long as it is entirely within the
time record, no truncation will occur, and the analysis will be correct
because the FFT sees the entire signal. It is very important that the
entire transient fit into the record, and the record length is
dependent upon the frequency range of the analysis. Most FFT
analyzers allow the user to see the time record on the screen, so it
can be assured that this condition is met.
The Hanning Window

The Hanning window, named after the Austrian meteorologist Julius
von Hann, has the shape of one cycle of a cosine wave with 1 added
to it so it is always positive. The sampled signal values are multiplied
by the Hanning function, and the result is shown in the figure. Note
that the ends of the time record are forced to zero regardless of
what the input signal is doing.
While the Hanning window does a good job of forcing the ends to
zero, it also adds distortion to the waveform being analyzed in the
form of amplitude modulation, i.e., a variation in the amplitude of the
signal over the time record. Amplitude modulation in a waveform
results in sidebands in its spectrum, and in the case of the Hanning
window, these sidebands, or side lobes as they are called,
effectively reduce the frequency resolution of the analyzer by 50%.
It is as if the analyzer frequency "lines" are made wider. In the
illustration here, the curve is the actual filter shape that the FFT
analyzer with Hanning weighting produces. Each line of the FFT
analyzer has the shape of this curve -- only one is shown in the
figure.
If a signal component is at the exact frequency of an FFT line, it will
be read at its correct amplitude, but if it is at a frequency one-half
ΔF away (half the distance between lines), it will be read at an
amplitude that is too low by 1.4 dB.

The illustration shows this effect, and also shows the side lobes
created by the Hanning window. The highest-level side lobes are
about 32 dB down from the main lobe.
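This scalloping loss can be verified numerically. The sketch below (the tone frequencies are arbitrary choices) compares the peak level of a Hanning-weighted tone centered on a line with one falling midway between lines:

```python
import numpy as np

# Sketch: the ~1.4 dB "scalloping" loss when a tone falls exactly
# between two FFT lines of a Hanning-weighted analysis.
N = 1024
n = np.arange(N)
w = np.hanning(N)

def peak_db(cycles):
    """Peak spectral magnitude, in dB, of a windowed tone."""
    x = np.cos(2 * np.pi * cycles * n / N) * w
    return 20 * np.log10(np.max(np.abs(np.fft.rfft(x))))

loss_db = peak_db(100.0) - peak_db(100.5)  # on-line vs half-line tone
print(round(loss_db, 2))                   # close to 1.42
```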

The measured amplitude of the Hanning weighted signal is also
incorrect because the weighting process removes essentially half of
the signal level. This can be easily corrected, however, simply by
multiplying the spectral levels by two, and the FFT analyzer does
this job. This process assumes the amplitude of the signal is
constant over the sampling interval. If it is not, as is the case with a
transient signal, the amplitude calculation will be in error, as shown
in the figure below.

The Hanning window should always be used with continuous signals,
but must never be used with transients. The reason is that the
window shape will distort the shape of the transient, and the
frequency and phase content of a transient is intimately connected
with its shape.
The measured level will also be greatly distorted. Even if the
transient were in the center of the Hanning window, the measured
level would be twice as great as the actual level because of the
amplitude correction the analyzer applies when using the Hanning
weighting.
A Hanning weighted signal actually is only half there, the other half
of it having been removed by the windowing. This is not a problem
with a perfectly smooth and continuous signal like a sinusoid, but
most signals we want to analyze, such as machine vibration
signatures, are not perfectly smooth. If a small change occurs in the
signal near the beginning or end of the time record, it will either be
analyzed at a much lower level than its true level, or it may be
missed altogether. For this reason, it is a good idea to employ
overlap processing. To do this, two time buffers are required in the
analyzer. For 50% overlap, the sequence of events is as follows:
When the first buffer is half full, i.e., it contains half the samples of
a time record, the second buffer is connected to the data stream
and also begins to collect samples. As soon as the first buffer is full,
the FFT is calculated, and the buffer begins to take data again.
When the second buffer is filled, the FFT is again calculated on its
contents, and the result sent to the spectrum-averaging buffer. This
process continues on until the desired number of averages is
collected.

Overlap Processing

Overlap processing can only be achieved if the time required to
calculate the FFT is shorter than the time record length. If this is
not the case, the spectral calculations will lag behind the data
acquisition, leaving gaps of unanalyzed signal. See also the
paragraph on real time speed later in this section.

If the overlap is 2/3, i.e., 66.7%, then the overall time weighting of
the data will be flat, and there is no advantage to using a greater
overlap. Most data collection for machinery analysis uses 50% data
overlap, which provides adequate amplitude accuracy for most
vibration work.
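The two-buffer procedure described above can be sketched as follows. This is a simplified model, not any particular analyzer's implementation; the record length, tone, and Hanning weighting are example choices:

```python
import numpy as np

# Sketch of 50% overlap averaging: successive records share half
# their samples, so signal features near a record boundary (where the
# Hanning window is near zero) are caught by the neighboring record.
def averaged_spectrum(signal, record_len, overlap=0.5):
    step = int(record_len * (1 - overlap))
    w = np.hanning(record_len)
    spectra = []
    for start in range(0, len(signal) - record_len + 1, step):
        rec = signal[start:start + record_len] * w
        # factor of 2 restores the amplitude removed by the window
        spectra.append(2 * np.abs(np.fft.rfft(rec)) / (record_len / 2))
    return np.mean(spectra, axis=0)

fs = 1024
t = np.arange(8 * fs) / fs               # 8 seconds of data
x = np.sin(2 * np.pi * 50 * t)           # 50 Hz tone, amplitude 1
S = averaged_spectrum(x, record_len=1024)
print(round(S[50], 2))                   # → 1.0
```

With the amplitude correction applied, the averaged spectrum recovers the tone at very nearly its true level.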
Here is a summary of the relationship between sampling rate,
number of samples, time record length, and frequency resolution
that affect FFT analysis. The sampling rate in samples per second,
times the time record length T in seconds, equals the number of
samples N. In the FFT analyzer, the number of samples N is
constrained to a power of two.

[Figure: FFT Fundamentals]

The FFT algorithm, operating on N samples of time data produces
N/2 frequency lines. Thus a time record of 512 samples will
generate a spectrum of 256 lines. FFT analyzers generally do not
display the upper spectral lines because of the possibility of their
being contaminated by aliased components. This is because the
anti-aliasing filter is not perfect, and has a finite slope in its cut-off
range. Therefore, a 256 line spectrum will be displayed as a 200
line spectrum, and a 512-line spectrum will be displayed as a 400
line spectrum, etc.
The frequency resolution, ΔF, is equal to the frequency span divided
by the number of lines, and this is equal to 1/T. Conversely, the
time record length T equals 1/ΔF. From this it can be seen that as
the frequency resolution becomes finer (smaller ΔF), the time record
length increases in proportion. For this reason, creating a
high-resolution spectrum requires a relatively long time to acquire
the data.
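These bookkeeping relationships can be put together in one short calculation. The 1000 Hz span and 2048-sample record below are example values, and the 2.56 ratio is the one quoted earlier in this section:

```python
# Sketch: the relationships among sampling rate, record length,
# number of samples, and frequency resolution for a typical setup.
fmax = 1000.0                  # chosen frequency span, Hz
fs = 2.56 * fmax               # sampling rate used by most analyzers
N = 2048                       # samples per record (a power of two)
T = N / fs                     # time record length, seconds
df = 1.0 / T                   # frequency resolution (line spacing), Hz
lines = N // 2                 # raw FFT lines
displayed = round(lines / 1.28)  # lines actually shown on screen
print(T, df, lines, displayed)   # → 0.8 1.25 1024 800
```

So a 1000 Hz span analyzed with 2048 samples takes a 0.8 second record, yields 1.25 Hz resolution, and displays 800 of its 1024 lines.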
The Picket Fence Effect

As has been mentioned before, the FFT spectrum is a discrete
spectrum, consisting of estimates of what the spectral level is at
specific frequencies. These frequencies are determined by the
analysis parameters that are set up in the analyzer, and have
nothing to do with the signal being analyzed. This means there may
be, and probably are, peaks in the true spectrum of the signal that
are between the lines of the FFT analysis. This also means that in
general, the peaks in an FFT spectrum will be measured too low in
level, and the valleys will be measured too high. Moreover, the true
frequencies where the peaks and valleys lie will not be those
indicated in the FFT spectrum.

This phenomenon is called resolution bias error, or more commonly,
the picket fence effect. In other words, looking at an FFT spectrum
is a little like looking at a mountain range through a picket fence.
Averaging

One of the important capabilities of the FFT analyzer is that it can
easily average spectra over time. In general, the vibration
signal from a rotating machine is not completely deterministic, but
has some random noise superimposed on it. Because the noise is
unpredictable, it alters the spectrum shape, and in many cases can
seriously distort the spectrum. If a series of spectra are averaged
together, the noise will gradually assume a smooth shape, and the
spectral peaks due to the deterministic part of the signal will stand
out and their levels will be more accurately represented. It is not
true that simply averaging FFT spectra will reduce the amount of
the noise -- the noise will be smoothed but its level will not be
reduced.
There are two types of averaging in general use in FFT analyzers,
called linear averaging and exponential averaging. Linear averaging
is the adding together of a number of spectra and then dividing the
total by the number that was added. This is done for each line of
the spectra and the result is a true arithmetic average on a line-by-
line basis. Exponential averaging generates a continuous running
average where the most recently collected spectra have more
influence on the average than older ones. This provides a
convenient way to examine changing data while still having the
benefit of some averaging to smooth the spectra and reduce their
apparent noisiness.
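Both schemes are simple to express on a line-by-line basis. The sketch below uses made-up spectra (one fixed peak plus random noise) and shows that averaging smooths the noise while the deterministic peak keeps its level:

```python
import numpy as np

# Sketch: linear vs exponential averaging, applied line by line.
def linear_average(spectra):
    return np.mean(spectra, axis=0)

def exponential_average(spectra, alpha=0.25):
    avg = np.array(spectra[0], dtype=float)
    for s in spectra[1:]:
        # newer spectra weigh more; older ones decay geometrically
        avg = (1 - alpha) * avg + alpha * s
    return avg

rng = np.random.default_rng(1)
true = np.zeros(100)
true[10] = 1.0                 # one deterministic spectral peak
spectra = [true + 0.2 * rng.standard_normal(100) for _ in range(64)]

lin = linear_average(spectra)
expn = exponential_average(spectra)
# both retain the peak at line 10 while smoothing the noise floor
print(round(lin[10], 1), round(expn[10], 1))
```

Note that in both cases the noise floor is smoothed but not lowered, exactly as stated above.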
Time Synchronous Averaging

Time synchronous averaging, also called time domain averaging, is
a completely different type of averaging, where the waveform itself
is averaged in a buffer before the FFT is calculated. In order to do
time domain averaging, a reference trigger pulse must be input to
the analyzer to tell it when to start sampling the signal. This trigger
is typically synchronized with an element of the machine that is of
interest.

The average gradually accumulates those portions of the signal that
are synchronized with the trigger, and other parts of the signal,
such as noise, are effectively averaged out. This is the only type of
averaging which actually does reduce noise.
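A simulation of the idea (with hypothetical numbers: 256 samples per revolution and 400 triggered records) shows the noise reduction, which grows as the square root of the number of averages:

```python
import numpy as np

# Sketch: time synchronous averaging. Waveform records, each begun
# at a (simulated) trigger locked to the shaft, are averaged sample
# by sample; asynchronous noise cancels, the synchronous part remains.
rng = np.random.default_rng(2)
N = 256                                    # samples per revolution
n = np.arange(N)
sync_part = np.sin(2 * np.pi * n / N)      # 1X shaft-synchronous signal

records = [sync_part + 1.0 * rng.standard_normal(N) for _ in range(400)]
tsa = np.mean(records, axis=0)             # average waveforms, then FFT

noise_before = np.std(records[0] - sync_part)
noise_after = np.std(tsa - sync_part)
print(noise_before / noise_after)          # roughly sqrt(400) = 20
```

After 400 averages the random noise is reduced about twentyfold, while the shaft-synchronous sine wave survives at full amplitude.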
More information on applications of time synchronous averaging can
be found in the next chapter on Machine Vibration Monitoring.
Pitfalls in the FFT

This is a summary of the pitfalls that plague the FFT analysis
technique. This is not to say that FFT analysis is no good -- on the
contrary, it has revolutionized the analysis of vibration data. The
important fact is that the problems with FFT analysis can be
overcome by proper technique, and the residual effects that remain
can be reduced to insignificant levels.
Sampling causes aliasing.
Time limitation causes leakage.
Discrete frequencies in the calculated spectrum cause the picket
fence effect.
