
Section for Cognitive Systems
Informatics and Mathematical Modelling
Technical University of Denmark

Correlation Functions and Power Spectra

Jan Larsen
8th Edition
© 1997-2009 by Jan Larsen
Contents

Preface
1 Introduction
2 Aperiodic Signals
3 Periodic Signals
4 Random Signals
  4.1 Stationary Signals
  4.2 Ergodic Signals
  4.3 Sampling of Random Signals
  4.4 Discrete-Time Systems and Power/Cross-Power Spectra
5 Mixing Random and Deterministic Signals
  5.1 Ergodicity Result
  5.2 Linear Mixing of Random and Periodic Signals
A Appendix: Properties of Correlation Functions and Power Spectra
  A.1 Definitions
  A.2 Definitions of Correlation Functions and Power Spectra
  A.3 Properties of Autocorrelation Functions
  A.4 Properties of Power Spectra
  A.5 Properties of Crosscorrelation Functions
  A.6 Properties of Cross-Power Spectra
References
Preface

The present note is a supplement to the textbook Digital Signal Processing [5, 6] used in the DTU course 02451 (former 04361) Digital Signal Processing (Digital Signalbehandling).

The note addresses correlation functions and power spectra and extends the material in Ch. 12 [5, 6].

Parts of the note are based on material by Peter Koefoed Møller used in the former DTU course 4232 Digital Signal Processing.

The 6th edition provides an improvement of Example 3.2, for which Olaf Peter Strelcyk is acknowledged. In the 7th edition small errors and references are corrected. In the 8th edition a few topics are elaborated.

Jan Larsen
Kongens Lyngby, November 2009

The manuscript was typeset in 11 point Times Roman using LaTeX2e.
1 Introduction

The definitions of correlation functions and spectra for discrete-time and continuous-time (analog) signals are quite similar. Consequently, we confine the discussion mainly to real discrete-time signals. The Appendix contains detailed definitions and properties of correlation functions and spectra for analog as well as discrete-time signals.

It is possible to define correlation functions and associated spectra for aperiodic, periodic and random signals, although the interpretation is different. Moreover, we will discuss correlation functions when mixing these basic signal types. In addition, the note includes several examples for the purpose of illustrating the discussed methods.
2 Aperiodic Signals

The crosscorrelation function for two aperiodic, real, finite-energy discrete-time signals x_a(n), y_a(n) is given by:

    r_{x_a y_a}(m) = Σ_{n=−∞}^{∞} x_a(n) y_a(n−m) = x_a(m) ∗ y_a(−m)    (1)

(In the case of complex signals the crosscorrelation function is defined by r_{x_a y_a}(m) = x_a(m) ∗ y*_a(−m), where y*_a(m) is the complex conjugate.) Note that r_{x_a y_a}(m) is also an aperiodic signal. The autocorrelation function is obtained by setting x_a(n) = y_a(n). The associated cross-energy spectrum is given by

    S_{x_a y_a}(f) = Σ_{m=−∞}^{∞} r_{x_a y_a}(m) e^{−j2πfm} = X_a(f) Y*_a(f)    (2)

The energy of x_a(n) is given by

    E_{x_a} = ∫_{−1/2}^{1/2} S_{x_a x_a}(f) df = r_{x_a x_a}(0)    (3)
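As a quick illustration, Eq. (1) and Eq. (2) can be checked numerically. The sketch below assumes Python with NumPy; the two signals are arbitrary example data, not taken from the note.

    import numpy as np

    # Two short, real, finite-energy (aperiodic) signals: arbitrary example data
    x = np.array([1.0, 2.0, 3.0, 0.0, -1.0])
    y = np.array([0.5, -1.0, 2.0])

    # Eq. (1): r_xy(m) = sum_n x(n) y(n-m); 'full' covers lags m = -(len(y)-1)..len(x)-1
    r_xy = np.correlate(x, y, mode="full")

    # The same quantity written as the convolution x(m) * y(-m)
    assert np.allclose(r_xy, np.convolve(x, y[::-1]))

    # Eq. (2): S_xy(f) = X(f) Y*(f); with sufficient zero padding the inverse FFT
    # returns the crosscorrelation with negative lags wrapped to the end
    n = len(x) + len(y) - 1
    r_fft = np.fft.ifft(np.fft.fft(x, n) * np.conj(np.fft.fft(y, n))).real
    assert np.allclose(np.concatenate([r_fft[-(len(y) - 1):], r_fft[:len(x)]]), r_xy)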
3 Periodic Signals

The crosscorrelation function for two periodic, real, finite-power discrete-time signals x_p(n), y_p(n) with a common period N is given by:

    r_{x_p y_p}(m) = (1/N) Σ_{n=0}^{N−1} x_p(n) y_p((n−m))_N = x_p(m) ⊛ y_p(−m)    (4)

where ⊛ denotes circular convolution. Note that r_{x_p y_p}(m) is a periodic signal with period N. The associated cross-power spectrum is given by:

    S_{x_p y_p}(k) = (1/N) Σ_{m=0}^{N−1} r_{x_p y_p}(m) e^{−j2π(k/N)m} = X_p(k) Y*_p(k)    (5)

where X_p(k), Y_p(k) are the spectra of x_p(n), y_p(n). (Note that this definition of the spectrum follows [5, 6, Ch. 4.2] and differs from the definition of the DFT in [5, 6, Ch. 7]; the relation is DFT{x_p(n)} = N X_p(k).) The spectrum is discrete with components at frequencies f = k/N, k = 0, 1, ..., N−1, or F = kF_s/N where F_s is the sampling frequency. Further, the spectrum is periodic: S_{x_p y_p}(k) = S_{x_p y_p}(k + N).
The power of x_p(n) is given by

    P_{x_p} = ∫_{−1/2}^{1/2} S_{x_p x_p}(f) df = r_{x_p x_p}(0)    (6)
Example 3.1 Determine the autocorrelation function and power spectrum of the tone signal:

    x_p(n) = a cos(2πf_x n + θ)

with frequency 0 ≤ f_x ≤ 1/2. The necessary requirement for x_p(n) to be periodic is that the fundamental integer period N is chosen according to Nf_x = q, where q is an integer. That means f_x has to be a rational number. If f_x = A/B is an irreducible fraction we choose N_min = B. Of course any N = ℓN_min, ℓ = 1, 2, 3, ..., is a valid choice. Consequently, using Euler's formula with q = ℓA gives:

    x_p(n) = a cos(2π(q/N)n + θ) = (a/2) e^{jθ} e^{j2π(q/N)n} + (a/2) e^{−jθ} e^{−j2π(q/N)n}

Thus, since x_p(n) = Σ_{k=0}^{N−1} X_p(k) e^{j2π(k/N)n}, the spectrum is:

    X_p(k) = (a/2) e^{jθ} δ(k − q) + (a/2) e^{−jθ} δ(k + q)

where δ(n) is the Kronecker delta function. The power spectrum is then found as:

    S_{x_p x_p}(k) = X_p(k) X*_p(k) = (a²/4) (δ(k − q) + δ(k + q))

Using the inverse Fourier transform,

    r_{x_p x_p}(m) = (a²/2) cos(2π(q/N)m)

When mixing periodic signals with signals which have continuous spectra, it is necessary to determine the spectrum S_{x_p x_p}(f), where −1/2 ≤ f ≤ 1/2 is the continuous frequency. Using that the constant (DC) signal (a/2)e^{jθ} has the spectrum (a/2)e^{jθ}δ(f), where δ(f) is the Dirac delta function, and employing the frequency shift property, we get:

    a cos(2πf_x n + θ) = (a/2) e^{jθ} e^{j2πf_x n} + (a/2) e^{−jθ} e^{−j2πf_x n}

That is,

    X_p(f) = (a/2) e^{jθ} δ(f − f_x) + (a/2) e^{−jθ} δ(f + f_x)

and thus

    S_{x_p x_p}(f) = (a²/4) (δ(f − f_x) + δ(f + f_x))    □
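Example 3.1 is easy to verify numerically. The following sketch (Python/NumPy assumed; the amplitude, phase and f_x = 4/33 are arbitrary choices) computes the circular autocorrelation of a tone over one fundamental period and compares it with r_{x_p x_p}(m) = (a²/2) cos(2πqm/N):

    import numpy as np

    a, theta = 1.5, 0.7          # arbitrary amplitude and phase
    A, B = 4, 33                 # f_x = A/B irreducible, so N_min = B and q = A
    N = B
    n = np.arange(N)
    x = a * np.cos(2 * np.pi * (A / B) * n + theta)

    # Periodic autocorrelation, Eq. (4): r(m) = (1/N) sum_n x(n) x((n-m) mod N)
    m = np.arange(N)
    r = np.array([np.mean(x * np.roll(x, lag)) for lag in m])

    # Theory from Example 3.1: r(m) = (a^2 / 2) cos(2*pi*q*m/N)
    assert np.allclose(r, (a**2 / 2) * np.cos(2 * np.pi * A * m / N))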
Example 3.2 Consider two periodic discrete-time signals x_p(n), y_p(n) with fundamental frequencies 0 ≤ f_x ≤ 1/2 and 0 ≤ f_y ≤ 1/2, respectively. Give conditions for which the cross-power spectrum vanishes.

Let us first consider finding a common period N, i.e., we have the requirements Nf_x = p_x and Nf_y = p_y, where p_x, p_y are integers. It is possible to fulfill these requirements only if both f_x and f_y are rational numbers. Suppose that f_x = A_x/B_x and f_y = A_y/B_y, where A_x, B_x, A_y, B_y are integers; then the minimum common period is N_min = lcm(B_x, B_y), where lcm(·,·) is the least common multiple. (In order to find the least common multiple of A and B, we first prime-factorize A and B; lcm(A, B) is then the product of these prime factors raised to the greatest power in which they appear.) If N is chosen as N = ℓN_min, where ℓ = 1, 2, 3, ..., the signals will be periodic, and x_p(n) has potential components at k_x = p_x q_x, where p_x = ℓN_min f_x and q_x = 0, 1, 2, ..., ⌈1/f_x⌉ − 1. Similarly, y_p(n) has potential components at k_y = p_y q_y, where p_y = ℓN_min f_y and q_y = 0, 1, 2, ..., ⌈1/f_y⌉ − 1. The cross-power spectrum does not vanish if k_x = k_y occurs for some choice of q_x, q_y. Suppose that we choose a common period N = N_min = B_x B_y; then k_x = Nf_x q_x = B_y A_x q_x and k_y = Nf_y q_y = B_x A_y q_y. Now, if x_p(n) has a non-zero component at q_x = B_x A_y and y_p(n) has a non-zero component at q_y = B_y A_x, then k_x = k_y and the cross-power spectrum does not vanish. (If N = ℓN_min this situation occurs for q_x = B_x A_y/ℓ and q_y = B_y A_x/ℓ.) Otherwise, the cross-power spectrum will vanish. If N is not chosen as N = ℓN_min, the cross-power spectrum does generally not vanish.

Let us illustrate the ideas by considering x_p(n) = cos(2πf_x n) and y_p(n) = cos(2πf_y n).

Case 1: In the first case we choose f_x = 4/33 and f_y = 2/27. B_x = 3 · 11 and B_y = 3³, i.e., N_min = lcm(B_x, B_y) = 3³ · 11 = 297. Choosing N = N_min, x_p(n) has components at k_x = 36 and k_x = 297 − 36 = 261. y_p(n) has components at k_y = 22 and k_y = 297 − 22 = 275. The cross-power spectrum thus vanishes.

Case 2: In this case we choose f_x = 1/3, f_y = 1/4 and N = 10; thus N_min = lcm(B_x, B_y) = lcm(3, 4) = 12. Since N is not ℓN_min the stated result above does not apply. In fact, the cross-power spectrum does not vanish, as shown in Fig. 1.    □

Figure 1: Magnitude spectra |X_p(k)|, |Y_p(k)| and magnitude cross-power spectrum |S_{x_p y_p}(k)| = |X_p(k) Y*_p(k)|.
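The two cases can be reproduced numerically, e.g., with the following sketch (Python/NumPy assumed); it evaluates S_{x_p y_p}(k) = X_p(k) Y*_p(k) using the spectrum convention of Eq. (5):

    import numpy as np

    def cross_power(fx, fy, N):
        # Cross-power spectrum S_xy(k) = X_p(k) Y_p*(k) of two cosines over N samples
        n = np.arange(N)
        x = np.cos(2 * np.pi * fx * n)
        y = np.cos(2 * np.pi * fy * n)
        X = np.fft.fft(x) / N        # spectrum convention of Eq. (5): DFT / N
        Y = np.fft.fft(y) / N
        return X * np.conj(Y)

    # Case 1: common period N_min = lcm(33, 27) = 297 -> the spectrum vanishes
    S1 = cross_power(4 / 33, 2 / 27, 297)
    print(np.max(np.abs(S1)))   # ~1e-14: zero up to rounding

    # Case 2: N = 10 is not a common period -> the spectrum does not vanish
    S2 = cross_power(1 / 3, 1 / 4, 10)
    print(np.max(np.abs(S2)))   # clearly non-zero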
4 Random Signals

A random signal or stochastic process X(n) has random amplitude values, i.e., for all time indices X(n) is a random variable. A particular realization of the random signal is x(n). The random signal is characterized by its probability density function (PDF) p(x_n), also referred to as the first-order distribution, where x_n is a particular value of the signal. As an example, p(x_n) could be Gaussian with zero mean, μ = E[x_n] = ∫ x_n p(x_n) dx_n = 0, and variance σ_x² = E[(x_n − μ)²] = ∫ (x_n − μ)² p(x_n) dx_n. That is, the PDF is

    p(x_n) = (1/√(2πσ_x²)) e^{−x_n²/(2σ_x²)},  ∀n    (7)

Fig. 2 shows three different realizations x(n, ξ), ξ = 1, 2, 3, of the random signal. The family of different realizations is denoted the ensemble. Note that, e.g., for n = n_1 the outcomes of x(n_1) are different for different realizations. If one generated an infinite number of realizations, ξ = 1, 2, ..., ∞, then these would reflect the distribution, as shown by

    P(x; n) = Prob{X(n) ≤ x_n} = lim_{K→∞} (1/K) Σ_{ξ=1}^{K} u(x_n − x(n, ξ))    (8)

where P(x; n) is the (cumulative) distribution function and u(·) is the step function, which is zero if the argument is negative and one otherwise. The density function p(x_n) is the derivative of the distribution function, p(x_n) = ∂P(x; n)/∂x.

Figure 2: Three different realizations x(n, ξ), ξ = 1, 2, 3 of a random signal.
Random signals can be classified according to the taxonomy in Fig. 3.

Figure 3: Taxonomy of random signals: random signals split into stationary and non-stationary signals; stationary signals are further divided into ergodic and non-ergodic signals, while cyclostationary signals constitute a special case of non-stationary signals. Stationary signals are treated in Sec. 4.1, ergodic signals in Sec. 4.2, and cyclostationary signals are briefly mentioned in Sec. 5.2.
4.1 Stationary Signals

In general we consider kth-order joint probability densities associated with the signal x(n), defined by p(x_{n_1}, x_{n_2}, ..., x_{n_k}), i.e., the joint probability of the x(n_i), i = 1, 2, ..., k.

A signal is strictly stationary if

    p(x_{n_1}, x_{n_2}, ..., x_{n_k}) = p(x_{n_1+τ}, x_{n_2+τ}, ..., x_{n_k+τ}),  ∀τ, ∀k    (9)

That is, for any k, the kth-order probability density does not change over time, i.e., it is invariant to any time shift τ.

Normally we consider only wide-sense stationarity (also known as second-order stationarity or weak stationarity), in which the random signal is characterized by its time-invariant mean value and autocorrelation function. The mean value is defined by:

    m_x = E[x(n)] = ∫_{−∞}^{∞} x_n p(x_n) dx_n    (10)

where E[·] is the expectation operator. The autocorrelation function is defined by:

    γ_{xx}(m) = E[x(n)x(n−m)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x_n x_{n−m} p(x_n, x_{n−m}) dx_n dx_{n−m}    (11)
Since the 2nd-order probability density p(x_n, x_{n−m}) is invariant to time shifts for wide-sense stationary processes, the autocorrelation function is a function of m only.

The covariance function is closely related to the autocorrelation function and is defined by:

    c_{xx}(m) = E[(x(n) − m_x)(x(n−m) − m_x)] = γ_{xx}(m) − m_x²    (12)

For two signals x(n), y(n) we further define the crosscorrelation and crosscovariance functions as:

    γ_{xy}(m) = E[x(n)y(n−m)]    (13)
    c_{xy}(m) = E[(x(n) − m_x)(y(n−m) − m_y)] = γ_{xy}(m) − m_x m_y    (14)

If γ_{xy}(m) = m_x m_y, i.e., c_{xy}(m) = 0 for all m, the signals are said to be uncorrelated, and if γ_{xy}(m) = 0 they are said to be orthogonal.

The power spectrum and cross-power spectrum are defined as the Fourier transforms of the autocorrelation and crosscorrelation functions, respectively, i.e.,

    Γ_{xx}(f) = Σ_{m=−∞}^{∞} γ_{xx}(m) e^{−j2πfm}    (15)
    Γ_{xy}(f) = Σ_{m=−∞}^{∞} γ_{xy}(m) e^{−j2πfm}    (16)

The power of x(n) is given by

    P_x = ∫_{−1/2}^{1/2} Γ_{xx}(f) df = γ_{xx}(0) = E[x²(n)]    (17)

The inverse Fourier transforms read:

    γ_{xx}(m) = ∫_{−1/2}^{1/2} Γ_{xx}(f) e^{j2πfm} df    (18)
    γ_{xy}(m) = ∫_{−1/2}^{1/2} Γ_{xy}(f) e^{j2πfm} df    (19)
Example 4.1 Let x(n) be a white noise signal with power P_x, i.e., the power spectrum is flat (white): Γ_{xx}(f) = P_x. By inverse Fourier transform, the associated autocorrelation function is γ_{xx}(m) = P_x δ(m). Note that according to the properties of autocorrelation functions (Sec. A.3), lim_{|m|→∞} γ_{xx}(m) = m_x². That is, the mean value of a white noise signal is m_x = 0.    □
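A minimal numerical illustration of Example 4.1 (Python/NumPy assumed; the power P_x = 2 is an arbitrary choice), using time averages as justified by the ergodicity discussion in Sec. 4.2:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P_x = 100_000, 2.0
    x = rng.normal(scale=np.sqrt(P_x), size=N)    # white noise with power P_x = 2

    # Time-average estimates of gamma_xx(m) for a few lags
    for m in range(4):
        print(m, round(np.mean(x[m:] * x[:N - m]), 3))   # ~2.0 at m = 0, else ~0.0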
Example 4.2 Evaluate the autocorrelation function and power spectrum of the signal z(n) = ax(n) + by(n) + c, where a, b, c are constants and x(n), y(n) are stationary signals with means m_x, m_y, autocorrelation functions γ_{xx}(m), γ_{yy}(m), and crosscorrelation function γ_{xy}(m). Using the definition Eq. (11), the fact that the mean value operator is linear, i.e., E[ax + by] = aE[x] + bE[y], and the symmetry property of the crosscorrelation function (γ_{xy}(−m) = γ_{yx}(m)), we get:

    γ_{zz}(m) = E[z(n)z(n−m)]
              = E[(ax(n) + by(n) + c)(ax(n−m) + by(n−m) + c)]
              = E[a²x(n)x(n−m) + b²y(n)y(n−m) + ab x(n)y(n−m) + ab y(n)x(n−m)
                  + ac x(n) + ac x(n−m) + bc y(n) + bc y(n−m) + c²]
              = a²γ_{xx}(m) + b²γ_{yy}(m) + ab(γ_{xy}(m) + γ_{xy}(−m)) + 2ac m_x + 2bc m_y + c²    (20)

According to Eq. (15) and (16) the power spectrum yields:

    Γ_{zz}(f) = a²Γ_{xx}(f) + b²Γ_{yy}(f) + ab(Γ_{xy}(f) + Γ*_{xy}(f)) + (2ac m_x + 2bc m_y + c²)δ(f)
              = a²Γ_{xx}(f) + b²Γ_{yy}(f) + 2ab Re[Γ_{xy}(f)] + (2ac m_x + 2bc m_y + c²)δ(f)    (21)

Note that the power spectrum is a sum of a continuous part and a delta function at f = 0.    □
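Eq. (20) can be checked by simulation. The sketch below (Python/NumPy assumed) uses an arbitrary correlated pair, y(n) = x(n−1) with x(n) white, so that every correlation function on the right-hand side can also be estimated from the data:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 200_000
    a, b, c = 2.0, -1.0, 0.5

    # y is x delayed one sample (circularly), so x and y are white unit-power
    # signals that are mutually correlated only at lags +/- 1
    x = rng.normal(size=N)
    y = np.roll(x, 1)
    z = a * x + b * y + c

    def corr(u, v, m):
        # Time-average estimate of gamma_uv(m) = E[u(n) v(n-m)], m >= 0
        return np.mean(u[m:] * v[:len(v) - m])

    # Check Eq. (20); here m_x = m_y = 0, so only the c^2 constant survives
    for m in range(3):
        lhs = corr(z, z, m)
        rhs = (a**2 * corr(x, x, m) + b**2 * corr(y, y, m)
               + a * b * (corr(x, y, m) + corr(y, x, m)) + c**2)
        print(m, round(lhs, 3), round(rhs, 3))   # columns agree up to sampling error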
4.2 Ergodic Signals

Assuming a wide-sense stationary signal to be ergodic means that expectations, or ensemble averages, involved in determining the mean or correlation functions can be substituted by time averages. For example,

    m_x = ⟨x(n)⟩ = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)    (22)
    γ_{xy}(m) = ⟨x(n)y(n−m)⟩ = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)y(n−m)    (23)

In the case Eq. (22) holds the signal is said to be mean ergodic, and if Eq. (23) holds the signals are said to be correlation ergodic; see further [4], [5, 6, Ch. 12]. Most physical processes are mean and correlation ergodic and in general we will tacitly assume ergodicity.
4.2.1 Correlation Function Estimates

Suppose that r_{xy}(m) is an estimate of γ_{xy}(m) based on N samples of x(n) and y(n). The estimate r_{xy}(m) is recognized as a random signal since it is a function of the random signals x(n) and y(n). In order to assess the quality of an estimate we normally consider the bias B[r_{xy}(m)], the variance V[r_{xy}(m)], and the mean square error MSE[r_{xy}(m)], defined by:

    B[r_{xy}(m)] = E[r_{xy}(m)] − γ_{xy}(m)    (24)
    V[r_{xy}(m)] = E[(r_{xy}(m) − E[r_{xy}(m)])²]    (25)
    MSE[r_{xy}(m)] = E[(r_{xy}(m) − γ_{xy}(m))²] = B²[r_{xy}(m)] + V[r_{xy}(m)]    (26)

Note that the variance and mean square error are positive quantities.

Suppose that x(n), y(n) are correlation ergodic random signals and we have collected N samples of each signal for n = 0, 1, ..., N−1. Using a truncated version of Eq. (23) an estimate becomes:

    r′_{xy}(m) = (1/(N−m)) Σ_{n=m}^{N−1} x(n)y(n−m),  for m = 0, 1, ..., N−1    (27)

For 0 ≤ m ≤ N−1, the bias is assessed by evaluating

    E[r′_{xy}(m)] = E[(1/(N−m)) Σ_{n=m}^{N−1} x(n)y(n−m)]
                  = (1/(N−m)) Σ_{n=m}^{N−1} E[x(n)y(n−m)]
                  = (1/(N−m)) Σ_{n=m}^{N−1} γ_{xy}(m) = γ_{xy}(m)    (28)

That is, B[r′_{xy}(m)] = 0, and the estimator is said to be unbiased. The variance is more complicated to evaluate. An approximate expression is given by (see also [5, Ch. 14])

    V[r′_{xy}(m)] = (N/(N−m)²) Σ_{n=−∞}^{∞} (γ_{xx}(n)γ_{yy}(n) + γ_{xy}(n−m)γ_{yx}(n+m))    (29)

Provided the sum is finite (which is the case for correlation ergodic signals), the variance vanishes for N → ∞, and consequently lim_{N→∞} r′_{xy}(m) = γ_{xy}(m). The estimate is thus referred to as a consistent estimate. However, notice that V[r′_{xy}(m)] = O(1/(N−m)), i.e., for m close to N the variance becomes very large.

An alternative estimator is given by:

    r_{xy}(m) = (1/N) Σ_{n=m}^{N−1} x(n)y(n−m),  for m = 0, 1, ..., N−1    (30)

For 0 ≤ m ≤ N−1, the bias is evaluated by considering

    E[r_{xy}(m)] = (1/N) Σ_{n=m}^{N−1} E[x(n)y(n−m)] = ((N−m)/N) γ_{xy}(m) = (1 − m/N) γ_{xy}(m)    (31)

That is, the bias is B[r_{xy}(m)] = E[r_{xy}(m)] − γ_{xy}(m) = −m γ_{xy}(m)/N. r_{xy}(m) is thus a biased estimate, but the bias vanishes as N → ∞, for which reason the estimator is referred to as asymptotically unbiased. The variance can be approximated by

    V[r_{xy}(m)] = (1/N) Σ_{n=−∞}^{∞} (γ_{xx}(n)γ_{yy}(n) + γ_{xy}(n−m)γ_{yx}(n+m))    (32)

Thus, generally lim_{N→∞} V[r_{xy}(m)] = 0. Moreover, V[r_{xy}(m)] = O(1/N), which means that the variance does not increase tremendously when m is close to N, as was the case for r′_{xy}(m). The improvement in variance is achieved at the expense of increased bias. This phenomenon is known as the bias-variance dilemma, which is illustrated in Fig. 4. If the objective is to find an estimator which has minimum mean square error, this is achieved by optimally trading off bias and variance according to Eq. (26), the MSE being the sum of the variance and the squared bias. In most situations the r_{xy}(m) estimator has the smallest MSE and is therefore preferable.
Figure 4: The bias/variance dilemma. The sketch indicates, for the unbiased estimate r′_{xy}(m), the mean E[r′_{xy}(m)] = γ_{xy}(m) and the large standard deviation √V[r′_{xy}(m)]; and for the biased estimate r_{xy}(m), the mean E[r_{xy}(m)] ≠ γ_{xy}(m) and the smaller standard deviation √V[r_{xy}(m)].
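The variance behaviour of the two estimators is easy to demonstrate numerically. The following sketch (Python/NumPy assumed; white Gaussian noise, so γ_{xx}(m) = 0 for m ≠ 0) compares Eq. (27) and Eq. (30) at a lag close to N:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 512
    m = N - 4            # a lag close to N, where the variance difference is dramatic

    def r_unbiased(x):
        return np.sum(x[m:] * x[:N - m]) / (N - m)    # Eq. (27)

    def r_biased(x):
        return np.sum(x[m:] * x[:N - m]) / N          # Eq. (30)

    # Monte Carlo over realizations of white noise, where gamma_xx(m) = 0 for m != 0
    est_u = [r_unbiased(rng.normal(size=N)) for _ in range(2000)]
    est_b = [r_biased(rng.normal(size=N)) for _ in range(2000)]
    print(np.var(est_u))   # ~0.25   : O(1/(N - m)) with N - m = 4
    print(np.var(est_b))   # ~1.5e-5 : O(1/N)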
4.3 Sampling of Random Signals

4.3.1 Sampling theorem for random signals

Following [4]: Suppose that x_a(t) is a real stationary random analog signal with power density spectrum Γ_{x_a x_a}(F) which is band-limited by B_x, i.e., Γ_{x_a x_a}(F) = 0 for |F| > B_x. By sampling with a frequency F_s = 1/T > 2B_x, x_a(t) can be reconstructed from the samples x(n) = x_a(nT) by the usual reconstruction formula

    x̂_a(t) = Σ_{n=−∞}^{∞} x(n) · sin((π/T)(t − nT)) / ((π/T)(t − nT))    (33)

The reconstruction x̂_a(t) equals x_a(t) in the mean square sense, i.e.,

    E[(x̂_a(t) − x_a(t))²] = 0    (34)

(Convergence in the mean square sense does not imply convergence everywhere; however, the details are subtle and normally of little practical interest. For further reading on the differences between convergence concepts, see [3, Ch. 8-4].)

The autocorrelation function γ_{x_a x_a}(τ) is a non-random function of time; hence it is an ordinary aperiodic continuous-time signal with spectrum Γ_{x_a x_a}(F). As a consequence, when the sampling theorem is fulfilled, then as usual [5, 6, Ch. 6.1]:

    γ_{xx}(m) = γ_{x_a x_a}(mT)    (35)
    Γ_{xx}(f) = F_s Σ_{k=−∞}^{∞} Γ_{x_a x_a}((f − k)F_s)    (36)
4.3.2 Equivalence of Correlation Functions

In order to study further the equivalence between correlation functions for analog and discrete-time random signals, suppose that x_a(t) and y_a(t) are correlation ergodic random analog signals with power density spectra Γ_{x_a x_a}(F) and Γ_{y_a y_a}(F) band-limited by B_x and B_y, respectively, i.e., Γ_{x_a x_a}(F) = 0 for |F| > B_x and Γ_{y_a y_a}(F) = 0 for |F| > B_y. The crosscorrelation function is defined as:

    γ_{x_a y_a}(τ) = E[x_a(t) y_a(t − τ)] = lim_{T_i→∞} (1/T_i) ∫_{−T_i/2}^{T_i/2} x_a(t) y_a(t − τ) dt    (37)

That is, γ_{x_a y_a}(τ) can be interpreted as the integration of the product signal z_a(t) = x_a(t) y_a(t − τ) for a given fixed τ. The analog integrator is defined as the filter with impulse and frequency responses:

    h_int(t) = { 1/T_i, |t| < T_i/2; 0, otherwise },    H_int(F) = sin(πT_i F)/(πT_i F)    (38)

Thus γ_{x_a y_a}(τ) = lim_{T_i→∞} z_a(t) ∗ h_int(t)|_{t=0}.

The question is: what is the required sampling frequency in order to obtain a discrete-time equivalent γ_{xx}(n) of γ_{x_a x_a}(τ)? Suppose that X(F), Y(F) are the Fourier transforms of realizations of x_a(t), y_a(t) for |t| < T_i/2. Then, since z_a(t) is a product of the two signals, the corresponding Fourier transform is the convolution:

    Z(F) = X(F) ∗ Y(F) e^{−j2πFτ}    (39)

Thus, Z(F) will generally have spectral components for |F| < B_x + B_y. Sampling z_a(t) in accordance with the sampling theorem thus requires F_s > 2(B_x + B_y). The power spectrum Γ_{zz}(f), f = F/F_s, of the discrete-time signal z(n) is sketched in Fig. 5. Notice that in principle we can perform extreme subsampling with F_s arbitrarily close to zero. This causes aliasing; however, since the purpose of the integrator is to pick out the possible DC component, the aliasing does not introduce error. The drawback is that it is necessary to use a large integration time T_i, i.e., the signals need to be observed for a long time. Secondly, we are normally not content with a digital determination of the crosscorrelation for a single lag τ. Often the goal is to determine spectral properties by Fourier transformation of the discrete-time crosscorrelation function. That is, we want γ_{x_a y_a}(τ) for lags τ = m/F_s, where F_s ≥ 2B_xy and B_xy is the band-limit of Γ_{x_a y_a}(F). That is, x_a(t), y_a(t) are sampled with F_s > 2B_xy. According to the table in Sec. A.6, |Γ_{x_a y_a}(F)|² ≤ Γ_{x_a x_a}(F) Γ_{y_a y_a}(F), which means that the band-limit B_xy ≤ min(B_x, B_y). In consequence, x_a(t) and/or y_a(t) are allowed to be under-sampled when considering the crosscorrelation function. (When considering second-order correlation functions and spectra, it suffices to study linear mixing of random signals. Suppose that x_a(t) = g_1(t) + g_2(t) and y_a(t) = g_2(t) + g_3(t), where the g_i(t) signals are all mutually orthogonal with band-limits B_i. The band-limits are B_x = max(B_1, B_2) and B_y = max(B_2, B_3). Since γ_{x_a y_a}(τ) = γ_{g_2 g_2}(τ), B_xy = B_2. Accordingly, B_xy ≤ min(B_x, B_y).)

Figure 5: The power spectrum Γ_{zz}(f) of the sampled z(n), with possible δ-functions located at f + k, k = 0, ±1, ±2, ...; the band edges fall at ±(B_x + B_y)/F_s and ±(B_x − B_y)/F_s.
4.4 Discrete-Time Systems and Power/Cross-Power Spectra

4.4.1 Useful Power/Cross-Power Expressions

Suppose that the real random stationary signals x(n) and y(n) are observed in the interval 0 ≤ n ≤ N−1. Now perform the Fourier transforms of the signals, as shown by:

    X(f) = Σ_{n=0}^{N−1} x(n) e^{−j2πfn},    Y(f) = Σ_{n=0}^{N−1} y(n) e^{−j2πfn}    (40)

Note that X(f) and Y(f) are also (complex) random variables since they are sums of random variables times a deterministic complex exponential function.

The intention is to show that the power and cross-power spectra can be expressed as:

    Γ_{xx}(f) = lim_{N→∞} (1/N) E[|X(f)|²] = lim_{N→∞} (1/N) E[X(f)X*(f)]    (41)
    Γ_{xy}(f) = lim_{N→∞} (1/N) E[X(f)Y*(f)]    (42)

Here we only give the proof of Eq. (42) since the proof of Eq. (41) is similar; see also [1, Ch. 5], [7, Ch. 11]. We start by evaluating

    X(f)Y*(f) = Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} x(n)y(k) e^{−j2πf(n−k)}    (43)

Next, performing the expectation E[·], and using that the expectation of a sum is the sum of expectations, gives

    E[X(f)Y*(f)] = Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} E[x(n)y(k)] e^{−j2πf(n−k)} = Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} γ_{xy}(n−k) e^{−j2πf(n−k)}    (44)

Let m = n − k and notice −(N−1) ≤ m ≤ N−1. In the summation w.r.t. n and k it is easy to verify that a particular value of m appears N − |m| times. Changing the summation w.r.t. n and k into a summation w.r.t. m:

    (1/N) E[X(f)Y*(f)] = (1/N) Σ_{m=−(N−1)}^{N−1} γ_{xy}(m)(N − |m|) e^{−j2πfm}
                       = Σ_{m=−(N−1)}^{N−1} γ_{xy}(m)(1 − |m|/N) e^{−j2πfm}    (45)

By defining the signal v(m) = 1 − |m|/N, N^{−1}E[X(f)Y*(f)] is seen to be the Fourier transform of the product γ_{xy}(m)·v(m). That is,

    (1/N) E[X(f)Y*(f)] = V(f) ∗ Σ_{m=−(N−1)}^{N−1} γ_{xy}(m) e^{−j2πfm}    (46)

where ∗ denotes convolution and V(f) is the spectrum of v(m), given by

    V(f) = (1/N) · sin²(πfN)/sin²(πf)    (47)

which tends to a Dirac delta function, V(f) → δ(f), as N → ∞. Consequently,

    lim_{N→∞} (1/N) E[X(f)Y*(f)] = Σ_{m=−∞}^{∞} γ_{xy}(m) e^{−j2πfm} = Γ_{xy}(f)    (48)

Sufficient conditions are that the crosscovariance c_{xy}(m) = γ_{xy}(m) − m_x m_y obeys

    lim_{N→∞} Σ_{m=−N}^{N} |c_{xy}(m)| < ∞  or  lim_{m→∞} c_{xy}(m) = 0    (49)

These conditions are normally fulfilled and imply that the process is mean ergodic [4].

Eq. (41) and (42) are very useful for determining various power and cross-power spectra in connection with linear time-invariant systems. The examples below show the methodology.
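Before turning to the examples, Eq. (42) can be illustrated by averaging cross-periodograms over an ensemble of realizations. In the sketch below (Python/NumPy assumed), y(n) = 0.8 x(n+1) with x(n) white and the shift taken circularly, so γ_{xy}(m) = 0.8 δ(m−1) and Γ_{xy}(f) = 0.8 e^{−j2πf} holds exactly at the DFT bins:

    import numpy as np

    rng = np.random.default_rng(3)
    N, trials = 256, 4000

    acc = np.zeros(N, dtype=complex)
    for _ in range(trials):
        x = rng.normal(size=N)             # white: Gamma_xx(f) = 1
        y = 0.8 * np.roll(x, -1)           # y(n) = 0.8 x(n+1), circular shift
        acc += np.fft.fft(x) * np.conj(np.fft.fft(y))

    G_xy = acc / (trials * N)              # estimate of (1/N) E[X(f) Y*(f)]
    f = np.arange(N) / N
    err = np.max(np.abs(G_xy - 0.8 * np.exp(-2j * np.pi * f)))
    print(err)                             # small: vanishes as trials grow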
Example 4.3 Find the power spectrum Γ_{yy}(f) and the cross-power spectrum Γ_{xy}(f), where x(n) is a random input signal to an LTI system with impulse response h(n) and output y(n) = h(n) ∗ x(n). Suppose that finite realizations of length N of x(n) and y(n) are given, and denote by X(f) and Y(f) the associated Fourier transforms, which are related as Y(f) = H(f)X(f), where H(f) ↔ h(n) is the frequency response of the filter. In order to find the cross-power spectrum we evaluate

    X(f)Y*(f) = X(f)H*(f)X*(f) = H*(f)X(f)X*(f)    (50)

Since H(f) is deterministic, the expectation becomes

    E[X(f)Y*(f)] = H*(f)E[X(f)X*(f)]    (51)

Dividing by N and performing the limit operation yields:

    Γ_{xy}(f) = lim_{N→∞} (1/N) E[X(f)Y*(f)] = H*(f)Γ_{xx}(f)    (52)

Since Γ_{yx}(f) = Γ*_{xy}(f) we further have the relation

    Γ_{yx}(f) = H(f)Γ_{xx}(f)    (53)

In the time domain, this corresponds to the convolution

    γ_{yx}(m) = h(m) ∗ γ_{xx}(m)    (54)

The output spectrum is found by evaluating

    Y(f)Y*(f) = H(f)X(f)H*(f)X*(f) = |H(f)|²|X(f)|²    (55)

Proceeding as above,

    Γ_{yy}(f) = |H(f)|²Γ_{xx}(f)    (56)

In the time domain:

    γ_{yy}(m) = r_{hh}(m) ∗ γ_{xx}(m) = h(m) ∗ h(−m) ∗ γ_{xx}(m)    (57)    □
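A numerical sketch of Example 4.3 (Python/NumPy assumed; the FIR coefficients are an arbitrary choice). Working directly in the frequency domain with Y(f) = H(f)X(f), the averaged (cross-)periodograms approach H*(f)Γ_{xx}(f) and |H(f)|²Γ_{xx}(f):

    import numpy as np

    rng = np.random.default_rng(4)
    N, trials = 256, 3000
    h = np.array([1.0, 0.5, -0.25])          # arbitrary assumed FIR impulse response
    H = np.fft.fft(h, N)                     # frequency response at the N DFT bins

    Sxy = np.zeros(N, dtype=complex)
    Syy = np.zeros(N)
    for _ in range(trials):
        X = np.fft.fft(rng.normal(size=N))   # white input: Gamma_xx(f) = 1
        Y = H * X                            # Y(f) = H(f) X(f), as in Example 4.3
        Sxy += X * np.conj(Y)
        Syy += np.abs(Y) ** 2

    print(np.max(np.abs(Sxy / (trials * N) - np.conj(H))))    # ~0: Eq. (52)
    print(np.max(np.abs(Syy / (trials * N) - np.abs(H)**2)))  # ~0: Eq. (56)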
Example 4.4 Suppose that a signal source g(n) and a noise source s(n) in Fig. 6 are fully orthogonal. Find the power spectra Γ_{x_1x_1}(f), Γ_{x_2x_2}(f) and the cross-power spectrum Γ_{x_2x_1}(f).

Figure 6: Two microphones x_1(n), x_2(n) record signals from a noise source s(n) and a signal source g(n): x_1(n) = g(n) + h_1(n) ∗ s(n) and x_2(n) = h_3(n) ∗ g(n) + h_2(n) ∗ s(n).

Since s(n) and g(n) are fully orthogonal, the superposition principle is applicable. Using the results of Example 4.3 we find:

    Γ_{x_1x_1}(f) = Γ_{gg}(f) + Γ_{ss}(f)|H_1(f)|²    (58)
    Γ_{x_2x_2}(f) = Γ_{gg}(f)|H_3(f)|² + Γ_{ss}(f)|H_2(f)|²    (59)

In order to determine Γ_{x_2x_1}(f) we use Eq. (42); hence, we evaluate

    X_2(f)X*_1(f) = (H_3(f)G(f) + H_2(f)S(f)) (G*(f) + H*_1(f)S*(f))
                  = |G(f)|²H_3(f) + G*(f)S(f)H_2(f) + G(f)S*(f)H*_1(f)H_3(f) + |S(f)|²H*_1(f)H_2(f)    (60)

Performing the expectation, dividing by N, and finally carrying out the limit operation gives

    Γ_{x_2x_1}(f) = Γ_{gg}(f)H_3(f) + Γ_{ss}(f)H*_1(f)H_2(f)    (61)

Here we used Γ_{gs}(f) = 0 due to the fact that g(n) and s(n) are fully orthogonal.    □
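Eq. (61) can likewise be checked by simulation. The sketch below (Python/NumPy assumed; h_1, h_2, h_3 are arbitrary assumed filters) builds the two microphone signals in the frequency domain so that the algebra of Eq. (60) holds exactly at the DFT bins:

    import numpy as np

    rng = np.random.default_rng(5)
    N, trials = 256, 3000
    h1, h2, h3 = [np.array(c) for c in ([0.7, 0.3], [1.0, -0.5], [0.9, 0.2])]
    H1, H2, H3 = (np.fft.fft(h, N) for h in (h1, h2, h3))

    S21 = np.zeros(N, dtype=complex)
    for _ in range(trials):
        G = np.fft.fft(rng.normal(size=N))   # Gamma_gg(f) = 1
        S = np.fft.fft(rng.normal(size=N))   # Gamma_ss(f) = 1, orthogonal to g
        X1 = G + H1 * S                      # microphone 1
        X2 = H3 * G + H2 * S                 # microphone 2
        S21 += X2 * np.conj(X1)

    S21 /= trials * N
    print(np.max(np.abs(S21 - (H3 + np.conj(H1) * H2))))   # ~0: matches Eq. (61)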
4.4.2 Some Properties

Following [1, Ch. 5.2.4]: suppose that the cross-power spectrum is expressed by its magnitude and phase, i.e.,

    Γ_{xy}(f) = |Γ_{xy}(f)| e^{jθ_{xy}(f)}    (62)

As in Section 4.4.1, consider length-N realizations of x(n) and y(n) with Fourier transforms X(f), Y(f), and form a signal z(n) via the frequency domain relation

    Z(f) = aX(f) + Y(f) e^{jθ_{xy}(f)}    (63)

where a is a real constant. Notice that X(f), Y(f) and Z(f) are random variables, whereas θ_{xy}(f) is not. Using Eq. (63) and the fact that |Z(f)|² ≥ 0 gives:

    a²|X(f)|² + aX(f)Y*(f)e^{−jθ_{xy}(f)} + aX*(f)Y(f)e^{jθ_{xy}(f)} + |Y(f)|² ≥ 0    (64)

Proceeding as in Section 4.4.1, in particular using Eq. (41), the previous equation leads to

    a²Γ_{xx}(f) + aΓ_{xy}(f)e^{−jθ_{xy}(f)} + aΓ_{yx}(f)e^{jθ_{xy}(f)} + Γ_{yy}(f) ≥ 0    (65)

Now, Γ_{xy}(f)e^{−jθ_{xy}(f)} = |Γ_{xy}(f)| and θ_{yx}(f) = −θ_{xy}(f). Consequently,

    a²Γ_{xx}(f) + 2a|Γ_{xy}(f)| + Γ_{yy}(f) ≥ 0    (66)

Note that Eq. (66) is a quadratic inequality in a, which means that the discriminant is negative or zero, i.e.,

    4|Γ_{xy}(f)|² − 4Γ_{xx}(f)Γ_{yy}(f) ≤ 0    (67)

or equivalently

    |Γ_{xy}(f)|² ≤ Γ_{xx}(f)Γ_{yy}(f)    (68)

Evaluating Eq. (66) for a = 1 results in

    2|Γ_{xy}(f)| ≤ Γ_{xx}(f) + Γ_{yy}(f)    (69)

Further properties are listed in the Appendix.
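Eq. (68) says that the magnitude-squared coherence |Γ_{xy}(f)|²/(Γ_{xx}(f)Γ_{yy}(f)) is at most one. A small sketch (Python/NumPy assumed; the signal model is an arbitrary choice) estimating it from spectra averaged over realizations:

    import numpy as np

    rng = np.random.default_rng(6)
    N, trials = 256, 2000
    Sxx = np.zeros(N); Syy = np.zeros(N); Sxy = np.zeros(N, dtype=complex)

    for _ in range(trials):
        x = rng.normal(size=N)
        y = np.roll(x, 2) + 0.5 * rng.normal(size=N)   # partially coherent with x
        X, Y = np.fft.fft(x), np.fft.fft(y)
        Sxx += np.abs(X)**2; Syy += np.abs(Y)**2; Sxy += X * np.conj(Y)

    coh = np.abs(Sxy / trials)**2 / ((Sxx / trials) * (Syy / trials))
    print(coh.max() <= 1.0, coh.mean())   # True; about 1/1.25 = 0.8 in this model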
5 Mixing Random and Deterministic Signals

Signals do not normally appear in their basic pure types like random, periodic or aperiodic. Often it is necessary to be able to handle mixtures of the basic signal types. It is very important to notice that, in general, the mixing of a stationary random signal and a deterministic signal will result in a non-stationary signal.

In order to handle this problem one can adopt the framework of [2, Ch. 2.3]. Suppose that x(n) is a mixed random/deterministic signal. x(n) is said to be quasi-stationary if the following conditions hold:

    (i)  m_x(n) = E[x(n)], ∀n, with |m_x(n)| < ∞    (70)
    (ii) γ_{xx}(n_1, n_2) = E[x(n_1)x(n_2)], ∀n_1, n_2, with |γ_{xx}(n_1, n_2)| < ∞, and
         γ_{xx}(m) = lim_{N→∞} (1/N) Σ_{n_1=0}^{N−1} γ_{xx}(n_1, n_1−m) = ⟨γ_{xx}(n_1, n_1−m)⟩, ∀m    (71)

Here ⟨·⟩ denotes time average, and the expectation E[·] is carried out w.r.t. the random components of the signal. If x(n) is a pure random stationary signal, then conditions (i) and (ii) are trivially fulfilled. If x(n) is a pure deterministic signal, condition (ii) gives

    γ_{xx}(m) = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)x(n−m) = ⟨x(n)x(n−m)⟩    (72)

which coincides with the expression Eq. (4) for periodic signals (it does not matter whether the average is over one or an infinite number of periods). When aperiodic components are present, γ_{xx}(m) as defined in Eq. (72) will normally be equal to zero.

For a general signal g(n), we will introduce the notation

    Ē[g(n)] = ⟨E[g(n)]⟩ = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} E[g(n)]    (73)

to denote that both ensemble and time averages are carried out.
5.1 Ergodicity Result

[2, Theorem 2.3] states the following: suppose that x(n) is a quasi-stationary signal with mean m_x(n) = E[x(n)] fulfilling

    x(n) − m_x(n) = Σ_{q=0}^{∞} h_n(q) e(n−q)    (74)

where 1) e(n) is a sequence of independent random variables with zero mean, finite variances σ_e²(n), and bounded fourth-order moments, and 2) h_n(q), n = 1, 2, ..., are uniformly stable filters. That is, the random part of x(n) can be described as filtered white noise. Then, with probability one, as N → ∞,

    (1/N) Σ_{n=0}^{N−1} x(n)x(n−m) → Ē[x(n)x(n−m)] = γ_{xx}(m)    (75)

In summary, the result is identical to the standard result for stationary random signals, Eq. (22), (23): the time average equals the joint time-ensemble average Ē[·].

The framework addressed autocorrelation functions; however, it can easily be adapted to crosscorrelation functions as well.
5.2 Linear Mixing of Random and Periodic Signals

The above framework is applied to a simple example of linearly mixing a random and a periodic signal. Suppose that x(n) is given by x(n) = p(n) + s(n), where p(n) is a periodic signal with period M, i.e., p(n) = p(n+M), and s(n) is a stationary, correlation ergodic random signal with mean m_s.

Consider first the formal definition of the mean of x(n):

    E[x(n)] = E[p(n) + s(n)] = p(n) + E[s(n)]    (76)

Thus the mean is time-varying; hence, the process is non-stationary. However, it is easy to verify that the mean is periodic with M, as E[x(n+M)] = p(n+M) + E[s(n)] = E[x(n)]. This is known as cyclostationarity [3, Ch. 9]. The autocorrelation function of x(n) is:

    γ_{xx}(n_1, n_2) = E[(p(n_1) + s(n_1))(p(n_2) + s(n_2))]
                     = p(n_1)p(n_2) + E[s(n_1)]p(n_2) + E[s(n_2)]p(n_1) + E[s(n_1)s(n_2)]
                     = p(n_1)p(n_2) + m_s(p(n_1) + p(n_2)) + γ_{ss}(n_1 − n_2)    (77)

Again, it is easy to verify that γ_{xx}(n_1+M, n_2+M) = γ_{xx}(n_1, n_2). Thus x(n) is wide-sense cyclostationary. Moreover, x(n) is a quasi-stationary signal, cf. Eq. (70), (71), since E[x(n)] is limited and

    ⟨γ_{xx}(n_1, n_1−m)⟩ = ⟨p(n_1)p(n_1−m) + 2m_s p(n_1)⟩ + γ_{ss}(m) = r_{pp}(m) + 2m_s m̄_p + γ_{ss}(m)    (78)

is a function of m only. Here m̄_p is the time average of p(n).

Next we will show that x(n) is ergodic according to the definition in Section 5.1. Consider the ergodic formulation of the autocorrelation function of x(n):

    r_{xx}(m) = ⟨x(n)x(n−m)⟩ = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)x(n−m)    (79)

Substituting the expression for x(n) gives

    r_{xx}(m) = ⟨(p(n) + s(n))(p(n−m) + s(n−m))⟩
              = ⟨p(n)p(n−m)⟩ + ⟨p(n)s(n−m)⟩ + ⟨s(n)p(n−m)⟩ + ⟨s(n)s(n−m)⟩
              = r_{pp}(m) + r_{ps}(m) + r_{sp}(m) + γ_{ss}(m)    (80)

r_{pp}(m) and γ_{ss}(m) are the usual autocorrelation functions for periodic and random signals, respectively. The crucial terms are the crosscorrelation functions r_{ps}(m) and r_{sp}(m) = r_{ps}(−m). Focus on

    r_{ps}(m) = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} p(n)s(n−m)    (81)
If we use the periodicity of p(n), we can rewrite the sum of products by considering first all products which involve p(0) = p(kM), next all products which involve p(1) = p(kM+1), and so on. Assuming N = KM (it is pretty easy to verify that this restriction is not crucial to the subsequent arguments), we can write:

    r_{ps}(m) = lim_{N→∞} (1/N) [ p(0) Σ_{k=0}^{K−1} s(kM−m) + p(1) Σ_{k=0}^{K−1} s(kM−m+1) + ...
                + p(M−1) Σ_{k=0}^{K−1} s(kM−m+M−1) ]
              = lim_{N→∞} (1/N) Σ_{q=0}^{M−1} p(q) Σ_{k=0}^{K−1} s(kM−m+q)
              = lim_{N→∞} Σ_{q=0}^{M−1} (p(q)/M) · (M/N) Σ_{k=0}^{K−1} s(kM−m+q)    (82)

Since M is a constant we can instead perform the limit operation w.r.t. K, that is,

    r_{ps}(m) = (1/M) Σ_{q=0}^{M−1} p(q) lim_{K→∞} (1/K) Σ_{k=0}^{K−1} s(kM−m+q)    (83)

Define m̂_s = (1/K) Σ_{k=0}^{K−1} s(kM−m+q); then E[m̂_s] = m_s = E[s(n)], and the variance is

    V[m̂_s] = E[m̂_s²] − m_s²
            = (1/K²) Σ_{k=0}^{K−1} Σ_{ℓ=0}^{K−1} E[s(kM−m+q)s(ℓM−m+q)] − m_s²
            = (1/K) Σ_{p=−(K−1)}^{K−1} (1 − |p|/K) γ_{ss}(pM) − m_s²    (84)

Under the standard mean ergodic condition, lim_{K→∞} V[m̂_s] = 0; thus lim_{K→∞} m̂_s = m_s. From Eq. (83) we then conclude that

    r_{ps}(m) = ⟨p(n)⟩ · E[s(n)],  0 ≤ m ≤ N−1    (85)

where

    m̄_p = ⟨p(n)⟩ = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} p(n) = (1/M) Σ_{n=0}^{M−1} p(n)    (86)

In conclusion, a periodic and a random signal are fully uncorrelated, and the crosscorrelation function is the product of the time average of the periodic signal and the mean value of the random signal. If either the random signal has zero mean or the periodic signal has zero time average, the signals are fully orthogonal, i.e., r_{ps}(m) = 0.
Example 5.1 This example considers the determination of a weak periodic signal contaminated by strong noise. Suppose that s(n) is a random noise signal generated by a first-order AR process, cf. [5, 6, Ch. 12.2]. The autocorrelation function is given by

    γ_{ss}(m) = σ² a^{|m|}    (87)

The power of s(n) is P_s = γ_{ss}(0) = σ², and the squared mean value is m_s² = lim_{m→∞} γ_{ss}(m) = 0. Further, assume that p(n) is a periodic signal given by p(n) = b cos(2πn/N_0) with period N_0. The autocorrelation function is

    r_{pp}(m) = p(m) ⊛ p(−m) = (b²/2) cos(2πm/N_0)    (88)

the power is P_p = b²/2, and we assume that P_p ≪ P_s.

The autocorrelation function of x(n) = s(n) + p(n) is given by Eq. (80). Using the fact that r_{ps}(m) = r_{sp}(m) = 0, as E[s(n)] = 0, we have

    r_{xx}(m) = σ² a^{|m|} + (b²/2) cos(2πm/N_0)    (89)

Thus for |m| large the first term vanishes and we can employ the approximation

    r_{xx}(m) ≈ (b²/2) cos(2πm/N_0),  for |m| large    (90)

The maximum of r_{xx}(m) for m large is then b²/2 = P_p, and the period N_0 can be determined as the difference between two maxima.    □
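A numerical sketch of Example 5.1 (Python/NumPy assumed; σ² = 1, a = 0.9, b = 0.4 and N_0 = 25 are arbitrary choices consistent with P_p ≪ P_s):

    import numpy as np

    rng = np.random.default_rng(8)
    N, a, N0, b = 500_000, 0.9, 25, 0.4
    e = rng.normal(scale=np.sqrt(1 - a**2), size=N)   # innovation giving sigma^2 = 1

    s = np.zeros(N)                  # first-order AR noise: gamma_ss(m) = 0.9^|m|
    for i in range(1, N):
        s[i] = a * s[i - 1] + e[i]

    x = s + b * np.cos(2 * np.pi * np.arange(N) / N0)   # P_p = 0.08 << P_s = 1

    # Biased autocorrelation estimate, Eq. (30), at lags where a^|m| has died out
    lags = np.arange(100, 201)
    r = np.array([np.dot(x[m:], x[:N - m]) / N for m in lags])

    # Strict local maxima above a threshold; their spacing reveals the period N_0
    mask = (r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]) & (r[1:-1] > 0.6 * r.max())
    peaks = lags[1:-1][mask]
    print(peaks, np.diff(peaks))     # peaks near multiples of 25, spacing ~ N_0
    print(r.max(), b**2 / 2)         # peak height ~ P_p = b^2/2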
A Appendix: Properties of Correlation Functions and Power Spectra

A.1 Definitions

x_a(t): Aperiodic analog real signal with finite energy; lim_{|t|→∞} x_a(t) = x̄_a.
x_a(n): Aperiodic discrete-time real signal with finite energy; lim_{|n|→∞} x_a(n) = x̄_a.
X_a(F), X_a(f): Fourier transforms of x_a(t) and x_a(n).
x_p(t): Periodic analog real signal with finite power and period T_p, i.e., x_p(t + T_p) = x_p(t).
x_p(n): Periodic discrete-time real signal with finite power and period N, i.e., x_p(n + N) = x_p(n).
X_p(k): Fourier transform of x_p(n) or x_p(t).
x(t): Stationary, ergodic random real analog signal with mean E[x(t)] = m_x.
x(n): Stationary, ergodic random real discrete-time signal with mean E[x(n)] = m_x.
X(f): Fourier transform of x(n), where n ∈ [0; N−1].
γ_{xx}(·), γ_{xy}(·): Auto- and crosscorrelation functions of random signals.
r_{xx}(·), r_{xy}(·): Auto- and crosscorrelation functions for aperiodic and periodic signals.
Γ_{xx}(f), Γ_{xy}(f): Power and cross-power spectra for random signals.
S_{xx}(f), S_{xy}(f): Power (energy) and cross-power (energy) spectra for aperiodic signals.
E[x(n)]: Mean value operator, E[x(n)] = m_x.
V[x(n)]: Variance operator, V[x(n)] = E[x²(n)] − E²[x(n)].
P: Power, P = P_AC + P_DC.
E: Energy, E = E_AC + E_DC.
A.2 Definitions of Correlation Functions and Power Spectra

A.2.1 Analog Random Signals

    γ_{xy}(τ) = E[x(t)y(t−τ)] = lim_{T_p→∞} (1/T_p) ∫_{−T_p/2}^{T_p/2} x(t)y(t−τ) dt    (91)
    Γ_{xy}(F) = ∫_{−∞}^{∞} γ_{xy}(τ) e^{−j2πFτ} dτ    (92)
    γ_{xy}(τ) = ∫_{−∞}^{∞} Γ_{xy}(F) e^{j2πFτ} dF    (93)

Remark: Γ_{xy}(F) is a continuous cross-power density spectrum.
A.2.2 Discrete-Time Random Signals

    γ_{xy}(m) = E[x(n)y(n−m)] = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)y(n−m)    (94)
    Γ_{xy}(f) = Σ_{m=−∞}^{∞} γ_{xy}(m) e^{−j2πfm}    (95)
    γ_{xy}(m) = ∫_{−1/2}^{1/2} Γ_{xy}(f) e^{j2πfm} df    (96)

Remark: Γ_{xy}(f) is a continuous cross-power spectrum, replicated periodically with f = 1 corresponding to the sampling frequency F_s.
A.2.3 Analog Periodic Signals

    r_{x_p y_p}(τ) = (1/T_p) ∫_{−T_p/2}^{T_p/2} x_p(t) y_p(t−τ) dt = x_p(τ) ⊛ y_p(−τ)    (97)
    S_{x_p y_p}(k) = (1/T_p) ∫_{−T_p/2}^{T_p/2} r_{x_p y_p}(τ) e^{−j2π(k/T_p)τ} dτ    (98)
    r_{x_p y_p}(τ) = Σ_{k=−∞}^{∞} S_{x_p y_p}(k) e^{j2π(k/T_p)τ}    (99)

Remark: Discrete cross-power spectrum at frequencies F = k/T_p.
A.2.4 Discrete-Time Periodic Signals

    r_{x_p y_p}(m) = (1/N) Σ_{n=0}^{N−1} x_p(n) y_p(n−m) = x_p(m) ⊛ y_p(−m)    (100)
    S_{x_p y_p}(k) = (1/N) Σ_{m=0}^{N−1} r_{x_p y_p}(m) e^{−j2π(k/N)m}    (101)
    r_{x_p y_p}(m) = Σ_{k=0}^{N−1} S_{x_p y_p}(k) e^{j2π(k/N)m}    (102)

Remark: Discrete cross-power spectrum at frequencies f = k/N or F = kF_s/N, replicated periodically with k = N corresponding to the sampling frequency F_s. Further, note that the pair of Fourier transforms defined in Eq. (101) and (102) corresponds to the definitions in [5, 6, Ch. 4.2]. The Discrete Fourier Transform defined in [5, 6, Ch. 7] is related by: DFT{r_{x_p y_p}(m)} = N S_{x_p y_p}(k).
A.2.5 Analog Aperiodic Signals

    r_{x_a y_a}(τ) = ∫_{−∞}^{∞} x_a(t) y_a(t−τ) dt = x_a(τ) ∗ y_a(−τ)    (103)
    S_{x_a y_a}(F) = ∫_{−∞}^{∞} r_{x_a y_a}(τ) e^{−j2πFτ} dτ    (104)
    r_{x_a y_a}(τ) = ∫_{−∞}^{∞} S_{x_a y_a}(F) e^{j2πFτ} dF    (105)

Remark: S_{x_a y_a}(F) is a continuous cross-energy density spectrum.
A.2.6 Discrete-Time Aperiodic Signals

    r_{x_a y_a}(m) = Σ_{n=−∞}^{∞} x_a(n) y_a(n−m) = x_a(m) ∗ y_a(−m)    (106)
    S_{x_a y_a}(f) = Σ_{m=−∞}^{∞} r_{x_a y_a}(m) e^{−j2πfm}    (107)
    r_{x_a y_a}(m) = ∫_{−1/2}^{1/2} S_{x_a y_a}(f) e^{j2πfm} df    (108)

Remark: S_{x_a y_a}(f) is a continuous cross-energy spectrum, replicated periodically with the sampling frequency F_s.
A.3 Properties of Autocorrelation Functions

Random:
    γ_{xx}(τ) = γ_{xx}(−τ)
    |γ_{xx}(τ)| ≤ γ_{xx}(0)
    lim_{|τ|→∞} γ_{xx}(τ) = m_x² = P_DC
    P = γ_{xx}(0) = m_x² + V[x(n)]
    γ_{yy}(τ) = r_{hh}(τ) ∗ γ_{xx}(τ)

Periodic:
    r_{x_p x_p}(τ) = r_{x_p x_p}(−τ)
    |r_{x_p x_p}(τ)| ≤ r_{x_p x_p}(0)
    r_{x_p x_p}(τ) = r_{x_p x_p}(τ + T_p)
    P = r_{x_p x_p}(0)
    r_{y_p y_p}(τ) = r_{hh}(τ) ⊛ r_{x_p x_p}(τ)

Aperiodic:
    r_{x_a x_a}(τ) = r_{x_a x_a}(−τ)
    |r_{x_a x_a}(τ)| ≤ r_{x_a x_a}(0)
    lim_{|τ|→∞} r_{x_a x_a}(τ) = x̄_a²
    E = r_{x_a x_a}(0)
    r_{y_a y_a}(τ) = r_{hh}(τ) ∗ r_{x_a x_a}(τ)

Note: Similar properties exist for discrete-time signals.
A.4 Properties of Power Spectra

Random:
    Γ_{xx}(F) = Γ_{xx}(−F)
    P = ∫_{−∞}^{∞} Γ_{xx}(F) dF,  or  P = ∫_{−1/2}^{1/2} Γ_{xx}(f) df
    Γ_{xx}(F) = lim_{T_p→∞} (1/T_p) E[|X(F)|²] ≥ 0, with X(F) = ∫_0^{T_p} x(t) e^{−j2πFt} dt
    Continuous real spectrum + m_x² δ(F)
    Γ_{yy}(F) = |H(F)|² Γ_{xx}(F)

Periodic:
    S_{x_p x_p}(k) = S_{x_p x_p}(−k)
    P = Σ_{k=−∞}^{∞} S_{x_p x_p}(k),  P_DC = S_{x_p x_p}(0)
    S_{x_p x_p}(k) = |X_p(k)|² ≥ 0
    Real discrete spectrum at F = k/T_p
    S_{y_p y_p}(k) = |H(k/T_p)|² S_{x_p x_p}(k)

Aperiodic:
    S_{x_a x_a}(F) = S_{x_a x_a}(−F)
    E = ∫_{−∞}^{∞} S_{x_a x_a}(F) dF
    S_{x_a x_a}(F) = |X_a(F)|² ≥ 0
    Continuous real spectrum + x̄_a² δ(F)
    S_{y_a y_a}(F) = |H(F)|² S_{x_a x_a}(F)
A.5 Properties of Crosscorrelation Functions

Random:
    γ_{xy}(τ) = γ_{yx}(−τ)
    γ²_{xy}(τ) ≤ γ_{xx}(0) γ_{yy}(0)
    2|γ_{xy}(τ)| ≤ γ_{xx}(0) + γ_{yy}(0)
    lim_{|τ|→∞} γ_{xy}(τ) = m_x m_y
    γ_{xy}(τ) = m_x m_y: uncorrelated
    γ_{yx}(τ) = h(τ) ∗ γ_{xx}(τ)

Periodic:
    r_{x_p y_p}(τ) = r_{y_p x_p}(−τ)
    r²_{x_p y_p}(τ) ≤ r_{x_p x_p}(0) r_{y_p y_p}(0)
    2|r_{x_p y_p}(τ)| ≤ r_{x_p x_p}(0) + r_{y_p y_p}(0)
    r_{x_p y_p}(τ) = r_{x_p y_p}(τ + T_p)
    r_{x_p y_p}(τ) = 0, ∀τ, if no common frequencies
    r_{y_p x_p}(τ) = h(τ) ⊛ r_{x_p x_p}(τ)

Aperiodic:
    r_{x_a y_a}(τ) = r_{y_a x_a}(−τ)
    r²_{x_a y_a}(τ) ≤ r_{x_a x_a}(0) r_{y_a y_a}(0)
    2|r_{x_a y_a}(τ)| ≤ r_{x_a x_a}(0) + r_{y_a y_a}(0)
    lim_{|τ|→∞} r_{x_a y_a}(τ) = x̄_a ȳ_a
    r_{x_a y_a}(τ) = x̄_a ȳ_a: uncorrelated
    r_{y_a x_a}(τ) = h(τ) ∗ r_{x_a x_a}(τ)
A.6 Properties of Cross-Power Spectra

Random:
    Γ_{xy}(F) = Γ*_{yx}(F)
    Γ_{xy}(F) = Γ*_{xy}(−F)
    2|Γ_{xy}(F)| ≤ Γ_{xx}(F) + Γ_{yy}(F)
    |Γ_{xy}(F)|² ≤ Γ_{xx}(F) Γ_{yy}(F)
    Γ_{xy}(F) = lim_{T_p→∞} (1/T_p) E[X(F)Y*(F)], with X(F) = ∫_0^{T_p} x(t) e^{−j2πFt} dt
    Complex continuous spectrum + m_x m_y δ(F)
    Γ_{yx}(F) = H(F) Γ_{xx}(F)

Periodic:
    S_{x_p y_p}(k) = S*_{y_p x_p}(k)
    S_{x_p y_p}(k) = S*_{x_p y_p}(−k)
    2|S_{x_p y_p}(k)| ≤ S_{x_p x_p}(k) + S_{y_p y_p}(k)
    S_{x_p y_p}(k) = 0, ∀k, if no common frequencies
    S_{x_p y_p}(k) = X_p(k) Y*_p(k)
    Complex discrete spectrum at F = k/T_p
    S_{y_p x_p}(k) = H(k/T_p) S_{x_p x_p}(k)

Aperiodic:
    S_{x_a y_a}(F) = S*_{y_a x_a}(F)
    S_{x_a y_a}(F) = S*_{x_a y_a}(−F)
    2|S_{x_a y_a}(F)| ≤ S_{x_a x_a}(F) + S_{y_a y_a}(F)
    |S_{x_a y_a}(F)|² = S_{x_a x_a}(F) S_{y_a y_a}(F)
    S_{x_a y_a}(F) = X_a(F) Y*_a(F)
    Complex continuous spectrum + x̄_a ȳ_a δ(F)
    S_{y_a x_a}(F) = H(F) S_{x_a x_a}(F)
References

[1] J.S. BENDAT & A.G. PIERSOL: Random Data: Analysis and Measurement Procedures, New York, New York: John Wiley & Sons, 1986.
[2] L. LJUNG: System Identification: Theory for the User, Englewood Cliffs, NJ: Prentice-Hall, 1987.
[3] A. PAPOULIS: Probability, Random Variables and Stochastic Processes, Second edition, New York, New York: McGraw-Hill, Inc., 1984.
[4] A. PAPOULIS: Signal Analysis, Second edition, New York, New York: McGraw-Hill, Inc., 3rd printing, 1987.
[5] J.G. PROAKIS & D.G. MANOLAKIS: Digital Signal Processing: Principles, Algorithms and Applications, 4th edition, Upper Saddle River, New Jersey: Pearson Prentice Hall, Inc., hardcover version ISBN 0-13-187374-1, 2007.
[6] J.G. PROAKIS & D.G. MANOLAKIS: Digital Signal Processing: Principles, Algorithms and Applications, 4th edition, Upper Saddle River, New Jersey: Pearson Prentice Hall, Inc., paperback version ISBN 0-13-228731-5, 2007.
[7] G. ZELNIKER & F.J. TAYLOR: Advanced Digital Signal Processing: Theory and Application, New York, New York: Marcel Dekker, Inc., 1994.