
14. Stochastic Processes
Introduction
Let $\xi$ denote the random outcome of an experiment. To every such outcome suppose a waveform $X(t, \xi)$ is assigned. The collection of such waveforms forms a stochastic process. The set of $\{\xi_k\}$ and the time index $t$ can be continuous or discrete (countably infinite or finite) as well. (Fig. 14.1: an ensemble of realizations $X(t, \xi_1), X(t, \xi_2), \ldots, X(t, \xi_n)$ plotted against $t$.)

For fixed $\xi_i \in S$ (the set of all experimental outcomes), $X(t, \xi_i)$ is a specific time function. For fixed $t_1$,

$$X_1 = X(t_1, \xi_i)$$

is a random variable. The ensemble of all such realizations $X(t, \xi)$ over time represents the stochastic process $X(t)$ (see Fig. 14.1). For example,

$$X(t) = a \cos(\omega_0 t + \varphi),$$

where $\varphi$ is a random variable uniformly distributed in $(0, 2\pi)$, represents a stochastic process. Stochastic processes are everywhere: Brownian motion, stock market fluctuations, and various queuing systems all represent stochastic phenomena.
If $X(t)$ is a stochastic process, then for fixed $t$, $X(t)$ represents a random variable. Its distribution function is given by

$$F_X(x, t) = P\{X(t) \le x\}$$   (14-1)

Notice that $F_X(x, t)$ depends on $t$, since for a different $t$ we obtain a different random variable. Further,

$$f_X(x, t) = \frac{\partial F_X(x, t)}{\partial x}$$   (14-2)

represents the first-order probability density function of the process $X(t)$.
For $t = t_1$ and $t = t_2$, $X(t)$ represents two different random variables $X_1 = X(t_1)$ and $X_2 = X(t_2)$ respectively. Their joint distribution is given by

$$F_X(x_1, x_2, t_1, t_2) = P\{X(t_1) \le x_1,\; X(t_2) \le x_2\}$$   (14-3)

and

$$f_X(x_1, x_2, t_1, t_2) = \frac{\partial^2 F_X(x_1, x_2, t_1, t_2)}{\partial x_1\, \partial x_2}$$   (14-4)

represents the second-order density function of the process $X(t)$. Similarly $f_X(x_1, x_2, \ldots, x_n, t_1, t_2, \ldots, t_n)$ represents the $n$th-order density function of the process $X(t)$. Complete specification of the stochastic process $X(t)$ requires the knowledge of $f_X(x_1, x_2, \ldots, x_n, t_1, t_2, \ldots, t_n)$ for all $t_i$, $i = 1, 2, \ldots, n$, and for all $n$ (an almost impossible task in reality).
Mean of a Stochastic Process:

$$\mu_X(t) = E\{X(t)\} = \int_{-\infty}^{\infty} x\, f_X(x, t)\, dx$$   (14-5)

represents the mean value of the process $X(t)$. In general, the mean of a process can depend on the time index $t$.

The autocorrelation function of a process $X(t)$ is defined as

$$R_{XX}(t_1, t_2) = E\{X(t_1)\, X^*(t_2)\} = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} x_1 x_2^*\, f_X(x_1, x_2, t_1, t_2)\, dx_1\, dx_2$$   (14-6)

and it represents the interrelationship between the random variables $X_1 = X(t_1)$ and $X_2 = X(t_2)$ generated from the process $X(t)$.

Properties:

1. $R_{XX}(t_1, t_2) = R_{XX}^*(t_2, t_1) = [E\{X(t_2)\, X^*(t_1)\}]^*$   (14-7)

2. $R_{XX}(t, t) = E\{|X(t)|^2\} \ge 0$ (average instantaneous power).
3. $R_{XX}(t_1, t_2)$ represents a nonnegative definite function, i.e., for any set of constants $\{a_i\}_{i=1}^{n}$,

$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_i\, a_j^*\, R_{XX}(t_i, t_j) \ge 0.$$   (14-8)

Eq. (14-8) follows by noticing that $E\{|Y|^2\} \ge 0$ for $Y = \sum_{i=1}^{n} a_i X(t_i)$.

The function

$$C_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1)\, \mu_X^*(t_2)$$   (14-9)

represents the autocovariance function of the process $X(t)$.

Example 14.1
Let

$$z = \int_{-T}^{T} X(t)\, dt.$$

Then

$$E[|z|^2] = \int_{-T}^{T}\!\!\int_{-T}^{T} E\{X(t_1)\, X^*(t_2)\}\, dt_1\, dt_2 = \int_{-T}^{T}\!\!\int_{-T}^{T} R_{XX}(t_1, t_2)\, dt_1\, dt_2.$$   (14-10)
Example 14.2

$$X(t) = a \cos(\omega_0 t + \varphi), \qquad \varphi \sim U(0, 2\pi).$$   (14-11)

This gives

$$\mu_X(t) = E\{X(t)\} = a\, E\{\cos(\omega_0 t + \varphi)\} = a \cos\omega_0 t\; E\{\cos\varphi\} - a \sin\omega_0 t\; E\{\sin\varphi\} = 0,$$   (14-12)

since $E\{\cos\varphi\} = \frac{1}{2\pi}\int_0^{2\pi} \cos\varphi\, d\varphi = 0 = E\{\sin\varphi\}$.

Similarly,

$$R_{XX}(t_1, t_2) = a^2\, E\{\cos(\omega_0 t_1 + \varphi)\cos(\omega_0 t_2 + \varphi)\} = \frac{a^2}{2}\, E\{\cos\omega_0(t_1 - t_2) + \cos(\omega_0(t_1 + t_2) + 2\varphi)\} = \frac{a^2}{2}\cos\omega_0(t_1 - t_2).$$   (14-13)
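As a quick numerical sanity check of (14-12)-(14-13), the sketch below (an added illustration, not part of the original slides; it assumes NumPy is available, and the values of $a$, $\omega_0$ and the grids are arbitrary choices) simulates an ensemble of realizations of (14-11) and estimates the ensemble mean and autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)
a, w0 = 2.0, 2 * np.pi           # arbitrary amplitude and frequency
t = np.linspace(0, 3, 301)       # time grid
M = 20_000                       # ensemble size

# One realization per row: X(t, phi_m) = a cos(w0 t + phi_m)
phi = rng.uniform(0, 2 * np.pi, size=(M, 1))
X = a * np.cos(w0 * t + phi)

# Ensemble mean: should be ~0 for every t, per (14-12)
print(np.abs(X.mean(axis=0)).max())           # ~1e-2 for this M

# Ensemble autocorrelation at (t1, t2): should match (a^2/2) cos(w0 (t1 - t2))
i, j = 50, 120                                # indices of t1, t2
R_hat = np.mean(X[:, i] * X[:, j])
R_theory = (a**2 / 2) * np.cos(w0 * (t[i] - t[j]))
print(R_hat, R_theory)                        # close agreement, per (14-13)
```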
Stationary Stochastic Processes
Stationary processes exhibit statistical properties that are invariant to shifts in the time index. Thus, for example, second-order stationarity implies that the statistical properties of the pairs $\{X(t_1), X(t_2)\}$ and $\{X(t_1+c), X(t_2+c)\}$ are the same for any $c$. Similarly, first-order stationarity implies that the statistical properties of $X(t_i)$ and $X(t_i + c)$ are the same for any $c$.

In strict terms, the statistical properties are governed by the joint probability density function. Hence a process is $n$th-order Strict-Sense Stationary (S.S.S.) if

$$f_X(x_1, x_2, \ldots, x_n, t_1, t_2, \ldots, t_n) = f_X(x_1, x_2, \ldots, x_n, t_1 + c, t_2 + c, \ldots, t_n + c)$$   (14-14)

for any $c$, where the left side represents the joint density function of the random variables $X_1 = X(t_1), X_2 = X(t_2), \ldots, X_n = X(t_n)$ and the right side corresponds to the joint density function of the random variables $X_1' = X(t_1 + c), X_2' = X(t_2 + c), \ldots, X_n' = X(t_n + c)$. A process $X(t)$ is said to be strict-sense stationary if (14-14) is true for all $t_i$, $i = 1, 2, \ldots, n$, $n = 1, 2, \ldots$, and any $c$.
For a first-order strict-sense stationary process, from (14-14) we have

$$f_X(x, t) = f_X(x, t + c)$$   (14-15)

for any $c$. In particular, $c = -t$ gives

$$f_X(x, t) = f_X(x),$$   (14-16)

i.e., the first-order density of $X(t)$ is independent of $t$. In that case

$$E[X(t)] = \int_{-\infty}^{\infty} x\, f_X(x)\, dx = \mu, \quad \text{a constant.}$$   (14-17)

Similarly, for a second-order strict-sense stationary process we have from (14-14)

$$f_X(x_1, x_2, t_1, t_2) = f_X(x_1, x_2, t_1 + c, t_2 + c)$$

for any $c$. For $c = -t_2$ we get

$$f_X(x_1, x_2, t_1, t_2) = f_X(x_1, x_2, t_1 - t_2),$$   (14-18)
i.e., the second-order density function of a strict-sense stationary process depends only on the difference of the time indices $t_1 - t_2 = \tau$. In that case the autocorrelation function is given by

$$R_{XX}(t_1, t_2) = E\{X(t_1)\, X^*(t_2)\} = \int\!\!\int x_1 x_2^*\, f_X(x_1, x_2, \tau = t_1 - t_2)\, dx_1\, dx_2 = R_{XX}(t_1 - t_2) \triangleq R_{XX}(\tau) = R_{XX}^*(-\tau),$$   (14-19)

i.e., the autocorrelation function of a second-order strict-sense stationary process depends only on the difference of the time indices $\tau = t_1 - t_2$.

Notice that (14-17) and (14-19) are consequences of the stochastic process being first- and second-order strict-sense stationary. On the other hand, the basic conditions for first- and second-order stationarity, Eqs. (14-16) and (14-18), are usually difficult to verify. In that case, we often resort to a looser definition of stationarity, known as Wide-Sense Stationarity (W.S.S.), by making use of
(14-17) and (14-19) as the necessary conditions. Thus, a process $X(t)$ is said to be Wide-Sense Stationary if

(i) $E\{X(t)\} = \mu$   (14-20)

and

(ii) $E\{X(t_1)\, X^*(t_2)\} = R_{XX}(t_1 - t_2)$,   (14-21)

i.e., for wide-sense stationary processes, the mean is a constant and the autocorrelation function depends only on the difference between the time indices. Notice that (14-20)-(14-21) do not say anything about the nature of the probability density functions, and instead deal with the average behavior of the process. Since (14-20)-(14-21) follow from (14-16) and (14-18), strict-sense stationarity always implies wide-sense stationarity. However, the converse is not true in general, the only exception being the Gaussian process. This follows since if $X(t)$ is a Gaussian process, then by definition $X_1 = X(t_1), X_2 = X(t_2), \ldots, X_n = X(t_n)$ are jointly Gaussian random variables for any $t_1, t_2, \ldots, t_n$, whose joint characteristic function is given by
$$\phi_X(\omega_1, \omega_2, \ldots, \omega_n) = \exp\left( j\sum_{k=1}^{n} \mu(t_k)\,\omega_k \;-\; \frac{1}{2}\sum_{l=1}^{n}\sum_{k=1}^{n} C_{XX}(t_l, t_k)\,\omega_l\,\omega_k \right),$$   (14-22)

where $C_{XX}(t_l, t_k)$ is as defined in (14-9). If $X(t)$ is wide-sense stationary, then using (14-20)-(14-21) in (14-22) we get

$$\phi_X(\omega_1, \omega_2, \ldots, \omega_n) = \exp\left( j\mu\sum_{k=1}^{n} \omega_k \;-\; \frac{1}{2}\sum_{l=1}^{n}\sum_{k=1}^{n} C_{XX}(t_l - t_k)\,\omega_l\,\omega_k \right),$$   (14-23)

and hence if the set of time indices is shifted by a constant $c$ to generate a new set of jointly Gaussian random variables $X_1' = X(t_1 + c), X_2' = X(t_2 + c), \ldots, X_n' = X(t_n + c)$, then their joint characteristic function is identical to (14-23). Thus the sets of random variables $\{X_i\}_{i=1}^{n}$ and $\{X_i'\}_{i=1}^{n}$ have the same joint probability distribution for all $n$ and all $c$, establishing the strict-sense stationarity of Gaussian processes from their wide-sense stationarity.

To summarize, if $X(t)$ is a Gaussian process, then

wide-sense stationarity (w.s.s.) $\Longrightarrow$ strict-sense stationarity (s.s.s.).

Notice that since the joint p.d.f. of Gaussian random variables depends only on their second-order statistics, which is also the basis for wide-sense stationarity, we obtain strict-sense stationarity as well.
From (14-12)-(14-13) (refer to Example 14.2), the process $X(t) = a\cos(\omega_0 t + \varphi)$ in (14-11) is wide-sense stationary, but not strict-sense stationary.

Similarly, if $X(t)$ is a zero-mean wide-sense stationary process in Example 14.1, then $\sigma_z^2$ in (14-10) reduces to

$$\sigma_z^2 = E\{|z|^2\} = \int_{-T}^{T}\!\!\int_{-T}^{T} R_{XX}(t_1 - t_2)\, dt_1\, dt_2.$$

As $t_1, t_2$ vary from $-T$ to $+T$, $\tau = t_1 - t_2$ varies from $-2T$ to $+2T$. Moreover, $R_{XX}(\tau)$ is constant over each shaded strip of constant $\tau = t_1 - t_2$ in Fig. 14.2, whose area is given by ($\tau > 0$)

$$\tfrac{1}{2}(2T - \tau)^2 - \tfrac{1}{2}(2T - \tau - d\tau)^2 \simeq (2T - \tau)\, d\tau,$$

and hence the above integral reduces to

$$\sigma_z^2 = \int_{-2T}^{2T} R_{XX}(\tau)\,(2T - |\tau|)\, d\tau = 2T \int_{-2T}^{2T} R_{XX}(\tau)\left(1 - \frac{|\tau|}{2T}\right) d\tau.$$   (14-24)

(Fig. 14.2: the square $-T \le t_1, t_2 \le T$, partitioned into strips of constant $\tau = t_1 - t_2$.)
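As a numerical check of the strip argument behind (14-24), the following sketch (an added illustration; the kernel $R_{XX}(\tau) = e^{-|\tau|}$ and the value of $T$ are arbitrary choices, and SciPy is assumed available) compares the double integral in (14-10) with the single-integral form (14-24):

```python
import numpy as np
from scipy.integrate import dblquad, quad

T = 1.5
R = lambda tau: np.exp(-np.abs(tau))   # a valid w.s.s. autocorrelation function

# Double integral of R(t1 - t2) over the square [-T, T] x [-T, T], as in (14-10)
lhs, _ = dblquad(lambda t1, t2: R(t1 - t2), -T, T, -T, T)

# Equivalent single integral (14-24)
rhs, _ = quad(lambda tau: R(tau) * (2 * T - abs(tau)), -2 * T, 2 * T)

print(lhs, rhs)   # the two agree to integration tolerance
```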
Systems with Stochastic Inputs
A deterministic system¹ transforms each input waveform $X(t, \xi_i)$ into an output waveform $Y(t, \xi_i) = T[X(t, \xi_i)]$ by operating only on the time variable $t$. Thus a set of realizations at the input corresponding to a process $X(t)$ generates a new set of realizations $\{Y(t, \xi)\}$ at the output associated with a new process $Y(t)$. (Fig. 14.3: a realization $X(t, \xi_i)$ entering the system $T[\cdot]$ and the corresponding output realization $Y(t, \xi_i)$.)

Our goal is to study the output process statistics in terms of the input process statistics and the system function.

¹A stochastic system, on the other hand, operates on both the variables $t$ and $\xi$.
Deterministic Systems
Deterministic systems are either memoryless, with $Y(t) = g[X(t)]$, or systems with memory (Fig. 14.3). Systems with memory are further classified into time-varying and time-invariant systems; among these, linear systems satisfy $Y(t) = L[X(t)]$, and linear time-invariant (LTI) systems form the most important subclass. An LTI system with impulse response $h(t)$ produces the output

$$Y(t) = \int_{-\infty}^{\infty} h(t - \tau)\, X(\tau)\, d\tau = \int_{-\infty}^{\infty} h(\tau)\, X(t - \tau)\, d\tau.$$
Memoryless Systems:
The output $Y(t)$ in this case depends only on the present value of the input $X(t)$, i.e.,

$$Y(t) = g\{X(t)\}.$$   (14-25)

A strict-sense stationary input to a memoryless system produces a strict-sense stationary output (see (9-76), Text, for a proof). A wide-sense stationary input to a memoryless system, however, yields an output that need not be stationary in any sense. A stationary Gaussian input $X(t)$ with autocorrelation $R_{XX}(\tau)$ produces a stationary, but not Gaussian, output with $R_{XY}(\tau) = \eta R_{XX}(\tau)$ (Fig. 14.4; see (14-26)).
Theorem: If $X(t)$ is a zero-mean stationary Gaussian process and $Y(t) = g[X(t)]$, where $g(\cdot)$ represents a nonlinear memoryless device, then

$$R_{XY}(\tau) = \eta\, R_{XX}(\tau), \qquad \eta = E\{g'(X)\}.$$   (14-26)

Proof:

$$R_{XY}(\tau) = E\{X(t)\, Y(t+\tau)\} = E[X(t)\, g\{X(t+\tau)\}] = \int\!\!\int x_1\, g(x_2)\, f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2,$$   (14-27)

where $X_1 = X(t)$ and $X_2 = X(t+\tau)$ are jointly Gaussian random variables, and hence

$$f_{X_1 X_2}(x_1, x_2) = \frac{1}{2\pi |A|^{1/2}}\, e^{-\underline{x}^{*} A^{-1} \underline{x}/2}, \qquad \underline{X} = (X_1, X_2)^T, \quad \underline{x} = (x_1, x_2)^T,$$

$$A = E\{\underline{X}\, \underline{X}^{*}\} = \begin{pmatrix} R_{XX}(0) & R_{XX}(\tau) \\ R_{XX}(\tau) & R_{XX}(0) \end{pmatrix} = L L^{*},$$

where $L$ is an upper triangular factor matrix with positive diagonal entries, i.e.,

$$L = \begin{pmatrix} l_{11} & l_{12} \\ 0 & l_{22} \end{pmatrix}.$$

Consider the transformation

$$\underline{Z} = L^{-1}\underline{X} = (Z_1, Z_2)^T, \qquad \underline{z} = L^{-1}\underline{x} = (z_1, z_2)^T,$$

so that

$$E\{\underline{Z}\,\underline{Z}^{*}\} = L^{-1} E\{\underline{X}\,\underline{X}^{*}\}\, L^{*-1} = L^{-1} A\, L^{*-1} = I,$$

and hence $Z_1, Z_2$ are zero-mean independent Gaussian random variables. Also

$$\underline{x} = L\underline{z} \;\Rightarrow\; x_1 = l_{11} z_1 + l_{12} z_2, \qquad x_2 = l_{22} z_2,$$

and hence

$$\underline{x}^{*} A^{-1} \underline{x} = \underline{z}^{*} L^{*} A^{-1} L\, \underline{z} = \underline{z}^{*} \underline{z} = z_1^2 + z_2^2.$$

The Jacobian of the transformation is given by
| J || L1 || A |1/ 2 .
Hence substituting these into (14-27), we obtain
 
RXY ( )      (l 11
z1  l12 z2 ) g (l22 z2 )  1 1
| J | 2 | A|1/ 2 e  z12 / 2  z22 / 2
e
 
 l11     z g (l
1 22
z2 ) f z1 ( z1 ) f z2 ( z2 )dz1dz2
 
 l12     z g (l
2 22
z2 ) f z1 ( z1 ) f z2 ( z2 )dz1dz2

0
 
 l11   z1 f z1 ( z1 )dz1   g (l22 z2 ) f z2 ( z2 )dz2

 l12   z2 g (l22 z2 ) f z2 ( z2 ) dz2
1 e z / 2
2
2

2
l12 
  ug (u)
 u 2 / 2 l222
 l222
1
2
e du,
where u  l22 z2 . This gives 18
PILLAI/Cha
fu ( u )

RXY ( )  l12 l22   g (u ) lu2 1
e  u 2 / 2 l222
du
22 2 l222

df u ( u )
  f u ( u )
du

  RXX ( )   g (u ) f u(u )du,

since A  LL* gives l12 l22  RXX ( ). Hence


0 
RXY ( )  RXX ( ){ g (u ) f u (u ) | 
     g (u ) f u (u )du}
 RXX ( ) E{g ( X )}  RXX ( ),

the desired result, where   E[ g ( X )]. Thus if the input to


a memoryless device is stationary Gaussian, the cross correlation
function between the input and the output is proportional to the
input autocorrelation function.
19
PILLAI/Cha
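Relation (14-26), often associated with Bussgang's theorem, is easy to check numerically. The sketch below is an added illustration (not part of the original slides; NumPy assumed available), using $g(x) = x^3$, for which $\eta = E\{g'(X)\} = 3E\{X^2\} = 3R_{XX}(0)$:

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda x: x**3                 # nonlinear memoryless device
r0, r_tau = 1.0, 0.6               # R_XX(0) and R_XX(tau), arbitrary choices

# Draw (X1, X2) = (X(t), X(t+tau)) jointly Gaussian with the covariance A of (14-27)
A = np.array([[r0, r_tau], [r_tau, r0]])
X = rng.multivariate_normal([0.0, 0.0], A, size=1_000_000)
x1, x2 = X[:, 0], X[:, 1]

R_XY = np.mean(x1 * g(x2))         # sample estimate of E{X(t) g(X(t+tau))}
eta = 3 * r0                       # E{g'(X)} = 3 E{X^2} = 3 R_XX(0)
print(R_XY, eta * r_tau)           # approximately equal, per (14-26)
```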
Linear Systems: $L[\cdot]$ represents a linear system if

$$L\{a_1 X(t_1) + a_2 X(t_2)\} = a_1 L\{X(t_1)\} + a_2 L\{X(t_2)\}.$$   (14-28)

Let

$$Y(t) = L\{X(t)\}$$   (14-29)

represent the output of a linear system.

Time-Invariant System: $L[\cdot]$ represents a time-invariant system if

$$Y(t) = L\{X(t)\} \;\Rightarrow\; L\{X(t - t_0)\} = Y(t - t_0),$$   (14-30)

i.e., a shift in the input results in the same shift in the output. If $L[\cdot]$ satisfies both (14-28) and (14-30), then it corresponds to a linear time-invariant (LTI) system. LTI systems can be uniquely represented in terms of their output to a delta function, the impulse response $h(t)$ of the system (Fig. 14.5: $\delta(t) \to$ LTI $\to h(t)$).
Then, for an arbitrary input $X(t)$ (Fig. 14.6),

$$Y(t) = \int_{-\infty}^{\infty} h(t - \tau)\, X(\tau)\, d\tau = \int_{-\infty}^{\infty} h(\tau)\, X(t - \tau)\, d\tau.$$   (14-31)

Eq. (14-31) follows by expressing $X(t)$ as

$$X(t) = \int_{-\infty}^{\infty} X(\tau)\, \delta(t - \tau)\, d\tau$$   (14-32)

and applying (14-28) and (14-30) to $Y(t) = L\{X(t)\}$. Thus

$$Y(t) = L\{X(t)\} = L\left\{\int_{-\infty}^{\infty} X(\tau)\, \delta(t - \tau)\, d\tau\right\}$$
$$= \int_{-\infty}^{\infty} L\{X(\tau)\, \delta(t - \tau)\}\, d\tau \quad \text{(by linearity)}$$
$$= \int_{-\infty}^{\infty} X(\tau)\, L\{\delta(t - \tau)\}\, d\tau \quad \text{(by time-invariance)}$$
$$= \int_{-\infty}^{\infty} X(\tau)\, h(t - \tau)\, d\tau = \int_{-\infty}^{\infty} h(\tau)\, X(t - \tau)\, d\tau.$$   (14-33)
Output Statistics: Using (14-33), the mean of the output process is given by

$$\mu_Y(t) = E\{Y(t)\} = \int_{-\infty}^{\infty} E\{X(\tau)\}\, h(t - \tau)\, d\tau = \int_{-\infty}^{\infty} \mu_X(\tau)\, h(t - \tau)\, d\tau = \mu_X(t) * h(t).$$   (14-34)

Similarly, the cross-correlation function between the input and output processes is given by

$$R_{XY}(t_1, t_2) = E\{X(t_1)\, Y^*(t_2)\} = E\left\{X(t_1) \int_{-\infty}^{\infty} X^*(t_2 - \alpha)\, h^*(\alpha)\, d\alpha\right\}$$
$$= \int_{-\infty}^{\infty} E\{X(t_1)\, X^*(t_2 - \alpha)\}\, h^*(\alpha)\, d\alpha = \int_{-\infty}^{\infty} R_{XX}(t_1, t_2 - \alpha)\, h^*(\alpha)\, d\alpha = R_{XX}(t_1, t_2) * h^*(t_2).$$   (14-35)

Finally, the output autocorrelation function is given by
$$R_{YY}(t_1, t_2) = E\{Y(t_1)\, Y^*(t_2)\} = E\left\{\int_{-\infty}^{\infty} X(t_1 - \beta)\, h(\beta)\, d\beta\; Y^*(t_2)\right\}$$
$$= \int_{-\infty}^{\infty} E\{X(t_1 - \beta)\, Y^*(t_2)\}\, h(\beta)\, d\beta = \int_{-\infty}^{\infty} R_{XY}(t_1 - \beta, t_2)\, h(\beta)\, d\beta = R_{XY}(t_1, t_2) * h(t_1),$$   (14-36)

or

$$R_{YY}(t_1, t_2) = R_{XX}(t_1, t_2) * h^*(t_2) * h(t_1).$$   (14-37)

(Fig. 14.7: (a) $\mu_X(t) \to h(t) \to \mu_Y(t)$; (b) $R_{XX}(t_1, t_2) \to h^*(t_2) \to R_{XY}(t_1, t_2) \to h(t_1) \to R_{YY}(t_1, t_2)$.)
In particular, if $X(t)$ is wide-sense stationary, then we have $\mu_X(t) = \mu_X$, so that from (14-34)

$$\mu_Y(t) = \mu_X \int_{-\infty}^{\infty} h(\tau)\, d\tau = \mu_X\, c, \quad \text{a constant.}$$   (14-38)

Also $R_{XX}(t_1, t_2) = R_{XX}(t_1 - t_2)$, so that (14-35) reduces to

$$R_{XY}(t_1, t_2) = \int_{-\infty}^{\infty} R_{XX}(t_1 - t_2 + \alpha)\, h^*(\alpha)\, d\alpha = R_{XX}(\tau) * h^*(-\tau) = R_{XY}(\tau), \qquad \tau = t_1 - t_2.$$   (14-39)

Thus $X(t)$ and $Y(t)$ are jointly w.s.s. Further, from (14-36), the output autocorrelation simplifies to

$$R_{YY}(t_1, t_2) = \int_{-\infty}^{\infty} R_{XY}(t_1 - \beta - t_2)\, h(\beta)\, d\beta = R_{XY}(\tau) * h(\tau) = R_{YY}(\tau), \qquad \tau = t_1 - t_2.$$   (14-40)

From (14-37), we obtain

$$R_{YY}(\tau) = R_{XX}(\tau) * h^*(-\tau) * h(\tau).$$   (14-41)
From (14-38)-(14-40), the output process is also wide-sense stationary. This gives rise to the following representation (Fig. 14.8):

(a) a wide-sense stationary process $X(t)$ through an LTI system $h(t)$ yields a wide-sense stationary output $Y(t)$;
(b) a strict-sense stationary process $X(t)$ through an LTI system $h(t)$ yields a strict-sense stationary output $Y(t)$ (see Text for proof);
(c) a Gaussian process $X(t)$ (also stationary) through a linear system yields a Gaussian process $Y(t)$ (also stationary).
White Noise Process:
$W(t)$ is said to be a white noise process if

$$R_{WW}(t_1, t_2) = q(t_1)\, \delta(t_1 - t_2),$$   (14-42)

i.e., $E[W(t_1)\, W^*(t_2)] = 0$ unless $t_1 = t_2$. $W(t)$ is said to be wide-sense stationary (w.s.s.) white noise if $E[W(t)] = $ constant and

$$R_{WW}(t_1, t_2) = q\, \delta(t_1 - t_2) = q\, \delta(\tau).$$   (14-43)

If $W(t)$ is also a Gaussian process (white Gaussian noise), then all of its samples are independent random variables (why?). Passing white noise $W(t)$ through an LTI system $h(t)$ produces the colored noise $N(t) = h(t) * W(t)$ (Fig. 14.9). For w.s.s. white noise input $W(t)$, we have

$$E[N(t)] = \mu_W \int_{-\infty}^{\infty} h(\tau)\, d\tau, \quad \text{a constant,}$$   (14-44)

and

$$R_{NN}(\tau) = q\, \delta(\tau) * h^*(-\tau) * h(\tau) = q\, h^*(-\tau) * h(\tau) = q\, \rho(\tau),$$   (14-45)

where

$$\rho(\tau) = h(\tau) * h^*(-\tau) = \int_{-\infty}^{\infty} h(\alpha)\, h^*(\alpha - \tau)\, d\alpha.$$   (14-46)

Thus the output of a white noise process through an LTI system represents a (colored) noise process.

Note: White noise need not be Gaussian. "White" and "Gaussian" are two different concepts!
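A discrete-time sketch of (14-45)-(14-46) (an added illustration; the 3-tap filter is an arbitrary choice): pass unit-variance white noise through $h$ and compare the sample autocorrelation of the output with $q\,\rho(n) = q \sum_k h(k)\, h(k+n)$.

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([1.0, 0.5, 0.25])    # an arbitrary FIR impulse response
q = 1.0                           # white-noise power

W = rng.normal(0.0, np.sqrt(q), 200_000)   # w.s.s. white noise (here Gaussian)
N = np.convolve(W, h, mode='valid')        # colored output noise

# Theory: R_NN(n) = q * rho(n) with rho(n) = sum_k h(k) h(k + n); cf. (14-45)-(14-46)
rho = np.correlate(h, h, mode='full')      # rho at lags -2, -1, 0, 1, 2

def sample_autocorr(x, lag):
    """Sample estimate of E{N(m + lag) N(m)} (real process, so lag sign is immaterial)."""
    lag = abs(lag)
    return np.mean(x[lag:] * x[:len(x) - lag])

for lag, r_theory in zip(range(-2, 3), q * rho):
    print(lag, r_theory, sample_autocorr(N, lag))
```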
Upcrossings and Downcrossings of a Stationary Gaussian Process:
Consider a zero-mean stationary Gaussian process $X(t)$ with autocorrelation function $R_{XX}(\tau)$. An upcrossing over the mean value occurs whenever the realization $X(t)$ passes through zero with positive slope (Fig. 14.10: a realization with its upcrossings and downcrossings marked). Let $\lambda\,\Delta t$ represent the probability of such an upcrossing in the interval $(t, t + \Delta t)$. We wish to determine $\lambda$.

Since $X(t)$ is a stationary Gaussian process, its derivative process $X'(t)$ is also zero-mean stationary Gaussian with autocorrelation function $R_{X'X'}(\tau) = -R_{XX}''(\tau)$ (see (9-101)-(9-106), Text). Further, $X(t)$ and $X'(t)$ are jointly Gaussian stationary processes, and since (see (9-106), Text)

$$R_{XX'}(\tau) = -\frac{dR_{XX}(\tau)}{d\tau},$$
we have

$$R_{XX'}(-\tau) = -\frac{dR_{XX}(-\tau)}{d(-\tau)} = \frac{dR_{XX}(\tau)}{d\tau} = -R_{XX'}(\tau),$$   (14-47)

which for $\tau = 0$ gives

$$R_{XX'}(0) = 0 \;\Rightarrow\; E[X(t)\, X'(t)] = 0,$$   (14-48)

i.e., the jointly Gaussian zero-mean random variables

$$X_1 = X(t), \qquad X_2 = X'(t)$$   (14-49)

are uncorrelated and hence independent, with variances

$$\sigma_1^2 = R_{XX}(0) \quad \text{and} \quad \sigma_2^2 = R_{X'X'}(0) = -R_{XX}''(0) > 0,$$   (14-50)

respectively. Thus

$$f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1)\, f_{X_2}(x_2) = \frac{1}{2\pi\, \sigma_1 \sigma_2}\, e^{-\left(\frac{x_1^2}{2\sigma_1^2} + \frac{x_2^2}{2\sigma_2^2}\right)}.$$   (14-51)

To determine $\lambda$, the upcrossing rate,
we argue as follows: in an interval $(t, t + \Delta t)$, the realization moves from $X(t) = X_1$ to $X(t + \Delta t) \simeq X(t) + X'(t)\,\Delta t = X_1 + X_2\,\Delta t$, and hence the realization intersects the zero level somewhere in that interval if

$$X_1 < 0, \quad X_2 > 0, \quad \text{and} \quad X(t + \Delta t) = X_1 + X_2\,\Delta t > 0,$$   (14-52)

i.e., $X_1 > -X_2\,\Delta t$ (Fig. 14.11). Hence the probability of an upcrossing in $(t, t + \Delta t)$ is given by

$$\lambda\,\Delta t = \int_{x_2 = 0}^{\infty}\int_{x_1 = -x_2\Delta t}^{0} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2 = \int_{0}^{\infty} f_{X_2}(x_2)\, dx_2 \int_{-x_2 \Delta t}^{0} f_{X_1}(x_1)\, dx_1.$$   (14-53)

Differentiating both sides of (14-53) with respect to $\Delta t$, we get

$$\lambda = \int_{0}^{\infty} f_{X_2}(x_2)\, x_2\, f_{X_1}(-x_2 \Delta t)\, dx_2,$$   (14-54)

and letting $\Delta t \to 0$, Eq. (14-54) reduces to
$$\lambda = \int_{0}^{\infty} x_2\, f_{X_2}(x_2)\, f_{X_1}(0)\, dx_2 = \frac{1}{\sqrt{2\pi R_{XX}(0)}} \int_{0}^{\infty} x_2\, f_{X_2}(x_2)\, dx_2 = \frac{1}{\sqrt{2\pi R_{XX}(0)}} \cdot \frac{\sigma_2}{\sqrt{2\pi}} = \frac{1}{2\pi}\sqrt{\frac{-R_{XX}''(0)}{R_{XX}(0)}}$$   (14-55)

[where we have made use of (5-78), Text]. There is an equal probability of downcrossings, and hence the total probability of crossing the zero line in an interval $(t, t + \Delta t)$ equals $\lambda_0\,\Delta t$, where

$$\lambda_0 = \frac{1}{\pi}\sqrt{-R_{XX}''(0)\,/\,R_{XX}(0)} > 0.$$   (14-56)

It follows that in a long interval $T$, there will be approximately $\lambda_0 T$ crossings of the mean value. If $-R_{XX}''(0)$ is large, then the autocorrelation function $R_{XX}(\tau)$ decays more rapidly as $\tau$ moves away from zero, implying large random variations around the origin (mean value) for $X(t)$, and the likelihood of zero crossings should increase with an increase in $-R_{XX}''(0)$, agreeing with (14-56).
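Rice's formula (14-56) can be checked by simulation. The sketch below is an added illustration (frequencies, variances and grid sizes are arbitrary choices): it builds a stationary Gaussian process as a finite sum of random sinusoids, $X(t) = \sum_k (A_k \cos\omega_k t + B_k \sin\omega_k t)$ with independent $A_k, B_k \sim N(0, \sigma_k^2)$, for which $R_{XX}(\tau) = \sum_k \sigma_k^2 \cos\omega_k \tau$, so $-R_{XX}''(0) = \sum_k \sigma_k^2 \omega_k^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
omega = np.array([1.0, np.sqrt(2) * 3, 7.0])   # arbitrary angular frequencies
sig2 = np.array([1.0, 0.5, 0.2])               # component variances

# Rice's rate (14-56): lambda_0 = (1/pi) sqrt(-R''(0) / R(0))
lam0 = np.sqrt((sig2 * omega**2).sum() / sig2.sum()) / np.pi

T, dt, M = 200.0, 0.01, 200                    # interval length, step, ensemble size
t = np.arange(0.0, T, dt)
rates = []
for _ in range(M):
    A = rng.normal(0, np.sqrt(sig2))           # random coefficients of one realization
    B = rng.normal(0, np.sqrt(sig2))
    X = A @ np.cos(np.outer(omega, t)) + B @ np.sin(np.outer(omega, t))
    rates.append(np.count_nonzero(np.diff(np.sign(X)) != 0) / T)

print(np.mean(rates), lam0)   # mean empirical crossing rate vs. (14-56)
```

The ensemble average is used because a single finite sum of sinusoids is not ergodic; averaging over realizations recovers the expected crossing rate.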
Discrete-Time Stochastic Processes:
A discrete-time stochastic process $X_n = X(nT)$ is a sequence of random variables. The mean, autocorrelation, and autocovariance functions of a discrete-time process are given by

$$\mu_n = E\{X(nT)\},$$   (14-57)

$$R(n_1, n_2) = E\{X(n_1 T)\, X^*(n_2 T)\},$$   (14-58)

and

$$C(n_1, n_2) = R(n_1, n_2) - \mu_{n_1}\, \mu_{n_2}^*,$$   (14-59)

respectively. As before, the strict-sense and wide-sense stationarity definitions apply here also. For example, $X(nT)$ is wide-sense stationary if

$$E\{X(nT)\} = \mu, \quad \text{a constant,}$$   (14-60)

and

$$E[X\{(k+n)T\}\, X^*\{kT\}] = R(n) = r_n = r_{-n}^*,$$   (14-61)
i.e., $R(n_1, n_2) = R(n_1 - n_2) = R^*(n_2 - n_1)$. The nonnegative-definite property of the autocorrelation sequence in (14-8) can be expressed in terms of certain Hermitian-Toeplitz matrices as follows:

Theorem: A sequence $\{r_n\}_{n=-\infty}^{\infty}$ forms an autocorrelation sequence of a wide-sense stationary stochastic process if and only if every Hermitian-Toeplitz matrix $T_n$ given by

$$T_n = \begin{pmatrix} r_0 & r_1 & r_2 & \cdots & r_n \\ r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\ \vdots & & \ddots & & \vdots \\ r_n^* & r_{n-1}^* & \cdots & r_1^* & r_0 \end{pmatrix} = T_n^{*T}$$   (14-62)

is nonnegative (positive) definite for $n = 0, 1, 2, \ldots$.

Proof: Let $\underline{a} = [a_0, a_1, \ldots, a_n]^T$ represent an arbitrary constant vector. Then from (14-62),

$$\underline{a}^* T_n\, \underline{a} = \sum_{i=0}^{n}\sum_{k=0}^{n} a_i\, a_k^*\, r_{k-i},$$   (14-63)

since the Toeplitz character gives $(T_n)_{i,k} = r_{k-i}$. Using (14-61), Eq. (14-63) reduces to

$$\underline{a}^* T_n\, \underline{a} = \sum_{i=0}^{n}\sum_{k=0}^{n} a_i\, a_k^*\, E\{X(kT)\, X^*(iT)\} = E\left\{\Big|\sum_{k=0}^{n} a_k^*\, X(kT)\Big|^2\right\} \ge 0.$$   (14-64)

From (14-64), if $X(nT)$ is a wide-sense stationary stochastic process, then $T_n$ is a nonnegative definite matrix for every $n = 0, 1, 2, \ldots$. Similarly, the converse also follows from (14-64). (See Section 9.4, Text.)
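As a small added illustration (not in the original slides; SciPy assumed available), one can form $T_n$ from a candidate sequence and test nonnegative definiteness via its eigenvalues. Here $r_n = a^{|n|}$, the AR(1) sequence met later in (14-77), passes, while an arbitrary decreasing sequence fails:

```python
import numpy as np
from scipy.linalg import toeplitz

def is_nonneg_definite(r):
    """Build the Hermitian-Toeplitz matrix T_n of (14-62) from r = [r_0, ..., r_n]
    and test nonnegative definiteness via its (real) eigenvalues."""
    T = toeplitz(np.conj(r), r)        # first column r*, first row r
    return bool(np.all(np.linalg.eigvalsh(T) >= -1e-12))

n = np.arange(6)
print(is_nonneg_definite(0.8 ** n))        # True: a valid w.s.s. autocorrelation
print(is_nonneg_definite(1.0 - 0.9 * n))   # False: not an autocorrelation sequence
```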

If $X(nT)$ represents a wide-sense stationary input to a discrete-time system $\{h(nT)\}$, and $Y(nT)$ the system output, then as before the cross-correlation function satisfies

$$R_{XY}(n) = R_{XX}(n) * h^*(-n)$$   (14-65)

and the output autocorrelation function is given by

$$R_{YY}(n) = R_{XY}(n) * h(n),$$   (14-66)

or

$$R_{YY}(n) = R_{XX}(n) * h^*(-n) * h(n).$$   (14-67)

Thus wide-sense stationarity from input to output is preserved for discrete-time systems also.
Auto Regressive Moving Average (ARMA) Processes
Consider an input-output representation

$$X(n) = -\sum_{k=1}^{p} a_k\, X(n-k) + \sum_{k=0}^{q} b_k\, W(n-k),$$   (14-68)

where $X(n)$ may be considered as the output of a system $\{h(n)\}$ driven by the input $W(n)$ (Fig. 14.12: $W(n) \to h(n) \to X(n)$). The $z$-transform of (14-68) gives

$$X(z)\sum_{k=0}^{p} a_k z^{-k} = W(z)\sum_{k=0}^{q} b_k z^{-k}, \qquad a_0 \equiv 1,$$   (14-69)

or

$$H(z) = \sum_{k=0}^{\infty} h(k)\, z^{-k} = \frac{X(z)}{W(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_q z^{-q}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_p z^{-p}} \triangleq \frac{B(z)}{A(z)}$$   (14-70)
represents the transfer function of the associated system response $\{h(n)\}$ in Fig. 14.12, so that

$$X(n) = \sum_{k=0}^{\infty} h(n-k)\, W(k).$$   (14-71)

Notice that the transfer function $H(z)$ in (14-70) is rational, with $p$ poles and $q$ zeros that determine the model order of the underlying system. From (14-68), the output undergoes regression over $p$ of its previous values, and at the same time a moving average based on $W(n), W(n-1), \ldots, W(n-q)$, the input over $(q+1)$ values, is added to it, thus generating an Auto Regressive Moving Average (ARMA($p, q$)) process $X(n)$. Generally the input $\{W(n)\}$ represents a sequence of uncorrelated random variables of zero mean and constant variance $\sigma_W^2$, so that

$$R_{WW}(n) = \sigma_W^2\, \delta(n).$$   (14-72)

If, in addition, $\{W(n)\}$ is normally distributed, then the output $\{X(n)\}$ also represents a strict-sense stationary normal process. If $q = 0$, then (14-68) represents an AR($p$) process (all-pole process), and if $p = 0$, then (14-68) represents an MA($q$) process (all-zero process). Next, we shall discuss AR(1) and AR(2) processes through explicit calculations.
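In practice an ARMA($p, q$) realization is easy to generate by filtering white noise. The sketch below is an added illustration (coefficient values are arbitrary; SciPy assumed available); `scipy.signal.lfilter` uses the same denominator convention $A(z) = 1 + a_1 z^{-1} + \cdots + a_p z^{-p}$ as (14-70):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
sigma_w = 1.0

# ARMA(2, 1): A(z) = 1 + a1 z^-1 + a2 z^-2, B(z) = b0 + b1 z^-1 (stable pole choice)
a = [1.0, -0.6, 0.08]                  # [1, a1, a2] as in (14-70); poles at 0.2, 0.4
b = [1.0, 0.4]                         # [b0, b1]

W = rng.normal(0.0, sigma_w, 10_000)   # white input noise of variance sigma_w^2
X = lfilter(b, a, W)                   # one ARMA process realization X(n)
print(X.mean(), X.std())
```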
AR(1) Process: An AR(1) process has the form (see (14-68))

$$X(n) = a\, X(n-1) + W(n),$$   (14-73)

and from (14-70) the corresponding system transfer function is

$$H(z) = \frac{1}{1 - a z^{-1}} = \sum_{n=0}^{\infty} a^n z^{-n},$$   (14-74)

provided $|a| < 1$. Thus

$$h(n) = a^n, \qquad |a| < 1,$$   (14-75)

represents the impulse response of a stable AR(1) system. Using (14-67) together with (14-72) and (14-75), we get the output autocorrelation sequence of an AR(1) process to be

$$R_{XX}(n) = \sigma_W^2\, \delta(n) * \{a^{-n}\} * \{a^{n}\} = \sigma_W^2 \sum_{k=0}^{\infty} a^{|n|+k}\, a^{k} = \sigma_W^2\, \frac{a^{|n|}}{1 - a^2},$$   (14-76)
where we have made use of the discrete version of (14-46). The normalized (in terms of $R_{XX}(0)$) output autocorrelation sequence is given by

$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = a^{|n|}, \qquad |n| \ge 0.$$   (14-77)
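A quick simulation check of (14-77) (an added sketch; $a = 0.7$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)
a, Nn = 0.7, 200_000

W = rng.normal(0.0, 1.0, Nn)
X = np.empty(Nn)
X[0] = W[0]
for n in range(1, Nn):                 # X(n) = a X(n-1) + W(n), cf. (14-73)
    X[n] = a * X[n - 1] + W[n]

X = X[1000:]                           # discard the start-up transient
r0 = np.mean(X * X)
for lag in range(4):
    rho_hat = np.mean(X[lag:] * X[:len(X) - lag]) / r0
    print(lag, rho_hat, a ** lag)      # sample rho(n) vs. a^|n| from (14-77)
```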
It is instructive to compare the AR(1) model above with the result of superimposing a random component on it, which may be an error term associated with observing a first-order AR process $X(n)$. Thus

$$Y(n) = X(n) + V(n),$$   (14-78)

where $X(n) \sim$ AR(1) as in (14-73), and $V(n)$ is an uncorrelated random sequence with zero mean and variance $\sigma_V^2$ that is also uncorrelated with $\{W(n)\}$. From (14-73) and (14-78), we obtain the autocorrelation of the observed process $Y(n)$ to be

$$R_{YY}(n) = R_{XX}(n) + R_{VV}(n) = R_{XX}(n) + \sigma_V^2\, \delta(n) = \sigma_W^2\, \frac{a^{|n|}}{1 - a^2} + \sigma_V^2\, \delta(n),$$   (14-79)
so that its normalized version is given by

$$\rho_Y(n) = \frac{R_{YY}(n)}{R_{YY}(0)} = \begin{cases} 1, & n = 0 \\ c\, a^{|n|}, & n = \pm 1, \pm 2, \ldots \end{cases}$$   (14-80)

where

$$c = \frac{\sigma_W^2}{\sigma_W^2 + \sigma_V^2 (1 - a^2)} < 1.$$   (14-81)

Eqs. (14-77) and (14-80) demonstrate the effect of superimposing an error sequence on an AR(1) model. For non-zero lags, the autocorrelation of the observed sequence $\{Y(n)\}$ is reduced by a constant factor compared to that of the original process $\{X(n)\}$ (Fig. 14.13: $\rho_X(0) = \rho_Y(0) = 1$, while $\rho_X(k) > \rho_Y(k)$ for $k \ne 0$).

From (14-78), the superimposed error sequence $V(n)$ only affects the corresponding term in $Y(n)$ (term by term). However, a particular term in the "input sequence" $W(n)$ affects $X(n)$ and $Y(n)$ as well as all subsequent observations.
AR(2) Process: An AR(2) process has the form

$$X(n) = a_1 X(n-1) + a_2 X(n-2) + W(n),$$   (14-82)

and from (14-70) the corresponding transfer function is given by

$$H(z) = \sum_{n=0}^{\infty} h(n)\, z^{-n} = \frac{1}{1 - a_1 z^{-1} - a_2 z^{-2}} = \frac{b_1}{1 - \lambda_1 z^{-1}} + \frac{b_2}{1 - \lambda_2 z^{-1}},$$   (14-83)

so that

$$h(0) = 1, \quad h(1) = a_1, \quad h(n) = a_1 h(n-1) + a_2 h(n-2), \quad n \ge 2,$$   (14-84)

and in terms of the poles $\lambda_1$ and $\lambda_2$ of the transfer function, from (14-83) we have

$$h(n) = b_1 \lambda_1^n + b_2 \lambda_2^n, \qquad n \ge 0,$$   (14-85)

which represents the impulse response of the system. From (14-84)-(14-85) we also have $b_1 + b_2 = 1$, $b_1 \lambda_1 + b_2 \lambda_2 = a_1$. From (14-83),

$$\lambda_1 + \lambda_2 = a_1, \qquad \lambda_1 \lambda_2 = -a_2,$$   (14-86)
and $H(z)$ stable implies $|\lambda_1| < 1$, $|\lambda_2| < 1$. Further, using (14-82), the output autocorrelations satisfy the recursion (for $n \ge 1$)

$$R_{XX}(n) = E\{X(n+m)\, X^*(m)\} = E\{[a_1 X(n+m-1) + a_2 X(n+m-2)]\, X^*(m)\} + \underbrace{E\{W(n+m)\, X^*(m)\}}_{0} = a_1 R_{XX}(n-1) + a_2 R_{XX}(n-2),$$   (14-87)

and hence their normalized version is given by

$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = a_1 \rho_X(n-1) + a_2 \rho_X(n-2).$$   (14-88)

By direct calculation using (14-67), the output autocorrelations are given by

$$R_{XX}(n) = R_{WW}(n) * h^*(-n) * h(n) = \sigma_W^2\; h^*(-n) * h(n) = \sigma_W^2 \sum_{k=0}^{\infty} h^*(n+k)\, h(k)$$

$$= \sigma_W^2 \left\{ \frac{|b_1|^2 (\lambda_1^*)^n}{1 - |\lambda_1|^2} + \frac{b_1^* b_2\, (\lambda_1^*)^n}{1 - \lambda_1^* \lambda_2} + \frac{b_1 b_2^*\, (\lambda_2^*)^n}{1 - \lambda_1 \lambda_2^*} + \frac{|b_2|^2 (\lambda_2^*)^n}{1 - |\lambda_2|^2} \right\},$$   (14-89)
where we have made use of (14-85). From (14-89), the normalized output autocorrelations may be expressed as

$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = c_1 (\lambda_1^*)^n + c_2 (\lambda_2^*)^n,$$   (14-90)

where $c_1$ and $c_2$ are appropriate constants.

Damped Exponentials: When the second-order system in (14-83)-(14-85) is real and corresponds to a damped exponential response, the poles are complex conjugates, which gives $a_1^2 + 4a_2 < 0$ in (14-83). Thus

$$\lambda_1 = r\, e^{j\theta}, \qquad \lambda_2 = \lambda_1^*, \qquad r < 1.$$   (14-91)

In that case $c_1 = c_2^* = c\, e^{j\beta}$ in (14-90), so that the normalized correlations there reduce to

$$\rho_X(n) = 2\,\mathrm{Re}\{c_1 (\lambda_1^*)^n\} = 2c\, r^n \cos(n\theta - \beta).$$   (14-92)

But from (14-86),

$$\lambda_1 + \lambda_2 = 2r\cos\theta = a_1, \qquad r^2 = -a_2 < 1,$$   (14-93)
and hence $2r\sin\theta = \sqrt{-(a_1^2 + 4a_2)} > 0$, which gives

$$\tan\theta = \frac{\sqrt{-(a_1^2 + 4a_2)}}{a_1}.$$   (14-94)

Also, from (14-88),

$$\rho_X(1) = a_1 \rho_X(0) + a_2 \rho_X(-1) = a_1 + a_2 \rho_X(1),$$

so that

$$\rho_X(1) = \frac{a_1}{1 - a_2} = 2c\, r \cos(\theta - \beta),$$   (14-95)

where the latter form is obtained from (14-92) with $n = 1$. But $\rho_X(0) = 1$ in (14-92) gives

$$2c\cos\beta = 1, \quad \text{or} \quad c = \frac{1}{2\cos\beta}.$$   (14-96)

Substituting (14-96) into (14-92) and (14-95), we obtain the normalized output autocorrelations to be

$$\rho_X(n) = (-a_2)^{n/2}\, \frac{\cos(n\theta - \beta)}{\cos\beta}, \qquad -a_2 < 1,$$   (14-97)

where $\beta$ satisfies

$$\frac{\cos(\theta - \beta)}{\cos\beta} = \frac{a_1}{1 - a_2} \cdot \frac{1}{\sqrt{-a_2}}.$$   (14-98)

Thus the normalized autocorrelations of a damped second-order system with real coefficients subject to random uncorrelated impulses satisfy (14-97).
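The damped-oscillation form (14-97) can be verified directly against the recursion (14-88); the sketch below is an added illustration, with $a_1, a_2$ chosen arbitrarily so that $a_1^2 + 4a_2 < 0$, and $\beta$ solved from (14-98) via $\cos(\theta-\beta)/\cos\beta = \cos\theta + \sin\theta\tan\beta$:

```python
import numpy as np

a1, a2 = 1.0, -0.5                   # a1^2 + 4 a2 = -1 < 0  => complex poles
r = np.sqrt(-a2)                     # pole radius, cf. (14-93)
theta = np.arctan2(np.sqrt(-(a1**2 + 4 * a2)), a1)   # pole angle, cf. (14-94)

# Solve (14-98) for beta: cos(theta - beta)/cos(beta) = K = cos(theta) + sin(theta) tan(beta)
K = a1 / ((1 - a2) * r)
beta = np.arctan((K - np.cos(theta)) / np.sin(theta))

# rho(n) from the recursion (14-88), seeded with (14-95) ...
N = 10
rho = np.empty(N)
rho[0] = 1.0
rho[1] = a1 / (1 - a2)
for n in range(2, N):
    rho[n] = a1 * rho[n - 1] + a2 * rho[n - 2]

# ... versus the closed form (14-97)
n = np.arange(N)
rho_closed = (-a2) ** (n / 2) * np.cos(n * theta - beta) / np.cos(beta)
print(np.max(np.abs(rho - rho_closed)))   # ~1e-16: the two coincide
```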

More on ARMA Processes
From (14-70), an ARMA($p, q$) system has only $p + q + 1$ independent coefficients ($a_k$, $k = 1 \to p$; $b_i$, $i = 0 \to q$), and hence its impulse response sequence $\{h_k\}$ must also exhibit a similar dependence among its entries. In fact, according to P. Dienes (The Taylor Series, 1931), an old result due to Kronecker¹ (1881) states that the necessary and sufficient condition for $H(z) = \sum_{k=0}^{\infty} h_k z^{-k}$ to represent a rational system (ARMA) is that

$$\det H_n = 0, \qquad n \ge N \;\;\text{(for all sufficiently large } n\text{)},$$   (14-99)

where

$$H_n = \begin{pmatrix} h_0 & h_1 & h_2 & \cdots & h_n \\ h_1 & h_2 & h_3 & \cdots & h_{n+1} \\ \vdots & & & & \vdots \\ h_n & h_{n+1} & h_{n+2} & \cdots & h_{2n} \end{pmatrix},$$   (14-100)

i.e., in the case of rational systems, for all sufficiently large $n$ the Hankel matrices $H_n$ in (14-100) all have the same rank.

The necessity part easily follows from (14-70) by cross-multiplying and equating coefficients of like powers of $z^{-k}$, $k = 0, 1, 2, \ldots$.

¹Among other things, "God created the integers and the rest is the work of man." (Leopold Kronecker)
This gives

$$b_0 = h_0$$
$$b_1 = h_0 a_1 + h_1$$
$$\vdots$$
$$b_q = h_0 a_q + h_1 a_{q-1} + \cdots + h_q$$   (14-101)

$$0 = h_0 a_{q+i} + h_1 a_{q+i-1} + \cdots + h_{q+i-1} a_1 + h_{q+i}, \qquad i \ge 1.$$   (14-102)

For systems with $q \le p - 1$, letting $i = p - q,\, p - q + 1, \ldots, 2p - q$ in (14-102), we get

$$h_0 a_p + h_1 a_{p-1} + \cdots + h_{p-1} a_1 + h_p = 0$$
$$\vdots$$
$$h_p a_p + h_{p+1} a_{p-1} + \cdots + h_{2p-1} a_1 + h_{2p} = 0,$$   (14-103)

which gives $\det H_p = 0$. Similarly, $i = p - q + 1, \ldots$ gives

$$h_0 a_{p+1} + h_1 a_p + \cdots + h_{p+1} = 0$$
$$h_1 a_{p+1} + h_2 a_p + \cdots + h_{p+2} = 0$$
$$\vdots$$
$$h_{p+1} a_{p+1} + h_{p+2} a_p + \cdots + h_{2p+2} = 0,$$   (14-104)

and that gives $\det H_{p+1} = 0$, etc. (Notice that $a_{p+k} = 0$, $k = 1, 2, \ldots$.) (For the sufficiency proof, see Dienes.)
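The Kronecker condition (14-99)-(14-100) is easy to observe numerically. The sketch below (an added illustration; the ARMA(2, 1) coefficients are arbitrary, SciPy assumed available) computes an impulse response and checks that the Hankel matrices $H_n$ have bounded rank and vanishing determinant for large $n$:

```python
import numpy as np
from scipy.linalg import hankel
from scipy.signal import dimpulse

# A rational (ARMA) system: H(z) = (1 + 0.4 z^-1) / (1 - 0.6 z^-1 + 0.08 z^-2)
b, a = [1.0, 0.4, 0.0], [1.0, -0.6, 0.08]
_, (h,) = dimpulse((b, a, 1), n=40)
h = h.ravel()                                # impulse response h_0, h_1, ...

for n in range(1, 6):
    Hn = hankel(h[:n + 1], h[n:2 * n + 1])   # (n+1) x (n+1) Hankel matrix (14-100)
    print(n, np.linalg.matrix_rank(Hn, tol=1e-10), np.linalg.det(Hn))
# rank saturates at p = 2, and det H_n ~ 0 for n >= 2, per (14-99)
```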
It is possible to obtain similar determinantal conditions for ARMA systems in terms of Hankel matrices generated from the output autocorrelation sequence.

Referring back to the ARMA($p, q$) model in (14-68), the input white noise process $w(n)$ there is uncorrelated with its own past sample values as well as with the past values of the system output. This gives

$$E\{w(n)\, w^*(n-k)\} = 0, \qquad k \ge 1,$$   (14-105)

$$E\{w(n)\, x^*(n-k)\} = 0, \qquad k \ge 1.$$   (14-106)
Together with (14-68), we obtain

$$r_i = E\{x(n)\, x^*(n-i)\} = -\sum_{k=1}^{p} a_k\, E\{x(n-k)\, x^*(n-i)\} + \sum_{k=0}^{q} b_k\, E\{w(n-k)\, x^*(n-i)\}$$
$$= -\sum_{k=1}^{p} a_k\, r_{i-k} + \sum_{k=0}^{q} b_k\, E\{w(n-k)\, x^*(n-i)\},$$   (14-107)

and hence in general

$$\sum_{k=1}^{p} a_k\, r_{i-k} + r_i \ne 0, \qquad i \le q,$$   (14-108)

and

$$\sum_{k=1}^{p} a_k\, r_{i-k} + r_i = 0, \qquad i \ge q + 1.$$   (14-109)

Notice that (14-109) is the same as (14-102) with $\{h_k\}$ replaced
by $\{r_k\}$, and hence the Kronecker conditions for rational systems can be expressed in terms of the output autocorrelations as well. Thus if $X(n) \sim$ ARMA($p, q$) represents a wide-sense stationary stochastic process, then its output autocorrelation sequence $\{r_k\}$ satisfies

$$\mathrm{rank}\; D_{p-1} = \mathrm{rank}\; D_{p+k} = p, \qquad k \ge 0,$$   (14-110)

where

$$D_k = \begin{pmatrix} r_0 & r_1 & r_2 & \cdots & r_k \\ r_1 & r_2 & r_3 & \cdots & r_{k+1} \\ \vdots & & & & \vdots \\ r_k & r_{k+1} & r_{k+2} & \cdots & r_{2k} \end{pmatrix}$$   (14-111)

represents the $(k+1) \times (k+1)$ Hankel matrix generated from $r_0, r_1, \ldots, r_k, \ldots, r_{2k}$. It follows that for ARMA($p, q$) systems, we have

$$\det D_n = 0, \qquad \text{for all sufficiently large } n.$$   (14-112)