In many real-life situations, observations are made over a period of time, and they
are influenced by random effects not just at a single instant but throughout
the entire interval of time or sequence of times.
In a rough sense, a random process is a phenomenon that varies to some
degree unpredictably as time goes on. If we observed an entire time-sequence
of the process on several different occasions, under presumably identical
conditions, the resulting observation sequences, in general, would be different.
A random variable (RV) is a rule (or function) that assigns a real number to
every outcome of a random experiment, while a random process is a rule (or
function) that assigns a time function to every outcome of a random experiment.
Example
Observe the demand per week for a certain service over time.
Example
The closing price of HSBC stock observed from June 15 to Dec. 22, 2003.
Example
Outcomes from Mark Six in the years 2000 and 2001. We have two sequences of
observations made twice per week. The actual sequence of observations is called
a realization of the random process associated with the random experiment
of Mark Six. The realizations in different years should differ, though the nature
of the random experiment remains the same (assuming no change to the rules
of Mark Six).
A random experiment may lead not only to a single random variable, but to an
entire sequence
{Xi : i = 1, 2, 3, ...} = {X1, X2, X3, ...}
of random variables (indexed family of random variables).
1. x1(t) = 4
2. x2(t) = 2
3. x3(t) = −2
4. x4(t) = −4
5. x5(t) = t/2
6. x6(t) = −t/2
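This assignment of an entire time function to each outcome can be sketched directly in code. The minus signs on the constant sample functions are an assumption for illustration, since the extracted list above does not show them clearly:

```python
import random

# A random process as a rule: each outcome of a die toss is assigned
# an entire time function (a sample function, i.e. a realization).
# The six sample functions follow the list above (signs assumed).
sample_functions = {
    1: lambda t: 4.0,
    2: lambda t: 2.0,
    3: lambda t: -2.0,
    4: lambda t: -4.0,
    5: lambda t: t / 2.0,
    6: lambda t: -t / 2.0,
}

def realize():
    """Run the random experiment once and return its time function."""
    outcome = random.randint(1, 6)
    return outcome, sample_functions[outcome]

outcome, x = realize()
# For this fixed outcome, x(t) is an ordinary (deterministic) function of t;
# repeating the experiment generally yields a different realization.
values = [x(t) for t in range(5)]
```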
1. If both T and S are discrete, the random process is called a discrete random
sequence. For example, if Xn represents the outcome of the nth toss of
a fair die, then {Xn, n ≥ 1} is a discrete random sequence, since T =
{1, 2, 3, ...} and S = {1, 2, 3, 4, 5, 6}.
2. If T is discrete and S is continuous, the random process is called a continuous random sequence.
For example, if Xn represents the temperature at the end of the nth hour
of a day, then {Xn, 1 ≤ n ≤ 24} is a continuous random sequence, since
temperature can take any value in an interval and is hence continuous.
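A minimal sketch of both cases; the temperature model below (20 degrees plus Gaussian noise) is a made-up illustration, not from the notes:

```python
import random

# Discrete random sequence: T = {1, 2, 3, ...} (toss index) and
# S = {1, ..., 6} (die face) are both discrete.
tosses = [random.randint(1, 6) for _ in range(10)]  # X1, ..., X10

# Continuous random sequence: T is discrete (hour index) but S is
# continuous. Hypothetical model: baseline 20 degrees plus noise.
temps = [20.0 + random.gauss(0.0, 3.0) for _ in range(24)]  # X1, ..., X24
```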
mX(t) = E[X(t)] = ∫ x fX(t)(x) dx.

RX(t1, t2) = E[X(t1)X(t2)] = ∫∫ xy fX(t1),X(t2)(x, y) dx dy.

Note that fX(t1),X(t2) is the second-order pdf of X(t) and RX(t1, t2) is a function
of t1 and t2.

ρX(t1, t2) = CX(t1, t2) / √(CX(t1, t1) CX(t2, t2)),   with |ρX(t1, t2)| ≤ 1.
Solution
(c)
E[Xn] = ∫_0^1 s^n ds = [s^(n+1)/(n+1)]_0^1 = 1/(n+1).

RX(n, n+k) = E[Xn Xn+k] = ∫_0^1 s^(2n+k) ds = [s^(2n+k+1)/(2n+k+1)]_0^1 = 1/(2n+k+1).

CX(n, n+k) = RX(n, n+k) − E[Xn] E[Xn+k] = 1/(2n+k+1) − 1/[(n+1)(n+k+1)].
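These moments can be checked by Monte Carlo, assuming (as in this example) Xn = s^n with s drawn uniformly from [0, 1]:

```python
import random

# Monte Carlo check for the process Xn = s^n, s ~ Uniform(0, 1).
random.seed(0)
n, k, trials = 2, 3, 200_000

mean_sum = 0.0   # accumulates Xn
corr_sum = 0.0   # accumulates Xn * X(n+k) = s^(2n+k)
for _ in range(trials):
    s = random.random()
    mean_sum += s ** n
    corr_sum += s ** n * s ** (n + k)

m_hat = mean_sum / trials    # estimates 1/(n+1) = 1/3
R_hat = corr_sum / trials    # estimates 1/(2n+k+1) = 1/8
```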
(b) Find mZ (t) and CZ (t1, t2), t > 0 and t1 > 0, t2 > 0.
Solution
(a) Since A and B are independent, so are At and B. Let fA(x) and fB(y)
denote the pdfs of A and B, respectively. Then

fAt(x′) = (1/t) fA(x′/t),

so that

fZ(t)(z) = ∫ (1/t) fA(x′/t) fB(z − x′) dx′.
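The convolution formula can be sanity-checked numerically at a single point; the choices A, B ~ Uniform(0, 1), t = 2, and z = 1 below are assumptions for illustration only:

```python
import random

# Numerical check of f_Z(z) = ∫ (1/t) fA(x/t) fB(z - x) dx for Z = A*t + B,
# assuming (for illustration) A, B ~ Uniform(0, 1), t = 2, evaluated at z = 1.
random.seed(5)
t, z, trials = 2.0, 1.0, 400_000

def fA(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

fB = fA  # both uniform on [0, 1]

# Convolution integral evaluated at z by the midpoint rule.
steps, lo, hi = 2000, -1.0, 4.0
h = (hi - lo) / steps
density = sum((1.0 / t) * fA((lo + (i + 0.5) * h) / t)
              * fB(z - (lo + (i + 0.5) * h))
              for i in range(steps)) * h

# Empirical density of Z near z, from direct simulation of Z = A*t + B.
eps = 0.05
hits = sum(1 for _ in range(trials)
           if abs(random.random() * t + random.random() - z) <= eps)
density_hat = hits / (trials * 2 * eps)
```

The two estimates agree near 0.5, the exact value of fZ(1) for these distributions.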
= 1 if n1 = n2, and 0 otherwise (n1 ≠ n2).
1. Random changes of the form Xt+h − Xt, for fixed h > 0, are called increments
of the process.
Markov process
A random process X(t) is said to be Markov if the future of the process given
the present is independent of the past. For a discrete-valued Markov process,
P[X(tk) = xk | X(tk−1) = xk−1, ..., X(t1) = x1]
= P[X(tk) = xk | X(tk−1) = xk−1].
P[X1 = −1] = q,
P[X2 = 0] = 2pq,
P[X2 = −2] = q².
Questions
2. Are X10 − X4 and X16 − X12 independent? How about X10 − X4 and X12 − X8?
Hint: We are considering increments over non-overlapping and overlapping
intervals, respectively.
PX3(−3) = q³.
In general,
PXk(j) = kC_(k+j)/2 · p^((k+j)/2) · q^((k−j)/2),   −k ≤ j ≤ k,
Why? Let R and L be the number of right moves and left moves, respectively.
We have
R + L = k
and
R − L = j,
so that R = (k + j)/2. Note that when k is odd (even), j must be odd (even).
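The pmf can be verified by simulating the walk directly; the values of p, k, and the tested point j below are arbitrary choices for illustration:

```python
import random
from math import comb

# Check the random-walk pmf P_Xk(j) = C(k, (k+j)/2) p^((k+j)/2) q^((k-j)/2)
# by simulating k steps (+1 with prob p, -1 with prob q = 1 - p).
random.seed(1)
p, k, trials = 0.6, 5, 200_000
q = 1.0 - p

counts = {}
for _ in range(trials):
    pos = sum(1 if random.random() < p else -1 for _ in range(k))
    counts[pos] = counts.get(pos, 0) + 1

def pmf(j):
    # j must have the same parity as k, and -k <= j <= k.
    if (k + j) % 2 or not -k <= j <= k:
        return 0.0
    r = (k + j) // 2                    # number of right moves
    return comb(k, r) * p**r * q**(k - r)

est = counts.get(3, 0) / trials        # empirical P[X5 = 3]
exact = pmf(3)                         # C(5,4) p^4 q
```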
How to find the joint pmfs?
For example, P [X2 = 0, X3 = 3] = P [X3 = 3|X2 = 0]P [X2 = 0] = 0;
P[X2 = 2, X3 = 1] = P[X3 = 1 | X2 = 2] P[X2 = 2] = qp².
Sum processes
Sn = Σ_{i=1}^{n} Xi,

and S1 = X1; we take S0 = 0 for notational convenience.
1. Sn is Markovian:
P[Sn = sn | Sn−1 = sn−1]
= P[Sn = sn | Sn−1 = sn−1, Sn−2 = sn−2, ..., S1 = s1].
This is because Sn = Sn−1 + Xn and the value taken by Xn is independent
of the values taken by X1, ..., Xn−1.
Remark
Sn and Sm are not independent since
Sn = X1 + ⋯ + Xn
and
Sm = X1 + ⋯ + Xm,   n > m.
Solution
P[Sn = sn, Sm = sm] = P[Sn − Sm = sn − sm, Sm = sm]
= P[Sn − Sm = sn − sm] P[Sm = sm],
due to independent increments over non-overlapping intervals. Further, from the
stationary increments property, Sn − Sm has the same distribution as Sn−m − S0 = Sn−m, so that
P[Sn = sn, Sm = sm] = P[Sn−m = sn − sm] P[Sm = sm].
This verifies that Sn and Sm are not independent since
P[Sn = sn, Sm = sm] ≠ P[Sn = sn] P[Sm = sm].
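A quick simulation sketch of this factorization, assuming iid Bernoulli(p) steps (so increments over disjoint index ranges are independent with stationary distribution):

```python
import random

# Check P[Sn = a, Sm = b] = P[S(n-m) = a - b] * P[Sm = b]
# for a Bernoulli(p) sum process.
random.seed(2)
p, m, n, trials = 0.5, 2, 5, 400_000
a, b = 3, 1

joint = marg_m = marg_diff = 0
for _ in range(trials):
    steps = [1 if random.random() < p else 0 for _ in range(n)]
    sm, sn = sum(steps[:m]), sum(steps)
    joint += (sn == a and sm == b)
    marg_m += (sm == b)
    marg_diff += (sn - sm == a - b)   # same law as S(n-m)

lhs = joint / trials
rhs = (marg_diff / trials) * (marg_m / trials)
```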
= E[ Σ_{i=1}^{n} (Xi − m) · Σ_{j=1}^{k} (Xj − m) ]
= Σ_{i=1}^{n} Σ_{j=1}^{k} E[(Xi − m)(Xj − m)]
= Σ_{i=1}^{min(n,k)} CX(i, i) = min(n, k) σ²,
since E[(Xi − m)(Xj − m)] = σ² δij and only those terms with i = j survive.
Alternative method
Without loss of generality, we let n ≤ k so that n = min(n, k).
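The result CS(n, k) = min(n, k) σ² can be checked by simulation; the iid steps below are assumed Uniform(0, 1) (mean 1/2, σ² = 1/12) purely for illustration:

```python
import random

# Check C_S(n, k) = min(n, k) * sigma^2 for a sum process with iid steps.
# Steps assumed Uniform(0, 1): mean 1/2, variance 1/12.
random.seed(3)
n, k, trials = 4, 7, 200_000
mean_step, var_step = 0.5, 1.0 / 12.0

acc = 0.0
for _ in range(trials):
    steps = [random.random() for _ in range(k)]
    sn, sk = sum(steps[:n]), sum(steps)
    acc += (sn - n * mean_step) * (sk - k * mean_step)

C_hat = acc / trials     # estimates min(4, 7) / 12 = 1/3
```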
nCj p^j (1 − p)^(n−j)   for 0 ≤ j ≤ n,   and 0 otherwise.
Σ_{j=0}^{n} j² · nCj p^j q^(n−j).

Note that

Σ_{j=0}^{n} j² nCj p^j q^(n−j) = Σ_{j=2}^{n} j(j − 1) nCj p^j q^(n−j) + Σ_{j=0}^{n} j nCj p^j q^(n−j)
= n(n − 1)p² Σ_{j′=0}^{n−2} (n−2)Cj′ p^(j′) q^(n−2−j′) + np,   where j′ = j − 2,
= n(n − 1)p² + np.
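The closed form n(n − 1)p² + np can be confirmed exactly by summing the binomial pmf term by term; n and p below are arbitrary choices:

```python
from math import comb

# Exact check of sum_j j^2 * C(n, j) p^j q^(n-j) = n(n-1)p^2 + np
# for Sn ~ Binomial(n, p).
n, p = 10, 0.3
q = 1.0 - p

second_moment = sum(j * j * comb(n, j) * p**j * q**(n - j)
                    for j in range(n + 1))
closed_form = n * (n - 1) * p**2 + n * p
```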
= P[Sk = ℓ − m] P[Sn = m]   (stationary increments)
= [(λk)^(ℓ−m) e^(−λk) / (ℓ − m)!] · [(λn)^m e^(−λn) / m!].
(c) VAR(Sn) = VAR(Sn+k − Sk) = VAR(Sn+k) + VAR(Sk) − 2 COV(Sk, Sn+k),
so that
COV(Sk, Sn+k) = (1/2)[VAR(Sn+k) + VAR(Sk) − VAR(Sn)]
= (1/2)[λ(n + k) + λk − λn] = λk.
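A simulation sketch, assuming Sn counts arrivals of a rate-λ Poisson process, so that counts over disjoint intervals are independent Poisson variables; the Knuth sampler below is a standard method, not from the notes:

```python
import math
import random

# Check COV(Sk, S(n+k)) = lambda * k for a Poisson counting process,
# using independence of counts over disjoint intervals (an increment
# over an interval of length t is Poisson with mean lambda * t).
def poisson(mu):
    # Knuth's algorithm: multiply uniforms until the product drops
    # below e^(-mu); the number of factors used, minus one, is returned.
    L, prod, j = math.exp(-mu), 1.0, 0
    while True:
        prod *= random.random()
        if prod <= L:
            return j
        j += 1

random.seed(4)
lam, k, n, trials = 2.0, 3, 5, 200_000

acc = sk_sum = snk_sum = 0.0
for _ in range(trials):
    sk = poisson(lam * k)        # count on (0, k]
    inc = poisson(lam * n)       # independent count on (k, n + k]
    snk = sk + inc
    acc += sk * snk
    sk_sum += sk
    snk_sum += snk

cov_hat = acc / trials - (sk_sum / trials) * (snk_sum / trials)
# cov_hat should be close to lam * k
```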