Solutions to Selected Problems in Chapter 7



..

Compound channel with state information available at the encoder. Suppose the encoder knows the state s before communication commences over a compound channel p(y|x, s). Find the capacity.
Solution: As in the notes, the capacity is given by

C = min_{s∈S} max_{p(x)} I(X_s; Y_s).

(a) Proof of achievability:

Codebook generation: Fix p_{X_s}(x) for each s ∈ S. We generate |S| subcodebooks C_s. For subcodebook C_s, we generate 2^{nR} sequences x_s^n(m), m ∈ [1 : 2^{nR}], each according to Π_{i=1}^n p_{X_s}(x_{si}).

Encoding: Given m, since the encoder knows the state s of the channel, it sends the codeword x_s^n(m) from the subcodebook C_s.

Decoding: The decoder finds the unique m̂ ∈ [1 : 2^{nR}] such that (x_s^n(m̂), y^n) ∈ T_ε^{(n)}(X_s, Y_s) for some s ∈ S.

Analysis of the probability of error: Assume m = 1 is sent. We have the following error events:

E_0 := {(X_s^n(1), Y^n) ∉ T_ε^{(n)}(X_s, Y_s) for all s ∈ S},
E_1 := {(X_s^n(m), Y^n) ∈ T_ε^{(n)}(X_s, Y_s) for some m ≠ 1 and s ∈ S}.

P(E_0) → 0 as n → ∞ by the LLN. For the second term, note that by the packing lemma, P{(X_s^n(m), Y^n) ∈ T_ε^{(n)}(X_s, Y_s) for some m ≠ 1} → 0 as n → ∞ if R < I(X_s; Y_s) − δ(ε). We have

P(E_1) ≤ Σ_{s∈S} P{(X_s^n(m), Y^n) ∈ T_ε^{(n)}(X_s, Y_s) for some m ≠ 1}
       ≤ |S| max_{s∈S} P{(X_s^n(m), Y^n) ∈ T_ε^{(n)}(X_s, Y_s) for some m ≠ 1}.

Therefore, P(E_1) → 0 as n → ∞ if R < min_{s∈S} max_{p(x)} I(X_s; Y_s) − δ(ε).

(b) For the converse, for every s ∈ S we have

nR ≤ Σ_{i=1}^n I(X_{s,i}; Y_{s,i}) + nε_n
   ≤ Σ_{i=1}^n max_{p(x)} I(X_s; Y_s) + nε_n
   = n max_{p(x)} I(X_s; Y_s) + nε_n.

Hence R ≤ min_{s∈S} max_{p(x)} I(X_s; Y_s) + ε_n.
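As a quick numerical illustration of this characterization, the following sketch evaluates min_{s∈S} max_{p(x)} I(X_s; Y_s) by a simple grid search for an assumed two-state compound channel (a BSC and a Z-channel, chosen only for illustration):

```python
import numpy as np

def mutual_information(px, W):
    """I(X;Y) in bits for input pmf px and channel matrix W[x, y] = p(y|x)."""
    py = px @ W
    return sum(px[x] * W[x, y] * np.log2(W[x, y] / py[y])
               for x in range(len(px)) for y in range(W.shape[1])
               if px[x] > 0 and W[x, y] > 0)

# Assumed two-state compound channel: state 0 is a BSC(0.1), state 1 is a
# Z-channel with crossover 0.3.
channels = {
    0: np.array([[0.9, 0.1], [0.1, 0.9]]),
    1: np.array([[1.0, 0.0], [0.3, 0.7]]),
}

# The encoder knows s, so it may use a different input pmf for each state:
# C = min_s max_{p(x)} I(X_s; Y_s), approximated here by a grid over p(x).
grid = np.linspace(0.0, 1.0, 1001)
per_state = {s: max(mutual_information(np.array([1 - p, p]), W) for p in grid)
             for s, W in channels.items()}
C = min(per_state.values())
print(per_state, C)
```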

..


Compound channel with arbitrary state space. Consider the compound channel p(y_s|x) with state space S of infinite cardinality.
(a) Show that there is a finite subset S_n ⊆ S with cardinality at most (n+1)^{2|X||Y|} such that for every s ∈ S, there exists s' ∈ S_n with

T_ε^{(n)}(X, Y_s) = T_ε^{(n)}(X, Y_{s'}).

(Hint: Recall that a typical set is defined by upper and lower bounds on the number of occurrences of each pair (x, y) ∈ X × Y in an n-sequence.)
(b) Use part (a) to prove achievability for Theorem ?? with infinite state space.
Solution:

(a) The total number of (x, y) pairs is |X||Y|, and for each (x, y) pair the empirical frequency π(x, y | x^n, y^n) takes one of the n + 1 values 0, 1/n, ..., (n−1)/n, 1. Since a typical set is specified by an upper and a lower bound on this frequency for each pair, we can find a finite subset S_n ⊆ S with cardinality at most (n+1)^{2|X||Y|} such that for every s ∈ S there exists s' ∈ S_n with T_ε^{(n)}(X, Y_s) = T_ε^{(n)}(X, Y_{s'}).

(b) First, by the LLN,

P(E_1) = P{(x^n(1), y^n) ∉ T_ε^{(n)}(X, Y_s) for all s ∈ S}
       = P{(x^n(1), y^n) ∉ T_ε^{(n)}(X, Y_{s'}) for all s' ∈ S_n} → 0 as n → ∞.

We also have

P(E_2) = P{(x^n(m), y^n) ∈ T_ε^{(n)}(X, Y_s) for some m ≠ 1, s ∈ S}
       = P{(x^n(m), y^n) ∈ T_ε^{(n)}(X, Y_{s'}) for some m ≠ 1, s' ∈ S_n}
       ≤ (n+1)^{2|X||Y|} max_{s'∈S_n} P{(x^n(m), y^n) ∈ T_ε^{(n)}(X, Y_{s'}) for some m ≠ 1},

which tends to zero as n → ∞ if R < inf_{s∈S} I(X; Y_s) − δ(ε).
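To make the finiteness argument concrete, the following sketch (an illustration only, for a binary family of BSC(s) channels with a uniform input) enumerates the distinct integer-threshold tuples that define T_ε^{(n)}(X, Y_s) as s ranges over a fine grid, and compares their number with the bound (n+1)^{2|X||Y|}:

```python
import itertools
import numpy as np

# For the (uncountable) family of BSC(s) channels with a fixed input pmf, the
# typical set T_eps^(n)(X, Y_s) is determined by integer lower/upper bounds on
# the count of each (x, y) pair, so only finitely many distinct typical sets
# can arise, no matter how many states there are.
n, eps = 50, 0.1
px = np.array([0.5, 0.5])

def thresholds(s):
    """Integer count bounds that define T_eps^(n)(X, Y_s) for the BSC(s)."""
    W = np.array([[1 - s, s], [s, 1 - s]])
    bounds = []
    for x, y in itertools.product(range(2), range(2)):
        p = px[x] * W[x, y]
        bounds.append((int(np.ceil(n * (1 - eps) * p)),
                       int(np.floor(n * (1 + eps) * p))))
    return tuple(bounds)

distinct = {thresholds(s) for s in np.linspace(0.0, 0.5, 100_000)}
print(len(distinct), (n + 1) ** (2 * 2 * 2))  # far fewer than (n+1)^(2|X||Y|)
```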


..


No state information. Show that the capacity of the DMC with DM state p(y|x, s)p(s) when no state information is available at either the encoder or the decoder is

C = max_{p(x)} I(X; Y),

where p(y|x) = Σ_s p(s) p(y|x, s). Further show that any (2^{nR}, n) code for the DMC p(y|x) achieves the same average probability of error when used over the DMC with DM state p(y|x, s)p(s), and vice versa.
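A minimal numerical sketch of the stated formula, assuming for illustration a channel that is a BSC whose crossover probability depends on the state; averaging over the state yields another BSC, whose capacity is attained by the uniform input:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Assumed state-dependent channel: a BSC whose crossover depends on the state.
p_state = {0: 0.7, 1: 0.3}        # p(s)
crossover = {0: 0.05, 1: 0.25}    # BSC crossover probability in state s

# The averaged channel p(y|x) = sum_s p(s) p(y|x, s) is again a BSC whose
# crossover is the average crossover probability.
p_bar = sum(p_state[s] * crossover[s] for s in p_state)

# With no state information anywhere, C = max_{p(x)} I(X;Y) for the averaged
# channel; for a BSC this is 1 - H(p_bar), attained by the uniform input.
C = 1 - h2(p_bar)
print(p_bar, C)
```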
..

Strictly causal state information. Consider the DMC with DM state p(y|x, s)p(s). Suppose the state information is available strictly causally at the encoder, that is, the encoder is specified by x_i(m, s^{i−1}), i ∈ [1 : n]. Establish the capacity (a) when the state information is not available at the decoder and (b) when the state information is also available at the decoder.
Solution:


(a) The capacity is C_SSI-E = max_{p(x)} I(X; Y). The proof of achievability follows from that for the DMC without state information. For the proof of the converse, consider

nR ≤ I(M; Y^n) + nε_n
   = Σ_{i=1}^n I(M; Y_i | Y^{i−1}) + nε_n
   ≤ Σ_{i=1}^n I(M, Y^{i−1}, S^{i−1}; Y_i) + nε_n
   = Σ_{i=1}^n I(M, Y^{i−1}, S^{i−1}, X_i; Y_i) + nε_n    (a)
   = Σ_{i=1}^n I(X_i; Y_i) + nε_n                         (b)
   ≤ n max_{p(x)} I(X; Y) + nε_n,

where (a) follows since X_i is a function of (M, S^{i−1}) and (b) follows since (M, Y^{i−1}, S^{i−1}) → X_i → Y_i forms a Markov chain.

(b) The capacity is C_SSI-ED = max_{p(x)} I(X; Y|S). The proof of achievability follows from that for the DMC with state information available only at the decoder. For the proof of the converse, consider

nR ≤ I(M; Y^n | S^n) + nε_n
   = Σ_{i=1}^n I(M; Y_i | S^n, Y^{i−1}) + nε_n
   ≤ Σ_{i=1}^n I(M, S^{i−1}, Y^{i−1}; Y_i | S^n) + nε_n
   = Σ_{i=1}^n I(M, S^{i−1}, Y^{i−1}, X_i; Y_i | S^n) + nε_n    (a)
   = Σ_{i=1}^n [H(Y_i | S^n) − H(Y_i | X_i, S_i)] + nε_n        (b)
   ≤ Σ_{i=1}^n [H(Y_i | S_i) − H(Y_i | X_i, S_i)] + nε_n
   ≤ n max_{p(x)} I(X; Y|S) + nε_n,

where (a) follows since X_i is a function of (M, S^{i−1}) and (b) follows since (M, Y^{i−1}, S^{i−1}, S_{i+1}^n) → (X_i, S_i) → Y_i forms a Markov chain.
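The two answers can be contrasted numerically; the sketch below assumes a two-state binary example (a BSC whose crossover depends on S ~ Bern(1/2)) and uses a grid search over p(x) purely for illustration:

```python
import numpy as np

def mi(px, W):
    """I(X;Y) in bits for input pmf px and channel matrix W[x, y] = p(y|x)."""
    py = px @ W
    return sum(px[x] * W[x, y] * np.log2(W[x, y] / py[y])
               for x in range(len(px)) for y in range(W.shape[1])
               if px[x] > 0 and W[x, y] > 0)

# Assumed example: BSC(0.05) in state 0, BSC(0.4) in state 1, S ~ Bern(1/2).
W = {0: np.array([[0.95, 0.05], [0.05, 0.95]]),
     1: np.array([[0.60, 0.40], [0.40, 0.60]])}
ps = {0: 0.5, 1: 0.5}
grid = np.linspace(0.0, 1.0, 1001)

# Part (a): strictly causal state information at the encoder does not help, so
# C equals max_{p(x)} I(X;Y) for the averaged channel p(y|x) = sum_s p(s) p(y|x,s).
W_bar = ps[0] * W[0] + ps[1] * W[1]
C_a = max(mi(np.array([1 - p, p]), W_bar) for p in grid)

# Part (b): with the state also at the decoder, C = max_{p(x)} I(X;Y|S)
#           = max_{p(x)} sum_s p(s) I(X; Y_s), a single p(x) for all states.
C_b = max(sum(ps[s] * mi(np.array([1 - p, p]), W[s]) for s in ps) for p in grid)
print(C_a, C_b)   # C_b >= C_a
```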


..

DM-MAC with strictly causal state information. Consider the DM-MAC with DM state Y = (X1 ⊕ S, X2), where the inputs X1, X2 are binary and the state S ~ Bern(1/2). Establish the capacity region (a) when the state information is not available at the encoders or the decoder, (b) when the state information is available strictly causally only at encoder 2, and (c) when the state information is available strictly causally at both encoders.
Remark: The above two problems demonstrate that while strictly causal state information does not increase the capacity of point-to-point channels, it can increase the capacity of multiple-user channels. We will see similar examples for channels with feedback in Chapter ??.

Solution:

(a) The capacity region is R1 = 0, R2 ≤ 1. Achievability follows by letting p_{X2}(0) = p_{X2}(1) = 1/2 and p_{X1}(0) = 1, p_{X1}(1) = 0. For the converse, since I(X1; Y1) = H(Y1) − H(Y1|X1) = H(X1 ⊕ S) − H(S) = 1 − 1 = 0 for every p(x1), we have R1 = 0. For R2, the component Y2 = X2 is a noiseless binary channel, so R2 ≤ 1.

(b) If the state information is available strictly causally only at encoder 2, then by forwarding the state to the decoder with one-symbol delay over the noiseless channel Y2 = X2 and letting X1 ~ Bern(1/2) carry fresh message bits, the decoder can recover X_{1,i} = Y_{1,i} ⊕ S_i for every i ≤ n − 1. Hence (R1, R2) = (1, 0) is achievable (a small simulation of this scheme is sketched below).
As in part (a), (R1, R2) = (0, 1) is achievable. Thus, by time sharing, every rate pair with R1 ≤ 1, R2 ≤ 1, and R1 + R2 ≤ 1 is achievable.
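A small simulation of this forwarding scheme (a sketch only, with S_0 taken to be 0 by convention):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
s = rng.integers(0, 2, n)           # state S_i ~ Bern(1/2), i.i.d.
x1 = rng.integers(0, 2, n)          # fresh message bits of sender 1
x2 = np.concatenate(([0], s[:-1]))  # X_{2,i} = S_{i-1} (S_0 := 0): strictly causal

y1 = x1 ^ s                         # Y_{1,i} = X_{1,i} xor S_i
y2 = x2                             # Y_{2,i} = X_{2,i} (noiseless component)

# Decoder: read S_i off Y_{2,i+1}, then undo it on Y_{1,i}.
s_hat = y2[1:]                      # = S_1, ..., S_{n-1}
x1_hat = y1[:-1] ^ s_hat
print(bool(np.array_equal(x1_hat, x1[:-1])))  # True: the first n-1 bits are recovered
```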
For the converse, we have

n(R1 + R2) ≤ I(M1, M2; Y^n) + nε_n
           ≤ Σ_{i=1}^n I(M1, M2, S^{i−1}; Y_i | Y^{i−1}) + nε_n
           = Σ_{i=1}^n [I(M2, S^{i−1}; Y_i | Y^{i−1}) + I(M1; Y_i | Y^{i−1}, M2, S^{i−1})] + nε_n
           ≤ n + Σ_{i=1}^n I(X_{1,i}; Y_{1,i} | Y^{i−1}, M2, S^{i−1}) + nε_n
           = n + nε_n.

The first term in the sum is at most 1 because Y_{2,i} is binary and Y_{1,i} = X_{1,i} ⊕ S_i is Bern(1/2), independent of (M2, S^{i−1}, Y^{i−1}, Y_{2,i}); the second term vanishes because, given (M2, S^{i−1}), the output Y_{2,i} = X_{2,i} is determined, and Y_{1,i} is independent of (X_{1,i}, M2, S^{i−1}, Y^{i−1}). Hence R1 + R2 ≤ 1. Also, R1 ≤ H(X1) ≤ 1 and R2 ≤ H(X2) ≤ 1.

(c) Nothing changes in the achievability and converse proofs of part (b). Hence the capacity region is again R1 ≤ 1, R2 ≤ 1, and R1 + R2 ≤ 1.
..

Value of state information. Consider the DMC with DM state p(y|x, s)p(s). Quantify how much state information can help by proving the following statements:
(a) C_SI-D − C ≤ max_{p(x)} H(S|Y).
(b) C_SI-ED − C_SI-E ≤ max_{p(x|s)} H(S|Y).
Thus, the state information at the decoder is worth at most H(S) bits. Show that the state information at the encoder can be much more valuable by providing an example where

C_SI-E − C > H(S).

Solution:
(a) Since

C + max_{p(x)} H(S|Y) = max_{p(x)} I(X; Y) + max_{p(x)} H(S|Y)
                      ≥ max_{p(x)} {I(X; Y) + H(S|Y)}
                      = max_{p(x)} {H(Y, S) − H(Y|X)}
                      ≥ max_{p(x)} {H(Y|S) − H(Y|X, S)}
                      = max_{p(x)} I(X; Y|S)
                      = C_SI-D,

where the second inequality follows since H(Y, S) = H(S) + H(Y|S) and I(S; Y|X) ≤ H(S), we have C_SI-D − C ≤ max_{p(x)} H(S|Y).

(b) Since

C_SI-E + max_{p(x|s)} H(S|Y) = max_{p(u,x|s)} {I(U; Y) − I(U; S)} + max_{p(x|s)} H(S|Y)
                             ≥ max_{p(x|s)} {I(X; Y) − I(X; S) + H(S|Y)}
                             = max_{p(x|s)} {H(X|S) − H(X|Y) + H(S|Y)}
                             ≥ max_{p(x|s)} {H(X|S) − H(X|Y, S)}
                             = max_{p(x|s)} I(X; Y|S)
                             = C_SI-ED,

where the first inequality follows by taking U = X and the second since H(X|Y) ≤ H(S|Y) + H(X|Y, S), we have C_SI-ED − C_SI-E ≤ max_{p(x|s)} H(S|Y).

(c) Let p(s = 0) = p(s = 1) = 1/2 and X = Y = {1, 2, ..., 2N}. When s = 0, the transition probabilities are P(Y = i|X = i) = 1 for i ∈ [1 : N] and P(Y = j|X = i) = 1/N for i, j ∈ [N+1 : 2N]; when s = 1, symmetrically, P(Y = i|X = i) = 1 for i ∈ [N+1 : 2N] and P(Y = j|X = i) = 1/N for i, j ∈ [1 : N]. With the state known at the encoder, using the noiseless half of the input alphabet for each state gives C_SI-E ≥ log N. Without any state information, the averaged channel is symmetric, so the uniform input is optimal and

C = max_{p(x)} {H(Y) − H(Y|X)} = log 2N − ((N+1)/(2N)) log(2N/(N+1)) − ((N−1)/(2N)) log 2N = ((N+1)/(2N)) log(N+1).

Thus C_SI-E − C ≥ log N − ((N+1)/(2N)) log(N+1), and for N large enough we have C_SI-E − C > 1 = H(S).
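A numerical check of this example (a sketch, using 0-based symbol labels and the assumed value N = 16): it builds the averaged channel, evaluates I(X; Y) for the uniform input, and compares the gap C_SI-E − C with H(S) = 1 bit:

```python
import numpy as np

def mi(px, W):
    """I(X;Y) in bits for input pmf px and channel matrix W[x, y] = p(y|x)."""
    py = px @ W
    return sum(px[x] * W[x, y] * np.log2(W[x, y] / py[y])
               for x in range(len(px)) for y in range(W.shape[1])
               if px[x] > 0 and W[x, y] > 0)

N = 16   # assumed value, chosen for illustration
# Channel of the example (0-based labels): under s = 0, inputs 0..N-1 are
# noiseless and inputs N..2N-1 are mapped uniformly into N..2N-1; under s = 1
# the two halves swap roles.
W0 = np.zeros((2 * N, 2 * N))
W1 = np.zeros((2 * N, 2 * N))
for i in range(N):
    W0[i, i] = 1.0
    W0[N + i, N:] = 1.0 / N
    W1[N + i, N + i] = 1.0
    W1[i, :N] = 1.0 / N
W_bar = 0.5 * (W0 + W1)    # averaged channel seen with no state information

uniform = np.full(2 * N, 1.0 / (2 * N))
C_numeric = mi(uniform, W_bar)                   # uniform input is optimal by symmetry
C_closed = (N + 1) / (2 * N) * np.log2(N + 1)    # closed form derived above
C_SIE = np.log2(N)                               # achievable with the state at the encoder
print(C_numeric, C_closed, C_SIE - C_closed)     # the gap exceeds H(S) = 1 bit
```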
..

Establish the capacity regions for the DM-MAC with DM state p(y|x1, x2, s)p(s), where S = (S1, S2): (a) when the state information is available only at the decoder, and (b) when the state information is available at the decoder and state S_j is available at encoder j for j = 1, 2.

..

Establish the capacity regions for the degraded DM-BC with DM state p(y1|x, s)p(y2|y1)p(s): (a) when the state information is causally available only at the encoder, (b) when the state information is causally available at the encoder and decoder, and (c) when the state information is noncausally available at the encoder and decoder.

..


Common-message broadcasting with state information. Consider the DM-BC with DM state p(y1, y2|x, s)p(s). Establish the common-message capacity for the following settings:
(a) The state information is available only at decoder 1.
(b) The state information is causally available at both the encoder and decoder 1.
(c) The state information is noncausally available at both the encoder and decoder 1.
(d) The state information is causally available only at the encoder.

..

Memory with defects and noise. Consider the model of memory with defects as a DMC with DM state. Assume that the memory now has temporal noise in addition to stuck-at faults, such that for state s = 2 the memory is modeled by the BSC(p) for p ∈ [0, 1/2]. Find the capacity of this channel: (a) when the state information is causally available only at the encoder, and (b) when the state information is noncausally available only at the encoder.
Solution:

(a) Let P(S = 0) = P(S = 1) = q/2, so that P(S = 2) = 1 − q. With causal state information at the encoder (Shannon strategies),

C = max_{p(u), x=f(u,s)} I(U; Y) ≤ 1 − min_{p(u), x=f(u,s)} H(Y|U) = 1 − H_2(q/2 + (1 − q)(1 − p)),

where the last equality follows since

P(Y = 0|U = u) = q/2 + (1 − q)(1 − p)   if f(u, 2) = 0,
P(Y = 0|U = u) = q/2 + (1 − q)p         if f(u, 2) = 1,

and these two values give the same binary entropy. By setting X = U ~ Bern(1/2), we achieve C = 1 − H_2(q/2 + (1 − q)(1 − p)).
(b) For the noncausal case, choose the Gelfand–Pinsker auxiliary as follows: if S = 2, let X = U ~ Bern(1/2); if S = 0, set X = 0 and U = X ⊕ Z; if S = 1, set X = 1 and U = X ⊕ Z, where Z ~ Bern(p) is independent of everything else. In every state P{Y ≠ U} = p, and the marginals satisfy P(Y = 0) = P(U = 0) = 1/2, so the pair (U, Y) is distributed as the input and output of a BSC(p) with a uniform input. Hence

H(U|S) = 1 − q + qH_2(p),
H(U|Y) = H_2(p),

and thus I(U; Y) − I(U; S) = H(U|S) − H(U|Y) = (1 − q)(1 − H_2(p)) is achievable. Since the capacity when both the encoder and the decoder know the state is (1 − q)(1 − H_2(p)), this also completes the converse.
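As a quick sanity check, with assumed values q = 0.2 and p = 0.1 chosen only for illustration, the two capacities compare as follows:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

q, p = 0.2, 0.1   # assumed defect probability q and BSC crossover p

C_causal = 1 - h2(q / 2 + (1 - q) * (1 - p))    # part (a)
C_noncausal = (1 - q) * (1 - h2(p))             # part (b), Gelfand-Pinsker coding
print(C_causal, C_noncausal)                    # noncausal knowledge of the defects helps
```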
..

Channels with state and input cost. Consider the DMC with DM state p(y|x, s)p(s), and let b(x) ≥ 0 be an input cost function with b(x_0) = 0 for some x_0 ∈ X. Assume that the state information is noncausally available at the encoder and there is a cost constraint on every codeword,

Σ_{i=1}^n E[b(x_i(m, S^n))] ≤ nB   for m ∈ [1 : 2^{nR}].

Establish the capacity by proving achievability and the converse.
..

MMSE estimation via writing on dirty paper. Consider the additive noise channel with output (observation)

Y = X + S + Z,

where X is the transmitted signal and has zero mean and variance P, S is the state and has zero mean and variance Q, and Z is the noise and has zero mean and variance N. Assume that X, S, and Z are uncorrelated. The sender knows S and wishes to transmit a signal U, but instead it transmits X such that U = X + αS for some constant α.
(a) Find the mean squared error (MSE) of the linear MMSE estimate of U given Y in terms only of α, P, Q, and N.
(b) Find the value of α that minimizes the MSE in part (a).
(c) How does the minimum MSE you obtained in (b) compare to the MSE of the linear MMSE estimate when there is no state at all, i.e., S = 0? Interpret the result.
Solution:

(a) Since Cov(U, Y) = P + αQ, Var(Y) = P + Q + N, and Var(U) = P + α²Q, the MSE of the linear MMSE estimate of U given Y is

MSE = Var(U) − Cov²(U, Y)/Var(Y) = P + α²Q − (P + αQ)²/(P + Q + N).

(b) Setting the derivative with respect to α to zero,

d(MSE)/dα = 2αQ − 2Q(P + αQ)/(P + Q + N) = 0,

gives α(P + N) = P, i.e., α = P/(P + N); substituting back yields MSE = PN/(P + N).

(c) If S = 0, then Y = X + Z and U = X. Hence Cov(U, Y) = P, Var(Y) = P + N, and Var(U) = P, so

MSE = Var(U) − Cov²(U, Y)/Var(Y) = P − P²/(P + N) = PN/(P + N).

By choosing α as in part (b), we achieve the same MSE as when there is no state at all (S = 0). In other words, writing on dirty paper yields the same estimation error as writing on clean paper.
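A Monte Carlo sketch of parts (b) and (c), with assumed values P = 4, Q = 9, N = 1: the empirical MSE of the linear MMSE estimate of U from Y matches PN/(P + N):

```python
import numpy as np

rng = np.random.default_rng(1)
P, Q, N = 4.0, 9.0, 1.0        # assumed variances, chosen for illustration
alpha = P / (P + N)            # the minimizing value from part (b)

n = 1_000_000
X = rng.normal(0.0, np.sqrt(P), n)
S = rng.normal(0.0, np.sqrt(Q), n)
Z = rng.normal(0.0, np.sqrt(N), n)
U = X + alpha * S
Y = X + S + Z

# Linear MMSE estimate of U from Y (zero means) and its empirical MSE.
a = np.cov(U, Y)[0, 1] / np.var(Y)
mse_empirical = np.mean((U - a * Y) ** 2)
print(mse_empirical, P * N / (P + N))  # both are close to PN/(P+N)
```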
..

Cognitive radio. Consider the AWGN-IC

Y1 = g11 X1 + g21 X2 + Z1,
Y2 = g22 X2 + Z2,

where g11, g21, g22 are channel gains, and Z1 and Z2 are N(0, 1) additive noise. Sender 1 encodes two independent messages (M1, M2) while sender 2 encodes only M2. Receiver 1 needs to decode M1 and receiver 2 needs to decode M2. Assume an average transmit power constraint P on both senders. Find the capacity region.
Remark: This is a simple example of the cognitive radio channel models studied, for example, in ???.
Solution: The capacity region is

R1 ≤ C(g11² P),
R2 ≤ C(g22² P).

Achievability: We show that the corner point (C(g11² P), C(g22² P)) is achievable. The maximum rate for M2 is attained with X2 ~ N(0, P), and since Y2 is interference free, R2 = C(g22² P) is achievable. Since sender 1 knows both M1 and M2, it knows the codeword transmitted by sender 2 noncausally, so it can treat X2^n as noncausal state information for the transmission of M1 to receiver 1. Using dirty paper coding at sender 1 with U1 = X1 + α g21 X2, X1 ~ N(0, P) independent of X2 ~ N(0, P), and α = g11 P/(g11² P + 1), the rate R1 = C(g11² P) is achievable, as if the interference g21 X2 were not present. Thus (C(g11² P), C(g22² P)) is achievable.
Converse: From the simple outer bound, R1 ≤ I(X1; Y1|X2 = x2) = C(g11² P) and R2 ≤ I(X2; Y2|X1 = x1) = C(g22² P).
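A small numerical sketch with assumed gains and power (g11 = 1, g21 = 0.8, g22 = 1, P = 10), evaluating the dirty-paper coefficient and the corner point of the region:

```python
import numpy as np

def C(snr):
    """Gaussian capacity function C(x) = (1/2) log2(1 + x)."""
    return 0.5 * np.log2(1 + snr)

g11, g21, g22, P = 1.0, 0.8, 1.0, 10.0   # assumed gains and power
alpha = g11 * P / (g11**2 * P + 1)       # DPC coefficient for U1 = X1 + alpha * g21 * X2

R1 = C(g11**2 * P)   # sender 1 cancels the known interference g21 * X2 by DPC
R2 = C(g22**2 * P)   # interference-free link to receiver 2
print(alpha, R1, R2)
```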

..

Noisy state information. Consider the DMC with DM state p(y|x, s)p(s), where the state has three components S = (T0, T1, T2) ~ p(t0, t1, t2). Suppose T1 is available at the encoder, T2 is available at the decoder, and T0 is hidden from both.
(a) Suppose T1 is a function of T2. Show that the capacity (for both the causal and noncausal cases) is

C_NSI = max_{p(x|t1)} I(X; Y|T2).

(b) Show that any (2^{nR}, n) code (in either the causal or noncausal case) for this channel achieves the same probability of error when used over the DMC with state p(ỹ|x, t1)p(t1), where Ỹ = (Y, T2) and

p(y, t2|x, t1) = Σ_{t0} p(y|x, t0, t1, t2) p(t0, t2|t1),

and vice versa.
