
Course No.: EEM822    Title: DIGITAL COMMUNICATIONS

Class: B.Sc. Engg., Status of Course: Major Course, Approved since session: 2011-12
Total Credits: 3, Total pds. (50 mts each)/week: 4 (L:3 + T:1 + P:0 + S:0)
Min. pds./sem.: 52

UNIT 1: FUNDAMENTAL LIMITS ON PERFORMANCE

Uncertainty, Information & Entropy, Source coding theorem, Huffman coding, Discrete
memoryless channels, Mutual information, Channel capacity, Channel coding
theorem, Differential entropy & Mutual information in continuous ensembles, Channel
capacity theorem.

UNIT 2: DETECTION AND ESTIMATION

Model of a Digital communication system, Gram-Schmidt orthogonalization, Geometric
interpretation of signals, Response of a bank of correlators to noisy input, Detection of
known signals in noise, Probability of error, Correlation receiver, Matched filter
receiver, Estimation - concept & criteria, Maximum likelihood estimation.

UNIT 3: DIGITAL MODULATION TECHNIQUES

Digital Modulation Formats, Coherent Binary Modulation Techniques, Coherent
Quadrature Modulation, Non-coherent Binary Modulation Techniques, M-ary Modulation
Techniques, Power spectra, Bandwidth Efficiency, Effect of Inter-Symbol Interference,
Bit vs. Symbol Error Probabilities, Synchronization, Applications.

UNIT 4: ERROR-CONTROL CODING

Rationale for coding, Types of codes, Discrete Memoryless channels, Linear Block
codes, Cyclic codes, Convolutional codes and their decoding methods, Trellis codes,
Applications

UNIT 5: SPREAD-SPECTRUM MODULATION

Pseudo-noise sequences, Notion of Spread Spectrum, Direct-Sequence Spread Coherent
BPSK, Signal-space Dimensionality and Processing Gain, Probability of Error,
Frequency-Hop Spread Spectrum, Applications.

SUGGESTED READING:

Simon Haykin: DIGITAL COMMUNICATIONS, Wiley.
B.P. Lathi: MODERN DIGITAL & ANALOG COMMUNICATION SYSTEMS, Oxford.
Proakis & Salehi: COMMUNICATION SYSTEMS ENGINEERING, PHI.

QUESTION BANK
UNIT 1: FUNDAMENTAL LIMITS ON PERFORMANCE

1. Explain what is meant by the terms – uncertainty, information, entropy and discrete memoryless source
in the context of information theory. How are these related to each other? Discuss the properties of
entropy. Derive an expression for the entropy of a discrete memoryless source.

2. An information source can be modeled as a band-limited process with a bandwidth of 6000 Hz. The
process is sampled at a rate higher than the Nyquist rate to provide a guard band of 2000 Hz. It is
observed that the resulting samples take values in the set A = {-4, -3, -1, 2, 4, 7} with probabilities 0.2,
0.1, 0.15, 0.05, 0.3 & 0.2 respectively. What is the entropy of the source in bits/sample and in bits/s?
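
A quick numerical check of this problem, assuming the intended sampling rate is the Nyquist rate (2 x 6000 Hz) plus the 2000 Hz guard band, i.e. 14 000 samples/s:

```python
import math

# Assumed sampling rate: Nyquist rate (2 x 6000 Hz) plus the 2000 Hz guard band.
fs = 2 * 6000 + 2000                        # 14000 samples/s

p = [0.2, 0.1, 0.15, 0.05, 0.3, 0.2]        # symbol probabilities
H = -sum(pi * math.log2(pi) for pi in p)    # entropy, bits/sample
rate = H * fs                               # information rate, bits/s

print(round(H, 4))      # ≈ 2.4087 bits/sample
print(round(rate, 1))   # ≈ 33721.7 bits/s
```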

3. A random variable X is distributed on the set of all positive integers 1, 2, 3, …, with corresponding
probabilities p1, p2, p3, … and having a mean m. Show that among all random variables which satisfy
the above conditions, the geometric random variable defined by p_i = (1/m)(1 - 1/m)^(i-1), i = 1, 2, 3, …,
has the highest entropy.

4. State and explain source coding theorem. Discuss the significance of this theorem.

5. What are prefix codes? Explain with the help of suitable examples. What are the benefits of a prefix
code as compared to other codes? Derive an expression for the average code-word length L of a prefix
code of a DMS in terms of its entropy.

6. Explain using suitable examples how prefix codes may be decoded with the help of decision trees.

7. Given below are four different codes used to code the alphabet A of an information source where A =
{a1, a2, a3, a4, a5}. Identify which of the codes are (i) uniquely decodable, (ii) instantaneous & (iii) prefix
codes. Give reasons for your answers. Also, determine the average code-word length of all the uniquely
decodable codes and identify the code having the least length.

Letter   Probability    Code 1   Code 2   Code 3   Code 4
a1       p1 = 0.5       1        1        0        00
a2       p2 = 0.25      01       10       10       01
a3       p3 = 0.125     001      100      110      10
a4       p4 = 0.0625    0001     1000     1110     11
a5       p5 = 0.0625    00001    10000    1111     110
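
The classification asked for in this question can also be checked mechanically. A sketch (instantaneous codes are exactly the prefix codes, and a Kraft sum greater than 1 rules out unique decodability):

```python
# Check the Kraft inequality and the prefix condition for the four codes of Q7.
codes = {
    "Code 1": ["1", "01", "001", "0001", "00001"],
    "Code 2": ["1", "10", "100", "1000", "10000"],
    "Code 3": ["0", "10", "110", "1110", "1111"],
    "Code 4": ["00", "01", "10", "11", "110"],
}

def kraft_sum(words):
    """Kraft sum: at most 1 for any uniquely decodable binary code."""
    return sum(2 ** -len(w) for w in words)

def is_prefix_code(words):
    # Instantaneous iff no codeword is a prefix of another.
    return not any(a != b and b.startswith(a) for a in words for b in words)

for name, words in codes.items():
    print(name, round(kraft_sum(words), 3), is_prefix_code(words))
```

Code 2 fails the prefix test yet is still uniquely decodable (it is a suffix code), which is why the prefix check alone does not settle part (i).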

8. What is Huffman code? What are its advantages? Explain the algorithm used to synthesize a Huffman
code with the help of suitable examples.
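
For the dyadic source of Q7, the construction can be sketched in a few lines (the tie-breaking order, and hence the exact codewords, may differ from a hand construction; the average length is what matters):

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code; probs maps symbol -> probability."""
    # Each heap entry: (probability, tie-breaker, partial codeword dict).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)      # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.0625, "a5": 0.0625}
code = huffman_code(probs)
L = sum(probs[s] * len(code[s]) for s in probs)   # average codeword length
print(code)
print(L)   # 1.875 bits/symbol, equal to the entropy (dyadic probabilities)
```
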
9. The sample function of a Gaussian process of zero-mean and unit variance is uniformly sampled and
then applied to a uniform quantizer having the input-output amplitude characteristic shown in the figure
below. Calculate the entropy of the quantizer output. Design a Huffman code for the quantizer output
and determine the corresponding average code-word length and its variance over the source alphabet.
Design a Huffman code for the second extension of the quantizer output and determine its average
code-word length. Which of the two coding schemes is more efficient and why?

[Fig. P-9: input-output amplitude characteristic of the uniform quantizer.]

10. What are discrete memoryless channels? Explain the following terms in the context of discrete
memoryless channels – channel matrix, mutual information, channel capacity and probability of error.
Derive expressions for the mutual information and the capacity of a DMC.

11. State and prove the properties of mutual information.

12. (a) Find the capacity of the channels whose transition probability diagrams are as below:
[Fig. P-12(a): three-symbol channel (inputs a, b, c) with transition probabilities 0.5, 0.3 and 0.2.
Fig. P-12(b): three-symbol channel with crossover probabilities α and 1-α.]

(b) Find the capacity of an AWGN channel with a bandwidth of 1 MHz, power of 10 W and a noise power
spectral density N0/2 = 10^-9 W/Hz.
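
Part (b) is a direct application of the Shannon-Hartley law C = B log2(1 + P/(N0 B)); a quick sketch of the arithmetic:

```python
import math

# Shannon-Hartley capacity for problem 12(b): C = B * log2(1 + P / (N0 * B)).
B = 1e6          # bandwidth, Hz
P = 10.0         # signal power, W
N0 = 2e-9        # one-sided noise psd, since N0/2 = 10^-9 W/Hz

snr = P / (N0 * B)             # noise power N0*B = 2 mW, so SNR = 5000
C = B * math.log2(1 + snr)     # capacity, bits/s
print(round(C / 1e6, 2))       # → 12.29 (Mbit/s)
```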

13. Two BSCs having the same channel transition probability diagram shown in Fig.P-13(b) are connected in
cascade as shown in Fig.P-13(a). Find the overall channel capacity of the cascaded connection.
[Fig. P-13(a): two identical BSCs connected in cascade between input and output.
Fig. P-13(b): BSC transition diagram with crossover probability p.]

14. State and explain channel coding theorem. Discuss the significance of this theorem in the context of
BSC.

15. Show that the entropy of a Gaussian random variable is uniquely determined by its variance and is
independent of its mean. Also show that for a given variance, the Gaussian random variable has the
largest differential entropy attainable by any random variable.
16. State and explain channel capacity theorem for band-limited, power-limited Gaussian channels. Discuss
the significance of this theorem.

17. A digital communication system uses a repetition code for channel encoding/decoding where each
transmission is repeated five times. The decoder operates as follows – if the number of 0s exceeds the
number of 1s, the decoder decides in favor of 0, otherwise in favor of 1. If three or more transmissions
out of five are incorrect, then an error occurs. Assuming the channel to be binary-symmetric, show that
the average probability of error is given by Pe = 10p^3(1-p)^2 + 5p^4(1-p) + p^5.

18. Show that among all continuous random variables distributed on the positive real line and having a
mean m, the exponential random variable defined by f_X(x) = (1/m) e^(-x/m), x ≥ 0, has the highest
differential entropy.

19. Alphanumeric data are entered into a computer from a remote terminal through a voice-grade
telephone channel. The channel has a bandwidth of 3.4 kHz and an output SNR of 20 dB. The terminal has
a total of 128 symbols. Assuming that the symbols are equi-probable and successive transmissions are
statistically independent, calculate the channel capacity and the maximum symbol rate for which error-
free transmission over the channel is possible.

UNIT 2: DETECTION & ESTIMATION

20. Explain the Gram-Schmidt orthogonalization procedure used to represent a set of energy signals as
linear combinations of orthonormal basis functions.
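
The procedure can be sketched numerically for sampled signals (the two rectangular pulses below are an illustrative choice, not taken from any question):

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormalize sampled energy signals (rows) with respect to the
    inner product <x, y> = integral of x(t) y(t) dt, approximated as dt * sum."""
    basis = []
    for s in signals:
        r = s.astype(float).copy()
        for phi in basis:
            r -= dt * np.dot(r, phi) * phi    # subtract projection onto phi
        energy = dt * np.dot(r, r)
        if energy > 1e-12:                    # keep only the independent part
            basis.append(r / np.sqrt(energy))
    return np.array(basis)

# Toy example: rectangular pulses on [0, 1) and [0, 2), sampled with dt = 0.01.
dt = 0.01
t = np.arange(0, 2, dt)
s1 = (t < 1).astype(float)
s2 = np.ones_like(t)

basis = gram_schmidt(np.array([s1, s2]), dt)
gram = dt * basis @ basis.T       # Gram matrix of the basis functions
print(np.round(gram, 6))          # approximately the 2x2 identity matrix
```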

21. Show that the response of a bank of correlators having one input as the signal received from a noisy
channel and the second input being the corresponding orthonormal basis function consists of a set of
random variables having variance equal to the variance of noise and covariance equal to zero. Also,
determine an expression for the likelihood functions of an AWGN channel.

22. What is meant by detection of signals? Explain maximum likelihood detection with the help of relevant
expressions. Derive an expression for the average probability of error for this type of detection.

23. What is a matched filter? Derive an expression for the impulse response of a matched filter which
maximizes the output SNR.
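
The defining property — h(t) = s(T - t), so that the output sampled at t = T equals the pulse energy — can be illustrated numerically (the sine pulse below is an arbitrary example):

```python
import numpy as np

# Matched filter sketch: for a known pulse s(t) on [0, T], the impulse
# response h(t) = s(T - t) makes the filter output peak at t = T with
# value equal to the pulse energy (the SNR-maximizing sampling instant).
dt = 1e-3
t = np.arange(0, 1, dt)            # observation interval, T = 1 s
s = np.sin(2 * np.pi * 5 * t)      # example pulse (illustrative choice)
h = s[::-1]                        # matched filter: time-reversed pulse

y = np.convolve(s, h) * dt         # filter output (full convolution)
E = np.sum(s ** 2) * dt            # pulse energy
peak_index = int(np.argmax(y))     # index of the output peak

print(round(y[peak_index], 4), round(E, 4))   # peak output equals energy
```

The peak lands at sample index len(s) - 1, i.e. at t = T, as the Schwarz-inequality derivation predicts.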

24. Describe the structure of a correlation receiver and show how this detector can be replaced by a
matched-filter detector.

25. Discuss in detail the properties of a matched filter and their significance.

26. Explain the concept of estimation. Discuss various criteria used for estimating parameters of a noise-
corrupted signal.

27. Discuss maximum likelihood estimation and show how the phase of a received signal may be estimated
using this technique.

28. Discuss linear estimation and the Wiener filter for minimum mean square estimation error.
29. Express the following set of signals as linear combinations of a set of orthonormal basis functions using
Gram-Schmidt orthogonalization procedure.
[Fig. P-29: waveforms of s1(t), s2(t), s3(t) and s4(t) — unit-amplitude stepped pulses defined over
intervals within 0 ≤ t ≤ 3.]

30. A set of four signals represented by s1(t) = A, 0 < t < T/2; s2(t) = A, T/2 < t < T; s3(t) = -A, 0 < t < T;
& s4(t) = A, T/2 < t < T (each 0 elsewhere) is used to transmit information over an AWGN channel having
zero-mean noise with psd N0/2. Determine the basis functions for this signal set, the impulse responses
of the matched-filter demodulators and the output waveforms of the matched-filter demodulators when the
transmitted signal is s1(t).

31. A pair of polar signals s1(t) = √Eb & s2(t) = -√Eb for 0<t<T is used to transmit information over an AWGN
channel with noise psd = No/2 and zero mean. If the probabilities of s1(t) & s2(t) are 0.7 and 0.3
respectively, determine the decision rule to be used based upon the maximum a posteriori probability
(MAP) criterion. If maximum likelihood detection were to be used, what would the corresponding rule
be? Determine the average probability of error in both cases.

32. A matched filter has the frequency response H(f) = (1 - e^(-j2πfT)) / (j2πf). Determine the impulse response of
the filter and the corresponding signal to which it is matched.

33. In an AWGN channel with noise psd of N0/2, two equiprobable messages are transmitted by s1(t) = At/T
& s2(t)=A(1-t/T) for 0<t<T. Determine the structure of the optimum receiver and the probability of error.

34. Find the maximum likelihood estimate of a in the presence of white Gaussian noise of zero mean and
psd No/2 of a signal s(t,a) = as(t), 0<t<T & 0 elsewhere, where s(t) is completely known and a is unknown.
What are the mean and the variance of this estimate?

35. The sample dn of a stationary discrete-time process is described by the first-order difference equation
dn = a1·dn-1 + w1,n, where a1 is a constant and w1,n is a sample of a discrete-time white noise process
{W1,n} of zero mean and variance σ1^2. The sample dn is transmitted through a noisy channel, yielding the
received signal xn = dn + w2,n, where w2,n is a sample of another discrete-time white noise process of
zero mean and variance σ2^2, which is independent of {W1,n}. The sequence {xn} is applied to a two-
coefficient Wiener filter designed to produce an estimate of {dn}. Find the coefficients h1,opt and h2,opt of
the filter in terms of a1, σ1^2 & σ2^2. Also, find the mean squared error produced by the filter.

UNIT 3: DIGITAL MODULATION TECHNIQUES

36. Describe various types of binary modulation techniques. Draw the corresponding signal waveforms as
well as their signal constellations. Discuss their detection methods.

37. Derive expressions for the average probability of error in the presence of additive white Gaussian noise
in the coherent detection of binary PSK and FSK signals and compare the two.
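
The coherent BPSK result Pe = Q(sqrt(2 Eb/N0)) can be checked by simulation; a Monte Carlo sketch with Eb normalized to 1:

```python
import math
import random

# Monte Carlo check of the coherent BPSK error rate Pe = Q(sqrt(2 Eb/N0)).
def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(1)
EbN0 = 4.0                          # Eb/N0 as a plain ratio (= 6 dB)
n = 200_000                         # number of simulated bits
sigma = math.sqrt(1 / (2 * EbN0))   # noise std dev with Eb normalized to 1

errors = 0
for _ in range(n):
    bit = random.choice((0, 1))
    x = 1.0 if bit else -1.0        # BPSK: +sqrt(Eb) / -sqrt(Eb)
    r = x + random.gauss(0, sigma)  # AWGN channel
    if (r > 0) != (bit == 1):       # threshold detector at zero
        errors += 1

pe_sim = errors / n
pe_theory = qfunc(math.sqrt(2 * EbN0))
print(pe_sim, round(pe_theory, 6))  # the two agree to within Monte Carlo noise
```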

38. What is meant by coherent quadrature modulation? When and where is it preferred? Describe the QPSK
and MSK techniques.
39. Explain how QPSK modulation and demodulation are performed with the help of neat block diagrams.
Also derive an expression for the average probability of symbol error in a QPSK system.

40. Describe MSK modulation and demodulation with the help of appropriate block diagrams. Compare MSK
with QPSK with respect to signal expression, signal constellation and probability of error.

41. What is noncoherent orthogonal modulation? Describe the noncoherent binary FSK and DPSK modulation
techniques. How are they detected? Explain.

42. Describe various types of M-ary modulation schemes, their generation and detection methods. Compare
them.

43. Determine expressions for the psd of various binary and M-ary modulation techniques and plot them.

44. Define bandwidth efficiency and explain its significance. Determine the bandwidth efficiency for
different types of M-ary modulation techniques in terms of M.

45. Binary data is transmitted over a microwave link at the rate of 10^6 bits/second and the power spectral
density of noise at the receiver input is 10^-10 W/Hz. Find the average carrier power required to
maintain an average probability of error Pe < 10^-4 for coherent FSK. What is the required channel
bandwidth? Repeat for coherent MSK, noncoherent binary FSK and DPSK.

46. An FSK system transmits binary data at the rate of 2.5 x 10^6 bits/second. During the course of
transmission, white Gaussian noise of zero mean and power spectral density 10^-20 W/Hz is added to
the signal. In the absence of noise, the amplitude of the received sinusoidal wave for digit 1 or 0 is
1 microvolt. Determine the average probability of symbol error for coherent detection.

47. Offset QPSK is a special form of quadriphase-shift keying in which the in-phase data stream is delayed
relative to the quadrature data stream by half a symbol period. Determine the average probability of
symbol error for the offset QPSK system. Also, determine the psd of an offset QPSK signal produced by a
random binary sequence in which symbols 1 and 0 (represented by ±1 V) are equally likely, and the
symbols in different time slots are statistically independent and identically distributed.

48. An M-ary signaling scheme uses a fixed signaling rate that is independent of the number of symbols M.
Show that 10 log10(E/N0) = 10 log10(Eb/N0) + 10 log10(log2 M), where Eb is the signal energy per bit for
M = 2, E is the signal energy per symbol for any M and N0/2 is the noise psd.

49. In a coherent binary FSK system, symbols 0 & 1 are transmitted with equal probability. The system
parameters are as follows: Average transmitted power = 1 W, Noise psd = 10^-5 W/Hz & Bit rate = 10^4 bps.
Viewing the system as a BSC, calculate the channel capacity.
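
Once the crossover probability p of the equivalent BSC has been found from the FSK error-rate calculation, the capacity follows from C = 1 - H(p); a sketch (the p = 0.1 below is only an illustrative value, not the answer to this question):

```python
import math

# Capacity of a binary symmetric channel: C = 1 - H(p).
def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1.0 - h2(p)

# Illustrative crossover probability (hypothetical value, not derived
# from the FSK parameters of the question):
print(round(bsc_capacity(0.1), 4))   # → 0.531
```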

50. An M-ary DPSK system operating over an AWGN channel has the following values of Eb/N0 for varying
values of M for a required probability of symbol error Pe = 2 x 10^-5. Plot the bandwidth efficiency vs
Eb/N0 for the given values of M. Also plot the capacity boundary on the same graph sheet and comment
on the effect of increasing M in the context of the Shannon limit.

M            2     4     8     16     32
Eb/N0, dB    10    12    16    20.9   26

UNIT 4: ERROR CONTROL CODING

51. Justify the need for error control coding. What are the various types of codes usually used in digital
communication? Discuss.
52. Explain the following terms in the context of error control coding – block codes, Convolutional codes,
trellis codes, Hamming distance & Hamming weight, syndrome.

53. What are linear block codes? How are they generated and decoded? Explain using suitable examples.

54. Discuss the characteristics of various types of CRC codes.

55. How are Convolutional codes different from block codes? When and where are they preferred as
compared to block codes? Explain code rate and constraint length. Show with suitable diagrams, how an
encoder may be implemented.
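
An encoder of this kind can be sketched in a few lines (the rate-1/2, constraint-length-3 generators 7 and 5 octal are a common textbook choice, not necessarily the encoder of Fig. P-61):

```python
# Rate-1/2, constraint-length-3 convolutional encoder sketch
# (generator polynomials 111 and 101, i.e. octal 7 and 5 -- an
# illustrative choice, not tied to any figure in this question bank).
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                       # two memory elements
    out = []
    for b in bits:
        window = [b] + state             # current input plus register contents
        out.append(sum(x * y for x, y in zip(window, g1)) % 2)
        out.append(sum(x * y for x, y in zip(window, g2)) % 2)
        state = [b, state[0]]            # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))   # → [1, 1, 1, 0, 0, 0, 0, 1]
```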

56. Describe the Viterbi algorithm for decoding Convolutional codes. Use a suitable example to illustrate
your answer.

57. Given a generator matrix G =[1 1 1], construct a (3,1) code. How many errors can this code correct? Find
the codeword for data vectors d = 0 & d = 1.

58. A generator matrix G = [1 0 1 1; 0 1 1 0] generates a (4, 2) code. Is this a systematic code? What is the
parity-check matrix of this code? Find the code words for all possible input bits. Determine the
minimum distance of the code and the number of bit errors this code can correct.
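
The codeword list and minimum distance of this code can be checked mechanically (for a linear code, dmin equals the minimum weight over the nonzero codewords):

```python
from itertools import product

# Codewords and minimum distance of the (4, 2) code of Q58
# (all arithmetic modulo 2).
G = [[1, 0, 1, 1],
     [0, 1, 1, 0]]

def encode(d, G):
    """Codeword c = d * G over GF(2)."""
    return [sum(di * gij for di, gij in zip(d, col)) % 2
            for col in zip(*G)]

codewords = [encode(d, G) for d in product((0, 1), repeat=2)]
dmin = min(sum(c) for c in codewords if any(c))   # min nonzero weight
print(codewords)
print(dmin)   # with dmin = 2 the code detects single errors but corrects none
```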

59. Use the generator polynomial g(x) = x^3 + x + 1 to construct a systematic (7, 4) cyclic code. What are the
error correcting capabilities of this code? Construct the decoding table. If the received word is 1101100
determine the transmitted word.

60. A three-error correcting (23, 12) Golay code is a cyclic code with a generator polynomial
g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1. Determine the code words for the data vectors 000011110000,
101010101010 & 110001011110.

61. For the Convolutional encoder shown in Fig. P-61, the received bits are 01 00 01 00 10 11 11 00.
Use Viterbi's algorithm and a trellis diagram to decode this sequence.

[Fig. P-61: Convolutional encoder.]
UNIT 5: SPREAD SPECTRUM MODULATION

62. Define spread spectrum. What is the main advantage of spread-spectrum modulation? What are the
various types of spread-spectrum modulation? Discuss their principles.

63. What is a pseudo-noise sequence? Give an example of such a sequence and explain how it may be
generated. Also, discuss the properties of maximum-length sequences.
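
A maximal-length sequence can be generated with a linear feedback shift register; a sketch for m = 4, with the feedback taps chosen so that the recurrence corresponds to the primitive polynomial x^4 + x + 1 (one valid choice):

```python
# Maximum-length (m-sequence) generation with a 4-stage Fibonacci LFSR.
# Feedback reg[3] XOR reg[2] gives the recurrence s_n = s_{n-3} + s_{n-4},
# i.e. the primitive polynomial x^4 + x + 1, so the period is 2^4 - 1 = 15.
def m_sequence(m=4, taps=(3, 2), seed=None):
    reg = list(seed or [1] * m)          # any nonzero initial state works
    out = []
    for _ in range(2 ** m - 1):          # one full period
        out.append(reg[-1])              # output the last stage
        fb = reg[taps[0]] ^ reg[taps[1]]
        reg = [fb] + reg[:-1]            # shift the register
    return out

seq = m_sequence()
print(seq)
print(len(seq), sum(seq))   # period 15; balance property: 8 ones, 7 zeros
```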

64. Describe Direct-Sequence Spread Coherent Binary Phase-Shift Keying. Explain, with the help of relevant
block diagrams, the generation and detection of signals of this type.

65. Explain the concept of frequency-hop spread spectrum. Describe the slow-frequency-hopping and
fast-frequency-hopping techniques and the methods of generation and detection employed.

66. Discuss major applications of spread-spectrum techniques currently in use.

67. A PN sequence is generated using a feedback shift register of length m = 4. The chip rate is 10^7 chips per
second. Find the following: (i) PN sequence length, (ii) chip duration of the PN sequence & (iii) PN
sequence period.

68. A direct-sequence spread binary phase-shift keying system uses a feedback shift register of length 19 for
the generation of the PN sequence. Calculate the processing gain of the system.
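
The arithmetic for Q67 and Q68 can be sketched directly (assuming maximal-length sequences, so N = 2^m - 1, and taking the processing gain of the DS/BPSK system as the sequence length):

```python
import math

# Q67: PN sequence numbers (assuming a maximal-length sequence, N = 2^m - 1).
m1 = 4
Rc = 1e7                      # chip rate, chips/s
N1 = 2 ** m1 - 1              # (i) PN sequence length = 15 chips
Tc = 1 / Rc                   # (ii) chip duration = 100 ns
period = N1 * Tc              # (iii) PN sequence period = 1.5 microseconds

# Q68: processing gain of the DS/BPSK system, taken as the sequence length.
m2 = 19
PG = 2 ** m2 - 1              # = 524287
PG_dB = 10 * math.log10(PG)   # ≈ 57.2 dB

print(N1, Tc, period, PG, round(PG_dB, 1))
```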

69. (a) A slow FH/MFSK system has the following parameters: number of bits per MFSK symbol = 4 &
number of MFSK symbols per hop = 5. Calculate the processing gain of the system.
(b) A fast FH/MFSK system has the following parameters: number of bits per MFSK symbol = 4 & number
of hops per MFSK symbol = 4. Calculate the processing gain of the system.

70. Consider a fast-hopping binary ASK system wherein the binary signal amplitudes are 0 & 2 V
respectively and the AWGN psd = 10^-6 W/Hz. The ASK uses a data rate of 100 kbits/s and is detected
non-coherently. It requires 100 kHz bandwidth for transmission. However, the frequency hopping is over 12
equal ASK bands with bandwidth totaling 1.2 MHz. The partial-band jammer can generate a strong
Gaussian noise-like interference with a total power of 27 dBm. If the partial jammer randomly jams one of
the 12 FH channels, derive the BER of the FH-ASK if the ASK signal hops 6 bands per bit period.

71. In a multiuser CDMA system of DSSS, all transmitters are at equal distance from the receiver. The AWGN
spectrum has Sn(f) = 5 x 10^-6 W/Hz. The modulation format for all users is BPSK at the rate of 16 kbits/s.
If all spreading codes are mutually orthogonal, find the desired user signal power Pi required to achieve a
BER of 10^-5. Repeat if one of the 15 interfering transmitters is 2 times closer to the desired receiver.
