Digital Communications
COURSE FILE
(AY 2018-19)
1. Cover Page
2. Vision of the Institute
3. Mission of the Institute
4. Vision of the Department
5. Mission of the Department
6. PEOs and POs
7. Syllabus copy
8. Course objectives and outcomes
9. Brief note on the course & how it fits into the curriculum
10. Prerequisite, if any.
11. Instructional Learning Outcomes
12. Course mapping with PEOs and POs
13. Class Time Table
14. Individual Time Table
15. Lecture plan with methodology being used/adopted.
16. Assignment questions
17. Tutorial problems
18. a) Unit wise question bank
b) Unit wise Quiz Questions and long answer questions
19. Detailed notes
20. Additional topics, if any.
21. Known gaps, if any, and inclusion of the same in the lecture schedule
22. Discussion topics, if any
23. University Question papers of previous years
24. References, Journals, websites and E-links
25. Quality Control Sheets: a) Course end survey b) Teaching evaluation
c) CO attainment.
26. Student List
27. Group-Wise students list for discussion topics
Prepared by                      Updated by
1) Name :                        1) Name :
2) Sign :                        2) Sign :
3) Design :                      3) Design :
4) Date :                        4) Date :
II. To train students with problem-solving capabilities such as analysis and design,
with adequate practical skills, wherein they demonstrate creativity and innovation
that enable them to develop state-of-the-art equipment and technologies of a
multidisciplinary nature for societal development.
Course Objectives:
1. Understand different digital coding techniques such as PCM, DM and DPCM, and
analyze different digital baseband transmission techniques.
2. Describe the concepts of information theory and source coding techniques.
3. Understand the concepts of error correction codes, namely block codes, cyclic codes
and convolutional codes.
4. Understand various digital modulation techniques, namely ASK, FSK, BPSK and
QPSK.
5. Understand the characteristics of Spread Spectrum (SS) modulation techniques.
Course Outcomes:
The RF spectrum must be shared, yet every day there are more users for that spectrum
as demand for communications services increases. Digital modulation schemes have a greater
capacity to convey large amounts of information than analog modulation schemes.
Prerequisites:
Engineering Mathematics
Basic Electronics
Theory of Signals and Systems
Analog Communications
DC1: Analyze the elements of digital communication system, the importance and
Applications of Digital Communication.
DC2: Explain the importance and need of the sampling theorem in digital communication
systems and the digital representation of analog signals.
DC4: Identify the need of pulse shaping for optimum transmission and get the knowledge of
Base band signal receiver model.
DC5: Explain Probability of error, optimum receiver, coherent reception, Matched filter
and understand the Signal space representation
DC6: Describe cross talk and its effect in the degradation of signal quality in digital
communication
DC9: Analyze the different digital communication schemes like Differential PCM systems
(DPCM), Delta modulation, and adaptive delta modulation.
DC10: Compare the digital communication schemes like Differential PCM systems
(DPCM), Delta modulation, and adaptive delta modulation.
DC12: Identify the basic terminology used in coding of Digital signals like Information and
entropy and Conditional entropy, Mutual information and redundancy.
DC13: Compare different types of channels and explain channel capacity, Hartley-Shannon
law and Bandwidth-SNR tradeoff.
DC14. Explain the need for Source coding and types of Source coding methods.
DC15: Solve problems based on mutual information and Information loss due to noise.
DC16: Compute problems on Source coding methods like - Huffman code, Shannon-Fano
codes used in digital communication to increase average information per bit.
UNIT III:
DC17: Illustrate the different types of codes used in digital communication and the
Matrix description of linear block codes.
DC18: Analyze the Hamming encoder and syndrome decoder and find errors, solve the
numerical problems for Error detection and error correction of Hamming codes
DC19: Construct the algebraic structure of cyclic codes and analyze how encoder and
decoders used to detect and correct errors.
DC20: Compute problems based on the representation of cyclic codes and encoding and
decoding of cyclic codes.
DC21: Solve problems to find the location of error in the codes i.e., syndrome calculation.
Convolutional Codes
DC22: Identify the differences between the different codes used in digital communication.
DC24: Solve problems on error detection & correction using state, tree and trellis
diagrams.
DC26: Solve numerical problems on error calculations and compare the error rates in coded
and uncoded transmission.
DC27: Describe and differentiate the different shift keying formats used in digital
communication.
DC28: Compute the power and bandwidth requirements of modern communication system
modulation formats such as ASK, PSK, FSK and QAM.
DC29-32: Explain the different modulators and detectors: ASK modulator, coherent ASK
detector, non-coherent ASK detector, bandwidth and frequency spectrum of FSK,
non-coherent FSK detector, coherent FSK detector.
DC34: Differentiate the different keying schemes -BPSK, Coherent PSK detection, QPSK
& Differential PSK.
DC35: Draw and analyze eye diagrams and compare performance of various modulation
techniques
UNIT V:
DC36: Analyze the need for and use of spread spectrum in digital communication and gain
knowledge of spread spectrum techniques like direct sequence spread spectrum (DSSS).
DC37: Describe code division multiple access, ranging using DSSS, and frequency hopping
spread spectrum.
COMMUNICATION
POs                     1  2  3  4  5  6  7  8  9  10  11  12  PSO 1  PSO 2
Digital Communications  1  2  3  2  2  2  1  2  2  3
LUNCH
Wednesday DSP DSP CSE DC VLSI& DDVH Labs / DSP Lab
Library/Sports/
Thursday SC-I SC-I DSP DSP* BEC/ Mentoring CACG
Friday PE-II HVPE DC PE-II* CSE Finishing School
Saturday CSE* HVPE DC* SC-I* PE-I* Remedial Classes
S.No Subject(T/P) Faculty Name Subject Code Periods/Week
1 Digital Signal processing Dr.C.Venkata Narasimhulu / K.Victor 16EC3201 3+1*
2 Digital Communications S.Krishna Priya 16EC3202 3+1*
3 Control Systems Engineering M.Krishna 16EC3203 3+1*
Professional Elective – I (PE-I)
Electronic Instrumentation and Measurements A.Subramanyam 16EC3204
4 Telecommunication Switching Systems and Networks - 16EC3205 3+1*
Digital Systems Design - 16EC3206
Professional Elective – II (PE-II)
Optical Communications - 16EC3207
5 3+1*
Computer Architecture and Organization M.Laxmi 16EC3208
Computer Networks - 16CS3212
Soft Core – I (SC-I) 3+1*
6 Digital Design through Verilog HDL G.SreeLakshmi 16EC3209 Room No: 314
VLSI Design Prof.OVPR.Siva Kumar / B.Mamatha 16EC3210 Room No: 320
Digital Signal Processing Lab K.Victor, Y.Siva Rama Krishna
7 16EC32L1 3
Lab Technician: D.Vivekananda
Soft Core - I lab
Digital Design through Verilog HDL Lab G.SreeLakshmi, M.Laxmi
16EC32L2
8 Lab Technician: ARL.Padmaja 3
VLSI Lab B.Mamatha, B.Jugal Kishore
16EC32L3
Lab Technician: M.Chathar Singh, K.Chalapathi Rao
9 Human Values and Professional Ethics J.Vijaya Lakshmi 16MB32P1 3
10 BEC/SoftSkills G.Karuna Kumari, Dr.B.Nagamani - 2
11 Library/Sports - - 1
12 Mentoring - - -
13 CACG Dr.K.Madhumati - -
14 Finishing School Section Incharge: B.Ramu - 4
15 Remedial Classes - - 2
* indicates tutorial hour Date: 14-11-2018
TT Coord:___________ HOD:____________ Dean Academics:___________Principal:____________
LUNCH
Wednesday DC DC HVPE HVPE VLSI& DDVH Labs / DSP Lab
Library/Sports/
Thursday SC-I SC-I DSP DSP BEC/ Mentoring CACG
Friday PE-I PE-I* PE-II PE-II DSP* Finishing School
Saturday CSE* PE-II* DSP* SC-I* HVPE Remedial Classes
S.No Subject(T/P) Faculty Name Subject Code Periods/Week
1 Digital Signal processing Dr.V.Vineel Kumar 16EC3201 3+1*
2 Digital Communications P.Sudhakar 16EC3202 3+1*
3 Control Systems Engineering A.Sowjanya 16EC3203 3+1*
Professional Elective – I (PE-I)
Electronic Instrumentation and Measurements - 16EC3204
4 Telecommunication Switching Systems and Networks - 16EC3205 3+1*
Digital Systems Design Dr.S.Udaya Kumar / D.Sony 16EC3206
Professional Elective – II (PE-II)
Optical Communications - 16EC3207
5 3+1*
Computer Architecture and Organization - 16EC3208
Computer Networks B.Sreelatha 16CS3212
Soft Core – I (SC-I) 3+1*
6 Digital Design through Verilog HDL G.SreeLakshmi 16EC3209 Room No: 314
VLSI Design Prof.OVPR.Siva Kumar / B.Mamatha 16EC3210 Room No: 320
Digital Signal Processing Lab L.Kavya, M.Anand
7 16EC32L1 3
Lab Technician: D.Venkateshwarlu
Soft Core - I lab
Digital Design through Verilog HDL Lab G.SreeLakshmi, M.Laxmi
16EC32L2
8 Lab Technician: ARL.Padmaja 3
VLSI Lab B.Mamatha, B.Jugal Kishore
16EC32L3
Lab Technician: M.Chathar Singh, K.Chalapathi Rao
9 Human Values and Professional Ethics J.Vijaya Lakshmi 16MB32P1 3
10 BEC/SoftSkills G.Karuna Kumari, Dr.B.Nagamani - 2
11 Library/Sports - - 1
12 Mentoring - - -
13 CACG Dr.K.Madhumati - -
14 Finishing School Section Incharge: B.Ramu - 4
15 Remedial Classes - - 2
* indicates tutorial hour Date: 14-11-2018
TT Coord:___________ HOD:____________ Dean Academics:___________Principal:____________
LUNCH
Wednesday PE-II VLSI(C2) / DSP(C1) DC HVPE CSE
Library/Sports/
Thursday PE-II PE-II CSE CSE BEC/ Mentoring CACG
Friday DSP DSP SC-I SC-I* DC Finishing School
Saturday HVPE DC* CSE* PE-II* DSP* Remedial Classes
S.No Subject(T/P) Faculty Name Subject Code Periods/Week
1 Digital Signal processing Dr.V.Vineel Kumar 16EC3201 3+1*
2 Digital Communications S.Krishna Priya 16EC3202 3+1*
3 Control Systems Engineering M.Krishna 16EC3203 3+1*
Professional Elective – I (PE-I) 3+1*
Electronic Instrumentation and Measurements - 16EC3204 -
4 Telecommunication Switching Systems and Networks Y.Naga Lakshmi 16EC3205 Room No: 322
Digital Systems Design K.Somasekhara Rao / Ch.Sandeep 16EC3206 Room No: 321
Professional Elective – II (PE-II) 3+1*
Optical Communications M.Anand 16EC3207 Room No: 321
5
Computer Architecture and Organization - 16EC3208 -
Computer Networks J.Bharathi 16CS3212 Room No: 322
Soft Core – I (SC-I)
6 Digital Design through Verilog HDL - 16EC3209 3+1*
VLSI Design B.Jugal Kishore 16EC3210
Digital Signal Processing Lab Dr.V.Vineel Kumar, B.Ramu
7 16EC32L1 3
Lab Technician: D.Vivekananda, ARL.Padmaja
Soft Core - I lab
Digital Design through Verilog HDL Lab - 16EC32L2
8 3
VLSI Lab B.Jugal Kishore, S.Yagnasree
16EC32L3
Lab Technician: M.Chathar Singh, K.Chalapathi Rao
9 Human Values and Professional Ethics B.P.S.Jyothi 16MB32P1 3
10 BEC/SoftSkills G.Karuna Kumari, Dr.B.Nagamani - 2
11 Library/Sports - - 1
12 Mentoring - - -
13 CACG Dr.K.Madhumati - -
14 Finishing School Section Incharge: M.Anand - 4
15 Remedial Classes - - 2
* indicates tutorial hour Date: 14-11-2018
TT Coord:___________ HOD:____________ Dean Academics:___________Principal:____________
LUNCH
Wednesday DC DC CSE SC-I DSP HVPE PE-I
Library/Sports/
Thursday PE-II VLSI(D1) / DSP(D2) BEC/ Mentoring CACG
Friday DSP* CSE* SC-I SC-I* PE-I Finishing School
Saturday PE-I* VLSI(D2) / DSP(D1) PE-II* Remedial Classes
S.No Subject(T/P) Faculty Name Subject Code Periods/Week
1 Digital Signal processing R.Odaiah 16EC3201 3+1*
2 Digital Communications D.Venkata Rami Reddy 16EC3202 3+1*
3 Control Systems Engineering Y.Siva Rama Krishna 16EC3203 3+1*
Professional Elective – I (PE-I)
Electronic Instrumentation and Measurements A.Subramanyam 16EC3204
4 Telecommunication Switching Systems and Networks - 16EC3205 3+1*
Digital Systems Design - 16EC3206
Professional Elective – II (PE-II)
Optical Communications - 16EC3207
5 3+1*
Computer Architecture and Organization - 16EC3208
Computer Networks B.Sreelatha 16CS3212
Soft Core – I (SC-I)
6 Digital Design through Verilog HDL - 16EC3209 3+1*
VLSI Design Ch.Sandeep 16EC3210
Digital Signal Processing Lab R.Odaiah, Y.Siva Rama Krishna
7 16EC32L1 3
Lab Technician: D.Vivekananda, ARL.Padmaja
Soft Core - I lab
Digital Design through Verilog HDL Lab - 16EC32L2
8 3
VLSI Lab Ch.Sandeep, M.Chathar Singh
16EC32L3
Lab Technician: K.Chalapathi Rao
9 Human Values and Professional Ethics J.Vijaya Lakshmi 16MB32P1 3
10 BEC/SoftSkills G.Karuna Kumari, Dr.B.Nagamani - 2
11 Library/Sports - - 1
12 Mentoring - - -
13 CACG Dr.K.Madhumati - -
14 Finishing School Section Incharge: V.Sirisha - 4
15 Remedial Classes - - 2
* indicates tutorial hour Date: 14-11-2018
TT Coord:___________ HOD:____________ Dean Academics:___________Principal:____________
LUNCH
Wednesday PE-II CSE DC DC SC-I SC-I HVPE
Library/Sports/
Thursday PE-II PE-II DSP DSP BEC/ Mentoring CACG
Friday HVPE VLSI(E2) / DSP(E1) SC-I Finishing School
Saturday DC* CSE* SC-I* PE-II* DSP* Remedial Classes
S.No Subject(T/P) Faculty Name Subject Code Periods/Week
1 Digital Signal processing L.Kavya 16EC3201 3+1*
2 Digital Communications B.Mamatha 16EC3202 3+1*
3 Control Systems Engineering A.Shankar 16EC3203 3+1*
Professional Elective – I (PE-I) 3+1*
4 Electronic Instrumentation and Measurements P.Chandra Prakash Reddy 16EC3204 MWE&DC Lab[315]
Telecommunication Switching Systems and Networks - 16EC3205 -
Digital Systems Design K.Somasekhara Rao / Ch.Sandeep 16EC3206 Room No: 321
Professional Elective – II (PE-II) 3+1*
Optical Communications M.Anand 16EC3207 Room No: 321
5
Computer Architecture and Organization - 16EC3208 -
Computer Networks J.Bharathi 16CS3212 Room No: 322
Soft Core – I (SC-I)
6 Digital Design through Verilog HDL - 16EC3209 3+1*
VLSI Design M.Krishna Chaitanya 16EC3210
Digital Signal Processing Lab A.Shankar, L.Kavya, K.Victor
7 16EC32L1 3
Lab Technician: D.Vivekananda, ARL.Padmaja
Soft Core - I lab
Digital Design through Verilog HDL Lab - 16EC32L2
8 M.Krishna Chaitanya, B.Mamatha, 3
VLSI Lab
M.Chathar Singh 16EC32L3
Lab Technician: K.Chalapathi Rao
9 Human Values and Professional Ethics B.P.S.Jyothi 16MB32P1 3
10 BEC/SoftSkills G.Karuna Kumari, Dr.B.Nagamani - 2
11 Library/Sports - - 1
12 Mentoring - - -
13 CACG Dr.K.Madhumati - -
14 Finishing School Section Incharge: V.Sirisha - 4
15 Remedial Classes - - 2
* indicates tutorial hour Date: 14-11-2018
TT Coord:___________ HOD:____________ Dean Academics:___________Principal:____________
LUNCH
Wednesday DC-IIIA DC-IIIC
Thursday SMII Lab-IIE
Friday DC-IIIA DC-IIIC
Saturday DC-IIIC DC-IIIA Dept Meetings
Name of the Faculty: P.Sudhakar Workload: 11+3
Time    09.00-09.50  09.50-10.40  10.40-11.30  11.30-12.20  12.20-01.00  1.00-1.50  1.50-2.40  2.40-3.30
Period  1  2  3  4  5  6  7
Monday DC-IIIB DC-IIIB
Tuesday
LUNCH
Wednesday DC-IIIB DC-IIIB GS-IID
Thursday
Friday RS-IV(C&D) RS-IV(C&D) RS-IV(C&D)
Saturday RS-IV(C&D) Dept Meetings
Name of the Faculty: D. Venkata Rami Reddy Workload:13
Time    09.00-09.50  09.50-10.40  10.40-11.30  11.30-12.20  12.20-01.00  1.00-1.50  1.50-2.40  2.40-3.30
Period  1  2  3  4  5  6  7
Monday DC-IIID
LUNCH
UNIT I
1  Elements of Digital Communication Systems: Model of digital communication system  1  Regular  OHP,BB
2  Model of digital communication system; Digital representation of analog signals  1  Regular  OHP,BB
3  Certain issues of digital transmission  1  Regular  OHP,BB
4  Advantages of digital communication systems  1  Regular  OHP,BB
5  Sampling theorem  1  Regular  OHP,BB
09
6  A baseband signal receiver; Different pulses and power spectral densities  1  Regular  OHP,BB
7  Probability of error; optimum receiver  1  Regular  OHP,BB
8  Optimum coherent reception  1  Regular  OHP,BB
9  Tutorial class-1  1  BB
16 Assignment test-1  1
UNIT II
36 Assignment test-3 1
51 Assignment test-4 1
UNIT V
60 Assignment test-5 1
Unit 1:
Unit 2:
Unit 3:
Unit 4:
Unit 5:
Tutorial – 1
3. Find the transfer function of the optimum receiver and calculate the error probability.
4. Show that the impulse response of a matched filter is a time-reversed and delayed
version of the input signal. Briefly explain the properties of the matched filter.
5. For the modulating signal m(t) = 2cos(100πt) + 18cos(2000πt), determine the
allowable sampling rates and sampling intervals.
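As a check on problem 5, the minimum sampling rate follows from the highest frequency component of m(t) (assuming the first term's argument is 100πt, consistent with the second term):

```python
import math

# Tone frequencies of m(t) = 2cos(100*pi*t) + 18cos(2000*pi*t)
f1 = 100 * math.pi / (2 * math.pi)    # 50 Hz
f2 = 2000 * math.pi / (2 * math.pi)   # 1000 Hz

f_max = max(f1, f2)                   # highest frequency component
fs_min = 2 * f_max                    # Nyquist rate (minimum sampling rate)
Ts_max = 1 / fs_min                   # corresponding maximum sampling interval

print(fs_min, Ts_max)                 # 2000 samples/sec, 0.5 ms
```

Any sampling rate at or above fs_min (interval at or below Ts_max) is allowable.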
6. Prove that the mean value of the quantization error is inversely proportional to the
7. Explain why quantization noise could affect small amplitude signals in a PCM system
more than large signals. With the aid of sketches show how tapered quantizing level
8. Explain the working of a Delta modulation system with a neat block diagram.
Clearly bring out the difference between granular noise and slope overload error.
Consider a speech signal with maximum frequency of 3.4 kHz and maximum
20 kbps. Discuss the choice of appropriate step size for the modulator.
9. A delta modulator system is designed to operate at five times the Nyquist rate for a
signal with 3 kHz bandwidth. Determine the maximum amplitude of a 2 kHz input
sinusoid for which the delta modulator does not have slope overload. Quantization
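A sketch of the slope-overload calculation in problem 9. The step size is not stated in the (truncated) problem, so the value below is an assumption; slope overload is avoided when the staircase slope Δ·fs is at least the peak signal slope A·2πf:

```python
import math

B = 3e3                  # signal bandwidth (Hz)
fs = 5 * (2 * B)         # five times the Nyquist rate -> 30 kHz
f_in = 2e3               # input sinusoid frequency (Hz)
delta = 0.1              # step size in volts -- ASSUMED, not given in the problem

# No slope overload requires delta * fs >= A * 2*pi*f_in, hence:
A_max = delta * fs / (2 * math.pi * f_in)
print(round(A_max, 4))   # maximum amplitude for the assumed step size
```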
10. A signal m(t) is to be encoded using either Delta modulation or the PCM technique. The
signal-to-quantization-noise ratio (So/No) ≥ 30 dB. Find the ratio of the bandwidth required
for PCM to that for Delta modulation.
Tutorial – 3
11. One of four possible messages Q1, Q2, Q3, Q4 having probabilities 1/8, 3/8, 3/8, and
1/8 respectively is transmitted. Calculate average information per message.
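Problem 11 can be checked numerically; the average information per message is the source entropy H = Σ pᵢ log₂(1/pᵢ):

```python
import math

p = [1/8, 3/8, 3/8, 1/8]                        # probabilities of Q1..Q4
H = sum(pi * math.log2(1 / pi) for pi in p)     # average information per message
print(round(H, 4))                              # about 1.81 bits/message
```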
12. An ideal low-pass channel of bandwidth B Hz with additive white Gaussian
noise is used for transmission of digital information.
a. Plot C/B versus S/N in dB for an ideal system using this channel.
b. A practical signaling scheme on this channel uses one of two waveforms of
duration Tb sec to transmit binary information. The signaling scheme
transmits data at the rate of 2B bits/sec; the probability of error is given by P
(error | 1 sent) = Pe
c. Plot graphs of
i. C/B
ii. Dt/B where Dt is rate of information transmission over channel.
Tutorial – 4
    [1 0 0 0 1 1]
G = [0 1 0 1 0 1]
    [0 0 1 1 1 0]
Find
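The codewords implied by the generator matrix G above can be enumerated directly (a minimal sketch; the code is (6,3), and the minimum distance equals the minimum nonzero codeword weight for a linear code):

```python
from itertools import product

# Generator matrix of the (6,3) linear block code from the problem
G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

def encode(m):
    """Codeword c = m.G over GF(2)."""
    return [sum(m[i] * G[i][j] for i in range(3)) % 2 for j in range(6)]

codewords = [encode(list(m)) for m in product([0, 1], repeat=3)]
d_min = min(sum(c) for c in codewords if any(c))   # minimum nonzero weight
print(len(codewords), d_min)
```

With d_min = 3, the code can correct any single-bit error.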
18. Draw and explain a decoder diagram for a (7,4) majority logic code whose
19. Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1 + x + x^3 and
verify its operation using the message vector (0101).
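A sketch of the systematic cyclic encoding in problem 19, assuming the message vector (0101) is read lowest-degree coefficient first (the bit-order convention is not stated in the problem). The parity bits are the remainder of x³·m(x) divided by g(x) over GF(2):

```python
def gf2_mod(dividend, divisor):
    """Remainder of polynomial division over GF(2).
    Coefficient lists are ordered lowest degree first."""
    r = dividend[:]
    dd = len(divisor) - 1
    for i in range(len(r) - 1, dd - 1, -1):
        if r[i]:
            for j, gj in enumerate(divisor):
                r[i - dd + j] ^= gj
    return r[:dd]

g = [1, 1, 0, 1]              # g(x) = 1 + x + x^3
m = [0, 1, 0, 1]              # message (0101), lowest-degree bit first (assumed)

shifted = [0, 0, 0] + m       # x^3 * m(x)
parity = gf2_mod(shifted, g)  # parity bits = remainder of x^3 m(x) / g(x)
codeword = parity + m         # systematic codeword c(x) = r(x) + x^3 m(x)
print(codeword)
```

The verification step is that the resulting codeword polynomial is exactly divisible by g(x).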
1110100
H= 1101010
1011001
a. The code word received is 1000011 for a transmitted codeword C. Find the
22. Draw the state diagram and tree diagram for the K=3, rate-1/3 code generated by
Tutorial – 5
24. A convolutional encoder has two shift registers, two modulo-2 adders and an output
multiplexer. The generator sequences of the encoder are as follows: g(1) = (1,0,1);
g(2) = (1,1,1). Assuming a 5-bit message sequence is transmitted, use the state diagram to
find the message sequence when the received sequence is
(11, 01, 00, 10, 01, 10, 11, 00, 00, ...)
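The rate-1/2 encoder in problem 24 can be simulated directly. The message below is a hypothetical example input (followed by two flushing zeros), not the sequence asked for in the problem:

```python
def conv_encode(msg, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Rate-1/2 convolutional encoder with generator sequences g1, g2.
    The state is the contents of the two shift-register stages."""
    s = [0, 0]
    out = []
    for bit in msg:
        window = [bit] + s                          # current input plus register
        v1 = sum(b * g for b, g in zip(window, g1)) % 2
        v2 = sum(b * g for b, g in zip(window, g2)) % 2
        out.append((v1, v2))                        # multiplexed output pair
        s = [bit, s[0]]                             # shift the register
    return out

# hypothetical 5-bit message 10110, plus two zeros to flush the register
encoded = conv_encode([1, 0, 1, 1, 0, 0, 0])
print(encoded)
```

Decoding the received sequence in the problem amounts to walking this same state machine backwards (e.g., with the Viterbi algorithm).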
25. Find the output codeword for the following convolutional encoder for the message
sequence 10011. (as shown in the figure).
26. Construct the state diagram for the following encoder. Starting with the all-zero state,
trace the path that corresponds to the message sequence 1011101. The given convolutional
encoder has a single shift register with two stages (K=3), three modulo-2 adders and
an output multiplexer. The generator sequences of the encoder are as follows:
g(1) = (1, 0, 1); g(2) = (1, 1, 0); g(3) = (1, 1, 1).
28. For the convolutional encoder shown below, draw the trellis diagram for the message
sequence 110. Let the first six received bits be 11 01 11; then, using Viterbi decoding,
find the decoded sequence.
Tutorial – 6
29. Explain the signal space representation of QPSK. Compare QPSK with all other
digital signaling schemes.
30. Explain in detail the power spectra and bandwidth efficiency of M-ary signals.
34. Find the transfer function of the optimum receiver and calculate the error probability.
35. Derive an expression for probability of bit error of a binary coherent FSK receiver.
Tutorial – 7
37. Explain the direct sequence spread spectrum technique with a neat diagram.
41. How does DS-SS work? Explain it with a block diagram. Explain the operation of slow
and fast frequency hopping techniques.
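The spreading codes behind DS-SS are typically pseudo-noise (PN) sequences generated by a linear feedback shift register. A minimal 3-stage example (a sketch; tap positions and seed are illustrative) produces a maximal-length sequence of period 2³ − 1 = 7:

```python
def lfsr_pn(taps, state, n):
    """Generate n chips from a simple Fibonacci LFSR.
    taps: 0-based register positions XORed to form the feedback bit."""
    seq = []
    for _ in range(n):
        seq.append(state[-1])             # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]         # shift the feedback bit in
    return seq

# 3-stage maximal-length LFSR: feedback from stages 0 and 2, nonzero seed
chips = lfsr_pn(taps=[0, 2], state=[1, 0, 0], n=14)
print(chips)
```

The output repeats with period 7 and has the balance property of m-sequences (one more 1 than 0 per period).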
43. Explain TDMA system with frame structure, frame efficiency and features. Explain
CDMA system with its features and list out various problems in CDMA systems.
2  List two examples each for analog and digital signals (in mathematical form).  Remember
8 Construct the mathematical expression for Minimum sampling rate (fs). Create
S. No  Question  Blooms Taxonomy
1  Summarize differential encoding signaling. Explain with an example.  Understand
2 Define quantization in PCM. Remember
3  Explain a simple model of a non-uniform quantizer.  Understand
4 Define the term quantization noise. Remember
5 Compare the features of PCM and DPCM. Analyze
6 List the advantage gained by the use of robust quantization. Remember
7 Define an output signal-to-quantization ratio. Remember
8  Mention two major sources of noise which influence the performance of a PCM system.  Knowledge
9 Discuss the advantages of DM over PCM. Create
10 Construct the block diagram of pulse code modulation. Apply
11 Define quantization noise power Remember
12 Discuss about uniform quantization? Create
13 Discuss about Quantization? Create
14 Compare uniform and non-uniform quantization Analyze
S. No  Question  Blooms Taxonomy
1 What is meant by distortion less transmission? Remember
2 Discuss entropy and give the expression for it. Creating
3 Explain the channel capacity theorem. Understand
5  Let X represent the outcome of a single roll of a fair die. What is the entropy of X?  Remember
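For the fair-die question, all six outcomes are equiprobable, so the entropy reduces to H(X) = log₂(6):

```python
import math

# Entropy of a fair die: six equiprobable outcomes
p = [1/6] * 6
H = sum(pi * math.log2(1 / pi) for pi in p)   # equals log2(6)
print(round(H, 3))                            # about 2.585 bits
```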
S. No  Question  Blooms Taxonomy
6 codes?
S. No  Question  Blooms Taxonomy
2  Sketch the block diagram of ASK generation.  Remember
3  Examine how pulse shaping reduces intersymbol interference.  Analyze
5  Explain the bandwidth, power and energy calculations for a PSK signal.  Evaluate
6  Explain why PSK is always preferred over ASK in coherent detection.  Evaluate
8  Explain phase shift keying with relevant equations and waveforms.  Understand
9  Estimate the bandwidth required for frequency shift keying and draw its spectrum.  Create
13 Construct the FSK waveforms for the given input data "1101".  Apply
S. No  Question  Blooms Taxonomy
1 Define spread spectrum communication Remember
2  Explain pseudo-noise sequences.  Understand
3 Discuss direct sequence spread spectrum modulation Create
4  What is frequency hop spread spectrum modulation?  Remember
5 What is processing gain? Understand
6 State four applications of spread spectrum. Create
7  When is the PN sequence called a maximal length sequence?  Analyze
8 What is meant by processing gain of DS spread spectrum system? Understand
9 Discuss the applications of spread spectrum modulation? Create
10 Define frequency hopping. Understand
11 What are the Advantages of DS-SS systems? Remember
12 What are the Disadvantages of DS-SS systems? Remember
13 List the Advantages of FH-SS System Analyze
14 List the Disadvantages of FH-SS System Analyze
S. No  Question  Blooms Taxonomy
1  Compare the linear block codes, cyclic codes and the convolutional codes.  Evaluate
2  Draw an (n-k) syndrome calculation circuit for an (n, k) cyclic code.  Understand
3  What is meant by random errors and burst errors? Explain a coding technique which can be used to correct both burst and random errors simultaneously.  Remember
4  Discuss the various decoders for convolutional codes.  Create
5  Explain how channel coding reduces the probability of error.  Evaluate
6  Explain the systematic code form for binary cyclic codes.  Evaluate
7  Explain block codes in which each block of k message bits is encoded into a block of n > k bits, with an example.  Evaluate
8  Demonstrate the Viterbi algorithm for maximum-likelihood decoding of convolutional codes.  Create
9  What is a convolutional code? How is it different from a block code?  Remember
(a) Source coding improves the error performance of the communication system.
(b) Channel coding will reduce the average source code word length.
(c) Two different source codeword sets can be obtained using Huffman coding.
(d) Two different source codeword sets can be obtained using Shannon-Fano coding.
2. A memoryless source emits 2000 binary symbols/sec and each symbol has a probability of
0.25 of being 1 and 0.75 of being 0. The minimum number of bits/sec required for
error-free transmission of this source is
(a) 1500
(b) 1734
(c) 1885
(d) 1622
3. A system has a bandwidth of 3 kHz and an S/N ratio of 29 dB at the input of the receiver. If
the bandwidth of the channel gets doubled, then
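Questions of this type rest on the Shannon-Hartley law C = B·log₂(1 + S/N). A numeric sketch for B = 3 kHz and S/N = 29 dB, assuming fixed signal power over white noise, so that doubling the bandwidth doubles the noise power and halves the S/N ratio:

```python
import math

B = 3e3                          # channel bandwidth (Hz)
snr = 10 ** (29 / 10)            # 29 dB as a power ratio (~794)

C1 = B * math.log2(1 + snr)      # capacity with the original bandwidth

# Doubled bandwidth, fixed signal power, white noise -> S/N is halved:
C2 = (2 * B) * math.log2(1 + snr / 2)
print(round(C1), round(C2))      # capacity grows, but less than twofold
```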
(b) ARQ scheme of error control is applied after the receiver makes a decision about the
received bit
(d) ARQ scheme of error control is applied when the receiver is unable to make a decision
(c) parity bits of the code word are the linear combination of the message bits
(d) the received power varies linearly with that of the transmitted power
7. Which of the following provides the facility to recognize the error at the receiver?
8. A system has a bandwidth of 3 kHz and an S/N ratio of 29 dB at the input of the receiver. If
the bandwidth of the channel gets doubled, then
10. In a communication system, the average amounts of uncertainty associated with the
source, the sink, and the source and sink jointly, in bits/message, are 1.0613, 1.5 and 2.432
respectively. Then the information transferred by the channel connecting the source and sink, in bits, is
(a) 1.945
(b) 4.9933
(c) 2.8707
(d) 0.1293
11. A BSC has a transition probability of P. The cascade of two such channels is
Answers
2. A Field is
3.Under error free reception, the syndrome vector computed for the received cyclic code
word consists of
(a) 5
(b) 4
(c) 3
(d) 6
6. There are four binary words given as 0000, 0001, 0011, 0111. Which of these cannot be a
member of the parity check matrix of a (15,11) linear block code?
(a) 0011
(b) 0000,0001
(c) 0000
(d) 0111
7. The encoder of a (7,4) systematic cyclic encoder with generating polynomial
g(x) = 1 + x^2 + x^3 is basically a
(b) H(X,Y)=2bits/message
(c) H(X/Y)=1bit/message
(d) H(X,Y)=0bits/message
9. A system has a bandwidth of 4 kHz and an S/N ratio of 28 at the input to the receiver. If
the bandwidth of the channel is doubled, then
10. The Parity Check Matrix of a(6,3) Systematic Linear Block code is
a)101100
b) 011010
c)110001
11.If the Syndrome vector computed for the received code word is [010], then for error
correction, which of the bits of the received code word is to be complemented?
(a)3
(b) 4
(c) 5
(d) 2
Answers
1.D 2.D 3.C 4.B 5.A 6.C 7.A 8.B 9.A 10.C
1. The minimum number of bits per message required to encode the output of a source
transmitting four different messages with probabilities 0.5, 0.25, 0.125 and 0.125 is
(a) 1.5 (b) 1.75 (c) 2 (d) 1
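The answer follows from the source entropy; because every probability here is a power of 1/2, a Huffman code with codeword lengths 1, 2, 3, 3 achieves the entropy exactly:

```python
import math

p = [0.5, 0.25, 0.125, 0.125]
H = sum(pi * math.log2(1 / pi) for pi in p)   # minimum bits per message
print(H)                                      # 1.75 -> option (b)
```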
(d) Remaindering(x)/V(x)
6. In a (6,3) systematic linear block code, the number of 6-bit code words that are not
useful is
7. The output of a source is band-limited to 6 kHz. It is sampled at a rate 2 kHz above the
Nyquist rate. If the entropy of the source is 2 bits/sample, then the entropy of the source in
bits/sec is
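A quick check for question 7: the entropy rate is the sampling rate times the entropy per sample:

```python
f_max = 6e3                    # source band limit (Hz)
fs = 2 * f_max + 2e3           # sampled 2 kHz above the Nyquist rate -> 14 kHz
H_per_sample = 2               # bits/sample
rate = fs * H_per_sample       # entropy of the source in bits/sec
print(rate)                    # 28000 bits/sec
```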
9. A communication channel is fed with an input signal x(t) and the noise in the channel is
negative. The Power received at the receiver input is
10. White noise of PSD η/2 is applied to an ideal LPF with a one-sided bandwidth of B Hz.
The filter provides a gain of 2. If the output power of the filter is 8η, then the value of B in Hz
is
1.B 2.C 3.C 4.A 5.D 6.C 7.B 8.B 9.B 10.A
(a) The syndrome of a received Block coded word depends on the received codeword
(b) The syndrome for a received Block coded word under error free reception consists of all
1‘s.
(c) The syndrome of a received Block coded word depends on the transmitted codeword.
(d) The syndrome of a received Block coded word depends on the error pattern
2. A Field is
3. Variable length source coding provides better coding efficiency, if all the messages of the
source are
(a) FEC and ARQ are not used for error correction
(b) ARQ is used for error control after receiver makes a decision about the received bit
(c) FEC is used for error control when the receiver is unable to make a decision about the
received bit
(d) FEC is used for error control after receiver makes a decision about the received bit
(b) H(X)+H(Y)-H(X,Y)bits/symbol
7. The time domain behavior of a convolutional encoder of code rate 1/3 is defined in terms
of a set of
8. A source X with entropy 2 bits/message is connected to the receiver Y through a noise-free
channel. The conditional entropy of the source given the receiver is H(X/Y) and the joint
entropy of the source and the receiver is H(X,Y). Then
10. The memory length of a convolutional encoder is 5. If a 6-bit message sequence is applied
as the input to the encoder, then for the last message bit to come out of the encoder, the
number of extra zeros to be applied to the encoder is
Answers
1.D 2.D 3.D 4.D 5.A 6.D 7.D 8.A 9.B 10.B
3. Under error-free reception, the syndrome vector computed for the received cyclic
codeword consists of
5. A discrete source X is transmitting m messages and is connected to the receiver Y through
a symmetric channel. The capacity of the channel is given as
(a) H(X)+H(Y)-H(X,Y)bits/symbol
6. The encoder of a (7,4) systematic cyclic encoder with generating polynomial
g(x) = 1 + x^2 + x^3 is basically a
8. A system has a bandwidth of 4 kHz and an S/N ratio of 28 at the input to the receiver. If
the bandwidth of the channel is doubled, then
9. A source is transmitting four messages with equal probability. Then, for optimum source
coding efficiency,
Answers
Slow
Difficult and relatively expensive
Limited amount of information can be sent
Some methods can be used at specific times of the day
Information is not secure.
Examples of Today’s Communication Methods
Fast
Easy to use and very cheap
Huge amounts of information can be transmitted
Secure transmission of information can easily be achieved
Can be used 24 hours a day.
Basic Construction of Electrical Communication System
Analog Signals: are signals with amplitudes that may take any real value out of an infinite
number of values in a specific range (examples: the height of mercury in
a 10cm-long thermometer over a period of time is a function of time that
may take any value between 0 and 10cm; the weight of people sitting in a
classroom is a function of space (x and y coordinates) that may take any
real value between 30 kg and 200 kg (typically)).
Digital Signals: are signals with amplitudes that may take only a specific number of
values (number of possible values is less than infinite) (examples: the
number of days in a year versus the year is a function that takes one of
two values of 365 or 366 days, number of people sitting on a one-person
Noise: is an undesired signal that gets added to (or sometimes multiplied with) a
desired transmitted signal at the receiver. The source of noise may be
external to the communication system (noise resulting from electric
machines, other communication systems, and noise from outer space) or
internal to the communication system (noise resulting from the collision
of electrons with atoms in wires and ICs).
Signal to Noise Ratio (SNR):is the ratio of the power of the desired signal to the power of
the noise signal.
Bandwidth (BW): is the width of the frequency range that the signal occupies. For example,
the bandwidth of a radio channel in the AM band is around 10 kHz and the
bandwidth of a radio channel in the FM band is 150 kHz.
Since the introduction of digital communication a few decades ago, it has seen
a steady increase in use. Today, you can find a digital form of almost all types of analog
communication systems. For example, TV channels are now broadcast in digital form
(most if not all Ku-band satellite TV transmission is digital). Also, radio is now being
broadcast in digital form (see sirus.com and xm.com). Home phone systems are starting to
go digital (a digital phone system is available at KFUPM). Almost all cellular phones are now
digital, and so on. So, what makes digital communication more attractive than analog
communication?
Famous Types
Amplitude Modulation (AM): varying the amplitude of the carrier based on the
information signal as done for radio channels that
are transmitted in the AM radio band.
Phase Modulation (PM): varying the phase of the carrier based on the
information signal.
Frequency Modulation (FM): varying the frequency of the carrier based on the
information signal as done for channels transmitted
in the FM radio band.
Purpose of Modulation
a) TV in the 1970s:
b) TV in the 2030s:
c) Fax machines
4. Digital Modulator:
The binary sequence is passed to the digital modulator, which in turn converts the
sequence into electrical signals so that we can transmit them on the channel (we will see
the channel later). The digital modulator maps the binary sequence into signal waveforms;
for example, if we represent 1 by sin(x) and 0 by cos(x), then we transmit
sin(x) for 1 and cos(x) for 0 (a case similar to BPSK).
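The bit-to-waveform mapping just described can be sketched in a few lines of Python (the sampling grain and the function name are our own illustrative choices, not part of any standard):

```python
import math

def modulate(bits, samples_per_bit=8):
    """Map each bit to one period of a waveform: 1 -> sin(x), 0 -> cos(x)."""
    waveform = []
    for b in bits:
        for n in range(samples_per_bit):
            x = 2 * math.pi * n / samples_per_bit
            waveform.append(math.sin(x) if b == 1 else math.cos(x))
    return waveform

# bit 1 produces sine samples, bit 0 produces cosine samples
w = modulate([1, 0], samples_per_bit=4)
```

A real BPSK modulator would use the same carrier with a 180-degree phase shift between the two symbols; the sin/cos pair here simply mirrors the example in the text.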
5. Channel:
The communication channel is the physical medium used for transmitting signals
from transmitter to receiver. In a wireless system this channel is the atmosphere;
for traditional telephony it is wired; there are also optical channels, underwater
acoustic channels, etc. We further classify channels on the basis of their
properties and characteristics, such as the AWGN channel.
6. Digital Demodulator:
The digital demodulator processes the channel corrupted transmitted waveform and
reduces the waveform to the sequence of numbers that represents estimates of the
transmitted data symbols.
7. Channel Decoder:
This sequence of numbers is then passed through the channel decoder, which attempts to
reconstruct the original information sequence from knowledge of the code used by the
channel encoder and the redundancy contained in the received data.
The average probability of a bit error at the output of the decoder is a measure of the
performance of the demodulator-decoder combination. THIS IS THE MOST
IMPORTANT POINT; we will discuss this BER (Bit Error Rate) at length in the
sections that follow.
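A rough feel for BER as a measured quantity can be had from a small Monte Carlo sketch (the binary symmetric channel model and the 1% flip probability below are illustrative assumptions, not parameters from these notes):

```python
import random

def simulate_ber(n_bits, flip_prob, seed=1):
    """Estimate BER by sending random bits through a binary symmetric channel."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        tx = rng.randint(0, 1)
        # the channel flips the bit with probability flip_prob
        rx = tx ^ (1 if rng.random() < flip_prob else 0)
        if rx != tx:
            errors += 1
    return errors / n_bits

ber = simulate_ber(100_000, 0.01)   # estimate should be close to 0.01
```

In a full link simulation the flip events would come from the demodulator-decoder chain rather than being drawn directly, but the error-counting step is the same.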
8. Source Decoder:
At the end, if an analog signal is desired, the source decoder decodes the
sequence using knowledge of the encoding algorithm, which results in an
approximate replica of the input at the transmitter end.
9. Output Transducer:
Finally, we get the desired signal in the desired format, analog or digital.
The points worth noting are:
1. the source coding algorithm plays an important role in achieving a higher code rate
2. the channel encoder introduces redundancy into the data
3. the modulation scheme plays an important role in deciding the data rate and the
immunity of the signal to errors introduced by the channel
4. the channel introduces many kinds of impairment, such as multipath and errors due to
thermal noise
1. The first advantage of digital communication over analog is its noise immunity. In any
transmission path some unwanted voltage or noise is always present and cannot be
eliminated fully. When a signal is transmitted, this noise is added to the original signal,
causing distortion. In digital communication, however, this additive noise can be
eliminated to a great extent at the receiving end, resulting in better recovery of the
actual signal. In analog communication it is difficult to remove the noise once it has
been added to the signal.
4. A signal travelling through its transmission path fades gradually, so along the path it
must be reconstructed to its actual form and re-transmitted many times. For this purpose
AMPLIFIERS are used in analog communication and REPEATERS are used in digital
communication. Amplifiers are needed every 2 to 3 km, whereas repeaters are needed
only every 5 to 6 km, so digital communication is definitely cheaper. Amplifiers also often
add non-linearities that distort the actual signal.
6. When audio and video signals are transmitted digitally, an AD (Analog to Digital) converter
is needed at the transmitting side and a DA (Digital to Analog) converter is again needed at
the receiving side. In analog communication these devices are not needed.
7. Digital signals are often an approximation of the analog data (like voice
or video) obtained through a process called quantization. The digital representation is
never the exact signal but its closest digital approximation, so its accuracy
depends on the degree of approximation taken in the quantization process.
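The dependence of accuracy on the quantization step can be seen in a two-line sketch (mid-tread rounding to the nearest level is assumed; the step values are arbitrary):

```python
def quantize(x, step):
    """Round a sample to the nearest reconstruction level (mid-tread quantizer)."""
    return step * round(x / step)

# With step 0.1, every sample lands within step/2 = 0.05 of its true value.
samples = [0.0, 0.12, 0.25, 0.37, 0.5]
errors = [abs(quantize(v, 0.1) - v) for v in samples]
```

Halving the step halves the worst-case error, which is exactly the "degree of approximation" trade-off described above: more levels (more bits per sample) buy a closer replica.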
Duo-binary Signaling:
The binary input is first mapped to a two-level sequence:
a_k = +1 if symbol b_k is 1, a_k = -1 if symbol b_k is 0
and the duobinary coder output is
c_k = a_k + a_(k-1)
The ideal Nyquist channel is
H_Nyquist(f) = 1 for |f| <= 1/(2*T_b), 0 otherwise
With precoding, d_k = b_k XOR d_(k-1), and again c_k = a_k + a_(k-1), so that the decision
rule becomes
c_k = 0 if data symbol b_k is 1
c_k = +/-2 if data symbol b_k is 0
Modified duobinary uses c_k = a_k - a_(k-2) with the precoder d_k = b_k XOR d_(k-2),
i.e. d_k is symbol 1 if either symbol b_k or d_(k-2) (but not both) is 1,
and symbol 0 otherwise.
The tapped-delay-line filter has impulse response
h(t) = sum_{n=0}^{N-1} w_n sinc(t/T_b - n)
Equivalently, modelling the taps as impulses and convolving with the channel response c(t),
p(t) = c(t) * h(t) = c(t) * sum_{k=-N}^{N} w_k delta(t - kT) = sum_{k=-N}^{N} w_k c(t - kT)
Sampling at t = nT,
p(nT) = sum_{k=-N}^{N} w_k c((n - k)T)
Nyquist criterion for distortionless transmission, with T used in place of Tb,
normalized condition p(0) = 1.
Zero-forcing equalizer
– Optimum in the sense that it minimizes the peak distortion(ISI) – worst case
– Simple implementation
– The longer the equalizer, the more closely it approximates the ideal condition for
distortionless transmission
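A zero-forcing solver can be sketched by imposing p(nT) = 1 for n = 0 and p(nT) = 0 elsewhere and solving the resulting (2N+1)-by-(2N+1) linear system. The three-sample channel below is an assumed example, not one from the notes:

```python
def zero_forcing_taps(c, N=1):
    """Solve sum_k w_k * c((n-k)T) = delta(n) for n = -N..N (2N+1 taps).

    `c` maps an integer sample index to the channel pulse sample c(kT);
    indices missing from the dict are treated as 0.
    """
    size = 2 * N + 1
    A = [[c.get(n - k, 0.0) for k in range(-N, N + 1)]
         for n in range(-N, N + 1)]
    b = [1.0 if n == 0 else 0.0 for n in range(-N, N + 1)]
    # Gaussian elimination with partial pivoting
    for i in range(size):
        p = max(range(i, size), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, size):
            f = A[r][i] / A[i][i]
            for col in range(i, size):
                A[r][col] -= f * A[i][col]
            b[r] -= f * b[i]
    w = [0.0] * size
    for i in reversed(range(size)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, size))) / A[i][i]
    return w  # taps indexed w[-N] .. w[N]

# Channel with symmetric ISI: c(0) = 1, c(+/-1) = 0.25
w = zero_forcing_taps({-1: 0.25, 0: 1.0, 1: 0.25})
```

After equalization the samples p(-T), p(0), p(T) come out as 0, 1, 0, i.e. the ISI at the sampling instants is forced to zero, which is exactly the "peak distortion" criterion above.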
Adaptive Equalizer:
Mean-square error: E = E[e_n^2]
Differentiating with respect to tap weight w_k,
dE/dw_k = 2 E[e_n de_n/dw_k] = -2 E[e_n x_(n-k)] = -2 R_ex(k)
where the ensemble-averaged cross-correlation is
R_ex(k) = E[e_n x_(n-k)]
Optimality condition for minimum mean-square error:
R_ex(k) = 0 for k = 0, +/-1, ..., +/-N
The mean-square error is a second-order (parabolic) function of the tap weights, a
multidimensional bowl-shaped surface. The adaptive process is a succession of
adjustments of the tap weights seeking the bottom of the bowl (the minimum value).
Steepest descent algorithm
– The successive adjustments to the tap weights are made in the direction opposite to
the gradient vector:
w_k(n+1) = w_k(n) - (1/2) mu dE/dw_k = w_k(n) + mu R_ex(k), k = 0, +/-1, ..., +/-N
(mu: step-size parameter)
Least-Mean-Square (LMS) Algorithm
– The steepest-descent algorithm is not usable in an unknown environment
– LMS approximates steepest descent using the instantaneous estimate
R^_ex(k) = e_n x_(n-k)
so the tap update becomes
w^_k(n+1) = w^_k(n) + mu e_n x_(n-k)
LMS is a feedback system; for small mu it behaves roughly like the steepest descent
algorithm.
Implementation Approaches:
Analog
– CCD, Tap-weight is stored in digital memory, analog sample and
multiplication
– Symbol rate is too high
Digital
– Sample is quantized and stored in shift register
– Tap weight is stored in shift register, digital multiplication
Programmable digital
– Microprocessor
– Flexibility
– Same H/W may be time shared
Decision-Feedback equalization:
The equalizer input can be split into the current symbol, the precursors, and the
postcursors:
h_0 x_n + sum_{k<0} h_k x_(n-k) + sum_{k>0} h_k x_(n-k)
Using data decisions made on the basis of precursor to take care of the postcursors
– The decision would obviously have to be correct
6.1 Introduction
ii) Storing the messages in digital form and forwarding or redirecting them at a later point in
time is quite simple.
iii) Coding the message sequence to take care of the channel noise, encrypting for secure
communication can easily be accomplished in the digital domain.
iv)Mixing the signals is easy. All signals look alike after conversion to digital form
independent of the source (or language!). Hence they can easily be multiplexed (and
demultiplexed)
The quantizer converts each sample to one of the values that is closest to it from
among a pre-selected set of discrete amplitudes. The encoder represents each one of these
quantized samples by an R -bit code word. This bit stream travels on the channel and reaches
the receiving end. With fs as the sampling rate and R bits per code word, the bit rate of the
PCM system is Rb = fs * R bits per second.
The decoder converts the R-bit code words into the corresponding (discrete)
amplitudes. Finally, the reconstruction filter, acting on these discrete amplitudes, produces
the analog signal, denoted by m'(t). If there are no channel errors, then m'(t) ≈ m(t).
The sample-and-hold pulse is
h(t) = 1 for 0 < t < T; 1/2 for t = 0 and t = T; 0 otherwise (3.11)
The instantaneously sampled version of m(t) is
m_delta(t) = sum_n m(nTs) delta(t - nTs) (3.12)
The flat-top sampled signal is the convolution
s(t) = m_delta(t) * h(t)
     = integral m_delta(tau) h(t - tau) d(tau)
     = sum_n m(nTs) integral delta(tau - nTs) h(t - tau) d(tau)
     = sum_n m(nTs) h(t - nTs) (3.13)
The most common technique for sampling voice in PCM systems is to use a sample-and-
hold circuit.
The instantaneous amplitude of the analog (voice) signal is held as a constant charge
on a capacitor for the duration of the sampling period Ts.
This technique is useful for holding the sample constant while other processing is
taking place, but it alters the frequency spectrum and introduces an error, called
aperture error, resulting in an inability to recover exactly the original analog signal.
The amount of error depends on how much the analog signal changes during the holding
time, called the aperture time.
To estimate the maximum voltage error possible, determine the maximum slope of the
analog signal and multiply it by the aperture time DT
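The estimate just described can be written out directly. The signal model (a sine at the top of the voice band) and the 1 microsecond aperture below are assumed values for illustration:

```python
import math

def max_aperture_error(amplitude, f_hz, aperture_s):
    """Worst-case voltage error ~ (max slope) * (aperture time).

    For m(t) = A*sin(2*pi*f*t), the maximum slope |dm/dt| is 2*pi*f*A.
    """
    return 2 * math.pi * f_hz * amplitude * aperture_s

# 1 V sine at 3.4 kHz held for a 1 us aperture: about 21 mV worst case
err = max_aperture_error(1.0, 3400.0, 1e-6)
```

The error grows linearly with both the signal frequency and the aperture time, which is why fast sample-and-hold circuits matter most for wideband inputs.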
In pulse width modulation (PWM), the width of each pulse is made directly proportional
to the amplitude of the information signal.
In pulse position modulation, constant-width pulses are used, and the position or time of
occurrence of each pulse from some reference time is made directly proportional to the
amplitude of the information signal.
As in the case of other pulse modulation techniques, the rate at which samples are
taken and encoded must conform to the Nyquist sampling rate.
The sampling rate must be greater than twice the highest frequency in the analog
signal: fs > 2fA(max)
Quantization Process:
Figure 3.10 Two types of quantization: (a) midtread and (b) midrise.
The quantization noise power is
sigma_Q^2 = Delta^2 / 12 (3.28)
When the quantized sample is expressed in binary form,
L = 2^R (3.29)
where R is the number of bits per sample, i.e.
R = log2(L) (3.30)
The step size is
Delta = 2*m_max / 2^R (3.31)
so that
sigma_Q^2 = (1/3) m_max^2 2^(-2R) (3.32)
Let P denote the average power of m(t). The output signal-to-noise ratio is then
(SNR)_o = P / sigma_Q^2 = (3P / m_max^2) 2^(2R) (3.33)
(SNR)o increases exponentially with increasing R (bandwidth).
mu-law
|v| = log(1 + mu|m|) / log(1 + mu) (3.48)
d|m|/d|v| = [log(1 + mu) / mu] (1 + mu|m|) (3.49)
A-law
|v| = A|m| / (1 + log A) for 0 <= |m| <= 1/A
|v| = (1 + log(A|m|)) / (1 + log A) for 1/A <= |m| <= 1 (3.50)
d|m|/d|v| = (1 + log A)/A for 0 <= |m| <= 1/A
d|m|/d|v| = (1 + log A)|m| for 1/A <= |m| <= 1 (3.51)
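The mu-law pair in (3.48) and its inverse can be sketched directly (mu = 255 is the value used in North American telephony; the function names are our own):

```python
import math

def mu_law_compress(m, mu=255.0):
    """|v| = log(1 + mu*|m|) / log(1 + mu), sign preserved; assumes |m| <= 1."""
    return math.copysign(math.log(1 + mu * abs(m)) / math.log(1 + mu), m)

def mu_law_expand(v, mu=255.0):
    """Inverse mapping: |m| = ((1 + mu)**|v| - 1) / mu."""
    return math.copysign(((1 + mu) ** abs(v) - 1) / mu, v)

# Small amplitudes are boosted before uniform quantization...
boosted = mu_law_compress(0.01)      # noticeably larger than 0.01
# ...and the expander undoes the mapping exactly.
roundtrip = mu_law_expand(mu_law_compress(0.3))
```

This is the continuous companding law; a deployed codec (e.g. G.711) uses a segmented piecewise-linear approximation of the same curve.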
Digital Multiplexers :
Advantages of PCM
2. Efficient regeneration
4. Uniform format
6. Secure
The staircase approximation is the accumulated sum of the step decisions:
m_q(n) = Delta sum_{i=1}^{n} sgn(e_i) = sum_{i=1}^{n} e_q(i) (3.55)
A modulator that includes an integrator relieves the drawback of delta
modulation (the differentiator).
Because the transmitter has an integrator, the receiver consists simply of a low-pass filter.
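The one-bit encoder and its accumulator-based receiver can be sketched as follows (the step size and the test tone are assumed values, chosen so that the step exceeds the maximum per-sample slope and slope overload is avoided):

```python
import math

def delta_modulate(samples, step):
    """1-bit encoding: transmit the sign of the prediction error."""
    bits, approx = [], 0.0
    for m in samples:
        bit = 1 if m >= approx else 0
        approx += step if bit else -step   # local staircase (integrator)
        bits.append(bit)
    return bits

def delta_demodulate(bits, step):
    """Receiver: accumulate +/-step; a low-pass filter would follow in practice."""
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

sig = [math.sin(2 * math.pi * n / 64) for n in range(64)]  # slow test tone
bits = delta_modulate(sig, step=0.15)
rec = delta_demodulate(bits, step=0.15)
```

Shrinking the step reduces granular noise but, if the step falls below the signal's per-sample slope, the staircase can no longer keep up and slope overload distortion appears.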
Consider a finite-duration impulse response (FIR) discrete-time filter which consists of three
blocks:
1. Set of p unit-delay elements (z^-1)
2. Set of multipliers (tap weights w_1, ..., w_p)
3. Set of adders (Σ)
Figure 3.27 Block diagram illustrating the linear adaptive prediction process
The mean-square prediction error is
J = E[e_n^2] = sigma_X^2 - 2 sum_{k=1}^{p} w_k R_X(k)
    + sum_{j=1}^{p} sum_{k=1}^{p} w_j w_k R_X(k - j) (3.63)
Setting the derivative to zero,
dJ/dw_k = -2 R_X(k) + 2 sum_{j=1}^{p} w_j R_X(k - j) = 0
gives the Wiener-Hopf equations
sum_{j=1}^{p} w_j R_X(k - j) = R_X(k), k = 1, 2, ..., p (3.64)
In matrix form, with
r_X = [R_X(1), R_X(2), ..., R_X(p)]^T
and the Toeplitz correlation matrix
R_X = [ R_X(0)    R_X(1)    ...  R_X(p-1) ]
      [ R_X(1)    R_X(0)    ...  R_X(p-2) ]
      [ ...                               ]
      [ R_X(p-1)  R_X(p-2)  ...  R_X(0)   ]
the optimum weights follow from R_X(0), R_X(1), ..., R_X(p).
Substituting (3.64) into (3.63) yields
J_min = sigma_X^2 - 2 sum_{k=1}^{p} w_k R_X(k) + sum_{k=1}^{p} w_k R_X(k)
      = sigma_X^2 - sum_{k=1}^{p} w_k R_X(k)
      = sigma_X^2 - r_X^T R_X^(-1) r_X (3.67)
Since r_X^T R_X^(-1) r_X > 0, J_min is always less than sigma_X^2.
Differential Pulse-Code Modulation (DPCM):
Usually PCM operates at a sampling rate higher than the Nyquist rate, so the encoded
signal contains redundant information. DPCM can efficiently remove this redundancy.
e(n) = m(n) - m_hat(n) (3.74)
where m_hat(n) is the predicted value.
The quantizer output is
e_q(n) = e(n) + q(n) (3.75)
where q(n) is the quantization error.
The prediction filter input is
m_q(n) = m_hat(n) + e_q(n) = m_hat(n) + e(n) + q(n) (3.77)
       = m(n) + q(n) (3.78)
Processing Gain:
There is a need for coding speech at low bit rates; in doing so we have two aims in mind:
Information sources
Definition:
The set of source symbols is called the source alphabet, and the elements of the set are
called the symbols or letters.
Using this definition we can confirm that it has the wanted property of additivity:
The basis ‘b’ of the logarithm b is only a change of units without actually changing
the amount of information it describes.
Let us consider a discrete memoryless source (DMS) denoted by X and having the
alphabet {U1, U2, U3, ..., Um}. The information content of the symbol xi, denoted by I(xi), is
defined as
I(xi) = log_b (1 / P(xi)) = - log_b P(xi)
Units of I(xi):
For two important and one unimportant special cases of b it has been agreed to use the
following names for these units:
b = 2 (log2): bit,
b = e (ln): nat,
b = 10 (log10): Hartley.
The change of base follows from log2 a = log10 a / log10 2.
Definition:
The information content of an individual symbol can fluctuate widely because of the
randomness involved in the selection of symbols, so we use its expected value, the
entropy:
H(U) = E[I(u)] = sum_u P_U(u) log_b (1 / P_U(u))
We will usually neglect to mention "support" when we sum over P_U(u) · log_b P_U(u), i.e., we
implicitly assume that we exclude all u with P_U(u) = 0.
It may be noted that for a binary source U which generates independent symbols 0 and 1
with equal probability, the source entropy H(U) is 1 bit per symbol.
Bounds on H(U)
Where
To derive the upper bound we use a trick that is quite common in
information theory: we take the difference and try to show that it must be non-positive.
Note that the definition is identical to before apart from that everything is conditioned
on the event Y = y
Note that the conditional entropy given the event Y = y is a function of y. Since Y is
also a RV, we can now average over all possible events Y = y according to the
probabilities of each event. This will lead to the averaged.
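The entropy definition above can be checked numerically with a minimal sketch (summing only over the support, as noted earlier):

```python
import math

def entropy(probs, b=2):
    """H(U) = sum over the support of P(u) * log_b(1/P(u))."""
    return sum(p * math.log(1 / p, b) for p in probs if p > 0)

h_fair = entropy([0.5, 0.5])       # fair binary source: 1 bit/symbol
h_four = entropy([0.25] * 4)       # four equiprobable symbols: 2 bits/symbol
h_sure = entropy([1.0])            # a certain outcome carries no information
```

These three cases illustrate the bounds discussed above: H(U) = 0 for a deterministic source, and H(U) reaches its maximum log2(M) when all M symbols are equiprobable.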
Block Codes:
Hamming Distance:
where d_min is the minimum Hamming distance between two codewords; the number of
correctable errors is t = floor((d_min - 1)/2), where floor(x) denotes the largest integer
not exceeding x
• As seen from the second Parity Code example, it is possible to use a table to hold all
the codewords for a code and to look-up the appropriate codeword based on the
supplied dataword
• Alternatively, it is possible to create codewords by addition of other codewords. This
has the advantage that there is no longer any need to hold every possible
codeword in the table.
• If there are k data bits, all that is required is to hold k linearly independent codewords,
i.e., a set of k codewords none of which can be produced by linear combinations of 2
or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose those which
have ‘1’ in just one of the first k positions and ‘0’ in the other k-1 of the first k
positions.
• For example for a (7,4) code, only four codewords are required, e.g.,
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1
• So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in
the list are added together, giving 1011010
• This process will now be described in more detail
c=(c1 c2……..cn)
• Thus,
c = sum_{i=1}^{k} d_i a_i
So, c3 = sum_{i=1}^{k} d_3i a_i = sum_{i=1}^{k} (d_1i + d_2i) a_i
       = sum_{i=1}^{k} d_1i a_i + sum_{i=1}^{k} d_2i a_i
i.e., c3 = c1 + c2
• The Hamming distance of a linear block code (LBC) is simply the minimum
Hamming weight (number of 1’s or equivalently the distance from the all 0 codeword)
of the non-zero codewords
• Note d(c1,c2) = w(c1+ c2) as shown previously
• For an LBC, c1+ c2=c3
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))
• Therefore, to find the minimum Hamming distance we just need to search among the
2^k codewords for the minimum Hamming weight – far simpler than doing a pairwise
check for all possible codewords.
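The row-addition encoding described above can be sketched for the four basis codewords listed earlier (a minimal sketch; all arithmetic is bitwise XOR, i.e. addition over GF(2)):

```python
# Basis codewords of the (7,4) code from the example above
BASIS = ["1000110", "0100101", "0010011", "0001111"]

def encode(dataword):
    """XOR together the basis codewords selected by the 1-bits of the dataword."""
    code = [0] * 7
    for bit, row in zip(dataword, BASIS):
        if bit == "1":
            code = [c ^ int(r) for c, r in zip(code, row)]
    return "".join(map(str, code))

cw = encode("1011")   # rows 1, 3 and 4 added together -> 1011010, as in the text
```

Note that only k = 4 rows are stored instead of all 2^4 = 16 codewords, which is exactly the saving the bullet points describe.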
Linear Block Codes – example 1:
a1 = [1011]
a2 = [0101]
c = a1 + a2 = [1110]
Linear Block Codes – example 2:
Systematic Codes:
• For a systematic block code the dataword appears unaltered in the codeword – usually
at the start
• The generator matrix has the structure G = [I | P], where I is the k-by-k identity
matrix and P holds the parity bits
000000 0
000001 1
000010 1
000011 0
……… .
• A linear block code is a linear subspace S sub of all length n vectors (Space S)
• Consider the subset S null of all length n vectors in space S that are orthogonal to all
length n vectors in S sub
• It can be shown that the dimensionality of S null is n-k, where n is the dimensionality of
S and k is the dimensionality of S sub
• It can also be shown that S null is a valid subspace of S and consequently S sub is also
the null space of S null
and so,
b_j . c = b_j . sum_{i=1}^{k} d_i a_i = sum_{i=1}^{k} d_i (a_i . b_j) = 0
• This means that a codeword is valid (but not necessarily correct) only if cHT = 0. To
ensure this it is required that the rows of H are independent and are orthogonal to the
rows of G
• That is the bi span the remaining R (= n - k) dimensions of the codespace
• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by
a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is orthogonal
to the plane containing the rows of the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid
codeword) will thus have a component in the direction of b1, yielding a non-zero dot
product between itself and b1.
Error Syndrome:
• For error correcting codes we need a method to compute the required correction
• To do this we use the Error Syndrome, s, of a received codeword, cr
s = crHT
Writing cr = c + e (transmitted codeword plus error pattern),
s = cHT + eHT = 0 + eHT = eHT
• That is, we can add the same error pattern to different code words and get the same
syndrome.
– There are 2^(n - k) syndromes but 2^n error patterns
– For example, for a (3,2) code there are 2 syndromes and 8 error patterns
– Clearly no error correction possible in this case
– Another example. A (7,4) code has 8 syndromes and 128 error patterns.
– With 8 syndromes we can provide a different value to indicate single errors in
any of the 7 bit positions as well as the zero value to indicate no errors
• Now need to determine which error pattern caused the syndrome
where I is the k*k identity for G and the R*R identity for H. For the (7,4) code:
G = [I | P] =
1 0 0 0 0 1 1
0 1 0 0 1 0 1
0 0 1 0 1 1 0
0 0 0 1 1 1 1
H = [-P^T | I] =
0 1 1 1 1 0 0
1 0 1 1 0 1 0
1 1 0 1 0 0 1
(over GF(2), -P^T = P^T)
Standard Array:
c1 (all zero) c2 …… cM s0
e1 c2+e1 …… cM+e1 s1
e2 c2+e2 …… cM+e2 s2
e3 c2+e3 …… cM+e3 s3
… …… …… …… …
eN c2+eN …… cM+eN sN
• The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows
(i.e., the number of syndromes)
Hamming Codes:
• We will consider a special class of SEC codes (i.e., Hamming distance = 3) where,
– Number of parity bits R = n – k and n = 2^R – 1
– Syndrome has R bits
– 0 value implies zero errors
– 2^R – 1 other syndrome values, i.e., one for each bit that might need to be
corrected
– This is achieved if each column of H is a different binary word – remember s
= eHT
For the systematic (7,4) Hamming code:
G = [I | P] =
1 0 0 0 0 1 1
0 1 0 0 1 0 1
0 0 1 0 1 1 0
0 0 0 1 1 1 1
H = [-P^T | I] =
0 1 1 1 1 0 0
1 0 1 1 0 1 0
1 1 0 1 0 0 1
A non-systematic form, in which the columns of H count upward in binary, is
G =
1 1 1 0 0 0 0
1 0 0 1 1 0 0
0 1 0 1 0 1 0
1 1 0 1 0 0 1
H =
0 0 0 1 1 1 1
0 1 1 0 0 1 1
1 0 1 0 1 0 1
• Compared with the systematic code, the column orders of both G and H are swapped
so that the columns of H are a binary count
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is col. 7
in the systematic H.
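Single-error correction via the syndrome can be sketched for a systematic (7,4) Hamming code. The parity-check matrix below is an assumed H = [P^T | I] consistent with the (7,4) example in these notes; for any single-bit error, s = eH^T equals the column of H at the error position:

```python
H = ["0111100",
     "1011010",
     "1101001"]   # assumed systematic parity-check matrix, H = [P^T | I]

def syndrome(r):
    """s = r * H^T (mod 2), computed row by row."""
    return tuple(sum(int(r[i]) * int(row[i]) for i in range(7)) % 2 for row in H)

def correct(r):
    """Fix at most one bit error by matching the syndrome to a column of H."""
    s = syndrome(r)
    if s == (0, 0, 0):
        return r                                   # valid codeword, nothing to do
    cols = [tuple(int(row[i]) for row in H) for i in range(7)]
    i = cols.index(s)                              # error position
    return r[:i] + str(1 - int(r[i])) + r[i + 1:]  # flip the erroneous bit

# codeword 1011010 with an error injected in the second bit
fixed = correct("1111010")
```

Because every column of H is distinct and non-zero, the 2^3 - 1 = 7 non-zero syndromes identify the 7 possible single-bit error positions, exactly as the bullets above require.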
Introduction:
x'''_j = m_(j-2) + m_j
g(2) = [1 1 1 1]
State diagram
• Each new block of k input bits causes a transition into new state
• Hence there are 2k branches leaving each state
• Assuming encoder zero initial state, encoded word for any input of k bits can thus be
obtained. For instance, below for u=(1 1 1 0 1), encoded word v=(1 1, 1 0, 0 1, 0 1, 1
1, 1 0, 1 1, 1 1) is produced:
• The problem of optimum decoding is to find the minimum distance path from the initial
state back to the initial state (below from S0 to S0). The minimum distance is the sum of
the branch metrics along the path; the corresponding likelihood is maximized by the
correct path.
• The exhaustive maximum likelihood method must search all the paths in the phase trellis
(2^k paths emerging from/entering each of the 2^(L+1) states for an (n,k,L) code)
• The Viterbi algorithm gets its efficiency via concentrating into survivor paths of the
trellis
• Assume for simplicity a convolutional code with k=1; then up to 2^k = 2 branches can
enter each state in the trellis diagram
• Assume optimal path passes S. Metric comparison is done by adding the metric of S
into S1 and S2. At the survivor path the accumulated metric is naturally smaller
(otherwise it could not be the optimum path)
(Black circles denote the deleted branches, dashed lines: '1' was applied)
• In the previous example it was assumed that the register was finally filled with zeros
thus finding the minimum distance path
• In practice with long code words zeroing requires feeding of long sequence of zeros to
the end of the message bits: this wastes channel capacity & introduces delay
• To avoid this path memory truncation is applied:
– Trace all the surviving paths to the depth where they merge
– Figure right shows a common point at a memory depth J
– J is a random variable whose applicable magnitude shown in the figure (5L)
has been experimentally tested for negligible error rate increase
– Note that this also introduces the delay of 5L!
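The survivor-path idea can be sketched end to end for a small code. Note the generators below, g1 = (1,1,1) and g2 = (1,0,1) for a rate-1/2, constraint-length-3 encoder, are an illustrative choice and not the g(1) = [1 0 1 1], g(2) = [1 1 1 1] pair used elsewhere in these notes:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators g1=(1,1,1), g2=(1,0,1)."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]   # the two generator outputs
        s1, s2 = b, s1                 # shift the register
    return out

def viterbi_decode(rx, n_bits):
    """Hard-decision Viterbi: keep one minimum-Hamming-distance survivor per state."""
    survivors = {(0, 0): (0, [])}      # state -> (path metric, decoded bits)
    for t in range(n_bits):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        nxt = {}
        for (s1, s2), (metric, path) in survivors.items():
            for b in (0, 1):
                o0, o1 = b ^ s1 ^ s2, b ^ s2
                m = metric + (o0 != r0) + (o1 != r1)   # add branch metric
                state = (b, s1)
                if state not in nxt or m < nxt[state][0]:
                    nxt[state] = (m, path + [b])       # keep the survivor
        survivors = nxt
    return min(survivors.values(), key=lambda v: v[0])[1]

msg = [1, 0, 1, 1, 0, 0]      # trailing zeros flush the encoder, as discussed above
rx = conv_encode(msg)
rx[3] ^= 1                    # inject a single channel bit error
decoded = viterbi_decode(rx, len(msg))   # the survivor path recovers msg
```

At every trellis stage only one path per state is retained, so the work grows linearly with the message length instead of exponentially as in the exhaustive search.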
• H(7,4)
• Generator matrix G: the first 4-by-4 block is the identity matrix
• Transmission vector x
• Received vector r and error vector e
Generator sequences:
g(1) = [1 0 1 1]
g(2) = [1 1 1 1]
• Back ground
– Turbo codes were proposed by Berrou and Glavieux in the 1993 International
Conference in Communications.
– Performance within 0.5 dB of the channel capacity limit for BPSK was
demonstrated.
• Features of turbo codes
– Parallel concatenated coding
– Recursive convolutional encoders
– Pseudo-random interleaving
– Iterative decoding
• Comparison:
– Rate 1/2 Codes.
– K=5 turbo code.
– K=14 convolutional code.
• Plot is from:
– L. Perez, “Turbo Codes”, chapter 8 of Trellis Coding by C. Schlegel. IEEE
Press, 1997
• In coded systems:
– Performance is dominated by low weight code words.
• A “good” code:
– will produce low weight outputs with very low probability.
• An RSC code:
– Produces low weight outputs with fairly low probability.
– However, some inputs still cause low weight outputs.
• Because of the interleaver:
– The probability that both encoders have inputs that cause low weight outputs
is very low.
– Therefore the parallel concatenation of both encoders will produce a “good”
code.
Iterative Decoding:
The Turbo-Principle:
Turbo codes get their name because the decoder uses feedback, like a turbo engine
[Figure: BER versus Eb/No in dB (0.5 to 2) for a turbo decoder after 1, 2, 3, 6, 10 and
18 iterations; the BER axis runs from 10^0 down to 10^-7, and each additional iteration
lowers the curve.]
• The amplitude (or height) of the sine wave varies to transmit the ones and zeros
In binary FSK, by contrast, two carrier frequencies represent the two binary values:
s(t) = A cos(2*pi*f1*t) for binary 1
s(t) = A cos(2*pi*f2*t) for binary 0
FSK Bandwidth:
• Applications
– On voice-grade lines, used up to 1200bps
DBPSK:
• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element
QPSK uses four phase angles, one per pair of bits (dibit):
s(t) = A cos(2*pi*fc*t + pi/4)    for 11
s(t) = A cos(2*pi*fc*t + 3*pi/4)  for 01
s(t) = A cos(2*pi*fc*t - 3*pi/4)  for 00
s(t) = A cos(2*pi*fc*t - pi/4)    for 10
Concept of a constellation :
Using multiple phase angles, each with more than one amplitude, multiple signal
elements can be achieved
D = R / L = R / log2(M)
where D is the modulation (baud) rate, R is the data rate in bits per second, M is the
number of different signal elements, and L is the number of bits per signal element
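The dibit-to-phase table given for QPSK above can be turned into a small symbol mapper (a sketch; representing each signal element as a complex baseband point is our own choice):

```python
import math

# phase per dibit, matching the QPSK equations in the text
PHASES = {"11": math.pi / 4, "01": 3 * math.pi / 4,
          "00": -3 * math.pi / 4, "10": -math.pi / 4}

def qpsk_symbols(bits, A=1.0):
    """Map bit pairs to constellation points A*(cos(phase) + j*sin(phase))."""
    pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    return [complex(A * math.cos(PHASES[p]), A * math.sin(PHASES[p]))
            for p in pairs]

syms = qpsk_symbols("1100")   # two dibits -> two constellation points
```

Since each symbol carries L = log2(4) = 2 bits, the baud rate D is half the bit rate R, exactly as the relation D = R / log2(M) states.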
QAM:
Figure 6.26 Block diagrams for (a) binary FSK transmitter and (b) coherent binary FSK
receiver.
6.28
Figure 6.30 (a) Input binary sequence. (b) Waveform of scaled time
function s1f1(t). (c) Waveform of scaled time function s2f2(t). (d)
Waveform of the MSK signal s(t) obtained by adding s1f1(t) and
s2f2(t) on a bit-by-bit basis.
Figure 6.31 Block diagrams for (a) MSK transmitter and (b) coherent MSK receiver
CDMA Example:
20.Additional Topics:
Voice coders
Regenerative repeater
Feedback communications
Advancements in the digital communication
Signal space representation
Turbo codes
Even with the need to record several frequencies, and the additional unvoiced sounds,
the compression of the vocoder system is impressive. Standard speech-recording systems
capture frequencies from about 500 Hz to 3400 Hz, where most of the frequencies used in
speech lie, typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate).
The sampling resolution is typically at least 12 or more bits per sample resolution (16 is
standard), for a final data rate in the range of 96-128 kbit/s. However, a good vocoder can
provide a reasonably good simulation of voice with as little as 2.4 kbit/s of data.
'Toll Quality' voice coders, such as ITU G.729, are used in many telephone networks.
G.729 in particular has a final data rate of 8 kbit/s with superb voice quality. G.723 achieves
slightly worse quality at data rates of 5.3 kbit/s and 6.4 kbit/s. Many voice systems use even
lower data rates, but below 5 kbit/s voice quality begins to drop rapidly.
Several vocoder systems are used in NSA encryption systems:
LPC-10, FIPS Pub 137, 2400 bit/s, which uses linear predictive coding
Code-excited linear prediction (CELP), 2400 and 4800 bit/s, Federal Standard 1016,
used in STU-III
Continuously variable slope delta modulation (CVSD), 16 kbit/s, used in wide band
encryptors such as the KY-57.
Mixed-excitation linear prediction (MELP), MIL STD 3005, 2400 bit/s, used in the
Future Narrowband Digital Terminal FNBDT, NSA's 21st century secure telephone.
(ADPCM is not a proper vocoder but rather a waveform codec. ITU has gathered G.721
along with some other ADPCM codecs into G.726.)
Vocoders are also currently used in developing psychophysics, linguistics, computational
neuroscience and cochlear implant research.
Modern vocoders that are used in communication equipment and in voice storage devices
today are based on the following algorithms:
Since the late 1970s, most non-musical vocoders have been implemented using linear
prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-
pole IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank
of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum)
and again at the decoder to re-apply the spectral shape of the target speech signal.
During the last few years, a number of new equalization algorithms for digital
communication systems have been developed with the aim of achieving higher information
rates over band-limited communication channels [1]. Various efforts had already been made
to optimize, in some sense or other, the parameters which characterize equalizers with
predetermined structures, such as the tapped delay line equalizer (t.d.l.e.) [2-7] and the
decision feedback equalizer (d.f.e.) [8-20]. Subsequently, experts focused their attention
on the problem of determining the optimal structure of the equalizer within a determined
class. In this connection, the Kalman filter equalizer (k.f.e.) [21-32] and the Viterbi
algorithm equalizer (v.a.e.) [33-44] are the best equalizers that can be derived by means of a
statistical state-variable approach to the equalization problem. The k.f.e. yields an unbiased
channel-state filtered estimate, which is optimal in the minimum mean-square-error (m.m.s.e.)
sense among the linear estimates. The v.a.e. yields the maximum-likelihood estimate of the
whole transmitted sequence when the transmission channel is modeled as a finite-state
machine.
The well-known Shannon-Hartley law tells us that there is an absolute limit on the
error-free bit rate that can be transmitted within a certain bandwidth at a given signal to noise
ratio (SNR). Although it is not obvious, this law can be restated (given here without proof) by
saying that for a given bit rate, one can trade off bandwidth and power. On this basis then, a
certain digital communications system could be either bandwidth limited or power limited,
depending on its design criteria.
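As a quick numeric illustration of the Shannon-Hartley law (the 3 kHz bandwidth and 30 dB SNR figures below are assumed values, typical of a voice-grade telephone line):

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a linear ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# 3 kHz channel at 30 dB SNR (linear ratio 1000): roughly 29.9 kbit/s
c = capacity_bps(3000.0, 1000.0)
```

The power/bandwidth trade-off mentioned above falls straight out of the formula: the same capacity can be kept while lowering the SNR, provided the bandwidth grows to compensate.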
Practice also tells us that digital communication systems designed for HF are
necessarily designed with one of two objectives in mind: slow and robust, to allow
communication with weak signals embedded in noise and adjacent channel interference;
or fast and somewhat subject to failure under adverse conditions, while best utilizing
the HF medium when prevailing conditions are good.
Taken that the average amateur radio transceiver has limited power output, typically
20 - 100 Watts continuous duty, poor or restricted antenna systems, fierce competition for a
free spot on the digital portion of the bands, adjacent channel QRM, QRN, and the marginal
condition of the HF bands, it is evident that for amateur radio, there is a greater need for a
weak signal, spectrally-efficient, robust digital communications mode, rather than another
high speed, wide band communications method.
• Bandpass Signal
Characteristics:
Let us consider DN = {(xi , yi) : i = 1, .., N} iid realizations of the joint observation-class
phenomenon (X(u), Y (u)) with true probability measure PX,Y defined on (X ×Y, σ(FX ×
FY )). In addition, let us consider a family of measurable representation functions D, where
any f(·) ∈ D is defined in X and takes values in Xf . Let us assume that any representation
function f(·) induces an empirical distribution Pˆ Xf ,Y on (Xf ×Y, σ(Ff ×FY )), based on the
training data and an implicit learning approach, where the empirical Bayes classification rule
is given by: gˆf (x) = arg maxy∈Y Pˆ Xf ,Y (x, y).
Turbo codes
In information theory, turbo codes (originally in French Turbocodes) are a class of
high-performance forward error correction (FEC) codes developed in 1993, which were the
first practical codes to closely approach the channel capacity, a theoretical maximum for
the code rate at which reliable communication is still possible given a specific noise level.
Turbo codes are finding use in 3G mobile communications and (deep
space) satellite communications as well as other applications where designers seek to achieve
reliable information transfer over bandwidth- or latency-constrained communication links in
the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC
codes, which provide similar performance.
Prior to turbo codes, the best constructions were serial concatenated codes based on
an outer Reed-Solomon error correction code combined with an inner Viterbi-decoded short
constraint length convolutional code, also known as RSV codes.
In 1993, turbo codes were introduced by Berrou, Glavieux, and Thitimajshima (from
Télécom Bretagne, former ENST Bretagne, France) in their paper: "Near Shannon Limit
Error-correcting Coding and Decoding: Turbo-codes" published in the Proceedings of IEEE
International Communications Conference.[1] In a later paper, Berrou gave credit to the
In the figure, M is a memory register. The delay line and interleaver force input bits
dk to appear in different sequences. At first iteration, the input sequence dk appears at both
outputs of the encoder,xk and y1k or y2k due to the encoder's systematic nature. If the
encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively
equal to
An interleaver installed between the two decoders is used here to scatter error bursts
coming from output. DI block is a demultiplexing and insertion module. It works as
a switch, redirecting input bits to at one moment and to at another. In OFF
state, it feeds both and inputs with padding bits (zeros).
Consider a memoryless AWGN channel, and assume that at k-th iteration, the decoder
receives a pair of random variables:
where and are independent noise components having the same variance .
is a k-th bit from encoder output.
Redundant information is demultiplexed and sent through DI to
(when ) and to (when ).
yields a soft decision, i.e.:
1. The DC syllabus as per the curriculum does not match real-time applications.
2. This subject does not cover the coding techniques presently in use.
Action to be taken: the following additional topics are taken up to fill the known gaps
Part A is compulsory which carries 25 marks. Answer all questions in Part A. Part B consists of 5
Units. Answer any one full question from each unit. Each question carries 10 marks and may
have a, b, c as sub questions.
PART - A
(25 Marks)
1.a) Compare the performance of PCM and DM system. [2]
b) What is slope overload distortion? Explain. [3]
c) Write the expression for baud rate of BPSK system. [2]
d) Explain advantages of coherent digital modulation schemes. [3]
e) Sketch the wave form of the FSK signal for the input binary sequence 1100100010. [2]
f) Define entropy and conditional entropy. [3]
g) Define code rate of block code. [2]
h) Mention various types of errors caused by noise in communication channel. [3]
i) Define processing gain and jamming margin [2]
j) Explain the generation of PN sequence. [3]
PART - B
(50 Marks)
2.a) A voice frequency signal band limited to 3kHz is transmitted with the use of the DM system.
The pulse repetition frequency is 30,000 pulses per second, and the step size is 40mV. Determine
the permissible speech signal amplitude to avoid slope overload.
b) Derive the expression for the overall SNR in an ADM system. [5+5]
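For question 2(a) above, a quick numerical check can be scripted, assuming the standard slope-overload bound A ≤ Δ·fs / (2π·fm) for a sinusoidal input:

```python
import math

delta = 40e-3   # step size (V)
fs = 30_000     # pulse repetition frequency (Hz)
fm = 3_000      # highest signal frequency (Hz)

# Slope overload is avoided while A * 2*pi*fm <= delta * fs
a_max = delta * fs / (2 * math.pi * fm)
print(f"Maximum amplitude: {a_max * 1e3:.2f} mV")  # about 63.66 mV
```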
OR
3.a) In a binary PCM system, the output signal to quantizing noise ratio is to be held to a
minimum of 40dB. Determine the number of required levels and find the corresponding output
signal to quantization noise ratio.
b) Explain the modulation and demodulation procedure in DPCM system. [5+5]
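Similarly, question 3(a) can be checked numerically, assuming the common approximation SQNR ≈ 1.76 + 6.02n dB for an n-bit uniform quantizer with a full-scale sinusoidal input:

```python
target_db = 40.0
n = 1
while 1.76 + 6.02 * n < target_db:   # smallest word length meeting the spec
    n += 1
levels = 2 ** n
sqnr_db = 1.76 + 6.02 * n
print(n, levels, round(sqnr_db, 2))  # 7 bits, 128 levels, 43.9 dB
```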
6.a) Derive the bit error probability of a coherent ASK signaling scheme.
b) Apply the Shannon-Fano coding procedure for M=2 and M=3 to [X] = [x1, x2, x3, x4, x5, x6, x7, x8] with
probabilities [P] = [1/4, 1/8, 1/16, 1/16, 1/4, 1/16, 1/8, 1/16]. [5+5]
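A sketch of the binary (M = 2) Shannon-Fano procedure asked for in 6(b): sort the symbols by falling probability, split into two nearly equiprobable groups, append 0/1, and recurse. The split rule is the textbook one; the symbol names are taken from the question:

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability); returns dict name -> codeword."""
    syms = sorted(symbols, key=lambda sp: sp[1], reverse=True)
    codes = {name: "" for name, _ in syms}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        acc, cut, best = 0.0, 1, float("inf")
        for i in range(1, len(group)):       # find the most balanced cut
            acc += group[i - 1][1]
            if abs(total - 2 * acc) < best:
                best, cut = abs(total - 2 * acc), i
        for name, _ in group[:cut]:
            codes[name] += "0"
        for name, _ in group[cut:]:
            codes[name] += "1"
        split(group[:cut])
        split(group[cut:])

    split(syms)
    return codes
```

For the dyadic probabilities in the question, the average codeword length equals the source entropy (2.75 bits/symbol), so the code efficiency is 100%.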
OR
7.a) Compare the code efficiency of Shannon-Fano coding and Huffman coding when five source
messages have probabilities m1=0.4, m2=0.15, m3=0.15, m4=0.15, m5=0.15.
b) Obtain the probability of bit error for coherently detected BPSK. [5+5]
8.a) We transmit either a 1 or a 0, and add redundancy by repeating the bit. (i) Show that if we
transmit 11111 or 00000, then 2 errors can be corrected. (ii) Show that in general if we transmit
the same bit 2t+1 times we can correct up to t errors.
b) What are code tree, code trellis and state diagrams for convolution encoders? [5+5]
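The repetition-code argument in 8(a) can be demonstrated with a short majority-vote decoder sketch (an illustrative aid, not part of the question paper):

```python
from collections import Counter

def repetition_decode(received):
    """Majority-vote decoding of a (2t+1)-fold repeated bit:
    any t or fewer bit flips are corrected."""
    return Counter(received).most_common(1)[0][0]

# 5-fold repetition (t = 2): two errors are still corrected
assert repetition_decode([1, 1, 0, 0, 1]) == 1   # sent 11111, two flips
assert repetition_decode([0, 1, 0, 1, 0]) == 0   # sent 00000, two flips
```

With t+1 or more flips the majority changes, so t is the exact correction limit.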
OR
9.a) Design the encoder for the (7, 4) cyclic code generated by G(p) = p^3 + p^2 + 1 and also verify the
operation for any message vector.
b) Derive the steps involved in generation of linear block codes. Define and explain the
properties of syndrome. [6+4]
10.a) Derive the necessity of DSSS techniques. Draw the transmitter and receiver block diagram
and explain.
b) Write a note on CDMA. [6+4]
OR
11.a) Explain the advantages and applications of spread spectrum modulation.
b) Discuss the frequency hopping spread spectrum technique in detail. [4+6]
R13
Code No: 126AN
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
B.Tech III Year II Semester Examinations, October/November-2016
DIGITAL COMMUNICATIONS
(Electronics and Communication Engineering)
Time: 3 hours Max. Marks: 75
6.a) Draw and explain the working of optimum receiver with a neat diagram.
b) Define eye diagram. Draw the eye diagram for FSK. [5+5]
OR
7.a) Explain Huffman coding with an example.
b) Explain crosstalk concept. [5+5]
PART - A
(25 Marks)
1.a) What are the drawbacks of delta modulation? [2]
b) Explain the need for non-uniform quantization in digital communication. [3]
c) Draw the Signal space Diagram of ASK. [2]
d) List out the Advantages of Pass band Transmission over Baseband transmission. [3]
e) Define Entropy. [2]
f) Derive the Expression for the Information Rate. [3]
g) Explain in one sentence about (i) Block Size (ii) Linear block codes. [2]
PART – B
(50 Marks)
2.a) With a neat block diagram, explain the process of Sampling and Quantization in digital
communication.
b) Derive the expression for the Quantization error. [5+5]
OR
3.a) Explain about the noise in PCM systems.
b) Write the comparison between PCM and Analog modulation techniques. [5+5]
4.a) With neat diagrams and equations, explain the PSK system.
b) Draw the signal space representation of BPSK and also draw its waveforms. [5+5]
OR
5.a) The bit stream 1011100011 is to be transmitted using DPSK. Determine the encoded
sequence and transmitted phase sequence.
b) Explain the DPSK system and give a comparison between DPSK and PSK.
[5+5]
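For question 5(a), the differential encoding step can be sketched as below. Note that the XNOR rule with reference bit 1 and the phase map 1 → 0 rad, 0 → π rad are assumed conventions; some textbooks use XOR instead, which changes the encoded sequence:

```python
def dpsk_encode(data, ref=1):
    """Differential encoding b_k = XNOR(b_{k-1}, d_k), reference bit b_0 = ref."""
    enc = [ref]
    for d in data:
        enc.append(1 if enc[-1] == d else 0)
    # assumed phase convention: bit 1 -> 0 rad, bit 0 -> pi rad
    phases = ["0" if b else "pi" for b in enc]
    return enc, phases

enc, phases = dpsk_encode([1, 0, 1, 1, 1, 0, 0, 0, 1, 1])
```

Under these conventions the encoded sequence (including the reference bit) is 1 1 0 0 0 0 1 0 1 1 1.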
6.a) What is the need of pulse shaping for optimum transmission in baseband transmission?
Explain.
b) What is meant by Cross talk? Explain in detail about the causes for cross talk. [5+5]
OR
7.a) Briefly explain about Variable length coding.
b) Explain in detail about Huffman coding and Lossy source code. [5+5]
10.a) Explain the role of the code division multiple access technique in the present generation of communication systems.
b) Give a brief history about direct sequence spread spectrum. [5+5]
OR
11.a) Explain about PN-Sequences generation and their characteristics.
b) What is meant by synchronization? Why do we require synchronization in spread
spectrum? Explain in detail. [5+5]
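For 11(a), a maximal-length PN sequence can be generated with a simple linear feedback shift register (LFSR) sketch. The 3-stage register and the tap positions below are assumed for illustration; any primitive feedback polynomial of degree m gives period 2^m − 1:

```python
def lfsr_pn(taps, state, n):
    """Fibonacci LFSR: XOR the tapped stages back into the input stage.
    A 3-stage register with a primitive feedback polynomial yields a
    maximal-length PN sequence of period 2^3 - 1 = 7."""
    out = []
    for _ in range(n):
        out.append(state[-1])          # output taken from the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]      # shift and insert feedback
    return out

seq = lfsr_pn(taps=[0, 2], state=[1, 0, 0], n=14)
```

The output repeats with period 7 and contains four 1s and three 0s per period, illustrating the balance property of m-sequences.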
1. The effect of distortion, noise and interference is less in a digital communication system.
This is because [ ]
a) The disturbance must be large enough to change the pulse from one state to the other.
b) The disturbance must be small enough to change the pulse from one state to the other.
c) The disturbance must be large enough to change the step signal from one state to the other.
d) The disturbance must be small enough to change the step signal from one state to the other.
2. A signal g(t) has an upper cutoff frequency fu = 120 kHz and a lower cutoff
frequency fl = 70 kHz. The ratio of the upper cutoff frequency to the bandwidth of the signal g(t) is [ ]
a) 2.1 b) 2.2 c) 2.3 d) 2.4
3. The quantization process presents an error defined as the difference between the input
signal, x(t) and the output signal, y(t). This error is called the __________. [ ]
a) Quantization Noise b) Quantization c) Signal to Noise ratio d) Error
6. The spectrum of band pass signal g(t) has bandwidth of 0.6 kHz centered around ±12 kHz.
The Nyquist rate is _________ [ ]
a) 0.3 kHz b) 0.6 kHz c) 1.2 kHz d) 2.4 kHz
7. For the signal x(t) = sinc^2(200t), the Nyquist interval for x(t) is _______________ [ ]
a) 5 msec b) 2.5 msec c) 1.25 msec d) 0.625 msec
8. The receiver in which the carrier used is of the same frequency and phase as the transmitted
one is called __________. [ ]
a) Non coherent receivers. b) Coherent receivers. c) ASK d) FSK
10. In BPSK modulation, the modulating signal shifts the phase of the waveform to one of
two states,____________ [ ]
a) 0 b) π c) Either 0 or π d) Neither 0 nor π
TEXT BOOKS
Websites:-
1. http://en.wikipedia.org/wiki/digital_communications
2. http://www.tmworld.com/archive/2011/20110801.php
3. www.pemuk.com
4. www.site.uottawa.com
5. www.tews.elektronik.com
Journals:-
1. Communications Journal
REFERENCES:
1. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill,
2008.
2. Digital Communications - Simon Haykin, John Wiley, 2005.
3. Digital Communications - Ian A. Glover, Peter M. Grant, 2nd edition, Pearson Education, 2008.
4. Communication Systems - B. P. Lathi, BS Publications, 2006.
DEAN-ADMIN