
Digital Electronics


Lecture Notes #3

Dr. Andrei Dinu

De Montfort University
Faculty of Computer Sciences and Engineering
Department of Engineering and Technology
Digital Electronics – ELEC2102


3.1 Bistables
Sequential circuits are digital circuits whose present output signals depend not only on the present
inputs but also on the past history of the inputs, i.e. they have “memory”. This property is generated by
internal feed-back loops in the circuit. The structure and the behaviour of one of the simplest possible
sequential circuits is presented in Fig. 3-1. Initially both signal A and signal B are ‘0’. When the input signal
A changes from ‘0’ to ‘1’ the output changes as well and the effect propagates back through the feed-back
loop, reinforcing the output state. As a result, A can change back to ‘0’ without having any effect on B.

Fig. 3-1 – Sequential circuit

Such a circuit does not have a practical application because once the output is ‘1’ there is no way it
can be reversed to ‘0’. However, it can be transformed by adding a supplementary input, which is capable of
returning the output back to the initial state. One possible solution is presented in Fig. 3-2 where the
operation of the feed-back loop is controlled by an AND gate. If the input signal C is ‘0’ the new circuit
behaves exactly like the previous one but when C=’1’ the signal on the feed-back loop is ‘0’ and therefore
the output B equals the input A. To sum up, the output B can be forced to ‘1’ if A=’1’ and C=’0’ while B can
be brought back to ‘0’ if A=’0’ and C=’1’. If both A and C are ‘0’ at the same time, B retains its previous
state. This is the effect of the “memory” property of such a circuit.
Thus, when A and C are both ‘0’ the circuit in Fig. 3-2 is in one of two possible stable states (B=’0’ or
B=’1’) where the output B depends not on the present inputs but on the past input. A circuit with two stable
states is called a bistable.

Fig. 3-2 – Simple bistable
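The behaviour described above can be captured by a one-line Boolean update. The following Python sketch (written for these notes; `bistable_step` is an illustrative name, and all gate delays are collapsed into a single settle step) reproduces the set/hold/reset sequence:

```python
def bistable_step(a, c, b):
    """One settle step of the Fig. 3-2 loop: output B is the OR of
    input A and the gated feedback B AND (NOT C)."""
    return a | (b & (1 - c))

b = 0
b = bistable_step(1, 0, b)   # A=1, C=0 forces B to 1
b = bistable_step(0, 0, b)   # A released: B is retained
print(b)                     # 1
b = bistable_step(0, 1, b)   # C=1 breaks the feedback loop, so B follows A=0
print(b)                     # 0
```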

A special situation occurs when A and C are simultaneously ‘1’. In this case, the future evolution of
the signal B depends on which signal returns first to ‘0’. If both signals become ‘0’ (almost) simultaneously
then the reaction of the bistable depends on the hazard produced by its internal components and is
unpredictable. In the particular case presented in Fig. 3-2 it is most probable that the effect of signal A
will be faster than the effect of C because C needs to propagate through two more logic gates. However, the
bistable presented above is not the only implementation solution. Other implementations may behave
differently in a similar situation. The truth table associated with the bistable in Fig. 3-2 is Table 3-1.
Table 3-1
Truth Table
A   C   Bt
0   0   Bt-1
0   1   0
1   0   1
1   1   1
The most well-known bistable circuits are SR-type bistables. They have two inputs: Set (or S) and
Reset (or R) and two outputs Q and ~Q (where ~Q denotes the complement of Q; one is the inverse of the
other during normal operation). The Set input forces the main output (Q) to ‘1’ while the Reset forces Q to
‘0’. Three possible implementation alternatives for an SR bistable are presented in Fig. 3-3, while the
corresponding truth tables are illustrated by
Table 3-2 and Table 3-3.

Fig. 3-3 – Alternative implementations of SR bistables

Table 3-2
Truth Table for Bistable A        Truth Table for Bistable B
S   R   Qt     ~Qt                S   R   Qt     ~Qt
0   0   Qt-1   ~Qt-1              0   0   —      —
0   1   0      1                  0   1   1      0
1   0   1      0                  1   0   0      1
1   1   —      —                  1   1   Qt-1   ~Qt-1
Table 3-3
Truth Table for Bistable C
S   R   Qt     ~Qt
0   0   Qt-1   ~Qt-1
0   1   0      1
1   0   1      0
1   1   —      —
The inputs Set and Reset can be “active high” meaning that they act when ‘1’, or “active low” (active
when ‘0’). In Fig. 3-3, the inputs are active high for circuits (a) and (c), and active low in circuit (b).
Both inputs should normally be at a logic level ‘0’ in the case of the NOR circuit in Fig. 3-3 (a).
Changing the input Set to a logic level ‘1’ will force ~Q to ‘0’. This signal feeds back to the input of the
lower NOR gate, and Q becomes ‘1’, thereby reinforcing the output of the upper NOR gate. Similarly, if
Reset is forced to ‘1’ instead, then Q will be forced to ‘0’ while ~Q becomes ‘1’.
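The settling of the cross-coupled NOR pair can be checked with a small zero-delay model (a sketch written for these notes; the fixed iteration count simply lets the loop reach its stable state):

```python
def nor(a, b):
    return 1 - (a | b)

def sr_nor(s, r, q=0, qn=1):
    """Settle the cross-coupled NOR pair of Fig. 3-3 (a): the upper gate
    drives ~Q = NOR(S, Q), the lower gate drives Q = NOR(R, ~Q)."""
    for _ in range(4):                 # enough iterations to settle
        q, qn = nor(r, qn), nor(s, q)
    return q, qn

print(sr_nor(1, 0))        # set:   (1, 0)
print(sr_nor(0, 1, 1, 0))  # reset: (0, 1)
print(sr_nor(0, 0, 1, 0))  # hold:  (1, 0)
```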
Exercise: Explain the operation of the other two bistable circuits in Fig. 3-3.

The simultaneous activation of both Set and Reset brings the bistable into a paradoxical state where
both Q and ~Q have the same value. As a result, it is customary to avoid the simultaneous activation of both
Set and Reset and the corresponding combination of inputs is not analysed in the truth tables (see Table 3-2
and Table 3-3). Moreover, the simultaneous deactivation of Set and Reset in the circuits presented in Fig. 3-3
after they were both active generates unpredictable results due to the circuit symmetry. The final state after
such a transition depends on minute differences of propagation delays through the two (or four) gates of the
bistable. So apparently identical circuits will behave differently due to such small propagation-time
differences, which cannot be properly controlled during the manufacturing process.

3.1.1 Latches
Latches are particular cases of bistable circuits that have an additional input (usually named CLOCK,
CLK, ENABLE or EN) that enables and disables the transitions on the outputs. Two possible
implementations of SR latches are presented in Fig. 3-4. The state of such circuits can change only when the
CLK input is ‘1’, while CLK=’0’ “freezes” the internal state and the outputs of the latch.
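Behaviourally, the enable input simply gates the S and R inputs. A minimal Python model (illustrative names; the undefined S=R=1 combination is treated as a hold here, since it is avoided in practice):

```python
def sr_latch(clk, s, r, q):
    """Behavioural model of an active-high SR latch (Fig. 3-4 (a)):
    transparent while CLK=1, frozen while CLK=0."""
    if clk == 0:
        return q              # state and outputs are "frozen"
    if s == 1 and r == 0:
        return 1              # Set
    if s == 0 and r == 1:
        return 0              # Reset
    return q                  # S=R=0 holds; S=R=1 is avoided in practice

q = 0
q = sr_latch(1, 1, 0, q)      # CLK=1: Set acts
q = sr_latch(0, 0, 1, q)      # CLK=0: Reset is ignored
print(q)                      # 1
```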

Fig. 3-4 – The structure and the operation of SR latches

There are different types of SR latches depending on the type of each input. Some inputs may be
active-high (active when they are ‘1’) while others may be active-low (active when ‘0’). Thus, the inputs of the
latch in Fig. 3-4 (a) are active high, while the inputs of the latch in Fig. 3-4 (b) are active low. The active-low
inputs are represented by a small circle on the corresponding symbol, as shown in Fig. 3-5.

Fig. 3-5 – SR latch symbols


3.1.2 Flip-Flops
Flip-flops are edge-triggered bistables. Their outputs change in accordance with the inputs only when
the clock signal changes from ‘0’ to ‘1’ (the rising edge of the clock) or from ‘1’ to ‘0’ (the falling edge of
the clock). Flip-flops are active either on the rising or on the falling edge of the clock, but not on both
simultaneously.

Flip-Flops with Input Validation Circuits

Fig. 3-6 presents the simplest implementation of an SR flip-flop. Here the operation of the two input gates is
controlled by the signal generated by an input analogue circuit. This circuit generates validation pulses for
each rising edge of the clock.

Fig. 3-6 – Positive edge-triggered bistable

Thus, when the clock signal changes from ‘0’ to ‘1’ (from 0V to +5V) the capacitor charges and a
current flows through the resistor R1. This current produces a voltage, which is transmitted almost unchanged
to the inputs of the NAND gates, because the current through R2 is negligible (the diode D does not conduct
a current to the ground but only from the ground). The charging process is described by the equations
\[
\begin{cases}
u_c = \dfrac{q_c}{C}\\[4pt]
R_1 i_c = R_1 \dfrac{dq_c}{dt}
\end{cases}
\;\Rightarrow\;
\frac{q_c}{C} + R_1\,\frac{dq_c}{dt} = u_{clk}
\tag{3-1}
\]
where uc is the voltage across the capacitor and qc is the charge stored by the capacitor. The general solution
of the differential equation above is
\[
q_c = K_1\,e^{p\,t} + K_2
\tag{3-2}
\]

The constant p is the solution of the attached characteristic equation (3-3), while the
constants K1 and K2 can be determined from the initial and the final conditions of the system as shown in
(3-4). Initially, the capacitor is completely discharged (qc=0) while the final charge is qc=C⋅uclk.
\[
\frac{1}{C} + R_1\,p = 0
\;\Rightarrow\;
p = -\frac{1}{R_1 C}
\tag{3-3}
\]

\[
\begin{cases}
q_c(0) = K_1 + K_2 = 0\\
q_c(+\infty) = K_2 = C\,u_{clk}
\end{cases}
\;\Rightarrow\;
\begin{cases}
K_1 = -C\,u_{clk}\\
K_2 = C\,u_{clk}
\end{cases}
\tag{3-4}
\]
Therefore, the solution of the differential equation in (3-1) is

\[
q_c(t) = C\,u_{clk}\left(1 - e^{-\frac{t}{R_1 C}}\right)
\tag{3-5}
\]
The voltage uNAND applied on the inputs of the NAND gates is approximately

\[
u_{NAND} \cong R_1 i_1 = R_1 i_c = R_1\,\frac{dq_c}{dt}
= R_1 C\,u_{clk}\,\frac{1}{R_1 C}\,e^{-\frac{t}{R_1 C}}
= u_{clk}\,e^{-\frac{t}{R_1 C}}
\tag{3-6}
\]
This is a decreasing exponential. The voltage signal is interpreted as ‘1’ as long as it is larger than VIHmin (the
minimum input voltage guaranteed to be recognised as logic ‘1’) and is interpreted as ‘0’ once it falls
below VILmax (the maximum input voltage guaranteed to be recognised as logic ‘0’). Consequently, this
formula can be used to calculate the necessary capacitor and resistor in order to obtain the required
validation time. For instance, if the validation time Tv should be less than 1 ns and the logic gates are
implemented in CMOS technology (VILmax=1.5V) then the following calculation needs to be performed:

\[
V_{ILmax} = u_{clk}\,e^{-\frac{T_V}{R_1 C}}
\;\Rightarrow\;
R_1 C \le \frac{T_V}{\ln\!\left(\dfrac{u_{clk}}{V_{ILmax}}\right)}
= \frac{10^{-9}}{\ln\!\left(\dfrac{5}{1.5}\right)}
= 8.306\cdot 10^{-10}\ \mathrm{s}
\tag{3-7}
\]
Several pairs of values for R1 and C can be used provided that their product complies with the condition
(3-7). If for example C=1pF then R1 has to be less than 830.6Ω.
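The calculation in (3-7) can be repeated in a few lines of Python (the component values are the ones from the example above):

```python
import math

u_clk = 5.0       # clock high level [V]
v_il_max = 1.5    # CMOS V_ILmax [V]
t_v = 1e-9        # required validation time [s]

# Condition (3-7): the exponential must drop below V_ILmax within T_v.
rc_max = t_v / math.log(u_clk / v_il_max)
print(rc_max)     # about 8.31e-10 s

c = 1e-12         # choose C = 1 pF ...
r1_max = rc_max / c
print(r1_max)     # ... then R1 must stay below about 830.6 ohms
```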
When the clock voltage changes from +5V to 0V (during the falling edge of the clock) the capacitor
discharges through both R1 and R2 because the diode conducts the current from the ground to the capacitor.
As a result, the discharging current is larger and the discharging process is faster than the charging. The
voltage supplied to the NAND gates is the voltage across the diode, and it does not go below −0.7 V.

Fig. 3-7 – The validation signal of the flip-flop

Exercise: Transform the circuit in Fig. 3-6 into a flip-flop triggered by the falling edge of the clock.

The validation signal can be generated by purely digital means using the circuits in Fig. 3-8, exploiting
the phenomenon of hazard. Thus, the signals x and y at the inputs of the AND gate do not change
simultaneously due to the delays generated by the buffer and the NOT gates. As a result, the AND gate
generates a short logic ‘1’ pulse whose width is proportional to the number of logic elements used to delay
the signal ‘x’. Consequently, the circuit in Fig. 3-8 (A) generates a narrower validation pulse than the circuit
in Fig. 3-8 (B).
This digital solution is less flexible than the analogue approach previously presented because in this
second situation the validation pulse width is always an integer multiple of one gate delay whereas in the
first case it could have any width (depending on C and R1).
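A behavioural sketch of the hazard-based pulse generator (written for these notes; one list element represents one gate delay, and the delayed copy of the clock is inverted before the AND gate):

```python
def validation_pulse(clk, delay):
    """Edge detector of Fig. 3-8: AND the clock with a delayed, inverted
    copy of itself; the output is 1 only just after each rising edge."""
    delayed = [0] * delay + clk[:-delay]       # buffer / inverter chain
    return [c & (1 - d) for c, d in zip(clk, delayed)]

clk = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0]
print(validation_pulse(clk, 1))   # pulse one gate-delay wide
print(validation_pulse(clk, 3))   # pulse three gate-delays wide
```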


Fig. 3-8 – Digital validation signal generators with buffer and inverter (A) and only with inverters (B)

Master-Slave Flip-Flops

Master-slave flip-flops are built using two cascaded latches where the first is the “master” and the
second is the “slave”. The clock signals for the two latches are always opposite so that the master and the
slave are enabled and disabled by turns. Therefore, the general outputs Q and Q of the master-slave circuit
can only change during one of the clock transitions, either from ‘0’ to ‘1’ (the rising edge of the clock) or from
‘1’ to ‘0’ (the falling edge of the clock). The symbol of the flip-flop always indicates which clock edge
triggers the output changes (see Fig. 3-9). Fig. 3-10 illustrates the operation of an SR flip-flop triggered by the
rising edge of the clock.

Fig. 3-9 – Master-slave SR flip-flops: falling-edge triggered (a) and rising-edge triggered (b)

Fig. 3-10 – SR flip-flop operation (active on the rising edge of the clock)
The most important flip-flops are JK, D (or Data) and T (or Toggle). These flip-flops are derived from
the basic SR flip-flop (Fig. 3-9) in the manner shown in Fig. 3-11. Depending on the inputs of the master and
slave SR latches (active-low or active-high) the resulting flip-flops will have different types of inputs (active-
low or active-high). Thus, two types of truth tables are possible: Table 3-4 refers to flip-flops with
active-high inputs, while Table 3-5 applies to flip-flops with active-low inputs.
Table 3-4 – JK, D and T flip-flop truth tables (inputs active-high)

JK flip-flop             D flip-flop         T flip-flop
J  K  Qt     ~Qt         D  Qt  ~Qt         T  Qt     ~Qt
0  0  Qt-1   ~Qt-1       0  0   1           0  Qt-1   ~Qt-1
0  1  0      1           1  1   0           1  ~Qt-1  Qt-1
1  0  1      0
1  1  ~Qt-1  Qt-1

Table 3-5 – JK, D and T flip-flop truth tables (inputs active-low)

JK flip-flop             D flip-flop         T flip-flop
J  K  Qt     ~Qt         D  Qt  ~Qt         T  Qt     ~Qt
0  0  ~Qt-1  Qt-1        0  1   0           0  ~Qt-1  Qt-1
0  1  1      0           1  0   1           1  Qt-1   ~Qt-1
1  0  0      1
1  1  Qt-1   ~Qt-1
IMPORTANT NOTE: The flip-flops with the main inputs (S, R, T, J, K) active high are by far more
popular than the ones with inputs active low. Therefore, Table 3-4 shows the truth-tables used in most
practical cases.
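The active-high truth tables in Table 3-4 translate directly into next-state functions. A short Python sketch (the function names are illustrative):

```python
def jk_next(j, k, q):
    """Next state of a JK flip-flop, inputs active-high (Table 3-4)."""
    return (j & (1 - q)) | ((1 - k) & q)

def d_next(d, q):
    """D flip-flop: the next state simply copies the D input."""
    return d

def t_next(t, q):
    """T flip-flop: T=1 toggles the state, T=0 holds it."""
    return q ^ t

print(jk_next(1, 1, 0), jk_next(1, 1, 1))   # J=K=1 toggles: 1 0
```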


Fig. 3-11 – JK, D and T flip-flops (structure and symbols)

NOTE: As shown in Fig. 3-11, constructing a JK flip-flop out of an SR flip-flop always implies closing two
feedback loops. The way these feedback loops are closed depends on the type of the SR flip-flop inputs. Thus,
AND gates are used if the inputs are active-high (see the top row in Fig. 3-11) but OR gates are used if the
SR inputs are active-low.
Some flip-flops have extra inputs with direct and immediate effect on the outputs regardless of the
clock signal. These are named asynchronous inputs and force the flip-flop either to ‘0’ (the input Clear) or
to ‘1’ (input Preset). Fig. 3-12 presents an SR flip-flop with active-low ‘Preset’ and ‘Clear’.
Exercise: Explain exactly how these inputs affect the outputs of the flip-flop.
Question: Why are Preset and Clear connected both to the master and to the slave?

Fig. 3-12 – SR flip-flop with asynchronous preset and clear

Flip-flop excitation tables show the inputs necessary to obtain any required changes of the flip-flop
output. There are only four possible output changes: 0→0; 0→1; 1→0; 1→1. These are covered in Table 3-6
where the sign ‘*’ stands for any input (either ‘0’ or ‘1’). Based on these tables, and using logic gates where
necessary, any flip-flop can be transformed into any other flip-flop. Some of the conversions are illustrated in
Fig. 3-11.
Table 3-6 – Flip-flop excitation tables (inputs active high).
SR flip-flop           JK flip-flop
Qt  Qt+1  S  R         Qt  Qt+1  J  K
0   0     0  *         0   0     0  *
0   1     1  0         0   1     1  *
1   0     0  1         1   0     *  1
1   1     *  0         1   1     *  0

D flip-flop            T flip-flop
Qt  Qt+1  D            Qt  Qt+1  T
0   0     0            0   0     0
0   1     1            0   1     1
1   0     0            1   0     1
1   1     1            1   1     0

3.1.3 Flip-Flop Conversions

To convert one flip-flop into another, the excitation tables of the two flip-flop types need to be merged
into a single one, as in Table 3-7, which is used to transform a D-type flip-flop into a T-type flip-flop. Using
this table, the function T(Qt, D) is derived:

\[
T(Q^t, D) = \overline{Q^t}\,D + Q^t\,\overline{D} = Q^t \oplus D
\tag{3-8}
\]

and implemented with logic gates (Fig. 3-13).

Table 3-7
           Initial    Emulated
           Flip-Flop  Flip-Flop
           Input      Input
Qt  Qt+1   D          T
0   0      0          0
0   1      1          1
1   0      0          1
1   1      1          0

Fig. 3-13 – The conversion of a D flip-flop into a T flip-flop
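The conversion can be verified exhaustively. The sketch below (illustrative names) drives a D flip-flop with D = Q XOR T, the relation expressed by equation (3-8):

```python
def d_ff(d, q):
    """Plain D flip-flop: the next state equals the D input."""
    return d

def t_from_d(t, q):
    """T flip-flop emulated with a D flip-flop: the D input is
    driven with Q XOR T, as in equation (3-8)."""
    return d_ff(q ^ t, q)

# T=0 holds the state, T=1 toggles it - the defining T behaviour.
for q in (0, 1):
    assert t_from_d(0, q) == q
    assert t_from_d(1, q) == 1 - q
print("D-to-T conversion verified")
```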

A second example presents the implementation of a JK-equivalent circuit using a D flip-flop. Based on
Table 3-8, the Karnaugh map in Fig. 3-14 is derived. The map corresponds to the Boolean function
D = J·~Q + Q·~K, which is implemented in the final circuit as shown in Fig. 3-14.

Table 3-8
           Initial    Emulated
           Flip-Flop  Flip-Flop
           Input      Inputs
Qt  Qt+1   D          J  K
0   0      0          0  *
0   1      1          1  *
1   0      0          *  1
1   1      1          *  0

Fig. 3-14 The conversion of a D flip-flop into a JK flip-flop
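Again the conversion can be checked for every input combination (a sketch for these notes; the reference behaviour is the active-high JK truth table of Table 3-4):

```python
def d_ff(d, q):
    return d

def jk_from_d(j, k, q):
    """JK flip-flop emulated with a D flip-flop: the D input implements
    D = J AND (NOT Q)  OR  Q AND (NOT K)."""
    return d_ff((j & (1 - q)) | (q & (1 - k)), q)

# Reference JK behaviour: 00 hold, 01 reset, 10 set, 11 toggle.
reference = {(0, 0): lambda q: q, (0, 1): lambda q: 0,
             (1, 0): lambda q: 1, (1, 1): lambda q: 1 - q}
for (j, k), f in reference.items():
    for q in (0, 1):
        assert jk_from_d(j, k, q) == f(q)
print("D-to-JK conversion verified")
```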

3.2 Flip-Flop Applications

3.2.1 Counters
A digital counter is a set of flip-flops whose states change in response to pulses applied at the input to
the counter. The flip-flops are interconnected so that their combined state at any time encodes the number
of pulses that have occurred up to that time. Most often (but not always), this combined state is simply the
binary representation of the number of clock pulses received.
The typical outputs of a 4-bit counter are presented in Table 3-9. Thus, the least significant output
changes after each clock pulse while Out1 toggles only when Out0 changes from ‘1’ to ‘0’. Likewise, the
output Out2 toggles when Out1 changes from ‘1’ to ‘0’ and Out3 toggles when Out2 changes from ‘1’ to ‘0’.
Based on these considerations, the structure in Fig. 3-15 can be derived where each flip-flop provides the
clock signal for the next flip-flop. If active-high flip-flops are to be used then Fig. 3-15 can be transformed
into Fig. 3-16. These two examples are asynchronous counters because the flip-flops do not have a common
clock source to synchronise them.
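The ripple mechanism can be sketched behaviourally: each stage toggles, and only a 1→0 transition lets the toggle propagate to the next stage (an illustrative Python model written for these notes):

```python
def ripple_count(n_pulses, n_bits=4):
    """Behavioural ripple counter (Fig. 3-15/3-16): every stage toggles
    when its predecessor falls from 1 to 0; stage 0 toggles on the clock."""
    bits = [0] * n_bits            # bits[0] is Out0 (least significant)
    for _ in range(n_pulses):
        for i in range(n_bits):
            bits[i] ^= 1           # this stage toggles...
            if bits[i] == 1:       # ...and only its 1->0 edge ripples on
                break
    return bits

print(ripple_count(11))    # [1, 1, 0, 1] = Out0..Out3, i.e. binary 1011
```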
Table 3-9
Out3  Out2  Out1  Out0  No. of clock pulses
0     0     0     0     0
0     0     0     1     1
0     0     1     0     2
0     0     1     1     3
0     1     0     0     4
0     1     0     1     5
0     1     1     0     6
0     1     1     1     7
1     0     0     0     8
1     0     0     1     9
1     0     1     0     10
1     0     1     1     11
1     1     0     0     12
1     1     0     1     13
1     1     1     0     14
1     1     1     1     15


Fig. 3-15 – Asynchronous (ripple-through) counter with active-low T-type flip-flops

Fig. 3-16 – Asynchronous (ripple-through) counter with active-high T-type flip-flops

Large asynchronous counters cannot be used at high frequency because the total delay through the
counter is N times larger than the delay through one flip-flop (N = the number of flip-flops) and N⋅Tdelay can
be larger than the clock period Tclk. Large counters using high clock frequencies can be built based on the
synchronous paradigm where the same clock signal is distributed to all the flip-flops (see Fig. 3-17 and
Fig. 3-18).

Fig. 3-17 – Synchronous counter with T-type flip-flops

Fig. 3-18 - Synchronous counter with JK-type flip-flops


Fig. 3-19 - Synchronous counter with D-type flip-flops

Fig. 3-20 – Down counter with T-type flip-flops

Fig. 3-21 – Up-down synchronous counter with T-type flip-flops

Fig. 3-22 – Synchronous counter with reset input


Fig. 3-23 – Modulus-10 counter

Fig. 3-24 – Simple ring counter

Fig. 3-25 – Johnson counter

Table 3-10 – Johnson counter operation

Feed-back bit  Out0  Out1  Out2  Out3
1              0     0     0     0
1              1     0     0     0
1              1     1     0     0
1              1     1     1     0
0              1     1     1     1
0              0     1     1     1
0              0     0     1     1
0              0     0     0     1
1              0     0     0     0    (beginning of
1              1     0     0     0     the second
1              1     1     0     0     cycle)
1              1     1     1     0

The RESET input is essential for the correct operation of the Johnson counter in Fig. 3-25 because the
counter needs to start from one of its correct states. A correct state is one where all the output bits ‘1’ are
adjacent to one another. A state like “1011” is not a correct state for a Johnson counter. The counter in
Fig. 3-26 does not need a reset input because it automatically converges to its normal operation cycle
regardless of the initial state. This is achieved by means of the feed-back loop that contains the NOR gate
which feeds a new bit ‘1’ into the counter only when all the outputs are simultaneously ‘0’. An example is
presented in Table 3-11.

Fig. 3-26 – Self-starting ring-counter


Table 3-11
Feed-back bit  Out0  Out1  Out2  Out3
0              1     1     0     1    (initial states)
0              0     1     1     0
0              0     0     1     1
0              0     0     0     1
1              0     0     0     0
0              1     0     0     0    (one normal
0              0     1     0     0     operation
0              0     0     1     0     cycle)
0              0     0     0     1
1              0     0     0     0
0              1     0     0     0
Such ring-counters are normally used to control complex devices (industrial robots, for instance)
where a cyclical task consisting of several steps is performed. Each step is initiated by
activating one output signal (one flip-flop output).

Fig. 3-27 – Self-starting ring-counter without the “0000” state

3.2.2 General Design Procedure for Synchronous Counters

Step 1: Draw the state diagram (each state is defined by the outputs of the flip-flops inside the counter).
Step 2: Derive the state table of the counter
Step 3: Choose the type of flip-flops to be used.
Step 4: Determine the necessary inputs of the flip-flops based on the excitation table of the flip-flops and on
the state table. (This step implies the use of Karnaugh maps.)
Step 5: Implement the counter with the chosen flip-flops and with logic gates.

Example: The design of a three-bit Gray counter with D-flip-flops.

Fig. 3-28 – Gray counter state diagram

Table 3-12 – The state table of the Gray counter and the corresponding D inputs of the flip-flops
Q2  Q1  Q0 | Q2(t+1)  Q1(t+1)  Q0(t+1) | D2  D1  D0
0   0   0  | 0        0        1       | 0   0   1
0   0   1  | 0        1        1       | 0   1   1
0   1   1  | 0        1        0       | 0   1   0
0   1   0  | 1        1        0       | 1   1   0
1   1   0  | 1        1        1       | 1   1   1
1   1   1  | 1        0        1       | 1   0   1
1   0   1  | 1        0        0       | 1   0   0
1   0   0  | 0        0        0       | 0   0   0

Fig. 3-29 – The logic functions underlying the operation of the 3-bit Gray counter

\[
D_0 = \overline{Q_2}\,\overline{Q_1} + Q_2 Q_1
    = \overline{\overline{\overline{Q_2}\,\overline{Q_1} + Q_2 Q_1}}
    = \overline{(Q_2 + Q_1)\,(\overline{Q_2} + \overline{Q_1})}
    = \overline{Q_2\overline{Q_2} + Q_2\overline{Q_1} + Q_1\overline{Q_2} + Q_1\overline{Q_1}}
    = \overline{Q_2\overline{Q_1} + Q_1\overline{Q_2}}
    = \overline{Q_1 \oplus Q_2}
\]

Fig. 3-30 – Gray counter implemented with D-type flip-flops
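The design can be verified by simulation. In the sketch below, D0 is 1 exactly when Q1 and Q2 are equal (the XNOR function), while the sums-of-products for D1 and D2 are read off Table 3-12 (the notes obtain them with the Karnaugh maps of Fig. 3-29, which are not reproduced here):

```python
def gray_step(q2, q1, q0):
    """One clock period of the 3-bit Gray counter of Fig. 3-30."""
    d2 = (q2 & q0) | (q1 & (1 - q0))
    d1 = ((1 - q2) & q0) | (q1 & (1 - q0))
    d0 = 1 - (q1 ^ q2)               # D0 = XNOR(Q1, Q2)
    return d2, d1, d0

state = (0, 0, 0)
sequence = []
for _ in range(8):
    sequence.append(state)
    state = gray_step(*state)
print(sequence)   # the Gray cycle 000,001,011,010,110,111,101,100
```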

The same counter can be implemented using T flip-flops or JK flip-flops instead of D flip-flops. In this
case, Table 3-13 needs to be used to derive the Boolean functions to be implemented by logic gates. This
table contains the excitation functions for the T and JK flip-flops respectively.


Table 3-13 – The state table of the Gray counter and the corresponding T or J and K inputs of the flip-flops
Q2  Q1  Q0 | Q2(t+1)  Q1(t+1)  Q0(t+1) | T2  T1  T0 | J2  K2  J1  K1  J0  K0
0   0   0  | 0        0        1       | 0   0   1  | 0   *   0   *   1   *
0   0   1  | 0        1        1       | 0   1   0  | 0   *   1   *   *   0
0   1   1  | 0        1        0       | 0   0   1  | 0   *   *   0   *   1
0   1   0  | 1        1        0       | 1   0   0  | 1   *   *   0   0   *
1   1   0  | 1        1        1       | 0   0   1  | *   0   *   0   1   *
1   1   1  | 1        0        1       | 0   1   0  | *   0   *   1   *   0
1   0   1  | 1        0        0       | 0   0   1  | *   0   0   *   *   1
1   0   0  | 0        0        0       | 1   0   0  | *   1   0   *   0   *

3.2.3 Registers
Registers are digital circuits capable of storing N bits at the same time, where each bit is stored by one
flip-flop. Usually registers are built with D-type flip-flops, but other types of flip-flops can also be used.
Depending on the types of the inputs and the outputs, there are four main categories of registers:
• Parallel-In Parallel-Out (PIPO)
• Serial-In Serial-Out (SISO)
• Parallel-In Serial-Out (PISO)
• Serial-In Parallel-Out (SIPO)
Combinations of these main types are also possible.
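A behavioural model of a 4-bit shift register built from D flip-flops illustrates both the SISO and the SIPO views (a sketch written for these notes; class and method names are illustrative):

```python
class ShiftRegister:
    """Behavioural 4-bit shift register built from D flip-flops: on each
    clock the serial input enters stage 0 and every stage copies its
    predecessor."""
    def __init__(self, n=4):
        self.bits = [0] * n

    def clock(self, serial_in):
        out = self.bits[-1]                      # serial output (SISO)
        self.bits = [serial_in] + self.bits[:-1]
        return out

reg = ShiftRegister()
for bit in (1, 0, 1, 1):
    reg.clock(bit)
print(reg.bits)        # [1, 1, 0, 1]: parallel outputs (SIPO view)
```

The serial input pattern reappears at the serial output exactly four clock periods later, which is the SISO behaviour of Fig. 3-32.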

Fig. 3-31 – 3-bit PIPO register

Fig. 3-32 – 4-bit SISO register

Exercise: Implement a four bit SISO register with T-type flip-flops

Fig. 3-33 – 4-bit bi-directional SISO register


Fig. 3-34 – 4-bit SIPO register

Fig. 3-35 – 5-bit PISO register with asynchronous input control (load)

Fig. 3-36 – 4-bit register with synchronous parallel and serial inputs and a serial output
Exercise: Modify the circuit in Fig. 3-36 to transform it into a register with both a serial output and four
parallel outputs.

3.2.4 Applications of Registers and Counters

Pseudo-Random Bit Generators
True randomness is not possible to achieve in a purely digital environment. However, it can be
successfully simulated by generating long cycles of seemingly chaotic numbers. This task is achieved by
so-called pseudo-random number generators. They are either software- or hardware-implemented and use an

initial value (a "seed") to start a series of numeric or logical operations to produce a long pattern of
successive results. The main applications of pseudo-random number generators are:
• simulation of complex phenomena (weather, stock exchange etc.)
• testing computer programs
• testing complex digital circuits
• electronic games
• communication systems.
One of the simplest digital random generators consists of a shift register and an XOR gate whose inputs
are connected to some of the outputs of the flip-flops in the register. The XOR gate feeds the serial input of
the register, while the pseudo-random bit pattern is provided by the serial output of the register. An example
is provided by Fig. 3-37. Notice that the ‘reset’ input of the circuit initialises the register with a bit pattern
where not all the values are ‘0’. If they were all ‘0’ then the XOR gate would always generate ‘0’ and no
random bit pattern could be generated. The quality of the circuit is given by the length of the bit-pattern it
generates. This in turn depends on the register length, the number of XOR inputs and the flip-flops they are
connected to. Generally, such a circuit does not enter its normal operation cycle immediately after reset; it
first visits a few initial states, which are not revisited during its normal cyclical operation. The first
operation steps of the presented circuit are calculated in Table 3-14.
Exercise: Find the number of initial states after reset and total length of the operation cycle for the random
generator in Fig. 3-37.

Fig. 3-37 – Example of random bit generator

Table 3-14 – The first operation steps of the random bit generator in Fig. 3-37
XOR-Out D-Out0 D-Out1 D-Out2 D-Out3 D-Out4 D-Out5
1 0 0 1 1 0 0
0 1 0 0 1 1 0
0 0 1 0 0 1 1
0 0 0 1 0 0 1
1 0 0 0 1 0 0
1 1 0 0 0 1 0
1 1 1 0 0 0 1
1 1 1 1 0 0 0
0 1 1 1 1 0 0
1 0 1 1 1 1 0
0 1 0 1 1 1 1
1 0 1 0 1 1 1
1 1 0 1 0 1 1
0 1 1 0 1 0 1
0 0 1 1 0 1 0
1 0 0 1 1 0 1
0 1 0 0 1 1 0
0 0 1 0 0 1 1
0 0 0 1 0 0 1
1 0 0 0 1 0 0
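The generator of Fig. 3-37 can be simulated with a few lines of Python. The figure itself is not reproduced in this text, so the XOR taps used below (Out1, Out3 and Out4) are inferred from Table 3-14; the sketch reproduces the first rows of that table:

```python
def lfsr_rows(n_steps, seed=(0, 0, 1, 1, 0, 0)):
    """Shift-register pseudo-random bit generator: on each clock the
    XOR of the tapped outputs enters stage 0 and everything shifts on.
    The taps (Out1, Out3, Out4) are inferred from Table 3-14."""
    state = list(seed)                          # [Out0 .. Out5] at reset
    rows = []
    for _ in range(n_steps):
        fb = state[1] ^ state[3] ^ state[4]     # XOR gate output
        rows.append((fb, tuple(state)))
        state = [fb] + state[:-1]               # shift by one stage
    return rows

for fb, outs in lfsr_rows(6):
    print(fb, outs)
```

Running the loop for more steps answers the exercise above: counting how many leading states never reappear gives the number of initial states, and the distance between two repetitions of a state gives the cycle length.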

Error Detecting Circuits

The aim of an error detection technique is to enable the receiver of a message transmitted through a
noisy (error-introducing) channel to determine whether the message has been corrupted. To do this, the
transmitter constructs a value (called a checksum) that is a function of the message, and appends it to the
message. The receiver can then use the same function to calculate the checksum of the received message and
compare it with the appended checksum to see if the message was correctly received.
The simplest checksum to detect the presence of an error due to data corruption during digital data
transmission consists of only one bit (a parity bit). A parity bit is an extra bit included with a message to
make the total number of ‘1’ bits either odd or even. However, if an even number of
bits is corrupted during transmission then the parity method will not detect the error.
The method can be enhanced by choosing a checksum function that is simply the sum of the bytes in the
message mod 256 (i.e. modulo 256).
Example (all numbers are in decimal format)

Message : 6 23 4
Message with checksum : 6 23 4 33
Message with error after transmission : 6 27 4 33
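The mod-256 checksum and the detected corruption above can be reproduced directly:

```python
def checksum_mod256(message):
    """Sum-of-bytes checksum reduced modulo 256."""
    return sum(message) % 256

sent = [6, 23, 4]
received = [6, 27, 4]            # second byte corrupted in transit

print(checksum_mod256(sent))     # 33, appended by the transmitter
print(checksum_mod256(received)) # 37, so the receiver flags an error
```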

The second byte of the message was corrupted in the above example from 23 to 27 by the communications
channel. However, the receiver can detect this by comparing the transmitted checksum (33) with the
computed checksum of 37 (6 + 27 + 4). If the checksum itself is corrupted, then a correctly transmitted
message might be incorrectly identified as a corrupted one. However, this is a safe-side failure. A dangerous-
side failure occurs when the message and/or checksum are corrupted in a manner that results in a
transmission that is internally consistent. For example:

Message : 6 23 4
Message with checksum : 6 23 4 33
Message after transmission : 8 20 5 33

Unfortunately, this possibility is not completely avoidable and the best that can be done is to minimise its
probability by increasing the amount of information in the checksum (e.g. widening the checksum from one
byte to two bytes). To strengthen the checksum, one could change from an 8-bit register to a 16-bit register
(i.e. sum the bytes mod 65536 instead of mod 256) so as to apparently reduce the probability of failure from
1/256 to 1/65536. While a good idea, it fails in this particular case because the formula used is not
sufficiently "random". With a simple summing formula, each incoming byte affects roughly only one byte of
the summing register no matter how wide it is. For example, in the second example above, the summing
register could be a Megabyte wide, and the error would still go undetected. This problem can only be solved
by replacing the simple summing formula with a more sophisticated formula that causes each incoming byte
to have an effect on the entire checksum register. Thus, at least two aspects are required to form a strong
checksum function:
• WIDTH: A register width wide enough to provide a low a-priori probability of failure (e.g. 32-bits gives
a 1/2^32 chance of failure).
• CHAOS: A formula that gives each input byte the potential to change any number of bits in the register.

Note: The term "checksum" was presumably used to describe early summing formulas, but has now taken on
a more general meaning encompassing more sophisticated algorithms such as the Cyclic Redundancy
Codes (CRC). The CRC algorithms can be adjusted to satisfy the second condition very well, and can be
configured to operate with a variety of checksum widths.

The Basic Principle of CRC
While addition is clearly not strong enough to form an effective checksum, it turns out that division is,
so long as the divisor is about as wide as the checksum register. The basic idea of CRC algorithms is simply
to treat the message as an enormous binary number, to divide it by another fixed binary number, and to make
the remainder from this division the checksum. Upon receipt of the message, the receiver can perform the
same division and compare the remainder with the "checksum" (transmitted remainder).
Example: Suppose the message consisted of the two bytes (the decimal numbers 6 and 23), as in the previous
example. These can be viewed as the hexadecimal number 0617, i.e. the binary number 0000-0110-0001-0111.
Suppose that we use a checksum register one byte wide and a constant divisor of 1001 (binary); then the
checksum is the remainder after 0000-0110-0001-0111 is divided by 1001, which is 0010 (binary). Therefore,
the transmitted message would look like this (in hexadecimal): 06172 (where the 0617 is the message and
the 2 is the checksum). The receiver would divide 0617 by 9 and see whether the
remainder was 2.
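The arithmetic of this example is easy to check (0x0617 is the message, 9 the divisor):

```python
# Message bytes 6 and 23 form the 16-bit number 0x0617; dividing by the
# 4-bit divisor 1001 (binary, i.e. 9) leaves the checksum as remainder.
message = 0x0617
divisor = 0b1001

checksum = message % divisor
print(checksum)                      # 2

# The receiver repeats the division and compares remainders.
assert (message % divisor) == checksum
```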
Although the effect of each bit of the input message on the quotient is not all that significant, the 4-bit
remainder gets kicked about quite a lot during the calculation, and if more bytes were added to the message
(dividend) its value could change radically again very quickly. This is why division works where addition does not.
While the division scheme described in the previous section is very similar to the checksumming
schemes called CRC schemes, the CRC schemes are in fact a bit weirder. The key concept behind the CRC
algorithms is summed up by the word "polynomial". A given CRC algorithm will be said to be using a
particular polynomial, and CRC algorithms in general are said to be operating using polynomial arithmetic.
What does this mean?
Instead of the divisor, dividend (message), quotient, and remainder (as described in the previous
section) being viewed as positive integers, they are viewed as polynomials with binary coefficients. This is
done by treating each number as a bit-string whose bits are the coefficients of a polynomial. For example, the
ordinary number 23 (decimal) is 17 (hex) and 10111 binary and so it corresponds to the polynomial:

1*x^4 + 0*x^3 + 1*x^2 + 1*x^1 + 1*x^0

or, more simply:
x^4 + x^2 + x^1 + x^0

Using this technique, the message, and the divisor can be represented as polynomials and we can do
all our arithmetic just as before, except that now it's all cluttered up with Xs. For example, suppose one
wanted to multiply 1101 by 1011. This can be simply done by multiplying the polynomials:
(x^3 + x^2 + x^0)(x^3 + x^1 + x^0)=
=(x^6 + x^4 + x^3 + x^5 + x^3 + x^2 + x^3 + x^1 + x^0)=
= x^6 + x^5 + x^4 + 3*x^3 + x^2 + x^1 + x^0

At this point, to get the right answer, we have to pretend that x is 2 and propagate binary carries from
the 3*x^3 yielding: x^7 + x^3 + x^2 + x^1 + x^0. It's just like ordinary arithmetic except that the base is
abstracted and brought into all the calculations explicitly instead of being there implicitly. So what's the point?
The point is that IF we pretend that we DON'T know what x is, we CAN'T perform the carries. We
don't know that 3*x^3 is the same as x^4 + x^3 because we don't know that x is 2. In this true polynomial
arithmetic the relationship between all the coefficients is unknown and so the coefficients of each power
effectively become isolated from the others. Coefficients of x^2 are effectively of a different type to
coefficients of x^3, for example. With the coefficients of each power nicely isolated, mathematicians came
up with all sorts of different kinds of polynomial arithmetics simply by changing the rules about how
coefficients work. Of these schemes, one in particular is relevant here, and that is a polynomial arithmetic
where the coefficients are calculated MOD 2 and there is no carry; all coefficients must be either 0 or 1 and
no carries are calculated. This is called "polynomial arithmetic mod 2".
Under normal arithmetic, the 3*x^3 term was propagated via the carry mechanism using the
knowledge that x=2. Under "polynomial arithmetic mod 2", one doesn't know what x is, there are no carries,
and all coefficients have to be calculated mod 2. Thus, the previous example becomes:
(x^3 + x^2 + x^0)(x^3 + x^1 + x^0) = x^6 + x^5 + x^4 + x^3 + x^2 + x^1 + x^0

Thus, polynomial arithmetic mod 2 is just binary arithmetic mod 2 with no carries. While polynomials
provide useful mathematical machinery in more analytical approaches to CRC and error-correction
algorithms, for the purposes of exposition they provide no extra insight and some encumbrance, and they have
been discarded in the remainder of these handout notes in favour of direct manipulation of binary
arithmetic with no carry.

Binary Arithmetic with No Carries

Having dispensed with polynomials, we can focus on the real arithmetic issue, which is that all the
arithmetic performed during CRC calculations is performed in binary with no carries. Adding two numbers
in CRC arithmetic is the same as adding numbers in ordinary binary arithmetic except that there is no carry. This
means that each pair of corresponding bits determines the corresponding output bit without reference to any
other bit positions.


There are only four cases for each bit position:

0+0=0
0+1=1
1+0=1
1+1=0 (no carry)

Subtraction is identical:

0-0=0
0-1=1 (wraparound)
1-0=1
1-1=0

In fact, both addition and subtraction in CRC arithmetic are equivalent to the XOR operation, and the
XOR operation is its own inverse. This effectively reduces the operations of the first order of power
(addition, subtraction) to a single operation that is its own inverse. This is a very convenient property of the
arithmetic.
By collapsing addition and subtraction, the arithmetic discards any notion of magnitude beyond the
power of its highest one bit. While it seems clear that 1010 is greater than 10, it is no longer the case that
1010 can be considered to be greater than 1001. To see this, note that you can get from 1010 to 1001 by both
adding and subtracting the same quantity:
1001 = 1010 + 0011
1001 = 1010 - 0011
This makes nonsense of any notion of order.
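Since addition and subtraction both collapse to XOR, and XOR is its own inverse, the example above can be checked directly in Python:

```python
# In CRC (mod-2, no-carry) arithmetic, both addition and subtraction are XOR.
a, b = 0b1010, 0b0011
add = a ^ b   # "1010 + 0011"
sub = a ^ b   # "1010 - 0011" -- identical to addition
assert add == sub == 0b1001

# XOR is its own inverse: adding the same quantity again undoes it,
# which is why 1001 = 1010 + 0011 and 1001 = 1010 - 0011 both hold.
assert (add ^ b) == a
print(bin(add))  # 0b1001
```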

Having defined addition, we can move to multiplication and division. Multiplication is absolutely
straightforward, being the sum of the first number, multiplied by the bits of the second, shifted and added in
accordance with the previous rules.

        1101
      x 1011
      ------
        1101
       1101.
      0000..
     1101...
     -------
     1111111

Note: the column sums are XORs, so there are no carries.
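The shift-and-XOR multiplication above can be sketched in Python; `clmul` (carry-less multiply) is an illustrative name, not a standard library function:

```python
def clmul(a, b):
    """Carry-less (mod-2 polynomial) multiplication of two integers."""
    result = 0
    shift = 0
    while b:
        if b & 1:                  # for every 1 bit of b...
            result ^= a << shift   # ...XOR in a shifted copy of a (no carries)
        b >>= 1
        shift += 1
    return result

print(bin(clmul(0b1101, 0b1011)))  # 0b1111111
```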

Division is a little messier as we need to know when one number "goes into" another number. To solve
the problem, a so-called "weak" definition of magnitude can be invoked: X is greater than or equal to Y if
and only if the position of the highest 1 bit of X is the same as or greater than the position of the highest 1 bit of
Y. Adopting this definition makes the subtraction possible again.

101101110011 : 1001 = 101000110 remainder 101
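The long division under this "weak" definition of magnitude can be sketched as follows (the helper name `clmod_divmod` is ours):

```python
def clmod_divmod(dividend, divisor):
    """Mod-2 (carry-less) long division returning (quotient, remainder).
    The 'weak' magnitude test is a comparison of highest-1-bit positions,
    i.e. of bit lengths."""
    q = 0
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:          # dividend "greater or equal"
        shift = dividend.bit_length() - dlen      # align the highest 1 bits
        q |= 1 << shift                           # record the quotient bit
        dividend ^= divisor << shift              # mod-2 subtraction
    return q, dividend

q, r = clmod_divmod(0b101101110011, 0b1001)
print(bin(q), bin(r))  # 0b101000110 0b101
```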


The issue of divisibility in polynomial modulo 2 arithmetic is the same as in normal arithmetic:
one number is divisible by another if the remainder generated by the division is zero. As a consequence of
the mathematical properties of the new arithmetic described, if M and Y are two numbers and M=Z*Y+R (R
is the remainder of the division M/Y) then M+R is divisible by Y.
The polynomial division is used to generate the checksum for the CRC algorithms. A typical CRC
algorithm consists of two stages:
• The calculation of the checksum before message transmission. The message is considered a
polynomial and the checksum is calculated as the remainder generated by the division of this
polynomial by a so-called "generator polynomial". However, the message is padded with a few
final zeroes before division. Thus, if the message is M=1101001 and the generator polynomial is
100101, then the message becomes M’=110100100000 before division. The number of final zeroes
equals the order N of the generator polynomial; in the example above N=5. The calculated checksum
(remainder) is added to the message to ensure that the transmitted block of bits (M’+R) is divisible
by the generator polynomial Y. In the example above the remainder is R=11110 and the transmitted
bits are 110100111110.
• The check for errors at the reception. The received block of bits M’ is considered as a number
that is divided by Y using polynomial modulo 2 arithmetic and the remainder is verified. If the
remainder is “0000…000” then the reception is correct. Otherwise, an error has occurred during
the transmission.
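The two stages can be sketched in Python using the example values above (the function name `crc_remainder` is ours, purely for illustration):

```python
def crc_remainder(bits, gen):
    """Remainder of mod-2 (carry-less) division of bits by gen."""
    glen = gen.bit_length()
    while bits.bit_length() >= glen:
        bits ^= gen << (bits.bit_length() - glen)
    return bits

gen = 0b100101                            # generator polynomial, order N = 5
msg = 0b1101001                           # message M
padded = msg << (gen.bit_length() - 1)    # pad with N zeroes: M' = 110100100000

# Stage 1 (sender): checksum = remainder of the padded message.
r = crc_remainder(padded, gen)            # 11110
transmitted = padded | r                  # M' + R = 110100111110

# Stage 2 (receiver): a zero remainder means no detected error.
assert crc_remainder(transmitted, gen) == 0
print(bin(transmitted), bin(r))
```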

The CRC algorithm has the advantage that it can be implemented much more simply than normal division
of binary numbers. Fig. 3-38 presents a CRC implementation where the generator polynomial is 10100101.
The circuit is constructed only with a PIPO register and three XOR gates.

Fig. 3-38 – Error detector using a CRC algorithm

Serial Adders

A serial adder uses a single full adder that is supplied with bits in a step-by-step manner by two serial
registers storing the two operands. The carry information is stored in a D flip-flop connecting the carry-out
pin of the full adder to the carry-in pin. An adequate number of control pulses controlling the register
operation is generated by means of a down counter whose least significant bit plays the role of the clock
signal. The counting process is started by the reset signal, which loads a predetermined constant into the
counter; the counter is designed to stop when the value zero is attained.
Example: For an 8-bit serial adder a 5-bit counter can be used. The counter is loaded with the number 16
(10000 in binary) during reset so that the least significant bit generates 8 rising edges (16→15, 14→13, 12→11, …, 2→1).
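The serial adder's per-clock behaviour can be simulated in Python. This is a behavioural sketch of the single full adder plus carry flip-flop, not a description of the actual circuit of Fig. 3-39; operands are lists of bits shifted out LSB first:

```python
def serial_add(a_bits, b_bits):
    """Bit-serial addition, LSB first: one full adder plus a carry flip-flop."""
    carry = 0                          # the D flip-flop holding the carry
    result = []
    for a, b in zip(a_bits, b_bits):   # one iteration per clock pulse
        s = a ^ b ^ carry              # full-adder sum output
        carry = (a & b) | (a & carry) | (b & carry)  # full-adder carry-out
        result.append(s)               # sum bit shifted into the result register
    return result

# 8-bit example: 91 (01011011) + 46 (00101110), bits listed LSB first
a = [1, 1, 0, 1, 1, 0, 1, 0]
b = [0, 1, 1, 1, 0, 1, 0, 0]
s = serial_add(a, b)
print(sum(bit << i for i, bit in enumerate(s)))  # 137
```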
Question: What type of counter and what initialisation value should be used for a 20-bit serial adder?


Fig. 3-39 – Serial adder