
Calculating the Hamming Code

The key to the Hamming Code is the use of extra parity bits to allow the identification of a
single error. Create the code word as follows:
1. Mark all bit positions that are powers of two as parity bits. (positions 1, 2, 4, 8, 16, 32,
64, etc.)
2. All other bit positions are for the data to be encoded. (positions 3, 5, 6, 7, 9, 10, 11,
12, 13, 14, 15, 17, etc.)
3. Each parity bit calculates the parity for some of the bits in the code word. The
position of the parity bit determines the sequence of bits that it alternately checks and
skips.
Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1,3,5,7,9,11,13,15,...)
Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc.
(2,3,6,7,10,11,14,15,...)
Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc.
(4,5,6,7,12,13,14,15,20,21,22,23,...)
Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15,24-31,40-
47,...)
Position 16: check 16 bits, skip 16 bits, check 16 bits, skip 16 bits, etc. (16-31,48-
63,80-95,...)
Position 32: check 32 bits, skip 32 bits, check 32 bits, skip 32 bits, etc. (32-63,96-
127,160-191,...)
etc.
4. Set a parity bit to 1 if the total number of ones in the positions it checks is odd. Set a
parity bit to 0 if the total number of ones in the positions it checks is even.
Here is an example:
A byte of data: 10011010
Create the data word, leaving spaces for the parity bits: _ _ 1 _ 0 0 1 _ 1 0 1 0
Calculate the parity for each parity bit (a ? represents the bit position being set):
• Position 1 checks bits 1,3,5,7,9,11:
? _ 1 _ 0 0 1 _ 1 0 1 0. Even parity so set position 1 to a 0: 0 _ 1 _ 0 0 1 _ 1 0 1 0
• Position 2 checks bits 2,3,6,7,10,11:
0 ? 1 _ 0 0 1 _ 1 0 1 0. Odd parity so set position 2 to a 1: 0 1 1 _ 0 0 1 _ 1 0 1 0
• Position 4 checks bits 4,5,6,7,12:
0 1 1 ? 0 0 1 _ 1 0 1 0. Odd parity so set position 4 to a 1: 0 1 1 1 0 0 1 _ 1 0 1 0
• Position 8 checks bits 8,9,10,11,12:
0 1 1 1 0 0 1 ? 1 0 1 0. Even parity so set position 8 to a 0: 0 1 1 1 0 0 1 0 1 0 1 0
• Code word: 011100101010.
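The four steps above can be sketched in a few lines of Python. This is a sketch, not production code; bit lists are most-significant-first, even parity, 1-based positions, matching the example:

```python
def hamming_encode(data_bits):
    """Build a Hamming codeword (even parity, 1-based positions) from a list
    of data bits, e.g. [1,0,0,1,1,0,1,0] for the byte 10011010."""
    code = {}
    bits = iter(data_bits)
    pos, placed = 1, 0
    while placed < len(data_bits):
        if pos & (pos - 1) == 0:          # power of two: reserve for a parity bit
            code[pos] = 0
        else:                             # otherwise: place the next data bit
            code[pos] = next(bits)
            placed += 1
        pos += 1
    n = pos - 1
    p = 1
    while p <= n:
        # Set parity bit p to the even parity of the positions it checks:
        # every position whose binary form has the bit of p set.
        code[p] = sum(code[i] for i in range(1, n + 1) if i & p and i != p) % 2
        p <<= 1
    return "".join(str(code[i]) for i in range(1, n + 1))

print(hamming_encode([1, 0, 0, 1, 1, 0, 1, 0]))   # → 011100101010
```

Run on the example byte 10011010, it reproduces the code word 011100101010 computed above.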
Finding and fixing a bad bit
The above example created a code word of 011100101010. Suppose the word that was
received was 011100101110 instead. Then the receiver could calculate which bit was wrong
and correct it. The method is to verify each check bit, and write down all the parity bits that
are incorrect. Doing so, you will discover that parity bits 2 and 8 are incorrect. It is no
accident that 2 + 8 = 10: bit position 10 is the location of the bad bit. In general, check each
parity bit and add up the positions of those that are wrong; the sum is the location of the bad bit.
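This check-and-sum procedure can be sketched as follows (a sketch; positions are 1-based, with parity bits at the powers of two):

```python
def hamming_syndrome(codeword):
    """Return the sum of the parity positions whose check fails.
    0 means no error; otherwise the sum is the position of the bad bit."""
    bits = [int(b) for b in codeword]
    n = len(bits)
    syndrome, p = 0, 1
    while p <= n:
        # Even parity over every position covered by parity bit p (p included).
        if sum(bits[i - 1] for i in range(1, n + 1) if i & p) % 2:
            syndrome += p
        p <<= 1
    return syndrome

received = "011100101110"            # the corrupted word from the example
bad = hamming_syndrome(received)     # → 10
fixed = list(received)
fixed[bad - 1] = str(1 - int(fixed[bad - 1]))   # complement the bad bit
print("".join(fixed))                # → 011100101010
```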
Try one yourself
Test whether these code words are correct, assuming they were created using an even-parity
Hamming code. If one is incorrect, indicate what the correct code word should have been.
Also indicate what the original data was.
• 010101100011
• 111110001100
• 000010001010

Hamming Code (1 bit error correction)


Achieves the theoretical limit for minimum number of check bits to do 1-bit error-
correction.

Bits of codeword are numbered: bit 1, bit 2, ..., bit n.


Check bits are inserted at positions 1,2,4,8,.. (all powers of 2).
The rest are the m data bits.
Each check bit checks (as parity bit) a number of data bits.
Each check bit checks a different collection of data bits.
Check bits only check data, not other check bits.

Hamming Codes used in:


Wireless comms, e.g. Fixed wireless broadband. High error rate. Need correction
not detection.

Any number can be written as a sum of powers of 2

First note every number can be written in base 2 as a sum of powers of 2
multiplied by 0 or 1.
i.e. As a simple sum of powers of 2.

Number is sum of these:    1    2    4    8   16

Number:
 1                         x
 2                              x
 3                         x    x
 4                                   x
 5                         x         x
 6                              x    x
 7                         x    x    x
 8                                        x
 9                         x              x
10                              x         x
11                         x    x         x
12                                   x    x
13                         x         x    x
14                              x    x    x
15                         x    x    x    x
16                                             x
17                         x                   x
18                              x              x
19                         x    x              x
20                                   x         x
21                         x         x         x
22                              x    x         x
23                         x    x    x         x
24                                        x    x
25                         x              x    x
26                              x         x    x
27                         x    x         x    x
28                                   x    x    x
29                         x         x    x    x
30                              x    x    x    x
31                         x    x    x    x    x
...

Scheme for check bits


Now here is our scheme for which bits each check bit checks:

Checked by check bit:      1    2    4    8   16

Bit:
 1   (not applicable - this is a check bit)
 2   (n/a)
 3                         x    x
 4   (n/a)
 5                         x         x
 6                              x    x
 7                         x    x    x
 8   (n/a)
 9                         x              x
10                              x         x
11                         x    x         x
12                                   x    x
13                         x         x    x
14                              x    x    x
15                         x    x    x    x
16   (n/a)
17                         x                   x
18                              x              x
19                         x    x              x
20                                   x         x
21                         x         x         x
22                              x    x         x
23                         x    x    x         x
24                                        x    x
25                         x              x    x
26                              x         x    x
27                         x    x         x    x
28                                   x    x    x
29                         x         x    x    x
30                              x    x    x    x
31                         x    x    x    x    x
32   (n/a)
Check bit records odd or even parity of all the bits it covers, so any one-bit error
in the data will lead to error in the check bit.

Assume one-bit error:


If any data bit bad, then multiple check bits will be bad (never just one check bit).

How it works
21 (as sum of powers of 2) = 1 + 4 + 16
Bit 21 is checked by check bits 1, 4 and 16.
No other bit is checked by exactly these 3 check bits.
If assume one-bit error, then if exactly these 3 check bits are bad, then we know
that data bit 21 was bad and no other.

Assume one-bit error:


1. Error in a data bit:
Will cause multiple errors in check bits. Will cause errors in exactly the
check bits that correspond to the powers of 2 that the bit number can be
written as a sum of.
2. Error in a check bit:
Will affect nothing except that check bit. i.e. One bad check bit (not
multiple bad check bits as above).

Hamming Code example for 3-bit data


Consider standard encoding of numbers 0 to 7:

000
001
010
011
100
101
110
111
(bits 1 to 3).
Encode this such that a 1 bit error can be detected and corrected.

Add check bits (check bits c in positions 1, 2, 4; data in positions 3, 5, 6):

c c 0 c 0 0
c c 0 c 0 1
c c 0 c 1 0
c c 0 c 1 1
c c 1 c 0 0
c c 1 c 0 1
c c 1 c 1 0
c c 1 c 1 1

(now have bits 1 to 6).

Check bit 1 looks at bits 3, 5.

If the number of 1s is even, set check bit to 0.
If the number of 1s is odd, set check bit to 1.

0 c 0 c 0 0
0 c 0 c 0 1
1 c 0 c 1 0
1 c 0 c 1 1
1 c 1 c 0 0   (flip of the previous 4 in check bit 1)
1 c 1 c 0 1
0 c 1 c 1 0
0 c 1 c 1 1

Check bit 2 looks at bits 3, 6.

If the number of 1s is even, set check bit to 0.
If the number of 1s is odd, set check bit to 1.

0 0 0 c 0 0
0 1 0 c 0 1
1 0 0 c 1 0
1 1 0 c 1 1
1 1 1 c 0 0   (flip of the previous 4 in check bit 2)
1 0 1 c 0 1
0 1 1 c 1 0
0 0 1 c 1 1
Check bit 4 looks at bits 5, 6.

If the number of 1s is even, set check bit to 0.
If the number of 1s is odd, set check bit to 1.

0 0 0 0 0 0
0 1 0 1 0 1
1 0 0 1 1 0
1 1 0 0 1 1
1 1 1 0 0 0
1 0 1 1 0 1
0 1 1 1 1 0
0 0 1 0 1 1

Error detection:

Distance from pattern:

                  0  1  2  3  4  5  6  7
Pattern:
0 000000          -  3  3  4  3  4  4  3
1 010101          3  -  4  3  4  3  3  4
2 100110          3  4  -  3  4  3  3  4
3 110011          4  3  3  -  3  4  4  3
4 111000          3  4  4  3  -  3  3  4
5 101101          4  3  3  4  3  -  4  3
6 011110          4  3  3  4  3  4  -  3
7 001011          3  4  4  3  4  3  3  -

Minimum distance 3.
If assume only 1 bit error, can always tell which pattern nearest.
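The distance claims can be verified mechanically; a minimal sketch over the eight patterns:

```python
patterns = ["000000", "010101", "100110", "110011",
            "111000", "101101", "011110", "001011"]

def distance(a, b):
    """Hamming distance: number of positions where the strings differ."""
    return sum(x != y for x, y in zip(a, b))

min_d = min(distance(a, b) for a in patterns for b in patterns if a != b)
print(min_d)   # → 3
```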

Error correction:
List all patterns and find the nearest one? That is computationally expensive, especially with
longer strings (many more patterns).
With Hamming, we can find the nearest quickly by just looking at one pattern:

1. Let's say error in a data bit:


100 sent
111000
became: 111001
i.e. data 101, but check bits wrong
Check bit 1 - 1 - checks bits 3,5 - 1 0 - OK
Check bit 2 - 1 - checks bits 3,6 - 1 1 - WRONG
Check bit 4 - 0 - checks bits 5,6 - 0 1 - WRONG
The bad bit is bit 2 + 4 = bit 6. Data was corrupted. Data should be 100.

2. Let's say error in a check bit:


100 sent
111000
became: 011000
i.e. data 100, but check bits wrong
Check bit 1 - 0 - checks bits 3,5 - 1 0 - WRONG
Check bit 2 - 1 - checks bits 3,6 - 1 0 - OK
Check bit 4 - 0 - checks bits 5,6 - 0 0 - OK
The bad bit is bit 1. Check bit was corrupted. Data is fine.

Summary
If assume 1-bit error:

1. If 1 check bit bad:


Data is good, check bit itself got corrupted.
Ignore check bits. Data is good.
2. If more than 1 check bit bad:
Data in error (single-bit error in data). Which check bits are bad shows you
exactly where the data error was. They point to a unique bit which is the
bit in error.
Can reconstruct data.

i.e. If 1 bit error - can always tell what original pattern was.
This, by the way, proves that distance between two patterns must be at least 3.
Q. Any other way of proving distance >= 3?

• Digital Communications course by Richard Tervo


○ Intro to Hamming codes
○ CGI script for Hamming codes

Q. Show that the Hamming code actually achieves the theoretical limit for the
minimum number of check bits to do 1-bit error-correction.

Example
Hamming code to correct burst errors
Basic Hamming code above corrects 1-bit errors only.
Trick to use it to correct burst errors:

Consider sending k codewords, each length n.


Arrange in matrix (as in diagram), each row is a codeword.
Matrix width n, height k.
Normally would transmit this row-by-row.
Trick: Transmit column-by-column.
If a burst of length k occurs in the entire k x n block (and no other errors), at most 1 bit is
affected in each codeword.
So the Hamming code can reconstruct each codeword.
So the Hamming code can reconstruct the whole block.

(In the accompanying diagram, the burst error is shown in yellow.)

Uses kr check bits to make blocks of km data bits immune to a single burst error
of up to length k.
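The column-by-column trick can be sketched as follows (a sketch; each row of `codewords` is assumed to be an already-encoded Hamming codeword):

```python
def interleave(codewords):
    """Transmit the k x n block column-by-column: consecutive transmitted
    bits come from different codewords, so a burst of length <= k hits
    each codeword at most once."""
    k, n = len(codewords), len(codewords[0])
    return [codewords[r][c] for c in range(n) for r in range(k)]

def deinterleave(stream, k):
    """Undo interleave(): row r is every k-th bit starting at offset r."""
    return [stream[r::k] for r in range(k)]

block = [[0, 1, 1, 1, 0, 0],
         [1, 0, 0, 1, 1, 0],
         [1, 1, 0, 0, 1, 1]]
sent = interleave(block)
assert deinterleave(sent, k=3) == block
```

After de-interleaving, any burst of length up to k has corrupted at most one bit per row, which the per-codeword Hamming correction then repairs.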
EE4253 Digital Communications

Department of Electrical and Computer Engineering - University of New Brunswick, Fredericton, NB,
Canada

Error Correction and the Hamming Code


The use of simple parity allows detection of single bit errors in a received message.
Correction of such errors requires more information, since the position of the bad bit must be
identified if it is to be corrected. (If a bad bit can be found, then it can be corrected by simply
complementing its value.) Correction is not possible with one parity bit since any bit error in
any position produces exactly the same information - "bad parity".
If more bits are included with a message, and if those bits can be arranged such that different
errored bits produce different error results, then bad bits could be identified. In a 7-bit
message, there are seven possible single bit errors, so three error control bits could potentially
specify not only that an error occurred but also which bit caused the error.
Similarly, if a family of codewords is chosen such that the minimum distance between valid
codewords is at least 3, then single bit error correction is possible. This distance approach is
"geometric", while the above error-bit argument is "algebraic".
Either of the above arguments serves to introduce the Hamming Code, an error control
method allowing correction of single bit errors.

The Hamming Code


Consider a message having four data bits (D) which is to be transmitted as a 7-bit codeword
by adding three error control bits. This would be called a (7,4) code. The three bits to be
added are three EVEN Parity bits (P), where the parity of each is computed on different
subsets of the message bits as shown below.
7 6 5 4 3 2 1

D D D P D P P 7-BIT CODEWORD

D - D - D - P (EVEN PARITY)

D D - - D P - (EVEN PARITY)

D D D P - - - (EVEN PARITY)
Why Those Bits? - The three parity bits (1,2,4) are related to
the data bits (3,5,6,7) as shown at right. In this diagram, each
overlapping circle corresponds to one parity bit and defines the
four bits contributing to that parity computation. For example,
data bit 3 contributes to parity bits 1 and 2. Each circle (parity
bit) encompasses a total of four bits, and each circle must have
EVEN parity. Given four data bits, the three parity bits can
easily be chosen to ensure this condition.
It can be observed that changing any one bit numbered 1..7
uniquely affects the three parity bits. Changing bit 7 affects all
three parity bits, while an error in bit 6 affects only parity bits
2 and 4, and an error in a parity bit affects only that bit. The
location of any single bit error is determined directly upon
checking the three parity circles.

For example, the message 1101 would be sent as 1100110,


since:
7 6 5 4 3 2 1

1 1 0 0 1 1 0 7-BIT CODEWORD

1 - 0 - 1 - 0 (EVEN PARITY)

1 1 - - 1 1 - (EVEN PARITY)

1 1 0 0 - - - (EVEN PARITY)

When these seven bits are entered into the parity circles, it can
be confirmed that the choice of these three parity bits ensures
that the parity within each circle is EVEN, as shown here.
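The encoding under this bit numbering (data in bits 7, 6, 5, 3; even-parity bits in 4, 2, 1) can be sketched as follows; the function name is illustrative:

```python
def encode74(d7, d6, d5, d3):
    """(7,4) Hamming encoding with data bits in positions 7, 6, 5, 3 and
    even-parity bits in positions 4, 2, 1. Returns bits 7 down to 1."""
    p1 = d3 ^ d5 ^ d7        # parity circle over bits 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7        # parity circle over bits 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7        # parity circle over bits 4, 5, 6, 7
    return [d7, d6, d5, p4, d3, p2, p1]

print(encode74(1, 1, 0, 1))   # → [1, 1, 0, 0, 1, 1, 0], i.e. 1100110
```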
It may now be observed that if an error occurs in any of the seven bits, that error will affect
different combinations of the three parity bits depending on the bit position.
For example, suppose the above message 1100110 is sent and a single bit error occurs such
that the codeword 1110110 is received:
transmitted message received message
1 1 0 0 1 1 0 ------------> 1 1 1 0 1 1 0
BIT: 7 6 5 4 3 2 1 BIT: 7 6 5 4 3 2 1
The above error (in bit 5) can be corrected by examining which of the three parity bits was
affected by the bad bit:

7 6 5 4 3 2 1

1 1 1 0 1 1 0 7-BIT CODEWORD

1 - 1 - 1 - 0 (EVEN PARITY) NOT! 1

1 1 - - 1 1 - (EVEN PARITY) OK! 0


1 1 1 0 - - - (EVEN PARITY) NOT! 1

In fact, the bad parity bits labelled 101 point directly to the bad bit since 101 binary equals
5. Examination of the 'parity circles' confirms that any single bit error could be corrected in
this way.
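The correction step can be sketched the same way: recompute the three parity circles over the received word and read the failing checks as a binary number naming the bad bit (a sketch; the function name is illustrative):

```python
def correct74(word):
    """Correct a single bit error in a received (7,4) codeword given as
    bits 7 down to 1 (data in bits 7, 6, 5, 3; parity in 4, 2, 1)."""
    b = {7 - i: word[i] for i in range(7)}     # b[n] = bit in position n
    s4 = b[4] ^ b[5] ^ b[6] ^ b[7]             # parity circle 4, 5, 6, 7
    s2 = b[2] ^ b[3] ^ b[6] ^ b[7]             # parity circle 2, 3, 6, 7
    s1 = b[1] ^ b[3] ^ b[5] ^ b[7]             # parity circle 1, 3, 5, 7
    bad = 4 * s4 + 2 * s2 + s1                 # failing circles name the bit
    if bad:
        b[bad] ^= 1                            # complement the bad bit
    return [b[i] for i in (7, 6, 5, 4, 3, 2, 1)]

print(correct74([1, 1, 1, 0, 1, 1, 0]))   # → [1, 1, 0, 0, 1, 1, 0]
```

On the example received word 1110110, the failing circles read 101 binary = 5, and flipping bit 5 restores 1100110.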
The value of the Hamming code can be summarized:
1. Detection of 2 bit errors (assuming no correction is attempted);
2. Correction of single bit errors;
3. Cost of 3 bits added to a 4-bit message.
The ability to correct single bit errors comes at a cost which is less than sending the entire
message twice. (Recall that simply sending a message twice accomplishes no error
correction.)

Hamming Distance = 3
The Hamming Code allows error correction because the minimum distance between any two
valid codewords is 3. In the figure below, two valid codewords in 8 possible 3-bit codewords
are arranged to have a distance of 3 between them. It takes 3 bit changes (errors) to move
from one valid codeword 000 to the other 111. If the codeword 000 is transmitted and a
single bit error occurs, the received word must be one of {001,010,100}, any of which is
easily identified as an invalid codeword, and which could only have been 000 before
transmission.

Sixteen Valid Codewords


The Hamming Code essentially defines 16 valid codewords within all 128 possible 7-bit
codewords. The sixteen words are arranged such that the minimum distance between any two
words is 3. These words are shown in this table, where it is left as an exercise to check that
from any codeword N={0..F} in the table to any other word M, the distance is at least 3.

        7 6 5 4 3 2 1
    0   0 0 0 0 0 0 0
    1   0 0 0 0 1 1 1
    2   0 0 1 1 0 0 1
    3   0 0 1 1 1 1 0
    4   0 1 0 1 0 1 0
    5   0 1 0 1 1 0 1
    6   0 1 1 0 0 1 1
    7   0 1 1 0 1 0 0
    8   1 0 0 1 0 1 1
    9   1 0 0 1 1 0 0
    A   1 0 1 0 0 1 0
    B   1 0 1 0 1 0 1
    C   1 1 0 0 0 0 1
    D   1 1 0 0 1 1 0
    E   1 1 1 1 0 0 0
    F   1 1 1 1 1 1 1

Example:
For N=3, codeword 3 = 0011110 is expected to be a distance of at least 3 from all the other
codewords.
The distance is 4 between 3 = 0011110 and 0 = 0000000
The distance is 3 between 3 = 0011110 and 1 = 0000111
The distance is 3 between 3 = 0011110 and 2 = 0011001
...
The distance is 4 between 3 = 0011110 and D = 1100110
The distance is 4 between 3 = 0011110 and E = 1111000
The distance is 3 between 3 = 0011110 and F = 1111111
Therefore, codeword 3 is a distance of at least 3 from any other valid codeword.
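The exercise can also be checked by brute force over the sixteen words:

```python
codewords = ["0000000", "0000111", "0011001", "0011110",
             "0101010", "0101101", "0110011", "0110100",
             "1001011", "1001100", "1010010", "1010101",
             "1100001", "1100110", "1111000", "1111111"]

def dist(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

# Minimum distance over all distinct pairs of valid codewords.
min_d = min(dist(a, b) for a in codewords for b in codewords if a < b)
print(min_d)   # → 3
```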

The Distance Argument

Looking again at the Venn diagram (at right) it can be observed


that a change in any of the data bits (3,5,6,7) necessarily changes
at least two other bits in the codeword. For example, given a
valid Hamming codeword, a change in bit 3 changes three bits
(1,2,3) such that the new codeword is a distance (d=3) from the
initial word. The clever arrangement of the Hamming
codewords ensures that this is the case for every valid
codeword in the set.

A Final Note
Any set of codewords is useful for error control provided that the minimum distance between
any two of them is some value D. (some may be more than D but none will be less than D)
So there is no unique set of codewords with L=7 and D=3. The Hamming code shown here
(L=7,D=3) is useful because it is easy to generate and to check this particular set of
codewords by hand. The distances would still be the same if you swapped two columns or
complemented the bits in any column, but the codewords would look very different (and the
Venn diagrams would not work!).
To explore this subject further, continue with Hamming Code Revisited
To see sets of codewords with various distance properties, Online Hamming Code Tool

The value of carefully choosing error control schemes is self-evident in the Hamming Code.
Still, for very long messages another approach is desirable.

Tue Feb 8 21:23:10 AST 2011


Richard Tervo [ tervo@unb.ca ] Back to the course homepage...
Last Updated: 19 SEP 2005

Hamming code
From Wikipedia, the free encyclopedia


In telecommunication, a Hamming code is a linear error-correcting code named after its


inventor, Richard Hamming. Hamming codes can detect up to two simultaneous bit errors,
and correct single-bit errors; thus, reliable communication is possible when the Hamming
distance between the transmitted and received bit patterns is less than or equal to one. By
contrast, the simple parity code cannot correct errors, and can only detect an odd number of
errors.
In mathematical terms, Hamming codes are a class of binary linear codes. For each integer
m ≥ 2 there is a code with m parity bits and 2^m − m − 1 data bits. The parity-check matrix
of a Hamming code is constructed by listing all columns of length m that are pairwise
independent. Hamming codes are an example of perfect codes, codes that exactly match the
theoretical upper bound on the number of distinct code words for a given number of bits and
ability to correct errors.
Because of the simplicity of Hamming codes, they are widely used in computer memory
(RAM), in particular in a single-error-correcting and double-error-detecting variant
commonly referred to as SECDED.

Contents
• 1 History
○ 1.1 Codes predating Hamming
 1.1.1 Parity
 1.1.2 Two-out-of-five code
 1.1.3 Repetition
• 2 Hamming codes
○ 2.1 General algorithm
• 3 Hamming codes with additional parity (SECDED)
• 4 Hamming(7,4) code
○ 4.1 Construction of G and H
○ 4.2 Encoding
○ 4.3 Hamming(7,4) code with an additional parity bit
• 5 See also
• 6 References
• 7 External links

History
Hamming worked at Bell Labs in the 1940s on the Bell Model V computer, an
electromechanical relay-based machine with cycle times in seconds. Input was fed in on
punched cards, which would invariably have read errors. During weekdays, special code
would find errors and flash lights so the operators could correct the problem. During after-
hours periods and on weekends, when there were no operators, the machine simply moved on
to the next job.
Hamming worked on weekends, and grew increasingly frustrated with having to restart his
programs from scratch due to the unreliability of the card reader. Over the next few years he
worked on the problem of error-correction, developing an increasingly powerful array of
algorithms. In 1950 he published what is now known as Hamming Code, which remains in
use today in applications such as ECC memory.
Codes predating Hamming
A number of simple error-detecting codes were used before Hamming codes, but none were
as effective as Hamming codes for the same overhead of space.
Parity
Main article: Parity bit

Parity adds a single bit that indicates whether the number of 1 bits in the preceding data was
even or odd. If an odd number of bits is changed in transmission, the message will change
parity and the error can be detected at this point. (Note that the bit that changed may have
been the parity bit itself!) The most common convention is that a parity value of 1 indicates
that there is an odd number of ones in the data, and a parity value of 0 indicates that there is
an even number of ones in the data. In other words: the data and the parity bit together
should contain an even number of 1s.
Parity checking is not very robust, since if the number of bits changed is even, the check bit
will be valid and the error will not be detected. Moreover, parity does not indicate which bit
contained the error, even when it can detect it. The data must be discarded entirely and re-
transmitted from scratch. On a noisy transmission medium, a successful transmission could
take a long time or may never occur. However, while the quality of parity checking is poor,
since it uses only a single bit, this method results in the least overhead. Furthermore, parity
checking does allow for the restoration of an erroneous bit when its position is known.
Two-out-of-five code
Main article: Two-out-of-five code

A two-out-of-five code is an encoding scheme which uses five digits consisting of exactly
three 0s and two 1s. This provides ten possible combinations, enough to represent the digits 0
- 9. This scheme can detect all single bit-errors and all odd numbered bit-errors. However it
still cannot correct for these errors.
Repetition
Main article: triple modular redundancy

Another code in use at the time repeated every data bit several times in order to ensure that it
got through. For instance, if the data bit to be sent was a 1, an n=3 repetition code would send
"111". If the three bits received were not identical, an error occurred. If the channel is clean
enough, most of the time only one bit will change in each triple. Therefore, 001, 010, and 100
each correspond to a 0 bit, while 110, 101, and 011 correspond to a 1 bit, as though the bits
counted as "votes" towards what the original bit was. A code with this ability to reconstruct
the original message in the presence of errors is known as an error-correcting code. This
triple repetition code is actually the simplest Hamming code with m = 2, since there are 2
parity bits, and 2^2 − 2 − 1 = 1 data bit.
Such codes cannot correctly repair all errors, however. In our example, if the channel flipped
two bits and the receiver got "001", the system would detect the error, but conclude that the
original bit was 0, which is incorrect. If we increase the number of times we duplicate each
bit to four, we can detect all two-bit errors but can't correct them (the votes "tie"); at five, we
can correct all two-bit errors, but not all three-bit errors.
Moreover, the repetition code is extremely inefficient, reducing throughput by three times in
our original case, and the efficiency drops drastically as we increase the number of times each
bit is duplicated in order to detect and correct more errors.
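The "voting" described above can be sketched as follows (a sketch; the function name is illustrative):

```python
def repetition_decode(received, n=3):
    """Decode an n-repetition code by majority vote: each group of n
    received bits 'votes' for the original bit."""
    groups = [received[i:i + n] for i in range(0, len(received), n)]
    return [1 if sum(g) > n // 2 else 0 for g in groups]

# '1' sent as 111 arrives as 011: still decoded as 1, since two of the
# three votes are for 1. A second flipped bit in the triple would win
# the vote for the wrong value, as described above.
print(repetition_decode([0, 1, 1, 0, 0, 0]))   # → [1, 0]
```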
Hamming codes
If more error-correcting bits are included with a message, and if those bits can be arranged
such that different incorrect bits produce different error results, then bad bits could be
identified. In a 7-bit message, there are seven possible single bit errors, so three error control
bits could potentially specify not only that an error occurred but also which bit caused the
error.
Hamming studied the existing coding schemes, including two-of-five, and generalized their
concepts. To start with, he developed a nomenclature to describe the system, including the
number of data bits and error-correction bits in a block. For instance, parity includes a single
bit for any data word, so assuming ASCII words with 7 bits, Hamming described this as an
(8,7) code, with eight bits in total, of which 7 are data. The repetition example would be
(3,1), following the same logic. The code rate is the second number divided by the first, for
our repetition example, 1/3.
Hamming also noticed the problems with flipping two or more bits, and described this as the
"distance" (it is now called the Hamming distance, after him). Parity has a distance of 2, as
any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to
be flipped in the same triple to obtain another code word with no visible errors. A (4,1)
repetition (each bit is repeated four times) has a distance of 4, so flipping two bits can be
detected, but not corrected. When three bits flip in the same group there can be situations
where the code corrects towards the wrong code word.
Hamming was interested in two problems at once; increasing the distance as much as
possible, while at the same time increasing the code rate as much as possible. During the
1940s he developed several encoding schemes that were dramatic improvements on existing
codes. The key to all of his systems was to have the parity bits overlap, such that they
managed to check each other as well as the data.
General algorithm
The following general algorithm generates a single-error correcting (SEC) code for any
number of bits.
1. Number the bits starting from 1: bit 1, 2, 3, 4, 5, etc.
2. Write the bit numbers in binary. 1, 10, 11, 100, 101, etc.
3. All bit positions that are powers of two (have only one 1 bit in the binary
form of their position) are parity bits.
4. All other bit positions, with two or more 1 bits in the binary form of their
position, are data bits.
5. Each data bit is included in a unique set of 2 or more parity bits, as
determined by the binary form of its bit position.
1. Parity bit 1 covers all bit positions which have the least significant
bit set: bit 1 (the parity bit itself), 3, 5, 7, 9, etc.
2. Parity bit 2 covers all bit positions which have the second least
significant bit set: bit 2 (the parity bit itself), 3, 6, 7, 10, 11, etc.
3. Parity bit 4 covers all bit positions which have the third least
significant bit set: bits 4–7, 12–15, 20–23, etc.
4. Parity bit 8 covers all bit positions which have the fourth least
significant bit set: bits 8–15, 24–31, 40–47, etc.
5. In general each parity bit covers all bits where the binary AND of
the parity position and the bit position is non-zero.
The form of the parity is irrelevant. Even parity is simpler from the perspective of theoretical
mathematics, but there is no difference in practice.
This general rule can be shown visually:
Bit position:    1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20 ...

Encoded bits:   p1  p2  d1  p4  d2  d3  d4  p8  d5  d6  d7  d8  d9 d10 d11 p16 d12 d13 d14 d15

Parity bit coverage:
         p1      X       X       X       X       X       X       X       X       X       X
         p2          X   X           X   X           X   X           X   X           X   X
         p4                  X   X   X   X                   X   X   X   X                   X
         p8                                  X   X   X   X   X   X   X   X
         p16                                                                 X   X   X   X   X

Shown are only 20 encoded bits (5 parity, 15 data) but the pattern continues indefinitely. The
key thing about Hamming Codes that can be seen from visual inspection is that any given bit
is included in a unique set of parity bits. To check for errors, check all of the parity bits. The
pattern of errors, called the error syndrome, identifies the bit in error. If all parity bits are
correct, there is no error. Otherwise, the sum of the positions of the erroneous parity bits
identifies the erroneous bit. For example, if the parity bits in positions 1, 2 and 8 indicate an
error, then bit 1+2+8=11 is in error. If only one parity bit indicates an error, the parity bit
itself is in error.
If, in addition, an overall parity bit (bit 0) is included, the code can detect (but not correct)
any two-bit error, making a SECDED code. The overall parity indicates whether the total
number of errors is even or odd. If the basic Hamming code detects an error, but the overall
parity says that there are an even number of errors, an uncorrectable 2-bit error has occurred.
Hamming codes with additional parity (SECDED)
These codes have a minimum distance of 3, which means that the code can detect and correct
a single error, but a double bit error is indistinguishable from a different codeword with a
single bit error. Thus, they can detect double-bit errors only if correction is not attempted.
By including an extra parity bit, it is possible to increase the minimum distance of the
Hamming code to 4. This gives the code the ability to detect and correct a single error and at
the same time detect (but not correct) a double error. (It could also be used to detect up to 3
errors but not correct any.)
This code system is popular in computer memory systems, where it is known as SECDED
("single error correction, double error detection"). Particularly popular is the (72,64) code, a
truncated (127,120) Hamming code plus an additional parity bit, which has the same space
overhead as a (9,8) parity code.
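The SECDED decision rule can be sketched as follows (a sketch; the function name and return strings are illustrative, not from any standard API — `syndrome` is the ordinary Hamming syndrome and `overall_ok` is whether the whole-word parity check passes):

```python
def secded_classify(syndrome, overall_ok):
    """Classify a received word under the usual SECDED assumptions
    (at most two bit errors)."""
    if syndrome == 0 and overall_ok:
        return "no error"
    if not overall_ok:
        # Odd number of errors; assume exactly one, which is correctable.
        if syndrome == 0:
            return "overall parity bit in error"
        return "single error at position %d (correctable)" % syndrome
    # Nonzero syndrome but overall parity passes: an even number of
    # errors occurred, so this is an uncorrectable double error.
    return "double error (detected, not correctable)"

print(secded_classify(5, False))   # → single error at position 5 (correctable)
print(secded_classify(5, True))    # → double error (detected, not correctable)
```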
Hamming(7,4) code
Graphical depiction of the 4 data bits and 3 parity bits and which parity bits
apply to which data bits

Main article: Hamming(7,4)

In 1950, Hamming introduced the (7,4) code. It encodes 4 data bits into 7 bits by adding three
parity bits. Hamming(7,4) can detect and correct single-bit errors. With the addition of an
overall parity bit, it can also detect (but not correct) double-bit errors.
Construction of G and H

The matrix G is called a (canonical) generator matrix of a linear (n,k) code,
and H is called a parity-check matrix.

This is the construction of G and H in standard (or systematic) form. Regardless of form, G
and H for linear block codes must satisfy

H G^T = 0, an all-zeros matrix [Moon, p.89].

Since (7,4,3) = (n,k,d) = [2^m − 1, 2^m − 1 − m, 3] with m = 3, the parity-check matrix H of
a Hamming code is constructed by listing all columns of length m that are pair-wise
independent.
Thus H is a matrix whose left side is all of the columns of length m with more than one
nonzero entry (the order of these columns does not matter), and whose right hand side is just
the (n−k)-identity matrix.
So G can be obtained from H by taking the transpose of the left hand side of H and placing it
to the right of the k-identity matrix on the left hand side of G.

The code generator matrix G and the parity-check matrix H are:

    G = | 1 0 0 0 1 1 0 |        H = | 1 1 0 1 1 0 0 |
        | 0 1 0 0 1 0 1 |            | 1 0 1 1 0 1 0 |
        | 0 0 1 0 0 1 1 |            | 0 1 1 1 0 0 1 |
        | 0 0 0 1 1 1 1 |

Finally, these matrices can be mutated into equivalent non-systematic codes by the following
operations [Moon, p. 85]:
• Column permutations (swapping columns)
• Elementary row operations (replacing a row with a linear combination of
rows)
Encoding
Example

From the above matrix we have 2^k = 2^4 = 16 codewords. The codewords of this binary code
can be obtained from c = aG, with a = (a1, a2, a3, a4), where each ai is in F2 (a field with
two elements, namely 0 and 1).
Thus the codewords are the encodings of all 16 possible 4-tuples (k-tuples) a.
Therefore,
(1,0,1,1) gets encoded as (1,0,1,1,0,1,0).
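The product c = aG over F2 can be computed directly. The G below is the standard systematic (7,4) generator matrix (an assumption: the original page showed G as an image; this reconstruction reproduces the example above):

```python
# Systematic generator matrix [I_4 | A]: data bits first, then three
# parity bits (assumed standard (7,4) construction).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(a):
    """c = aG over F2: each codeword bit is a mod-2 dot product of the
    data 4-tuple with one column of G."""
    return [sum(ai * gij for ai, gij in zip(a, col)) % 2 for col in zip(*G)]

print(encode([1, 0, 1, 1]))   # → [1, 0, 1, 1, 0, 1, 0]
```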
Hamming(7,4) code with an additional parity bit

The same (7,4) example from above with an extra parity bit

The Hamming(7,4) can easily be extended to an (8,4) code by adding an extra parity bit on
top of the (7,4) encoded word (see Hamming(7,4)). This can be summed up with the revised
matrices:

    G = | 1 1 1 0 0 0 0 1 |        H = | 1 0 1 0 1 0 1 0 |
        | 1 0 0 1 1 0 0 1 |            | 0 1 1 0 0 1 1 0 |
        | 0 1 0 1 0 1 0 1 |            | 0 0 0 1 1 1 1 0 |
        | 1 1 0 1 0 0 1 0 |            | 1 1 1 1 1 1 1 1 |
Note that H is not in standard form. To obtain G, elementary row operations can be used to
obtain an equivalent matrix to H in systematic form:

    H = | 0 1 1 1 1 0 0 0 |
        | 1 0 1 1 0 1 0 0 |
        | 1 1 0 1 0 0 1 0 |
        | 1 1 1 0 0 0 0 1 |
For example, the first row in this matrix is the sum of the second and third rows of H in non-
systematic form. Using the systematic construction for Hamming codes from above, the
matrix A is apparent and the systematic form of G is written as

    G = | 1 0 0 0 0 1 1 1 |
        | 0 1 0 0 1 0 1 1 |
        | 0 0 1 0 1 1 0 1 |
        | 0 0 0 1 1 1 1 0 |
The non-systematic form of G can be row reduced (using elementary row operations) to
match this matrix.
The addition of the fourth row effectively computes the sum of all the codeword bits (data
and parity) as the fourth parity bit.
For example, 1011 is encoded into 01100110, where positions 3, 5, 6 and 7 hold the data
bits; positions 1, 2 and 4 hold the parity bits from the Hamming(7,4) code; and the final digit
is the parity added by Hamming(8,4). The added digit makes the overall parity of the (7,4)
codeword even.
Finally, it can be shown that the minimum distance has increased from 3, as with the (7,4)
code, to 4 with the (8,4) code. Therefore, the code can be defined as Hamming(8,4,4).

See also


• Golay code
• Reed–Muller code
• Reed–Solomon error correction
• Turbo code
• Low-density parity-check code
• Hamming bound

References
• Moon, Todd K. (2005). Error Correction Coding. New Jersey: John Wiley &
Sons. ISBN 978-0-471-64800-0.
http://www.neng.usu.edu/ece/faculty/tmoon/eccbook/book.html.
• MacKay, David J.C. (September 2003). Information Theory, Inference and
Learning Algorithms. Cambridge: Cambridge University Press. ISBN 0-521-
64298-1. http://www.inference.phy.cam.ac.uk/mackay/itila/book.html.
• D.K. Bhattacharryya, S. Nandi. "An efficient class of SEC-DED-AUED
codes". 1997 International Symposium on Parallel Architectures,
Algorithms and Networks (ISPAN '97). pp. 410–415.
doi:10.1109/ISPAN.1997.645128.

External links


• CGI script for calculating Hamming distances (from R. Tervo, UNB,
Canada)
Retrieved from "http://en.wikipedia.org/wiki/Hamming_code"


• This page was last modified on 17 December 2010 at 23:20.
• Text is available under the Creative Commons Attribution-ShareAlike
License; additional terms may apply. See Terms of Use for details.
Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a
non-profit organization.
