So-called linear codes create error-correction bits
by combining the data bits linearly. Here, the phrase "linear
combination" means single-bit binary arithmetic (Table 1).
Table 1: Single-bit binary arithmetic

0 ⊕ 0 = 0    1 ⊕ 1 = 0    0 ⊕ 1 = 1    1 ⊕ 0 = 1
0 · 0 = 0    1 · 1 = 1    0 · 1 = 0    1 · 0 = 0
For example, let's consider the specific (3, 1) error correction
code described by the following coding table and,
more concisely, by the succeeding matrix expression.
$$c_1 = b_1, \qquad c_2 = b_1, \qquad c_3 = b_1$$

or

$$c = Gb, \quad \text{where} \quad G = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad c = \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \end{pmatrix}$$
The length-$K$ (in this simple example, $K = 1$) block of data
bits is represented by the vector $b$, and the length-$N$ output
block of the channel coder, known as a codeword, by $c$. The
generator matrix $G$ defines all block-oriented linear channel
coders.
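To make the matrix form concrete, here is a minimal Python sketch of the encoding $c = Gb$ in single-bit arithmetic (the NumPy usage and the name `encode` are my own choices, not from the text):

```python
import numpy as np

# Generator matrix of the (3,1) repetition code: the single data bit
# is copied into all three code bits.
G = np.array([[1],
              [1],
              [1]])

def encode(G, b):
    """Encode the data block b into the codeword c = G b,
    using single-bit (mod-2) arithmetic."""
    return (G @ b) % 2

b = np.array([1])     # length-K data block (K = 1)
c = encode(G, b)      # length-N codeword (N = 3)
print(c)              # [1 1 1]
```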
As we consider other block codes, the simple
idea of the decoder taking a majority vote of the received bits
won't generalize easily. We need a broader view that takes into
account the distance between codewords. A
length-$N$ codeword means that the receiver must decide among the
$2^N$ possible datawords to select which of the $2^K$ codewords
was actually transmitted. As shown in Figure 1, we can think of
the datawords geometrically. We define the Hamming distance
between binary datawords $c_1$ and $c_2$, denoted by
$d(c_1, c_2)$, to be the minimum number of bits that must be
"flipped" to go from one word to the other. For example, the
distance between the repetition code's two codewords, 000 and
111, is 3 bits. In our table of binary arithmetic, we see that
adding a 1 corresponds to flipping a bit. Furthermore,
subtraction and addition are equivalent. We can express the
Hamming distance as

$$d(c_1, c_2) = \operatorname{sum}(c_1 \oplus c_2) \qquad (1)$$
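Equation (1) translates directly into code: XOR the two words bitwise, then count the ones. A small sketch (the helper name `hamming_distance` is mine):

```python
def hamming_distance(c1, c2):
    """Count the positions where the binary words c1 and c2 differ:
    XOR marks the differing bits, and sum counts them."""
    return sum(x ^ y for x, y in zip(c1, c2))

print(hamming_distance([0, 0, 0], [1, 1, 1]))   # 3, the repetition code's separation
```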
Exercise: Show that adding the error vector col[1, 0, ..., 0] to
a codeword flips the codeword's leading bit and leaves the rest
unaffected.

Solution: In binary arithmetic (see Table 1), adding 0 to a
binary value results in that binary value, while adding 1 results
in the opposite binary value.
The probability of one bit being flipped anywhere in a codeword
is $N p_e (1 - p_e)^{N-1}$. The number of errors the channel
introduces equals the number of ones in the error vector $e$; the
probability of any particular error vector decreases with the
number of errors.
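To get a feel for these probabilities, a quick numeric sketch (the values $N = 3$ and $p_e = 0.01$ are assumed for illustration only, not from the text):

```python
N, pe = 3, 0.01                       # assumed example: 3-bit codeword, 1% bit-error rate
p_one = N * pe * (1 - pe)**(N - 1)    # exactly one bit flipped
p_two = 3 * pe**2 * (1 - pe)          # exactly two bits flipped (C(3,2) = 3 patterns)
print(p_one, p_two)                   # ~2.9e-2 vs ~3.0e-4: single-bit errors dominate
```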
To perform decoding when errors occur, we want to find the
codeword (one of the filled circles in Figure 1) that has the highest probability of occurring:
the one closest to the one received. Note that if a dataword
lies a distance of 1 from two codewords, it is
impossible to determine which codeword was
actually sent. This criterion means that if any two codewords
are two bits apart, then the code cannot
correct the channel-induced error. Thus, to have a
code that can correct all single-bit errors, codewords must have
a minimum separation of three. Our repetition code
has this property.
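This nearest-codeword rule can be sketched as a brute-force search, reusing the `hamming_distance` helper above (fine for small codes, where the full codeword list can be enumerated; the name `decode` is mine):

```python
def decode(received, codewords):
    """Return the codeword closest in Hamming distance to the
    received word -- the most probable transmitted codeword."""
    return min(codewords, key=lambda c: hamming_distance(c, received))

# The (3,1) repetition code corrects any single flipped bit:
print(decode([1, 0, 1], [[0, 0, 0], [1, 1, 1]]))   # [1, 1, 1]
```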
Introducing code bits increases the probability that any bit
arrives in error (because bit interval durations decrease).
However, using a well-designed error-correcting code corrects
bit reception errors. Do we win or lose by using an
error-correcting code? The answer is that we can win
if the code is well-designed. The (3,1)
repetition code demonstrates that we can lose. To develop good
channel coding, we first need to develop a general framework for
channel codes and discover what it takes for a code to be
maximally efficient: correct as many errors as possible using
the fewest error-correction bits possible (making the efficiency
$K/N$ as large as possible). We also need a systematic way of finding
the codeword closest to any received dataword. A much better
code than our (3,1) repetition code is the following (7,4) code.
$$\begin{aligned}
c_1 &= b_1 \\
c_2 &= b_2 \\
c_3 &= b_3 \\
c_4 &= b_4 \\
c_5 &= b_1 \oplus b_2 \oplus b_3 \\
c_6 &= b_2 \oplus b_3 \oplus b_4 \\
c_7 &= b_1 \oplus b_2 \oplus b_4
\end{aligned}$$
where the generator matrix is
$$G = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 \\
1 & 1 & 0 & 1
\end{pmatrix}$$
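The `encode` sketch above applies unchanged; specialized to this (7,4) code, it enumerates all sixteen codewords (again, the NumPy usage and variable names are my own):

```python
import numpy as np
from itertools import product

G74 = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [1, 1, 0, 1]])

# All 2^4 = 16 data blocks and their codewords, c = G b mod 2.
codewords = [tuple((G74 @ np.array(b)) % 2) for b in product([0, 1], repeat=4)]
print(len(codewords))    # 16
```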
In this (7,4) code, $2^4 = 16$ of the $2^7 = 128$ possible blocks
at the channel decoder correspond to error-free transmission and
reception.
Error correction amounts to searching for the codeword
$c$ closest to the received block $\hat{c}$ in terms of the
Hamming distance between the two. The error correction capability
of a channel code is limited by how close together any two
error-free blocks are. Bad codes would produce blocks close
together, which would result in ambiguity when assigning a block
of data bits to a received block. The quantity to examine,
therefore, in designing error correction codes is the minimum
distance between codewords:
$$d_{\min} = \min_{c_i \neq c_j} d(c_i, c_j) \qquad (2)$$
To have a channel code that can correct all single-bit errors,
$d_{\min} \geq 3$.
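Equation (2) can be evaluated by brute force over all codeword pairs; continuing the (7,4) sketch above:

```python
from itertools import combinations

# Minimum Hamming distance over all codeword pairs.
d_min = min(sum(x ^ y for x, y in zip(ci, cj))
            for ci, cj in combinations(codewords, 2))
print(d_min)    # 3: every single-bit error in a 7-bit block is correctable
```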
Exercise: Suppose we want a channel code to have an
error-correction capability of $n$ bits. What must the minimum
Hamming distance between codewords $d_{\min}$ be?
How do we calculate the minimum distance between codewords?
Because we have $2^K$ codewords, the number of possible unique
pairs equals $2^{K-1}(2^K - 1)$, which can be a large number.
Recall that our channel coding procedure is linear, with
$c = Gb$. Therefore $c_i \oplus c_j = G(b_i \oplus b_j)$. Because
$b_i \oplus b_j$ always yields another block of data bits,
we find that the difference between any two codewords is
another codeword! Thus, to find $d_{\min}$,
we need only compute the number of ones that comprise all
non-zero codewords. Finding these codewords is easy once we
examine the coder's generator matrix. Note that the columns of
$G$ are codewords (why is this?), and that all codewords can be
found from all possible sums of the columns. To find $d_{\min}$,
we need only count the number of bits in each column and in each
sum of columns. For our example (7,4) code, $G$'s first column
has three ones, the next one four, and the last two three.
Considering sums of column pairs next, note that because the
upper portion of $G$ is an identity matrix, the corresponding
upper portion of all column sums must have exactly two bits.
Because the bottom portion of each column differs from the other
columns in at least one place, the bottom portion of a sum of
columns must have at least one bit. Triple sums will have at
least three bits because the upper portion of $G$ is an identity
matrix. Thus, no sum of columns has fewer than three bits, which
means that $d_{\min} = 3$, and we have a channel coder that can
correct all occurrences of one error within a received 7-bit
block.
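Linearity gives the cheaper check described here in code form: rather than examining all pairs, find the minimum weight among the non-zero codewords (continuing the same sketch):

```python
# For a linear code, c_i XOR c_j is itself a codeword, so d_min equals
# the smallest number of ones in any non-zero codeword.
d_min = min(sum(c) for c in codewords if any(c))
print(d_min)    # 3, confirming the column-counting argument
```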
"Electrical Engineering Digital Processing Systems in Braille."