Because the idea of channel coding has merit (so long as the
code is efficient), let's develop a systematic procedure for
performing channel decoding. One way of checking for errors is
to try recreating the error correction bits from the data
portion of the received block $\hat{c}$. Using matrix notation,
we make this calculation by multiplying the received block
$\hat{c}$ by the matrix $H$ known as the parity check matrix. It
is formed from the generator matrix $G$ by taking the bottom,
error-correction portion of $G$ and attaching to it an identity
matrix. For our (7,4) code,
$$
H = \begin{pmatrix}
\underbrace{\begin{matrix} 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \end{matrix}}_{\text{Lower portion of } G}
&
\underbrace{\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}}_{\text{Identity}}
\end{pmatrix} \quad (1)
$$
The parity check matrix thus has size $(N-K) \times N$, and the
result of multiplying this matrix with a received word is a
length-$(N-K)$ binary vector. If no digital channel errors
occur (we receive a codeword, so that $\hat{c} = c$), then
$H\hat{c} = 0$. For example, the first column of $G$,
$(1\,0\,0\,0\,1\,0\,1)^T$, is a codeword. Simple calculations
show that multiplying this vector by $H$ results in a
length-$(N-K)$ zero-valued vector.
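These parity computations are easy to check numerically. The following sketch (assuming NumPy is available) builds the generator and parity check matrices of Equation (1) and verifies that the first column of $G$, and in fact every column, has a zero syndrome:

```python
import numpy as np

# Generator matrix G for the (7,4) code: identity on top,
# error-correction (lower) portion beneath.
G = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]])

# Parity check matrix H: lower portion of G with an identity attached.
H = np.hstack((G[4:, :], np.eye(3, dtype=int)))

# The first column of G is a codeword; its syndrome is zero.
c = G[:, 0]
print(H @ c % 2)   # [0 0 0]

# Indeed H G = 0 (mod 2): every column of G has zero syndrome.
print(H @ G % 2)
```

All arithmetic is ordinary integer matrix multiplication followed by a reduction modulo 2, which matches binary (mod-2) arithmetic for these 0/1 matrices.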
Show that $Hc = 0$ for all the columns of $G$. In other words,
show that $HG = 0$, an $(N-K) \times K$ matrix of zeroes. Does
this property guarantee that all codewords also satisfy
$Hc = 0$?
When we multiply the parity-check matrix times any codeword
equal to a column of $G$, the result consists of the sum of an
entry from the lower portion of $G$ and itself, which, by the
laws of binary arithmetic, is always zero. Because the code is
linear (the sum of any two codewords is a codeword), we can
generate all codewords as sums of columns of $G$. Since
multiplying by $H$ is also linear, $Hc = 0$ for every codeword
$c$.
When the received bits $\hat{c}$ do not form a codeword,
$H\hat{c}$ does not equal zero, indicating the presence of one
or more errors induced by the digital channel. The presence of
an error can be mathematically written as $\hat{c} = c \oplus e$,
with $e$ a vector of binary values having a 1 in those positions
where a bit error occurred.
Show that adding the error vector $(1\,0\,\ldots\,0)^T$ to a
codeword flips the codeword's leading bit and leaves the rest
unaffected.
In binary arithmetic (see this table), adding 0 to a binary
value leaves it unchanged, while adding 1 results in the
opposite binary value.
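This flipping behavior is easy to see numerically (a Python sketch; the codeword is the first column of $G$ from Equation (1)):

```python
import numpy as np

c = np.array([1, 0, 0, 0, 1, 0, 1])   # a codeword of the (7,4) code
e = np.array([1, 0, 0, 0, 0, 0, 0])   # error vector (1 0 ... 0)^T

# XOR (addition mod 2) flips exactly the bits where e has a 1.
print(np.bitwise_xor(c, e))           # [0 0 0 0 1 0 1]: leading bit flipped
```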
Consequently,
$$
H\hat{c} = H(c \oplus e) = Hc \oplus He = He .
$$
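This identity can be verified directly. The sketch below (reusing the $G$ and $H$ of Equation (1); the choice of an error in the third bit is arbitrary) shows that the syndrome of the received word depends only on the error pattern:

```python
import numpy as np

G = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
              [1, 1, 1, 0], [0, 1, 1, 1], [1, 1, 0, 1]])
H = np.hstack((G[4:, :], np.eye(3, dtype=int)))

c = G[:, 0]                              # a codeword
e = np.array([0, 0, 1, 0, 0, 0, 0])      # single error in the third bit
c_hat = np.bitwise_xor(c, e)             # received word

syndrome = H @ c_hat % 2
print(syndrome)                          # [1 1 0], the same as H e mod 2
```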
Because the result of the product is a length-$(N-K)$ vector of
binary values, we can have $2^{N-K} - 1$ non-zero values that
correspond to non-zero error patterns $e$. To perform our
channel decoding,
- compute (conceptually at least) $H\hat{c}$;
- if this result is zero, no detectable or correctable
  error occurred;
- if non-zero, consult a table of length-$(N-K)$ binary
  vectors to associate them with the minimal error pattern
  that could have resulted in the non-zero result; then
- add the error vector thus obtained to the received vector
  $\hat{c}$ to correct the error (because
  $c \oplus e \oplus e = c$);
- select the data bits from the corrected word to produce
  the received bit sequence $\hat{b}(n)$.
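The steps above can be sketched end to end in Python. For this (7,4) code the syndrome table maps each non-zero syndrome to the single-bit error pattern producing it, which is the minimal pattern in every case (the data values and the flipped bit below are illustrative choices):

```python
import numpy as np

G = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
              [1, 1, 1, 0], [0, 1, 1, 1], [1, 1, 0, 1]])
H = np.hstack((G[4:, :], np.eye(3, dtype=int)))

# Table: syndrome -> minimal (here single-bit) error pattern.
table = {}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[tuple(H @ e % 2)] = e

def decode(c_hat):
    """Syndrome-decode a received 7-bit block; return the 4 data bits."""
    syndrome = tuple(H @ c_hat % 2)
    if any(syndrome):                    # non-zero: correct the minimal error
        c_hat = np.bitwise_xor(c_hat, table[syndrome])
    return c_hat[:4]                     # data bits sit atop G's identity

b = np.array([1, 0, 1, 1])               # data bits
c = G @ b % 2                            # transmitted codeword
c_hat = np.bitwise_xor(c, [0, 0, 0, 0, 0, 1, 0])   # channel flips bit six
print(decode(c_hat))                     # recovers [1 0 1 1]
```

The table lookup works because the seven single-bit syndromes are exactly the columns of $H$, and for this code they are all distinct and non-zero.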
The phrase minimal in the third item raises the point that a
double (or triple or quadruple …) error occurring during the
transmission/reception of one codeword can create the same
received word as a single-bit error or no error in another
codeword. For example, $(1\,0\,0\,0\,1\,0\,1)^T$ and
$(0\,1\,0\,0\,1\,1\,1)^T$ are both codewords in the example
(7,4) code. The second results when the first one experiences
three bit errors (in the first, second, and sixth bits). Such an
error pattern cannot be detected by our coding strategy, but
such multiple-error patterns are very unlikely to occur. Our
receiver uses the principle of maximum probability: an
error-free transmission is much more likely than one with three
errors if the bit-error probability $p_e$ is small enough.
How small must $p_e$ be so that a single-bit error is more
likely to occur than a triple-bit error?
The probability of a single-bit error in a length-$N$ block is
$N p_e (1 - p_e)^{N-1}$ and a triple-bit error has probability
$\binom{N}{3} p_e^3 (1 - p_e)^{N-3}$. For the first to be
greater than the second, we must have
$$
p_e < \frac{1}{\sqrt{\frac{(N-1)(N-2)}{6}} + 1} .
$$
For $N = 7$, $p_e < 0.31$.
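The threshold can be checked numerically. The sketch below evaluates both block-error probabilities for $N = 7$ on either side of the bound, which sits near $p_e \approx 0.31$:

```python
import math

N = 7

def p_single(pe):
    # Probability of exactly one bit error in a length-N block.
    return N * pe * (1 - pe) ** (N - 1)

def p_triple(pe):
    # Probability of exactly three bit errors in a length-N block.
    return math.comb(N, 3) * pe ** 3 * (1 - pe) ** (N - 3)

threshold = 1 / (math.sqrt((N - 1) * (N - 2) / 6) + 1)
print(round(threshold, 2))               # 0.31
print(p_single(0.2) > p_triple(0.2))     # True: single error more likely
print(p_single(0.4) > p_triple(0.4))     # False
```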