Summary: A description of some strategies for minimizing the errors in a transmitted bit-stream.
For the (7,4) example, we have
e | He |
---|---|
1000000 | 101 |
0100000 | 111 |
0010000 | 110 |
0001000 | 011 |
0000100 | 100 |
0000010 | 010 |
0000001 | 001 |
This corresponds to our decoding table: we associate the result of the parity check matrix multiplication (the syndrome) with its error pattern and add that pattern to the received word. If more than one error occurs (unlikely though it may be), this "error correction" strategy usually makes the error worse in the sense that more bits are changed from what was transmitted.
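This table-driven decoding can be sketched in a few lines. A minimal sketch: the parity check matrix `H` below is written so that its columns match the `He` values in the table (column i is the syndrome of a single error in bit i); function names are ours.

```python
# Syndrome decoding for the (7,4) code discussed above.
# Column i of H equals the syndrome He of a single error in bit i,
# matching the table.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(word):
    """Compute He (mod 2) for a received 7-bit word."""
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

# Decoding table: map each nonzero syndrome to the single-bit
# error pattern that produces it.
decoding_table = {}
for i in range(7):
    e = [0] * 7
    e[i] = 1
    decoding_table[syndrome(e)] = e

def correct(received):
    """Add the error pattern indicated by the syndrome (if any)."""
    s = syndrome(received)
    if s == (0, 0, 0):
        return list(received)  # no detectable error
    e = decoding_table[s]
    return [(r + b) % 2 for r, b in zip(received, e)]
```

For example, `[1, 0, 0, 0, 1, 0, 1]` is a codeword of this `H`; flipping its second bit and calling `correct` restores the original word.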
As with the repetition code, we must question whether our (7,4) code's error correction capability compensates for the increased error probability due to the necessitated reduction in bit energy. Figure 1 shows that if the signal-to-noise ratio is large enough, channel coding yields a smaller error probability. Because the bit stream emerging from the source decoder is segmented into four-bit blocks, the fair way of comparing coded and uncoded transmission is to compute the probability of block error: the probability that any bit in a block remains in error despite error correction and regardless of whether the error occurs in the data or in the coding bits. Clearly, our (7,4) channel code does yield smaller error rates, and is worth the additional systems required to make it work.
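This comparison can be sketched numerically. The sketch below assumes antipodal signaling with bit error probability Q(sqrt(2 Eb/N0)) and scales the coded bit energy by 4/7, since seven bits must be sent in the time allotted to four; these modeling choices, and the function names, are our assumptions for illustration.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def block_error(snr_db):
    """Probability of block error for a 4-bit data block,
    uncoded versus (7,4)-coded.

    Assumes bit error probability Q(sqrt(2*Eb/N0)), with the
    coded bit energy reduced by the factor 4/7."""
    ebn0 = 10 ** (snr_db / 10)
    p_unc = Q(math.sqrt(2 * ebn0))
    p_cod = Q(math.sqrt(2 * ebn0 * 4 / 7))
    # Uncoded: the block fails if any of the 4 data bits is in error.
    uncoded = 1 - (1 - p_unc) ** 4
    # Coded: single-bit errors are corrected, so the block fails
    # only when 2 or more of the 7 transmitted bits are in error.
    coded = 1 - (1 - p_cod) ** 7 - 7 * p_cod * (1 - p_cod) ** 6
    return uncoded, coded
```

At a signal-to-noise ratio of 8 dB, for instance, the coded block error probability comes out smaller than the uncoded one, consistent with the behavior described above.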
[Figure 1: Probability of error occurring.]
Note that our (7,4) code has the length and number of data bits that perfectly fit correcting single-bit errors. This pleasant property arises because the number of error patterns that can be corrected, 2^(N−K) − 1, equals the codeword length N: the N single-bit error patterns plus the no-error pattern exactly fill the 2^(N−K) distinct syndromes. Codes satisfying 2^(N−K) − 1 = N are known as Hamming codes; the following table lists their parameters.

N | K | Efficiency (K/N) |
---|---|---|
3 | 1 | 0.33 |
7 | 4 | 0.57 |
15 | 11 | 0.73 |
31 | 26 | 0.84 |
63 | 57 | 0.90 |
127 | 120 | 0.94 |
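The perfect-fit condition can be checked directly against the table; a short sketch:

```python
# Check the "perfect fit" condition for the Hamming codes in the
# table above: the N single-bit error patterns plus the no-error
# pattern exactly fill the 2**(N - K) distinct syndromes,
# i.e. 2**(N - K) - 1 == N.
params = [(3, 1), (7, 4), (15, 11), (31, 26), (63, 57), (127, 120)]
for N, K in params:
    assert 2 ** (N - K) - 1 == N  # Hamming code condition
    efficiency = K / N            # third column of the table
```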
Unfortunately, for such large blocks, the probability of multiple-bit errors can exceed the probability of single-bit errors unless the channel's single-bit error probability is small.
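This effect is easy to check numerically, assuming independent bit errors across the block (the function name is ours):

```python
def error_pattern_probs(N, p):
    """For an N-bit block with independent bit error probability p,
    return (probability of exactly one error,
            probability of two or more errors)."""
    p_none = (1 - p) ** N
    p_one = N * p * (1 - p) ** (N - 1)
    p_multi = 1 - p_none - p_one
    return p_one, p_multi

# For the long (127, 120) Hamming code, multiple-bit errors
# dominate single-bit errors at p = 0.01, but not at p = 0.001.
one_hi, multi_hi = error_pattern_probs(127, 0.01)
one_lo, multi_lo = error_pattern_probs(127, 0.001)
```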
What must the relation between N and K be for a code to correct all single- and double-bit errors?
In a length-