So in this video, we're going to consider Shannon's crowning result, published in 1948: the so-called Noisy Channel Coding Theorem. What this theorem states is that it is possible to communicate through a digital channel that introduces errors in such a way that the error-correcting code corrects all of the errors, not just single-bit errors. We'll be very interested in the conditions under which you can do that. This is going to lead us to the capacity of a channel, which is the magic number that summarizes the error behavior of the channel. And once we understand that result, we'll understand why everything is digital; there's not much left anymore that's analog, and we'll see why that is in just a second.

So let's review the digital communication model. We have a digital source, and it goes through a compression stage. It then goes through the channel coder, which introduces the error-correcting code, and it gets sent through a channel that has a probability Pe of flipping a bit. We've already seen that there are error-correcting codes such that the decoder can correct those errors as long as there aren't too many of them; we worked with a single-bit error-correcting code. Once you correct those errors, you recover the bits that were sent, which allows you to reconstruct the signal, and you are very happy. However, this only mitigates the channel errors. If there happen to be two bit errors instead of one, there's nothing the code we talked about can do; it will get it completely wrong. So we can reduce the errors with a single-bit error-correcting scheme, but we can't make the error probability go to zero.

Well, Shannon asked: can we remove all channel-induced errors? Is there an error-correcting code so powerful that we can use it and not worry about any errors the channel may introduce? The answer turns out to be yes, but let's see what the statement of the result is. The Noisy Channel Coding Theorem says: let E be the efficiency of the code. Recall that in an (N, K) code, N is the length of the codeword used to represent K data bits, so E = K/N. If that efficiency is less than the capacity C of the channel, then there exists an error-correcting code such that, as the codeword length goes to infinity, the probability that the decoding will incur an error goes to zero. In other words, you can get the data through without error, so long as the efficiency of the code is less than the capacity; I'll show you on the next slide what the capacity is. I want to point out a few things first. The limit here means the codeword length is going to infinity, and since we're keeping the efficiency constant, that means more and more data bits are being used in the error-correcting code. So the codeword is going to get very, very long, but if you do that, then despite what we may have been seeing, there exists an error-correcting code out there that makes the probability of an error occurring in the recovered data block zero. Shannon then proved a converse, which makes the capacity a very special number: if the code efficiency is greater than C, then all error-correcting codes will always incur decoding errors.
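Written out compactly, and this is just a restatement of what was said above, using E = K/N for the efficiency of an (N, K) code and C for the capacity defined next, the two halves of the theorem are:

```latex
% Noisy Channel Coding Theorem for the digital channel.
% E = K/N is the efficiency of an (N, K) error-correcting code; C is the channel capacity.
E < C \;\Rightarrow\; \lim_{N \to \infty} \Pr\{\text{decoding error in the recovered data block}\} = 0,
\qquad
E > C \;\Rightarrow\; \lim_{N \to \infty} \Pr\{\text{decoding error in the recovered data block}\} = 1.
```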
In other words, again, as the block length goes to infinity, the probability that there's an error somewhere in the data block goes to one. So you'd better use a code whose efficiency is less than the capacity. If you try to use one that's a bit weaker, one that doesn't add as much error correction, you will never be able to correct the errors.

So what is the capacity? For the digital channel we've been talking about, the capacity is given by this formula, and if you say that looks a little bit familiar, it does: it looks like the entropy formula applied to Pe and 1 minus Pe. In fact, it's one minus the entropy formula involving those quantities.

Here's a plot of the capacity. Let's focus on the extremes first. When Pe equals zero, the capacity is 1, and since the efficiency E of the code has to be less than C, that means we can use a code with efficiency 1 when Pe is 0. Well, an efficiency of 1 means there is no coding: all the bits in the codeword are data bits. Certainly, if there are no errors in the channel, you just send the data straight through. At the other extreme, when Pe is one half, the channel is randomly flipping bits; it's total chaos coming out of the channel, the capacity is 0, and there is no error-correcting code that can help, because the code's efficiency would have to be 0, and then why bother? At some intermediate Pe, you cannot use a code whose efficiency is greater than the capacity if you have any hope of getting the message through without error. If you try to use something whose efficiency is greater than capacity, you're going to have errors; you just can't do it. What's interesting about this plot is that you can plot it for Pe greater than one half; there's no reason you can't, and the curve rises back up to a capacity of 1 at Pe equal to 1. Pe equal to 1 means the channel is reliably flipping every bit, changing every 0 to a 1 and every 1 to a 0. If the efficiency is 1, you might as well just send all the data, because all the channel decoder has to do is flip the bits back. So that case is automatically taken into account: a really bad Pe turns out to be equivalent to a really good one. Shannon's result is symmetric in that way.

Now let's talk about a few issues with the Noisy Channel Coding Theorem, which are pretty important. The first is this phrase that keeps coming up: "there exists an error-correcting code." Well, to this day, we don't know what that code is. There are two candidates being studied today, and they come very close to the so-called capacity limit, but no one has proven that they satisfy the Noisy Channel Coding Theorem. Shannon's proof of the Noisy Channel Coding Theorem is fine; there's nothing wrong with it. It's just not what we call constructive: it doesn't give you an example. The proof did not use an example code; it was proven a different way. We still don't know what the code is. But there is a more practical problem, which has to do with this limit as N goes to infinity. The codeword length has to go to infinity, which means you're also taking in more and more data bits, encoding them all at once, sending them through the channel, and then decoding them. This incurs what we call a coding delay, and you also have to wait for all the data to come in before you can run your channel coder.
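Before moving on, here is a small numerical sketch of the capacity formula we just looked at, C = 1 - H(Pe) for the binary symmetric channel. This is a Python illustration, not part of the lecture slides, and the sample Pe values are chosen only for illustration; it shows the endpoints and the symmetry about Pe = 1/2:

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1 - p)*log2(1 - p), taking 0*log2(0) = 0."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    h = np.zeros_like(p)
    inside = (p > 0) & (p < 1)
    h[inside] = (-p[inside] * np.log2(p[inside])
                 - (1 - p[inside]) * np.log2(1 - p[inside]))
    return h

def bsc_capacity(pe):
    """Capacity of the binary symmetric channel, in bits per channel use."""
    return 1.0 - binary_entropy(pe)

pe = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
print(bsc_capacity(pe))
# roughly [1.0, 0.531, 0.0, 0.531, 1.0]:
# C(0) = C(1) = 1, C(1/2) = 0, and the curve is symmetric,
# so a channel with bit-flip probability Pe is as good as one with 1 - Pe.
```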
Now, that coding delay isn't very nice, and the capacity that obeys this result is sometimes called the C-infinity capacity, because it requires an infinite-length codeword. What we'd like to know is the capacity for a finite-length codeword. Certainly, if such a thing exists, it's going to be less than the C-infinity capacity, but we don't have a noisy channel coding theorem for it; it's not clear that you can correct all the errors with a finite-length block. People are certainly working on that, so that we can have practical codes.

Well, Shannon also derived a capacity that takes into account the fact that underneath our digital channel there's really an analog channel. We've always abstracted the analog channel, with its additive noise and the choice of signal set, and simply called it a digital channel. But if you go back down and consider a white-noise channel having a bandwidth of W, and you've figured out what the signal sets are, then Shannon's result says: if the data rate of the source is less than the capacity, then you can have what we call reliable communication, meaning no errors will occur, provided you have the right error-correcting code. So if the rate is less than the capacity, you can somehow send the information through the channel without error. And here's the formula for the capacity: C = W log2(1 + SNR). W is the bandwidth of the channel in hertz, SNR is the signal-to-noise ratio in the channel, and the result is in bits per second because of the log base two. Just as before, if you try to send at a data rate greater than the capacity, it's impossible to have reliable communication; you will always incur errors. So nowadays, when you design a digital communication system, one of the first things you do is calculate the capacity and make sure the data rate you're trying to send through that channel satisfies this bound. If it doesn't, you're going to have errors. If you use a data rate less than the capacity, then there's hope that there is a signal set and an error-correcting code that will do the job, though, as I said, we don't know what they are yet. But no one tries to send at a data rate greater than capacity, because there's no hope.

So let me go through the little example we've done before. If you recall, we were trying to send an analog signal through a channel, and the analog signal had a bandwidth of 4 kHz. We send it into an A/D converter and sample it. The thing to note is that the analog signal incurs a quantization error: once it goes into the A/D converter and you sample it with a finite number of bits, here four, there is an unrecoverable error. We cannot get the original signal back exactly. So what I want to ask is: can we mitigate the communication errors, get rid of them entirely, and be left with only the quantization error of the A/D converter? For this example, the sampling rate is, of course, 8 kHz, twice the bandwidth of the signal, and I'm using 4 bits in the A/D converter, so we have a data rate of 32 kilobits per second. We want to know: for which channels is the capacity greater than 32 kilobits per second? I have a color plot here that shows how that works. The colored surface is just the capacity plotted as a function of SNR and channel bandwidth in hertz, and the 32 kilobit-per-second plane is shown.
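As a quick numerical companion to that plot, here is a sketch, in Python rather than from the lecture, of the capacity formula C = W log2(1 + SNR) applied to the 32 kilobit-per-second example. The 15 dB figure below is an arbitrary illustration value, not a number from the lecture:

```python
import numpy as np

def awgn_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon capacity of a bandlimited white-noise channel,
    C = W * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * np.log2(1.0 + snr_linear)

# The running example: a 4 kHz analog signal, sampled at 8 kHz with 4 bits/sample.
fs = 8000                      # sampling rate in Hz (twice the 4 kHz bandwidth)
bits = 4                       # bits per sample in the A/D converter
data_rate = fs * bits          # 32,000 bits per second
print(data_rate)               # 32000

# Does an 8 kHz channel with, say, 15 dB SNR sit above the 32 kbps plane?
W = 8000                       # channel bandwidth in Hz
snr = 10 ** (15 / 10)          # 15 dB expressed as a linear ratio (about 31.6)
C = awgn_capacity_bps(W, snr)
print(C, C > data_rate)        # about 40222 bps, True: capacity permits 32 kbps
```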
And this boundary here shows us which channels can support the data rate of 32 kilobits per second, so that you can get the information through without error, at least in theory. We can see that as the channel bandwidth approaches about 4 kHz, you're going to need a very high-quality channel, because the SNR you need gets very, very high. Furthermore, as the SNR gets low, the required bandwidth goes up and up and up; that's the basic trade-off. If you're anywhere outside this region, it's impossible; you're in big trouble. You want to be somewhere inside this boundary, because that's where you have a hope of sending the signal through without error.

I want to compare this result with the plot we had for the signal-to-noise ratio. If you remember that plot, I was comparing analog communication, where the resulting message signal-to-noise ratio was given as a function of the channel signal-to-noise ratio, with digital communication using 4 bits and 8 bits. I'm only going to do the 4-bit example again. So, turning things around a little bit: what channel signal-to-noise ratio would work? The data rate, of course, is the sampling rate times the number of bits in the A/D converter. For the case we were worrying about, we had an 8 kHz channel, so that the digital system could use the same bandwidth that modulated analog communication required, an 8 kHz bandwidth channel. Suppose you made the digital channel live with 8 kHz: what signal-to-noise ratio are you going to need? You can do that calculation. The channel bandwidth and the sampling rate are the same quantity, so you just get 2 to the B, minus 1. The answer is 15, which in dB turns out to be about 11.8 (the arithmetic is written out below).

So here's what results. I can send my information through the channel somehow, and if the channel SNR is bigger than this number, then I will incur no communication errors. The only thing I'm left with is the quantization error, and that's this plateau. But once I drop below this magic number, the message signal-to-noise ratio goes to basically zero, and I can't get anything through. This is what's called the capacity wall. Once you transmit with enough signal-to-noise ratio to be above the magic threshold, things are fine; but if you fall below it, things are not very good at all, and you're going to be in big trouble. As you can see from this plot, below the threshold you're better off not using any channel coding at all; this other curve is what happens when you use BPSK and don't do any channel coding. In that regime, you're better off not trying to do channel coding, but you're always incurring errors, and that's the sad result. There is a region, though, in which you can get the signal through and live with only the quantization error. Very, very nice.

So, why is everything digital? Sources can be compressed, first of all, either lossy or lossless, depending on which you're working with. You can consider running an analog signal through an A/D converter as a lossy compression scheme: it represents the signal with bits, but you cannot recover the original exactly. The same goes for compressing music with an MP3 compressor; that's lossy compression. Be that as it may, you can convert essentially any source into a sequence of letters, which can then be compressed by the source coder.
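Returning for a moment to the threshold quoted above (the 15, or about 11.8 dB), here is that little calculation written out; this is simply a restatement of the arithmetic from the lecture:

```latex
R = f_s \, B = 8000 \times 4 = 32{,}000\ \text{bps}, \qquad W = 8\ \text{kHz} = f_s
R \le W \log_2(1 + \mathrm{SNR})
  \;\Rightarrow\;
  \mathrm{SNR} \ge 2^{R/W} - 1 = 2^{B} - 1 = 15 \approx 11.8\ \text{dB}
```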
Somewhere out there, we can find a very good error-correcting code: as long as it obeys the Noisy Channel Coding Theorem, we know we can decode without error. That means that once you're willing to incur the error of digitizing an analog signal, converting it to bits, then digitized signals and inherently digital signals, like text, can be transmitted over a common communication scheme without error, and that is really interesting. Once you have bits, you've got a chance: you can send the bits through a channel with no error. There is no similar result for analog channels. Once error and noise are introduced in an analog channel, you have to live with the errors if you stay analog. But if you go digital, or if you have a digital source, you can correct the errors the channel introduces, provided the situation is right, in other words, provided the data rate is less than the capacity. This ability to send any kind of signal, once it's bits, is what leads to computer networks. Computer networks can carry any kind of signal: be it SMS, be it your voice, be it an image, be it a video. We can put all those different kinds of messages into a common communication scheme, and the reason for that is Shannon's Noisy Channel Coding Theorem.