So, in this video, we're going to determine how you receive digital signals, bits: how you recover those bits from the analog signal that's sent through our analog channel, and how you do it well. In particular, I want to focus on this word, optimal. We're not going to prove it here, but the receiver I'm going to show you has been proven to be optimal. There is no better receiver, in the sense that the probability of making an incorrect choice for the bit is the smallest it can be for the receiver I'm going to show you. And that's going to let us compare the performance of different signal set choices. It turns out to be an interesting contrast to the bandwidth calculation we did in the previous video. So, here, to recall, is the digital communication model, both for analog and digital. And I want to point out that the channel is exactly the same as the one we've been talking about. What's more important here in the digital case is the presence of this delay. That turns out to cause all kinds of problems in addition to the attenuation and the noise. Again, we're going to assume that the interference is very small. Delay is important because now we have to figure out where those bit interval boundaries are when we don't know that delay, which we usually don't. Somehow, the receiver and transmitter must be synchronized. And we assume that there is no other information available to the receiver other than what it receives: it's going to receive a noisy signal, and it doesn't know the delay. So, the synchronization problem is to find those bit boundaries accurately. I'm going to assume here that we found those bit boundaries somehow; to keep it simple, because it's actually pretty complicated to build a synchronizer for bit streams, we're going to assume these are the actual bit boundaries. You can see them labeled by the dashed lines. Somewhere buried inside this very noisy signal is a baseband BPSK signal set representing a particular bit sequence. What receiver will tell us what that bit sequence is? I think you'll all agree that looking at that signal as it is and trying to figure out, by eye, what that bit sequence is, is pretty difficult. It's going to turn out, as I'll show you in the simulation I'm going to run, that it's actually very easy in this case for the optimal receiver. It's astonishingly easy. Let's see what that receiver is. It's called the correlation receiver, because multiplying one signal by another and integrating is called correlation; that's just the name. And here's the way it works. We're assuming that we're looking at the signal over one of the bit intervals, call it bit interval n. I'm going to take the received signal coming out of the channel, multiply it first by the signal representing a bit 0 and then by the signal representing a bit 1, integrate, and get an answer for each. So, mathematically, that's what I do: I take the received signal, no matter what it is, and multiply it by each of the two signals. I then want to compare the results. Whichever one is the largest is my choice for the bit, okay? So, I write that mathematically as being the arg max. This may be a new mathematical notation. If you forget the arg for a second, max means, of course, find the maximum with respect to the signal index. We don't really care about the value of the maximum; that's what max returns, the maximum value.
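To make the operation concrete, here is a minimal sketch of a correlation receiver for one bit interval, under the assumption that the signals have been sampled; the function name `correlation_receiver` and the sampled-signal setup are my own illustration, not from the video, and the integral becomes a sum over samples.

```python
import numpy as np

def correlation_receiver(r, s0, s1, dt):
    """Decide one bit: correlate the received samples r over a single bit
    interval against each candidate signal and return the arg max index."""
    # Discrete approximation of the integral of r(t) * s_i(t) dt over the interval
    c0 = np.sum(r * s0) * dt
    c1 = np.sum(r * s1) * dt
    # arg max: we only care WHICH correlation is bigger, not its value
    return 0 if c0 > c1 else 1
```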
What we care about is which value of i has the maximum, and that's what arg max means: which index has the largest value. I don't care what the maximum value is, I just care which one is biggest. And that is going to be our guess, our best guess, and it turns out our optimal guess, for what the bit was. And this is a very simple operation: you multiply by a signal, integrate, and pick out which one is bigger. So, let me show you how this works, at least in the situation where there's no noise. Let's assume we have our baseband BPSK signal set, and I'm going to send a 0 through this and show you that the receiver works perfectly in the case of no noise. During the bit interval we're talking about, r of t is the same as s0. So, we get s0 squared, which just has a value of A squared; integrate that over a bit interval and we get A squared T. In the other part of the receiver, we multiply by s1. Well, that's given by minus A times plus A, because s0 is what we're assuming was sent, and that's what r of t equals. We get minus A squared; integrated over a bit interval, that gives minus A squared T. Now, what happens is we choose the biggest of these. Well, it turns out that for BPSK, one is always going to be the negative of the other, so whichever one is positive is the one we want, and we would choose that, and of course, that turns out to be absolutely the right choice. And it's always going to be that way in the case of no noise, so we have a perfect receiver that creates no errors on its own when there's no noise around and the channel is being nice to us. Well, now suppose things are a bit tougher: there's not only noise, but there's attenuation. Alright, so, exactly the same scenario. I'm sending a bit of 0, so r of t is s0 times alpha plus noise. So, the signal term, you might call it that, for each of these is clearly going to be this, and the values are now attenuated by alpha. Well, that has the effect of bringing them closer together. Instead of the values being plus 1 and minus 1, they could be plus a tenth and minus a tenth. They're much, much closer together in value if alpha, for example, is a tenth. So, right away, you see the attenuation brings them closer together. What happens with the noise term is that these become random numbers. The bigger the channel noise is, the more random they're going to be, which means you could confuse them. That means this value could be bigger than the correct one, and an error will occur. So, it's the noise that creates errors. There is a nonzero probability that the bit choice is wrong when we have noise, and the attenuation doesn't help one bit; it makes things worse. So, how do we figure out that probability? We're going to figure that out in a second. First, I want to show you just how good this receiver is, so I'm revealing for you what the bit sequence was: 1, 0, 1, 0, 0, 1, and it's carefully labeled here. And I show in dashed lines what the transmitted waveform looks like. What I'm about to do is plot each of these outputs here. I'm going to use red for the 0 part of this and blue for the 1 part. And what I'm going to plot is this quantity instead of what's indicated here. The only thing that matters is the value of that integral at each of these bit boundaries, okay? That's where we're going to look to see which one is largest, but it's interesting to see what it looks like in between, as you're going to see in a second.
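A quick numerical check of both cases, with assumed values for the amplitude, bit interval, attenuation, and noise level (none of these specific numbers come from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

A, T, fs = 1.0, 1e-3, 100_000     # amplitude, bit interval, sample rate (assumed)
n = int(T * fs)                   # samples per bit interval
dt = 1.0 / fs
s0 = A * np.ones(n)               # baseband BPSK: s0(t) = +A over the bit interval
s1 = -s0                          # s1(t) = -A

# No noise: send a 0, so r(t) = s0(t); the outputs are +A^2*T and -A^2*T
r = s0
print(np.sum(r * s0) * dt, np.sum(r * s1) * dt)   # -> 0.001, -0.001

# Attenuation plus Gaussian noise: the two outputs shrink toward each other
# and become random, so the wrong one can now come out bigger
alpha, sigma = 0.1, 0.5
r = alpha * s0 + sigma * rng.standard_normal(n)
print(np.sum(r * s0) * dt, np.sum(r * s1) * dt)
```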
So, you send our very noisy signal through our correlation receiver, and the first thing that strikes you is that the output is amazingly clean. The noise has been greatly reduced. It's just astounding. Again, the blue signal here corresponds to the 1 and the red signal corresponds to the 0. And you can see that when the bit was a 1, it's very clear: the 1 part of the receiver goes up dramatically, the 0 output has to be the negative of it, and it's clear that we made the right choice there; 1 is bigger than 0. Then the bit in the transmitted bit sequence flips, becomes a 0, and sure enough, the outputs of our receiver flip, and the 0 becomes the biggest. In this example, it's always correct. It's just amazing, to me, that with something this noisy, the correlation receiver got it exactly right every time. This is not going to happen in general, even for the signal-to-noise ratio in this example. Eventually, there's going to be a time at which the correlation receiver picks the wrong bit: a 0 will be sent and it's going to say, no, a 1 was sent. It's going to be wrong. There's no way of correcting that, because we're assuming the only link between the transmitter and receiver is our noisy channel. Well, like I said, what is this probability? It turns out it's a complicated analysis. And it's clearly affected by the signal set choice, through a relationship that has to do with the energy of the difference signal. So, what's important about the signal set is the value of this integral, which, we now know, is an energy, because when you square something and integrate, that's energy. And what's being squared is the difference between the two entries in the signal set. The bigger that energy is, it turns out, the smaller the probability of error. That's a pretty important factor here, and we're going to see how BPSK and FSK differ in just a second. Also, the channel attenuation enters into it. As I indicated, the greater the attenuation, the smaller the signal coming through the channel, and that's going to make the probability of error go up. And it's also, of course, going to depend on how variable the noise is: the more variable the noise, the higher the probability of error. A subtle aspect is that it depends on the probability distribution of the noise. We have to assume something about the range of values and which values occur more frequently than others in the noise. And what we're going to assume is the so-called Gaussian model, where the noise amplitude tends to have this kind of distribution. This says that amplitudes close to 0 occur more frequently, and only very infrequently do you get very big values. So, this plot here basically reflects the probability of getting a given amplitude at any particular time. And this Gaussian, which is given by something that looks like e to the minus x squared over 2, turns out to be a very common noise model that's used all the time in communication. Let's see what the final result is. I'm not going to derive it here; this result, you can derive in a more advanced communication course. It's a well-known result. And the final answer is that the probability of error is given by a function Q, that I'll show you in a second. The argument of Q is the important thing. Here's that signal difference energy, here's how alpha, the attenuation, enters into it, and N0 comes in, in the denominator. So, what is Q? Q turns out to be what's known as the tail integral of that Gaussian curve.
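The slide itself isn't reproduced in the transcript, so take the exact expression as my assumption, but the pieces described here (difference energy and alpha on top, N0 on the bottom) fit the standard form Pe = Q(sqrt(alpha squared times E_diff over 2 N0)). A small sketch under that assumption, with Q computed from the complementary error function:

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """Tail integral of the unit Gaussian: the area under exp(-u^2/2)/sqrt(2*pi)
    from x to infinity, computed via the complementary error function."""
    return 0.5 * erfc(x / np.sqrt(2))

def bit_error_probability(s0, s1, dt, alpha, N0):
    """Pe = Q(sqrt(alpha^2 * E_diff / (2 * N0))) -- assumed form of the
    correlation receiver's error probability (see lead-in above)."""
    e_diff = np.sum((s0 - s1) ** 2) * dt   # energy of the difference signal
    return Q(np.sqrt(alpha**2 * e_diff / (2 * N0)))
```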
So, if I were to plot that Gaussian again, here it is, and here's the Q function. Notice it's an integral from x to infinity. So, start here at x and you get that area. Clearly, as x gets bigger, Q gets smaller, so it's a decreasing function, and I plot it here on logarithmic coordinates. This is really interesting: the Q function goes to 0 quickly. Notice that Q of 4 turns out to be a value that's just about 3 times 10 to the minus 5, a very small number. Well, we want probabilities of error that are small, and this reflects how well the correlation receiver works. But let's look at this in more detail. The bigger the argument of Q, the smaller Pe is. We want a small Pe, so you want this argument to be as big as possible. The channel attenuation is not going to help; it's going to make the argument smaller. The bigger the noise term, N0, is, that also makes it smaller. The only thing you can do is use more transmitter power. That's the capital A in our BPSK example. And the signal set choice matters too: even for a given transmitter amplitude, you want the difference between the two signals in your signal set to have as big an energy as possible. So, like I keep saying, nothing good happens in a channel. It attenuates, it's noisy, and that's going to hurt: it's going to make the probability of error bigger. The only thing you can do is try to compensate with your signal set choice, which is a design choice, and with how much transmitter power you have. So, let's see the effect of the signal set choice. I normalize things in terms of what's called Eb, the energy per bit: each signal set puts out the same amount of energy for a given bit, whether it's BPSK or FSK being used here. Notice that in here, there's a factor of 2. That factor of 2 is entirely because of BPSK; FSK does not have that factor. Well, that makes this argument always bigger for BPSK than it is for FSK. And you say, well, it's just a factor of 2, how important can that be? Because of the nonlinearity of Q, it can have a dramatic effect on the probability of error. When the signal-to-noise ratio is 10 dB, I want you to note that there's about two and a half orders of magnitude difference: the probability of error is about two and a half orders of magnitude smaller for BPSK than for FSK. So, it turns out BPSK can be far superior to FSK. If you go to a lower signal-to-noise ratio, there's not much difference, but the probability of error is then only about a tenth, which is probably unacceptable in most applications. Now, I'd also point out that it's been proven that BPSK is the best possible signal set you can have. There is no other signal set that, for a given Eb, produces a smaller probability of error. So, BPSK is optimal; but as we've shown, FSK can use a smaller bandwidth. So here, you have the classic tradeoff. You want a small Pe? Well, that might mean you need a wider-bandwidth channel. FSK takes a smaller bandwidth, but your performance isn't as good. And it, of course, depends on the signal-to-noise ratio that you get, which is Eb over N0. Alright, let's summarize now. Digital communication systems are not inherently perfect. The noisy channel, and the channel attenuation for that matter, introduces bit errors that occur with some probability, which we want to make as small as possible. You don't have much control over the noise or the channel attenuation; that's not something you can control as a designer.
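To see the factor of 2 at work, here's a small comparison at 10 dB, assuming the usual forms Pe = Q(sqrt(2 Eb/N0)) for BPSK and Pe = Q(sqrt(Eb/N0)) for FSK, with the attenuation folded into the received Eb:

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    # Tail integral of the unit Gaussian
    return 0.5 * erfc(x / np.sqrt(2))

snr_db = 10.0
snr = 10 ** (snr_db / 10)            # Eb/N0 = 10 dB as a plain ratio

pe_bpsk = Q(np.sqrt(2 * snr))        # BPSK: the factor of 2 inside the argument
pe_fsk  = Q(np.sqrt(snr))            # FSK: no factor of 2

print(f"BPSK: Pe ~ {pe_bpsk:.1e}")   # roughly 4e-6
print(f"FSK:  Pe ~ {pe_fsk:.1e}")    # roughly 8e-4, a couple hundred times worse
```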
You can, to some degree, control the transmitter power, though mostly you have to use what's available. But you can certainly control the signal set design; that's up to you, and it can have a significant impact on performance, even if it doesn't look like it from the formulas. Now, I point out that the worst probability of error you can get is a half. It turns out the probability of error should never be worse than a half, which, I guess, is reassuring. But how small should Pe be? Let's think about this. Suppose you have a data rate of R bits per second, and let's assume for simplicity that each bit is in error with probability Pe, so errors occur at an average rate of R times Pe per second. So, in my example, if we have a one megabit per second data rate and Pe is 10 to the minus 6, that seems like a pretty small Pe. But I point out that means you get one error every second, on average. And that's probably unacceptable. I would think I would want something at least like 10 to the minus 9, but even that means an error, on average, every thousand seconds, which again may be more frequent than you are willing to put up with. So it may seem like we're stuck with what we have: the only thing we can really do to overcome this is to use more transmitter power, so we can boost the signal-to-noise ratio and make Pe smaller and smaller. Well, that turns out not to be the whole story. Just after World War II, an engineer named Claude Shannon developed information theory, which we are about to talk about. He showed that in digital communication it is possible to send bit sequences through analog channels that introduce noise and get that bit sequence through with no error at all, zero. You can overcome the noise that the channel introduces. That's what makes digital communication a lot of fun. I hope you join me for the succeeding videos to hear that story.