So, in this video, we're going to determine how you receive digital signals, bits: how do you recover those bits from the analog signal that's sent through our analog channel, and how do you do it well? In particular, I want to focus on this word, optimal. We're not going to prove it here, but the receiver I'm going to show you has been proven to be optimal. There is no better receiver, in the sense that the probability of making an incorrect choice for the bit is the smallest it can be for the receiver I'm going to show you. And that's going to let us compare the performance of different signal set choices. It turns out to be an interesting contrast to the bandwidth calculation we did in the previous video.

So, here, to recall, is the digital communication model, both for analog and digital. And I want to point out that the channel is exactly the same as the one we've been talking about. What's more important here in the digital case is the presence of this delay. That turns out to cause all kinds of problems in addition to the attenuation and the noise. Again, we're going to assume that the interference is very small. Delay is important because now we have to figure out where those bit interval boundaries are, and we don't know that delay; we usually don't.
Somehow, the receiver and transmitter must be synchronized. And we assume that there is no other information available to the receiver other than what it receives. It's going to receive a noisy signal whose delay it doesn't know. So, the synchronization problem is to find those bit boundaries accurately. I'm going to assume here that we've found those bit boundaries somehow, to keep it simple; it's actually pretty complicated to build a synchronizer for bit streams. So, we're going to assume these are the actual bit boundaries. You can see them labeled by the dashed lines. Buried somewhere inside this very noisy signal is a baseband BPSK signal set representing a particular bit sequence. What receiver will tell us what that bit sequence is? I think you'll all agree that looking at that signal as it is, by eye, and trying to figure out what that bit sequence is, is pretty difficult. It's going to turn out, as I'll show you in the simulation I'm going to run, that it's actually very easy in this case for the optimal receiver. It's astonishingly easy. Let's see what that receiver is. It's called the correlation receiver, because multiplying one signal by another and integrating is called correlation; that's just the name. And here's the way it works.
So, we're assuming that we're looking at the signal over one of the bit intervals, call it bit interval n. I'm going to take that signal, the received signal coming out of the channel, multiply it first by the signal representing a 0 and then by the signal representing a 1, integrate, and get an answer for each. So, mathematically, that's what I do: I take the received signal, no matter what it is, and multiply it by each of the two signals. I then compare them. Whichever one is largest is my choice for the bit, okay? So, I write that mathematically as the arg max. This may be a new mathematical notation. If you forget the arg for a second, max means, of course, find the maximum with respect to the signal index. We don't really care what the value of the maximum is; that's what max returns, the maximum value. What we care about is which value of i has the maximum, and that's what arg max means: which index has the largest value. I don't care what the maximum value is, I just care which one is biggest. And that is going to be our guess, our best guess, and it turns out our optimal guess, for what the bit was. And this is a very simple operation: you multiply by a signal, integrate, and pick out which one is bigger.
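As a sketch of the operation just described (the function and variable names, and the sampled, discrete-time Riemann-sum integration, are my own; the video works in continuous time), a correlation receiver for one bit interval might look like:

```python
import numpy as np

def correlation_receiver(r, s0, s1, dt):
    """Decide one bit from the received samples r over a single bit interval.

    Correlate r against each candidate signal (multiply sample by sample,
    then integrate via a Riemann sum), and pick the index of the larger
    correlator output: the arg max over the signal index.
    """
    c0 = np.sum(r * s0) * dt  # correlation with the signal representing bit 0
    c1 = np.sum(r * s1) * dt  # correlation with the signal representing bit 1
    return int(np.argmax([c0, c1]))

# Baseband BPSK over a bit interval of duration T: s0(t) = +A, s1(t) = -A.
A, T, N = 1.0, 1e-3, 100
dt = T / N
s0 = A * np.ones(N)
s1 = -A * np.ones(N)

# Noise-free reception of a transmitted 0: r(t) = s0(t).
print(correlation_receiver(s0, s0, s1, dt))  # → 0, the bit that was sent
```

The same call with `r = s1` returns 1, so in the noiseless case the decision is always right.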
So, let me show you how this works, at least in the situation where there's no noise. Let's assume we have our BPSK baseband signal set, and what I'm going to do is send a 0 through this, and show you that this works perfectly in the case of no noise. So, during the time interval that we're talking about, r(t) is the same as s0(t). Multiplying, we get s0 squared, which just has a value of A squared; integrate that over a bit interval, and we get A squared T. In the other part of the receiver, we multiply by s1. Well, that's given by minus A times plus A, because s0 is what we're assuming was sent; that's what r(t) equals. So we get minus A squared, and integrated over the interval, we get minus A squared T. Now, what happens is we choose the biggest of these. Well, it turns out for BPSK, one is always going to be the negative of the other, so whichever one is positive is the one we want, and we would choose that, and, of course, that turns out to be absolutely the right choice. And it's always going to be that way in the case of no noise, so we have a perfect receiver that creates no errors on its own when there's no noise around and the channel is being nice to us. Well, now, suppose things are a bit tougher. There's not only noise, but there's attenuation.
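Writing out the two correlator outputs just described, with r(t) = s0(t) = A over the bit interval [0, T]:

```latex
\int_0^T r(t)\,s_0(t)\,dt = \int_0^T A \cdot A \,dt = A^2 T,
\qquad
\int_0^T r(t)\,s_1(t)\,dt = \int_0^T A \cdot (-A)\,dt = -A^2 T .
```

Since A²T > −A²T, the arg max picks index 0, the bit that was sent.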
Alright, so, exactly the same scenario. I'm sending a bit of 0, so r(t) is s0 times alpha, plus noise. So, the signal term, you might call it that, for each of these is clearly going to be this, and the values are now attenuated by alpha. Well, that has the effect of bringing them closer together. Instead of the values being plus 1 and minus 1, they could be plus a tenth and minus a tenth; they're much, much closer together in value if alpha, for example, is a tenth. So, right away, you see it brings them closer together. What happens with the noise terms is that these are random numbers. The bigger the channel noise is, the more random they're going to be, which means you could confuse them. That means the wrong value could be bigger than the correct one, and an error will occur. So, it's the noise that creates errors. And so, there is a nonzero probability that the bit choice is wrong when we have noise, and the attenuation doesn't help one bit. It makes it worse. So, how do we figure out that probability? We're going to figure that out in a second. I want to show you just how good this receiver is before we get too far along, so I'm revealing for you what the bit sequence was: 1, 0, 1, 0, 0, 1, and it's carefully labeled here.
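A small numerical sketch of this effect (the attenuation value, noise level, and random seed here are illustrative choices of mine, not from the video):

```python
import numpy as np

# Baseband BPSK again: s0 = +A, s1 = -A over one bit interval of duration T.
A, T, N = 1.0, 1e-3, 100
dt = T / N
s0 = A * np.ones(N)
s1 = -A * np.ones(N)

def correlate(r, s):
    return np.sum(r * s) * dt  # multiply and integrate (Riemann sum)

alpha = 0.1                     # illustrative channel attenuation
rng = np.random.default_rng(1)  # seeded so the run repeats
r = alpha * s0 + 0.05 * rng.standard_normal(N)  # bit 0, attenuated plus noise

# The signal parts of the two correlator outputs are now +/- alpha * A^2 * T,
# ten times closer together than +/- A^2 * T, so the random noise contributions
# can, with some probability, push the wrong output above the right one.
print(correlate(alpha * s0, s0))           # the shrunken signal term alpha*A^2*T
print(correlate(r, s0), correlate(r, s1))  # the noisy outputs the receiver compares
```

Because s1 is exactly the negative of s0, the two noisy outputs are always negatives of each other for BPSK; the error question is just which one ends up positive.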
And I show in dashed lines what the transmitted waveform looks like. What I'm about to do is plot each of these outputs here. I'm going to use red for the 0 part of this and blue for the 1 part of this. And what I'm going to plot is this quantity, instead of what's indicated here. And so, the only thing that matters is the value of that integral at each of these bit boundaries, okay? So, that's where we're going to look to see which one is largest, but it's interesting to see what it looks like in between, as you're going to see in a second. So, you send our very noisy signal through our correlation receiver, and the first thing that strikes you is that it's amazingly clean. The noise has been greatly reduced. It's just astounding. And again, the blue signal here corresponds to the 1, the red signal corresponds to the 0. And you can see that when the bit was a 1, it's very clear: the 1 part of the receiver goes up dramatically, and the 0 part, which has to be the negative of it, goes down, so it's clear that we made the right choice there. 1 is bigger than 0. Now, the bit in the transmitted bit sequence flips, becomes a 0, and sure enough, the outputs of our receiver flip, and the zero becomes the biggest. And in this example, it's always correct.
Just amazing, to me, that for something this noisy, the correlation receiver got it exactly right every time. This is not going to happen in general, even for the signal-to-noise ratio in this example. Eventually, there's going to be a time at which the correlation receiver picks the wrong bit. A 0 will be sent, and it's going to say, no, a 1 was sent. It's going to be wrong. There's no way of correcting that, because we're assuming the only link between the transmitter and receiver is our noisy channel. Well, like I said, what is this probability? Well, it turns out it's a complicated analysis. And it's clearly affected by the signal set choice, through a relationship that has to do with the energy of the difference signal. So, what's important about the signal set is the value of this integral, which is, as we now know, when you square something and integrate, that's energy. And this is the difference between the two signals in the signal set. So, the bigger that energy is, it turns out, the smaller the probability of error. That's a pretty important factor here. And we're going to see what the difference between BPSK and FSK is in just a second. Also, the channel attenuation enters into it.
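To see the difference-signal energy concretely, here is a sketch that integrates (s0(t) − s1(t))² numerically for sinusoidal BPSK and FSK signal sets (the specific amplitudes and frequencies are illustrative assumptions of mine; the FSK frequencies are chosen with whole cycles per bit so the two signals are orthogonal over the interval):

```python
import numpy as np

# Compare the difference-signal energy, integral of (s0 - s1)^2 dt, for the
# two signal sets. BPSK: +/- A*sin(2*pi*f0*t). FSK: A*sin(2*pi*f0*t) and
# A*sin(2*pi*f1*t), orthogonal over the bit interval.
A, T, N = 1.0, 1.0, 1000
t = np.arange(N) * (T / N)
f0, f1 = 5.0, 6.0

def energy(x):
    return np.sum(x ** 2) * (T / N)  # square and integrate

bpsk0 = A * np.sin(2 * np.pi * f0 * t)
bpsk1 = -bpsk0
fsk0 = A * np.sin(2 * np.pi * f0 * t)
fsk1 = A * np.sin(2 * np.pi * f1 * t)

e_bpsk = energy(bpsk0 - bpsk1)  # 2 * A^2 * T: the signals are exact opposites
e_fsk = energy(fsk0 - fsk1)     # A^2 * T: half of BPSK, since the cross term is 0
print(e_bpsk, e_fsk)
```

For the same per-bit energy, BPSK's difference energy comes out twice FSK's, which is exactly the factor of 2 that shows up in the error-probability comparison later in the video.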
And as I indicated, the greater the attenuation, the smaller the signal coming through the channel, and that's going to make the probability of error go up. And also, of course, it's going to depend on how variable the noise is. The more variable the noise, the more the probability of error goes up. And a subtle aspect is that it depends on the probability distribution of the noise. So we have to assume something about the range of values, and about which values occur more frequently than others, in the noise. And what we're going to assume is the so-called Gaussian model, where the noise amplitude tends to have this kind of distribution. So, this says that amplitudes close to 0 occur more frequently, and very infrequently do you get very big values. So, this plot here basically reflects the probability of getting a given amplitude at any particular time. And this Gaussian, which is given by something that looks like e to the minus x squared over 2, turns out to be a very common noise model that's used all the time in communication. Let's see what the final result is. I'm not going to derive it here. This result you can derive in any more advanced communication course that you might find. It's a well-known result.
And the final answer is that the probability of error is given by a function Q, which I'll show you in a second. The argument of Q is the important thing. So, here's that signal difference energy; here's how alpha, the attenuation, enters into it; and N0 comes in, in the denominator. So, what is Q? Q turns out to be what's known as the tail integral of that Gaussian curve. So, if I were to plot that Gaussian again, here it is, the Q function. Notice it's an integral from x to infinity. So, start here at x and you get that area. So clearly, as x gets bigger, Q gets smaller. So, it's a decreasing function, and I plot it here on logarithmic coordinates. This is really interesting: the Q function goes to 0 quickly. So, notice that Q of 4 turns out to be a value that's just about 3 times 10 to the minus 5, a very small number. Well, we want probabilities of error that are small. And it turns out this reflects how well the correlation receiver works. But let's look at this in more detail. The bigger the argument of Q, the smaller Pe is. We want a small Pe, so you want this argument to be as big as possible. The channel attenuation is not going to help; it's going to make it smaller. And the noise: the bigger the noise term, N0, is, that also makes the argument smaller.
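Q has no elementary closed form, but it can be evaluated through the complementary error function as Q(x) = ½·erfc(x/√2). A quick check of the Q(4) value quoted above:

```python
from math import erfc, sqrt

def Q(x):
    """Tail integral of the standard Gaussian: Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * erfc(x / sqrt(2.0))

print(Q(4.0))  # about 3.17e-5, matching the "roughly 3e-5" read off the plot
```

Evaluating it at a few points also confirms the behavior described here: Q(0) = 0.5, and Q decreases rapidly as its argument grows.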
The only thing you can do is use more transmitter power. That's the capital A in the BPSK examples. And the signal set choice matters too: even for a given transmitter amplitude, you want the difference between the two signals in your signal set to have as big an energy as possible. So, nothing, like I keep saying, nothing good happens in a channel. It attenuates, it's noisy, and that's going to hurt; it's going to make the probability of error bigger. The only things you can do to compensate are your signal set choice, which is a design choice, and how much transmitter power you have. So, let's see the effect of the signal set choice. I normalize things in terms of what's called Eb, the energy per bit: each transmitter puts out the same amount of energy for a given bit, whether it's BPSK or FSK modulated here, I'm assuming. Notice that in here, there's a factor of 2. Well, that factor of 2 is entirely because of the BPSK; FSK does not have that factor. Well, that makes this argument always bigger for BPSK than it is for FSK. And you say, well, what's a factor of 2, and how important can that be? Well, because of the nonlinearity of Q, it can have a dramatic effect on the probability of error.
When the signal-to-noise ratio is 10 dB, I want you to note that there's about two and a half orders of magnitude difference: the probability of error is two and a half orders of magnitude smaller for BPSK than for FSK. So, it turns out BPSK can be far superior to FSK. If you go to a lower signal-to-noise ratio, there's not much difference, but the probability of error there is only about a tenth, which is probably unacceptable in most applications. Now, I'd also point out that it's been proven that BPSK is the best possible signal set you can have. There is no other signal set for a given Eb that produces a smaller probability of error. So, BPSK is optimal, but as we've shown, FSK can use a smaller bandwidth. So here, you have the classic tradeoff. You want small Pe? Well, that might mean you need a wider-bandwidth channel. FSK takes a smaller bandwidth, but your performance isn't as good. And it, of course, depends on the signal-to-noise ratio that you get, which is Eb over N0. Alright, so, let's summarize now. Digital communication systems are not inherently perfect. The noisy channel, and the channel attenuation for that matter, introduces bit errors that occur with some probability, which we want to make as small as possible. You don't have much control over the noise or the channel attenuation.
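Assuming the standard expressions for the two error probabilities, Pe = Q(√(2·Eb/N0)) for BPSK and Pe = Q(√(Eb/N0)) for coherent FSK (with the channel attenuation folded into the received Eb), the 10 dB comparison can be sketched as follows; computed this way, the gap comes out near 2.3 orders of magnitude, in line with the rough read off the plot:

```python
from math import erfc, sqrt, log10

def Q(x):
    """Tail integral of the standard Gaussian density."""
    return 0.5 * erfc(x / sqrt(2.0))

snr_db = 10.0
snr = 10.0 ** (snr_db / 10.0)   # Eb/N0 as a plain ratio

pe_bpsk = Q(sqrt(2.0 * snr))    # BPSK keeps the factor of 2 inside Q's argument
pe_fsk = Q(sqrt(snr))           # FSK does not

print(pe_bpsk, pe_fsk)          # roughly 3.9e-6 versus 7.8e-4
print(log10(pe_fsk / pe_bpsk))  # about 2.3 orders of magnitude apart
```

That single factor of 2, pushed through the steep nonlinearity of Q, is what turns into a gap of hundreds of times in error probability.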
That's not something you can control as a designer. You can, to some degree, control the transmitter power, though you have to use what's available. But you can certainly control the signal set design; that's up to you. And that can have a significant impact on performance, although it doesn't look like it from the formulas. Now, we point out that the worst probability of error you can get is a half. It turns out the probability of error should never be worse than a half, which, I guess, is reassuring. But how small should Pe be? So, let's think about this. Suppose you have a data rate of R bits per second, and let's assume for simplicity that errors occur at the rate Pe times R. So, in my example, if we have a one megabit per second data rate, suppose Pe is 10 to the minus 6; that seems like a pretty small Pe. But I point out that that means you get one error every second, on average, and that's probably unacceptable. I would think I would want something at least like 10 to the minus 9, but even that means an error, on average, every thousand seconds, which again may be a bit more frequent than you are willing to put up with.
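The arithmetic behind those numbers is just one over Pe times R; a tiny sketch:

```python
def mean_seconds_between_errors(rate_bps, pe):
    """Average spacing of bit errors when they occur at rate Pe * R."""
    return 1.0 / (rate_bps * pe)

R = 1e6  # 1 megabit per second, as in the example
print(mean_seconds_between_errors(R, 1e-6))  # about 1 second between errors
print(mean_seconds_between_errors(R, 1e-9))  # about 1000 seconds between errors
```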
So this may seem like we're kind of stuck with what we have; the only thing we can really do to overcome this is to have more transmitter power, so we can boost the signal-to-noise ratio and make Pe smaller and smaller. Well, that turns out not quite to be the whole story. Just after World War II, an engineer named Claude Shannon developed information theory, which we are about to talk about. He showed that it is possible in digital communication to send bit sequences through analog channels that introduce noise, and get that bit sequence through with no error at all, zero. You can overcome the noise that the channel introduces. That's what makes digital communication a lot of fun. I hope you join me for the succeeding videos to hear that story.