So in this video, I want to go through an extended example that compares analog and digital communication. What I'm going to assume is that I have an analog signal that I want to send from one place to another, and I have two choices. I can use amplitude modulation, a completely analog system. Or I can take the analog signal, sample it with an A/D converter that converts it to bits, use a digital communication scheme, and then put the result through a D/A converter to get back the original signal. Which is better, particularly when both use the same channel? I'm trying to make a very fair comparison: an analog scheme and a digital scheme on the same channel. Our criterion for quality, for which one wins, is going to be the one that produces the largest signal-to-noise ratio in the final result. That's our test of which one is better.

Now, as we've seen in previous videos, analog communication is going to result in a noisy signal because of the channel noise, and we already know what that signal-to-noise ratio is; I'm just going to recall that result and use it. We're going to have to do a bit of calculation for the digital channel, though, and I want to point out some things before we go into the details. Recall that the amplitude quantization in the A/D converter introduces an unrecoverable error: once you quantize to bits, you cannot get back the original analog signal perfectly. So that results in an unrecoverable error that we won't be able to do anything about. The channel also introduces error, because the probability of receiving a bit incorrectly is not zero; that too adds noise to the result, which makes the SNR go down. So guess what? No matter whether it's analog or digital, the result is going to be noisy. We have to learn to live with that; it's just a fact of life. The real question is: which one results in the smallest amount of noise? Which one is better?

Here are the conditions for my little test. I'm assuming I have a baseband signal with a bandwidth of 4 kHz; I'm just using that as an example. On the digital side, it's going to be sampled with a B-bit converter. I'm going to let the number of bits be a variable here to see what its effect is, and it may be a little surprising how it affects the results.

So let's go through the analog system. I'm going to use an amplitude modulation scheme, and we know that if the message has a bandwidth of W, the transmission bandwidth you need for analog is twice that, so we have an 8 kHz transmission bandwidth. We also know from previous results that the signal-to-noise ratio of the received message signal is given by that formula. I do want to point out that I have not written the channel attenuation, alpha, in here; I'm just merging it with A. As you know, the correct expression has alpha squared times A squared, and I'm merging the two into one symbol to simplify the expressions a little bit.

Now, for digital communication we have to sample, and we're going to sample at a frequency of 8 kHz, twice the highest frequency. We also know that the transmission bandwidth is 3 times R for modulated BPSK, which is a very good signal set. And the data rate R is going to be the sampling rate, 8 kHz, times the number of bits in the A/D converter.
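To keep the bookkeeping straight, here is a minimal Python sketch of the bandwidth numbers just described, assuming the 2W rule for AM and the 3R rule for BPSK quoted above (the variable names are mine, not from the lecture):

```python
# Transmission-bandwidth bookkeeping for the two schemes in this example.
W = 4_000          # message bandwidth in Hz (the 4 kHz baseband signal)
fs = 2 * W         # sampling rate: twice the highest frequency -> 8 kHz

analog_bw = 2 * W  # AM needs twice the message bandwidth -> 8 kHz
print(f"analog transmission bandwidth = {analog_bw / 1000:.0f} kHz")

for B in (4, 8):         # bits per sample in the A/D converter
    R = fs * B           # data rate in bits per second
    digital_bw = 3 * R   # the 3R rule for the modulated BPSK signal set
    print(f"B = {B}: R = {R / 1000:.0f} kbps, "
          f"digital transmission bandwidth = {digital_bw / 1000:.0f} kHz")
```

This prints 8 kHz for analog, and 96 kHz and 192 kHz for the 4-bit and 8-bit digital systems, numbers we'll come back to at the end.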
Because we have to send a sample every T seconds, T being the sampling interval, the more bits you use, the higher the data rate goes, and so the higher the transmission bandwidth you need. We're going to use this result a little later.

So, to repeat what I said at the start: on the digital side, the final signal you get out of the communication system is the original signal, plus the quantization error, which is due to the A/D converter's amplitude quantization, plus the communication error, the effect of Pe not being zero. We're going to figure out each of these, add them up, and then compute a signal-to-noise ratio.

The quantization error we've gone through before. A few more details: we're going to assume that the signal has an amplitude less than 1, just so I can scale things appropriately. If you express the final result out of the A/D converter, and then the D/A converter, in terms of the bits, you get this formula. To understand it, suppose we draw the diagram we've been using: the signal goes between -1 and 1, and we're dividing that range up into quantization intervals somehow. If you set all the bits in this expression to 0, the result is -1; so the bit pattern of all zeros corresponds to an amplitude of -1. And when you set them all equal to 1, evaluating the expression gives you something very close to 1. So this formula does not come out of thin air; it's just relating bits to an amplitude in this range.

Now, when the signal is received, we're going to have an error because the received bits aren't necessarily the same as the transmitted ones; that's called a channel error. And because of the A/D converter, we did not get back exactly the signal we started with even before that. We have already figured out, in a previous video, the power in that quantization error: 2 to the minus 2B, divided by 3, B being the number of bits in the converter. That result we derived previously.

Now let's look at the channel errors. The resulting error just comes from the fact that the transmitted bit and the received bit are not the same. We want the power in that error, and I'm going to write it in a somewhat new way: these angle brackets mean average; that's a typical notation for an average. So we're going to square the error and average it, to get an idea of the average power in the error. If you go through the calculation, what you discover is that it's just the average of the square of the difference between the two bits. So we have to figure out what that average is, and it's a pretty easy calculation. It's pretty clear that when the bits agree, the difference is 0 and contributes nothing to the average; the only terms that contribute are when the bits disagree. We could have sent a 0 and received a 1, or sent a 1 and received a 0. These errors occur with probability Pe, but the probability that we send a 0, or send a 1, is one half. So each of those terms has total probability one half times Pe, and the two add up to just Pe. So it turns out this average is really quite easy to see. I do want to point out something else before we go on.
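As a sanity check on both error powers, here is a small Monte Carlo sketch in Python. The uniform quantizer on [-1, 1) and the independent bit flips with probability Pe are my modeling choices for illustration, consistent with the assumptions stated above:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 4                       # bits in the A/D converter
N = 100_000
s = rng.uniform(-1, 1, N)   # signal amplitude bounded by 1, as assumed above

# Uniform quantizer over [-1, 1): 2**B levels, each of width 2**(1 - B)
delta = 2.0 ** (1 - B)
s_q = np.floor((s + 1) / delta) * delta - 1 + delta / 2

print("measured quantization error power:", np.mean((s - s_q) ** 2))
print("predicted 2**(-2B) / 3           :", 2.0 ** (-2 * B) / 3)

# Channel-error average: flip each bit with probability Pe and check
# that the average of (b - bhat)**2 comes out to Pe, as derived above.
Pe = 0.05
b = rng.integers(0, 2, N)                            # 0 and 1 equally likely
bhat = np.where(rng.uniform(size=N) < Pe, 1 - b, b)  # flip with probability Pe
print("measured <(b - bhat)**2>:", np.mean((b - bhat) ** 2))
print("Pe                      :", Pe)
```

Both measured values land on the predicted ones: the quantization error power matches 2 to the minus 2B over 3, and the bit-difference average matches Pe.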
Some bits matter more than others. Notice this exponent out here: it reflects the fact that the higher-order bits, the ones with a bigger value of k, really affect the error when you get them wrong, while an error in a lower-order bit, like k equal to 0, doesn't have as much effect. So errors in some bits matter more than others, and we want the overall effect of channel errors. Using my result that Pe is the average value here, it's a constant, and summing this term turns out to be a finite geometric series, which we've already talked about. Once you do all those calculations, this is what you wind up with. Now, I'm going to assume 2 to the 2B is a whole lot bigger than 1, which is normally the case; that's where the approximation sign comes in. Once you drop that 1 to simplify, you get four-thirds Pe. So you get a rather simple result for the channel error.

So now we have the power in the channel error and the power in the quantization error, and the final result for the SNR of our digital communication scheme is this: the signal power, of course, divided by the sum of the two error powers, quantization plus channel.

There's a little detail here: we have to be fair about the transmitter power; I want to make it the same for both systems. So recall, for digital, that the energy per bit, which is the critical part of the probability-of-error expression, is A squared times T, where T is the bit interval duration. And what's T? As I wrote on the previous slide, T is 1 over R, and R, the data rate, is the sampling frequency times the number of bits in the A/D converter. Relating it back to bandwidth and bits, this is the expression we get: our amplitude squared, which captures the transmitter power and the effect of the channel, is related to the energy per bit through this expression. So I'm going to take my analog expression and convert it into something related to what I call Es, which is the energy per bit times the number of bits; Es is the energy per sample used in the communication scheme. I want to keep that the same between the analog and digital systems so I can compare them fairly.

So our signal-to-noise ratio here is given by this. Notice the power in s disappeared, because it's going to be the same in both expressions; I'm basically assuming it's 1, just to keep things simple. And here's Pe, for the BPSK signal set: the result was Q of the square root of 2 Eb over N0, and I've simply substituted Es over B for Eb, so I can explicitly take into account how many bits are being used.

So here's what I'm going to do. I'm going to plot the analog system as a function of Es over N0, and I'm going to plot the result of the digital communication scheme as a function of Es over N0. And Es over N0, remember, basically summarizes how much power the transmitter used, and how much attenuation the channel imposed, compared to the channel noise. So this is the axis we want to compare on. I've plotted the analog and digital cases here, the digital for two different numbers of bits in the A/D converter. As we showed, the analog case is the simplest, because the overall signal-to-noise ratio at the output is the same as Es over N0, so you get a straight line; that's the equality line. The 4-bit case is the dashed curve; for 8 bits you get the solid red curve. It's kind of interesting. Let's see if we can understand it.
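Putting the pieces together, here's a sketch that evaluates both SNR expressions as I've described them, assuming unit signal power, Eb equal to Es over B, and Pe equal to Q of the square root of 2 Eb over N0; the analog curve is taken to be SNR equal to Es over N0, as stated above:

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / np.sqrt(2))

def snr_digital(es_n0, B):
    # Es is the energy per sample, so the energy per bit is Eb = Es / B
    pe = Q(np.sqrt(2 * es_n0 / B))      # Pe = Q(sqrt(2 Eb / N0)) for BPSK
    # Unit signal power over quantization-error plus channel-error power
    return 1.0 / (2.0 ** (-2 * B) / 3 + (4.0 / 3.0) * pe)

for db in range(0, 31, 5):
    es_n0 = 10 ** (db / 10)             # analog output SNR equals Es / N0
    print(f"Es/N0 = {db:2d} dB | analog {db:5.1f} dB"
          f" | 4-bit {10 * np.log10(snr_digital(es_n0, 4)):5.1f} dB"
          f" | 8-bit {10 * np.log10(snr_digital(es_n0, 8)):5.1f} dB")
```

Tabulating instead of plotting, you can see the behavior described next: at moderate Es/N0 the 4-bit system edges out the 8-bit one, and at high Es/N0 the 8-bit system pulls far ahead, with both above the analog line.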
First of all, as the channel signal-to-noise ratio goes up, Pe basically goes to 0, so what you're left with is the quantization error. Those flat limits are the quantization error, and as we know, the more bits you use, the smaller that error gets, which means the SNR goes up. So that part I understand. Go down to the other extreme and Pe is basically one half, no matter how many bits you use, because the channel is really, really noisy; that's why the two curves converge.

In between, it's interesting that the 4-bit result gives you a higher SNR than the 8-bit result. The reason is that Pe for the 4-bit case is actually smaller than for the 8-bit case: you're dividing the sampling interval into 4 bit intervals rather than 8, so each bit lasts longer and carries more energy. That produces a slightly bigger SNR using 4 bits, but only for a while; once the SNR gets big enough, the 8-bit system becomes far superior. However, we can't lose sight of the fact that the digital curves lie above the analog curve. So in this comparison the analog system is inferior; it's worse than the digital systems, despite the fact that the A/D converter introduces an error we can never get around. It's clear that, at least in this comparison, analog is not as good as digital. Digital wins. And that's the kind of analysis you want to do as an engineer to figure out which scheme is going to work: you use some criterion, like the signal-to-noise ratio of the final received message, and then you pick your design, here analog or digital, according to that choice.

There is a little caveat I do want to point out: one thing that isn't fair about this comparison is that the channel bandwidth needed for analog is much smaller than that needed for the digital schemes we just talked about. The analog transmission bandwidth, like I said, is twice the highest frequency of the message, so it's just 8 kHz. However, for 4 bits you need 96 kHz, and for 8 bits you need twice that much. So in some sense this isn't a very fair comparison. What happens if you make them the same? What happens if you constrain the transmission bandwidth to be the same for both cases? Well, it turns out that if you constrain the digital schemes to work within an 8 kHz bandwidth, you can show that amplitude modulation, the analog scheme, now wins; it always results in a higher SNR, which is kind of interesting. However, if you allow more bandwidth than that, it turns out you basically cannot use the extra bandwidth effectively for analog communication; analog doesn't have that flexibility. And then digital wins. So in the grand scheme of things, this is one of the reasons why, in modern systems, everything is being converted to bits and sent in a digital way.

In upcoming videos, we'll find out that you can actually send bits through a noisy channel with Pe winding up being zero. We're going to figure out how that works, and it involves some very clever engineering. We'll see that in the upcoming videos.
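To tie the numbers together one last time, here's a quick check of the two extremes discussed above, using the same digital SNR formula and assumptions as the earlier sketch:

```python
import numpy as np

# Limits of the digital SNR curve: Pe -> 0 leaves only quantization error;
# Pe -> 1/2 is the hopelessly noisy channel where the curves converge.
for B in (4, 8):
    high = 3 * 4.0 ** B                                    # 1 / (2**(-2B) / 3)
    low = 1.0 / (2.0 ** (-2 * B) / 3 + (4.0 / 3.0) * 0.5)  # Pe = 1/2
    print(f"B = {B}: high-SNR limit {10 * np.log10(high):.1f} dB, "
          f"low-SNR limit {10 * np.log10(low):.2f} dB")
```

Both curves bottom out near 1.8 dB on a terrible channel regardless of B, while the high-SNR ceilings, about 28.9 dB for 4 bits and 52.9 dB for 8 bits, are set entirely by the quantization error.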