So in this video, we're finally going to get the signal into the computer. We've talked about how numbers are represented by computers, but now we have to figure out how we actually convert an analog signal, like a voltage, into a number that goes into a computer. This is called analog-to-digital conversion, and I think the name is pretty obvious. We'll go through the details of that. It actually goes through two stages. One is to acquire values of the signal at various times. We'll learn that there's something called the sampling theorem, a very important result in this business, by which you can sample a signal without error. Amplitude quantization is another story. There, we have to convert those continuous amplitude values, characteristic of analog signals, into a set of discrete values. That incurs error, as we'll see, and we'll learn how to analyze and control those errors. Alright. So, let's go over the definitions again of an analog and a digital signal. Analog signals are functions of a continuous variable, like time, and I show a typical analog signal here: it wiggles around, has a continuous range of values for its amplitude, and is a function of continuous time. Digital signals are discrete-valued functions of the integers. So I show, as a bubble plot, values of a signal here; they're isolated, they only occur at the integers. And you'll notice there's a fixed set of amplitudes, and those are the only amplitudes that are allowed, because it's a discrete-valued function. So I want to turn now to this: this is that signal after it's gone through an A-to-D converter, and you may wonder, well, that doesn't look anything like it. How in the world can you go back from the digital signal to the analog one? I'm going to show you that you can't, but it's a very interesting story when you get into all the details. Alright. Well, the first thing we're going to do is acquire individual values of the signal. And we can describe that in the following very simple way.
We're going to start with our analog signal. There it is. And here is our old friend, the periodic pulse train. The separation between the pulses is T sub s; s means sampling, and T_s is going to be the sampling interval. The width of each pulse, delta, is going to be very small for us. Well, if you just multiply those two signals together, the analog signal times the periodic pulses, you get the waveform shown at the bottom here. And we're going to make delta so small that all that really results is that we get the value of the signal at the center of each pulse. We'll assume these pulses are really, really very narrow. Now, the question is going to be: from these values, can we connect the dots? Can we connect the dots, filling in what is zero, so that it looks like the original signal? And the answer is, we're going to be able to do that with no error, as long as we play our cards right. Let's see how that works. Well, how do you figure that out? You've got to look in the frequency domain. That's one of the reasons we went through all this effort to learn about signals in both time and frequency. So, what is its Fourier transform? What I'm going to do is express the periodic pulse sequence as its Fourier series, and we know the formula that gives the coefficient values. So let's do that multiplication by substituting in the Fourier series. Rearrange things a little bit, so that we put all the functions that depend on time together. The question is going to be, what is the Fourier transform of that? Once I know that, because the Fourier transform is linear, I know the answer is just a weighted linear combination of those individual spectra. Well, this is an old Fourier property: multiplying by a complex exponential in the time domain corresponds to a shift in the frequency domain.
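The sampling step just described can be sketched in a few lines. This is a minimal illustration, assuming an example 10 Hz sinusoid and a 100 Hz sampling rate (my choices, not values from the lecture): in the narrow-pulse limit, multiplying by the pulse train simply picks off the values s(nT_s).

```python
import numpy as np

# Illustrative parameters (not from the lecture):
f0 = 10.0               # example signal frequency in Hz
fs = 100.0              # sampling rate, comfortably above 2 * f0
Ts = 1.0 / fs           # the sampling interval T_s

def s(t):
    """An example bandlimited analog signal: a 10 Hz sinusoid."""
    return np.sin(2 * np.pi * f0 * t)

# In the limit of very narrow pulses, sampling reduces to evaluating
# the signal at the pulse centers t = n * Ts.
n = np.arange(50)       # sample indices
samples = s(n * Ts)     # the acquired values s(n * Ts)
```

The question the rest of the video answers is whether these 50 numbers are enough to reconstruct the continuous waveform between them.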
So we get a really simple result from our Fourier properties: each term has a Fourier transform which corresponds to the original signal spectrum, capital S, shifted over by k/T_s, because that is the frequency of that complex exponential. And then they all get added up. So it's pretty easy, actually, to find the Fourier transform of our sampled signal. It's just a weighted linear combination of shifted spectra. Well, let's look at that in more detail and plot it. I'm going to use for my example a rooftop spectrum, which I like to use. This signal is what's known as bandlimited, a very important consideration for sampling. What that means is its spectrum is zero beyond some frequency. Here, that frequency is W. It also turns out that W is its bandwidth; that's the amount of the positive frequency axis where the spectrum is nonzero. But what's important for us is the cutoff frequency beyond which the bandlimited signal's spectrum is zero. So, that's what the individual spectra look like. Now, when I shift them and add them up with constants, this might be what I get. You can see I have taken the original, when k = 0, here's the original spectrum, multiplied by c_0. Then I shift it over by 1/T_s and multiply by c_1 to get that one, shift it over by 2/T_s, and so on. I handle the negative indices the same way. So this goes on forever and ever, as you might imagine. Now, the question is going to be, can I get back the original spectrum? Can I go from this and somehow get back? And the problem is going to be these overlap regions. This is called aliasing; it's a technical term. What happens in here is that these two pieces overlap with each other and they add up; that's what this formula says. And once they're added, I can't tell you what the individual components are. If two numbers add up to 3.2, I can't tell you which two unique numbers were added together to get 3.2.
It's impossible. So, once you get into this area of overlap, I can't tell you what the original signal spectrum was in that region. Out here, I can, but not in there. I'm in big trouble. So, what is the criterion by which the copies don't overlap? The separation between them is 1/T_s. And that separation has to be greater than the amount that extends out from the origin plus the amount that comes over from the other direction. Well, each of those amounts is W. So 1/T_s has to be greater than 2W, and I can write that the other way by turning it upside down: T_s has to be less than 1/(2W). If we meet this criterion, these individual rooftop spectra will not overlap, and at least we have a chance of getting back the original signal spectrum. Well, in this case, what I did is explicitly draw it so that the spectra did overlap, and that's what I get for a spectrum here. I get these overlapping areas; aliasing has occurred because I didn't sample quickly enough. T_s needs to be smaller. Here's what happens when you make T_s smaller. Now the individual pieces, the individual parts, do not overlap. I can see them all very clearly, and that's because I've obeyed the rule that I wrote down. Now, can I go from this back to the original? How would I do that? And I think the answer is fairly obvious. What I would do is filter. I would apply what's called an ideal low-pass filter, having a cutoff frequency of W, that would cut out all these higher shifted versions of the spectrum. And sure enough, I get back my original spectrum multiplied by c_0. c_0 is just a gain, and I could easily compensate for that. What's much more important is to get rid of these frequency overlaps. We do not want aliasing. So now, here's the criterion for the sampling theorem, which I'm going to point out here. We have to have two things. One is, the signal has to be bandlimited, and we're going to assume its bandwidth is W Hertz.
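The overlap argument above can be made concrete numerically. This sketch uses frequencies of my own choosing: sample a 90 Hz sinusoid at only 100 Hz, well below the more-than-180 Hz the criterion demands, and its samples become indistinguishable from those of a 10 Hz sinusoid, because 90 Hz folds down to 100 - 90 = 10 Hz.

```python
import numpy as np

# Illustrative aliasing demo: fs = 100 Hz is too slow for a 90 Hz signal.
fs = 100.0
n = np.arange(32)
t = n / fs

fast = np.cos(2 * np.pi * 90 * t)   # undersampled: fs < 2 * 90
slow = np.cos(2 * np.pi * 10 * t)   # its alias at 100 - 90 = 10 Hz

# The two sample sequences coincide, which is exactly the overlap of
# shifted spectral copies described in the text: once they add, the
# 90 Hz component can no longer be told apart from 10 Hz.
aliased = np.allclose(fast, slow)
```

Once the samples coincide, no amount of processing can recover which sinusoid produced them, just like the "two numbers adding to 3.2" example.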
And the sampling pulse sequence has to have a period, a sampling interval, that's less than 1/(2W). What we've shown, just by example, is that if you obey those two rules, then you get a picture that looks like this. By simply applying a low-pass filter, I can get back the original signal. So suppose all I have is just the values, one every T_s. If I multiply them by my original pulse sequence, taking every pulse amplitude and making it s(nT_s) accordingly, and then pass that through a low-pass filter, this filter here, I get back the original signal with no error. That is really very nice. And, by the way, remember that this is a frequency-domain way of looking at it. If it weren't for the frequency domain, we wouldn't be able to say one way or the other. Now, this criterion is usually not specified in terms of the sampling interval. Rather, it's usually specified in terms of what we call the sampling rate. The rate, of course, is just one over the interval. And so our criterion for obeying the sampling theorem is that the sampling rate has to be greater than twice the highest frequency in the signal s. This first low-pass filter here, by the way, is there to make sure that the signal is bandlimited. There are some A-to-D cards out there for computers that are pretty sloppy: they don't have this first low-pass filter, and so it is possible to send in a signal that gets aliased once it's sampled. But high-quality A-to-D converters, analog-to-digital converters, have this front-end filter, which is called an anti-aliasing filter, for obvious reasons: it prevents aliasing. Then the sampling rate inside has to be greater than twice the bandwidth of the front-end filter. And if you do that, I can, from these sample values of the signal, get back the original waveform in continuous time, I emphasize that. Just from a few values, I can get back the original signal. Very, very nice.
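The ideal low-pass filter that performs this reconstruction is, in the time domain, sinc interpolation: each sample is replaced by a scaled sinc and the sincs are summed. Here is a sketch under assumptions of my own (a 5 Hz sinusoid, 100 Hz rate, and a finite window of samples, which leaves a small truncation error the ideal infinite sum would not have):

```python
import numpy as np

fs = 100.0
Ts = 1.0 / fs
n = np.arange(-2000, 2000)                 # many samples, to limit truncation
samples = np.sin(2 * np.pi * 5 * n * Ts)   # 5 Hz sinusoid, W = 5 < fs / 2

def reconstruct(t):
    """Sinc-interpolate the sample sequence at time t (seconds):
       s(t) ~ sum_n s(n*Ts) * sinc((t - n*Ts) / Ts)."""
    return np.sum(samples * np.sinc((t - n * Ts) / Ts))

# Evaluate between sample instants: the bandlimited original comes back.
t0 = 0.123                                 # deliberately not a multiple of Ts
err = abs(reconstruct(t0) - np.sin(2 * np.pi * 5 * t0))
```

Note that `np.sinc` is the normalized sinc, sin(pi x)/(pi x), which is exactly the impulse response of the ideal low-pass filter with cutoff at half the sampling rate.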
Well, that's the first phase. We have a discrete-time signal. Discrete time means we only have the signal's values every T_s seconds. It's not digital yet, because the values coming out of here are continuous, in general. You put in a sine wave, you get all kinds of different values; it's not a discrete set of amplitudes, and so you have to do that phase next. And that's the next part of a real-world A-to-D converter. So, here is the sampling part that we've already described. And, of course, we could get back the original signal with no error if we just stopped there. But now we have to convert it to a set of discrete values, and that's called amplitude quantization. I denote the amplitude quantization function by Q. What it looks like is what's called a staircase function. So, any value here in this range gets called a 5, as you can see. And that's how we convert from a continuous set of values into a set of discrete ones. Here, I'm going from something that ranges from -1 to 1 to something that has integer values 0 to 7. Well, I think you can see this is a perfectly good function going this way. But coming back, if I tell you it's a 5, I can't tell you precisely what the original signal value was. And that's the error that's inherent in amplitude quantization: amplitude quantization introduces error. So, here's the signal at various stages in our A-to-D converter. Here's our original signal, and here it is after it's been sampled. If the signal is bandlimited, then I can, from my little discrete values in time, get back the original signal, which I show as this ghostly dotted line underneath. I can connect the dots in exactly the right way. But then, once I go through the amplitude quantization part, I only have eight possible values for the amplitude, corresponding to the integers 0 to 7, and you can see the layering.
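The staircase function can be sketched directly. This is a minimal 3-bit quantizer assuming the [-1, 1] input range and 0..7 codes from the figure; the function names `quantize` and `dequantize` are mine, not the lecture's notation:

```python
import math

B = 3                                # bits, as in the lecture's example
levels = 2 ** B                      # 8 quantization intervals
delta = 2.0 / levels                 # interval width for a [-1, 1] range

def quantize(x):
    """Return the integer code (0 .. levels-1) for amplitude x in [-1, 1]."""
    k = math.floor((x + 1.0) / delta)
    return min(max(k, 0), levels - 1)    # clamp the endpoints into range

def dequantize(k):
    """Map a code back to the center of its interval (the choice that
       minimizes the worst-case and RMS error, as argued below)."""
    return -1.0 + (k + 0.5) * delta
```

Going forward through `quantize` is a perfectly good function; coming back through `dequantize` can only guess the interval center, which is exactly the inherent error the text describes.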
And since these values do not correspond exactly to the values over here, there's an error. I cannot get back the original signal. So, we'd better get some understanding of how big these errors can be; that's going to be important. And can we control those errors? So here's my quantization function, and let's blow up one of those intervals and look at it in some detail. Every value in this interval is called a 4, right? So here is my original signal value, and since it's in this interval, it's called a 4. Well, once I get a value of 4, how do I translate that back into some amplitude value? There's going to be an error, which I call epsilon. But the question is, what value do I assign to it? I think it makes a lot of sense that the value we assign to the amplitude is the value at the center of the interval. If you picked a point over here, a signal value at the other end of the interval would incur a huge error. If you picked over here, well, the error may be smaller for this signal value, but the one over there is going to incur a large one. It seems to me a good compromise is just to pick the middle, and you can show that that is the right one to pick. Well, the question is, how big is that error, sort of on average? What I'm going to calculate is the RMS value of the error. So, same calculation we've done before: we take the squared error and integrate it over the width of the interval. Since we're assuming we're at the center, we integrate from -Δ/2 up to +Δ/2 relative to that center, and divide by the total width Δ. Do the integral and divide by Δ, and you get Δ²/12. It's a classic answer. So the RMS error is the square root of that, Δ/√12. Now, a little important fact is: what is Δ? I have assumed that the signal is amplitude-limited, which is not a bad assumption for most real-world signals; here it goes between -1 and 1.
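The Δ²/12 result is easy to verify numerically: average the squared error over one interval of width Δ (the particular Δ here is an arbitrary example value) and compare with the closed form.

```python
import numpy as np

# Numerically average eps**2 over one quantization interval of width delta,
# with the reconstruction point at the interval center, so the error eps
# ranges uniformly over [-delta/2, +delta/2].
delta = 0.25                                      # arbitrary example width
eps = np.linspace(-delta / 2, delta / 2, 200001)  # fine grid over the interval
mean_square = np.mean(eps ** 2)                   # approximates delta**2 / 12
rms = np.sqrt(mean_square)                        # approximates delta / sqrt(12)
```

The grid average converges to the integral (1/Δ)∫ε²dε = Δ²/12, so the RMS error is Δ/√12, the classic answer quoted in the text.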
In general, the signal goes between plus and minus its maximum magnitude, which I think is pretty obvious. So the total range across here is 2 times that maximum value. The number of quantization intervals is 2^B. I show 8 quantization intervals here; that's what I've used in this example. What is the B that corresponds to 8 quantization intervals? Well, I'm sure you know, since you know your powers of 2, that 8 is 2^3. So this is what's known as a 3-bit converter. It turns out you wouldn't buy a 3-bit converter, that's way too few bits, but I'm using it here as an example so we can see what's going on. Alright. What I need to do, though, is have some measure of quality of the amplitude-quantized signal. And the typical measure we use is called the signal-to-noise ratio, SNR. It's defined to be the ratio of the signal power to the noise power, and "noise" is in quotes here because it's really the amplitude quantization error. So, the bigger the SNR, the better, the happier I am: that means, relative to the signal, the error is small. So we have signal plus error, where this is the original signal value, and the bigger the signal is relative to the error, the bigger the SNR. So, let's go through the calculation. What's the power in the signal? Well, I'm going to assume for this calculation that my signal is a sinusoid whose amplitude is A, and we know from what we've already done that the power in a sinusoid is the amplitude squared over 2. The power in the error ε, as determined by Δ, is Δ²/12. And now Δ is going to be twice the amplitude of our sine wave divided by the number of intervals: Δ = 2A/2^B. So you simply do the calculation. And notice that the amplitudes cancel, so our result is going to apply no matter how big or small the signal is; as long as the A-to-D converter takes into account that range of variation, the SNR is fixed, it doesn't matter. And what you get is a somewhat weird answer: SNR = (3/2)·2^(2B).
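The calculation just described can be checked in a few lines. This sketch follows the lecture's assumptions (a full-range sinusoid of amplitude A, B = 3 bits); note how A cancels out of the final ratio:

```python
# SNR of a full-range sinusoid quantized with B bits.
A = 1.0                          # any amplitude works; it cancels below
B = 3                            # the lecture's 3-bit example

delta = 2 * A / 2 ** B           # interval width: total range 2A over 2**B steps
signal_power = A ** 2 / 2        # power in a sinusoid of amplitude A
noise_power = delta ** 2 / 12    # quantization error power from above

snr = signal_power / noise_power # equals (3/2) * 2**(2*B), independent of A
```

For B = 3 this gives (3/2)·2⁶ = 96, and doubling or halving A leaves `snr` unchanged, which is the point of the cancellation.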
Now, the important part here is that the SNR increases exponentially with B. The more bits you use, the finer the quantization interval, the smaller it is, and the SNR goes up exponentially. So the way to reduce quantization error is to use more bits. It turns out that more bits can make for a very expensive A-to-D converter to buy. But no matter how many bits you use in your A-to-D converter, there's still going to be some error. We cannot get back the original signal exactly, but the error can be reduced by controlling the number of bits. Now, it turns out that SNR is not usually stated this way. It's stated in a quantity called decibels, and there is a complete video on decibels that you may want to look at if you don't know about the decibel scale. It's a logarithmic scale. And since this is a power ratio, x in dB, always written lowercase d, uppercase B, is equal to 10·log10 of x divided by some reference number. That's how you take a number x that you may calculate and express it in dB. Well, the reference here is 1. After I take the log and work it out, this is what I get for the SNR expressed in decibels. This first term, I think it's pretty clear, comes from the 3/2. And I'm going to let you figure out for yourself that the 6B amounts to 2^(2B) expressed in decibels. So, what's very common is to have an 8-bit converter; most computers have at least 8-bit converters. And that means the signal-to-noise ratio is 48 plus a little bit, in dB. Well, 48 is almost 50, and because of the definition of decibels, that's almost 5 orders of magnitude. So, with an 8-bit converter, we're almost guaranteed that the signal in general has a power that's 10^5 times the power of the quantization error. You may say, wow, that really is pretty good; I don't think I need to get much better than that. It turns out that there are people out there who claim to be able to hear a noise that's that small.
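The decibel conversion is worth writing out. Taking 10·log10 of (3/2)·2^(2B) gives 10·log10(3/2) + 20·B·log10(2), roughly 1.76 + 6.02·B dB, which is where the "about 6 dB per bit" rule comes from. A minimal sketch (the function name is mine):

```python
import math

def snr_db(bits):
    """SNR in decibels for a full-range sinusoid quantized with `bits` bits:
       10 * log10( (3/2) * 2**(2*bits) ) ~ 1.76 + 6.02 * bits dB."""
    return 10 * math.log10(1.5 * 2 ** (2 * bits))
```

For an 8-bit converter this comes out just under 50 dB: the "48 plus a little bit" in the text, with each extra bit buying about 6 more dB.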
That is, with an SNR of 50 dB, they claim they can hear the quantization noise. And I think that's pretty close to the threshold of hearing. So, 8 bits may suffice for some applications; for high-quality audio, more bits are used. In fact, I want you to go and look up what the CD sampling rate is and how many bits are used in CDs, in the original CDs. I think you'll be pretty surprised at what the answer is. And furthermore, once you know what the sampling rate is, I want you to tell me what the upper limit for the signal is assumed to be. What's the bandlimit W for the signal, given the sampling rate used for CDs? Okay. So, this is the story of analog-to-digital conversion. The sampling part, which we see here, incurs no error. No error at all. It's all in the amplitude quantization part. It introduces error; it's inherent, we cannot avoid quantization error. But we can figure out how big it is, and we can control it by using more bits. The more bits we use, the smaller the error.