So in this video, we're going to continue our discussion of the frequency domain. We're thinking about signal structure in the frequency domain, in addition to the time domain. We're first going to prove something called Parseval's Theorem, which shows us how to compute the power in a signal entirely in the frequency domain. In many cases, it's easier to compute it that way than directly in the time domain. We'll then talk about constructing signals explicitly in the frequency domain; that's a digital communication example. It's a lot of fun to get a little glimpse of what digital communication is all about. And finally, we're going to filter a periodic signal. It's a fairly simple extension of what we've already done when we talked about analog filters.

Alright, so the crucial thing we're going to prove is something called Parseval's Theorem, and this is a very important result in the consideration of signals and of power. Just to review, this is the Fourier series of a periodic signal s. It's got a period of capital T. We know these are the Fourier coefficients, and the signal is expressed as a superposition of complex exponential signals whose frequencies are harmonics of 1/T. To find a Fourier coefficient, we calculate this integral: we plug in what the signal is, do the integral, and find it for each k.

Okay, so here's Parseval's Theorem stated in words: the power calculated in the time domain equals the power calculated in the frequency domain. So let's see what that means. First of all, the power in a signal is given by this expression. Now, we have talked about power in terms of a circuit's voltage and current. The idea here is that the average power of a periodic signal is going to be related to the integral of the square of the signal, as if it were being passed through a one-ohm resistor. So if we look back at the power formulas, no matter whether this is a voltage or a current, this would be the average power expression, assuming it was going through a 1-ohm resistor. I want to really point out that this is a very general expression: the power over here is not in watts. It's proportional to the power expressed in watts. Of course, whether this expression is numerically correct depends on whether it's a voltage or a current, and on the value of the resistor it goes through. It usually isn't, but for the theory we'll figure that out later; this is the key element. The power in the signal of course depends on the signal waveform, how big it is, all kinds of things.

So, how are we going to prove Parseval's Theorem? The idea is that I'm going to take this expression for s as a Fourier series and plug it in there. Now we're going to do a little bit of mathematics, because we have the square of a sum. So let's talk about that in general. Suppose I have a sum of a_k's, and I square it. What's the expression for that? Well, it's not the sum of a_k squared; that's not the answer. It's a double sum, which I'm going to write as a sum over k and l of a_k times a_l. And to see that, look at the simple example of (x + y)^2. That's equal to x times x, plus x times y, plus y times x, plus y times y: all possible pairwise products. Right? So that's why this is equal to that. And so, when I plug the Fourier series into this expression, I'm going to make use of this property here. Okay.
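Just to make that pairwise-product identity concrete, here's a minimal sketch in Python; the coefficient values are arbitrary, chosen only for illustration:

```python
# Squaring a sum gives all pairwise products, not the sum of squares.
a = [1.0, 2.0, 3.0]  # arbitrary example values

square_of_sum = sum(a) ** 2                        # (sum_k a_k)^2
double_sum = sum(a[k] * a[l]
                 for k in range(len(a))
                 for l in range(len(a)))           # sum over k and l of a_k * a_l
sum_of_squares = sum(x ** 2 for x in a)            # sum_k a_k^2

print(square_of_sum, double_sum)  # 36.0 36.0 -- the double sum matches
print(sum_of_squares)             # 14.0      -- the sum of squares does not
```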
So, what is 1/T times the integral from 0 to T of the double sum over k and l of c_k times c_l times e^(j2π(k+l)t/T), where I've combined the exponential parts? This ought to look very familiar from the previous video, because it has to do with the orthogonality relationship I showed previously. What I'm going to do is push the integral inside the sums, so it becomes clear, and now we know the value of this integral. This integral is equal to 0 if k does not equal -l, and you get 1 when k equals -l, which occurs when the exponent is zero. So the integral is 0 when k does not equal -l, and you get 1 when k equals -l. Well, that's great. So this becomes equal to, and by the way, the 1/T is what makes this term a 1 up here, the sum over k of c_k times c_-k. Well, remember our properties? What is c_-k? That's equal to c_k conjugate: conjugate symmetry. So c_k times c_k conjugate is the magnitude squared of c_k. So that's Parseval's Theorem: the power of a signal can be calculated either in the time domain or in the frequency domain. The expressions are very similar in that you square something and add them up. Integration is like adding in many ways, so this is quite useful.

You may decide that this is an easy way to calculate power, and I'm about to give you some examples of when that's true. This comes up when you think about using the Fourier series as an approximation, when you keep only a finite number of terms. For all kinds of reasons, you may decide you only want capital K terms in your Fourier series; going from -K to +K is what I mean by capital K terms, rather than the full series. Well, how good is that approximation? We can talk about the error, which is this: the original signal minus the approximation. When you subtract out, essentially, the middle terms of this sum, what you have left are the terms from minus infinity to -K-1, and from K+1 to infinity. If you think about this another way, these terms over here are just the Fourier series for the error signal. So how much power is in the error? The RMS value, the square root of the power, is given by this expression. I did a few things here that I want to point out. We know that for a negative frequency, a negative index, c_-k equals c_k conjugate. That means the magnitude of c_-k squared is equal to the magnitude of c_k squared. So when I sum up, as Parseval's Theorem says, from minus infinity to infinity, the negative-index terms, the negative-frequency terms, are going to be the same as the positive-frequency terms. So you can write the whole thing, and this is a very good example of why you want to do that, as a sum over only the positive-index parts, and you put in a 2, because that way you're including the negative-frequency terms.

Okay, so now we have an idea of how well the Fourier series with K terms approximates some signal: we can look at the RMS value of the error signal. It's a great example of using the frequency domain to calculate power. So we go back to our approximation of the square wave. Here we have a plot of the magnitude squared of c_k, and you can see it decreases like 1 over k squared, like we would expect. Now, these are the squared coefficients, and if you look at the RMS error, calculated from the expression given above, you see it's going down, but it doesn't go down very quickly.
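Here's a minimal sketch of that calculation in Python, assuming the plus-or-minus-1 square wave whose coefficients come up later in this video (c_k = 2/(jπk) for odd k, 0 otherwise); the truncation limits are arbitrary illustration choices:

```python
import numpy as np

# Fourier coefficients of a +/-1 square wave: c_k = 2/(j*pi*k) for odd k, else 0.
def c(k):
    return 2 / (1j * np.pi * k) if k % 2 == 1 else 0.0

power_time = 1.0  # time-domain power: the squared signal is 1 everywhere

# Parseval's Theorem: power = sum over all k of |c_k|^2.
# Conjugate symmetry lets us sum over positive k only and double it.
power_freq = 2 * sum(abs(c(k)) ** 2 for k in range(1, 100001))
print(power_freq)  # approaches 1.0, matching the time-domain power

# RMS error of the K-term approximation (k = -K..K):
# rms(K) = sqrt(power - 2 * sum_{k=1..K} |c_k|^2)
for K in (1, 9, 49):
    approx_power = 2 * sum(abs(c(k)) ** 2 for k in range(1, K + 1))
    print(K, np.sqrt(power_time - approx_power))
```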
So this tells you that if you want to approximate a square wave with a Fourier series, you're going to need lots and lots of terms. 49 is a pretty high number, and that only gets the error down to right about there; I calculated it. So if you really wanted something with a very, very small error, you're going to need lots of terms in the Fourier series. It's a very interesting way of looking at it, I think.

So there's an alternate view of using the Fourier series to approximate a signal, and that is to measure its distortion. The idea is that I have some periodic waveform; in this example it's just the square wave. And the question is, how close to a sinusoid is it? This is called harmonic distortion, and it's used in audio to assess how linear amplifiers really are. The measure that's used is called the Total Harmonic Distortion, a very long term, so THD stands for Total Harmonic Distortion, and it has the following definition. The numerator is the power in the higher harmonics; notice it starts at 2, and of course this factor of 2 has to do with the fact that we are including the negative-frequency terms as well. And the denominator is the power in the signal. Now, a little detail here is that you subtract off c_0. The reason for that is that c_0 is the average value of the signal, and if you look at the formula for computing c_0, you'll see it's just the integral of the signal over a period, divided by the period. Well, that's called the average value. In computing harmonic distortion, I'm interested in how the signal departs from a sine wave. I'm not really interested in whether there's an average value that isn't zero, whether the signal is shifted up or down; that's really not the issue. The issue is how much it departs from a sine wave. So that's why this little term in here is kind of a detail, and for my square wave, for example, c_0 is zero. You will notice the numerator ignores c_0 as well.

And so, using our Fourier series expressions through Parseval's Theorem, we see that the power in the signal, forgetting about the average value, is that, and the numerator is that. Now, to compute the harmonic distortion, I would have to plug in what the Fourier series is and do the sum. And I've got to tell you, that's really not a terribly good way of doing it, because those sums, by and large, do not have closed-form expressions. So let me point out something: the numerator is equal to the power in the original signal, which is all the terms summed up from 1 to infinity (that's the denominator), minus the power in c_1, the first harmonic. So I can use that to compute my numerator; the only thing we have left is to compute the power in the signal. Well, that's easily done: you just square the signal, which in the case of the square wave is going to give us a value of 1, integrate that over a period, divide by the length of the period, and we get 1. So we can find the power in the original signal, usually quite easily. And c_1, if you remember what the Fourier series is for a square wave, is just 2 over jπ, which makes that numerator equal to 1 minus 2 times (2/π) squared. And this 1 is the power in the signal, which we calculated just by looking at what the waveform was and acting accordingly. Well, you plug in the numbers, you do the calculations, and you wind up with about 20% harmonic distortion.
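As a check, here's a minimal sketch of that THD calculation in Python, using the definition above (power above the first harmonic, over signal power minus the average-value term):

```python
import numpy as np

# THD of the +/-1 square wave. Its relevant Fourier coefficients are
# c_1 = 2/(j*pi) and c_0 = 0; its time-domain power is 1.
power = 1.0                       # time-domain power of the square wave
c0 = 0.0                          # average value
c1 = 2 / (1j * np.pi)             # first-harmonic coefficient

# Numerator: 2 * sum_{k>=2} |c_k|^2 = (power - |c0|^2) - 2*|c1|^2
numerator = (power - abs(c0) ** 2) - 2 * abs(c1) ** 2
thd = numerator / (power - abs(c0) ** 2)
print(thd)  # about 0.19 -- roughly 20% harmonic distortion
```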
What that means is that the square wave has about 20% of its power in the higher harmonics. Another way of saying it is that the sine wave approximates the square wave in such a way that it captures 80% of the power. Well, it turns out that that is a spectacularly large harmonic distortion. The idea in audio is that you want to assess how linear an amplifier is. If the amplifier were perfectly linear and you put a sine wave into it, you should get out a sine wave, a perfect sine wave, which would make the harmonic distortion 0, because all of these higher-harmonic terms are 0. Well, amplifiers have some moderate amount of distortion, so you want one number that quantifies how linear they are, and total harmonic distortion is used for that, measured with test equipment. A harmonic distortion of even 1% is considered pretty large. In fact, all audio amplifiers are specified through their harmonic distortion.

Let's explore the frequency domain in another case: digital communications. Let me tell you how digital communication works; we're going to explore this in much more detail later. So here's the idea. I want to send a bit, either a 0 or a 1, to the receiver. One way of doing it is to send nothing over an interval of T seconds to represent a 0, or to send a sine wave to represent a 1. To send a sequence of bits, you alternate between nothing and a sine wave, depending on what the bit is you want to send at any one time. Okay, this is a one-bit-at-a-time scheme, so I'm sending 1 bit within the interval 0 to T.

Now you can think of a fancier scheme that sends two bits at a time. In the same T-second interval, if I want to send the two bits 0 0, I'll just send nothing. If I want to send 0 1, I'll send a sine wave of a given frequency, but if I want to send 1 0, I'll send a sine wave of a different frequency. And finally, if both bits are 1, I'll send the sum. Essentially, what I'm doing is constructing the transmitted signal in the frequency domain, using superposition and indicating what the two bits are by choosing which frequencies are turned on and off. So that is a total construction of a signal in the frequency domain, as in the sketch below. Here's an example: here's the frequency f1, it's over here, and the frequency f2 is over there. Here I have the bit sequence 1 0, here's the sequence 1 1, here's the bit sequence 0 1, and here are the waveforms that are produced, entirely constructed by thinking about encoding information, encoding bits, in the frequency domain, by representing them by frequencies. So again, you want to think about signals both in the time domain and in the frequency domain, depending on what the application is. It's very useful to be flexible and be able to go back and forth.
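Here's a minimal sketch of that two-bit scheme in Python; the interval T and the two frequencies are hypothetical values picked for illustration, since none are specified here:

```python
import numpy as np

# Two bits per T-second interval: one bit gates a sinusoid at f1,
# the other gates a sinusoid at f2. All numeric values are illustrative.
T, f1, f2 = 1.0, 2.0, 5.0                 # interval length and the two frequencies
t = np.linspace(0, T, 1000, endpoint=False)

def transmit(bit_a, bit_b):
    # Construct the signal in the frequency domain by superposition:
    # each bit simply turns its frequency on or off.
    return (bit_a * np.sin(2 * np.pi * f1 * t)
            + bit_b * np.sin(2 * np.pi * f2 * t))

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = transmit(*bits)
    print(bits, round(float(np.max(np.abs(x))), 2))  # (0, 0) sends nothing
```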
Okay, now we need to filter a signal. We know how this works from what we've done with circuits. We know that if the signal is a complex exponential and the filter is linear and time-invariant, the output is given by the transfer function times a complex exponential of the same frequency as the input. So, given a transfer function, you know what the filter is going to do. And of course, if the frequency is just a harmonic of some 1/T, we plug that frequency into the transfer function, and of course we carry along the frequency of the input.

And as I showed you when we talked about circuits, if you have a superposition of two complex exponentials, the output is a superposition; this is the definition of a linear system. So guess what happens when you have a Fourier series? It's just a superposition of lots and lots of complex exponentials, so the output is given by the sum of the transfer function evaluated at those harmonic frequencies, times the c_k's for the original signal. So, in essence, this product is the Fourier coefficient for the output. So what do you do to figure out what a filter does to a signal? You start with the signal, usually in the time domain. You find its Fourier series representation; you find those c_k's. You plug them into this expression, along with the transfer function evaluated at those harmonic frequencies, and reconstruct what y is. So: start in the time domain, go into the frequency domain, figure out what's going on there, and then reconstruct in the time domain. This may seem tedious, but it turns out to really be an easy way of doing things, and very efficient, as we'll see a little bit later.

Okay, so let's do an example. Here's a filter, and the input I'm going to use is a periodic pulse train. We know what its spectrum is; I'm not going to write the expression again, it's pretty long-winded, but of course it has a Fourier series. And the filter I'm going to put it through is our old friend the low-pass filter. If you recall, the transfer function is given by that expression, and if you plot the magnitude as a function of frequency, you can see it rolls off and has a cutoff frequency, which in this case is equal to 1/(2πRC). We showed this in previous videos. I want to figure out what happens when I put this periodic pulse train in as the source signal; that's going to be x(t). The output, v_out, is going to be y(t). So, how do I find it? I use the superposition principle, and this is what we get. Here's the transfer function, and here it is evaluated at k/T, the kth harmonic. You multiply it by c_k, and then reconstruct the signal. Well, this reconstruction, to say the least, we can't do analytically. And so this is where you just call upon a computer that can evaluate these things and add them up.
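Here's a minimal sketch of that numerical reconstruction in Python, assuming a unit-amplitude pulse train with the 1 ms period and 20%-of-a-period pulse width used below; the pulse-train coefficient formula is the standard one for that waveform, and the truncation at K harmonics is an arbitrary choice:

```python
import numpy as np

# Periodic pulse train (period T, pulse width 0.2*T, unit amplitude)
# passed through an RC low-pass filter H(f) = 1/(1 + j*2*pi*f*R*C).
T, width = 1e-3, 0.2e-3
fc = 100.0                          # cutoff frequency in Hz; try 100, 1000, 10000
RC = 1 / (2 * np.pi * fc)           # cutoff frequency = 1/(2*pi*R*C)

def c(k):                           # Fourier coefficients of the pulse train
    if k == 0:
        return width / T
    return (1 - np.exp(-2j * np.pi * k * width / T)) / (2j * np.pi * k)

def H(f):                           # low-pass transfer function
    return 1 / (1 + 2j * np.pi * f * RC)

# Output Fourier coefficients are H(k/T)*c_k; reconstruct y(t) by summing.
t = np.linspace(0, 2 * T, 1000)
K = 200                             # number of harmonics kept in the sum
y = sum(H(k / T) * c(k) * np.exp(2j * np.pi * k * t / T)
        for k in range(-K, K + 1)).real
print(y.min(), y.max())             # plot y against t to see the filtered pulses
```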
And here's what you get; it's really kind of interesting. First of all, I had to pick what T was, the period of our periodic pulse train. I picked 1 millisecond. What does that make the fundamental frequency, the first harmonic of that signal? What is that frequency? Of course, the answer is 1/T, so it's 1 kilohertz. Let's keep that in mind. And, as in the previous example that I gave for the periodic pulse train, the width of the pulse is about 20% of the period, which corresponds to the examples I'll show you. So I'm now looking at what happens to the output, and the outputs are shown down here. This is the spectrum of the output; these are the Fourier coefficients for filters with various cutoff frequencies. I'm changing R times C to produce these various cutoff frequencies. Okay, don't forget that the fundamental is at 1 kilohertz. You recall that the Fourier series for the periodic pulse train has power at all harmonics of the fundamental, so it has power at 1 kilohertz, 2 kilohertz, 3 kilohertz, 4 kilohertz, etc.

So, picking a cutoff frequency of 100 hertz, that's really low-pass filtering: the first harmonic is well above the cutoff frequency of the filter, 10 times above it. So the filter attenuates the spectrum of the input a lot, as you can see in this figure, and what gets produced is a somewhat lumpy-looking output. It's not even clear that there were pulses in there originally; it's really low-pass filtered. So low-pass filtering tends to produce signals that don't vary very much in time. They can't wiggle very quickly, because the higher frequencies have been suppressed by the filter. They're low-pass, remember: they tend to attenuate high frequencies. When the cutoff frequency is now 1 kilohertz, which is right at the first harmonic, the first harmonic won't be suppressed very much; it'll only be attenuated by 1 over the square root of 2. You can sort of see the pulses coming out, kind of; it's not entirely clear where the pulses are, and you get what I call shark fins: it goes up, then comes down, goes up and comes down. And this is periodic, of course. And then, when the cutoff frequency of the filter is well above the first harmonic, well above the eighth harmonic, the ninth harmonic, it's way up there, what you will see is that the output very closely resembles the original pulse. And there you can see, if you look very carefully, that it's rounded at the edges here, both at the tops and at the bottoms. So that filter doesn't change things very much, but if the cutoff frequency is really low compared to the first harmonic, you get a lot of change in the output. And this is a common use for low-pass filters: to change the waveform into something that you want, given some simple waveform that you could probably produce in the lab.

Okay. So, signals can be defined either in time or in frequency. What I concentrated on in this video is frequency-domain characterizations of signals. So the idea was, I started with the c_k's and used them to construct signals. Going the other way, you can take your signal, of course, and figure out what its frequency-domain representation is. And that really means you can go either way: you can start in time and go to frequency, or start in frequency and then go to time. All choices are available to you. So you can define signals in either domain, depending on the application. You can also study the structure of the signal in either domain. You may be very interested in what the pulse width is, for example, but you may also be interested in how high up you have to go, what the highest harmonic is that's needed to have a 90% approximation of the original pulse train. Either way of thinking about it may be important, depending on the application. And finally, for linear, time-invariant filters, we can determine their outputs for periodic inputs, because we can use the Fourier series results that we've got, plus superposition, to get the answer. So we're really becoming more sophisticated. In the next video, what we're going to do is drop periodicity. We're going to talk about how to generalize these results so we can talk about general signals, whether they're periodic or not, and that means we'll really know a lot about signals and systems.