In this video we're going to implement a digital filter. I'm assuming that we have a signal that has been run through an A-to-D converter and sampled appropriately, so now it's inside the computer and we want to do some signal processing on it. The most common thing to do, of course, is to filter it, and we have to figure out how to actually implement a digital filter. Implementation here means software, and in software you can do essentially anything you want, but we need something that will give us a linear, shift-invariant filter. The technique I'm going to talk about today is the difference equation approach. This is a quite general approach and will let us build any kind of filter we want: low-pass, high-pass, band-pass. We're also going to talk about some filter categories that come up in the digital world; this kind of categorization doesn't come up in analog filters at all, so it turns out to be very important for digital filters to understand these categories. And we're going to develop new input-output relationships, both in the time domain and in the frequency domain, and we'll see how that works. First of all, we have to define what a difference equation is. What we want to do is implement a linear shift-invariant system, and as always we're going to let x be the input and y be the output. The big question is how you actually build filters, how you implement them for digital signals. You don't have circuits anymore; you're inside the computer, and it's all done in software. The technique we're going to use is called the difference equation, and a difference equation is quite simple. You can see that the output depends on previous outputs, from the one just before it back to the one p samples ago; you multiply them by constants and add them up. Then you take the current input and the previous inputs back to q samples in time, multiply them by their own constants, add them up, take the sum of all of that, and that is your current output: y(n) = a1 y(n-1) + ... + ap y(n-p) + b0 x(n) + ... + bq x(n-q). So the current output equals the previous output times a1, plus the output p samples ago times ap, plus b0 times the current input, et cetera. It's a pretty simple thing, and it turns out all we need to know are what are known as the filter coefficients, the a's and the b's, and how many of them there are, the p and q. Once we have those filter coefficients, we can do anything we want; you can build some very fancy filters, even ones that go beyond band-pass, low-pass, high-pass, that kind of thing. Now I want to point out that this is an explicit input-output formula. What I mean by that is that the difference equation not only specifies the kind of filter we want through its coefficients, it is also a way you could implement it. You can actually program this up. You set up a table consisting of the previous y's that you need and the previous x's that you have. An input comes in, and you stick it into this formula: multiply the stored inputs by their constants, multiply the stored previous outputs by their constants, add them all up, and now you've got the current output. For the next output value to compute, you shift each entry over one place in the table, do the same thing to the x's, and you have now implemented a digital filter.
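To make that table-based recipe concrete, here is a minimal sketch in Python; the function name, the argument ordering, and the zero-initial-condition assumption are mine, not something specified in the lecture.

```python
import numpy as np

def difference_equation(x, a, b):
    """Run  y[n] = a[1]*y[n-1] + ... + a[p]*y[n-p]
               + b[0]*x[n]   + ... + b[q]*x[n-q].
    a holds (a1, ..., ap); b holds (b0, ..., bq).
    Values before n = 0 are assumed to be zero."""
    p, q = len(a), len(b) - 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        # previous outputs, weighted by the a's
        for k in range(1, p + 1):
            if n - k >= 0:
                y[n] += a[k - 1] * y[n - k]
        # current and previous inputs, weighted by the b's
        for k in range(q + 1):
            if n - k >= 0:
                y[n] += b[k] * x[n - k]
    return y
```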
It's a very simple programming exercise, just repeated multiplies and additions, and it turns out to be a very easy way to implement a wide variety of filters. All right, so let's see what the output is for a very simple filter and a very simple input. Here only one previous output enters the difference equation, just the previous one, plus the current input: y(n) = a y(n-1) + b x(n). The current input gets multiplied by b, the previous output gets multiplied by a; add those two things up and you get the output of the filter. I do emphasize that this is a linear, shift-invariant system. The simple input I'm going to use is just a unit sample, a very simple input with only one nonzero value. We'll learn in this video that this is a very important input to know the output for. So what do we do? What I like to do, to point out how the difference equation works, is to construct a table with the time index n, the input value at that time, and the output value at that time, and we want to fill in the right-hand column. Let's start at n = -1. We know that at this time the input is zero; that's that value there. The question is, what is y(-1)? The difference equation says it equals a times y(-2) plus b times x(-1), and we know x(-1) is zero. But what's y(-2)? Well, y(-2) from the difference equation of course depends on y(-3), y(-3) depends on y(-4), and so on, back in time. How do we get around this? Have these values been specified? The assumption we make is that all signals at minus infinity are zero. With that, and the fact that the input is zero for all these negative times, we know that y(-2) equals zero, which of course gives us that the output at n = -1 is zero. Now let that input come in, and let's talk about what happens at n = 0. We know the input is 1 there, and to find y(0) we multiply the input by b, multiply the previous output by a, add them up, and of course you get just b. The same applies to the next value: again, all we do is multiply the current input by b and the previous output by a and add them up, which is exactly what the difference equation says to do, and of course we get a times b. Going to the next moment in time and all succeeding ones, I think it's pretty clear what happens: at time 2, for example, we multiply the input (zero) by b and the previous output by a, and we get a squared times b. It's pretty clear that at any time n, as long as n is greater than or equal to zero, the output is b times a to the n. So when I put in a unit sample, my output is this signal for nonnegative n, and I can write that very succinctly using the unit step u(n) in my notation: y(n) = b a^n u(n). The unit step, of course, equals zero for negative arguments and equals 1 for arguments that are zero or positive, so this is a very concise way of writing down this output. We're going to find this happens all the time and is very, very useful. Okay, so what does this output look like? There it is: our difference equation, our input, and the output we have computed. Let's plot it for various values of a and b.
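If you want to check the b times a-to-the-n answer numerically, here is a short sketch of the first-order recursion; the particular values of a and b are just my example choices.

```python
import numpy as np

a, b = 0.5, 1.0          # example coefficients (my choice, not from the lecture)
N = 10
x = np.zeros(N)
x[0] = 1.0               # unit sample input: 1 at n = 0, zero elsewhere

y = np.zeros(N)
prev = 0.0               # y(-1) assumed zero, as in the lecture
for n in range(N):
    y[n] = a * prev + b * x[n]
    prev = y[n]

# closed-form answer from the table argument: y[n] = b * a**n for n >= 0
assert np.allclose(y, b * a ** np.arange(N))
```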
I think you can see that the coefficient b amounts to a gain: if I change b, all it does is make the signal bigger or smaller. So in all of these plots I've set b equal to 1 to simplify things. Here's my favorite value for the filter coefficient, a = 1/2, and you can see the output of the filter for this unit sample input is one-half to the n, and it just decays; in fact it decays exponentially. If you have a = -1/2, it also decays exponentially, but now every other sample alternates in sign: the odd-indexed ones are negative, the even-indexed ones are positive. So now you have something that goes up and down and has a very different character than when a is positive. Another interesting value to play with is a greater than 1. When you do that, say a = 1.1, the output is going to be 1.1 to the n, and it's going to grow. This turns out not to be terribly useful. If you think about it for a second, all these previous outputs, once you're out here, are going to keep getting bigger and bigger and swamp out whatever values you had for the input; basically the filter ignores the input once n gets big enough. So we had better have the absolute value of a less than 1, or we don't have a filter that's interesting. With a outside that range, the unit sample response just takes off, and the filter has a mind of its own, as it were. Okay, so let's think about what this filter is. We know how to compute its output and we can plot it, but what kind of filter is it? That of course means we have to hop into the frequency domain. To determine what the filter is, we're going to do exactly the same thing we did with analog circuits: I'm going to assume the input is a complex exponential at some frequency f, having a complex amplitude X, and we're going to assume the output has the same form, complex exponential in, complex exponential out, with amplitude Y. We'll see whether this assumption is correct by seeing whether we can determine an output amplitude that satisfies the difference equation. So I just stick in these assumptions for x and y, and notice the intermediate term here, y(n-1): when you substitute the complex exponential there, a factor of e^(-j2πf) comes out and we're left with our friend the complex exponential. That's what I used when I wrote this equation. I think you can see right away that the complex exponentials cancel, and sure enough, if I bring the a Y term to the other side, factor out the Y, and divide by 1 - a e^(-j2πf), I have a solution for capital Y that works: Y/X = b / (1 - a e^(-j2πf)). I stick in a complex exponential, I get out a complex exponential. We call this ratio the transfer function, so we have used this technique to find that the transfer function has this form. As we've seen already, the b here just amounts to a gain for this simple filter; whatever number you put there just supplies a gain for the transfer function. What the filter actually does, what kind of filter it is, and how good a filter it is are all determined by a, because that's where the terms involving frequency are.
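Here is a small sketch that evaluates this first-order transfer function at a couple of frequencies; the helper name and the sample values of a and b are mine.

```python
import numpy as np

def first_order_H(f, a, b):
    """Transfer function b / (1 - a*e^(-j2*pi*f)) of y[n] = a*y[n-1] + b*x[n]."""
    return b / (1.0 - a * np.exp(-2j * np.pi * f))

a, b = 0.5, 1.0                          # example values (my choice)
print(abs(first_order_H(0.0, a, b)))     # gain at f = 0:   1/(1 - a) = 2
print(abs(first_order_H(0.5, a, b)))     # gain at f = 1/2: 1/(1 + a) = 2/3
```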
All right, so what we've seen is that the output is equal to the transfer function times the complex amplitude of the input, times the complex exponential: y(n) = H(f) X e^(j2πfn). Well, what does this transfer function look like? Here are some plots for various values of a, and again, just like before, I set b equal to 1 because that's just the gain; we don't need to worry about it too much. For my favorite filter value, a = 1/2, you can see that it starts out high and gets smaller, so I guess you'd call that a low-pass filter. It's not a very good one. What are the values at the low and high frequencies? At f = 0, putting f = 0 into this formula, you get 1/(1 - a). Stick in f = 1/2: e^(-j2πf) at f = 1/2 is e^(-jπ), and we know that's -1, so the value at frequency one-half is 1/(1 + a). So this value is something like two-thirds, and the value up at f = 0 is 2; there isn't much difference in the gain between high and low frequencies. However, if you stick in a bigger value of a, like 0.9 (remember, we have to keep the magnitude of a between 0 and 1, so 0.9 is a pretty big value for a), you can see we get a much nicer filter, and it is low-pass. So what we have found is that a positive implies low-pass. If you now think about negative values of a, you get a high-pass filter, and if I use a value of a of -0.9, you get something that's the mirror image and gives us a very nice high-pass filter. So: a greater than 0 but less than 1, I get a low-pass filter; a less than 0 and greater than -1, I get a high-pass filter. Okay, so what happens in general? How do we find the transfer function for a difference equation in general? You do exactly what we've just gone through: set x equal to a complex exponential and assume the output is the same. I think it's pretty easy to see it's all going to work, and what you get for a transfer function always looks like the following. You have a numerator that consists of the input terms from the difference equation, b0 + b1 e^(-j2πf) + ... + bq e^(-j2πfq), over a denominator that consists of the terms involving the previous outputs, 1 - a1 e^(-j2πf) - ... - ap e^(-j2πfp). The coefficients of the filter are staring you in the face from the difference equation; it's very, very easy to write this down, not very difficult at all. I do note that the minus signs in the denominator come about because of the way you have to solve things: remember, we had to bring those terms to the other side when we found the transfer function, so the b's all come in with plus signs and the a's enter downstairs with minus signs. We can use this to do what's called filter design. What you want is a way of choosing the filter coefficients, the a's and b's, so that you get the kind of filter you want, and there are software packages that do this; it's not nearly as hard as it might be in the analog world, where you're stuck with circuits. You tell the program what kind of filter you want, say a low-pass with a cutoff frequency of 0.2, you specify how good a filter it is, how sharply the gain goes to zero beyond the cutoff, and it pops out the filter coefficients that you need. It's not very hard at all.
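Here is a minimal sketch of evaluating that general transfer function directly from the a and b coefficients; the function name and coefficient ordering are my own convention (libraries such as scipy.signal offer equivalent routines).

```python
import numpy as np

def transfer_function(f, a, b):
    """H(f) = (b0 + b1 e^(-j2*pi*f) + ... + bq e^(-j2*pi*f*q))
            / (1 - a1 e^(-j2*pi*f) - ... - ap e^(-j2*pi*f*p))."""
    f = np.asarray(f, dtype=float)
    num = sum(bk * np.exp(-2j * np.pi * f * k) for k, bk in enumerate(b))
    den = 1.0 - sum(ak * np.exp(-2j * np.pi * f * (k + 1)) for k, ak in enumerate(a))
    return num / den

# Example: the first-order low-pass from before, a = 0.9, b = 1
freqs = np.linspace(0, 0.5, 6)
print(np.abs(transfer_function(freqs, a=[0.9], b=[1.0])))  # gain falls from 10 toward ~0.5
```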
Now let me point out something that is different from the analog world: a special kind of filter. What makes it special is that there's no term that involves the previous outputs; the difference equation depends only on the input: y(n) = (1/q)[x(n) + x(n-1) + ... + x(n-q+1)]. We want to figure out how this works. I'm going to do the same thing I did before: assume a unit sample input and build my table. It turns out to be a whole lot easier than before, since the output depends only on previous inputs; there's no concern about what y(-2) is. At n = -1, the filter takes the q input values ending at that time, adds them up, divides by the number of terms, and puts out the answer, which of course is 0. Once the window encompasses the unit sample, at n = 0, we look here, go back q terms, add them all up, and divide by q; it's pretty clear you get 1/q. Going to the next time, the filter takes the current value and the previous ones, adds them up, and divides by q; the only nonzero value it picks up is still the unit sample, so again you get 1/q. We can keep going with this, and we keep getting 1/q, until we reach time q - 1, which is right here. Look at the last term in the sum: it corresponds to x(q - 1 - q + 1), which is x(0). So n = q - 1 is the last time at which our filter grabs the input at time zero, and of course we get 1/q there too. Beyond that point, the filter is sitting out there grabbing only inputs that are all 0, so it adds them up and gets 0; for all subsequent times the output is just 0. So for our special filter, with this difference equation, we put in the unit sample and the output looks like a pulse. That's what it is: a pulse. Here I've plotted it for a specific value of q, and it's pretty easy to see what's going on. Let me point out something about this filter that makes it special. What is this filter doing? It's taking q values of the input, adding them up, and dividing by q, the number of terms. What would you call that in standard, non-technical terms? You take q values, add them up, divide by the number of terms, and that's called averaging. So this filter is special in the sense that it averages its input over q values, and that's what the output is. Sometimes this is called a running average, because you keep doing it at every n: whichever n you pick, you average that block of values, then the next, then the next, et cetera. And what kind of filter is the averager? Remember, we want to implement our filter in software with a difference equation, but what kind of filter is it? What I'm going to do is find the transfer function, and remember, the general form says that when you have only input terms, the transfer function just has those in the numerator. So the transfer function here is (1/q)(1 + e^(-j2πf) + ... + e^(-j2πf(q-1))); all the coefficients are 1/q. Well, we've seen this sum before, and it should remind you of the digital sinc, and there it is: a digital sinc divided by q, together with the linear phase term that we know comes from adding up all of these terms.
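Here is a small sketch comparing the direct sum for the averager's transfer function with the digital-sinc closed form, sin(πfq)/(q sin(πf)) times a linear-phase factor; the value of q and the frequency grid are my choices.

```python
import numpy as np

q = 5                                     # number of terms averaged (my choice)
f = np.linspace(0.01, 0.5, 50)            # avoid f = 0 so sin(pi*f) is nonzero

# Transfer function by direct summation: H(f) = (1/q) * sum_k e^(-j 2 pi f k)
H_sum = sum(np.exp(-2j * np.pi * f * k) for k in range(q)) / q

# Closed form: digital sinc with a linear-phase factor
H_sinc = np.exp(-1j * np.pi * f * (q - 1)) * np.sin(np.pi * f * q) / (q * np.sin(np.pi * f))

assert np.allclose(H_sum, H_sinc)
print(np.abs(H_sum[:5]))                  # magnitude falls off with frequency: a (crude) low-pass
```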
What does that look like? What we see is that our averager is a low-pass filter. This is a very important concept to have: an averager is equivalent to a low-pass filter. The cutoff frequency, if you will, is at 1/q; this is a plot for a particular value of q. If I come up with a filter with a bigger value of q, so I average more terms, the gain at the origin is still 1, but the cutoff frequency moves down to lower frequencies, and the ripples actually get smaller. So you get a better low-pass filter by averaging more terms, and the cutoff frequency moves to lower frequencies. I think intuitively this is exactly what averaging does: it removes high frequencies and leaves only low-frequency variations, so it makes intuitive sense that averaging is just a low-pass filter. And I think it's very neat that in electrical engineering we can actually implement averaging, especially on a computer, and think about it as a filter. Now I want to do something special and point out why the unit sample response is so important. Here's our linear shift-invariant system; it turns out I'm not going to worry about whether it's implemented with a difference equation or not, it's just linear and shift-invariant. For an input I want to use a unit sample, and I'm going to refer to the output for that special input as little h of n. In concise notation: if the input signal is a unit sample, we call that special output h(n). Now, because this filter is shift-invariant, if I delay that unit sample at the input, the output is just a delayed version of h; a very important fact to realize for what's coming up. Well, as we've seen, you can think of every discrete-time signal as a superposition of unit samples: the unit sample at the origin has amplitude x(0), the one at time 1 has amplitude x(1), and here at time 2 is a unit sample whose amplitude is x(2), and so on. I can write that concisely as x(n) = Σ_m x(m) δ(n - m); that expresses the superposition. If I put a superposition of signals into a linear shift-invariant system, what do I get out? A superposition of the outputs to each. Therefore we can immediately write that the output for our input x, expressed as a superposition of unit samples, is given by a superposition of delayed unit sample responses: y(n) = Σ_m x(m) h(n - m). That's what little h is; it's called the unit sample response. In fact, we now have a very general way of expressing the input-output relationship for a linear system. You give me the input signal and the unit sample response, and I can use this sum to compute the output. You don't need the difference equation; this is an alternative. It's usually not an efficient way of computing the output, the difference equation is much, much better, but for theoretical reasons, as you're about to see, this is an important input-output relationship.
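Here is a minimal sketch of that superposition sum, y(n) = Σ_m x(m) h(n - m), written out directly; the example input and the truncated a-to-the-n unit sample response are my own illustrative choices (np.convolve is used only as a cross-check).

```python
import numpy as np

def unit_sample_response_sum(x, h):
    """Compute y[n] = sum_m x[m] * h[n - m] for finite-length x and h."""
    y = np.zeros(len(x) + len(h) - 1)
    for m, xm in enumerate(x):
        # each input sample contributes a scaled, delayed unit sample response
        y[m:m + len(h)] += xm * h
    return y

x = np.array([1.0, 2.0, 3.0])              # example input (my choice)
h = 0.5 ** np.arange(6)                    # truncated a^n unit sample response, a = 1/2
assert np.allclose(unit_sample_response_sum(x, h), np.convolve(x, h))
```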
Now I want to find the transfer function, because, as we've seen for a linear shift-invariant system, the Fourier transform of the output should be related to the Fourier transform of the input. To that end, I'm going to compute the DTFT of the output, which means I take the output signal, multiply it by a complex exponential, and sum over n; then I substitute the superposition sum on the other side. Since I'm summing over n, I move that sum inside and do it first, because there's only one term that depends on n. Well, what is that inner sum? To me, it looks like the DTFT of a delayed signal, and I know what that is: it's the same as the DTFT of little h times a linear phase. And what do you call the DTFT of little h? You call it big H. So I think you see where this is going: we're about to get the transfer function, and there's the linear phase that has to do with that delay. If I pull that term out of the sum on m, since it doesn't depend on m, I see that what's left is the DTFT of the input. And I arrive at a very nice result: the DTFT of the output equals the DTFT of the input times the transfer function, Y(f) = X(f) H(f). The new thing here is that this transfer function is the Fourier transform of the unit sample response. Transfer function here, unit sample response here: they constitute a Fourier transform pair. So this is a way of finding the transfer function, and it's why finding the unit sample response in all those difference equation examples was so important: if I can compute the DTFT of that unit sample response, I've got the transfer function right away. Very, very nice. Well, let's talk about those filter categories I mentioned. We like to think of digital filters as falling into one of two classes, FIR and IIR. What do these acronyms mean? FIR stands for finite impulse response; there's really a word missing from the acronym, duration: finite-duration impulse response. An impulse is another name for a unit sample, and to keep things short this terminology has been used since the beginning of digital signal processing. What FIR means, as we've seen, is that the difference equation has no terms that depend on previous outputs; it only adds together current and previous input values. When you have a difference equation that's a little more general and includes those output terms in addition to the input terms, we know from our example that the unit sample response is infinitely long. Remember, when we had one output term, our unit sample response looked like a to the n and lasted forever: even though the unit sample is only one sample long, the output lasts forever. That's what IIR stands for: infinite-duration impulse response. These categories depend entirely on the behavior of h. One little thing I want to do is compute the duration of the output for a finite-duration input in the FIR case, and this turns out to be really important. Let's assume my input is something very simple, three samples long; not very interesting, but it will serve the purpose for this example. And my little h is something that goes down, looks like that, and then is zero. What's the output going to be? We know by the superposition principle that all we do is find the output due to each of these input samples and then add them up: the first one gives that output and then goes to zero, the next one gives that output, this one gives that output, and then we add them all up.
Well, how long is this output? The input went up to time n - 1, so its duration is n. The unit sample response goes up to time q - 1; its duration is q, with q nonzero values. And the last time in the output, right here, is n + q - 2, so that means the output has duration n + q - 1: the duration of the output equals the duration of the input plus the duration of the unit sample response, q, minus 1. That little minus 1 is sort of a bookkeeping kind of thing, but it's important to get it down, as you'll see when we talk about this later. Back to the IIR case: if you have a finite-duration input, the output has an infinite duration in general. There's no real calculation to do here, but this is very important, as we'll see in another video. Another important property of FIR filters has to do with the phase. An FIR filter can have a linear phase, and we've seen that already for our averager, but an IIR filter always has a nonlinear phase. Let me bring up the phase plots for the filters we've been talking about. Here's our linear phase coming out of the FIR averager. What does linear phase mean? Linear phase means delay. When you have a transfer function with a linear phase, every frequency may have its amplitude changed by the filter, but every frequency, no matter what it is, experiences the same delay. When you have a nonlinear phase, take a frequency here and a frequency here: because the phase is nonlinear, these two frequencies experience different delays. Why is this important? It has to do with subtleties in the way signals look. It turns out your ear is very insensitive to phase. If you had the same filter implemented as an FIR filter or as an IIR filter and listened to the results, you couldn't tell them apart. You can phase-shift the input, speech or music, as much as you want; whether the phase shift is linear or nonlinear, you cannot tell the original from the phase-shifted version with your ears. We just can't hear it; the ear doesn't care. However, if you apply a nonlinear phase shift to an image and filter it that way, it will really mess it up; it will be entirely gibberish in most cases. So for image processing people like to stick with FIR filters, because you can design them to have linear phase, in which case everything just gets shifted over. A nonlinear phase for an image is very, very bad, and you'll see that if you take more advanced digital signal processing. Okay, so digital filters: very interesting and very important classes of them are described by difference equations, and not only is this a way of describing the filter, it is a way of implementing it. Now, it turns out that the difference equation is not always the most efficient implementation, and we're going to talk about this in another video; it's not always the best way of implementing a digital filter. What I mean by that is that it is not as fast; clearly you don't want to wait for your results to come in, you want your filter to run very quickly. When you get into the details of counting the multiplications and additions it takes to find the output for a given input, it turns out the difference equation works fine, but it isn't the best, and we're going to talk about that in another video. So again, the input-output relationship for a general linear filter can be expressed in either the time domain or the frequency domain. The time-domain relationship is the superposition sum with the unit sample response, y(n) = Σ_m x(m) h(n - m).
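As a quick numerical check of the duration bookkeeping above, convolving a length-3 input with a q-term unit sample response gives an output of length 3 + q - 1; the specific values below are just illustrative.

```python
import numpy as np

x = np.ones(3)                  # a finite-duration input: 3 samples long
q = 5
h = 0.8 ** np.arange(q)         # an FIR unit sample response with q = 5 nonzero values

y = np.convolve(x, h)           # the superposition (convolution) sum
print(len(y))                   # 3 + 5 - 1 = 7: input duration + h duration - 1
```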
In the frequency domain, the relationship is given by our old friend the transfer-function relation, Y(f) = X(f) H(f), and we can use either one to specify what the filter does. Either way, I find the transfer function more useful when you're trying to figure out, for example, what kind of filter it is. The new thing is that the unit sample response and the transfer function are a Fourier transform pair, so if you can find one, you can find the transfer function very easily; it's not hard at all, you just find its DTFT. In succeeding videos we're going to talk about efficient ways of implementing filters, and it turns out that in many of those cases we wind up implementing filters in the frequency domain. That's kind of a non-obvious thing, and we'll talk about it coming up.