So in this video we're going to learn how to compute the DFT efficiently. What this concerns is how much time it takes to compute a spectrum, and how that time depends on the amount of data, the length of the transform. If you're like me, you want a spectral calculation to come back as soon as possible; you want the computation to be quick. That's where efficiency comes in: we want to find out how to compute the DFT as quickly as possible. It turns out this question led to the discovery of something called the Fast Fourier Transform. It's not a new transform; it's just a way of computing the DFT, and it's known as the FFT in the digital signal processing literature.

To talk about this, we're going to introduce the concept of computational complexity, which comes from computer science. It's a systematic way of evaluating how many computations a formula or a program is going to require.

So let's look at the DFT formula, the formula we've seen before:

S(k) = Σ_{n=0}^{N-1} s(n) e^{-j2πnk/N}

I want to look at what I have to do for each frequency. So k is fixed here; the complex exponential is fixed in terms of k. Now I want to do this multiplication, s(n) times the complex exponential, and add the results up. So how much work is required? By "work" I mean computations, and I'm going to consider multiplications and additions in some detail.

Every term in this sum has a multiply in it, and recall that the multiplier is a complex exponential, which means it's a complex number. In most computers I know of, complex numbers are stored as pairs of real numbers, which is convenient for doing multiplication; no computer I know of has complex arithmetic inherently built in. So this formula becomes one involving real multiplications, and those are what we want to count.
So we're going to talk about this in terms of real multiplications. We have 2 real multiplications for every term in the sum, and we have N terms in the sum, so we get 2N for the number of real multiplications.

How about additions? In terms of additions, I'll point out something rather straightforward: suppose you're adding up 3 numbers; you only need to do 2 additions. So the number of additions in a sum with N terms is N-1, and because we have to sum the real and imaginary parts separately, a factor of 2 comes in: one for the real part and one for the imaginary part. So the total number of computations required is 4N-2, just adding up those two counts. But this is for each frequency. Since we have to evaluate this for N different frequencies, we come up with a total of N(4N-2) computations. Writing this out, that's 4N² - 2N.

To summarize this in a way that's a bit more manageable, the concept of computational complexity arose. Computational complexity is concerned with the worst-case behavior: what happens as N goes to infinity, as N gets big. The term that dominates, clearly, is the quadratic term; you can essentially forget about the linear term when N gets big. To see that, take N to be 1000: the quadratic term is well over a million, while the linear term is only a couple of thousand, so it doesn't matter very much. Furthermore, we don't worry about the constant factors in computational complexity, because how fast things compute depends on the speed of the computer, and the 4 really doesn't tell us much about that. What we want to know, in essence, is: on the same computer, how much more work is required as we change the number of points in the transform, capital N. So what is known as the computational complexity of this calculation is N², and this is read as "order N²", written O(N²).
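To make this operation count concrete, here's a small Python sketch of mine (not from the lecture) that tabulates the 4N² - 2N formula and shows how the quadratic term dominates:

```python
def dft_real_operations(N):
    """Real operations for a direct length-N DFT of a real signal:
    (2N multiplications + 2(N-1) additions) per frequency, times N frequencies."""
    per_frequency = 2 * N + 2 * (N - 1)   # 4N - 2
    return N * per_frequency              # 4N^2 - 2N

# For N = 1000, the total is just under 4 million operations;
# the linear term only shaves off 2000 of them.
print(dft_real_operations(1000))  # 3998000
```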
Once you know the complexity, all you do is look at the number of computations, keep the worst-case term, and drop any constants in front, and that tells you that doubling the length of the transform is going to take you 4 times longer. That is the kind of information you want.

Well, not being satisfied with that result, some guy named Gauss, yes, that Gauss, in 1805 figured out a more efficient approach, and what that means is one with a smaller computational complexity than this formula would at first suggest; he could compute the DFT rather efficiently. It's now known as the fast Fourier transform, or the FFT. It turns out he discovered this in 1805 but never published it in his lifetime; he wrote it in his personal notes and put it in a drawer. It was only discovered after he died, in his collected works, which included what was in the drawer unpublished. If you want to read more about this, I wrote a paper about the history of the FFT and Gauss which you may find interesting.

Well, let's see if we can learn what Gauss's thinking was. He made an assumption, which is not a very bad one: we assume that the transform length N is 2 to a power. This is one of the reasons why those powers of 2 I showed you in a previous video are very important to learn. Now, we know from our discussion of the DFT that all we need is for the transform length to be greater than or equal to the actual signal duration. So I'm not changing the signal duration; I'm assuming the transform length is 2 to a power. What's normally done in practice is this: whatever the length of your data is, say 608, you pick the next power of 2 just bigger than that. The DFT will work, and there won't be any problem; you just have to pad with zeros, and that's okay.

So let's see what Gauss did. What Gauss did was separate this sum into the sum of the even-numbered terms and the sum of the odd-numbered terms.
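As a concrete illustration of that padding step, here's a minimal Python sketch (my own; the length 608 is just the example mentioned above, and NumPy is assumed):

```python
import numpy as np

def next_power_of_two(n):
    """Smallest power of 2 greater than or equal to n."""
    length = 1
    while length < n:
        length *= 2
    return length

# 608 data points get zero-padded up to a length-1024 transform.
data = np.ones(608)
N = next_power_of_two(len(data))                       # 1024
padded = np.concatenate([data, np.zeros(N - len(data))])
print(len(padded))  # 1024
```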
So he hasn't changed the computational complexity at all. He's a mathematician; he's just going to play with the formulas a little bit. You then notice he can write the two sums a bit more succinctly in the following way, pulling a phase factor out of the expression for the odd-numbered terms:

S(k) = Σ_{m=0}^{N/2-1} s(2m) e^{-j2πmk/(N/2)} + e^{-j2πk/N} Σ_{m=0}^{N/2-1} s(2m+1) e^{-j2πmk/(N/2)}

Then he notices that the exponentials in the two sums, for corresponding terms, are the same. So we have the same kind of computation going on in each sum: we take the one for the odd-numbered terms, multiply it by the phase factor, add the two together, and that gives the DFT. Again, this hasn't saved any work yet, and in fact it may look like we'll have to do more work, because we have to multiply by that phase factor to use the result. But Gauss has a little surprise for us.

So let's look at this expression again. Here's our formula for the DFT, and what's critical here is that we have to evaluate it at all N frequency values. So I have to evaluate the first sum at each k, evaluate the second sum at each k, multiply the second by the phase factor, and add them up. Well, what he noticed first of all was that each of these sums is a length-N/2 DFT; you can see I'm sampling here at k/(N/2). So these are DFT formulas; well, that's kind of interesting, I guess. This one is a DFT of the even-numbered terms, and this one is a DFT of the odd-numbered terms. I multiply the one for the odd-numbered terms by a complex exponential, add them up, and I get what I want.

Again, I still haven't saved anything yet, until Gauss, being a very smart mathematician, realized one of the properties of the DFT: the DFT is periodic in its length. Each of these sums is periodic with a period of N/2. So what does that mean? It means that once I calculate this formula for k from 0 up to N/2-1, what happens at N/2 is the same thing that happens at 0, and what happens at N/2+1 is the same thing that happened at 1. I don't need to compute these formulas for the higher values of k.
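The periodicity Gauss exploited is easy to check numerically: a length-N/2 DFT evaluated at k and at k + N/2 gives the same value. Here's a quick sketch of mine (the helper `half_dft` and the sample values are just for illustration):

```python
import numpy as np

def half_dft(x, k):
    """DFT of x (length N/2) evaluated at index k, even out of range."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * n * k / len(x)))

s_even = np.array([1.0, 2.0, 3.0, 4.0])  # even-numbered samples, N/2 = 4

# Periodic with period 4: index 5 repeats index 1, index 4 repeats index 0.
print(np.isclose(half_dft(s_even, 5), half_dft(s_even, 1)))  # True
```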
I can reuse my previous computations. The only values I need to compute are the ones for the first half; after that, to evaluate these two DFTs, I just reuse my values, because they're periodic thanks to the properties of the DFT. I do have to multiply by the phase factor for every value of k; there are no special properties there. But still, I've saved half my computations. Let me say that again: I've saved half right away.

So let's see, in terms of a block diagram, what Gauss's insight was. He says: I can take the original data and divide it into the even-numbered terms and the odd-numbered terms, and take the length-4 DFT of each; this one corresponds to the first expression and that one to the second. To compute S(0), I take the value at k=0 from one DFT and the value at k=0 from the other, multiply the latter by the complex exponential, which is pretty easy, and add them up. I do a similar thing for S(1), S(2), and S(3). But when it comes to S(4), I can reuse the values I've already computed for the two DFTs; I have to multiply the odd-numbered DFT by a different complex exponential, but that's all I do. Alright, so this saves half the work; that's pretty clear. I do not have to evaluate these length-4 DFTs at 8 different values; I only do it at 4 each.

Well, N is a power of 2, and what Gauss knows is that N/2 is then also a power of 2. So I can do it again, and keep doing it, until I get down to length-2 transforms. In our length-8 example, here's the first stage of this decomposition: he then decomposes each length-4 DFT into two length-2 DFTs, for the evens and for the odds alike. Now he's saved another factor of 2 in computations. And this keeps going: if you have a higher power of 2, you keep doing this over and over, and you keep increasing your computational savings.

So now let's look at the details and see what we've got. The FFT divides everything into stages.
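Gauss's repeated even/odd splitting is what we now call the radix-2 decimation-in-time FFT, and one way to sketch it is as a recursive function. This is my own illustration, not the lecture's block diagram; it assumes the input length is a power of 2:

```python
import cmath

def fft(s):
    """Radix-2 decimation-in-time FFT; len(s) must be a power of 2."""
    N = len(s)
    if N == 1:
        return list(s)
    evens = fft(s[0::2])   # length-N/2 DFT of the even-numbered terms
    odds = fft(s[1::2])    # length-N/2 DFT of the odd-numbered terms
    out = [0j] * N
    for k in range(N // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / N) * odds[k]
        out[k] = evens[k] + twiddle            # first half of the frequencies
        out[k + N // 2] = evens[k] - twiddle   # second half reuses the same DFT values
    return out
```

The line computing `out[k + N // 2]` is exactly the reuse described above: the two half-length DFTs are periodic, so only the phase factor changes between the two halves.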
The number of stages is equal to the number of times we had to divide by 2. Since we assumed N was 2 to a power, say N = 2^L, the number of stages is log₂N, which of course equals L. So we have L stages, each of which consists of N/2 length-2 DFTs; if you look at every stage in this diagram, you'll see that's true.

Now, every pair of length-2 DFTs is combined after one of them is multiplied by a complex exponential. If you count up the computations, the real multiplies and adds, you get 10 computations for each such pairwise combination. So for each stage you get 5N/2 computations; the reason the factor is 5 rather than 10 is that the 10 applies to every pair. The number of stages is log₂N, so the total complexity is obtained by multiplying those together, and sure enough we obtain the rather interesting result that the complexity of computing the FFT is O(N log N).

The question is: is this really a savings? This looks to be a bit more complicated program to write than the straightforward DFT. So let's compare their complexities. On this plot I have the DFT complexity, which is N², and I've compared it with that of the FFT, N log₂N, though only for the powers of 2. I think you can see there is a huge gap; look at how far apart they are, and look at the vertical scale: that's in millions of operations. It is spectacularly more efficient to use the FFT than to use the DFT in its raw form.

It turns out it's been shown that there is no other algorithm for computing the spectrum that has a smaller complexity than the FFT. The FFT, in some sense, is the most efficient algorithm you can have; it has the smallest possible complexity, and there is no algorithm yet discovered that has a smaller one. This is why the FFT is used pervasively. In fact, if you want to compute the DFT using Octave or Matlab, the function name is fft.
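Here's a quick numerical version of that comparison, using the operation counts from this video (4N² - 2N for the direct DFT, and 5N/2 computations per stage times log₂N stages for the FFT); a sketch of mine:

```python
import math

def direct_count(N):
    """Real operations for the straightforward DFT."""
    return 4 * N * N - 2 * N

def fft_count(N):
    """5N/2 computations per stage, log2(N) stages."""
    return (5 * N // 2) * int(math.log2(N))

for N in (8, 1024, 65536):
    print(N, direct_count(N), fft_count(N))
# At N = 1024 the direct DFT needs about 4.2 million operations,
# while the FFT needs only 25,600.
```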
So the point is that the FFT is spectacularly efficient for computing the DFT, much more so than just programming the DFT formula directly. But I do want to emphasize that this is not a new Fourier transform; it's an algorithm, a way of computing the DFT. It does nothing other than compute the DFT, but very efficiently, which means very quickly. It also has no new properties: linearity and the other signal properties we associate with the DFT all carry over, because it's just a way of computing it. So there's nothing new in that regard. But it's so efficient that it really opens the door to many signal processing ideas, including computing a filter's input-output relation in the frequency domain. It turns out that's really the way to go in many applications, and we're going to talk about that in the next video.