Okay. So let's write out the general form of the Fourier transform matrix, F sub N. It's the matrix 1/sqrt(N) times the N-by-N matrix with ones across the first row; the next row is 1, omega, omega^2, up to omega^(N-1); and so on, down to the last row, which is 1, omega^(N-1), omega^(2(N-1)), up to omega^((N-1)^2). So it's a matrix of that form. And then what you do, in general, is give it as input some N-dimensional vector alpha_0 through alpha_(N-1), and you get as output a complex vector beta_0 through beta_(N-1). So the Fourier transform is a transformation which takes as input an N-dimensional vector alpha and outputs an N-dimensional vector beta.

Now, how difficult is this task? Well, classically the obvious algorithm takes order N^2 steps. The reason is that to compute some arbitrary entry beta_j, you take the j-th row of this matrix and take its inner product with the column vector alpha. That requires order N multiplications and additions, and you have to repeat this for every entry, so it takes N times order N steps, which is order N^2 steps. Now, one of the great breakthroughs in the field of algorithms was the fast Fourier transform, the FFT, which was formalized by Cooley and Tukey in the 1960s. It gives an algorithm which runs in order N log N steps, so nearly linear time. This speedup, by nearly a quadratic factor, was very important: it's the basis of much of digital signal processing, and the use of digital technology in audio and video processing is essentially made possible by this algorithm.

Now let's turn instead to the quantum case. In the quantum case, it's natural to interpret this N-dimensional input vector as the state of a system of log N quantum bits. So if you let little n equal log of capital N (think of capital N as a power of two), then this is the state of little n qubits which happen to be in the state sum over j of alpha_j |j>. And what the quantum Fourier transform does is transform this state into the new state sum over k of beta_k |k>. Now, the remarkable thing is that the quantum circuit which carries this out runs in a number of steps that scales only polynomially in little n. In the next video we will see that we can carry out this quantum Fourier transform in big O of little n squared steps, which is just big O of log squared capital N steps. It turns out that if you work a little harder you can get this down to nearly linear in little n, roughly n log n steps. But the main point is that going from order N log N classically to polylogarithmic in capital N is an exponential improvement. And this is where the power of quantum mechanics, the power of quantum computation, comes in.
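As a concrete aside (not part of the lecture, and the function names and the choice N = 8 are just for illustration): here is a minimal NumPy sketch. It builds the matrix F_N explicitly, computes beta the slow, order N^2 way, row by row, and checks that an order N log N FFT routine gives the same vector. For N = 2^n, this same vector beta is what the quantum Fourier transform would leave as the amplitudes of little n qubits.

```python
import numpy as np

def fourier_matrix(N):
    # Entry (j, k) of F_N is omega^(j*k) / sqrt(N), with omega = exp(2*pi*i/N).
    rows, cols = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * rows * cols / N) / np.sqrt(N)

def slow_transform(alpha):
    # Order N^2: each beta_j is the inner product of the j-th row of F_N
    # with the column vector alpha.
    return fourier_matrix(len(alpha)) @ alpha

N = 8                                     # N = 2^n with n = 3, so beta could sit on 3 qubits
alpha = np.random.randn(N) + 1j * np.random.randn(N)

beta_slow = slow_transform(alpha)         # order N^2 steps
# NumPy's FFT takes order N log N steps.  np.fft.fft uses the opposite sign
# convention (omega = exp(-2*pi*i/N)), so we use ifft and rescale it to match
# the 1/sqrt(N) normalization used above.
beta_fast = np.fft.ifft(alpha) * np.sqrt(N)

print(np.allclose(beta_slow, beta_fast))  # True: same vector, far fewer steps
```

The two computations agree to machine precision; the point of the FFT, of course, is that the second one stays cheap as N gets large.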
Except that there's a bit of a rub here. The rub is that, while the quantum Fourier transform does carry out this transformation from the alpha vector to the beta vector exponentially faster than the classical Fourier transform can, the amplitudes within the quantum state are inaccessible to us. All we can do is perform a measurement. And when we measure, we just see k with probability beta_k magnitude squared. So we just see one of these indices k, with probability |beta_k|^2. We don't get access to the beta_k themselves, but only to something much simpler: a sample from the probability distribution they define.

So, there's a story about Gordon Moore, co-founder of the Intel Corporation, giving a lecture. He was talking about Moore's Law, this phenomenon that the number of transistors you can pack onto a chip has been doubling roughly every eighteen months, going back several decades now: this exponential increase in the number of transistors and in computing power, and decrease in size and in cost. He was saying that if the automobile industry had been on the same trajectory (and he was giving this speech over a decade ago), then a Rolls Royce would give you hundreds of thousands of miles to a gallon of petrol, it would travel at a tenth of the speed of light, and it would cost less than a nickel. And then some wise guy in the audience piped up and said, "and it would be smaller than a matchbox."

Well, that's sort of what we have here. In the quantum Fourier transform we have something which is incredibly powerful: it's very fast, and there is this exponential speedup. But on the other hand, the price to pay for being so powerful is that you get to see very little of the result. You don't get the entire output; you just get a sample of it. And so the whole trick to using the quantum Fourier transform is how to use it effectively inside an algorithm. And that's what we will see in the next lecture.
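To illustrate that last point with another small sketch (again not from the lecture, with made-up amplitudes): even if we held the vector beta in a classical array we could read off every entry, but measuring the little-n qubits only hands us a single index k, drawn with probability |beta_k|^2.

```python
import numpy as np

n = 3
N = 2 ** n                               # n qubits carry N = 2^n amplitudes

# Pretend these are the amplitudes beta_0, ..., beta_{N-1} after a quantum
# Fourier transform (made up here; any normalized complex vector would do).
beta = np.random.randn(N) + 1j * np.random.randn(N)
beta /= np.linalg.norm(beta)             # normalize so the probabilities sum to 1

# Measuring the n qubits yields the index k with probability |beta_k|^2:
# a single sample from this distribution, not the amplitudes themselves.
probabilities = np.abs(beta) ** 2
k = int(np.random.choice(N, p=probabilities))
print(f"measurement outcome: |{k:0{n}b}>")
```

Run it a few times and the outcome k changes; samples from this distribution are all the quantum Fourier transform ever hands us, which is why the next lecture is about setting things up so that a sample is actually useful.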