Okay, so in this video we will talk about quantum complexity theory. Now, quantum complexity theory is a vast subject, and you shouldn't expect to learn a lot about it in a 15- or 20-minute video. So think of this video as a taste of quantum complexity theory. It will try to give you a picture of what the basic quantum complexity class is, and how it relates to classical complexity classes. Now, of necessity, this is not a lecture that I expect you to understand in detail, so just think of this as something that will whet your appetite and give you a picture of what quantum complexity theory is all about; if you don't follow it, don't worry too much about it. Okay. So let's start with a brief introduction to classical complexity theory. First, let's talk about what a computational problem is. A computational problem specifies some input-output relationship, and you want to achieve this input-output relationship via a suitable algorithm. Here are some examples. You might want to multiply two matrices M and N, which I've given to you. You might want to test whether a given number is prime. You might want to solve the related problem of actually producing the prime factorization of a given input number n. Or you might want to test whether a given Boolean formula f is satisfiable, by actually producing a satisfying assignment x. Okay, now we say that a given computational problem is solvable in polynomial time if there is an algorithm which, on inputs of size n, halts in time O(n^k) for some constant k, meaning it halts in time bounded by some polynomial in n, and outputs the correct answer. Now, in this course so far, we've been talking about circuits.
So the corresponding statement for circuits would be that on inputs x of size n, you have a circuit C which outputs C(x), the correct answer to this problem, and instead of running time, we talk about the size of C. So saying that there's a polynomial-time algorithm corresponds to saying that there's a family of circuits C_n, one for each input length n, such that on inputs of length n, C_n outputs the correct answer, and moreover the size of C_n, the number of gates in C_n, is bounded by some polynomial in n. There's also a uniformity condition, which says that the circuit C_n can be efficiently computed given n as input, but we won't get into the details of these definitions. Okay. So now the class P, or polynomial time, is the class of all computational problems with polynomial-time algorithms. In our examples, matrix multiplication is polynomial-time solvable; it's in the class P. But we don't think factoring is, and we certainly don't think satisfiability is. What about primality testing? Well, primality testing comes very close to being in the class P. What it certainly is in is the class BPP, which corresponds to polynomial time for a randomized algorithm. Here's what we mean by that: there's some circuit which takes as input x, but it also takes as input some random bits r, and what we want is that C(x, r) is the correct answer with high probability. So we might say it's correct with probability at least three quarters. But now, if we want to boost the chance of success, all we have to do is run the circuit many times with independent random bits and take the majority answer. That way we can boost the success probability from three quarters to something arbitrarily close to one. Okay.
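This majority-vote boosting is easy to see in a few lines. Below is a minimal simulation; the `noisy_algorithm` stand-in is a hypothetical predicate invented for illustration (not anything from the lecture), playing the role of a randomized circuit C(x, r) that is correct with probability 3/4:

```python
import random

random.seed(1)  # fixed seed so the Monte Carlo estimates are reproducible

def noisy_algorithm(x):
    """Stand-in for a BPP circuit C(x, r): returns the correct answer
    (here, a toy predicate: the parity of x) with probability 3/4."""
    correct = x % 2
    return correct if random.random() < 0.75 else 1 - correct

def boosted(x, runs=31):
    """Run the noisy algorithm `runs` times with independent random
    bits and take the majority answer."""
    votes = sum(noisy_algorithm(x) for _ in range(runs))
    return 1 if votes > runs // 2 else 0

# Monte Carlo estimate of the success probabilities.
trials = 2000
single = sum(noisy_algorithm(7) == 1 for _ in range(trials)) / trials
majority = sum(boosted(7) == 1 for _ in range(trials)) / trials
print(single, majority)  # single is near 0.75; majority is near 1
```

With 31 independent runs, the Chernoff bound already pushes the failure probability below one in a thousand, which is the point of the majority trick.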
So, our mantra is that polynomial time is good, exponential time is bad. Why is exponential time bad? Well, we've seen this already. The exponential function grows so quickly that even for small values of n, exponential in n is larger than the number of particles in the universe, or the age of the universe, and that cannot be a good time bound. Okay. So that's our basic distinction between polynomial and exponential. Now, there's a question that we've begged, which is: how did we decide on our model of computation? Why was it circuits, or Turing machines, or algorithms? How did we decide what model we are going to run our algorithms on? This is a very important consideration; it's what the foundations of complexity theory are built on, and the answer is called the extended Church-Turing thesis. What this thesis asserts is that it doesn't much matter what your model of computation is: your running times will work out to be essentially the same, to within polynomial factors. So if you're talking about polynomial-time computation, it's robust against changes in the model of computation. This is another reason we like polynomial time: it's a robust notion of running time. What the extended Church-Turing thesis actually says is that any reasonable model of computation can be simulated on a probabilistic Turing machine with at most polynomial simulation overhead. What does this mean? It means, for example, that if you have a running time of t steps on some model of computation that you invented, which you think is reasonable, reasonable meaning physically implementable in principle, then you can also run this algorithm in, say, order t² steps, or some polynomial in t steps, on a Turing machine, the simplest model of them all. Now, where does this extended Church-Turing thesis come from?
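To make the "exponential is bad" point concrete, here is a quick check, using the commonly cited rough estimate of 10^80 particles in the observable universe (that figure is an outside assumption, not from the lecture):

```python
PARTICLES_IN_UNIVERSE = 10**80  # rough common estimate (assumption)

# Find the first input size n at which 2^n steps already exceed
# the number of particles in the observable universe.
n = 1
while 2**n <= PARTICLES_IN_UNIVERSE:
    n += 1
print(n)  # -> 266: a tiny input size by practical standards
```

So an exponential-time algorithm is already hopeless on inputs of a few hundred bits, whereas a polynomial bound like n² or n³ stays perfectly manageable at that size.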
How do you interpret it? Well, one interpretation of the extended Church-Turing thesis is that a Turing machine describes the set of all functions computable by a mathematician. You think of the Turing machine's tape as an infinite supply of paper, the finite state control as the mathematician, and the read-write head as a pencil. So you have a mathematician armed with an infinite supply of pencils and an infinite supply of paper, and you want to know: what can this mathematician compute? What are the problems that this mathematician can solve in a reasonable amount of time? Well, that's what polynomial time describes in this first interpretation. The second interpretation of the thesis is that the class polynomial time represents what can be physically computed in this universe in a reasonable amount of time. The rationale for this is that Turing machines can simulate cellular automata with polynomial overhead; the two models are polynomially equivalent. And the argument is that cellular automata represent everything that you can compute in the classical universe. Remember, we are talking about classical complexity theory so far. By the way, the picture here is a cellular automaton simulating Conway's Game of Life. So what is it about a cellular automaton that makes it such a convincing model for anything that can be computed in the classical universe? In a cellular automaton, each cell is in one of a finite number of states, and it looks at its small neighborhood, the cells around it; its new state is a function of the states of its neighbors. Okay. Now, this is directly analogous to something in physics.
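Since the lecture points to Conway's Game of Life as the canonical cellular automaton, here is a minimal sketch of one update step. The thing to notice is the locality: a cell's next state depends only on its own state and its eight neighbors.

```python
from collections import Counter

def life_step(grid):
    """One step of Conway's Game of Life. `grid` is the set of live
    (row, col) cells; each cell's next state is a function only of
    its 3x3 neighborhood -- the locality property the argument
    for cellular automata leans on."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in grid
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is live next step if it has exactly 3 live neighbors,
    # or is currently live with exactly 2.
    return {cell for cell, k in neighbor_counts.items()
            if k == 3 or (k == 2 and cell in grid)}

# A "blinker" (three cells in a line) oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(life_step(blinker)) == blinker)  # True
```

Despite rules this simple, the Game of Life can simulate a Turing machine, which is why it serves as a stand-in for classical computation in general.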
It says that classical physics is described by local differential equations. Which means: if you want to know how some physical quantity changes over time, electricity, magnetism, whatever, then the value of that quantity at a certain point at time t + Δt is determined by an infinitesimal neighborhood around that point. Based on the values at time t in this infinitesimal neighborhood, you can determine the value at this particular point at time t + Δt. Okay, but now you can see that cellular automata are discretizations of local differential equations. And you can argue that if you want to compute, then your computer has to be fault tolerant: it cannot be that you assume infinite precision in your model. So you have to discretize it; you have to have a digital abstraction. And once you take your local differential equations and subject them to a digital abstraction, you get cellular automata. So that's the argument that polynomial time is the limit of what you can compute in the physical world classically. And so quantum computation is the only model of computation that violates this extended Church-Turing thesis. It's the only reasonable model of computation which cannot be simulated in polynomial time on a classical Turing machine. So what's our evidence for this? Well, we have two kinds of evidence that it violates the extended Church-Turing thesis. One is black-box separations. We already saw examples of algorithms, such as the recursive Fourier sampling problem or Simon's problem, where we said there is no polynomial-time classical solution.
You can solve these problems in polynomial time on a quantum Turing machine or with a quantum circuit, but there is no corresponding polynomial-time classical algorithm for them. We also saw that there are problems that we strongly believe to be difficult classically; in fact, we believe this so strongly that we base cryptography on it. These are problems like factoring and discrete logarithms. For these problems, we know that there is a polynomial-time quantum algorithm, but we strongly believe that it takes exponential time to solve them classically, or at least all the classical algorithms that we know of so far take exponential time. So these are the two kinds of evidence that we have that quantum computers violate this extended Church-Turing thesis. Now, you could ask: why are we messing around with this kind of evidence? Why don't we just flat out prove that quantum computers violate the extended Church-Turing thesis? Well, the answer has to do with this. Okay, let's first define the class BQP: bounded-error quantum polynomial time. It's the class of computational problems which have polynomial-time quantum algorithms that output the correct answer with high probability. So, for example, factoring belongs to BQP. Now, you can show the inclusions that P, polynomial time, is contained in BPP, probabilistic polynomial time, which is contained in BQP. So BQP is the class that we are interested in showing is much larger than P or BPP. But then you can also show further containments: BQP is contained in something called P^#P, which is contained in PSPACE. PSPACE is the class of problems that you can solve using not a polynomial amount of time but a polynomial amount of space.
So if you have only a polynomial amount of memory in your algorithm, but you can keep reusing this memory, then that's PSPACE. Now, the interesting thing here is that P versus PSPACE, whether or not P equals PSPACE, is one of the major open questions in complexity theory. We strongly believe that P is not equal to PSPACE, but we don't know for sure; we don't know how to prove it. So now you can see: if we could show that BQP is strictly larger than P and BPP, then we'd have shown that P is not equal to PSPACE, thus solving this major open question. And this is why we have to make do with this sort of evidence that BQP, quantum polynomial time, really is larger, that quantum computers violate the extended Church-Turing thesis. And what's #P? Well, #P is the so-called counting class; it's a complexity class having to do with counting. Let's say you have a circuit computing some function f, which maps x and y to f(x, y). Now we can define a counting problem associated with this: count(x) is the sum over all y of f(x, y). What this is saying is: given x and y, you could of course compute f(x, y) quickly, because f is implemented by a polynomial-size circuit. But now what you want to do is sum up f(x, y) over exponentially many different y's, and this is now a much harder problem. It turns out that if you could solve this counting problem, then you could simulate any quantum computation that you want. And the way you reduce a quantum computation to this sort of counting problem is reminiscent of Feynman's path integrals. So, in fact, in some sense, what Feynman came up with was a way of showing that BQP is contained in P^#P.
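To make the counting class concrete, here is a toy sketch. The 3-bit predicate below is purely hypothetical, chosen for illustration and not taken from the lecture; the point is only that f(x, y) is cheap to evaluate on any one input, while count(x) sums it over exponentially many y's:

```python
from itertools import product

def f(x, y):
    """Stand-in for a polynomial-size circuit: a toy Boolean
    predicate of x and a 3-bit tuple y, fast on any single input."""
    return int((x[0] ^ y[0]) and (y[1] or not y[2]))

def count(x, y_bits=3):
    """The #P-style counting problem: count(x) = sum over all y of
    f(x, y). Brute force needs 2^y_bits evaluations of f."""
    return sum(f(x, y) for y in product((0, 1), repeat=y_bits))

print(count((1,)))  # -> 3 of the 8 possible y's make f equal 1
```

With y ranging over, say, 100 bits instead of 3, the sum has 2^100 terms, which is why #P is believed to be so much harder than P even though each term is easy.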
It's actually not a very difficult proof; it's a simple proof. But I really don't want to get into it now. We don't really have the time, and we couldn't really do it justice in this setting. Okay, so now we could try to understand our complexity classes a little bit better. So here we have P, polynomial time. And then remember there was this class NP. It contained many of the problems that we'd like to solve, like satisfiability. Now, if you take a problem which is in NP, like satisfiability, you could also look at the complement of it, and you get this class co-NP, which contains problems like unsatisfiability: is this formula not satisfiable? And then it turns out that there's an infinite hierarchy of such classes: NP, co-NP, what's called Σ₂P, Π₂P, and so on. This entire collection of classes forms what's called the polynomial hierarchy. And this entire structure sits inside P^#P, which finally sits inside PSPACE. So now you could ask: well, we've already said that we believe you cannot solve the NP-complete problems, the problems right here, in polynomial time on a quantum computer. Right? So what does this class BQP look like in this picture? Is it contained inside of NP? Well, we already have a hint. The hint that we have is that for this problem, recursive Fourier sampling, we can show that in the black-box model it does not belong to MA, Merlin-Arthur, which is the probabilistic generalization of NP.
So it lies outside of NP; and not only does it lie outside of NP, it even lies outside the probabilistic generalization of NP. We can show this, like all of these results, unconditionally; this is in the black-box model, also called the oracle model or setting. Okay. So now, what should we think about BQP? The idea is that BQP somehow doesn't contain the NP-complete problems. Instead, it's some sort of complexity class which heads out sideways like this, and it contains recursive Fourier sampling. And how far it heads out into this whole picture, we don't yet understand. So we could ask: how far outside of NP are these problems that BQP might contain? And could it be that in fact it even extends outside of the polynomial hierarchy and contains some problems out here? The conjecture, which is very long-standing, going back almost twenty years, says that in fact the recursive Fourier sampling problem should be one such example, a problem which is not in the polynomial hierarchy. This has proved remarkably difficult to resolve. A few years ago, Scott Aaronson came up with a simplified problem which he conjectured is also not in the polynomial hierarchy, but which can be solved in polynomial time on a quantum computer. This is called the Fourier checking problem. Here's a pointer to a paper that discusses this problem. It's actually a very simple problem; it's very simply stated and very elegant. So, let's say v and w are random unit vectors in real N-dimensional space, where capital N is 2 to the little n. So these are exponential-dimensional real vectors. And given such a real vector, you can think of it as defining a Boolean function.
So, in other words, you have this exponential-dimensional vector where each entry is a real number; let's say the k-th entry is v_k. Now you can think of this as defining a Boolean function, which maps k to the sign of v_k. Right? So it maps an n-bit number to +1 or −1. Okay, so now, since you have two such vectors, they define two different functions, f and g, which are Boolean functions on little n bits. So what we want to do is distinguish two cases. Either we are given f = sign(v) and g = sign(w), the sign functions defined by these two independent random vectors; or we are given f = sign(v) and g = sign(Hv), where H is the Hadamard transform. So instead of v and w, you now use the same vector v, once as it is, and once having applied the Hadamard transform to it. And we want to be able to distinguish these two situations from each other. And the point is that there's a very simple quantum algorithm that will distinguish the first situation from the second one. But the conjecture is that you cannot make this distinction anywhere in the polynomial hierarchy. Okay. So, I should mention that this is one of the central open questions in quantum complexity theory: understanding the power of BQP, quantum polynomial time. And this is completely wide open as we speak.
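To make the two cases concrete, here is a small classical sketch of how such instances are generated. The correlation statistic at the end is just one illustrative way to see that the two cases genuinely differ; it is not the quantum algorithm from the lecture, and the names are my own:

```python
import math
import random

random.seed(0)  # reproducible instances

def hadamard(v):
    """Normalized fast Walsh-Hadamard transform of a length-2^n
    list; this H is unitary and its own inverse."""
    v = list(v)
    N = len(v)
    h = 1
    while h < N:
        for i in range(0, N, 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    s = 1 / math.sqrt(N)
    return [x * s for x in v]

def random_unit_vector(N):
    v = [random.gauss(0, 1) for _ in range(N)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def sign(x):
    return 1 if x >= 0 else -1

N = 2 ** 10                                     # capital N = 2^n, n = 10
v = random_unit_vector(N)
w = random_unit_vector(N)

f = [sign(x) for x in v]
g_independent = [sign(x) for x in w]            # case 1: g = sign(w)
g_forrelated = [sign(x) for x in hadamard(v)]   # case 2: g = sign(Hv)

def correlation(f, g):
    """(1/N) <f, Hg>: concentrated near 0 when v and w are
    independent, but bounded away from 0 (around 2/pi) when
    g = sign(Hv)."""
    Hg = hadamard([float(x) for x in g])
    return sum(a * b for a, b in zip(f, Hg)) / len(f)

print(correlation(f, g_independent), correlation(f, g_forrelated))
```

The catch, of course, is that evaluating this statistic classically touches all N = 2^n entries, while the quantum algorithm gets at the same correlation with only a few queries to f and g.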