Okay. So in this second video, we're going to talk about Hermitian matrices, and eigenvectors and eigenvalues. So what's a Hermitian matrix? A matrix A is Hermitian if and only if A is equal to A conjugate transpose. So in particular, it has to be a square matrix. And the entries on the diagonal have to be real, because each diagonal entry must equal its own conjugate. The matrix has to be equal to what you get when you conjugate every entry and then transpose. So for example, in a two-by-two Hermitian matrix, if the top-right entry is one plus i, then the bottom-left entry must be its conjugate, one minus i. And if A is Hermitian and real, then it's just a symmetric matrix. Okay, so now we say that a vector phi is an eigenvector of A with eigenvalue lambda if A times phi is lambda times phi. So what A does to phi is just shrink or stretch it. In general, of course, lambda can be a complex number. Okay. Now, there's a beautiful theorem about Hermitian matrices called the spectral theorem. What the spectral theorem says is: if A is Hermitian, then A has an orthonormal set of eigenvectors with real eigenvalues. Let's call these eigenvectors phi sub zero through phi sub k minus one, and the corresponding eigenvalues lambda naught through lambda k minus one. So what is this telling us? Well, let's pretend for a moment that A is a real matrix, so it's just a symmetric matrix with all real entries. What is the spectral theorem telling us? Look at the unit ball in this k-dimensional space — I'm thinking of k equals two right now, so the unit circle in the plane. What happens to it under the action of A?
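The lecture doesn't use any code, but as a quick sanity check, here is a minimal numpy sketch of the definitions so far: a two-by-two Hermitian matrix (real diagonal, conjugate off-diagonal entries), checked against its conjugate transpose, and its spectral decomposition computed with `numpy.linalg.eigh`, which is numpy's eigensolver for Hermitian matrices. The particular matrix is my own example, not one from the lecture.

```python
import numpy as np

# A 2x2 Hermitian matrix: real diagonal, off-diagonal entries are conjugates.
A = np.array([[2.0, 1 + 1j],
              [1 - 1j, 3.0]])

# Hermitian means A equals its conjugate transpose.
assert np.allclose(A, A.conj().T)

# eigh returns real eigenvalues (ascending) and an orthonormal set of
# eigenvectors, the columns of V -- exactly what the spectral theorem promises.
eigenvalues, V = np.linalg.eigh(A)
print(eigenvalues)  # real numbers; for this A they are 1 and 4

# The eigenvectors are orthonormal: V dagger times V is the identity.
assert np.allclose(V.conj().T @ V, np.eye(2))

# And each column really is an eigenvector: A phi_j = lambda_j phi_j.
for j in range(2):
    assert np.allclose(A @ V[:, j], eigenvalues[j] * V[:, j])
```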
Well, to figure this out, you first have to locate the eigenvectors of A. Perhaps one of the eigenvectors is this one and the other is that one — and they are orthogonal to each other. Now, what you're guaranteed is that A just scales each of these: it might shrink this vector and expand that one. And of course, once we know what A does to each eigenvector, we know what it does to any linear combination. So what A does is take this circle — or sphere in k dimensions — and, excuse my drawing here, distort it into an ellipse. So A maps the unit sphere (in this case a circle) into an ellipse, and in general the unit sphere into an ellipsoid, with k principal axes whose lengths are given by the eigenvalues. And you get the same kind of object in three-space, four-space, or whatever dimension you have. So let's look at an example of this. We already looked at the operator X, the bit-flip operator, given by this matrix, which is of course Hermitian. So we can ask: what are its eigenvalues and eigenvectors? This you can solve just by inspection — it's easy to see that the eigenvectors of X are plus and minus. Okay? So what does X do to plus? Let's write it out in the usual vector notation. What's plus? It's one over square root two, one over square root two. And what's the product X times plus? It's exactly one over square root two, one over square root two. Right? So what's the corresponding eigenvalue, lambda plus? It's exactly one. Okay?
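The inspection argument above can be checked in a few lines of numpy (again, my own illustration, not part of the lecture): write down X and the plus and minus states, and verify that X scales each of them.

```python
import numpy as np

# The bit-flip operator X and the plus/minus states from the lecture.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# X leaves plus fixed (eigenvalue +1) and negates minus (eigenvalue -1).
assert np.allclose(X @ plus, 1.0 * plus)
assert np.allclose(X @ minus, -1.0 * minus)

# The two eigenvectors are orthogonal, as the spectral theorem guarantees.
assert np.isclose(plus @ minus, 0.0)
```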
So X maps plus to plus. What does X do to minus? Well, minus is one over square root two, minus one over square root two, and this time X flips the entries, which is minus what we started from. So lambda minus is minus one. Okay, so X has two eigenvectors, plus and minus, with eigenvalues one and minus one. But you could also ask: how would you figure out these eigenvalues and eigenvectors if you couldn't guess them? Well, just to remind you how one does that, let's say you wanted to work this out instead for the Hadamard transform H, which is one over square root two, one over square root two, one over square root two, minus one over square root two. Actually, here again it's a little easier to guess what the eigenvectors and eigenvalues are, because remember what the Hadamard transform does. There's the zero state, there's plus, there's minus, and there's one, and the Hadamard transform is a rotation by pi about this axis here — the pi-over-eight axis. Right? So what are the eigenvectors going to be? Well, clearly one of them should be this pi-over-eight vector, and the other should be orthogonal to it. This one has eigenvalue one, and that one has eigenvalue minus one, because it flips when you apply H to it. Okay, but let's go back and think about how you would actually work this out. What you would do is say: I want to find some vector phi such that H times phi is lambda times phi, which means that H minus lambda times the identity, applied to phi, equals zero. But we want a nonzero solution, so H minus lambda I must be a singular matrix, which means it must have determinant equal to zero. So let's write out this condition. What's H minus lambda I?
Well, H is one over square root two, one over square root two, one over square root two, minus one over square root two, and we subtract lambda from the diagonal. Now we want the determinant of this to be zero. What's the determinant? It's this times this minus this times this: one over square root two minus lambda, times minus one over square root two minus lambda, minus one over square root two times one over square root two. That's lambda squared minus a half, minus a half, equal to zero, which means lambda squared equals one. So lambda equals plus or minus one, as we wanted. And then we just plug in each of these values and figure out what phi must be. Okay. So finally, let's do one last thing. We said that A has an orthonormal set of eigenvectors phi naught through phi k minus one, with real eigenvalues lambda naught through lambda k minus one. Another way you can say this is: if you change your basis to phi naught through phi k minus one, then the action of A is just a diagonal matrix. So if you were to write A in the phi naught through phi k minus one basis, A would look like this: lambda naught, lambda one, through lambda k minus one down the diagonal. It would be a diagonal matrix, because all it does is take phi naught and multiply it by lambda naught, and so on. Let's call this diagonal matrix capital Lambda, okay? So now we can write A as follows. If we first do a change of basis — if we first perform some unitary transformation, some rotation that relates our standard basis to this basis of eigenvectors — then the action of A is just given by this diagonal matrix Lambda.
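The determinant calculation above can be double-checked numerically — a small sketch of my own, under the lecture's setup: the characteristic polynomial of H is lambda squared minus one, whose roots match what the eigensolver reports, and the pi-over-eight vector really is the plus-one eigenvector.

```python
import numpy as np

s = 1 / np.sqrt(2)
H = np.array([[ s,  s],
              [ s, -s]])

# det(H - lambda*I) = lambda^2 - 1, i.e. coefficients [1, 0, -1].
roots = np.roots([1.0, 0.0, -1.0])
assert np.allclose(sorted(roots), [-1.0, 1.0])

# The eigensolver agrees (eigvalsh returns eigenvalues in ascending order).
assert np.allclose(np.linalg.eigvalsh(H), [-1.0, 1.0])

# And the pi/8 vector from the lecture is the eigenvector with eigenvalue +1.
phi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])
assert np.allclose(H @ phi, phi)
```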
And then if you want to switch back to the standard basis, you apply the inverse of U, which is U dagger. So what we are saying is: A can always be written as U dagger Lambda U, where U is the matrix that changes the standard basis to the phi basis, the basis of eigenvectors. Okay. So what does U look like? Well, since U sends each phi j to the j-th standard basis vector, the rows of U are the phi j's conjugate-transposed, and equivalently the columns of U dagger are the eigenvectors: the first column of U dagger is phi naught, the second column is phi one, and so on, up to phi k minus one. And now if you look at this, what does it tell you? Well, A is just the following: lambda naught times the projection onto phi naught, plus lambda one times the projection onto phi one, and so on, up to lambda k minus one times the projection onto phi k minus one. You can read this off directly: when you multiply out U dagger Lambda U, since the phi's are orthonormal, all you get is the j-th column of U dagger times the j-th row of U, multiplied by lambda j, summed over j. Or you could just check that when you multiply this expression by any phi j, you get lambda j times phi j. Okay. So finally, what this is telling us is that A can be written as the sum, over j from zero to k minus one, of lambda sub j times the projection onto phi sub j. Let's call that projection matrix P sub j, where P sub j is ket phi sub j bra phi sub j.
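Both forms of the decomposition — A equals U dagger Lambda U, and A equals the sum of lambda j times P j — can be verified in numpy. A small sketch of my own, using H as the test matrix; note that `eigh` puts the eigenvectors in the columns of its output, so in the lecture's convention U is that matrix's conjugate transpose.

```python
import numpy as np

s = 1 / np.sqrt(2)
H = np.array([[ s,  s],
              [ s, -s]])

# Diagonalize: the columns of V are the orthonormal eigenvectors phi_j.
lams, V = np.linalg.eigh(H)

# In the lecture's convention, U maps the standard basis to the phi basis,
# so U = V dagger, and A = U dagger Lambda U.
U = V.conj().T
assert np.allclose(H, U.conj().T @ np.diag(lams) @ U)

# Equivalently, A is the sum of lambda_j times the projector
# P_j = ket phi_j bra phi_j, built here with an outer product.
A_rebuilt = sum(lams[j] * np.outer(V[:, j], V[:, j].conj()) for j in range(2))
assert np.allclose(H, A_rebuilt)
```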