Hey folks, my name is Nathan Johnston, and welcome to Lecture 16 of Advanced Linear Algebra. Today we're going to put together all of the tools that we've been developing over the last four weeks to see some of the neat types of problems that we can solve with them. In particular, we're going to go through two examples. In the first one, we're going to talk about how we can construct the square root of a linear transformation, in particular of the transpose map.
That is, how can we find a linear transformation with the property that when you apply it twice, you get the transpose of a matrix? This seems very counterintuitive, because all the transpose map does is rearrange the entries of a matrix. How can you find a linear transformation that you have to apply twice just to swap the positions of entries in a matrix? But it turns out we can do it.
The other application we're going to go through is a square root sort of thing as well, except we're going to apply it to calculus. Remember, we've been representing the derivative map as a matrix, and we can take square roots, or other non-integer powers, of matrices using techniques from the previous course. So we're going to be able to do things like find the square root of the derivative. In other words, we're going to be able to find a linear transformation that acts like half of a derivative. Just like taking the derivative twice gives you the second derivative, we're sort of going the other way: we're taking fractions of the derivative, pieces of it, a linear transformation that you have to apply twice to get the original derivative linear transformation. That's kind of neat and counterintuitive as well. All right, so let's see how this works.
So we're going to start off with a brief reminder of things that we learned in the previous course: how to diagonalize a matrix, and how to take non-integer powers of a matrix once you have that diagonalization. Okay, so here's the setup.
Suppose you've got some square matrix A. Then we say that A can be diagonalized if you can write it as P times D times P inverse, where D has to be diagonal and P has to be invertible. Of course, P has to be invertible for P inverse to even exist, okay? And we have a theorem from the previous linear algebra course that tells us exactly when you can do this, and furthermore, how to do it in the situations where it can be done. And that theorem says that the only way you can ever diagonalize a matrix is by throwing the eigenvalues of your matrix A along the diagonal of D.
So that diagonal matrix in your diagonalization, it necessarily has the eigenvalues of A as its diagonal entries, in some order. The order doesn't matter. And similarly, this matrix P...
it must have the corresponding eigenvectors of A as its columns. And I said the order doesn't matter in D, but the order sort of matters in P. All that matters is that the order agrees, the order matches up. So if you put one particular eigenvalue as your top left diagonal entry in D, then the first column of P has to be a corresponding eigenvector of that same eigenvalue. So first diagonal entry, first column.
Second diagonal entry, second column, and so on down the line. Make sure your eigenvalues and eigenvectors match up. And the nice thing about diagonalizations is that you can take non-integer powers of matrices with them. Remember, you can define integer powers straightforwardly: A to the power k is just A times itself, k times.
But using diagonalization, you can extend this to non-integer powers. The way you do that is you define A to the power r to be P times D to the power r times P inverse, where P, D, and P inverse are the same pieces from the diagonalization. And D to the power r, well, because D is diagonal, the way you compute that is you just take the rth power of each of its diagonal entries. The idea here, the reason that this works and gives us useful things, is that matrix multiplication with diagonal matrices is entrywise multiplication. If you work through some examples and try diagonal matrix times diagonal matrix, you'll see that you just get the entrywise product of their diagonal entries.
So diagonal matrix multiplication works the same way as sort of naive, entrywise multiplication would. Everything works out if you just do powers naively on diagonal matrices, and this lets us ramp things up from diagonal matrices to diagonalizable matrices. Okay, so let's go through a couple of examples of the types of things that we can do with this now that we also know about linear transformations.
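By the way, if you want to play with this recipe numerically, here's a minimal NumPy sketch. It assumes the matrix you hand it really is diagonalizable (np.linalg.eig won't check that for you), and it casts the eigenvalues to complex so that non-integer powers of negative eigenvalues make sense.

```python
import numpy as np

def fractional_power(A, r):
    """A**r for a diagonalizable matrix A, via the diagonalization A = P D P^(-1)."""
    evals, P = np.linalg.eig(A)                # columns of P are eigenvectors of A
    D_r = np.diag(evals.astype(complex) ** r)  # entrywise r-th power of the diagonal
    return P @ D_r @ np.linalg.inv(P)

# Sanity check: the one-half power really is a square root.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
R = fractional_power(A, 0.5)
print(np.allclose(R @ R, A))  # True
```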
Okay, so the first example that we're going to go through is the transpose map. We're going to find a square root of the transpose map acting on the space of 2x2 matrices. In other words, we want to find a linear transformation S acting on the space of 2x2 matrices with the property that when you square it, in other words, when you apply it twice, you get the transpose map.
And the idea here is just use your usual power laws. Imagine you're working with real numbers or something like that. What exponent gives you a square root of a number?
Well, the exponent of one half. So what we're going to do is we're going to compute T to the power one half. And again, because diagonal matrix multiplication works out so nicely, all of your usual power laws are going to work, so this really will be a square root of the transpose map. Okay, so how do we do this?
Well remember we've got to diagonalize the transpose map. In other words, we want to diagonalize the standard matrix of the transpose map. Okay, well we've actually already seen some particular basis that the transpose map is diagonal in.
Okay, so remember, one of our examples from an earlier lecture was that if we work in the Pauli basis, then the standard matrix of the transpose map looks like this. It's already diagonal, which is perfect for taking powers of it, right? Because again, for diagonal matrices...
you just take powers entrywise. Okay, so to take the one-half power of this standard matrix, you take the one-half power, in other words, the square root, of each of those diagonal entries. Square root of one, that's this one here. Square root of one, also one. Square root of minus one, and we're going to have to go to complex numbers to help us out here: we get an i. Square root of one, also one.
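Written out, with the basis ordered as B = {I, σx, σy, σz} (my reconstruction of the slide's ordering, based on the entries read off above), those two matrices are:

```latex
[T]_B = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},
\qquad
[T]_B^{1/2} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
```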
Okay, so the second matrix here is a square root of the first one. What that means is that it's a matrix representation of exactly the linear transformation that we want: a matrix representation of S, which is T to the power one half. So from this point on, it's just unraveling things. If this is the matrix representation of the linear transformation that we want, what does that mean for the linear transformation itself?
Well, again, remember, we're working in this basis B here. Each of these columns corresponds to something going into the linear transformation, and the rows correspond to things coming out. So in particular, if we plug in the first basis vector, the identity matrix, we get this linear combination of the basis vectors.
So in other words, if I feed the identity matrix into my linear transformation, I get one times the first basis vector plus zero times each of the others. In other words, S of the identity just equals the identity.
OK, and then you do the same thing for all of the other basis vectors. If I plug in the second basis vector, what happens? Well, now I get this coordinate vector popping out.
In other words, I get zero times the first basis vector plus one times the second basis vector plus zero plus zero times the third and fourth basis vector. So in other words, if I plug in the second basis vector, I just get the second basis vector again. Okay, so that is a fixed point of that linear transformation. And similarly with the fourth one, if I plug in the fourth basis vector, then I'm looking at the fourth column, and now my linear combination is 0, 0, 0, 1. So again, I'm just getting the fourth basis vector out.
So that guy is also unchanged by this linear transformation. But something interesting happens when I plug in the third basis vector. If I plug in this guy, now I'm getting 0, 0, i, 0. So in other words, if I plug in the third basis vector, I get 0 of the first, 0 of the second, 0 of the fourth, and i of the third.
So in other words, S of the third basis vector equals i times the third basis vector, all right? And then you just do that multiplication, and you get the matrix 0, 1, minus 1, 0, all right? So S, the square root of the transpose map, is the linear transformation that does this.
This is how it acts on 2x2 matrices. Because B is a basis, once you know what S does to those four basis matrices, you've completely specified the linear transformation. Now I know what it does to everything, thanks to linearity and the fact that these basis vectors span the entire space of 2x2 matrices. Okay, but still, it's kind of a weird and uncomfortable way to describe this linear transformation S. You can unravel things, though: you can figure out how it acts on the standard basis matrices instead, and if you do that calculation, this is what pops out.
S of the matrix with entries a, b, c, d equals this over here. It leaves the diagonal alone, it leaves the a and the d alone, which you can sort of see from the fact that, hey, S of the identity is the identity, and S of this skewed identity, where you've got a minus one in the bottom-right corner, is also unchanged. So it leaves everything on the diagonal alone. The interesting stuff happens on the off-diagonal: what S does is mix b and c in a weird, complex-number sort of way. And if you're uncomfortable with this formula, it's probably a good idea to check that if you apply this linear transformation twice, what really happens is you get a c up in the top-right corner and a b down in the bottom-left corner. In other words, S really is a square root of the transpose map: you have to apply it twice to get the transpose. And it needs to go through the complex numbers to do that, even if the matrix that you start with is real. There is no real square root of the transpose map, because it's got this negative one eigenvalue, so you need complex numbers when you square root it. Okay. So that's the square root of the transpose map.
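If you want to run that check, here it is as a quick NumPy sketch; the entry formula inside S is my reconstruction of the slide's formula from the basis calculation above:

```python
import numpy as np

def S(M):
    """Square root of the transpose map on 2x2 matrices (reconstructed formula)."""
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return np.array([[a, (b + c) / 2 + 1j * (b - c) / 2],
                     [(b + c) / 2 - 1j * (b - c) / 2, d]])

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.allclose(S(S(M)), M.T))  # True: applying S twice transposes M
```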
And as another final example to wrap up this week, let's think now about doing the same sort of thing except with derivatives, okay? So remember we talked earlier about how... if you take powers of the derivative linear transformation, that corresponds to just doing multiple derivatives.
So taking powers gives you the transformation that does the second derivative, the third derivative, the fourth derivative, and so on. Well, now that we know how to take non-integer powers of linear transformations, you can do really neat things: you can take non-integer derivatives. So you can do things like take half of a derivative of a function, which is kind of neat.
It lets you do what's called fractional calculus. Okay, so let's go through an example to see how this works. Let's find a half derivative of sine and cos. In other words, let's find a formula for a linear transformation on sine and cos with the property that you have to apply it twice to get the derivative of these functions.
Okay, and once we've done that, we'll generalize and talk about how to find the rth derivative of sine and cos, no matter what r is. r can be pi, r can be 2.7, r can be the square root of 2. It doesn't matter; it does not have to be an integer or even a rational number. You can take the rth derivative for any real number r, and we'll see how to do that. Okay, so the basic setup to solve this type of problem is the same as always.
Turn things into matrices. Okay, so let's find a standard matrix for the differentiation linear transformation acting on sine and cos. So here's the setup.
Let B, the basis of our vector space, just be sine and cos. We don't need to throw in other functions this time, like we did in some of our previous examples, because when you take the derivative of sine, you get cos, and when you take the derivative of cos, you get minus sine. That's okay: that's just a linear combination of sine. You just go back and forth, and you never get new functions outside of their span when you take derivatives. Okay, so B is the basis of my vector space V, and my vector space is just going to be the span of B. And now D is going to be the differentiation map acting on that vector space. In other words, it's the linear transformation that just takes the derivative.
So D of sine equals cos, and D of cos equals minus sine. And that tells us what our coordinate vectors are. The coordinate vector of D of sine, that's 0, 1: zero sines and one cos. The coordinate vector of D of cos, that's minus 1, 0: minus one sine and zero cos's. All right.
And then you throw those into a matrix. I'm going through this quickly here because we've done these standard matrix calculations a bunch of times now. For the standard matrix of D, you just throw these coordinate vectors in as columns: 0, 1 is my first column, and minus 1, 0 is my second column.
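Written out, the coordinate vectors become the columns:

```latex
[D]_B = \begin{bmatrix} [D(\sin)]_B & [D(\cos)]_B \end{bmatrix}
      = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.
```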
All right, now to find the square root of this matrix, and therefore a square root of D, what we've got to do is diagonalize this matrix. And remember what I said earlier: the only way to diagonalize a matrix is to find its eigenvalues and eigenvectors. Those give us the diagonal piece and the matrix P, respectively.
Okay, so I'm skipping over the calculation here. This is something that you should try on your own to remind yourselves of how to compute eigenvalues and eigenvectors. This matrix here, it turns out, has eigenvalues plus and minus i.
Okay, and the corresponding eigenvectors are 1, minus-or-plus i. I've written it with a minus-or-plus to try to indicate that they correspond to the eigenvalues plus and minus i in sort of the opposite order than you might expect: plus i corresponds to the vector 1, minus i, and vice versa, minus i corresponds to the vector 1, plus i.
Okay, so those are my eigenvalues and eigenvectors. How do you diagonalize now that you've got them? Well, you just do what we said earlier.
You take these eigenvalues and throw them into a diagonal matrix along the diagonal, in whatever order you like. So I'm just going to do i and then minus i. You could do it the other way around if you like; it's not going to change the final answer.
Okay, and then for the corresponding matrix P, the one that has to be invertible, what you do is take the corresponding eigenvectors and throw them in as columns in the same order. So again, remember, i corresponds to 1, minus i. That's my first eigenvalue, so 1, minus i must be my first column. And then minus i, my second eigenvalue, corresponds to the second column, which is 1, i.
That's my second eigenvector. All right, so that's my diagonalization of the standard matrix of D. And I apologize, I had to switch notation a little bit here. I'm using S for the diagonal matrix, just because I've already used D to stand for the derivative, okay?
So I don't want to use D again for the diagonal matrix. So I'm writing P S P inverse here instead of P D P inverse like I usually do. All right.
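So the diagonalization we just built, written out, is:

```latex
[D]_B = P S P^{-1}, \qquad
S = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}, \quad
P = \begin{bmatrix} 1 & 1 \\ -i & i \end{bmatrix}.
```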
Once you've got your diagonalization, the way you compute square roots of that matrix is you just take the square root of the diagonal part in the middle and you leave the P and P inverse on the sides. Forget about them. Okay.
So the one half power of the standard matrix is just, well, P and P inverse still on the left and right hand side. You leave those alone and you do the one half power of S. Okay, so you're just plopping this S down over here, and you're doing one half power.
And again, because it's diagonal, the way you do that is you just raise each of the diagonal entries to that exponent of one half. Okay, and this is just a little calculation with complex numbers: i to the power one half is (1 + i) over root 2, and minus i to the power one half is (1 − i) over root 2. Okay, so that's your new diagonal matrix in the middle.
And then you just multiply all this junk back together and get your final answer, get your square root matrix. Okay, so just plop in, hey, what is p? Throw it in there. And then what is p inverse? Well, that's a little calculation.
You do your Gaussian elimination thing to find p inverse. and you plop it in there, and you do your matrix multiplication, and you get that matrix there. Okay, and this is really nice, actually. I really like this example, because even though there are complex numbers floating around in every one of these matrices, after you multiply them together, you end up with a real answer. Okay, so I know like some of you are probably still a little uneasy with complex numbers.
I know some of you haven't taken complex analysis, but this really highlights the fact that we're not cheating with complex numbers. They're helpers that can often help us get real answers at the end of the day. Okay, so, all right, great.
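Here's that whole product as a quick NumPy sketch, if you want to watch the imaginary parts cancel for yourself:

```python
import numpy as np

P = np.array([[1, 1],
              [-1j, 1j]])                        # eigenvector columns for i, -i
S_half = np.diag([1 + 1j, 1 - 1j]) / np.sqrt(2)  # square roots of i and -i
D_half = P @ S_half @ np.linalg.inv(P)

print(np.round(D_half.real, 8))   # [[ 0.70710678 -0.70710678]
                                  #  [ 0.70710678  0.70710678]]
print(np.allclose(D_half.imag, 0))                       # True: the answer is real
print(np.allclose(D_half @ D_half, [[0, -1], [1, 0]]))   # True: it squares to [D]_B
```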
So this is the matrix, the standard matrix of the linear transformation that I want, of the square root of D. And now I just unravel things like I did in my transpose example. Okay, remember, columns correspond to inputs.
Okay, so I'm working in the basis B, which consists of sine and cos, so my first column tells me what happens when I plug sine into my linear transformation: I get one sine and one cos, each divided by root two. All right, so this tells me that the half derivative of sine, in other words D to the power one half of sine, equals one over root two sine plus one over root two cos.
So that's where this formula is coming from, okay? And now if I plug in the second basis vector, D to the power one half of cos, that's the second column of this matrix. This time I get minus one over root two times sine, plus one over root two times cos. All right, so that's how the half-derivative linear transformation acts on sine and cos. And again, if you want to convince yourself that this really is the square root of D, that it really makes sense to call these the half derivatives of sine and cos, just apply the formula twice, right? If you apply this formula to sine twice, you're going to end up with cos at the end of the day, which is the full derivative of sine.
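Here's that double-application check in SymPy, representing a sin(x) + b cos(x) by its coordinate pair (a, b) in the basis B (the helper name half_D is just my shorthand):

```python
import sympy as sp

def half_D(a, b):
    """Half-derivative of a*sin(x) + b*cos(x), read off from the matrix above."""
    return ((a - b) / sp.sqrt(2), (a + b) / sp.sqrt(2))

a, b = half_D(1, 0)                    # half-derivative of sin: (sin + cos)/sqrt(2)
a, b = half_D(a, b)                    # apply it a second time
print(sp.simplify(a), sp.simplify(b))  # 0 1 -- that's 0*sin + 1*cos: the derivative of sin
```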
If you apply the formula to cos twice, you're going to end up with negative sine, which, yeah, is the full derivative of cos. All right, well, what if we want to generalize this to arbitrary powers, so the rth power rather than just the one-half power? We could do this exact same calculation: just replace these one-halves with r's, figure out what the rth powers of i and minus i are, multiply everything out, and we'd get some matrix and some formula using the exact same method. I'm going to do it a different way, though, because there's a nice geometric way to think about this problem. Notice that the standard matrix of the derivative map that we computed, 0, minus 1, 1, 0, is actually a rotation matrix.
We learned about these types of matrices back in linear algebra 1, in introductory linear algebra. What this matrix does is rotate two-dimensional space counterclockwise by 90 degrees, by pi over 2. Okay, so here's a little picture. Again, the way to convince yourself of this: remember, a matrix is determined by what it does to the standard basis vectors, okay?
So it's determined by what it does to E1 and E2. Well, if it sends E1 to 0, 1, okay, then now it's up here. This matrix sends E1 to E2.
It sends it to 0, 1. Now similarly, what's it do to E2? In other words, what's the second column of this matrix? Well, now it's minus 1, 0, all right?
So it sends E2 over here. And then just look at this picture. It rotated each of these vectors counterclockwise by 90 degrees. It rotated E2 over this way.
So yeah, this is the standard matrix of the rotation counterclockwise by 90 degrees. Well, think about what roots of that should be. Well, the square root, for example, should just be a rotation counterclockwise by 45 degrees. By pi over 4, you just rotate half as much. And if you look back, that's exactly what we got up here.
This matrix that we got after we did all of our calculations, this D to the power one half, is exactly the matrix of rotation counterclockwise by pi over four, by 45 degrees. More generally, if you want the rth power of this matrix, all you have to do is rotate by an angle of pi r over two counterclockwise. So in other words, the rth power of D is just this matrix here.
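Written out, it's the standard counterclockwise rotation matrix with angle pi r over 2:

```latex
[D^r]_B = \begin{bmatrix} \cos(\pi r/2) & -\sin(\pi r/2) \\ \sin(\pi r/2) & \cos(\pi r/2) \end{bmatrix}.
```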
This is the standard matrix of D to the power r in this sine-and-cos basis. All right, so that's really nice, because that gives us our final answer, our final form of the rth power of the matrix, right away, just by thinking about it geometrically. And now we can unravel that into the linear transformation in the exact same way that we did up above with the square root. Okay, so again, remember: the first column tells us what happens when we plug in sine, and the second column tells us what happens when we plug in cos.
So what happens when we plug in sine? Well, we get cos of pi r over 2 times the first basis vector, in other words, cos of pi r over 2 times sine of x, and then we get sine of pi r over 2 times the second basis vector, so sine of pi r over 2 times cos, okay? And if you use a trig identity, this is just an angle-sum identity here, you can rewrite this in a really nice way: it's just equal to sine of x plus pi r over 2.
All right, if we do the same thing for cos, it's the same calculation, except now you're working with the second column. You get minus sine of pi r over 2 times sine x, plus cos of pi r over 2 times cos x, and again, using the trig identity, you find that that equals cos of x plus pi r over 2, okay? So that gives us a formula for the rth derivative, no matter what r is; it can be an arbitrary real number. And actually, this makes it a little bit clearer what's going on even with integer derivatives of sine and cos. Remember, derivatives of sine and cos are kind of weird: sine goes to cos, cos goes to minus sine, and then they start looping around after four derivatives.
And this makes it a little bit clearer what's really going on here. Every time you take a derivative, all that's happening is you're phase-shifting the function that you're working with, right? The derivative of sine is just sine of x plus pi over 2. The derivative of cos is just cos of x plus pi over 2. Every time you take a derivative, you're just phase-shifting it by 90 degrees, by pi over 2. Okay, so if you want to take the rth derivative, well, you just adjust how much you phase-shift it by.
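As one last numeric sanity check, here's the phase-shift formula in NumPy (the function name is my own), compared against the half-derivative formula we computed earlier:

```python
import numpy as np

def frac_deriv_sin(x, r):
    """The r-th derivative of sin at x, via the phase-shift formula sin(x + pi*r/2)."""
    return np.sin(x + np.pi * r / 2)

x = np.linspace(0, 2 * np.pi, 100)
print(np.allclose(frac_deriv_sin(x, 1), np.cos(x)))       # True: one full derivative
print(np.allclose(frac_deriv_sin(x, 0.5),
                  (np.sin(x) + np.cos(x)) / np.sqrt(2)))  # True: matches the half derivative
```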
Alright, so I don't know, I think that's a neat example. That's something nice that you can do now that we can represent all these things via matrices and just do linear algebra on them.