Hey folks, my name is Nathan Johnston and welcome to lecture 13 for Advanced Linear Algebra. So the theme for this week is we're going to be looking at properties of linear transformations and we're going to see that all of these properties, well you can learn about them just by investigating their standard matrices. Because we know lots of nice things about matrices from our previous linear algebra course, we're going to be able to directly infer lots of nice things about linear transformations, at least in the finite dimensional case.
All right, so that's where we're going. And for today's lecture, we're going to start off with invertibility of linear transformations. Okay, so here's the setup.
Suppose that you've got some linear transformation T from a vector space V to a vector space W. We say that T is invertible if there's some other linear transformation, which we call T inverse, that undoes what T does. So this linear transformation takes the output space of T and sends it back to the input space of T, okay?
So these vector spaces go the other way for T inverse. And what it does is it undoes what T does, okay? So in other words, T inverse of T of v is just equal to v for all v in the original vector space.
And similarly, we want them to undo each other in the other order as well. So T of T inverse of w equals w for all w in the output vector space, okay? Or equivalently, right?
We learned about composition last week. If you do T inverse composed with T... Well, that's going to give you the identity linear transformation, right? That's exactly what this first equation says here.
What is the linear transformation that sends v to v? Well, it's the identity linear transformation acting on the vector space V, okay? So I with a subscript of V here, I'm just clarifying that I'm talking about the identity on this vector space V. And you have to be a little bit careful: when you compose them in the other order, T composed with T inverse, well, again you get the identity transformation, but this time it's acting on a different vector space, right?
Think about what vectors go in and out of these linear transformations. T inverse takes in a vector in W and sends it to V, and then T sends it back to W. So we get that T composed with T inverse is the identity on W, because that's what goes into and comes out of this linear transformation. Okay, so inverses are linear transformations that undo each other, just like matrix inverses are matrices that undo each other under matrix multiplication.
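Just to pin that down in symbols, here's the definition written out, using I with subscripts V and W for the identity transformations on those two spaces:

```latex
T^{-1}(T(\mathbf{v})) = \mathbf{v} \ \text{for all } \mathbf{v} \in V,
\qquad
T(T^{-1}(\mathbf{w})) = \mathbf{w} \ \text{for all } \mathbf{w} \in W;
\qquad\text{equivalently,}\quad
T^{-1} \circ T = I_V
\ \text{ and }\
T \circ T^{-1} = I_W .
```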
Okay, and well, our first theorem tells us how you can determine whether or not a linear transformation is invertible, at least in the finite dimensional case. And furthermore, how do you find the inverse if it is invertible? So that's what this theorem does for us.
And what it says, well, in a nutshell, is that your linear transformation is invertible if and only if its standard matrix is invertible. And maybe we have to be a little bit careful here. Remember, every linear transformation has lots of standard matrices because you can pick different bases on the input and output spaces.
We have flexibility in how we choose B and D here. But no matter which bases you choose on the input and output spaces, well, your linear transformation is invertible if and only if the standard matrix that you computed is invertible. So here's how it works.
Linear transformation acting on vector spaces. You have bases of those vector spaces. Then T is invertible if and only if the standard matrix with respect to whatever bases you chose is invertible. And furthermore, what is the inverse of that standard matrix?
Well, here, it's really nice. The inverse of the standard matrix is the standard matrix of the inverse linear transformation. So in a sense, sort of notationally, all that you're doing is you're pulling the inverse here inside of the standard matrix brackets. Standard matrix, then inverse is the same as inverse, then standard matrix. The only thing you have to be a little bit careful of is here we have a standard matrix of T, which sends V to W.
In other words, it sends the basis B to the basis D. Whereas over on the right-hand side, you've got the standard matrix of T inverse. And T inverse, remember, goes from W to V. Okay, so the order of these bases has to swap.
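By the way, written out in symbols the theorem says the following. Here I'm writing the standard matrix of T with respect to the input basis B and output basis D as [T]_{D←B}; that subscript convention is just my shorthand and might differ slightly from the notation on the slides:

```latex
T \text{ is invertible}
\iff
[T]_{D \leftarrow B} \text{ is invertible,}
\qquad\text{and in that case}\qquad
\bigl([T]_{D \leftarrow B}\bigr)^{-1} = \bigl[T^{-1}\bigr]_{B \leftarrow D} .
```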
It sends the basis D to the basis B, because it sends W to V instead of V to W. Okay, so how do you prove this theorem? Where does this come from?
Well, it's just going to come almost immediately from all of those theorems that we had last week. Okay, so the setup is... Well, okay, so first off, it's an if-and-only-if theorem, so we have to prove it in two directions, right? We have to show that if the linear transformation is invertible, then so is the standard matrix, and we have to show that if the standard matrix is invertible, then so is the linear transformation. All right, so we're going to start off with the only-if direction.
In other words, we're going to start off by assuming that the linear transformation is invertible and show that the standard matrix is invertible. All right, so here: assume that T inverse exists.
Then the way to show that a matrix is invertible is to multiply it by the thing you think might be its inverse and see if you get the identity matrix. Okay, so we're going to start off with the standard matrix, and we're going to multiply it by something and hope we get the identity matrix.
If we do, then we know this matrix must be invertible. So I'm going to multiply it by the thing I think might be the inverse. And the reason that I think it might be the inverse is because that's what the theorem is trying to get me to prove, right?
So I'm thinking the inverse of the standard matrix is this matrix, so I'm just going to plop it in here. If I multiply them together and I get the identity, then I'm happy. Okay, so I've got two standard matrices multiplied together. Well, fortunately... I had a theorem from last week that told me how to handle two standard matrices multiplied together.
The product of two standard matrices is exactly the standard matrix of the composition of those two linear transformations, right? That was one of the major results that we had last week. All right, so T inverse composed with T, then standard matrix, equals this product. But T inverse composed with T, that's exactly the identity transformation, right? That's what it means for these two linear transformations to be inverses of each other.
I've just taken this equation up here, T inverse composed with T is the identity, and plopped it down here. All right, so this product of two standard matrices equals the standard matrix of the identity, and the standard matrix of the identity just equals the identity matrix. We didn't prove that as a theorem or anything, but it's straightforward to show, and hopefully it seems intuitive enough, right? Like, that's why the identity matrix is what it is.
It's the matrix that does nothing when you multiply it by a vector, so it should correspond to the linear transformation that does nothing to vectors. Okay, well, yeah, and now we're basically done. We've shown that the product of these two matrices is the identity, and we had a theorem from introductory linear algebra that says if that happens, they're inverses of each other.
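In one line, the chain of equalities we just walked through is (same bracket notation as before):

```latex
[T^{-1}]_{B \leftarrow D}\,[T]_{D \leftarrow B}
\;=\; [T^{-1} \circ T]_{B \leftarrow B}
\;=\; [I_V]_{B \leftarrow B}
\;=\; I .
```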
Remember, the definition of matrix inverses is two-sided. You need A inverse times A equals the identity and A times A inverse equals the identity. But we had a theorem that said, well, as long as you check it in one order, then you're fine. They're inverses of each other. You only have to check one side or the other.
And so we've done that. So we're happy. Yeah, the inverse of this matrix must be this matrix, which is exactly what we wanted to show. Okay, and to go in the other direction, right, to prove the if direction of this theorem, you have to show that, okay, if the standard matrix is invertible, then so is the linear transformation. Well, you can try that on your own.
Basically, you just run this logic backwards, okay, and nothing too weird happens. So I'll just leave that for you to try on your own rather than going through it in the video. Okay, now as a really neat example of something that we can do with standard matrices and inverses, let's look at a calculus problem that we can solve with these techniques. The problem that we're going to look at is computing the indefinite integral of x squared e to the power 3x. And before we get running with this, let's think about why this is the type of problem that we can now solve.
We've already looked at how we can take derivatives using standard matrices in linear algebra. You construct the standard matrix of the derivative linear transformation, and then powers of that standard matrix just correspond to how many times you take the derivative. Well, here we're just going to take the minus-first power, right?
We're going to take the inverse of that standard matrix, and that previous theorem tells us the inverse of the standard matrix is going to correspond to the standard matrix of the inverse linear transformation, and the inverse of the derivative linear transformation is going to be integration, right? All right, so let's see how this works. What we're going to do, it's going to be the same sort of setup that we've seen with these other derivative type problems, right? We're going to construct a basis and a vector space and the derivative linear transformation acting on that vector space.
And here's the basis of the vector space that we want. It's going to be e to the 3x, x e to the 3x, and x squared e to the 3x. And the reason for that is we're going to want to construct the standard matrix of the derivative on this vector space, right?
And when we take the derivative of x squared e to the 3x, well... we're going to use the product rule, and we're going to end up with terms that are scalar multiples of x e to the 3x. Okay, and then if we were to take the derivative of x e to the 3x, well, now we would use the product rule again, and we would get terms that include scalar multiples of e to the 3x. But then after that we're done.
Then when we take derivatives of any of these three functions here, we get just linear combinations of those three functions. Okay, and that's kind of what we want, right? We want our derivative just to act on whatever vector space we're working on and not produce anything outside of it. So once we've thrown all three of these functions in here, we've made it big enough that it captures everything we need. Alright, so that's going to be our basis.
Then V, our vector space, is just the span of that basis. And yes, this basis B here is linearly independent, so it really is a basis. That's just a calculation.
So you can check that if you like. And then we're going to let D be the differentiation map on that vector space. In other words, it's the function that takes the derivative. Alright, now how do we construct the standard matrix?
We did this last week, right? All you do is, well, you take the derivative of each of the vectors in the basis, right? So you start off with e to the 3x and you take its derivative. For the derivative of x e to the 3x, you've got to use the product rule.
And similarly, for the derivative of x squared e to the 3x, you've got to use the product rule again. Okay, next, you construct the coordinate vectors of each of these derivatives. So for this last one, for example, you're just asking, hey, how many of each of these basis vectors are there in 2x e to the 3x plus 3x squared e to the 3x?
Well, there are 0 e to the 3x's, right? There are no terms that are just a scalar multiple of e to the 3x. And then there are 2 x e to the 3x's, and there are 3 x squared e to the 3x's.
So my coordinate vector is 0, 2, 3. And then after you've constructed all of those coordinate vectors, you just stick them into a matrix as columns. So 3, 0, 0 becomes my first column. 1, 3, 0 becomes my second column.
And 0, 2, 3 becomes my third column. And now that's your standard matrix of the derivative. All right, now my previous theorem says, great, you've got the standard matrix of D, the derivative. Well, if you want the standard matrix of D inverse, the integral, all you do is you invert the standard matrix. So I just take the inverse of this matrix here, and I get this matrix over here.
So that was the calculation I skipped, right? To compute the inverse of a matrix, you have to use that method that we saw in the last class, right? You take your matrix, augment with an identity matrix, then row reduce, and then this guy over here will be sitting on the right-hand side.
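If you'd like to double-check both of those calculations, here's a small SymPy sketch, just an illustration I'm adding rather than anything from the course, that builds the standard matrix exactly the way we described (differentiate each basis function and read off its coordinates) and then inverts it:

```python
import sympy as sp

x = sp.symbols('x')
# The basis B = {e^(3x), x e^(3x), x^2 e^(3x)} from the lecture.
basis = [sp.exp(3*x), x*sp.exp(3*x), x**2*sp.exp(3*x)]

# Each column of the standard matrix of D is the coordinate vector of the
# derivative of one basis function.  Dividing that derivative by e^(3x)
# leaves a polynomial in x, so the coordinates are just its coefficients
# of 1, x, and x^2.
columns = []
for f in basis:
    poly = sp.expand(sp.diff(f, x) / sp.exp(3*x))
    columns.append([poly.coeff(x, k) for k in range(3)])

A = sp.Matrix(columns).T    # coordinate vectors go in as columns
print(A)                    # Matrix([[3, 1, 0], [0, 3, 2], [0, 0, 3]])

A_inv = A.inv()             # standard matrix of the integral (D inverse)
print(27 * A_inv)           # Matrix([[9, -3, 2], [0, 9, -6], [0, 0, 9]])
```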
So that's our inverse. Okay, so this is the standard matrix of the integral, right? Now we have everything that we need to actually do the computation, right?
The coordinate vector of the integral is just the coordinate vector of D inverse of x squared e to the 3x, right? Because D inverse is the integral, right? And to compute this, you just take the standard matrix of D inverse and you multiply it by the coordinate vector of this guy here.
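In symbols, the computation we're about to do is just the following, writing [f]_B for the coordinate vector of a function f with respect to the basis B, and [D^{-1}]_B for the standard matrix we just inverted (input and output basis are both B here):

```latex
\Bigl[\, \textstyle\int x^2 e^{3x}\,dx \,\Bigr]_B
\;=\; \bigl[\, D^{-1}\!\bigl( x^2 e^{3x} \bigr) \,\bigr]_B
\;=\; [D^{-1}]_B \, \bigl[\, x^2 e^{3x} \,\bigr]_B .
```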
So here's the standard matrix of D inverse that we just computed. And the coordinate vector of x squared e to the 3x, well, that's just 0, 0, 1, because this is one of the members of the basis, right?
It's the third member of the basis. So there's none of the first member, none of the second member, and one of the third member, okay? And now you just do the matrix multiplication. If you do this matrix multiplication, you get 1 over 27 times the vector 2, minus 6, 9, all right? And what this coordinate vector means is that this function here is, well, 2 over 27 times the first basis vector,
minus 6 over 27 times the second basis vector, plus 9 over 27 times the third basis vector. Okay, so that gives us our answer. And the only minor technicality that we have left to deal with is this vector space V that we've been working over does not have constants in it. So we've got to sort of manually add back in the plus C at the end of the day, right?
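Continuing the sketch from above, here's the tail end of the calculation: multiply the inverse by the coordinate vector 0, 0, 1, rebuild the antiderivative from its coordinates, and differentiate to check that we really get x squared e to the 3x back. (SymPy prints the reduced fractions 2/27, -2/9, 1/3, which are the same as 2/27, -6/27, 9/27.)

```python
# Coordinate vector of x^2 e^(3x) with respect to B is (0, 0, 1).
coords = A_inv * sp.Matrix([0, 0, 1])
print(coords.T)   # Matrix([[2/27, -2/9, 1/3]])

# Rebuild the antiderivative from its coordinates; the "+ C" has to be
# added by hand at the end, since constant functions aren't in our space V.
antiderivative = sum(c * f for c, f in zip(coords, basis))
print(sp.simplify(sp.diff(antiderivative, x) - x**2 * sp.exp(3*x)))   # prints 0
```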
Indefinite integrals, they're only determined up to a constant. Okay, so just as one final quick note before we leave: because of this link between standard matrices and linear transformations, basically all properties of invertibility that we saw in the previous course for matrices carry over in a very natural way to linear transformations on finite dimensional vector spaces, right? Remember that in the previous course, we learned that invertibility of matrices was equivalent to all sorts of different things, like a matrix is invertible if and only if its determinant is non-zero. A matrix is invertible if and only if it has linearly independent columns. A matrix is invertible if and only if the linear system Ax equals zero has the unique solution x equals zero.
Okay, so we're just going to freely use these equivalent properties for linear transformations now without even really thinking about it, just because we have this bridge between matrices and linear transformations. Okay, and in particular, there's one property that we're going to use this week that I just wanted to point out. All right, so if you have finite dimensional vector spaces with the same dimension, all right, we just need the same dimension so that the standard matrix ends up being square, and it makes sense to talk about invertibility.
Okay, then your linear transformation is invertible if and only if T of v equals zero implies v equals zero. And this is just the analogue of the matrix fact: a matrix is invertible if and only if Ax equals zero implies x equals zero. It was equivalent to linear systems having unique solutions.
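In symbols, for finite dimensional V and W with dim V equal to dim W:

```latex
T \text{ is invertible}
\iff
\bigl(\, T(\mathbf{v}) = \mathbf{0} \ \Longrightarrow\ \mathbf{v} = \mathbf{0} \,\bigr).
```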
And that's all this is really saying. It's just a linear transformation version of it. And we're going to make use of that later on in this week. All right.
So that'll do it for today's lecture. I will see you all in lecture 14.