Hey, linear algebra folks, my name is Nathan Johnson, and this is going to be the first in a series of videos where we look at how we can use linear systems to solve a variety of problems. In this video, the problem that we're going to start with is using linear systems to find a vector or vectors that are orthogonal to a given set of vectors. Let's get into it. Okay, so let's start off in two-dimensional space, where this problem is simple enough that we don't even need a linear system to help us solve it.
So let's suppose that we're given the vector 2, 1, and we're trying to find a vector that is perpendicular to it, orthogonal to it. The way we can do this is to notice that, hey, this vector 2, 1, it lies on a line that has slope one half. And well, one way to find a line that's perpendicular to another line is to construct a line whose slope is the negative reciprocal of the slope of the original line.
So if the original line had slope one half, then I want to put my new vector, my orthogonal vector, on a line that has slope negative 2. Okay, and the way I can do this is to just pick any vector whose second coordinate is negative two times its first coordinate. Okay, so for example, the vector negative 1, 2 does the job. Okay, so let's ramp up a little bit.
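As a quick sanity check, perpendicularity is the same as the dot product being zero, so we can verify the pair 2, 1 and negative 1, 2 in a couple of lines of Python (just an illustration, not part of the video):

```python
# Two vectors in the plane are perpendicular exactly when
# their dot product is zero.
v = (2, 1)       # the given vector
w = (-1, 2)      # the candidate orthogonal vector
dot_vw = v[0] * w[0] + v[1] * w[1]
print(dot_vw)  # 0, so w really is orthogonal to v
```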
Let's talk about how to do this in three dimensions. Okay, so suppose that you're in three dimensional space and you're given two vectors, one, two, three, and negative one, negative three, three. And suppose that you want to find a vector that's orthogonal to both of them. Well, one way of doing this that you might have seen already is via something called the cross product. The cross product, it's an operation that takes in two vectors, spits out another vector, and that other vector that it spits out, it's always going to be orthogonal to the first two that you put into it.
Okay, so that's one way to solve this problem. We're not going to do that because I don't like the cross product. Okay, cross product, it's one of the very few things in linear algebra that's actually dimension dependent.
So I don't like it. Let's talk about the dimension independent way of solving this problem. And that is via linear systems.
Okay, so how do you solve this problem via linear systems? Well, what you do is you say, well, I want to find this vector v, I don't know what its entries are, just give names to its entries, call them x, y, and z. Okay, and now what I want is I want to be orthogonal to these two other vectors.
So in other words, what I want is I want my vector v, with entries x, y, z, to have dot product with these other two vectors equal to zero. So I want x, y, z dotted with 1, 2, 3 to equal zero. And I want x, y, z dotted with negative 1, negative 3, 3 to equal zero.
And if you expand these out, these are just linear equations, right? I mean, if I do x, y, z dotted with one, two, three equals zero, that's the same as saying x plus two y plus three z equals zero. And that's a linear equation in the variables x, y, and z. Similarly, I get the linear equation negative x minus three y plus three z equals zero from the second dot product.
So then I can take this linear system here and just throw it into matrix form. It's going to correspond to the augmented matrix with rows 1, 2, 3 | 0 and negative 1, negative 3, 3 | 0. And remember, the way this works is the rows correspond to equations, and the columns correspond to variables. So those columns correspond to the variables x, y, and z, in that order.
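In code, setting up this augmented matrix is just a matter of tacking a zero onto each given vector; a tiny Python sketch (my own illustration, not from the video):

```python
# Each given vector becomes one row of coefficients for (x, y, z);
# the augmented column is all zeros because each dot product must be zero.
given = [(1, 2, 3), (-1, -3, 3)]
augmented = [list(v) + [0] for v in given]
print(augmented)  # [[1, 2, 3, 0], [-1, -3, 3, 0]]
```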
And now I'm going to take this augmented matrix here and just do my row reduction magic to it to put it in row echelon form. In particular, if I just add that first row to the second row, then I'm going to get a zero in the bottom left corner. Now it's in row echelon form; I have this nice diagonal pattern of leading entries that I like. So I can solve this linear system. Okay, and in particular, you're going to notice that, hey, this linear system here, it has a free variable, right? Because there's no leading entry in that third column, in the z column. So that tells me that z, it's a free variable. The other two variables, x and y, they're leading.
So I'm going to be able to solve for x and y, the leading variables, in terms of z, the free variable. Okay, because that is a free variable, I can pick it to be whatever I want.
Okay, I'm going to pick Z equals one. You could pick Z equals seven, or Z equals negative 13 over nine; it does not matter. The only thing you have to be a little bit careful of is you don't want to pick Z equals zero, because if you do that, you're going to get X and Y both equal to zero as well. So then you'll get that V is the zero vector. And yeah, the zero vector is orthogonal to those two vectors that were given to me, but the question specifically asked for a non-zero orthogonal vector.
So just pick Z to be something non-zero to avoid that technicality there. All right, so anyway, I picked Z equals 1, so now what I'm going to do is I'm going to solve for X and Y in terms of Z. And the way I do that is I go back up to that matrix that's in row echelon form up there. In particular, the second row of that matrix, that tells me negative Y plus 6Z equals 0. I'll just rearrange.
I know Z equals 1. Plug that in there. I'm going to find that Y has got to be 6. Similarly, the first row of that row echelon form tells me that x plus 2y plus 3z equals 0. Well, I know y and z now, so I just plug those in and I rearrange and I'm going to get, oh, x is negative 15 this time. I bring everything over to the right-hand side and plug in my values for y and z. And there I've done it, okay? I've found an orthogonal vector, negative 15, 6, 1. That is an orthogonal vector.
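It's easy to double-check that answer with a little dot product helper; here's a quick Python verification (my own check, not from the video):

```python
def dot(u, v):
    # Sum of entrywise products.
    return sum(a * b for a, b in zip(u, v))

solution = (-15, 6, 1)
checks = [dot(solution, v) for v in [(1, 2, 3), (-1, -3, 3)]]
print(checks)  # [0, 0]: orthogonal to both given vectors
```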
It's basically the same as the vector that the cross product would have given us, just the negative of that vector. Which makes sense: whenever you're finding these orthogonal vectors, there's not just one of them, but every scalar multiple of an orthogonal vector is also an orthogonal vector, right?
So there's not just one correct answer to this. And that corresponds to the fact that we could choose Z arbitrarily. If I chose Z equals negative one earlier, I would have found exactly the cross product answer. It's maybe helpful to think geometrically about what we just did here. Okay, these two vectors that we started off with one, two, three, and negative one, negative three, three.
Those lie on some plane in three-dimensional space, right? There's a unique plane that contains both of them. And by asking for a vector that's orthogonal to both of them, we're asking, in other words, for a vector that's orthogonal to that plane, that's perpendicular to that plane. Okay, so let's ramp up again, this time to four dimensions where geometric considerations won't help us anymore. There's no cross product.
Basically, the way to solve this problem now is linear systems. Okay. So suppose that we're asked to find a vector that's orthogonal to each of 1, 2, 2, 2, and 2, 1, negative 2, 1, and 1, 0, 2, 0. Okay.
So these are three vectors living in four dimensional space. We're trying to find a vector that's perpendicular to all of them. We're going to mimic exactly what we did before.
Okay. We're going to say, I don't know what my vector V is. I'm trying to find it, give names to its entries, call its entries V1, V2, V3, and V4.
Okay. And I want its... dot product with these three vectors to all equal zero, because that's what orthogonality is.
Okay, so I just write down vector v, v1, v2, v3, v4, dotted with each of these vectors equals zero. And again, these are all just linear equations, right? v dotted with 1, 2, 2, 2 equals zero is just the same as saying v1 plus 2v2 plus 2v3 plus 2v4 equals zero. And similarly, for the other two dot products, those are two more linear equations. And I can throw this all in a matrix in the usual way, each row of this matrix, one row corresponds to one equation.
So we have three rows, because there are three equations. And similarly, there are going to be four columns plus the augmented all zeros column, because there are four variables, each of these columns corresponds to one of the four variables. And then to solve this linear system, all I do is I apply Gaussian elimination to bring this matrix down into row echelon form.
And actually, I'm going to go one step farther with this particular problem: I'm going to go all the way to reduced row echelon form. You don't really have to, but I'm going to find that in this particular problem, it sort of cleans things up a little bit; it makes it a little bit simpler to write down the final answer. Okay, so to start, I'm going to do row two minus two row one, and then row three minus row one, because I want to clear out the middle and bottom entries in that leftmost column. I like the leading entry of one at the top left; I don't like the nonzero stuff below it. So just do those two row operations.
And now I'll turn that first column into 1, 0, 0. Okay, next, because I like avoiding fractions when I can, I'm going to rescale the second and third rows so that the leading entries are both ones now. So I'm going to do negative one-third times row two, and negative one-half times row three, okay, and that'll turn both of those leading entries in the second column into ones. So at this point, I'm happy with that one in the two-two entry in the middle of the second column; I don't like the two above it, or the one below it.
So I'm going to do row one minus two row two, and row three minus row two, to put zeros in those entries and to sort of clean up the rest of that second column. And now again, to avoid fractions, I'm going to do negative one-half times row three to turn row three's leading entry from negative two into one.
And then finally, I can get the rest of the way to reduced row echelon form just by clearing up the entries that are above that third leading entry in that third column there. Get rid of the negative two, get rid of the two, by doing row one plus two row three and row two minus two row three. That clears up that third column. And hey, now it's in reduced row echelon form, so we're just going to be able to read off what our solutions are.
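All of those row operations can be automated. Here's a small, self-contained Python sketch of Gaussian elimination all the way to reduced row echelon form, using exact fractions to avoid rounding (my own illustration; the augmented all-zeros column is omitted, since row operations never change it):

```python
from fractions import Fraction

def rref(matrix):
    """Bring a matrix to reduced row echelon form using exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue  # no leading entry here: this column's variable is free
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # Rescale so the leading entry becomes 1.
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]
        # Clear every other entry in this column, above and below the pivot.
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

coeffs = [[1, 2, 2, 2], [2, 1, -2, 1], [1, 0, 2, 0]]
result = [[int(x) for x in row] for row in rref(coeffs)]
print(result)  # [[1, 0, 0, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
```

Running it on this example reproduces the reduced row echelon form from the video: the fourth column has no leading entry, so v4 is free.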
In particular, remember, hey, that fourth column there, that corresponds to the variable v4. There's no leading entry in that fourth column. So that tells me the variable v4 is free. All the others are leading: v1, v2, v3, those are all leading. Okay, so what this means is I can pick v4 to be whatever I want.
And I can use that value of v4 to solve for v1, v2, v3. So I'm going to do something similar to what we did in the three dimensional case. I'm just going to pick v4 to be one, you can pick it to be any non zero value you like, though, and then just solve for the other variables.
Okay, so from that bottom row of the reduced row echelon form, well, that's an equation that just tells me one times V3 equals zero. So V3 is zero.
The second row in that reduced row echelon form tells me one times V2 plus one times V4 equals zero. Move the V4 to the other side. V2 is negative V4, but V4 I chose to be one.
So V2 is negative one. And then the top row of that reduced row echelon form, that just says one times V1 equals zero. So V1 is zero. And then just throw all of this into a vector.
Okay. It tells us that, hey, the orthogonal vector that we're looking for, it's just V1, V2, V3, V4, which is 0, negative 1, 0, 1. Okay, so let's ramp up one final time. This time, we're not going to ramp up in terms of dimension.
We could go all the way to like 19 dimensions. Doesn't matter. The procedure is going to be the exact same.
This time, what we're going to ramp up is, well, two things. For one thing, we're not going to ask just for a single vector that's orthogonal to all of the given ones. We're going to be asked this time to describe all vectors that are orthogonal to the given vectors 1, 2, 3, 4, and 2, 1, 3, 2, and negative 2, 2, 0, 4. Okay.
We want to describe every single vector that's orthogonal to all of those. So that's one way that we're ramping up. The other way that we're ramping up is this time during our solution, we're going to go through the exact same procedures that we just went through, but something's going to go kind of weird when we do it.
And we're just going to talk about how to deal with that. So same setup as last time, though, okay? We don't know what our vector is.
And just give names to its entries, v1, v2, v3, v4, okay? Take the dot product with each of the three given vectors, set each equal to zero, and that turns into these three linear equations right here, okay? You take those three linear equations, throw them into a matrix just like before, okay? And then we do row operations to bring it down to reduced row echelon form.
Okay, so all of this, it's the exact same procedure that we just went through before. I'm going to not read out the details; you can sort of see what's happening, so try the computation on your own. Okay, this is just solving a linear system. Okay, but once you get to this point, hey, now we're in reduced row echelon form. Okay, so let's interpret it. Okay, this time, there are only two leading entries, in the v1 and v2 columns.
So v1 and v2, those are leading. V3 and V4, those are free, because the third and fourth columns have no leading entries. So this time, one thing that's different is there are two free variables. And what that tells you is the solution space of this linear system.
It's two dimensional instead of one dimensional like it was in the previous example. In other words, there's not just a one dimensional line of orthogonal vectors. There's a two dimensional plane of orthogonal vectors.
Inside a four dimensional space, we have these three starting vectors, and there's a two dimensional space that's entirely orthogonal to them. In other words, these three starting vectors, they live in a two dimensional space, and then there are two dimensions left over to be perpendicular to all of them. So because we're not trying to find just a single solution this time, we're not going to make particular choices for those free variables. We're just going to say v3 and v4, those are free; I'm going to leave them like that, free and unspecified, and I'm going to solve for v1 and v2 in terms of them.
Okay, so for example, the second row of that reduced row echelon form that tells me v2 plus v3 plus two v4 equals zero. So I just rearrange that v2 equals negative v3 minus two v4. Okay, and you'll notice that that has the form leading variable equals some junk in terms of the free variables. And that's what you want.
Similarly, the first row in that reduced row echelon form that tells me one times v1 plus one times v3 equals zero. So rearrange that, move the free variables over to the right-hand side, leading variable, there will only ever be one of them, stays on the left-hand side, and you get v1 is negative v3. So then this time, when I throw this all back into a vector, I say, ah, my orthogonal vectors, they all look like v1, v2, v3, v4. Well, write down all four of those entries in terms of the free variables. So for example, replace v1 that's leading by its free variable equivalent, negative v3.
And v2, oh, that's leading, so I've got to replace it in terms of the free variables: replace v2 by negative v3 minus 2v4. Okay, and then v3, v4, those are already free, so just leave them alone. Okay, so this question, it wasn't super specific. It said describe all vectors that are orthogonal to these three starting vectors. So one final answer is you could think, ah, all orthogonal vectors, they're the vectors that look like this, for some choice of v3, v4, right? And you can't really get more specific than that.
There are infinitely many vectors; in particular, there's two dimensions of solutions to this problem. So you just say, you know, v3 and v4 can be anything, and then you compute this vector. That's a fine final answer. Maybe something that's slightly more illuminating, though, and actually gives you a bit better of a hint of what's going on here, is you can factor this vector, okay? You ask yourself, how many v3s are there in each entry?
And well, there's negative one v3 in the first entry, negative one v3 in the second entry, and then one v3 in the third entry and zero in the fourth entry. Okay, so I'm going to write this vector here as v3 times negative one, negative one, one, zero, and then plus... well, how many v4s are there in each entry? Well, there's zero in the first, negative two in the second, zero in the third, and one in the fourth.
So I can write, you know, plus V4 times the vector zero, negative two, zero, one. Okay. And then this factored vector at the bottom, that's the exact same as the sort of unfactored vector directly above it.
So the way to think about this is, again, still V3 and V4, they can be anything at all. Okay, but this thing at the bottom here, this is some linear combination of those two particular vectors, negative one, negative one, one, zero, and zero, negative two, zero, one. Okay, so you can think about this as, ah, here are two particular vectors that are orthogonal to all of the given starting vectors. And well, any linear combination of them is orthogonal to those starting vectors as well.
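That closure under linear combinations is easy to test numerically. Here's a quick Python check that an arbitrary combination of those two particular vectors is still orthogonal to all three givens (my own illustration; the choice v3 = 3, v4 = negative 5 is arbitrary):

```python
def dot(u, v):
    # Sum of entrywise products.
    return sum(a * b for a, b in zip(u, v))

given = [(1, 2, 3, 4), (2, 1, 3, 2), (-2, 2, 0, 4)]
basis = [(-1, -1, 1, 0), (0, -2, 0, 1)]

# An arbitrary linear combination: v3 = 3, v4 = -5.
v3, v4 = 3, -5
combo = [v3 * a + v4 * b for a, b in zip(*basis)]
results = [dot(combo, g) for g in given]
print(combo, results)  # [-3, 7, 3, -5] [0, 0, 0]
```

Any other choice of v3 and v4 gives zeros as well, which is exactly the statement that the whole plane spanned by those two vectors is orthogonal to the starting vectors.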
So this sort of describes to you what that two dimensional solution space of orthogonal vectors looks like: a vector pointing in this direction, negative one, negative one, one, zero, also a vector pointing this way, zero, negative two, zero, one, and sort of everything in between them, the entire plane that contains those two vectors. That's what the solution space looks like. All right, so that's all I got for you today. Thanks for watching, everyone.
We're gonna have a whole bunch of more videos over the next couple weeks talking about other problems that you can also solve once you understand how to solve a linear system. So I'll see you then.