Hello and welcome to the "Linear
Algebra" day of the Pre-reqs refresher! I'm Ella Batty. I am a Lecturer and Curriculum
developer for Computational Neuroscience at Harvard. Within Neuromatch, I am the Coordinating
Academic Officer for the Comp Neuro course. As you can tell, I'm interested in comp neuro
education, and I'm also interested in using Machine Learning approaches to model neurons, and
in methods for understanding complex networks. Outside of science, I like reading, dogs, and
(most importantly) bingeing murder mysteries. So, I want to start with why you should care about linear algebra: why are we asking you to spend the next couple of hours of your life learning it? Linear algebra is a foundational area of math. It's central to most other areas of math and to a lot of applications, so it's used all the time in physics, computer science, economics, and other fields. Importantly for us, it's also used a lot in neuroscience, and especially in computational and theoretical neuroscience. In fact, we'll use concepts from linear algebra in over half of the comp neuro days. It's a good fit for neuroscience because you can think of linear algebra as the language of data: it's how we organize, transform, and analyze
data. As a really simple example of that, let's assume that you are recording from three different neurons: neuron 1, neuron 2, and neuron 3. While you are recording from these neurons, you present an image to the animal. So, let's say you present an image of a dog, and you find that neuron 1 fires at 10 Hertz (where Hertz here means spikes per second), neuron 2 fires at 50 Hertz, and neuron 3 at 2 Hertz. Then you can show another image, let's say of a cow, and again record the activities of these three neurons. So, we only have 6 numbers, but it's already starting to get a little bit messy: I have to tell you a lot of different pairings, like neuron 2's rate to the dog is this, neuron 3's rate to the cow is this, and so on. So we can organize this data
using a linear algebra concept called "vectors". The brackets are denoting that this is now a vector. So, a vector is basically an ordered list of numbers. Here the ordering is our neuron ordering, so you know that the first component in the vector is always neuron 1, the second component is neuron 2, and the third is neuron 3. So now if I give you a new vector and tell you it gives the rates in response to a tree stimulus, you instantly know which number corresponds to which neuron.
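To make this concrete, here's a minimal sketch in Python with NumPy of how you might store these rates as a vector (the variable names are just illustrative choices):

```python
import numpy as np

# Firing rates to the dog image, in Hz (spikes per second),
# ordered as [neuron 1, neuron 2, neuron 3]
r_dog = np.array([10, 50, 2])

# Because the ordering is fixed, indexing recovers each neuron's rate
print(r_dog[0])  # neuron 1's rate to the dog: 10
```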
We can also do operations on these vectors. We'll talk about this operation more later in the course, but one thing we can do is "vector subtraction", which lets us start to understand how the neurons are responding differently to dogs and cows. When we do vector subtraction, we're subtracting the individual components: 10 - 12 is -2, 50 - 8 is 42, and so on. So, this difference vector tells us how much more each neuron responded to the dog image than to the cow image.
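Here's the same subtraction as a minimal NumPy sketch. Note that the example above only gives neuron 3's rate to the dog, so its cow rate below (4 Hz) is a made-up placeholder:

```python
import numpy as np

r_dog = np.array([10, 50, 2])  # rates to the dog image (Hz)
r_cow = np.array([12, 8, 4])   # rates to the cow image (Hz); neuron 3's
                               # value is an assumed placeholder

# Vector subtraction acts component-wise: how much more each neuron
# fired to the dog than to the cow
diff = r_dog - r_cow
print(diff)  # [-2 42 -2]
```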
So, today you will start with the first tutorial, on vectors. You'll learn about the definition of a vector,
more about vector properties and operations, and how to define space through vectors. In the
second tutorial, you'll learn about matrices: how matrices can transform space, their properties and operations, and eigenvalues and eigenvectors.
And then if you have time (but absolutely no worries if you don't), you can tackle the
bonus tutorial on "Discrete Dynamical Systems". In that tutorial, you'll model a very
simple neural circuit, and then you'll understand the dynamics of this neural circuit
using eigenvalues. And don't worry if all of these words are nonsense to you. Hopefully,
by the end of the day, they will not be.