Hello! Welcome to the first lecture for Computational Neuroscience, SYDE 552 / BIOL 487. This is taught at the University of Waterloo, winter of 2021. And yeah, for this lecture we're gonna introduce neurons, because that seems like a really good way to start computational neuroscience. What the heck is a neuron? As I noted before, we're gonna keep coming back to this diagram about, you know, what sort of scale we're talking about, and for this lecture we're going to be focusing on the bottom half of the scale, so everything from neurons down to molecules.
Next lecture we'll talk about the central nervous system, which will be everything from neurons up to the whole central nervous system. The readings are posted on the course website; it's basically chapter two of Kandel et al., and there are some optional readings that go into more detail on all of this stuff if you find it interesting. But the purpose of this is to give us a general overview of how a neuron works, so that we can have that in mind as we're talking about all of the computational neuroscience details that we'll get into throughout the course. First of all, I do want to start by pointing out that it wasn't obvious to everyone right from the beginning that neurons and the brain are the source of the mind, or the source of behavior or thinking.
Famously, Aristotle has this really interesting treatise on how the brain is just a system for cooling the blood, and that's why it's got its weird, weird structure: it's just a radiator. That turns out not to be true; brains seem to actually be the basis of our behavior. So how the heck does that work? For this whole process of really getting into what our neurons are doing, the beginning of modern neuroscience is sort of the early 1900s. And one way to really mark that is the researcher Santiago Ramón y Cajal, who did these amazing diagrams. What's happening here is he's gotten slices of brains and stained them.
In particular, he found a way of staining them such that only a couple of the neurons changed their color. So there's lots and lots of neurons there. If you had something that just made every neuron go black, well, the whole thing would turn black. So that doesn't help. But if you can find something that will cause only a few of the neurons to absorb the pigment and turn black.
Well, then now you can just point a microscope at things and you look at them and go, oh cool, I can draw them. And so what these are are literally him sitting down and just staring at things and drawing them really, really carefully. And because, you know, at this point we have no idea what the heck's going on there, so who knows what's important, but this is the wide variety of things that he ends up seeing. Okay, so this thing over here on the left, that is one neuron. It just, just, what?
this is ridiculous! And that's the structure observed. You can sort of see in this diagram that sort of those empty spots, those are going to be other neurons, ones that weren't stained, and it's sort of the places that those other neurons would be, which is kind of cool. Anyway, so that's just one neuron. Over here we've got more sort of cells that are sort of in the cortex.
And, again, wildly different shape. What the heck is going on here? This...
huh? Even calling these things the same sorts of components as this one over here is weird. Because they're just wildly...
they look wildly different. Over here on the right you have a retina, sort of, and you've got this gorgeous structure going on here. We've got a bunch of different neurons stained, but still not all of them. and you've got this whole sort of fascinating structure.
What the heck is going on there? Clearly at this point, all right, it's definitely not just a radiator. Something is going on here.
How do we understand anything about these things? One place that people have really started and quite successful is, look, yes, there's lots and lots of different things. But we can actually start to organize them and start to classify them. And when we do that, we're going to find some common features that they all have. And that's what's going to let us sort of get some sort of handle on trying to understand these things.
So there's definitely huge variety in the shapes and sizes and sort of morphology. But then you can use the same sorts of techniques to get an understanding of all of these. And there's a great example of that in the paper that's linked here at the bottom, by Henry Markram et al. Lots and lots of different neurons, lots of different structures.
You go take a look at one in biology and then you go build a computer simulation of it using the same principles as all the other ones, and you're like, okay, good, we can get a computer simulation that matches the behavior of the actual biological system pretty well. I will note that this was a huge effort: that "et al." down there, so Henry Markram and others, yeah, the "and others" is 84 other people. So a huge amount of work, but we can do it, and it seems to have a really good match to the biological systems.
There's also some really cool stuff that's been happening lately about trying to get much more detailed 3D pictures of what these neurons are actually doing. Well, or what the actual 3D structure of these neurons is. It's a little bit more difficult. You've got to actually get really precise slices and you want to have not just one neuron, but you want to see that neuron and all the neurons that are also in that same area. So it's starting to be possible now.
There's a great video that's linked here in the notes. I'll just show a little tiny bit of it; this is sort of a reconstruction. They've taken some actual biological tissue, sliced it, and gone ahead and figured out, you know, exactly the shapes of a bunch of neurons.
So what we're seeing here is a bunch of those neurons. Still not all of them, and in order to really see all of them, we're gonna just focus on that one little area there. And you see that mess there? Yeah, that's all of them. There is none of this big empty space anywhere. So, you know, you'll often see pictures of neurons where there's like big empty space between neurons.
No, that does not happen in the brain. Neurons are packed together in the brain. In fact, you can sort of see that we just sort of explode this all out. Those are all different other neurons all crossing by and passing through that same little space. So all sorts of things pack together into very small space, but we are starting to be able to reconstruct it.
It's also worth pointing out neurons aren't the only brain cells. In this course we're just going to talk about neurons for the vast majority of the time. There are other things in the brain. There are things that maintain...
so, things like here in the middle, we have these Schwann cells that maintain sort of an insulation layer on part of the neuron. That seems to be really important for efficiently passing information around. There are other cells that seem maybe to be more about providing nutrition or recycling neurotransmitters or things like that.
There are definitely some people who say, hey, look, those other things are also involved in computation somehow. And that could well be true to some degree. We're going to focus on neurons, because it's at least a little bit less controversial. But it is also worth pointing out that yes, there are these other things, generally called glial cells, that are doing things other than the sort of signal transmission and computation that we're going to be focusing on in this course.
They might have a computational role. There's like, every few years there's a couple papers that are sort of really excited about finding some particular computational role about these systems, but the general consensus, or at least the non-controversial view right now, would be to say that, yeah, they're just providing maintenance and maintaining the health of the system, and we can sort of ignore them for computational purposes, maybe. All right, but what can we say about a sort of a canonical common neuron? So yeah, there's a huge variety of neurons. What is it that we can sort of say that's always there?
I want to start by just focusing on sort of physical structure, because that's, again, like historically that's what people knew first is, you know, okay, we can see the physical structure, so let's try to understand it that way. And it really wasn't until much more recently that you can actually try to get what the functional meaning of these structures are. But it's definitely really common to have one big area of sort of this lots of branching stuff.
And that tends to be connected to a slightly wider thing that we're going to call the cell body. So we've got all this branching stuff that we're going to call dendrites; the name comes from the Greek for tree, because of all the branching.
And then we've got this cell body that is basically the core of the cell, the thing that has all the machinery that lets the cell live. And then there tends to be a long output... sorry, I shouldn't say output yet, because historically we didn't know it was an output yet. There tends to be a long thing coming off of that cell body that then branches a little bit and ends up next to the next neurons. So that sort of structure is pretty common-ish.
And how it's going to sort of map more onto functionality is it's going to turn out these dendrites, these sort of big branching things, tends to be the inputs. So there's going to be some things that are happening along these dendrites that are allowing current to flow into the system. That's going to affect the stuff that's going on here.
And that's going to cause some sort of output signal to come along here. And then that output is going to go to other neurons. So terminology-wise, the important terminology here is dendrites is the things that are attracting inputs into the neuron.
Axon is going to be this one long cable that is sort of... producing an output that we're going to go send to other places, and then the synapse is going to be the thing that lets that output connect to the next neuron. Of course in the examples there I was just saying neurons connected to neurons.
For the particular case of a sensory neuron, then that input is going to be coming from some sort of sensory signal. So we'll have some sort of things here that are sensitive to photons, if this is a retina neuron, or sensitive to stretch of the muscle, if this is a neuron that's sensing muscle tension, whatever. So there can be other things that are going to cause inputs. The vast majority of time for what we're going to talk about, the inputs are coming from other neurons. And again, the outputs, the vast majority of the time, the output is something goes to another neuron, but in the particular case of motor neurons, that output would be something that, say, goes to a muscle and causes the muscle to contract.
Cool! So that sort of structure is going to be there. But there's also pretty big diversity.
But all that diversity is still going to be consistent with that structure. There's going to be interesting exceptions. So for example, yes, it is possible to have dendrites that are connected onto an axon.
Pretty rare, but yes, it can happen. And this is something you would only know by sort of looking at, again, the functionality of, oh, hey, look, this is where inputs are coming from. the signals are going this way.
But again, majority of the time we have this sort of, yay, I have dendrites, cell body, some long signal going to an output. Sometimes those dendrites are ridiculously complicated, like these ones that I've been showing. Interesting.
Okay, wide variety of diversity there. What about that output? So that output axon thing, that seems to be pretty common.
What's that output like? And this is something where we can sort of talk about a universal, well, it's not quite a universal mechanism. It certainly is not something in all neurons, but it's pretty darn common, right?
This is the thing that's also sometimes called a spike. It's an action potential. This is the idea of...
what is the signal that a neuron is communicating to the next neurons? And the idea seems to be that, in the absence of input, the voltage difference here between the inside and the outside of the neuron has some sort of resting value. It varies depending on the neuron, but in this particular diagram, and as a particularly common value, it's minus 70 millivolts. If that voltage gets up to, on this diagram, minus 55 millivolts, then something kicks in and you get this massive shooting up of the action potential. So this diagram is sort of measuring the voltage at one particular point along this axon.
So you end up seeing this massive shoot up of the voltage, and then it drops suddenly, and then it recovers to the resting potential. And that pulse, that spike, travels down the axon. That's nice and easy to measure.
You connect some wires to the neuron, you just measure it, and, oh okay, look, you get this. So this seems to be an incredibly common thing for neurons to do. And interestingly, the shape seems to be pretty fixed. So even if you observe the same neuron over time, watch it spike over and over again, the shape of this pulse doesn't seem to change much.
The pattern of pulses changes a lot. So like it might be going very slowly, or it might be sort of giving some bursts of pulses, and then a quiet time. Might have some really complicated pattern, but it looks like the actual shape of the pulse is not passing any information.
It's just the times of when pulses happen. That's what seems to be giving us information about what the input is. So that sort of story seems to be really common, and that it is just the sort of this all-or-nothing event, and that's why when we sort of talk about neurons, an interesting thing that people will often measure about them is when are they spiking.
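Since only the timing seems to matter, recordings like this are typically reduced to a list of spike times. Here's a minimal sketch of doing that by detecting upward threshold crossings; the threshold value and the toy trace are made up for the example, not from the lecture.

```python
# Illustrative sketch: reduce a voltage trace to spike times by detecting
# upward threshold crossings. Threshold and trace values are made up.

def spike_times(voltage_mv, dt_ms, threshold_mv=-20.0):
    """Return the times (ms) where the trace crosses threshold going upward."""
    times = []
    for i in range(1, len(voltage_mv)):
        if voltage_mv[i - 1] < threshold_mv <= voltage_mv[i]:
            times.append(i * dt_ms)
    return times

# A toy trace: resting at -70 mV with two brief "spikes" up to +30 mV.
trace = [-70.0] * 100
trace[20] = trace[60] = 30.0
print(spike_times(trace, dt_ms=1.0))  # [20.0, 60.0]
```

Once you've thrown away the shape like this, everything downstream (firing rates, spike patterns) is computed from those times alone.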
I don't care about the exact shape of each spike, I just care about when they spike. And in particular, as I said, there's a huge variety. In fact, as you play around with different neuron models and sort of different shapes of neurons or adjust different parameters of your neuron model, you can get for exactly the same input very different outputs.
Still all spikes, but different patterns of spikes. So for example, over here on the left, I have the same, you know, four of these diagrams, actually six of these diagrams, all have exactly the same input. That's this little thing at the bottom of the diagram. So it's just sort of, okay, I have no input, and then I have this constant input for this whole period of time.
And sometimes that'll cause a neuron to spike a bunch and then start spiking more slowly. Sometimes that'll cause a neuron to just spike once, then do nothing. Sometimes it'll cause a neuron to give bursts of spikes and then nothing and then a burst of spikes and then nothing and a burst of spikes then nothing or just give a single burst of spikes and then nothing. All of these behaviors you can get by slightly adjusting your neuron model and these are all different things you see in real biological neurons.
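These qualitatively different patterns really can come from tiny parameter tweaks. As one concrete, hedged illustration (this is the Izhikevich 2003 two-variable model, a standard simple model, not the specific neuron models shown in the figure), changing just the two reset parameters turns regular spiking into bursting for exactly the same constant input:

```python
# Sketch of the Izhikevich (2003) neuron model: the same constant input I
# produces different firing patterns depending on just two reset parameters.

def izhikevich(a, b, c, d, I=10.0, T=400.0, dt=0.25):
    v, u = -65.0, b * -65.0          # membrane voltage, recovery variable
    spikes, t = [], 0.0
    while t < T:
        if v >= 30.0:                # spike: record time, then reset
            spikes.append(t)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
        t += dt
    return spikes

regular = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)   # "regular spiking"
bursting = izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0)  # "chattering"/bursting
print(len(regular), len(bursting))
```

Same input, same equations, different reset values: one parameter set gives evenly spaced spikes, the other gives clusters of spikes separated by quiet gaps.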
So lots and lots and lots of different options about what the heck's going on here. And all the rest of these diagrams are just showing different things with different weird ways of putting an input. So, but again, when people are then looking at that, the interesting thing seems to be, all right, when do these spikes occur, not what's the exact shape of the voltage around the spikes.
All right, so what do these spikes do? Generally the spike doesn't just directly pass voltage from one neuron to the next neuron. It's not like... Okay, again, anytime you say anything about biology, there's an exception.
There are definitely places in biology, certainly very common in insects, where yes, what is happening is there is a direct electrical connection from one neuron to the next, and just the pulse of voltage just gets passed right on. It's called a gap junction. They're also present in humans, it's just a little bit less common.
But the vast majority of the time what's happening is synaptic neurotransmitter release. What's that? What's going to happen is the spike is going to hit the synapse, which is sort of a thing that's going to help connect one neuron to the next neuron, but instead of electrically connecting to the next neuron, what it's going to do is that spike is going to cause neurotransmitter to be released. Neurotransmitter is a bunch of chemicals that are going to get dumped into the space between the two.
the one neuron and the next, and that neurotransmitter is going to affect the next neuron. And what we're seeing in this diagram is, here's a single neuron, and, okay, this one is a sensory neuron that's detecting the stretch of a muscle. So if we give it a little bit of a stretch, the neuron spikes a little bit, cool, but then it also releases a little bit of neurotransmitter.
and then if we give it more of a stretch, I get more spikes, and that releases even more neurotransmitter. Give it even more stretch, I get even more spikes, more neurotransmitter. And the idea is that this release of chemicals is what's going to go and affect the behavior of the next neuron.
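As a toy illustration of that chain, here's a sketch where the linear relationships and every constant are purely made up, just to show the monotone stretch → firing rate → transmitter idea:

```python
# Hedged toy of the rate-coding chain: more stretch -> higher firing rate ->
# more neurotransmitter released per unit time. Linear mappings and all
# constants here are illustrative assumptions, not measured values.

def firing_rate_hz(stretch):
    """Hypothetical sensory tuning: firing rate grows with stretch."""
    return 10.0 * stretch

def transmitter_per_second(rate_hz, vesicles_per_spike=3.0):
    """Each spike releases (on average) a few vesicles' worth of transmitter."""
    return rate_hz * vesicles_per_spike

for stretch in (1.0, 2.0, 3.0):
    rate = firing_rate_hz(stretch)
    print(stretch, rate, transmitter_per_second(rate))
```

The point isn't the numbers; it's that the downstream neuron only ever sees the transmitter, so the stimulus has to survive being encoded as a spike rate along the way.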
All right, so how is this neurotransmitter release going to do anything? Um, this starts taking us into saying a little bit more about the actual makeup of a neuron. So let's drop way down below the level of cells, to what's happening at the level of individual ions and individual proteins in the neuron itself. So we've got our canonical neuron down here.
We're zooming in a little bit right on the wall of the neuron. So the top here is outside the neuron, the bottom is inside the neuron. This wall is sort of an insulating layer that maintains the separation of the inside of the neuron from the outside of the neuron. That wall is sometimes called the phospholipid bilayer, because it's made out of a bunch of lipids. But anyway, it has a bunch of giant proteins in it that are called ion channels. And what's going to happen is, normally, these big giant proteins are just big giant walls. But if there's a bunch of neurotransmitter in this area outside the neuron, that neurotransmitter can bind to this giant protein and change the shape of the protein in such a way that there's now a big hole in it, and now ions can flow.
And if you make the holes different sizes, then ions of different sizes can flow. That's what releasing neurotransmitter between these cells is going to do: it allows a flow of ions in and out of the cell, and that's going to change the voltage of the cell. So that's how we're going to get this behavior of the spike from one neuron releasing neurotransmitter, and the neurotransmitter causing current to flow
into the cell. There's tons of fun stuff that can be said about that process. Indeed, we'll talk about it a lot more later on in the course when we start building a detailed biological model of particular individual neurons. The quick hand wavy stuff right now is there's going to be a couple core mechanisms that are going to maintain a particular concentration of sodium and potassium ions inside and outside of the cell.
And it's going and it's set up such that we're going to get some sort of rest. So this is an active process. This is part of why the brain requires energy to to to function. Is that this process is going to maintain this voltage difference in this ion concentration difference sitting around minus 70 millivolts is sort of a common number.
And then what's going to happen is we can allow the neurotransmitters to change this voltage. But by default, it's going to be sitting there maintaining this particular concentration difference.
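For the curious, each maintained concentration difference implies an equilibrium voltage for that ion, given by the Nernst equation. The concentrations below are typical mammalian textbook values, not numbers from this lecture; the point is that potassium "wants" the cell very negative and sodium "wants" it very positive, which brackets the roughly minus 70 millivolt rest.

```python
import math

# Nernst equilibrium potential for an ion: E = (R*T / (z*F)) * ln([out]/[in]).
# Concentrations below (in mM) are typical mammalian textbook values,
# assumed for illustration.
R, F = 8.314, 96485.0   # gas constant (J/mol/K), Faraday constant (C/mol)
T = 310.0               # body temperature in kelvin

def nernst_mv(z, conc_out, conc_in):
    """Equilibrium potential in millivolts for an ion with charge z."""
    return 1000.0 * (R * T / (z * F)) * math.log(conc_out / conc_in)

E_K = nernst_mv(z=1, conc_out=5.0, conc_in=140.0)    # potassium, ~ -89 mV
E_Na = nernst_mv(z=1, conc_out=145.0, conc_in=12.0)  # sodium, ~ +67 mV
print(round(E_K), round(E_Na))
```

So opening sodium channels pulls the voltage up toward sodium's equilibrium, and opening potassium channels pulls it back down, which is exactly the spike mechanics coming up.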
And then two different things can change this. One thing that can change it is, again, as I said, if a neurotransmitter comes in and binds to one of these proteins, that's going to open a channel. This is called ligand gating. So a ligand is just a thing that is going to...
you know, bind to the protein and change its shape. Neurotransmitter being the most common one that we're going to be worried about. So we've got neurotransmitters that can open up these gates, but there's also going to be the possibility of there's other gates that are going to be opened just by having the voltage difference being at a certain level, and that's going to be important about how the spike works. So let's do kind of an interesting example of this. So here's an individual neuron, and I'm just going to sort of manually stimulate this spot on the neurons, or manually sort of give a pulse of neurotransmitter and see what happens.
Okay, and Let's look at this bottom example first. So what I'm measuring is what the voltage is at the cell body, and we're sort of also just sort of at the beginning of this axon. And if I just sort of pulse it once, then I get this sort of, oh, all right, the voltage comes up, and then it goes back down to its resting voltage, where it was before.
So I just gave it a little bit of input current. It put things out of whack a little bit, but then it went back to its resting, normal resting potential. And if I do that again after it's gone back to the resting potential, eh, fine.
But what if I'm in a situation where I do that a little bit faster, or if that neurotransmitter sort of sticks around for a longer period of time? So then I give this first pulse. and it starts decaying back. But then I give the second pulse, and now something very different happens.
What's going to happen here is, if that second pulse takes us above some sort of threshold, that's the threshold at which these voltage-gated channels kick in, open up, and allow even more current to flow in, and that's why we get this positive feedback loop, and the voltage shoots up really high. Then there's going to be some other mechanism that's going to kick in that's going to bring it back down, and then it's going to recover. But that's going to be the story of where a spike comes from. And this is also kind of a story about what sort of computation neurons can do, because what we're seeing here is a system that only produces an output if I've got two inputs close to each other. So a single input is sort of ignored.
Two inputs close to each other produce an output. And that's sort of our first indication of, oh okay, I can use neurons to do some sort of computing. Of course it doesn't have to be inputs at just one spot.
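That single-spot coincidence behavior can be sketched in a few lines. This is a hedged, leaky integrate-and-fire style toy with made-up constants, not a model of any particular neuron: each pulse bumps the voltage up, the voltage leaks back toward rest, and only two pulses close together cross threshold.

```python
import math

# Minimal leaky-integrator sketch of temporal summation. Each input pulse
# bumps the voltage up instantly; between pulses the voltage decays
# exponentially back toward rest. All constants are illustrative assumptions.
V_REST, V_THRESH = -70.0, -55.0   # resting and threshold voltages, mV
TAU = 20.0                        # membrane time constant, ms
BUMP = 12.0                       # voltage jump per input pulse, mV

def spikes_given(pulse_times_ms):
    v, t_prev, spiked = V_REST, 0.0, False
    for t in sorted(pulse_times_ms):
        # exponential leak back toward rest between pulses
        v = V_REST + (v - V_REST) * math.exp(-(t - t_prev) / TAU)
        v += BUMP
        t_prev = t
        if v >= V_THRESH:
            spiked = True
    return spiked

print(spikes_given([10.0]))        # False: one pulse alone is ignored
print(spikes_given([10.0, 15.0]))  # True: two pulses close together
print(spikes_given([10.0, 80.0]))  # False: two pulses far apart
```

So the leak is what makes this a coincidence detector: the first pulse's effect has to still be around when the second one arrives.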
I can do a very similar story, but let's say I have two inputs at two different spots on the dendrites of this neuron; I can set up a system where this neuron will only spike if both of those get an input. So that's our overall story. To say a little bit more about the structure of that spike: again, we've got this resting potential, the voltage is sitting here at minus 70 most of the time. I've got some inputs; maybe they're, you know, happening over time, or two of them at different dendrites happen very close to each other.
Something happens with our input that eventually gets the internal voltage of this cell up to about minus 55, some sort of threshold, the threshold at which those gates open that allow sodium ions to just flood in. Once that happens, we get this very canonical positive feedback loop: the voltage just shoots up. This is part of why people believe that the shape of the spike doesn't really give us much information, because the shape of the spike is just controlled by the particular dynamics of these gates. It's nothing about the input itself.
The input itself is what got the voltage close enough to start causing this to happen. But once this starts happening, the voltage shoots up, and eventually we hit another voltage level where another set of ion channels open up and start allowing potassium, so these are the potassium channels, to flood out. Potassium ions are charged the same as sodium ions, but they're flowing in the opposite direction.
So that means we're going to drop the voltage. So the voltage is going to drop back down. Eventually that whole system is going to get everything back to where it was before, but now we've got to recover everything and get it back to the normal state it was in before, so that it can fire again. Okay.
So that's the general story about where these spikes come from. Cool. Of course, if you do that, that process that we were showing there was just sort of at one point along the axon.
And what's going to happen is, if that one point starts doing it, that's going to cause the next point to also go through a similar process, and the next point, and the next point. And so then you're going to get this pulse that is traveling down the axon. On the left are going to be areas of the axon that are still in that recovery period, but then wherever the pulse is, that's going to trigger the next area, and the next one, and the next one, and that's what's going to cause the spike of activity to travel down the axon.
That's the general story. If you want things to go a little bit faster, if you've got a really long axon for example, then you can put some sort of insulating sheath on here; in biology this is called a myelin sheath. Then you can actually have little chunks of the neuron where, instead of it being this active process that I just described, you can use passive electrical transmission: the current just flows across that stretch and everything's fine. And so that's why having
myelinated neurons seems to be really really useful for having more efficient passing of information, but you still need this core mechanism in order to generate the spike in the first place. All right, spike is going to come down to the axon. It's going to hit the end.
What's going to happen next? Well, there's the synapse thing at the end. We've got the synapse, and there's going to be this sudden jump in voltage that's going to happen because of the mechanisms I just described.
So what's that going to do? Well, that turns out to open up calcium channels, and that's going to change the concentration of calcium in the neuron, and that turns out to make it more likely for these little tiny things, what are called vesicles, to be attracted to the neuron wall. What's a vesicle? It's just a little bubble with a bunch of neurotransmitter in it, and the wall of this bubble is basically the same stuff that the wall of the neuron is built out of.
And what's going to happen is these vesicles, these neurotransmitter vesicles, because of the change in the calcium concentration, are going to be more likely to bind to the wall of the synapse. When that happens, they fuse with it and release their neurotransmitter, because the bubble no longer exists. That neurotransmitter is just going to flow across the gap from one neuron to the next. The neurotransmitter is then going to bind with those channels on the next one. That's going to allow ions to flow in, and the system continues.
That's the basic story for neurons communicating to other neurons. I do want to highlight, before going on past the synapse, that it's such a complicated mechanism, and a whole bunch of things are going to affect how much influence one spike is going to have on the next neuron. That seems to be an important measure here, because the shape of the spike doesn't really do anything; the shape of the spike doesn't really change how likely this whole neurotransmitter release process is to happen.
But for every one spike, how much influence does that have on the next neuron? And that's, there's a whole bunch of factors involved there. So one of them is just, well, how many of these vesicles are there, and how likely are they to actually bind to the wall given a spike? And that can be wildly varying. That can be like 20%, that can be 99%.
There's a pretty wide range there: for any particular neuron it's pretty constant, but neurons in different brain areas, or even two neurons next to each other, might have very different values. So there's that probability of vesicle release; that's an important measure. There's also the question of, well, how long is this neurotransmitter going to stick around? Because there are other mechanisms that are going to reabsorb this neurotransmitter, or it's just going to flow out of the space anyway.
How long is it going to stick around here? Because if it sticks around for a longer time, then it's going to have more influence on the next neuron. And there's also the question of how many of these receptors, neurotransmitter receptors, there are in the postsynaptic neuron. If there's one of them, then it's going to have a small voltage effect on the next neuron.
If there's more, eight, ten, 16, something like that seems to be sort of the upper end of what you will see at a synapse. There's going to be sort of a small finite number of these things. But if there's 16 of them, then it's going to have 16 times more influence than if there was just one of them.
So all of these things together are going to control how much influence one particular spike has on the postsynaptic neuron. And it's good for us to keep that in mind right now, because a lot of times, when we go over to the artificial neural network side of things, we're going to say, look, that's way too much complexity, let's just collapse all that together and turn it into one number; we'll call it the weight of how strongly this one neuron is connected to the next neuron. So that'll be a common thing, but it's also hiding a lot of stuff. All right, that's a synapse. A spike hits there, everything is released, and now of course the neuron has to recover a little bit.
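To make that collapse into a single weight concrete, here's a hedged sketch. The multiplicative form and every number below are illustrative assumptions, not measurements; it just folds release probability, how long the transmitter lingers, and receptor count into one number, which an artificial-neural-network style neuron then uses in a weighted sum.

```python
# Hedged sketch: collapse the synaptic factors discussed above into a single
# "weight". The multiplicative form and all constants are assumptions made
# for illustration only.

def synaptic_weight(p_release, lingering_factor, n_receptors,
                    mv_per_receptor=0.1):
    """One number summarizing how much one presynaptic spike moves the
    postsynaptic voltage (hypothetical units)."""
    return p_release * lingering_factor * n_receptors * mv_per_receptor

def ann_neuron(spike_counts, weights):
    """ANN-style neuron: total input is just the weighted sum of activity."""
    return sum(s * w for s, w in zip(spike_counts, weights))

w_strong = synaptic_weight(p_release=0.9, lingering_factor=1.0, n_receptors=16)
w_weak = synaptic_weight(p_release=0.2, lingering_factor=1.0, n_receptors=4)
print(w_strong, w_weak)
print(ann_neuron([3, 5], [w_strong, w_weak]))
```

So when computational models talk about a "connection weight," this whole biochemical cascade is what that one number is standing in for.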
And that's mostly about rebalancing ions and putting things back to how they were before. This period of time is called a refractory period. And what generally happens is there's going to be a time, while that neuron is in the process of spiking and recovering from that spike, during which input is not going to do anything to that neuron.
Whatever input is coming into that neuron right now is not going to cause it to spike again, because it's in the middle of spiking, or it's recovering from the spike; it can't make another spike right now. So it means there's going to be this period of time where the neuron is ignoring its input. So that's one aspect of what's going to happen here. There's also a little bit of a window during the recovery time where, yeah, you probably could still spike if you had a ridiculously strong input, but it's going to be really, really hard to spike during that little window in time.
That's called the relative refractory period; the absolute refractory period is the one people more commonly focus on. Either way, that's going to limit the behavior of our neuron.
One interesting thing it's going to do is make sure that these neurons, no matter what their input is, can't fire faster than a certain rate, just because they can't recover any faster than that.
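That ceiling follows directly from the arithmetic: if a neuron needs some minimum time between spikes, its firing rate is bounded by the reciprocal of that time. A quick sketch (the 2 ms figure below is an illustrative assumption, not a number from the lecture):

```python
# If the absolute refractory period is t_ref milliseconds, then even with
# arbitrarily strong input the neuron cannot emit spikes closer together
# than t_ref, so its firing rate is capped at 1000 / t_ref spikes per second.
def max_firing_rate(t_ref_ms):
    """Upper bound on firing rate (in Hz) given a refractory period in ms."""
    return 1000.0 / t_ref_ms

print(max_firing_rate(2.0))  # a 2 ms refractory period caps firing at 500 Hz
```

So no matter how hard you drive the neuron, the refractory period alone puts a hard upper limit on how fast it can fire.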
Yeah, so that sort of structure is going to be there, and it can definitely affect the behavior of the neurons. Everything's got to recover and do all of that. Cool. So this is a very complicated process, and there are lots and lots of things that could go wrong during it.
And they can go wrong in different ways, and it's starting to be possible to separate out the different ways that things go wrong.
First, there are problems with the neurotransmitter process. For instance, if you have oversensitivity to particular neurotransmitters, or other chemicals that are blocking the neurotransmitter systems from working, that seems to be where people are looking for treatments for things like ADHD, bipolar disorder, and schizophrenia. So those are really changes in how the brain processes information, because that's what we would tie to changes in the synaptic receptors. But then there are also much more extreme things, like neurotoxins. For instance, that picture in the middle is the pufferfish, the famously poisonous sushi fish. Its toxin, tetrodotoxin, is one of the more deadly poisons, and basically what it does is block the channels that are responsible for balancing sodium and potassium, sodium in particular for this one. And if that system is knocked out, then yeah, it's a serious problem: you just cannot process anything, and the brain shuts down.
Yeah, people do use it in experiments; you can shut down little tiny parts of the brain with it and see what happens. So that's a more extreme example. You can also have problems with things like the myelin sheath. If you damage the myelin sheath, over here, then the signal is going to travel much more slowly to the end of the axon, and it's going to be attenuated along the way, so we end up seeing weaker signals.
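To get a feel for why the sheath matters, here's a rough back-of-the-envelope sketch. The velocities below are ballpark textbook figures, not numbers from the lecture: myelinated axons conduct on the order of tens of metres per second, while unmyelinated ones can be around 1 m/s:

```python
def conduction_delay_ms(axon_length_m, velocity_m_per_s):
    """Time for a spike to travel the length of the axon, in milliseconds."""
    return 1000.0 * axon_length_m / velocity_m_per_s

# Illustrative ballpark numbers: a 0.5 m axon at a myelinated-like 50 m/s
# versus an unmyelinated-like 1 m/s.
print(conduction_delay_ms(0.5, 50.0))  # 10 ms
print(conduction_delay_ms(0.5, 1.0))   # 500 ms
```

Losing the myelin can turn a delay of milliseconds into one of hundreds of milliseconds, which gives a sense of why demyelinating conditions are so disruptive.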
So one condition that commonly gets tied to that is multiple sclerosis. There are lots of different things people can look at and tie down to this low-level biology.
And it's also worth pointing out, finally, that yes, there are a lot of complexities in what I've just shown, but it turns out there are pretty good mathematical models for it. The general way of doing this is the circuit model, where basically what you say is: look, that membrane that is keeping voltage separated between the inside and outside of the cell, yeah, that's a capacitor. And all of these gates that are opening and closing to control the flow of ions, yeah, those are just controllable resistors.
And input being dumped into the cell, okay, that's just a current source. So we build up an electrical circuit model, and that's the basis for making a computational model of the details of what's going on inside these neurons. Of course, it gets pretty complicated, because you want to make sure you're doing this in a way that takes the morphology of the cell into account.
So how do you map the spatial structure of the cell onto this sort of circuit? But this is going to form the core of computational models of what's really going on: we just treat the neuron as an electrical system, allow ions to flow in and out, and model that. Again, we will get into this in much more detail at that point in the course. All right, that was a lot of stuff. What's the core take-home out of all of that?
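As a minimal sketch of the circuit idea (much simpler than the full conductance-based models we'll see later, and with parameter values that are purely illustrative): treat the membrane as a capacitor C, the channels as a leak resistance R, and the input as a current I, giving C dV/dt = -(V - V_rest)/R + I, plus a threshold-and-reset rule standing in for the spike and recovery:

```python
# Minimal "membrane as RC circuit" sketch. Parameter values are arbitrary
# illustrations, not measurements.
C = 1.0          # membrane capacitance
R = 10.0         # leak resistance (the ion channels)
I = 2.0          # constant input current being dumped into the cell
V_rest = 0.0     # resting potential
V_thresh = 15.0  # threshold where the positive feedback would kick in
dt = 0.1         # simulation time step

V = V_rest
spike_times = []
for step in range(1000):
    # C dV/dt = -(V - V_rest)/R + I, integrated with Euler's method
    dV = (-(V - V_rest) / R + I) / C
    V += dV * dt
    if V >= V_thresh:                 # voltage hit threshold: count a spike...
        spike_times.append(step * dt)
        V = V_rest                    # ...and reset back toward equilibrium

print(len(spike_times))  # the cell fires repeatedly under constant input
```

With a steady-state voltage of I times R above threshold, the model charges up, fires, resets, and repeats, which is the basic integrate-and-fire picture this circuit analogy leads to.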
Neurons are extremely diverse. There's wild diversity in their structure, their responses, their behavior, and how they're connected to things. But there are some pretty common functional components. There are dendrites.
Those are the places where we're gathering input. Input is some sort of electrical signal that builds up voltage inside the cell. The soma is the cell body.
Once the voltage in the cell body builds up enough, that's going to start creating one of these spikes, which will go down the axon. At the end of the axon, there are going to be a bunch of synapses that connect to the next neurons. And it's the pattern of spikes that passes information, as opposed to the actual shape of individual spikes. That whole process is controlled by chemical and electrical gradients, with ion channels opening and closing based on the amounts of different chemicals, so different neurotransmitters, and based on the electrical gradient, the voltage. These spikes, also known as action potentials, are created by a positive feedback loop that kicks in once the voltage hits a certain level.
That's the threshold. And that's going to produce the spike. It's going to travel down the axon. And it's going to travel a lot better if the axon has electrical insulation on it, so if it's myelinated.
That spike is going to hit the end of the neuron. It's going to hit the synapse. That's going to release neurotransmitters. And then the neuron has to recover.
It goes into this refractory period to recover a little bit. So it goes back to equilibrium so it can go do it again. Cool.
For more details on that, see chapter two of Kandel et al. If you want even more, chapters five through eight have all sorts of fun further material on membranes and action potentials and channels and synapses. All sorts of really cool stuff in there. And I also want to start getting people thinking about their projects. In this course, there is a final project, and it's basically: take anything in this course and go do something with it. Go implement it, vary something, and see how it behaves. So I'm going to try, at the end of each lecture, to offer some ideas. For this lecture, since so much of it was about the fact that there are existing computational models of different neurons, taking those models and modifying them in different ways seems like an interesting family of projects.
For example: take a model of spike propagation and put it in a weird situation, say, I don't know, a really long, really thin axon. How does that change the behavior? Or take a look at the level of detail at which people have modeled the synaptic release process and say, okay, let's model it in more detail: does that change how the behavior works? Or maybe there'll be an existing model, one we'll see later in the course, of a simple little neural circuit, but the neurons in that circuit don't model the recovery period particularly well. So hey, what happens if you add in those recovery dynamics? How does that change the overall system? Those would all be interesting projects. Okay, that's all we've got for this lecture. Next lecture we're going to go on and say: okay, well, that was neurons. That was complicated enough.
Now we're going to take a bunch of neurons and connect them together into whole brains, and that's just going to add even more complexity to the whole system. So, looking forward to that, thank you, and we'll see you then.