Transcript for:
Neuroscience Network Models

Hi everyone. Unfortunately, as has probably been announced at this point, I either have COVID or it's in my household, which means I'm not actually able to be there in person. So I'm going to do my best to record this before I go. As you can imagine there's been a lot to get done in the last couple of days, so this is going to be a single-take version, and I apologise if it's a little bit on the rough side.

Okay, so welcome to the tutorial. We're going to talk about spiking neural network models in neuroscience. This has been my research topic since the start of my career in neuroscience, and it's particularly exciting at the moment, because in the last few years there have been some enormous developments that have really changed the way this topic is done.

Let's have a quick look at what we're going to learn about today. Part one is what I'm going to call classical spiking neural networks. That's basically the stuff that happened before the change I just mentioned, and it's there to give some background so we're all on the same page. Because I don't know exactly what everyone knows, and this is quite a diverse audience, I'll start off with a little bit of biology: what is an SNN? I'll talk about some neuron models, for example the leaky integrate-and-fire model and the Hodgkin-Huxley type model, and I'll give you a little tour of neuron dynamics. That, I think, is the thing that makes spiking neural networks interesting: the individual neurons have a rich set of possible internal dynamics, which potentially lets them carry out some interesting functions. As an example of that I'll talk about coincidence detection, and I'll show some code relating to that as well. Then there's going to be an exercise: I'm going to give you a network that can do sound localization using spiking neural networks and coincidence detection, and your task will be to improve on my sound localizer. We'll have a little competition, and the group that can build the best sound localizer, with the lowest error, will then maybe present their solution to the rest of the class, time permitting.

Then we'll move on to part two, which is the recent revolution in spiking neural networks: basically, applying machine learning techniques to them. I'll briefly introduce this by talking about how we do learning with spiking neural networks. Then, since as before I don't know everyone's background, I'll very briefly introduce some material about neural networks and gradient descent, and then I'll talk about this new development. There are various similar approaches; the one I'm going to talk about is called surrogate gradient descent. I'll talk about how to code that up in PyTorch, which is a Python machine learning framework, and we'll have a little surprise at the end as well. The exercise for that part is going to be to implement the sound localization network again, but this time using surrogate gradient descent rather than the hand-designed method we use in the first part.

Okay, a brief mention of the philosophy of this course: it's supposed to be quite practical.
Basically, after this course you should have an idea about how to start using SNNs. I'm not going to be able to tell you everything, but I will point you to some further resources in the material that I'll send out with this. I won't talk about much theory. There is a lot of really interesting theory about spiking neural networks, but the reason I'm not going to cover it is that it's well covered in lots of good textbooks already, and you can have a look at the reading list that I'll send out. That means I'm going to be missing out a lot, but in only a couple of hours that's all we can realistically manage. The other aspect of the course is that it's going to be computation oriented. What I mean is that the brain uses spiking neural networks to carry out computations, and there's a lot of previous research that focuses on the properties of spiking neurons, particularly some of the theory work. That is of course very important for understanding them, but what I want to get across in this course is how we actually use those properties of spiking neurons, and how they contribute to the brain doing useful computations. "Useful" can mean useful for the brain, but maybe it could also be useful for machine learning; I think it's still an open question whether it can be.

So why should you care about SNNs? Well, in neuroscience, spiking neurons are what the brain is composed of, and that's how it functions, so if we want to really understand the brain we do need to understand spiking neural networks. Another way of putting this is that they are the basis of the only known system we have for general intelligence, so perhaps that's important. I also think they're a really interesting intellectual challenge: you have a hybrid combination of continuous and discrete dynamics going on in spiking neural networks, and out of this, computation emerges. To me that's quite wild. It's quite unlike computers, it's quite unlike analog systems, and I find it a really interesting intellectual challenge to think about how that can produce these incredible computations. I also think there's a coming revolution, maybe it has already started, and this is what I'm going to talk about in the second half, so now is a good time to join and be part of it. In particular, I feel like there's a lot of low-hanging fruit to be picked, so it's a good time to get in on the ground floor. There's also interesting stuff going on in neuromorphic hardware, that is, computing devices designed to mimic the brain in some way, and one of the most interesting things about those is that they can have very low power consumption compared to traditional computing devices. Might there also be advantages of computing with SNNs purely as computational devices? I kind of think so; I think that's probably why the brain uses them, or one of the reasons, but I'd say that's still an open question, and it's interesting to be able to approach it. Some of my guesses: spiking lets the brain do very rapid decision making, so fast computations, because essentially on the basis of a single spike, or a single volley of spikes, you can already start to make your decisions, and then as more spikes come in you can update them. I also think that multiplexing may have something to do with it, basically the ability to multiplex multiple computations in the
same network; I think that's a potentially powerful thing that spiking neural networks can do.

Okay, so let's get on to talking about the classical spiking neural networks approach. What is a spiking neural network? Here's a cartoon picture of a couple of neurons. This thing here is a neuron. You have dendrites up here, which are the inputs of the neuron, that's where inputs come in. Here you have the soma, the cell body, and this is the axon down here, which connects to the dendrites of the next neuron, and so on. What happens is that a spike, or action potential, is initiated here, travels down the axon, jumps across the synapse here where it connects to another neuron, and potentially causes that neuron in turn to fire. In a crude sense, that's how the brain works: it's just the combined effect of many, many neurons doing this sort of thing.

Why do we call it a spike? Well, it's an electrical activity that travels along here, and there's a potential difference from the inside to the outside of the cell. If you record that potential difference you see a graph a little bit like this: you might have some sort of noise, and every now and again you see a sudden peak. Because they look like spikes on these plots, they're called spikes; it's as simple as that. Basically, each of these spikes is one of these action potentials. We can convert that into a spike train, which is basically just the times of these spikes. The interesting thing is that they're all or nothing: there either is a spike or there isn't. You can see that there are slight differences in the heights of these spikes, but those essentially aren't meaningful; they're probably just electrical noise in the recording.

So let's start with a simple model of this, and one of the simplest models you can have is the leaky integrate-and-fire neuron (actually there are simpler models, but let's not get into that right now). In this model you have a membrane potential v, and this membrane potential evolves according to the differential equation tau dv/dt = -v. You can easily solve this: it means exponential decay. So if the voltage is, say, 0.8 at time zero, then it decays down towards zero with a time constant of tau; in other words, after a time tau it has gone to 1/e of its starting value. When the neuron receives a spike, v instantly increases by some synaptic weight w. So here, at time 10 milliseconds there's an incoming spike, v jumps up and then starts to decay again. Here's how we write that: v maps to v + w when an input spike arrives. And if at some point the membrane potential v crosses a threshold value v_t, which in this case I've just set to 1, the neuron fires a spike and then resets. You can see that here: there are regular inputs coming in every five milliseconds. The first one isn't enough to bring it to the threshold, the second one isn't either, but the third one pushes it above the threshold, it fires a spike, and it resets to zero; that continues until the next input spike comes in, and then the same thing repeats. The red dashed lines are where it fires a spike, and the blue dotted lines are where an input spike comes in.

So why is it called a leaky integrate-and-fire neuron? The leak is this first term: essentially we have a current leaking out of the neuron. The integrate part is integrating the input: inputs come in and cause this increase in the membrane potential. And then you have the fire part: firing a spike.
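Just to collect the model in one place, here it is written as equations; nothing new, this is exactly what I just described in words:

```latex
\tau \frac{dv}{dt} = -v              % between input spikes: exponential decay towards 0
v \rightarrow v + w                  % when an input spike arrives through a synapse of weight w
\text{if } v > v_t:\ \text{emit a spike and set } v \rightarrow 0   % here the threshold is v_t = 1
```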
The firing part is interesting because it gives you nonlinear, discontinuous dynamics. Why is it nonlinear and discontinuous? Well, the spike only happens when v crosses a threshold, so you essentially have a function that jumps instantaneously from zero to one. That's what makes these neurons difficult to study, I guess, and also what makes their dynamics interesting and powerful.

Okay, so how do we simulate one of these leaky integrate-and-fire neurons? I'd like to hop over to a Jupyter notebook and show you some code for that. Here's one I made earlier; this is the first notebook, the LIF notebook, in the repository if you want to look at it. It has a little bit of text describing the mathematics, which we just talked about, so now let's talk about the implementation details: how do you implement this in code? We do something like this pseudocode. For each time step t, we update the value of v from the value it has at time t to the value it has at time t + dt, where dt is some small value, typically something like 0.1 milliseconds; it's the integration time step. We process any incoming spikes, that is, if there are any incoming spikes we increase v by the corresponding synaptic weight w. We check whether v has crossed the threshold, and if so we emit a spike, which then needs to be processed elsewhere, and reset the value of v. And that's it; we just repeat that over and over again. To update the value of v(t) to v(t + dt) we use the differential equation, which has the solution that the value at time t + dt is the value at time t multiplied by e^(-dt/tau). It's straightforward to solve this differential equation, and you can have a go at doing that yourself if you wish. We also notice that this quantity e^(-dt/tau) doesn't depend on the time t; it only depends on the time step and the time constant, and it also doesn't depend on the membrane potential v. So we calculate it once outside the loop, call it alpha, and then inside the loop all we need to do for the integration is multiply v(t) by alpha: v(t + dt) = alpha * v(t).

Okay, so here's how that looks in code. I'm doing this in Python, so I just import a few things, which we don't need to worry about; it's just some plotting stuff. Let me quickly show you what the output looks like first. It's an interactive plot, and you can move the spike times around: we have three input spike times, and as I move them around, the plot down here updates, a little slowly because I'm doing this on a fairly low-powered laptop. So how does that code work? We have the three incoming spike times t0, t1 and t2, the time constant in milliseconds, the synaptic weight w, and the threshold and reset values. We sort those times into reverse order, so that the earliest spike ends up last, because as we go we're going to pop spikes off the end of the list, so reverse order is what we want. We have a 100 millisecond duration, we set the time step dt to 0.1 milliseconds, we compute the alpha value, and we're going to record the membrane potential as we go. We initialize the membrane potential to zero, we compute the array of times that we're going to iterate through, and we're also going to record the output spikes into a list. Now, for each t in this array of times, we record a copy of the membrane potential at the beginning of the loop, then we multiply by alpha, as I said. Then we check: if there are any remaining spikes in the incoming spike list and the current time has passed the next incoming spike time, meaning one of the incoming spikes has just happened, we increase v by w and pop that spike time off the list. We record again; we wouldn't normally do this, I'm doing it here so that when you plot it you can see the membrane potential both before and after the incoming spike, it just makes it a bit easier to visualize, but it's not normally necessary to record v twice in a loop. If v has gone over the threshold we reset it and append the spike time to our output spikes list. And that's it; that is a very simple single-neuron simulation.
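Here's a rough, self-contained sketch of that loop. This isn't the exact notebook code (it records v only once per step, and the parameter values are just illustrative), but it's the same update scheme:

```python
import numpy as np

# illustrative parameters (the notebook lets you drag these around interactively)
tau = 20.0            # membrane time constant (ms)
w = 0.4               # synaptic weight
v_threshold = 1.0     # spike threshold
v_reset = 0.0         # reset value
dt = 0.1              # integration time step (ms)
duration = 100.0      # total simulated time (ms)

# input spike times (ms), reverse-sorted so we can pop() the next one off the end
incoming = sorted([10.0, 35.0, 40.0], reverse=True)

alpha = np.exp(-dt / tau)   # decay factor, computed once outside the loop
v = 0.0
v_rec = []                  # recorded membrane potential
out_spikes = []             # output spike times

for t in np.arange(0.0, duration, dt):
    v = alpha * v                          # exponential decay over one time step
    if incoming and t >= incoming[-1]:     # an input spike has just arrived
        v += w
        incoming.pop()
    if v > v_threshold:                    # threshold crossed: emit a spike and reset
        out_spikes.append(t)
        v = v_reset
    v_rec.append(v)

print(out_spikes)
```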
As I showed you, the notebook code also creates an interactive widget where you can scroll these spike times around and see the effect that has. And already you can see that this model has some fairly interesting properties. For example, you can see that the third spike, in this case, is the first one that is strong enough to bring the neuron to fire. Let's increase the weight a little bit, move the first spike earlier, and decrease the time constant a bit. Now, with quite a short time constant, six milliseconds in this case, you can see that these three spikes are never enough, even with the higher synaptic weight, to cause this neuron to fire. However, if I move one of these spikes closer to one of the others, you can see that as it gets closer the neuron does fire. That's because the neuron has been prepared by the first spike and is closer to threshold at the time the other spike comes in, so that one is enough to push it over the edge. This is the property we call coincidence detection: these leaky integrate-and-fire neurons respond more strongly, super-linearly, to coincident spikes, that is spikes arriving at similar times, than they do to spikes spread out in time. And that's a property we'll be making use of in a bit.

All right, let's get back to the slides. Now we can have a look at a slightly more complicated model, a two-dimensional leaky integrate-and-fire model. The model we looked at before was a one-dimensional leaky integrate-and-fire model: it only has one variable, v, and that isn't enough to capture all of the interesting dynamics of neurons. You typically need to go to at least two dimensions to capture a wide range of interesting neuron dynamics. In this case we're going to do a really simple one; it's not super biologically realistic, but it's enough to show the point that you can start getting more interesting dynamics this way. What we're going to do is add a dynamic threshold v_t. So now, rather than being a single constant, v_t is going to be something that changes over time; when the threshold is higher it's going to be harder to produce a spike than when the threshold is lower. The threshold dynamics are governed by a differential equation, much as v itself is.
In this case the threshold has its own time constant, and rather than decaying to the value 0 it decays to the value 1. So it starts at 1 (remember, the threshold starts at the value 1), and sometimes it will go up, but after it has gone up it starts decaying back down to 1 again over time; 1 is its natural resting state. We have a new spike threshold condition: instead of v_t being a constant, i.e. checking v > 1, we now check whether v is bigger than the dynamic variable v_t. And after a spike, as well as resetting v to zero, we also increase the threshold by some constant (small or large, as we like), delta v_t.

Let's have a quick look at what that looks like; something like this. The green line here is the threshold. As you can see, after two input spikes v hits the threshold, and the threshold jumps up. Then another two input spikes are still enough to get v above the threshold, and the threshold jumps up again. But now that the threshold is quite a lot higher, two incoming spikes are not enough to push v over it, and it takes a third spike. Then another three spikes are enough to push it over threshold, again and again, but eventually the threshold is so high that it actually takes four spikes before v is pushed over it. You can see that in the inter-spike intervals, the times between output spikes: at the beginning there are two five-millisecond blocks, so that's a 10 millisecond inter-spike interval, then it goes up to 15, 15 again, 15 again, and then it goes up to 20. So the time between output spikes is getting longer, and it's getting more and more difficult to drive this neuron. This is called adaptation. It's a very simple form of adaptation, and as I say it's biologically unrealistic; there are many other variants of adaptation. But it's interesting enough that you do see research papers that use this, just to introduce some simple adaptation dynamics. There's also what's called sub-threshold adaptation, that is, adaptation that happens even if an output spike isn't produced, and there are more interesting dynamics for the threshold change than just increasing it by a constant. It might be interesting to have a go at the exercise in that notebook, which basically says: can you modify your code to implement this dynamic threshold and produce a plot like this? I think we won't do that today; it's something you can do afterwards if you're interested.
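If you do want to try that afterwards, the change to the earlier loop is small. Here's a rough sketch, with made-up values for the new constants tau_t and delta_vt (it isn't tuned to reproduce the exact figure I showed):

```python
import numpy as np

tau = 20.0; tau_t = 100.0       # membrane and threshold time constants (ms)
w = 0.6; delta_vt = 0.3         # synaptic weight and threshold increment after each output spike
dt = 0.1; duration = 100.0      # time step and total duration (ms)

alpha = np.exp(-dt / tau)       # decay factor for v (decays towards 0)
alpha_t = np.exp(-dt / tau_t)   # decay factor for the threshold (decays towards 1)

v, vt = 0.0, 1.0
incoming = sorted(np.arange(5.0, duration, 5.0), reverse=True)  # regular input every 5 ms
out_spikes = []

for t in np.arange(0.0, duration, dt):
    v = alpha * v
    vt = 1.0 + alpha_t * (vt - 1.0)   # threshold relaxes back towards its resting value of 1
    if incoming and t >= incoming[-1]:
        v += w
        incoming.pop()
    if v > vt:                        # compare against the dynamic threshold
        out_spikes.append(t)
        v = 0.0
        vt += delta_vt                # adaptation: raise the threshold after each output spike

print(np.diff(out_spikes))            # inter-spike intervals should get longer over time
```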
Okay, I mentioned at the beginning that we would also talk about other, more complicated neuron models. I'm only going to talk about these very briefly, basically just the Hodgkin-Huxley model, which is in a way one of the models that started the whole field of computational neuroscience. You can combine currents from many different types of ion channels; each of those has an equation, you add them together, and that gives you a great big equation, something like this. I think these are the classic Hodgkin-Huxley equations from the original paper, and you can see that this thing is kind of hard to understand. It's not impossible, you can work with these things, but personally I prefer to use the reduced 1D or 2D LIF-style models. You can actually derive these reduced models in a mathematically principled way: typically you divide the variables into ones that vary slowly and ones that vary fast, then you take that separation to infinity, as it were, and these 2D models typically pop out, depending on exactly how you set up those assumptions. Interestingly, and this possibly doesn't surprise machine learning people, these reduced models actually often seem to fit the data better. There was a neuron modelling competition a while back (I entered it but didn't win): you were given some recordings of a neuron and you had to come up with a model that best fit that data, evaluated on data they held back. All of the models that did best were essentially these reduced models, and the more detailed Hodgkin-Huxley type models fit the data much less well. I think that's quite interesting, so I feel totally justified in using simple 1D or 2D models. Also, I can understand them, which I can't really say about these sorts of equations.

Now I'd like to do a quick whistle-stop tour of the world of neuron dynamics. I just want to give a flavour of some of the things that neurons can do that I haven't really shown you in these models so far. The first one is bursting. Here are some neural recordings of bursting behaviour: you can see that instead of firing one spike at a time, these neurons fire a little burst of spikes and then pause, then another little burst, then a pause, and so on; or they might fire a long burst where the inter-spike interval gets progressively longer, and there are many other types of bursting dynamics; it's a very rich set. There's a whole bunch of different mechanisms underlying this, different types of bursting with different underlying mechanisms, but typically it's something to do with the interaction of slow and fast dynamics, a bit like what I mentioned for reducing neuron models to 2D. And what role might bursts have? Many things have been suggested: they may be more reliable, they may have a better signal-to-noise ratio than single spikes, and they might be involved in multiplexing signals, because you could have information carried in the burst rate that is different from the information carried in the spike rate. One interesting recent paper suggested that bursting could be a way to backpropagate error signals, and there's actually a paper from Rui Costa's group at this conference, I think it's a poster, that develops this theme, so do go and check that out.

You can also look at the way neurons respond to an input current; this is the distinction between type 1 and type 2 neurons. Type 1 neurons, as you increase the input current, do nothing for a while and then start to increase their firing rate, starting from zero, whereas type 2 neurons do nothing for a while until they suddenly jump to some non-zero firing rate. The leaky integrate-and-fire neuron that we've seen is type 2: basically, if the input current isn't enough to push it over threshold it won't fire, but as soon as it is enough, it starts firing regularly. Again, there are multiple biological mechanisms underlying this. I won't get into it here, but you can analyse these behaviours in terms of their bifurcation type; there's a lot of mathematical work on this.
You can classify all the different bifurcation types based on the dynamics of the neuron model, and you can get very deep into that if you want to. Possible roles? Again, there are several. One interesting one is that these types might correspond to whether neurons are integrators or resonators. An integrator just integrates its inputs, basically summing whatever its inputs are. A resonator responds particularly strongly to inputs that themselves have some preferred frequency. Here you can see the stimulation frequency on the x-axis, and in the case of the resonator there is a preferred stimulation frequency. You can see that it could be useful to have neurons with different curves here.

And then there's the whole world of spatial structure. Here are just a few example neurons that I plotted from neuromorpho.org, which is a big repository of detailed 3D reconstructions of neurons. There are lots of mechanisms involved: you have active versus passive dendrites, so the way I've talked about it, it's as if the dendrites just collect inputs, but actually they can be actively involved and do computations themselves, and there's an effect of where on the dendrites the input arrives. There's a huge number of possible roles, too many to list, and it's still a very active topic. One particular result I wanted to mention, just to give an idea of how important the spatial structure can be, is a recent paper showing that a single neuron with a complex dendritic structure can solve MNIST, the machine learning benchmark, on its own. That gives you some idea of how much computational power you can potentially get from these spatial structures.

Okay, so now I want to talk about coincidence detection and lead into the exercise, so let's take a look at some code. I'm moving to notebook 2, which is the coincidence detection notebook. In this notebook I'm going to use the Brian spiking neural network simulator package. I designed this package for simulating spiking neural networks, and I would definitely recommend that you use it; that's a totally unbiased opinion, it's really very good.

So, sound localization. For those who don't know, we can detect which direction a sound is coming from, and one of the cues we use to do that is the arrival time difference between the two ears. Here you can see that if sound is coming in from this direction, it arrives at the right ear before it arrives at the left ear: the signal gets to this point, and then it has to travel this extra distance before it reaches the left ear, and that extra distance depends on the angle theta. That arrival time difference is called the interaural time difference (ITD). For the purposes of this notebook the signals are going to be sine waves, so there isn't necessarily a well-defined time difference, because these are periodic signals: a given time difference is indistinguishable from that time difference plus one period, or two periods, and so on. So there's also the concept of an interaural phase difference (IPD), which is well defined for a sine wave input, and that's basically the difference between the phases of the signal simultaneously arriving at the right ear and the left ear.

Okay, so there's a very classical model, I think going back to the 1940s, from Lloyd Jeffress, of how the brain might infer the ITD, and the idea is that it compensates for it with multiple
neural delay lines. Imagine you have some signal arriving at the left and the right ear, arriving a little bit later at one than at the other. It travels along these axons at a constant speed, and that means the signal from the left ear gets to this neuron first, then this one, then this one, and so on, whereas it's the opposite for the right ear: the signal from the right ear gets first to this one, then this one, then this one, and last to this one. In the middle, they arrive at the same time: the signal from the left gets to this neuron at the same time as the signal arriving from the right. Now, if these neurons are coincidence detectors, they're going to fire at a higher rate when they receive coincident input. So if the sound was right in front of you, so that there was no interaural time or phase difference, then this middle neuron would be the most active and the outer neurons would be the least active. Whereas if the sound arrived first at the right ear and later at the left ear, it might be this neuron that was most active, because the extra time the signal spends travelling along the delay line compensates for the acoustic delay between the two ears. The idea is that by looking at the activity pattern of these coincidence detectors, you can then infer what the arrival time difference of the sound was.

So how do we code that up? Let's quickly have a look. Like I said, this is going to use Brian. We've got some standard importing stuff here that I won't talk about. Let's look at the input signal; I'll just quickly show you the output of that... that's not working, let's try that again... here we go, good. So this is what the input signal will look like: the blue is the left ear and the orange is the right ear, and we're going to convert that into spike trains at this firing rate. We're going to do that with Poisson-distributed spike times. Basically what that means is that it's a Poisson process at a firing rate proportional to the height of this curve, so it tends to fire more spikes here than over here, where the value of the blue curve is lower.

So how do we implement that in Brian? It's fairly straightforward. We have a theta variable, which is the phase, so it depends on the time and on the frequency of the sine wave: the phase is 2*pi*f*t for a time t. For the left ear, that's i equals 0, it's just that phase, and for the right ear, i equals 1, you add the interaural phase difference to it. This is Brian syntax rather than plain Python syntax, but it's fairly standard mathematical notation; the only thing you really need to know is that the ": 1" here and ": Hz" here give the units of the quantity being defined, so theta is dimensionless and rate has dimensions of one over time, that is, hertz. The firing rate is then the maximum firing rate times a half times (1 + sin(theta)); that just makes sure the firing rate is always positive, because you can't fire spikes with a negative firing rate. Now we create a group of two neurons, a NeuronGroup of size 2, with those equations, where the threshold condition is this expression: you take a random number uniformly distributed between zero and one and check whether it is less than the rate (defined up here) times the time step dt. If you think about it, that approximates a Poisson process: given that you can't have more than one spike per time step, which you can't in these simulations, this is an approximation of the probability of having one spike in a given time step of width dt under a Poisson process. It's a very standard, common assumption in these sorts of simulations. Then we record some of these values, run the simulation, and plot it. So that gives you a rough idea of how that part works.
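To give a flavour of what that looks like in code, here's a simplified sketch in Brian 2 syntax. It's not the exact notebook cell: the values of f, max_rate and ipd are just illustrative, and I've used an explicit per-neuron phase_offset parameter rather than the neuron-index trick the notebook uses:

```python
from brian2 import *

# illustrative values; the real notebook exposes these as parameters
f = 50*Hz           # sound frequency
max_rate = 500*Hz   # peak firing rate of the Poisson input
ipd = 0.9           # true interaural phase difference (radians) that we will try to recover

eqs_ears = '''
theta = 2*pi*f*t + phase_offset : 1          # phase of the sine wave at this ear
rate = max_rate*0.5*(1 + sin(theta)) : Hz    # sinusoidally modulated firing rate (never negative)
phase_offset : 1                             # 0 for the left ear, ipd for the right ear
'''

# rand() < rate*dt is the usual one-spike-per-timestep approximation of a Poisson process
ears = NeuronGroup(2, eqs_ears, threshold='rand() < rate*dt')
ears.phase_offset = [0, ipd]

ear_spikes = SpikeMonitor(ears)
run(100*ms)
```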
Now we're going to set up the coincidence detectors. We're going to have n neurons, and their best delays are going to be equally distributed between zero and a maximum ITD value. That maximum ITD is going to be one over f, because it's meaningless to go beyond that: if you've got a sine wave of frequency f, beyond one period it becomes impossible to resolve the ambiguity. The coincidence detector neurons are going to be standard leaky integrate-and-fire neurons, as we've seen before, but we're also going to store a copy of each neuron's best IPD and best ITD, and we're going to create synapses from the ear neurons to the coincidence detector neurons that depend on those values. We're going to use a small time constant tau to get a strong coincidence detection effect; remember, when the time constant tau is small, the membrane potential decays very rapidly back to zero, so the neuron requires nearly simultaneous inputs before it will produce a spike.

So how do we do that? There's a slight complication here to do with the signal being on, but don't worry about that for the moment; this part is basically as it was before. We create a standard leaky integrate-and-fire neuron, remember it has dv/dt = -v/tau, and we also store for each of these neurons a best IPD, which varies from 0 to 2*pi (that's what this line does), and the corresponding best ITD, which is just that divided by 2*pi*f. We create n of those neurons following those equations; the threshold is v greater than 1, no adaptation here, and the reset is v equals 0. Then we create synapses from the ear neurons to the coincidence detector neurons: when there is a presynaptic spike, in other words when one of the ear neurons fires, this condition triggers and v in the postsynaptic neuron is increased by a value w. We connect each ear neuron to each postsynaptic neuron, that's what that line does, and we introduce some synaptic delays. All delays are zero by default, so for the synapses that come from the right ear we set the delay to the best ITD of the postsynaptic neuron, which we calculated up here.
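Again as a rough sketch rather than the exact notebook code, here's the whole little network in Brian 2 syntax. The ear group from the previous sketch is repeated so this runs on its own, and the parameter values (n, tau, w and so on) are just illustrative:

```python
from brian2 import *

# same ear group as in the previous sketch, repeated so this cell is self-contained
f = 50*Hz; max_rate = 500*Hz; ipd = 0.9
eqs_ears = '''
theta = 2*pi*f*t + phase_offset : 1
rate = max_rate*0.5*(1 + sin(theta)) : Hz
phase_offset : 1
'''
ears = NeuronGroup(2, eqs_ears, threshold='rand() < rate*dt')
ears.phase_offset = [0, ipd]

# coincidence detectors: standard LIF with a short time constant,
# each storing the IPD/ITD it is tuned to
n = 100
tau = 1*ms          # short tau -> strong coincidence detection
w = 0.5             # synaptic weight (illustrative)
eqs_cd = '''
dv/dt = -v/tau : 1
best_ipd : 1        # a stored constant per neuron, not a dynamical variable
best_itd : second
'''
cd = NeuronGroup(n, eqs_cd, threshold='v > 1', reset='v = 0', method='exact')
cd.best_ipd = 'i*2*pi/(n-1)'         # equally spaced between 0 and 2*pi
cd.best_itd = 'best_ipd/(2*pi*f)'    # the corresponding ITD

# every ear neuron connects to every detector; each incoming spike bumps v by w
syn = Synapses(ears, cd, on_pre='v += w')
syn.connect()
# delay-line compensation: only spikes from the right ear (presynaptic index 1)
# are delayed by the postsynaptic detector's best ITD
syn.delay['i == 1'] = 'best_itd_post'

spikes = SpikeMonitor(cd)
run(100*ms)
print(spikes.count[:])   # spike count per detector; the most active ones indicate the IPD
```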
Now we record the spikes, run the simulation, and the estimate we give of the IPD is based on which neuron has the maximum spike count. First we calculate the maximum spike count, then we find the set of all neurons that have that spike count (because there might be more than one neuron with the same count), and our estimated IPD is the mean of the best IPDs of all the neurons with that maximum spike count. This is a very simple approach. We run that cell, and after a few moments we get something like this. Here is the IPD in degrees; each of the coincidence detector neurons is a position on this x-axis, this is their best IPD, and the spike count of each neuron is on the y-axis. The model was presented with the true interaural phase difference at this blue line, and because this neuron had the highest firing rate it guessed this red line as the true IPD. You can see it did quite well, but with a little bit of error. I won't do it now, because I suspect I'm running over a bit already, but you can change these parameters and get a feel for what governs whether or not this works.

Okay, so we can also evaluate the performance of this thing. In this code, which I won't talk through in much detail, we just run it once for every IPD from 0 to 360 degrees, in steps of 10 degrees, calculate the estimated IPD and the error as a function of the true IPD, plot that, and compute the mean error. I've separated this out into different functions because the exercise is going to reuse this code and try to improve on what I did. The first bit generates the input signal; after you run the simulation it returns the indices of the neurons that spiked and the corresponding spike times, and that gets fed into the localization network, which contains the coincidence detectors; the rest of the code is otherwise as before. Then, as I say, we generate all the different IPDs from 0 to 360 in steps of 10, and we return the IPDs and the estimates of those IPDs; that takes a minute or so to run. Now we compute the errors. We have to be a little bit careful here, because we need to take the circularity into account: if the answer is one degree and the estimate is 359 degrees, you wouldn't want that to count as an error of 358 degrees but of just two degrees, so that's taken into account (there's a small sketch of one way to do this just below). Then we plot the results and we get something like this: true IPD on the x-axis and estimated IPD on the y-axis; the correct answer is on the diagonal, and this is what the network gave. You can see it's doing not too badly; in this case the mean error it gives is about 18 degrees.
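As a small illustration of those two steps, here's one way you might write the estimate and the wrap-around error in plain NumPy. The function names here are made up for the example, not the ones in the notebook:

```python
import numpy as np

def estimate_ipd_deg(spike_counts, best_ipd_deg):
    """Estimate the IPD (degrees) as the mean preferred IPD of the most active detectors."""
    best = spike_counts.max()
    winners = best_ipd_deg[spike_counts == best]   # there may be ties
    return winners.mean()

def circular_error_deg(true_ipd_deg, est_ipd_deg):
    """Absolute error in degrees, taking the wrap-around at 360 into account
    (an estimate of 359 for a true IPD of 1 counts as 2 degrees, not 358)."""
    d = np.abs(np.asarray(true_ipd_deg) - np.asarray(est_ipd_deg)) % 360
    return np.minimum(d, 360 - d)

# e.g. circular_error_deg(1, 359) -> 2.0
```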
Okay, I'm going to stop here for part one. The exercise is: can you do better than this network? The restrictions are: limit yourself to only 100 neurons, and use the generate-input-signal function from above to generate the input data, so that you're all working on the same input data, but otherwise feel free to do whatever you like. Some starter ideas: you could optimize the parameters tau and w; I didn't attempt to do that, I just picked some values that happened to work reasonably well. You might use a different neuron model; maybe adding adaptation would help here, maybe it wouldn't, you could try it out and see. Or you could try a different method to estimate the IPD from the set of coincidence detector spike counts; remember, going from this picture to this value is the estimate, and maybe doing something different there gives you a better result. Later on you might be interested in our paper from 2013 where we did some of this, but I wouldn't go and look at that right now.

In terms of the organization of this part: I'm hoping, although I haven't seen the physical space as I'm recording this, that you're divided up into tables, but you may be in a lecture theatre, I'm not entirely sure. If you're at tables, there should be one teaching assistant per table to help you; otherwise, there will be approximately one teaching assistant per ten of you who should be able to advise you. I would suggest that you divide yourselves into pairs and work on this in pairs, because it's an efficient way of programming, and if one of you knows something the other doesn't, you can share expertise in a useful way. But do feel free to ask questions of the people around you; I think it's a much better way of learning, and if you are at tables then doing that at the table level is a really good way of working too. Okay, I'm going to leave my recording there; someone may give you some extra advice on how to do this after the video is finished. That's all from me for now, and I'll see you in part two a bit later.