Hi everyone, and welcome to the first in a series of screencasts to get you started with fMRI analysis in SPM. These slides and the data will all be in a folder in the common fMRI analysis sessions area, so you can refer back to any of these things and to the descriptions of the data. In this screencast I'm going to go through the six pre-processing steps.

You get three different types of data from the scanner when you do a functional imaging study. You've got your T1 structural data: that's your high-resolution structural data, where the images you see are quite detailed, around 1 millimeter resolution, and you can see a lot of the anatomical landmarks. You've got your functional data: this is lower resolution than the structural data, because it's acquired very quickly over time while your subject is doing the task. And finally, the other type of data you collect is a field map, which records the inhomogeneities in the magnetic field; I'll come to that in the next step.

In order to pre-process your data, you will need some information about your scanner sequence. You can get that from the scanner sequence itself, from a PDF printed from the scanner, or you can look at the DICOM headers, which is one of the first things I'll show you how to do. From there you can get the scanner parameters you'll need to enter into SPM to pre-process.

OK, so the first step of pre-processing is to unwarp and realign your data. Because of the inhomogeneities in the magnetic field, which are caused by things like the air pockets in the sinuses, for example here, the data will not be reconstructed in the correct fashion. We can see here that some of the data has actually been squashed, or that we are maybe not capturing the data from the front of the brain. Some of that data is lost irrecoverably, but some of it has simply been moved to the wrong place because of these sinuses. So when we apply this field map to the data
it will actually put the data back into the correct position. It doesn't restore lost data, but it does put back the data that had been moved to the wrong position, so after a field map has been applied you can see a lot more data in the frontal region.

Next up is slice-timing correction. Your functional images are acquired in slices; each set of slices is acquired over one TR of, say, two or three seconds, which is the time it takes to acquire one functional brain volume. These slices can be sequential, they can be interleaved in different orders, or, as I'll explain with multiband sequences later, many slices can be acquired at the same time. The point is that they are acquired at slightly different times within that TR, which could be anywhere between half a second and three seconds or even longer. So if, for example, this blob of brain activation was acquired at three different time points, it is a good idea to slice-time correct your data to take account of that.

The next step is co-registration. That puts your functional data, the lower-resolution data that's acquired over time, and your structural data, the high-resolution T1 structural image, into the same space. Here's an example: the red outline is your structural image and the gray brain image is your functional data. On the top you can see they're nicely aligned: the sulci match up and the anatomical landmarks match up. On the bottom we see an example of a very poor co-registration, where the functional data and the structural data are not in the same space. You can see how that would be a problem when you collect a number of brains and then overlay them on each other: you might not
be comparing like-for-like, and similarly you might not be able to localize activity to the right place. So we co-register the functional and structural data.

The next step is segmentation. This is us trying to work out how our detailed, high-resolution T1 structural image compares to a standard brain template. To do this we use tissue probability maps: for our structural image we've got the gray matter tissue probability map, the white matter tissue probability map, and the cerebrospinal fluid tissue probability map. The idea is that we use these tissue probability maps to move our structural image into a standard space, to match a standard brain template that's based on thousands of brains. We then use the deformation fields that come out of that movement of our structural image into standard space to move our functional data as well: we take the warps that were applied to our structural image to get it into standard space, apply those to our functional data, and our functional data should then align to standard space. I'll show you how to check that at the end.

To access Jupiter you use PuTTY, an application which allows you to connect to Jupiter and therefore to use MATLAB, which you might not already have on your computer. If you have got MATLAB on your computer, with your own MATLAB license, you don't need this step, but I think most people will need to access Jupiter. So, in the PuTTY configuration window, you set up PuTTY to access Jupiter: you type in jupiter.mclean.harvard.edu here as your host name, and you'll see SSH is selected by default, which is correct. Then, in order for you to be able to visualize this, you need to set up X11 forwarding: you go to Connection, SSH, X11, and
enable X11 forwarding, which allows your connection to Jupiter to be visualized on your screen. Going back to our session, we're looking at jupiter.mclean.harvard.edu, and you can also save this, so that in future you can just click your saved session. If you click Open, it asks you to log in with your Jupiter credentials, which are different to your Partners login. So that's how you log into Jupiter.

We want to launch MATLAB from this command line, but to do that we first need to open the application that goes with the X11 forwarding we enabled: Xming, which should also already be installed. If you just click Xming it will be running, and if you check in your tray you'll see Xming running there. Xming is the thing that allows you to type matlab, press Enter, and have MATLAB come up on your screen. Sometimes the window doesn't fill your whole screen, so just maximize it.

This is your MATLAB environment: you've got your folder structure; your workspace, where any variables you define will appear; and your command history, the past commands that you've run. The first thing you need to do when you open MATLAB is to set your path. What I'm going to do here is navigate to our common area; it's a little bit fiddly, one level up from users on Jupiter, and then into the fMRI analysis sessions folder. That just means that in our MATLAB area we will easily be able to reach all the files we're using.

So, before we start doing any pre-processing of our data, I'd like to show you how to extract your scanner parameters from your DICOMs. We navigate to the DICOM folder: cd just stands for change directory, and if you press Tab it autocompletes the folder name. That puts us in the DICOM area. Change directory again, and that puts us in the folder with our DICOM files.
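The navigation just described can be done with a couple of commands; a minimal sketch, where the shared-area path is a placeholder for wherever the common fMRI analysis sessions folder lives on your system:

```matlab
% Navigate MATLAB to the shared analysis area, then into the DICOM folder.
% Pressing Tab in the Command Window autocompletes folder and file names.
cd('/path/to/fmri_analysis_sessions');   % placeholder: your common area
cd('dicoms');                            % placeholder: your DICOM folder
pwd                                      % prints the current folder to confirm where you are
```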
These are our DICOM files, downloaded from Osiris, and this is how the data comes off the scanner. If you read these, you can see in the header the information that you need to pre-process your data.

So I'm going to read one of these DICOMs. How you do this: you can assign it to whatever variable you want to call it, and dicominfo is the command; then you put in the name of the DICOM file that you want to read. You will know which file from where you downloaded it; for example, if you go to the EPI 1 folder, you'll know that you can read headers relevant to your EPI data from there. I'm going to read a header from one of my functional data files, so I type that in. What comes up is a load of information, in this variable called info. If you scroll along here, it's not the most user-friendly format, but it's standardized across the different scanning modalities. Some of the DICOM header names are not that obvious, but for this example: the number of slices is 66; here is the time at which each slice was acquired, 66-by-1, so a variable with 66 entries in it; the slice thickness is two and a half millimeters, so that's each brain slice; and the repetition time was 720 milliseconds, so this was a very fast sequence, with each brain volume acquired in less than one second. This is the kind of information you can get from a DICOM header. For example, if we open the info structure that we have just defined, we can find the variable that holds our slice timing, with its 66 values.
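The header-reading step looks like this in MATLAB. The filename below is a placeholder, and the Private_* tag shown is Siemens-specific; on other scanners the slice-timing information may live under a different field name, so inspect your own info structure:

```matlab
% Read scanner parameters from a DICOM header (substitute one of your
% own functional DICOM files for the placeholder filename).
info = dicominfo('MR.0001.dcm');

thick_mm   = info.SliceThickness;      % slice thickness, e.g. 2.5 mm
tr_ms      = info.RepetitionTime;      % repetition time, e.g. 720 ms
sliceTimes = info.Private_0019_1029;   % Siemens private tag: per-slice acquisition times (ms)
nslices    = numel(sliceTimes);        % 66 for this sequence
```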
If we double-click that slice-timing variable, you can see that these are the slice-timing parameters in milliseconds. We can copy and paste those into the SPM slice-timing correction module. So that's an example of how you get your information from your DICOMs.

Moving on, I'm going to launch SPM. SPM is a free download online; once you've downloaded it and added it to your path, you can launch it by typing spm at the command line. In SPM you get these different windows: this is your visualization window, and you'll see as we do analyses that images will pop up here; this is kind of like a progress bar; and this is our menu, where we can find most bits of functionality.

I'm going to start off with pre-processing using the batch processor, so I click on Batch. This is useful because you can add sequential modules to this batch, then save it, load it, and turn it into scripts, which we will come to; but for now I'm just going to show you how you would do the six pre-processing steps I talked about.

OK, so if you remember, we talked about our field map, and our first step is to process it. To do that we go SPM, Tools, FieldMap, Calculate VDM; that's our voxel displacement map. You select the module and it comes up here in your module list, and this window is where all the options lie for you to fill in the bits of data needed to carry out the module. When you click on each individual field, the options come up below, which you can click and specify, and some information also comes up in this area. That's how you navigate through.

OK, so for the first subject we want to use our field map, and here it's going to be pre-subtracted phase and magnitude data,
which is how the data comes off the scanner, at least the Prisma scanner at McLean, so we select that. You get different images off the scanner: your phase image, which tells you the difference between the two images that are acquired to calculate the field map, and your magnitude image. So we click Specify and go over to the DICOM area; I'm going to use the copy 5 field map. In this selection area you select your folders on this side, and the files that you could select come up on the other, or the folders, if the field is asking for a folder. Because this field is looking for a file, it's only going to show files in this area: you go into the field map folder, and this is asking for the phase image, which it says here, so we select phase; done. Then our magnitude image: we click Specify, same thing; and in this selection pane it's quite useful that you can click Previous up here and go quickly back to where you were before. So that's our magnitude image.

Then, under Field map defaults, you select Default values, because we're going to enter the values directly. These default values are information about your field map, which you get from your DICOM headers, and which allow SPM to calculate how to apply the correction. First, the two echo times, which you will have got from your DICOM header; I took these already, so I'm just going to write them in here: our short echo time, and 12.46, our longer one. Those were the echo times at which the two images were acquired. We will select to mask the brain. Our blip direction is also from the DICOM header, and indicates the direction in which the data were acquired, for example posterior to anterior. Then the total EPI readout time. Now, this is something that you need to
calculate, and it is detailed in your pipeline; all of these commands are actually detailed in the pipeline. Our total EPI readout time is something we calculate from information in our DICOM headers, and I have calculated that to be 29.7; I'll show you how to do that in the pipeline. This is a non-EPI-based field map, so we select non-EPI here. I think a lot of the defaults are fine, but you do need to select your EPI for unwarping: this is selecting the first image of your functional data, to say this is the data that we're going to unwarp with this field map. So we go to our functional data, and we see we've got T1 repeat and run 1: our T1 is our structural image and run 1 is our functional data. If we want to filter the list, we go into this filter box, type run, and press Enter, so only the run data shows. Here we enter which frames to include, and for this we only want the first frame; you can see it's AAC run 1, frame 1, so we select that. For Match VDM to EPI we say yes. Then there are a few additional quality-control options you can choose: I think we might as well write the unwarped EPI, and match the anatomical image to the EPI. These are just quality-control things we can add, but we might as well say yes to them.

OK, so once we've filled all of those in, you can see that we've got our green button, which would allow us to run that module. However, I want to add all the rest of the modules of the pre-processing pipeline so that we can run them all together. Now that we have done our field map preparation, we can start adding the other modules. Next we want the Realign & Unwarp module, so that's SPM, Spatial, Realign & Unwarp, and you see it appears below.
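As an aside, the Calculate VDM module we just filled in, and the Realign & Unwarp module we are about to fill in, can also be written as a script. This is only a sketch: all paths are placeholders, the short echo time is whatever your DICOM header says, and the exact batch field names can vary by SPM version, so confirm them by saving your own batch with File, Save Batch and Script:

```matlab
% Calculate VDM (field names per the SPM12 batch system; verify on your version).
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.data.presubphasemag.phase = ...
    {'/path/to/fieldmap/phase.nii,1'};                 % placeholder path
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.data.presubphasemag.magnitude = ...
    {'/path/to/fieldmap/magnitude.nii,1'};             % placeholder path
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.et = ...
    [short_TE 12.46];                                  % [short long] echo times (ms); fill in short_TE from your header
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.maskbrain = 1;
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.blipdir = -1;  % phase-encode (blip) direction; check your sequence
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.tert = 29.7;   % total EPI readout time (ms)
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.defaults.defaultsval.epifm = 0;     % non-EPI-based field map
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.session.epi = ...
    {'/path/to/func/run1.nii,1'};                      % first frame of run 1, for unwarping
matlabbatch{1}.spm.tools.fieldmap.calculatevdm.subj.matchvdm = 1;                       % match VDM to EPI

% Realign & Unwarp: all functional volumes (612 here), unwarped with the VDM.
scans = cellstr(spm_select('ExtFPList', '/path/to/func', '^run1.*\.nii$', Inf));
matlabbatch{2}.spm.spatial.realignunwarp.data(1).scans  = scans;
matlabbatch{2}.spm.spatial.realignunwarp.data(1).pmscan = ...
    {'/path/to/fieldmap/vdm5_phase.nii'};              % the vdm5_* file written by the first module
```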
Under Data, we add a new session, and the new session appears here. Now, the images: these are the functional images that we want to realign to each other and then unwarp using our field map. So we click Specify on the images, and we want to specify our functional data: all of our functional images, so only things with run in them, but all of the frames. If you type Inf into the frames box it will show you all of the frames, every individual volume that was acquired. Right-click here and Select All, and then you've got all of your functional volumes: 612 files in there. Now, our pre-calculated phase map is the thing that we just prepared in our previous step, so rather than specifying a file we can put a dependency on a previous stage: here we see the voxel displacement map from the previous module is what is going to feed in there. This will automatically align all of the functional data to the first image that was acquired, so if there was any movement it will try to reduce it, and it will use the unwarping to put the frontal signal back in the correct place.

OK, so the next step is the slice-timing correction: we go SPM, Temporal, Slice Timing. We need to select our data, a new session, and then we need to put a dependency on our realigned images: we select Dependency and choose Realign & Unwarp: Unwarped Images, so that each sequential step of the pre-processing is fed into the next step. OK, and now we need to put in the information from our DICOM headers about the slice timing and how it occurred. We've got our number of slices; I'm going to specify that, and it was 66, which we got from the DICOM header: the number of slices in each volume. And then our TR, which, remember, was seven hundred
and twenty milliseconds, so 0.72, because this field is in seconds. Our TA is the time at which the last slice of each volume was acquired, and how you work that out is TR minus TR divided by the number of slices, so 0.72 minus 0.72 over 66: that gives you the time at which the last slice is recorded.

Now, in terms of our slice order: you can have some simple orders where the slices are ascending or descending, so they go from the bottom or from the top, or they can be interleaved; but with more complicated sequences, such as multiband sequences, which acquire multiple slices at the same time, you have a much more complicated slice order. That was something we saw in the DICOM headers: instead of a slice order we can specify the individual times at which each slice was acquired. We could copy and paste these from MATLAB, although MATLAB might not let us because it's running SPM already; luckily, I have them here in our pipeline. These are the individual slice times; the values are in there, and you can see we've got 66 of them, so we've got the time in milliseconds of each slice.

For these multiband sequences, think about the reference slice. If you are going from bottom to top, your reference slice would be your bottom slice; if you're going from top to bottom, it would be your top slice. But when you're acquiring multiple slices at the same time, you really want to pick a slice that is nearest in time, roughly equidistant, to most of the slices, so for this one you would pick the middle of the acquisition, which here is a slice that occurs at three hundred and thirty milliseconds. So that's our slice timing.

After the slice timing, we want to do our co-registration step: SPM, Spatial, Coregister, which brings our functional and our structural data into the same space.
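The TA formula and the slice-timing settings we just entered can be sketched in script form. Here, scans and sliceTimes are assumed variables: the cell array of input images and the 66-element vector of per-slice times in milliseconds copied from the DICOM header:

```matlab
% Slice-timing parameters for this sequence. When per-slice times (ms) are
% supplied in .so, SPM ignores TA, but the classic formula is shown anyway.
TR      = 0.72;             % repetition time in seconds
nslices = 66;
TA      = TR - TR/nslices;  % time of the last slice: 0.72 - 0.72/66

matlabbatch{1}.spm.temporal.st.scans    = {scans};     % unwarped images from the previous step
matlabbatch{1}.spm.temporal.st.nslices  = nslices;
matlabbatch{1}.spm.temporal.st.tr       = TR;
matlabbatch{1}.spm.temporal.st.ta       = TA;          % ignored when .so holds times in ms
matlabbatch{1}.spm.temporal.st.so       = sliceTimes;  % 66x1 vector of slice times (ms)
matlabbatch{1}.spm.temporal.st.refslice = 330;         % reference time (ms): middle of the acquisition
matlabbatch{1}.spm.temporal.st.prefix   = 'a';
```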
In terms of our reference image, we want to use the mean image from our realignment stage. That takes the mean image, the average functional image, which is the one that makes the most sense to co-register to. Then our source image is our structural image, so we click Specify, go to our run, and select our T1 repeat; done. So we have selected the mean unwarped image of the functional data and our structural image, and we want to co-register them, using Estimate to estimate what the co-registration should be.

Note that after this step the T1 image will actually be overwritten without changing its name, so you need to be careful to have a separate T1 image in there for each run of your task; otherwise they will overwrite each other, and you'll be using one that's in the wrong space to co-register with the next functional dataset you analyze. That's just a pitfall to avoid.

For the next step we go SPM, Spatial, Segment. This is applying our tissue probability maps, so that we can later normalize our data. Here we take the output of our co-registration; that's what we're going to segment, so we select it under Data. What we want to do here, for all the different tissue probability maps, is just use everything: Native + DARTEL imported for the native tissue, and both Modulated and Unmodulated for the warped tissue. It will take a little bit longer, but it will result in a better segmentation, which will result in a better normalization, which means your registration has the best chance of being as close to successful as possible. So, for all the tissue probability maps, we're selecting Native + DARTEL imported and Modulated + Unmodulated; that's really turning everything on. What you also want to do here is: you
want to save your deformation fields, both the inverse and the forward deformation fields. The inverse deformation field is what the standard brain template would need to do to become like your structural image, and the forward deformation fields are what your structural image has to do to become closer to the standard template. You can see how those forward deformation fields would be used to normalize your data. So we save the inverse and forward deformation fields so that we can normalize our data using them, and we'll use the forward ones.

After we've segmented, the next step is to normalize, and here you will be using Normalise: Write, because you've already estimated your parameters with the segmentation step. You add a new subject; for the deformation field you want to use your dependency from your segmentation step, choosing your forward deformations; and the images to write, the dependency here, are your slice-timing-corrected images. The standard parameters here are fine.

Finally, we will do smoothing, which is where we take our normalized data and average out the activations over a larger number of voxels than just one, which reduces the noise in our data. We just take a dependency here on our normalized images.

OK, so once you've got all of this set up, that's your pre-processing pipeline, and you can see that it's all got dependencies on previous steps. You can save this: you go Save Batch, and it's always useful to do that, so that you can open it and modify it for other people, or save it and then write a script from it.
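The remaining four modules, from co-registration through smoothing, look roughly like this as a script. This is a sketch: paths are placeholders, field names should be confirmed via File, Save Batch and Script, and in the GUI you would use dependencies rather than the prefixed filenames assumed here:

```matlab
% Coregister (Estimate): mean unwarped EPI as reference, T1 as source.
matlabbatch{1}.spm.spatial.coreg.estimate.ref    = {'/path/to/func/meanurun1.nii,1'};
matlabbatch{1}.spm.spatial.coreg.estimate.source = {'/path/to/anat/t1_repeat.nii,1'};

% Segment: write everything, plus inverse and forward deformation fields.
tpm = fullfile(spm('Dir'), 'tpm', 'TPM.nii');
matlabbatch{2}.spm.spatial.preproc.channel.vols = {'/path/to/anat/t1_repeat.nii,1'};
for t = 1:6
    matlabbatch{2}.spm.spatial.preproc.tissue(t).tpm    = {sprintf('%s,%d', tpm, t)};
    matlabbatch{2}.spm.spatial.preproc.tissue(t).native = [1 1];  % native + DARTEL imported
    matlabbatch{2}.spm.spatial.preproc.tissue(t).warped = [1 1];  % modulated + unmodulated
end
matlabbatch{2}.spm.spatial.preproc.warp.write = [1 1];            % [inverse forward] deformation fields

% Normalise: Write applies the forward deformation (y_*.nii) to the
% slice-timing-corrected images ('au' prefix); Smooth then adds the 's' prefix.
matlabbatch{3}.spm.spatial.normalise.write.subj.def = {'/path/to/anat/y_t1_repeat.nii'};
matlabbatch{3}.spm.spatial.normalise.write.subj.resample = ...
    cellstr(spm_select('ExtFPList', '/path/to/func', '^aurun1.*\.nii$', Inf));
matlabbatch{4}.spm.spatial.smooth.data = ...
    cellstr(spm_select('ExtFPList', '/path/to/func', '^waurun1.*\.nii$', Inf));
matlabbatch{4}.spm.spatial.smooth.fwhm = [8 8 8];                 % kernel in mm; choose for your study
```

Note that the spm_select calls for the later modules only find their inputs once the earlier modules have run, which is exactly why the batch editor's dependency mechanism is the more robust way to chain these steps.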
We will call this batch AAC033. OK, let me save that, and then we will run it by clicking the green triangle.

So you will start to see, here we go, in our progress window, that it is doing our pre-processing. How long this takes really depends on the size of your data, the number of files, how it was acquired, and your processor speed: anywhere from, say, 20 minutes to an hour. But this is good: this is your progress, which tells you each step that it's doing, and it feels kind of satisfying to watch. As the steps are carried out, you will see the different steps being visualized in your visualization panel: you will see your unwarping, so you will see how well the field map worked; you will also see your structural and functional data and how they're co-registered; and when it's done you can check your registration.

So we're starting to see this: here are our warped and unwarped EPI, and this is our field map. It's not as stark a contrast as the one I showed you in the example, but we should find some differences between the warped and unwarped EPI based on the field map correction. You can check that when you click on anatomical landmarks you're in the same place in the functional data, and you can click around the edges of the brain to compare the edge of the functional with the edge of the structural.

So, after all of your modules have been carried out, and you can see that for this one set of data it took over an hour, which might be something to do with Jupiter being slow, you've got your normalized data, and you can check your registration. I talked about your functional data and your structural data being co-registered to each other, and then about using the tissue probability maps.
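Instead of the green triangle, a saved batch can also be run from the MATLAB command line; a minimal sketch, assuming the batch was saved from the Batch editor as AAC033.mat:

```matlab
spm('defaults', 'fmri');         % load SPM's fMRI defaults
spm_jobman('initcfg');           % initialize the batch system
load('AAC033.mat');              % loads the saved matlabbatch variable
spm_jobman('run', matlabbatch);  % run all modules in order
```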
The tissue probability maps from the segmentation step let us calculate deformation fields, with which we move our functional data towards a standard template. We already saw that the co-registration was good, that the functional data and the structural data from our participant were aligned and co-registered well. What we want to check now is our smoothed data, the data that we have fully pre-processed, which has the prefix swau. I'm going to select that file, and I'm going to see how well it aligns with the canonical standard brain template that's based on thousands of brains. So I go into SPM12, the standard SPM files that come with it, into canonical, and we have the single-subject T1.

So these are the two things we observe in our visualization window. We can see, as we click around, that this is our functional data, the data we collected on our task, and that it is nicely aligned with the template: the single-subject T1 image, a standard T1 image that is used for comparing across lots of different brains. You can see that our pre-processing steps have aligned these two datasets quite nicely. The way you check this is to click in the sulci and see that they line up with the sulci in the functional data; as you click around, they're lighter in the functional data and darker in the T1. And, very importantly, check that the edges are aligned. I think that looks like a good registration, so we have normalized our functional data to the standard template successfully, which means we can then apply conditions to that functional data and compare functional data between different subjects at the group level.
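This final check can also be launched from the command line; a sketch, assuming the smoothed, normalized functional file carries the swau prefix as above (the functional path is a placeholder):

```matlab
% Display the pre-processed functional data alongside SPM's canonical
% single-subject T1 so sulci and brain edges can be compared by clicking around.
spm_check_registration( ...
    '/path/to/func/swaurun1.nii,1', ...
    fullfile(spm('Dir'), 'canonical', 'single_subj_T1.nii'));
```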