This course aims to give an introduction to how to start your data processing and how to fit your data. We've been lucky enough to have Bruce agree to come over and give us a talk through the software, which is of course his development. Bruce is well known in the XAFS community for his scientific work and his software development, so we're really grateful to have him here today.
Without further ado... Thank you, Bruce. All right, thanks, Paul. Hi.
Who here is coming to XAFS for the very first time? Is there anyone here who has not been to a beamline at all? A handful of people, but perhaps soon. This is something you're going to be doing.
Pardon? Okay. So I'm going to be doing a little bit of a demo.
Starting off with something not quite, but pretty close to, the introductory lecture. Even those of you who raised your hands about not having been to a beamline yet, I'm assuming that you at least know a little bit about what XAFS is about. But over the course of this talk, we should go over the most fundamental points.
And then after that, we'll be digging into a lot more detail. All right. Beautiful. Okay, so let's start off with the acknowledgments.
And these are more or less the acknowledgments for pretty much everything I'm going to present here. Matt is my longtime collaborator and friend, and co-author of all of this software with me.
So, without Matt, none of the rest of this would have happened. Shelly and Scott are two old friends and people I've been working with for a long time; they have inspired a lot of how I present this material, and have also been great friends in the development of the software. I had the great fortune of getting my degree from the University of Washington in Seattle, which is the home institution of both Ed Stern and John Rehr. Ed was my thesis advisor, and I worked with John also.
And as well as being luminaries in the field, they're both extraordinary people. And if you ever have the opportunity to meet Ed or John at a conference, you'll find that out. They're great.
So, you know, my boss is a great guy for letting me come and do things like this rather than being at home getting real work done. And of course, Paul and Diamond have been very gracious in inviting me here, picking up the tab, and bringing us all together to do this. But I also want to thank all of you. It's incredibly flattering that so many of you want to hear what I have to say. So let's get started on that.
So you go to the beamline. You do an XAFS experiment. You end up with data that look something like this. What we have plotted here is the europium L3 edge and the titanium K edge of a fairly simple crystalline material, europium titanate. You make a measurement and you get a spectrum. Somehow, these data tell us something about the structure of the material. And conversely, the structure of the material determines what the data are going to look like.
So somehow there's a relationship between this thing that goes up and then wiggles. There's some relationship between all of those wiggles and how the atoms stack together to make a material. So you go to the beamline to try and get something fundamental about your sample.
You want to know the valence of the absorbing atom, or what kind of species surround the absorber. That is, if you're doing, say, a problem in redox chemistry or sulfidation chemistry, you want to know if your absorber is surrounded by oxygen or sulfur or metal atoms. And knowing what kinds of atoms, you'd like to know how many there are.
That is, you want to measure the coordination number and the coordination environment of the absorber. You would certainly also want to know how far apart atoms are in your material. And you'd like to know something about how things are distributed around the absorber, what kinds of thermal and structural disorder might exist in the material. So somehow we're going to go and measure this stuff and get out information about all of these things. And that, at the end of the day, is the whole point of going to the synchrotron and doing an XAFS experiment. So what kinds of things could you pick up from your lab and take to the synchrotron and do an XAFS measurement on?
Well, the answer is just about anything. And this is one of the great values of XAFS: you really can measure just about anything.
The fancy phrase that I like for this property of XAFS is that in the theory and the analysis, there is no assumption of symmetry or periodicity. That means that unlike a simple diffraction experiment, you don't need something that is crystalline; that is, your material doesn't have to scatter in the Bragg sense for you to measure XAFS. It can be very, very far away from being a crystal. It can be a liquid, it can have mixed phases, it can be some kind of engineered material, and on and on. When I was writing up this slide, it was around the time that the Chemistry Nobel Prize was awarded, so I threw quasicrystals on the list also, which is another thing you can measure with XAFS.
And again, that's something that doesn't require a particular assumption of symmetry and periodicity. So, to make your measurement, to do the experiment that you want to do, you need to go to the right place. And you need to prepare the sample correctly so that you can measure data of the highest quality and measure the thing that you're actually setting out to measure. So there are some questions you need to think about.
You need to, of course, choose the right beam line. So if you're doing a hard X-ray experiment, you need to go to a beam line that does hard X-ray spectroscopy. If you're interested in doing absorption spectroscopy on something like...
the oxygen K edge or the carbon K edge, you would then need to go to the appropriate soft X-ray beamline to do that. And there are some more challenging elements on the periodic table, things like magnesium and aluminum and silicon, that have edge energies in that extremely inconvenient range between what is easily done at a soft X-ray beamline and what is easily done at a hard X-ray beamline, and your choices are somewhat more limited. But there are places to go do that also. So you need to do some footwork ahead of time to make sure that the beamline you're choosing is correct.
There's a whole bunch of issues about sample preparation that you need to worry about. The sample has to be prepared appropriately for the experiment. I'm going to move past that in this slide, and there's ample information out there about what is meant by having the sample be appropriate, but it's the other thing you have to think about. The beauty of it is that pretty much any of these things over here can be prepped in a way that lets you go to the right beamline and measure good data.
It's generally not that hard to do sample prep in XAFS. Generally, the hard part is figuring out what it all means; actually making the measurement is relatively easy. One fascinating thing, at least for me as a beamline scientist, about absorption spectroscopy is that it's used by literally everybody. And if you introduce yourselves to the people to your left and right, you'll probably find that they don't do the same kinds of science as you. For me, as a beamline scientist, that's great.
It means that every week I have a couple of different groups coming in doing some interesting new thing that I haven't thought about. Just this fall, this is the variety of experiments that have been going on at my beamline, to say nothing of everything else that happens at NSLS and here and all the other synchrotrons in the world. So, widely applicable.
Applicable to a wide range of materials and a wide range of scientific disciplines. And that's pretty cool. So you go to the beam line and you measure some stuff. You put your sample in. You do a good job of sample preparation.
And you put your sample in the beam and you open the shutter and you make a measurement. And sometimes XAFS is really easy; sort of fall-off-a-log easy.
Here's an example of what might sound like a slightly challenging experiment that we did at my beamline a couple of years ago, although it turned out not to be. This is a germanium antimony alloy, a relatively thin film of the stuff on silica. And because it's a thin film, we measured it at glancing angle.
We did a bunch of tricks to make the experiment work as well as possible. And what I show here is a single scan. Now, this is at my beamline, which is one of the oldest beamlines at NSLS. We take a very small fraction of the swath of radiation that's coming out of our beam port. We have no sophisticated optics at the beamline. It's pretty much the simplest, dumbest XAFS beamline that you could imagine doing useful work at.
And in one scan, 15 minutes, we had beautiful data. Okay, sometimes you go to the beamline and XAFS is hard. So here was an experiment I did some years ago at what is pretty much my favorite XAFS beamline in North America, 20-BM at the APS. And for a variety of reasons, this turned into a challenging experiment. Part of it is that it was a relatively low concentration experiment, with a little bit of mercury bound to some engineered DNA.
And we basically spent the whole day measuring scan after scan after scan. And here's the whole ensemble of data. They're all pretty crappy. Here in blue is the χ(k), the oscillations extracted from the data for a single scan, and you can see that the noise level is just enormous, right? So each individual scan was awful, but the central limit theorem always works, right? If you're dominated by statistical noise, all you have to do is measure longer.
So we spent a whole day on it and beat the noise down to the level of the red line. Still not beautiful, but it was something that was measurable, and I ended up getting two publications out of this work. So sometimes XAFS is easy, sometimes it's hard.
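The "just measure longer" logic rests on the central limit theorem: averaging N statistically independent scans beats purely statistical noise down by roughly a factor of √N. Here is a toy sketch of that idea with an entirely made-up signal and noise level (not the mercury data):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up smooth "signal" standing in for chi(k) -- purely illustrative.
k = np.linspace(2, 12, 500)
signal = np.sin(5.0 * k) * np.exp(-0.01 * k**2)

def measure(n_scans, noise=0.5):
    """Average n_scans noisy measurements of the same signal."""
    scans = signal + rng.normal(0.0, noise, size=(n_scans, k.size))
    return scans.mean(axis=0)

def noise_level(data):
    """Crude noise estimate: rms deviation from the true signal."""
    return np.sqrt(np.mean((data - signal) ** 2))

one = noise_level(measure(1))
many = noise_level(measure(49))
# Statistical noise falls roughly as 1/sqrt(N): 49 scans ~ 7x quieter.
print(one / many)
```

The same scaling is why a full day of mediocre mercury scans could be averaged into something usable; it works only when the error is statistical, which is the point of the next few slides.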
And in any case, regardless of that, there are a few things that we have to know how to do, and we're going to talk about all of these things today and in the next two days at some length. So you have to know how to evaluate the statistical quality of the data.
That is, I needed to know that these were good data and have a way of saying with certainty that they're good data. And I needed to know the extent to which these are bad data. That is, I had to make a decision about how long to measure, how many scans to measure, to turn my noisy data into something that was useful. So we have to be able to evaluate the quality of our data. We have to recognize the difference between statistical and systematic error. That is, I had to recognize that in those mercury data, the data I said were difficult or bad, it was shot noise, something that would go away by simply measuring for enough time.
That's in contrast to some kind of non-linearity in the beamline or a problem in the sample preparation, something that is a systematic problem that would be in every scan that you measure. And if you have a systematic problem, well, it's not a matter of measuring more. It's a matter of fixing the beamline or fixing the sample.
And the point here is that you need to be able to recognize the difference between statistical and systematic error and know what to do. In the case of the first, it's measure more. In the case of the second, it's find the problem and fix it. And you need to be able to recognize that.
And here's an interesting point that may not occur to you the first time you think through a problem: as a consequence of being able to recognize and evaluate statistical quality, you need to know when to stop measuring a sample. That is, you need to know how much data is enough that you have statistical confidence in the data, but you also need to know when it's time to stop measuring that and move on to the next thing, so that you get enough work done during your beam time. That's all part of evaluating the statistical quality. And, of course, you have to know how to process your data for further analysis, which is what we're going to start talking about this afternoon, and that's really the point of this little workshop. So, what I showed you in the first couple of examples were basically conventional XAFS experiments. Something that probably everybody in the room will do, but probably everybody in the room will also do things that are more, I hesitate to say interesting, but more involved or more elaborate than just a simple conventional XAFS experiment.
Because with new technologies and spiffy, fancy new synchrotrons like here, like the APS, or like the new synchrotron that we're building back home in the US at Brookhaven, NSLS-II, we get to exploit lots of interesting new technologies that let us do interesting new experiments. But as I'm going to show you, a whole bunch of interesting experiments end up distilling down to something that is an XAFS spectrum. So here, for example, is some really beautiful work that was done a few years back at one of our microprobes at NSLS,
looking at a plant that hyperaccumulates metals from the soil. That is, if you grow this plant in a metal-contaminated soil, it will suck the metals out of the soil more quickly than normal, and so it provides a way of potentially remediating metal-contaminated soil. And this little critter has the additional interesting feature of forming these sort of star-shaped inorganic structures
studded all over the leaf. These folks made these beautiful pictures showing the co-location of calcium, which is calcium carbonate, which is what makes up the little inorganic stars that stud the leaves, and various metals, nickel and cobalt and zinc, things that might have been in the soil where this plant grew. Now, this by itself is a pretty great result and is the kind of thing that you might publish. However, at one of these microprobe beamlines, you can put the beam in a special place on the sample, there or there, and come up with XAFS spectra.
And here, the XAFS has been processed to the point of χ(k) and the Fourier transform, the point being that with the X-ray beam in one place compared to another, you end up with two very different species of cobalt. And so that's really powerful. You not only see elemental distribution, but you can do all of the speciation, all of the stuff that's great about absorption spectroscopy, with spatial resolution. But the point I'm making in bringing this up is that when you use this fancy technology of the microprobe, at the end of the day one of the things you end up with is an XAFS spectrum.
Again, we have to know how to process the XAFS, and we have to know how to evaluate the quality of the XAFS. Similarly, you might be doing some kind of time-resolved measurement, using either a dispersive apparatus of the sort that is being built across the street at the XAFS beamline I20. Did I get that number right? I20.
Or if you're using a quick-scanning monochromator of the sort that they're commissioning at B18, you do this time-resolved experiment. And by plotting the data in a clever way, you can see the time evolution of things as they change in your sample. But again, even though this is a mountain of data, they're all XAFS spectra. And again, to do this kind of experiment, on top of all the details of the more sophisticated experiment, you have to be able to evaluate the quality of the XAFS.
Here's an experiment I did some years ago, a diffraction anomalous fine structure experiment, which is a cute trick where you coordinate the motion between the goniometer and the monochromator, or use some kind of fancy area detector, to measure the changes in the diffraction pattern as you change the incident energy. You do this through the resonant energy of an atom in the crystal, and you end up with these interesting spectra where the diffracted intensity changes significantly and includes oscillatory structure that shows up, I hope, much better in the handout than it's showing up on the screen. And if you process these data correctly, you end up with something that is an XAFS spectrum that you process and analyze exactly like any other XAFS. So again, an elaborate experiment where you measure a lot of different things, and at the end of the day one of the products is an XAFS measurement.
Yet another example. Here is a fairly clever inelastic scattering spectrometer that was developed at 20-ID at the APS. The basic idea is that X-rays come in, strike the sample, and you have a bunch of crystals subtending this arc over the sample so that you can measure the inelastic scattering as a function of momentum transfer. There are a lot of details, but you measure this interesting inelastic scattering spectrum using these crystal analyzers and point detectors, one for each of the crystal analyzers.
And what you end up with on each of these channels is a spectrum that looks something like this, where I've cut off the elastic peak, which is quite enormous. You see this big Compton scattering peak that, it turns out, disperses through the data as you go over the arc of the detectors and change the amount of momentum transfer.
But if you look at the fine details, at energies that correspond to binding energies for electrons in the material, you see energy loss spectra that are associated with the different things in the sample. Focusing in on this little bit right here and measuring more finely, we end up with something that looks just like a XANES spectrum. So we do this immensely complicated experiment that involves a complicated spectrometer, measure this complicated inelastic scattering spectrum, and at the end of the day we focus in on one part of it and interpret it exactly like a XANES spectrum.
So the point of all of these examples, the thing I'm driving at, is that whether you do the conventional XAFS experiment or you do something much more interesting that we get to do these days at the synchrotron, this basic skill of knowing how to evaluate your XAFS data and process it well and correctly is immensely useful; hence this course. Okay, so before I launch into the overview of what we do with our XAFS data, I just want to make sure that we're all using the same vocabulary. We often split up the XAFS data, the absorption spectrum, into a near-edge region and an extended region. And the jargon, some of which...
Well, one small part of which, I would say, doesn't make a lot of sense, but it's the jargon that we use. There's this main rising part that we call the edge, or the threshold: the threshold into the unbound states that the photoelectron can be promoted into. Below the edge in some materials, and you'll particularly see this in transition metal oxides and other transition metal compounds, there are one or more interesting, often small but not always small, little peaks that are often referred to as the pre-edge peaks. I think if you dwell too long on the word pre-edge, it gets a little confusing, but the sense in which it's meant is the spectral features that show up before the main rising edge into the continuum.
There's the phrase near edge that is often used to discuss the features above the main rising edge, but not all the way out into the XAFS. These are all kind of squishy terms, in the sense that where I chose to draw these cut-offs is a little ambiguous. Many of these materials, particularly oxide materials and transition metal and rare earth oxide materials, will show a very tall, very sharp feature
right at the beginning of the scan that's often referred to as the white line. And then everything beyond all of these other things is referred to as the extended XAFS. In a few minutes I will explain some ways to think about these different parts of the spectrum. So, the most obvious way to use absorption spectroscopy is as a fingerprinting tool.
That is, you have a sample and you want to know: is this an iron oxide, is it an iron sulfide, is it iron metal? You just want to know the most basic thing about the material. And the reason that this works is because the details of what the data look like have to do with the coordination and valence environment of the absorbing atom. So, the little quiz that I have here is this: I have four spectra, all measured at the iron edge, but measured on four different things. And the question is, just looking at the data, if you had measured a sample that somebody handed to you but did not identify, could you identify the sample?
And the answer is yes, as long as it's a pure material, because each of these things has a distinct spectrum. And I apologize for the next bit if any of you are red-green colorblind; the transition won't make a lot of sense. It turns out that this one, the second one from the bottom, is ferrihydrite, because that's what ferrihydrite looks like.
The second one from the top is iron pyrite, an iron sulfide, and you know that because that's what iron sulfide looks like. The top one is the metal and the bottom one is a different oxide called hematite. And so you can use XAFS as a fingerprinting tool. So you have your unknown thing with an unknown iron species in it, and it could be anything: your dirt, your catalyst material, your paint chip, your animal tissue, whatever it is that you take to the synchrotron. If you simply want to know what the dominant species in your sample is, XAFS is a way to do that kind of fingerprinting.
But that's a qualitative kind of analysis, and there are many, many things we can do that are a lot more quantitative than that. Although fingerprinting is not to be discounted; it's immensely valuable. Your first line of attack against your data is: do these look more like an oxide or do they look more like a sulfide? And that's the value of fingerprinting.
There's a whole bunch of things that are more quantitative that we can think about doing. For the first two I use the word positioning. What I mean by that is that you're looking at some characteristic of the XANES data and making a sort of semi-quantitative analysis based on the gross features of the XANES. The example on this side, on the left as you're facing it, is approximating the amount of reduction of uranium.
What's going on in this experiment, in this paper, is that an oxidized form of uranium is being exposed to a kind of bacteria that is known to draw energy from the uranium by reducing it from U(VI) to U(IV). In this way, the bacteria is actually drawing the energy that it needs for life from this otherwise toxic and radioactive material. At the top, you see a standard that is pure U(VI), and at the bottom, you see a standard that is pure U(IV).
And by using some feature of the XANES, say the point at which you've gone halfway up the edge, or the peak of the first derivative, or perhaps the peak of the white line, you can then quantify the amount of reduction in these various samples just by seeing where they stand between U(IV) and U(VI). And in this way, whatever APSA and NGA and all of those labels mean, they're able to quantify the reduction of the uranium. Another example of this is a sort of cluster analysis of the pre-edge peaks in various titanium-containing minerals.
And this is a pretty famous, pretty well cited paper that a lot of people who work in various aspects of titanium mineralogy and titanium chemistry use. The basic concept is that by looking at the heights and positions in energy of the pre-edge peaks in various titanium compounds, you can cluster things together. If you go and measure something that is an unknown titanium compound, and you measure the height of the pre-edge peak and the position of the centroid of the pre-edge peak, you can place it on this plot. If it falls somewhere in this area or somewhere in that area, then you can say I have five-coordinated titanium or I have six-coordinated titanium, by simply doing this sort of cluster analysis on a fairly large, fairly gross feature of the titanium spectrum. However, either of these analyses only makes sense if you really pay careful attention to the data processing. That is, you have to process and normalize your data quite well for either of these kinds of analyses to make sense. And so these are quantitative methods to the extent that you can do a good and defensible job in processing your data.
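The "positioning" measures mentioned above, the half-height point and the peak of the first derivative, are straightforward to compute. Here is a minimal sketch on a synthetic arctangent edge (the 7112 eV iron K-edge energy is real; the spectrum itself is made up for illustration):

```python
import numpy as np

# Synthetic, purely illustrative "edge": an arctangent step centered at 7112 eV.
energy = np.linspace(7080, 7180, 2001)
e0_true = 7112.0
mu = 0.5 + np.arctan((energy - e0_true) / 2.0) / np.pi

# One common convention: take E0 as the maximum of the first derivative.
deriv = np.gradient(mu, energy)
e0_from_derivative = energy[np.argmax(deriv)]

# Another: the point where mu crosses halfway up the edge step.
half = 0.5 * (mu.min() + mu.max())
e0_halfway = energy[np.argmin(np.abs(mu - half))]

print(e0_from_derivative, e0_halfway)
```

On real data the two conventions generally do not agree exactly; what matters for positioning-style analysis is picking one convention and applying it consistently to samples and standards alike.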
Another thing that is often used is a peak fitting approach, where you take your normalized data and build a sort of heuristic model of the data as a combination of various line shapes. What I show here is coming up with a model for these lead titanate data. Here you see some interesting pre-edge peaks, the edge, and then the XANES has just started up here. As a combination of an arctangent and three peak functions, so Gaussians or Lorentzians, I'm able to approximate the shape of the data with these functions. Now, the drawback of a peak-fitting approach is that it is often ambiguous what physical or chemical or, I suppose, electronic meaning to ascribe to the various peaks.
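A heuristic model of that sort, an arctangent step for the edge plus a few Gaussians for the peaks, can be sketched with standard fitting tools. This is a toy example on synthetic data, not the lead titanate spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(e, amp, cen, wid):
    """A Gaussian peak function."""
    return amp * np.exp(-((e - cen) ** 2) / (2 * wid**2))

def model(e, step_amp, step_cen, step_wid, a1, c1, w1, a2, c2, w2):
    """Arctangent step for the edge plus two Gaussian pre-edge peaks."""
    edge = step_amp * (0.5 + np.arctan((e - step_cen) / step_wid) / np.pi)
    return edge + gauss(e, a1, c1, w1) + gauss(e, a2, c2, w2)

# Synthetic "pre-edge + edge" data -- illustrative only.
e = np.linspace(-20, 30, 400)
true = model(e, 1.0, 5.0, 2.0, 0.3, -8.0, 1.5, 0.2, -3.0, 1.2)
rng = np.random.default_rng(1)
data = true + rng.normal(0, 0.005, e.size)

# Starting guesses near, but not at, the true values.
p0 = [1.0, 4.0, 2.0, 0.25, -8.5, 1.0, 0.15, -2.5, 1.0]
popt, _ = curve_fit(model, e, data, p0=p0)
print(popt[:3])  # fitted step amplitude, position, width
```

The fitted peak amplitudes and centroids are the quantities one would then track across an ensemble of spectra.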
But the sense in which it's a useful quantitative tool is this: if you have an ensemble of data where something is changing from the beginning of the ensemble to the end, you can relate the quantitative changes in these various features to whatever else is going on in the material. So if you have, say, something that is heating up and you want to try and understand something about the evolution of the system, you might be able to do so quite well by doing this kind of analysis over the entire ensemble of data. Another approach, and a topic that I will go into at great length tomorrow, is to do linear combination fitting.
The basic concept here is that if you have a sample that is a mixture of phases or a mixture of states, then the data that you measure on that sample can be understood as a linear combination of the spectra measured on the pure materials. What's shown here, and what will be the example I go into at some length tomorrow, is a system of a gold chloride solution that is being reduced to metallic gold in the presence of some biomass. So there's some kind of chemical interaction between this very caustic gold chloride and
whatever is in the biomass; there's some kind of reducing interaction, so that after some great amount of time, all of the gold chloride has been reduced to metallic gold. The concept here is that at some intermediate point, and here this is data taken seven hours into the reaction, you can describe the data as a linear combination of the spectrum of the gold chloride and the spectrum of gold metal, the beginning state and the end state, as well as whatever the intermediate state of the system is. At this intermediate time, you can describe the data as a linear combination of the two end members and, it turns out, one other thing.
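Stripped to two components, linear combination fitting is a small least-squares problem. Here is a sketch with made-up stand-ins for the two standards, with the two weights constrained to sum to one:

```python
import numpy as np

# Illustrative standards: two made-up "spectra" on a common energy grid.
energy = np.linspace(0, 100, 501)
std_a = np.tanh((energy - 40) / 5)   # stand-in for, say, the starting compound
std_b = np.tanh((energy - 50) / 8)   # stand-in for the end product

# A "measured" spectrum that is secretly 30% A + 70% B, plus noise.
rng = np.random.default_rng(2)
data = 0.3 * std_a + 0.7 * std_b + rng.normal(0, 0.01, energy.size)

# Fit the fraction x of standard A, with weights constrained to sum to 1:
#   data ~ x * std_a + (1 - x) * std_b
# which rearranges to a one-parameter linear least-squares problem.
d = std_a - std_b
x = np.dot(d, data - std_b) / np.dot(d, d)
print(x)
```

With more standards this becomes a constrained multi-parameter least-squares fit, but the principle, projecting the data onto the standards, is the same; and it only works if data and standards are normalized consistently.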
And you can do this as a function of time and come up with some kind of time dependence of the system, or some quantitative measurement of the rate constant of the chemical reaction. Again, for this to be a quantitative technique, data processing is very key. That is, all of the data and the standards have to be processed well and normalized correctly so that you can do this kind of analysis.
Another thing that is done with an ensemble of data like the one I just showed you is a thing called principal components analysis, which is kind of a funny thing. You take an ensemble of data; what I have here is the time series of data from the previous slide, this gold reduction process. And you perform some linear algebra on the system and decompose these spectra into the principal components, which are by themselves not physically significant,
but they provide an orthogonal mathematical basis from which you can reconstruct all of the data. Here is what the components all look like. There's one dominant component that is more or less the average behavior of the entire system.
So the blue is basically the average of all of the data. And then the rest of the components, which are drawn here, are some kind of measure of the variations from sample to sample. And the number of statistically significant mathematical components that you deconvolve out of your data using this technique gives you some sense of the number of species that are present in the data. You can then determine what the states in the system are by trying to reconstruct standards out of these mathematical components. And so here I'm showing that there is metallic gold in the system, which we know because the system is reducing to metallic gold. We know there's metallic gold in the system because I can reconstruct metallic gold out of this orthogonal, non-physical basis of principal components, but I'm pretty convinced that there's not gold cyanide, because I cannot reconstruct the gold cyanide.
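The decomposition itself is just linear algebra; the singular value decomposition is one common way to get the components, and the singular values rank their statistical significance. A toy sketch with two made-up "species" shows two singular values standing well above the noise floor:

```python
import numpy as np

rng = np.random.default_rng(3)
energy = np.linspace(0, 100, 301)

# Two made-up "pure species" spectra and a time series mixing them.
species_1 = np.exp(-((energy - 40) ** 2) / 50)
species_2 = np.exp(-((energy - 60) ** 2) / 80)
fractions = np.linspace(0, 1, 20)   # reaction going from species 1 to species 2
ensemble = np.array([
    (1 - f) * species_1 + f * species_2 + rng.normal(0, 0.002, energy.size)
    for f in fractions
])

# Principal components via SVD: rows of vt are the components,
# s ranks how much of the ensemble each component explains.
u, s, vt = np.linalg.svd(ensemble, full_matrices=False)
# With two real species present, two singular values dominate the rest (noise).
print(s[:4])
```

Reconstructing a standard from the first few components (target transformation) is then a least-squares projection of that standard onto the rows of `vt`.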
So this is another useful quantitative tool to understand something about an ensemble of XANES data. Finally, and I'm going to talk about this very, very little, you can use some kind of theory to understand your XANES data. You can simply do a forward simulation: come up with some structure, have a recent version of FEFF calculate what the XANES would be from your structural model, and see how that compares to the data that you measured. By tweaking the parameters that go into the FEFF calculation, you can try and better reconstruct your data.
That's a useful quantitative tool. There are also a couple of tools that actually try and do some kind of numerical fitting to XANES data.
This MXAN program by Benfatto and Della Longa is one thing that a lot of people use. And this other program called FitIt takes sort of an interesting approach, where you pre-compute spectra over a multidimensional parameter space and then interpolate between that large ensemble of pre-calculations to understand what structural information you can get out of the data. Both of these things could merit a long talk or even a day's worth of instruction, as can FEFF. And that's sort of the end of what I'm going to say about theory, other than that if this is an approach that appeals to you, there are ways to use theory also to quantitatively
interpret and analyze your XANES data. So finally, getting to the EXAFS analysis. The last several slides were all about the first bit of the data, but there's a ton of information in all those wiggles that keep going on and on well past the edge.
So your EXAFS analysis can be quite simple, and often simple is sort of everything you need from an EXAFS measurement. So the software tries to help you do the simple things relatively simply. And here is just a quick fit to the first coordination shell of a form of iron oxyhydroxide.
And once you know how to drive the program, importing the data and parameterizing the problem and clicking the fit button and getting to the answer, all of this can be done for a very simple problem like this with about a minute's work. So it took me a minute to get this picture and get these results.
And it's pretty simple. I just assumed a very simple structure in the first coordination shell: iron surrounded by oxygen.
And by doing this simple analysis, I got almost the right answer. I measured that N was 4.6, and in this form of oxyhydroxide there are five near neighbors. I got almost the right answer for the distance and almost the right answer for the disorder parameter.
The reason I didn't quite get the right answer is that lepidocrocite is actually a fairly messy structure, and there's quite a lot of structural disorder that I did not model correctly in this simple analysis. That accounts for the little bit of deviation in coordination number and the slightly incorrect value for R. But for a small amount of work, this is often the level of data analysis you need to properly answer your question.
But the EXAFS analysis can also be quite a bit more sophisticated than what I just did. I don't have a slide about that, because that's what we're going to be talking about for much of the rest of the course; this slide is really just a teaser for what we're going to be doing tomorrow. If you dive really deeply into this whole XAFS business, your EXAFS analysis can be quite elaborate. I'm showing you here an example from one of my colleagues, Scott Calvin, something he did a few years back, where he threw every trick in the book at the problem, and in doing so learned some quite extraordinary things about these nanoparticulate zinc manganese ferrite materials, which are useful as magnetic cores for power devices.
He measured all three edges. Now, in this structure there's a lot of antisite disorder. All three metals, the manganese, the zinc, and the iron, can and do exist on either the tetrahedral site or the octahedral site in the structure.
And there's also the possibility that there will be oxygen vacancies in these materials. There's also the issue that these were nanomaterials, and so there are some issues special to the study of nanomaterials that have to be considered.
By really throwing every trick in the book at the problem, Scott was able to create a fitting model that quantified the amount of each metal atom on each of the sites where the metal can exist, quantified the amount of oxygen vacancies in the materials he was looking at, and reasonably correctly accounted for the effects of the fact that they were nanoparticles. What is plotted here are the data on his ensemble of different nanoparticles, measured at the manganese edge, the zinc edge, and the iron edge, all fit in one big, complicated fitting model. The thing that might not be obvious from far away from the screen is that every single one of these traces is data plus fit. There actually is a line that is the fit for every one of these data sets.
By knowing everything there is to know about EXAFS analysis, if you're willing to put in the work, you can get remarkable things out of an ensemble of EXAFS data. This is a really great paper, and for those of you who want to know what the limits of EXAFS data analysis really are, I highly recommend that you go off and look at Scott's paper. It's quite a read, a heavy read, but it really shows you that big things are possible.
And if you put on your thinking cap when you go to the XAFS beamline, you can go home with great stuff. So how do we understand what this XAFS thing is? I threw a bunch of data at you, and a bunch of data analysis concepts, many of which we're going to talk about in much more detail. But at this point, it's really important to have a good mental picture of what's really going on when you go to the beamline and measure stuff. So here's the basic picture. You have an atom with a deep core electron, and the X-ray comes in, and if the X-ray has enough energy to overcome the binding energy of the deep core electron, then a photoelectron is ejected from the atom, leaving behind a core hole. Plotted over here is the probability of that event happening. For an incident photon that does not have as much energy as the binding energy of this deep core state, not much happens.
And then, when you get to the binding energy, the probability of that photon interacting with the atom goes up dramatically; that's the step. After that, it tails off exponentially, the way you would expect from something that obeys the Lambert-Beer law, which is basically what a transmission XAFS experiment is.
It's the Lambert-Beer law in the X-ray regime: after the absorption edge, you get this exponential decay of the probability of the interaction. The photoelectron that gets ejected has some kinetic energy, which is the excess energy that the X-ray imparted above the binding energy.
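Since a transmission XAFS measurement really is just the Lambert-Beer law, the data reduction at this stage is a single logarithm. A minimal sketch, where the detector readings are made-up numbers purely for illustration:

```python
import math

def mu_x(i0, it):
    """Absorption mu(E)*x from incident (i0) and transmitted (it) intensities.

    Lambert-Beer in the X-ray regime: I = I0 * exp(-mu(E) * x),
    so mu(E)*x = ln(I0 / I). The detector readings below are invented.
    """
    return math.log(i0 / it)

# If the sample transmits 1/e of the beam, mu*x is 1 (a typical edge-step scale).
print(mu_x(100000.0, 36788.0))  # ~1.0
```

In a real measurement you would evaluate this at every energy point of the scan to build up the absorption spectrum.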
And because that photoelectron has kinetic energy, it has a corresponding wavelength. A low kinetic energy photoelectron has a long wavelength, and a high kinetic energy photoelectron has a short wavelength.
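The energy-to-wavelength relation he's describing is the de Broglie relation; a small sketch (function names are mine) using the standard conversion constant:

```python
import math

# hbar^2 / (2 m_e) in eV * Angstrom^2 -- the usual EXAFS conversion constant
HBARSQ_OVER_2ME = 3.8099821

def photoelectron_k(e_kinetic_ev):
    """Photoelectron wavenumber k (1/Angstrom) from kinetic energy E - E0 (eV)."""
    return math.sqrt(e_kinetic_ev / HBARSQ_OVER_2ME)

def photoelectron_wavelength(e_kinetic_ev):
    """de Broglie wavelength (Angstrom): low kinetic energy -> long wavelength."""
    return 2 * math.pi / photoelectron_k(e_kinetic_ev)

# 100 eV above the edge corresponds to k of about 5.1 1/Angstrom
print(photoelectron_k(100.0), photoelectron_wavelength(100.0))
```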
The bottom line, though, is that it's an electron, so there has to be a state available for it. The states down here below this threshold energy either don't exist or are occupied by other electrons, so there's no place for that core electron to go.
So there has to be an available state for there to be absorption. For this isolated atom, you get something that looks like that, and that's not very interesting. What is interesting is when you have neighbors around.
So if there's another atom sufficiently close to the absorbing atom, then you get this interesting interaction: the photoelectron can scatter off of this other atom, and the outwardly propagating photoelectron wave can interfere with the scattered portion of the photoelectron. You get these interesting interference patterns. And on top of this step with its exponential decay, you then get all of this interesting oscillatory fine structure, which is the business part of the XAFS experiment.
The interference between the two parts of the photoelectron wave is, of course, energy dependent, because the photoelectron's wavelength changes with energy while the separation between the absorber and the scatterer stays fixed. That gives you some constructive interference and some destructive interference, and it oscillates; that's why you get the fine structure. Suppose we wanted to enlist the help of one of our theorist friends to go off and calculate a XANES spectrum for us. How might our...
very smart theorist friend go off and do that? Well, first of all, I'll say something that's not a very profound statement: XAFS, like just about everything else in physics, is an example of Fermi's golden rule.
That is, we can understand this absorption function by somehow evaluating this integral; that is, evaluating what happens when something makes an electron go from its initial state i to its final state f. The thing that makes it go that way is, of course, the incident photon, which in this mathematical description turns out to be a dipole operator, or something with that functional form. So, whatever math is involved in evaluating this proportionality, we have to somehow figure out how the electron gets from its initial deep core state into its final state, the final state being the photoelectron. Broadly speaking, there are two ways of solving this equation.
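One common way to write the golden-rule expression he's describing (the notation here is mine, not necessarily the slide's):

```latex
% X-ray absorption as Fermi's golden rule: the dipole operator
% \hat{\epsilon}\cdot\mathbf{r} connects the deep core initial
% state |i> to the photoelectron final state |f>.
\mu(E) \;\propto\; \sum_{f}
  \left| \langle f \,|\, \hat{\epsilon}\cdot\mathbf{r} \,|\, i \rangle \right|^{2}
  \,\delta\!\left(E_{f} - E_{i} - \hbar\omega\right)
```

The delta function is what enforces the point made above: there must be an available final state at the right energy, or there is no absorption.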
You can write down a very careful mathematical expression for the initial state; that turns out to be relatively easy. Well, by a certain definition of easy: writing down the mathematical function for the initial state is the kind of problem one learns how to do in a first-year graduate quantum mechanics class.
So that's my definition of not difficult: something that can be done by a first-year physics graduate student, which is still sufficiently difficult, and far enough behind me, that I'd have some trouble doing it from scratch. You would also need to write down the final state, and it turns out that writing down a mathematical expression for the final state is an extremely difficult problem. So if you were to take this approach of writing down careful representations...
of the initial and final states, all of the work is in coming up with the final state. But once you have the initial and final states, the rest of it is relatively straightforward math that needs to be evaluated.
The other option is to use a thing called multiple scattering theory, where instead of doing the difficult work of writing down the final state, we do the difficult work of writing down something called a Green's function. The way to think about the Green's function is that it's the function describing all the ways the photoelectron can scatter off of atoms in the surroundings before something comes back to fill in the core hole that was left behind. So here's how we want to think about this: using this real-space multiple scattering approach, all the hard work is in coming up with this Green's function.
The Green's function, and I'm just going to assert this (you can go off and read the papers, some of them are quite good), is composed of two pieces that have relatively simple physical interpretations.
And as you'll see in the next couple of slides, there's a reason why I'm going over this point in some detail. The point is not that I think it's important that you all know how to write down the math for evaluating this Green's function thing. But I think it is important...
...to have a mental picture of what the Green's function is, and what is being calculated when you run the theory for absorption spectroscopy. So the Green's function, it turns out, can be broken down into two pieces. The piece G0 is a mathematical function that explains how a photoelectron goes from one place to another, where the two endpoints could be...
say, the absorber and a nearby scatterer, or two scatterers in the material. The point is that G0 is the thing that says how a photoelectron gets from one place to another. The other piece that goes into the Green's function is the so-called T matrix, which is the mathematical function that explains how a photoelectron scatters off of something. Those two pieces together are the whole story: the photoelectron propagates out and scatters off of things.
And if you can write down how it propagates and how it scatters, you can, in principle, solve the whole problem. So when computational XANES is done, when somebody says, "I did a FEFF9 calculation and I computed the XANES," what that means is they wrote down in whole this G0 function, the function that describes the propagation, and they wrote down in whole the T matrix, the thing that describes all possible scattering events from all possible atoms in the cluster, and then did a bunch of matrix algebra. These are very large matrices, so this expression turns out to be very computationally expensive, which is why XANES calculations have only been done routinely in the past 10 or 12 years. By constructing these relatively simple-to-calculate things and doing this big matrix algebra, you end up with the Green's function, the thing you're looking for, and from there it's relatively easy to finish the job and come up with a spectrum.
Now, the thing that's interesting here, and again the reason that I'm talking at all about this complicated Green's function stuff, is that you may recognize that this expression is subject to a thing called a Dyson expansion. That is, this term can be written as an infinite series. And remembering the definitions of G0 and T, we can then come up with physical interpretations for every term in the series.
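In symbols, the expansion he's describing looks schematically like this (indices and energy arguments suppressed; this is a sketch of the structure, not the full formalism):

```latex
% Dyson-style expansion of the Green's function: each additional
% T inserts one more scattering event between propagations G_0.
G \;=\; G_0
  \;+\; \underbrace{G_0\,T\,G_0}_{\text{single scattering}}
  \;+\; \underbrace{G_0\,T\,G_0\,T\,G_0}_{\text{double scattering}}
  \;+\; \underbrace{G_0\,T\,G_0\,T\,G_0\,T\,G_0}_{\text{triple scattering}}
  \;+\;\cdots
```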
That is, this thing that propagates, scatters, and propagates is the term that describes all the single scattering events: all the ways that the photoelectron can leave the absorber and scatter off of one and only one neighbor.
The next term, G0 T G0 T G0, is the term that describes all of the double scattering events: all possible ways that the photoelectron can leave the absorber, scatter off of one thing, then scatter off of another thing, and then be done. And this one is the triple scattering, and so on and so forth to all orders of scattering, because the Dyson series is an infinite series. Now, what do I mean by single and double and triple scattering? Well, what I mean is things that look kind of like this.
Here's an example of a single scattering path, where in each of these the red atom is the absorber and the yellow one is the scatterer. A single scattering path is one where the photoelectron goes out and scatters off of just one thing. Double scattering scatters off of two things, and so on and so forth.
Now, the clever thing about FEFF is that it further expands each of these terms, so that the G0 T G0 term is expanded into a sum over all possible paths that look like this, and then you get to calculate each of these things individually. This term is expanded into a sum of all double scattering paths.
So anything that looks kind of like this, you get to evaluate individually. And so on to all orders. So here's a whole bunch of examples of what I mean in a two-dimensional crystal.
So here's a cut through a plane of something cubic, and here's a bunch of examples of single scattering paths. If we treat that one as the absorber, here's a really short single scattering path, and here's a reasonably long one. Here's a whole bunch of examples of double scattering paths.
The double scattering can be all in a line, or it can make a little triangle, or a fairly large triangle. Triple scattering, similarly, can be all in a line, can rattle back and forth between two atoms, or can connect four atoms in some big rhomboid.
And FEFF allows you to calculate each of these things, and all the other examples you could imagine by looking at this plot, individually. So the trick to the EXAFS analysis is going to be to get a handle on this very large number of things that you might have to consider.
So when I say that FEFF calculates scattering to all orders, and furthermore breaks down each order of scattering into a potentially quite large number of examples of that order, it sounds like a very daunting problem, because it sounds like you have a very large number of things to worry about. But as you'll see tomorrow, it's not really that daunting: we have all the tools we need to dig through this huge pile of possible scattering events and focus in on the things that are actually important to analyzing our data. We'll learn how to do all of that tomorrow.
Finally, for every one of these paths, what FEFF does is help you evaluate the EXAFS equation: for every kind of scattering path, this equation needs to be evaluated. There's a sinusoidal term, the wiggly term, the thing that makes it wiggle.
And that has something to do with how far apart the atoms are in a single scattering path, or how long the path is in a multiple scattering path; that's what the R is. But the oscillatory term also has something to do with what the photoelectron is scattering off of. The amplitude of some scattering event has something to do with the number of scatterers there are.
For a single scattering event, that would be a coordination number. But it also has something to do with what you're scattering off of: f(k) and phi(k) are together the scattering function that FEFF calculates, and they are what allow us to identify the species of the scatterer when we're doing the EXAFS analysis.
There's a damping term that has something to do with the disorder in the system, and there's a mean free path term. We use FEFF to calculate the things that are in blue.
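The terms he's walking through can be collected into a small sketch of one path's contribution to the standard EXAFS equation. The function below is mine; the toy values standing in for f(k), phi(k), and the mean free path are placeholders, not real FEFF output:

```python
import math

def chi_path(k, N=6.0, S02=0.9, R=2.0, sigma2=0.003,
             f=lambda k: 0.8, phi=lambda k: 0.0, lam=lambda k: 6.0):
    """One path's contribution to the EXAFS equation (sketch):

        chi(k) = N*S02*f(k)/(k*R^2) * sin(2*k*R + phi(k))
                 * exp(-2*sigma2*k^2) * exp(-2*R/lam(k))

    f, phi, and lam stand in for the scattering amplitude, phase shift,
    and mean free path that FEFF would calculate (the "blue" quantities);
    N, R, and sigma2 are the kinds of things a fit optimizes (the "red" ones).
    """
    amp = N * S02 * f(k) / (k * R**2)      # how many scatterers, and what kind
    osc = math.sin(2 * k * R + phi(k))     # sinusoidal term: set by path length
    damp = math.exp(-2 * sigma2 * k**2)    # disorder (Debye-Waller-like) damping
    mfp = math.exp(-2 * R / lam(k))        # mean free path term
    return amp * osc * damp * mfp

print(chi_path(5.0))  # ~0.087 with these toy inputs
```

In a real analysis one would sum chi_path over every path kept in the fit and compare that sum to the measured data, which is exactly the procedure described next.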
And we do a fit to somehow optimize the things that are in red. By using FEFF and the analysis software, we evaluate the EXAFS equation for every path that we want to consider in the fit, sum them up, and compare to the data. That's all very vague right now, but again, tomorrow we're going to go over this in all kinds of detail. I think I'm nearly...
oh, Mary joins us. So, I want to leave you with one last topic in this introductory talk, and that is to remember that you never, ever, ever do an XAFS experiment without knowing something else. You always know something about your sample going in.
At the very least, you know what the absorber is, right? But you probably also have an idea about what the coordination environment is. You probably know whether you expect it to be oxidized or metallic or sulfided or whatever. And you've probably done other measurements on your...
sample: you've probably done some microscopy, you might have done some elemental analysis, you might have some diffraction, and so on and so forth. You have information about your sample. You never do the XAFS experiment in a vacuum, except possibly at a soft X-ray beamline, right?
Or in a cryostat. But I mean vacuum metaphorically, of course. You have other prior... there's a typo there. You have some prior knowledge about your sample, and you get to use that.
So you never know nothing about your sample. You're always going to bring knowledge to your interpretation of your XAFS data, and I want you to remember that at all times through the next couple of days. So that's the end of this talk, setting the stage for everything else we're going to do. Now let's get started.
Now it's time to go on and learn some XAFS. So that's the introductory talk. Are there any questions at this stage? Any big overarching questions about XAFS?