Okay, so we talked about interferometry: we had some sort of plasma like this, and we had a probing laser beam that went through the plasma. We split off a fraction of that probing beam and sent it around the plasma, which meant that when the two beams recombined, we got some interference effect between them. And we called these beams the probe and the reference, and we found that the phase difference between these two beams, delta phi (the phase accumulated by the probe beam minus the phase accumulated by the reference beam), was going to be minus omega, divided by two c and by the critical density (which is itself a function of the wavelength of the laser), times the line-integrated electron density: the integral of the electron density n_e dl over some plasma length scale L. And so this is the phase difference between the probe and the reference, and that's what we want to measure.
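If it helps to see numbers, here's a quick numerical sketch of that formula. The 532 nm probe wavelength and the Gaussian density profile are made-up values for illustration, not from any particular experiment:

```python
import numpy as np

# delta_phi = -(omega / (2 * c * n_crit)) * integral(n_e dl)
# Physical constants (SI units):
c = 3.0e8            # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m
m_e = 9.109e-31      # electron mass, kg
q_e = 1.602e-19      # electron charge, C

wavelength = 532e-9                          # assumed probe wavelength, m
omega = 2 * np.pi * c / wavelength           # probe angular frequency
n_crit = eps0 * m_e * omega**2 / q_e**2      # critical density for this wavelength

# Assumed Gaussian electron density profile along the chord (m^-3):
L = 1e-2                                     # plasma length scale, 1 cm
z = np.linspace(0.0, L, 1000)
n_e = 1e24 * np.exp(-(((z - L / 2) / (L / 6)) ** 2))

line_density = np.sum(n_e) * (z[1] - z[0])   # integral of n_e dl, m^-2
delta_phi = -omega / (2 * c * n_crit) * line_density
# delta_phi comes out at a few radians, and negative: the phase advances.
```

The sign convention matches the formula above: a positive electron density gives a negative delta phi, i.e. the phase advances in the plasma.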
We came up with a simple system where the intensity on our detector was just going to be equal to one plus the cosine of this phase here. And we realised very quickly that this causes us some problems, because if we have a signal on our detector of one plus cos delta phi, and that signal looks something like one plus the cosine of some constant a times t (something oscillating), then we have a lot of phase ambiguity: we don't know whether the phase is going up or down, and we also can't measure the phase to better than modulo 2 pi. For this example we had a look at what possible paths we could take in our little delta phi space that would give exactly the same signal. We said we could have one that ramps up like this; we could also have a delta phi that goes down like that; and every time we get to some multiple of pi we lose track of whether we are going up or down, so we start having these multiple branching pathways. Any possible path through this space will produce the same signal on our detector, and we can't tell the difference between them. So we rechristened this a homodyne technique, and then we looked at heterodyne techniques instead. When we started working with our heterodyne technique we borrowed some tricks from FM radio transmission, and now we have, going through our plasma, some radiation source which has got a frequency omega 1, and now our reference beam has a frequency...
omega 2. So they've got some frequency shift between them, and we talked about techniques for doing that. We put in some recombining beam splitter here and send it through to our detector, and we said: if omega 1 is equal to omega 2, we just get back our homodyne system, which is kind of obvious, because here we split the beam so the two halves have the same frequency. But in the more interesting case, where omega 1 is not equal to omega 2, we'll end up with two frequencies present in our final signal. Even in the absence of any plasma, if we just leave the system running, we'll have a signal on our detector that looks a little bit like this.
And there'll be two frequencies inside this. There'll be this envelope frequency, called the beat frequency, which oscillates at the difference between the two, omega 1 minus omega 2. And then there'll also be this fast frequency within it at omega 1 plus omega 2. Now these sum frequencies, for any radiation we're likely to use, are very high, so it's very hard to get a detector to work at these frequencies, and we won't actually see this at all. Our detector will just average this out, and what we'll see is the slow beat frequency instead. And we can detune omega 1 from omega 2 to get a beat frequency that falls nicely within our detector's range.
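Here's a toy sketch of that averaging, with arbitrary made-up frequencies (nothing here corresponds to real hardware): the raw detector intensity contains both the sum and difference frequencies, but any detector that averages over timescales longer than the sum-frequency period only passes the beat.

```python
import numpy as np

# Two probe frequencies close together (arbitrary units):
w1, w2 = 1000.0, 990.0                 # beat at w1 - w2 = 10
t = np.linspace(0.0, 2.0, 200_000)

# Intensity at the detector is the square of the summed fields, which
# contains terms at 2*w1, 2*w2, w1 + w2, and the slow beat w1 - w2:
raw = (np.cos(w1 * t) + np.cos(w2 * t)) ** 2

# Crude detector model: a boxcar average over a window much longer than
# 1/(w1 + w2) but much shorter than 1/(w1 - w2):
window = 4000
detected = np.convolve(raw, np.ones(window) / window, mode="same")
# detected is roughly 1 + cos((w1 - w2) * t): only the beat survives.
```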
So far so good, but there's no plasma physics in here yet. What we'll end up measuring is something that looks like I over I naught equals one plus cosine of (omega 1 minus omega 2) times t, that's the beat frequency, but we will also have an additional phase term delta phi, as we had before, because that's what represents the phase accumulated going through this plasma here. And when we looked at this we said: ah, that's interesting, we've got frequencies times time, and now we've just got this other term delta phi, which means that if delta phi changes in time, that change with respect to time is going to look like an effective frequency inside here.
And then we can rewrite this as one plus cosine of (omega 1 minus omega 2 plus partial delta phi by partial t) times time. And what we actually measure on our detector at the end of the day is a signal which is oscillating with some frequency, which I'll call omega prime, which is the sum of these three frequencies. That means that if we see a change in the frequency omega prime, we know that the change is due to a change in delta phi. And so now we can measure the temporal change in phase, which is the temporal change in the electron density. The nice thing about this technique is that we found we can now distinguish the phase going up in time from the phase going down in time, which we were unable to do with the homodyne technique.
That's why we got this ambiguity every time we went past pi: whether we're going up or down. With the heterodyne technique that was resolved, and we saw that we resolved it by looking at this in Fourier space.
So in the case of the homodyne technique we were detecting effectively this frequency here at some positive omega, that's partial delta phi by partial t, but that gave exactly the same result as the corresponding negative frequency here. So we couldn't tell the difference between the negative version of this and the positive version. When we moved to the heterodyne technique, because we were effectively encoding our fluctuating quantity around some carrier frequency omega 1 minus omega 2, we can now tell the difference between being on the negative side of that carrier and the positive side: a time-changing electron density changes the phase and shifts this frequency a small distance one way or the other.
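You can see this sign-resolving property in a few lines of numerics. This is a toy model with made-up numbers: a carrier at omega 1 minus omega 2, and a linear phase ramp a times t of either sign.

```python
import numpy as np

w_beat = 50.0                    # assumed carrier, w1 - w2 (arbitrary units)
a = 5.0                          # d(delta phi)/dt, the phase ramp rate
t = np.linspace(0, 20, 4000)
dt = t[1] - t[0]

def peak_freq(sig):
    """Frequency (rad/s) of the strongest positive-frequency component."""
    f = np.fft.rfftfreq(len(sig), dt) * 2 * np.pi
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    return f[np.argmax(spec)]

het_up   = 1 + np.cos((w_beat + a) * t)   # phase increasing in time
het_down = 1 + np.cos((w_beat - a) * t)   # phase decreasing in time
hom_up   = 1 + np.cos(+a * t)             # homodyne versions (no carrier)
hom_down = 1 + np.cos(-a * t)

# Heterodyne: the two ramps land on opposite sides of the carrier, so
# they give different measured frequencies. Homodyne: cosine is even,
# so the up-ramp and down-ramp produce literally the same signal.
```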
So this enabled us to resolve the ambiguity between these two and indeed this technique in general helps resolve a lot of the ambiguities associated with homodyne interferometry. So as I mentioned in the problem set you'll come across a couple of other techniques which can do this in a slightly cheaper but more ambiguous way. So we got through all of that and I just want to pause here and see if there are any questions on that material before we go on and finish off the spatially heterodyne version where we do this as an imaging technique. So questions? Yeah.
So I understand that the detector is not able to resolve the omega 1 plus omega 2 component, but does the time change in the phase difference contribute to that oscillation as well, and is that a worry for resolving it? Yes, it does, and it's clearest if we look at it in the Fourier domain. I don't want another board; I'll just use this one briefly.
Okay, so, in an exaggerated sketch: let's say we've got our two frequencies, omega 1 and omega 2, close together, like that. The difference between them is down here. This is the beat frequency, omega 1 minus omega 2, and indeed we have some shift of this to higher or lower frequencies, and that shift is due to the change in the phase in time.
The sum frequency is up here, right? This is omega 1 plus omega 2, and indeed it will also be shifted by the same amount, but it will still be at such a high frequency that there'll be no way to measure it. If you happen to have a system where your phase changes so much that you can shift this one down into the range you can measure, then I don't think you need a heterodyne technique at all at that point. But certainly, if that happens, you should make omega 1 and omega 2 larger until it doesn't. Because remember that we have this condition that you only really get good results from this if partial delta phi by partial t is much, much less than omega 1 minus omega 2, which necessarily means it's much, much less than omega 1 plus omega 2. You don't want your shifted frequency getting anywhere close to zero,
because then it aliases and you won't be able to measure it. So yeah, thank you. Good question.
Any other questions? Well, I've either taught it very well or you're going to find the homework very hard. Right, let's go on to spatially heterodyne techniques.
So here is the idea: we actually put an expanded beam through our plasma. We take our laser beam, which is maybe initially relatively small, and through some beam expander we get out a large laser beam, and we pass that through the plasma. We've expanded our beam enough that our plasma is maybe slightly smaller than the beam diameter, so there are some regions outside the plasma that we can still image where there won't be any phase shifts.
This turns out to be useful for zeroing our system. We'll talk about that a little bit later on.
And what we said is that this beam of course consists of some wave fronts like this, and as these wave fronts go through the plasma they are going to advance, because the phase velocity in the plasma is faster than the speed of light, and so the phase actually advances inside the plasma. And so the wave front that comes out is also going to be advanced with respect to the parts of the beam that didn't go through the plasma. We want to measure the change in this wave front. So one thing we could do is interfere it with a set of plane waves that we derive from the same laser beam.
We'll put a beam splitter in somewhere down here, send these around the plasma, send them back in here, and then we'll have another beam splitter, as we normally do, that recombines these. Then we have this nice flat phase front from the reference, and from the probe beam some phase front which is advanced a little bit. And if we put all of this image onto a detector (this is our camera, like this) we get a series of constructive and destructive interference fringes that maybe look like this. I'm just drawing a set of nested contours here, assuming we've got some sort of peaked central structure.
Now the trouble is that this is still a homodyne system, in the sense that we can't tell whether these density contours are going up or down. If I draw a random line out across this, my density could look like a peaked structure (maybe I've got a prior that that's true), but I can't prove to you that my density doesn't look like that instead, or any number of other different paths through phase space that would give us the same fringe pattern. And so this is still problematic. So what we want is a spatially heterodyned version of the temporally heterodyned system that we had up there, and we do our spatial heterodyning by tilting these fringes.
That literally means slightly adjusting this mirror here so that the fringes come through at an absolutely tiny angle. We're not talking about 10 degrees here; we're talking about much less than a degree. And that means that our phase fronts are now coming in at some slight tilt here.
And as opposed to having omega 1 minus omega 2, we now have an in-built phase pattern that looks like k1 minus k2. And these are vectors in the x-y plane here; we don't care about the z components, where this is x and this is y, and by convention we usually have the z coordinate going in the direction of our rays. We're obviously putting a camera here, so we don't measure anything in z; we measure in x and y, and we're interested in the misalignment of these wave fronts in x and y. So in the absence of any plasma, this misalignment is simply going to give you a series of straight fringes, evenly spaced like this.
And so that is like your beat signal here, the green line; in the absence of any plasma this signal just goes on and on and on. When we introduce the plasma into our system, each of these fringes will get distorted, and they'll get distorted by an amount that corresponds to the line-integrated electron density. And by looking at the shift between where the fringe was before we added the plasma and where the fringe is afterwards, we can then calculate the amount of density that's been added, because we know that the fringe shift is linearly proportional to the density here.
And once again, because we've got a heterodyne technique here, we have avoided this ambiguity. Even if these fringes overlap, even if we have a distortion so large that this fringe goes above this one, when we go out to the edge of the plasma, where the fringe shift is zero, we can still uniquely identify each fringe with its background fringe. So we can track them along and we can say: aha, this one's done two fringe shifts, or four fringe shifts. And there is a Fourier transform way to think about this as well, but now we need to have a two-dimensional Fourier transform, and what we're looking at here are kx and ky. And so originally your k1 minus k2 beat frequency is maybe up here, and, because our signal is real, by symmetry we have the same component at the negative position down here as well.
And now that we've distorted these fringes, this initial beat frequency has picked up Fourier components that maybe look a little bit like this, or they could look like that, and moving around in Fourier space changes what your background fringes look like. Here, where my components are roughly equal in kx and ky, that will correspond to fringes at 45 degrees. So if I had my beat frequency down here, that would correspond to fringes with a k vector in that direction, like that.
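This 2-D Fourier picture is also the basis of how these interferograms get processed. Here is a minimal sketch of Fourier demodulation on synthetic data; the carrier of 20 fringes and the Gaussian "plasma" phase bump are my assumptions, and real pipelines are considerably more careful about windowing and sideband masking:

```python
import numpy as np

N = 256
ncar = 20                                        # carrier: 20 fringes across
x, y = np.meshgrid(np.arange(N), np.arange(N))
k0 = 2 * np.pi * ncar / N

# Assumed smooth "plasma" phase bump, peaking at 3 radians (below pi,
# so there are no wrapping issues in this toy case):
phi = 3.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (N / 6) ** 2)
I = 1 + np.cos(k0 * x + phi)                     # synthetic interferogram

# Demodulate: FFT, move the +carrier sideband to the origin, low-pass
# around it to reject the DC term and the -carrier sideband, then IFFT:
F = np.roll(np.fft.fft2(I), -ncar, axis=1)
k = np.fft.fftfreq(N)                            # cycles per pixel
mask = (np.abs(k)[None, :] < 0.04) & (np.abs(k)[:, None] < 0.04)
phi_rec = np.angle(np.fft.ifft2(F * mask))       # recovered phase map
```

Here phi_rec tracks the bump phi. With real data the recovered phase is wrapped modulo 2 pi and you would unwrap it, using the fringe region outside the plasma as the zero reference.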
So you can choose what your carrier frequency is, and there was a good question in the last lecture about your sensitivity to density gradients in different directions; indeed, you have more sensitivity in the direction perpendicular to your carrier spatial frequency. Okay, there's a lot going on there. I actually have a load of slides with pictures of real data on this that might make it a little bit clearer, but before we get onto that, are there any questions? Yeah, John. So by tilting the wave fronts of the reference beam, what in essence we're doing is changing the wave-vector component that is interfering with what's coming through the plasma.
I mean, we're trying to create some beat interference here. So I guess, you know, technically we're not changing the magnitude of k? No, because in free space the magnitude of k is fixed, precisely because the dispersion relation of the wave in free space (not in a plasma) is omega equals ck. And so what we've done by tilting the wave is change how that magnitude is distributed between the two dimensions. So what we'll say is that the component of k that is interfering with the wave fronts is slightly different.
Yeah, so k2 here is, for example, the reference beam, and k1 is this beam coming through. So k1 is exclusively an x component, I guess? With this alignment, yes; but if I rotate the fringes, I can measure other components of it as well.
And it turns out, as I'll show you, you still get to measure some of the y components and things like that, even in this setup, which is most sensitive to the x component. But you're right, yeah. Was there another question? Yeah. So as the probe light propagates through the plasma, it'll refract and bend around and all this.
Why don't you get heterodyning for free from the corresponding changes to the wave propagation as it transits the plasma? So yeah, the question was why you don't get heterodyning for free, since you do sort of get k changing within the plasma here.
I mean, that's effectively what you're measuring here with these phase contours; it's just that they're still ambiguous. You need to shift them, as we did in frequency space for the temporally heterodyned version, in such a direction that it's completely unambiguous whether your phase shift is up or down. And here, at the moment, even with this, you're going to get these ambiguous fringes. I'll show you some pictures that will maybe make this a bit clearer in a moment.
But yeah, are there any questions online? Hi, yeah. Yeah. So how fine can you get for the amount of x components or y components in your scan? Like how detailed can you get?
Because I'm assuming, because the wavefront is continuous, but I'm assuming you can't get like perfectly granular, like understanding of the plasma's density, just from this. Did everyone in the room hear the question? Like what's the sort of limits on resolution in a way? So our spatial resolution is set by our fringe spacing.
right? So usually we can say this is a dark, destructive-interference fringe and this is a light, constructive-interference fringe, and theoretically you could identify the grey point halfway between light and dark, but it starts getting a bit ambiguous there. Dark and light are pretty obvious, and that means that your spatial resolution is set by the spacing between your fringes. If I choose to have my fringes closer together, so I choose a k1 minus k2 which is a larger number, a higher beat frequency, then I'll have my fringes closer together like this and I gain spatial resolution. And I'd be able to keep playing that game down to the resolution of my camera, where I need a certain number of pixels to be able to tell the difference between light and dark.
The trouble is, as I shrink this down, I'm gaining spatial resolution, but I am losing resolution of the density, because these fringe shifts are now smaller and smaller. The fringe is now only moving a very small distance. Maybe it's only moving two pixels or one pixel. Now I've got a 50% error.
right? Because I don't know whether it's two pixels or one. So there is a direct trade-off between the density resolution of this diagnostic and the spatial resolution.
So that's a very good question. And it's always the same, because mathematically it's the same, for the temporally heterodyned version as well: you can have time resolution or you can have density resolution, but you can't have both.
They trade off against each other. I see that makes sense. All right, thank you.
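To put rough numbers on that trade-off (assumed camera values, 10 micron pixels, purely illustrative): one pixel of fringe shift corresponds to a phase of 2 pi divided by the fringe period in pixels, so finer fringes buy spatial resolution at the cost of coarser phase steps.

```python
import numpy as np

# Assumed, illustrative camera: 10 micron pixels.
pixel_size_um = 10.0

results = {}
for period_px in (32, 8, 2):
    spatial_res_um = period_px * pixel_size_um   # can't resolve below ~one fringe
    phase_step = 2 * np.pi / period_px           # phase per pixel of fringe shift
    results[period_px] = (spatial_res_um, phase_step)
# Coarse fringes: good phase (density) resolution, poor spatial resolution.
# Fine fringes: the reverse.
```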
Thank you. Yeah. Can this be used for temporal measurements as well?
You can just keep, you know... Yeah, so: can it be used for temporal measurements as well? Yes, if you have a fast camera, you can do this.
For example, I know a guy who used a CW laser beam, a continuous-wave laser beam at like 30 watts or something terrifying, and he had a fast camera. The fast camera could take 12 pictures, one every five nanoseconds, and you'd be able to make a little movie. And depending on the speed of your plasma, you know, if you don't need a frame every five nanoseconds, but you're working with a plasma where the time scale is milliseconds, then you can actually just use a continuously running camera. So you need a nice bright light source that is continuous enough, and you need a camera which is fast enough, and that's what sets the resolution.
So you could make a 2D movie of the density evolution in time. So yes, the limitation here is just technological? Yes, yeah, yeah, yeah.
This technique is... I mean, that is time-resolved, spatially heterodyned interferometry. I don't think you can do temporally and spatially heterodyned interferometry in the same diagnostic. If you work out a way of doing it, let me know. That sounds hard, and maybe not necessary, because you've already got around the ambiguity in one way, so I don't know if you need both. And if you had a homodyne system in one sense, I think you'd be able to use the lack of phase ambiguity from the heterodyne part of the system to get over that. But I haven't really thought about it that much.
Interesting question. Yeah, could be a fun diagnostic. Very expensive. So yeah, cool.
Any other questions? Yeah. So in this case, it's not just that we're measuring the k vector of the wave; we're actually deflecting the probe wave a little. Oh, so the question is: is the probe wave actually being deflected as it goes through the plasma? Well done for spotting that.
I was going to have that as a question later on when we looked at some data, but as you pointed out, you've ruined the game. So I told you earlier that rays are always perpendicular to the phase fronts, right? And so as I'm drawing this, the rays are like this. Fine, the phase fronts are flat, but you can see here that if I drew those lines like this, I would start to get deflection. And so you will have shadowgraphy effects overlaid on top of your interferometry signal.
It turns out the modulation from interferometry is usually stronger, and so you see that more strongly, but if these deflections are very big you may actually just lose the light, because your lens will only be this big and your rays will exit it. Now, in general, interferometry is so sensitive that, although I've drawn this in a very exaggerated way, the phase fronts can still be almost perfectly planar and I'll still get really nice interferometry patterns; it won't be so distorted that the light will all spread outwards and I won't be able to do anything about it. So yeah, you're quite right, Mike:
in this picture we should have shadowgraphy and all sorts of things like that as well. Okay, any other questions? Otherwise we'll show some pictures of them.
I have a question. Yes, please. Because of the shifting of the wave fronts, is there a possibility of interference within the same wave front?
So like, could k1 interfere with itself if there's enough of a shift in a dense plasma? Yeah; in fact, that's what we were talking about with shadowgraphy, when I said we don't really want a coherent light source for shadowgraphy. So even if you take out the reference beam, the question is: can this shift so much that it actually interferes with itself? Yes, that happens.
And you can see that in shadowgraphy, and it's bad, because it's really hard to interpret. But there is a technique (I should have read up on this more before mentioning it) called phase contrast imaging, which is used with x-rays.
And that actually exploits the interference of the x-rays, which are just radiation like all this other stuff, with themselves, to make very, very precise measurements of sharp density gradients. So in general you want to avoid coherence in shadowgraphy, because it messes up your data and it's hard to interpret. But if you can do it very precisely, you can do some very nice techniques with it.
So it's not always a curse. In general, all of these effects will be overlaid on top of each other.
I've just been presenting them one at a time, but they're all present in the same system. Okay, so this is a very biased sample of interferograms, and it's biased in the sense that I just went through quite a lot of papers I've written and tried to grab examples because I was in a hurry, but hopefully some of these will be informative. I tried very hard to find some temporally heterodyned interferometry, and it's actually quite hard to...
find examples where they show the raw signal, because we have such good electronics for this stuff these days that mostly you just do the signal processing on the chip and output the result. So you don't really digitize the raw signal.
This is my best attempt so far. This was a HeNe laser beam, so that's a green laser beam, on ASDEX Upgrade. And this is from a paper from 2017, so it's relatively recent.
The HeNe is obviously green, but they use the heterodyne technique to produce a probe shifted by 40 megahertz. So that's the beat frequency there, and that effectively sets the temporal
resolution of this. And they actually did something even more complicated than we've discussed, where they heterodyned the system and then interfered the heterodyned probe with the heterodyned reference, which is very weird, and they did it with a quadrature system, where they shifted the two signals out of phase by 90 degrees (you'll learn about quadrature in the problem set), and then they digitized those two signals. And so what they saw was these signals here, and if you look at this carefully, which you can't do right now but I'll put the slides up later, you'll find that these signals are actually 90 degrees out of phase, which is really, really cool. And then they were able to process those together and get out the phase shift. And you can see the time on the bottom here is on a millisecond-ish time scale, so this is pretty fast for a tokamak.
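Quadrature itself is simple to sketch: with two copies of the signal 90 degrees apart, you can take a four-quadrant arctangent and unwrap it, which is how a phase excursion of many times 2 pi can be followed without ambiguity. A toy version, with a made-up phase history:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000)
phase = 6 * np.pi * t**2                 # assumed phase history, sweeps 0 to 6*pi

I_sig = np.cos(phase)                    # in-phase channel
Q_sig = np.sin(phase)                    # quadrature channel, 90 degrees shifted

wrapped = np.arctan2(Q_sig, I_sig)       # phase known only modulo 2*pi
recovered = np.unwrap(wrapped)           # stitch the 2*pi jumps back together
# recovered now follows the full multi-cycle phase excursion.
```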
And you can see that the phase shift is going through multiples of two pi, so they've resolved that ambiguity. They're saying: look, the density went up, and then it came down, and it went up again, and then went down. So they have some confidence that this is real.
So this was quite a nice example. Another example is from the pulsed-power world. This was on the Z machine at Sandia, where they actually show the raw data; well, not quite the raw data: what they show is a spectrogram.
So this is like doing a very short-time-window Fourier transform on your temporal signal and plotting what frequency components are present at each time. So if I take a slice at a certain time here, I can see a dominant frequency component down here, and you can see that dominant frequency component change in time. They've shifted this so the beat frequency is at zero, but in reality that would be gigahertz or something like that, and this would be a shift from that gigahertz beat frequency. And they've done a technique where they shift the beat frequency in each window by a different amount, which gives a much higher dynamic range. But effectively this is looking at an increase in phase that's chirped in time, and the time scale on the bottom here is nanoseconds: over about 100 nanoseconds they've measured a significant phase shift. Now, what they're doing with this technique is not actually measuring a plasma; they're measuring the motion of a conductor.
So this is photonic Doppler velocimetry, for the HED kids who've heard about that before. And they're doing that to measure all sorts of cool squishy-metal-type things, but exactly the same physics is at play, because that moving conductor just gives you a phase shift, and that phase shift could be density or it could be some moving conductor. So it's up to you afterwards to interpret what the data means. And they run out of bandwidth up here: at 25 gigahertz they can't sample any faster, because that's already a very expensive digitizer, which is why they have this clever technique that effectively aliases the signal. So it goes up on one side, and the beat frequency appears to go down on the other, and then when it hits this point here it starts going back up. They do the same trick several times, and by appropriately flipping and splicing these signals together they actually get a signal that just keeps going up and up and up, and then they can measure the motion of this conductor over a very long time scale.
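A spectrogram like theirs is just a short-time Fourier transform. Here's a minimal sketch on a synthetic chirp (the sample rate and the linear frequency ramp are made-up values): the dominant frequency in each window drifts upward in time, which is how a chirped phase reads straight off the plot.

```python
import numpy as np

fs = 10_000.0
t = np.arange(0, 1, 1 / fs)
f_inst = 500 + 1000 * t                      # assumed instantaneous frequency, Hz
sig = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)   # chirped beat signal

win = 512                                    # STFT window length, samples
hop = 256                                    # step between windows
peaks = []
for start in range(0, len(sig) - win, hop):
    seg = sig[start:start + win] * np.hanning(win)   # windowed segment
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(win, 1 / fs)
    peaks.append(freqs[np.argmax(spec)])     # dominant frequency per window

# peaks ramps from roughly 500 Hz toward 1500 Hz across the record.
```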
There are some very cool advanced techniques and electronics involved in all of this, but this at least is closest to the raw data, and again, for the p-set you'll be making your own raw data, so you can see what it looks like there. Any questions on these two temporally resolved techniques? In the ASDEX Upgrade example, do they have any kind of spatial resolution?
No, it's just a chord, so it's a laser beam through the plasma, and that's pretty typical for tokamak plasmas. One reason for that, which doesn't apply here, is that you often want to use microwaves, because the density is more appropriate for microwaves than for lasers, and that means it's actually quite hard to do imaging with microwaves: we tend to just have an antenna which launches microwaves and an antenna which collects them, so you tend to just have a line. With a HeNe that's not a limitation, but obviously they probably couldn't do a camera that resolves on this time scale; maybe they don't want to.
They certainly couldn't have a camera that covered the entire tokamak cross section on that time scale so I think they went for this sort of time resolved but just one point in space technique instead. We'll talk a little bit about how many chords like that you need in order to do some sort of reconstruction later on in the lecture. Other questions? Anything online? Okay, so these are examples of spatially heterodyned interferograms.
This is the case with no plasma. You see you've got these nice uniformly spaced fringes here and some of them are light that's constructive interference some of them are dark that's destructive interference. So the probe beam is going straight into the page like this and the reference beam is tilted at a tiny angle upwards and we've chosen that angle to give us this nice fringe pattern here because when we put a plasma in the way we have plasma flows coming to the left and the right and we can see that all these fringes are distorted.
You can see most prominently that the fringes all tick up in the center here, and that corresponds to an increase in the line-integrated electron density. You can also see there are regions where there are quite large fringe-shift distortions, around here.
These are actually four plasma sources on either side here. And you can see that some of the distortions are so large we've actually formed closed fringes again. So in that place we have violated the condition that k1 minus k2 has to be much, much larger than the spatial derivative of the phase.
We've effectively recovered, by accident, the homodyne system, because we weren't able to keep our fringe spacing close enough together. If we made the fringe spacing even closer, these closed fringes would go away and we'd lose that ambiguity, but we'd also be sacrificing our resolution of the electron density. So for an interferogram like this, I have very strong priors that the density is going to be higher here than here, and so when I'm doing the processing on this data I can just make sure the density goes up here instead of down, effectively making choices on that decision tree that we had before. And if you spend a little while processing these interferograms: this is the raw data, and this is the line-integrated electron density here.
And for this one the electron density is in units of 10 to the 18 per centimetre cubed. So technically the thing that you get out of this is line-integrated electron density, so that's per centimetre squared. In this system we have a lot of symmetry in the out-of-plane direction and we knew how long the plasma was in that direction, so we just divided by that length to get the line-averaged plasma density.
And you can see again that although we did have these homodyned regions here, where we have some ambiguity about the phase, because we had strong priors we were able to assign the correct electron density. And if we had decided incorrectly, if we had said it's going down, we would see a weird hole here that wasn't on the other side; so it also helps to have a bit of symmetry in your system, to check that you're assigning things correctly. So I won't go into the details of how you process these (there are some vaguely involved techniques), but you can take data like that and get out some really nice pictures of the electron density in your plasma. So that was quite a nice one: you can see all the fringes are still roughly parallel, they only move a little bit. Here are some interferograms with slightly more twisted fringes.
You can see the background fringes are like this in this image, and like this in that image; it's just how it was set up in the two experiments. And this is an example I used earlier of a B-dot probe sticking into a plasma with a bow shock forming around it, and you can see strong distortions of the interference fringes, especially very close to the bow shock, where in fact the refraction of the rays by the density gradients is so large that they're lost from our imaging system. So we don't have interference fringes here: they've been refracted out of our system, and we can no longer do interferometry. So when the density gradient gets too large, it's very hard to do interferometry.
And again these images were processed and you get out nice pictures of the bow shock in these two cases here. And this paper was comparing bow shocks with magnetic fields aligned with the field of view and perpendicular to the field of view. And we have very different bow shock geometries there.
Those were quite complicated, but probably the most complicated one I've ever seen traced was this one by George Swadling, 2013. This is a 32-wire imploding aluminium z-pinch. There's a scale bar up there which is actually wrong: it's labelled as a centimetre, and it should be a millimetre. No one's ever noticed that before. These are the positions of the 32 wires here, and there's plasma flowing inwards, and as the plasma flows inwards it collides with the plasma flows from adjacent streams and forms a network of oblique shocks.
So these are two wires here; the first oblique shock forms out of the plasma flows here, and then these two plasmas interact and form another oblique shock structure, and then these two interact and form another oblique shock structure, and there may even be a fourth or a fifth generation in here. So this is a complete mess, right, this is extremely complicated. But because the interferogram is very high quality, with a great deal of patience you can follow each interference fringe all the way around, and you can work out its displacement, and you get out this rather nice map of the electron density. And we see that there is still sufficient spatial resolution, despite the fact that we're using interference fringes which limit our spatial resolution, to resolve these very sharp shock features here.
So this is a nice piece of work. Okay, and then my final example for this batch here was actually showing something we've already discussed. This is where we had plasma flows from the left and the right colliding here.
and as opposed to seeing interference fringes, we just see this dark void. You can see the fringes are beginning to bend downwards here, which would indicate enhanced electron density, but because the density gradients are too large, the probe beam has been refracted out of our collection optics, so we don't get to see anything. Now you could say, well, perhaps it's because the plasma is too dense: if we get to the critical density, the laser beam going through the plasma will be reflected, and that could be the case. But that density is really, really high and very hard to reach, whereas we know that in this system the density gradient needed for refraction is very easy to reach. And so we're pretty certain that in these experiments it was the density gradient, rather than the absolute density, that caused us to lose our probing in the centre here. There's not really much you can do about that. You can go to a shorter wavelength, if you've got one, so that your beam doesn't get refracted so much. You can use a bigger lens so you collect more light, but there's only so big a lens they'll sell you.
And so, you know, sometimes you just have to deal with the fact that your data has got holes in the middle. When this image was processed in the paper, they just masked this region of the data off and said: we don't have any data.
That's really the only thing you can do. Okay. Any questions on spatial heterodyne interferometry?
Yes. So I think as you sort of alluded to, just to make sure that I understand, in order to extract useful information from one of these pictures, you need to be able to trace each fringe from edge to edge of the picture. And if you lose that, you're in trouble. So then how do you, I mean, is this all done with computer image processing for your ability to, to say, continually trace all of these fringes and...
So you don't have to be able to trace each fringe from side to side, but you do, ideally you'll be able to assign, say you numbered each of the fringes from the bottom to the top of the image here, you'd like to be able to assign numbers to the fringes in the image without a plasma, so you need a reference interferogram, so you always have to have two pictures, right? because that reference interferogram gives you the background signal that you're effectively modulating here. So like in temporal heterodyne interferometry, you'll have to measure the beat frequency for some time before the plasma arrives.
Okay, the trouble comes when you actually can't uniquely allocate each of these fringes to a fringe in the reference interferogram. You'd like to. If you can't do that, then there's some constant offset in the density that you can't get rid of. So that's ambiguous.
So you can say that my density is going to change from here to here by 10 to the 18, but it may also be 10 to the 18 plus 10 to the 18, or 10 to the 17 plus 10 to the 18, something like that. So there's some ambiguity there. If the fringes are broken, so in this case, well actually this one is simpler to see, in this case some of the fringes on this side here you can't actually follow through to the other side, but you can make some pretty good guesses in the absence of the plasma, because there they'll be nice straight lines and you can trace them across like that.
So for these complicated interferograms, the best process we found is grad students. But lots of people say, oh, I'm going to write an image processing algorithm. And indeed, I've seen students from the PSFC who were doing a machine learning course who tried to do this. The trouble is humans have incredible visual processing. So when you look at this, you can work out what all these lines are perfectly.
Every single algorithm I've seen that tries to do this automatically starts getting hung up on the little fuzziness on this line here, and it's like, oh, I think that's really important, so I'm going to spend all my time trying to fit that perfectly. So there may be techniques which can do it automatically, but at the end of the day it really requires a human to look at this region where there are no fringes and say, ah, we've lost the fringes because I know that the density is really high there. Or in fact, in these regions here, it's hard to see, but there are actually some very strong shadowgraphy effects: there's brightness in this region here that looks like additional interference fringes, but if you've looked at these enough you'll know that it's to do with shadowgraphy.
So it seems to be very hard to train a computer to do it. I'm not saying it's impossible, I just haven't seen a realistic program yet. If you have an interferogram where all the fringe shifts are relatively small and well behaved, you can do this using Fourier transforms. So there are techniques which are Fourier transform based, which basically do a wavelet transform: a small-region Fourier transform that looks at the local frequencies there.
And that's like those spectrograms that I showed you. It's the 2D equivalent of the spectrograms, where instead of the spectrum at each time, you want the k spectrum at each position. Those do an okay job, but as soon as there starts being any ambiguous feature, or even some relatively large fringe distortions, they also fall over really badly. So it seems like a hard problem to automate. That was a long answer to your question, I'm sorry. Okay, other questions? Yeah? So with the image processing, it seems like choosing which areas to mask and which areas not to mask is important. Yes. Are there any cut and dried rules for that, or is it all intuition? At least the way that I do it, it seems to be very intuitive. Yes, exactly.
You sort of have to know what you expect to see and then sort of work with that. Yeah. Other questions? This side of the room is much more questioning than this side of the room. Okay.
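As an editorial aside, the Fourier-transform fringe analysis mentioned above can be sketched in one dimension. This is a minimal illustration, not the actual processing pipeline from any of these papers: a synthetic fringe pattern with a known smooth phase bump is analysed by isolating the positive carrier sideband in Fourier space and unwrapping the recovered phase. The carrier frequency, the Gaussian phase bump, and the bandpass limits are all assumptions of the demo.

```python
import numpy as np

# Synthetic 1D interferogram: background fringes at carrier frequency f0,
# distorted by a smooth "plasma" phase bump phi(x).
x = np.arange(1024) / 1024
f0 = 40.0
phi = 3.0 * np.exp(-((x - 0.5) / 0.1) ** 2)
signal = 1.0 + np.cos(2 * np.pi * f0 * x + phi)

# Takeda-style analysis: FFT, keep only the sideband around +f0,
# inverse FFT to get a complex signal whose angle is the total phase.
spec = np.fft.fft(signal)
freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
sideband = (freqs > f0 / 2) & (freqs < 2 * f0)   # crude bandpass (assumed)
analytic = np.fft.ifft(spec * sideband)

# Unwrap, subtract the carrier ramp, and remove the (ambiguous) constant offset.
recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
recovered -= recovered[0]
```

This only works while the fringe distortions stay small and smooth enough that the sideband never overlaps the DC term, which is exactly the failure mode described above for strongly distorted interferograms.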
Next one. From a practical standpoint, how many time points do you resolve with a diagnostic like this? Yeah, so, the question was what's the sort of temporal resolution of something like this. If your plasma is only lasting for a few hundred nanoseconds, then it depends whether you can afford a camera that can take more than one picture in 200 nanoseconds. These are taken with off-the-shelf Canon DSLR cameras from around 2006, and there the shutter is actually open for one second, but the laser pulse is only a nanosecond long, and that sets the time resolution. Yeah.
So you can get one picture in an experiment like this, and then you do the experiment again, and you hope it's reproducible enough, and you move the laser later in time, and you take another picture, and you keep doing that. Yes, it's hard. Yeah. This could be a silly question, but what are those circular fringes that are here? Yeah, so this is what I was saying; they're very light in the background.
These are the diffraction patterns of dust spots. There's dust on an optic somewhere; it creates a diffraction pattern, it's out of focus, and it modulates the intensity of the laser beam. It's another thing that makes it hard for an automated algorithm to work. We tend to normalize those out, but I'm showing you the actual raw data from the camera, I haven't done anything to it. You can do tricks to get rid of those: because it's a slowly changing effect, you can do something like a low-pass filter. Yeah. Oh, in these sorts of papers, how is uncertainty communicated? Yeah, so we tend to estimate something like: your uncertainty is going to be about a quarter of a fringe shift, and I'll talk about what that means in the next bit of the chalkboard talk. You can estimate uncertainty by saying, how certain are you that the fringe has shifted up this far or this far, so there's some pixel uncertainty, and also how good you are at assigning: this is the lightest part of the fringe, this is the darkest part. Effectively you're looking for the light parts and the dark parts, but there are several pixels which will be equally light because they're near a maximum or a minimum. So you tend to whack a relatively high uncertainty on it and call it a day. In this field, you know, if we get measurements right to within about 20% we're pretty happy, so this is very different from other parts of plasma physics. Okay, are there any questions online? I'm sorry, I can't see hands online at the moment because I hid that little bar and I don't know how to get it back. So if you have put your hand up or something like that, I can't see it.
I assume I'm still on Zoom somewhere. No idea how to get back there. Escape. Ah, okay. There was something in the chat.
No questions here. All right. Well, that was easy. Thank you.
I think we'll go back to the chalkboard for the moment then. I've got a few more pictures, depending on how we do for time. Yes, I should look for that remote control at some point. So maybe just a little bit more practical stuff.
You know, one thing that really matters is your choice of probe wavelength. I've been talking a lot about frequency; it turns out that a lot of the time people quote their frequencies in terms of wavelengths, and obviously they're very intimately linked. So if you remember, we had our phase shift: delta phi = -(omega / (2 c n_critical)) times the integral of n_e dl, like that.
We often define a quantity called a fringe shift, which I'm going to write as capital F, and a fringe shift is just a shift of an intensity maximum or minimum, in time or space, that makes it look like another intensity maximum or minimum. Having said that out loud, I realize it's pretty incomprehensible, so let me draw the picture. Let's say that this is space or time, doesn't really matter, and you've got some intensity here and some background fringe pattern like this.
So this is the beat frequency that you're measuring even in space or time in the absence of any plasma. And then say that you have some plasma signal. I'm going to draw this wrong. Give me a moment. Okay, so in the presence of a plasma.
your fringe pattern has been distorted. Oh I got that wrong place. There we go, I did it.
So this fringe here you would think should line up with this one but in fact it's been shifted all the way so it lines up with this one instead. So this is the case with plasma and this is the case with no plasma. And that is the definition of one fringe shift.
So it's effectively delta phi over two pi. We're just counting the motion of minima and maxima here. And we can write that in practical units as F = minus 4.5 times 10 to the minus 16, times lambda, times the line integrated electron density.
So people often write this quantity here like this, I think because it looks nicer on one line in an equation, right, you don't have that big integral sign. But effectively it just means the electron density averaged over some distance L, like that. And all of these units here are SI. And so that means you can then work out what your line integrated electron density is.
The line integrated electron density, the integral of n_e dl, is 2.2 times 10 to the 15 over lambda, times the number of fringe shifts, and that's in units of per metre squared. And so now I'm just going to give you, for different lambda, what this number actually is, so we can have a look at some different sources. So I have a little table where I have the wavelength of the source, and then n_e L for F equals one, in units of per metre squared.
I'm just going to go down a list of sources. If, for example, we're in the microwave range here, this might be a wavelength of 3.3 millimetres, so that's a 90 gigahertz source, so this is for relatively low density plasmas. And that density here would be 6.7 times 10 to the 17. We could jump quite a bit and go to a CO2 laser. This is a nice infrared laser, and you can make very powerful CO2 lasers, so they're quite popular for some diagnostics. Yeah, Alcator had a CO2 interferometer.
So this is 10.6 micrometres here, so I've dropped a couple of orders of magnitude from the microwaves, and you can see the densities that we're measuring here have gone up by a similar amount. And then something like a neodymium YAG laser, this is the sort of thing I use. If we use the second harmonic, that would be 532 nanometres. That will make those beautiful green images that we looked at, and that is 4.2 times 10 to the 21 here.
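As a quick numeric sketch of this table, using the practical-units formula above (the small differences from the quoted values come from rounding the 2.2 constant):

```python
# Line-integrated electron density for one fringe shift,
# int(n_e dl) ~ 2.2e15 / lambda in m^-2 (all SI).
sources = {
    "90 GHz microwave": 3.3e-3,      # wavelength in m
    "CO2 laser": 10.6e-6,
    "Nd:YAG 2nd harmonic": 532e-9,
}
ne_l_per_fringe = {name: 2.2e15 / lam for name, lam in sources.items()}
for name, ne_l in ne_l_per_fringe.items():
    print(f"{name}: {ne_l:.1e} m^-2")
```

The trend is the point: three orders of magnitude in wavelength buys you roughly three orders of magnitude in the density range the instrument is matched to.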
So if I see a fringe shift of one, and on those images that I showed you first the fringe shift was maybe two or three fringes, each of those corresponds to a line integrated electron density of 4.2 times 10 to the 21. So just by looking at the image and eyeballing it, you can start estimating the line integrated electron density, and then if you have some idea of how long your plasma is, you can get a rough estimate of the electron density itself. I think what's very interesting is this quick calculation: I worked this out before the lecture, and I hope I'm right, because I was very surprised by it.
If we take, for example, a wavelength of 1064 nanometres, so this is an infrared Nd:YAG laser, and we take a length of 10 to the minus 2 metres, so a centimetre. This is the sort of experiment that I might do. One fringe would then correspond to a density of 2 times 10 to the 23 per metre cubed. Okay, not that interesting so far. But then you ask yourself, what is the critical density here?
This turns out to be about 10 to the 27 per metre cubed. So what is the refractive index? We've been saying that it's 1 minus n_e over 2 n_c, and this is a very small correction: it's 1 minus 10 to the minus 4. So all of these interferometry effects we're looking at are to do with changes in the refractive index on the order of 10 to the minus 4 or so. I kind of find that remarkable: we're able to measure very, very small changes in refractive index. It's not like n ever gets close to zero or two or something bizarre like that. Anyway, I thought that was interesting.
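A quick numerical check of that calculation, straight from the definition of the critical density, n_c = eps0 m_e omega^2 / e^2 (SI throughout):

```python
import math

# Physical constants (SI)
eps0, m_e, q_e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8

lam = 1064e-9                              # infrared Nd:YAG probe
omega = 2 * math.pi * c / lam
n_crit = eps0 * m_e * omega**2 / q_e**2    # critical density, ~1e27 m^-3

n_e = 2e23                                 # one fringe over 1 cm, m^-3
refr_index = 1 - n_e / (2 * n_crit)        # ~ 1 - 1e-4
```

So the interferometer is resolving a refractive index that differs from vacuum by roughly a part in ten thousand.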
Okay so you will pick your source to match your plasma right. If you're doing low density plasmas then if you use a 532 nanometer interferometer you won't see any fringe shift. The fringes won't move at all.
You won't be able to measure any plasma, so you need to use a long wavelength source that is more sensitive to those lower densities. Conversely, if you try to use a long wavelength source on a nice dense plasma, first of all the beam may just get absorbed or reflected because it'll hit the critical density, or it might get refracted out. And even if it doesn't do any of those things, you'll have such huge phase shifts that you won't be able to meet that heterodyne criterion, and you'll just have very complicated fringe patterns and no chance of processing them. So you've got to pick very carefully what sort of source you have, and there are other ones out there as well, but I just picked a range that might be relevant to some of the people in this room.
Any questions on that before we move on? So I want to talk about a few extensions to this technique, and the first one we're going to talk about is called two-color interferometry. There are two reasons to do two-color interferometry.
One is to handle vibrations and the other one is to handle neutrals. So let's have a little talk about vibrations first of all. Your system is made up of lots of mirrors and other optics and there are vibrations everywhere.
And so all these mirrors and optics will be vibrating slightly, which means their path length will be changing by a small amount. How big a deal is this? Well, if we imagine that you've got some mirror, for example, here, and we bounce our beam off it like that, and the mirror is oscillating with an amplitude little l here, we're going to get a phase change very simply just by looking at the distance that this moves.
on the order of 2 pi little l upon lambda. So if the amplitude of these vibrations is on the order of the wavelength, you're going to get a phase shift of 2 pi, which is actually already pretty huge, that's one fringe shift. But this is a tiny number: if I'm working with green light, this means I'm sensitive to vibrations on the order of 532 nanometres.
Right, so this is extremely hard to avoid. You can't get rid of vibrations that small very easily. So you're going to have big problems with these vibrations and if your whole tokamak is vibrating, so you've got all these cryopumps and neutral beams and excited things going on, this is going to be an absolute nightmare.
It turns out not to be a huge nightmare for my stuff, because although we're very sensitive to vibrations, the timescale over which our experiment takes place is the nanosecond of the laser pulse, and mechanical vibrations aren't at gigahertz frequencies. You don't have mechanical vibrations that fast, so we can ignore them.
But if you have vibrations at kilohertz, or even hertz from people walking around, this will ruin your nice more-steady-state experiment. And what we want to notice is that this phase shift in particular, as I've just alluded to, is large for small wavelengths. And this already suggests the beginnings of the scheme that we're going to use, which is called two-color interferometry, to deal with this.
So the solution is we run two interferometers down the same line of sight with two different wavelengths, okay. One of these wavelengths is short, so on a tokamak that could just be a HeNe laser beam: it goes straight through the plasma and doesn't see it at all, because the plasma's not dense enough, but that short wavelength is very sensitive to vibrations. That phase shift, phi_vib shall we call it, for vibration, is proportional to one over lambda.
So this will be a very good diagnostic, not of the plasma, because it won't see the plasma, but of the vibrations, okay? And the other one is a long wavelength, which on a tokamak might be something like a microwave source. So with wavelengths differing by two or three orders of magnitude, that long wavelength will be very sensitive to the plasma, because phi_plasma is proportional to lambda here. And so when you measure the overall phase with these two devices, well, I'm not going to do it explicitly, you're going to have two sources of phase: one of them is the standard plasma term, from the line integrated n_e, and the other one is the vibration term due to l. So you'll have two unknowns, and now you've got two measurements, and so you can solve the system of equations. And if you're very, very quick, you can even use the short wavelength to feed back onto your mirrors with piezos and stabilize the mirrors, so you can do vibration stabilization.
feedback mirrors. So you can use this very fast short wavelength interferometer to vibration stabilize all of your mirrors. I don't know if anyone's actually done this, it's in Hutchinson's book so presumably someone tried it. Sounds like a lot of work but I guess it could be very very effective.
So this is maybe more of a question mark rather than something that everyone does. It's pretty clear that if you digitize both these signals you should be able to work out what was vibration and what was plasma. Of course, if the vibrations are huge, it might still ruin your measurement, so it might be worth doing this feedback system.
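The "two measurements, two unknowns" step can be sketched numerically. Everything here is illustrative: the wavelength pairing, the made-up plasma and vibration values, and the model F = -4.5e-16 * lambda * int(n_e dl) + dL / lambda, i.e. the plasma fringe shift plus a path-length change dL expressed in fringes.

```python
import numpy as np

lam_short, lam_long = 633e-9, 10.6e-6   # e.g. HeNe + CO2 (assumed pairing)
ne_l_true = 1e21                        # line-integrated density, m^-2 (made up)
dL_true = 200e-9                        # mirror path-length change, m (made up)

def fringe_shift(lam, ne_l, dL):
    # Plasma term scales with lambda, vibration term with 1/lambda.
    return -4.5e-16 * lam * ne_l + dL / lam

# "Measured" fringe shifts at the two colours
F_short = fringe_shift(lam_short, ne_l_true, dL_true)
F_long = fringe_shift(lam_long, ne_l_true, dL_true)

# Two equations, two unknowns: a 2x2 linear solve recovers both
M = np.array([[-4.5e-16 * lam_short, 1.0 / lam_short],
              [-4.5e-16 * lam_long, 1.0 / lam_long]])
ne_l, dL = np.linalg.solve(M, np.array([F_short, F_long]))
```

With real digitized signals you would do this solve at every time sample; the wider the wavelength separation, the better conditioned the system is.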
So that's one use for two-colour interferometry. In case you were wondering about the name: the two colours is because we have two wavelengths, and we tend to associate wavelength with colour. Okay, any questions on that?
Okay, the second thing that I want to deal with is neutrals. So far our refractive index has been derived assuming a fully ionized plasma, right? And so in that fully ionized plasma we just have ions and we have electrons.
Now these have associated plasma frequencies: the ion plasma frequency, which is much, much less than the electron plasma frequency. And so when we write down the refractive index, this is in fact n squared, we just have n squared = 1 minus omega_pe squared over omega squared. There's technically another term in here, minus omega_pi squared over omega squared, but because the ions are so much more massive, we always just ignore this term, and we subtly drop the subscript on the plasma frequency here.
So this is the refractive index we've been using so far. However, when you've also got neutrals, you've got some density of these as well.
And I'm going to write that n_a, the a standing for atoms. Okay, so let's go to that.
Now neutrals are much more complicated in fact because they have atomic transitions inside them and those atomic transitions change your refractive index. If you're close to an atomic transition you have a very different effect like absorption than you do if you're far from an atomic transition. And so your spectra or your plot of refractive index for your neutrals here against frequency will look like some sort of spiky minefield of lines, something like that. You know, it will depend exactly on the atomic physics.
And in general we can write down this refractive index as equal to 1, always a good start, plus 2 pi e squared over m_e, ignore that, it's just some constant we're normalizing by, times a sum over every single atomic transition in this neutral gas. So first of all we sum over all of the atoms in state i here, so for example there may be atoms which are partially ionized, they've lost one electron, and so they have a different refractive index here. So all of the atoms in a certain state i, and then all of the transitions between that state i and some other state k: each term is f_ik divided by the transition frequency squared, omega_ik squared, which is one of these lines here, minus omega squared.
So in this case f_ik is the strength of one of these transitions, which determines how likely it is to happen and so how strongly it shows up, and omega_ik is the frequency of one of these transitions. I misspoke earlier when I said this was to do with ionization; it's just to do with excitation, so whether your atom is in its ground state or some other state. Now this formula is intensely complicated, right, and you can spend a very long time doing quantum mechanics calculations to try and work out both of these terms. And of course, as soon as you go to something above hydrogen, it becomes very complicated.
Even for hydrogen, it's pretty complicated. But above hydrogen, it's extremely complicated, because there are multiple electrons interacting here. So you don't actually stand a chance of solving this directly. Your best bet is the fact that if you look at some part of the spectrum, or some part of the refractive index, that's away from one of these lines, so this is for omega not equal to any of these transition frequencies, there's a general formula that works pretty well, which says the refractive index is just equal to 1 plus 2 pi alpha n_a, where this n_a here is still the number density of neutrals.
and this new alpha is a quantity called the polarizability, which is easy to calculate and also very easy to measure. So you can measure this for your different gases and so for example here's a little table of different gases if we have it for helium or hydrogen or argon here. there's 2 pi alpha in units of meters cubed.
The 2 pi is just some normalization constant that comes from somewhere else in the theory. So you know we always quote it with the alpha. I'm just going to quote you 2 pi alpha here.
But this number is something like 1 times 10 to the minus 30 for helium, 5 times 10 to the minus 30 for hydrogen, and 1 times 10 to the minus 29 for, I wrote this as argon because I misread my notes, it's actually air. So there we go.
And then just as a little calculation here, you know, air at standard temperature and pressure, the number density is roundabouts 2.5 times 10 to the 25 per meter cubed. And so therefore the refractive index of air is about 1 plus 2.5 times 10 to the minus 4. So once again the change in refractive index is very small compared to one for the neutrals. It's on the same sort of order as the change of refractive index you get for a similar sort of plasma, but crucially the refractive index is always greater than one for neutrals and is always less than one for a plasma.
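That little calculation is quick to check from the quoted 2*pi*alpha values; the numbers below are the ones from the table above, and the number density is the STP value just quoted:

```python
# n - 1 = 2*pi*alpha * N_a, using the quoted 2*pi*alpha values (m^3)
two_pi_alpha = {"helium": 1e-30, "hydrogen": 5e-30, "air": 1e-29}
N_a_stp = 2.5e25   # neutral number density at STP, m^-3

n_minus_1 = {gas: tpa * N_a_stp for gas, tpa in two_pi_alpha.items()}
# air comes out around 2.5e-4; note the sign is positive,
# whereas a plasma's (n - 1) is negative.
```

That sign difference between neutrals and plasma is what the two-colour separation in the next part exploits.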
This is our first hint at how we're going to use two-color interferometry here. So let's have a look at how to use two-color interferometry to determine both the number density of the neutrals and the number density of the electrons. You might come across this scenario quite a lot: if you're doing low temperature plasmas you always come across it, and even if you're doing something in a tokamak, maybe there's a region at the edge where there's a large number of neutrals, and your beam has to go through that region, and you want to ignore it, you just want to measure the core, but you're still actually picking up a big phase shift from those neutrals at the edge. So this is a big problem, and it's a hard one to solve. So let's have a look at this. Remember that when we derived the phase shift delta phi, I did it first of all just in terms of a generic refractive index, before specifying it to be a plasma, and that phase shift was just delta phi = (2 pi / lambda) times the integral of (n minus 1) dl, so this is just reflecting the change in the effective path length that the probing beam sees, okay? And we've also said that d alpha / d lambda is equal to zero, which is mathematically saying that in this polarizability model we assume the polarizability doesn't depend on wavelength, as long as we don't go too close to one of these transitions: so the polarizability here is the same as the polarizability here, and the same as the polarizability here. Okay, so then we'll end up with a total fringe shift on our interferometer
of F = minus 4.5 times 10 to the minus 16, times lambda, times the integral of the electron density. That's the plasma component here, and we will also have a term which is plus, notice the difference in the sign here, plus 2 pi alpha upon lambda, times the integral of n_a dl. So the two terms shift the fringes in different directions, and they also have a different dependence on lambda.
And this is key, because, as we saw with the vibrations, we have different dependencies on lambda, so we can use a two-color technique to get around this. Phi_plasma is big for long wavelengths, and the phase shift from the neutrals is big for small wavelengths.
And again we've got two unknowns: the line integrated density of the electrons and of the neutrals. If we have a two-color technique, we have two equations, and so we can solve all of that, and I'm not going to write down the algebra now, it's quite boring, but you can apparently work out uniquely what the electron and neutral densities are. I'll show you that this doesn't generally work in practice: in practice you often end up with negative predictions for the density of both the neutrals and the electrons.
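In the idealized constant-alpha model, that algebra is just another 2x2 solve. This sketch uses made-up densities, an assumed 532/355 nm harmonic pair, and hydrogen's quoted 2*pi*alpha of about 5e-30 m^3 taken as wavelength-independent:

```python
import numpy as np

lam1, lam2 = 532e-9, 355e-9        # two probe colours (assumed)
two_pi_alpha = 5e-30               # m^3, assumed the same at both wavelengths
ne_l_true, na_l_true = 2e21, 1e22  # line-integrated densities, m^-2 (made up)

def fringe_shift(lam, ne_l, na_l):
    # F = -4.5e-16 * lam * int(n_e dl) + (2*pi*alpha / lam) * int(n_a dl)
    return -4.5e-16 * lam * ne_l + (two_pi_alpha / lam) * na_l

F1 = fringe_shift(lam1, ne_l_true, na_l_true)
F2 = fringe_shift(lam2, ne_l_true, na_l_true)

# Two colours, two unknowns: invert the 2x2 system
M = np.array([[-4.5e-16 * lam1, two_pi_alpha / lam1],
              [-4.5e-16 * lam2, two_pi_alpha / lam2]])
ne_l, na_l = np.linalg.solve(M, np.array([F1, F2]))
```

If the true alpha differs between the two wavelengths, as it does near a resonance, this same solve happily returns biased or even negative densities, which is the failure mode discussed in the lecture.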
And this tends to be, as far as I can tell from reading the literature, because when we did this approximation of n is about 1 plus 2 pi alpha here, we assumed that alpha is constant. But it doesn't have to be constant: it could change with wavelength. And if we're using two different wavelengths and there are two different values of alpha, then that will cause chaos with your two equations and two unknowns. Alpha is actually relatively hard to pin down in some sources; you can very rarely find it at the exact wavelength you're working at.
You may also have horrifically ended up using one of your wavelengths halfway up one of these resonances, or even worse, at the peak of one of the resonances. And if you did that, your whole model is completely off. I think this is what causes this, and I'll show you some data I took where we predicted negative densities, and I'll talk a little bit about that as well. Obviously negative densities are unphysical, so we thought that was probably wrong, but we published it anyway, because other people were doing the technique and not pointing out that they had negative numbers, and we thought it'd be nice to point out that we knew it was wrong.
So okay so that was a quick roundup of two-colour interferometry and I have some quick slides after this showing some examples but I'll just pause here and see if there are any questions. Yes? Does this technique work better for certain levels of ionization?
Like is there a spectrum where it works better than others? The question was does this... sorry go on. If you're using like a really weakly ionized plasma it's really not a good option as opposed to a more...
Great, really good question. So the question was does this work better for different levels of ionization in a plasma? So you might be thinking to yourself if I have a very very weakly ionized plasma then it may be very hard to measure the electrons over the sort of overwhelming change in refractive index from the neutrals. And that's true, it's going to be hard.
But if you look at this equation here, this is the thing you're measuring, the fringe shift, and you can choose your two wavelengths to optimize the sensitivity of one of them to the electrons, and the sensitivity of the other one to the neutrals. So you're going to need some widely spaced wavelengths. If you try to do it with just two different harmonics from the same laser, that's going to be really hard. But if you have a microwave interferometer and a HeNe laser beam, like they do on tokamaks for vibration stabilization, that will work much better.
The difficulty there is then you have two completely different detection techniques and so it's not like easy to compare these two, but that's what you probably want to do if you're dealing with like 1% ionization or something like that, you might have to do this technique. Yeah okay, are there other questions? Anything online? What physically is the polarizability?
Is that like the electric polarizability of the medium? Yeah, the question was what physically is the polarizability. This polarizability is very strongly related to how the electron wave functions are distorted by the electric field of the electromagnetic wave. Okay.
Yes, which is why, when you get close to a transition and the frequency of the wave becomes resonant with some atomic transition, this polarizability changes dramatically, and instead of just passing through the medium, the electromagnetic wave gets absorbed. I'm saying it in a very classical way, but of course you need to start doing quantum mechanics if you want to describe absorption.
So the wave field is inducing a small dipole moment? Yeah absolutely the wave is inducing a dipole moment and that is slowing down the wave, slowing down the phase of the wave. In a plasma remember it always speeds up the phase of the wave. Okay any other questions?
Maybe I should have saved all my pictures and avoided having to buy this thing twice. Can you see this online? Yes. Okay, perfect. Okay, it's showing up slowly here.
So this was a set of experiments we did with a very sexily named but boring device called a plasma gun, which is actually just a bit of coax cable where you've chopped off the end. You pulse it with some current, and the current flows up the inner conductor, across the chopped-off plastic insulator, and back down through the outer conductor, and as it flows across here, it sends plumes of plasma out. And they're moving at, you know, 10 kilometers a second, but this only works in vacuum, so it's not a very good gun. Anyway, this was a fun object to study because there was a grad student using it for their PhD thesis, and we put it on our experiment and did two-color interferometry. These interferograms were made using an Nd:YAG laser, a neodymium-doped YAG.
We used the second harmonic at 532 nanometers, which shows up as green here, to do one of the measurements, and simultaneously, along the same line of sight through the same bit of plasma, we used the third harmonic at 355 nanometers. That's in the ultraviolet, so you can't see it by eye. You might be asking, why does it show up as orange? This is what happens when you remove the ultraviolet filter from your off-the-shelf DSLR camera.
The pixels get confused and render it as orange. Obviously it isn't orange, but the camera can see ultraviolet, and it has to render it as something on the screen. So anyway, this is 355, and these are the fringes before the plasma was there, and these are the fringes after, and you can see the fringe shift is very small, just tiny little shifts here and here.
So this wasn't a very high plasma density, but what we were able to do is infer the phase shift for these two interferograms, like this, and then we combined the two using the simultaneous equations. We also did an Abel inversion, which I'll talk about in the next lecture, and we got out the electron density, and this looks quite reasonable.
We get up to about 10 to the 18 here, and it falls off nicely in various directions, but we also got a prediction of the neutral density here. These red regions are fine, they're positive numbers, but right in the middle here there's a big negative number. In fact, it's so negative that we're predicting many more absences of neutrals than we had electrons. It's clearly complete nonsense.
And so we went back to some of the textbooks that explain this technique for measuring neutrals, and we found that the example data they were showing also had negative numbers in it; it's just that they didn't bother to mention that this was a huge problem. We think this is a huge problem. So I'm a little bit baffled by the fact that people will say in a textbook that this technique can be used to measure neutrals, when in reality it seems to be really, really tricky to do properly.
And I think the problem is the quality of the polarizability data that we have. We were trying to use the polarizability data assuming it was valid at 532 and 355 nanometers, but it was derived in the lab by some group in the 80s, who published their paper on it, and they did it at 10.6 microns in the infrared. So there's no really good reason to believe the polarizability is the same.
But it's really hard to get hold of this data in a consistent fashion. So if you're going to try to use this technique, I think you should be very skeptical about the results, especially if you start seeing negative numbers. Okay, so we're going to go a couple more minutes.
I'll just take any questions on this, and then we'll do Abel inversion in the next lecture. But any questions? Yes? How do you go about measuring the polarizability parameter? Well, you could use interferometry to measure the polarizability in the absence of any electrons, right?
So then you'd know exactly what you were measuring. So if you're able to puff some gas in...
You need some measurement of the number density as well. But, you know, you can imagine if you've got a gas cell at a certain pressure and at room temperature, then from the ideal gas law you know the number density inside that gas cell, you know the size of the volume, and you could do interferometry on that volume, for example, and that would give you a measurement of the phase shift. And then you could back out what the polarizability must be.
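As a rough numerical sketch of that calibration idea: all the cell parameters and the "measured" phase below are made-up placeholders, and the refractive-index model N − 1 = α n/(2ε₀) is the simple linear one, assumed valid well away from any resonance.

```python
# Sketch: back out the polarizability alpha from a gas-cell phase measurement.
# Assumptions: ideal gas, a single pass of length L, and the linear model
#   N - 1 = alpha * n / (2 * eps0),   dphi = (2*pi/lam) * (N - 1) * L.
# All cell parameters and the "measured" phase are illustrative placeholders.
import math

K_B = 1.380649e-23        # Boltzmann constant [J/K]
EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]

P = 1.0e5          # cell pressure [Pa], roughly 1 atm
T = 293.0          # room temperature [K]
L = 0.10           # cell length [m]
lam = 532e-9       # probe wavelength [m]
dphi = 120.0       # "measured" phase shift [rad] (placeholder)

# Ideal gas law gives the number density in the cell.
n_gas = P / (K_B * T)                              # [m^-3]

# Invert the phase model, dphi = pi * alpha * n_gas * L / (eps0 * lam):
alpha = EPS0 * lam * dphi / (math.pi * n_gas * L)  # [C m^2 / V]
```

With real data you would presumably repeat this at each of your probe wavelengths, since the underlying problem discussed above is precisely that the polarizability can differ between wavelengths.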
Maybe we should have done that here, but we didn't. So yeah. So it's just like an additional calibration step? Yes, yes, exactly.
So I think it's a doable calibration, it's just quite hard. Whereas for the electron density, that is all in terms of fundamental parameters, like the electron charge and the electron mass, and you're like, okay, we know what those are. So when you apply those formulas, you have absolute confidence that when you measure the phase change, you know what the electron density is.
It's just that for this, the theory is a little bit murkier. Yeah. Other questions? Yeah.
...good data on individual species, and you know you have a certain ratio in your plasma, is there any reason to think that you couldn't just do an average? Oh, if you think beforehand you somehow know, for some reason, the ratio of the electron density to the neutral density, or? Oh sorry, like two different neutral species. Yes.
And you have good... Oh, we didn't even get into that, that's a nightmare, because then you'll have two different polarizabilities and even more transitions that you're trying to miss. Okay, right, so we're already like, oh god, let's stay well away from any of these transitions, but if you have two species you'll have an even harder time finding a region with no transitions, so it'll be very hard to find a good source. Yeah, any questions online? All right, thank you very much everyone, see you on Thursday.