Transcript for:
Understanding Quantum Mechanics and Schrödinger Equation

We're going to move now into actually solving the Schrödinger equation. This is really the main meat of quantum mechanics, and in order to start tackling the Schrödinger equation we need to know a little about how equations like it are solved in general. One of those solution techniques is separation of variables, and that's the technique we're going to apply repeatedly to the Schrödinger equation.

First, though, let's talk about ordinary and partial differential equations. The Schrödinger equation is a partial differential equation, which means it's a good deal more difficult than an ordinary differential equation. What does that actually mean? An ordinary differential equation (ODE) tells you, at least in most applications, how specific coordinates change with time: you have something like x(t), y(t), z(t). For example, the position of a projectile moving through the air could be described by the three functions x, y, and z. If you're only working in two dimensions, drop the z, but we might have a velocity as well, say v_x(t) and v_y(t). These four coordinates, position in two dimensions and velocity in two dimensions, fully specify the state of a projectile moving in two dimensions. An ordinary differential equation governing the motion of this projectile might look like the following: dx/dt = v_x and dy/dt = v_y (nothing terribly shocking there, the position coordinates change at a rate given by the velocity), while the velocities change as dv_x/dt = −k v_x and dv_y/dt = −k v_y − g. These equations describe damped, frictional motion in the x-y plane with gravity pulling you down: in the absence of any velocity, gravity produces an acceleration in the negative y direction, and the rest of the system evolves accordingly. What they give you in the end is the trajectory of the projectile as a function of time, tick by tick, as it moves through x-y space. (A short numerical sketch of integrating exactly this system appears at the end of this passage.)

Partial differential equations (PDEs), on the other hand, have several independent variables. Where an ordinary differential equation had only time, and everything was a function of time, in a partial differential equation the thing you're trying to solve for depends on several independent variables. For example, the vector electric field as a function of x, y, and z: the electric field has a value, both a magnitude and a direction, at every point in space, so x, y, and z potentially range over the entire universe. You know a few equations that pertain to the electric field and could be used to determine it. One is Gauss's law, which we usually give in integral form: the integral of the electric field dotted with the area vector over a closed surface equals the charge enclosed by that surface divided by ε₀. Hopefully you also know there is a differential form of Gauss's law, usually written ∇·E = ρ/ε₀. The upside-down delta is read as "del," and ∇ is a vector differential operator.
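By way of illustration (this is my own aside, not something from the lecture), here is a minimal Python sketch that integrates exactly that projectile system; the drag coefficient, gravity value, time span, and launch conditions are arbitrary choices for illustration only:

```python
# A sketch only: numerically integrate dx/dt = vx, dy/dt = vy, dvx/dt = -k*vx, dvy/dt = -k*vy - g.
# k, g, the time span, and the launch conditions are arbitrary illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

k, g = 0.1, 9.81

def rhs(t, state):
    x, y, vx, vy = state
    return [vx, vy, -k * vx, -k * vy - g]

sol = solve_ivp(rhs, (0.0, 2.0), y0=[0.0, 0.0, 10.0, 10.0], dense_output=True)
for t in np.linspace(0.0, 2.0, 5):
    x, y, vx, vy = sol.sol(t)
    print(f"t={t:.2f}  x={x:.2f}  y={y:.2f}")
```

The printed trajectory is the "tick, tick, tick" picture described above: the four ODEs fully determine where the projectile is at every time.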
I'm going to skip the details of this del-operator business, because it's all electromagnetism, and if you go on to take advanced electromagnetism courses you will learn about it in excruciating detail. Suffice to say that most of the time, when we're trying to solve equations like this, we don't work with the electric field directly; we work with the potential, call it V. If you treat the electric field as minus the gradient of the potential, E = −∇V, this system of equations gives you Poisson's equation, ∇²V = −ρ/ε₀ (which reduces to the Laplace equation when ρ = 0). Written out in full, that is ∂²V/∂x² + ∂²V/∂y² + ∂²V/∂z² = −ρ/ε₀. This is a partial differential equation, and if we had some machinery for solving partial differential equations, we would be able to determine the potential at every point in space, which would then allow us to determine the electric field at every point in space. So that's an example; hopefully you're familiar with some of the terms I'm using here.

The main solution technique used for partial differential equations is separation of variables, and separation of variables is fundamentally a guess. Suppose we want to find some function; in the case of electromagnetism it's the potential V(x, y, z). Let's guess that V(x, y, z) can be written as X(x) times Y(y) times Z(z), so that instead of one function of three variables we have the product of three functions of one variable each. Does this guess work? It's astonishing how often it actually does. It's a very restrictive sort of assumption, but under many realistic circumstances it tells you a lot about the solution.

For example, the wave equation. The wave equation is what you get mathematically if you think about, say, a string stretched between two solid objects. If you pluck the string, you know it's going to vibrate up and down. Mathematically speaking, if you zoom in on a portion of that string that is curved upward, you know the center of that portion is going to accelerate downward, and the reason is the tension in the string: the tension force pulls one way on one side and the other way on the other side, so the piece is pulled to the right and pulled to the left, and the net force ends up pointing downward. If the string curved the other way, you'd have a force pulling up and to the right and a force pulling up and to the left, and the net force would be up. So this picture relates forces to curvatures, and that thought leads directly to the wave equation: the acceleration that results from the force is related to the curvature of the string, and we express that mathematically with derivatives. The acceleration is the second derivative of the position, so if the displacement of the string is u, a function of position and time, then the acceleration of the string at a given point and time equals some constant, traditionally written c², times the curvature, which is the second derivative of u with respect to x: ∂²u/∂t² = c² ∂²u/∂x².
Here u is a function of position and time, and this is the wave equation. I should probably put a box around it, because the wave equation shows up a lot in physics; it's an important one to know. Let's proceed with separation of variables: guess that u(x, t) is X(x) times T(t), where capital X and capital T are each functions of a single variable, and their product is what we're guessing will reproduce the behavior of u. If I substitute this into the wave equation, I get ∂²[X(x)T(t)]/∂t² = c² ∂²[X(x)T(t)]/∂x². That hasn't gotten us anywhere yet, but notice that on the left we have derivatives with respect to time acting on something that includes a function of position. Since these are partial derivatives, they're taken with everything other than the variable of interest held constant, which means the part that is only a function of position can be treated as a constant and pulled outside the derivative. The same thing happens on the right: the second partial derivatives with respect to position act on something that includes a function of time only, effectively a constant for that derivative, so it can be pulled out too. What we get (dropping the explicit arguments, since capital X is a function of lowercase x and capital T of lowercase t) is X ∂²T/∂t² = c² T ∂²X/∂x². That's nice, because you can see we're starting to be able to pull X and T apart.

The next step is to divide both sides by XT, basically dividing through by u. For this to work we need the solution to be non-trivial: if X and T are zero everywhere, dividing through does bad things to the equation. What you're left with is (1/T) ∂²T/∂t² = c² (1/X) ∂²X/∂x². This is fully separated: the left-hand side is a function only of t, and the right-hand side is a function only of x. That's very interesting. Suppose I write the left side as F(t) and the right side as G(x). Normally you would say: I have F(t) and I have G(x), I know their forms, and I could in principle solve for t as a function of x. But that isn't what happens here, and the reason is that this is a partial differential equation: x and t are both independent variables, and for separation of variables to work the relationship must hold at every point in space and at every time. Suppose it holds for a certain value of t and a certain value of x. I ought to be able to change x and have the relationship still hold. But if I change x without changing t, the left-hand side doesn't change, so if changing x led to a change in G(x), the relationship wouldn't hold anymore. Effectively, this means G(x) must be a constant, and for the relationship to hold, both F(t) and G(x) have to be constant.
Looking at the x part, what this says in the context of the partial differential equation is that when I change the position, any change in the second derivative of X is compensated by the 1/X factor, so that the overall combination stays constant. That's nice, because it means I actually have two separate equations: F(t) equals a constant and G(x) equals that same constant, which I'll call a. The notation is arbitrary, though you can in principle save yourself some time by thinking ahead about what a reasonable value of a might be. What's especially nice is that each of these is now only an ordinary differential equation: capital T is a function of little t alone, so there's only a single variable, we don't need to worry about what is being held constant, and we can write total derivatives d instead of partial derivative symbols. The same goes for X. So we've reduced our partial differential equation to two ordinary differential equations, which is wonderful. Rearranging to make them more recognizable (multiplying through by T in one equation and by X in the other), we get d²T/dt² = a T and c² d²X/dx² = a X. These are equations you should know how to solve; if not, go back to your ordinary differential equations book, where solutions of equations like this are very commonly studied. We're taking the second derivative of something and getting the same thing back with a constant out front, and any time you differentiate something and get itself back times a constant, you should think exponentials. Here the solution is T = e^(√a·t): take the second derivative and two factors of √a come down, giving a times e^(√a·t), which is just a·T. In principle there's also a normalization constant out front. You get the same sort of thing for X: X = e^((√a/c)·x), again with a normalization constant in principle. What that means is that u(x, t), the thing we originally wanted to find, is the product of these two functions: a normalization constant times e^(√a·t) times e^((√a/c)·x).

Now, if this doesn't look like a wave, and that surprises you because I told you this was the wave equation, it's because we still have some freedom in what we choose for the normalization constant and for the separation constant a. The values of those constants are determined by the boundary conditions and initial conditions, and the treatment of boundary and initial conditions in partial differential equations is subtle; I don't have time to explain it fully here. But if what concerns you is why this doesn't look like a wave: when you actually plug in your initial and boundary conditions to find the normalization constants and the actual value of the separation constant, you'll find that a is complex, and when you substitute a complex value of a into these expressions you end up with e^(iωt) sorts of behavior, which effectively gives you cos(ωt), up to phase shifts determined by your normalization constant and initial conditions.
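As a quick sanity check of the separated solution (again my own aside, a sketch in SymPy rather than anything from the lecture), you can verify symbolically that the product of the two exponentials solves the wave equation for any separation constant a, and that choosing a to be negative, written here as a = −ω², turns the time factor into the oscillatory e^(iωt):

```python
# Sketch: verify u(x,t) = exp(sqrt(a)*t) * exp(sqrt(a)/c * x) satisfies u_tt = c^2 u_xx,
# then substitute a = -omega**2 to see the oscillatory behaviour of the time factor.
import sympy as sp

x, t, a = sp.symbols("x t a")
c, omega = sp.symbols("c omega", positive=True)

u = sp.exp(sp.sqrt(a) * t) * sp.exp(sp.sqrt(a) / c * x)
residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))                       # 0: the product solves the wave equation

T = sp.exp(sp.sqrt(a) * t).subs(a, -omega**2)
print(sp.simplify(T))                              # exp(I*omega*t): rotation, i.e. oscillation
```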
So this is how we actually solve a partial differential equation. The wave equation in particular separates easily into these two ordinary differential equations, whose solutions you can look up pretty much anywhere. Finding the actual values of the constants that match this general solution to the specific circumstances you care about can be a little tricky, but in the case of the wave equation, if what you want is, say, a traveling-wave solution, you can find it: there are appropriate constants in this expression that produce traveling waves. To check your understanding, I'd like you to go through that exercise again, performing separation of variables to convert the heat equation into two ordinary differential equations. The heat equation describes the diffusion of heat through a material: if you have a hot spot, it tells you how that hot spot spreads out with time.

Since this is a quantum mechanics course, let's move on to the time-dependent Schrödinger equation. This is the full Schrödinger equation in all its glory, just written in terms of the Hamiltonian operator: iħ ∂ψ/∂t = Ĥψ. The Hamiltonian Ĥ is related to the total energy of the system, meaning kinetic energy plus potential energy, so we have a kinetic energy operator and, soon, a potential energy operator. What Ĥ actually looks like is this: the kinetic energy operator, which if you recall is −(ħ²/2m) ∂²/∂x², plus the potential energy operator, which looks a lot like the position operator in that it just multiplies by some potential function, here a function of x. This is an operator, which means it acts on something, so I need to substitute in a wave function, and when you do that you end up with the form we've seen before: iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V(x) ψ. That's our Schrödinger equation.

How can we apply separation of variables to this? We make the same sort of guess as before: ψ = X(x) T(t), where capital X is a function of position and capital T is a function of time. If I substitute ψ = XT into the equation, I get pretty much what you'd expect. On the left, the time derivative doesn't act on X, since X is a function of position only, so I can pull X out and I'm left with iħ X ∂T/∂t. On the right, the second derivative with respect to position doesn't act on the time part, so I can pull T out and get −(ħ²/2m) T ∂²X/∂x², and substituting XT into the potential term doesn't do anything interesting, since there are no derivatives there: it's just V X T. The next step in separation of variables is to divide through by the solution XT, assuming as before that it isn't zero.
You end up with iħ (1/T) ∂T/∂t on the left (the X cancels, leaving just T), and on the right −(ħ²/2m)(1/X) ∂²X/∂x² + V; the X and T cancel out completely in the potential term. As before, the left side is a function of time only and the right side is a function of space only, which means both sides have to equal a constant. In this case the constant we're going to use is E, and you'll see why once we get into talking about energy in the context of the wave function. So we have our two equations: from the left-hand side, iħ (1/T) ∂T/∂t = E, and from the right-hand side, −(ħ²/2m)(1/X) ∂²X/∂x² + V = E. I've written these with partial derivatives, but since, as I said, T and X are each functions of a single variable, there's no real reason to use partial derivative symbols; I could use d's instead, and there's essentially no difference between the partial and total derivatives when you only have one variable.

Let's take these equations one at a time. The time part we can simplify by multiplying through by T, as before: iħ dT/dt = E T. Taking the derivative of something and getting it back multiplied by a constant should again suggest exponentials. Moving the iħ to the other side (dividing by iħ, and 1 divided by i is −i) gives dT/dt = −(iE/ħ) T: the first derivative of our function gives the function back with this factor out front, which immediately suggests exponentials, and indeed the general solution is some normalization constant times e^(−iEt/ħ). So if we know the separation constant E, we know the time part of the evolution of our wave function. This is good, and what it tells us is that the time evolution is actually quite simple. T is in principle a complex number, but it has constant magnitude: time evolution doesn't change the absolute value of T, it just rotates it about the origin in the complex plane. If I draw the complex plane, real axis and imaginary axis, then wherever T starts, as time evolves it just rotates around and around. So the time evolution we'll be working with in quantum mechanics is, for the most part, quite simple.

The space part of this equation is a little more complicated. All I can do for now is rearrange it a little, multiplying through by X to get things on top and changing the order of terms to make it more recognizable: −(ħ²/2m) d²X/dx² + V X = E X. And that's the best we can do for now; we can't solve this equation yet, because we don't know what V is. V is where the physics enters this equation, and it's where the wave function for one scenario differs from the wave function for another scenario. Essentially, the potential is where you encode the environment into the Schrödinger equation.
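Before going on, here is a tiny numerical illustration (my own aside, with arbitrary numbers and ħ set to 1) of the claim that the time factor only rotates in the complex plane and never changes its magnitude:

```python
# Sketch: T(t) = exp(-i E t / hbar) has constant magnitude; only its phase advances.
# E and hbar are arbitrary illustrative values here.
import numpy as np

E, hbar = 2.0, 1.0
for t in (0.0, 0.5, 1.0, 1.5):
    T = np.exp(-1j * E * t / hbar)
    print(f"t={t:.1f}   |T| = {abs(T):.3f}   phase = {np.angle(T):+.3f} rad")
```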
Now, if you remember back to the very first slide of this lecture, when we were talking about the Schrödinger equation, what we had was the Hamiltonian operator acting on the wave function, and this is that same Hamiltonian, except now Ĥ is acting on X rather than on ψ. So you can also express this as Ĥ X = E X: the Hamiltonian operator acting on the spatial part equals the separation constant E, which is related to the energy, times the spatial part. This is another expression of the Schrödinger equation. This particular equation is called the time-independent Schrödinger equation, or TISE if I ever use that abbreviation, and it is really the hard part of any quantum mechanics problem.

To summarize what we've said so far: starting with the Schrödinger equation, which relates a time derivative (with its factor of i) to the Hamiltonian acting on the wave function, substituting in the actual definition of the Hamiltonian, including a potential V, and applying separation of variables gets us this pair of ordinary differential equations. The time part gave us numbers that just spin around in the complex plane (real part on one axis, imaginary part on the other), so the time evolution is basically rotation in the complex plane. The spatial part is the time-independent Schrödinger equation, and that is what we have to solve for a given potential.

The last comment I want to make in this lecture is about notation. My notation is admittedly sloppy, and if you read through the chapter, Griffiths would call it sloppy too. Griffiths, having the luxury of being a book rather than my messy handwriting, uses capital Ψ to denote the function of position and time, and when he does separation of variables he re-expresses it as lowercase ψ for the function of position and lowercase φ for the function of time. For the same things I use capital T(t) and capital X(x), because I have an easier time distinguishing my capital letters from my lowercase letters than writing a capital psi; you saw how long it took me to write that symbol. There is a lot of sloppiness in the notation of quantum mechanics generally. The functions of position, the solutions to the time-independent Schrödinger equation, are really the interesting parts, and as a result a lot of people are sloppy about what they call "the wave function." Strictly, Ψ(x, t) is the wave function; the spatial part, the solution to the time-independent Schrödinger equation, is not the wave function. I've already made this sloppy mistake a couple of times in problems I've given you in class, ignoring the time-dependent part and focusing on the spatial part, since that's the interesting part. So perhaps that's my mistake, and perhaps I need to relearn my handwriting, but at any rate, be aware that sometimes I, or even Griffiths, or whoever you're talking to, will say "the wave function" without actually intending to include the time dependence. The time dependence is in some sense easy to add on, because it's just this rotation in complex-number space, but hopefully it will be clear from context what is actually meant by the wave function.
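Purely as an aside before the next lecture (this is not from the lecture and not how the course will solve these problems analytically), one common numerical way to attack Ĥ X = E X is to put x on a grid, build the Hamiltonian as a matrix from a second-derivative stencil plus a diagonal potential, and diagonalize it. The grid, the units ħ = m = 1, and the example harmonic potential below are all assumptions for illustration:

```python
# Sketch: finite-difference approximation of  -hbar^2/(2m) X'' + V(x) X = E X,  with hbar = m = 1.
import numpy as np

N, L = 400, 10.0                      # grid points and box size (assumed)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2                        # example potential, chosen only for illustration

# Kinetic part uses the stencil (X_{j-1} - 2 X_j + X_{j+1}) / dx^2
main = np.full(N, 1.0 / dx**2) + V
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)  # eigenvalues in ascending order
print(energies[:4])                   # lowest few separation constants E
```

The returned eigenvalues play the role of the separation constants E, and the columns of the eigenvector matrix are grid approximations to the corresponding spatial parts X(x).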
So, we're still moving toward solutions of the Schrödinger equation, and the topic of this lecture is what you get from separation of variables and the sorts of properties it has. To recap what we talked about last time: the Schrödinger equation is iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V ψ, where the first term on the right is essentially the kinetic energy and the second is the potential energy, together making up the Hamiltonian operator. We made some progress toward solving this by writing ψ, which is in principle a function of position and time, as some function of position multiplied by some function of time. Why did we do this? Because it makes things easier and lets us make some progress. But haven't we restricted our solutions a lot by writing them this way? Really, we have. It does make things easier, though, and it turns out that these product solutions, the ones that result from solving the ordinary differential equations you get by applying separation of variables to the Schrödinger equation, can actually be used to construct everything you could possibly want to know.

So let's look at the properties of these separated solutions. First of all, they are called stationary states. What we've got is ψ(x, t) equal to some function of position multiplied by some function of time; I wrote the time part as capital T on the last slide, but remember from the previous lecture that the time-evolution equation was solvable, and what it gave us was a simple exponential, e^(−iEt/ħ). So that's our time-evolution part, and X(x) is our spatial part. What does it mean for these states to be stationary? Consider, for instance, the probability density for the outcome of position measurements, which hopefully you remember is the squared absolute magnitude of ψ, that is, ψ* times ψ. If I plug in the separated form, the complex conjugate gives X*(x) times e^(+iEt/ħ) (the only complex thing in the exponential is the i, so conjugation flips its sign), and ψ itself gives X(x) times e^(−iEt/ħ). Multiplying these together, there's nothing special about the multiplication: the two exponentials are complex conjugates of each other, so they multiply to give their squared magnitude, which for a pure complex exponential is one. What we end up with is X*X, essentially the squared magnitude of just the spatial part of the wave function. There is no time dependence left, which means the probability density does not change as time evolves. That's one meaning of calling these states stationary: because the wave function is a product, and the only time dependence is a simple complex exponential, that time dependence drops out when I compute the probability distribution.
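A small numerical illustration of that "stationary" property (my own sketch; the Gaussian spatial part is just an assumed example, any normalizable X(x) would do):

```python
# Sketch: for Psi(x,t) = X(x) exp(-i E t/hbar), |Psi|^2 is the same at every time.
import numpy as np

hbar, E = 1.0, 1.3                 # arbitrary illustrative values
x = np.linspace(-5, 5, 1001)
X = np.exp(-x**2 / 2)              # assumed spatial part (not normalized; irrelevant for this check)

for t in (0.0, 1.0, 7.0):
    Psi = X * np.exp(-1j * E * t / hbar)
    # largest pointwise difference between |Psi(x,t)|^2 and |X(x)|^2 is ~0 at every t
    print(t, np.max(np.abs(np.abs(Psi)**2 - np.abs(X)**2)))
```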
Another interpretation of these as stationary states comes from considering expectation values. Suppose I want to calculate the expectation value of some generic operator Q̂. The expression for the expectation value of an operator is an integral of the complex conjugate of the wave function times the operator acting on the wave function: ∫ ψ* Q̂ ψ dx. Going straight to the wave function expressed in terms of its x and t parts, that's ∫ X* e^(+iEt/ħ) Q̂ X e^(−iEt/ħ) dx: the conjugated spatial and time parts on the left, and the operator sandwiched between ψ* and ψ, as in the expectation formula. Now, provided the operator does not act on time, meaning it has nothing to do with the time coordinate and contains no time derivatives, which will be true for basically every operator we encounter in this course, I can push the time factor past the operator. As before, the two exponentials then just multiply each other and give one, and what results is ∫ X* Q̂ X dx. The time part drops out whenever Q̂ has no time derivatives, and that's true for all the physical operators we'll be talking about: position is just multiplication by x, momentum has to do with differentiation with respect to x, and kinetic energy involves second derivatives with respect to position. There are no time derivatives in any of them. So the expectation value of this operator Q̂, and Q̂ can be anything here, has no time dependence. Our expectation values are constant: if our physical system is described by a wave function that separates like this, then its expectation values have no time dependence.
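To make that constancy concrete, here is a short sketch (same sort of assumed Gaussian spatial part as before, shifted off-center so the answer isn't trivially zero, with trapezoidal integration) computing ⟨x⟩ for a stationary state at two different times:

```python
# Sketch: <x> = ∫ Psi* x Psi dx is the same at every time for a stationary state.
import numpy as np

hbar, E = 1.0, 1.3
x = np.linspace(-8, 8, 2001)
X = np.exp(-(x - 1.0)**2 / 2)                   # assumed spatial part, centred at x = 1
X = X / np.sqrt(np.trapz(np.abs(X)**2, x))      # normalize it

for t in (0.0, 3.0):
    Psi = X * np.exp(-1j * E * t / hbar)
    expect_x = np.trapz(np.conj(Psi) * x * Psi, x).real
    print(f"t={t}:  <x> = {expect_x:.6f}")      # identical at both times
```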
The next topic I'd like to address is the energy of a stationary state, which also has a very nice expression. The spatial part of the Schrödinger equation that resulted from our separation of variables is what I called the time-independent Schrödinger equation, which can be written simply as the Hamiltonian operator acting on the spatial part of the wave function equals the separation constant times the spatial part: Ĥ X = E X, where Ĥ is now an operator and E just multiplies. Suppose I want to calculate the expectation value of the Hamiltonian. The Hamiltonian, you know, is related to the energy of the system, so calculating an expectation value of it should have something to do with the energy. It's not immediately obvious that Ĥ acting on the wave function giving the energy times the wave function means the state "has" that energy, but calculating an expectation value makes the connection much stronger. So let's write out that expression: ⟨Ĥ⟩ = ∫ X* e^(+iEt/ħ) Ĥ X e^(−iEt/ħ) dx. We know the Hamiltonian operator contains partial derivatives with respect to x and multiplication by the potential, so again it has no time dependence, and by the same reasoning as when we calculated expectation values on the previous slide, the time factors drop out. In general we're left with ∫ X* Ĥ X dx. But I know Ĥ X is E X, since the spatial part of the wave function solves the time-independent Schrödinger equation, so I can make that substitution, and I end up with ∫ X* E X dx. (These X's are not coordinates, they're functions of the coordinate x, just to be clear about my notation.) E is just a constant, so it can be pulled out of the integral entirely, leaving E times ∫ X* X dx, and if we've properly normalized our wave function that integral is one. So the expectation value of the Hamiltonian is our separation constant E, the constant we got when we applied separation of variables to the time-dependent Schrödinger equation. The Hamiltonian is something we expect to be related to the energy, so it's reasonable to identify the separation constant E with the energy associated with this particular state: the expectation value of the Hamiltonian is the energy of the wave function.

Now, we know there is some uncertainty in quantum mechanics, so is there uncertainty in our energy? If we actually measure the energy of this state, do we always get E, the separation constant? To calculate the uncertainty in something we need the standard deviation, or equivalently the variance. Let me write it as σ²_Ĥ (a subscript E would make sense too, since it refers to the energy). The variance is σ²_Ĥ = ⟨Ĥ²⟩ − ⟨Ĥ⟩², the expectation of the square minus the square of the expectation; hopefully you remember that from when we talked about variance. Now, I just calculated ⟨Ĥ⟩ = E, so the second term is just going to give us E². Let's work on the first term, the expectation of Ĥ². This is again going to be an integral, and I'll drop the time-dependent parts; in fact, you know what's going to happen in this integral. The expectation of any operator is ∫ ψ* (operator) ψ over the domain, here x, and given the discussion we've had over the last few minutes about how the time dependence drops out, what we effectively need to consider is Ĥ² acting on the spatial part X.
Ĥ² times X is Ĥ acting on (Ĥ acting on X); that's the definition of squaring an operator, you just apply the operator twice. As before, I know Ĥ X = E X, since X satisfies the time-independent Schrödinger equation, so this becomes Ĥ acting on E X. There's nothing special about E, it's just a number, so I can pull it out; the Hamiltonian operator won't do anything to that number, and I'm left with E times Ĥ acting on X, which is again E times E X, that is, E² X. Back in the integral, this E², being a constant, can be pulled out front, and what we end up with is E² times ∫ X* X dx, which, properly normalized, is just E². So that's interesting: what we got for the expectation of the Hamiltonian squared is E². What this tells us is that σ²_Ĥ, our variance, our squared uncertainty in the energy of the system, is equal to E² from the first term minus E² from the second term, which is zero. Stationary states like this, states that solve the time-independent Schrödinger equation, have energy given by the separation constant E and no uncertainty in that energy; they have exactly E amount of energy. What exactly does that mean? We'll talk about that in a moment.

Just to summarize: for these stationary states, the probability density has no time dependence (the time-dependent part cancels itself out when you calculate it); expectation values of any operator we'll be concerned with in this class also have no time dependence; and the energy is specified exactly by that separation constant E appearing in, for instance, the time-independent Schrödinger equation Ĥ X = E X. That's nice, it has some physical significance, and there is no uncertainty in the energy: these states have a definite, exact energy, which means that if we measured the energy of the system, we would always get the same thing.

Now, just to comment briefly on what that actually means: what does it mean for a system to have no energy uncertainty? If you remember back to when I talked about the difference between quantum physics and classical physics, and where the boundary between them falls, I gave you an energy-time uncertainty relation: the uncertainty in the energy times the uncertainty in the time always has to be greater than about ħ/2. What does that mean if we have no energy uncertainty? ΔE is zero, so zero times something has to be greater than ħ/2, and ħ is really small, but it is not zero, so mathematically there's some problem here. What actually happens is that Δt has to be infinite. Why is that a meaningful statement? Essentially, the Δt in the energy-time uncertainty relation tells you about when the state exists, the duration of the process; it's the answer to the question of how accurately you can say when this state exists. For something like a stationary state, it always exists. There's no time dependence; you could run the clock backward or forward, all the way to before the beginning of the universe (technically, since none of that beginning-of-the-universe stuff is covered in this course). The state always exists, so the answer to the question "when?" is: always, whenever you want, forever, however you want to put it. These stationary states have no time dependence and are constant forever, which is not the most realistic state in the world.
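Backing up to the zero-variance result for a moment, here is a numerical sanity check (my own sketch, reusing the same assumed finite-difference construction as the earlier aside, with ħ = m = 1 and the same example potential): for a numerically obtained eigenstate, ⟨Ĥ²⟩ − ⟨Ĥ⟩² comes out to zero up to round-off and discretization error.

```python
# Sketch: for an eigenstate X0 of H, the energy variance <H^2> - <H>^2 should be ~0.
import numpy as np

N, L = 400, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2
main = np.full(N, 1.0 / dx**2) + V
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E_vals, X_cols = np.linalg.eigh(H)
X0 = X_cols[:, 0]                      # ground state; eigh returns it normalized (sum |X0|^2 = 1)

expect_H = X0 @ (H @ X0)
expect_H2 = X0 @ (H @ (H @ X0))
print("<H> =", expect_H)
print("variance <H^2> - <H>^2 =", expect_H2 - expect_H**2)   # ~0
```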
Unrealistic or not, they are the sorts of things we get from the Schrödinger equation, and they have some really nice mathematical properties that we'll start talking about in the next lecture and the lectures after that. So that's a stationary state for you: the result of solving the time-independent Schrödinger equation, possibly with the time dependence added back on, depending on how sloppy I'm being with my notation at any given moment. Stationary states are really important, and to preview a little: if you know the stationary states of your system, you know everything about the system, and you can find the answer to any question you might possibly ask about its quantum mechanical behavior.

Now, we talked about how the Schrödinger equation can be split by separation of variables into a time-independent Schrödinger equation and a relatively simple time-dependent part. What that gave us is that, provided we have solutions to that time-independent Schrödinger equation, we have something called a stationary state, and it's called a stationary state because nothing ever changes: the probability densities are constant, the expectation values are constant, and since the state has a precise, exact, zero-uncertainty energy, it effectively has to live for an infinite amount of time. That doesn't sound particularly useful from the perspective of physics; we're often interested in how things interact and how things change with time. So how do we get things that actually change with time in a non-trivial way? It turns out that while the time dependence of each stationary state is trivial, the interplay of their time dependences when you add them together in a superposition is not trivial, and that is where the interesting time dynamics of quantum mechanics comes from: superpositions of stationary states.

We can make superpositions of stationary states because of one fundamental fact, and that fact is the linearity of the Schrödinger equation. The Schrödinger equation, as you hopefully remember by now, is iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V ψ; the right-hand side is our Hamiltonian operator applied to the wave function, and the left-hand side is the time-dependence part. For an equation to be linear means that if one ψ solves the equation, and some other ψ also solves it, then their sum solves it too. So say A solves the Schrödinger equation and B solves the Schrödinger equation, where A and B are both functions of position and time, just like ψ; then A + B must also solve the Schrödinger equation. We can see that pretty easily. Substitute ψ = A + B into the equation: iħ ∂(A + B)/∂t = −(ħ²/2m) ∂²(A + B)/∂x² + V (A + B). Now, the partial derivative of a sum is the sum of the partial derivatives, and that goes for the second partial derivative as well.
And the product of the potential with a sum is the sum of the products of the potential with each piece. Writing that out (and squeezing things in a bit): iħ ∂A/∂t + iħ ∂B/∂t = −(ħ²/2m) ∂²A/∂x² − (ħ²/2m) ∂²B/∂x² + V A + V B. You can probably see where this is going. Three of these terms together, the ones involving A, make up the time-dependent Schrödinger equation for A, and the other three make up the time-dependent Schrödinger equation for B. So if A satisfies the time-dependent Schrödinger equation, which is what we supposed when we got started, then its three terms obey the equality on their own, and likewise for the parts with B in them. Essentially, if A solves the Schrödinger equation and B solves the Schrödinger equation, then A + B also solves the Schrödinger equation. The reason is that the partial derivative of a sum is the sum of the partials, and the product with a sum is the sum of the products: these are linear operations, so we have a linear partial differential equation, and that linearity means sums of solutions are solutions.

That allows us to construct solutions that are surprisingly complicated. In fact, the general solution to the Schrödinger equation is ψ(x, t) = Σ_j c_j X_j(x) e^(−iE_j t/ħ), where I'm being deliberately vague about the range of the sum over the index j. The X_j are solutions to the time-independent Schrödinger equation, the spatial part, each multiplied by its time part e^(−iE_j t/ħ), which we know from back when we discussed separation of variables, and the constants c_j, which I almost left out and which are quite important, tell us how much of each stationary state to add in. This is certainly a solution to the Schrödinger equation, since it's constructed from solutions, but the surprising part is that it is completely general: this form can be used to express not just some subset of solutions but all possible solutions to the Schrödinger equation. That's a remarkable fact, and it's certainly not guaranteed; you can't just write down any old partial differential equation, apply separation of variables, and expect the solutions you get to be completely general and superposable to make any solution you could possibly want. The reason this works for the Schrödinger equation, just to drop some mathematical terms in case you're interested in looking up more later, is that the Schrödinger equation is an instance of what's called a Sturm-Liouville problem.
Sturm-Liouville problems are a class of linear operator equations, for instance partial or ordinary differential equations, that have a lot of really nice properties, and this is one of them. So the fact that the time-independent Schrödinger equation is a Sturm-Liouville equation means this works. If you go on to study advanced mathematical methods in physics, you'll learn about this; for now you just need to take it somewhat on faith that the general solutions to the Schrödinger equation look like this: superpositions of stationary states.

So if we can superpose stationary states, what does that actually give us? One example I'd like to do here, just as an example of the sorts of analysis you can do with superpositions of stationary states, is to consider the energy. Suppose I have two solutions to the time-independent Schrödinger equation, which I'll write as Ĥ X₁ = E₁ X₁ and Ĥ X₂ = E₂ X₂. So X₁ and X₂ are solutions of the time-independent Schrödinger equation, and they're distinct solutions, with E₁ not equal to E₂. I'm going to use these to construct a wave function. At time t = 0, let's say it looks like this: ψ(x, 0) = c₁ X₁(x) + c₂ X₂(x). At some later time we can add on the time-dependence factors, since we know what they look like: each spatial part needs its own time part, so ψ(x, t) = c₁ X₁ e^(−iE₁t/ħ) + c₂ X₂ e^(−iE₂t/ħ). These complex exponential time dependencies come from the time part of our separation of variables; you can think of them as being present in the t = 0 expression as well, just evaluated at t = 0, which makes both factors equal to 1.
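Here is that construction as a short sketch in code. To have two concrete states to superpose, I'm borrowing the standard infinite-square-well stationary states and energies on 0 ≤ x ≤ a (an assumption at this point; the course derives them in the particle-in-a-box section), with ħ = m = a = 1 and an equal-weight choice of c₁ and c₂:

```python
# Sketch: Psi(x,t) = c1 X1(x) e^{-i E1 t/hbar} + c2 X2(x) e^{-i E2 t/hbar}
# built from the standard infinite-square-well states (assumed here), hbar = m = a = 1.
import numpy as np

hbar = m = a = 1.0
x = np.linspace(0.0, a, 1000)

def X(n):                      # stationary-state spatial parts
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

def E(n):                      # their energies
    return n**2 * np.pi**2 * hbar**2 / (2.0 * m * a**2)

c1 = c2 = 1.0 / np.sqrt(2.0)

def Psi(t):
    return (c1 * X(1) * np.exp(-1j * E(1) * t / hbar)
            + c2 * X(2) * np.exp(-1j * E(2) * t / hbar))

# Unlike a single stationary state, the superposition has a probability density that moves:
for t in (0.0, 0.2, 0.4):
    print(f"t={t}:  |Psi|^2 at x=a/4 is {np.abs(Psi(t)[len(x) // 4])**2:.3f}")
```

The printout is the point: the probability density of the superposition genuinely changes with time, which is exactly the non-trivial dynamics discussed above.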
If this is our wave function, let's consider the energy, in particular the expectation value of the Hamiltonian operator. What does that look like? The expectation value is going to be ∫ ψ* Ĥ ψ dx, and I can substitute in this expression for ψ. So I get an integral of [c₁* X₁* e^(+iE₁t/ħ) + c₂* X₂* e^(+iE₂t/ħ)] (plus signs in the exponents because I've taken the complex conjugate), then the operator, then ψ itself, [c₁ X₁ e^(−iE₁t/ħ) + c₂ X₂ e^(−iE₂t/ħ)], all integrated dx. Now, I know what the Hamiltonian does to these time-dependence parts: nothing. And I know what it does to the spatial parts, since by construction they are solutions of the time-independent Schrödinger equation. So applying the operator to this expression for the wave function, the second bracket becomes [c₁ E₁ X₁ e^(−iE₁t/ħ) + c₂ E₂ X₂ e^(−iE₂t/ħ)], which is just substituting E₁ X₁ for Ĥ X₁ and E₂ X₂ for Ĥ X₂, still integrated against the first bracket.

What do we actually get here? As before, some of these terms are simpler than others. When we expand out and distribute this expression, the diagonal terms are easy: e^(+iE₁t/ħ) times e^(−iE₁t/ħ) multiplies to 1, and the same thing happens for the pair involving E₂, so the time dependence of those terms goes away. But there are also cross terms, where E₁ and E₂ get mixed together, and there the time dependence does not go away. Expanding everything out, the integrand is |c₁|² X₁* X₁ E₁ (from the first diagonal pair, with the time dependence dropped out) plus |c₂|² X₂* X₂ E₂ (from the second pair, for the same reason), plus the cross terms: c₁* c₂ X₁* X₂ E₂ e^(i(E₁−E₂)t/ħ) and c₂* c₁ X₂* X₁ E₁ e^(i(E₂−E₁)t/ħ), all integrated dx. These integrals have some nice features. In the first term, |c₁|² and E₁ are constants, so I can pull them out, and I'm left with ∫ X₁* X₁ dx, which is one if X₁ is properly normalized; so the first term gives |c₁|² E₁. The second term gives something very similar, |c₂|² E₂, since ∫ X₂* X₂ dx is also unity, provided we've normalized things properly.
Now for the cross terms, where we're actually nearly done. We'll talk more about this in detail later, but the integral of X₁* X₂ is actually going to be zero. Everything else in that cross term, c₁* c₂, E₂, and the time-dependent factor, is a constant as far as an integral over x is concerned, so we're just left with ∫ X₁* X₂ dx, and it is a general feature of Sturm-Liouville problems that when you have distinct solutions like X₁ and X₂, the integral of their product is zero. Likewise ∫ X₂* X₁ dx = 0. We'll see a specific example of this in the next lectures when we talk about the particle in a box; it's connected with Fourier analysis and Fourier series, but for now you can think of it as one of the nice features of equations like the Schrödinger equation that the cross terms in integrals like this vanish. So what this tells us is that the expectation value of the Hamiltonian is |c₁|² E₁ + |c₂|² E₂: the energies of the two states, weighted by the squared magnitudes of their coefficients.

To check your understanding, I'd like you to follow through a similar sort of analysis for this wave function: write down in your notebook where the time dependence comes in, write an expression for the probability distribution as a function of time, and, to really check your understanding, explain in your own words why it has non-trivial time dependence. That's not an easy question. The non-trivial time dependence comes from the superposition; the question for you is why, and how, that superposition results in non-trivial time dependence.

So, to summarize: classic problems in quantum mechanics all start with some physical system, for instance a box with a particle inside it. What happens next depends on what exactly the situation is, but typically, in quantum mechanics, you will write down a potential V(x), in the case of one-dimensional quantum mechanics. Knowing that potential allows you to write down the time-independent Schrödinger equation, which was what we got from separation of variables. Solving the time-independent Schrödinger equation gives you the stationary states and the energies of those stationary states; that is, it tells you X(x) and, through the energy, the factor e^(−iEt/ħ), so it tells you what each stationary state looks like. The next step, which we'll talk about in great detail, is to express the initial conditions of the system as a sum of stationary states. You know that superpositions of stationary states are also going to be solutions of the Schrödinger equation, so if you can express your initial conditions as a superposition of those stationary states, you're good. The final step is to add the time dependence onto each stationary state; you need to know the energies, but you've got them. Then what you have is ψ(x, t) = Σ_j c_j X_j(x) e^(−iE_j t/ħ) (I keep almost forgetting the constants c_j out front). This is your general solution: you've properly chosen the c_j so as to match your initial conditions, and you're guaranteed to satisfy the Schrödinger equation because you're expressing things as a superposition of stationary states. This general wave function is then something you can use to answer meaningful physical questions. Quantum mechanics is really all about solving the Schrödinger equation.
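And here is a numerical cross-check (my own sketch) of the two facts just used, the orthogonality of distinct stationary states and ⟨Ĥ⟩ = |c₁|²E₁ + |c₂|²E₂, with the same assumed square-well states as in the previous sketch:

```python
# Sketch: check  (1) ∫ X1* X2 dx ≈ 0  and  (2) <H> ≈ |c1|^2 E1 + |c2|^2 E2  for the superposition.
import numpy as np

hbar = m = a = 1.0
x = np.linspace(0.0, a, 2000)
X1 = np.sqrt(2.0 / a) * np.sin(1 * np.pi * x / a)
X2 = np.sqrt(2.0 / a) * np.sin(2 * np.pi * x / a)
E1 = 1 * np.pi**2 * hbar**2 / (2.0 * m * a**2)
E2 = 4 * np.pi**2 * hbar**2 / (2.0 * m * a**2)
c1 = c2 = 1.0 / np.sqrt(2.0)

print("overlap integral:", np.trapz(X1 * X2, x))            # ~ 0: distinct states are orthogonal

# Using H X_n = E_n X_n; the cross terms integrate to zero, so <H> is time-independent
# and we may evaluate it at t = 0 for simplicity.
Psi = c1 * X1 + c2 * X2
HPsi = c1 * E1 * X1 + c2 * E2 * X2
print("<H>:", np.trapz(Psi * HPsi, x), " expected:", abs(c1)**2 * E1 + abs(c2)**2 * E2)
```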
Now, saying that quantum mechanics is all about solving the Schrödinger equation is a bit of an oversimplification, because if there were only one Schrödinger equation, we could just solve it, be done with it, and that would be it for quantum mechanics. The reason this is difficult is that the Schrödinger equation isn't just one equation; there are many Schrödinger equations. Each physical scenario to which you want to apply quantum mechanics has its own Schrödinger equation. They're all slightly different, and they all require slightly different solution techniques. The reason there are many different Schrödinger equations is that the situation for which you want to solve the Schrödinger equation enters the equation as a potential function. So let's talk about potential functions and how they influence the physics of quantum mechanics.

First of all, where does the potential appear in the Schrödinger equation? In the time-dependent Schrödinger equation, the right-hand side is the Hamiltonian operator acting on the wave function, and the Hamiltonian is related to the total energy of the system. You can see that by looking at its parts. The first part is the kinetic energy, which you can think of as the momentum operator squared over 2m, a sort of quantum mechanical analog of p²/2m in classical mechanics. The second piece is, in some sense, the potential energy: this V(x) is the potential energy as a function of position, as if this were a purely classical system. If the particle were found at a particular position, what would its potential energy be? That's what the function V(x) encodes. Now, we know that in quantum mechanics we don't have classical particles that can be found at particular positions; everything is probabilistic and uncertain. But you can see how this is related.

The time-dependent Schrödinger equation is a little unnecessarily complicated for this purpose; most of the time we work with the time-independent Schrödinger equation, which looks very similar. Again the left-hand side is given by the Hamiltonian, with a kinetic energy term and a potential energy term, but note that the wave functions are now expressed only as functions of position, not of time, and this operator gives you back the wave function itself multiplied by E, which is just a number; it came from the separation of variables as a constant. And we know, from considering the expectation value of the Hamiltonian for solutions of this time-independent equation, that E is essentially the energy of the state. So what does the potential mean in this context? You have a potential function of position, and you have the wave function ψ, and the product V(x) ψ(x) varies with position. If the wave function has a large magnitude in a region where the potential is large, that means there is significant probability of finding the particle in a region of high potential energy, which tends to make the potential energy of the state higher. If ψ is zero in a region where the potential energy is high, the particle will never be found in that region, and the state likely has a lower potential energy. This is all a very heuristic, qualitative argument, and we can only really do better once we know what these solutions and these actual potential functions look like.
What I'd like to do before we move on is rearrange this a little to show what effect the potential energy, and how it relates to the energy of the state, has on the wave function. To do that I'm going to multiply through by minus 2m over h-bar squared and rearrange terms. What you get is the second derivative of psi with respect to x equal to 2m over h-bar squared, times the quantity V of x minus E, times psi. This relates the second derivative of psi to psi itself. If the potential is larger than the energy of the state, you get one overall sign relating the second derivative to psi; if the energy is larger than the potential, you get a negative quantity relating the second derivative of psi to psi itself. Keep that in the back of your mind, and let's talk about some example potential functions. This is what we, following the textbook, will be doing in all of chapter two: write down different potential functions and solve the Schrödinger equation. The first example potential, section 2.2, is what I like to call the particle in a box; the textbook calls it the infinite square well. The particle in a hard box can be thought of as a potential function that looks like this: V of x is zero inside the region from minus a to a, and goes to infinity for x outside that region. It's a very simple potential function, but a little non-physical, because what does infinite potential energy really mean? It means it would require infinite energy to force the particle beyond a. If you had some infinitely dense material that would not tolerate the electron ever being found inside it, and you made a box out of that material, this is the sort of potential function you would get. Much more realistic is the harmonic oscillator potential, which is the same as what you get in classical physics: a parabola, V of x proportional to x squared. This is what you get if you have a particle attached to a spring connected to the origin; if you move the particle to the right, you stretch the spring. Quantum mechanically, if you happen to find the particle at a large displacement from the origin, the spring is stretched a large amount and there is a large potential energy associated with that. From a more down-to-earth perspective, this is what happens whenever a particle has an equilibrium position: the particle sits near the origin, where the bottom of the potential is flat, and any displacement in either direction makes the potential increase. This is like an electron or an atom in a particle trap. Harmonic oscillator potentials show up all over the place, and we'll spend a good amount of time talking about them. The third potential we consider is a delta function potential. I'm going to start at zero and draw it going negative: it's effectively an infinitely sharp, infinitely deep version of the particle-in-a-box potential.
Instead of going to infinity outside of your region, the potential is zero there, and instead of being zero inside the region it goes to minus infinity at a single point; it doesn't bottom out. The overall behavior will be different now, because the particle is no longer forbidden from being outside any particular domain; there is no infinite potential energy anywhere, and we'll talk about that as well. These are all somewhat weird, non-physical potentials. The particle in a soft box potential is a little more physical. To keep things simple it still changes instantaneously, at minus a and a say, but the potential energy outside is no longer infinite. This is, for instance, a box made out of a material with some pores in it; the electron, or whatever particle is in the box, doesn't like being in those pores, so there is some energy you have to add to push the particle in. Once it's in, it doesn't really matter where it is; you've made that energy investment to push the particle into the box. We'll talk about the quantum mechanical states allowed by this potential as well. Finally, we'll consider what happens when there is no potential at all, so the potential function is constant; that has some interesting implications for the form of the solutions of the Schrödinger equation, and we'll talk about it in more detail. To map this onto textbook sections: the particle in a box is section 2.2, the harmonic oscillator is section 2.3, the delta function potential is section 2.5, the particle in a soft box is section 2.6, and the particle with no potential, or a constant potential everywhere in space, is section 2.4. So these are some example potentials we'll be talking about in this chapter. What do these potentials actually mean, though? How do they influence the Schrödinger equation and its solutions? The way I wrote the Schrödinger equation a few slides ago was: the second derivative of psi with respect to x equals 2m over h-bar squared, just a constant, times V of x minus E, times psi. This is the time-independent Schrödinger equation, so we're only talking about functions of position, and E, keep in mind, really is the energy of the state; if we're going to have a solution to the time-independent Schrödinger equation, this E exists and it's just a number. So what does that mean? Think of it this way: the left-hand side is determined by the right-hand side. The left-hand side is just the second derivative of the wave function with respect to position, which is related to the curvature of the wave function. I could actually write this as a total derivative, since psi is only a function of position now; there's no magic in the partial derivative, it behaves the same as the ordinary derivative you're used to from calculus class. The second derivative is related to the concavity of a function, whether it's concave up or concave down. So let's think about what this means. If you have a potential V of x that's greater than your energy E, then V of x minus E is a positive quantity, and the right-hand side will have whatever sign psi has.
I'm being a little sloppy here, since psi is in general a complex function and "positive" isn't as meaningful for a complex number as it is for a real number, but suppose psi is positive. If psi of x is positive and this factor is positive, then the second derivative is positive, so if psi is up here, above the axis, it curves upward; whereas if psi is down here, negative, the second derivative is negative and it curves downward. What this means is that psi curves away from the axis, away from the psi equals zero line. On the other hand, if V of x is less than the energy, this quantity is negative and we get the opposite behavior: if psi is positive it's multiplied by a negative number, the second derivative is negative, and you get something that curves downward; if psi is on the other side of the axis, it curves upward. In that case psi curves toward the axis. This helps us understand a little about the shape of the wave function. Let me do an example in a bit more detail. Suppose I have a potential function of the soft particle-in-a-box sort: V of x is constant outside a central region, constant inside the central region, and has a step change at the boundaries of the region. Let's think about what our wave function might look like under these circumstances. We have the boundaries of our region here; the other thing we need to know is a hypothetical energy, and I'm going to pick the interesting case, an energy between the inside and outside values of the potential. I'm plotting the energy on the same axis as the potential, which is fine; this is the energy of the state and this is the potential energy as a function of position, so they have the same units. What this energy means is that out here the potential energy is greater than the energy of the state, and in here the potential energy is less than the energy of the state, so we'll get different signs, different curvatures, of the wave function in the two regions. So let me draw a wave function in blue. This is all hypothetical, it may not work, but if I start the wave function at some point on the positive side of the axis at the origin, we know that inside the region the energy of the state is larger than the potential energy, so this quantity is negative and psi curves toward the axis. Since psi is positive here, I'm looking at downward curvature, so I could draw my wave function out sort of like this. Maybe that's reasonable, maybe it's not; this is obviously not a quantitative calculation, just the sort of curvature you would expect. Now, I only continued these curving lines out to the boundaries, since at the boundaries things change. Outside the central region the potential energy is larger than the energy of the state, and you get curvature away from the axis. What might that look like? Something curving away from the axis looks sort of like that, but where do I start it? Do I start it going this way, or that way? Well, if you think about it, we can say a little more about what happens to the wave function when it passes a boundary like this, and the key fact is that V of x is finite here.
That means that while we might have the second derivative of psi be discontinuous, and it will be in this case because the second derivative is set by this V minus E difference, a discontinuity in the potential only gives a discontinuity in the second derivative; the first derivative of psi will still be continuous. Think about integrating a function with a step in it: integrate it once and you get something that goes from a large positive slope to a slightly smaller positive slope, with no discontinuity in the first derivative. What this means for psi is that it's effectively smooth; by that I just mean no corners. The first derivative of psi won't ever show a corner; there are no sharp kinks in the wave function. In the context of a boundary like this, if psi is heading downward at some angle, it has to keep that slope as it crosses the boundary. Once it's on the other side of the boundary it has to curve, and it has to curve according to the rules we set out. Depending on what I chose for the initial point, the value of the energy, and the value of the potential outside, I may get differing degrees of curvature: something that curves up very rapidly, or something that doesn't curve rapidly at all and instead crosses the axis. As it crosses the axis, the sign of psi changes; the curvature is also determined by psi, so as psi gets smaller the curvature gets smaller, reaching zero as psi crosses the axis, and once psi is negative the sign of the curvature flips and the function starts curving the other way, downward. It turns out there is a state right in the middle, a happy-medium state, where psi curves in toward the axis, flattening as it goes, and just kisses the axis: it approaches with vanishing slope and vanishing curvature, never crossing and never blowing up. These are the sorts of states you can actually associate with probability distributions. If psi blows up toward positive or negative infinity, the wave function is not normalizable, but the wave function sketched by these green curves has finite area and is normalizable. So this is the sort of thing the potential function tells you about the wave function: in general, which direction it curves, how much, and how quickly. Doing this quantitatively requires a good deal of mathematics, but before introducing the math I wanted to give you a conceptual framework for understanding what the potential means. If the potential is larger than the energy, you expect curvature away from the axis, and things that curve away from the axis tend to blow up unless they come down and just kiss the axis, so normalizable wave functions in those regions approach the axis and never leave it. On the other hand, if the potential energy is less than the energy of the state, you get curvature toward the axis, and something that always curves toward the axis oscillates: you get these sort of wave-like states.
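If you want to see this happy-medium behavior numerically, here is a rough sketch of the idea: a crude Euler "shooting" integration of the rearranged equation. The finite-well depth, the grid step, and the trial energies are assumptions for illustration, not values from the lecture.

    import numpy as np

    hbar = 1.0
    m = 1.0
    V0 = 10.0          # assumed finite-well depth (well between -1 and 1, V = V0 outside)

    def V(x):
        return 0.0 if abs(x) < 1.0 else V0

    def shoot(E, x_max=4.0, dx=1e-3):
        # Integrate psi'' = (2m/hbar^2)(V - E) psi outward from x = 0,
        # starting an even trial solution: psi(0) = 1, psi'(0) = 0.
        psi, dpsi = 1.0, 0.0
        x = 0.0
        while x < x_max:
            d2psi = 2.0 * m / hbar**2 * (V(x) - E) * psi
            psi += dpsi * dx
            dpsi += d2psi * dx
            x += dx
        return psi          # value far out in the classically forbidden region

    for E in (0.5, 0.7, 0.9, 1.1):
        print(E, shoot(E))  # the sign of the far tail flips as E passes a bound-state energy

The energy at which the far tail changes sign is where psi comes in and kisses the axis instead of blowing up; bisecting on that sign change is essentially the shooting method for the time-independent Schrödinger equation.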
That's a very hand-waving discussion of the sort of behavior you get from, in this case, a step-discontinuous potential, and we'll see this sort of behavior throughout the chapter. To check your understanding, take this step-discontinuous potential and tell me which of these hypothetical wave functions is consistent with the Schrödinger equation. I did not actually go through and solve the Schrödinger equation to make sure these curves are quantitatively accurate; they're probably all not. What I'm asking you to do is identify the qualitative behavior: is the curvature right, are the boundary conditions right, and in particular does the wave function behave as you would expect as it passes from the interior region to the exterior region? We've been talking about solving the Schrödinger equation and how the potential function encodes the scenario under which we're solving it. The first real example of a solution to the Schrödinger equation, and a realistic wave function, comes from the infinite square well, which I like to think of as a particle in a box. The infinite square well is called that because its potential is infinite and, well, square. The potential looks like this: plotting from 0 to a, the potential is infinity if you're outside the region between 0 and a, and zero if you're between 0 and a. So what does this look like in the Schrödinger equation? We'll be working with the time-independent Schrödinger equation, the TISE, which reads: minus h-bar squared over 2m times the second derivative of psi with respect to x, plus the potential as a function of x times psi, equals the energy of the stationary state that results from the solution of this equation, times psi. Now, this equation doesn't quite look right if we're outside the region; bad things happen, you end up with an infinity for V of x when x is not between 0 and a. The only way the equation can still make sense under those circumstances is if psi of x is equal to zero for x less than zero or x greater than a. So outside this region we already know what our wave function is going to be: zero. That's a requirement coming from the infinite potential energy, which can't really exist in the real world. What if we're inside? Then V of x is zero and we can drop that entire term from the equation. What we're left with is minus h-bar squared over 2m times the second derivative of psi with respect to x equals E times psi. This is the time-independent Schrödinger equation we want to solve. How do we solve it? We can simplify by rearranging some constants: the second derivative of psi with respect to x equals minus k squared times psi. This is the sort of little trick people solving differential equations employ all the time; knowing what the solution will look like, you define a constant that makes things a little cleaner, in this case using k squared instead of just some constant k. Here k is equal to the square root of 2mE over h-bar, which you get just by rearranging
the equation. This equation you should recognize: it's the equation for a simple harmonic oscillator, a mass on a spring, for instance. As I said before, the partial derivatives here don't really matter; we're only talking about one dimension, and we're working with the time-independent Schrödinger equation, so the wave function psi is just a function of x, not of x and time. This is the ordinary differential equation you're familiar with for things like masses on springs, and what you get is oscillation: psi as a function of x is A sine kx plus B cosine kx. That's the general solution; A and B are constants to be determined by the actual scenario under which you're solving this equation, this equation now, not the original Schrödinger equation. So our solutions are sines and cosines. That's all well and good, but it doesn't actually tell us what the wave function is, because we don't know what A is, we don't know what B is, and we don't know what k is either. We know k in terms of the mass of the particle, Planck's constant, and the separation constant E that we got when deriving the time-independent Schrödinger equation, but while that may be related to the energy, we don't know its value; these are still free parameters. We haven't used everything we know about the situation yet, in particular the boundary conditions, and one thing the boundary conditions will determine is the form of our solution. What do I mean by boundary conditions? The boundary conditions are what you get from considering the actual domain of your solution and what you know about it, particularly at the edges. We have a wave function that can only be non-zero between 0 and a; outside that it has to be zero. So we know right away that our wave function is zero here and zero here, and whatever we get for those unknown constants A, B, and k has to obey that. We also know a couple of things about the general form of the wave function. In particular, just from consideration of operators like the Hamiltonian or the momentum operator, the wave function itself must be continuous. We can't have wave functions with a jump in them, because a discontinuity would do very strange things to any physical operator you could think of. For example, the momentum operator is minus i h-bar times the derivative with respect to x; the derivative would blow up at the discontinuity and we would get a very strange value for the momentum. So, by a sort of contradiction argument, the wave function itself must be continuous. We'll come back to boundary conditions on the wave function later in this chapter, but for now all we need is that the wave function is continuous. That means that since we're zero out here, we must go through zero at x equals 0 and at x equals a: psi of zero equals zero and psi of a equals zero. What does that mean for our hypothetical solution, psi of x equals A sine kx plus B cosine kx? First consider psi of zero equals zero. When I plug zero in, the sine of k times zero is zero, but the cosine of zero is one, so what I get is B. If that is going to be zero, then B must be equal to zero.
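If you'd like to check that general solution and boundary condition symbolically, here is a minimal sketch in SymPy; the lecture uses Sage for this kind of work later, so treat this as an equivalent stand-in rather than the lecturer's own code.

    import sympy as sp

    x = sp.symbols('x', real=True)
    k = sp.symbols('k', positive=True)
    psi = sp.Function('psi')

    # Inside the well the time-independent equation reduces to psi'' = -k^2 psi
    ode = sp.Eq(psi(x).diff(x, 2), -k**2 * psi(x))
    general = sp.dsolve(ode, psi(x))
    print(general)                            # psi(x) = C1*sin(k*x) + C2*cos(k*x)

    # Boundary condition psi(0) = 0 forces the cosine coefficient to vanish
    C2 = sp.Symbol('C2')
    at_zero = general.rhs.subs(x, 0)          # evaluates to C2, since sin(0) = 0 and cos(0) = 1
    print(sp.solve(sp.Eq(at_zero, 0), C2))    # [0], i.e. B = 0

The remaining condition, psi of a equals zero, is the one that quantizes k, which is where we go next.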
So we have no cosine solutions, no cosine part to our solutions; everything is going to start out like a sine. That's not the whole story, though, because we also have to go through zero at x equals a. If I plug a in, what I'm left with is psi of a equals capital A times the sine of k a. If this is going to be zero, then I know something about ka, because the sine function goes through zero only for particular values of its argument: sine of x is zero for x equal to integer multiples of pi. On the plot, that means our wave functions are going to end up looking like sine curves that fit inside the box. Let me spell that out in a little more detail. Psi of a is A times the sine of k times a, and if this is going to be zero, ka has to be 0, plus or minus pi, plus or minus 2 pi, plus or minus 3 pi, and so on, all the places where the sine crosses zero. Now, ka equals 0 is not interesting: if ka is 0 then k is 0, and the sine of k times x is zero everywhere, so psi is zero everywhere; that's not a wave function we can work with. Another fact is that the sine of minus x equals minus the sine of x; sine is an odd function, and since what we're looking at has a normalization constant out front, we don't care whether there's a plus or minus sign coming from the sine itself. We can absorb that into the normalization constant. So essentially what we're working with is ka equals pi, 2 pi, 3 pi, and so on, which I'll just write as n times pi. If k times a equals n times pi, we can substitute in for k, which a few slides ago was the square root of 2mE over h-bar. So root 2mE over h-bar, times a, equals n pi. This is interesting: we now have integers, from this n, as part of our solution, so we're no longer completely free; we in fact have a discrete set of values. Here a is a property of the system, we're not going to solve for that; m is a property of the system; h-bar is a physical constant; the only thing we can really solve for is E. If you solve this for E, you end up with E sub n equals n squared pi squared h-bar squared over 2 m a squared. This is a discrete set of allowed energies, and I'm going to put an exclamation point there because this is important: this is the quantum part of quantum mechanics. We started with a system that by all appearances was continuous, with nothing discrete about it, and what we ended up with is a discrete set of allowed energies and a discrete set of solutions. Our wave functions look like psi sub n, not just any possible wave function: a big A, a normalization constant, times the sine of n pi over a times x, which is what you get by substituting ka equals n pi back in. This is our wave function, the spatial part, our solution to the time-independent Schrödinger equation, and there is only a discrete set of them. That is the quantum in quantum mechanics.
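As a quick numerical sanity check on that energy formula, here is a short sketch; the electron mass and the 1 nm box width are assumed example numbers, not a case worked in the lecture.

    import math

    hbar = 1.054571817e-34       # J s
    m = 9.1093837015e-31         # electron mass, kg
    a = 1.0e-9                   # assumed box width: 1 nm

    def E_n(n):
        # E_n = n^2 pi^2 hbar^2 / (2 m a^2)
        return n**2 * math.pi**2 * hbar**2 / (2 * m * a**2)

    for n in range(1, 5):
        print(n, E_n(n) / 1.602176634e-19, "eV")   # roughly 0.38, 1.5, 3.4, 6.0 eV

The energies climb as n squared, which is the pattern we'll draw in the energy-level diagram shortly.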
One more detail to nail down is the normalization: we know the integral from minus infinity to infinity of psi-star psi dx has to equal one, and in this case that reduces to the integral from 0 to a of psi-star psi, where psi sub n equals A times the sine of n pi over lowercase a, times x. So psi-star psi gives me A squared sine squared of n pi over a times x, integrated dx. There's no real reason for complex conjugates here, since this is a purely real function, so the integral just ends up looking like that, and it has to equal one if the wave function is going to be treated as a probability distribution. Now, integrating sine squared over an interval like this, you need to be a bit careful that the interval contains an integer number of cycles of sine squared, which it does here, and if you're integrating over an integer number of cycles, sine squared effectively averages out to a half. If you want to do the integral more rigorously, you can make the substitution sine squared of x equals one half minus one half cosine of 2x, and the integral of cosine you can do, but for now I'm just going to say the sine squared part averages out to a half. What we end up with is A squared times one half times the length of the interval we're integrating over, from zero to a, so technically a minus zero, but you get the idea. The integral must equal one in order for things to be normalized, which tells us that A equals the square root of 2 over a: big A is the square root of 2 over little a. That determines our normalization constant, and now we know everything about our solutions: psi sub n of x equals root 2 over a times the sine of n pi over a times x. That's our wave function, normalized and ready for use. So these are our solutions and these are the energies associated with those solutions, and we only got a discrete set of them. To better visualize this, I'm going to draw a diagram for you, a common sort of diagram in quantum mechanics, though it does abuse the system of units a little. If this is our x-axis, we know our wave function is non-zero only between 0 and a, so we have this region we're interested in. I want you to think of the y-axis as a hybrid energy and wave-function axis. Treating it as an energy axis, I'm going to make some marks, going up to 16, each mark representing one unit of energy. I'll label the lowest tick mark E1, which is what I get if I substitute 1 for n in this expression for the energy: pi squared h-bar squared over 2 m a squared. Now consider a line at E1, and treat that line as the axis for a plot of the wave function. We know what the wave function looks like for E1: substituting 1 for n, we get sine of pi over a times x, with the normalization constant, and just to show the shape we don't really care about the normalization constant; it looks something like that. If I continue up to E2: substituting 2 for n gives a 4 here, so E2 is 4 times bigger than E1, and I go up to 4 on the energy axis and draw a second line across. Now I can plot the wave function psi 2.
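The kind of hybrid energy and wave-function diagram being drawn here can be generated with a few lines of matplotlib. This is my own sketch, not the lecturer's figure, with a set to 1 and the wave functions scaled by an arbitrary factor so they are visible on the energy axis.

    import numpy as np
    import matplotlib.pyplot as plt

    a = 1.0
    x = np.linspace(0, a, 400)

    for n in range(1, 5):
        E_n = n**2                                        # energies in units of E1
        psi_n = np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
        plt.axhline(E_n, color='gray', lw=0.5)            # baseline at the energy level
        plt.plot(x, E_n + 0.8 * psi_n, label=f'n = {n}')  # wave function drawn on its level

    plt.xlabel('x')
    plt.ylabel('energy (units of E1) / wave function')
    plt.legend()
    plt.show()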
For psi 2 we have a 2 in the argument, so the argument goes from 0 to 2 pi as x goes from 0 to a, which means a full cycle of a sine wave. I can keep going: if I substitute 3, I get 9 times what I get for n equals 1, so E3 sits up at 9; I draw a line across there, treat it as the axis for the wave function psi 3, and that's going to be three half-cycles of a sine wave. You can continue on: E4 is at 16 times E1, somewhere up here, and it looks something like that. So: half a cycle, two half-cycles, three half-cycles, four half-cycles, gradually moving up in energy from one to four to nine to sixteen, going up as n squared. This is what our wave functions look like, and they have a lot of nice properties that we'll talk about later, but just to highlight one: if you look at the middle of the interval, a over 2, the wave function has either an extremum or a zero there, and the pattern alternates, extremum, zero, extremum, zero, as you continue to go up in energy. If you center yourself at the midpoint of the interval, the wave function is alternately even and odd about that midpoint: even, then odd, then even, then odd. This general structure, and the degree of symmetry we have here, leads to some really nice mathematical properties that connect this analysis with Fourier analysis, Fourier series in particular, which is what we'll talk about next. For now, to check your understanding, here are two arguments about what we did over the last couple of slides that disagree with what we did; your job is to figure out what's wrong with them. I keep talking about solutions to the time-independent Schrödinger equation and how they have nice mathematical properties. What I'm referring to are the orthogonality and completeness of solutions to the time-independent Schrödinger equation, and what that actually means is the topic of this lecture. To recap: these are what our stationary states look like for the infinite square well potential. This is the potential such that V of x is infinity if x is less than zero or x is greater than a, and zero for x between 0 and a. If this is our potential, you express the time-independent Schrödinger equation, you solve it, and you get sine functions for your solutions; you apply the boundary conditions properly, namely that psi has to go to zero at the ends of the interval because the potential goes to infinity there, and you get n pi over a times x as the argument of the sine functions; and you normalize them properly and get a square root of 2 over a out front. The energies associated with these wave functions, where the energy is the separation constant from the conversion of the time-dependent Schrödinger equation into the time-independent one, are proportional to n squared, where n is that index. The wave functions themselves look like sine functions with an integer number of half-wavelengths, or half-cycles, between 0 and a: the orange curve is n equals one, the blue curve is n equals two, the purple curve is n equals three, and the green curve is n equals four. If you calculate the squared magnitudes of the wave functions, they look like this: one hump for n equals one,
two humps for the blue curve, n equals two, three humps for the purple curve, n equals three, and four humps for the green curve, n equals four. You can see just by looking at these wave functions that there's a lot of symmetry. One thing we talked about in class is that these wave functions are either even or odd about the middle of the box, and this is a consequence of the potential being an even function about the middle of the box. If I draw a coordinate system going from 0 to a, the wave functions have either an extremum or a zero at the middle of the box: for n equals one we have a maximum, for n equals two we have a zero, and this pattern continues. The number of nodes is another property we can think about; this is the number of points where the wave function goes to zero. For instance, the blue curve, n equals two, has one node, and this trend continues as well: a wave function with one, two, three, four, five, six, seven nodes would be the wave function for n equals eight. These symmetry properties are nice, and they help you understand what the wave function looks like, but they don't really help you calculate. What helps you calculate are the orthogonality and completeness of these wave functions. So what does it mean for two functions to be orthogonal? Let's approach this from a perspective you're more familiar with: the orthogonality of vectors. We say two vectors are orthogonal if they're at 90 degrees to each other. If I had a two-dimensional coordinate system with one vector pointing in this direction, call it a, and another vector pointing in that direction, call it b, I would say those two vectors are orthogonal if there's a 90 degree angle between them. That's all well and good in two dimensions; it gets a little harder to visualize in three dimensions, and what does it mean for two vectors to be separated by 90 degrees in a 17-dimensional space? In higher dimensions it's more convenient to define orthogonality in terms of the dot product: two vectors are orthogonal if their dot product is zero. In two dimensions the dot product is the x components of both vectors multiplied together, a x times b x, plus the y components multiplied together, a y times b y; if this is zero, the vectors are orthogonal. In three dimensions we add a z times b z, and if that sum is zero the vectors are orthogonal. You can continue this, multiplying together like components of the vectors in each dimension, A1 B1 plus A2 B2 plus A3 B3 plus A4 B4 and so on, all added up; if that number is zero, the vectors are orthogonal. We can extend this notion to functions, but what does it mean to multiply two functions like this? In the case of vectors we were multiplying like components: both x components, both y components, both z components. In the case of functions, we can multiply the two functions' values at each particular x and add all of those products up, and what that ends up looking like is an integral: the integral of f of x times g of x, dx. I'm scanning over all values of x instead of scanning over all dimensions, and I'm multiplying the function values at each individual point and adding them all up, instead of multiplying the components of the two vectors dimension by dimension and adding those up.
The overall concept is the same, and you can think of this as, in some sense, a dot product of two functions. In quantum mechanics, since we're working with complex functions, it turns out we need to put a complex conjugate on f in order for things to make sense. This should start to look familiar: you've seen expressions like the integral of psi-star of x times psi of x, dx, equal to 1, our normalization condition. That is essentially the dot product of psi with itself. Psi, of course, is not orthogonal to itself, but it is possible to find pairs of functions that are orthogonal, and we say two functions are orthogonal if and only if the integral over the domain of the functions (there are limits on this integral, I'm just leaving them off) of f-star of x times g of x, dx, is equal to zero. As a brief side note, we can also make a connection with the magnitude, or norm, of a vector: if a vector dotted with itself equals one, we call it a unit vector, and for functions, if the integral of psi-star psi dx equals one, we say psi is normalized. So both of these concepts, dot products and unit vectors on one hand, inner products (you may hear that term as well) and normalized functions on the other, can be generalized. Orthogonality turns out to be really useful, because integrals like this appear a lot in quantum mechanics and it's very handy when we can look at an integral and say, oh, that's zero. In the case of the particle in a box, the infinite square well potential, we got sine functions, so what does this look like in real life? Sine functions obey an orthogonality condition, and this is the orthogonality integral: from 0 to a, the sine of n pi over a times x, times the sine of m pi over a times x, and for now I'm going to stipulate that n is not equal to m; you'll see where that comes in later. This integral can be done reasonably easily if you remember your trig identities, and I certainly don't remember mine, I have to go look them up all the time. The product identity for sine is: sine of x times sine of y equals one half of the cosine of x minus y, minus the cosine of x plus y. If you apply this identity to the product, what you end up with is a half out front, the integral from 0 to a as before, and two cosine terms: the cosine of n minus m, times pi x over a, minus the cosine of n plus m, times pi x over a, all integrated dx. This is now an integral you can do; it's just the integral of cosines. Since n is not equal to m, the quantity n minus m is non-zero and n plus m is non-zero, so both terms work out fine, and we end up with, for our integral, a half out front, times a over n minus m pi, times the sine of n minus m pi x over a, minus a over n plus m pi, times the sine of n plus m pi x over a, the whole thing evaluated between 0 and a. Now I can do the evaluations: if I plug in a for x, the a's cancel inside the arguments, and pulling the a and the pi out front I've got a over 2 pi out front,
and then the sine of n minus m, times pi, over n minus m, minus the sine of n plus m, times pi, over n plus m. That's the expression evaluated at a; if I evaluate it at zero I get zero, because substituting 0 for x makes the whole argument of each sine zero, and the sine of zero is zero. So this is our answer, but with n and m both integers, the sine of an integer times pi is also zero, so these terms are zero as well: the first sine term goes to zero, the second sine term goes to zero, and what we're left with is just zero. That means, subject to our assumption that n is not equal to m, these two sine functions, sine of n pi x over a and sine of m pi x over a, are orthogonal. In the case of the normalized wave functions, the normalization constants take care of the remaining factor, and what you end up with is the integral from 0 to a of psi sub n star of x times psi sub m of x, dx, equal to something called the Kronecker delta, delta n m, which is defined to be one if m equals n and zero if m is not equal to n. These are the sorts of orthogonality conditions we'll be working with in quantum mechanics, and writing them out in terms of the Kronecker delta is a handy way of doing things. That's what orthogonality actually looks like for the solutions to the time-independent Schrödinger equation for the infinite square well potential, the particle in a box. Where are we going with this? These orthogonality conditions are really handy thanks to something called Fourier's trick, which is what your textbook, Griffiths, calls it, and the trick goes like this. Suppose I have some general function of x, and I hypothetically say I'm going to write this function f of x as an infinite sum of constants multiplied by sine functions. If this were possible, how would I find the c sub n necessary to actually write it? It turns out you can do this pretty easily. You take f of x, which equals the sum over n from 1 to infinity of c sub n times sine of n pi x over a, and you multiply both sides from the left by sine of m pi x over a and integrate from 0 to a, dx. If I do this, there's not much I can do with the left side, since I don't know what f of x is, but I can work with the right-hand side. First I exchange the order of summation and integration and pull the constant c sub n out of the integral, so what I have is the sum from n equals 1 to infinity of c sub n times the integral from 0 to a of sine of m pi x over a times sine of n pi x over a, dx. You know what this integral looks like: it's the orthogonality condition we were working with on the last page, so it's going to be zero if n is not equal to m. If you imagine this sum as that term repeated over and over for different values of n, all of those terms vanish except for the one term where n equals m.
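You can have a computer confirm the orthogonality relation this argument relies on; here is a small SymPy sketch, again a stand-in for the Sage notebook used later in the lecture.

    import sympy as sp

    x, a = sp.symbols('x a', positive=True)

    def psi(n):
        # normalized infinite-square-well stationary state
        return sp.sqrt(2 / a) * sp.sin(n * sp.pi * x / a)

    # Orthogonality: different n give zero, the same n gives one
    for n, m in [(1, 2), (2, 3), (1, 3), (2, 2), (3, 3)]:
        overlap = sp.integrate(psi(n) * psi(m), (x, 0, a))
        print(n, m, sp.simplify(overlap))

Every pair with n not equal to m integrates to zero, and each state against itself gives one, which is exactly the Kronecker delta statement above.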
What that means is that we no longer have a sum; we have only a single term, and that single term is c sub m times the integral from 0 to a of sine of m pi x over a times sine of m pi x over a, since n is now equal to m, which I'll just write as sine squared, dx. This looks like our normalization condition; we know how to do this integral, and it just comes out to a over 2. So we've done all of our integrals and we've made our sum go away, which is a pretty neat trick. We have our left-hand side over here, and our right-hand side is just c sub m times a over 2. We can solve for c sub m, and what you get is c sub m equals 2 over a times the integral from 0 to a of f of x times sine of m pi x over a, dx. This tells us that if it is possible to write f of x as a sum like this, this formula gives us the numbers to use in the sum. That's all well and good, but does it actually work? This is nice because it hypothetically allows us to express any function, and in the context of the Schrödinger equation that would be any initial condition, as a sum of what are now going to be our stationary states. Maybe we can express our initial conditions as a sum of stationary states; superpositions of stationary states are also solutions to the Schrödinger equation, so this is good: it allows us to construct whatever wave function we want in terms of the functions we have, if we follow this formula. But does it work? That's the other property that's really nice about solutions to the time-independent Schrödinger equation: they form what's called a complete basis, a complete basis set. This is like having a set of unit vectors with which you can express any other vector; for instance, x-hat, y-hat, and z-hat, unit vectors pointing in the x, y, and z directions, form a basis for 3D space. If we have a set of solutions to the Schrödinger equation, for instance the solutions for the infinite square well potential, which are sine functions, they actually form a complete basis for functions. That means this formula for expressing some function f of x as a sum of sine functions, where the numbers in the sum are calculated by this Fourier's-trick sort of integral, actually works for damn near any f of x. Not quite any: for the sake of being mathematically rigorous, this is really only going to work for smooth, square-integrable functions. If f of x blows up to infinity this isn't going to work, and if f of x has a lot of corners and discontinuities this isn't going to work either, but for smooth, square-integrable functions, which happen to be what we really care about in quantum mechanics, this works. Why does it work? Just to say very briefly and conceptually: it works because it is possible to choose the c sub n so that the sum of c sub n times sine of n pi x over a, plotted as a function of x between 0 and a, makes functions that are very sharp and very tall, and I can put these spikes wherever I want by suitable choice of the c sub n. If I change the c sub n I can change the position of the spike, and I can make it as sharp, as tall, or as short as I want. What that means is I can make whatever function you want. For instance, suppose the function you want looks like this: I can make it by adding up a bunch of spikes, a little spike here, a little spike there, and so on. If I effectively fill this whole space with very sharp spikes going up to the value of the function, I can recreate whatever function you want, no matter what shape it is, provided it's reasonably well behaved and square integrable. This actually works really well.
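Here is a small numerical sketch of Fourier's trick in action; the target function f of x equals x on a box of width 1 is an assumed example, chosen to roughly match the straight-line function plotted on the next slide.

    import numpy as np

    a = 1.0
    x = np.linspace(0.0, a, 2001)
    dx = x[1] - x[0]
    f = x.copy()                          # assumed target function f(x) = x

    def c(m):
        # Fourier's trick: c_m = (2/a) * integral_0^a f(x) sin(m pi x / a) dx
        return 2.0 / a * np.sum(f * np.sin(m * np.pi * x / a)) * dx

    def partial_sum(N):
        total = np.zeros_like(x)
        for n in range(1, N + 1):
            total += c(n) * np.sin(n * np.pi * x / a)
        return total

    for N in (1, 5, 20):
        err = np.max(np.abs(partial_sum(N) - f)[x < 0.9 * a])   # away from the endpoint
        print(N, err)                     # the maximum error shrinks as N grows

The fit is worst right at x equals a, where the sines are forced to vanish but f is not, which is exactly the endpoint mismatch discussed next.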
What this looks like graphically is shown here: this is some hypothetical f of x, shown in black, and it's just a straight line running from down here to up here. If I only include the first term in this long sum, remember we're expressing f of x as the sum from n equals 1 to infinity of c sub n times sine of n pi x over a, so if I only let the sum go from n equals 1 to 1, I get the blue curve: there's only one term in the sum, you just get a sinusoid, and it's not a very good approximation to the straight black line. But if I let N become larger, I think I have n up to 20 here for the purple curve, you can see the purple curve drops very rapidly, wiggles, but mostly runs straight along the black curve. It has some difficulty matching the black curve at the endpoints, and that's because we're working with sine functions that came from the solutions to the Schrödinger equation, not from a rigorous treatment of Fourier series or Fourier expansions of functions. Since we're requiring the purple curve to go through zero at the ends, of course it's going to have to give up on fitting the function near the endpoints, but if you include a lot of terms you can make this approximation quite good; generally, the more terms you add, the closer you get to your function. So, to sum up: the solutions to the time-independent Schrödinger equation for the particle in a box are these sine functions, and these sine functions obey an orthogonality condition. That orthogonality condition allows you to find, relatively easily, the constants to use in an expression of any function as a sum of sine functions, as a sum of stationary states. So if we have some initial condition for our wave function, we can express it as a sum of stationary states; we know how stationary states evolve with time; and we then know everything about how our wave function will evolve forward in time. To check your understanding, here are two relatively straightforward problems: use Fourier's trick and the orthogonality conditions for sine to determine, for instance, C2, C3, and C4 for this f of x, or C2 for this other f of x. So, we've been working with solutions to the time-independent Schrödinger equation for the infinite square well potential, the particle-in-a-box case. How do these things actually work, though? To give you a better feel for what the solutions look like and how they behave, I'd like to do some examples and use a simulation tool to show you what time evolution under the Schrödinger equation in this potential actually looks like. The general procedure we've followed, or will be following in this lecture, is this: once we've solved the time-independent Schrödinger equation, we get the form of the stationary states; knowing the boundary conditions, we get the actual stationary-state wave functions and their energies; these can then be normalized to get true stationary-state wave functions that we can actually use, and these will, for the most part, form an orthonormal set, psi sub n of x; and we can add the time part, which we got when we
separated variables in the time-dependent Schrödinger equation. We can then express our initial conditions as a sum of these stationary-state wave functions, and use that sum to determine the behavior of the system. So what does that actually look like in the real world? Not like very much, unfortunately, because the infinite square well potential is not very realistic, but a lot of the features we'll see in this sort of potential appear in more realistic potentials as well. So this is our example. These are our stationary-state wave functions, what we got from the solution of the time-independent Schrödinger equation; this was the form of the stationary states, these were the energies, and this is the normalized solution with the time dependence added back on, since the time dependence is basically trivial. The initial condition I'd like to consider in this lecture is a wave function that, at time t equals zero, is zero outside the domain from 0 to a, and inside the domain is this properly normalized wave function: root 3 over a, times one minus the absolute value of x minus a over 2, divided by a over 2. The absolute value makes this a little awkward to work with, but the plot is simple: drawing a coordinate system from 0 to a, it's just a tent, a properly built tent with straight walls going up to a nice peak in the middle. Our general procedure says we should express this initial condition in terms of the stationary states, with their time dependence, and that will tell us everything we need to know. One thing that will make this easier to work with is getting rid of the absolute value. Let's express psi of x at time t equals zero as a three-part function. The first piece covers 0 less than x less than a over 2, the first half of the interval, where the tent slopes upward. For x less than a over 2, the quantity x minus a over 2 is negative, so I can get rid of the absolute value by flipping its sign and writing it as a over 2 minus x, which is positive in this range. So the first piece is root 3 over a, times one minus the quantity a over 2 minus x, over a over 2. The second piece covers a over 2 less than x less than a, the second half of the interval. There x is larger than a over 2, so x minus a over 2 is positive, and I can drop the absolute value and just write x minus a over 2: the second piece is root 3 over a, times one minus the quantity x minus a over 2, over a over 2. And of course the third piece, outside the interval, is zero.
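Just to check that this tent really is properly normalized, here is a short SymPy sketch; it's my own verification of what the lecture asserts, with a left symbolic.

    import sympy as sp

    x, a = sp.symbols('x a', positive=True)

    # Tent-shaped initial condition, written without the absolute value
    psi0 = sp.Piecewise(
        (sp.sqrt(3 / a) * (1 - (a / 2 - x) / (a / 2)), x < a / 2),   # rising half
        (sp.sqrt(3 / a) * (1 - (x - a / 2) / (a / 2)), True),        # falling half
    )

    norm = sp.integrate(psi0**2, (x, 0, a))
    print(sp.simplify(norm))          # 1, so the sqrt(3/a) prefactor is correct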
This technique of splitting an absolute value up into separate ranges makes the integrals a little easier to express and a little easier to think about. So that is our initial condition. How can we express it as a sum of stationary-state wave functions evaluated at time t equals zero? This is where Fourier's trick comes in. If I want to express my initial condition as a sum of stationary-state wave functions, I can use this sort of expression: the initial condition multiplied from the left by a stationary-state wave function, complex conjugated, and integrated over the domain; that gives us the constants c sub n that go into the expansion of the initial condition in terms of the stationary states. The notation here is that if psi appears without a subscript, that's our initial condition, our actual wave function, and if psi appears with a subscript, it's a stationary-state wave function. So what does this look like? We know what these functions are, and we know the initial condition, which has an absolute value in it, is best expressed if we split it in two, so we're going to split the integral into one piece going from 0 to a over 2 and one going from a over 2 to a. So c sub n equals the integral from 0 to a over 2 of our normalized stationary-state wave function, root 2 over a times the sine of n pi x over a, which is psi sub n star evaluated at time t equals zero (I'm ignoring the time part for now; even if I had it in there, I'd be evaluating e to the zero at t equals zero, which is one), times psi, our initial condition, which for the first half of the interval is root 3 over a times one minus the quantity a over 2 minus x, over a over 2, integrated dx. The second piece, the integral from a over 2 to a, looks much the same: root 2 over a, sine of n pi x over a, that part doesn't change; the only thing that changes is that we're dealing with the second half of the interval, so the absolute value gives the other sign: root 3 over a times one minus the quantity x minus a over 2, over a over 2, integrated dx.
Substitute in for n and do the integrals; this, as you can imagine, is kind of a pain. So what I'd like to do at this point is give you a demonstration of one way you can do these integrals without having to think all that hard: doing them on the computer. You can of course use Wolfram Alpha for this, and you can of course use Mathematica, but the tool I'd like to demonstrate is called Sage. Sage is different from Wolfram Alpha and Mathematica in that Sage is entirely open source and entirely freely available; you can download a copy, install it on your computer, and work with it whenever you want. It's a very powerful piece of software. Unfortunately it's not quite as good as the commercial alternatives, but it can potentially save you a couple hundred dollars. The interface I'm using is their notebook web page: you can use your Google account to log in, and then you have access to this sort of an interface. If I scroll down a little, I start defining the problem. There's a, for our domain: the domain goes from zero to a. There's h-bar, which I'm defining to be equal to one, since that number is a whole lot more convenient than ten to the minus thirty-fourth. Then n, x, and t are just variables, defined by these strings. Now we get into the physics. The energy is a function of which index you have, which particular stationary state you're talking about: this would be psi sub n, and this would be E sub n, with E sub n equal to n squared pi squared h-bar squared over 2 m a squared, the equation we derived. Psi sub n of x and t is given by this: the square root of 2 over a, times the sine function, times the complex exponential, which uses the energy I just defined. Psi-star is the complex conjugate of psi, which I've written by hand, more or less copying, pasting, and flipping the sign in the exponent. G of x is what I've defined the initial condition to be: the square root of 3 over a times this one-minus-absolute-value expression. And c sub n is the integral of g of x times psi from 0 to a over 2, plus the integral of g of x times psi from a over 2 to a. Now, I've left off the psi-stars, but since I'm evaluating at time t equals zero it doesn't matter: psi equals psi-star at t equals zero. I did have to split the integral into 0 to a over 2 and a over 2 to a, because otherwise Sage got a little too confused about what it thought the integral should be. Given all this, I can plot g, and if I click evaluate, momentarily a plot appears: this is the plot of g of x as a function of x. I've defined a to be equal to one, so we're just going from zero to one, and this is that tent function I mentioned. If I scroll down a little more, we can evaluate c of n. This is what you get if you plug into that integral I wrote on the last slide, and you can make a list evaluating c of n for n going from 1 to 10. This is what you get, these sorts of expressions (the even-n coefficients come out to zero): 4 times the square root of 6 over pi squared, then minus 4 root 6 over 9 pi squared, then 4 root 6 over 25 pi squared, then minus 4 root 6 over 49 pi squared.
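If you don't have that Sage notebook handy, the same coefficients can be reproduced with SymPy; this is my own sketch, with a set to 1 and the integral split at a over 2 just as in the lecture.

    import sympy as sp

    x = sp.symbols('x', positive=True)
    a = sp.Integer(1)                    # box width, set to 1 as in the lecture
    half = a / 2

    def psi_n(n):
        # stationary state at t = 0
        return sp.sqrt(2 / a) * sp.sin(n * sp.pi * x / a)

    def g_left():
        return sp.sqrt(3 / a) * (1 - (half - x) / half)    # rising half of the tent

    def g_right():
        return sp.sqrt(3 / a) * (1 - (x - half) / half)    # falling half of the tent

    def c(n):
        # split the integral at a/2, just as in the Sage notebook
        return (sp.integrate(psi_n(n) * g_left(), (x, 0, half))
                + sp.integrate(psi_n(n) * g_right(), (x, half, a)))

    for n in range(1, 6):
        print(n, sp.simplify(c(n)), float(c(n)))
        # c_1 = 4*sqrt(6)/pi**2 ~ 0.99, c_2 = 0, c_3 = -4*sqrt(6)/(9*pi**2) ~ -0.11, ...

The even coefficients vanish because the tent is symmetric about the middle of the box while the even-n sine functions are antisymmetric there.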
You can see the sort of pattern we're working with: some number divided by an odd number squared. We can approximate these things just to get a feel for what the numbers are actually like, and we have 0.99, minus 0.11, plus 0.039, et cetera, moving on down. So that's the sort of thing we can do relatively easily with Sage: get these types of integral expressions and their values. You can see I've done more with this Sage notebook, and we'll come back to it in a moment, but for now these are the sorts of expressions you get for c sub n. So our demo with Sage tells us c sub n equals some messy expression, and Sage can evaluate that messy expression and tell us what we need to know. Now, the actual form of the evaluated c sub n was not all that complicated, and we can truncate our sum. This is expressing psi of x and t, our wave function, as an infinite sum, n equals 1 to infinity, of c sub n times psi sub n of x and t. If I truncate this sum at, say, n equals 3, I'll just have terms from psi 1 and psi 3; recall from the Sage results that the coefficient of psi 2, c sub 2, was equal to zero. So let's find the expectation of x squared. Knowing the form of these functions, and now knowing the values of these c sub n from Sage, you can write out what x squared should be. This is the expected value of x squared, and it's going to be an integral of these numbers: 4 root 6 over pi squared times psi 1, which was root 2 over a times sine of pi x over a (not n pi x over a, since you're just dealing with psi 1 now). We have to include the time dependence now, since I'm looking for the expected value of x squared as a function of time, so we have e to the minus i times pi squared h bar squared t over 2 m a squared, all divided by h bar, or I could just cancel out one of the h bars here. That's the first term of our expression. In the next term we have 4 root 6 over 9 pi squared from this coefficient; now psi 3 is root 2 over a times sine of 3 pi x over a, times again a complex exponential, e to the minus i times 9 pi squared h bar squared t over 2 m a squared, all divided by h bar. Now, this whole thing needs to be complex conjugated, because this is psi star. What's next? Well, I need to multiply this by x squared, and I need to multiply that by the same sort of thing with e to the plus instead of e to the minus. So the term in orange brackets here is psi star, this is our x squared, and the term in blue brackets here is our psi. We're just using the same sort of expression, only now you can see just how messy it is. This is the integral of psi star times x squared times psi: this is psi star, this is x squared, and this stuff is psi, and we have to integrate all of it dx from 0 to a. It's pretty messy, but doable. Now, since I was working with Sage anyway, I thought, let's see how the time dependence in this expression plays out in Sage. So going back to Sage, we know these c sub n's; these are the ones I chose for c sub 1 and c sub 3, and c sub n evaluated gave me these numbers in decimal form. Now I can use these c sub n's to express that test function where I truncated my sum at psi sub 3.
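Written out with the coefficients from Sage, the truncated expectation value just set up has the form below; the cross terms are what leave a single cosine in time:

\[
\langle x^2\rangle(t) \;\approx\; \int_0^a \Bigl(c_1\psi_1 e^{iE_1 t/\hbar}+c_3\psi_3 e^{iE_3 t/\hbar}\Bigr)\,x^2\,\Bigl(c_1\psi_1 e^{-iE_1 t/\hbar}+c_3\psi_3 e^{-iE_3 t/\hbar}\Bigr)\,dx
\]
\[
=\; c_1^2\!\int_0^a \psi_1^2\,x^2\,dx \;+\; c_3^2\!\int_0^a \psi_3^2\,x^2\,dx \;+\; 2\,c_1 c_3\cos\!\left(\frac{(E_3-E_1)\,t}{\hbar}\right)\!\int_0^a \psi_1\psi_3\,x^2\,dx,
\qquad \frac{E_3-E_1}{\hbar}=\frac{4\pi^2\hbar}{m a^2},
\]

with \(\psi_n(x)=\sqrt{2/a}\,\sin(n\pi x/a)\) real, \(c_1 = 4\sqrt{6}/\pi^2\), and \(c_3 = -4\sqrt{6}/(9\pi^2)\).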
So this is our test function, and you find, if you evaluate it, that it's a lot simpler when you plug in the numbers: a sine of 3 pi x and a sine of pi x, when h bar is one and a is one. These expressions are a lot easier to work with, which gives you a feeling for why in quantum mechanics we often set h bar equal to one. The expected value of x squared here is then the integral of the conjugate of my test function, times x squared, times my test function, integrated from 0 to a, and Sage can do that integral; it just gives you this. Sage can also plot what you get as a result. Now, you notice Sage has left complex exponentials in here; if you take this expression and manually simplify it, you can turn it into something with just a cosine. There is no complex part to this expression, but Sage isn't smart enough to see that numerically, so I have to take the absolute value of the expression to make the tiny, tiny complex parts go away, and if I plot it over some reasonable range, this is what it looks like: it's a sinusoid, a cosine actually. What we're looking at here on the y-axis is the expected value of x squared, which is related to the variance in x, so it's a measure, more or less, of the uncertainty in position. So our uncertainty in position is oscillating with time. What does this actually look like in the context of the wave function? Well, the wave function itself is going to be a sum: c sub 1 times psi 1, c sub 3 times psi 3, c sub 5 times psi 5, c sub 7 times psi 7, et cetera. I can do that in general by making this definition of a function where I just add up all of the c sub n's times the psi sub n's for n in some range. f of x, if I go out to 7, looks like this, and you can get a feel for what it would look like if I added more terms as well. Now, the plot I'm showing you here is a combination of four things. First, it's the initial conditions, shown in red; that's the curve underneath here, the tent. I'm also showing you this approximate wave function when I truncate the sum at 2, which is just the first term; that's this poor approximation here, the smooth curve. Then the function if I truncate the approximation at 4, which will include psi 1 and psi 3; that's this slightly better approximation here. And if I continue all the way up to 20, that's this quite good approximation, the blue curve here that comes almost all the way up to the peak of the tent. So that's what our approximate wave functions look like, but these are all evaluated at t equals zero. What does this look like, for instance, in terms of the probability density, as a function of time? Let's define the probability density: rho of x and t is the absolute value squared of our approximate wave function, and I'll carry the approximation all the way to n equals 20, getting the numerical form with this .n() at the end. So this is our approximate form of the probability density, calculated with the first 20 stationary state wave functions.
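Continuing the Sage sketch above (run after the previous cell), the truncated test function, the expected value of x squared, and the 20-term approximation might be set up roughly like this; ftest, ex2, coeffs, and f are again illustrative names rather than the lecture's actual cell contents.

    # Continuing the sketch: truncated wave function, <x^2>(t), and the t = 0 comparison
    c1 = c(1); c3 = c(3)                               # c(2) is zero, so psi_2 drops out

    ftest(x, t) = c1*psi(1, x, t) + c3*psi(3, x, t)    # sum truncated at n = 3
    ftest_star(x, t) = c1*psistar(1, x, t) + c3*psistar(3, x, t)

    ex2(t) = integrate(ftest_star(x, t) * x^2 * ftest(x, t), x, 0, a)
    plot(abs(ex2(t)), (t, 0, 0.5))                     # a cosine: the uncertainty oscillates

    coeffs = [c(k) for k in range(1, 21)]              # first 20 Fourier coefficients
    def f(x, t):                                       # 20-term approximate wave function
        return sum(coeffs[k-1] * psi(k, x, t) for k in range(1, 21))

    # initial condition (red) versus the 20-term approximation at t = 0 (blue)
    plot(g, (x, 0, a), color='red') + plot(lambda s: abs(f(s, 0)), (0, a), color='blue')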
This plot then shows you what that time dependence looks like: I'm plotting the probability density at time t equals zero, and at times 0.04, 0.08, 0.12, and 0.16. We start with dark blue, that's this sort of peaked curve, which should be more or less what you expect, because we did a problem like this for this sort of wave function in class. Then you go to dark green, which is under here, underneath the yellow; it seems to have lost the peak and spread out slightly. Red is at time 0.08, and if I scroll back up to our uncertainty-as-a-function-of-time plot, 0.08 is here, so it's pretty close to the maximum uncertainty; you expect the width to start decreasing thereafter. If I scroll back down, this red curve is more or less as wide as this distribution will ever get. If we continue on in time, now going to 0.12, that was the orange curve here, and the orange curve is back on top of the green curve; the wave function has effectively gotten narrower again. If you keep going all the way up to 0.16 you get the cyan curve, the light blue one, which is more or less back on top of the dark blue curve. So the wave function has sort of spilled outwards and then sloshed back inwards; you can imagine this as ripples in a tank of water radiating out and then coming back to the center. This is what the time evolution looks like as calculated in Sage: you can make definitions of functions like this, you can evaluate them, you can plot them, and you can do all of that relatively easily. I'll give you all a handout of this worksheet so you get a feel for the syntax. If you're interested in learning more about Sage, please ask me some questions; I think Sage is a great tool, and I think it has a promising future, especially in education like this, for students; the fact that it's free is a big deal. So that's what the time variability looks like. We had our probability density, rho of x, which I should actually write as rho of x and t, which started off sharply peaked, got wider, and then sloshed back in. So we have this outwards motion followed by inwards motion, where our expectation of x squared, related to our uncertainty, oscillated, not about zero but about some larger value. There's some sort of mean uncertainty here: sometimes you have less uncertainty, sometimes you have more. That's the sort of time dependence you get from quantum mechanical systems. To get an even better feel for what the time variability looks like, there's a simulation I'd like to show you, and it comes from falstad.com, which as far as I can tell is run by a guy who was sick of not being able to visualize these things, so he wrote a lot of software to help him visualize them. So here's the simulation, and I've simplified the display a little bit to make things easier to understand. These circles on the bottom here: each circle represents a stationary state wave function, and he has gone all the way up to stationary state wave functions that oscillate very rapidly. In this case, this is our ground state, this is our first excited state, second excited state, third excited state, et cetera: n equals one, two, three, four, five, six, seven, and so on. Now, in each of these circles there may or may not be a line, and the length of the line represents the magnitude of the time part of the evolution of that particular stationary state.
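Still continuing the same Sage sketch, the multi-time snapshot plot just described might look something like the following; the times and colors mirror the plot in the lecture, and rho is an illustrative name.

    # Continuing the sketch: snapshots of the approximate probability density in time
    def rho(x, t):
        return abs(f(x, t))^2          # |Psi|^2 from the 20-term approximation

    P  = plot(lambda s: rho(s, 0.00), (0, a), color='darkblue')
    P += plot(lambda s: rho(s, 0.04), (0, a), color='darkgreen')
    P += plot(lambda s: rho(s, 0.08), (0, a), color='red')      # widest spread
    P += plot(lambda s: rho(s, 0.12), (0, a), color='orange')   # narrowing again
    P += plot(lambda s: rho(s, 0.16), (0, a), color='cyan')     # nearly back on the t = 0 curve
    P.show()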
The angle going around the circle represents the phase as that evolution proceeds. So if I un-pause this simulation, you can see this slowly rotating around. You're also probably noticing the color here changing: the color represents the phase, while the vertical size represents the probability density. So it's a representation of where you're likely to find the particle, plus a color-based representation of how quickly it's evolving. The vertical red line here in the center tells you what the expectation value for position is, and in this case it's right down the middle. If I freeze the simulation and add a second wave function, this is now adding some component of the first excited state, and by moving my mouse around here I can add varying amounts, either none or a lot, and I can add it at various phases. I'm going to add a lot of it, an equal amount to the ground state, and at the same phase, but then I'm going to release and let that evolve. You can see now the probability density is sloshing to the left and sloshing back to the right, and if you look at our amplitudes and phases, the ground state is still rotating and the first excited state is rotating, but the first excited state is rotating four times faster. So when they align you have something on the right, when they anti-align, something on the left; they're aligned, they're anti-aligned, and this sloshing back and forth is one way we can actually get motion out of stationary states. You notice the phase is no longer constant: you have some red parts and purple parts, and things are sort of moving around in an awkward way. The colors are hard to read, but you know now that the phase of your wave function is no longer going to be constant as a function of position. So those exponential time parts may be giving you a wave function that's purely real here and purely imaginary here, or some combination of real and imaginary, some general complex number, and that complex number is not simply e to the i omega t; it's e to the i times something that's a function of position as well as time. It's complicated. I can of course add some more wave functions here, and you get even more complicated sorts of evolution: our expected value of x is now bouncing around fairly erratically, and our phase is bouncing around even more erratically, but what we're looking at here is just the sum of the first six stationary states, each evolving with the same amplitude and different phases. Now I'm going to stop the simulation and clear it. Another thing I can do with this simulation tool is put a Gaussian into the system, so I'm going to put a Gaussian in here. This is sort of our initial condition, and the simulation has automatically figured out that it wants a lot of the ground state psi 1, a lot of psi 3, a lot of psi 5, a lot of psi 7, a little bit of psi 9, a little bit of psi 11, et cetera. If I slow this down a little bit and then play it, you see the wave function gets wide, splits in two, gets narrower again, and sloshes back to where it started. If you watch these arrows down here, you can tell when it comes back together: the arrows are all pointing in the same direction, and when it's dispersed the arrows are pointing in different directions. Since our initial conditions were symmetric, there's no reason to expect the expectation value of position to ever move away from the center of this well.
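As a quick sketch of why this equal mixture sloshes at the rate it does (using the infinite-well energies from earlier, with the 1 over root 2 for the equal-amplitude mixture assumed):

\[
\Psi(x,t) \;=\; \tfrac{1}{\sqrt{2}}\Bigl(\psi_1(x)\,e^{-iE_1 t/\hbar}+\psi_2(x)\,e^{-iE_2 t/\hbar}\Bigr)
\quad\Longrightarrow\quad
|\Psi(x,t)|^2 \;=\; \tfrac12\psi_1^2+\tfrac12\psi_2^2+\psi_1\psi_2\cos\!\left(\frac{(E_2-E_1)\,t}{\hbar}\right),
\]

and since \(E_n \propto n^2\) we have \(E_2 = 4E_1\), which is why the first excited state's phasor turns four times faster, while the density sloshes at the difference frequency \((E_2-E_1)/\hbar = 3\pi^2\hbar/(2ma^2)\).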
But as psi 1, psi 3, psi 5, psi 7, et cetera oscillate at their own rates in time, the superposition results in relatively complicated dynamics for the overall probability density. And of course I can make some ridiculously wacky initial conditions that just oscillate all over the place in a very complicated way; there are a lot of contributions to this wave function now, and no one contribution is particularly winning, though you occasionally see little flashes of order in the wave function. I highly encourage you to play with these simulations just to get a feel for how time evolution in the Schrodinger equation works. There is a lot more than just the square well here: there's a finite well, a harmonic oscillator, a pair of wells; there are lots of things to play with, so you can get a reasonably good feel for how the Schrodinger equation behaves in a variety of physical circumstances. So that's our simulation, and hopefully you now have a better feel for what solutions to the Schrodinger equation actually look like. To check your understanding, explain how these two facts are related: time variability in quantum mechanics happens at frequencies given by differences of energies, whereas in classical physics you can set the reference level for potential energy to whatever you want, sort of equivalent to saying I'm measuring gravitational potential from ground level versus from the bottom of this well. Thanks for watching, please subscribe, and don't miss out on new videos and lectures.
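As a hint for that closing check-your-understanding question, consider shifting the potential by an arbitrary constant V0:

\[
V \to V+V_0 \;\Rightarrow\; E_n \to E_n+V_0 \;\Rightarrow\; c_n\psi_n\,e^{-i(E_n+V_0)t/\hbar} \;=\; e^{-iV_0 t/\hbar}\,c_n\psi_n\,e^{-iE_n t/\hbar},
\]

so every term in the sum picks up the same overall phase, that phase cancels in \(|\Psi|^2\) and in every cross term, and the only frequencies you can observe are \((E_m-E_n)/\hbar\). Shifting the zero of potential energy therefore changes nothing measurable, just as in classical mechanics.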