This fluid simulator is what we are going to build in this series. And once it is ready to go, we study all kinds of fluid dynamical phenomena. Familiar things like lift and drag, but also more unfamiliar things like vortex shedding or pressure-velocity interplay. And there is a very good reason why we don’t just pick one of the ready-made software tools. Partly, it comes in handy; we dive into the fundamentals anyway, so why not directly apply what we learn. But more importantly, building it on our own also makes us believe in it. Not just learning concepts and equations but really seeing where they come from and how they come into play, that’s what makes the difference. And we won’t miss out on that opportunity. And to get all this going, we start at the subatomic level that underlies molecular interactions. We then use different approaches to reduce the problem’s complexity. In particular, we average out the molecules by establishing the fluid as a continuous field. This continuum is then discretized spatially and temporally, whatever that will mean. And we wrap it all up with a proper boundary treatment. Once we really understand the simulator, we can rely on it for answering any upcoming question. And a super nice benefit of having our own simulator is that we have full control over it. Meaning, we can actively manipulate the physics and see how phenomena are inherently linked to it. But first things first. To get the mathematical foundation right, we have to dive into fields like computational fluid dynamics – or CFD for short. And CFD is one of those topics that tends to come with a reputation for being unsettling. But whenever you feel overwhelmed, it is a good idea to step back and appreciate that it can all be broken down into a few core ideas. And we will uncover all of them in this series. In this first part we focus on the microscopic foundation that really justifies all of the macroscopic implementations of fluid simulation. So, my primary goal here is that by the end of this video, you will have a clear understanding of the microscopic perspective of fluid dynamics ... ... and by the end of the series you will have a good sense of fluid simulation in general. Alright, let’s dive in. I bet all of you have a certain idea of how fluids work. Maybe you think of it in terms of discrete colliding particles ... ... or in terms of a continuous flowing medium. To get on the same page, we start off by aligning our perspectives on fluids. They are all useful, but some are more useful depending on the context. Here is what I mean. The central theme underlying simulating fluids is reducing information in every possible way. The reason is simple: we can’t just process all the raw information that is out there. So, before we even start simulating anything, let’s get a rough idea of the sheer amount of information we are dealing with. If we are going to represent all the molecules that make up the fluid on a computer, two things show up immediately: (a) there are simply too many of them, not just a bit too many but overwhelmingly too many, and (b) there is information we would only want to know statistically, or maybe not at all. Take the rotation of a molecule. Think about it. Do you expect a significant difference, say in lift, if this particular molecule is rotated differently? This one right here. Or take the location of the molecules. What matters in the end are the average states of larger portions of them.
And given all that, modelling subatomic mechanisms on a quantum level for each molecule would be off the charts ... ... at least when we are interested in quantities of a more global character, like the net lift force for instance. Ok, I think we can all agree that we have to get rid of a lot of this information. But how exactly? How do we mathematically emphasize what’s relevant for us and average out the rest we don’t care about? Well, there is no single correct way. The approach we will take consists of several layers or modules. But these can be swapped for alternatives as needed. And this right here is important: building a simulation in a modular way means giving it structure. We will use a lot of these tools in future series, and a structured perspective will pay off. Ok, to really build it from the ground up, let’s get clear about a good starting point – quantum mechanics. And to understand why we need each layer of abstraction, we have to understand what’s problematic about everything underneath. So, what is the problem that quantum mechanics tries to address? As we will see, this problem is so severe that simulating even small fluid systems on a quantum level is practically impossible. Ok, to see what is going on, we have to look at fundamental properties of nature. It ultimately boils down to how we think about measuring. For a macroscopic object, like this ball for instance, tracking its path or trajectory is an easy task. You just look at it or use any other device that looks for you. The problem is: how do you track the path of smaller things, like an elementary particle? This could be an electron that moves around the nucleus of a hydrogen atom, for instance. You have to rethink what looking means. Let’s build an experiment. Here we use a laser to shoot photons towards the atom, which itself is placed in an electric field. A photon may then kick the electron out of the electrostatic potential well that surrounds the atom. And when that happens, the electron is accelerated by the electric field to the right side, where its localized influence is detected. Now, the details are not important here, but if this experiment is performed in the right way ... ... it is possible to gain information about the original location of electrons within an atom. This is awesome, seeing where electrons were in an atom ... ... although by doing so, we change the electron’s future significantly. In this sense, looking really means interacting. The specimen and the measuring device - electron and photon - are simply closer on the energy scale compared to any macroscopic composition. So, while you measure, you exert influence. Even if we use an experiment that does not rip the atom apart, we have to probe the electron in one way or another. So, practically, tracking an undisturbed trajectory of such small things appears impossible. Unfortunately, there is an even more fundamental limit on how certain you can ever be about the location and momentum of a particle ... ... regardless of the measurement process – the Heisenberg uncertainty principle. So even theoretically, talking about trajectories of such small things loses meaning. What does not lose meaning, however, is averaging many observations to yield a different kind of information ... ... a probabilistic perspective. Because what makes sense nonetheless is to think about how likely it would be to detect an electron in a certain region of the atom.
And depending on properties such as the electron energy, or the strength of the electric field ... ... these probability distributions come in many different shapes. If there are multiple electrons around an atom, the picture is even more complicated since the electrons’ motions are correlated. So, a true multi-electron probability distribution is not simply a composition of the single-electron shapes ... ... although it can be approximated by using them as a starting point. Either way, the result is always a probability distribution. The next peculiar thing about these small elementary particles is ... ... they behave not only particle-like, which means there is an integer number of them, and once measured you observe their impact at a certain distinct location ... ... but they also behave wave-like, meaning they go around corners and superimpose - diffract and interfere - as long as you do not observe them. In this series, I don’t want to focus on why nature behaves this way ... ... but on what it means for us, as we want to derive models upon this basis. After all, we want to study fluid dynamics. But first we have to get there. So, to make sense of these probabilities and wave-like properties, some people decided to develop a mathematical tool ... ... that reflects, at its core, this indeterminate nature they see in all their experiments. And this tool is called quantum mechanics. The core of the approach relies on two mathematical objects ... ... a wave function that contains information about the states of all considered particles ... ... and an evolution equation – the Schrödinger equation – that acts on the wave function and tells us how it changes with time. From the new wave function you can then derive the probabilities again. And this is the same kind of probability we talked about before. It tells us how likely it is to detect a particle in a certain region of space. We have a model for what we see in our measurements. The exact form of the evolution equation is irrelevant here; it is the underlying concept that is problematic. What we are really concerned about is how computationally expensive it would be to perform such calculations for a given volume of our fluid. And in that regard, we only need to know that it takes a certain amount of time to perform the evolution operation. So, let’s build some simulations. For our single particle here, the wave function assigns a complex number to each point in this two-dimensional space. The evolution equation now takes all these numbers and a problem-dependent term that specifies the rules of evolution – the Hamiltonian ... ... and advances these numbers continuously and deterministically in time. So, the same initial wave function will always give the same evolution. We can now represent the infinite number of these points and time steps by a finite number of values that a computer can handle. We will talk about such spatial and temporal discretizations later in more detail. For now, let’s just assume we restrict our view to a sub-region of space ... ... and subdivide this field to have a thousand sections along each side. This means, each of these cells stores a single complex number ... ... and Erwin here has to figure out how a million complex numbers change ... ... since each of these numbers represents a part of the particle’s wave function. It can be everywhere, and we need to account for that. A million … just for one particle to move through a tiny portion of space.
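To make this concrete, here is a minimal sketch of what Erwin is up against: one particle’s wave function on a 1000 x 1000 grid, advanced by a naive explicit finite-difference version of the Schrödinger equation. The grid size matches the thousand-sections-per-side setup above, but the units (hbar = mass = 1), the initial wave packet, and the tiny time step are my own illustrative assumptions, not the exact setup shown on screen:

```python
import numpy as np

# A minimal sketch: evolve a 2D single-particle wave function on an N x N grid
# with a naive explicit finite-difference step of the Schrödinger equation.
# Units hbar = mass = 1 are assumed; this scheme needs a tiny dt and is only
# meant to show the cost of touching a million complex numbers per step.

N = 1000                # cells per side -> one million complex numbers
dx = 1.0 / N
dt = 1e-9               # must be very small for this naive explicit scheme

x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x)

# Illustrative initial state: a Gaussian wave packet moving in x-direction.
psi = np.exp(-((X - 0.3)**2 + (Y - 0.5)**2) / 0.005) * np.exp(1j * 200.0 * X)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)     # normalize probability

V = np.zeros((N, N))    # the Hamiltonian would add a potential term here

def evolve(psi, steps):
    """Advance all cells: i dpsi/dt = -0.5 * laplacian(psi) + V * psi."""
    for _ in range(steps):
        lap = (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
               np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4.0 * psi) / dx**2
        psi = psi + dt * (-1j) * (-0.5 * lap + V * psi)
    return psi

psi = evolve(psi, 100)
prob = np.abs(psi)**2   # what we would compare against detection statistics
```

Note how every cell update needs its neighbors; that coupling is exactly what comes back to haunt us in a moment.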
You may think the problem is now simply that we need too many of these cells to fill up the fluid volume. But it gets even worse, by orders of magnitude. Look at this. If we look at only one slice of our 2D plane, we cannot explain ... ... why the wave suddenly behaves differently compared to a particle that lives in 1D. We don’t see the walls in this slice. This influence has to be provided by neighboring slices. Likewise, if we look at only one cell, we have absolutely no idea why it does what it does. Everything is coupled through the evolution of one underlying wave function ... ... and neighboring cells have to tell us what is going on over there. This dependence on one wave function is even more striking when we think about multiple particles. These two particles live in one dimension, they move towards each other, and in general, their motions depend on each other. And you can now quantify how likely it is that particle one is detected in this arbitrarily selected region ... ... WHILE particle two is detected in that region. This gives a joint probability. Since each particle’s individual probability distribution covers all of 1D space ... ... we need more degrees of freedom, more space to store the joint information. So, there is still only one wave function that gives rise to the probabilities ... ... but it lives in two dimensions! Both particles are then represented by the joint probability of detecting one particle at a certain location ... ... WHILE the other particle is detected somewhere else. You can still describe the probability of a particle alone ... ... by extending the range of detection along the other particle’s axis to ±infinity. Again, the wave function describes a joint probability and tells you something about a combined measurement outcome. And by considering every possible position of the other particle, you get the individual particle probabilities. In this sense, both particles’ probabilities eventually appear as so-called marginal probabilities ... ... by looking along different directions, each giving a unique perspective on the same high-dimensional wave function. So far, we have a purely mathematical framework ... ... that we should now equip with different properties to reflect known physical behavior. For instance, some particles repel each other ... ... and by using inter-particle potentials we can force the wave function to account for that. We will learn more about potentials later in this part. A particularly important property is that particles of the same kind are indistinguishable ... ... they can’t be physically labeled. So, you don’t detect electron one and electron two. You detect one of the electrons and the other one. By implementing symmetries, you see that both electrons can be detected in switched positions ... ... making this model useful for physics. So, after all, while the physical 1D space may be discretized using just 1000 cells ... ... the 2D wave function for both particles requires, in the worst case without symmetries, one million cells. If we have two particles in a 2D physical space ... ... the wave function lives in a 4D space. As much as this may be confusing at first, it is also a challenging task for Erwin here. He now has to work on a million times a million cells. For two particles! So far, it seems we are simply lost in computational complexity. Ok, there will be shortcuts and approximations. But then again, throw five more particles in ... ... each bringing its own set of additional dimensions along.
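To see where the blow-up comes from in code, here is a small sketch of two particles in 1D sharing one joint wave function: the N x N joint probability, the probability of a combined measurement in two regions, and the marginals you get by integrating out the other particle’s axis. The grid, the initial state, and the chosen regions are illustrative assumptions:

```python
import numpy as np

# Two particles in 1D share ONE joint wave function psi(x1, x2).
# The grid is N x N even though physical space has only N cells.

N = 1000
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Illustrative joint state: two packets approaching each other (an assumption).
psi = np.exp(-((X1 + 0.5)**2 + (X2 - 0.5)**2) / 0.02)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)

joint = np.abs(psi)**2                      # joint probability density p(x1, x2)

# Probability that particle 1 is in [a, b] WHILE particle 2 is in [c, d]:
a, b, c, d = -0.6, -0.4, 0.4, 0.6
mask = (X1 >= a) & (X1 <= b) & (X2 >= c) & (X2 <= d)
p_joint_region = np.sum(joint[mask]) * dx * dx

# Marginals: integrate the other particle's axis out to +-infinity.
p1 = np.sum(joint, axis=1) * dx             # probability density for x1 alone
p2 = np.sum(joint, axis=0) * dx             # probability density for x2 alone
```

The storage already tells the story: N cells per particle in physical space, but N to the power of the particle count for the joint wave function.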
Oh, and we do live in three dimensions ... ... and some particles have additional properties such as spin ... ... so the probability space is even larger! You see where I am going with this. It gets impossible really quickly, at least when using conventional computers. Ok, this is the result we expected, since we knew from the beginning ... ... that it will be impossible to simulate a fluid quantum mechanically. But I’m always curious about where everything comes from ... ... and really seeing why something is a bad idea is not a bad idea. So, we need to do better. We need to construct a surrogate model that represents the essential physical properties ... ... while being way faster to solve. And whenever you want to simplify, you can usually select among a variety of approaches. One popular way to reduce information builds on the fact that dynamical systems often evolve in specific patterns. To demonstrate this, we look at this one-dimensional particle here, represented by its wave function components. To keep it around the center of the scene, we add a so-called potential ... ... that pushes it back the further it goes off-center. It is nothing special; it works a bit like a marble in a bowl ... kind of. We discuss potentials in a minute when we focus on modeling inter-atomic behavior. Anyway, this is the kind of instruction we must give the evolution equation via the Hamiltonian ... ... which basically keeps track of the total energy of the system, and potential energy is certainly part of it. So, as we start the simulation ... Erwin, please... We see the wave function, and so the probability, wiggle around, and a certain repetitiveness appears. And here is the key: it is possible to construct a few specific wave function shapes, or standing waves ... ... that, combined in the right way, approximate the true evolution astonishingly accurately. So instead of evolving 1000 individual cells, we only evolve a few individual scaling factors ... ... of these superimposed shapes, also called mode shapes. And, in our case, due to the linear form of the evolution equation, ... ... these scaling factors are even simpler to compute. So, once you have these shapes, you save a lot of computational time here. The name “standing wave” simply reflects that the probability, which determines the measurement outcome, does not change ... ... although the underlying wave function components do oscillate with a certain frequency. This is possible if each individual complex number happens to evolve along a circle in the complex plane. In this way, the probability, as the absolute square of the wave function value, stays constant. And the oscillation frequency of this rotation will be super important later on. What’s good is, the choice of shapes and their number ... ... gives you the freedom to adjust the accuracy of the reduced model to your needs. And this approach is not limited to quantum mechanics. It is a general mathematical tool, sometimes called model order reduction ... ... but it is known by many names, and it is a huge topic on its own ... ... and we will explore and apply it many times in the future. But right now, it is not what we need. It does provide the features for a drastic reduction and is often applied with great success, ... that’s why I had to mention it here, but it does not change what you are modelling. Here it means you are still concerned with wave functions and probabilities ... ... and representing a gazillion fluid cells by a trillion modes does not really help for our goal.
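For the curious, here is roughly what that reduction looks like in code: a minimal sketch, assuming hbar = mass = 1 and a quadratic “marble in a bowl” potential, that computes the standing-wave shapes of a 1D Hamiltonian once and then evolves only a handful of scaling factors, each rotating in the complex plane with its mode’s frequency:

```python
import numpy as np

# Model order reduction sketch for the 1D particle in a quadratic potential:
# precompute a few standing waves (eigenmodes of the Hamiltonian), then evolve
# only their scaling factors instead of all 1000 cells.

N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Discrete Hamiltonian H = -0.5 d^2/dx^2 + V(x), with hbar = mass = 1 (assumed).
V = 0.5 * x**2                                    # the "marble in a bowl"
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, modes = np.linalg.eigh(H)               # standing waves + energies

K = 10                                            # keep only a few mode shapes
phi = modes[:, :K]                                # N x K matrix of shapes
E = energies[:K]                                  # each sets a rotation frequency

# Project an initial wave packet onto the modes: K numbers instead of 1000.
psi0 = np.exp(-(x - 2.0)**2).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)
c0 = phi.T @ psi0 * dx                            # initial scaling factors

def psi_at(t):
    """Each factor just rotates on a circle in the complex plane; |c| is constant."""
    c = c0 * np.exp(-1j * E * t)
    return phi @ c                                # reassemble the wave function
```

The rotation `exp(-1j * E * t)` is exactly why a single mode is a “standing wave”: its absolute square, and so the probability, never changes.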
In fact, computing these shapes for higher dimensional systems in the first place is complicated on its own. Right now, we need a different kind of reduction. We need a change of the underlying paradigm. We NEED molecular dynamics. You see, so far, we evolved all values of the wave function in a possibly large space ... ... simply because the nature of joint probabilities left us no other choice. But what if we could ignore the wave-like properties and probabilities? What if particles could be distinctly located and we could nail down single trajectories? Instead of iterating over all of space, we would simply evolve some unique location and velocity vectors. Ok, for these two particles here we see that the combined vector of positions and velocities actually lives in a larger space ... ... called phase space, compared to the space of the wave function, which is written only in positions or momenta (and spin etc. ;-) ). But it is only one point in this phase space that we need to evolve ... ... no matter how high dimensional the phase space will be. And that is the benefit of having localized particles. You just don’t need to keep track of other possible scenarios (meaning possible states). But we learned that electrons and other sub-atomic particles do evolve in vastly different scenarios ... ... and prior to any measurement we can only know so much about their location. So, at what mass or length scale is it okay to assume almost localized behavior, even though it will never be truly correct? Well, it turns out, around the size of atoms. And this here is where molecular dynamics comes into play. It is a computational method that models all atoms as particles in a classical mechanics framework. So, all sub-atomic particles of an atom are composed into a single mass that moves on a unique trajectory. Likewise, all interactions in terms of attraction, repulsion, and bonding between these atoms ... ... are represented by forces that depend on the distance. And these forces are implied by so-called inter-atomic potentials ... ... which describe how much work is needed to move a particle from one place to another. And as work is just the integral of a force over a path, the (negative) derivative of the potential gives back the force. This here is the Lennard-Jones potential, and it is a popular choice ... ... for inter-atomic modeling of weak van der Waals attraction combined with strong repulsion. That’s just typical atomic behavior in action. And as we had an evolution equation for the wave function in quantum mechanics ... ... we have an evolution equation for positions and velocities in classical mechanics: Newton’s second law of motion. In essence it states that the net force acting on a mass determines its rate of change of momentum. By numerical integration, you get the updated velocities and positions – you simulate. Now, looking at this setup, it appears quite simple. Just some atom-particles moving in potentials instead of wave functions ... And it is truly a big step forward! But how do we justify it? How is the inter-atomic potential computed? Why can we see particles as being localized? Let’s break it down into simpler parts to really get the connection ... ... between quantum mechanics and molecular dynamics. To understand the atom-to-atom interaction, we have to look at the interactions of the subatomic constituents. And here we start with a typical nucleus-electron interaction. The attractive force between a nucleus and an electron can be derived from the Coulomb potential.
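Here is a minimal molecular dynamics sketch of that paradigm: point particles with unique trajectories, Lennard-Jones forces, and velocity Verlet as the numerical integration of Newton’s second law. The particle count, units, and parameter values are my own illustrative assumptions:

```python
import numpy as np

# Minimal molecular dynamics sketch (illustrative units): particles on unique
# trajectories, Lennard-Jones pair forces, velocity Verlet time integration.

side = 8
n, dim = side * side, 2
g = np.linspace(1.0, 9.0, side)
pos = np.array([[xi, yi] for xi in g for yi in g])   # grid start avoids overlaps
rng = np.random.default_rng(0)
vel = rng.normal(0.0, 0.3, (n, dim))
mass = 1.0
eps, sigma = 1.0, 1.0        # LJ parameters; in practice these are empirical
dt = 1e-3

def forces(pos):
    """F = -dV/dr for V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6), summed pairwise."""
    f = np.zeros_like(pos)
    for i in range(n):
        d = pos[i] - pos                  # vectors from all particles to i
        r2 = np.sum(d * d, axis=1)
        r2[i] = np.inf                    # no self-interaction
        s6 = (sigma**2 / r2)**3
        mag = 24.0 * eps * (2.0 * s6**2 - s6) / r2   # (scalar force) / r
        f[i] = np.sum(mag[:, None] * d, axis=0)
    return f

f = forces(pos)
for _ in range(1000):                     # velocity Verlet: Newton's second law
    vel += 0.5 * dt * f / mass
    pos += dt * vel
    f = forces(pos)
    vel += 0.5 * dt * f / mass
```

Note the contrast to before: we loop over n particles, not over every cell of a high-dimensional probability space.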
As we learned before, this force is equal to the rate of change of momentum, which of course applies to both particles. And for constant mass, the rate of change of momentum is just mass times acceleration. So, given the same force, the heavier nucleus builds up motion much more slowly ... ... and can as well be seen as “not moving” from the super-agile electron’s perspective. In contrast, from the viewpoint of the nucleus, an electron reacts almost instantaneously. This works equally well from a quantum mechanical perspective. Here, the probability of detecting the electron changes way faster compared to the nucleus probability. It all looks as if nuclei and electrons live in two different worlds, at least motion-wise. In this sense, the computational treatment of the motion of multiple electrons and nuclei can be decoupled ... ... as if they live in two different simulations, and they only interact in a mathematically simplified way. The nuclei do wander around – as particles in the end – and provide the momentarily fixed positions for the electron simulation. The electrons in return take these fixed locations and provide potential energy values ... ... from which the forces between the nuclei can be derived. Now, to compute these potential energy values, a wave function purely for the electrons is constructed ... ... which considers the nuclei with their Coulomb potential at fixed positions. And here is the trick... We are not interested in any possible electron wave function ... ... but the one that the electrons reach over time due to energy loss by radiation, the one with the lowest energy. Why? Because that is the one that the slower nuclei effectively see most of the time! Remember, from their perspective, the electrons react so quickly ... ... the final electron wave function seems to be built up almost instantaneously. Now, what is the final wave function? It is the first standing wave, and the energy is related to its frequency of oscillation. The exact shape of this standing wave obviously depends on the problem. For a single hydrogen atom, it is one of the electron clouds I showed at the beginning. And for the abstract one-particle example, it is one of the shapes we used for reduction. But the actual shape is not so important. It is the associated energy value that is interesting. And the first standing wave with its frequency truly marks the lowest possible energy in each particular setting. By repeatedly computing these lowest energy values for different nuclei positions ... ... we get a so-called inter-atomic potential energy surface. And its derivative gives the net force between the nuclei. As you can see, the full development of this potential is instructive but also exhausting ... ... so you often bypass the whole process by using empirical approximations like the Lennard-Jones potential ... ... or, in this case of inter-atomic bonding, the Morse potential. And the propagation of the nuclei within this inter-atomic potential happens either as a wave function itself ... ... or as particles. The particle approximation for the nuclei is reasonable here ... ... since the larger spread in momentum that follows from the Heisenberg uncertainty relation for more and more localized particles ... ... is mostly absorbed by the large nuclear mass ... ... keeping the velocity spread low and so the future position spread small. Ok, there is so much more to all of this ... ... and I have to admit, answering any one question immediately leaves us with five new questions.
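As an example of such an empirical shortcut, here is a small sketch of the Morse potential along a bond axis, with the nucleus-nucleus force recovered as the negative derivative of the potential energy. The parameter values are made up for illustration, not fitted to any real molecule:

```python
import numpy as np

# Empirical inter-atomic potential sketch: the Morse potential for a bond,
# and the force as the negative derivative of the potential energy.
# D_e, a, r_e are illustrative parameters, not fitted values.

D_e = 4.5     # well depth (sets the bond dissociation energy scale)
a = 2.0       # controls the width of the well
r_e = 1.0     # equilibrium bond length

def morse_energy(r):
    """V(r) = D_e * (1 - exp(-a*(r - r_e)))^2."""
    return D_e * (1.0 - np.exp(-a * (r - r_e)))**2

def morse_force(r):
    """F(r) = -dV/dr: attractive beyond r_e, strongly repulsive below it."""
    e = np.exp(-a * (r - r_e))
    return -2.0 * D_e * a * (1.0 - e) * e

r = np.linspace(0.7, 3.0, 200)
V = morse_energy(r)     # a 1D slice of the potential energy surface
F = morse_force(r)      # what the nuclei actually feel at separation r
```

In the full treatment, each value of V would come from solving the electron problem at fixed nuclei positions; the empirical formula just skips straight to a curve of the right shape.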
So, we will dig into quantum mechanics in much greater detail in another series. For our goal here of simulating fluids, we are good to go by knowing that … ... with these key assumptions we can simulate atoms as particles moving on trajectories within potentials. So, atoms can form molecules, which can move or translate, rotate, collide, vibrate and so on. The vibration within a molecule usually happens on much smaller length scales (according to molecular dynamics!). I really exaggerated it here to make it visible. And the accuracy of the molecular dynamics approximation depends highly on the conditions of the system. So, while we are mainly in a classical mechanics setting ... ... larger or smaller parts of the simulation can still be represented quantum mechanically. It depends on what accuracy you want to achieve. Here, we go fully classical from now on. So, finally, to set the simulation up, we need to specify parameters for the masses and potentials. And what is usually done, and what makes this an empirical approach, is to set these values in such a way ... ... that the statistical behavior that can be drawn from many molecular interactions ... ... matches up with what measurements suggest. So, it fits on average, at least in the range of conditions you developed it for. Ok, you are not really comparing trajectories, but derived quantities that are easier to measure. Let’s just assume that we somehow found good parameter values. What we can do now is to replace the inter-molecular potential-based interactions by instantaneous collisions; ... ... this will cut the computational cost even further. Instantaneous collisions are just faster to compute - you basically reflect velocities, as sketched below - ... ... and it is ok here, since it is empirical anyway. We can modify parameters for both variants to yield similar results ... ... at least when viewed from a more distant perspective. You see, we are slowly embracing a more classical statistical perspective here. We get more precise about what “viewed from a distance” means when we talk about the next layer of abstraction. See this example here as a glimpse of how we might reduce information in the upcoming layers ... ... by keeping an eye on the global dynamics. And just to be clear ... ... the reduction due to molecular dynamics is not only established by combining sub-atomic particles and simplifying potentials. Remember, it is much more due to the very nature of ... ... how the simulation process can be decomposed within different mechanical realms. In quantum mechanics, the high dimensional position space of the joint probability ... ... is an inherent part of the simulation, increasing the computational costs significantly at every iteration. In classical mechanics, you can also talk in terms of probabilities. But you may first compute a bunch of trajectories in phase space and average afterwards ... ... which totals up to less computational cost. At least you have the option to trade off statistical significance for computing fewer trajectories. So, entering a classical mechanics setting is a huge leap forward concerning the fluid volume that we can simulate. Given my limited hardware and my focus here on writing rather educational code: ... compared to quantum mechanics, where we may simulate a few sub-atomic particles (without modes) ... ... in molecular dynamics, we can at least come up with some 100,000 atoms. So, we increase the fluid volume we can simulate by … Well, let’s say we start having a volume at all!
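To illustrate why “reflecting velocities” is so cheap, here is a minimal sketch of one instantaneous collision between two equal-mass particles: we simply exchange the velocity components along the line connecting their centers, which conserves both momentum and kinetic energy. The equal-mass assumption is mine, to keep the formula short:

```python
import numpy as np

# Instantaneous elastic collision sketch for two EQUAL masses: instead of
# integrating through a potential, exchange the velocity components along the
# contact normal. Momentum and kinetic energy are conserved by this swap.

def collide(pos1, vel1, pos2, vel2):
    n = pos1 - pos2
    n = n / np.linalg.norm(n)          # unit vector along the contact normal
    v_rel = np.dot(vel1 - vel2, n)     # relative speed along the normal
    if v_rel >= 0.0:
        return vel1, vel2              # already separating: nothing to do
    vel1 = vel1 - v_rel * n            # reflect the normal components ...
    vel2 = vel2 + v_rel * n            # ... by exchanging them
    return vel1, vel2

# Usage: a head-on encounter in 2D.
v1, v2 = collide(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                 np.array([1.0, 0.0]), np.array([-1.0, 0.0]))
```

One dot product and two vector updates, instead of many small force-integration steps through a potential well.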
Ok, we are slowly building what could be called a fluid. And looking at what we have done so far, the next layer should come as no surprise. It simply carries the assumptions of molecular dynamics one step further. The kinetic theory of gases. So, as we combined subatomic particles to form atom particles ... ... the next logical step is to combine atom particles to form molecule particles. And the name says it all: it is a theory of GASES. So, it should work best when particles are spread out and ... ... the free flight phases in between interactions take way longer than the interactions themselves. Therefore, these mostly repelling interactions can as well be seen as instantaneous collisions ... ... exactly as in our simplified molecular dynamics simulation. Intuitively it seems reasonable, but let’s have a look at how it works out step by step. The approach represents a molecule by a single point mass with an effective collision radius. Ok, the mass should be about the sum of its components, but what about the radius? Let’s figure that out by an experiment ... ... and we use the standard molecular dynamics simulation with the empirical potentials to emphasize the problem. Here we shoot molecules with the same total energy towards each other ... ... with a fixed offset between their pre-collision trajectories. The total energy here comprises the kinetic energy from the atoms’ motion as well as the potential energy from the inter-atomic bonding. Not surprisingly, we get a typical scattering pattern. Now we repeat this experiment with the molecule particles. Since these are only point masses with an effective collision radius, the outcome is always the same. This model simply does not have the capacity to represent more complex behavior. But this is good; this is our chance to trade off accuracy for computational speed. Our task is now to wisely choose a radius that fits the situation best. So, like this perhaps. But we’ve got a problem ... As we change the distance between the pre-collision trajectories, the chosen radius is not so optimal anymore. So we would be better off choosing a radius that fits all experiments. But that’s a bit tricky. If molecules pass each other at a distance where the attractive part of the potential is dominant ... ... we mostly see a kind of slingshot maneuver, which cannot be represented by our pure collision model. Well, we could perform multiple experiments with different initial distances and energies and so on ... ... and see which radius works best for most of them. But how do we weight the influence of each of these experiments on the final choice of the radius? Well, simply by not simulating separate experiments but by simulating the fluid directly as a whole. The pre-collision conditions of all these little individual interactions ... ... will obviously have the randomness they would experience in a fluid ... ... simply by being in a fluid! And remember, the fluid we build should work in an averaged sense anyway ... ... so we have to compare its global statistical behavior with the global statistical behavior of the fluid we try to replace. Here, we naively focus on just one statistic – the particle mixing in different layers – or impurity. But there are a ton of possible global statistics to look at. So, what it all means is: we shouldn’t think particle-interaction-wise, but overall-fluid-wise. The particles are still there in the simulation, but we shift our focus to the bigger picture.
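As a glimpse of what such a global statistic could look like in code, here is one possible (hypothetical) definition of the layer-mixing “impurity” mentioned above: tag the particles by their initial layer, then measure the minority-tag fraction per layer at a later time. The definition, layer count, and bounds are my own illustrative choices:

```python
import numpy as np

# One possible global statistic to compare the full molecular model against
# the hard-sphere model: how strongly particles tagged by their initial layer
# have mixed across horizontal layers ("impurity").

def impurity(y, tag, n_layers=10, y_min=0.0, y_max=1.0):
    """Mean minority-tag fraction per layer: 0 = unmixed, ~0.5 = fully mixed."""
    edges = np.linspace(y_min, y_max, n_layers + 1)
    fractions = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_layer = (y >= lo) & (y < hi)
        if np.any(in_layer):
            frac = np.mean(tag[in_layer])        # fraction of tag-1 particles
            fractions.append(min(frac, 1.0 - frac))
    return float(np.mean(fractions))

# Usage: tag lower half 0 and upper half 1 at t = 0, then track impurity(t).
rng = np.random.default_rng(1)
y = rng.uniform(0.0, 1.0, 10_000)                # particle heights at some time
tag = (y > 0.5).astype(float)                    # perfectly separated -> 0.0
print(impurity(y, tag))
```

Tune the effective collision radius until the impurity curve of the hard-sphere fluid tracks that of the full molecular fluid, and you have fitted the model “overall-fluid-wise”.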
Under this relaxed perspective it is fine if the fluid works globally, despite having local differences. So, these are the key assumptions that allow us to model fluid molecules as a collection of colliding particles ... ... which concludes the microscopic perspective. In the next part we use this model to derive a very powerful level of abstraction ... ... that enables us to simulate the flow on arbitrarily large scales – the macroscopic perspective. And we will learn how concepts such as pressure, viscosity, temperature, or flow velocity ... ... appear in a super intuitive way by studying the motion and interaction of these molecules. Let’s recap and highlight what we accomplished so far. We started out by realizing that we have practical limits, and even more so theoretical limits ... ... on what we can ever know about the elementary particles that make up fluids. This circumstance made probabilities appear in a very natural way. And the probabilistic perspective proved to be a far more general guiding principle ... ... for deriving more and more abstract models. The quintessential steps we took led us from a full quantum mechanical treatment that considered every possible state ... ... more and more to a relaxed perspective on what information we actually need to keep track of. The real game changer here was our leap from quantum mechanics to classical mechanics ... ... enabling both the treatment of fluids as a collection of particles with individual trajectories ... ... and, in that way, a scalable approach for modelling probabilistically on demand. And guided by our ultimate goal of reducing information, we finally arrived at the kinetic theory of gases ... ... which will be our entry point for understanding fluid flow as the collective behavior of many individual interactions. But most of these steps really just embody our attempt to look for opportunities to average out information. The single message I want you to take home is: ... every seemingly complex problem can be broken down into simpler parts ... ... and we can solve them step by step; ... ... looking for the patterns and trying to find the underlying guiding principles ... ... that really connect all the pieces. Once you have a clear focus on your main challenge – here it was reducing information – ... ... the next steps will become more and more obvious. In that way, methods that may be intimidating at first ... ... such as quantum mechanics and fluid simulation, will start to appear as natural consequences ... ... making them far more believable. Alright, see you in the next part.