Transcript for:
Webinar on Virtual Twins in Healthcare

Greetings everyone and welcome to this new GEN webinar, sponsored by Dassault Systèmes, entitled Preventing Dementia Through a Virtual Twin Brain. I'm Jeff Pogliscus, Technical Editor for GEN, and I'll be the host for today's event. The virtual world is often associated with video games and science fiction movies, but virtual products can be rich representations of their physical counterparts and, in some ways, indistinguishable from them.

As such, scientists and clinical investigators have begun to tap into the powerful world of virtual human twins, with the hope of transforming the drug discovery process and patient experience. Several years ago, the Living Heart Project was begun with the intent of developing highly accurate personalized digital human heart models, establishing a unified foundation for cardiovascular in silico medicine, and serving as a common technology base for education and training, medical device design, testing, clinical diagnosis, and regulatory science. The project demonstrated that it is possible to reconstruct a human heart from the genetic makeup of the tissue, to the ions flowing through the muscle fibers, to the details of the resulting blood flow through the body. The information this project has gathered thus far is immense, and researchers wanted to push the boundaries even further to a much more enigmatic organ, the human brain.

Using state-of-the-art techniques such as high-precision MRI, tomodensitometry, and electroencephalogram scans, it is possible to reconstitute not only the geometric shape of the brain, but also the connectivity between the regions. Now let's hear from our presenter for this webinar, who will discuss the current state of the Living Brain project and how it's being applied to such things as traumatic brain injury and neurodegenerative disease progression. Dr. Steven Levine is the Senior Director of Virtual Human Modeling at Dassault Systèmes and is the Founder and Executive Director of the Living Heart Project.

Dr. Levine has more than 30 years of experience in the development of computational tools that translate cutting-edge science into product innovations. Today, Dr. Levine will tell us how he and his team took the immense knowledge they learned from the Living Heart Project and are applying it to the development of virtual living brain models. Thank you. I'm Steve Levine and I lead virtual human modeling at Dassault Systèmes. In this webinar, I'm going to share some of my experiences with this new approach to healthcare and, through a number of examples, show why we believe the concept of the virtual twin can transform the development, approval, and even the treatment protocols for many modern challenges in healthcare, including the treatment of neurodegenerative diseases.

Here's the agenda for today's talk. First, I'll provide a brief introduction to my company, our history, motivation, and our vision for the transformative potential of digital technologies in developing personalized or precision treatments. Next, I'll describe our first deep investigation into what we call the virtual twin experience through the creation of the Living Heart Project, where we took on the challenge of building a fully functional virtual heart. Using what we learned from that, I'll share the development and some applications of a virtual brain, and then we'll open the floor for Q&A. At Dassault Systèmes, we believe the virtual world can extend our understanding of the real world and allow us to fully utilize our imagination to develop new and more sustainable products and services that improve the real world.

This is not only true for the development of new drugs and medical devices, but is becoming increasingly important in dealing with the historic challenges we're now facing in healthcare, and is, we believe, essential to personalized, patient-centric care. Who are we and what is our legacy? We're a 40-year-old software company with roots in the aerospace industry, where a few pioneering engineers back in the 80s developed, for the first time, 3D software capable of replicating an entire part for a commercial jet. A decade later, they partnered with Boeing to design the entire 777 commercial jet digitally, ushering in what we now know as computer-aided design.

Over the next decade, companies used our tools to manage the entire lifecycle of complex products, from concept to manufacture and production, in regulated industries as complex and competitive as automobiles. In 2012, we launched a new cloud-based platform that allows companies of all sizes to collaborate and digitally experience lifelike 3D replicas of their products, or recreate environments as complex as an entire city. This year, we've taken on our greatest challenge: to create realistic virtual twins of the human body, capable of acting as surrogates in the development of new therapies and treatment protocols.

Here's just an example of how a company like Toyota will use the 3DEXPERIENCE platform. More than 40,000 people across the company will use the digital world to design and test new cars and new ideas, exploring innovations in sustainable power, construction materials, and connected and self-driving cars. By first creating their ideas in the virtual world, they allow others to experience and understand those ideas, building on them to meet the challenges of the future. What has this technology enabled? Well, today more than 99% of all vehicles are tested and optimized virtually, and only then are they built and physically tested.

As a result, there are few surprises, and they never fail a physical test. Here you see a virtual car crash with all the fidelity of a real car, including the crash dummy inside. Each design can be created and tested in a single day, with incredible detail, using high-performance visualization.

You can see all the detail, for example, down to the shockwave passing through the windshield and crumpling the roof. Further, because it's a virtual car, we can peel away the skin and look inside and see what's happening to the passenger. You can look from all angles, and you can see in great detail. If you see something that's unacceptable, you can change it on the computer, and the next day completely retest the device, the car. And you can share that, and anyone can see that information.

In contrast, the complexity of understanding the human body today results in a far less reliable process for bringing new therapies to market. Less than half of new devices successfully pass their clinical trials, and less than 10% of new drugs do. So how can we help reduce this risk and improve the chance of success?

Well, the key challenge is sufficiently understanding the human body to build virtual systems capable of helping to improve the odds of success. This is a pretty big challenge. If we hope to develop realistic models of humans, it will most certainly be enabled by advances in medical imaging and reconstruction.

Since we didn't create the human body, to accurately represent its complexities in 3D we must first reverse engineer it. The best way to do that is reconstruction of 2D medical scans into 3D representations as a start. Here you see some examples from orthopedics, neurology, and cardiovascular medicine. We now have the technology to segment these images into fine detail, assigning realistic behavior.
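To make the segmentation step concrete, here is a deliberately minimal Python sketch that assigns tissue labels to voxels by intensity thresholds. The thresholds, label names, and the toy volume are invented for illustration; real pipelines use far more sophisticated, often model-based, segmentation.

```python
# Minimal sketch of intensity-based segmentation of a medical scan volume.
# The intensity thresholds and label names here are illustrative placeholders,
# not values used in any clinical segmentation pipeline.

def segment_voxel(intensity):
    """Assign an illustrative tissue label from a scalar voxel intensity."""
    if intensity < 100:
        return "background"
    elif intensity < 300:
        return "soft_tissue"
    else:
        return "bone"

def segment_volume(volume):
    """Label every voxel of a 3D volume given as nested lists of intensities."""
    return [[[segment_voxel(v) for v in row] for row in plane] for plane in volume]

# A toy 1x2x2 "scan" standing in for a full image stack.
scan = [[[50, 150], [250, 600]]]
labels = segment_volume(scan)
print(labels[0][0])  # ['background', 'soft_tissue']
print(labels[0][1])  # ['soft_tissue', 'bone']
```

In practice each labeled region would then be meshed and assigned physical properties, which is the "assigning realistic behavior" step mentioned above.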

For example, the video on the right shows how we can take a virtual human leg and look inside to understand in great detail exactly what's happening during activity. Of course, a medically relevant understanding of the physiological behavior and response of the human body really is an immense challenge, with knowledge spread through labs and clinics all around the world. We decided to see if we could address this challenge. We knew the full human body was far too complex, so we decided to take on what is still the number one cause of death globally, heart disease.

We knew that billions have been spent to study the cardiovascular system, and trillions more spent on treatment. So I wondered: if all the people who knew part of the story were to share their knowledge, using a virtual model to capture each of their individual understandings, did we collectively know enough to build a fully functioning heart? So I created the Living Heart Project with the idea that if we could unite experts from research, industry, clinical practice, and regulatory to develop a model, could we satisfy all the demands of each of these stakeholders? Okay, well, that's challenging enough, but where do we start? To be complete, we'd need to represent the heart's function all the way from the molecular level up to entire populations.

So we decided to be patient-centric, or more specifically organ-centric, and build the heart from detailed tissue models, creating a phenomenological model that could be compared directly to clinical evidence. It could also serve as a solid foundation to push down the length scale and incorporate cellular biophysics and gene- and molecular-level behavior. Likewise, we can aggregate these patient-specific models into population models using artificial intelligence and other techniques.

And as you'll see, we've actually begun to expand in both of these directions. In brief, here are the major components that we needed to assemble a fully functioning heart model using the identical mechanisms found in a real heart. First, the geometry, as I mentioned, needs to be created, or recreated, from images. Both the active and passive mechanical properties of the tissue throughout the heart have to be represented.

The details of the muscle fibers have to be included, as they strongly control the electrical and structural response. The electrical system must be precisely specified and then coupled with a physical model to activate the pumping motion. Valves and fluid pressure have to be included, and all of it has to be connected to a system model to represent circulation in the entire body. For each of these areas, we sought out world experts and combined their knowledge and understanding, and the result was, for the first time, a complete beating heart.
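To give a flavor of the excitation-contraction coupling just described, here is a deliberately toy Python sketch: a simplified FitzHugh-Nagumo excitation model stands in for the electrical system, and its positive excursions drive a normalized active-tension signal. The parameter values are textbook illustrative choices, not those of the Living Heart model.

```python
# Toy sketch of electromechanical coupling: a FitzHugh-Nagumo "action
# potential" drives an active-tension variable. All parameters are standard
# illustrative values, not calibrated cardiac tissue properties.

def simulate(steps=5000, dt=0.05, stim=0.5):
    v, w = -1.0, 1.0          # excitation variable, recovery variable
    tension = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + stim       # FitzHugh-Nagumo dynamics
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        tension.append(max(v, 0.0))        # active tension only when excited
    return tension

t = simulate()
print(f"peak normalized tension: {max(t):.2f}")
```

A real heart model couples a far richer ionic model to 3D finite-element tissue mechanics, but the pattern is the same: the electrical state variable gates the mechanical activation.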

Here I'm showing an animation of our first living heart model with a little bit of artistic rendering. And adding to that visual realism, you actually can see the details of the muscle tissue that's being driven by the electrical system being conducted through the Purkinje network. This is really a fascinating model, and as you'll see, a major breakthrough in the science.

Through the project, we now have a complete, functioning, fully 3D model of the human heart, allowing exploration into structural problems such as valve disease or electrical problems such as arrhythmias. In all cases, the complete function of the heart can be considered, so there should be fewer surprises, and each researcher can leverage the knowledge gained by others. With our initial focus on medical device design, the living heart now offers the ability to truly design a device or patient-specific treatment in the virtual world before ever doing animal testing or relying on costly clinical trials. This should open up innovation to a new level and hopefully result in faster, more effective devices. For the industry, we believe this is the breakthrough we've been waiting for, and we hope we can use it to leverage a completely new way of developing devices. I don't have time to go into a lot of detail on the applications, so I just chose one example that I think gives an idea of how the virtual heart can be used.

In this case study, we're looking at the clinical placement of a TAVR device, or transcatheter aortic valve replacement. These are replacement valves for the heart, used when someone's valve is no longer effective, but the replacement is actually delivered minimally invasively. So the surgeon no longer needs to crack the chest, look inside, and open up the heart, which is extremely traumatic and something many heart patients are too weak to endure. As a result, they're able to reach many more patients; however, they are challenged by the fact that they can't necessarily see what they're going to do beforehand. And as you can see, this was the work of many collaborators working together.

Using the methodology I described before to create the living heart, collaborators can reconstruct patient-specific models of the aortic arch, the aortic valve, and the left heart chambers. With that model, they can actually perform virtual surgeries, implanting the device beforehand and then testing its effectiveness using all the rigor and detail of the human system. By performing these simulations, researchers can then look at the phenomena and the effectiveness of the device for things such as paravalvular leakage, contact pressure (to make sure the device stays in place), and minimizing potential disruption to the electrical system.

Here on the left I'm showing a video from one of our collaborators, a company called FEops, who are part of our 3DEXPERIENCE Labs, an incubator that helps startups and innovative companies bring these new types of technologies to market. With these virtual surgeries, you can then compare the virtual analysis to the real analysis. You can make sure that your placement is consistent.

You can validate the model for structural pressure, and you can actually understand what's happening inside, looking at the flow profiles in detail, including, as I mentioned, paravalvular leakage. Further, besides selecting the right device, you can actually provide guidance for placement procedures. So in this case, the researchers were looking at the effect of the same device being placed at three different depths inside the aortic valve.

As you can see from the analysis, if the valve was placed closer to the aorta, you could actually lose contact, and the valve could slip, causing a significant reduction in its effectiveness. So using these kinds of analyses, surgeons can be given guidance on how to ideally place the device and, of course, better insight into how to design devices and how to select one for a given patient. Hopefully that gives you some idea of how the model can possibly be used. For my next example, I'd like to show how we've been able to take our organ-level representation and, working with experts, push down the length scale to model drug interactions at the molecular level, and for the first time produce multi-scale models that predict, in full three dimensions, what will happen at the full-body level.

In this case study, the collaborators are addressing the very important topic of cardiotoxicity, in particular drug-induced arrhythmias, which is a part of every new compound's clinical evaluation. In this case, they're able to digitally link molecular-level phenomena, for example, the blocking potential of a drug molecule in an ion conduction channel that's responsible for delivering the energy used in the muscle contractions inside the heart, the heartbeat. Take, for example, the beta blocker shown here.

We can model all 12 conduction channels in a cell, map that onto a full 3D human heart, and predict the effects on conduction through the living heart, where it can then produce clinical measures such as ECG or ejection fraction. Cardiotoxicity is a big problem, already recognized by the FDA as a critical blocking factor for the approval of new medications, so much so that they've even developed their own internal program to help them understand and model the phenomenon. It's our hope to continually refine these models and, based on the results we get from clinical trials, to allow these models to help accelerate approvals, but more importantly to provide far better understanding of the true risk to a given patient, which we hope will open up many safe populations for drugs that even today show efficacy but are considered too dangerous by today's generic, non-patient-specific standards. So what can we do? Using the methodology I described, the authors have been able to build a full 3D model of the heart, with conductivity controlled directly down at the ion channel level, then introduce various drug molecules and observe what happens to the heartbeat.
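To illustrate the molecular-to-organ link described here, a common way to represent a drug's channel block is a Hill equation, with the blocked fraction used to scale that channel's conductance in the tissue model. The channel names and IC50 values below are hypothetical placeholders, not measured data.

```python
# Sketch of conductance scaling under drug block. A Hill equation gives the
# fractional block of each channel at a given drug concentration, and the
# unblocked fraction scales that channel's baseline conductance.
# All numeric values here are invented placeholders, not real drug data.

def fraction_blocked(concentration, ic50, hill=1.0):
    """Hill-equation fractional block at a given drug concentration."""
    if concentration <= 0:
        return 0.0
    return 1.0 / (1.0 + (ic50 / concentration) ** hill)

def scaled_conductances(baseline, concentration, ic50s):
    """Scale each channel conductance by its unblocked fraction."""
    return {ch: g * (1.0 - fraction_blocked(concentration, ic50s[ch]))
            for ch, g in baseline.items()}

baseline = {"IKr": 0.15, "INa": 12.0, "ICaL": 0.3}   # hypothetical conductances
ic50s = {"IKr": 1.0, "INa": 50.0, "ICaL": 30.0}      # hypothetical IC50s (uM)
print(scaled_conductances(baseline, 1.0, ic50s))
```

In the multi-scale workflow described above, these scaled conductances would feed the cell model, which in turn drives conduction through the 3D heart and ultimately the simulated ECG.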

From these models, an ECG is then created. What they observed was amazing. It was a spontaneous disruption of the normal sinus rhythm in cases that correlate very well with observation.

Unfortunately, I don't have the time to show the data that they were able to produce, but I recommend you refer to the journal article for more details. Now with these models in hand, the authors, in a follow-up study, could evaluate the safety of drug dosages. Again, a library of known compounds could be tested and then evaluated for their proarrhythmic potential at various dosages. The authors were able to classify the drugs that were safe under certain limitations, observing that QT interval delays did not always disrupt the periodic rhythm, and then even use artificial intelligence to determine threshold concentrations and, of course, high dosages that should never be allowed.
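As a sketch of the dose-threshold idea, the snippet below sweeps an invented dose-response curve for QT prolongation and bisects for the concentration at which a safety limit is crossed. Both the curve and the 30 ms limit are illustrative assumptions, not data from the study.

```python
# Illustrative dose-threshold search. The dose-response curve and the 30 ms
# safety limit are invented for illustration; a real study would derive the
# response from the simulated ECG at each concentration.

def qt_prolongation_ms(conc):
    """Hypothetical saturating dose-response curve (not real drug data)."""
    return 60.0 * conc / (conc + 10.0)

def threshold_concentration(limit_ms, lo=0.0, hi=1000.0, iters=60):
    """Bisect for the concentration at which prolongation reaches the limit."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if qt_prolongation_ms(mid) < limit_ms:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"threshold concentration: {threshold_concentration(30.0):.2f} (arbitrary units)")
```

The same pattern generalizes to a library of compounds: run the model across concentrations for each compound, then classify dosages against the chosen proarrhythmic limit.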

Once again, the details of this are available in the publication mentioned below. I certainly don't have time to share the hundreds of different applications where the Living Heart or its methodology has already been used. In each case, the collaborators are pushing the limits of our technology, validating where it works, and guiding us in how to improve the model where it doesn't. While it has taken a particularly unique form of global collaboration, unlike most purpose-built models, here you can see that we have a single model that's able to address a wide range of applications. In this case, the model allows sharing of data and knowledge across disciplines in a way that has never before been available.

Of course, the most important thing is to recognize that these are the true heroes of this project. The collaborators across the world have all come together to share their experiences and their wisdom, publish the knowledge that they've been able to create, communicate with one another, and, amazingly, cross disciplines, from research and academia to industry, speaking with clinical cardiologists, and even the regulators have gotten involved.

We've been particularly fortunate in the involvement of the FDA, which is now going on its fifth year of collaborating with us. In fact, the FDA has not only been a member of the project from the first year, but just last year the center director, Jeff Shuren, announced that they would be working with us on a virtual clinical trial, using the living heart as a virtual patient population, to see if we could use these patients to better understand the safety of products, with the hope of reducing the burden on animal testing and, of course, ultimately human testing in clinical trials. We're really excited about this opportunity, and we think it will represent a significant breakthrough in the introduction of new products to market.

So that was a very quick overview, but if you're interested in learning more, and I hope you are, we post all the videos, technical papers, etc. on a webpage at 3ds.com. Hopefully you'll go there and get a better understanding of the amazing work that the collaborators have been able to do. Hopefully, that's given you some idea of the rapid advances that we've been able to make through the methodology of the Living Heart Project.

That's allowed us a much deeper understanding of human physiology and a method to collaborate and capture the best knowledge that humans have to offer. So, taking what we've learned in the Living Heart Project, we've begun to explore the development of a living brain model. While we have much further to go, as you'll see, we already have a range of applications that we can address, which is giving us confidence for the future.

So let me share a few of these examples. I've broken the examples down into three different types of treatment simulations. The first represents the use of brain models to explore damage done to the brain as a result of physical phenomena, such as traumatic brain injury and surgical interventions. The next examples involve patient-specific guidance for neuromodulation, and the final are some exciting new research into predicting neurodegenerative disease progression.

First, I'll discuss traumatic brain injury, or TBI. TBI, as you probably know, is a widespread condition considered to be any non-degenerative, non-congenital insult to the brain from an external mechanical force. This mechanical force can lead to temporary or even permanent impairment of cognitive, physical, or psychological functions.

When we first developed the brain model, in fact an entire head and neck model, it was done in collaboration with our partners at Synopsys Simpleware and the Naval Research Labs, who were obviously interested in understanding how to protect soldiers from blasts at all scales. Of course, TBI can occur from many sources and affect many different regions of the brain. As with the living heart, our goal is to develop horizontally deployable models that are not so finely tuned that they would be limited to individual use cases. We're all very familiar with injuries from sports, such as direct impacts in professional football, and the rapid decelerations of automotive crashes, but some of our greatest challenges are how to protect our soldiers. For example, from the recent strike on the U.S. base in Iraq, we know there are already more than 100 soldiers who suffered from TBI.

I'll briefly describe two examples, one from a direct impact and the other from a frontal blast pressure wave. As you can see, the impact scenarios for each case are very different: in the one, the impact profile is much broader and the location is centralized; in the other, the pressure wave moves more rapidly, in just a few milliseconds. I'll go through each analysis. Developing the head-neck model follows more or less the same procedure as with the heart.

The model begins with clinical image data, which is transformed into 3D geometry and then segmented into the necessary functional elements of the head. Each segment is then assigned the physical properties of a real head, which are derived based primarily on cadaver testing. For this model, the properties we have really only represent the physical characteristics necessary to reproduce the physical environment. Interpreting the psychological impact on brain function would require a far more detailed model and is not considered in this analysis.

Even so, the model requires 30 million degrees of freedom and can only be run over a very short period of time. This slide shows the details of the 3D head simulation model. Although relatively small in comparison to something, say, like a commercial jet, it's also quite complex.

We therefore need on the order of three and a half million elements to describe the full detail of the human head and brain, representing 33 different anatomical structures, each individually modeled so they can be analyzed independently. The model was then accelerated to an impact at a 45-degree angle. On the right you can see the results showing the validation data for the pressures created inside the simulation as compared with those measured in a cadaver.

As you can see, the model shows good correlation with the measured data. Note, of course, that the model here is not limited to this single scenario. It can easily be modified to increase or decrease its fidelity, and the scenario can be altered to include details such as a complete set of neck muscles, head protection, or impact from really any angle. Here in this video you can see the improved view using our 3DEXPERIENCE platform, which also highlights how much more intuitive it is to see these phenomena represented in full three dimensions. Moving on, I'll now go through another example of TBI, in this case from a blast loading represented by a frontal shock wave.

Here we see the experimental setup for calibrating and testing the virtual model. These results are reported in a paper indicated at the upper right. You can see the experimental rig can provide invaluable data for quantities that are readily measured, whereas simulation allows you to dig much deeper into what's happening on the inside.

In the case of a blast loading, the impact is very rapid and the shock wave passes in a few milliseconds. We can see, however, that the compressive wave actually travels through the skull faster than the wave surrounding the skull out in free air. This creates a negative pressure system at the back of the head, which is actually reproduced very well by the model, as shown in the graph in the upper right, where we compare the measured pressure against that of the simulation. Once again, we get good validation and are able to dig more deeply into what's happening inside the head once we know we've reproduced the overall behavior correctly.
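A back-of-the-envelope calculation shows why the compressive wave through the skull outruns the blast wave in air: wave speed in bone is roughly an order of magnitude higher than the speed of sound in air. The speeds and path length below are rough, literature-order values used only for illustration.

```python
# Rough arrival-time comparison for a head-sized path. The numbers are
# order-of-magnitude illustrative values, not measurements from the study.

head_path_m = 0.20    # ~20 cm front-to-back path length
c_air = 343.0         # speed of sound in air, m/s
c_bone = 3000.0       # approximate compressive wave speed in bone, m/s

t_air_ms = head_path_m / c_air * 1000.0
t_bone_ms = head_path_m / c_bone * 1000.0
print(f"air: {t_air_ms:.3f} ms, skull: {t_bone_ms:.3f} ms")
```

The bone-borne wave arrives well ahead of the airborne wave, which is consistent with the pressure pattern the simulation reveals at the back of the head.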

Here we show the blast wave traveling through the head from both the external region as well as a mid-sagittal region. Once again, you can observe peak pressures near the base of the brain as the wave passes through. Looking more deeply into the brain itself, deformations in the skull, which I have actually shown here, result in peak strains in the center line of the brain, which most likely correspond to strains that would exceed the injury level. My next case study is actually a clinical treatment simulation that might be used in a severe case of TBI or other brain trauma that would lead to excess pressure buildup inside the skull.

In decompressive craniectomy, a surgeon gives space to the brain to allow outward herniation, which prevents compression of the brainstem structures and restores brain perfusion. Decompressive craniectomy, although it can be very effective, remains a controversial surgical procedure with high failure rates, as it induces large mechanical strains, which may be the cause of later brain damage. In this work, from collaborators at Stanford, Exeter, and Oxford Universities, the authors attempt to quantify the strains in the brain from a personalized craniectomy treatment. The simulations can reveal potential failure mechanisms (stretch, compression, and shear) and identify the regions at highest risk for brain damage, to guide the surgeon in the procedure. For this model, the authors used 190 different scans, each at 0.9-millimeter intervals, taken from an adult female volunteer, to create the model. Given the novelty of this analysis, the authors studied the procedure with varying degrees of model fidelity to explore the sensitivity to the level of detail and ultimately optimize the model's efficiency.

In the simplest model, only hyperelastic tissue is used to represent the brain, with unique properties for both white and gray matter, in a model that had about 1.3 million elements. In the poro-elastic model, white and gray matter are represented as more of a saturated medium, which includes porosity and fluidic effects such as permeability, wetting, and capillary effects. This is a much more realistic model, but far more computationally demanding. Once again, I don't have time to go into the details, but here you can see an overview of the simulation methodology and an example of the results.

The craniectomy was simulated by removal of a section of the left posterior skull about 10 centimeters in diameter. Frictionless contact was then used between the brain and the fluid inside the skull, shown in the pink section above, which allows the brain the freedom to expand in response to the release of the intracranial pressure.

On the right, we can see a systematic analysis of the displacement, the strains, and the stretching at different degrees of swelling, which vary from about 2% to 10% at 2% intervals. What we can see is that the axons located right at the leading edge of the opening are at the highest risk, due to the tangential stretching forces. Here's a 3D representation provided by the authors, which gives you a little more insight into the analysis. My next set of application examples is of 3D-guided neuromodulation.
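The swelling sweep just described follows a common simulation pattern: run the same model at several parameter levels and tabulate a risk metric. The sketch below mimics that pattern with a made-up closed-form strain model standing in for the finite-element solve; the coefficients and the risk threshold are invented for illustration.

```python
# Parameter-sweep pattern: evaluate a risk metric at several swelling levels.
# The strain formula and the 0.30 risk threshold are invented stand-ins for
# the actual finite-element results reported by the authors.

def peak_edge_strain(swelling_pct, opening_diameter_cm=10.0):
    """Hypothetical stand-in for simulated peak strain at the opening edge."""
    return 0.08 * swelling_pct * (10.0 / opening_diameter_cm)

for swelling in range(2, 11, 2):   # 2% to 10% in 2% steps, as in the study
    strain = peak_edge_strain(swelling)
    flag = "HIGH RISK" if strain > 0.30 else "ok"
    print(f"swelling {swelling:2d}%: peak strain {strain:.2f}  {flag}")
```

In the real workflow each row of this table is a full poro-elastic or hyperelastic simulation; the sweep structure and risk flagging are what carry over.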

As I'm sure you know, the brain is highly affected by externally applied or induced electrical signals, and these treatments, sometimes called electroceuticals, can offer life-changing opportunities. The treatments fall into two categories: invasive, which involve embedding electrodes directly into the brain, and non-invasive, where external stimulation is applied outside the skull. Because it has the most precise delivery, deep brain stimulation, or DBS, is the most common today.

However, there still remain many challenges and risks. So let me show you some of the work we've done on DBS. DBS is already a well-established therapy, for example, for the control of motor symptoms in Parkinson's disease.

But there are several critical aspects for DBS to be effective. One is the appropriate targeting and accurate placement of the DBS lead; others are the location of the stimulation and the precisely programmed delivery of electrical energy impulses. Most of our focus to date has been on the former, using the methodologies to reconstruct the head and brain to guide the surgeon.

We've worked with a company called NeuroTargeting, which has developed an atlas that can be used as a roadmap for a surgeon to understand the likely areas of critical function, based on a population analysis. These maps can then be used in planning as well as in the OR to guide the surgeon. Here's a short video of the type of visualizations we can provide to the surgeon. In this case, we're looking at an epilepsy patient with probes distributed throughout the brain to identify the regions responsible for convulsive activity. All of this information can be reconstructed in virtual reality, where the surgeon can literally deconstruct the brain to understand exactly what's happening, devise the optimal treatment, and then perform it on the real patient.

Now I'll talk about non-invasive treatments, which really represent the greatest potential for new benefit. By their nature, non-invasive treatments can be readily applied, even by the patient at home, or dynamically, to provide real-time treatment of physical or psychological challenges such as schizophrenia. Of course, as these treatments are applied blind, they pose a particular challenge in determining a protocol that will deliver the desired results.

So let me show you an example of how a virtual brain can be used. In this case study, the authors simulate transcranial electrical stimulation, and once again, I'd like to acknowledge the critical work of the collaborators, in this case led by the lead investigator, Dr. Venkatasubramanian, at the National Institute of Mental Health and Neurosciences in India. As I mentioned, the biophysics of electrical brain stimulation is such that the current predominantly shunts through the scalp, stimulating the nerves below with a relatively weak field compared to DBS.

Since these low fields are non-convulsive, the patient can be conscious and feels little discomfort, offering many different delivery modes. As with all tES techniques, both the location and the pulse modality will govern the effectiveness of the treatment. These low-amplitude currents don't trigger any action potentials directly; rather, they alter the resting potential, selectively raising and lowering it in regions as desired.

In this study, transcranial direct current stimulation is used to treat schizophrenia. In this case, the patient is tested to identify those regions where activity is higher than desired and those where it's lower. The goal of the treatment is then to decrease the resting potential in areas associated with the behavior you'd like to reduce, and to raise the potential to increase neural activity for the desirable behaviors. As before, the CT scans were segmented, in this case into six unique sections, each given an isotropic electrical conductivity measured from cadavers.

The model was composed of 3 million elements and about 17 million nodes. In the simulation, a current of 2 milliamps was applied, and the exterior was treated as an insulator so that no current would leak out. With these detailed patient-specific models, the exact procedure can be performed on the virtual twin of the patient, with the significant benefit of being able to look inside the skull, track the propagation of the electrical signal, and see what's actually happening deep within the brain.
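The forward problem just described, a steady current injected through scalp electrodes with an insulated exterior, amounts to solving for the electric potential over a conductivity map. As a minimal sketch of the idea (not the authors' solver; the 2D grid, the two tissue conductivities, and the electrode positions are all illustrative assumptions), one can iterate a conductivity-weighted Laplace equation on a coarse grid:

```python
import numpy as np

# Coarse 2D stand-in for the segmented head model: each cell gets an
# isotropic conductivity (S/m); values here are illustrative, not the
# cadaver-measured ones used in the study.
n = 40
sigma = np.full((n, n), 0.33)           # "brain tissue"
sigma[:5, :] = 1.5                      # "scalp" layer, higher conductivity

phi = np.zeros((n, n))                  # electric potential (normalized)
anode, cathode = (0, 8), (0, 32)        # hypothetical electrode positions

# Jacobi iteration for div(sigma * grad(phi)) = 0 with an insulating
# (zero-flux) exterior: edge cells mirror their neighbors via padding.
for _ in range(5000):
    p = np.pad(phi, 1, mode='edge')     # Neumann boundary by mirroring
    s = np.pad(sigma, 1, mode='edge')
    num = (s[:-2, 1:-1] * p[:-2, 1:-1] + s[2:, 1:-1] * p[2:, 1:-1] +
           s[1:-1, :-2] * p[1:-1, :-2] + s[1:-1, 2:] * p[1:-1, 2:])
    den = s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:]
    phi = num / den
    phi[anode] = 1.0                    # fixed potential at the +electrode
    phi[cathode] = -1.0                 # fixed potential at the -electrode

# The field magnitude shows where current shunts through the high-
# conductivity scalp layer versus where it penetrates deeper tissue.
ey, ex = np.gradient(-phi)
field = np.hypot(ex, ey)
print(field.max())
```

The real patient-specific models replace this toy grid with millions of finite elements and measured conductivities, but the boundary conditions, injected current at the electrodes and zero flux everywhere else, play the same role.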

This gives the clinician the detail they need to tune the location and pulsing sequence to produce exactly the desired effects. Using the procedure described, patients were actually treated and classified as responders and non-responders. In this study, the goal was to understand why the treatment was effective in some patients but not in others, and having the ability to look inside the skull to understand exactly what was happening, and to compare it to experimental measurements, was invaluable in understanding the differences.

To guide the treatments, in this protocol the measured excitation potentials, both positive and negative, could be mapped back onto the clinical data, augmenting the scans with virtual surgery information. Having the ability to overlay the clinical data and the simulated data gives insights into the procedure that are invaluable in guiding the treatment, and gives real-time understanding. Based on the cohort of patients in the studies, the authors were able to devise a two-step procedure to optimize the treatment.

The first step is to simulate the treatment over the full range of options, creating a map of the response surfaces. That data is then provided to the clinician, who can interactively fine-tune the treatment based on the insight from the data and any other relevant knowledge and experience. The result is a semi-automated digital workflow for creating patient-specific treatment protocols, going from scan data to optimized treatment with the digital twin as the guide. Although the procedure is still under development, over time the clinical database should provide a fantastic knowledge foundation to dramatically increase the effectiveness of the treatment, and possibly lead to AI models that would allow real-time optimization. And finally, I'll now describe the use of the virtual brain model for the treatment of neurodegenerative diseases.
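The two-step workflow, sweep the stimulation options in simulation, then let the clinician fine-tune from the resulting response map, can be caricatured as follows. Here `surrogate_response` is a hypothetical stand-in for running the full patient-specific head model once per candidate electrode site and current; the positions, currents, and target are purely illustrative:

```python
import itertools

# Hypothetical stand-in for one run of the patient-specific head model,
# returning a predicted treatment effect for a given electrode site and
# injected current. Not the study's actual simulation.
def surrogate_response(position, current_ma):
    target = 4.0                                   # assumed target site
    return current_ma / (1.0 + (position - target) ** 2)

positions = range(8)                               # candidate electrode sites
currents = [0.5, 1.0, 1.5, 2.0]                    # candidate currents (mA)

# Step 1: simulate the treatment over the full range of options,
# producing a response-surface map.
response_surface = {
    (p, c): surrogate_response(p, c)
    for p, c in itertools.product(positions, currents)
}

# Step 2: the clinician inspects the map and fine-tunes; here we simply
# pick the best point subject to a safety cap on the current.
safe = {k: v for k, v in response_surface.items() if k[1] <= 2.0}
best = max(safe, key=safe.get)
print(best)  # → (4, 2.0): site and current with the largest predicted effect
```

The value of precomputing the whole surface, rather than optimizing blindly, is that the clinician can weigh the map against knowledge the model doesn't capture before committing to a protocol.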

For decades, scientists have speculated that the key to understanding age-related neurodegenerative disorders may be found in the unusual biology of prion diseases. Recently, this hypothesis has gained experimental momentum: it's been observed that specific proteins misfold and aggregate into seeds that structurally corrupt similar proteins, causing them to aggregate in turn and form assemblies such as large masses of amyloid.

These proteinaceous seeds can then serve as self-propagating agents for the progression of disease. The outcome is functional compromise of the nervous system, because the aggregated proteins become toxic or lose their normal function. In this case study, the authors investigate these mechanisms.

As I mentioned, the authors explore the hypothesis that these disease progressions are governed by complex reactions that lead to the propagation of toxic proteins, which then concentrate and create lesions, in turn causing cell death and tissue atrophy. There are several possible propagation mechanisms that can be investigated using models, which can be compared to longitudinal clinical data to help understand what's happening in a given patient. Of course, each patient will experience dementia uniquely, due to their own dysregulation of protein synthesis, but the underlying mechanism of disease progression is assumed to be similar, and each patient will experience a combination of these physical, chemical, and biological factors.

Once again, the brain is reconstructed using the previously described protocols. To understand each disease progression, the authors compute the biomarker abnormality and create a temporal map of the toxic protein based on clinical observations.

Then, using a propagation model, the time evolution of the path of the proteins can be predicted and analyzed. Finally, a tissue atrophy model with different atrophy rates in the gray and white matter is used to create atrophy maps. For computational efficiency, tissue shrinkage is computed in a post-processing step based on the concentration values at different time points. Here's an example of an Alzheimer's patient.
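One common way to realize this kind of propagation model is a reaction-diffusion (Fisher-KPP) equation on the brain's connectivity graph, with atrophy computed afterward from the accumulated toxic-protein concentration. The sketch below is not the authors' formulation: the four-region connectome, the rate constants, and the gray/white atrophy rates are all illustrative assumptions, but it shows the structure of the computation, including the atrophy-as-post-processing step:

```python
import numpy as np

# Toy connectome: symmetric adjacency among four regions.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian drives the spreading

c = np.zeros(4)
c[0] = 0.1                            # toxic-protein seed in region 0
alpha, kappa, dt = 1.0, 0.3, 0.01     # growth, spread, time step (assumed)

history = [c.copy()]
for _ in range(1500):                 # 15 "years" of Fisher-KPP dynamics:
    c = c + dt * (-kappa * (L @ c) + alpha * c * (1.0 - c))
    history.append(c.copy())          # spread along edges + local growth

# Atrophy as a post-processing step: integrate concentration over time,
# applying a higher rate in gray matter than in white matter.
is_gray = np.array([True, True, False, False])   # region tissue labels
rate = np.where(is_gray, 0.2, 0.05)              # illustrative atrophy rates
exposure = dt * np.sum(history, axis=0)          # time-integrated burden
atrophy = 1.0 - np.exp(-rate * exposure)         # fractional tissue loss
print(atrophy)
```

Because the atrophy map is derived from stored concentration snapshots rather than coupled back into the dynamics, the expensive propagation solve only has to run once, which is the computational-efficiency point made above.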

On the left, we see activation maps over time, from 0 to 15 years, following the progression of tau inclusions. In the center, we see MRI images taken from a patient compared to the predicted time sequence of toxic proteins, and at the bottom, the resultant brain atrophy predicted from the toxic protein progression. Using clinical data, the authors were able to develop a simple model of aggregated transneuronal damage to test the possible interactions between tau proteins and amyloid beta, and the coupled behavior between toxic protein clearance and proteopathic propagation.

This summary slide shows 2D animation sequences of four different damage maps. The analysis suggests that amyloid beta and tau proteins work together to enhance the nucleation and propagation of different diseases, which sheds new light on the importance of protein clearance and protein interaction mechanisms in prion-like models of neurodegenerative diseases.
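The kind of coupled model being described, two protein species whose aggregation reinforces each other, balanced against clearance, can be sketched as a pair of ordinary differential equations. This is only a caricature of the idea; every rate constant below is an illustrative assumption, not a value from the study:

```python
# Coupled two-species aggregation sketch: amyloid-beta (ab) and tau (tau)
# each grow logistically toward a saturated burden, tau nucleation is
# enhanced by the amyloid level (k_int), and both are removed by a
# clearance term. All rates are illustrative assumptions.
def step(ab, tau, dt=0.01, k_ab=0.5, k_tau=0.3, k_int=0.8, clear=0.1):
    d_ab = k_ab * ab * (1 - ab) - clear * ab
    d_tau = (k_tau + k_int * ab) * tau * (1 - tau) - clear * tau
    return ab + dt * d_ab, tau + dt * d_tau

ab, tau = 0.05, 0.01                  # small initial seeds of each species
for _ in range(3000):                 # forward-Euler integration, ~30 "years"
    ab, tau = step(ab, tau)

# With the interaction term, tau settles at a higher burden than it
# would with amyloid absent, mirroring the enhancement effect described.
print(round(ab, 3), round(tau, 3))
```

Even a toy like this reproduces the qualitative claim: turning `k_int` down to zero lowers the steady-state tau burden, which is why the interaction and clearance terms matter so much in these prion-like models.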

Of course, we understand the importance of three dimensions in capturing the actual effect on the human body. So here we see 3D progression maps, which can be further interrogated to reveal more insights into the physiological impact of these disease progressions. On the left, we see tau inclusions in Alzheimer's disease.

On the right, alpha-synuclein inclusions in Parkinson's disease. Clearly, we see very different progressions, which can provide insights for understanding treatment protocols and disease behavior. As I mentioned, the authors were able to couple the progression maps with physical atrophy models that mimic clinically observable changes in brain morphology. In this animation, we see the displacements from brain atrophy, which are exaggerated to aid visual interpretation but provide very meaningful insights into what's happening at the physical level, and can be compared directly to clinical data.

In the future, the authors hope to include a connectome-based dementia model, which could significantly add to the fidelity of the patient representation and the interpretation of what may actually be happening to the patient. Once again, I need to recognize the authors of this work, and I recommend that you contact them or look up their publications if you're interested in further information. We also show here a nice 3D visualization that superimposes many of the factors I've discussed, demonstrating the ability of three-dimensional representations to capture very complex human behavior. Looking ahead, we believe we now have many of the techniques in place to simulate complex patient-specific treatment protocols. In addition to those I've shown, we're exploring drug delivery mechanisms, such as drugs delivered directly through the cerebrospinal fluid to allow for regional or concentration specificity.

Building on what we've learned about simulating microvascular systems, the complex interactions between fluids and soft tissues such as valves, as well as needle penetration, skin permeability, and so on, our multi-scale models can help us go from cellular interactions to an entire patient, built up from scan data or ultimately approximated from libraries or atlases. We think this is the horizon of truly patient-centric medicine, and it will transform the patient experience forever. In summary, I've tried to give you a flavor of how we've spent the last five years or so creating digital continuity between real patients and virtual patients. We know from experience that the marriage of the real and virtual worlds really is the key to unlocking the imagination of medical and biomedical innovators.

Although there's still a long way to go, I hope I've convinced you that the virtual human twin is really on the horizon, and that it can be the transformational element that connects our disciplines and translates fundamental understanding into clinical care. We've made good progress with the living heart, and more recently with the brain, and we hope to continue, ultimately mapping the entire human body. And of course, the digital twin can always be accessed on the cloud, so it will always be with you or your doctor, or maybe one day with your coach to guide safer sports.

And with that, I'd like to open the phones for Q&A. Thanks, Steve. Really fascinating presentation, and I look forward to the continuation of this project and the data that it will continue to generate.

All right. Our Q&A session is coming up in just a moment. So I want to remind everyone once again to send in their questions for Steve. All you need to do is click the Ask a Question tab on the right-hand side of your screen, type in your question, and hit Submit. Okay, since we have a bunch of great questions that have come in already, let's get to the Q&A session and try to get to as many of your questions as possible.

Bear with us for just a moment as we transition into the Q&A session. All right, we have a couple of really good questions for Steve here.

Steve, the first question comes from an audience member, and they'd like to know if they could gain access to the models for their research. Sure. Well, great question.

So the short answer is absolutely. I've talked about two classes of models, heart models and brain models. As I mentioned, the heart model is more mature and therefore more available than the brain model. If you're interested in actually contributing to the development and testing of the heart model, for example, you can join the Living Heart Project, and through that project you get complete access to the project, all the materials, and the network of experts that have helped develop it. Alternatively, you can simply license it, as you would any other software, if you're just interested in using it to develop your own products or research.

For the brain, we're not quite as mature, so I suggest you contact me or any of the collaborators I've mentioned. And certainly, if you're interested in joining the Living Brain Project, it's in its formative phase, so we'd love to hear from you as well. All right, great.

Thanks, Steve. Next question. How long does it take to create a personal model of a brain or heart?

That's a great question. Also a tough one. The short answer is it depends a little bit on the phenomena you're interested in.

Models can be customized or personalized in as little as a few hours. We're developing more and more templates through the network of experts that we have. If those templates have already been developed, the methods are straightforward, and if you have scan data that gives us the original information, then models can be built quickly.

Many other phenomena take longer, but the time required is dropping rapidly, and we see it progressing faster and faster and becoming more and more automated over time. Okay. And in your presentation, you mentioned the FDA.

What are the regulatory implications of using these kinds of models? So that's one of the, I guess, more exciting parts of this to me. Of course, I can't speak on behalf of the FDA, but as I mentioned, I've been collaborating with them on the development of this technology for more than five years now. And I can tell you that they are investing both to understand how these models can help the regulatory process and to do what they can to really promote their use.

They actually believe that these kinds of simulation models can both reduce the cost and improve the quality and time to market for products. So from their perspective, they are very eager to understand.

If you use this model, they very much welcome the information and are investing to understand how to interpret it quickly. Great. And so do you think that this technology can help reduce costs to the consumer and patient? Well, certainly we expect it can.

There are a number of different applications, as I described. But essentially, the lowest cost will come from getting it right the first time. We all know that the more you plan, the better the execution.

In treatments and surgeries that are very invasive or very complex, the surgeon or doctor has to rely on indirect evidence or experience. And often it's an iterative process until they get the right answer.

The more we can understand the phenomena they're targeting, the more they'll be able to get it right the first time. And not only that, they'll be able to build up a knowledge base of when we got it wrong, so that over time new developments will come out. And so not only will individual treatments hopefully become less expensive over time, because fewer treatments are needed, but more therapies will also come to market more efficiently. Great.

And then sort of to add on to that, you know, you mentioned some of the projects. And so are any of these projects still open for additional members? And, you know, what does that cost?

As I mentioned, the Living Heart Project has been ongoing for about five years now and continues to grow. There's actually no cost to join the Living Heart Project. So if you're interested, please contact me.

What we ask is that you contribute your feedback and be constructive in helping us make it more and more rigorous. The Living Brain is still a closed project. We haven't opened it the same way we have the Living Heart, but we anticipate doing that hopefully over the next year or so. So again, we're very eager to get your input.

Great. Thanks, Steve. A couple questions now that I think are a little bit related to each other, so we'll kind of tick them off one by one.

But the first question: the audience member would like to know, would it be possible to use this technology in teaching anytime soon? Absolutely. In fact, it's already being used in a number of universities.

We have a particular initiative, which we call the Workforce of the Future. We believe that not only do we need to develop these tools so they can actually help clinical practice and therapeutic development, but we also need to help train people to learn how to use them. And so we're making these available for teaching, and we're developing actual course curricula. I'd love some input on that, because we're eager to understand what it is you'd like to teach those students.

So, again, please contact me on that front. All right. And the next two questions I think are a little related, so I think I'm going to ask them in tandem.

How could you use a virtual twin model in clinical applications? And do you anticipate the development of more efficacious drugs for Alzheimer's by using the virtual brain model? Well, again, it's a really exciting opportunity, I think. It's still a little bit early days, but my perception is twofold. One, we've gathered a lot of information and collectively done a lot of research on understanding the chemical and biological origins of diseases such as Alzheimer's.

But we've had less success in understanding how to identify their progression and how to reverse them. And I think if we can combine what we've learned about, for example, electrical stimulation, where we can target the electrical treatment, with what we know about chemical and biological treatment, we can probably address the progression more effectively in an individual patient. We can identify biomarkers that are observable clinically, so we have guidance on where to target. And if we can be more specific, then many treatments that are already available but are highly toxic, so that today we can't control their delivery or concentration, can, through this combination of techniques, be made far more available, more effective, and safer.

All right, thank you very much, Steve. And our last question, basically, you had some really great slides and some great figures and videos and so forth, so I think this question is pretty fitting. One of our audience members would like to know, what level of computing power do you need to run these models?

That's a great question. So the good news is it can be run on a laptop. Of course, the performance of your hardware is going to directly gate the complexity of the model and the speed at which you get a response. The typical heart model on an average eight-processor machine takes about four hours to run.

And it goes down from there. If you scale up to a large machine, say 100 processors, you can bring that time down dramatically. It can also go up depending on the complexity: if you're including details of blood flow, or, in the case of the brain models, if you actually want to run the growth and remodeling algorithms, those can take a bit longer.

All right, thank you very much, Steve. And with that, we've come to the end of our webinar. I'd like to thank Steve again for his really great presentation, full of lots of information. And I'd like to thank you, the audience, for your attention and very thoughtful questions.

And a very special thanks to Dassault Systèmes for sponsoring this webinar. Hopefully we'll see you again. Goodbye for now.