Transcript for:
AI and Medicine: Transformative Advances in Healthcare

Let me quickly say the old, very bad joke: what do you call the doctor — the medical student — who graduates at the bottom of his class? A doctor. And so if you could merely get the bottom 50% of doctors to be as good as the top 50%, that would be transformative for healthcare. Now, there are other superhuman capabilities that we can go towards, and we can talk about them if we want, that do require the next generation of algorithms, Nvidia architectures, and data sets. But there's so much that, if everything stopped now, we could already transform medicine. It's just a matter of the sweat equity to create the models, figure out how to include them in the workflow, how to pay for them, how to create a reimbursement system and a business model that works for our society. But there's no technological barrier.

Hey everyone, welcome to The Drive podcast. I'm your host, Peter Attia.

Well, Zach, thank you so much for joining me today. This is a topic that's highly relevant, and one that I've wanted to talk about for some time, but, you know, wasn't sure who to speak with, and so we eventually found our way to you. So again, thanks for making the time and sharing your expertise. Give folks a little bit of a sense of your background. What was your path through medical school and training?

It was not a very typical path. So what happened was: I grew up in Switzerland — nobody in my family was a doctor. I come to the United States, decide to major in biology, and then I get nerd-sniped by computing back in the late '70s, so I minor in computer science, but I still complete my degree in biology, and I go to medical school. And then, in the middle of medical school's first year, I realize, holy smokes, this is not what I expected. It's a noble profession, but it's not a science — it's an art, it's not a science — and I thought I was going into science. And so I bail out for a while to do a PhD in computer science, and this is during the early 1980s. It's a heyday of AI — actually the second heyday; we're now going through the third heyday — and it was a time of great promise, and, with the retrospectoscope, very clear that it was not going to be successful. There was a lot of overpromising — there is today — but unlike today, we had not released it to the public; it was not actually working in the way that we thought it was going to work, and it certainly didn't scale. So it was a very interesting period, and my thesis advisor, Peter Szolovits, a professor at MIT, said, "Zach, you should finish your clinical training, because I'm not getting a lot of respect from clinicians, and so to bring rational decision-making to the clinic, you really want to finish your clinical training." And so I finished medical school, did a residency in pediatrics and then pediatric endocrinology, which was actually extremely enjoyable. But when I was done, I restarted my research in computing, started a lab at Children's Hospital in Boston, and then a center of biomedical informatics at the medical school. Like in almost every other endeavor, getting money gets attention from the powers that be, and so I was getting a lot of grants, and they asked me to start the center, and then eventually a new Department of Biomedical Informatics that I'm the chair of, and where we now have 16 professors or assistant professors of biomedical informatics. And then I have been involved in a lot of machine learning
projects, but like everybody else, I was taken by surprise — except perhaps a little bit earlier than most — by large language models. I got a call from Peter Lee in October of '22. Actually, I didn't get a call; it was an email, like right out of a Michael Crichton novel. It said, "Zach, if you'll answer the phone — I can't tell you what it's about — it'll be well worth your while." And so I get a call from Peter Lee. I knew him from before: he was a professor of computer science at CMU and also department chair there, then went to DARPA, and then he went to Microsoft. And he tells me about GPT-4 — and this was before any of us had heard about ChatGPT, which was initially GPT-3.5. He tells me about GPT-4, and he gets me early access to it when almost no one else knows it exists — only a few people do. And I start trying it against hard cases. Like — I remember this from my training — getting called down to the nursery for a child with a small phallus and a hole at the base of the phallus, and they can't palpate testicles, and they want to know what to do, because I'm a pediatric endocrinologist. So I ask GPT-4: what would you do, what are you thinking about? And it runs me through the whole workup of these very rare cases of ambiguous genitalia — in this case it was congenital adrenal hyperplasia, where the making of excess androgens during pregnancy and then subsequently after birth causes the clitoris to swell into what looks like the glans of a penis, and the labia minora to fuse to form what looks like the shaft of a penis, but there are no testicles — there are ovaries. And so there's a whole endocrine workup, with genetic tests, hormonal tests, ultrasound, and it does it all. And it blows my mind. It really blows my mind, because very few of us in computer science really thought that these large language models would scale up the way they do. It was just not expected. And, you know, I was talking to Bill Gates about this after Peter Lee introduced me to the program, and he told me that a lot of his fanciest computer scientists in Microsoft Research did not expect this, but the line engineers at Microsoft were just watching the scale-up — you know, GPT-1, GPT-2 — and they just saw it was going to keep on scaling up with the size of the data and with the size of the model, and they said, yeah, of course it's going to achieve this kind of expertise. But the rest of us, I think because we value our own intellect so much, couldn't imagine how we would get that kind of conversational expertise just by scaling up the model and the data set.

Well, Zach, that's actually kind of a perfect introduction to how I want to think about this today, which is to say: look, there's nobody listening to us who hasn't heard the term AI, and yet virtually no one really understands what is going on. So if we want to talk about how AI can change medicine, I think we have to first invest some serious bandwidth in understanding AI. Now, you alluded to the fact that when you were doing your PhD in the early '80s, you were in the second era, or the second generation, of AI, which leads me to assume that the first generation was shortly following World War II, and that's probably why someone by the name of Alan Turing has his name on something called the Turing test. So maybe you can talk us through what Alan Turing posited, what the Turing test was and proposed to be, and really what gen-one AI was. We don't have to spend too much time on it — clearly it didn't work — but let's maybe talk a little bit about the postulates
around it and what it was.

So, after World War II we had computing machines, and anybody who was a serious computer scientist could see that you could have these processes that could generate other processes, and you could see how these processes could take inputs and become more sophisticated. As a result, even shortly after World War II we actually had artificial neural networks — the perceptron — which was modeled, roughly speaking, on the idea of a neuron that could take inputs from the environment and then have certain expectations, and if you updated the neuron as to what was going on, it would update the weights going into that artificial neuron. And so, going back to Turing: he just came up with a test that said, essentially, if a computational entity could maintain its side of the conversation without revealing that it was a computer, and others would mistake it for a human, then for all intents and purposes that would be intelligent behavior. And there have been all sorts of additional constraints put on it, and one of the hallmarks of AI, frankly, is that it keeps on moving the goalposts of what we consider to be intelligent behavior. If you had told someone in the sixties that the world chess masters were going to be beaten by a computer program, they'd say, well, that's AI, clearly that's AI. And then when Kasparov was beaten by Deep Blue, by the IBM machine, people said, well, it's just doing search very well — it's searching through all the possible moves in the future, it also knows all the grandmaster moves, it has a huge encyclopedic store of all the different grandmaster moves — this is not really intelligent behavior. And then if you had told people it could recognize human faces and find your grandmother in any picture on the internet, they'd say, well, that's intelligence — and of course, when we did it, no, that's not intelligent. And then if you said it could write a rap poem about Peter Attia based on your web page, and it did that — well, that would be intelligent, that would be creative. But then if you said it's doing it based on having created a computational model from all the text ever generated by human beings — as much as we can gather, which is somewhere between one and six terabytes of data — and this computational model is basically predicting what the next word is going to be — not just the next word, but, of the millions of words that could come next, what are the probabilities of that next word — and that is what's generating that rap, there are people who argue that's not intelligence. So the goalposts around the Turing test keep getting moved. I just have to say that I no longer find that an interesting topic, because what matters is what it's actually doing, and whether you want to call it intelligent or not, that's up to you. It's like discussing whether a dog is intelligent, or whether a baby is intelligent before it can recognize the constancy of objects — initially, for babies, if you hide something from them it's gone, and when it comes back it's a surprise, but early on they learn that there is constancy of objects even when they don't see them. So there's this spectrum of intelligent behavior. And I'd just like to remind you, and myself, that there's a very simple computational model for predicting the next word, called a Markov model, and several years ago people who were studying songbirds were able to predict the full song — the next note, and the next note — just using a very simple Markov model.
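[Editor's aside: for readers who want to see the "predict the next word" idea in the smallest possible form, here is a toy bigram Markov model in Python. The training text is a made-up placeholder; a large language model plays the same next-word game with a transformer over vastly more text and context, not with a lookup table like this.]

```python
# A minimal sketch of next-word prediction with a bigram Markov model.
import random
from collections import Counter, defaultdict

text = ("the king sat on the throne the king wore the crown "
        "the queen sat on the throne beside the king").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(text, text[1:]):
    following[current_word][next_word] += 1

def next_word_distribution(word):
    """Return {next_word: probability} estimated from the counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample_next(word):
    """Sample a continuation according to the estimated probabilities."""
    dist = next_word_distribution(word)
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

print(next_word_distribution("the"))   # e.g. 'king' is the most likely next word
print(sample_next("king"))             # 'sat' or 'wore', chosen by probability
```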
So from that perspective — I know we think that we're all very smart, but the fact that you and I, without thinking too hard about it, can come up with fluid speech... okay, so the model is now a trillion parameters, it's not a simple Markov model, but it's still a model. And perhaps later we'll talk about how this plays into the late — unfortunately, the late — Daniel Kahneman's notions of thinking fast and thinking slow: this notion of System 1, which is this sort of pattern recognition, which is very much similar to what I think we're seeing here, and System 2, which is the more deliberate and much more conscious kind of thinking that we pride ourselves on. But a lot of what we do is this sort of reflexive, very fast pattern recognition. I'll stop there.

Well, so if we go back to World War II, that's, to your point, where we saw basically rule-based computing come of age, and anybody who's gone back and watched movies about the Manhattan Project or the decoding of things like Enigma — again, that's straight rules-based computational power, and that can only go so far. But it seems that there was a long hiatus before we went from there to what some have called context-based computation — like what your Siri does, or Alexa — which is a step quite beyond that. And then of course you would sort of go from there to what you've already talked about, Deep Blue or Watson, where you have computers that are probably going even one step further, and then of course where we are now, which is GPT-4. And I want to talk a little bit about the computational side of that, but more what I want to get at is this idea that there seems to be a very nonlinear pace at which this is happening. And I hear your point — I'd never thought of it that way — I hear your point about the goalposts moving, but I think your instinct around measuring the right thing is also relevant, which is: let's focus less on the fact that we're never quite hitting the asymptote definitionally, and let's look at the actual output — and it is staggeringly different. So what was it that was taking place during the period of your PhD, what you're calling wave two of AI? What was the objective, and where was the failure?

So the objective was — okay, in the first era you wrote computer programs in assembler language or in languages like Fortran, and there was a limit to what you could do; you had to be a real computational programmer to do something in that mode. In wave two, in the 1970s, we came up with these rule-based systems where we stated rules in what looked like English: if there is a patient who has a fever, and you get an isolate from the lab, and the bacterial isolate is Gram-positive, then you might have a streptococcal infection with a probability of so-and-so. With these rule-based systems, again, you're now programming at the level of human knowledge, not in computer code. The problem with that was severalfold. One, you're going to generate tens of thousands of these rules, and these rules would interact in ways that you could not anticipate. And we did not know enough, and we could not pull out of human beings the right probabilities — what is the right probability if you have
a fever and you don't see anything on the blood test? What else is going on? There's a large set of possibilities, and getting all those rules out of human beings ended up being extremely expensive, and the results were not stable. And for that reason — and because we didn't have much data online — we could not go to the next step, which is to have data actually drive these models.

What were the data sources then?

Textbooks and journals, as interpreted by human experts. That's why some of these were called expert systems: they were derived from introspection by experts, who would then come up with the rules and the probabilities. Some of the early work — for example, there was a program called MYCIN, run by Ted Shortliffe out of Stanford, who developed an antibiotic advisor that was a set of rules based on what he and his colleagues sussed out from the different infectious disease textbooks and infectious disease experts. And it stayed up to date only as long as they kept looking at the literature, adding rules, fine-tuning it — if there was an interaction between two rules that was not desirable, you had to adjust that. Very labor intensive. And then if there was a new thing, you'd have to add new rules: if AIDS happened, you'd have to say, oh, there's this new pathogen, I have to make a bunch of rules, and the probability is going to be different if you're an IV drug abuser or if you're a male homosexual. And so it was very, very hard to keep up, and in fact people didn't.

What was the language that it was programmed in? Was this Fortran?

No, no. These were so-called rule-based systems, and the languages — for example, the one for the system MYCIN was called EMYCIN, for "Essential MYCIN" — looked like English.

So super labor intensive.

Super labor intensive, and there's no way you could keep it up to date. And at that time there were no electronic medical records — they were all paper records — so it was not informed by what was going on in the clinic. And so three revolutions had to happen in order for us to have what we have today, and that's why I think we have such a quantum jump recently.

And before we get to that — because that's obviously the exciting question — I just want to go back to gen two. Were there other industries that were having more success than medicine? Were there applications in the military, applications elsewhere in government, where, yes, they got a little closer to utility?

Yes. So, back in the 1970s there was a whole bunch of computer companies around what we called Route 128 in Boston, companies that were famous back then, like Wang, like Digital Equipment Corporation. It's a very sad story for Boston, because that was before Silicon Valley got its pearl of computer companies around it. And one of those companies, Digital Equipment Corporation, built a program called R1, and R1 was an expert in configuring the minicomputers that you ordered. So you wanted certain capabilities, and it would actually configure all the individual components — the processors, the disks — and it would know about all the exceptions and what you needed in terms of cabling and memory configuration; all that was done. And it basically replaced several individuals who had very, very rare knowledge of how to configure those systems. It was also used in several government logistics efforts.
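[Editor's aside: as a rough illustration of the rule-based "expert system" style described above — rules written at the level of human knowledge, each with a hand-assigned certainty — here is a toy sketch in Python. The rules and certainty factors are invented for the example; real systems like MYCIN had many hundreds of hand-built rules and a far more elaborate inference engine.]

```python
# A toy forward-chaining rule engine in the spirit of 1970s expert systems.
RULES = [
    {"if": {"fever", "gram_positive_isolate", "cocci_in_chains"},
     "then": "streptococcal_infection", "cf": 0.7},
    {"if": {"fever", "gram_positive_isolate", "cocci_in_clusters"},
     "then": "staphylococcal_infection", "cf": 0.7},
    {"if": {"streptococcal_infection"},
     "then": "recommend_penicillin", "cf": 0.8},
]

def infer(findings):
    """Keep firing rules whose conditions are satisfied, tracking certainty."""
    certainties = {f: 1.0 for f in findings}
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            if rule["if"] <= set(certainties):
                # certainty of a conclusion = min certainty of premises * rule CF
                cf = min(certainties[p] for p in rule["if"]) * rule["cf"]
                if cf > certainties.get(rule["then"], 0):
                    certainties[rule["then"]] = cf
                    changed = True
    return certainties

print(infer({"fever", "gram_positive_isolate", "cocci_in_chains"}))
# -> streptococcal_infection with certainty 0.7, recommend_penicillin with 0.56
```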
But even those efforts, although they were successful and used commercially, were limited, because it turns out that once you got to about three, four, five, six thousand rules, no single human being could keep track of all the ways those rules could work. We used to call this the complexity barrier: the rules would interact in unexpected ways and you'd get incorrect answers, things that were not commonsensical, because you had actually not captured everything about the real world. And so it was very narrowly focused, and if the expertise needed was a little bit outside the area of focus — if, let's say, it was an infectious disease program and there was a little bit of influence from the cardiac status of the patient, and you had not accurately modeled that — its performance would degrade rapidly. Similarly, if at Digital Equipment there was a new model that had a completely different part that had not been included, and there were some dependencies that were not modeled, it would degrade in performance. So these systems were very brittle and did not show common sense. They had expert behavior, but it was very narrowly done. There were applications in medicine back then that survived until today — for example, already back then we had systems doing interpretation of ECGs, actually pretty competently, at least as a first pass, until they were reviewed by an expert cardiologist. There was also a program that interpreted what's called serum protein electrophoresis, where you look at the proteins separated out by an electric gradient to make a diagnosis of, let's say, myeloma or other protein disorders, and those also were deployed clinically. But they only worked in very narrow areas; they were by no stretch of the imagination general-purpose reasoning machines.

So let's get back to the three things — the three things that have taken the relative failures of the first and second attempts at AI and gotten us to where we are today. I can guess what they are, but let's just have you walk us through them.

Okay. So the first one was just lots of data. We needed to have a lot of online data to be able to develop models of interesting performance and quality. ImageNet was one of the first such data sets: collections of millions of images with annotations — importantly, you know, this has a cat in it, this has a dog in it, this is a blueberry muffin, this has a human in it — and having that was absolutely essential to allow us to train the first very successful neural network models. So having those large data sets was extremely important. And there's an equivalent in medicine, which is that we did not have a lot of textual information about medicine until PubMed went online. So all the medical literature — at least we have an abstract of it in PubMed, plus we have a subset of it that's open access, because the government has paid for it through grants; there's something called PubMed Central, which has the full text. So all of a sudden that has opened up over the last ten years. And then electronic health records: after Obama signed the HITECH Act, electronic health records — which also ruined the lives of many doctors — also happened to generate a lot of text for use in these systems. So that's one: large amounts of data being generated online. The second was the neural network models themselves. The perceptron that I mentioned, developed not too long after World War II, was shown by one of the pioneers of AI, Marvin Minsky, to have fundamental
limitations, in that it could not compute certain mathematical functions, like what's called an exclusive-or (XOR) gate, and because of that people said these neural networks are not going to scale. But there were a few true believers who kept on pushing and making more and more advanced architectures, and those were the multi-level, deep neural networks: instead of having one neural network, we layer on top of one neural network another one, and another one, and another one, so that the output of the first layer gets propagated up to the second layer of neurons, to the third layer, the fourth, and so on.

And I'm sorry — was this a theoretical, mathematical breakthrough, or a technological breakthrough?

Both. It was both, because the insight that we could actually come up with all the mathematical functions we needed — that we could simulate them with these multi-level networks — was a theoretical insight, but we would never have made anything out of it if not for sweaty teenagers, mostly teenage boys, playing video games. In order to have first-person shooters capable of rendering high-resolution pictures of aliens or monsters, in 24-bit color at 60 frames per second, we needed very parallel processors that would let you render — to do the linear algebra that lets you calculate what the intensity of color should be on every dot of the screen at 60 frames per second.

And that's literally just because of the matrix multiplication math that's required to do this — you have n-by-m matrices that are so big, and you're crossing and dotting huge matrices.

Huge, huge matrices. And it turns out that's something that can be run in parallel, so you want multiple parallel processors capable of rendering those images, again at 60 frames per second — basically millions of bits on your screen being rendered at 24- or 32-bit color. And in order to do that, you need that linear algebra you just referred to to be run in parallel. So these parallel processors, called graphical processing units, GPUs, were developed. The GPUs were developed by several companies — some of them stayed in business, some didn't — but they were absolutely essential to the success of video games. Now, it then occurred to many smart mathematicians and computer scientists that the same linear algebra that was used to drive that computation of images could also be used to calculate the weights of the edges between the neurons in a neural network. The mathematics of updating the weights of a neural network in response to stimuli can be done entirely in linear algebra, and if you have this processor — a typical computer has a central processing unit, so that's one processing unit; a GPU has tens of thousands of processors that do this one very simple thing, linear algebra — then by having this parallelism, which typically only supercomputers would have, on your simple PC (because you needed to show the graphics at 60 frames per second), all of a sudden we had these commodity chips that allowed us to calculate the performance of these multi-level neural networks. So that theoretical breakthrough was the second part, but it would not have happened without the actual implementation capability that we had with the GPUs.

And so Nvidia would be the most successful example of this, presumably?

Yeah. It was not the first, but it's definitely the most successful example.
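[Editor's aside: a minimal sketch of the two points just made — a single-layer perceptron cannot compute XOR, but a small multi-layer network can, and both the forward pass and the weight updates are nothing but matrix multiplications, exactly the linear algebra GPUs were built to run in parallel. The network size, learning rate, and iteration count are arbitrary choices for this toy problem.]

```python
# XOR learned by a tiny two-layer network: forward and backward passes
# are just matrix multiplications (the kind GPUs parallelize).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR truth table

W1 = rng.normal(0, 1, (2, 4))   # input -> 4 hidden units
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output
b2 = np.zeros((1, 1))

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # forward pass: two matrix multiplications
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: the gradients are also just matrix multiplications
    d_out = out - y                              # cross-entropy + sigmoid gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)          # tanh derivative

    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]
```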
And there's a variety of reasons why it was successful and created an ecosystem of implementers who built their neural network, deep learning systems on top of the Nvidia architecture.

So was there a moment — if you went back and looked at the calendar — that you'd say was the year, or the quarter, when escape velocity was achieved?

Yeah, it's probably around 2012. There was an ongoing contest every year for who has the best image recognition software, and these deep neural networks running off GPUs were able to significantly outperform all the other competitors in image recognition in 2012. And that's very clearly when everybody just woke up and said, whoa — we knew about neural networks, but we didn't realize that these convolutional neural networks were going to be this effective, and it seems the only thing that's going to stop us is computational speed and the size of our data sets. So that moved things along very fast in the imaging space, with very soon consequences in medicine. It was only six years later that we saw journal articles about recognition of retinopathy — the diseases affecting the retina, the back of your eye, in diabetes — and a paper coming out, of all places, from Google, saying we can recognize different stages of retinopathy based on images of the back of the eye. And that also was a wake-up call, because — yes, it's the goalposts moving — it was great that we could recognize cats and dogs in web pages, but now all of a sudden this thing that we thought was specialized human expertise could be done by that same stack of software, if you just gave it enough cases of these retinopathies. It would actually work well. And furthermore, what was wild was that there's something called transfer learning, where you tune up these networks to recognize cats and dogs, and in the process of recognizing cats and dogs they learn how to recognize circles and lines and fuzziness and so on — and you did a lot better by training up the neural network first on the entire ImageNet set of images and then on the retinas than if you just went straight to "I'm only going to train on the retinas." So that transfer learning was impressive. And then the other thing that was impressive to many of us as doctors — I was actually asked to write an editorial for the Journal of the American Medical Association in 2018, when the Google article was written — was this: what was the main role of doctors in that publication? It was just two things. One was to label the images that were used for training — this is retinopathy, this is not retinopathy — and the other was to serve as judges of its performance. And that was it. All the rest of it was computer scientists working with GPUs and images, tuning it. It didn't look anything like medical school, and you were getting expert-level recognition of retinopathy. That was a wake-up call.

So you've alluded to the 2017 paper by Google — "Attention Is All That Is Needed," I think, is the title of the paper?

"Attention Is All You Need" — but that's not what I'm referring to; I'm also referring to a 2018 paper in JAMA. I'm sorry — you're talking about the great paper "Attention Is All You Need"; that was about the invention of the Transformer, which is a specific type of neural network architecture. I was talking about — these were vanilla, barely vanilla, convolutional neural networks, the same ones that can detect dogs and cats.

Yeah, got it.
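[Editor's aside: a hedged sketch of the transfer-learning recipe Zach describes — start from a network pre-trained on ImageNet and swap in a new classification head for retinal images. The dataset folder and the five severity grades are placeholders, not the data or code from the Google retinopathy work.]

```python
# Transfer learning sketch: ImageNet-pretrained backbone, new retinopathy head.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Pre-trained on ImageNet: the "already knows circles, lines, fuzziness" part.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the 1000-class ImageNet classifier with a head for,
# say, 5 hypothetical stages of diabetic retinopathy.
model.fc = nn.Linear(model.fc.in_features, 5)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Hypothetical folder of labeled fundus photographs, one subfolder per grade.
train_data = datasets.ImageFolder("retina_images/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:       # one pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```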
It was a big medical application — retinopathy, 2018. Except for computer scientists, no one noticed the "Attention Is All You Need" paper. Google had this wonderful paper that said, you know, what if we recognize not just text that co-occurs — because previously (to step away from images for a second) there was this notion that I can recognize a lot of similarities in text if I see which words occur together: I can infer the meaning of a word by the company it keeps. So if I see this word and it has around it "kingdom," "crown," "throne," "castle" — yep, it's about a king; and similarly for a queen, and so on. From that kind of association we created what were called embedding vectors, which, in plain English, are just strings of numbers that say, for any given word, how often these other words co-occur with it. And just using those embeddings — those vectors, those lists of numbers that describe the co-occurrence of other words — we were able to do a lot of what's called natural language processing, which is looking at text and saying, this is what it means, this is what's going on. But then in the 2017 paper they actually took the next step, which was the insight that where exactly the thing we were focusing on sat in a sentence — what was before and after it, the actual ordering — mattered, not just the simple co-occurrence. Knowing what position that word was in within a sentence actually made the difference. That paper showed that performance went way up in terms of recognition, and the Transformer architecture that came from that paper made it clear to a number of researchers — not me — that if you scaled that Transformer architecture up to a larger model, so that the position dependence and those vectors were learned across many, many more texts, like the whole internet, you could train it to do various tasks. So this Transformer model, which is called the pre-trained model — and I apologize, because I find it very boring to talk about unless I'm working with fellow nerds — you can think of it as the equivalent of an equation with multiple variables. In the case of GPT-4, we think it's about a trillion variables. So it's like an equation where you have a number in front of each variable, a coefficient, and that list is about a trillion long. And this model can be used for various purposes. One is the chatbot purpose, which is: given this sequence of words, predict the next word that's going to be said. Now, that's not the only thing you could use this model for, but that turns out to have been the breakthrough application of the Transformer model for text.

So is that — just to round out what you said earlier, Zach — would you say that is the third thing that enabled this third wave of AI? The Transformer?

Actually, it was not what I was thinking about. For me, I put the real breakthrough in data-driven AI around the 2012 era. This is yet another — if you had talked to me in 2018, I would have already told you we're in a new heyday, and everybody would have agreed with you; there was a lot of excitement about AI just because of the image recognition capabilities. This was an additional capability, beyond what many of us were expecting just from the scale-up of the neural network.
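[Editor's aside: a toy version of the co-occurrence idea behind embedding vectors — represent each word by counts of the words that appear near it, then compare words by the similarity of those vectors. Real embeddings, and the learned, position-aware vectors inside a Transformer, are dense and trained on billions of words; this is only the counting version of "know a word by the company it keeps," on an invented three-sentence corpus.]

```python
# Co-occurrence "embeddings": a word is described by the words around it.
from collections import Counter, defaultdict
import math

corpus = ("the king ruled from the throne in the castle . "
          "the queen ruled from the throne in the castle . "
          "the dog slept on the rug in the kitchen .").split()

WINDOW = 4
cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            cooc[word][corpus[j]] += 1

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm = lambda c: math.sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v))

# "king" and "queen" keep the same company (ruled, throne, castle...),
# so their count vectors are more alike than king's and dog's.
print(cosine(cooc["king"], cooc["queen"]))   # higher
print(cosine(cooc["king"], cooc["dog"]))     # lower
```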
So the three, just to make sure I'm consistent, were: large data sets; multi-level neural networks, a.k.a. deep neural networks; and the GPU infrastructure. And that brought us well through 2012 to 2018. The 2017 blip that became what we now know as this whole large language model, Transformer architecture — that's a completely unanticipated development for many of us, but it was already on the heels of an ascendant AI era. There were already billions of dollars of frothy investment and frothy companies, some of which did well and many of which did not do so well. So I think the Transformer architecture has revolutionized — well, revolutionized many parts of the human condition, I think — but it was already part of, I think, the third wave.

And so there's something about GPT where I feel like, by the time GPT-3 came out, or certainly by 3.5, this was now outside the purview of computer scientists and people in the industry who were investing in it; it was now becoming as much a verb as Google was in probably the early 2000s, right? There were clearly people who knew what Google was in '96 and '97, but by 2000 everybody knew what Google was. And I feel like something about GPT-3.5 or 4 was kind of the tipping point, where I don't think you cannot know what it is at this point. And I don't know if that's relevant to the story — meaning, does that sort of speak to what trajectory we're on now? The other thing, Zach, that has become so audible in the past year is the elevation of the discussion of how to regulate this thing, which seems like something you would only argue about if you felt that there were a chance for this thing to be harmful to us in some way that we do not yet perceive. So what can you say about that? Because that's obviously a nod to the technical evolution of AI — that very serious people are having discussions about pausing, moratoriums, regulations, things like that. There was no public discussion of that in the '80s, right? Which may have spoken to the fact that in the '80s it just wasn't powerful enough to pose a threat. So can you maybe give us a sense of what people are debating now? What is the smart, sensible, reasonable argument on both sides of this — and let's just have you decide what the two sides are. I'm assuming one side says pedal to the metal, let's go forth on development, don't regulate this, let's just go nuts; the other side is, no, we need to have some brakes and barriers.

Well, it's actually not quite that. You're absolutely right that the chatbots have now become a commonly used noun, and that probably happened with the emergence of GPT-3.5, which appeared around, I think, December of 2022. But now, yes — because out of the box, that pre-trained model I told you about could tell you things like how do I kill myself, or how do I manufacture a toxin; it could allow you to do a lot of harmful things. So there was that level of concern, and we can talk about what's been done about those first-order efforts. Then there's been a group of scientists who, interestingly, went from saying we'll never actually get general intelligence from this particular architecture to saying, oh my gosh, this technology is able to do inference in a way that I had not anticipated, and now I'm so worried that, either because it is malevolent or just because it's trying to do something that has
bad side effects for humanity, it presents an existential threat. Now, on the other side, I don't believe there is anybody saying, let's just go heads-down and see how fast we can get to artificial general intelligence — or if they do think that, they're not saying it openly.

Can you just define AGI, Zach? I think we've all heard the term, but is there a quasi-accepted definition?

No. First of all, there's not, and I hate myself for even bringing it up.

I was going to bring it up before you anyway — it was inevitable.

Yeah, that was an unfortunate slip, because artificial general intelligence means a lot of things to a lot of people, and I slipped because I think it's, again, a moving target, and it's very much in the eye of the beholder. So, you know, there's a guy called Eliezer Yudkowsky, one of the so-called doomers, and he comes up with great scenarios of how a sufficiently intelligent system could figure out how to persuade human beings to do bad things, or take control of our infrastructure — bring down our communications infrastructure, or take airplanes out of the sky — and we can talk about whether that's relevant or not. And on the other side we have, let's say, OpenAI and Google. But what's fascinating to me is that OpenAI, which, working with Microsoft, generated GPT-4, was not saying publicly at all, "let's not regulate it." In fact they were saying, "please regulate me." Sam Altman went on a world tour where he said we should be very concerned about this and we should regulate AI, and he was before Congress saying we should regulate AI. And so — I feel a bit sheepish about saying this, because Sam was kind enough to write a foreword to the book I wrote with Peter Lee and Carey Goldberg on GPT-4 and the revolution in medicine — but I was wondering: why were they insisting so much on regulation? There are two interpretations. One is that it's just a sincere wish — and it could very well be a sincere wish — that it be regulated, so that we check these machines, these programs, to make sure they don't actually do anything harmful. The other possibility, unfortunately, is something called regulatory lock-in, which means: I'm a very well-funded company, and I'm going to create regulations with Congress about what is required — which boxes you have to check in order to be allowed to run — and if you're a small company, you're not going to have a bevy of lawyers with big checks to comply with all the regulatory requirements. And so, you know, I don't know Sam personally; I imagine he's a very well-motivated individual. But whether it's for the reason of regulatory lock-in or for genuine concern, there have not been any statements of "let's go heads-down"; they do say "let's be regulated." Now, having said that, before you even get to the doomer scenario, I think there's something just as potentially evil that we have to worry about — another intelligence — and that's human beings, and how human beings use these great tools. Just as we know for a fact that among the earliest users of GPT-4 were high schoolers trying to do their homework and solve hard puzzles given to them, we also know that various parties have been using the amazing text generation and interactive capabilities of these programs to spread misinformation, to run chatbots — and there's a variety of malign things that could be done by third parties
using these engines. And I think that's, for me, the clear and present danger today: how do individuals decide to use these general-purpose programs? If you look at what's going on in the Ukraine–Russia war, I see more and more autonomous vehicles flying and carrying weaponry and dropping bombs, and we see in our own military a lot more autonomous drones with greater and greater autonomous capabilities. Those are purpose-built to actually do dangerous things, and a lot of science fiction fans will refer to Skynet from the Terminator series — but we're literally building it right now.

And Skynet, in The Terminator, Zach — they refer to a moment, I don't remember the year, like 1997 or something, and I think they talk about how Skynet became, quote, self-aware, and somehow when it became self-aware it just decided to destroy humans; we were a threat.

That's right, yeah.

And is "self-aware" movie-speak for AGI? What do you think self-aware means in more technical terms? Or is it superintelligence? There are so many terms here and I don't know what they mean.

Okay, so self-awareness means a process by which the intelligent entity can look inwardly at its own processes and recognize itself. Now, that's very hand-wavy, but — I'm having a senior moment — the guy who wrote Gödel, Escher, Bach... oh yeah, Douglas Hofstadter. Douglas Hofstadter has probably done the most thoughtful and clear writing about what self-awareness means, and I will not do it justice, but if you really want to read one full book that spends the whole book trying to explain it, it's called I Am a Strange Loop. In I Am a Strange Loop he explains how, if you have enough processing power and you can represent the processes — if you have, essentially, models of the processes that constitute you, in other words you're able to look at what you're thinking — you may have some sense of self-awareness. Now, there's a bit of an act of faith in that, and many AI researchers don't buy that definition. So there's a difference between self-awareness and actual raw intelligence. You can imagine a super-powerful computer that would predict everything that was going to happen around you and was not aware of itself as an entity, right? And yet the fact remains that you do need to have a minimal level of intelligence to be self-aware. A fly may not be self-aware — it just goes and finds the good-smelling poop and does whatever it's programmed to do. But dogs have some self-awareness and awareness of their surroundings; they don't have perfect self-awareness — they don't recognize themselves in the mirror, and they'll bark at it. Some birds will recognize themselves in mirrors. And we recognize ourselves in many, many ways. So there is some correlation between intelligence and self-awareness, but these are not necessarily dependent functions. I feel like we got off track.

No, it's okay. I mean, I think what I'm hearing you say is: look, there are clear and present dangers associated with the current best AI tools, in that humans can use them for nefarious purposes. It seems to me that the most scalable example of that is still relatively small, in that it's not existential-threat-to-our-species large.

Correct — well, yes and no.
If I was trying to do gain-of-function research with a virus —

Good point, yeah.

— I could use these tools very effectively.

Yeah, that's a great example. But that's not — so there's this disconnect, and perhaps you understand the disconnect better than I do, because there are those real existential threats, and then there's this fuzzier thing that we're worried about, correctly: bias, incorrect decisions, hallucinations — we can get into what that might be — and their use in the everyday of the human condition. There are concerns about mistakes that might be made, concerns about displacement of workers: just as automation displaced a whole other series of workers, now we have something that works in the knowledge industry automatically — just as we're replacing a lot of copy editors and illustrators with AI — where is that going to stop? It's now much more in the white-collar space, and so there is concern around the harm that could be generated there. And in the medical domain: are we getting good advice, are we getting bad advice, whose interests are being optimized in these various decision procedures? That's another level that doesn't rise at all to the level of extinction events, but a lot of policymakers and the public seem to be concerned about it.

Yeah, those are fair points. Let's now talk about the state of play within medicine. So I liked your first example — almost one we take for granted — but you go and get an EKG at the doctor's office — this was true 30 years ago just as it is today — and you get a pretty darn good readout, right? It's going to tell you if you have an AV block, it's going to tell you if you have a bundle branch block. Put it this way: they read EKGs better than I do — that's not saying much anymore, but they do. What was the next area where we could see this? It seems to me that radiology, which is of course image- and pixel-based medicine, would be the most logical next place to see AI do good work. What is the current state of AI in radiology?

So in all the visually based medical specialties, it looks like AI can do as well as many experts. What are the image-interpretation subspecialties? Pathology, where you're looking at slices of tissue under the microscope; radiology, where you're looking at x-rays or MRIs; dermatology, where you're looking at pictures of the skin. In all those visually based specialties, the computer programs are doing, by themselves, as well as many experts. But they're not replacing the doctors, because that image recognition process is only part of their job. Now, to be fair, to your point, in radiology — already today, before AI — many hospitals would send x-rays by satellite to Australia or India, where they would be read overnight by a doctor, or a specially trained person, who had never seen the patient, and then the reports filed back to us. Because they're 12 hours away from us, overnight we would have the results of those reads. And that same kind of function can be done automatically by AI, so that's replacing a certain kind of doctor.

But let me dig into that a little bit more. Let's start with a relatively simple type of image, such as a mammogram or a chest x-ray — a single image; I mean, I guess with a chest x-ray you'll look at an AP and a lateral, but let's just say you're looking at an AP or a single mammogram. A radiologist will look at that, and a radiologist will have clinical information as well. They will know why this patient presented — in the case of the chest x-ray, for example, in the ER in the middle of the night: were they short of breath, do they have a fever, do they have a previous x-ray I can compare it to? They'll have all sorts of information. Are we not at the point now where all of that information could be given to the AI to enhance the pre-test probability of whatever diagnosis it comes to?

I am delighted when you say pre-test probability — don't talk dirty around me.

Love my Bayes' theorem over here. Yeah.

Yep.
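[Editor's aside: since Bayes' theorem just came up, here is a worked example of the pre-test-probability point, with invented numbers — the same imaging finding means something very different depending on the clinical context that sets the prior.]

```python
# Bayes' theorem applied to an imaging finding with an assumed
# sensitivity and specificity; the probabilities are illustrative only.
def post_test_probability(pre_test: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive finding) by Bayes' theorem."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    numerator = p_pos_given_disease * pre_test
    denominator = numerator + p_pos_given_healthy * (1 - pre_test)
    return numerator / denominator

# Same finding (read with 90% sensitivity and 90% specificity),
# two very different patients:
low_risk  = post_test_probability(pre_test=0.01, sensitivity=0.9, specificity=0.9)
high_risk = post_test_probability(pre_test=0.30, sensitivity=0.9, specificity=0.9)
print(round(low_risk, 2), round(high_risk, 2))   # ~0.08 vs ~0.79
```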
So you just said a lot, because what you just said actually goes beyond what the straight convolutional neural networks would do. They actually could not replace radiologists, because they could not do a good job of taking into account the previous history of the patient; it required the emergence of Transformers, with multimodality across both the image and the text. Now they're going to do better than many, many radiologists. But today there is, I don't think, any threat yet to radiologists as a job. You know, one of the most irritating predictions — to doctors — was by Geoffrey Hinton, one of the intellectual leaders of neural network architecture. He said, I think it was in 2016 — I have this approximately wrong — that within six years we would have no need for radiologists, and that was just clearly wrong. And the reason it was wrong is, (a), the systems did not have these capabilities that we just talked about, about understanding the clinical context, but it's also the fact that we just don't have enough radiologists — meaning people trained to actually do the work. So if you look at American medicine — and this is a big topic, I'll let you shut me down — if you look at residency programs, we're not getting enough radiologists out. Now, we have an overabundance of applicants for interventional radiology: they're making a lot of money, it's high prestige. But straight-up radiology readers — not enough of them. Primary care doctors: I go around medical schools and ask who's becoming a primary care doctor — almost nobody. And so primary care is disappearing in the United States. In fact, Mass General and the Brigham announced officially that they're not seeing primary care patients. People are still going into dermatology, and they're still going into plastic surgery, but in what I did, pediatric endocrinology, half of the slots nationally are not being filled. Pediatric developmental disorders, like autism — half of those slots are not filled. There's a huge gap emerging in the available expertise. So it's not what we thought it was going to be — that we had a surplus of doctors to be replaced; we have a surplus in a few focused areas which are very popular, and then for all the work of primary care, primary prevention — the kind of stuff that you're interested in — we have almost no doctors available.

Yeah. Let's go back to the radiologist for a second, because, again, I'm fixated on this one because it seems like the closest one to address. And again, if you're saying, look, we have a dearth of imaging radiologists who are able to work the emergency rooms, urgent care clinics, and hospitals, wouldn't that be the first place we
would want to apply our best image recognition, with our super-powerful GPUs, and now plug them into our Transformers with our language models, so that I can get clinical history, past medical history, previous images, current images — and they don't have to send it to a radiologist in Australia to read, who then has to send it back to a radiologist here to check? I mean, if we're just trying to fill a gap, that gap should be fillable, shouldn't it?

It is, and that's exactly where it is being filled. And what keeps distracting me in this conversation is that there's a whole other group of users of these AIs that we're not talking about, which is the patients. Previously, none of these tools were available to patients. With the release of GPT-3.5 and -4, and now Gemini and Claude 3, they're being used by patients all the time, in ways that we had not anticipated. Let me give you an example. There's a child who was having trouble walking, having trouble chewing, and then started having intractable headaches. Mom brought him to multiple doctors; they did multiple imaging studies; no diagnosis; he kept on being in intractable pain. She just typed all the reports into GPT-4 and asked GPT-4, what's the diagnosis? And GPT-4 said tethered cord syndrome. She then went with all the imaging studies to a neurosurgeon and said, what is this? He looked at it and said, tethered cord syndrome. And we have such an epidemic of misdiagnosis and undiagnosed patients. Part of my background that I just mentioned briefly: I'm the principal investigator of the coordinating center of something called the Undiagnosed Diseases Network. It's a network of 12 academic hospitals, going down the West Coast from the University of Washington, Stanford, UCLA, to Baylor, and up the East Coast — Harvard hospitals, NIH — and we see a few thousand patients every year. These are patients who have been undiagnosed, and they are in pain, and that's just a small fraction of those who are undiagnosed. And yes, we bring to bear a whole bunch of computational techniques and genomic sequencing to actually be able to help these individuals, but it's very clear that there's a much larger burden out there of misdiagnosed individuals.

So a question for you, Zach: does it surprise you that, in that example, the mother was the one who went to GPT-4 and input all that? I mean, she had presumably been to many physicians along the way. Were you surprised that one of the physicians along the way hadn't been the one to say, gee, I don't know, but let's see what this GPT-4 thing can do?

Most clinicians I know do not have what I used to call the Google reflex. I remember when I was on the wards and we had a child with dysmorphology — they looked different — and I said to the fellows (this is after residency), what is the diagnosis? And they said, I don't know, I don't know. I said, he has this and this and this finding — what's the diagnosis? They didn't know. And I said, how would you find out? They had no idea. And then I just said, let's take what I just said and type it into Google — and in the top three responses there was the diagnosis. That reflex, which they do use out in their civilian life, they did not have in the clinic. And doctors are in a very unhappy position these days: they're really being driven very, very hard, they're being told to use certain technological tools, they're being turned into data-entry clerks. They don't have the Google reflex — who has time to look up a journal
article, let alone — they don't do the Google reflex, and even less do they have the "let's look at the patient's history and see what GPT-4 would come up with" reflex. I was gratified to see, early on, doctors saying, wow, look, I just took the patient history, plunked it into GPT-4, and said, write me a letter of prior authorization — and they were actually tweeting about doing this. On the one hand, I was very pleased for them, because it was saving them five minutes to write that letter to the insurance company saying, please authorize my patient for this procedure. I was not pleased for them because, if you use ChatGPT, you're using a program that is covered by OpenAI's terms of use, as opposed to a version of GPT-4 that is being run on a protected Azure cloud by Microsoft, which is HIPAA-covered. For those in the audience who don't know, HIPAA is the legal framework under which we protect patient privacy, and if you violate it you can be fined and even go to prison.

So, in other words, if a physician wants to put any information into GPT-4, they had better de-identify it.

That's right.

So they just plunked a patient note into ChatGPT — that's a HIPAA violation, right?

Right. There's a Microsoft version of it which is HIPAA-compliant, but that's not this. Anyway, these doctors were tweeting about it — so they were using it to improve their lives. The doctors were using it for improving the business, the administrative part, of healthcare, which is incredibly important, but by and large only a few doctors use it for diagnostic acumen.

And then what about more involved radiology? Obviously a plain film is one of the more straightforward things to do — although it's far from straightforward, as anybody knows who's stared at a chest x-ray — but once we start to look at three-dimensional images, such as cross-sectional images, CT scans, MRI, or even more complicated images like ultrasound and things of that nature, what is the current state of the art with respect to AI in the assistance of reading these types of images?

So that's the very exciting news. Remember how I said it was important to have a lot of data — that was one of the three ingredients? All of a sudden we have a lot of data around, for example, echocardiograms — the ultrasounds of your heart — which normally take a lot of training to interpret correctly. There is a recent study from the EchoCLIP group, led I think out of UCLA, and they took a million echocardiograms and a million textual reports of those, and essentially trained a model that creates those embeddings I talked about, of both the images and the text.

But this is — just to make sure people understand what we're talking about — this is not "here's a picture of a cat, here's a description: cat." When you put the image in, you're putting a video in. You're putting in a multi-dimensional video, because you have a time scale, you have Doppler effects — this is a very complicated video that is going in.

It's a very complicated video, and it's three-dimensional — views from different angles — and it's dependent on the user. In other words, the tech — the radiology tech — can be good or bad.

If I were the one doing it, it would be awful.

Right.
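[Editor's aside: for the technically curious, here is a hedged sketch of the kind of contrastive image–text training an EchoCLIP-style study uses — embed each echo study and each report as a vector, and train matching pairs to score higher than mismatched ones. The encoders below are trivial placeholders, not the actual EchoCLIP model.]

```python
# CLIP-style contrastive training sketch for paired studies and reports.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Stand-in for a real video or text encoder; maps features to an embedding."""
    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Linear(in_dim, embed_dim)

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # unit-length embeddings

video_encoder = ToyEncoder(in_dim=512)   # pretend 512-dim echo-video features
text_encoder = ToyEncoder(in_dim=256)    # pretend 256-dim report features

def clip_loss(video_feats, report_feats, temperature=0.07):
    v = video_encoder(video_feats)                    # (batch, 128)
    t = text_encoder(report_feats)                    # (batch, 128)
    logits = v @ t.T / temperature                    # pairwise similarities
    targets = torch.arange(len(v))                    # i-th video matches i-th report
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# One hypothetical training step on a batch of 8 video/report pairs.
videos, reports = torch.randn(8, 512), torch.randn(8, 256)
optimizer = torch.optim.Adam(list(video_encoder.parameters()) +
                             list(text_encoder.parameters()), lr=1e-4)
loss = clip_loss(videos, reports)
loss.backward()
optimizer.step()
```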
Right. And the echo tech does not have medical-school debt. They don't have to go to medical school, they don't have to learn calculus, they don't have to learn physical chemistry, all the hoops that you have to go through in medical school, and they don't have the attitudinal debt of doctors. In two years they get all those skills, and they actually do a pretty good job.
No, they do a fantastic job. But my point is that their skill is very much an important determinant of the quality of the image.
Yes, but what we still require these days is a cardiologist to then read it and interpret it.
Right, and that's sort of where I'm going: we're going to get rid of the cardiologist before we get rid of the technician.
Well, we're on the same page. My target in this conversation is nurse practitioners and physician assistants. With these tools they can replace a lot of expert clinicians, and there is a big open question: what is the real job of doctors 10 years from now? I don't think we know the answer to that.
To fast-forward the conversation, let's think about it. We still haven't come to the proceduralists; we still have to talk about the interventional radiologist, the interventional cardiologist, and the surgeon, and we can talk about the role of the surgeon and the da Vinci robot in a moment. But I think what we're doing is identifying the pecking order of physicians, and let's not even think about it through the lens of replacement; let's start with the lens of augmentation. The radiologist can be the most easily augmented, then the pathologist, the dermatologist, the cardiologist looking at echos and EKGs and stress tests: people who are interpreting visual data will be the most easily augmented. The second tranche will be people who are interpreting language data plus visual data, so now we're talking about your internist, your pediatrician, where you have to interpret symptoms and combine them with laboratory values, with a story, and with an image. Is that a fair assessment in terms of tiers?
I think it's absolutely a fair assessment. My only quibble, and it's not really a quibble, and I'm going to keep going back to this, is in the places where we don't have primary care, which I'm claiming is increasingly common. The Association of American Medical Colleges estimates that by 2035, only 11 years from now, we'll be missing on the order of 50,000 primary care doctors. As I told you, I can't get primary care at the Brigham or MGH today. In the absence of that, you have to ask yourself how we can replace these absent primary care practitioners with nurse practitioners and physician assistants augmented by these AIs, because there's literally no doctor to replace.
So tell me, Zach, where are we technologically on that augmentation? If NVIDIA never came out with another chip, if they literally said, you know what, we're only interested in building golf simulators and we're done with progress, and this is as good as it's going to get, do we have good enough GPUs, good enough multi-layer neural networks, that all you need is more data in training sets, and we could now do the augmentation that we've described in the last five minutes?
Yes. The short answer is yes. Let me be very concrete: most concierge services in Boston cost somewhere between $5,000 and $20,000 a year.
But you can get a very low-cost concierge-like service, and I'm just amazed they have not done the following. It's called One Medical. One Medical was acquired by Amazon, and they have a lot of nurse practitioners, and you can make an appointment, you can text with them. I believe those individuals could be helped in ordering the right imaging studies, the right EKGs, the right medications, and in assessing your ongoing heart failure, and only decide in a very few cases that you need to see a specialist cardiologist or a specialist endocrinologist, today. It would just be a matter of making the current models better and evaluating them, because not all models are equal, and a big question for us, the regulatory question, is which ones do a better job. But I don't think we need technological breakthroughs to make the current set of paraprofessionals work at the level of entry-level doctors. Let me quickly tell the old, very bad joke: what do you call the medical student who graduates at the bottom of the class? Doctor. So if you could merely get the bottom 50% of doctors to be as good as the top 50%, that would be transformative for healthcare. Now, there are other superhuman capabilities we can go toward, and we can talk about them if we want, that do require the next generation of algorithms, NVIDIA architectures, and data sets. But there's so much that, if everything stopped now, we could already transform medicine. It's just a matter of the sweat equity to create the models, figure out how to include them in the workflow, how to pay for them, how to create a reimbursement system and a business model that works for our society. There's no technological barrier.
So in my mind, everything we've talked about so far is: take the best-case example of medicine today and augment it with AI, such that you can raise everyone's level of care to that of the best, no gaps, and it scales out.
Exactly.
Okay, now let's talk about another problem: where do you see the potential for AI in solving problems that we can't even solve on the best day, at the best hospitals, with the best doctors? Let me give you an example. We can't really diagnose Alzheimer's disease until it appears to be at a point that, for all intents and purposes, is irreversible. Maybe on a good day we can halt progression really early in a patient with just a whiff of MCI, mild cognitive impairment, maybe with early amyloid detection and an anti-amyloid drug. But is it science fiction to imagine that there will be a day when an AI could listen to a person's voice, watch the movements of their eyes, study the movement of their gait, and predict 20 years in advance that a person is staring down the barrel of a neurodegenerative disease, and act at a time when maybe we could actually reverse it? How science-fictiony is that?
I don't believe it's science fiction at all. Do you know that by looking at images of the retina today, a straightforward convolutional neural network, not even one that involves transformers, can already tell you not just whether you have retinal disease, but whether you have hypertension, whether you're male or female, how old you are, and some estimate of your longevity? And that's just from looking at the back of your eye, having seen enough data.
I was a small player in a study that appeared in Nature in 2005 with Bruce Yankner, where we were looking at the frontal lobes of individuals who had died, for a variety of reasons, often accidents, at various ages. And we saw, bad news for people like me, that after age 40 the transcriptome, the genes that are switched on, fell off a cliff: something like 30% of the transcripts went down. So there seemed to be a big difference in the expression of genes around age 40. But there was one 90-year-old who looked like the young ones, so maybe there's hope for some of us. Then I thought about it afterwards, and there were other tissues that have much smoother functions, that don't have quite that fall, like our skin. Our skin ages, and in fact all our organs age, and they age at different rates.
You're saying that in the transcriptome of the skin you did not see this cliff-like effect at a given age, the way you saw it in the frontal cortex?
That's right. Different organs age at different rates. But having the right data sets, and the ability to see nuances that we don't notice, makes it very clear to me that the early-detection part is no problem; that can be very straightforward. The treatment part we can talk about as well. Again, early on we had, from the very famous Framingham Heart Study, a predictor of when you were going to have heart disease based on just a handful of variables. Now we have artificial intelligence models that, based on hundreds of variables, can predict various other diseases, and they will do Alzheimer's, I believe, very soon. I think a combination of gait, speech patterns, a picture of your body, a picture of your skin, and eye movements, like you said, will be a very accurate predictor. Speaking of eyes, we just published a very nice study where, in a car, cameras pointed at the driver's eyes can figure out what the blood sugar is. Diabetics have sometimes been unable to get driver's licenses because of the worry about them passing out from hypoglycemia, and this study showed that just by looking you can actually figure out the blood sugar. So that kind of detection is, I think, fairly straightforward. What you can do about it is a different question.
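To make the contrast between a Framingham-style handful of variables and a many-input model concrete, here is a toy, hypothetical sketch on synthetic data. It is not any published risk model; the feature counts, coefficients, and outcome are invented purely to show the mechanics of fitting and comparing the two.

```python
# Toy contrast between a few-variable and a many-variable risk model.
# Synthetic data; not a validated clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 5000, 300                       # 300 candidate inputs (labs, vitals, image-derived features...)
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:40] = rng.normal(scale=0.4, size=40)        # only some inputs actually carry signal
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w - 1.0)))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

few = LogisticRegression(max_iter=1000).fit(X_tr[:, :5], y_tr)              # "handful of variables"
many = LogisticRegression(C=0.1, max_iter=1000).fit(X_tr, y_tr)             # hundreds of variables, regularized

print("AUROC with 5 inputs:  ", roc_auc_score(y_te, few.predict_proba(X_te[:, :5])[:, 1]))
print("AUROC with 300 inputs:", roc_auc_score(y_te, many.predict_proba(X_te)[:, 1]))
```

The point of the sketch is only the workflow: the same fitting machinery scales from five inputs to hundreds, provided you have the outcome labels to train against, which is exactly the follow-up question below.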
Before we go to what you can do about it, I want to go a little deeper on the predictive side. You brought up the Framingham model, or the Multi-Ethnic Study of Atherosclerosis, the MESA model; these are by far the two most popular models for major adverse cardiac event risk prediction. But you needed something else to build those models, which was enough time to see the outcome, right? So in the Framingham cohort, and then the Framingham Offspring cohort, in the late '70s and early '80s, you had to be able to follow these people with their LDL-C and HDL-C and triglycerides, and later they eventually incorporated calcium scores. So if today we said, look, we want to predict 30-year mortality, which is something no model can do today (this is a big pet peeve of mine: we generally talk about cardiovascular disease through the lens of 10-year risk, which I think is ridiculous; we should talk about lifetime risk, but I would settle for 30-year risk, frankly), and if we had a 30-year risk model where we could take many more inputs, I would absolutely love to be looking at the retina. I believe, by the way, Zach, that a retinal examination should be part of medicine today for everybody. I would take a retinal exam over a hemoglobin A1c all day, every day; I'd never look at another A1c again if I could see the retina of every one of my patients. But my point is, even if today we could define the data set, and let's overdo it, we can prune things later, but say we want to see these 50 things in everybody to predict every disease, how is there any way to get around the fact that we're going to need 30 years to watch how the story plays out? Or are we basically going to say, no, we're going to do this over five years, but it won't be that useful, because a five-year predictor basically means you're already catching people in the throes of the disease?
I'll say three words: electronic health records.
That turns out not to be the answer in the United States.
Why?
Because in the United States we move around; we don't stay in any given healthcare system that long. So very rarely will I have all the measurements made on you, Peter: all your glycohemoglobins, your blood pressures, all your clinic visits, all the imaging studies you've had. However, that's not the case in Israel, for example. Israel has these HMOs, health maintenance organizations, and one of them, Clalit, I have a good relationship with, because they published all the big COVID studies looking at the efficacy of the vaccine. Why could they do that? Because they had the whole population available, and they have about 20 or 25 years' worth of data on all their patients, in detail, with family relationships. If you have that kind of data, and Kaiser Permanente also has that kind of data, I think you can actually come close now.
But you're not going to be able to get retina, gait, voice.
Because we still have to get those prospectively.
You still do, but I'm going to claim that there are proxies, rough proxies, for gait, and for hearing problems, like visits to the audiologist. Now, these are noisier measurements, and so those of us who are data junkies, like I am, always keep mumbling to ourselves: perfect is the enemy of good. Waiting 30 years for the perfect data set is not the right answer to help patients now, and there are things that are knowable today that we just don't know because we haven't bothered to look. I'll give you a quick example. I did a study of autism using electronic health records maybe 15 years ago, and I saw there were a lot of GI problems, and I talked to a pediatric expert who was a little bit dismissive; they said, brain bad, tummy hurt. I said, I've seen a lot of inflammatory-bowel-disease-like things; it just doesn't make sense to me that this is somehow an effect of brain function. To make a long story short, we did a massive study looking at tens of thousands of individuals, and sure enough we found subgroups of patients who had immunological problems associated with their autism: type 1 diabetes, inflammatory bowel disease, lots of infections. Those were knowable, but they were not known. And I had, frankly, parents coming to me more thankful than for anything else I had ever done for them clinically, because I was telling these parents they weren't hallucinating: their kids had these problems; they just weren't being recognized by medicine, because no one had the big wide-angle view to see these trends.
So, without knowing the field of Alzheimer's the way I know other fields, I bet there are trends in Alzheimer's that you could pick up today by looking at enough patients. You'll find some that have more frontotemporal components, some that have more affective components, some that have more of an infectious or immunological component. Those are knowable today.
So, Zach, you've already alluded to the fact that we're dealing with a customer, if the physician is the customer, who is not necessarily the most tech-forward customer, and who, truthfully, like many customers of AI, runs the risk of being marginalized by the technology if the technology gets good enough. And yet you need the customer to access the patient, to make the data system better, to make the training set better. So how do you see that dynamic playing out over the next decade?
That's the right question, because for these AI models to work you need a lot of data on a lot of patients. Where is that data going to come from? There are some healthcare systems, like the Mayo Clinic, that think they can get enough data that way. There are some data companies trying to build relationships with healthcare systems so they can get de-identified data. I myself am betting on something else. There is a trend toward consumers having increased access to their own data. The 21st Century Cures Act was passed by Congress, and it said that patients should be given access to their own data programmatically. Now, they're not expecting your grandmother to write a program to access her data, but by having a right to it, it enables others to do so. For example, Apple has something called Apple Health, with the big heart icon, and if you're at one of the 800 hospitals they've already hooked up with, like Mass General or Brigham and Women's, and you're a patient there, and you authenticate yourself with your username and password, it will download into your iPhone your labs, your meds, your diagnoses, your procedures, as well as all the wearable data, the blood pressures you get as an outpatient, and various other forms of data. That's already happening. There are not a lot of companies taking advantage of it yet, but right now that data is available on tens of millions of Americans.
Isn't it interesting, Zach, how unfriendly that data is in its current form? I'll give you a silly example from our practice. If we send a patient to LabCorp or Boston Heart or pick your favorite lab, and we want to generate our own internal reports, do some analysis, lay out trend sheets, we have to use our own internal software, and it's almost impossible to scrape those data out of the labs, because they're sending you PDF reports and their APIs are garbage. Nothing about this is user-friendly. Even if you have the Apple Health icon on your phone, it's not navigable, it's not searchable, it doesn't show you trends over time. Is there a more user-hostile industry, from a data perspective, than the health industry right now?
No. And there's a good reason why: they're keeping you captive. But Peter, the good news is you're speaking to a real nerd, so let me tell you two ways we could actually solve your problem. One: if it's in the Apple Health thing, someone can write a program, an app on the iPhone, that will take those data as numbers, not have to scrape anything, and run them through your own trending programs. You could use it directly.
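As an illustration of what "taking those data as numbers" can look like, here is a minimal sketch of pulling lab results from a FHIR patient-access API of the kind the 21st Century Cures Act rules push hospitals to expose. The endpoint, token, and patient ID are placeholders; a real app would obtain them through a SMART on FHIR authorization flow, and this is a sketch of the general pattern, not any specific hospital's or Apple's implementation.

```python
# Minimal sketch: read lab results as structured numbers from a FHIR R4
# patient-access API. BASE, TOKEN, and PATIENT_ID are placeholders.
import requests

BASE = "https://fhir.example-hospital.org/R4"      # placeholder endpoint
TOKEN = "patient-authorized-access-token"           # placeholder; from an OAuth/SMART flow
PATIENT_ID = "12345"                                # placeholder

def fetch_labs(patient_id: str):
    url = f"{BASE}/Observation"
    params = {"patient": patient_id, "category": "laboratory", "_count": 100}
    headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"}
    bundle = requests.get(url, params=params, headers=headers, timeout=30).json()
    for entry in bundle.get("entry", []):
        obs = entry["resource"]
        name = obs.get("code", {}).get("text", "unknown test")
        value = obs.get("valueQuantity", {})
        yield obs.get("effectiveDateTime"), name, value.get("value"), value.get("unit")

for when, name, value, unit in fetch_labs(PATIENT_ID):
    print(when, name, value, unit)   # numbers you can trend directly, no PDF scraping
```

The key design point is that Observation resources already carry the value, unit, and timestamp as fields, so trending becomes a query rather than a scraping exercise.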
Also, Gemini and GPT-4: you can actually give them those PDFs, and with the right prompting they will take those data and turn them into tabular spreadsheets.
But we can't do that, because of HIPAA, correct?
Absolutely you can, if the patient gives them to you, if the patient gets them from the patient portal.
The patient can do that, but I can't use a patient's data that way.
If the patient gives it to you, absolutely.
Really? But it's not de-identified.
Doesn't matter. Which part are you getting hooked on? There are many things you might be worried about.
If a patient says, Peter, you can take my 50 LabCorp reports from the last 10 years and run them through ChatGPT to scrape them out and give me an Excel spreadsheet that perfectly tabularizes everything, which we can then run through our model to build trends and look for things: I didn't think that was doable.
It's not doable through ChatGPT, because your lawyers would say, Peter, you're going to get a million dollars in HIPAA fines. But if you use, and I'm not a shill for Microsoft, I don't own any stock, GPT-4 on the Azure cloud, which is HIPAA-protected, you absolutely can use it, with patient consent, 100%. You can do it. GPT-4 is being used with patient data out of Stanford right now, Epic is using GPT-4, and it is absolutely, legitimately usable by you.
People don't understand that. So we've now just totally bypassed OCR. We don't need to waste our time with optical character recognition, for people not into the acronyms, which is what we were trying to use 15 years ago to scrape these data.
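Here is a hypothetical sketch of the consented workflow just described: lab-report text goes to a GPT-4 deployment on an Azure OpenAI endpoint and comes back as CSV rows. The endpoint, key, deployment name, and the PDF-to-text step are placeholders, and the compliance part comes from your organization's agreement with the cloud provider plus patient consent, not from anything in the code itself.

```python
# Sketch: turn a consented patient's PDF lab report into CSV rows via a
# GPT-4 deployment on Azure OpenAI. Endpoint, key, and deployment name are
# placeholders; HIPAA coverage depends on your contractual setup, not this code.
from openai import AzureOpenAI
from pypdf import PdfReader

client = AzureOpenAI(
    azure_endpoint="https://your-covered-endpoint.openai.azure.com",  # placeholder
    api_key="YOUR_KEY",                                               # placeholder
    api_version="2024-02-01",
)

def report_to_csv(pdf_path: str) -> str:
    # Naive text extraction; real reports may need better PDF handling.
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    response = client.chat.completions.create(
        model="your-gpt4-deployment-name",  # placeholder deployment
        messages=[
            {"role": "system",
             "content": "Extract every lab result as CSV with columns: "
                        "collection_date,test_name,value,unit,reference_range. Output CSV only."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(report_to_csv("labcorp_report.pdf"))  # rows you can append to a longitudinal trend sheet
```

In practice you would still spot-check the extracted values against the source report before trending them.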
Peter, let me tell you: the New England Journal of Medicine, I'm on the editorial board there, and about three months ago we published an image of the week. It's the back of a 72-year-old, and it looks like a bunch of red marks, like someone just scratched themselves, and the text says, blah blah blah, they had trouble sleeping.
The image of the week?
The image of the week, yes. I took that whole thing, removed one important fact, and gave GPT-4 the image and the text, and it came up with the two things it thought it could be: either bleomycin toxicity, and I don't know what that looks like, or shiitake mushroom toxicity. What I had removed was the fact that the guy had eaten mushrooms the day before.
So, just looking at a picture it had never seen, GPT-4 spit this out?
Yes.
I don't think most doctors know this, Zach. I don't think most doctors understand. I can't tell you how many times I get a rash and try to send a picture to my doctor, or my kid gets a rash and I'm trying to send a picture to their pediatrician, and they don't know what it is. It's like we're rubbing two sticks together and you're telling me about the Zippo lighter.
Yes, and that's what I'm saying: the patients without primary care doctors, I know I keep repeating myself, understand that they have a Zippo lighter, and rather than waiting three months for a rash or these symptoms, they say, I'll use this Zippo lighter. It's better than no doctor, for sure, and maybe better. Okay, now there are two more pieces.
Yeah, go.
Just quickly, to illustrate: I don't know squat about the FDA, so I pulled down from the FDA their adverse event reporting files, a big compressed zip file, and I sent it to GPT-4: please analyze this data. It says, unzipping it, here are the tables; based on this table I think this one is about the adverse events and this one is the locations; what do you want to know? I say, tell me what the adverse events are for these disease-modifying drugs for arthritis. It says, oh, to do that I'll have to join these two tables, and it just does it: it writes its own Python code, runs it, and gives me a report.
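For context, here is a rough, hypothetical sketch of what that generated analysis has to do with the public FAERS quarterly ASCII files: join the drug table to the reaction table on the report key and tabulate reactions for a drug of interest. The file names follow the published FAERS layout, but treat the exact column names as assumptions to verify against the data dictionary.

```python
# Rough sketch of the FAERS join described above: link DRUG and REAC tables
# on the report key and count reported reactions for one drug of interest.
# Column names are assumptions to check against the FAERS data dictionary.
import pandas as pd

drug = pd.read_csv("DRUG24Q1.txt", sep="$", dtype=str, low_memory=False)
reac = pd.read_csv("REAC24Q1.txt", sep="$", dtype=str, low_memory=False)

target = "ADALIMUMAB"   # example disease-modifying arthritis drug; placeholder choice
cases = drug[drug["drugname"].str.upper().str.contains(target, na=False)]

# Join reports mentioning the target drug to their reported reactions ("pt" = preferred term).
merged = cases.merge(reac, on="primaryid", how="inner")
print(merged["pt"].value_counts().head(20))
```

The interesting part of the anecdote is not the join itself, which is a few lines of pandas, but that the model inferred the schema and wrote this kind of code unprompted.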
Is this part of medical education now? You're at Harvard, one of the three best medical schools in the United States, arguably in the world. Is this an integral part of the education of medical students today? Do they spend as much time on this as they do on histology, where I spent a thousand hours looking at slides under a microscope that I've never once since needed to interpret? And I don't want to say there wasn't value in doing that; there was, and I'm grateful for it, but I want to understand the relative balance of the education. It's like the stethoscope: arguably we should be using things other than the stethoscope.
Let me make sure I don't get fired, or at least not too severely reprimanded, by telling you that George Daley, our dean of the medical school, has said explicitly that he wants to change all of medical education so that these learnings are infused throughout the four years. But it takes some doing.
Let's now move on to the next piece of medicine. We've gone from purely the recognition, image-based piece, to how you combine image with voice, story, and text, and you've made a very compelling case that we don't need any more technological breakthroughs to augment those; it's purely a data-set problem at this point, and a matter of willingness. Let's now move to the procedural. Is there, in our lifetimes, Zach, the probability that if you need a radical prostatectomy, which currently, by the way, is never done open, this is a procedure the da Vinci robot has revolutionized and there's no blood loss anymore; when I was a resident this was one of the bloodiest operations we did; it was the only operation for which we had the patients donate their own blood two months ahead of time, that's how guaranteed it was that they would need transfusions, so we said, to hell with it, come in a couple of months before and give your own blood, because you're going to need at least two units after this procedure; today it's insane how successful this operation is, in large part because of the robot, but the surgeon still needs to move the robot: are we getting to the point where that could change?
Let me tell you where we are today. There have been studies where they collected a bunch of YouTube videos of surgery and trained up one of these generative models, and it says, oh, they're putting the scalpel in to cut this ligament, and by the way, that's too close to the blood vessel, they should move a little to the side. That's already happening. Based on what we're seeing with robotics in the general world, I think the da Vinci controlled by a robot in 10 years is a very safe bet.
Really?
A very safe bet, and in some ways 10 years is nothing.
It's nothing, but it's a very safe bet.
The fact is, right now, just to go back to our previous discussion, these models can do a better job giving you a genetic diagnosis based on your findings than any primary care provider interpreting a genomic test.
Are you using that example, Zach, because it's a huge data problem? In other words, it's obvious you'd be able to do that, because of the amount of data; there are three billion base pairs to be analyzed, so of course you're going to do a better job.
But linking it to symptoms, yes.
And you're saying surgery is a data problem too, because you turn it into a pixel problem.
Pixels and degrees of freedom.
Which are infinite.
That's it. And remember, there are a lot of degrees of freedom in moving a car around in traffic, and by the way, lives are on the line there too. Medicine is not the only job where lives are at stake; driving a ton of metal at 60 miles an hour in traffic also puts lives at stake, and last time I looked, several manufacturers are saying that for some appreciable fraction of that effort they're controlling multiple degrees of freedom with a robot.
I very recently spoke with somebody, I won't name the company, but it's one of the companies deep in the autonomous vehicle space, and they very boldly stated, and made a pretty compelling case, that if every vehicle on the road were at their level of autonomous driving technology, you just wouldn't have fatalities anymore. But the key was that every vehicle had to be at that level, because if you still have some people driving and some autonomous vehicles, it doesn't hold. I don't know if you know enough about that field, but does that sense-check to you?
Yes, it does. First of all, I'm a terrible driver, and I am a better driver, and this is not an ad, but I'm a better driver because I'm in a Tesla, because I'm a terrible driver. And there's actually a very good message for medicine here. I know enough to know that I need to jiggle the steering wheel when I'm driving with a Tesla, because otherwise it will assume I'm zoning out. What I didn't realize is this: I'm very bad, I'll pick up my phone and look at it, and I didn't realize the car was looking at me, and it says, Zach, put down the phone. Okay. Three minutes later I pick it up again, and it says, that's it, I'm switching off Autopilot. So it switches off, and now I have to pay full attention. Then I get home and it says, all right, that was bad; you do that four more times and I'm switching off Autopilot until the next software update. The reason I mention that is that it takes a certain amount of confidence to do that to your customer base, to switch off the very thing they bought the car for. In medicine, how likely is it that we're going to fall asleep at the wheel if we have an AI thinking for us? I think it's a real issue. We know for a fact, for example, back in the '90s, that for a particular drug where people would talk endlessly about how frequently it should be given and at what dose, the moment you put it in the order-entry system, 95% of doctors would just use the default. So how, in medicine, are we going to keep doctors awake at the wheel, and will we dare to issue the kind of challenges I just described the car doing? But to get back to it, I do believe, because of what I've seen with autonomy and robots, that as fancy as we think it is, a robot controlling a da Vinci will probably have fewer bad outcomes.
You know, every once in a while someone nicks something and you have to convert to open surgery, or they go home and die on the way because they exsanguinate. I think it's just going to be safer.
It's just unbelievable for me to wrap my head around that, but truthfully it's impossible for me to wrap my head around what's already happened, so I try to retain the humility that says I reserve the right to be startled. Again, certain things seem much easier than others. I have an easier time believing we'll be able to replace interventional cardiologists, where the number of degrees of freedom, the complexity, and the relationship between what the image shows, what the cath shows, and what the input, the stent, is, that gap is much narrower; I can see a bridge to that. But when you talk about doing a Whipple procedure, when you talk about what it means to carve, cell by cell, a tumor off the superior mesenteric vessels, I'm thinking, oh my God.
Since we're on record, I'll say I'm talking about your routine prostate removal first. Ten years; I would take that bet today.
Wow. Let's go one layer further; let's talk about mental health. This is a field of medicine today that I would also argue is grossly underserved. Everything you've said to date resonates; I completely agree, from my own experience, that the resources in pediatrics and primary care are unfortunate at the moment.
Harvard has something like 60% of undergraduates getting some sort of mental health support, and it completely outstrips the resources available to the university health services, so we have to outsource some of our mental health care, and this is a very richly endowed university. In general, we don't have the resources.
So here we live in a world where, I think, the evidence is very clear that when a person is depressed, when a person is anxious, when a person has any sort of mental or emotional illness, pharmacotherapy plays a role, but it can't displace psychotherapy. You have to be able to put the two together, and the data would suggest that the knowledge of your psychotherapist is important, but it's less important than the rapport you can generate with that individual. Based on that, do you believe that the most sacred, protected, if you want to use that term, profession within all of medicine will then be psychiatry?
I'd like to think that, and I shouldn't say never, but if I had a psychiatric GPT speaking to me, I wouldn't think it understood me. On the other hand, back in the 1960s and '70s there was a program called ELIZA, a simple pattern-matching program. It would just emulate what's called a Rogerian therapist: "I really hate my mother." "Why do you say you hate your mother?" "It's because I don't like the way she fed me." "What is it about the way she fed you?" Just very, very simple pattern matching. And this ELIZA program, which was developed by Joseph Weizenbaum at MIT, his own secretary would lock herself in her office to have sessions with it, because it was non-judgmental.
And I'm sorry, this was in the '80s?
The '70s or '60s.
Wow.
And it turns out there's a large group of patients who would actually rather have a non-human, non-judgmental entity that remembers what they said last time and shows empathy verbally.
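To show just how simple the pattern matching being described was, here is a few-rule sketch in the spirit of ELIZA. It is an illustration, not Weizenbaum's original script, and the rules are invented for the example.

```python
# A few-rule sketch in the spirit of ELIZA: match a template, reflect the
# pronouns, and ask a follow-up question. Illustrative rules only.
import re

REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}
RULES = [
    (r"i hate (.*)", "Why do you say you hate {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(phrase: str) -> str:
    # Swap first- and second-person words so the reply points back at the speaker.
    return " ".join(REFLECT.get(word, word) for word in phrase.lower().split())

def respond(utterance: str) -> str:
    cleaned = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I hate my mother"))         # -> Why do you say you hate your mother?
print(respond("My mother fed me badly"))   # -> Tell me more about your mother fed you badly.
```

The striking thing, then and now, is how much rapport people report feeling toward even this level of mechanical reflection.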
And, you know, I wrote this book with Peter Lee, and Peter Lee made a big deal in the book about how GPT-4 was showing empathy, and in the book I argued with him that this was not that big a deal. I said, I remember from medical school being told that some of the most popular doctors are popular because they're very deep empaths, not necessarily the best doctors.
Right, House is your great example of that on TV.
Yeah. So I said, for certain things I might actually want that, but that's just me, and I could imagine a lot of, for example, cognitive behavioral therapy being delivered this way and being found acceptable by a subset of human beings.
Yeah, you might be right. It just wouldn't be for me, because I'd say, I'm speaking to some stupid program.
But if it's giving you insight into yourself, and it's based on the wisdom culled from millions of patients, who's to say it's worse? And it's certainly not judgmental, or at least a little bit less so.
So, Zach, you were born probably just after the first AI boom, you came of age intellectually and academically during the second, and now, in the mature part of your career, at the height of your esteem, you're riding the wave of this third version, which I don't think anybody would argue is going anywhere. As you look out over the next decade, and we'll start with medicine, what are you most excited about, and what are you most afraid of, with respect to AI?
Specifically with regard to medicine, what I'm most concerned about is how it could be used by the medical establishment to keep things the way they are, to pour concrete over current practices. And what I'm most excited about is alternative business models: young doctors who create businesses outside the mold of hospitals. Hospitals are very complex entities; some of the bigger ones make billions of dollars, but with very small margins, one to two percent, and when you have huge revenue but very small margins you're going to be very risk-averse and you're not going to want to change. So what I'm excited about is the opportunity for new businesses and new ways of delivering data-driven insights to patients. What I'm worried about is hospitals doing a bunch of information blocking, and regulations that make it harder for these new businesses to get created, because they don't want to be disrupted.
And in that case you're afraid of, Zach, can patients themselves work around the hospitals with these new, disruptive companies and say, look, we have the legal framework that says I own my data? As a patient, I own my data. And believe me, we know this in our practice: just because our patients own the data doesn't make it easy to get. There is no aspect of my practice that is more miserable and more inefficient than data acquisition from hospitals. It's actually comical.
It's absolutely comical. I'm willing to, and I do, pay hundreds of dollars to get the data for my patients with rare and undiagnosed diseases in this network extracted from the hospitals, because it's worth it to pay someone to do that extraction. But I'm telling you, it is doable.
So, because of that, are you confident that the legal framework giving patients their data, coupled with AI and these companies, will be a sufficient hedge against your biggest fear?
I think that, not unlike my 10-year robotic prostatectomy prediction, though I'm not as certain, I would give better than 50% odds that within the next 10 years there will be at least one company that figures out how to use that patient right of access, through dirty APIs, plus AI to clean it up, to provide decision support, with human doctors or paraprofessional health workers, and to create an alternative business. I'm convinced, because the demand is there, and I think you'll see companies that are even willing to put themselves at risk, by which I mean willing to take on medical risk: if they do better than a certain level of performance they get paid more, and if they do worse they don't get paid. I believe there will be companies in that space. That said, I don't want to underestimate the medical establishment's ability to squish threats, so we'll see.
Okay, now let's pivot to AI outside of medicine. Same question: over the next decade, and maybe we're not talking about self-awareness and Skynet, what are you most afraid of, and what are you most excited about?
What I'm most afraid of is a lot of the ills of social networks being magnified by the use of these AIs, further accelerating the cognitive chaos and vitriol that fill our social experiences on the net. I think they could be used to accelerate that, and that's my biggest fear.
I saw an article two weeks ago by an individual, I can't remember if they were currently in or formerly part of the FBI, who stated that they believed somewhere between 75 and 90 percent of quote-unquote individuals on social media were not, in fact, individuals. I don't know if you spend enough time on social media to have a point of view on that.
Unfortunately, I have to admit that my daughter, who is now 20, bought me a mug four years ago that says "Twitter addict" on it, so I spend enough time. I would not be surprised if some large fraction are bots, and it's going to get worse: it's going to be harder and harder to distinguish reality, and real human beings, from the rest. That's the real problem, because we are fundamentally social animals, and if we cannot understand our social context, and cannot trust it in most of our interactions, it's going to make us crazy, or I should say crazier. And my most positive view is that I think these tools can be used to expand the creative expression of all people. If you're a poor driver like me, you're going to be a better driver. If you're a lousy musician but have a great ear, you're going to be able to express yourself musically in ways you could not before. I think you're going to see filmmakers who were never meant to be filmmakers express themselves. I think human expression is going to be expanded, just as the printing press allowed. In fact, it's a good analogy, because the printing press also created a bunch of wars, because it allowed people to declare their opposition to the church and so on; it enabled a number of bad things, but it also allowed the expression of all of literature in ways that would not have been possible without it. So that's what I'm looking forward to: human expression and creativity.
And I can't imagine you haven't played with some of the picture-generation or music-generation capabilities of AI; if you haven't, I strongly recommend it. You're going to be amazed.
I have not. I'm ashamed, maybe, to admit that my interactions with AI are limited to GPT-4, and basically to problem solving: solve this problem for me. And by the way, I think I'm doing it at a very JV level; I could really up my game. Just before we started this podcast, I thought of a problem I've been asking my assistant to solve, because I don't have the time to solve it and I'm not even sure how I would; it would take me a long time, and it's actually pretty hard. And then I realized, oh my God, why am I not asking GPT-4 to do it? So I just started typing in the question, it's a bit of an elaborate question, and as soon as we're done with this podcast I'll probably go right back to it. But I haven't done anything creative with it. What I will say is: what does this mean for human greatness? Right now, if you look at a book written by someone who's won a Pulitzer Prize, you recognize something. I don't know if you read Sid Mukherjee; he's one of my favorite writers when it comes to writing about science and medicine, and when I read something Sid has written, I think to myself, there's a reason he is so special; he, almost alone, can do something we can't do. I've written a book; it doesn't matter, I could write a hundred books and I'll never write like Sid, and that's okay, I'm no worse a person than Sid, but he has a special gift I can appreciate, just as we can all appreciate watching an exceptional athlete or an exceptional artist or musician. Does it mean anything if that line becomes blurred?
That's the right question. And yes, Sid writes like poetry. Here's an answer, which I don't like, that I've heard many times: people say, Deep Blue beat Kasparov in chess, but chess is more popular than ever, even though we know the best chess players in the world are computers. That's one answer, and I don't like it at all. Because, let's say we created a Sid-GPT, and it wrote "Alzheimer's: The Second Greatest Malady," and it wrote it in full Sid style, but it was not Sid.
Yeah.
It was just as empathic: the family references, the walking along, the weaving of history with story with science. If it did that, and it was just a computer, how would you feel about it, Peter?
Zach, you are asking the jugular question. I would enjoy it, I think, just as much, but I don't know whom I would praise. I think maybe I have in me a weakness, or a tendency, to want to idolize. I'm not a religious person, so my idols aren't religious, but I love to see greatness. I love to look at someone who wrote something amazing and say, that amazes me. I love to look at the best driver in the history of Formula 1 and study everything about what made them so great. So I'm not sure what this means for that, and I don't know how it would change it.
I grew up in Switzerland, in Geneva, and even though I have an American accent, both my parents were from Poland; the reason I have an American accent is that I went to an international school with a lot of Americans. But all I read was whatever science fiction my dad could get me from England, so I'm a big science fiction fan. So let me go science fiction on you to answer this question. It's not going to be in 10 years, but it could be in 50: you'll have idols, and the idols will be, yes, Greg Gregorovich wrote a great novel, but you know, AI 521, their understanding of the human condition is wonderful; I cry when I read their novels. They'll just be part of the ecosystem; they'll be entities among us. Whether they are self-aware or not will become a philosophical question, and let's not go down that narrow path, that rabbit hole, where I start wondering whether Peter actually has consciousness, whether he has the same processes I do. We won't know that about these entities either, or maybe we will, but will it matter if they're just among us? They'll have brands, they'll have companies around them, they'll be superstars, and there will be, you know, Dr. Fubar from Kansas, trained on Ayurvedic medicine, the key person for alternative medicine: not a human, but we love what they do.
Okay, last question. How long until, at least from an intellectual perspective, we are immortal? If I died today, my children would no longer have access to my thoughts and musings. Will there be a point during my lifetime at which an AI can be trained to be identical to me, at least from a goalpost perspective, to the point where, after my death, my children could say, Dad, what should I do about this situation, and it would answer them the way I would have?
So, Peter, that's a great question, and people normally say "great question" because they're trying to buy themselves time to answer it. It's a great question because that was an early business plan generated shortly after GPT-4 came out. In fact, I was talking very briefly to Mark Cuban, and because he had seen GPT-4, I think he got trademarks or copyrights on his voice, all his work, and his likeness, so that someone could not create a Mark who responded in all the ways he does. And I'll tell you, it sounds crazy, but there's a company called Rewind.ai,
and I have it running right now. Everything that appears on my screen, it's recording; every sound it hears, it's recording. If characters appear on the screen, it OCRs them; if voice appears, it captures the voice. And then if I have a question, I can say, when did I speak with Peter Attia? Find it for me. Or, who was I talking to about AI and Alzheimer's? And it will find this video on a timeline.
Now, all that data: how many terabytes is that, Zach?
Amazingly little; it's just gigabytes.
How is that possible?
Because, first, it compresses it down in real time using Apple silicon, and second, you and I are old enough that we still think gigabytes are big: a standard Mac has a terabyte, which is a thousand gigabytes. You can compress audio immensely, and it's actually not taking video; it's taking multiple snapshots every time the screen changes by a certain amount.
So it's not trying to get video resolution per se.
No. And I can see a timeline, and it's quite remarkable. So that is, in my opinion, enough data that, with enough conversations like this, someone could create a pretty good approximation of at least the public Zach.
So then the next question is: is Zach willing to have Rewind, or a recording device, his phone, with him 24/7, in his private moments, in his intimate moments, when he's arguing with his wife, when he's upset at his kids, when he's having the most amazing experiences? If you think about the entire range of experiences we have, the good, the bad, the ugly, those are probably necessary if we want to formulate the essence of ourselves. Do you envision a day when people say, look, I'm willing to take the risks associated with that, and there are clear risks, in order to have this legacy, this data set to be turned into a legacy?
I actually think it's pretty creepy to come back from the dead to talk to your children. But I have other goals. Here's where I take it: we are being monitored all the time; we have iPhones, we have Alexa devices; I don't know what is actually being stored, or by whom, and people are going to use this data in ways we do or don't know about. I feel it helps us, the little guy, if we have our own copy, so we can say, actually, look, this is what I said then; that was taken out of context; and I can find it, I have an assistant, I can find exactly when and all the times I said it. I think that's good. I still think it's messing with your kids' heads to have you come back from the dead and give advice, even though they might be tempted; technically, I think it's going to be not that difficult.
Wow.
And again, speaking about Rewind, I have no stake in them; I think I might have paid them for a license to run it on my computer. But the microphone is always on, so when I'm talking to students in my office, it's taking that down too. I'm sure there are some moments in my life where I don't want to be on record, but big chunks of my life are actually being stored this way.
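For the curious, here is a minimal, hypothetical sketch of the capture-on-change idea described above: take a screenshot when the screen has changed enough, OCR it, and append the text to a searchable log. It assumes the mss and pytesseract libraries and is only an illustration of the general mechanism, not how Rewind actually works.

```python
# Minimal sketch of capture-on-change: screenshot when the screen changes
# enough, OCR it, append to a searchable text log. Illustration only.
import time
import numpy as np
import mss
import pytesseract
from PIL import Image

CHANGE_THRESHOLD = 0.05   # fraction of pixels that must differ to trigger a capture

def frames(interval: float = 2.0):
    with mss.mss() as screen:
        monitor = screen.monitors[1]                 # primary display
        while True:
            shot = screen.grab(monitor)
            yield Image.frombytes("RGB", shot.size, shot.rgb)
            time.sleep(interval)

def changed(prev: Image.Image, cur: Image.Image) -> bool:
    # Compare small grayscale thumbnails; cheap proxy for "the screen changed".
    a = np.asarray(prev.resize((160, 90)).convert("L"), dtype=np.int16)
    b = np.asarray(cur.resize((160, 90)).convert("L"), dtype=np.int16)
    return (np.abs(a - b) > 16).mean() > CHANGE_THRESHOLD

previous = None
with open("screen_log.txt", "a", encoding="utf-8") as log:   # runs until interrupted
    for frame in frames():
        if previous is None or changed(previous, frame):
            text = pytesseract.image_to_string(frame)
            log.write(f"\n--- {time.ctime()} ---\n{text}")
            log.flush()
        previous = frame
```

Storing OCRed text and occasional thumbnails, rather than continuous video, is what keeps the footprint in gigabytes rather than terabytes.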
Well, Zach, this has been a very interesting discussion. I've learned a lot, because I probably came into it with about the same level of knowledge as, maybe slightly more than, the average person, but clearly not much more, on the general principles of AI and its evolution. And if anything surprises me, and a lot does, nothing surprises me more than the time scale you've painted for the evolution within my particular field and your particular field, which is medicine. I had no clue we were getting this close to that level of intelligence.
So, Peter, if I were you, and this is not an offer, because I'm too busy, but you're a capable guy and you have a great network: if I were running the clinic you're running, I would take advantage of now. I would get those videos and those sounds, and get all my patients, with their consent of course, to be part of this, and actually follow their progress, not just the way they report it, but by their gait, by the way they look. You can do great things in what you're doing and advance the state of the art. You're asking who's going to do it; you're doing some interesting things, and you can be pushing the envelope, using these technologies as just another very smart, comprehensive assistant.
Zach, you've given me a lot to think about. I'm grateful for your time, and obviously for your insight and the years of dedication that have allowed us to be sitting here having this discussion.
Thank you very much; it was a great pleasure. Thank you for your time.
[Music]