because we are quite a team. Is this being recorded? Now we are. Yes.
Yeah. Yeah. Now we are. And then, Andrew, did you want to give a little background for your area of expertise and kind of how you got roped into working with us?
I became a bit of an uber geek. So my background is acute internal medicine. I did a residency, fellowship, and specialty in what they call in the United States critical care, acute internal medicine essentially. So I spent nine-plus years in the NHS tramping around the escalated wards, ICUs, any of the monitored wards, and then I decided to come to America, studied applied data science, became an uber geek, and I've been building these models with, you know, Margo and Jen now for... it's been over a year, right?
It's been quite a while. And I love working with them. It's fantastic.
And so we tackled something that's near and dear to my heart in this project, which is really the clinical deterioration of patients in the wards, so preventing them from an escalation of care. An escalation of care is defined as a transfer to the ICU, a rapid response being activated, a code blue, or mortality.
So there's a group of patients we obviously remove, right? Any patient that's pregnant, any patient that has mental illness, any patient that's under 18, or any patient in a monitored ward, because at that point it really doesn't make any sense to score their likelihood of clinical deterioration; they're already being monitored, and somebody has already determined that they may or may not deteriorate during this stay. So this initially started off as failure to rescue, right? If you have a patient in the hospital and they deteriorate and we don't respond, it's our failure; there's no question about it. Then it became a nursing intuition score, and now we're all the way to Nadia. So we've created an AI that is really quite sophisticated, but for many reasons I don't want to really go into how we built it, because I really want to focus on how we're going to clinically validate it. So this is more of a conversation, trying to find our best approach to clinically validating it, as opposed to a standard, because there is no standard for clinically validating these types of models.
So I can tell you what we did in other projects and see if the team feels that's a good approach. The first thing is we're going to have to do chart reviews, and quite a few: chart reviews of patients that did not clinically deteriorate, and chart reviews of patients that we know clinically deteriorated. So we're looking at the rapid response team's monthly output of patients that have had a rapid response, a code blue, or a mortality, and I think even a code gray could probably fall under there somewhere.
I don't know. I'll leave that to the team to figure out. Like if somebody's cognitively declining, should we consider that as deterioration?
I don't know. I'd say yes, but maybe that's not the criteria. Vascular health, yeah. I guess it depends why. If it's vascular, it'd be easier to demonstrate then.
I'm assuming if you're not breathing, that's probably a good indication you're going to clinically deteriorate, or if your heart's not functioning properly. You know, so I'd just put it on the list; I'm a checklist kind of doc, right? Everything I build is clinical decision support at the bedside. I'm leaving all the brilliant publishing and the evidence-based medicine to Margo and Jen.
For me, it's using the EHR to figure out a way we can identify these patients prior to them clinically deteriorating. So if you think of the AI as a time machine: we can determine when a person clinically deteriorated after the fact, but our goal is really to determine it ahead of time.
We have to predict the likelihood and then intervene prior to the escalation to an ICU, et cetera. Did I explain that clearly? Did I get it right? I think, Andrew, you were spot on. Anupa, it's really about what Andrew was saying: catching them before they show any symptoms or any manifestations of decompensation.
And so I'd really like her to have REDCap experience and knowledge, because that's what we used in our other study, a completely different study, where we had 25 clinicians doing chart reviews. And then we would measure their intuition, their likelihood that this patient would deteriorate. And then it's a double blind.
So you don't even see what the artificial intelligence, or in this case advanced digital intelligence, says. We only look at the chart and get a clinical determination, a likelihood of clinical deterioration. Then, when that's finished, using the reviewers, yourself, Anupa, or whoever we get into that clinical group, as the gold standard, we measure against the advanced digital intelligence. I don't think we should ever be in a world where the advanced digital intelligence is flipping the script and telling the clinician who is clinically deteriorating.
You know, and that's just my opinion, Jen, Margo, feel free to pipe in, but I was really thinking that it would be outstanding to have a clinician with your background and experience looking at the charts and saying, okay, out of these, say, 100 charts, I determined that 30 of them were clinically deteriorating, and then measure that against the model, as opposed to the other way around, where you're looking at the model and asking whether it fits these charts. So we're using the clinician, the human being, as the gold standard. And then, depending on the results, we would either have to retrain the model, maybe let her mature if she's not accurate, or, if she's too accurate, then we have to think, wow, maybe she's too intelligent, or maybe she knows too much. Or, if there is a correlation between the clinician and the model, then we can say, okay, this is a gold-standard tool that we could use in clinical practice.
How does that approach sound? Does it sound feasible? Anyone have other ideas? I don't know; that's just what we used in another project, and it seemed to have worked really well.
But in advanced care management, that other project, you have so many chefs and one kitchen. So it's a very difficult project, because most of those clinicians put in orders, but they don't really stay at the bedside taking care of these patients for their whole stay.
And that's why I love doing this project, because nurses are really the people who care for these patients their whole stay. And they have an idea and an intuition, based on experience, that maybe a pulmonologist or a cardiologist who just pops in for the RRT and then leaves doesn't have. So I think I did a pretty good job of explaining what I was thinking. But that was just my thought, this binomial direction. It was just a thought.
I think we could start there and then open up the discussion to what any of you think. I like it. I just stuck it in the chat. I found this in Medical Informatics; it was literally just published in September, like just now.
But it's kind of a nice model, I think, to help ground the process for validation. That's fantastic. I love it.
And there is a, it's based off the Duke model. Let me try to find it. I have so many files.
I'll give you guys the PDF that I pulled this from. But it's great. I was just flipping through it.
It's got a whole section on verification, validation, and certification of AI. And I have not read this part in detail yet, but let me save it. Did you have a chance to see that policy that Craig put out? The AI policy? Yeah, compliance policy.
So what's nice is they cover monitoring. You know, and system design. And also going through an AI council. So we'll have to make that as part of our... We can go through our nursing AI council.
Yeah, I think that would be great. You know, we don't have any third parties. We've already done the patents.
So all of that is really covered. There's nothing in there that really stood out as, oh, wow, we have to go back to the drawing board. Sorry, Anupa, what are your questions? So I wanted to really start from the problem question, like, you know, what is it that we are trying to resolve? She needs a SMART goal, guys, so she can do the validation. Yeah, she needs to write a problem statement.
Sure. Like that we're demonstrating that there was an intervention that took place, right? So we're implementing a clinical decision support tool, right?
And then that helped achieve this gold standard, which was over 80% accuracy, right? Yes. Okay, so you need to define what the gold-standard accuracy is. But so, I mean...
I guess landing on the problem, right? So, I mean, I think we've solved a few problems here with our clinical decision support tool, where we've had many, many surprises along the way that we can actually solve multiple things, right? So, we're looking at alarm nuisance, right?
So that's a burden on nurses, right? We're also looking at exhausting our frontline workers by providing a step-up in care once the symptoms manifest, whereas this is allowing us to intervene before they actually become symptomatic. Right. So when a patient becomes symptomatic, there are multiple resources that we need to pull in at that point.
Right. So there are multiple... No, no, no, please finish your thought. No, no, no, I'm just going through a list of the problems that we have solved. If you look at Andrew's model, Anupa, Andrew put a nice operating model here.
And if you look here, you know, the research question really was: can we create a decision-making tool that is equivalent or superior to decision-making by a provider or a nurse? And so that was kind of the original question, right?
Like, that's where the whole deterioration index score came from: can we create a decision-making tool that helps providers make better decisions faster, earlier, more concisely, compared to not having that resource? And why is that? It's because, you know, today we still have significant rates of sepsis and other types of acute illness that are many times even hospital-generated, and that are really kind of needless.
And if we had earlier intervention and more preventive strategies, then maybe we could mitigate, you know, the number one problem. I think for your problem statement you could even stick with sepsis, because it helps it go from being too vast, even though Nadia could probably be applied many different ways. That's like a SMART goal. It's part of the nursing strategic plan. You can write about that all day long for subsequent assignments and stuff.
It's aligned with the value-based care initiative. And the sepsis piece, I mean, you can say that Nadia is responding to several problems. Margo, if you want her to go in this value-based care direction, our abstract is a little bit more around the burden on nurses, and retaining nurses and preventing nurse burnout. But either direction, I think, works.
Yeah. But the validation, the validation process won't, like your intervention has to match. It has to match with that. It does.
It does. And for the intervention, it would be more along those lines, with the sepsis, for sure. Yeah. And that's great. I mean, you know, obviously sepsis is the child of deterioration, which is the parent.
I mean, you can't separate them, but I'm thinking, instead of looking at NEWS2, MEWS, and the Epic Deterioration Index, which are the numbers that... and let me step back. This whole project started with me getting an assignment where I was to look at the Epic Deterioration Index and figure out, is this good enough for clinical care? My mandate from the CMIO was, Andrew, go turn it on and see if it works.
So I turned it on. We looked at the outcomes and the false positives, which were killing the crisis care, now rapid response, team. They were just responding to all these false positives when nothing was going on. And then the specificity of the model was awful. I mean, the accuracy level was at 54%. So it's a coin toss, right?
You're better off just guessing. And then we went on the road where we were like, well, nurses have better intuition with deterioration. And we built a nursing intuition score.
And then we built Nadia. So we could easily measure... in applied data science, you always have to measure against other models to see how much better you perform, because our goal is to detect earlier than any other model, right? The more lead time you have, the more likely you are to be able to intervene, you know, draw your two labs, your lactates, and then fluids, antibiotics or antivirals, antifungals, and prevent the onset of sepsis altogether, right?
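The lead-time idea here is simple arithmetic: the earlier a model first alerts before the escalation event, the more time there is to intervene. A minimal sketch, with entirely hypothetical timestamps and first-alert times for Nadia and a comparator score:

```python
from datetime import datetime

def lead_time_hours(first_alert, event):
    """Hours of warning between a model's first alert and the escalation event."""
    return (event - first_alert).total_seconds() / 3600.0

# Hypothetical timestamps for one encounter (RRT activation at 18:00).
event = datetime(2024, 3, 1, 18, 0)
first_alerts = {
    "Nadia": datetime(2024, 3, 1, 8, 0),    # alerted 10 h ahead
    "NEWS2": datetime(2024, 3, 1, 16, 0),   # alerted 2 h ahead
}

lead_times = {model: lead_time_hours(t, event) for model, t in first_alerts.items()}
print(lead_times)                           # {'Nadia': 10.0, 'NEWS2': 2.0}
print(max(lead_times, key=lead_times.get))  # Nadia
```

Averaging this per-encounter lead time across a review cohort would give the "how much earlier" comparison between models.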
So using sepsis as the child for the validation is completely feasible and would be acceptable. So instead of using, like, NEWS2, MEWS, and the Epic Deterioration Index, we can use qSOFA, SIRS, and the Epic Sepsis Model version 2. Sorry, I didn't put the 2 in the text message, but they just released their version 2 now. And so it gives us something to measure against. So I can do an output from the model that will give us the qSOFA score, the SIRS score, and the sepsis model version 2 score. But more importantly, it will give us the number of false positives that were generated.
So every alert that was generated, and how many of those alerts were actual events, as opposed to just a response to a false alarm. Andrew, is there any downside to being... Because right now I'm wearing my professor hat when I'm thinking about Anupa and her question. If you hone it down and get it concise, it makes it quite measurable and feasible.
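Counting alerts against confirmed events is the core of the false-positive comparison being described. A small sketch, where the patient IDs and the two alert sets (one for Nadia, one for a comparator such as the Epic Sepsis Model v2) are made up for illustration:

```python
def alert_stats(alerted, escalated):
    """Compare a model's set of alerted patients against true escalations."""
    tp = len(alerted & escalated)          # alerts that preceded a real event
    fp = len(alerted - escalated)          # nuisance alerts
    ppv = tp / len(alerted) if alerted else 0.0
    return tp, fp, round(ppv, 2)

# Hypothetical one-month sets of patient IDs.
escalated = {"p1", "p4", "p7"}                  # RRT / code blue / ICU transfer
nadia = {"p1", "p4", "p7", "p9"}                # model alerts
esm2 = {"p1", "p2", "p3", "p5", "p7", "p8"}     # comparator alerts

print(alert_stats(nadia, escalated))  # (3, 1, 0.75)
print(alert_stats(esm2, escalated))   # (2, 4, 0.33)
```

The false-positive count (second value) is the nuisance-alarm burden on the rapid response team; the positive predictive value makes the two models directly comparable.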
But what would that do for the model itself? Would that just say, well, it's only been validated on the one tool? Or is it better that we start there with septic patients' charts, but then definitely expand, because I'm sure we'll do several chart reviews and they won't all end up ruling in for sepsis.
So then, if that's the case, we could use two of the other tools that are more general deterioration scores. I just didn't want to get so specific with our particular validation that we could then only say it's been demonstrated as gold standard for sepsis. Absolutely. Because she can detect any factor that would lead to clinical deterioration; sepsis is just one of hundreds.
Yeah, it's just one. But I could see her, like, you grab a digital chart, you open it up, you do your analysis, you look at it and go, okay, this person got diagnosed with sepsis.
So then you'd compare that algorithm with what Nadia would say. And then, if they ruled out for sepsis, you would use the deterioration index score or MEWS or one of these others. I don't know. I mean, you know, consecutive charts.
I'm just trying to think of the methodology, because we don't want to appear to be cherry-picking charts, necessarily. I mean, we have the narrative from the rapid response team as our measurement of... There you go.
Yes, there you go. Like if they call a rapid response, then those are all the files that we, that's how we would. And in that they also include any code blue.
They also include any escalation of care to an ICU. So transfer of a patient. Yes.
Okay. And now that I'm thinking of it, I actually agree with you a thousand percent, Margo. I mean, if we only look at sepsis, then we're kind of doing a disservice to the tool.
That's right. The tool is really measuring, like, an early warning system, right? It's completely abstracted on purpose.
Like, we can focus in on one disease path, but it's really designed to be a catch-all for any of them. Yeah. And I think we're really missing the main point there, because now there are at least three or four tools that I know of that can try to detect sepsis early. So why don't we do more of a random... why does it have to be sepsis-specific? Well, we're not. I'm just saying for Anupa's project, she can specifically say for sepsis, but for the methodology, it'll be all the patients that get a rapid response called on them.
They had a code blue. Or they had, you know... so we'll be flipping through all of those charts. I mean, it doesn't have to be so specific, Margo, for her problem, but we do have to keep her from needing to do a 5,000-chart review as opposed to, like, a hundred.
So we don't need a large volume to train. We could do an equal number of patients that did not experience an escalation of care and patients that did. So there will be a mixture of patients in the data set, but in equal amounts. So you're not doing thousands.
Like, could it just be one month of data? The same as the narrative, and however many patients that is. Yeah.
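The balanced review set just described (equal numbers of escalation and no-escalation charts drawn from one month) can be sketched as a simple random sample per arm. All chart IDs and counts below are hypothetical:

```python
import random

def balanced_review_sample(escalated_charts, control_charts, n_per_arm, seed=7):
    """Draw equal numbers of escalation and no-escalation charts for review."""
    if n_per_arm > min(len(escalated_charts), len(control_charts)):
        raise ValueError("not enough charts in one arm")
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return (rng.sample(escalated_charts, n_per_arm),
            rng.sample(control_charts, n_per_arm))

# Hypothetical one-month census.
cases = [f"esc_{i}" for i in range(40)]       # had an escalation of care
controls = [f"ctl_{i}" for i in range(400)]   # same month, no escalation

case_sample, control_sample = balanced_review_sample(cases, controls, 30)
print(len(case_sample), len(control_sample))  # 30 30
```

Sampling rather than hand-picking also answers the cherry-picking concern raised earlier: the selection rule is explicit and reproducible.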
And Andrew, for the other algorithm that you have IRB approval for: have you validated that one? Is that validation process done? Because we could give that data to the biostatistician and they could run a power analysis. Yeah, so we did that on the other project.
So we did the first two rounds, the first two tranches, of validation. The case reviews came back, and we were 93% accurate on the model. So that's when we went to the statisticians and said, okay, do a power analysis and figure out whether there's something we're doing wrong, and it turned out we were actually quite close to what their models said, the n value, the number of studies. And, you see, the difference in that study is there wasn't a measurement capability, right? We couldn't measure a transfer to an ICU. We couldn't measure an RRT. That study had a patient criterion that had never been defined in medicine before.
So we didn't have anything to measure it against except the clinicians, right? That's why we had to have the 25 specialists who would look at the chart and say, yeah, I agree that this patient is seriously ill, right? There wasn't an actual measurement. Yeah, here we have actual measurements.
We have an activation of an RRT, we have a transfer to an ICU, we have a mortality, we have a code blue, we have a code gray. So we can measure against data, and we can create a cohort of patients of similar demographics, but ones that experienced an escalation and ones that didn't, and then measure the two together. Kind of like a double-blind test, I'm thinking.
More like a pharmaceutical-grade test, which is a bit heavier, but a lot more stringent. You know, where Anupa has no idea what the model is doing at all, so there's no way she can be biased in the study. Right. That's why I didn't even want to describe how she learns.
How she learns, because automatically that would be like, oh, a nursing-grade bias. Yeah, it is. You can actually say that you did not know, Anupa, right? How she is learning. So, exactly, Andrew, I think that's a really, really good point. There's no bias there. We did that specifically with the 25 physicians who were doing the case reviews in the other study.
We literally told them nothing. We asked them, like, five questions in REDCap. And we would say, you know, what is the likelihood that this patient will die in the next 12 months? And they would say yes or no. They wouldn't even have to give a reason why.
Because we were trying to get their intuition, not their clinical factors leading to it. And so I'm thinking in REDCap maybe we could write it in such an abstracted way, and then get Anupa and a few other nurses to just go into REDCap, do a chart review, and say, looking at this chart, what is the likelihood this patient is going to deteriorate? Right? And then some follow-up questions, like, what are the vital signs that you look at to make a determination? What are the... I don't know, something like that.
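Once the blinded reviewers have labeled the charts, measuring how well their intuition lines up with the model is a two-rater agreement problem; Cohen's kappa is one standard chance-corrected statistic for it. A self-contained sketch with made-up labels (1 = "will deteriorate"):

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                # each rater's positive rate
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)           # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical per-chart labels: clinician gold standard vs. model prediction.
clinician = [1, 0, 1, 1, 0, 0, 1, 0]
model     = [1, 0, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(clinician, model), 2))  # 0.5
```

Raw percent agreement alone would overstate concordance when most charts are non-deteriorating, which is why the chance correction matters for this cohort design.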
We can design that through the IRB. But let me find out if Brad, and I'm sure he wouldn't have a problem with it, will let me pass along the outline of what they submitted. Because we submitted for the IRB, and then we had to get the REDCap approved.
And every time you change it, you have to get a re-approval in the next few weeks. So maybe that can save us some time. That would be great.
I don't know if that's helpful. Does that make sense? Yes.
Yeah. Okay, great. And there's still not really a standardized way of doing it. So let me talk to that image that I put in the chat.
So let me just share my screen and I'll open up that image. So this is something fairly new that I've been dealing with, with the CMIO, Sean: how are we going to get these models from research into clinical integration? I think this should really be a part of what we're also looking at in this study, right? Because this would technically be the first time Cedars-Sinai has ever taken a nursing data science model and pushed it through into our actual production environment. So if you look at everything in the pink, we would say this is a research project. And we have our question, which Anupa was asking us about, and then the model design and the model build.
And then we're going to go into silent mode. So once we do this validation, it's going to sit in silent mode until the validation is completed and we publish our paper, evidence-based medicine, and then it's going to go into the whole clinical integration piece.
But what will be amazing is that this is actually going to be the very first clinically designed machine learning model that Cedars-Sinai has ever pushed into production in Epic, from concept to completion. And what's nice is our timing couldn't be more perfect, because they just released Nebula, cloud services, et cetera.
So we would put Nadia into Nebula, and then she should be available to us, and in the future she should be available to any other Epic institution that pays us licensing and royalties. And then there's the whole clinical workflow piece, which we also have to figure out. What do we do when Nadia determines that a patient is clinically deteriorating? What is the clinical workflow we would put in place at that point? And I think that's also part of this discussion, because, Anupa, you would be right there with the nurses, at the stage where they give you that information and say, okay, what would you do now?
Like, what would your ideal world be? What navigator sheet would you get? What would your checklist be? Do you do like two labs from two different locations, lactates every three hours?
Like, what is your ideal world? Not our protocol, but what is the nursing ideal? Because this is normally driven by physicians who, by the way, don't have any idea, right? They have what they learned at med school. That's it.
They're not as skilled as the nurses on the floor. So what would the ideal protocol be for a deteriorating patient, and what do you do at that point if you have a suspicion of onset of sepsis or something like that? And that's where I think we'd break off to whatever the disease path is. Like, our example could be sepsis.
Yeah, but it could be a multitude of things. Yeah, it could be. And again, they're pre-symptomatic.
They're not showing these signs yet. So that has to be also within our frame of thought when we're thinking about what you would do next. It's like, oh, this patient looks fine.
However, this is a prediction, right? And what would you do next? What does that look like?
Do you know the first author on that model, Nicolita? I'm trying to think of what her last name is; it's got "mou" at the end of it. I just submitted another paper that she was on that talks about the role of AI and these types of deterioration models in critical care settings, and stewarding...
what that process of stewarding these algorithms that are coming down the pike looks like. I just dropped it in the chat. The thing is that Epic hasn't figured this stuff out either.
You know, a lot of the... So fluid. Yeah.
And for them, I mean, it's such a global problem, right? So I created an application a while back on mobile devices that could do this, right?
So it was a deterioration index list. So this is, you know, our threshold; at that time, we were looking at 62. And so anything before the threshold would give you lead time to intervene.
So you could intervene, like in this case, at 55. The threshold would really be set by your paper, your outcome, your validation. But this was just looking at... this is an app where there's a stream of patients, and they would then have a real-time dashboard at the nursing station, on the floor, on your phone or an iPad. So you could use Haiku, Canto, all those kinds of tools. And basically, from there, when you get to that point, you could just click on this.
It would bring you to the charge nurse, and if she's at the bedside you can activate your rapid response, the code blue kind of thing, you know, or even just do an SBAR between the charge nurse and maybe the attending physician, just as a sanity check. Yeah. So it's just a way of saying, okay, you've got this tool that can do a fairly accurate prediction of deterioration. Now, what do you do with it, right? Because if it's just a number that you put in flowsheets, then what does that mean? We need an actual application around it.
And so that's what this allowed us to do. And then you could go into really detailed chart reviews, but more from an analytical perspective. Not only are you looking at the score, but you can also start to do things like trending, you know, respiratory rate, like 21 to 24. And you can start putting information like this in front of the nurses: if this is an adult, these are the things that you have to look at. And then you can see trends over time. And then you can figure out, for this patient in particular, you can see cardiac distress, rapid... you know, respiratory distress.
And then you see cognitive decline, and you see that they spike, so they have a temperature; it's probably a viral infection, right? And so I just put my own information in. As you can see, it's dated; I'm 44, and I was 42 at the time. But this is a way to turn it into an application: how would you actually apply it at the bedside? So that's what this deployment piece is all about, right? So once we get into the silent evaluation mode, we're then measuring it over time to see if the model is waning or waxing. Is it becoming more accurate?
Is it becoming less accurate? And that's the kind of stuff that we have to do for the AI policy. The new one that's out, that's what's required for that. So we have to put that also into this validation process.
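The waning-or-waxing check during silent mode is essentially a rolling accuracy over recently adjudicated predictions. A minimal sketch, where the window size, alarm threshold, and the prediction/outcome pairs are all hypothetical:

```python
from collections import deque

class SilentModeMonitor:
    """Rolling accuracy over the last `window` adjudicated predictions."""
    def __init__(self, window=100, alarm_below=0.85):
        self.outcomes = deque(maxlen=window)  # True where prediction matched outcome
        self.alarm_below = alarm_below

    def record(self, predicted, actual):
        self.outcomes.append(predicted == actual)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def waning(self):
        acc = self.accuracy()
        return acc is not None and acc < self.alarm_below

mon = SilentModeMonitor(window=4, alarm_below=0.85)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 1)]:  # hypothetical stream
    mon.record(pred, actual)
print(mon.accuracy(), mon.waning())  # 0.75 True
```

Logging this rolling figure over time is one straightforward way to produce the ongoing-monitoring evidence the AI compliance policy asks for.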
I like that you printed that out, Andrew. That made the printer. Yeah, you know, I had to read through it really carefully. You know why? I know, and I do that too. There's something... I actually need to use my highlighter. I'm like, I'm printing this one out. You understand? I wanted to see if there was any fine print they came up with. Yes, there's something about using the highlighter, actually writing it manually, and then, okay, it's for real. Hey, you know what's crazy? I still do Moleskine notebooks, right? I've been using them ever since I was a kid. This is mine. I love it. And my fellows, they do their stuff on their phones, and I'm like, what if your phone doesn't work? Do you just forget everything? But no, I know, right? I keep mine. You can't really see, but these are from Epic, so I have a bunch of these, and I use them front and back. It's the way that it actually sinks in, and it just stays.
It feels so real when I write it down. It's a much different cognitive process, at least for me. Sometimes I'll make, I told Margo, I'm actually making like some charts, like manually and writing them out, the neurons.
Or kinesthetic fingers. In my ANN. I have to draw out my ANN in it, because there's some funky language in this policy, so I wanted to be very precise.
Oh, okay. You know, I should print it out also, because I got the email from Craig, but I have not printed it out. And I find a lot of the language in school, when I'm in school right now, as well as in the literature and in some of our conversations, sometimes they're analogous, but sometimes they're slightly different, like the tokens, by the way, that you texted me about. That's a new one. I shared that policy with Anupa about a week or two ago. There are about five different ways to say the same thing. I'm so deeply impressed by you, Jen. Even my MIT data scientists don't know half the stuff you know. It's great, I love it, it's fantastic. It actually feels good to gain a little bit more of that deep learning, so that I can develop some of that thinking and understanding of how she's learned and become more sophisticated, and actually having a visual is very, very helpful. But anyway, sorry to get us off track about tokens. No problem.
I'm looking at page six of this, and this is something I think that maybe we should put in our REDCap, or in the study, or something. It talks about transparency. So we have to show what the model is actually doing, per se, right? The stuff you're writing down, Jen, but also we have to write out what the actual factors are that are leading to that determination. There's also bias and fairness. What's nice is she's not created with any bias; everything she's learned, she's learned from a computer. There's no human being, so there's no bias, no factors, by design, right? And then there's also this stuff about the accountability of the model. So the work that you're doing is so critically important in this project, because without that human gold-standard measurement, blind from the tool, we would never be able to say it's completely unbiased, completely removed from the validation process. So that's why I was recommending that we not actually explain in any way to Anupa how we built Nadia, because we need to be able to say, okay, the validation was done completely blind. The validation team literally had no idea how this model was built, what it does, nothing.
They just did their own clinical evaluation. Does that sound fair? Anupa, do you have any questions at all about any of this?
Don't hesitate to ask, really. Yeah. It's quite novel. Go ahead.
My question is, when we implement, are we going to implement only at Cedars-Sinai Medical Center, or across the whole system? You mean in Epic? No, Cedars-Sinai, Marina del Rey, or the Cedars-Sinai Medical Network. Oh, okay. Yeah, if we get the validation... And if things turn out to be the null hypothesis, which is, you know, that we say there is no significant difference, should I go that way? Or should I just say that I'm going to look at sepsis, do the chart review, and see that it is going to help predict, like, ten hours before, so that the nurses can be there, or the code blue team can be there, to prevent it, and we can order all the same.
I think that's going to be the challenge, right? So you're going to report on something that we prevented from happening. So it's really a minority report, right? You're preventing it from occurring prior to it occurring.
So your report would be, had we not intervened, X could have happened. Since we intervened, Y happened. And it's going to be challenging.
It's not as straightforward as a confirmation. We can speak to national averages, and we can speak to data from other university-based, like the PSIs. It's a great question, because you've hit on exactly the difference between our study and other studies.
We're not reactive, right? So unlike a MUSE or a NEWS2 or an Epic DI, which are BPAs, true-or-falses, right? Those are a confirmation of state, not a prediction prior to the state. So if you get a NEWS alert, it's too late. It's like, you know, you got an email after you already won a million dollars, right? It's not going to tell you that you could win a million dollars; it's just confirming something that's a fact. Whereas this model is really predicting. Right.
Yeah. And it's having the independent thought process. So have you guys ever seen the movie Kingsman?
Yeah. Have you seen the movie Kingsman? Okay.
So there's this English. He's like a special agent. And I love his office.
His office is full of all these English newspaper front pages. And on the front page, it's like, the Queen changed the color of her hat, Lady Diana went for a tea. And then he would say, on that day I found a bomb in wherever and detonated it, and on this day, you know, we did such-and-such. So there was this major avoidance of a catastrophe, but if you look at the news, it's such drab and boring news, because they had done all these major interventions.
So like his whole wall is like papered full of like boring front pages because those are all on days that they had these major interventions that took place. Like we want Cedars-Sinai to be a boring place. It's preventative in that way.
It's predicting and preventing. It's a preventative clinical decision support tool. I mean, it's predictive.
It's predictive. We would eradicate the rapid response team if this really, truly played out and worked the way it was supposed to. We wouldn't need it any longer. You'd have zero fatigue or burnout, right?
Because they would show up only when it's necessary. And that's it. And so I think if you had to write a purpose, that's probably the purpose, right?
Reduce the amount of, you know, the long fatigue of being, you know, first responders in some way. But, I don't know. So I think that the measurement of the three models against our model would probably be good. You know, what does the NEWS say, what does the MUSE say, what does Epic's deterioration index say, and then what does Nadia say? I like that methodology best, because I'm just thinking that, you know, one day when I'm old and gray, I might be a CMIO, and I'd be sitting there saying, why should I put this tool in the hospital? And your argument would be, what do you have today?
Oh, NEWS. Well, this is 50% better than NEWS. This is 100% better than Epic. I'm just thinking that measuring those together makes a good argument for you. This is why I was telling Jen there are 500 people in this AI class at Harvard.
Because... And 80% of them are clinicians, because they have to make decisions on this AI that everybody's trying to sell them. And just because they say it's amazing... Just because AI is written in your name doesn't mean you're good at it.
Yeah. I don't know if you've had the pleasure of seeing Jen's husband's clinical decision support presentation, but it was one of the first things I ever saw when I started at Cedars. And it was going through the life of a cardiologist.
Like you have 6,000 studies a week that are going on. You have like 76 pages of, you know, clinical care practice. Like, how are you going to keep all this stuff in your head? It's not, right?
And so building something that helps people make those kind of decisions is so critical. And then also becomes a teaching tool, which is the great stuff that Margot added. So I think the REDCap, if we could start to think about designing it in that way, I think it would be very helpful.
Oh, yeah. Yeah, we can do that. And Anupa, depending on how many methods and how many variables we're going to pull, we can actually put in a ticket to have one of the REDCap specialists build this for us. Because you guys haven't seen them do that yet. I mean, Anupa and her team have homegrown everything.
But that might be something good for you to do, Anupa, to learn about that resource. But you know enough about REDCap that you can tweak it. You can give recommendations.
And then that way we will have a database that we can continue to, because we're going to, I would only imagine we would be spot checking things every six months. We might be, you know, doing X amount of chart review to make sure that we're not seeing drift or what is it? Delirium, Jennifer?
The hallucinations. Hallucinations. I went to a delirium talk yesterday. No, but delirium is close enough. We don't want hallucinations, because that's when we have to really get overly involved, and we want our process to be as efficient, ethical, and trustworthy as possible. Delirium is a good word. I'm sure they use delirium too, Margo. There are about five words for the same thing. I believe it. I mean, I'm like, wait, that's the same as this and the same as this. I'm like, okay.
I know my thoughts on neurosciences. I'm like, oh, my goodness. Let's use one word. Cardiac. Yes, yes, hallucinations.
So let me share my screen for a sec. I'm just going to show some of the data that we're seeing now, right? So this is a patient.
I'm sharing PHI, so just be aware. So the patient, you know, encounter date, encounter time, the respiratory rate, oxygen saturation, are they on an oxygen device, systolic blood pressure, pulse, the consciousness.
So this is Glasgow Coma Scale. And then temperature. And this is the score from the NEWS model, right?
So the NEWS2 model will score a number, and then we change it into percentage values so we can measure it against our model, because our model gives a percentage and not a score-based value. So five out of 20 is a 25 percent likelihood of deterioration. As you can see, the patient will wax and wane in their percentage value.
But this is just a very crude measurement of the NEWS2, just to give you an idea. Our model uses thousands of parameters, not just the 10 that are in that model. But it gives us a way to measure against it. So I don't know if that's going to be helpful.
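The rescaling described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual pipeline code; the maximum aggregate NEWS2 score of 20 comes from the published scale, and the function name is made up.

```python
def news2_to_percent(score: int, max_score: int = 20) -> float:
    """Rescale an aggregate NEWS2 score (0-20 on the published scale)
    onto the 0-100 percentage scale the model outputs, so the two can
    be compared side by side."""
    if not 0 <= score <= max_score:
        raise ValueError(f"score must be in [0, {max_score}]")
    return 100.0 * score / max_score
```

On this crude scale, a NEWS2 of 5 maps to a 25 percent likelihood, matching the example in the conversation.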
I think what I'll do is put the data set together that we can utilize for the validation, and then put in the NEWS2, the MUSE, the Epic Deterioration Index number. That'd be very helpful. Better not put Nadia's number in there until we get validation from Anupa, and then measure it afterwards.
Does that sound okay? I'm just thinking for the REDCap, we could put the NEWS2 score was 5, the MUSE score was 3. And then we could put the other two in later to see how they measure up. Yeah, that would be very helpful.
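Once the blinded chart review yields a gold-standard label per encounter, the head-to-head comparison discussed here could use a common discrimination metric such as AUROC, computed the same way for each tool regardless of whether it outputs a score or a percentage. The sketch below is purely hypothetical (the labels and score columns are made up), using the rank-sum identity for AUROC.

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U identity:
    the fraction of (deteriorated, stable) pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both outcome classes to compute AUROC")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative only: 1 = deteriorated per the blinded review, 0 = stable,
# with one score column per tool (here just two of the four discussed).
labels = [1, 0, 1, 0, 0, 1]
tools = {
    "NEWS2": [0.25, 0.30, 0.20, 0.15, 0.05, 0.10],
    "Nadia": [0.80, 0.05, 0.70, 0.20, 0.10, 0.90],
}
ranking = {name: auroc(labels, s) for name, s in tools.items()}
```

Because AUROC depends only on the ranking of scores, the percentage outputs and raw score outputs can be compared on the same footing, which fits the "measure all the models against each other" plan.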
Yeah, I'm just thinking because they would be very blind. I have no idea. Yeah, and the better we can line up those variables, the better. Because I think in the discussion piece, when we're describing how they're different, you're like, this one has 10 variables, and they're, like, cardiovascular variables, and then this one has maybe...
30 variables or 50 variables. And then we can describe how many variables we are pulling from each system, or, you know, think about how we can further demonstrate how comprehensive it is in relation to some of these. It's also going to kind of expose, I feel, for people that hang their hat on some of these scores, that there really isn't that much in there. Like, we make decisions as clinicians sometimes like we're looking at the Holy Grail. I was just talking about this: they took the black box warning off of statin use in pregnancy now, because we have women who've had bypass surgery who were removing their statin while pregnant. And it's like, oh my gosh, you would never want to do that, because it was based on 20 patients who were on pravastatin 30 years ago.
You know, we just make major decisions off of sometimes not really robust data, because it's called a deterioration score. It's got a really good name. Anupa, do you have any other questions at all?
Anything just to kind of get you to feel supported or get this off the ground for you? Who to contact? Edward is wonderful.
He is. Thank you. I just, you know, I'll go back to this one and try to clearly write my problem here, and then I also have to maybe put every one of your names, because it's asking who is helping with this big project. So as my stakeholders for my part, I'll put all three of your names. And then whoever your chair is, like your primary mentor at school. Yes. Because they'll probably be reading your paper and providing edits and stuff like that.
Yes, sure. So then it's asking for national initiatives related to the project. I mean, like, what is existing for this? And from today's meeting I understand it's like the deterioration score and the NEWS score that we have, and, you know, that's nothing like what we are trying to implement, because it is more than accuracy. Yes. And so I think the next thing I will try to do is my piece, and is it okay to send it back to you three, and then you can decide if I'm going in the right direction? We'll flesh out your SMART goal. Give it your best stab, and then maybe you and Jen can bounce it around, and then we can all give you feedback on your SMART goal. Yeah, for sure. She needs to develop a PICO statement. You're doing PICO, right?
You're doing PICO, Anupa, or not? They still do that? Not yet. Not yet.
They don't want you to come up with a PICO? I mean, definitely I have to, but... I've got a fellow waiting.
I've got to run. Bye. Thanks, Andrew.
Thank you, Andrew. We'll talk to you later. You know, Anupa, you may want to... You're going to do your SMART goal, but you may want to start thinking about...
No, she's going to work ahead. Remember what we were talking about, Anupa? I know. Because they're like, this spring, we're just going to... You're going to flip it. But while you're documenting, it's fine, because we need you to flip it, but you can start thinking about your PICO, even while flipping it, so that you don't have to reinvent the wheel later. Just know where you're heading.
That PICO statement should still steer you. Okay. So I do advise that you start thinking about that, just based on what we discussed today.
And how this intervention is then going to change that outcome for you. Okay. Yeah, and I think I have already worked with, what is his name, Edward. Oh, yeah. Edward's great. Edward is great. Have you met with him before?
Yeah, I have made a lot of REDCap forms. But the one that we are going to make for the questions, are we going to make it as research, or operational, or other? Because usually I do operational and other, because we are just trying to use it as a form, right? It's going to be research, because this validation is going to support our manuscript.
This is a research manuscript that we're going to be submitting. She may have an issue with that, though, with it being a DNP project. So it might be that Anupa's writing in regards to nursing retention, and the research piece lives with me. I know.
That's why I went there initially. But to be honest with you, like, this is how we're supposed to work together. Like a PhD and a DNP, this is how we're supposed to work together.
I still think, you know, I'm looking at this from both perspectives, and I hear you, but also, by preventing sepsis, we've also changed the burden on the nurse. And she's in an innovation doctorate. I think you need to talk to your professor about it. This is validating AI, which is epic; it's an epic project for you. That's why I was like, this is such a good thing. You'll have the best project of the year, I'm pretty sure, because I am doing it with you. And you're working with a PhD arm in arm, and I will be helping. Yeah, it will be like a piece of cake for me to just do that and, you know, get all of your wisdom and support.
And I can also show you how to navigate the CSIRB if you haven't navigated that platform before. Anupa, you can just switch to a PhD at this point. All right.
We've got you covered. Just go to the PhD. UCLA would take you in a heartbeat.
I'm telling you, Anupa, I have my DNP. Go for the PhD. I know. And because she's going like halftime, it's like four years.
So she could. Oh, my gosh. And I know one way to absolutely drive my husband like through the roof.
If I told him, you know what, hon? Oh no, don't say it. He'll divorce us.
I really, really love innovation. And I just finished this course at Harvard that Margo supported. And I think she would support me now to get my, actually my PhD and board certification in AI. And she'd be like, oh my God. Jen, Jen, that's why I say he's going to divorce us.
Oh my God. I feel like a third wheel in Jen's marriage. Oh my gosh.
I mean, the DNP is wonderful. Don't get me wrong. It's a good, good thing.
You're going to finish it. It's a good thing. But what happens is innovation, which will be great.
Yeah, you're doing an innovation project. But what happens, at least in my experience, as you get deeper into projects, even if they're process-oriented, there's always that research arm, because you're in an academic institution, and that clinical piece and academia piece really rise to the surface. Like, you want to be able to get into that. And that's in the PhD.
I have to go to Margo for that part. You know, it's a really I just I think it's just a wonderful, wonderful thing to have that PhD. They're both great.
Right. They're both great. I just think that the DNP is a little bit easier to adopt independently than the PhD.
Does that make sense? You see what I'm saying? Like, with the research we are really contributing to the science, and with the DNP we are implementing.
Yeah, like Margo can, you know, she's figured out the PDSA. She can write a PICO. I mean, I'm not downplaying the DNP.
I just think that it's a different skill set when you have gone through your PhD. And so this validation piece, if there's any questions along the way, we're both here to bounce off ideas in terms of SMART goals, PICO, editing. Edward, if you need me to come on a meeting with him, he's been, he was very wonderful and instrumental to me and, and my clinic, we were, we built a research arm and, and Edward was fabulous.
So you're, you're not going to be alone in this. Consider yourself to have three mentors to help you navigate through this, this part. We do have some goals and a timeline, which is a little bit specific at this point. And we have some reach goals in terms of our timeline. So I just want to make sure that, you know, we're aligned in what works for you and if it aligns with where we're at and what we can do to help facilitate this.
I don't want you to feel like you're left alone with this. I think, Anupa, if you look at your curriculum for school and we know like, you know, this is going to be like rolling out in the spring. Right. So you'll look at that curriculum, look at what you have to do for winter. And.
Uh, let's see how we can... and then let's look. We've got to talk about workload; we just got CEUs taken off of Anupa's plate. So she'll be transitioning that, and the Nursing Research Council is going to come and go. So that will feel better to you also, I'm sure.
So I'm hoping that we can build you a block of time where, like Jen and I were just talking about, you could probably review all these charts in like a two-week sabbatical. It'd be amazing. If you can do a two-week period of time, Anupa, like Margo's offering, that's amazing.
It's just like this golden opportunity, right? You have this focus period of time. Your plate is cleared of everything else.
And that is your full focus. That would be, wouldn't that, that would. Can you tell me the timeline?
Like when do we need it by? And before we have to create the REDCap survey too, right? So that I can put my...
We need a manuscript that's ready to submit for publication in February. Yes, yes. So part of this will be when you get your SMART goal written out. And then you've got your, I'm trying to think of like an A3, like, you know, we've identified the problem.
We've got, you know, key stakeholders, what our potential barriers are. Then it's like, okay, what do we have to do there? So we've got to set, the REDCap is going to, the REDCap and the IRB are the two most time-consuming components of this, I believe.
They each are going to take, you know, two to four weeks to do. So I want you to worry less about the IRB stuff, because for your learning, I think learning how to set up the project itself is the most important part. Like, for our PhD, they just make us struggle, and they don't even let us use resources from the hospital. So even though I had somebody who could build it, they're like, no, you have to build it yourself. But you're going to need to have your IRB, though, Anupa; she will have to have her IRB. So right now, Andrew's going to go back and check. So, Jen, we could probably use some help here, which is looping back with Andrew in the next three to four days, maybe early to mid next week, and asking him if he spoke to Brad, and if we can take a look at the protocol that they submitted to the IRB. And is there a possibility to amend their protocol? Wouldn't that be amazing?
So that we can just kind of plug and play? To be honest with you, I don't think that's going to fly. This is a different study.
I think it's a stretch. The IRB is going to say no. But Brad and his team might know something more about alignment that could justify it. I would say, if it's a separate study, which indeed it is because it's got a different algorithm built in, it's probably going to need its own protocol. And to be honest with you, then it's easier for us to control it.
And then we can tweak it and modify our own IRB. We can add coordinators. We can amend it. We then we'll have control over it.
And it's really not hard to build. Like I know it's like it sounds like it, but it really isn't that bad. Like I wish Leah was here because Leah is like Leah knocks these things out like nobody's business.
But I also have Raven; it's just that Raven's stuck on her own thing right now. But I could maybe even see about a CTRC coordinator if we needed it. Wait, what about Jody, Margo?
I think I've got Jody all over the place now, officially. Oh, you do? Okay, fine.
Yeah, I think academic nursing in particular is going to be a real beast for her. Oh, okay. Yeah, most of the DNP students.
When I was at UC, we had to get our IRB from both. I know. I know you do.
And Leah had to do it also at UCLA. So. And so did Christina Crago.
We all did bridge IRBs. So it's, but Cedars is the home. We're the home.
So we submit a home IRB, and then they'll give a... Mike Lewis's committee. Yeah, they'll give a reliance. Basically, they blanket-approve it, because UCLA, or other schools, know that what we do is top drawer. So it's like it's already validated. It's validated.
Yeah. So Arizona State, you'll have to do that with Arizona State University. I have their IRB, and I have to also get the IRB from here, something like that.
Yeah. Yeah. So I'm going to help you. I'm going to help you with the IRB process, Anupa. I'll be your, I'll be your lead on that.
What I would. what I would suggest that we do is we just start building that time into my calendar. So I would probably build in like an hour and a half, like every week.
Um, okay. And go sit with Bell and say, I need an hour and a half every week with Margo until we get this IRB built. And Jen's going to start to give us information.
So then I can be putting that into the protocol. Anupa, we have an abstract so far.
Okay. And about 70% of the paper. Okay. That requires, it's going to require a lot of editing.
And there are parts that Andrew is going to have to come in and draft because I, I'm not. I don't think so, Jen. I think we're going to take that from that other protocol because they're not. Yes.
Because what they care more about is the validation process and the stewardship of the data. But I think there's this one section he's going to have to come in and write, which is specific to AI medicine.
We're going to have to describe a little bit more about that, why she's novel, how it's done different and how she learned it different. Anyway. So we'll talk about that, that, that piece will be separate.
I'm taking a stab at it, Margo, but I know... I would, Jen, because to be honest with you, that is not going to drive the decision-making. If I was reviewing this protocol for IRB approval, that's the least of it; you almost don't even care about the mechanics. Like, I don't sit there and go into the mechanism of action of the study drugs. I could care less. What I care about is: what's its safety profile, how is it performing, and what are our checks and balances? Because the IRB's role, and I often have to remind them of this, is not to make sure that scientists are doing good research; it is to keep patients safe. They'll sit there, and I'm like, yeah, gang, that's not our role. Our role is to keep the patient safe: if a patient agreed, it wasn't coercive; we have to make sure it wasn't coercive. He's thinking more from a business perspective, in terms of it as a product, not just the outcomes. He does. We're just talking IRB right now.
Yeah, just IRB. Your paper, I don't even need that part. Like, yeah, you don't need that part.
I'm just talking general about the finished manuscript. Yeah. There's a few different places in Yuba that we're going to submit.
Depending upon where we submit, it's going to be tailored just a little bit. In terms of the IRB, you absolutely don't need to know this whole paper; for sure, not at all. I'm just talking about this February. She's talking big picture. I'm talking big picture of this whole thing and all these moving parts and pieces. Margo and I are going to be able to complete the majority of this paper, but, I don't think... Anupa won't be on the original manuscript.
This is her, she's not, yeah, this is her, this is her doctoral work. That's another paper. Just not to be confused. I don't want Anupa to feel overwhelmed either. Cause I know she's got a lot on her plate right now.
Yeah. And Margo, I did mention to Anupa earlier, like that guy the other day, that I'm going to give you a mention. There are a few people I'm going to give a mention to, but you're not going to be authoring this manuscript. Yeah.
We'll put a thank you in there. You're going to be, you're going to, you're going to get a shout out in that one, but your paper and what you're doing is that's going to be separate. And that's for your, your DNP.
Yeah, and that's your first author paper. That's when you're your first author. Your first author paper on your own.
I think we'll probably write even a couple of papers on this, because of the process. And also for nursing, like, Anupa, I don't know if you've taken an informatics class or not, but, you know, helping to steward AI in this space. It's very similar to that Pinsky paper.
This is written in medicine, but nursing needs more written like this, and things that we write for Nurse Leader and other very mainstream publications that get a lot of visibility. Yeah, that's right. So I sent a meeting request to
Bell, saying that I want to meet with you next week. I'm in, and we'll work together. Yeah, I'm in. I'm at Becker's; I'm here Monday, but we have graduation and stuff, and we don't have to start working on it Monday. I want you to be able to read up, and Jen's gonna... it sounds like Andrew's gonna get us the variables. I'd probably start on the REDCap. I'd probably go try and meet with the REDCap person, let them know that we're looking to validate an AI protocol against two other AI protocols, and see about starting to meet with them about setting up the REDCap first. Okay, so I think it's a good start. You don't need an IRB protocol to do that, because there's nothing specific to the research study in setting it up. Yeah, that's right.
So I'll reach out to Edward and see... I have never created a research REDCap form before. It's not that different. It's not that different, but this is good for you, because as we do more and more projects... you're in the future going to be helping a lot of DNP students; you'll be the mentor trying to help them set up their DNP projects, and sure, they're all going to be either disease-specific or prevention-specific, but project-oriented. So this will be good, and Edward can help you set this all up. Sure. I will just go, I think maybe next week or the second week of October, and meet with Edward. And from today's meeting, whatever came up, I will screen everything that we said and what was specifically mentioned for what I am validating.
I'll come up with those questions, and I'll run them through you all, so that we can know whether we want to add more questions, and how we want to comprehend and interpret the data at the very end, so that we can shape the questions that way. So I think we will start from there, and I have meetings that I will work out with Bell so that I can meet with Margo. Yeah, and even a remote meeting is fine. I just want to make sure that we stay together and we keep working on it. Because even if we're just talking through things, it forces me to log into CSIRB and open up a new research project, and I start setting it up while we're talking together.
That's productive. So some, we'll, we'll turn these into working meetings. Yeah.
So meanwhile, I'll reach out to my instructor and some of my people in ASU who I know that next year they are going to, you know, like finish. I can ask them like how they went through, you know, like really the next steps of the project. So I understand what is that they are looking for.
And just like Jen recommended, I will really look into creating that PICO question beforehand, so that I'm not like, oh, now I have to rewrite it this way or that way, because I didn't know it was supposed to be like that. Yeah, if they use an A3, or, like, what is...
what format do they use? Like for PhDs, like we're using a specific aims page. Like, so like, you know, how do they start to work out that project summary, that, that work summary? It's probably an A3, I'm guessing. I mean, I think it's a good place to get started.
Um, and the second thing you want to ask your professor is, if you had to select, is there a preference? We're validating an AI algorithm compared to two other validated algorithms that are less specific. Yeah, sure. I think she would allow it, because basically the first time, I was interviewed for a PhD, and they actually selected me, but I said I couldn't do it, because they wanted me to go part-time and stay in it.
So that's when I said that. It sounds really, it sounds really, was this before COVID or during COVID? During COVID. During COVID.
Yeah, it's, yeah. I mean, so, and then the other thing could be looking at wellness in nurses and rapid response nurses. When they're responding to this new algorithm compared to usual care, that would be the second way we could spin this. If they say, no, no, it has to be more nursing focused. I think you'll be okay because the product, the end user is a nurse.
Yes. So it's the product is. is targeted to benefit nurses. Sure.
Sure. Yeah. I think that's why I started off with that whole nursing focus, Margo, because they always bring you back to the nursing. Yeah. Anupa, do you know if... I remember when I was at UC, I think I had to do a SWOT and I had to do a fishbone.
Did you? Do they still have you guys do those, do you know? Or is it Lean, A3? What do they use? Do you know how they have you build this out? Did they get there yet? It kind of helps build out the program and the methodology of the whole problem you're trying to address. Yeah, there was a fishtail, like a... A fishbone? I'm sorry, a fishbone, yes. There's SWOT, there's Lean, there's a variety of methods. I just don't know what methodology... Yeah, and these are process methods, all process. But UCLA uses A3, Lean. I don't know; we haven't gone there yet. You haven't gotten there yet, yeah. I think for their masters they use all of that; I'm waiting to see what they use at the doctoral level. Well, I'll tell you what, why don't you just send Dan... tell Dan, you know Dan well enough. Oh yeah, what am I talking about? I texted him yesterday. Let me ask him. Yeah, our book got published yesterday. Oh gee. Dan is going to be sitting with us at the American Academy of Nursing. He's going to sit at our table. Dan Weberg, Jen.
Yeah, I've not met him. I know of him, but I haven't met him. He's a super terrific guy.
When's the AAN? The same as Magnet. It's like Halloween to November 2nd. Oh, I see.
Okay. Anupa, have you been assigned your advisor yet for your project? Not yet. Okay, well so they're going to be like, wow, Anupa's really ahead of everybody here.
They're like, I want my advisor assigned to me. I want to know exactly what my methodology is going to be. I'm flipping this program. Yeah, they actually spread it over, like, 18 months.
They don't want to overwhelm us. So that's why they're like, you are only identifying your problem this semester. That's what they say: they want you to think about the problem and then put it into a statement. Yeah. They also want us to make sure that it is not driven by our own personal interest, so they make us sign something, like, it's not my personal-interest project; it's really something that will bring value to someone. This will, for sure. Yeah. So do you think that they're going to object to your accelerated ambition? No, I don't think so.
I think at the very end, they want something like this to be really presented. They want to showcase how much the ASU students can do. And this is huge, because it has so many novel pieces. It's really novel, it's innovative, there's nothing like it. We created something new. I'm not going to hold myself back just because I'm doing the DNP and can't do a research degree; I basically wanted to do that, but because of my situation I had to change. But yeah, I totally understand. We are really doing something big. It's value-based, it's more of a collaboration, it's more of like AI, but definitely, yeah, to validate we need to... Well, in the validation space things are fluid. That's what cracks me up sometimes with nursing. They want things to be so cookie-cutter, crackerjack with the timing, but you're picking an innovation project. It's like, man, if this baby is going, it's going. It'll be a commercialized product by the time spring comes around. So for an innovation program, I would give them that feedback, Anupa, depending on if you're working truly on an innovation project,
that it would be nice if the curriculum would allow you to plug and play a little bit more. Wouldn't it be great if in the winter you could be taking your design thinking and your project work, because that's when this project came up and when it needs to get done? As opposed to, no, no, no, all this is written two years in advance, and you're just going to have to be instantly innovative in the spring and come up with this miraculous idea. It doesn't work. Innovation doesn't work like that; it lands in your lap. I think the reason they make it 18 months is that there are so many BSN-to-DNP students. They don't want to overwhelm those people, so they slow it down so everyone can understand what they're trying to do, because this might be the first time they're doing a project, right? That's why I think it takes that long. But I don't regret doing this, Margo. It's really something I always thought would contribute to that quadruple aim. This is a quadruple aim project. He said... so here's Dan's advisement, Anupa. He said that ASU defaults more to design thinking and less Six Sigma, which is lean thinking.
But he said it depends on who you're working with. He said the HEAL, H-E-A-L center, would be design. Yeah, I think my...
My mentor, my faculty, who is going to get assigned to me for innovation leadership, she's a PhD. That's great. Anupa, pull up her work. That would make a huge difference, Anupa, by the way.
Anupa, pull up her work. Pull up her work and look and see how she did her own methods on some of these things. Yeah.
Because if that's the case, then we may just... Look, I love a specific aims page. It's got everything right in there, and you write it really tight. And if you write it really tight, you can pull it for an abstract. You can pull it for a grant.
You can pull it for anything. And so if we do write a single-page specific aims page, and it has to go into anything else, you just cut and paste; it's kind of all there.
I planned on putting one together anyway for the protocol, because the first page of the protocol is pretty much a specific aims page. So, you know, the first paragraph is your overview of the problem, and then the second paragraph dives much deeper into the specifics around the problem
and what we plan to do to solve it. And then the third paragraph is your impact statement. So you say, we've designed Nydia, and we say what she is, and we plan to validate Nydia against these two other algorithms. Our hypothesis is that Nydia is superior by X percent when compared to algorithm A and algorithm B in predicting deterioration over time.
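A head-to-head hypothesis like that is usually checked by scoring all the algorithms on the same patient cohort and comparing discrimination. As a minimal sketch (the labels, scores, and model names here are hypothetical illustrations, not the actual Nydia validation data), the AUC for each algorithm can be computed with the rank-based Mann-Whitney formulation and compared directly:

```python
def auc(labels, scores):
    """Probability that a randomly chosen deteriorating patient (label 1)
    is scored higher than a randomly chosen stable one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical outcomes (1 = deteriorated) and risk scores from two models.
labels   = [0, 0, 0, 1, 0, 1, 1, 0]
nydia    = [0.10, 0.20, 0.15, 0.80, 0.30, 0.70, 0.90, 0.25]
baseline = [0.40, 0.10, 0.50, 0.45, 0.60, 0.20, 0.70, 0.30]
print(auc(labels, nydia), auc(labels, baseline))  # → 1.0 0.6
```

In a real validation you would also want confidence intervals or a paired significance test (e.g., DeLong) on the AUC difference, rather than a bare point comparison.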
Yes. Yeah. So Margo, I think, yes, definitely.
Do you have anything drafted that talks about the problem? I don't know if we've drafted anything. I mean, I do have the problem, but it's a little bit more nursing-specific in terms of just the background. That's good, though, because remember, these are clinicians reviewing. They are. It's not the sepsis angle, in terms of what we were talking about related to the validation. So my initial instinct was to really frame the problem as, you know, this is an early warning system that's been created to help reduce the burden on our rapid response team, our nurses, our attending physicians. So I elaborate on that a bit, but not so much.
I attached it. Not so much as value-based care. However, that is such a significant piece of this. Look at Pinsky, Anupa, in the chat, that paper that Pinsky wrote.
That paper is specifically about the importance of AI in the ICU and why we need it. and why we need to be good stewards over it. And it specifically talks about validating and why it's important and why it needs to be done.
That paper pretty much is it: the use of artificial intelligence in critical care, opportunities and obstacles. Boom. Okay, perfect.
Sure. That's what you need. That is it. Like, so, and then what you do is you start reading there and then you start to work backwards.
You look at the references. So when you read something that you like, you look at those references, go pull those papers, then look at those papers and find their references. And that's how you start to build a deeper understanding of the overall body of evidence.
And in this space, like we were just talking about with Jen, you can get really in the weeds with the AI. And that's not the intent here.
The intent here is, you know, to validate, so that we can demonstrate that this tool we created is safe and effective, more effective than the current tools that nurses have for early prediction, and that it aligns with our strategic plan and the value-based care initiative. More broad strokes.
Yeah, you know, Margo, I think what I need to do is expand a little bit on our background. Well, when Andrew and I started this paper a while back, our problem was really mostly nursing-related in terms of the burden.
But I think that this is an opportunity. And actually, I wrote a separate background statement on my own doc, before I got this, that had more of a value-based care focus in terms of prevention, like sepsis-type prevention or any type of acute event for that matter. But I think I want to expand on what I have here in our manuscript.
I want to pull that in. Because I think it should. Well, and to add to your thought, Jen, Anupa works very closely with the Code Blue Council.
And she has been in contact with the crisis nurses. So Anupa, the crisis nurses went to Andrew, or Andrew went to the crisis nurses. I think the crisis nurses went to Andrew and said, we're getting called left and right.
We're getting called all over the place because the deterioration index says this patient is going to crash, and they're wrong. Yeah. And it's not specific enough. It's not specific. It's too sensitive.
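That "too sensitive, not specific enough" complaint maps directly onto the standard confusion-matrix rates. A toy example (the patient counts and rates below are made up for illustration, not figures from the actual deterioration index) shows how a highly sensitive but non-specific alert floods a ward with false alarms once you account for low event prevalence:

```python
def alert_metrics(labels, alerts):
    """Sensitivity, specificity, and positive predictive value of a binary alert."""
    tp = sum(1 for y, a in zip(labels, alerts) if y and a)
    fn = sum(1 for y, a in zip(labels, alerts) if y and not a)
    fp = sum(1 for y, a in zip(labels, alerts) if not y and a)
    tn = sum(1 for y, a in zip(labels, alerts) if not y and not a)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# Hypothetical ward: 1,000 patients, 20 of whom truly deteriorate.
# The alert catches 19 of the 20 events (95% sensitive) but also
# fires on 294 of the 980 stable patients (70% specific).
labels = [1] * 20 + [0] * 980
alerts = [1] * 19 + [0] * 1 + [1] * 294 + [0] * 686
sens, spec, ppv = alert_metrics(labels, alerts)
print(round(sens, 2), round(spec, 2), round(ppv, 3))  # → 0.95 0.7 0.061
```

With those made-up numbers, only about 6% of alarms are real: 19 true calls buried under 294 false ones, which is exactly the crisis-nurse burden being described.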
So go talk, go interview. This is a wonderful project. Go interview and go talk to some of those crisis nurses.
They are going to tell you what your background is. Okay. And then once you start to see some themes happening... you'll talk to five of them,
and, you know, three or four of them are going to be saying something fairly similar. And then you hop into PubMed or Google Scholar and just plug in what their main statement is, and look up what's been done and what's gold standard.
That's what I'm saying. That's where we want to tie the problem back to nursing, and they're going to tell you exactly what the problem is better than any paper. A lot of these papers get really technical, but we want this... this is Nydia as a nursing intuition score. This is for nursing.
It's a nursing score, you know, but I still think there's an opportunity because it is multifactorial. Oh, the problem and the purpose. And then all these other benefits that we've learned along the way, that we're going to be expanding on. But I think I'm going to integrate that in here. I'm going to expand on the problem just a bit, Margo, as well as our purpose. I'm just going to elaborate a bit.
It doesn't need to be in the weeds. It's still going to have a primary nursing focus, because even... so you have the alarm focus and the burden, okay, and the fatigue that's associated with it and the unhappy work environment and all of that; even when you're providing the step-up in care, that also is a burden, right? So there's still that nursing problem. Well, and then there's the failure to resuscitate. Remember, by the time they get there, if they're not able to resuscitate a patient, that's so defeating.
So when you ladies are writing these papers... for my dissertation, my background was almost identical across these papers. The first paragraph is nearly the same from paper to paper. The second paragraph will have a different impact statement, because one paper was on methodology.
And so that's where your impact statement is saying something about the methods. And then, you know, this other one is talking about the algorithm itself. Like, so that's where things start to get really different.
But when it comes to the intro and the methods, those are fairly plug and play. So, you know, you don't worry so much about that part.
I think it's better that you kind of get the story, Anupa, and start working on getting that REDCap built. We're going to be giving you the variables. So that will be good.
But then it's learning, how do I find out what list I get all of the files from for a month, you know, who has a rapid response, who has a code blue. You already know all those people now. Yeah, yeah. I actually know already, because I get the report. I think it's me and Zach, because for Zachary's PhD project, I was on the Code Blue committee at that time, in 2019. So he made sure that me, him, and EIS, like Dr. Oren from the Code Blue committee, all these people get it. So I still get it every month.
How many people coded, whether they are alive or deceased, what the time of death was, and how many patients are RRT. So I have all those records, but not the real chart that I'd go back to. But I think right now what I'm comparing is the bad patients and the normal patients, right? Yes. Yeah. And I think that's what we're trying to capture in the REDCap: what is my intuition as to whether this patient will... Yes.
You write down like how you think that patient's going to go. And then you see. Well, so here he said there's three.
There were three. I thought we wrote them down here somewhere. Here: NEWS2, MEWS, and Epic DI. These are the three.
This is what Andrew shared with us at the very beginning of the talk. So those three, you can actually pull papers on them and read more about them. In the chat, Margo, I don't know if you did. He did.
He did, at the very... Oh, okay. Okay, so then, yeah, he wrote: NEWS2, MEWS, Epic DI, none better than 54% accurate, with high levels of false positives, I think, right? I think that if you pull a couple papers on those, that will give you some better context as to what other people have said the problem is and how they were trying to solve it. And that, I think, will help you when you're formulating your PICO question, because they will all have a PICO question in there. Okay, sure. Thank you, Margo. It's a lot that I learned today.
I just wanted to. It's a lot. I'm going to send you guys my notes. Sure.
And I have one other paper that I shared. I'm sending it to you guys. Please download the PDFs from the chat, because the links won't work for you.
Okay. Because they're in my box. Oh, really? Okay.
But I already put the PDF in the chat. So all you have to do is just download it directly from there. Thanks, Margaret, for putting the PDF in there.
Yeah, you're welcome. It just makes life a lot easier that way. Otherwise, these files that I have... I have hundreds of articles in here, and your eyes will glaze over if I just shared my folder with you. You'll be like, phew, why did she do that? Yeah, Chris is going to make me log in here.
Okay, girls. Tomorrow, do we have to meet with Andrew again? No, no, no, no. It's canceled.
That was an error. That was an error. You should have received a cancellation for that.
Did you check, Anupa, and see whether it's been removed from your calendar? If it's not, just decline it. Okay, sure.
Decline all of them, all of those for Friday. Let me just see if he removed it. Friday, September 27, 12 to 1 p.m.
Discussing diversity statements for a cat. No, that's my school thing. Okay.
It should not appear on your calendar any longer. No, it's not. Yeah, it's not there.
Good. We will set that up at the next one. But I actually think it's probably best that you meet with Edward first. And then...
We'll get you the variables. Okay. I'll work with Andrew on that.
And then, Margo, I'm going to work on a copy of that other protocol. Okay. And see if I can get that.
And then we'll reconvene. Okay. Does that sound good? Yeah.
And will you, Anupa, will you just keep us posted and let us know when you're meeting with Edward? Yes. Yeah.
Wonderful. I asked him for the second week of October, but I'm waiting to hear back from him. Okay, great, great. Yeah, there is one more guy, his name is Kevin. He's usually available more than... Yeah, yeah, I know. That's what I'm saying, Anupa is like a REDCap expert. She really is all over this, and she's helped me a lot too. I started with Edward and then I went with Kevin. Yeah, I think Edward is like the director, and Kevin is like, you know, if he doesn't know something, he will just get back to you after talking to Edward. So that's fine. I can even, you know, I'll also send an email to Kevin.
Yeah. Edward helped us. He helped us launch our research arm. And then after that, Kevin took over. But like there were sometimes some meetings like I would have to come in.
So as long as Edward knows about the initiative and kind of provides the framework, then Kevin will probably be able to take over. Yeah. Yeah.
OK. Thank you so much. Thank you so much.
So nice meeting you. Thank you. OK.
Enjoy. I'm in San Antonio, Margo. Oh, yeah.
I will see. Margo, when do you leave for San Antonio? Tuesday or Monday night? I was supposed to leave Sunday night. It was like my wedding anniversary.
And my husband was supposed to be here. And then he canceled. So I'm here by myself. And so I probably am not staying until Sunday by myself.
but I haven't rearranged. I didn't get in here and get.