[Music] All right, welcome to another edition of Drunk Agile. As always, I am Daniel Vacanti, and with me is Prateek. I guess as the guest you always get to go first — what did you choose for your drink tonight? Because we're going to need a heavy one tonight. Yeah, I had a choice between either repeating, since I've run through my whiskey collection in the quarantine so quickly, or going with a whiskey with an "e," so I chose to go with a bourbon. That's a Wild Turkey Rare Breed, for those of you who can see it, and it's 58.4 percent — 116 proof. It's one of the barrel-proof selects that Wild Turkey does, and it's probably my favorite Wild Turkey expression. That's what I have. We were talking about this before — Prateek is now officially banned from Drunk Agile after only the third episode. Who had the over/under of three? I had two. All right, I am going with a GlenDronach. I don't know if anybody can see — this is one that I actually bottled myself at the distillery. It is a sherry cask, sixty percent — exactly sixty percent, I don't know how it can be, but yeah — a fifteen-year-old GlenDronach sherry cask. So let's see how we do on this. We were debating — we can't remember, we need to go to the tape and see — do we normally pour first or pour after? That looks pretty good. We definitely finish during, that's all. Yeah, there's a rude joke there that we're not going to go into. All right, so cheers, everybody. Thanks for joining us for another edition of Drunk Agile. So, Prateek, would you like to introduce our topic for tonight? Oh — after we have a sip, sorry. Yeah, well, today we had to pick a higher proof than usual because we're talking Monte Carlo. Last time around we talked a lot about probabilistic thinking; this time we're talking about how to put that probabilistic thinking into action. So yeah, it's Monte Carlo time. If we remember from last time, the essence of probabilistic thinking is just acknowledging that whenever we're trying to make a forecast, whenever we're talking about the future, uncertainty is involved. We need to acknowledge that there is more than one possible future outcome. We do not know — we cannot predict the future with 100 percent certainty. The future is not deterministic, which is why we need to think probabilistically, and which is why we need tools to help us think probabilistically, because, as we talked about last time, humans are inherently crap at thinking probabilistically. When people see numbers and percentages, they just freak out. So tonight we're hopefully going to do a deep dive into some tools — specifically one tool — available to help us in that probabilistic thinking. Now, we've said "Monte Carlo simulation" a lot, and sometimes that name scares people. They're like, what is Monte Carlo? I don't know what that means. Are we talking about casinos here? What is Monte Carlo, why Monte Carlo — can you help us out with that? Yeah, and maybe right after "why Monte Carlo" we should talk about "why not Monte Carlo" — no, why not other things. That's fair. Monte Carlo is — I mean, I guess the simplest way I could describe it is: let's model the future in some way, and then run that model hundreds, thousands, ten thousand, a million times, and find out what the results are.
How often does each outcome happen? As we said earlier, the first thing we have to recognize is that the future has multiple possibilities. How often does each of those possibilities happen? Then we can figure out the probability of each of those possibilities happening. Yeah — so, you know, that's simple. I doubt anybody understood that, but thanks for trying. Thanks for playing; we have some wonderful parting gifts for you. One of the ways I like to talk about Monte Carlo simulation: everybody understands that if you have a coin and you flip it, the chance of it coming up heads is 50 percent, because people realize there are two possible outcomes — remember, there's more than one possible outcome. In flipping a coin there are two, tails or heads, and heads is one of those outcomes, so one divided by two is 50 percent. That's kind of how the math works. But let's say you were really, really bad at math — like most Americans, let's just say you're really, really bad at math. How might you figure out the probability of flipping a coin and it coming up heads? One of the ways you might try is to just flip it over and over and over and track the results, and after a while you would see that it coming up tails and it coming up heads have roughly equal chances of happening. Now, it turns out that to figure out the exact probability you'd need to do math, and it's probably harder math than one divided by two, but let's forget about that. Same thing with rolling dice: if we have two six-sided dice and we roll them, what are the chances of them coming up seven? What are the chances of them coming up twelve?
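A minimal sketch of that "try it over and over" idea — plain Python, not any particular forecasting tool — estimating the chance of two dice summing to seven by simulating a lot of rolls:

```python
import random

# Estimate P(two dice sum to 7) by trying it over and over,
# instead of working out 6/36 analytically.
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) == 7)
print(f"Estimated chance of a seven: {hits / trials:.3f}")  # ~0.167
```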
Again, one way we could figure that out is to roll them over and over and over again and just find out. Those are simple examples. In our world, the problems are really difficult — really hard, complex — and in some cases it's either impossible or extremely difficult to figure out those probabilities by hand, so the only option we have is to try it over and over and over again. That's essentially what we're talking about when we say Monte Carlo simulation. It's not a technique that any of us invented; it's been around for a while, and it's used in so many different applications. If anybody's interested in the history of Monte Carlo, you can look at, say, the Manhattan Project — there are some pretty interesting stories on how it was first applied there. We could dedicate a segment to the history of Monte Carlo if we wanted to, but we've got more pressing, more important things to talk about tonight. So, can you keep going, kind of explaining Monte Carlo? Well, I was actually going to pose a different question, which is: sure, Monte Carlo is one way, and we're going to spend a bunch of time talking about it, but aren't there other ways — mathematically proven, statistical ways that people have been using forever — that we could look at? I don't know how mathematically proven it is, but, you know, throwing darts, or curling up on the floor and crying — those are my methods when somebody asks me to make a forecast. Are you talking about regression, those types of things? Yeah — if what we're trying to say is that we need a probabilistic way to look at the world and be able to acknowledge multiple outcomes, what about averages and standard deviations? Why not use something like that? Or, of course, curve fitting — take our data, fit it to a curve, and then use that curve. Yeah, it's Weibull. So maybe let's put that on the whiteboard — that's probably our next topic: why don't we do some of those things? But we're going to need heavier whiskey for that — we're going to need some Everclear or something. So we will talk, maybe in the next episode or a very similar one, about why — you may have heard of things like curve fitting, you may have heard of calculating an average or a standard deviation, you may have heard of some of these other statistical techniques — and we need to devote some time to why, at least in our opinion, those are, I don't want to say invalid, but probably not useful — probably more inappropriate than appropriate. So yeah, just a little teaser for next time. Actually, we should take a drink every time we say "we're going to cover that in a future episode" — we need to return this to a drinking game. I'm going to need a bigger boat. All right, Monte Carlo — we sort of waved our hands: it's this way of simulating future outcomes to come up with probabilities. But mechanically, how does it work? For teams — if I'm a team and I'm trying to forecast, say, a release date — how do I use Monte Carlo simulation for that?
Well, a picture's worth a thousand words, so I'm going to screen share, and I'm going to use a team's data — the team will remain unnamed; hopefully the name's not up in the corner there. This is a team's scatter plot. We talked a little bit about scatter plots in the story points discussion; we haven't gotten too much into it, but that's not as important here. Each dot represents an item that's done. What's important from a Monte Carlo perspective — at least applying it to a team — is the throughput for that team, and I'm going to define that. Actually, while I pull that up — Dan, do you want to define what throughput is? Yeah. When we're talking about this type of forecasting, when we're talking about flow in general, there are, at a high level, four flow metrics that we basically care about. The first, as Prateek showed, is something called cycle time, and that is really just a measure of elapsed time between some start point and some end point. How you define that start point and end point is completely up to you, but imagine you have a process with a well-defined start point and a well-defined end point: the amount of time between those two points is what we're going to call cycle time. Any work that is started but not finished — any work that is past that start point but not past the end point — we're going to call work in progress; that's another metric. How long it has been since something started but hasn't finished yet, we're going to call work item age, and again, that's something we're going to talk about in a future video — take a drink. The last metric that we're going to concern ourselves with here is throughput, and all throughput is is a count of the number of items that cross that end point — remember, we have that well-defined end point — per unit of time. In all the examples we're going to show you tonight, I'm thinking that unit of time is days, so we're literally going to count, every day, how many items finished — how many items crossed that finish point — and track that over the history of our team, or our project, or release, or whatever. So, like what Prateek has highlighted here, you can see this team got two items done on August 17th — sorry, April, thank you, April 17th; my eyes, I need new glasses, it's quarantine, I can't get to the doctor. The heights of those bars represent how many items that team got done — they got three items done on April 24th. By the way, any time you see a gap between the bars, that means they got zero items done on that particular day, which is why I prefer a different view — Prateek likes the bar view, but there's a line view so you can actually explicitly see those zeros. If you connect those dots, you can see the same data as a line chart instead of a bar chart. So this is the team's history for the past — what, two to three months? Yeah, about three months' worth of data. It's about the end of May now, so this is the past three months.
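As a sketch of what that daily throughput data looks like if you pull it together yourself — assuming you can export one completion date per finished item from whatever tool you use (the dates below are illustrative, not the team's real data):

```python
from collections import Counter
from datetime import date, timedelta

# Illustrative export: one completion date per finished item.
completed = [date(2020, 4, 17), date(2020, 4, 17),
             date(2020, 4, 24), date(2020, 4, 24), date(2020, 4, 24),
             date(2020, 5, 1), date(2020, 5, 4)]

counts = Counter(completed)
start, end = min(completed), max(completed)

# Daily throughput: items crossing the finish point each day,
# explicitly keeping the zero days (the gaps between the bars).
throughput = [counts.get(start + timedelta(days=d), 0)
              for d in range((end - start).days + 1)]
print(throughput)  # [2, 0, 0, 0, 0, 0, 0, 3, ...]
```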
This is how many things they've gotten done every day for the past three months. So the way Monte Carlo works — at least the way we make it work — is we take that data, the number of items that got done each day, and run it through this Monte Carlo simulation engine to come up with some results. The first question we're trying to answer, at least in this simulation, is: how many things can we get done, based on this three months of historical data for the team, over the next 30 days? You can see some controls here — I can say starting today and going 30 days out: over the next 30 days, based on this data, how many things are we going to get done? I'll just add one little thing here: remember we said we model the future and run it multiple times — this software does it ten thousand times. It has taken this data, modeled it, and run the future 10,000 times to figure out the probabilities of things happening in the next 30 days. Do you think it's a good idea we go results backwards, or model forwards? I don't know — your choice, you decide whatever you want to do. There we go. So, in the next 30 days, if you look at the results histogram here, this is how many items we expect them to get done. Let's say we wanted to find out whether this team can do 80 things in the next 30 days — there is a result for that. Out of those 10,000 trials, three of them had them completing 80 items. Which probably means yes, there is a chance, but it's a very minuscule chance: as we modeled this data and ran it forward for the next 30 days, 80 items got done three times out of 10,000. So that's roughly your chance of contracting COVID-19, right? Yeah, pretty much — oh wait, is that too soon? Maybe — well, people could be watching this much later, so yeah. Meanwhile, the result of 31 happened 239 times; in fact, 31 or more happened about 89.9 — call it 90 percent — of the time. You'll see that result shown right up top here: about 90 percent of these simulations had 31 or more things happening. But then the question is, what is this magic we're doing in the background to run these simulations? The way that goes is: we have 30 days, and we're trying to figure out how much this team can do in 30 days. So we take one of these throughputs from the past, just randomly select one of them, and assign it to the next day. Then we go back, pick another throughput, assign it to the following day, and do that for each of these 30 days. Once we've done that, we total up how many things got done in that simulation — in that one single string of 30 days, we total the throughput for each of those days. Let's say that total came to 50.
We'll put a mark here at 50, and the histogram at 50 goes up by one. Then we do it again and figure out how many we get done in the next simulation — it's completely randomized — and we do that over and over again, 10,000 times, and come up with all these results. Dan, would you like to go over the results a little more than I just did? Well, yeah. What Prateek started getting into — remember, we started this whole conversation talking about probabilistic thinking — is that this histogram you're seeing right here gives us an understanding of the risk associated with certain outcomes, and we can actually quantify that risk as percentages. That's what Prateek was roughly going over: we've got a less-than-one-percent chance of getting 80 or more done, we've got a roughly 80-some-percent chance of getting 30 or more done, whatever it is. And that's really what we need to do with this histogram: segment it in such a way that it communicates appropriate levels of risk to us. If you look closely, there are clues already there for us — you can see a 50 percent with a dashed line, a 70 percent with a dashed line, an 85 percent with a dashed line, and a 95 percent with a dashed line. Those lines are communicating risk to us. Right now Prateek is hovering over the 95th percentile; what that is saying is that, given this historical data, this team has a 95 percent chance of getting 27 or more things done. They could get 27 things done, they could get 28 things done, they could get 80 things done — that's what that segment is saying. They have an 85 percent chance of getting 33 or more done. So notice what's kind of quirky about these results: as our confidence shrinks — going from 95 percent confidence to 85 percent confidence — the number of items we think we can get done goes up. I don't know if that seems intuitive to people or not, but if you want to plan with less confidence — for example, at the 50th percentile, roughly a coin flip — that's going to say 42 or more items done. I believe last time we talked a lot about the flaw of averages — I think we did, didn't we? I don't know, I was drunk at the time, so I can't remember — but I think we did. If you wanted to plan based on the average here, we would say, "You know what, half the time we're getting 42 things done; that's our plan." But you'll notice — I think this curve demonstrates it pretty nicely — that yes, half the time we're getting 42 or more things done, but half the time we're getting 42 or fewer things done as well. That's really what we're trying to communicate, and that's what we're trying to segment with these percentile lines: how much risk are we willing to live with? Then it just becomes a conversation to understand, for this particular release, this particular project, this particular sprint, whatever, how much risk we are willing to live with. If you really just want to flip a coin, we're going to say 42 or more. Most often, that's too much risk — just flipping a coin.
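Here is a minimal sketch of the mechanics Prateek just described — not the tool's actual implementation, just the same idea in plain Python: sample one historical daily throughput for each of the next 30 days, total them, repeat 10,000 times, and read the risk off the percentiles. The throughput numbers are illustrative.

```python
import random

# Historical daily throughput (items finished per day, zeros included).
# Illustrative numbers; in practice this comes from the team's real history.
throughput = [2, 0, 0, 1, 3, 0, 2, 1, 0, 0, 4, 1, 0, 2, 3, 0, 0, 1, 2, 0]

def how_many(history, days=30, trials=10_000):
    # One trial: pick a random historical day for each future day and
    # total the items finished; repeat `trials` times.
    return sorted(sum(random.choice(history) for _ in range(days))
                  for _ in range(trials))

results = how_many(throughput)
for confidence in (95, 85, 70, 50):
    # "N or more" at this confidence: the value that at least `confidence`%
    # of the simulated futures met or exceeded.
    idx = int(len(results) * (100 - confidence) / 100)
    print(f"{confidence}% chance of finishing {results[idx]} or more items in 30 days")
```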
For most planning purposes, that's too much risk. We probably need something a little more certain, if you will — or a little more confident, I guess, is a better way of saying it. So — oh, you were going to say something? What I was going to say, what I really like about this, is that it shifts the conversation back to what the planning conversation was always about. Planning was always about risk, about mitigating risk, and seeing it this way — interpreting the results as risk, as probabilities — shifts the conversation back to how much risk we are willing to take. Yeah. So a manager or a product owner or somebody can come and say, "We absolutely have to have 60 things done in the next 30 days," and we can say, okay — as Prateek was showing us before — yes, there's a chance of that happening, absolutely, but it's, what is that, about 5.7, about a six percent chance. Is that a bet you're willing to make? I don't know if we talked about Annie Duke's book Thinking in Bets last time — again, we really should review the tape before we do these things; I think we did, hopefully we did — but this is a classic example of how you want to think of your release planning: as betting, essentially as gambling. That's what you're doing. Are you willing to make a bet on a six percent outcome, or would you rather make a bet on an 85 percent outcome, or a 70 percent outcome, something like that? Which is also a hint to where the name Monte Carlo comes from. Yeah, absolutely. So that's, at a very high level, without getting too deep into the mechanics of how and why this works, what Monte Carlo is and how we can use it to quantify risk from a planning perspective — to help us start thinking probabilistically about these uncertain futures. Yeah, and we only showed one mode of this. There's also the case where you might not have an end date; you might just have a number of stories — say, "I have to get 60 things done." You could do the exact same thing and figure out what percentage of the simulated futures end up with 60 things getting done by July 1st, what percentage by August 1st, and make the commitment to your customer accordingly: "I am 85 percent confident that I can get this done by August 1st." Now, the difficult thing about Monte Carlo simulation, in my humble opinion anyway — and I don't know if you had more to say about the mechanics or how it works — no, we've said enough about that — okay, because I was going to transition into the difficulty in terms of how you use this and how you interpret it. The team itself may understand the ins and outs of Monte Carlo very well, but one of the problems is that we are presenting our customers, our stakeholders, our managers, whomever, with much more information than they're probably used to, and they may or may not know how to interpret those results — especially when they ask, "Well, how many things are going to get done in the next 30 days?"
Our first question back to them has to be: how much risk are you willing to live with? And the answer back will almost always be, "Well, I want 100 percent certainty. You need a plan; you need to tell me with 100 percent certainty how many things you can get done." And if Prateek, on this graph, scrolls all the way over to the right — or sorry, left; yeah, the other right — and hovers over the results histogram, you'll see there's no such thing as 100 percent. Just like we talked about before with probabilistic thinking, there is no such thing as 100 percent. So that's one thing we need to educate our customers and stakeholders on: start thinking in terms of risk. And the thing is, they probably understand this better than you think they do — certainly executives understand it very well, people in finance understand it very well. But maybe as a nice heuristic — well, maybe not even a heuristic, just a nice way of explaining it — let's go through some of the percentiles. That 50th percentile line, like I said, is roughly the same chance as flipping a coin: we've got a 50 percent chance of getting 42 or more, a 50 percent chance of getting 42 or fewer. That's one out of every two — you're going to be wrong one out of every two times. If we go to the 85th percentile, a useful way of thinking of that is that instead of being wrong one out of every two times, now we're going to be wrong one out of every seven or so times — six? Is it between six and seven? I'm trying to do the math in my head — somewhere between six and seven. And it gets even better going from the 85th percentile to the 95th percentile. That's just a 10 percent increase in confidence, if you will, but at the 95th percentile we're only wrong one out of every 20 times. See, we've gone from one out of every six or seven at the 85th percentile to one out of every 20 at the 95th. And of course your product owners, or whoever your stakeholders are, are going to want to be as confident as possible — but every time you take that jump up in confidence, there is a cost associated with it, and one of those costs is that we have to plan for less stuff to get done. If you want to go from the 85th percentile to the 95th percentile, instead of planning for 33 things to get done, we can only plan for 27 things to get done. That's just the way it works. I don't know, did you want to say something about that, Prateek? Well, I was just going to come back to the initial point you were making: people in general don't understand this, and if you throw all the information at them it might just overwhelm them. But I think if you sit down and have the conversation with the product owner and walk them through it, they'll totally get it. And that's where the second part of this comes in: even if the person in front of you doesn't understand it, you yourself don't want to be wrong more often, so we plan on the 85th or the 95th so you're wrong a lot less often than you would be otherwise.
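The other mode Prateek mentioned — fixed scope, forecast the date — is the same trick run the other way: keep sampling historical daily throughputs until the remaining items are gone, count how many days that took, repeat, and then commit at the 85th or 95th percentile so you're wrong a lot less often. A sketch under the same assumptions as above:

```python
import random

throughput = [2, 0, 0, 1, 3, 0, 2, 1, 0, 0, 4, 1, 0, 2, 3, 0, 0, 1, 2, 0]

def days_to_finish(history, items_remaining, trials=10_000):
    # Assumes the history has at least one nonzero day, or the loop never ends.
    durations = []
    for _ in range(trials):
        remaining, days = items_remaining, 0
        while remaining > 0:
            remaining -= random.choice(history)  # one simulated day of work
            days += 1
        durations.append(days)
    return sorted(durations)

results = days_to_finish(throughput, items_remaining=60)
for confidence in (50, 85, 95):
    # Higher confidence pushes the committed date out: fewer surprises, later promise.
    idx = min(int(len(results) * confidence / 100), len(results) - 1)
    print(f"{confidence}% chance of finishing 60 items within {results[idx]} days")
```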
I think Amazon did something similar when they got hit recently — when everyone was staying home and ordering only off of Amazon, I think they started bumping up their estimates, and people were a little surprised: "You said this would be here in a week, but it got here in two days." They were probably back there modeling and saying, if we get hit with X number of requests, what's going to happen, when will we get our stuff through — and without having to tell us what level of confidence they had, they just bumped it up and said, "I want to be wrong less often with my customers' orders." Yep. And if we use the example we talked about last time, the 2016 election — again, not to get too political — going into it, we modeled Hillary Clinton with a 75 percent chance of winning. That does not mean she wins the election; you still have a one-in-four chance of Donald Trump winning, and — fortunately or unfortunately, depending on your political persuasion — Donald Trump wins that election. But that doesn't mean the model was wrong; it doesn't mean we necessarily screwed up how we did our forecasting. It could mean we just got unlucky, or lucky, again depending on how you look at it. Yeah, and it's one of those things that everyone trying to do any kind of forecast who has data seems to do — anyone publishing poll forecasts seems to do it. It's the same thing as last time when we talked about Google estimating how long it will take you to get somewhere. They're doing some of that, and in fact I'm sure what they're also doing — Waze and Google, all of them — is taking real-time data as it comes in from other cars and applying it: "I know we said it'll be 30 minutes, but we're getting more data" — each throughput point here is like that data coming in — "we said 30 minutes, but we see there's a slowdown up ahead, so it'll actually be 40 minutes." To me this is the next logical transition for this conversation — and we should decide whether we have this conversation now or save it for a future episode, take a drink — but that is this idea of continuous forecasting. Hopefully we've shown everybody here that — by the way — what you see here can be done in a spreadsheet. God forbid, if you like spreadsheets, you could run Monte Carlo forecasts in a spreadsheet; there's no magic math behind it. It's a very simple, very straightforward technique, and because of that, as our projects are running, as our releases are running, and we get more information, more data about how things are going, we should constantly be updating our forecasts to reflect that new data and re-running them to see: has anything fundamentally changed? Are we now incurring more risk than we thought we would, or are we still on track, and do we need to adjust accordingly? So again, we've been going for a while here — I don't know if we want to spend any time now talking about continuous forecasting. We could even run through the Ultimate Software dashboard that we published.
Yeah, we could do that real quick. I don't think we can dedicate enough time to it, but we can definitely run through it quickly. Just to jump onto the point you were making earlier — anyone listening to this, you should definitely try it: take a screenshot of your weather prediction from Sunday, when your weatherman comes on the TV and does a prediction — or weather woman, weather person, yes — when your weather person comes on the TV and does the prediction for the week, take a screenshot of it, then watch it every day, take screenshots every day, and you'll see how it changes as they get more information. They don't just go, "I made the prediction, that is it, that's the plan, and we've got to work the plan, because we made the plan and the plan is more important than anything else." Right — because we've got the plan, that is now the plan, so we can't just say it's not the plan. Look how much effort we put into putting this plan together — you're telling me it's wrong? That's not right; we planned so much to plan this. Moving on. So, just as the weather person forecasts over and over again as they get new information, we have the ability to grab our teams' information from whichever systems we are working in. For example, if you're working in Jira, you could pull all the information on throughput and remaining features and remaining stories for a team and do this over and over again — the same thing we showed in the other tool earlier with the Monte Carlo results. This dashboard here, this screenshot we're showing you, does the exact same thing. You can see it's a little older — this is from 2017.
Each row here is a team and their release. So, for example, this Integration Services team has a release coming up on the 31st of January 2017, and they have five stories to get done by then. When we do the exact same thing — pull their latest throughput and run Monte Carlo on it — it seems like they'll get done: they have an 80.58 percent chance of finishing by January 31st. And we do that every 15 minutes. As soon as a team has closed a story, 15 to 20 minutes later we've pulled that data, run the simulations, and found out that this team went from five stories to four stories — so what is the likelihood now that it will get done by January 31st? We can do this over and over again as we get new data. Yeah, so what you're showing here is really the essence of continuous forecasting — this is continuous forecasting in practice. What Prateek is showing you right now is part of a case study that we published from Ultimate Software; if you literally just google "Ultimate Kanban InfoQ" you will see a write-up on how we used continuous forecasting in practice. But let's maybe transition back to the tool itself, and we can show you how continuous forecasting might work using the tool. There's a view control there — it's kind of hiding — but the middle section of this chart is a way to go in and select a certain date range. We might say, okay, we've got three months of historical data here, but maybe that first month is not really relevant — maybe the team has changed, maybe the technology has changed, maybe something fundamental about the release has changed — and really only the last 30 days or so are relevant. So we can go in — and this is that idea of continuous forecasting — and select whatever we think the appropriate date range is, and possibly, I should say probably, include the new throughput data that we're getting every day. And I don't know if everybody can see that as Prateek was shifting that selected window around, the results themselves were changing — they're changing in real time, instantaneously. That's what we need to do; that's essentially screenshotting the weather forecast. As we get new information, it would be rather silly not to at least consider what that new information is and how it might impact our outcomes, and because these tools make it so easy to do that, you really don't have an excuse not to. Yeah — I do sympathize with people's perspective of "we planned for this, we spent time on it." There's a certain sunk cost fallacy attached to it: we spent a good amount of time estimating, getting the team together, figuring out what we can do, and we should try to stick to that as much as possible because we did all that work. But the fact of the matter is, ground conditions change, things happening on the team change, the rate at which we thought we would be able to eat through this project changes, and we should react to that. There are very few other professions where people won't react as ground conditions change.
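One way to picture continuous forecasting in code — again a sketch, not what the dashboard actually runs: keep only a recent window of throughput, re-run the same simulation whenever new data lands, and watch the percentile answer move.

```python
import random

def how_many(history, days=30, trials=10_000):
    return sorted(sum(random.choice(history) for _ in range(days))
                  for _ in range(trials))

def reforecast(full_history, window=30, confidence=85, days=30):
    # Use only the most recent `window` days of throughput so the forecast
    # reflects how the team is working now, not how it worked months ago.
    recent = full_history[-window:]
    results = how_many(recent, days=days)
    idx = int(len(results) * (100 - confidence) / 100)
    return results[idx]

# Illustrative history: the team sped up over the last month.
history = [1, 0, 0, 1, 0, 2, 0, 0, 1, 0] * 6 + [2, 1, 0, 3, 2, 0, 1, 2, 2, 0] * 3
print("85th percentile, all 90 days :", reforecast(history, window=len(history)))
print("85th percentile, last 30 days:", reforecast(history, window=30))
```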
For you sports fans out there — whether you're a cricket fan or a World Cup fan or an NFL fan or whatever — there are tons of websites you can go to, and as your favorite game is being played, you can see the percentage chance of your team winning, and you will see that as the game progresses, as teams score or don't score, those probabilities change. That's one classic example of continuous forecasting: as the game is running, we're getting more information — this team scored, this team didn't score, this team got out, whatever — and whenever those events happen, you update your model and get an updated probability of what's happening. Yeah, and again, we don't have time today to dive too much into it, but if you take yourself out of the chair of the fan and put yourself in the chair of the coach, as you see those probabilities change, as you see these results change, it's imperative that you make some changes, that you take decisions to affect them one way or another — and you've got a variety of decisions at hand. I think fans would be pretty upset if a coach in the Super Bowl whose team is losing at halftime doesn't make any adjustments, or if they're losing with two minutes to go and they don't go for it on fourth down, you know, whatever. Yeah — at some point your tactics have to change based on the new information you get, based on the new data you're provided. So, we've thrown a lot at you in this episode. I don't know, is there anything else we really want to cover before we wrap up? Yeah, I think we exceeded our WIP limit on how many things we should talk about in one episode. We probably should have split the continuous forecasting out into a separate episode — maybe we'll do that in post. If you see us show up again with a full glass of whiskey talking about continuous forecasting, then you will know what happened. As always, Prateek, as the guest I will let you have the final word about anything we've talked about tonight — taking it from probabilistic thinking all the way to using Monte Carlo simulation, sum it up for us. Well, I'm going to try to distill it down into steps. First, start thinking probabilistically, acknowledging there are multiple possibilities that can happen. Two, have some tools at hand — do some modeling and simulation to find out what probability is attached to each of those possibilities. And most importantly, three, take action when those tools tell you that the probabilities are not in favor of the possibilities that you want. Excellent job. I couldn't — I mean, I could have done it better myself, but I'm going to tell you that I couldn't have done it better myself. So with that, we will wrap it up. For Prateek Singh, I am Daniel Vacanti. Thank you again for joining us on this episode, and we'll see you next time on Drunk Agile. Cheers, everybody. Good night.