Transcript for:
Understanding Decision Analysis Techniques

Okay, so we're now on Chapter 15, and Chapter 15 is about decision analysis. Business analytics is about making better decisions, and decision analysis can be used to develop an optimal strategy when a decision maker is faced with several decision alternatives and an uncertain or risk-filled pattern of future events. For example, the state of North Carolina used decision analysis in evaluating whether to implement a medical screening test to detect metabolic disorders in newborns. A good decision analysis includes careful consideration of risk, and risk analysis helps to provide probability information about the favorable as well as the unfavorable outcomes that may occur. Decision analysis considers problems that involve reasonably few decision alternatives and reasonably few possible future events. The topics to be discussed under decision analysis are payoff tables and decision trees, sensitivity analysis, and the use of Bayes' theorem. So we need some problem formulation here. The first step in the decision analysis process is problem formulation: a verbal statement of the problem, along with identification of the decision alternatives, the uncertain future events (referred to as chance events), and the outcomes associated with each combination of decision alternative and chance event outcome. Our illustration is the Pittsburgh Development Corporation, which commissioned preliminary architectural drawings for three different projects: one with 30 condominiums, one with 60 condominiums, and one with 90 condominiums. The financial success of the project depends on the size of the condominium complex and the chance event concerning the demand for the condominiums. The statement of the Pittsburgh Development Corporation's decision problem is to select the size of the new luxury condominium project that will lead to the largest profit, given the uncertainty concerning the demand for the condominiums.
So given the statement of the problem, it is clear that the decision is to select the best size for the condominium complex. We have three decision alternatives: a small complex with 30 condominiums (d1), a medium complex with 60 condominiums (d2), or a large complex with 90 condominiums (d3). In decision analysis, the possible outcomes for a chance event are called the states of nature. The states of nature are mutually exclusive, meaning no more than one can occur, and collectively exhaustive, meaning that together they make up all possible futures. So one and only one of the possible states of nature will occur for us. The chance event concerning the demand for the condominiums has two states of nature. What are the two states of nature? Either there is strong demand for the condominiums or there is weak demand; there is no other possibility. So if we want to look at the states of nature, what we are describing is each decision alternative against strong demand and weak demand. Now suppose that we actually have values determined here: for the small complex d1, medium complex d2, and large complex d3, the payoffs are 8, 14, and 20 under strong demand, and 7, 5, and negative 9 under weak demand. A payoff is the outcome resulting from a specific combination of a decision alternative and a state of nature, and the payoff table is a table showing payoffs for all combinations of decision alternatives and states of nature. These values are in millions. So this is the states-of-nature payoff table. The 20 indicates that a payoff of $20 million occurs if the decision is to build a large complex and the strong demand state of nature occurs.
And the negative nine indicates that there is a $9 million loss for that combination. A decision tree provides a graphical representation of the decision-making process. The decision tree for this problem looks like the following: small, medium, large, and then for each complex size, either a strong or a weak demand. There are four nodes, numbered one through four, which are used to represent decisions and chance events. Squares are used to depict decision nodes and circles are used to depict chance nodes; that is the standard notation for chance versus decision nodes. Node one is a decision node, and nodes two, three, and four are chance nodes. Branches connect the nodes, and the branches leaving each chance node correspond to the states of nature. The outcomes, which are the payoffs, are shown at the ends of the state-of-nature branches, so there are six possible outcomes in all. Now, you can also do decision analysis without probabilities. When would you do that? Decision analysis without probabilities is appropriate in situations in which a simple best-case and worst-case analysis is sufficient, where the decision maker has little confidence in his or her ability to assess the actual probabilities involved. Sometimes you just honestly don't know. The optimistic approach evaluates each decision alternative in terms of the best payoff that can occur; the decision alternative that is recommended is the one that provides the best possible payoff. For minimization problems, this approach leads to choosing the alternative with the smallest payoff, which seems quite logical.
So the optimistic approach would lead the decision maker to choose the alternative corresponding to the largest profit. If we go to our states-of-nature table, under strong demand the payoffs 8, 14, and 20 are each alternative's maximum, and the overall maximum payoff of 20 belongs to the large complex under strong demand. So assuming strong demand is the optimistic approach to this scenario. The conservative approach evaluates each decision alternative in terms of the worst payoff that can occur; the decision alternative recommended is the one that provides the best of the worst possible payoffs. For problems involving minimization, this approach identifies the alternative that will minimize the maximum payoff. In this instance, we should assume that weak demand is what will occur, and under weak demand the best of the worst payoffs is 7. Therefore, under the conservative approach, the small condominium complex should be built. Now, for the min-max regret approach, where there is a minimum and a maximum both going on, we need what's referred to as regret. What is regret? Regret is the difference between the payoff associated with a particular decision alternative and the payoff associated with the decision that would yield the most desirable payoff for a given state of nature. Regret is oftentimes referred to as an opportunity loss. Under the min-max regret approach, one chooses the decision alternative that minimizes the maximum regret that could occur over all possible states of nature. We write Rij for the regret, which is the difference between Vj*, the payoff value corresponding to the best decision for state of nature sj, and the payoff corresponding to decision alternative di and state of nature sj. So what are these Vs?
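For those who like to check this with a bit of code, here is a small Python sketch of the optimistic (maximax) and conservative (maximin) approaches on our payoff table. The code and its names are my own illustration, not part of the textbook.

```python
# PDC payoff table (in millions of dollars): each decision alternative
# maps to its payoff under each state of nature (strong or weak demand).
payoffs = {
    "small (d1)":  {"strong": 8,  "weak": 7},
    "medium (d2)": {"strong": 14, "weak": 5},
    "large (d3)":  {"strong": 20, "weak": -9},
}

# Optimistic approach: look at each alternative's best payoff,
# then pick the alternative whose best payoff is largest.
optimistic = max(payoffs, key=lambda d: max(payoffs[d].values()))

# Conservative approach: look at each alternative's worst payoff,
# then pick the alternative whose worst payoff is largest.
conservative = max(payoffs, key=lambda d: min(payoffs[d].values()))

print(optimistic)    # large (d3): best possible payoff of 20
print(conservative)  # small (d1): best of the worst payoffs, 7
```

This matches the conclusions above: assume strong demand and build large under the optimistic approach, assume weak demand and build small under the conservative approach.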
These Vs are the values of the payoff table: the payoff corresponding to decision alternative di and state of nature sj. So if we work through the table, the opportunity loss, or regret, table looks like the following. Under strong demand, the best payoff is 20, so the difference between 20 and 8 is 12, the difference between 20 and 14 is 6, and the difference between 20 and 20 is 0. Under weak demand, the best payoff is 7, so the difference between 7 and 7 is 0, the difference between 7 and 5 is 2, and the difference between 7 and negative 9 is 16. Remember that these are absolute differences. So what is the maximum regret? The maximum regrets, in millions, are 12 for the small complex, 6 for the medium complex, and 16 for the large complex. The minimum of these maximum regrets is 6, so the min-max regret decision for this problem is the medium complex, with a maximum regret of 6. Now, what about if we have probabilities? If we have probabilities, this problem changes quite a bit in terms of how we characterize what goes on. In that case we have the expected value of each decision alternative, EV(di), where di is the decision, and it is the weighted sum of the payoffs for that decision alternative. The Vij are the payoff values on the decision tree, so when we look at our decision tree, we place weightings on the strong and weak demand branches; that is, we need a likelihood for each of those two states of nature.
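The regret arithmetic just described can be sketched in a few lines of Python; again, the code and names are my own illustration of the calculation.

```python
# Regret (opportunity loss): for each state of nature, regret is the
# best payoff in that column minus the payoff actually received.
payoffs = {
    "small (d1)":  {"strong": 8,  "weak": 7},
    "medium (d2)": {"strong": 14, "weak": 5},
    "large (d3)":  {"strong": 20, "weak": -9},
}
states = ["strong", "weak"]
best = {s: max(p[s] for p in payoffs.values()) for s in states}  # 20 and 7

regret = {d: {s: best[s] - p[s] for s in states} for d, p in payoffs.items()}
max_regret = {d: max(r.values()) for d, r in regret.items()}  # 12, 6, 16

# Min-max regret: choose the alternative whose maximum regret is smallest.
decision = min(max_regret, key=max_regret.get)
print(decision, max_regret[decision])  # medium (d2) 6
```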
So this leads us to the expected value approach. For this particular problem, to get the expected value for the small complex, we take the weight of the strong state, 0.8, times its payoff of 8, plus 0.2 times its payoff of 7, which gives a total of 7.8 for the small condominium complex. Likewise there is 12.2 for the medium and 14.2 for the large condominium complex. The weight for each payoff is the probability of the associated state of nature, and therefore the probability that the payoff will occur. So we select the decision branch leading to the chance node with the best expected value, and the decision alternative associated with this branch is the recommended decision. In practice, obtaining precise estimates of the probabilities for each state of nature is often impossible, so historical data is preferred for estimating the probabilities of the different states of nature. Risk analysis helps the decision maker recognize the difference between the expected value of a decision alternative and the payoff that may actually occur: a decision alternative and a state of nature combine to generate the payoff associated with the decision. A risk profile for a decision alternative shows the possible payoffs along with their associated probabilities. The risk profile for the large complex decision alternative for the PDC condominium project could look like the following: you can see that it's quite profitable, since the likelihood of a $20 million profit is far higher than the likelihood of the $9 million loss, and it allows for a graphical interpretation of the decision making. Sensitivity analysis determines how changes in the probabilities for the states of nature, or changes in the payoffs, affect the recommended decision alternative. In many cases, the probabilities for the states of nature and the payoffs are based on subjective assessments.
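The expected value calculation is just a probability-weighted sum, which can be verified with a short Python sketch (my own illustration, using the lecture's numbers):

```python
# Expected value approach: EV(di) = sum over states sj of P(sj) * Vij.
probs = {"strong": 0.8, "weak": 0.2}
payoffs = {
    "small (d1)":  {"strong": 8,  "weak": 7},
    "medium (d2)": {"strong": 14, "weak": 5},
    "large (d3)":  {"strong": 20, "weak": -9},
}

ev = {d: sum(probs[s] * v for s, v in p.items()) for d, p in payoffs.items()}
# ev is approximately: small 7.8, medium 12.2, large 14.2

best = max(ev, key=ev.get)
print(best)  # large (d3), with expected value 14.2
```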
So sensitivity analysis helps the decision maker understand which of these inputs are critical to the choice of the best decision alternative. If a small change in the value of one of the inputs causes a change in the recommended decision alternative, the solution to the decision analysis problem is sensitive to that particular input. As an illustration, suppose that in our problem the probability for strong demand is 0.2 and the probability for weak demand is 0.8, and let's contrast that with what we previously found. With these probability assessments, the recommended decision alternative is to construct the small condominium complex, with an expected value of $7.2 million. When the probability of strong demand is large, we should build the large complex; but when the probability of strong demand is small, we should build the small complex. This illustrates that the solution is sensitive to the probability of strong demand. So what about when we have sample information? Decision makers have the ability to collect additional information about the states of nature; for instance, additional information can be obtained through experiments designed to provide sample information about the states of nature. The preliminary, or prior, probability assessments for the states of nature are the best probability values available prior to obtaining additional information, and these are what researchers would usually use. Posterior probabilities are the revised probabilities after obtaining additional information. So it's prior probability versus posterior probability. For instance, consider the Pittsburgh company, where management is considering a six-month market research study designed to learn more about potential market acceptance of the condominium project, anticipating two results.
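A quick way to see this sensitivity is to recompute the recommended decision as the probability of strong demand changes. This is a small Python sketch of my own, reusing the lecture's payoff table:

```python
# Sensitivity sketch: the recommended decision as a function of the
# probability of strong demand.
payoffs = {
    "small (d1)":  {"strong": 8,  "weak": 7},
    "medium (d2)": {"strong": 14, "weak": 5},
    "large (d3)":  {"strong": 20, "weak": -9},
}

def best_decision(p_strong):
    probs = {"strong": p_strong, "weak": 1.0 - p_strong}
    ev = {d: sum(probs[s] * v for s, v in p.items())
          for d, p in payoffs.items()}
    return max(ev, key=ev.get)

print(best_decision(0.8))  # large (d3), as before
print(best_decision(0.2))  # small (d1), with expected value 7.2
```

The recommendation flips from the large complex to the small complex as the probability of strong demand drops, which is exactly the sensitivity described above.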
A favorable report, in which a substantial number of the individuals contacted express interest in purchasing a condominium, or an unfavorable report; these speak to the possibility of demand. A decision strategy is a sequence of decisions and chance outcomes in which the decisions chosen depend on the yet-to-be-determined outcomes of chance events. So one possible decision tree would look like the following: you have your market research, and from your market research you get either a favorable or an unfavorable report. You could also not conduct the market research, in which case you still have the same exact outcomes as before. But suppose that you conduct the market research and receive either a favorable or an unfavorable report. That does not automatically update your immediate conclusions, but it allows for computing the chance nodes for both the favorable and the unfavorable outcomes. The difference between the earlier decision tree and the one we're currently looking at is that this one contains the probabilities of the favorable versus unfavorable report: favorable, let's suppose in this example, is 0.77, and unfavorable is 0.23. Well, that changes the decision tree; with this favorable-versus-unfavorable report from individuals, the overall probabilities change, and that means it can lead us to a potentially different conclusion about what should be done. So comparing the market research study against no market research study, we can observe our conclusions, and after choosing the best decisions we are left with the following, which is potentially a different result.
So it's really reduced to two decision branches for us to choose between. Now, if you're wondering where the 15.93 came from: 15.93 is what we get when we take 0.77 times 18.26 and then add 0.23 times 8.15 to it. So really we were just doing the weighting on that decision tree. When we're reduced down to these two decisions: if the market research is favorable, we construct the large condominium complex; if the market research is unfavorable, we construct the medium condominium complex. So we're left with two different decisions. From this most recent conclusion, we can compute the difference between these two expected values, which is the difference between 15.93 and 14.20, namely 1.73. This is what we refer to as the expected value of sample information. EVwSI is the expected value with sample information about the states of nature, EVwoSI is the expected value without sample information about the states of nature, and EVSI is the expected value of sample information, the difference between the two. A special case of gaining additional information related to a decision problem is when the sample information provides perfect information on the states of nature. Now, that is not typically likely to happen, but it gives us the expected value of perfect information. We can state the optimal decision strategy when perfect information becomes available as follows: if s1, select d3 and receive a payoff of $20 million; if s2, select d1 and receive a payoff of $7 million. Meaning what? Meaning that if you know exactly whether there is going to be strong or weak demand, then that exactly determines what to do.
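The EVSI arithmetic above can be checked with a tiny Python sketch (my own illustration, using the expected values 18.26 and 8.15 from the lecture's decision tree):

```python
# EVSI: expected value with sample information minus expected value
# without it, using the lecture's numbers.
p_fav, p_unfav = 0.77, 0.23
ev_with_info = p_fav * 18.26 + p_unfav * 8.15   # EVwSI, about 15.93
ev_without_info = 14.20                          # EVwoSI, from earlier
evsi = ev_with_info - ev_without_info            # about 1.73

print(round(ev_with_info, 2), round(evsi, 2))   # 15.93 1.73
```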
So the original probabilities for the states of nature were: the probability of s1 for this example was 0.8 and the probability of s2 was 0.2. The expected value of the decision strategy that uses perfect information is 17.4. How do we get the 17.4? That's 0.8 times 20 plus 0.2 times 7, which is 17.4. And just so we remember where all of these numbers are coming from: they come from the payoff table, where 20 is what happens when we have strong demand and 7 is what we have when we have weak demand. So that means the EVwPI for this example is 17.4, the expected value with perfect information. Now, earlier we found that the expected value approach recommends decision alternative d3, with an expected value of 14.2; this is referred to as the expected value without perfect information, EVwoPI. The gain from perfect information is therefore the difference between 17.4 and 14.2, which is $3.2 million. This is what we refer to as EVPI, and it's equal to 17.4 minus 14.2, which is the 3.2. So that's really how we begin to handle a lot of these kinds of problems. And it's at this point that we come to describing conditional probabilities in accordance with Bayes' theorem. Bayes' theorem can be used to compute branch probabilities for decision trees. So what is the notation that we use for Bayes' theorem? It's worthwhile to investigate the notation system of Bayes' theorem, so let's take a glance at what Bayes' theorem will tell us. I'm going to screen-share my iPad, which is what I will use to define Bayes' theorem for us here. Now, Bayes' theorem is a standard part of probability theory, so let's investigate what it has to say.
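The EVPI calculation is short enough to verify directly; here is a Python sketch of my own, using the payoff table and probabilities from the lecture:

```python
# EVPI: expected value with perfect information minus the expected
# value without it.
probs = {"strong": 0.8, "weak": 0.2}
best_payoff = {"strong": 20, "weak": 7}  # best decision for each state

ev_wpi = sum(probs[s] * best_payoff[s] for s in probs)  # 17.4
ev_wopi = 14.2                                          # EV of d3
evpi = ev_wpi - ev_wopi                                 # 3.2

print(round(ev_wpi, 1), round(evpi, 1))  # 17.4 3.2
```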
It has quite a few things to say, leading to quite a useful result. So the probability of s1 given F means the conditional probability that s1 occurs given that F occurs. Now, as you have already had a statistics class, I'm going to make the assumption that you have investigated some of this already in terms of conditional probabilities. It's a conditional probability because we're interested in the probability of a particular state of nature conditioned on the fact that we receive a favorable market result; as related to this context, conditional probability is a more general topic than just this. We could similarly write the probability of s2 given F. Both of these terms are instances of what we refer to as posterior probabilities, because they are conditional probabilities based on the outcome of the sample information. So when we come to describing a scenario with Bayes' theorem, we have F for a favorable report and U for an unfavorable report, and then s1 is strong demand and s2 is weak demand. And this is one of those rare instances where it's entirely appropriate to make a fuss; that is a classic, classic Dr. B joke. We then look at how to describe the assessment of the probabilities of the two states of nature, because we want to know the probability of s1 and the probability of s2, and we must know the conditional probability of the market research outcomes given each state of nature. To carry out the probability calculations, we need conditional probabilities for all sample outcomes given all states of nature. So Bayes' theorem is stated in the following way.
So a proper statement of Bayes' theorem would be the following: the probability of Ai given B is equal to the probability of Ai times the probability of B given Ai, divided by the probability of A1 times the probability of B given A1, plus the probability of A2 times the probability of B given A2, plus dot dot dot, plus the probability of An times the probability of B given An. That is, P(Ai | B) = P(Ai) P(B | Ai) / [P(A1) P(B | A1) + P(A2) P(B | A2) + ... + P(An) P(B | An)]. One thing I want to comment on is that the probability result Bayes' theorem is based on is the fact that P(J | W) = P(J and W) / P(W). This is a general probability result that gives the definition of how to find the conditional probability of any event; it is not just a Bayes' theorem result. Then, when we come to the numerator of Bayes' theorem, because of this formula the term P(B | Ai) can be rewritten as P(B and Ai) divided by P(Ai), and as a result there is some cancellation: the whole numerator becomes P(Ai and B), which is the same thing as P(B and Ai). And then we need the denominator term, which is actually a rewriting of P(B). What's the reason why? The reason is that we take the events A1, A2, A3, up through An as mutually exclusive events that together make up all things that can occur, so summing P(Ai) P(B | Ai) over all of them gives P(B). So let's look at an example application of Bayes' theorem, as it will prove fruitful.
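The statement above translates directly into a few lines of Python; this is a sketch of my own, and I check it against the numbers we use in the example that follows (priors 0.8 and 0.2, conditionals 0.90 and 0.25):

```python
# Bayes' theorem: posterior P(Ai | B) from priors P(Ai) and
# conditionals (likelihoods) P(B | Ai).
def bayes(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]  # P(Ai and B)
    p_b = sum(joint)  # denominator: P(B), by total probability
    return [j / p_b for j in joint]

# P(s1) = 0.8, P(s2) = 0.2; P(F | s1) = 0.90, P(F | s2) = 0.25.
posterior = bayes([0.8, 0.2], [0.90, 0.25])
print(round(posterior[0], 4))  # P(s1 | F) is about 0.9351
```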
So let's set this up as a table: market research outcome versus state of nature, with favorable (F) and unfavorable (U) on one side and strong demand (s1) and weak demand (s2) on the other. The probability of favorable given s1 is P(F | s1) = 0.90, and the probability of unfavorable given s1 is P(U | s1) = 0.10. You can notice that those two add up to one; that has to be true. And then P(F | s2) = 0.25 and P(U | s2) = 0.75. We are now curious about making statements that, you can think of it this way, change the order of the conditioning. So for instance, we want the probability of s1 given favorable: P(s1 | F) equals what? Now, typically we would need to know the probability of F and the probability of U to reach these conclusions, and that's something we'd normally be stuck on. But let's gather our facts. From our previous studies we already concluded that the probability of s1 is 0.8 and the probability of s2 is 0.2, and in our decision tree we also had the favorable and unfavorable probabilities: the probability of favorable is 0.77. So I can actually plug these in. The numerator is the probability of s1, which is 0.8, times the probability of F given s1, which is 0.90; and the denominator is 0.8 times 0.90 plus 0.2 times 0.25.
Ah, here we go: that last term is the probability of F given s2, which is 0.25, and the denominator is the probability of F, which matches the 0.77 we concluded earlier. So let me write out the specifics from scratch so that you can follow along. What we were looking for was the probability of s1 given F, which by Bayes' theorem is the probability of s1 times the probability of F given s1, divided by the probability of s1 times the probability of F given s1 plus the probability of s2 times the probability of F given s2. You can see that this corresponds with the general statement of the theorem, with s1 and s2 playing the roles of A1 and A2 and F playing the role of B. Now let's include our facts: the probability of s1 is 0.80 and the probability of s2 is 0.20, which were previously computed and calculated, and the probability of F given s1 is 0.90 and the probability of F given s2 is 0.25. So, returning to what we were doing and writing down the associated terms, we have 0.8 times 0.9, divided by 0.8 times 0.9 plus 0.2 times 0.25. So that is correct.
And that's 0.93506494, the probability of s1 given that F occurs. So our final topic is a little bit of utility theory, just a slight branching into the concept. What is utility theory? When monetary value does not necessarily lead to the most preferred decision, expressing the value or worth of a consequence in terms of its utility permits the use of expected utility to identify the most desirable decision alternative. Utility is a measure of the total worth or relative desirability of a particular outcome, and it reflects the decision maker's attitude toward a collection of factors such as profit, loss, and risk. An example of a situation in which utility can help in selecting the best decision alternative is when we have two investment opportunities that require approximately the same cash outlay, and the cash requirements prohibit making more than one investment at any time. Consequently, three possible decision alternatives may be considered. So for instance, we could say that d1 is make investment A, d2 is make investment B, and d3 is do not invest. The states of nature can be stated as: s1 is real estate prices go up, s2 is real estate prices remain stable, and s3 is real estate prices go down. That leaves us with a table of data; let's call our specific example Swofford. So we have the decision alternatives, investment A (d1), investment B (d2), and do not invest (d3), against the states of nature prices up (s1), prices stable (s2), and prices down (s3), with the dollar payoffs, such as the $30,000 and $20,000 for investment A, filled in on the screen. So this is an example of what we have. A decision maker who would choose a guaranteed payoff over a lottery with a superior expected payoff is a risk avoider. The following steps state, in general terms, the procedure to solve the investment problem.
So step one: develop a payoff table using monetary values. Step two: identify the best and worst payoff values in the table and assign each a utility, with U(best payoff) greater than U(worst payoff). The utility function is measuring the payoffs. Step three: for every other monetary value M in the original payoff table, do the following to determine its utility. This is the key step: (a) define the lottery such that there is a probability p of the best payoff and a probability 1 minus p of the worst payoff; they're complementary probabilities. (b) Determine the value of p such that the decision maker is indifferent between a guaranteed payoff of M and the lottery defined in step 3a. (c) Calculate the utility of M as follows: U(M) = p times U(best payoff) plus (1 minus p) times U(worst payoff). Step four: convert each monetary value in the payoff table to a utility. Step five: apply the expected utility approach to the utility table developed in step four and select the decision alternative with the highest expected utility. We can compute the expected utility, EU, of the utilities in a similar fashion as we computed expected values; really, all we're doing is computing an expected value, just a probability-weighted sum. So let's suppose that we ascribe 10 units of utility to the monetary value of $50,000, with utilities assigned at each of the levels going down. Then, using the state-of-nature probabilities, we have the calculations for the expected utilities, and this recommends investment B with the highest expected utility of 3.95. Now, in addition, you can plot the values for diminishing marginal return
or increasing marginal return for a risk taker, as the utility function for money. So for instance, this can be graphed, and this is where you consider the options, a risk avoider, a risk-neutral decision maker, and a risk taker, which are the three ways of handling utility with regard to monetary value. For a risk-neutral decision maker, the utility function can be drawn as a straight line connecting the best and worst points, and the expected utility approach and the expected value approach applied to the monetary payoffs result in the same action. An alternative to having the decision maker provide enough indifference values to define a utility function is to use the exponential utility function. All exponential utility functions indicate that the decision maker is risk averse, and different risk tolerances give us different utility functions that can then be used in the computation. The parameter R represents the decision maker's risk tolerance, and it controls the shape of the exponential utility function. Larger R values create flatter exponential functions, indicating that the decision maker is less risk averse, whereas smaller R values indicate the decision maker has less risk tolerance and is therefore more risk averse. So for example, if the decision maker is comfortable accepting a gamble with a 50% chance of winning $2,000 and a 50% chance of losing $1,000, but not a gamble with a 50% chance of winning $3,000 and a 50% chance of losing $1,500, then we would use R = $2,000 in equation 15.7. So this is some background on the idea of utility theory. You will not be expected to do a whole lot with it. And this completes our final lecture. As you know, you will be meeting virtually to go over your final project, which will be your last task in this course.
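As a closing sketch of the exponential utility idea just described: the standard exponential utility function, which I believe is what the text's equation 15.7 refers to, is U(x) = 1 - e^(-x/R). The Python below is my own illustration; note that with R = $2,000, the 50/50 gamble on winning $2,000 versus losing $1,000 comes out very close to indifference (expected utility near zero, the utility of doing nothing), while the larger gamble is clearly worse.

```python
import math

# Exponential utility: U(x) = 1 - exp(-x / R), where R is the
# decision maker's risk tolerance. Larger R gives a flatter curve.
def exp_utility(x, r):
    return 1.0 - math.exp(-x / r)

# Expected utility is just a probability-weighted sum of utilities.
def expected_utility(probs, payoffs, r):
    return sum(p * exp_utility(x, r) for p, x in zip(probs, payoffs))

R = 2000.0
# 50/50 gamble: win $2,000 or lose $1,000 -> roughly indifferent.
eu_small = expected_utility([0.5, 0.5], [2000, -1000], R)
# 50/50 gamble: win $3,000 or lose $1,500 -> clearly less attractive.
eu_large = expected_utility([0.5, 0.5], [3000, -1500], R)

print(round(eu_small, 3), round(eu_large, 3))  # near 0, clearly negative
```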
So congratulations on finishing it. I look forward to all of your final projects, and much good work for the future. Have a good one, and goodbye.