This is FRM Part 2, Book 1, Market Risk Measurement and Management, and the chapter on Estimating Market Risk Measures. Let's take a look at these learning objectives, some of which we've seen before. We've talked a bunch about value at risk already, so here we'll look at a couple of different ways to estimate VaR.
We'll also talk about some shortcomings of value at risk, extend that into expected shortfall, and talk about coherent risk measures. And then we'll end with a discussion, a couple of slides, on quantiles, and that's going to be fairly interesting. It's going to add some value to our understanding of risk management. It's not going to be a perfect addition, but it's going to give us kind of a picture. Now, before we get into that first learning objective, let me give you a brief history of value at risk and some other risk statistics.
You know, let's go all the way back to 1952, when Harry Markowitz taught us that standard deviation was a great measure of total risk. And that was extended by William Sharpe and the whole capital asset pricing model, and beta, and the importance of covariance and correlation coefficients. But all that modern portfolio theory stuff was based on the assumption that the underlying asset returns followed a normal distribution. And these models work perfectly well, and they give us a great fundamental understanding of the risk and expected return relationship. But we have to ask the question: what happens if the return distribution is not normal? What happens if there's positive or negative skewness?
What happens if there's something going on in the tail? And of course, after the 1987 stock market crash, that's when financial risk managers started looking at other measures, and that's when value at risk became pretty popular. And even though value at risk computations sometimes depend on normal distributions, as we'll see here in just a minute, and as we probably discussed in some previous chapters, that's what we're examining here today.
We're trying to examine how value at risk is going to help us decide on the risk characteristics of investing in a particular security. All right, so let's go ahead and start the learning objectives for this chapter. So what we're going to try to do is estimate value at risk using historical simulation. So let me just remind you that what we're trying to do here is we're looking at a distribution of returns for a particular investment. And we're not so much worried about the right hand side.
We're worried about the left hand side of the distribution. We're really worried about that left hand tail. If it's a normal distribution, well then life becomes much easier and risk management probably follows an easier set of decisions.
But when there's skewness or kurtosis over there on the left, then we need to worry about that kind of stuff. What we're trying to do with value at risk is determine the worst thing that could happen to us, right? Now, value at risk doesn't really answer that question, but what it does is get us to the left side of that distribution, and the value at risk estimate is going to tell us something like: boy, we have only a 5% chance of losing more than a million dollars, or a billion dollars. So that gives us some extra information we can use in our risk management decisions.
All right, so what does that first bullet point tell us? Simple approaches to estimating value at risk: historical simulation. All right, so what we're going to do is construct a distribution of losses.
And what we can do is look at these losses as being driven by certain key factors over past time periods. You know, what are some key factors? If this is a bond portfolio, then, of course, we need to worry about yields.
We need to worry about duration. We need to worry about convexity. We probably don't need to worry about the number of clouds in the sky.
So remember, we talked about key factors and key rate durations and all that kind of stuff, so we need to make sure that these factors are relevant. All right, then what we do is simply order the losses. And look in the blue block that I have there: we can put together a confidence interval given by that pretty simple equation, and what it does is separate the tail from the rest of the distribution. And so there's a quick example at the bottom. If we have a thousand observations and a confidence level of 95%, then when we order all of these losses, we identify the 50 of them in the tail, and the value at risk is going to be the 51st observation. Not the 51st all the way to the left, but the 51st starting from the left and working our way to the middle. Let's look at the steps: we order the losses, find the highest observations, and then locate the loss corresponding to that specific confidence level. So let's do just a quick example here.
We have 300 trading days, and the five worst daily losses in millions of dollars run from minus 30 down to minus 19: that's minus 30, minus 27, minus 23, minus 21, and minus 19. So what is the 99% daily historical simulation value at risk? We go back to that equation from the previous slide and take 1 minus the 99%. So essentially all we're doing is taking 1%, right? 1% of the sample size of 300 is three, and then we add one to get to the fourth highest loss. That's the same kind of thing we did a moment ago: 5% of the 1,000 gave us 50, and then we added one to it. So the value at risk is going to be the fourth highest value, the minus 21, and that fourth observation separates the largest losses (the 30 and the 27 and the 23) from the rest of the distribution.
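Here's a minimal sketch of that calculation in Python. The five worst losses come straight from the example; the other 295 daily P&L figures are simulated placeholders just to fill out the 300 days:

```python
import numpy as np

# Hypothetical P&L series (in $ millions): the five worst daily losses from
# the example, padded with simulated ordinary days to reach 300 observations.
rng = np.random.default_rng(42)
pnl = np.concatenate([[-30, -27, -23, -21, -19], rng.normal(1, 5, 295)])

def historical_var(pnl, confidence=0.99):
    """Historical simulation VaR: order the losses from worst to best and
    take the observation just past the (1 - confidence) tail."""
    losses = np.sort(-np.asarray(pnl))[::-1]     # losses, largest first
    n_tail = int((1 - confidence) * len(pnl))    # observations in the tail (3)
    return losses[n_tail]                        # the next-highest loss

print(historical_var(pnl))   # 21.0 -> the 99% daily VaR is a $21M loss
```

With 1,000 observations and 95% confidence, the same function would put 50 observations in the tail and return the 51st largest loss, matching the earlier slide.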
And as that slide told you just a few moments ago, this is really relatively straightforward, so there you go. How about if we do this using some parameters? The good thing about the historical simulation approach is that we don't make any assumptions regarding the distribution of returns. Those five losses right there, minus 30 down to minus 19, are all we pin down, and then we have 295 observations to the right of the minus 19, and we really don't care what those are, right? They could be almost anything. Hopefully most of them, maybe a significant majority of them, are going to be positive, but we don't make any assumptions. When we use the parametric approach, though, we are going to make assumptions that date us back to 1952 and the Harry Markowitz reliance on a normal distribution. And so we're going to look at a normal and a lognormal distribution.
All right, let's suppose that we're using a database of dollar profits and losses, and we're going to assume that the distribution is normal. And so our value at risk is going to look a little bit like a confidence interval, right? It's not quite a confidence interval, because we're not plussing and minusing. But what we're going to do is take that mean, and then add the product of the standard deviation and some critical value from the z-table.
And that looks an awful lot like at least one part of a confidence interval. It's not identical, but it's a similar kind of a thing. And remember, what we're trying to do is look to the left. All right, so let's look at a quick example here.
All right, so over a specified period, this normal distribution has a mean of $12 million and a standard deviation of $24 million. Let's calculate that 95% value at risk. All we're going to do is take the 12 million, put a minus sign in front of it (of course, because we're looking to the left), and then add the standard deviation times 1.645, which is the critical value of z at 95%. Remember, this is one-tailed, right? We don't care about that other tail. So that's minus 12 plus 24 times 1.645, which gets us to about 27.5. And what we can say about that (there's my little balloon over there) is that we can expect to lose at most $27.5 million over the next year with 95% confidence, which means that there is a 5% chance that we're going to lose more than that $27.5 million.
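Here's a minimal sketch of that computation; the function name is mine, and it assumes scipy is available for the critical value:

```python
from scipy.stats import norm

def normal_var(mean, sigma, confidence=0.95, portfolio_value=1.0):
    """Parametric (normal) VaR: -mean + sigma * z, scaled by position size."""
    z = norm.ppf(confidence)        # one-tailed critical value, 1.645 at 95%
    return (-mean + sigma * z) * portfolio_value

# The example from the slide: mean $12M, standard deviation $24M.
print(normal_var(12, 24))   # ~27.48 -> at most a ~$27.5M loss, 95% confidence
```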
It's a great time for me to mention that this is one of the weaknesses of the value at risk method: it says, okay, you have a 95% chance of landing somewhere between a loss of 27 and a half million and whatever profit that normal distribution would allow, but we don't know what's in the tail. We don't know what that tail looks like. Maybe we have a three percent chance of losing, oh, let me just be conservative, let's say 30 million dollars, right? Or maybe we have a three percent chance of losing 300 million, or three billion dollars. We don't know what's in that tail over there. Nevertheless, assuming a normal distribution, this gives us a really good picture of what that left tail looks like. Now, of course, we don't have to be limited to using dollars.
We can use rates of return. So if we look at an arithmetic return, boy, there's just a regular old F over P minus one. That's how I teach my students how to calculate return on the very first day of investments class.
F being the future value, P being the present value. So let's go from 100 to 110: you do 110 divided by 100, subtract 1, and you get that 10%. But here, notice, there's the price today minus the price yesterday, so that would be the 110 minus the 100, and then you can add some intermediate cash flows in there.
There's a D in there, which stands for interim payments. Don't be limited to thinking that must mean a dividend payment on a share of stock, because of course it can be a coupon payment if it's a bond. And so that value at risk formula there in the bottom blue box looks an awful lot like what we just did. So here's just a really quick example: suppose we have a mean return of 0.155, a standard deviation of 0.107, and, just to make life simple, a portfolio worth one unit of a currency. For the 95% value at risk, you take minus the 0.155 and add the product of the 0.107 standard deviation and that same 1.645 critical value, and that gives you about 0.021. Let's say these are euros, so we have one euro of a portfolio.
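And the same formula applied to returns, as a quick self-contained sketch:

```python
from scipy.stats import norm

# Mean return 0.155, standard deviation 0.107, a one-euro portfolio.
mu, sigma, value = 0.155, 0.107, 1.0
var_95 = (-mu + sigma * norm.ppf(0.95)) * value
print(round(var_95, 3))   # 0.021 -> a VaR of about 0.021 euros
```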
Our value at risk, then, is about 0.021 euros, or 2.1% of the portfolio. Now let's go ahead and discuss the possibility that we're not going to use profits and losses, which can be negative, right? And we're not going to use arithmetic returns, which can be positive or negative; we're going to use prices. And prices, you know (can I say almost always, or always?), are almost always bounded by zero. I was always fascinated when you look at the balance sheets of some of these extremely distressed publicly held corporations and they have a negative equity value in there. I always wanted to scratch my head and say, I'm not quite sure what that means. But anyway, stock prices and bond prices are pretty much bounded by zero. So what we do is this: if we assume that the continuously compounded returns follow a normal distribution, then the natural log of those asset prices will follow a normal distribution.
And so on your calculator, you need to use the little LN button and the little e-to-the-x button. And that's what we have in the blue box there at the bottom: there's our value at risk. We're just assuming continuous compounding, all that good e-to-the-x stuff that we learned way back in calculus.
And so here's a pretty quick example. So we have a mean of 0.1, standard deviation of 0.15, portfolio currently valued at $20 million. There's the 95% value at risk, so we just plug in those numbers.
It's similar to what we've been doing: 20 times (1 minus e raised to 0.1 minus 0.15 times 1.645), which comes to about $2.7 million. So our portfolio is worth $20 million, and our value at risk at the 95% level of confidence is about $2.7 million.
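Here's a minimal sketch of the lognormal VaR calculation under those same assumptions (again, the function name is mine):

```python
import math
from scipy.stats import norm

def lognormal_var(mu, sigma, confidence=0.95, portfolio_value=1.0):
    """Lognormal VaR: P * (1 - exp(mu - sigma * z)). The loss can never
    exceed the portfolio value, because prices are bounded below by zero."""
    z = norm.ppf(confidence)
    return portfolio_value * (1 - math.exp(mu - sigma * z))

# The example from the slide: mu = 0.1, sigma = 0.15, a $20M portfolio.
print(lognormal_var(0.1, 0.15, portfolio_value=20))   # ~2.73 -> about $2.7M
```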
Now let me read that first bullet point there, because I've been saying this all along. Despite the significant role value at risk plays in risk management (and let me go ahead and emphasize significant role),
it stops short of telling us the amount or magnitude of the actual loss. That was the 30 million or 300 million or three billion that I talked about just a few slides ago: we don't know what the actual loss would be beyond that value at risk estimate. So in order for us to get a sense of that magnitude, we need to compute something called the expected shortfall. What we're going to do is take that tail, chop it into little pieces, assign probabilities to those pieces, and take the average of that, and we'll call that the expected shortfall.
So here's an example. Notice what I say in that first bullet point: expected shortfall is a probability-weighted average of tail losses. So that's going to be an average of tail value at risks. All right, so let's divide the tail into n equal slices, each of which has the same probability mass. Look at my graph on the left-hand side: we've got one, two, three, four, five slices over there (if we're in France, we call those tranches), each with the same probability mass.
We're going to estimate the value at risk for each one of those slices, and then we're just going to take the average of those computed value at risks. So here's just a quick example: a 95% expected shortfall, chopping that tail into slices in increments of 0.5%. So notice, down the left-hand column: 95.5%, 96%, 96.5%, and so on, all the way down to 99.5%, nine slices in all. And those tail value at risks are just the critical values of z, because (look what I did up at the top there) the losses are normally distributed with a mean of zero and a standard deviation of one, just like the z-table. So those are just the z values at each of those confidence levels, and if you take the average, you get about 2.025, and that's the expected shortfall. So notice what we're doing here with this extra step.
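Here's a minimal sketch of that averaging; the slice levels match the example, and the helper name is mine:

```python
import numpy as np
from scipy.stats import norm

def expected_shortfall(confidence=0.95, n_slices=9):
    """Approximate ES for standard normal losses by averaging the tail VaRs
    at equally spaced confidence levels beyond the VaR level."""
    step = (1 - confidence) / (n_slices + 1)                 # 0.5% increments
    levels = confidence + step * np.arange(1, n_slices + 1)  # 95.5% ... 99.5%
    return norm.ppf(levels).mean()                           # average tail VaR

print(expected_shortfall())   # ~2.025
```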
So we start with the value at risk, we get that number and then we re-examine the tail. I mean the value at risk process just ends with that estimate. But now when we add the expected shortfall to it, now we go in and we get our chef's hat out and we get our fork and our knife and we kind of slice it up.
So we're trying to see, okay, like when we're trying to determine whether the steak on our grill is cooked completely: you slice it and you see that toward the middle it gets redder and redder, et cetera. So it adds value to what we're doing. All right, how about this concept of a coherent risk measure? This kind of axiomatic thinking goes back a long way in probability (think Chebyshev in the 1800s), though coherent risk measures themselves were formalized by Artzner and his co-authors in the late 1990s. In order for a risk measure to be coherent, it has to meet all (that's an "all" right up there at the top) of the following conditions. First, if we add two portfolios together, the risk measure can't get any worse than adding the two risks separately, and this sub-additivity condition is where value at risk fails. So just remember, that first one is where value at risk doesn't really help us out in terms of being a coherent risk measure. Second, if we double a portfolio, we double the risk. That's called homogeneity, and that should make perfect sense. Third, if we're comparing Y to X under all scenarios: if Y always has better values than X, then the risk of Y ought to be less than the risk of X. That's called monotonicity.
And then the last condition is that if we add cash to our position (and clearly cash has no risk), then we decrease our risk by that same amount of cash. This is called translation invariance. So here's kind of a summary slide.
Value at risk is not a coherent risk measure because it fails the sub-additivity property, but expected shortfall does satisfy this property and is a coherent risk measure. That means that value at risk sometimes discourages diversification, while expected shortfall does not.
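To see how that failure can happen, here's a minimal sketch with made-up numbers (two independent defaultable bonds, not anything from the slides):

```python
# Two independent bonds, each worth 100, each defaulting (a total loss)
# with probability 4%.
p = 0.04

# Individually: P(loss) = 4% < 5%, so each bond's 95% VaR is zero.
var_a = var_b = 0.0

# Held together: P(at least one default) = 1 - 0.96**2 = 7.84% > 5%,
# so the portfolio's 95% VaR is the loss of one bond, 100.
p_any_default = 1 - (1 - p) ** 2        # 0.0784
var_portfolio = 100.0

# Sub-additivity requires VaR(A+B) <= VaR(A) + VaR(B); here 100 > 0 + 0,
# so diversifying looks worse under VaR even though it clearly isn't.
print(p_any_default, var_portfolio <= var_a + var_b)   # 0.0784 False
```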
And notice that third bullet point I have: the expected shortfall reveals to the risk manager what to expect in all of these bad scenarios. It gives us an estimate of how bad it might be, while the value at risk doesn't really tell us anything beyond that threshold. All right, so what did I say earlier?
You know what we're going to do is calculate value at risk and then we're going to complement that by estimating the expected shortfall. All right, let's move on to these quantiles. All right, so it's possible to estimate coherent risk measures by manipulating the average value at risk method. Manipulating is a strong word.
How about massaging? We're trying to squeeze out of value at risk something that adds to our understanding of that left tail. All right, so what we're going to do under these general coherent risk measures is divide the entire distribution into equal probability slices.
And then we're going to weight them by some kind of a risk aversion measure. Because remember, again, going back to Harry Markowitz: when he drew that efficient frontier in the mean-variance framework, some investors liked to be at the minimum variance portfolio, some investors liked to be way up at the top right corner, and their location depended on the slope of their indifference curves.
Steeply sloped indifference curves were drawn by conservative investors, and flatly sloped indifference curves were drawn by aggressive investors. So it makes some sense then to have a measure of that risk aversion. All right, let me go ahead and illustrate this. So what we're going to do is divide our entire distribution into nine pieces, right? 10, 20, 30, right?
All the way up to 90, so that's nine. And to each of those quantiles we can assign the critical values, and then we can weight them. So look at the 10%, 20%, 30% (that's what I have in this slide): maybe we're not going to weight those at all. But as we hit 40%, we'll weight it a little bit, and then as we move up, boy, look at 90%, there's a bunch of Xs there, so we're going to weight those much more heavily than the 20% or the 10%.
So notice my last bullet point down there: each quantile is weighted by the specific risk aversion function, and then the weighted quantiles are averaged to arrive at the value of the coherent risk measure. And notice that column there: because I have Xs, I don't really have any numbers in there. You just take column A times column B, which is kind of like what we do with a confidence interval; we're using that product to help us understand volatility. And then if you just sum those products there at the bottom, you get that coherent risk measure. A minimal sketch of the idea follows below.
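The slides only show Xs, so the exponential risk-aversion weighting function and its parameter below are purely my assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm

# Nine equal-probability quantiles of a standard normal loss distribution.
levels = np.arange(0.1, 1.0, 0.1)       # 10%, 20%, ..., 90%
quantiles = norm.ppf(levels)            # column A: the critical values

# Column B: a risk-aversion weighting function (assumed exponential) that
# puts little weight on the low quantiles and heavy weight near the 90%.
k = 5.0                                 # risk-aversion parameter (assumed)
weights = np.exp(k * levels)
weights /= weights.sum()                # normalize the weights to sum to 1

# Column A times column B, summed: the coherent risk measure.
print(np.sum(weights * quantiles))
```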
Now, bear in mind that any risk measure estimate is only as useful as its precision, right? You know, I say this to my children regularly when they try to make decisions. I say, all right, if you're wrong on this decision, what are the consequences?
And then, how can you figure out how wrong you're going to be? If you're only going to be this wrong, well, you could probably live with that.
But if you might be this wrong, can you guys see my hands way up there? If you might be this wrong, then who knows what those consequences are? And so the question then becomes, how do we measure the precision of that estimate?
And of course, we've got to go way back to Harry Markowitz in 1952, when he introduced us to the concept of standard deviation as a measure of risk. But then we need to go way, way back in time to the first men and women who discussed regression analysis, confidence intervals, and the standard error. All right, so that's what we're going to do: with the help of standard errors, we're going to build, hopefully build, a precise confidence interval.
All right, so there's the question that I was implying in that previous slide. How do we go about determining that standard error? All right, so let's work through just a quick formula, and then we'll do a quick example. Sample size of n and an arbitrary bin width around a quantile. All right, so we're going to call these things bins and think of them as parts of, you know, kind of an almost normal distribution.
But picture that thing and then we chop those at the left tail. And so those are going to be part of the histogram that you guys probably drew back in your first stats class. So if we take the square root of the variance of that quantile, it's equal to the standard error. And once we have that standard error, then we can construct a confidence interval using that formula there in the blue block.
And then there's a good formula for the standard error of the quantile, and that should look somewhat familiar from what we did back in previous chapters. All right, so let's go through the example: let's construct a 90% confidence interval for a 5% value at risk. Remember, we've already calculated that value at risk; now we want to know, are we going to be this wrong, or are we going to be this wrong? So let's assume a bin width of 0.1 and a sample size of 1,000. Step one is going to be to determine the value of q corresponding to the 5% value at risk from the normal distribution. So there's the 1.645 (we've been using that number in all the slide decks), that critical value of z. In its general form, the confidence interval is going to be q plus or minus the standard error times that critical value again, so what we're doing is determining the width of that confidence interval. So let's determine the range of q. We start with 1.645, cut the 0.1 bin width in half, and go plus or minus 0.05, so q falls inside a bin spanning 1.595 to 1.695. So I want you to picture a normal distribution where, along the horizontal axis, we've got 1.595 and 1.695, and what we want to do is find the area in between those. So we take the 100%, and we subtract out the roughly 4.5%, because that's the probability of the loss exceeding 1.695 (everything from 1.695 to the end of the right tail). Then we subtract the roughly 94.46% that runs from the far left tail all the way up to the 1.595. So if you take the 100%, which is the total, minus the stuff beyond 1.695, minus the stuff below 1.595, you get about 1% of the probability mass in the bin. And we can use that information to compute the standard error. There's that formula; here, let me just go back real quick. Where was that formula?
There it was, at the bottom. And well, there it is again on this slide, so I'm sorry for going back. And what do you get for that standard error? About 0.66.
So let's go back and do the confidence interval: 1.645 plus or minus our standard error (which we just calculated, based on all of this extra material that we've been covering since we ended that value at risk slide), multiplied again by that critical value. So what's our confidence interval? Roughly 0.55 up to 2.74, so from about 2.7 all the way down to 0.6.
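Here's a minimal sketch of the whole calculation, assuming (as the example appears to) that the standard error divides by the probability mass inside the bin:

```python
import math
from scipy.stats import norm

n, p, h = 1000, 0.05, 0.1          # sample size, tail probability, bin width
q = norm.ppf(1 - p)                 # the 5% VaR quantile, 1.645

# Probability mass inside the bin [q - h/2, q + h/2]: about 1%.
mass = norm.cdf(q + h / 2) - norm.cdf(q - h / 2)    # ~0.0103

# Standard error of the quantile, then the 90% confidence interval.
se = math.sqrt(p * (1 - p) / n) / mass              # ~0.66
z90 = norm.ppf(0.95)                                # 1.645 again
print(q - z90 * se, q + z90 * se)                   # ~0.55 to ~2.74
```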
Let's go ahead and finish up this slide deck talking about quantile-quantile plots. And these, as I said in the very beginning, are kind of interesting, because what they do is take two data sets and superimpose them on one particular graph. If the two match up well, then you can say that the two data sets came from the same population. And you can extend that to include a population and a sample; you can do in-sample and out-of-sample; you can do all sorts of things. But really, this is a visual estimate of whether or not the data set comes from some predetermined population distribution.
So what do I have in that bullet point at the top? A graphical tool used to assess whether a data set plausibly comes from some theoretical distribution. You guys remember, in that great James Bond movie The Spy Who Loved Me, Bond came in with this clear screen that had a line on it. He put it up on the wall and it perfectly matched the ship that was driving along the ocean, and they were all astounded. When I was reading this in the chapter, I thought, man, this comes right from a James Bond movie. But look at this one here: we've got lots of stuff in the middle that looks like the data set comes from some theoretical distribution, like the normal distribution. But notice what happens at the extremes: there's a little floating out here on the right, and there's a little floating out here on the left. So the meat of the distribution looks the same, but the tails don't. And that should make perfect sense; boy, this is going to add value to our understanding, especially of that left-hand tail.
And here is just a slide summarizing what I've just been saying here: a tentative view of the distribution. All right, so if the data are drawn from the reference population, then the QQ plot should be linear. And that's what I was saying here about the meat, the middle, of that distribution. Otherwise, it's going to follow a different distribution.
All right, so we can use the intercept and the slope to give us a rough idea of where those parameters lie. And so there could be heavy tails, there could be skewness, there could be outliers, all sorts of stuff there. All right, so here's a first example.
And so what we're doing is we've drawn some data from a normal distribution, and the reference distribution is also normal. So here's that example I was talking about earlier: we have a normal population, and then we have a sample that's also normally distributed. And look, those points fall pretty much on the line. The central mass observations fit a linear QQ plot very closely, while the observations in the tails are just a bit spread out. But either way, this looks like the sample comes from the population; it looks an awful lot like it, and hopefully you agree with me there. Below is a minimal sketch of how you might draw a plot like this.
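This sketch uses simulated data standing in for the slides' examples, and assumes scipy and matplotlib are available:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# A normal sample against a normal reference: points should hug the line.
stats.probplot(rng.standard_normal(500), dist="norm", plot=plt)
plt.title("QQ plot: normal sample vs. normal reference")
plt.show()

# A heavy-tailed sample (Student's t, 3 degrees of freedom): the middle
# still tracks the line, but the tail points peel away from it.
stats.probplot(rng.standard_t(df=3, size=500), dist="norm", plot=plt)
plt.title("QQ plot: heavy-tailed sample vs. normal reference")
plt.show()
```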
Ah, but what about this other plot right here? Oh my gosh, there's no way. The meat, once again, is linear; the middle of the distribution matches. But look what happens in the tails. At the top right you have just a few observations, so that's a light right tail. But look down the left: there are lots of dots going down, and notice that they get more and more negative. So this sounds an awful lot like, hey, let's use this to help us uncover the magnitude of those losses. That thing down the bottom left, boy, that gives us some really good information about extreme events that have very low probabilities. And I think that takes us through this slide deck. So that was fun.