Hi everyone, this is intro stats with Matt Teachout, and today I wanted to show you how you can calculate critical values in StatKey. StatKey is found at lock5stat.com. It's a great program, and there's also a book that goes with it that you're welcome to look at, but today I just want to show you how to use StatKey to look up critical values. Traditionally we would look up critical values on charts. I'm not a big fan of charts; I think we're in the technology age and we should be using technology to calculate things, and StatKey has a great tool for looking up critical values. You just click on the button that says StatKey, and that takes us into the program. We're going to be looking at theoretical distributions, because critical values are often based on theoretical distributions: the standard normal distribution for z-scores, the t distribution (or Student's t distribution) for t-scores, and then we also have chi-square and F. If you look under "Theoretical Distributions," you'll see "Normal." The Normal option really does standard normal calculations for looking up critical values; you can also adapt it to look up normal probabilities if you want a normal probability calculator, but today we're just using it to look up critical z-score values. Then here are the t-scores. This χ² symbol is the Greek letter chi; chi-square is a very famous distribution used for a lot of things in stats. Today we're focusing on critical values used for confidence intervals. Usually we use z-scores (the Normal option) for one or two population proportion confidence intervals; t-scores are usually used for means, so one or two population mean confidence intervals; and chi-square can be used for one population variance or one population standard deviation confidence interval calculations. So all three are very useful.
This is F. F is a little more advanced; that's basically the distribution we use with ANOVA, so later on in the class when we get to that, we'll be looking back at this distribution. All right, so let's look these up. We said that a lot of times statisticians like to use 90%, 95%, or 99% confidence levels when they're calculating confidence intervals. You could really look up any level you want, but those are the most used, and 95% is by far the most famous; more often than not, a confidence interval is a 95% confidence interval. So let's look these up. The Normal button really means z-score. It doesn't say z-score on it, but that's what you're using when you look up a z-score critical value. A lot of times you'll see it in stat books as a z with a little c subscript, just to tell you it's one of these famous critical value z-scores. If you click on Normal, notice that the mean is set at 0 and the standard deviation is set at 1. That's really important: it tells us we're dealing with z-scores. If the mean is not set at 0 and the standard deviation is not set at 1, you'll be doing some other kind of normal calculation, not looking up z-score critical values. So if we're doing one or two population proportion confidence intervals, these are the critical values we want. Now, since we're doing two-tailed confidence intervals, we want two tails, so you just click on "Two-Tail," and there's the first one: 95% in the middle, 2.5% in each tail, and there are the famous critical values. Anybody who has been in stats for a long time has these memorized because they're so famous: the two z-score critical values for 95% are plus or minus 1.96. Now what if we want to do a 90% confidence interval? I just change the middle proportion to 0.90, and there we go; now we've got 5% in each tail, and there are the two
famous critical values that we use in 90% calculations. In a lot of traditional programs, if you click a 90% proportion confidence interval, the computer is already programmed with these numbers. These tell us how many standard errors away we need to be to be 90% confident. We saw that for 95% it was 1.96 standard errors away; for 90% confidence we only have to be 1.645 standard errors away, so the two critical values are plus or minus 1.645. What about 99%? If you click on the middle percentage and change it to 0.99, we can look up the critical values for 99%, and again these are also famous: the z-score critical values for 99% are plus or minus 2.576. So when you click on one or two proportion confidence intervals, these are the critical values most computer programs are already programmed with. You can also see what this implies: the higher the confidence level, the farther away we want to be, in other words, more standard errors away. That makes the margin of error larger, so a 99% confidence level has a much larger margin of error than a 90% one. All right, those are the three famous z-scores. Let's go back and do the t-scores now. With t-score critical values you'll need to know the sample size. t usually goes with means, so you'll have the data and some kind of degrees of freedom calculation. One population degrees of freedom is usually n minus 1, so one less than your sample size. Two population degrees of freedom is a very complicated formula; there are a lot of online two population mean degrees of freedom calculators, so if you just google that, one will pop up, and it's an easy way to get your degrees of freedom without having to go through that terrible formula. So basically,
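If you'd rather check these famous z-score critical values in code instead of a chart or StatKey, Python's standard library can do the same two-tailed lookup. This is just a sketch: the critical value for confidence level C is the inverse CDF of the standard normal evaluated at 1 − (1 − C)/2.

```python
from statistics import NormalDist  # standard normal: mean 0, sd 1

def z_critical(confidence):
    """Two-tailed z-score critical value for a given confidence level."""
    tail_area = (1 - confidence) / 2           # area left in each tail
    return NormalDist(0, 1).inv_cdf(1 - tail_area)

for c in (0.90, 0.95, 0.99):
    print(f"{c:.0%}: plus or minus {z_critical(c):.3f}")
# 90% gives 1.645, 95% gives 1.960, 99% gives 2.576 -- the same values StatKey shows
```

Notice the ordering the video points out: as the confidence level rises, the critical value (and with it the margin of error) gets larger.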
once we know our degrees of freedom, we can look up t-scores. Think of a t-score like a z-score: it tells us how many standard errors away we need to be for 90%, 95%, or 99% confidence, but there's a different t-score for every sample size. William Gosset had the rather brilliant idea, and it makes a lot of sense, that the bigger the data set, the less error you should have, and the smaller the data set, the more error you have. So t-scores work almost like a built-in error correction on the z-score for small data sets. We'll use them a lot; they're really great for mean confidence intervals, and really for one or two population mean inferential statistics in general. So let's suppose I had a data set of 40; that means my degrees of freedom would be 39. I put in 39, click OK, and then I can do just what I did with the z-scores: I click Two-Tail, since I'm doing two-tailed confidence intervals, and here's my 95%. It's going to be a different number for every sample size, though, so it's not like the three famous z-score values. For 39 degrees of freedom, the 95% t-score critical value is plus or minus 2.023. Like I said, in the old days we used to look these up on charts; again, I'm not a big fan of the charts. I like to use technology, and I like the visual here. Now if I want 90%, I just change the middle to 0.90; now we've got plus or minus 1.685 for 39 degrees of freedom. If I want 99%, I change the middle to 0.99, very similar, and there we go: plus or minus 2.709. Now the key here is that the t distribution changes its curve with the degrees of freedom. Basically, the smaller the sample size, the farther out the critical values get. This is the idea that less random data should have more error, so
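The same two-tailed lookup works in code for t-scores; here's a sketch using `scipy.stats` (an assumption on my part that SciPy is installed, since StatKey itself doesn't need it), reproducing the 39-degrees-of-freedom case from a data set of 40.

```python
from scipy.stats import t  # Student's t distribution

def t_critical(confidence, df):
    """Two-tailed t-score critical value for a confidence level and degrees of freedom."""
    tail_area = (1 - confidence) / 2
    return t.ppf(1 - tail_area, df)   # inverse CDF of the t distribution

df = 40 - 1   # data set of 40, so n - 1 = 39 degrees of freedom
for c in (0.90, 0.95, 0.99):
    print(f"{c:.0%}, df={df}: plus or minus {t_critical(c, df):.3f}")
# 95% at 39 degrees of freedom gives about 2.023, matching StatKey
```

Unlike the z-scores, you'd rerun this for every sample size, because the critical value depends on the degrees of freedom.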
basically it's a way of increasing the margin of error in the confidence interval calculation. For example, suppose I change the degrees of freedom here; let's say I had a data set of only 20, so my degrees of freedom is 19. I know you probably can't tell, but the curve actually has changed, and the critical values will now be different; in fact, they'll be bigger. If we click Two-Tail, you can see that for 95% we're now at plus or minus 2.093, a little farther out than for 39 degrees of freedom. If I go 90%, with 0.90 in the middle, now I'm at plus or minus 1.729, again a little farther out than the 90% value at 39 degrees of freedom. And then of course for 99% we can put in 0.99 if I want to look up that critical value, and there we go. By the way, I'm focusing on confidence intervals today, but these are the same kind of critical values we'll be looking up for hypothesis testing. There we'll be focusing on whether it's a right-tailed test or a left-tailed test, so that'll be a little different, but it's the same idea: you can look these up with the theoretical distribution calculator, and StatKey is fabulous; love it. Now let's look at another one. I mentioned chi-square. This is maybe going a little overboard for an intro stats class, but chi-square is used in a lot of different calculations in stats. It's very famous in the categorical association test, and it's also used in traditional formula calculations for one population variance or one population standard deviation confidence intervals. Notice that chi-square also has a degrees of freedom, so you have to use the degrees of freedom, and it depends on what situation you're in. For example, goodness-of-fit tests often have degrees of freedom of k minus 1, the number of groups minus 1. With one population variance confidence
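The "smaller sample, bigger critical value" behavior is easy to see in code. This sketch (again assuming SciPy is available) compares the two degrees-of-freedom settings from the video side by side.

```python
from scipy.stats import t

def t_critical(confidence, df):
    """Two-tailed t critical value: inverse CDF at 1 - (1 - confidence)/2."""
    return t.ppf(1 - (1 - confidence) / 2, df)

# Smaller data set -> fewer degrees of freedom -> critical values farther out,
# which is Gosset's built-in error correction at work.
for df in (19, 39):
    print(f"df={df}: 95% critical value is plus or minus {t_critical(0.95, df):.3f}")
# df=19 gives about 2.093, wider than the 2.023 we saw at df=39
```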
intervals, it's often n minus 1. So let's say I'm doing a confidence interval for variance and I need to look up the critical value. Let's suppose I had a data set of 20, so I had 19 degrees of freedom. I just put that in, and we get the chi-square distribution. You can see visually that this does not look very normal; chi-square is not a normal distribution. The smaller the degrees of freedom, the more skewed it looks, and it never really gets fully normal. This is a different distribution that's used a lot, and this is why it matches up so nicely with variance: if we remember when we studied variance sampling distributions, they tended to be very skewed no matter what, so this tends to match up pretty well, which is why
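The chi-square lookup can be sketched in code too (again assuming SciPy). One thing worth seeing: because chi-square is skewed rather than symmetric, a two-tailed interval needs two different cutoffs, a lower and an upper, instead of a single plus-or-minus value like z or t.

```python
from scipy.stats import chi2

def chi2_critical(confidence, df):
    """Lower and upper chi-square critical values for a two-tailed interval.

    Unlike z and t, chi-square is skewed, so the two cutoffs are not
    symmetric plus/minus values: we take the inverse CDF at each tail.
    """
    tail_area = (1 - confidence) / 2
    return chi2.ppf(tail_area, df), chi2.ppf(1 - tail_area, df)

lo, hi = chi2_critical(0.95, 19)   # data set of 20 -> n - 1 = 19 degrees of freedom
print(f"95%, df=19: lower {lo:.3f}, upper {hi:.3f}")
# roughly 8.907 on the left and 32.852 on the right
```

These two cutoffs are the ones that feed into the traditional one population variance or standard deviation confidence interval formula.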