Have you ever seen a graph with a bunch of these extra lines sticking out all over the place? I'm going to tell you what they're for. Stay tuned.

Have you ever seen a bar graph or a line graph with all these extra weird lines on them, like this one, or this one, or like this? Those weird lines serve an important purpose, and they allow you to draw conclusions that you might not otherwise be able to see. In fact, error bars let you do complicated statistical tests using just your eyeballs. Here's how.

It all starts with error. Now, we've got a few videos on the concept of error in statistics and how it's measured; I'll put a link to them in the video description, or maybe some cards will pop up as the video plays, if you want to check those out. The ten-cent version is that error is about your level of uncertainty about your measurement. If you flip a coin 100 times and get 49 heads and 51 tails, you'd probably still say it's a fair coin, even though it wasn't exactly 50-50. With any measurement there will be some random chance, or error, that might throw the results off a little bit. In other words, we know our measurement isn't perfect.

The key to being sure whether there's a statistically significant difference between two groups is to estimate how much error there is in the measurement. Usually researchers are comfortable saying there is a difference between, let's say, two groups if there's less than a five percent chance that the groups actually come from the same population and only differ due to random chance, and this is where that p-value of less than 0.05 that you may have heard about comes from. The p-value is essentially the probability that you'd see a difference this big just by random chance if the two groups were really drawn from the same population. If I were pulling M&M's out of a bag and putting them into two piles, it would be extremely unlikely that I would get all the red ones in this pile and only blue ones in this pile. So a low p-value means it's unlikely the two piles came from the same bag, and we can be roughly 95 percent confident that there's some true difference between the groups; in other words, this pile came out of a bag of red M&M's and this one came out of a bag of blue ones.

As an example, let's measure jumping spider jumping distances, with caffeine and without, calculate the averages, and make a graph. Just like with the coin, chances are these two numbers aren't going to be exactly the same even if there's no real difference between them. I measured two different groups of spiders, but if caffeine makes no difference to spiders, then they actually all belong to the same population and should have similar jumping distances. So how can I tell whether there's a real difference between the groups or not? Well, I should probably do a t-test or compare the groups statistically in some other way, and we've got a couple of videos on those if you want to see how to do this formally. However, if I want to do this informally, there's an easier way: add some error bars to my graph.

Adding error bars gives you some information about how much variability there was in the sample, and variability reflects error. If the error bars are large, there's a lot of variability in the sample, and I would need a really big difference between the groups to be confident that the difference is real and not just due to the error. On the other hand, if the samples have small error bars, it may be easy to say with some confidence that, even though there's some wiggle room in my estimate of the measurement, these groups appear really different.
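If you want to see the spider example both ways, here's a minimal sketch in Python (assuming NumPy, SciPy, and Matplotlib are installed; the jump distances are made-up numbers for illustration, not real data): it runs the formal independent-samples t-test and then draws the same comparison as a bar graph with standard-error bars.

```python
# A rough sketch, not from the video: made-up jump distances (in cm) for two
# hypothetical groups of jumping spiders, with and without caffeine.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

caffeine = np.array([18.2, 21.5, 19.8, 22.1, 20.4, 23.0, 19.1, 21.7])
no_caffeine = np.array([15.9, 17.2, 16.8, 18.5, 16.1, 17.9, 15.4, 18.0])

# Formal comparison: an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(caffeine, no_caffeine)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a real difference

# Informal comparison: a bar graph with standard-error error bars.
means = [caffeine.mean(), no_caffeine.mean()]
sems = [stats.sem(caffeine), stats.sem(no_caffeine)]  # standard error of the mean

plt.bar(["Caffeine", "No caffeine"], means, yerr=sems, capsize=8)
plt.ylabel("Jump distance (cm)")
plt.title("Spider jump distance (error bars = ±1 standard error)")
plt.show()
```

With made-up numbers like these, the printed p-value and the gap between the error bars tell the same story, which is the whole point of the eyeball method.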
There's a handy rule of thumb that you can use to do a little statistical test with your eyes: look at whether the error bars cross, or overlap, between the groups. If they do, there's probably not a difference between the groups; if they don't cross, there probably is a difference between the groups that is statistically reliable. You have to do the stats to be sure, but the eyeball method works most of the time.

Now, there's an important distinction to be made based on the type of error bar you have. There are a few different kinds of error bars that might be presented, and the conclusions you can draw are technically different depending on which one is in front of you. The most commonly used error bars are confidence intervals and standard errors, and technically the eyeball method tells you something slightly different for each. If you're looking at confidence intervals, usually 95 percent confidence intervals, and the bars do not cross, you can be quite sure there is a significant difference between the groups at the p < .05 level; however, if the bars do cross, you can't be sure that there's not a difference. On the other hand, if you have standard error bars and the bars do not cross, you can't be certain that there is a difference between the groups; but if they do cross, you can be fairly sure there is no reliable difference between them.

Confession time: for graphs, I like to just throw caution to the wind and interpret them all the same way using the eyeball method, but I thought it would be irresponsible if I didn't tell you there is technically a difference between them. I think most people probably do the same thing I do. In practice, the eyeball method works fine for either type of error bar for the most part, but there can be edge cases where it's close and you'd have to make sure you're interpreting it correctly based on the kind of error bar you have. At that point, you might want to just check the formal stats anyway, to be sure.

Okay, so that's what all those weird lines cluttering up the graphs are for. They provide a sense of how much variability there was in the measurements, which we call error, so they are called error bars. You can use them to get a sense of statistical differences between groups at a glance, which makes them must-haves across the sciences.

Don't make an error and forget to hit the like button. If you found this video helpful, check out our other videos, and if you're itching what we're scratching, consider subscribing to stay up to date on all things psychology. Until next time, keep thinking.

We should make a club for statisticians called The Error Bar.
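For anyone who wants to try the eyeball rules of thumb in code, here's a minimal Python sketch. The means, standard errors, and sample sizes are made-up placeholders (roughly matching the spider example above), and the overlap checks just mirror the informal rules, not a formal test.

```python
# A rough sketch of the eyeball method, using made-up summary statistics
# (the group means, standard errors, and sample sizes are assumptions).
from scipy import stats

def bars_overlap(mean1, half1, mean2, half2):
    """Return True if the intervals [mean ± half-width] overlap."""
    return (mean1 - half1) <= (mean2 + half2) and (mean2 - half2) <= (mean1 + half1)

mean_a, sem_a, n_a = 20.7, 0.6, 8   # e.g., caffeinated spiders
mean_b, sem_b, n_b = 17.0, 0.4, 8   # e.g., decaf spiders

# Rule of thumb 1: standard-error bars that cross -> probably no reliable difference.
if bars_overlap(mean_a, sem_a, mean_b, sem_b):
    print("SE bars cross: probably no reliable difference.")
else:
    print("SE bars don't cross: there may be a difference (check the stats).")

# Rule of thumb 2: 95% CI bars that don't cross -> likely significant at p < .05.
ci_a = stats.t.ppf(0.975, df=n_a - 1) * sem_a   # half-width of the 95% CI
ci_b = stats.t.ppf(0.975, df=n_b - 1) * sem_b
if bars_overlap(mean_a, ci_a, mean_b, ci_b):
    print("95% CI bars cross: can't tell either way from the graph alone.")
else:
    print("95% CI bars don't cross: quite likely a significant difference (p < .05).")
```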