Ultimately, what have we done so far? We have studied a proportion using confidence intervals. That is what we did in Section 7.4: we studied how to analyze a proportion using confidence intervals, except we only did that with one population. So what we're going to do now in Section 7.5 is really up the ante, because instead of just studying one population proportion, we are now going to compare two population proportions. So we're going to ask ourselves, "What are our two groups? Men versus women? Treatment group versus control group? Old group versus new group?" And we are going to compare their two proportions.

Now, what this means is we are going to have two of everything. In the past, we talked about the population proportion, the sample size, and the sample proportion, and we're still going to use the same symbols of p, n, and p-hat, except now we are going to do this twice. We're going to have one population, which we'll denote with a subscript of one, and a second population, which we'll subscript with the number two. So we'll use the same root symbols of p, n, and p-hat, but now we'll have twice as many of them, and we need to denote which one came from group one versus group two.

And the act of comparing them is going to be looking at their difference. What do I mean by difference? Well, remember, difference simply means subtraction. If I ask the question, "What is the difference in height between my two girls?" I would literally subtract my taller kid's height minus my shorter kid's height. That's how I compare their heights: I subtract the two numbers. In the same way, we are going to compare these two population proportions by subtracting them.

But as we already know, practically speaking it's hard to know a true population proportion exactly, whether it's from population one or population two. Remember that we learned how to estimate a parameter by looking at its corresponding statistic. So in the same way, we are going to compare the two corresponding sample proportions and use that difference in our calculations. We're going to look at the difference of our two sample proportions and ultimately develop a confidence interval representing the difference in the two population proportions, so that we can determine whether there exists a significant difference between the two proportions. And whether there is a significant difference is really going to boil down to whether or not the confidence interval includes zero. I'm going to get into the weeds of this idea when we talk about our examples. So what I really, really want you to take away from this entire first page is one word: difference. When we study two different proportions, what we're studying is the difference between those two proportions.
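To make the subtraction and the "does the interval include zero" idea concrete, here is a minimal Python sketch of the standard large-sample two-proportion z-interval. The group counts below are made up purely for illustration; in class you'll get the same kind of numbers straight from your calculator.

```python
from math import sqrt
from scipy.stats import norm

# Made-up example data: successes x and sample sizes n for two groups.
x1, n1 = 120, 400   # hypothetical group 1 (e.g., "treatment")
x2, n2 = 90, 400    # hypothetical group 2 (e.g., "control")

p_hat1 = x1 / n1                      # sample proportion for group 1
p_hat2 = x2 / n2                      # sample proportion for group 2
diff = p_hat1 - p_hat2                # "difference means subtraction"

# Standard large-sample 95% interval for p1 - p2:
# (p-hat1 - p-hat2) +/- z* * sqrt( p-hat1(1-p-hat1)/n1 + p-hat2(1-p-hat2)/n2 )
z_star = norm.ppf(0.975)              # about 1.96 for 95% confidence
se = sqrt(p_hat1 * (1 - p_hat1) / n1 + p_hat2 * (1 - p_hat2) / n2)
lower, upper = diff - z_star * se, diff + z_star * se

print(f"difference of sample proportions: {diff:.3f}")
print(f"95% CI for p1 - p2: ({lower:.3f}, {upper:.3f})")

# The "significant difference" question: does the interval include zero?
if lower <= 0 <= upper:
    print("0 is inside the interval: no significant difference detected.")
else:
    print("0 is NOT inside the interval: a significant difference is detected.")
```

With these made-up counts the whole interval lands above zero, which is exactly the kind of interval we would read as a significant difference between the two groups.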
And what we're going to do is construct a confidence interval for that difference between the two proportions. That is the main idea of Section 7.5: we are going to figure out how to construct a confidence interval for the difference of two population proportions. And while the concept of a difference between two proportions is new, I want to emphasize that we're still making a confidence interval. When we talked about confidence intervals, we said there was a three-step process. Step one was to check the conditions of the Central Limit Theorem. Step two was to compute the confidence interval using your calculator. And step three was to interpret that confidence interval.

What I want you guys to see here, and I know I'm kind of zooming out a lot right now, is that even though we are in Section 7.5 looking at two populations, it's the same three steps of confidence intervals all over again: check the conditions, find the confidence interval using your calculator, and do an interpretation.

What I really want to focus on is step one, which makes sense, because if step one doesn't hold, you don't get to move forward. Step one asks us to check four conditions, and three of them should look familiar: random sample, large sample, large population. Conditions one, three, and four are the exact same Central Limit Theorem conditions that we learned in Sections 7.3 and 7.4. That is not a coincidence: we are checking for a random sample, a large sample, and a large population for population one, and then we need to do it again for population two. The idea is that when it comes to checking the conditions, you need to do this for both populations, the exact same Central Limit Theorem conditions from Sections 7.3 and 7.4, but for both population one and population two. That's the reason why we write the word "both." So you need to check for randomness twice. You need to check the number of successes and the number of failures for population one and then again for population two. You need to check the large population condition for both population one and population two. You're literally doing twice the amount of condition checking for conditions one, three, and four.

So truly, the only condition that's new here is the condition of independence. Why? Because independence is a condition comparing the two populations, which is unique to Section 7.5. This is the first time we're looking at two different groups, two different populations. So we need to check condition two, the new one, which asks: are these samples independent of each other? Meaning, the selection of one sample does not affect the selection of the other; if I pull someone from population one, it will not affect who I pull from population two.
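As a rough illustration of "twice the condition checking," here is a sketch of step one carried out for both groups at once. The "at least 10 successes and 10 failures" cutoff and the "population at least 10 times the sample size" rule are common textbook thresholds carried over from the one-sample sections; use whatever exact cutoffs your course states. Randomness and independence still have to be argued in words from how the data were collected, so the code simply records those judgments as yes/no inputs.

```python
def check_two_proportion_conditions(x1, n1, N1, x2, n2, N2,
                                    random_samples, independent_samples):
    """Sketch of step one for a two-proportion confidence interval.

    x, n, N are the number of successes, the sample size, and the
    population size for each group. random_samples and
    independent_samples are True/False judgments you make in words
    from how the samples were collected.
    """
    checks = {
        "1. random sample (both groups)": random_samples,
        "2. independent samples": independent_samples,
        # Common cutoff: at least 10 successes AND 10 failures in EACH
        # sample (check your text for the exact number).
        "3. large sample, group 1": x1 >= 10 and (n1 - x1) >= 10,
        "3. large sample, group 2": x2 >= 10 and (n2 - x2) >= 10,
        # Common cutoff: each population at least 10 times its sample size.
        "4. large population, group 1": N1 >= 10 * n1,
        "4. large population, group 2": N2 >= 10 * n2,
    }
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'FAILS'}")
    return all(checks.values())

# Made-up numbers: 120 successes out of 400 from a population of 50,000,
# and 90 out of 400 from a population of 80,000, both random and independent.
check_two_proportion_conditions(120, 400, 50_000, 90, 400, 80_000,
                                random_samples=True,
                                independent_samples=True)
```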
We are going to do three examples, and with each of those examples you guys are going to see a clear explanation of why the samples are independent. I'm going to walk through it each time to make sure you understand the concept. However, I want to make the point that, honestly, every example we're going to look at in Section 7.5 is in fact going to be independent. So in some ways, condition two is going to be kind of an automatic yes. But I'm still going to make sure I take time to explain, with each example, why the samples are independent.