Overview
This lecture discussed the difference between statistical and practical significance in categorical data analysis, highlighting the importance of sample size in interpreting results.
Statistical vs. Practical Significance
- Statistical significance concerns whether an observed difference between groups is unlikely to be due to chance; in the lecture's opening examples, the difference was measured as a percent increase.
- Practical significance considers whether the observed difference is meaningful or large enough to matter in real-world situations.
- Small counts (e.g., 11 vs. 13) can produce a percent increase that looks significant, even though the difference may not be practically important (see the arithmetic sketch after this list).
- Be cautious when interpreting large percentage increases from very small samples, as they may lack practical relevance.
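As a quick arithmetic check, here is a minimal sketch in plain Python using the 11-vs.-13 counts from the example above; it shows how a raw difference of only 2 translates into a roughly 18% increase.

```python
# Counts from the lecture's 11 vs. 13 example.
before = 11
after = 13

raw_difference = after - before
percent_increase = raw_difference / before * 100

print(f"Raw difference:   {raw_difference}")          # 2
print(f"Percent increase: {percent_increase:.1f}%")   # ~18.2%
```

The headline number (an 18% increase) says nothing about how many observations produced it, which is why the lecture cautions against relying on percent increase alone.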
Better Measures of Significance
- Percent increase does not account for sample size and may be misleading.
- More reliable methods include confidence intervals for the difference between two population proportions, two-proportion hypothesis tests, and p-values (see the sketch after this list).
- These methods incorporate sample size and provide a more accurate assessment of significance.
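As a minimal sketch of these measures (the counts and sample sizes below are made up for illustration), the following Python snippet runs a two-proportion z-test with statsmodels and computes a Wald confidence interval for the difference between the two proportions.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical data: successes and sample sizes for two groups.
successes = np.array([11, 13])
sample_sizes = np.array([120, 115])

# Two-proportion z-test: incorporates sample size, unlike a bare percent increase.
z_stat, p_value = proportions_ztest(successes, sample_sizes)
print(f"z = {z_stat:.3f}, p-value = {p_value:.3f}")

# 95% Wald confidence interval for the difference in proportions (p1 - p2).
p1, p2 = successes / sample_sizes
se = np.sqrt(p1 * (1 - p1) / sample_sizes[0] + p2 * (1 - p2) / sample_sizes[1])
margin = norm.ppf(0.975) * se
print(f"95% CI for p1 - p2: ({(p1 - p2) - margin:.3f}, {(p1 - p2) + margin:.3f})")
```

Unlike a bare percent increase, both the p-value and the width of the interval depend directly on the sample sizes.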
Importance of Sample Size
- Large sample sizes combined with large differences support both statistical and practical significance.
- With small sample sizes, a difference can appear significant (e.g., as a large percent increase) even when it is minor and likely not meaningful; the sketch after this list shows how sample size drives whether the same gap passes a formal test.
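To make the sample-size point concrete, here is an illustrative sketch (the 10% vs. 13% rates and the group sizes are invented) that tests the same gap at two very different sample sizes using statsmodels.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Same proportion difference (10% vs. 13%) tested at a small and a large sample size.
for n in (100, 10_000):  # observations per group
    successes = np.array([int(0.10 * n), int(0.13 * n)])
    sample_sizes = np.array([n, n])
    _, p_value = proportions_ztest(successes, sample_sizes)
    print(f"n = {n:>6} per group -> p-value = {p_value:.4g}")
```

At 100 observations per group, the 3-point gap is nowhere near statistical significance (p is roughly 0.5) despite a 30% relative increase; at 10,000 per group the same gap is highly significant, even though its practical importance is unchanged.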
Key Terms & Definitions
- Statistical Significance — When a result is unlikely to have occurred by chance, according to statistical tests.
- Practical Significance — Whether the difference or effect found is large enough to be important or useful in practice.
- Two-Proportion Hypothesis Test — A statistical test comparing the proportions of two groups.
- Confidence Interval — A range of values likely to contain the true difference between groups.
- P-value — The probability of observing a result at least as extreme as the one obtained, assuming no real difference exists (the sketch after this list computes these quantities by hand).
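To tie the terms together, here is a by-hand sketch of the standard formulas behind them; the counts are hypothetical, and only scipy's normal distribution is used.

```python
import math
from scipy.stats import norm

x1, n1 = 45, 300   # successes and sample size, group 1 (hypothetical)
x2, n2 = 30, 300   # successes and sample size, group 2 (hypothetical)

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under "no real difference"

# Two-proportion z-statistic with the pooled standard error.
z = (p1 - p2) / math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

# P-value: probability of a result at least this extreme if no difference exists.
p_value = 2 * norm.sf(abs(z))

# 95% confidence interval for the true difference p1 - p2 (unpooled standard error).
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
margin = norm.ppf(0.975) * se
print(f"z = {z:.3f}, p-value = {p_value:.4f}")
print(f"95% CI for p1 - p2: ({p1 - p2 - margin:.3f}, {p1 - p2 + margin:.3f})")
```

The z-statistic uses a pooled proportion because the null hypothesis assumes the two groups share a common rate, while the confidence interval uses the unpooled standard error because it estimates the actual difference.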
Action Items / Next Steps
- Review concepts of statistical vs. practical significance.
- Prepare for upcoming lessons on confidence intervals, hypothesis testing, and p-values.