Overview
This lecture covers how psychologists analyze findings, distinguish between correlation and causation, design experiments, ensure data quality, and report research accurately.
Correlational Research
- Correlation shows a relationship between two or more variables but does not imply causation.
- The correlation coefficient (r) ranges from -1 to +1 and indicates the strength and direction of the relationship (see the sketch after this list).
- Positive correlation: variables move in the same direction; negative correlation: variables move in opposite directions.
- Scatterplots visually represent the strength and direction of correlations.
- Correlation helps predict outcomes but cannot determine cause and effect.
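Below is a minimal sketch of how r can be computed, assuming Python with NumPy; the study-hours and exam-score values are hypothetical illustration data, not figures from the lecture.

```python
import numpy as np

# Hypothetical data: hours studied and exam scores for ten students
study_hours = np.array([1, 2, 2, 3, 4, 5, 6, 6, 7, 8])
exam_scores = np.array([52, 55, 60, 58, 65, 70, 72, 75, 78, 85])

# Pearson correlation coefficient r, bounded between -1 and +1
r = np.corrcoef(study_hours, exam_scores)[0, 1]
print(f"r = {r:.2f}")  # positive r: the variables move in the same direction

# A strong r supports prediction, but it cannot show that studying *causes*
# higher scores; a third (confounding) variable could drive both.
```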
Causation and Experiments
- Only experiments, not correlational studies, can determine cause-and-effect relationships.
- Experiments require a clear hypothesis and well-defined variables.
- Experimental design uses experimental and control groups, differing only by the experimental manipulation.
- Operational definitions specify how variables are measured for clarity and reproducibility.
- Single-blind studies control participant expectations (the placebo effect); double-blind studies, in which neither participants nor experimenters know group assignments, also minimize experimenter bias.
Variables in Experiments
- Independent variable: manipulated by the experimenter.
- Dependent variable: measured outcome, expected to change due to the independent variable.
Sampling and Assignment
- Samples are subsets of larger populations; random samples give each member an equal chance of selection.
- Random assignment divides participants into groups to minimize preexisting group differences.
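The distinction between sampling and assignment can be made concrete with a small sketch using Python's standard library; the 1,000-person population, sample size of 40, and group labels are illustrative assumptions.

```python
import random

# Random sample: every member of the population has an equal chance of selection
population = [f"person_{i}" for i in range(1000)]  # hypothetical population
sample = random.sample(population, 40)             # draw 40 participants at random

# Random assignment: each sampled participant has an equal chance of
# ending up in the experimental or the control group
random.shuffle(sample)
experimental_group = sample[:20]
control_group = sample[20:]

print(len(experimental_group), len(control_group))  # 20 20
```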
Issues and Limitations
- Some variables (e.g., sex) cannot be manipulated; such studies are quasi-experimental and cannot establish causality.
- Ethical constraints limit certain experiments.
- Statistical analysis determines whether findings are significant; the conventional threshold is p < .05, i.e., less than a 5% probability of obtaining the result by chance if there were no real effect.
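To make the 5% criterion concrete, here is a hedged sketch using SciPy's independent-samples t-test; the two score lists are invented example data.

```python
from scipy import stats

# Hypothetical outcome scores for the two groups of an experiment
experimental = [78, 82, 75, 90, 85, 88, 79, 84]
control = [70, 72, 68, 75, 71, 74, 69, 73]

# Independent-samples t-test compares the group means
t_stat, p_value = stats.ttest_ind(experimental, control)

# Conventional criterion: the difference is "statistically significant"
# if p < .05, i.e., under a 5% probability of a difference this large
# arising by chance when there is no real effect.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```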
Reporting and Reviewing Research
- Research is shared in peer-reviewed journals for quality control and replication.
- Replication confirms reliability of findings; failures to replicate can challenge original conclusions.
- Retractions occur when data are falsified or fabricated, or when the design is fundamentally flawed, as with the retracted study behind the vaccine-autism myth.
Reliability and Validity
- Reliability: ability to consistently reproduce results.
- Validity: accuracy of measuring what is intended.
- Types of reliability: inter-rater, internal consistency, test-retest (see the sketch after this list).
- Types of validity: ecological, construct, face.
- Valid measures must be reliable, but reliable measures are not always valid.
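As one concrete illustration, test-retest reliability is commonly quantified as the correlation between two administrations of the same measure; the sketch below assumes SciPy and uses hypothetical questionnaire scores.

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same participants measured twice, two weeks apart
time_1 = [12, 18, 15, 22, 9, 17, 20, 14]
time_2 = [13, 17, 16, 21, 10, 18, 19, 15]

# A high correlation between administrations indicates good test-retest reliability
r, _ = pearsonr(time_1, time_2)
print(f"test-retest reliability r = {r:.2f}")

# Note: a high r shows the measure is consistent (reliable); it says nothing
# about whether it captures the intended construct (validity).
```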
Key Terms & Definitions
- Correlation — A relationship between two or more variables.
- Correlation coefficient (r) — Statistic indicating strength and direction of a relationship.
- Positive correlation — Both variables increase or decrease together.
- Negative correlation — One variable increases as the other decreases.
- Confounding variable — An outside factor that influences both variables of interest, creating a misleading association.
- Illusory correlation — Perceived relationship where none exists.
- Experiment — Research method to determine cause and effect.
- Independent variable — Variable manipulated by the experimenter.
- Dependent variable — Variable measured for changes.
- Random sample — Subset of a population with equal selection chance.
- Random assignment — Equal chance for participants to be in any group.
- Single-blind study — Participants unaware of group assignment.
- Double-blind study — Both participants and experimenters unaware of assignments.
- Placebo effect — Expectation-driven changes in participants.
- Reliability — Consistency of measurement.
- Validity — Accuracy of measurement.
Action Items / Next Steps
- Practice identifying independent and dependent variables in study examples.
- Review the difference between causation and correlation for exam prep.
- Explore interactive scatterplots to understand correlation visually.