
Exploring Simple Experiments and Validity

Aug 1, 2024

Chapter 10: Introduction to Simple Experiments

Key Topics

  • Introduction to simple experiments
  • Examples of simple experiments
  • Aspects of experimental design and methodology
  • Validity and causal claims

Examples of Simple Experiments

1. Note-Taking Methods and Test Scores

  • Researchers: Pam Mueller and Daniel Oppenheimer (2014)
  • Participants: 67 college students
  • Methods: Students took notes either on laptops or by hand while watching TED Talks
  • Results: Both groups scored equally on factual questions; the handwritten group scored higher on conceptual questions
  • Manipulated Variable: Method of note-taking (laptops vs. handwritten)
  • Conclusion: Method of note-taking affects conceptual understanding

2. Serving Bowl Size and Portion Sizes

  • Researchers: Cornell University researchers (2012)
  • Methods: Participants served themselves pasta from either a large or a medium serving bowl
  • Results: Participants took more pasta and consumed more calories from the large bowl
  • Manipulated Variable: Size of the serving bowl (large vs. medium)
  • Conclusion: Serving bowl size influences portion size and caloric intake

Experimental Variables

  • Manipulated Variable: Researcher assigns participants to a particular level (e.g., note-taking method)
  • Measured Variable: Researcher records outcomes (e.g., test scores, amount of pasta consumed)
  • Independent Variable (IV): Manipulated by researcher (e.g., note-taking method)
  • Dependent Variable (DV): Measured by researcher, depends on IV (e.g., test scores)
  • Control Variable: Held constant (e.g., type of pasta)

Causal Claims

  • Three Criteria:
    1. Covariance: Cause variable related to effect variable (e.g., serving bowl size and calories consumed)
    2. Temporal Precedence: Cause variable occurs before effect variable
    3. Internal Validity: Rule out alternative explanations

Experimental Design Methods

Independent Group Designs (Between-Subjects)

  • Post-Test Only: Participants tested on the DV after exposure to IV
  • Pre-Test/Post-Test: Participants tested on the DV before and after exposure to IV
  • Example: Mindfulness training study
  • Advantages: Random assignment can control selection effects (a minimal sketch follows this list)
  • Disadvantages: Requires more participants, and differences between the groups other than the IV can add noise
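
To make random assignment concrete, here is a minimal Python sketch (hypothetical participant IDs and function name, not taken from the studies above) that shuffles a participant list and deals it into two conditions, as a posttest-only design might:

```python
import random

def random_assignment(participants, conditions=("laptop", "longhand"), seed=None):
    """Hypothetical helper: randomly assign each participant to one condition."""
    rng = random.Random(seed)
    shuffled = list(participants)       # copy so the original list is untouched
    rng.shuffle(shuffled)               # a random order breaks any systematic selection
    groups = {c: [] for c in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)  # deal out like cards
    return groups

# Six hypothetical participants assigned to two note-taking conditions
print(random_assignment([f"P{i}" for i in range(1, 7)], seed=42))
```

Because assignment depends only on the shuffle, pre-existing differences between people are spread across conditions on average, which is what "controlling selection effects" means here.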

Within Group Designs (Within-Subjects)

  • Repeated Measures Design: Same participants experience all levels of IV
  • Concurrent Measures Design: Participants exposed to all levels of IV simultaneously
  • Advantages:
    • Participants serve as their own controls
    • Increased statistical power
    • Fewer participants needed
  • Disadvantages:
    • Order effects (e.g., participants becoming full, fatigued, or more practiced across conditions)
    • May not be practical (e.g., teaching methods)
    • Potential for demand characteristics
  • Counterbalancing: Mitigates order effects by varying the order of conditions across participants (see the sketch below)
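
As a rough illustration of full counterbalancing (hypothetical condition labels and participant IDs), this sketch enumerates every possible order of the conditions and cycles participants through those orders:

```python
from itertools import cycle, permutations

def counterbalanced_orders(conditions, participants):
    """Assign each participant one of the possible condition orders, in rotation."""
    orders = cycle(permutations(conditions))   # every order, repeated as needed
    return {person: next(orders) for person in participants}

# Two bowl-size conditions, four hypothetical participants
print(counterbalanced_orders(["large bowl", "medium bowl"], ["P1", "P2", "P3", "P4"]))
```

With two conditions there are only two possible orders, so half the participants get each; with more conditions the number of orders grows factorially, which is why partial schemes such as Latin squares are often used instead of full counterbalancing.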

Validities in Experiments

Construct Validity

  • Evaluates how well variables are operationalized and measured
  • Example: Factual and conceptual questions in note-taking study

External Validity

  • Generalization of results to other populations or settings
  • Example: Note-taking study on college students may not generalize to middle school students

Statistical Validity

  • Asks whether the observed differences are statistically significant
  • Measures effect size (e.g., Cohen's d); a short sketch follows
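
For a concrete sense of what a statistical-validity check involves, here is a minimal sketch using NumPy and SciPy, with made-up scores standing in for two independent note-taking groups (none of these numbers come from the actual study):

```python
import numpy as np
from scipy import stats

# Hypothetical conceptual-question scores for two independent groups
laptop   = np.array([3, 4, 2, 5, 3, 4, 3, 2])
longhand = np.array([5, 6, 4, 6, 5, 7, 5, 4])

# Independent-samples t-test: is the group difference statistically significant?
t, p = stats.ttest_ind(longhand, laptop)

# Cohen's d: mean difference divided by the pooled standard deviation
# (this simple pooling assumes equal group sizes, as in the arrays above)
pooled_sd = np.sqrt((laptop.var(ddof=1) + longhand.var(ddof=1)) / 2)
d = (longhand.mean() - laptop.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```

A small p-value speaks to whether the difference is likely due to chance, while Cohen's d describes how large the difference is in standard-deviation units; both matter when interrogating statistical validity.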

Internal Validity

  • Ensures no design confounds or selection effects
  • Example: Random assignment in note-taking study controlled for selection effects

Summary

  • Discussed examples of simple experiments
  • Explained types of variables (independent, dependent, control)
  • Covered criteria for causal claims (covariance, temporal precedence, internal validity)
  • Compared experimental design methods (independent vs. within group designs)
  • Explained how to interrogate causal claims using the four validities (construct, external, statistical, internal)