Overview
This lecture reviews the concepts of reliability and validity in standardized testing, using examples from the Key Math Test to illustrate how consistent structure affects assessment quality.
Reliability in Assessments
- Reliability refers to the consistency of an assessment across different forms or occasions (see the formula sketch after this list).
- Using similar question structures (e.g., "count to 5" vs. "count to 4") increases reliability.
- Only minor details, such as the numbers or objects involved, should change between test versions to maintain reliability.
- Keeping question wording and question type consistent across test forms yields more reliable results.
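One standard way to quantify this kind of consistency is the parallel-forms reliability coefficient, conventionally computed as the Pearson correlation between students' scores on the two forms. The sketch below uses illustrative symbols that are not taken from the lecture:

```latex
% Parallel-forms reliability (illustrative symbols, not from the lecture):
% X_A and X_B are a student's scores on Form A and Form B of the same test.
r_{AB} = \frac{\operatorname{Cov}(X_A, X_B)}{\sigma_{X_A}\,\sigma_{X_B}}
% Values close to 1 mean the two forms rank students nearly identically,
% i.e., the assessment is highly consistent across forms.
```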
Validity in Assessments
- Validity concerns whether a test accurately assesses the intended concept or skill.
- Using items with the same structure and intent across forms supports validity.
- Changing question structure may decrease both reliability and validity.
- Example: both "count the chicks" and "count the pigs" assess the same counting skill, so either wording supports validity.
Guidelines for Creating Assessments
- Maintain consistent question structure and wording across different forms of the test.
- Change only minor details (numbers or objects) when creating parallel forms (a short template sketch follows this list).
- For formal assessments, use precise labeling for each problem to clarify what is being measured.
- For informal assignments, labeling is more flexible, but clarity is still important.
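To make the "change only minor details" guideline concrete, here is a minimal sketch assuming a single shared item template in which only the number and the object vary; the template and names are hypothetical and not taken from the lecture or the test itself:

```python
# Minimal sketch: two parallel forms built from one shared item template.
# Only the minor details (the number and the object) change between forms;
# the wording and question type stay identical. All names are hypothetical.

ITEM_TEMPLATE = "Count the {objects}. Circle the group that shows {n} {objects}."

FORM_A = {"n": 5, "objects": "chicks"}
FORM_B = {"n": 4, "objects": "pigs"}

def render_item(details: dict) -> str:
    """Fill the shared template with one form's minor details."""
    return ITEM_TEMPLATE.format(**details)

print("Form A:", render_item(FORM_A))
print("Form B:", render_item(FORM_B))
```

Because both forms come from the same template, differences in student performance are more plausibly attributed to the counting skill being measured rather than to differences in wording.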
Key Terms & Definitions
- Reliability — The degree to which an assessment yields consistent results over repeated administrations.
- Validity — The extent to which an assessment measures what it is intended to measure.
Action Items / Next Steps
- When designing assessments, keep question structure and wording consistent across items and across parallel forms.
- Label test problems clearly, especially for formal assessments.
- Review your current assessments for reliability and validity.