Fairness, Validity, and Reliability

Fairness, validity, and reliability are three critical elements of assessment to consider. The Education Evaluation IPA Cohort of 2013 compiled this chart of definitions and examples.

Fairness 
  • Assessment should not discriminate by age, race, religion, nationality, language, gender, need for special accommodations, etc.
  • Objective vs. subjective (objective items are scored consistently regardless of grader)
  • Accessible
  • Have you prepared students sufficiently to demonstrate skill/knowledge?
Example: Blackboard assessment (objective, more black and white, right/wrong)
Non-Example:
Validity 
  • Factually sound
  • Does it matter?
  • Is it relevant?
  • Degree the assessment measures what it claims to measure
  • Does it have value or worth?
  • How well does it correspond to the real world?
  • Pertinent
Example: Student-generated questions used as a review; using rubrics and providing them to students ahead of time (content, critical thinking, communication skills, accountability)
Non-Example: An opinion poll may lean away from validity
Reliability
  • Repeatable – if you give the same assessment to a comparable group of people, will you achieve similar results?
  • Transcends class and instructor
  • How does mode impact the assessment? 
Example: Department-wide final exam (same course), using a pool of questions with a similar level of difficulty and distribution of question topics
Non-Example: Not controlling for question difficulty or for an even distribution of question topics
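Reliability can also be quantified. One common statistic is Cronbach's alpha, which measures how consistently the items on a single assessment hang together. Below is a minimal sketch using hypothetical quiz scores (the `cronbach_alpha` function name and the sample data are illustrative, not from the chart above); values closer to 1 suggest a more internally consistent question pool.

```python
# Minimal sketch of Cronbach's alpha using only the standard library.
# Hypothetical data: each inner list is one student's scores on each item.
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: list of per-student lists, one score per item."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # transpose: per-item score lists
    item_var = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(s) for s in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Four students, three right/wrong (1/0) items on a hypothetical quiz.
scores = [[1, 1, 1], [1, 0, 1], [0, 0, 1], [0, 0, 0]]
print(round(cronbach_alpha(scores), 2))  # prints 0.75
```

A department-wide exam drawn from a shared question pool could run this check per administration; a sharp drop in alpha between sections would hint that the assessment is not transcending class and instructor.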
