Creating High-Quality Assessments

(Adapted for P-12 Use from the CAEP Evaluation Framework for EPP-Created Assessments)

Continually evaluating what students know and are able to do is an essential element of the competency-based educational model. To accurately determine the extent to which students are competent in relation to specific learning outcomes, teachers must administer high-quality assessments. Otherwise, the data from these assessments could paint a skewed picture of a learner's skills. In addition, a set of well-constructed assessments administered over time can enable educators to see patterns and trends, which in turn can inform learning resource selections, instructional support, staffing, and the like.

There are several key components to creating high-quality assessments. Consider each when designing assessments for students:

 

For each component, guiding questions are paired with examples.

Administration and Purpose

Guiding question: Does the assessment clearly state when it is to be administered and why it is necessary?
Example: "This assessment should be taken after completing Unit 3. It will measure your competence in major concepts of flora and fauna."

Guiding question: Will students know what they are expected to do on the assessment?
Example: "You are to respond to each question stem."

Guiding question: Will students know what score they must attain in order to demonstrate the expected level of competence?
Example: "You must earn a holistic score of at least 3 out of 4 possible points on the rubric in order to move on to Unit 4. If your holistic score is less than 3, you must meet with your teacher and then rework the assessment."
Assessment Content

Guiding question: Are question stems/prompts aligned to specific standards, which are in turn aligned to competencies and learning objectives?
Example: "Create a 6-slide presentation teaching another learner how to code." (Aligned to ISTE 4a-c; Competency 3.5; Learning Objective 3.5.1. A brief illustrative sketch follows this section.)

Guiding question: Does the degree of difficulty required by the assessment prompt reflect the difficulty of the standard, competency, and learning objective?
Example: If these focus on analysis and expect the learner to compare and contrast, the assessment prompt should reflect that level of difficulty.

Guiding question: Do performance indicators clearly describe what proficiencies will be evaluated in the assessment?
Example: Learners should know exactly what knowledge and skills they will be evaluated on.
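Where alignments are kept electronically, a simple record for each prompt can make gaps easy to spot. Below is a minimal sketch (in Python, with a hypothetical structure; the ISTE 4a-c, C 3.5, and LO 3.5.1 codes are the ones from the example above) of one way to record and check that every prompt is aligned to at least one standard, competency, and learning objective:

```python
# Illustrative sketch only: a simple alignment record for assessment prompts.
# The ISTE 4a-c, C 3.5, and LO 3.5.1 codes come from the example above; the
# structure itself is hypothetical.

prompts = {
    "Create a 6-slide presentation teaching another learner how to code.": {
        "standards": ["ISTE 4a", "ISTE 4b", "ISTE 4c"],
        "competencies": ["C 3.5"],
        "learning_objectives": ["LO 3.5.1"],
    },
}

# Flag any prompt that is missing one of the three alignment layers.
for prompt, alignment in prompts.items():
    missing = [layer for layer, codes in alignment.items() if not codes]
    if missing:
        print(f"Prompt still needs alignment for: {', '.join(missing)}")
    else:
        print(f"Fully aligned: {prompt}")
```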
Scoring the Assessment

Guiding question: Is the basis for judging student performance clear and well defined?
Example: Students (and parents) should be able to easily see how their work will be scored.

Guiding question: Are multiple, progressive levels of performance clearly identified in the rubric?
Example: Incomplete; Approaching Expectations; Meeting Expectations; Exceeding Expectations.

Guiding question: Has a numerical value been assigned to each level of performance?
Example: 1, 2, 3, 4 (see the brief sketch following this section).

Guiding question: Does the rubric contain detailed, specific expectations, avoiding vague or ambiguous language?
Desirable example: "The essay contained at least 5 paragraphs and a reference list. It included facts that were cited and could be substantiated."
Undesirable example: "The essay was an appropriate length and contained mostly correct information."

Guiding question: Is feedback to the student specific and actionable?
Example: If a student meets or exceeds expectations, they should know why. Likewise, if a student fails to meet expectations, they should know why and what they need to do in order to meet expectations. For struggling students, teachers should provide targeted guidance and mentoring, additional learning resources, and the like.
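To make the numeric tagging and cut score concrete, here is a minimal sketch (in Python, purely illustrative) that maps the four performance levels above to the values 1-4 and applies the example cut score of 3 out of 4 points:

```python
# Illustrative sketch only: maps the four rubric levels above to the numerical
# values 1-4 and applies the example cut score of 3 out of 4 points.

RUBRIC_LEVELS = {
    "Incomplete": 1,
    "Approaching Expectations": 2,
    "Meeting Expectations": 3,
    "Exceeding Expectations": 4,
}

CUT_SCORE = 3  # from the example: at least 3 of 4 points to move on to the next unit


def holistic_result(level: str) -> str:
    """Describe whether a holistic rubric level meets the example cut score."""
    score = RUBRIC_LEVELS[level]
    if score >= CUT_SCORE:
        return f"Score {score}: competence demonstrated; move on to the next unit."
    return f"Score {score}: below the cut score; meet with the teacher and rework the assessment."


print(holistic_result("Approaching Expectations"))
print(holistic_result("Exceeding Expectations"))
```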

Data Validity*

Guiding question: Do assessments measure what they are intended to measure?
Example: The assessment coordinator should document how validity was established (e.g., construct, content, concurrent, predictive).

Guiding question: Have new or significantly revised assessments been piloted?
Example: The assessment coordinator should oversee a controlled pilot of each new grade-level or building-level assessment.
Data Reliability*

Guiding question: Do assessments yield reliable data consistently over time?
Example: The assessment coordinator should document the type of reliability that has been established (e.g., test-retest, parallel forms, inter-rater, internal consistency).

Guiding question: If the same assessment is used by multiple teachers, have they all been trained in evaluating student work in the same way?
Example: The training of scorers and checks on inter-rater agreement and reliability should be documented (see the brief sketch below).

*School-wide or district-wide level
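For the inter-rater checks noted above, one simple, informal starting point is to compute how often two trained scorers assign the same rubric score to the same piece of student work. The sketch below (in Python; the scores shown are hypothetical) computes exact percent agreement; more formal reliability statistics would typically be documented at the school or district level.

```python
# Illustrative sketch only: a simple exact-agreement check between two scorers
# who rated the same set of student work using the 1-4 rubric values above.

def percent_agreement(scores_a: list[int], scores_b: list[int]) -> float:
    """Proportion of students given the same rubric score by both scorers."""
    if len(scores_a) != len(scores_b) or not scores_a:
        raise ValueError("Both scorers must rate the same, non-empty set of work.")
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)


# Hypothetical scores for ten students, one list per scorer.
scorer_1 = [3, 2, 4, 3, 1, 3, 2, 4, 3, 3]
scorer_2 = [3, 2, 3, 3, 1, 3, 2, 4, 4, 3]

print(f"Exact agreement: {percent_agreement(scorer_1, scorer_2):.0%}")  # 80%
```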

For more information regarding how to create high-quality assessments for use at the P-12 level, please contact me:

Roberta Ross-Fisher, PhD

Twitter: @RRossFisher

Blog Site: www.robertarossfisher.com
