Content Validity: One Indicator of Assessment Quality


Updated on April 13, 2023 to include additional CVR calculation options from Dr. Gideon Weinstein. Used with permission. 

In this piece, we will focus on one important indicator of assessment quality: Content Validity.

Proprietary vs. Internal Assessments

As part of their programmatic or institutional effectiveness plan, many colleges and universities use a combination of assessments to measure their success. Some of these assessments are proprietary, meaning that they were created externally, typically by a state department of education or an assessment development company. Other assessments are internal, meaning that they were created by faculty and staff within the institution. Proprietary assessments have been quality-tested for validity and reliability. In other words:

At face value, does the assessment measure what it’s intended to measure? (Validity)
Will the results of the assessment be consistent over multiple administrations? (Reliability)

Unfortunately, most colleges and universities fail to confirm these elements in the assessments they create themselves. The resulting data are often less reliable, and thus far less usable, than they could be. It’s much better to take the time to develop assessments carefully and thoughtfully to ensure their quality, which includes checking them for content validity. One common way to determine content validity is the Lawshe method.

Using the Lawshe Method to Determine Content Validity

The Lawshe method is a widely used approach to determine content validity. To use this method, you need a panel of experts who are knowledgeable about the content you are assessing. Here are the basic steps involved in determining content validity using the Lawshe method:

  • Determine the panel of experts: Identify a group of experts who are knowledgeable about the content you are assessing. The experts should have relevant expertise and experience to provide informed judgments about the items or questions in your assessment. Depending on the focus of the assessment, these could be faculty who teach specific content or external subject matter experts (SMEs) such as P-12 school partners, healthcare providers, business specialists, IT specialists, and so on.
  • Define the content domain: Clearly define the content domain of your assessment. This could be a set of skills, knowledge, or abilities that you want to measure. In other words, you would identify specific observable or measurable competencies, behaviors, attitudes, and so on that will eventually become questions on the assessment. If these are not clearly defined, the entire assessment will be negatively impacted.
  • Generate a list of items: Create a list of items or questions that you want to include in your assessment. This list should be comprehensive and cover all aspects of the content domain you are assessing. It’s important to make sure you cover all the competencies, behaviors, attitudes, and so on that you listed in step 2 above.
  • Have experts rate the items: Provide the list of items to the panel of experts and ask them to rate each item for its relevance to the content domain you defined in step 2. In the Lawshe method, each expert rates each item on a three-point scale: essential; useful, but not essential; or not necessary. So, if it’s an assessment to be used with teacher candidates, your experts would likely be P-12 teachers, principals, educator preparation faculty members, and the like.
  • Calculate the Content Validity Ratio (CVR): The CVR is a statistical measure of the extent to which the items in your assessment are relevant to the content domain. To calculate the CVR for an item, use the formula CVR = (ne – N/2) / (N/2), where ne is the number of experts who rated the item as essential and N is the total number of experts. The CVR ranges from -1 to +1, with higher values indicating greater content validity. Note to those who may have a math allergy: at first glance this may seem complicated, but it is quite easy to calculate, as the short sketch following this list shows.
  • Determine the acceptable CVR: Determine the acceptable CVR based on the number of experts in your panel. There is no universally accepted cutoff, but the closer an item’s CVR is to 1, the higher its content validity. A good rule of thumb is to aim for a CVR of .80.
  • Eliminate or revise low CVR items: Items with a CVR below the acceptable threshold should be eliminated or revised to improve their relevance to the content domain. Items with a CVR above the acceptable threshold are considered to have content validity.
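To make the arithmetic concrete, here is a minimal sketch in Python. The function name, item labels, and panel counts are illustrative, not part of the Lawshe method itself. For example, if 9 of 10 experts rate an item as essential, CVR = (9 – 5) / 5 = 0.80.

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: (ne - N/2) / (N/2), which ranges from -1 to +1."""
    half = n_experts / 2
    return (n_essential - half) / half

# Illustrative panel: 10 experts, with the count of "essential" ratings per item.
N = 10
essential_counts = {"item_1": 9, "item_2": 5, "item_3": 3}
THRESHOLD = 0.80  # the rule-of-thumb target used in this article

for item, ne in essential_counts.items():
    cvr = content_validity_ratio(ne, N)
    verdict = "keep" if cvr >= THRESHOLD else "revise or eliminate"
    print(f"{item}: CVR = {cvr:+.2f} -> {verdict}")
```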

As an alternative to the steps outlined above, the CVR computation with its 0.80 rule of thumb can be replaced with a simpler method, according to Dr. Gideon Weinstein, mathematics expert and experienced educator. His suggestion: just compute the percentage of experts who consider the item to be essential (ne/N), with a rule of thumb of 90%. Weinstein went on to explain that “50% is the same as CVR = 0, with 100% and 0% scoring +1 and -1. Unless there is a compelling reason that makes a -1 to 1 scale a necessity, then it is easier to say, ‘seek 90% and anything below 50% is bad.’”
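Weinstein’s percentage check is even simpler to compute. Here is a minimal sketch using the same illustrative panel as above; the 90% and 50% cut-offs are the ones he describes.

```python
def percent_essential(n_essential: int, n_experts: int) -> float:
    """Weinstein's alternative: the share of experts who rate an item as essential."""
    return n_essential / n_experts

# Same illustrative 10-expert panel as above.
for item, ne in {"item_1": 9, "item_2": 5, "item_3": 3}.items():
    pct = percent_essential(ne, 10)
    if pct >= 0.90:
        verdict = "meets the 90% rule of thumb"
    elif pct >= 0.50:
        verdict = "borderline"  # 50% corresponds to CVR = 0
    else:
        verdict = "bad"  # "anything below 50% is bad"
    print(f"{item}: {pct:.0%} essential -> {verdict}")
```

Since CVR = 2(ne/N) – 1, the two checks are simply rescalings of each other, which is Weinstein’s point: the percentage version conveys the same information on a more intuitive scale.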

Use Results with Confidence

By using the Lawshe method for content validity, college faculty and staff can ensure that the items in their internally created assessments measure what they are intended to measure. When coupled with other quality indicators such as interrater reliability, assessment data can be analyzed and interpreted with much greater confidence and thus can contribute to continuous program improvement in a much deeper way.

###

About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: Roberta@globaleducationalconsulting.com

 

Top Photo Credit: Unseen Studio on Unsplash