Key Assessments and Quality Assurance

Engineering students completing Key Assessment

Introduction

In today’s data-driven educational landscape, ensuring program quality and student success requires robust assessment strategies. Key assessments serve as critical checkpoints throughout a student’s academic journey, providing vital information about both individual achievement and overall program effectiveness. When thoughtfully designed and implemented, these assessments become powerful tools that drive continuous improvement and demonstrate institutional accountability, which is essential for regulatory compliance and for accreditors such as the Higher Learning Commission, the Association for Biblical Higher Education, the Association for Advancing Quality in Educator Preparation, and the Council for the Accreditation of Educator Preparation. This post explores the types of key assessments, their applications, and how they collectively contribute to quality assurance in higher education.

What is a Key Assessment?

A key assessment is one that’s required for all students enrolled in a specific program of study. For example, all pre-licensure nursing students may need to take one or more common assessments that measure their knowledge and/or performance at periodic intervals. At the end of their program, they must pass the NCLEX to earn their license. Teacher candidates are required to complete an internship and then take a specified basic skills test or state licensure exam such as the Praxis series. For admission into medical school, the MCAT serves as a key assessment; once enrolled, medical students must pass specified assessments throughout their program.

Not all assessments are considered key assessments: only those that provide particularly valuable insight into whether students have achieved the intended learning objectives of a particular course, program, or institution. Alternatively, they can be used to solicit feedback from various stakeholder groups. A key assessment can be proprietary (developed outside the institution), or it can be internally created.

Quantitative Key Assessments

Common types of key assessments used within the higher education landscape utilize both quantitative and qualitative measures. While quantitative assessments provide valuable numerical data, qualitative assessments offer deeper insight into student learning and program effectiveness through more open-ended formats. For example:

Objective Exams

Traditional objective exams consist of closed-response and constructed-response questions that typically focus on content knowledge and on how learners are able to apply that knowledge within a specified context or prompt. These assessments are commonly associated with true/false, multiple choice, fill-in-the-blank, matching, and short answer questions.

Surveys

Surveys are commonly administered at the end of each course for the purpose of soliciting learner feedback about the quality of instruction. Survey results are then routed to department chairs who commonly use them to monitor faculty performance, curriculum, and other factors. In addition, annual surveys have become a popular method for gauging program effectiveness when sent to graduates and their employers. Sometimes, faculty and staff are surveyed to determine how they feel about proposed changes to institutional policies and procedures.

Traditional objective exams and Likert-scale survey responses are often scored through software. It’s a quick and simple process to tabulate responses and grade true/false, multiple choice, fill-in-the-blank, and in some cases even short answer responses. Modern survey platforms offer sophisticated reporting features that easily disaggregate data by specified parameters. Analysis tools make it a snap to “slice and dice” the data, track trends over time, and identify specific strengths or areas for improvement.
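As a rough illustration of the kind of disaggregation described above, here is a minimal sketch using only the Python standard library; the survey items, respondent groups, and scores are hypothetical, and a real survey platform would handle this internally:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical Likert-scale responses (1 = strongly disagree ... 5 = strongly agree),
# each tagged with a respondent group so results can be disaggregated.
responses = [
    {"group": "graduates", "item": "program_quality", "score": 4},
    {"group": "graduates", "item": "program_quality", "score": 5},
    {"group": "employers", "item": "program_quality", "score": 3},
    {"group": "employers", "item": "program_quality", "score": 4},
    {"group": "graduates", "item": "advising", "score": 2},
    {"group": "employers", "item": "advising", "score": 3},
]

# Bucket scores by (group, item) so each disaggregated cell is summarized separately.
cells = defaultdict(list)
for r in responses:
    cells[(r["group"], r["item"])].append(r["score"])

# Report the mean score and response count for each cell.
for (group, item), scores in sorted(cells.items()):
    print(f"{group:10s} {item:16s} mean={mean(scores):.2f} n={len(scores)}")
```

The same grouping idea extends to any parameter a program tracks, such as cohort year, campus, or delivery mode.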

Qualitative Key Assessments

Focus Group Interviews

Similar to surveys, focus group interviews are intended to yield feedback from a specified group of respondents. Although they are more time-consuming to administer and score, interviews can yield far richer, more in-depth insights than surveys because interviewers can hold an actual conversation with participants. They can ask follow-up questions or provide an example if needed to ensure understanding. Responses are then coded according to theme, and results can be summarized. When compared with similar questions on surveys, the combination of these two data sources can be quite powerful in learning how stakeholder groups really feel about specific aspects of a program.

Project-Based Assessments

Project-based assessments are commonly used in coursework for assignments designed to engage learners’ knowledge more deeply. They require application of content knowledge to compare and contrast, analyze, evaluate, and synthesize. Project-based learning can, and often does, span all six cognitive levels of Bloom’s Taxonomy.

Most project-based assessments are language-intensive, include multiple components, and cannot be easily scored through automation. For example, teacher candidates may be tasked with creating a standards-based unit or lesson plan using a specific instructional method. Engineering students may be asked to work in teams to design a new fuel-efficient automobile within specified parameters. Business majors may be assigned to create a print and multimedia marketing campaign for a fictitious company based on its mission. Key assessments such as these require a different scoring system: an analytic rubric with clearly differentiated performance levels (I recommend four) and specific performance indicators based on criteria from the assignment instructions.
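To make the structure of such a rubric concrete, here is a minimal sketch of an analytic rubric with four performance levels; the criteria, descriptors, and scores are hypothetical, and real indicators would come directly from the assignment instructions:

```python
# Four clearly differentiated performance levels, as recommended above.
LEVELS = {1: "Beginning", 2: "Developing", 3: "Proficient", 4: "Exemplary"}

# Hypothetical criteria for a standards-based lesson plan assignment.
RUBRIC = {
    "alignment_to_standards": "Unit objectives are aligned to state standards",
    "instructional_strategies": "Lesson uses the specified instructional method",
    "assessment_plan": "Formative and summative assessments match objectives",
}

def score_submission(ratings: dict) -> dict:
    """Validate one rater's scores against the rubric and summarize them."""
    for criterion, level in ratings.items():
        if criterion not in RUBRIC:
            raise ValueError(f"Unknown criterion: {criterion}")
        if level not in LEVELS:
            raise ValueError(f"Level must be 1-4, got {level}")
    total = sum(ratings.values())
    return {
        "total": total,
        "average": total / len(ratings),
        "labels": {c: LEVELS[lvl] for c, lvl in ratings.items()},
    }

result = score_submission({
    "alignment_to_standards": 4,
    "instructional_strategies": 3,
    "assessment_plan": 3,
})
print(result["total"], result["labels"])
```

Storing one score per criterion, rather than a single holistic grade, is what later makes it possible to aggregate and compare performance criterion by criterion.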

Performance-Based Assessments

Depending on the context, performance-based assessments typically also utilize rubrics. Whereas project-based assessments are typically completed as part of a course, performance-based assessments are usually connected to field or clinical experiences such as an internship, apprenticeship, in-house rotation, or student teaching. Rather than asking students to apply their knowledge to a single, specified project, performance-based assessments require them to demonstrate proficiency or competency across numerous skills simultaneously within a real-world context.

In many ways, performance-based assessments represent the highest level of key assessment. Consider a student who’s been taking flying lessons: they’ve passed all their written exams, completed various tasks assigned by their instructor, and now they’re ready to fly solo. This last step is where they demonstrate competency. Of course, in this context, true competency is determined by whether or not they can successfully take off, fly the aircraft, and land safely. In nursing, teaching, music, foreign language, and engineering, students demonstrate their proficiency in very different ways. But all of these are excellent indicators not only of what students know and are able to do; their performance is also indicative of the program’s quality.

How are Key Assessments Used in Quality Assurance?

Each key assessment provides a snapshot of data about a given program. When viewed in isolation, each one represents one piece of a giant jigsaw puzzle. But when considered in groups or altogether, the insight we can glean is much more powerful.

When we are able to consider the results of key assessments that measure similar things, we achieve triangulation, which is akin to a three-legged stool. One leg allows the stool to stand but that’s about all. Two legs balance it out a bit but three give the stool maximum balance and strength. It’s the same with assessments and the data we are able to harvest from them.

When we have multiple key assessments, we’re able to examine data across multiple administrations over time. That means we’re able to more easily identify specific trend lines and patterns in the data. This enables us to draw conclusions with much greater confidence for the purpose of programmatic decision making. And that’s the heart and soul of quality assurance.
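The trend lines mentioned above can be as simple as a least-squares slope over cohort means. Here is a minimal sketch, with hypothetical administration years and scores, using only the Python standard library:

```python
from statistics import mean

# Hypothetical mean scores on one key assessment across five administrations.
administrations = [2020, 2021, 2022, 2023, 2024]
mean_scores = [78.2, 79.1, 81.0, 80.4, 83.5]

# Least-squares slope: average score change per administration year.
x_bar, y_bar = mean(administrations), mean(mean_scores)
slope = (
    sum((x - x_bar) * (y - y_bar) for x, y in zip(administrations, mean_scores))
    / sum((x - x_bar) ** 2 for x in administrations)
)
print(f"Trend: {slope:+.2f} points per year")
```

A positive slope sustained across several administrations is far stronger evidence of improvement than a one-year bump, which is exactly why multiple administrations matter.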

###

About the Author: A former public-school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialties: Council for the Accreditation of Educator Preparation (CAEP) and the Association for Advancing Quality in Educator Preparation (AAQEP).  She can be reached at: Roberta@globaleducationalconsulting.com

Top Photo Credit: ThisisEngineering on Unsplash

 

Accreditation Stress: It’s Real.

Accreditation Stress

Author’s Note: Updated from a previous publication. 

We can all agree: accreditation is necessary, but the accreditation stress that goes along with it is something higher education officials would love to do without.

Each accrediting body has its own standards and quality indicators, and its own policies and procedures, which can vary widely. However, one thing that’s common to every accrediting body is a site visit, where a review team spends a few days on campus (or virtually) conducting interviews, verifying information, and making recommendations regarding how well the institution measures up to standards.

Regardless of the accrediting body, the site visit is both expensive and exhausting. Regulatory reviews are also extremely high stakes: If things don’t go well, the entire institution suffers. In worst case scenarios, accreditation can be lost, and students can lose their ability to receive financial aid. That can lead to an institution shutting its doors.

With very few exceptions, faculty, staff, and administrators shout for joy when they see a site review team leave campus and head for the airport. Or, when the reviews are virtual, they all breathe a collective sigh of relief when hitting that “Exit Meeting” button.

Accreditation Stress is Real.

In many instances, staff involved in the accreditation process focus so much on preparing for the site visit that they aren’t ready for the emotional or physical toll it can take on them. Moreover, the stress usually doesn’t end when the site review team leaves. My experience in accreditation over the past 15 years has confirmed there’s a need for this kind of information, and yet it’s a topic I’ve never seen addressed at conferences or in professional literature.

Accreditation-related stress and anxiety are real. You might be able to function, and you may be able to hide it from others. But, how do you know if it’s starting to get the best of you? And what can you do about it?

Red Flag Alert: Some Signs the Stress is Negatively Impacting Your Life

You’re surviving, but you’re not thriving. You may be making it through each day, but the quality of your life is suffering. You’re not enjoying the things that used to make you happy. You feel guilty about taking the time to watch a sunset or to read a book. Every waking moment is spent thinking about the site visit.

Those lights in your brain just won’t shut off. You can’t sleep, even though you feel exhausted. You’re worn out physically and mentally, but you can’t allow yourself to take even a few hours off to rest.

You’re numb inside. You have no appetite and aren’t eating. You’ve even managed to shut down your emotions. It’s like you’ve gone on auto-pilot and feel like a robot.

You feel empty, like there’s a gaping hole inside. But even though the emptiness isn’t from hunger, you binge-eat everything in sight. And then you still look around for more, because that huge gaping hole just can’t seem to be filled.

You become obsessed with every detail, no matter how minute it may seem. It’s those little foxes that spoil the vine. You’re determined that you’re going to make sure NOTHING is overlooked.  

You come to believe that you are ultimately responsible for the success of the site review. If you’re honest with yourself, you don’t think others are as committed to success as you are. The little voice inside you says, “If you want something done right, you’ve got to do it yourself!”

You start to resent others who don’t seem as stressed out as you are. While you hate feeling like you have the weight of the world on your shoulders, you refuse to delegate responsibility to others and then you get mad when you hear that they went to a movie or a concert over the weekend.

Drink the Stress Away: You may hear yourself saying, “I just need to take the edge off” or “I just need to relax for a while.” Having one glass of Chardonnay is one thing, but knocking back five tequila shots in 30 minutes is another.

Ups and Downs: You may self-medicate by taking a pill or two to help you sleep because even though you’re exhausted, you’re wired due to all the stress.

Caffeine overload: You may guzzle coffee, soda, or Red Bull throughout the day (or night) because, “I’ve got to keep going for just a little while longer.”

Shop ‘til Your Fingers Drop: On a whim you may go on a shopping spree and spend a ton of money on things you probably didn’t really need. Not at a brick and mortar store or mall—that would be far too self-indulgent. Instead, you likely visited Zappos or Amazon, where you could remain close to your computer and be right there to respond to an urgent email should one land in your Inbox.

Keep Setting the Bar Higher: You set impossible standards for yourself to meet and then criticize yourself endlessly when you don’t meet them. It’s like you’re obsessed with proving something to others—and to yourself. Except that you’re never satisfied with your performance, even when you do things well.

Slay the Dragon: You plan things down to each minute detail, leaving no stone unturned. You review things in your mind, over and over again. Sometimes you obsess about forgetting something. You’re determined to emerge victorious, regardless of the personal cost.

Accreditation Stress: The Gift that Keeps on Giving

Think the stress of getting ready for a site visit only affects you? Think again. If you have close friends, a life partner, or children, they are affected as well. It’s possible that your furry buddies at home can even detect your anxiety. You’ll know if your stress is out of balance if you hear a loved one say, “I miss you!” “I HATE your job!” or “Will this ever end?”

 

Moving from Surviving to Thriving: How to Manage Your Stress in a Healthy Way

Even Superman struggled at times with Kryptonite. However, he found ways to adapt and overcome those challenges, and so can you. While an accreditation site visit always leads to a certain level of stress, there are things you can do to minimize the anxiety. For example:

Prepare ahead of time: It may sound simplistic, but getting a jumpstart on the process reduces a lot of stress. If you don’t start on the process until 6 or 8 months before the site visit, you are putting yourself squarely in the crosshairs of some serious stress and anxiety.

Ideally, quality assurance should be an integral part of every program. There really shouldn’t be any significant scrambling or looking for data. Your institution should already be reviewing, analyzing, looking for trends, and making data-driven decisions to improve programs on a continual basis. You should plan on starting your Self-Study Report (SSR) or Quality Assurance Report (QAR) no later than 18 months prior to a scheduled site visit. The more you delay this timetable, the higher your stress level will be. Guaranteed.

Hire a consultant: Let’s face it, not everyone has a lot of expertise when it comes to writing self-study reports, gathering evidence, and preparing for site visits. In many institutions, departments are understaffed, and staff often wear multiple hats. Most institutions don’t deal with accreditation matters on a regular basis, so few people have a high level of confidence in that area. Plus, writing for accreditation, state program approvals, and the like requires a very different skillset than typical academic writing.

In some schools, new faculty are tasked with coordinating a site visit because more seasoned faculty refuse to do it. This is wrong on so many levels, and yet it’s a frequent occurrence. An experienced consultant could provide the kind of guidance and support that may be needed. The institution doesn’t incur the expense of paying for someone’s full-time salary, benefits, or office space. In this age of budget cuts, hiring an independent contractor can actually save money.

Provide faculty/staff training: Letting others know what to expect and getting them on board early on will greatly reduce anxiety for everyone. Plan a kickoff event and then schedule periodic retreats/advances. Create a solid communication protocol and stick with it. When team members are fully informed and are active contributors to the process, the stress is reduced for everyone. Again, a good consultant can create this type of project management schema and even oversee it.

Delegate to others as much as possible: It’s important to have a project manager/coordinator for every major project, and that includes accreditation site visits and program reviews. However, that does NOT mean that this one person needs to take on the bulk of the responsibility—quite the contrary. Instead, that person should serve as a “conduit” who facilitates the flow of information between internal and external stakeholders. That person should also play the primary role in delegating tasks to appropriate personnel and then making sure those tasks are actually completed on time.

It’s OK to talk about it: Know that a certain amount of stress and anxiety are normal reactions to accreditation site visit preparation, but it doesn’t have to be overwhelming. Don’t be afraid to talk with your colleagues and leadership about your stress level. It’s entirely possible that others share your feelings—it might be helpful to start a small informal support group. Getting together one day a week for lunch (in person or remotely) works wonders.

Be upfront with your friends and loved ones: Prepare family and friends ahead of time. Help them know what to expect, and include them in the celebration once it’s over. Your children, significant other, and close friends may not be writing the self-study report or creating pieces of evidence, but your support system still plays an important role in the site review process behind the scenes.

Be kind to yourself: This may sound silly but it’s really important. Purposely build one nice thing into your personal calendar each day. It may be taking a walk, working out, or reading for pleasure for 30 minutes. Regardless what you choose, it’s crucial that you make this a part of your schedule.

Be ready when it’s over: You may find that you can hold yourself together from start to finish, but after the site review team packs up and leaves your institution, you may not quite know what to do with yourself. What you’ve focused all your energy on for 18 months is suddenly over. This can send your emotions into a deep dive that can last for several weeks.

You can greatly reduce this by planning a combination of fun and work activities for the four weeks after the site visit. You’ve been functioning within a very structured paradigm for several months, and suddenly having nothing to do will likely create additional anxiety, so it’s best to transition back slowly.

The bottom line is that while accreditation stress is definitely real, it doesn’t have to get the best of you or your team.

###

 

About the Author: A former public-school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialties: Council for the Accreditation of Educator Preparation (CAEP) and Association for Advancing Quality in Educator Preparation (AAQEP).  She can be reached at: Roberta@globaleducationalconsulting.com

 

Top Graphic Credit:  Luis Villasmil on Unsplash


Inter-Rater Reliability in Higher Education: The Critical Role of Ensuring Consistency

teacher scoring student work during inter-rater reliability calibration exercise

Inter-rater reliability (IRR) is a term many higher education faculty have encountered, yet its precise meaning and relevance often remain unclear. Within the accreditation landscape, IRR is crucial for making data-informed decisions about programs, particularly those that emphasize competency-based, project-based, or performance-based learning. Without high-quality assessment data, faculty and administrators cannot confidently make decisions about their programs.

In my article, “The Pillars of Data Consistency: Inter-Rater Reliability, Internal Consistency, and Consensus Building,” I discussed the importance of IRR, internal consistency in measurement tools, and the role of consensus building. In a separate post I explored the specifics of calibrating IRR, which is essential for informed programmatic decisions.

Consider a professor who assigns a major project scored by a rubric. Grading is straightforward if the rubric is well-constructed, as the professor understands their expectations for student performance. They use the results to decide whether to reuse or modify the assignment. However, the scenario changes when multiple faculty members use the same rubric to assess different students. Here, inconsistency can quickly arise.

In multi-evaluator settings, such as student teaching evaluations, ensuring consistency across evaluators is critical for quality assurance. Different interpretations of criteria like “manages behavior effectively using a variety of techniques” can lead to varied scores for the same student. While individual interpretations may be acceptable for single students, they create significant issues when aggregating data across multiple students. If evaluators interpret criteria differently, the resulting data lack consistency, rendering interpretations unreliable. Consequently, departments cannot accurately identify program strengths and weaknesses or rely on key assessments for decision-making. Instead, they must rely on intuition, a risky approach to academic program management.

To ensure reliable interpretations of data scored by rubrics, all evaluators involved in assessing student performance must participate in IRR calibration exercises. Here’s why this is important:

Consistency Across Evaluators

The primary goal of IRR calibration is to ensure all evaluators interpret assessment criteria consistently. For example, in evaluating student teachers, all evaluators must understand and apply the rubric in the same way. If only a subset of faculty participates in calibration, others may apply different standards, leading to inconsistent evaluations and undermining the reliability of the assessment process. Engaging all evaluators fosters a unified approach to interpreting and applying assessment criteria.

Comprehensive Calibration

Engaging all evaluators in calibration promotes discussions and clarifications, ensuring a shared understanding of the rubric. This process is vital for maintaining the integrity of evaluations and encourages collaboration among evaluators, enhancing reliability. Providing evaluators with at least three samples of student work at varying performance levels (low, medium, high) helps them discern differences in work quality, further supporting consistent assessments.
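One concrete output of a calibration exercise is an exact-agreement rate per rubric criterion, which flags where evaluators diverge. Here is a minimal sketch; the evaluator names, criteria, samples, and 1-4 scores below are hypothetical:

```python
from itertools import combinations

# Hypothetical calibration data: each evaluator scores the same three work
# samples on two rubric criteria, using a 1-4 scale.
scores = {
    "evaluator_1": {"manages_behavior": [3, 2, 4], "plans_instruction": [3, 3, 4]},
    "evaluator_2": {"manages_behavior": [3, 2, 4], "plans_instruction": [2, 3, 3]},
    "evaluator_3": {"manages_behavior": [2, 2, 4], "plans_instruction": [2, 3, 4]},
}

def exact_agreement(criterion: str) -> float:
    """Share of evaluator pairs giving identical scores, across all samples."""
    raters = list(scores.values())
    pairs = list(combinations(raters, 2))
    n_samples = len(raters[0][criterion])
    matches = sum(
        a[criterion][i] == b[criterion][i]
        for a, b in pairs
        for i in range(n_samples)
    )
    return matches / (len(pairs) * n_samples)

for criterion in ["manages_behavior", "plans_instruction"]:
    print(f"{criterion}: {exact_agreement(criterion):.0%} exact agreement")
```

A criterion with a noticeably lower agreement rate is the natural place to focus discussion and clarification before live scoring begins.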

Quality Assurance

Including all evaluators in IRR calibration is a key aspect of quality assurance. It ensures fairness, transparency, and accuracy in the assessment process. Comprehensive calibration allows institutions to apply consistent rigor and standards, supporting the validity of assessment outcomes and reinforcing stakeholders’ trust in the evaluation process.

While involving all evaluators in IRR calibration can be logistically challenging, it is a best practice that significantly enhances the reliability and consistency of assessment outcomes. Institutions committed to quality assurance should prioritize comprehensive IRR calibration as part of their standard assessment processes.

Conclusion

In conclusion, inter-rater reliability in higher education is vital in the accreditation process for ensuring consistent and reliable assessment data. By engaging all evaluators in IRR calibration exercises, higher education institutions can uphold the integrity of their evaluations, confidently make data-driven decisions, and improve academic programs. This commitment to consistency and quality benefits institutions in meeting regulatory requirements and supports the broader educational mission of providing accurate and fair assessments for all students.

###

About the Author: A former public-school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialties: Council for the Accreditation of Educator Preparation (CAEP) and the Association for Advancing Quality in Educator Preparation (AAQEP).  She can be reached at: Roberta@globaleducationalconsulting.com

 

Top Photo Credit: Unseen Studio on Unsplash

The Pillars of Data Consistency: Inter-Rater Reliability, Internal Consistency, and Consensus Building

data consistency

Introduction

Accreditation in higher education is like the North Star guiding the way for colleges and universities. It ensures institutions maintain the highest standards of educational quality. Yet, for higher education professionals responsible for completing this work, the journey is not without its challenges. One of the most critical challenges they face is ensuring the data consistency, or reliability, of key assessments. This is why inter-rater reliability, internal consistency, and consensus building serve as some of the bedrocks of data-informed decision making. As the gatekeepers of quality assurance, higher education professionals should possess a working knowledge of these concepts. Below, I explain some basic concepts of inter-rater reliability, internal consistency, and consensus building:

Inter-Rater Reliability

What it is: Inter-rater reliability assesses the degree of agreement or consistency between different people (raters, observers, assessors) when they are independently evaluating or scoring the same data or assessments.

Example: Imagine you have a group of teachers who are grading student essays. Inter-rater reliability measures how consistently these teachers assign grades. If two different teachers grade the same essay and their scores are very close, it indicates high inter-rater reliability. A similar example would be in an art competition, where multiple judges independently evaluate artworks based on criteria like composition, technique, and creativity. Inter-rater reliability is vital to ensure that artworks are judged consistently. If two judges consistently award high scores to the same painting, it demonstrates reliable evaluation in the competition.
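A common statistic for quantifying this kind of two-rater agreement is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. Here is a minimal sketch with hypothetical essay grades from two raters:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal proportions per category.
    expected = sum(
        (counts_a[cat] / n) * (counts_b[cat] / n)
        for cat in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical grades from two raters on the same ten essays.
rater_1 = ["A", "B", "B", "A", "C", "B", "A", "C", "B", "A"]
rater_2 = ["A", "B", "B", "A", "B", "B", "A", "C", "B", "A"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.3f}")
```

Values near 1.0 indicate strong agreement beyond chance; the kappa of roughly 0.84 here would generally be read as high inter-rater reliability.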

Importance in Accreditation: In an educational context, it’s crucial to ensure that assessments are scored consistently, especially when accreditation bodies are evaluating the quality of education. This ensures fairness and objectivity in the assessment process.

Internal Consistency

What it is: Internal consistency assesses the reliability of a measurement tool or assessment by examining how well the different items or questions within that tool are related to each other.

Example: Think about a survey that asks multiple questions about the same topic. Internal consistency measures whether these questions consistently capture the same concept. For example, let’s say a teacher education program uses an employer satisfaction survey with multiple questions to evaluate various aspects of its program. Internal consistency ensures that questions related to a specific aspect (e.g., classroom management) yield consistent responses. If employers consistently rate the program quality highly across several related questions, it reflects high internal consistency in the survey.
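The standard statistic for this is Cronbach's alpha, which compares the variance of individual items to the variance of respondents' total scores. Here is a minimal sketch using the hypothetical employer survey described above, with made-up scores:

```python
from statistics import variance

def cronbach_alpha(item_scores: list) -> float:
    """Cronbach's alpha for internal consistency.

    item_scores: one inner list per survey item, each holding that item's
    score from every respondent (same respondents, same order, for all items).
    """
    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical employer survey: three classroom-management items,
# each rated 1-5 by the same five employers.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 3, 4, 1],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.3f}")
```

An alpha above roughly 0.7 is commonly treated as acceptable internal consistency; the made-up scores here produce a high alpha because the three items rank respondents almost identically.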

Importance in Accreditation: When colleges and universities use assessment tools, they need to ensure that the questions within these tools are reliable. High internal consistency indicates that the questions are measuring the same construct consistently, which is important for accurate data in accreditation.

Consensus Building

What it is: Consensus building refers to the process of reaching agreement or alignment among different stakeholders or experts on a particular issue, decision, or evaluation.

Example: In an academic context, when faculty members and administrators come together to determine the learning outcomes for a program, they engage in consensus building. This involves discussions, feedback, and negotiation to establish common goals and expectations. Another example might be within the context of institutional accreditation, where an institution’s leadership, faculty, and stakeholders engage in consensus building when establishing long-term strategic goals and priorities. This process involves extensive dialogue and agreement on the institution’s mission, vision, and the strategies needed to achieve them.

Importance in Accreditation: Accreditation often involves multiple parties, such as faculty, administrators, and external accreditors. Consensus building is crucial to ensure that everyone involved agrees on the criteria, standards, and assessment methods. It fosters transparency and a shared understanding of what needs to be achieved.

Conclusion

In summary, inter-rater reliability focuses on the agreement between different evaluators, internal consistency assesses the reliability of assessment questions or items, and consensus building is about reaching agreement among stakeholders. All three are essential in ensuring that data used in the accreditation process is trustworthy, fair, and reflects the true quality of the institution’s educational programs.

###

About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialties: Council for the Accreditation of Educator Preparation (CAEP) and the Association for Advancing Quality in Educator Preparation (AAQEP).  She can be reached at: Roberta@globaleducationalconsulting.com 

Top Photo Credit: Markus Spiske on Unsplash