Key Assessments and Quality Assurance

Engineering students completing Key Assessment

Introduction

In today’s data-driven educational landscape, ensuring program quality and student success requires robust assessment strategies. Key assessments serve as critical checkpoints throughout a student’s academic journey, providing vital information about both individual achievement and overall program effectiveness. When thoughtfully designed and implemented, these assessments become powerful tools that drive continuous improvement and demonstrate institutional accountability, which is essential for regulatory compliance and for accreditation by agencies such as the Higher Learning Commission, the Association for Biblical Higher Education, the Association for Advancing Quality in Educator Preparation, the Council for the Accreditation of Educator Preparation, and others. This post explores the types of key assessments, their applications, and how they collectively contribute to quality assurance in higher education.

What is a Key Assessment?

A key assessment is one that is required of all students enrolled in a specific program of study. For example, all pre-licensure nursing students may need to take one or more common assessments that measure their knowledge and/or performance at periodic intervals; at the end of their program, they must pass the NCLEX to earn their license. Teacher candidates are required to complete an internship and then take a specified basic skills test or state licensure exam, such as the Praxis series. For admission into medical school, the MCAT is one example of a key assessment; once enrolled, medical students must pass specified assessments throughout their program.

Not all assessments are considered key assessments: only those that provide particularly valuable insight into whether students have achieved the intended learning objectives of a particular course, program, or institution. Alternatively, they can be used to solicit feedback from various stakeholder groups. A key assessment can be proprietary (developed outside the institution) or internally created.

Quantitative Key Assessments

Key assessments used in higher education draw on both quantitative and qualitative measures. While quantitative assessments provide valuable numerical data, qualitative assessments offer deeper insight into student learning and program effectiveness through more open-ended formats. For example:

Objective Exams

Traditional objective exams consist of closed-ended and constructed-response questions that typically focus on content knowledge and how learners are able to apply that knowledge within a specified context or prompt. These assessments commonly include true/false, multiple-choice, fill-in-the-blank, matching, and short-answer questions.

Surveys

Surveys are commonly administered at the end of each course to solicit learner feedback about the quality of instruction. Survey results are then routed to department chairs, who commonly use them to monitor faculty performance, curriculum, and other factors. In addition, annual surveys sent to graduates and their employers have become a popular method for gauging program effectiveness. Sometimes, faculty and staff are surveyed to gauge their views on proposed changes to institutional policies and procedures.

Traditional objective exams and Likert-scale survey responses are often scored through software. Tabulating responses and grading true/false, multiple-choice, fill-in-the-blank, and in some cases even short-answer items is a quick and simple process. Modern survey platforms offer sophisticated reporting features that can easily disaggregate data by specified parameters. Analysis tools make it a snap to “slice and dice” the data, track trends over time, and identify specific strengths or areas for improvement.
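The disaggregation step described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical survey data: the item names, the "delivery mode" parameter, and the scores are all invented for the example.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical Likert responses (1-5) from a course survey, each tagged
# with a disaggregation parameter (here, delivery mode).
responses = [
    {"item": "Q1", "mode": "online",    "score": 4},
    {"item": "Q1", "mode": "online",    "score": 5},
    {"item": "Q1", "mode": "in_person", "score": 3},
    {"item": "Q2", "mode": "online",    "score": 2},
    {"item": "Q2", "mode": "in_person", "score": 4},
    {"item": "Q2", "mode": "in_person", "score": 5},
]

# Group scores by (item, mode) so results can be disaggregated.
groups = defaultdict(list)
for r in responses:
    groups[(r["item"], r["mode"])].append(r["score"])

# Report the mean score for each item within each subgroup.
for (item, mode), scores in sorted(groups.items()):
    print(f"{item} [{mode}]: mean {mean(scores):.2f} (n={len(scores)})")
```

The same grouping pattern extends to any parameter a platform might expose, such as campus, cohort year, or demographic category.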

Qualitative Key Assessments

Focus Group Interviews

Like surveys, focus group interviews are intended to yield feedback from a specified group of respondents. Although they are more time-consuming to administer and score, interviews can provide far richer, more in-depth insights than survey results, because interviewers have the opportunity to hold an actual conversation with participants: they can ask follow-up questions or provide an example to ensure understanding. Responses are then coded by theme, and the results can be summarized. When compared with similar questions on surveys, the combination of these two data sources can be quite powerful in revealing how stakeholder groups really feel about specific aspects of a program.

Project-Based Assessments

Project-based assessments are commonly used in coursework for assignments designed to engage learners’ knowledge in a deeper way. They require application of content knowledge to compare and contrast, analyze, evaluate, and synthesize. Project-based learning can, and often does, span all six cognitive levels of Bloom’s Taxonomy.

Most project-based assessments used in coursework are language-intensive, include multiple components, and cannot be easily scored through automation. For example, teacher candidates may be tasked with creating a standards-based unit or lesson plan using a specific instructional method. Engineering students may be asked to work in teams to design a new fuel-efficient automobile within specified parameters. Business majors may be assigned to create a print and multimedia marketing campaign for a fictitious company based on its mission. Key assessments such as these require a different scoring system: an analytic rubric with clearly differentiated performance levels (I recommend four) and specific performance indicators based on criteria from the assignment instructions.
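To make the analytic-rubric idea concrete, here is a minimal sketch of one as a data structure, using the four performance levels recommended above. The criterion names and level descriptors are hypothetical, loosely modeled on the lesson-plan example.

```python
# A minimal analytic rubric: each criterion is scored on four clearly
# differentiated performance levels (1-4). Criteria and descriptors
# are illustrative only.
RUBRIC = {
    "standards_alignment": {1: "Missing", 2: "Partial", 3: "Aligned", 4: "Aligned and justified"},
    "instructional_method": {1: "Not used", 2: "Attempted", 3: "Applied", 4: "Applied with adaptations"},
    "assessment_plan": {1: "Absent", 2: "Summative only", 3: "Formative and summative", 4: "Data-driven"},
}

def score_submission(ratings: dict[str, int]) -> dict:
    """Validate ratings against the rubric and compute totals."""
    for criterion, level in ratings.items():
        if criterion not in RUBRIC:
            raise ValueError(f"Unknown criterion: {criterion}")
        if level not in RUBRIC[criterion]:
            raise ValueError(f"Level must be 1-4, got {level}")
    total = sum(ratings.values())
    max_points = 4 * len(RUBRIC)
    return {
        "total": total,
        "max": max_points,
        "percent": round(100 * total / max_points, 1),
    }

result = score_submission(
    {"standards_alignment": 4, "instructional_method": 3, "assessment_plan": 3}
)
print(result)
```

Storing rubric scores per criterion, rather than as a single grade, is what makes it possible to later aggregate results across students and pinpoint which specific indicators a program is struggling with.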

Performance-Based Assessments

Like project-based assessments, performance-based assessments typically utilize rubrics. But whereas project-based assessments are usually completed as part of a course, performance-based assessments are usually connected to field or clinical experiences such as an internship, apprenticeship, in-house rotation, student teaching, and so on. Rather than applying their knowledge to create a single, specified project, students must demonstrate their proficiency or competency across numerous skills simultaneously within a real-world context.

In many ways, performance-based assessments represent the highest level of key assessment. It’s akin to a student who’s been taking flying lessons: they’ve passed all their written exams, completed various tasks assigned by their instructor, and now they’re ready to fly solo. This last step is where they demonstrate competency. Of course, in this context, true competency is determined by whether or not they can successfully take off, fly the aircraft, and land safely. In nursing, teaching, music, foreign language, and engineering, students demonstrate their proficiency in very different ways. But all are excellent indicators of not only what students know and are able to do; their performance is also indicative of the program’s quality.

How are Key Assessments Used in Quality Assurance?

Each key assessment provides a snapshot of data about a given program. When viewed in isolation, each one represents one piece of a giant jigsaw puzzle. But when considered in groups or altogether, the insight we can glean is much more powerful.

When we are able to consider the results of key assessments that measure similar things, we achieve triangulation, which is akin to a three-legged stool. One leg allows the stool to stand but that’s about all. Two legs balance it out a bit but three give the stool maximum balance and strength. It’s the same with assessments and the data we are able to harvest from them.

When we have multiple key assessments, we’re able to look at data over multiple administrations over time. That means we’re able to more easily identify specific trend lines and patterns in the data. This enables us to draw conclusions with much greater confidence for the purpose of programmatic decision making. And that’s the heart and soul of quality assurance.
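The trend analysis described above can be sketched simply: compute the cohort mean for each administration of the same key assessment, then look at how that mean moves over time. The terms and scores below are hypothetical.

```python
from statistics import mean

# Hypothetical scores from the same key assessment across successive
# administrations (one per academic year).
administrations = {
    "Fall 2021": [72, 78, 81, 69],
    "Fall 2022": [75, 80, 83, 74],
    "Fall 2023": [79, 84, 85, 78],
}

# Cohort mean per administration, preserving chronological order.
means = {term: mean(scores) for term, scores in administrations.items()}
terms = list(means)

# A simple trend indicator: net change from the first administration
# to the most recent one.
trend = means[terms[-1]] - means[terms[0]]

for term, m in means.items():
    print(f"{term}: cohort mean {m:.1f}")
print(f"Change from {terms[0]} to {terms[-1]}: {trend:+.1f} points")
```

In practice, an assessment office would also want subgroup breakdowns and more administrations before drawing conclusions, but even this simple view shows whether a program’s trend line is moving in the right direction.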

###

About the Author: A former higher education administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation, program development, and competency-based education. Specialties: Council for the Accreditation of Educator Preparation (CAEP) and the Association for Advancing Quality in Educator Preparation (AAQEP).  She can be reached at: Roberta@globaleducationalconsulting.com

Top Photo Credit: ThisisEngineering on Unsplash

 

As Your Higher Education Consultant, I Need to Know Where the Snakes Are.


Birth of a Snake Analogy

You may be wondering what being a higher education consultant has to do with snakes. Let me explain. Several years ago, my husband and I bought a small farm and decided to do a complete rehab on the old house. One weekend I was by myself and noticed something on the floor as I walked from one room to the other.

Lying very still in the hallway was a small snake, roughly 10-12 inches long. I love animals but for some reason reptiles have never been at the top of my list. I am particularly not interested in having them inside my home.

My 10-second assessment convinced me it wasn’t being aggressive, so I quickly ran for the broom and dustpan. Scooping that little critter up and carrying him a safe distance away from the house shot to the top of my priorities list, and I’m happy to report that the mission was successful.

Feeling proud of myself for taking care of this unexpected intruder in a relatively calm manner I went back in and tended to my tasks. However, after about an hour I walked into the bedroom and noticed something lying on the floor.  I couldn’t believe my eyes. How did it get back in? And why would it want to come back and scare me a second time?

Upon closer inspection I then learned it wasn’t the same snake. This one’s colors were a little richer, and it was a little shorter than the first. The realization that two snakes were able to get into my home (somehow, somewhere) did not bring me joy.

Having been successful in showing my first uninvited guest out, I did the same with the second. But by the time I got back inside, I found a third in the kitchen. When it was all said and done, I had come across at least seven snakes in my house that weekend. Thankfully, they weren’t venomous, nor were they aggressive. But they jarred me to my bones every time I came across one.

Making a very long story short, my husband and I discovered a hole just large enough for them to have squeezed through. Why they chose to make themselves at home, I’ll never know. But we packed every crack and crevice with enough steel wool and caulk to probably withstand gale force winds.

Snakes and Consultative Support

Now, you may be wondering:

What on earth does this have to do with being a higher education consultant? 

It actually fits perfectly. Let me explain:

Before I ever agree to take on a client in need of a higher education consultant, I always have a fairly lengthy conversation with them. We talk about where they currently are in a given project and what they are struggling with. I listen carefully to determine if their needs match up with my skill sets. In other words, I want to determine if I am the best person to help them achieve their goals. If I’m not, I tell them. We part ways and I wish them well.

However, if I do take on a client, I always tell them how important it is to have open, honest communication. We must be able to trust each other. For example, they need my reassurance that everything between us is confidential. It is: I never, ever reveal who I work for unless I receive their express permission to share that information. But just as important, I need my clients to be honest with me and tell me exactly what they’re struggling with so ugly surprises don’t pop up later on.

In other words, I need to know where the snakes are.

Regardless of whether I’m working with a College of Education, an online learning department, or an entire institution, as a higher education consultant I need to know what keeps my clients up at night or what makes their stomachs feel queasy. I want to know where the bodies are buried (figuratively). If we agree to work together, I’ll find them eventually. But it would save us both a lot of time if I knew up front what they wouldn’t want to showcase to accrediting bodies, state regulatory agencies, and the like.

On the Path to Continuous Program Improvement

Once we lay all those problem areas out on the table, we can work together to address them. As your higher education consultant, I can help fill those gaps and shore up areas that the institution knows, deep down, should have been taken care of a long time ago. I can support staff in doing what’s necessary to ensure continuous program improvement. As long as they follow the plan, they should be able to handle any unpleasant surprises that may arise, without having to resort to steel wool and caulk.

###

About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: Roberta@globaleducationalconsulting.com

 

Top Graphic Credit: clipartspub.com