Smarter Quality Assurance in Higher Education: Key Questions for Institutional Success

Quality assurance in higher education is no longer just about meeting accreditation standards—it’s about building smarter systems that drive institutional and programmatic success. Whether you’re preparing for reaffirmation or refining internal processes, a well-functioning Quality Assurance System (QAS) can transform data into decisions and decisions into impact.

In today’s dynamic higher education landscape, accreditation is more than a compliance exercise—it’s a catalyst for continuous improvement. Whether your institution is accredited by agencies such as the Higher Learning Commission (HLC), the Association for Biblical Higher Education (ABHE), the Distance Education Accrediting Commission (DEAC), the Association for Advancing Quality in Educator Preparation (AAQEP), or the Council for the Accreditation of Educator Preparation (CAEP), the common denominator is the expectation of a well-developed, functioning Quality Assurance System (QAS).

A robust QAS isn’t just a repository of data—it’s a living system that supports institutional and programmatic effectiveness through data-informed decision making. When operating as it should, a QAS can answer vital questions that drive strategic planning, academic quality, and student success.

What Should Your QAS Be Able to Answer?

Here are some of the most critical questions a well-functioning QAS should be equipped to answer:

[Infographic: Quality Assurance System questions institutions should answer]

These questions aren’t just theoretical—they’re practical prompts that should guide your institution’s self-study, annual reporting, and strategic planning. Accrediting bodies increasingly expect institutions to demonstrate not only that they collect data, but that they use it to improve outcomes and close the loop.

Why Institutions Benefit from Expert Support

Developing and sustaining a QAS that can answer these questions isn’t easy. It requires cross-functional collaboration, clear governance, and a culture of evidence. That’s where an experienced consultant can make a difference.

Ways a Skilled Consultant Can Help


Whether you’re launching a new program, preparing for reaffirmation, or simply strengthening your internal processes, expert guidance can accelerate your progress and ensure your QAS is not just compliant—but transformative.

Final Thoughts

Quality assurance isn’t a checkbox—it’s a mindset. When your QAS is built to answer the right questions, it becomes a strategic asset that drives excellence across your institution. As accrediting bodies evolve and expectations rise, now is the time to invest in systems that support meaningful, measurable improvement.

If your institution is ready to take its QAS to the next level, consider partnering with a consultant who understands the nuances of accreditation and the realities of institutional life. The right support can turn your data into decisions—and your decisions into impact.

###

About the Author: An experienced college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialties: Council for the Accreditation of Educator Preparation (CAEP) and the Association for Advancing Quality in Educator Preparation (AAQEP).  She can be reached at: Roberta@globaleducationalconsulting.com


Top Photo Credit: Microsoft Co-Pilot


Using Data to Inform Decision-Making in Higher Education


In the context of higher education quality assurance, sound academic decision-making cannot rely on instinct or personal preferences—it must be based on evidence. That evidence often comes in the form of key metrics, specifically key assessment data. For institutions to drive continuous improvement confidently, faculty and administrators must both (1) understand what the data reveal about current practices, and (2) use that understanding to guide future strategic planning.

To accomplish this, institutions need high-quality metrics and the ability to view them from both macro and micro perspectives. Aggregated data offer a big-picture view of student performance, while disaggregated data reveal the finer details needed to understand patterns and areas for improvement.


Aggregated Data: The Big Picture

Aggregated data are summary-level data that give a broad overview of institutional performance. These include statistics such as overall pass rates on exams, enrollment trends, retention, and graduation rates. They are especially useful for public reporting—such as IPEDS submissions or presentations to external stakeholders like advisory boards and community groups.

This type of data helps institutions “tell their story” at a high level and demonstrate overall effectiveness. However, aggregated data alone often mask important variations within student populations. To make meaningful programmatic improvements, institutions must go deeper—by breaking these data sets into smaller, more specific subsets through disaggregation.
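
To make that distinction concrete, here is a minimal sketch in Python with pandas, using an entirely hypothetical student records file and column names, of how the same measure looks in aggregated and disaggregated form:

import pandas as pd

# Hypothetical file of student records with columns "program",
# "race_ethnicity", "gender", and "passed_exam" (1 = pass, 0 = fail).
records = pd.read_csv("student_records.csv")

# Aggregated view: one summary figure suitable for public reporting.
overall_pass_rate = records["passed_exam"].mean()
print(f"Overall pass rate: {overall_pass_rate:.1%}")

# Disaggregated views: the same measure broken out by subgroup,
# which can reveal variation the overall figure masks.
for column in ["program", "race_ethnicity", "gender"]:
    print(records.groupby(column)["passed_exam"].mean().round(3))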


Disaggregated Data: The Details That Drive Program Improvement

Many accrediting bodies, such as HLC, CAEP, AAQEP, and TRACS, require institutions to report both aggregated and disaggregated data across at least three assessment cycles. Common disaggregation parameters include program, cohort, gender, and race/ethnicity. While this is feasible at large institutions with diverse enrollments, it poses a challenge for smaller colleges or institutions with relatively homogeneous student bodies. In those cases, disaggregation by gender or race may not be statistically meaningful or helpful for decision-making.

Depending on the assessment and institutional context, alternative disaggregation strategies can provide valuable insights. For instance, disaggregating by cohort is common—typically based on academic years (e.g., September 1–August 31). In licensure-based programs like teacher education or nursing, disaggregating by specialty area or licensure pathway is also standard practice.

Beyond that, institutions can disaggregate data in the following ways:

  • By entry status: First-time freshmen vs. transfer students
  • By admission type: Fully admitted vs. conditionally admitted students
  • By prior degree: Post-baccalaureate students vs. those without a degree
  • By course modality: Face-to-face vs. online learners
  • By instructor: If a course or field experience is taught by multiple faculty
  • By academic preparedness: Based on incoming GPA or standardized test scores
  • By assessment attempt: First attempt vs. multiple attempts on key assessments or licensure exams
  • By support service utilization: Students who were referred to academic support services vs. those who were not (similar to analyzing at-risk vs. non-at-risk student groups)

These strategies allow program leaders and faculty to gain a more nuanced understanding of how different groups of students are performing—and why.
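
As an illustration of how a few of these dimensions might be combined, the sketch below (again in Python with pandas, with hypothetical file and column names) looks at first attempts on a key assessment, broken out by cohort year and course modality across three assessment cycles:

import pandas as pd

# Hypothetical key-assessment results, one row per student attempt, with
# columns "cohort_year", "modality", "attempt_number", and "score".
scores = pd.read_csv("key_assessment_results.csv")

# Keep first attempts only, then compare mean scores by cohort and modality
# across three assessment cycles (e.g., the 2022, 2023, and 2024 cohorts).
first_attempts = scores[scores["attempt_number"] == 1]
trend = first_attempts.pivot_table(
    index="cohort_year",
    columns="modality",
    values="score",
    aggfunc="mean",
)
print(trend.round(1))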


Conclusion

Aggregated data are essential for summarizing institutional performance and sharing high-level outcomes. But disaggregated data offer the granular insights needed to identify strengths, pinpoint challenges, and support targeted interventions. In today’s accountability-focused educational landscape, combining both views enables colleges and universities to make truly data-informed decisions that lead to meaningful, strategic improvements at both the program and institutional levels.

###

Need to know more about this topic? Need a consultant to help guide your institution or program through an upcoming accreditation site visit? Reach out:

About the Author: A former higher education administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation, program development, and competency-based education. She can be reached at: Roberta@globaleducationalconsulting.com


Quality Assurance System: The Drivetrain of Institutional Effectiveness


If you spend much time at all within the accreditation space, you’ll undoubtedly hear someone in higher education say, “Oh, we have a Quality Assurance System (QAS); we use _________.” They’ll proudly point to a license agreement they have with a company, where student work or assessment results are uploaded and stored. Some use that service to run data reports and are thrilled to share that it even “does data analysis.” Unfortunately, those well-intentioned individuals are missing the mark when it comes to a QAS.

A Quality Assurance System is really like the drivetrain of a car: without it we’d get nowhere, stranded along the side of the road. We’d know we had a problem, but we might not know how to resolve it or what to do next.

What a Quality Assurance System Isn’t

It’s important to remember that a Quality Assurance System isn’t a software program or a subscription-based website. It’s a well-planned and executed system by which institutions and individual programs monitor quality on key performance indicators. They then use insights gleaned from trendlines to make data-informed programmatic decisions.

Essential Components of a Healthy QAS

A healthy, solid quality assurance system requires a well-defined schema that involves looking at multiple data sources and being able to triangulate those data over time to look for patterns, trends, strengths, and weaknesses. And it shouldn’t just be one or two people reviewing data—there should be groups and advisory boards assigned to this task. Why? So steps can be taken to make improvements when the need arises.
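
One way to operationalize that kind of triangulation is sketched below, as a hypothetical example in Python with pandas; the file names, columns, and measures are illustrative only. Three separate data sources are aligned by program and year so a review group can examine trends across measures rather than in isolation:

import pandas as pd

# Three hypothetical data sources, each keyed by "program" and "year".
assessments = pd.read_csv("key_assessment_means.csv")   # column "assessment_mean"
surveys = pd.read_csv("exit_survey_results.csv")         # column "satisfaction_mean"
licensure = pd.read_csv("licensure_pass_rates.csv")      # column "pass_rate"

# Triangulate: line the sources up side by side for each program and year.
merged = (
    assessments
    .merge(surveys, on=["program", "year"])
    .merge(licensure, on=["program", "year"])
    .sort_values(["program", "year"])
)

# Year-over-year change on each measure gives a simple trendline
# that an advisory board or data review committee can discuss.
for column in ["assessment_mean", "satisfaction_mean", "pass_rate"]:
    merged[column + "_change"] = merged.groupby("program")[column].diff()

print(merged)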

High-Quality Assessments


A well-functioning QAS requires a blend of both proprietary and internally created high-quality assessments. We know that data are only as good as the assessments themselves. Great care must be taken when creating key assessments to ensure that each measures what it is intended to measure (content validity) and that it yields consistent results over multiple administrations (reliability). Surveys need to be created with a manageable number of questions, and items should be worded clearly. New assessments need to be piloted according to widely accepted protocols.
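
For example, one very simple reliability check during a pilot, sketched below in Python with made-up scores, is to correlate results from two administrations of the same new assessment; a correlation near 1.0 suggests consistent results, while content validity still has to be established separately through expert review against the outcomes the assessment is meant to measure.

import numpy as np

# Made-up pilot data: the same ten students scored on two administrations
# of a newly created key assessment.
first_administration = np.array([78, 85, 90, 62, 74, 88, 95, 70, 81, 67])
second_administration = np.array([80, 83, 92, 65, 71, 90, 94, 73, 79, 70])

# Test-retest reliability estimate: the Pearson correlation between the two
# sets of scores (values closer to 1.0 indicate more consistent results).
reliability = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest correlation: {reliability:.2f}")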

Real-Life Assessment Examples

Some examples of proprietary assessments that colleges and universities often use include the SAT, ACT, GRE, edTPA, Praxis, NCLEX, and so on. In other words, these are standardized high-stakes assessments that have been developed and road-tested by assessment development companies.

Internally created assessments, on the other hand, are those institutions create “in-house” for a variety of purposes. For example, it’s common for colleges to survey their students at the end of each semester to determine their satisfaction with their instructors, the quality of the food in the cafeteria, advising services, and so on. Faculty within programs also develop what they consider to be key assessments, perhaps 5-7 required of all students, to monitor their skills development as they progress through a particular licensure track program. These are often cornerstone assessments in a select group of courses, and they can provide valuable insight regarding student performance as well as the quality of the program itself.

Stakeholder Input

A solid QAS depends on input from both internal and external stakeholders. Faculty, student support staff, current students, graduates, and members of the community or business and industry should serve in advisory capacities. Each individual brings a unique set of experiences and perspectives to the table, and diversity of thought can enrich programs.

Real Life Stakeholder Examples

Internal stakeholders include current and past students, faculty members, academic advisors, and so on. External stakeholders are those outside the college or university, including employers, individuals who graduated more than a year ago, members of relevant civic groups, and so on. It’s important to garner the perspectives of those within the institution as well as those on the outside looking in.

The Ultimate Goal: Continuous Program Improvement

And finally, a well-functioning Quality Assurance System must enable institutions to make data-informed decisions with confidence, for the purpose of continuous program improvement. Staff must be able to identify specific areas of strength, as well as specific areas for growth and improvement. They need to know if an approach or a policy is working or not. And they need a leg to stand on when it comes to making programmatic changes. That leg needs to be grounded in high-quality data. Having well-functioning Quality Assurance Systems will support colleges and universities in their accreditation efforts, state program approvals, and growth. They truly are the drivetrain of institutional effectiveness.

###

About the Author: A former higher education administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: Roberta@globaleducationalconsulting.com 

Top Photo Credit: Samuele Errico Piccarini on Unsplash