CPL and CBE in Higher Education

Student demonstrating electric circuit boards through CPL and CBE

In today’s higher education scene, the terms Credit for Prior Learning (CPL) and Competency-Based Education (CBE) come up frequently. While both aim to recognize learning that happens outside of traditional classrooms, they’re not the same thing, and knowing the difference can help educators and students make the most of these options.

What is Credit for Prior Learning (CPL)?

CPL is all about awarding academic credit for what students have learned through real-life experiences—like jobs, volunteering, or independent study. Usually, students create a portfolio that showcases their experiences and connects them to specific course outcomes. Schools then evaluate these portfolios to determine how much credit a student can receive.

What is Competency-Based Education (CBE)?

CBE takes a different approach. It focuses on students showing that they’ve mastered certain skills or competencies defined by their program. This can include direct assessments like exams, projects, or clinical experiences, and indirect assessments, such as self-reflections or peer evaluations. The goal here is to ensure students can demonstrate what they’ve learned in practical settings.

Key Differences Between CPL and CBE

Chart showing key differences between CPL and CBE

Regulatory and Accreditation Considerations

When rolling out CPL and CBE programs, institutions must keep an eye on various regulations and accreditation requirements:

  • Federal Regulations: The U.S. Department of Education has specific guidelines for CBE programs, especially for those looking for federal aid. For CPL, there are limits on how much of a program can be completed through prior learning assessment to qualify for aid.
  • Accreditation Standards: Various institutional (regional) accreditors have unique standards for CPL and CBE. For example, the Higher Learning Commission (HLC) has specific guidelines for CBE regarding faculty qualifications and assessment methods.
  • State Authorization: If institutions offer CBE programs across state lines online, they need to comply with varying state requirements. The State Authorization Reciprocity Agreement (SARA) can help simplify this, but it has its own rules.
  • Credit Hour Equivalencies: It’s essential to establish clear policies on how competencies or prior learning convert to credit hours, aligning with accreditor and federal definitions.
  • Assessment Documentation: Keeping detailed records of assessment processes and outcomes is crucial for demonstrating program quality to accreditors.
  • Regular Review and Reporting: University personnel should set up processes for regularly reviewing CPL and CBE programs to stay compliant with changing regulations and standards. Be ready to report on student progress and outcomes in your annual reports and in self-study reports in preparation for reaccreditation site reviews.
  • Substantive Change Notifications: If your institution is planning to launch new CBE programs or expand CPL offerings, be aware that this may require notifying accreditors of a substantive change.

Conclusion

While CPL and CBE both aim to enhance learning and credit recognition, they serve different purposes and use different methods. Understanding these differences is key for educators and administrators. By following best practices and keeping regulatory considerations in mind, institutions can create strong programs that meet diverse learner needs and promote academic success.

###

About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP). She can be reached at: Roberta@globaleducationalconsulting.com

 

Top Photo Credit: Jeswin Thomas on Unsplash

Interview Preparation: An Essential Part of a Successful CAEP Site Visit

CAEP Interview Preparation

Let’s cut to the chase: Interview preparation is one of the best things an institution can do to ensure a successful accreditation outcome.

Preparing for an accreditation site visit is always stressful for higher education faculty and staff, even under the best of circumstances. Depending on whether it’s a regional (institutional) accrediting body, a state compliance audit, or a programmatic accreditor, certain processes and procedures must be followed. While each body has its own nuances, there’s one thing every institution should do to prepare: help its interviewees prepare. This piece will focus on helping educator preparation programs prepare for a Council for the Accreditation of Educator Preparation (CAEP) site visit.

Important note: The guidance below focuses exclusively on the final months and weeks leading up to a site visit. The actual preparation begins approximately 18 months before this point, when institutions typically start drafting their Self-Study Report (SSR).

2-4 Months Prior to the Site Visit

Approximately 2-4 months prior to a site visit, the CAEP team lead meets virtually with the educator preparation program (EPP) administrator(s) and staff. Sometimes, representatives of that state’s department of education will participate. By the end of this meeting, all parties should be “on the same page” and clear about what to expect in the upcoming site visit. This includes a general list of who the team will likely want to speak with when the time comes.

A Word About Virtual and/or Hybrid Site Reviews

The onset of Covid-19 prompted CAEP to switch from onsite reviews to a virtual format. Virtual or hybrid site reviews require a different type of preparation than reviews conducted exclusively onsite. As Covid recedes into the rearview mirror, accreditors may gradually ease back into onsite reviews, or at least a hybrid model. I provided detailed guidance for onsite reviews in a previous post.

CAEP has assembled some very good guidelines for hosting effective accreditation virtual site visits, and I recommend that institutional staff familiarize themselves with those guidelines well in advance of their review.

Interviews: So Important in a CAEP Site Visit

Regardless of whether a site visit is conducted on campus or virtually, one thing holds true:

An institution can submit a stellar Self-Study Report and supporting pieces of evidence, only to fail miserably during the site review itself. I’ve seen this happen over and over again. Why? Because the institution didn’t properly prepare its interviewees. Remember that the purpose of site visit interviews is twofold:

First, site team reviewers need to corroborate what an institution has stated in their Self-Study Report, Addendum, and supporting pieces of evidence. In other words:

Is the institution really doing what they say they’re doing?

For example, if the institution has stated in their written documents that program staff regularly seek out and act on recommendations from their external stakeholders and partners, you can almost bet that interviewees will be asked about this. Moreover, they’ll be asked to cite specific examples. And they won’t just pose this question to one person. Instead, site team reviewers will attempt to corroborate information from multiple interviewees.

Second, site team reviewers use interviews to follow up on questions that linger after reading the previously submitted documents. For example, if neither the Self-Study Report nor the Addendum provided sufficient detail about how program staff ensure that internally created assessments meet criteria for quality, reviewers will make that a focus in several interviews.

In most instances, the site team lead will provide a list of individuals who can respond accurately and confidently to team members’ questions. Within the educator preparation landscape, typical examples include:

However, I have seen instances where the team lead asks the institution to put together this list. Staff need to be prepared for either scenario.

Mock Visits: Essential to Site Review Interview Preparation

Just as you wouldn’t decide a month in advance to run a marathon when the farthest you’ve walked is from the couch to the kitchen, an institution imperils itself if it doesn’t fully prepare for an upcoming site visit, regardless of whether it’s onsite, virtual, or hybrid.

I’ve come to be a big believer in mock visits. When I first started working in compliance and accreditation many years ago, I never saw their value. Truthfully, I saw them as a waste of time. In my mind, while not perfect, our institution was doing a very good job of preparing future teachers. And, we had submitted a Self-Study Report and supporting pieces of evidence which we believed communicated that good work. We took great care in the logistics of the visit and when the time came, we were filled with confidence about its outcome. There was one problem:

We didn’t properly prepare the people who were going to be interviewed.

During site visits, people are nervous. They’re terrified they’ll say the wrong thing, such as spilling the beans about something the staff hopes the site team reviewers won’t ask about. It happens. Frequently.

When we’re nervous, some of us talk rapidly and almost incoherently. Some won’t talk at all. Others will attempt to answer questions but fail to cite specific examples to back up their points. And still others may be tempted to use site visit interviews as an opportunity to air grievances about program administrators. I’ve seen each of these scenarios play out.

This is why it’s critical to properly prepare interviewees for this phase of the program review. And this can best be done through a mock site visit. Another important thing to keep in mind is that the mock visit should mirror the same format that site team members will use to conduct their program review. In other words, if the site visit will be conducted onsite, the mock visit should be conducted that same way. If it’s going to be a virtual site visit, then the mock should follow suit.

Bite the bullet, hire a consultant, and pay them to do this for you.

It simply isn’t as effective when this is done in-house by someone known in the institution. A consultant should be able to generate a list of potential questions based on the site team’s feedback in the Formative Feedback Report. In addition to running a risk assessment, a good consultant should be able to provide coaching guidance for how interviewees can communicate more effectively and articulately. And finally, at the conclusion of the mock visit, they should be able to provide institutional staff with a list of specific recommendations for what they need to continue working on in the weeks leading up to the site visit in order to best position themselves for a positive outcome.

If you’re asking if I perform this service for my clients, the answer is yes. There is no downside to preparation, and I strongly encourage all institutions to incorporate this piece into their planning and budget.

While the recommendations above may feel exhausting, they’re not exhaustive. I’ve touched on some of the major elements of site visit preparation here but there are many more. Feel free to reach out to me if I can support your institution’s CAEP site visit effort.

###


 

Top Graphic Credit: Pexels

 

Teacher Effectiveness & Positive Impact: The Dynamic Duo

Shaping Lifelong Learners: The Symbiosis of Teacher Effectiveness and Positive Impact

In education, a lot of emphasis is placed on teacher effectiveness and positive impact, as it should be. It’s widely accepted that teachers are highly influential on students, and that influence doesn’t just stop at the end of the school day or even the school year. Teachers have the ability to impact students’ learning and achievement for many years.

As a society, we want to know that those responsible for instructing our children are competent, caring, reflective, and ethical. We want teachers to possess the kind of skills, knowledge, and dispositions they need to model positive behaviors and support students in their learning and development.

Principals typically are responsible for monitoring the effectiveness of teachers in their building. They come in a few times per year and formally observe and evaluate each teacher “in action” while they’re teaching a lesson. Principals then rate teachers on their effectiveness using various district-approved criteria.

In addition, colleges and universities that prepare future teachers also play an important role in ensuring their graduates will be effective in the classroom.

That said, teacher effectiveness and having a positive impact on students’ learning and development are related concepts but are not necessarily synonymous. In fact, the Council for the Accreditation of Educator Preparation (CAEP), a leading national accrediting body, requires educator preparation providers to show the extent to which program completers are having a positive impact on the learning and development of their P-12 students. However, despite publishing a guide on the topic, the accrediting body doesn’t clearly articulate that while these terms go hand in glove, they are not the same and can’t be measured in the same way.

In order to have well-rounded, successful learners, we need to see evidence of both teacher effectiveness and positive impact. Here’s a brief explanation of the differences between the two:

Measuring Effectiveness vs. Impact

No doubt about it: We need teachers to plan lessons that are aligned to state standards. They must design learning experiences that will help students grasp important skills and concepts throughout the school year. There continues to be a heavy emphasis on using high stakes standardized assessments to measure student learning and subsequently, teacher effectiveness. However, an assessment is typically not a good way to truly measure positive impact. How, for example, can a test determine a student’s love for learning or their social development?

Teacher Effectiveness and Positive Impact

Long-Term vs. Short-Term Outcomes

We all want to see immediate results. When we change our diet or increase our exercise, we typically expect quick outcomes when we climb on the scale, and we’re elated when we see those pounds going down and feel those clothes become looser. However, we may not realize the long-term impact of those efforts until many months or years later. Lowering our cholesterol, taking pressure off our joints, and the like can take quite a while to notice and can be hard to measure. This is similar in some ways to teacher effectiveness and positive impact:

Long Term vs. Short Term Outcomes

Holistic Development vs. Academic Achievement

We certainly need to support our students’ learning. They need to know facts and critical information about a variety of topics. In turn, they must be able to demonstrate what they know and are able to do within both formal and informal assessments. However, students also need to learn how to interact positively with others, solving problems and conflicts in a way that meets their needs while also treating others with respect. In other words, they need to develop life skills.

Holistic Development vs. Academic Achievement

Student Engagement and Motivation

We need safe, orderly classrooms with sufficient structure, yet we also need to create learning environments that encourage students to stretch their minds, explore their dreams, and begin the journey of becoming eager lifelong learners.

Student Engagement and Motivation

Striking the Balance: Unveiling the Dual Roles of Effective Teaching

So, a teacher can be effective in a single lesson, or over a unit of study. They can create an orderly, calm learning environment where students are well-behaved. They can create and deliver instructional lessons that are aligned to state standards, and their students can perform well on formative and summative assessments. Those are all examples of teacher effectiveness, and we certainly want that.

However, we also need our teachers to support their students as individuals, helping them to feel excited and motivated. We need teachers to encourage learners to think creatively and critically and ask questions. We want educators to empower students so they gradually take on a greater role in their own learning and decision making. Those are the kinds of influences teachers can and should have on their students, because those are skills that students will carry with them for the rest of their lives. That’s positive impact.

Beyond the Classroom: Nurturing Effective Teachers for Lasting Impact

In summary, while teacher effectiveness is an important aspect of education, having a positive impact on students’ learning and development involves a more comprehensive and long-term perspective. It extends beyond academic achievements to encompass holistic growth and lifelong learning skills. Teacher education program faculty should integrate these concepts into their coursework and clinical experiences. They should also be working in partnership with local school districts by exchanging ideas and providing professional development. Developing highly effective teachers who make a positive impact on students’ learning and development requires a concerted effort, and it doesn’t happen overnight.

###

 


Top Photo Credit: Zainul Yasni on Unsplash 

Career-Focused Outcomes in Higher Education

Career-Focused Outcomes in Higher Education

In an educational landscape where scrutiny is high, academic institutions find themselves under the microscope, particularly in demonstrating their value to stakeholders. To address this, colleges and universities must articulate their commitment to preparing students for the workforce effectively. This involves not only showcasing career-focused outcomes but also ensuring a tangible return on investment. Metrics have become the tool of choice, allowing institutions to gauge success both at a macro and micro level.

From Classroom to Career

It’s so important for colleges and universities to show the academic community, as well as the public at large, that they provide good value for the money that students, donors, and taxpayers invest in them each year. One of the ways they do this is through career-focused outcomes. Higher education institutions must be able to answer questions like:

Career-Focused Outcomes

Career-Focused Outcomes Using a Macro vs. Micro Lens

Metrics like these are measured in various ways. An entire institution, for example, may view this through a broad lens, and may answer questions like these from a macro perspective. However, each academic program should be able to collect, analyze, and interpret data tailored to its specific area in order to answer the ROI question from more of a drilled-down, micro perspective.

Teacher Effectiveness and Positive Impact

In educator preparation, for example, one important indicator of a program’s quality can be found in the performance of its graduates, typically up to three years post-graduation. Teacher preparation program faculty and staff must look closely at a large number of performance indicators, two of which are teacher effectiveness and positive impact on student learning. These are related concepts, but they are not necessarily synonymous. Let’s break down the similarities and differences:

Similarities

  • Focus on Student Outcomes: Both teacher effectiveness and positive impact center around achieving positive outcomes in students’ learning and development.
  • Student Progress: Both concepts involve assessing and improving students’ progress, academic achievements, and overall growth.

Differences

  • Teacher effectiveness: Typically refers to how well a teacher can facilitate learning and engage students in the educational process. It is often measured through various factors such as classroom management skills, instructional techniques, subject knowledge, and adherence to curriculum standards. Typical pieces of evidence for determining teacher effectiveness often include peer observations, principal evaluations, a review of teaching methods, lesson plans, and classroom management practices.
  • Positive Impact on Students:  Involves not only effective teaching but also fostering a supportive and motivating environment that contributes to students’ personal and academic growth. It goes beyond traditional academic metrics and may include factors like students’ social-emotional development, critical thinking skills, and overall well-being. Evidence for positive impact can include student testimonials, changes in behavior or attitudes, academic improvement, and long-term success beyond the classroom. Another way schools and states try to determine positive impact comes from value-added data, which involves measures that typically focus on quantifying the specific contribution a teacher makes to students’ academic achievement, often measured through standardized test scores.

Conclusion

It is very important for higher education institutions to create a well-balanced schema for answering questions related to job preparation, positive impact, and overall return on investment. They must collect and analyze data from a variety of internal and external high-quality assessments. It’s about tracking results over time and making informed decisions with a commitment to continuous improvement.

In essence, the pursuit of showcasing career-focused outcomes is a collective effort that involves the institution as a whole and each academic program individually. By embracing a holistic perspective and delving into program-specific metrics, colleges and universities can not only provide answers to pertinent questions, but also demonstrate their unwavering commitment to delivering value in the evolving landscape of higher education.

###

 


 

Top Photo Credit:

Competency-Based Education: A Paradigm Shift in Higher Learning

CBE A Paradigm Shift in Higher Learning

We need a paradigm shift in higher learning. For over a century, the Carnegie Unit has been the cornerstone of American education, providing a time-based standard for student progress. However, as the landscape of education evolves, the limitations of this model become apparent, prompting educators to explore innovative alternatives. One such model gaining significant traction is Competency-Based Education (CBE). In this post, I’ll delve into the merits of CBE and offer some practical tips for higher education professionals looking to pilot this transformative approach. 

Rethinking Education in the 21st Century

The traditional education model often propels students forward collectively, irrespective of individual learning paces or abilities. The disruption caused by events like COVID-19 has underscored the need for a more adaptive and personalized approach. We know that each learner is different, and they come with a variety of learning needs as well as life and work experiences. For too long, we’ve used a cookie cutter, one-size-fits-all approach to teaching and learning — particularly at the higher education level. Enter Competency-Based Education, a paradigm that requires learners to demonstrate their understanding and skills through rigorous assessments rather than mere attendance. It also requires faculty members, administrators, and other staff to rethink their roles and how they support students through their academic journey.

Unveiling the Essence of CBE

Competency-Based Education isn’t about taking the easy route; it’s about embracing a different and more effective methodology. Instead of passively absorbing information, students are challenged to showcase their knowledge and abilities through high-quality assessments. This approach is inherently standards-based and is built on evolving educational and/or industry-specific standards. This is far different from what most faculty members are used to, when they alone decide what content to teach in their classes, how students will meet their expectations, and the pace at which students may progress through a course. 

Key Principles of Competency-Based Education

Traditional learning and CBE learning share a common goal of wanting students to be successful. It’s how they meet that goal that’s different. Here are some key “big picture” ways in which a competency-based model differs from a traditional course-based model:

Competency-based education is a paradigm shift in higher learning.

A Paradigm Shift: Tips for Piloting CBE in Higher Education

I’ve presented at conferences on this topic, and multiple times have been approached by a college dean or department chair who was interested in bringing the CBE model to their campus. Few realize that changing to this model — either retrofitting an existing program or creating one from scratch — requires a considerable paradigm shift, not only in academics but in infrastructure services (i.e., enrollment & admissions, registrar, bookstore, academic advising, etc.). I even had a dean once pull a pen and small tablet out of her purse, waiting for me to give her three easy steps to CBE, as if it were a biscuit recipe. The truth is, competency-based education is a complex approach to teaching and learning. Once it’s in place, the payoff can be tremendous — but stakeholders must understand the cultural changes that must take place for CBE to become a long-term reality within their institutions.

Here are a few key tips for launching CBE at the higher education level:

CBE is a paradigm shift in higher learning.

A Long-Term Commitment to Student Success

Competency-Based Education is not a quick fix, but a powerful, long-term solution to enhance student learning, achievement, and satisfaction. It truly is a paradigm shift in higher learning. I think it’s time to take the leap into a future where education adapts to the needs of the learner.

###


Top Photo Credit: Kaleidico on Unsplash 

CBE: A Transformative Approach for Higher Learning

CBE

Introduction

In the dynamic landscape of education, where the needs and expectations of learners continue to evolve, Competency-Based Education (CBE) stands out as a powerful and adaptive model. Initially prevalent in P-12 schools, CBE is progressively gaining recognition and traction in higher education. For those interested in exploring new and better ways to meet the needs of learners, it’s crucial to understand the transformative potential of CBE and how to initiate this innovative model in your institution.

Understanding the Core Principles of Competency-Based Education

Demonstrative Assessment

In a CBE model, students showcase their knowledge and skills through a variety of high-quality formative and summative assessments. This approach shifts the focus from traditional testing to a more comprehensive evaluation of a student’s true understanding and application of concepts.

Measurable and Clear Expectations

CBE emphasizes measurable and clearly defined expectations. Learners are aware of the specific targets they need to reach in order to demonstrate competency or proficiency in key concepts or skills aligned with standards. This clarity empowers students to take ownership of their learning journey.

Outcome Over Seat Time

Let’s face it: We’ve all had students who showed up for class, but never answered a question and could barely stay awake. Or they sat glued to their phone throughout the period and couldn’t wait to make their exit. Unlike traditional models that rely on seat time, CBE prioritizes what students learn rather than how long they spend in a classroom. This flexibility allows students to progress at their own pace, accommodating those with diverse life or work experiences who may not require a conventional college experience.

Mentorship Model

Faculty members transition from direct instructors to mentors or learning coaches. This shift is fundamental in supporting student learning, enabling students to work independently and guiding them through their educational journey. The mentorship model fosters a personalized approach to education. Truthfully, some faculty members have a difficult time making this transition. But for those who are able, it can be tremendously satisfying to support students on their educational journey rather than being the sage on the stage.

Data-Driven Decisions

Instructional decisions in a CBE environment are data-driven. Regular assessments provide valuable insights into student progress, allowing faculty to tailor their support and interventions based on individual needs. This personalized approach contributes to a more effective learning experience.

Navigating CBE Implementation Challenges

Initiating CBE at the college or university level requires a comprehensive institutional commitment. This commitment involves a paradigm shift in the faculty model, changes in registration and scheduling processes, and adaptations to student support services. Here are a few practical tips to navigate these challenges:

Faculty Development

Invest in comprehensive faculty training programs to equip educators with the skills and mindset required for the mentorship role. Workshops on coaching techniques, personalized learning strategies, and outcome-oriented assessment methods can be invaluable.

Flexible Scheduling and Registration

Redefine traditional scheduling structures to accommodate the individualized pace of CBE. Implement flexible course structures and explore modular approaches to allow students to progress based on their demonstrated competencies.

Technology Integration

Leverage educational technology to facilitate personalized learning pathways. Learning management systems, data analytics tools, and adaptive learning platforms can enhance the effectiveness of CBE by providing real-time insights into student performance.

Communication and Marketing

Effectively communicate the benefits of CBE to both faculty and students. Highlight the flexibility, personalized learning experiences, and real-world applicability of competencies acquired. Develop marketing strategies to attract students who seek a non-traditional educational experience.

Accreditation Alignment

Collaborate with accrediting bodies to ensure that your institution’s competency-based instructional models align with their standards. Stay informed about modifications in regulations and actively engage in discussions with accrediting agencies to demonstrate the effectiveness and rigor of the CBE approach. Though hesitant at first, many accrediting bodies, such as the Higher Learning Commission (HLC), the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), and others recognized by the Council for Higher Education Accreditation (CHEA), have modified their regulations to include competency-based instructional models.

Embracing the Future of Education

While the transition to Competency-Based Education may present challenges, the benefits are substantial. It provides a pathway for institutions to meet the needs of a diverse student population, acknowledging the rich experiences that learners bring to the table. Moreover, the flexibility of CBE can be a strategic advantage in attracting a broader range of students.

As pioneers in higher education, faculty, department chairs, deans, provosts, and accreditation specialists have the opportunity to shape the future of learning. By embracing the principles of CBE and strategically navigating the implementation challenges, institutions can create an environment that not only meets the evolving needs of students but also positions them as leaders in innovative education.

###

About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: Roberta@globaleducationalconsulting.com

 

Top Photo Credit: Dollar Gill on Unsplash

 

Quality Assurance System: The Drivetrain of Institutional Effectiveness

Quality Assurance System

If you spend much time at all within the accreditation space, you’ll undoubtedly hear someone in higher education say, “Oh, we have a Quality Assurance System (QAS); we use _________.” They’ll proudly point to a license agreement they have with a company, where student work or assessment results are uploaded and stored. Some use that service to run data reports and are thrilled to share that it even “does data analysis.” Unfortunately, those well-intentioned individuals are missing the mark when it comes to a QAS.

A Quality Assurance System is really like the drivetrain of our car—without it we’d get nowhere, stuck along the side of the road. We’d know we had a problem, but without that drivetrain we may not know how to resolve our issue. We’d be wondering what to do next.

What a Quality Assurance System Isn’t

It’s important to remember that a Quality Assurance System isn’t a software program or a subscription-based website. It’s a well-planned and executed system by which institutions and individual programs monitor quality on key performance indicators. They then use insights gleaned from trendlines to make data-informed programmatic decisions.

Essential Components of a Healthy QAS

A healthy, solid quality assurance system requires a well-defined schema that involves looking at multiple data sources and being able to triangulate those data over time to look for patterns, trends, strengths, and weaknesses. And it shouldn’t just be one or two people reviewing data—there should be groups and advisory boards assigned to this task. Why? So steps can be taken to make improvements when the need arises.

High Quality Assessments

 

A well-functioning QAS requires a blend of both proprietary and internally created, high-quality assessments. We know that data are only as good as the assessments themselves. Great care must be taken when creating key assessments to ensure that each measures what it is intended to measure (content validity) and that it produces consistent results across multiple administrations (reliability). Surveys need to be created with a manageable number of questions, and items should be worded clearly. New assessments need to be piloted according to widely accepted protocols.

Real-Life Assessment Examples

Some examples of proprietary assessments that colleges and universities often use include the SAT, ACT, GRE, edTPA, Praxis, NCLEX, and so on. In other words, these are standardized high-stakes assessments that have been developed and road-tested by assessment development companies.

Internally created assessments, on the other hand, are those institutions create “in-house” for a variety of purposes. For example, it’s common for colleges to survey their students at the end of each semester to determine their satisfaction with their instructors, the quality of the food in the cafeteria, advising services, and so on. Faculty within programs also develop what they consider to be key assessments, perhaps 5-7 required of all students, to monitor skill development as they progress through a particular licensure-track program. These are often cornerstone assessments in a select group of courses, and they can provide valuable insight into student performance as well as the quality of the program itself.

Stakeholder Input

A solid QAS depends on input from both internal and external stakeholders. Faculty, student support staff, current students, graduates, and members of the community or business and industry should serve in advisory capacities. Each individual brings a unique set of experiences and perspectives to the table, and diversity of thought can enrich programs.

Real Life Stakeholder Examples

Internal stakeholders include current and past students, faculty members, academic advisors, and so on. External stakeholders are those on the outside of the college or university. They include employers, individuals who have graduated more than a year ago, members of relevant civic groups, and so on. It’s really important to garner the perspective of those who are from within the institution as well as those who are on the outside looking in.

The Ultimate Goal: Continuous Program Improvement

And finally, a well-functioning Quality Assurance System must enable institutions to make data-informed decisions with confidence, for the purpose of continuous program improvement. Staff must be able to identify specific areas of strength, as well as specific areas for growth and improvement. They need to know if an approach or a policy is working or not. And they need a leg to stand on when it comes to making programmatic changes. That leg needs to be grounded in high quality data. Having well-functioning Quality Assurance Systems will support colleges and universities in their accreditation efforts, state program approvals, and growth. They truly are the drivetrain of institutional effectiveness.

###


Top Photo Credit: Samuele Errico Piccarini on Unsplash 

 

Fostering Student Success: The Significance of Transition Points in Higher Education Programs

Transition Points

Introduction

In the dynamic landscape of higher education, successful program completion involves more than just attending classes and earning credits. It requires a structured and purposeful journey through well-defined transition points or milestones. These markers delineate specific phases of progress that students must navigate to ensure they are well-prepared for the challenges that lie ahead. This is especially crucial in licensure-based programs like nursing and teacher education, where the sequential mastery of skills is paramount.

The Protective Structure of Transition Points

Transition points are not arbitrary hurdles; they are a safeguard, ensuring that students progress through a program in a planned and thoughtful manner. The structure serves to protect students and foster their success. It prevents them from signing up for an advanced level course before they have successfully completed foundational level work. Moreover, these gateways provide them with a chance to build their developing skills in key areas before engaging in field experiences. And, by adhering to established transition points, students are much more likely to graduate on time, pass licensure examinations, and get hired for a job in their chosen profession after graduation.

Key Criteria for Identifying Transition Points

Department chairs and faculty should carefully consider various criteria when determining the right transition points. For example:

Transition Points Checklist

 

A Transition Points Framework

To guide educators in implementing effective transition points, a customizable framework can be immensely beneficial. For instance, in educator preparation programs, a five-point model might include:

Transition Points

 

This framework acts as a roadmap, offering a detailed depiction of a student’s progression from matriculation to program completion. Each transition point represents a crucial phase, ensuring that students are adequately prepared before advancing to the next stage. As long as a student meets the stated expectations, the journey continues and they move ahead toward graduation. If the student fails to meet one or more expectations in a given stage, the institution implements a plan for remediation, additional support, or in some cases, counseling out of the program.

Conclusion

Transition points are the linchpin of a successful higher education program, providing a structured path for students to navigate. Through careful consideration of key criteria and the implementation of a tailored framework, educators can guide students toward timely graduation, licensure success, and a seamless transition into their chosen professions. By prioritizing these markers, institutions not only protect the interests of their students but also contribute to the overall success and reputation of their programs.

###

 


 

Top Photo Credit: Clay Banks on Unsplash 

Examples and Exemplars in Regulatory Spaces

Examples and Exemplars in Regulatory Space

Introduction

Embarking on the journey of launching a new program at your college or university is an exciting endeavor, but the regulatory landscape can be a daunting terrain to navigate. Many college and university personnel find themselves grappling with uncertainty about what evidence to provide and how to demonstrate compliance with specific standards set by institutional or programmatic accreditors. In an era where higher education websites offer a plethora of examples, it’s crucial to understand the distinction between examples and exemplars when it comes to the regulatory space. While examples can serve as general guides, they should not be mistaken for perfect templates. Here I shed light on this crucial distinction and provide higher education staff with actionable tips for a smoother regulatory approval process.

Understanding the Difference Between Examples and Exemplars

Before delving into the tips, it’s important to clarify the difference between examples and exemplars when working in the regulatory space. Examples are instances of documents, reports, or data submitted by other institutions to accrediting bodies. They can serve as helpful references, offering insight into the types of information that might be required. On the other hand, exemplars are not just examples; they are models of excellence. Exemplars represent the gold standard, and assuming that any document submitted by another institution is flawless can lead to significant pitfalls. Recognizing this distinction is the first step toward a more informed and successful regulatory approval process.

 

Alternatives to Using Examples & Exemplars in Regulatory Work

Customization is Key

While examples can provide a starting point, it’s crucial to customize documents and evidence to align with the unique characteristics of your institution and the proposed program. Copying and pasting from examples might not capture the specific nuances and strengths of your institution, potentially leading to a misrepresentation of your capabilities.

Engage in Peer Collaboration

Instead of relying solely on online examples, consider engaging in collaborative efforts with peer institutions. Sharing insights, challenges, and successful strategies with institutions facing similar regulatory processes can offer a more nuanced understanding. Peer collaboration allows for the exchange of real-world experiences and promotes a collective learning environment.

Regularly Review and Update Documentation

The regulatory landscape evolves, and so should your documentation. Rather than relying solely on outdated examples, strive to stay abreast of changes in accreditation standards and requirements. Regularly review and update your documentation to reflect any new expectations, ensuring that your submission remains relevant and compliant.

Seek Guidance from Accreditation Experts

Most institutions have dedicated accreditation liaisons or experts who can provide valuable guidance. These individuals possess an in-depth understanding of accreditation standards and can offer insights tailored to your institution’s context. Consult with them regularly to ensure your documentation meets the necessary criteria and standards. That said, some colleges and universities don’t have the luxury of full-time compliance and accreditation experts on staff, or no one on staff has experience working with a particular state agency or accrediting body. In those cases, hiring a consultant can be a wise investment.

Use Examples Judiciously

Examples can be powerful tools when used judiciously. Rather than mirroring another institution’s document entirely, extract relevant concepts, structures, and approaches that align with your institution’s context. Adapting best practices from examples can enhance the quality of your submission without compromising authenticity.

 

Conclusion

In the realm of regulatory matters, the journey to program approval requires careful consideration, strategic planning, and a nuanced approach to documentation. While examples can serve as valuable guides, they should not be misconstrued as flawless templates. The key lies in understanding the unique needs of your institution and tailoring documentation accordingly. By following these tips, higher education staff can navigate the regulatory landscape with confidence, ensuring that their submissions stand out for their authenticity and compliance.

###

 


 

Top Photo Credit: Gabrielle Henderson on Unsplash 

 

Transforming Higher Education: The Power of Student Mentoring Programs

Mentoring

Introduction

In the ever-evolving landscape of higher education, the quest for student success remains a central concern for colleges and universities across the United States. While academic advisors play a pivotal role in guiding students through their educational journey, a more personalized and intensive approach is required to meet the needs of at-risk students. This is where student mentoring programs step in. Here I explore the concept of student mentoring in higher education — delving into its benefits, potential drawbacks, and its significant role in enhancing institutional effectiveness and accreditation efforts.

Understanding the Role of a Student Mentor

In traditional academic advising, the primary focus is on helping students chart their academic paths and assisting with course registration. However, there exists a group of students who require a more hands-on and personalized approach. These students, often referred to as at-risk, may struggle with various aspects of their college experience, be it academic, financial, or personal. A student mentor is a specially trained individual who goes beyond the traditional academic advisor’s role.

A mentor typically:

  • Interacts with students regularly: A mentor engages with the student multiple times each month through various communication channels, including email, phone calls, text messages, virtual conferences, or in-person meetings. This frequent interaction helps build a strong support system for the student.
  • Acts as a liaison: A mentor serves as a bridge between the student and various university services. If a student encounters difficulties with financial aid applications, the mentor can either assist directly or connect the student with the appropriate staff in the Financial Aid office. Similarly, if a student is struggling academically, the mentor can facilitate tutoring services.
  • Monitors student progress: If a student begins to miss classes or falls behind in their coursework, the mentor plays a proactive role in reaching out to the student. They work with the student to identify the reasons for their struggles and collaboratively develop a plan for academic success.

The Benefits of a Strong Mentoring Model

The traditional academic advising model often relies on students seeking assistance, which may not be sufficient for at-risk students. However, a strong mentorship model, where a mentor is assigned to a student upon matriculation and remains with them until graduation, offers numerous advantages:

  • Improved Student Success: A mentor’s consistent support and guidance significantly contribute to student success. At-risk students often face challenges that can derail their academic progress, and a mentor helps address these issues promptly, leading to higher achievement and improved GPA.
  • Enhanced Student Retention: By closely monitoring a student’s academic journey, a mentor can identify and address issues that may lead to dropouts. This proactive approach contributes to higher retention rates, which is a key concern for colleges and universities.
  • Greater Student Satisfaction: The personal connection and support provided by mentors lead to increased student satisfaction. Knowing there is someone dedicated to their success boosts students’ morale and confidence.
  • Improved Institutional Effectiveness: A well-structured mentorship program aligns with institutional effectiveness goals. It provides a systematic approach to monitor, support, and measure student success, helping institutions meet accreditation standards more effectively.
  • Accreditation Compliance: Accreditation bodies, such as the Higher Learning Commission (HLC), the Association for Biblical Higher Education (ABHE), and the Council for the Accreditation of Educator Preparation (CAEP), emphasize the importance of demonstrating support for student success. A strong mentorship program positions institutions to meet these requirements effectively.

Challenges and Drawbacks to a Mentoring Model

While student mentoring programs offer immense benefits, there are challenges and potential drawbacks that institutions need to consider:

  • Financial Costs: Implementing a mentorship program requires hiring and training mentors, which can strain an institution’s budget. However, the long-term benefits often outweigh the initial costs.
  • Workload for Mentors: Mentors must be dedicated and properly trained to address a wide range of student needs. The workload can be intensive, and managing a caseload of at-risk students requires effective time management and organizational skills.
  • Scalability: Scaling a mentorship program to accommodate a growing student population can be challenging. Institutions must carefully plan and allocate resources to ensure the program’s success as the student body expands.
  • Cultural Shift: Shifting from a traditional academic advising model to a mentorship program may require a cultural shift within the institution. Faculty, staff, and students need to adapt to the new approach.

Practical Steps for Implementing a Student Mentorship Program

To successfully implement a student mentorship program in your institution, consider the following practical steps:

  • Assess Student Needs: Identify the specific needs of your student population. Conduct surveys, focus groups, and data analysis to understand the challenges at-risk students face.
  • Define Mentor Roles: Clearly outline the roles and responsibilities of mentors. Determine how they will interact with students and which services they will connect students with.
  • Mentor Training: Invest in comprehensive training for mentors, covering areas such as academic support, communication skills, and campus resources. Training is crucial for ensuring mentors are well-prepared to assist students effectively.
  • Integration with Existing Services: Ensure seamless integration with existing university services, such as academic advising, financial aid, and tutoring. Mentors should collaborate with these services to provide holistic support.
  • Data and Monitoring: Implement a data-driven approach to monitor the program’s impact on student success. Regularly assess the program’s effectiveness and make adjustments as needed.
  • Student Outreach: Promote the mentorship program to incoming students and engage them from day one. Assign mentors to students upon matriculation to establish a strong support system from the start.
  • Resources Allocation: Allocate necessary resources, both in terms of personnel and budget, to support the program. Consider seeking external funding sources if needed.

Conclusion

In the quest for higher education excellence and student success, student mentoring programs play a pivotal role. These programs provide a more personalized, proactive, and comprehensive approach to supporting at-risk students, ultimately leading to improved retention, student satisfaction, and academic success. While there are financial and logistical challenges, the long-term benefits, including compliance with accreditation standards and institutional effectiveness goals, make student mentoring a worthwhile investment for colleges and universities.

In a rapidly changing higher education landscape, the transformational power of student mentoring programs can be the catalyst for lasting change, ensuring that all students have the opportunity to thrive and succeed in their academic pursuits.

###


Top Photo Credit:  Monica Melton on Unsplash

The Pillars of Data Consistency: Inter-Rater Reliability, Internal Consistency, and Consensus Building

data consistency

Introduction

Accreditation in higher education is like the North Star guiding the way for colleges and universities. It ensures institutions maintain the highest standards of educational quality. Yet, for higher education professionals responsible for completing this work, the journey is not without its challenges. One of the most critical challenges they face is ensuring the data consistency, or reliability, of key assessments. This is why inter-rater reliability, internal consistency, and consensus building form part of the bedrock of data-informed decision making. As the gatekeepers of quality assurance, higher education professionals should possess a working knowledge of these concepts. Below, I explain some basic concepts of inter-rater reliability, internal consistency, and consensus building:

Inter-Rater Reliability

What it is: Inter-rater reliability assesses the degree of agreement or consistency between different people (raters, observers, assessors) when they are independently evaluating or scoring the same data or assessments.

Example: Imagine you have a group of teachers who are grading student essays. Inter-rater reliability measures how consistently these teachers assign grades. If two different teachers grade the same essay and their scores are very close, it indicates high inter-rater reliability. A similar example would be in an art competition, where multiple judges independently evaluate artworks based on criteria like composition, technique, and creativity. Inter-rater reliability is vital to ensure that artworks are judged consistently. If two judges consistently award high scores to the same painting, it demonstrates reliable evaluation in the competition.

Importance in Accreditation: In an educational context, it’s crucial to ensure that assessments are scored consistently, especially when accreditation bodies are evaluating the quality of education. This ensures fairness and objectivity in the assessment process.
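The essay-grading example above can be quantified. A common statistic for two raters is Cohen's kappa, which measures how much their agreement exceeds what chance alone would produce (1.0 is perfect agreement, 0 is chance-level). The sketch below uses invented rubric scores purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items scored identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's score distribution
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two teachers scoring the same ten essays on a hypothetical 1-4 rubric
teacher_1 = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
teacher_2 = [3, 4, 2, 3, 2, 4, 3, 2, 4, 2]
print(round(cohens_kappa(teacher_1, teacher_2), 2))  # → 0.72
```

A kappa in this range is usually read as substantial agreement; values near zero would signal that the rubric or rater training needs attention before the scores are used as accreditation evidence.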

Internal Consistency

What it is: Internal consistency assesses the reliability of a measurement tool or assessment by examining how well the different items or questions within that tool are related to each other.

Example: Think about a survey that asks multiple questions about the same topic. Internal consistency measures whether these questions consistently capture the same concept. For example, let’s say a teacher education program uses an employer satisfaction survey with multiple questions to evaluate various aspects of its program. Internal consistency ensures that questions related to a specific aspect (e.g., classroom management) yield consistent responses. If employers consistently rate the program quality highly across several related questions, it reflects high internal consistency in the survey.

Importance in Accreditation: When colleges and universities use assessment tools, they need to ensure that the questions within these tools are reliable. High internal consistency indicates that the questions are measuring the same construct consistently, which is important for accurate data in accreditation.
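To make the survey example concrete, internal consistency is commonly estimated with Cronbach's alpha: items measuring the same construct should co-vary, pushing alpha toward 1. This is a minimal sketch with invented employer ratings (three related questions about classroom management, each on a 1-5 scale):

```python
def cronbachs_alpha(items):
    """Cronbach's alpha for survey items answered by the same respondents.

    items: one list per question; each list holds every respondent's
    score on that question, in the same respondent order.
    """
    k = len(items)        # number of questions
    n = len(items[0])     # number of respondents

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all questions
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# Hypothetical data: five employers rating three related questions
q1 = [4, 5, 3, 4, 5]
q2 = [4, 4, 3, 5, 5]
q3 = [5, 5, 3, 4, 4]
print(round(cronbachs_alpha([q1, q2, q3]), 2))  # → 0.77
```

A common rule of thumb treats alpha of roughly 0.7 or above as acceptable for program-level decisions, though the threshold depends on the stakes of the assessment.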

Consensus Building

What it is: Consensus building refers to the process of reaching agreement or alignment among different stakeholders or experts on a particular issue, decision, or evaluation.

Example: In an academic context, when faculty members and administrators come together to determine the learning outcomes for a program, they engage in consensus building. This involves discussions, feedback, and negotiation to establish common goals and expectations. Another example might be within the context of institutional accreditation, where an institution’s leadership, faculty, and stakeholders engage in consensus building when establishing long-term strategic goals and priorities. This process involves extensive dialogue and agreement on the institution’s mission, vision, and the strategies needed to achieve them.

Importance in Accreditation: Accreditation often involves multiple parties, such as faculty, administrators, and external accreditors. Consensus building is crucial to ensure that everyone involved agrees on the criteria, standards, and assessment methods. It fosters transparency and a shared understanding of what needs to be achieved.

Conclusion

In summary, inter-rater reliability focuses on the agreement between different evaluators, internal consistency assesses the reliability of assessment questions or items, and consensus building is about reaching agreement among stakeholders. All three are essential in ensuring that data used in the accreditation process is trustworthy, fair, and reflects the true quality of the institution’s educational programs.

###


Top Photo Credit: Markus Spiske on Unsplash 

Persistence and Retention in Higher Education

Persistence and Retention Word Cloud

In higher education, “persistence to graduation” and “retention” are related but distinct terms that are often used to measure and analyze student progress and institutional effectiveness. College and university personnel encounter them when working on institutional or programmatic accreditation efforts. The terms are sometimes used interchangeably, yet they are not synonymous.

For example, the Higher Learning Commission (HLC) makes this distinction in its Teaching and Learning: Evaluation and Improvement criterion (Criterion 4C).  In its Guiding Principle 2 (Standard IV), the Middle States Commission on Higher Education (MSCHE) requires member institutions to “…commit to student retention, persistence, completion, and success through a coherent and effective support system…”

Here’s a very quick overview of the difference between retention and persistence:

Retention

Retention refers to the percentage of students who continue their enrollment at the same institution from one academic year to the next. It measures how many students remain at the same college or university without transferring or dropping out.

Retention is primarily concerned with keeping students within the institution they initially enrolled in.

Persistence

Persistence, on the other hand, is a broader term that encompasses a student’s continuous pursuit of a degree or educational goal. It measures whether a student is consistently working toward completing their program or degree, regardless of whether they stay at the same institution or transfer.

Persistence focuses on the overall progress of a student toward their educational goal, which can involve transferring to another institution, taking breaks, or pursuing part-time studies.

The Bottom Line

In summary, while both persistence and retention are crucial metrics in higher education, they differ in focus and scope:

Retention is concerned with students staying at the same institution and measures institutional success in keeping students from leaving.

Persistence is concerned with students continuously working toward their educational goals, which may include transferring to other institutions, taking breaks, or pursuing part-time studies.
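The distinction shows up directly in how each rate is computed from the same cohort data. In this sketch, the student records and status labels are entirely hypothetical; real definitions (for example, IPEDS fall-to-fall retention) vary by reporting context:

```python
# Hypothetical first-year cohort: each record shows where the student
# stood one year after enrolling. Field names are illustrative only.
cohort = [
    {"id": 1, "status": "enrolled_same_institution"},
    {"id": 2, "status": "enrolled_same_institution"},
    {"id": 3, "status": "transferred_still_pursuing_degree"},
    {"id": 4, "status": "stopped_out"},
    {"id": 5, "status": "enrolled_same_institution"},
]

n = len(cohort)
# Retention counts only students still at the SAME institution
retained = sum(s["status"] == "enrolled_same_institution" for s in cohort)
# Persistence also counts students still pursuing the goal elsewhere
persisting = sum(s["status"] in (
    "enrolled_same_institution",
    "transferred_still_pursuing_degree",
) for s in cohort)

retention_rate = retained / n      # 3/5 = 60%
persistence_rate = persisting / n  # 4/5 = 80%
print(retention_rate, persistence_rate)
```

The transfer student (id 3) counts against this institution's retention rate but still counts toward persistence, which is exactly why the two metrics can diverge.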

Higher education institutions and accreditation bodies use these terms to assess student success and institutional performance, with the goal of improving graduation rates and the overall quality of education. Both are important to quality assurance but are determined by different data.

###


 

Leveraging Stakeholder Involvement for Higher Education Quality Assurance

Stakeholder Group Meeting

In the realm of higher education, quality assurance and institutional effectiveness are paramount. Internal and external stakeholder groups, including students, faculty, alumni, employers, and community members, play a pivotal role in this process. Their active involvement not only ensures transparency but also significantly contributes to accreditation efforts.

It seems that nearly everyone in higher education is aware of the need for stakeholder involvement (or at least says they are), but very few actually use it effectively. In this post, I delve into the importance of stakeholder involvement in higher education and provide practical advice for colleges and universities to harness it effectively.

Why Stakeholder Involvement Is Vital

Engaging stakeholders brings diverse perspectives and valuable insights to the forefront. Here’s why their involvement is critical:

Enhanced Accountability

Stakeholder involvement fosters transparency and accountability within institutions. It ensures that decisions align with the needs and expectations of those they serve.  As members of the higher education community, we often develop “tunnel vision” and become so entrenched in our everyday institutional bubble that it’s possible to lose our perspective. As a result, we sometimes don’t consider things from a lens outside of our own. That’s where stakeholder groups can be so valuable to the accountability process.

Continuous Program Improvement

Regular feedback from stakeholders helps colleges and universities identify areas for enhancement. This feedback loop leads to ongoing program improvements, benefiting students and the broader community.  To that end, institutional accreditor Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) prompts university personnel to ensure that appropriate internal and external constituents and stakeholders are involved in the planning and evaluation process as part of their overall institutional planning and effectiveness model.

Accreditation Support

Accrediting bodies often require evidence of stakeholder involvement. Comprehensive records of these interactions streamline the accreditation process and bolster institutional credibility. That doesn’t mean, however, that we should just create an advisory board of some kind in name only. Nor should we hold our obligatory annual meetings for the purpose of simply checking a box and moving on. If institutions build a culture of continuous program improvement rather than a culture of compliance, they will realize just how important stakeholders can be to their regulatory success.

Initiating and Optimizing Stakeholder Involvement

Here are practical steps for college and university personnel to initiate and optimize stakeholder involvement:

Identify Your Key Stakeholders

Identify the primary internal and external stakeholders relevant to your institution, including students, part-time and full-time faculty, alumni, employers, business and industry representatives, and community organizations. Students, of course, should be viewed as the most critical stakeholder in higher education. To underscore the importance of this group, the Higher Learning Commission adopted it as Goal #1 in its Evolve 2025: Vision, Goals, and Action Steps.  It’s essential to select individuals who genuinely want to help you improve your institution. It’s also important to build a cadre of stakeholders who represent a variety of backgrounds and perspectives.

Set Clear Objectives

Determine the specific outcomes you need from your stakeholder groups. Are you seeking input on curriculum development, program evaluation, or community engagement initiatives? Having a clear purpose guides your efforts. For example, in its 2020 Guiding Principles and Standards for Business Accreditation, the Association to Advance Collegiate Schools of Business (AACSB) specifies that stakeholders should play a central role in developing and implementing a program’s strategic plan, in its scholarship, and in its quality assurance system.

Establish Communication Channels

Create multiple communication avenues with stakeholders, such as surveys, focus groups, advisory committees, and regular meetings. Ensure these channels are accessible and user-friendly. According to the Association for Biblical Higher Education (ABHE), an accreditor for faith-based institutions, maintaining effective communication and collaboration with stakeholder groups is an essential function of the administrative team that brings together and allocates resources to accomplish institutional goals.

Meet Regularly

Meeting with stakeholders at least once a year is crucial. Consider more frequent interactions, such as quarterly or semi-annual meetings, to maintain engagement. Establishing positive relationships takes time, and this requires seeing stakeholders more than just once per year. Some institutions invite stakeholders to a monthly virtual meeting, supported by one or two onsite meetings. To encourage attendance and keep the momentum going, consider the value of variety: Invite students to come and speak or interact with advisory board members. Don’t overdo it but try to include at least one fun icebreaker or activity in each meeting. And above all else: Whenever possible, provide food. Educators have known about this for many years, and it’s still true today: If you feed them, they will come. 

Share Data

Share relevant data and information with stakeholders, including enrollment figures, student achievement data, and institutional goals. Providing context allows stakeholders to make informed recommendations. And don't sugarcoat everything: be real with your stakeholders. If you can't trust them with data that may be less than desirable, why are they on your advisory board?

Establish a Positive Environment

Foster an open and inclusive environment where stakeholders feel valued and heard. Encourage constructive feedback and respect dissenting opinions. Hopefully, each member of the stakeholder group was selected with care because of the value they bring to the conversation. Assuming that’s the case, each person should walk away from meetings feeling as though their presence and participation mattered. It’s the job of the institutional leader to ensure that happens.

Create a Documentation Framework

Keep detailed records of stakeholder interactions, including meeting agendas, minutes, recommendations, and action items. These records serve as tangible evidence for accreditation purposes. We've all heard the saying, "If there's no photo, it didn't happen!" The same thinking applies to stakeholder meetings: if there's no detailed record, it's really the same as if the meeting never took place. All documents should contain enough detail that someone outside the institution (such as an accreditor) could review them and understand who the members are, what the group's purpose is, how often they meet, what they do, and how the institution's personnel act on their recommendations. Pro tip: Create a standard template for meeting agendas and minutes, and store all documents in a secure, university-approved cloud platform in an organized manner. Never store these items on a single user's laptop.

Using Stakeholder Involvement Effectively

Simply hosting an annual stakeholder meeting to check off a compliance box isn’t good enough. Higher education personnel must weave their input into all facets of their institutional or programmatic structure.  The importance of this is emphasized by the 2023 standards adopted by the Middle States Commission on Higher Education, where stakeholder involvement is featured in multiple standards. To maximize the benefits of stakeholder involvement, I recommend following these guidelines:

Act on Feedback

Don’t just collect feedback; act on it. Use stakeholder recommendations to drive meaningful change within your institution, demonstrating a commitment to improvement. For example, educator preparation accreditors such as the Association for Advancing Quality in Educator Preparation (AAQEP) and the Council for the Accreditation of Educator Preparation (CAEP) both have expectations for utilizing input from teacher candidates, alumni, employers, P-12 partners, and the like.

Evaluate Impact

Regularly assess the impact of changes made based on stakeholder feedback to ensure ongoing positive progress. This is an essential component to your quality assurance system and to a continuous program improvement model. Advancing academic quality and continuous improvement are at the core of accreditation, according to the Council for Higher Education Accreditation (CHEA).

Engage Diverse Voices

Ensure your stakeholder group represents a diverse range of perspectives, leading to more innovative and well-rounded solutions. The American Association of Colleges of Nursing (AACN) emphasizes the need for multiple voices to be heard in its more recent set of Core Competences for Professional Nursing Education.

Communicate Outcomes

Keep stakeholders informed about the outcomes of their input. Sharing how their feedback has shaped decisions and improvements underscores the value of their involvement. This goes back to helping all members feel valued, heard, and respected. It also renews their commitment to your organization and their role in advancing institutional goals.

Maintain an Active Feedback Loop

Continuously refine your stakeholder involvement processes based on feedback to make the collaboration more effective and efficient. In other words, the model should be organic and evolve over time as needs change. The mission, vision, and objectives of stakeholder groups should be revisited periodically in order to gain maximum benefit.

Conclusion

Incorporating stakeholder involvement into higher education quality assurance is not just a best practice; it’s a necessity. By actively engaging stakeholders, colleges and universities can ensure their programs remain effective, relevant, and aligned with community needs. Moreover, documenting these interactions provides valuable evidence for accreditation, further enhancing institutional credibility.

###


Top Photo Credit: Campaign Creators on Unsplash 

 

Creating a Quality Assurance System

Quality Assurance

A well-conceived, functioning quality assurance system (QAS) helps colleges and universities continuously improve their programs through data-informed decision-making. If the institution is a ship, the QAS is the steering wheel: it directs the ship's path as it moves through the water on its journey. While a software program is often used to store, track, and analyze data, software alone is not a quality assurance system. But what exactly does a QAS consist of? And specifically, how can a quality assurance system be effectively implemented in order to facilitate continuous program improvement?

What is a Quality Assurance System?

A quality assurance system (QAS) in higher education typically involves a set of processes, policies, and procedures that are put in place to ensure that programs and services meet or exceed established quality standards. It includes a range of activities designed to monitor, evaluate, and improve the quality of academic programs, student services, and administrative functions.

To effectively implement a QAS, it’s important to start by identifying the key components that you will need. These components typically include the following:

Components of a Solid Quality Assurance System

Purpose: It may sound simplistic, but when developing a quality assurance system, higher education institutions need to take the time to articulate its purpose. Consider starting with a clear vision and mission statement that define the purpose and direction of the institution and its programs. The more well-defined an institution's (or a program's) vision and mission are, the easier it is to create a solid QAS.

Quality standards: Clearly defined quality standards should be established for all aspects of the program, including teaching and learning, assessment, student services, and administrative processes. These standards should be based on industry expectations as well as best practices, and they should be measurable so that you can align specific key assessments with them in order to gauge the effectiveness of your programs. Using relevant standards serves as a foundation for building learning outcomes that specify what students should know and be able to do upon completion of their program. From there, a natural progression is to create a curriculum map that aligns the courses and activities with the learning outcomes and shows how they are assessed.

Assessment and evaluation: A comprehensive assessment and evaluation process should be established to measure program outcomes against your standards. This should include both internal and external assessments and evaluations. External assessments are also known as proprietary assessments, which are created by an assessment development company. These are often required for licensure-based programs. The nice thing about proprietary assessments is that they’re already standardized and have been closely examined for quality indicators such as content validity and reliability. If you opt to use internally created assessments, you must do this legwork yourself.

Data collection and analysis: A robust data collection and analysis system should be put in place to capture relevant information related to program quality. This system should be designed to generate regular reports that can be used for monitoring, evaluation, and decision-making. A well-defined data analysis plan describes how the data will be interpreted, compared, and reported. Most institutions handle data collection and analysis on an annual basis, but it can also be done at the end of each semester. I recommend creating a master cadence or a master periodicity chart to track all key assessments, how they are used, when they’re administered, when and by whom the data are collected, when and by whom the data are reviewed and analyzed, and other relevant information. Keep this chart up to date and handy in preparation for regulatory reviews such as state program approval renewals and accreditation site visits.

Communication and collaboration: Effective communication and collaboration among both internal and external stakeholders are essential to the success of a QAS. This includes regular reporting and feedback loops to ensure that all stakeholders are informed and engaged in the process. The feedback loop also provides a formal mechanism for stakeholders to make recommendations for improvement. Examples of internal stakeholders are faculty, administrators, and interdepartmental staff. Examples of external stakeholders include business and industry representatives, school district teachers and administrators, and faith-based staff (if applicable).

Dynamic, not static:  A QAS is not a one-time project or a static document. It is a dynamic system that evolves with the changing needs and demands of the institution and its programs. For this reason, I recommend that institutions revisit their QAS annually, with a comprehensive review taking place at least once every five years.

Continuous improvement: In order for a quality assurance system to truly be effective, a culture of continuous improvement should be fostered within the institution, with a focus on using data and feedback to identify areas for improvement and make necessary changes. If the QAS is presented as supporting everyone's efforts to provide exceptional student learning experiences, most faculty and staff receive it well and embrace the model. However, if the QAS is communicated as existing simply for the sake of accreditation, then in nearly all instances personnel will not receive it well, viewing it as "yet another thing they have to do" in order to "check the boxes" and "get through" the next accreditation site visit. As with most initiatives, how we communicate something to others makes a huge difference in its success.
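The curriculum map mentioned under Quality standards can be as simple as a structured alignment of outcomes to courses and key assessments. A minimal sketch, with hypothetical course names and learning outcomes:

```python
# A minimal curriculum map (hypothetical program): each learning
# outcome lists the courses where it is addressed and the key
# assessment used to measure it.
curriculum_map = {
    "LO1: Design data-informed lessons": {
        "courses": ["EDU 201", "EDU 310"],
        "assessment": "Lesson-plan portfolio rubric",
    },
    "LO2: Assess student learning": {
        "courses": ["EDU 315"],
        "assessment": "Clinical observation rubric",
    },
}

# Quick coverage check: every outcome should map to at least one course.
uncovered = [lo for lo, m in curriculum_map.items() if not m["courses"]]
print("Uncovered outcomes:", uncovered or "none")
```

Even a lightweight structure like this makes gaps visible: any outcome with no aligned course or assessment surfaces immediately.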

Ensuring High Quality, Continuous Improvement

Successfully implementing a QAS requires a commitment from leadership and a willingness to invest time, resources, and effort into the process. It also necessitates an action plan that outlines the steps and resources needed to implement the recommendations and monitor their impact.

A well-conceived, functioning quality assurance system can help colleges and universities ensure that their programs are of high quality and continuously improving over time. It facilitates the accountability and transparency of the institution and its programs and demonstrates their effectiveness and impact.

By providing a framework for data-informed decision-making, a QAS can help institutions make evidence-based decisions that lead to better outcomes for students and the broader community—which is our collective mission.

###


Top Photo Credit:  rawpixel.com

Using Data to Tell Your Story

data

With a few exceptions, staff from all colleges and universities use data to complete regulatory compliance and accreditation work on a regular basis. Much of the time these tasks are routine and, some might say, mundane. Once programs are approved, staff typically only need to submit an annual report to a state department of education or accrediting body unless the institution wants to make major changes, such as adding new programs or a satellite campus, changing to a different educational model, and so on.

And then, typically every 7-10 years, a program or institution must reaffirm its program approval or accreditation. That process is much more complex than the work completed on an annual basis.

Regardless of whether an institution is simply completing its annual work or reaffirming its accreditation, all strategic decisions must be informed or guided by data. Many institutions seem to struggle in this area, but there are some helpful practices based on my experiences over the years:

Tips for Using Data to Tell Your Story

  • Know exactly what question(s) you expect your assessment data or other pieces of evidence to answer. If you don't know the question(s), how can you be sure you're providing the information accreditors are looking for?
  • Be selective when it comes to which assessments you will use. Choose a set of key assessments that will inform your decision making over time, and then make strategic decisions based on data trend lines. In other words, avoid the “kitchen sink” approach when it comes to assessments and pieces of evidence in general. Less is more, as long as you choose your sources carefully.
  • Make sure the assessments you use for accreditation purposes are of high quality. If they are proprietary instruments, that’s a plus because the legwork of determining elements such as validity and reliability has already been done for you. If you have created one or more instruments in-house, you must ensure their quality in order to yield accurate, consistent results over time. I talked about validity and reliability in previous articles. If you don’t make sure you are using high-quality assessments, you can’t draw conclusions about their data with any confidence. As a result, you can’t really use those instruments as part of your continuous program improvement process.
  • Take the time to analyze your data and try to “wring out” all those little nuggets of information they can provide. At a minimum, be sure to provide basic statistical information (i.e., N, mean, median, mode, standard deviation, range). What story are those data trying to tell you within the context of one or more regulatory standards?
  • Present the data in different ways. For example, disaggregate per program or per satellite campus as well as aggregate it as a whole program or whole institution.
  • Include charts and graphs that will help explain the data visually. For example, portraying data trends through line graphs or bar graphs can be helpful for comparing a program’s licensure exam performance against counterparts from across the state, or satellite campuses with the main campus.
  • Write a narrative that “tells a story” based on key assessment data. Use these data as supporting pieces of evidence in a self-study report. Narratives should fully answer what’s being asked in a standard, but they should be written clearly and concisely. In other words, provide enough information, but don’t provide more than what’s being asked for.
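The basic statistics mentioned in the tips above can be generated with Python's standard library. This sketch uses hypothetical key-assessment scores:

```python
import statistics

# Hypothetical key-assessment scores for one program cohort
scores = [78, 85, 92, 85, 88, 74, 91, 85, 80, 95]

summary = {
    "N": len(scores),
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),
    "mode": statistics.mode(scores),
    "std_dev": statistics.stdev(scores),   # sample standard deviation
    "range": max(scores) - min(scores),
}

for label, value in summary.items():
    print(f"{label}: {value}")
```

Running the same summary per program or per satellite campus, and again for the aggregate, supports the disaggregation tip above.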

Let’s face it: Compliance and accreditation work can be tricky and quite complex. But using data from high-quality assessments can be incredibly helpful in “telling your story” to state agencies and accrediting bodies.

###


Top Photo Credit: Markus Winkler on Unsplash 

Inter-rater Reliability Ensures Consistency

interrater reliability

In a previous article, we focused on determining content validity using the Lawshe method when gauging the quality of an assessment that’s been developed “in-house.” As a reminder, content validity pertains to how well each item measures what it’s intended to measure and the Lawshe method determines the extent to which each item is necessary and appropriate for the intended group of test takers. In this piece, we’ll zero in on inter-rater reliability.

Internally Created Assessments Often Lack Quality Control

Many colleges and universities use a combination of assessments to measure their success. This is particularly true when it comes to accreditation and the process of continuous program improvement. Some of these assessments are proprietary, meaning that they were created externally, typically by a state department of education or an assessment development company. Other assessments are internally created, meaning that they were created by faculty and staff inside the institution. Proprietary assessments have been tested for quality control relative to quality indicators such as validity and reliability. However, it's uncommon for institutional staff to confirm these elements in the assessments that are created in-house. In many cases, a department head determines they need an additional data source, so they tap the shoulders of faculty members to quickly create something they think will suffice. After a quick review, the instrument is approved and goes "live" without piloting or additional quality control checks.

Skipping these important quality control methods can wreak havoc later on, when an institution attempts to pull data and use it for accreditation or other regulatory purposes. Just as a car will only run well when its tank is filled with the right kind of fuel, data are only as good as the assessment itself. Without reliable data that will yield consistent results over multiple administrations, it's nearly impossible to draw conclusions and make programmatic decisions with confidence.

Inter-rater Reliability

One quality indicator that's often overlooked is inter-rater reliability. In a nutshell, this is a fancy way of saying that an assessment will yield consistent results over multiple administrations by multiple evaluators. We most often see this used in conjunction with a performance-based assessment such as a rubric, where faculty or clinical supervisors go into the field to observe and evaluate the performance of a teacher candidate, a nursing student, a counseling student, and so on. A rubric could also be used to evaluate a student's professional dispositions at key intervals in a program, course projects, and the like.

In most instances, a program is large enough to have more than one clinical supervisor or faculty member in a given course who observe and evaluate student performance. When that happens, it’s extremely important that each evaluator rates student performance through a common lens. If for example one evaluator rates student performance quite high or quite low in key areas, it can skew data dramatically. Not only is this grading inconsistency unfair to students but it’s also highly problematic for institutions that are trying to make data-informed decisions as part of their continuous program improvement model. Thus, we must determine inter-rater reliability.

rubric

Using Percent Paired Agreement to Determine Inter-rater Reliability

One common way to determine inter-rater reliability is through the percent paired agreement method. It’s actually the simplest way to say with confidence that supervisors or faculty members who evaluate student performance based on the same instrument will rate them similarly and consistently over time. Here are the basic steps involved in determining inter-rater reliability using the percent paired agreement method:

Define the behavior or performance to be assessed: The first step is to define precisely what behavior or performance is to be assessed. For example, if the assessment is of a student's writing ability, assessors must agree on what aspects of writing to evaluate, such as grammar, structure, and coherence, as well as any specific emphasis or weight that should be given to particular criteria categories. This is often already decided when the rubric is being created.

Select the raters: Next, select the clinical supervisors or faculty members who will assess the behavior or performance. It is important to choose evaluators who are trained in the assessment process and who have sufficient knowledge and experience to assess the behavior or performance accurately. Having two raters for each item is ideal—hence the name paired agreement.

Assign samples to each rater for review: Assign a sample of rubrics to each evaluator for independent evaluation. The sample size should be large enough to yield meaningful results. For example, if there are 100 students in the group, it may be helpful to pull work samples from 10% of the students in a given class for this exercise. The samples should either be random, or representative of all levels of performance (high, medium, low).

Compare results: Compare the results of each evaluator’s ratings of the same performance indicators using a simple coding system. For each item where raters agree, code it with a 1. For each item where raters disagree, code it with a 0. This is called an exact paired agreement, which I recommend over an adjacent paired agreement. In my opinion, the more precise we can be the better.

Calculate the inter-rater reliability score: Divide the number of agreements between the two raters by the total number of items, then multiply by 100 to express it as a percentage. For example, if two raters independently score 10 items and agree on 8 of them, their inter-rater reliability is 80%, meaning the two raters were consistent in their scoring 80% of the time. A high score indicates a high level of agreement between the raters, while a low score indicates a low level of agreement.

Interpret the results: Finally, interpret the results to determine whether the assessment is reliable within the context of paired agreement. Of course, 100% is optimal, but the goal should be to achieve a paired agreement of 80% or higher for each item. If the inter-rater reliability score is high, it indicates that the data harvested from that assessment are likely to be reliable and consistent over multiple administrations. If the score is low, it suggests that those items on the assessment need to be revised, or that additional evaluator training is necessary to ensure greater consistency.

Like determining content validity using Lawshe, the percent paired agreement method in determining inter-rater reliability is straightforward and practical. By following these steps, higher education faculty and staff can use the data from internally created assessments with confidence as part of their continuous program improvement efforts.
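The percent paired agreement calculation described in the steps above can be sketched in a few lines, using hypothetical rubric ratings from two raters:

```python
# Percent paired agreement between two raters, item by item.
# Assumes each rater's scores are aligned lists of rubric ratings
# for the same set of work samples (hypothetical data).

def percent_agreement(rater_a, rater_b):
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same items")
    # Exact paired agreement: code 1 when ratings match, 0 when they don't
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return agreements / len(rater_a) * 100

rater_a = [3, 2, 4, 3, 1, 4, 2, 3, 4, 2]
rater_b = [3, 2, 4, 2, 1, 4, 2, 3, 3, 2]

print(f"Paired agreement: {percent_agreement(rater_a, rater_b):.0f}%")
```

In this example the raters agree on 8 of 10 items, which meets the 80% threshold suggested above; items where they diverge would be candidates for rubric revision or additional rater training.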

###


 

Content Validity: One Indicator of Assessment Quality

content validity

Updated on April 13, 2023 to include additional CVR calculation options from Dr. Gideon Weinstein. Used with permission. 

In this piece, we will focus on one important indicator of assessment quality: Content Validity.

Proprietary vs. Internal Assessments

As part of their programmatic or institutional effectiveness plan, many colleges and universities use a combination of assessments to measure their success. Some of these assessments are proprietary, meaning that they were created externally—typically by a state department of education or an assessment development company. Other assessments are considered to be internal, meaning that they were created by faculty and staff inside the institution. Proprietary assessments have been tested for quality control relative to validity and reliability. In other words:

At face value, does the assessment measure what it’s intended to measure? (Validity)
Will the results of the assessment be consistent over multiple administrations? (Reliability)

Unfortunately, however, most colleges and universities fail to confirm these elements in the assessments that they create. This often yields less reliable results, and thus the data are far less usable than they could be. It's much better to take the time to develop assessments carefully and thoughtfully to ensure their quality. This includes checking them for content validity. One common way to determine content validity is through the Lawshe method.

Using the Lawshe Method to Determine Content Validity

The Lawshe method is a widely used approach to determine content validity. To use this method, you need a panel of experts who are knowledgeable about the content you are assessing. Here are the basic steps involved in determining content validity using the Lawshe method:

  • Determine the panel of experts: Identify a group of experts who are knowledgeable about the content you are assessing. The experts should have relevant expertise and experience to provide informed judgments about the items or questions in your assessment. Depending on the focus of the assessment, this could be faculty who teach specific content, or external subject matter experts (SMEs) such as P-12 school partners, healthcare providers, business specialists, IT specialists, and so on.
  • Define the content domain: Clearly define the content domain of your assessment. This could be a set of skills, knowledge, or abilities that you want to measure. In other words, you would identify specific observable or measurable competencies, behaviors, attitudes, and so on that will eventually become questions on the assessment. If these are not clearly defined, the entire assessment will be negatively impacted.
  • Generate a list of items: Create a list of items or questions that you want to include in your assessment. This list should be comprehensive and cover all aspects of the content domain you are assessing. It’s important to make sure you cover all the competencies, behaviors, and attitudes you listed in step 2 above.
  • Have experts rate the items: Provide the list of items to the panel of experts and ask them to rate each item for its relevance to the content domain you defined in step 2. The experts should use a rating scale (e.g., 1-5) to indicate the relevance of each item. For example, if the assessment is to be used with teacher candidates, your experts would likely be P-12 teachers, principals, educator preparation faculty members, and the like.
  • Calculate the Content Validity Ratio (CVR): The CVR is a statistical measure that determines the extent to which the items in your assessment are relevant to the content domain. To calculate the CVR, use the formula: CVR = (ne – N/2) / (N/2), where ne is the number of experts who rated the item as essential, and N is the total number of experts. The CVR ranges from -1 to 1, with higher values indicating greater content validity. Note to those who may have a math allergy: At first glance, this may seem complicated, but in reality it is quite easy to calculate.
  • Determine the acceptable CVR: Determine the acceptable CVR based on the number of experts in your panel. There is no universally accepted CVR value, but the closer the CVR is to 1, the higher the overall content validity of a test. A good rule of thumb is to aim for a CVR of 0.80.
  • Eliminate or revise low CVR items: Items with a CVR below the acceptable threshold should be eliminated or revised to improve their relevance to the content domain. Items with a CVR above the acceptable threshold are considered to have content validity.
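The CVR calculation in step 5 is simple enough to script. Here is a minimal sketch in Python (the function name and sample numbers are illustrative, not taken from any particular assessment tool):

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR: (ne - N/2) / (N/2), ranging from -1 to +1.

    n_essential: number of experts who rated the item "essential" (ne)
    n_experts:   total number of experts on the panel (N)
    """
    half = n_experts / 2
    return (n_essential - half) / half

# Hypothetical example: 10 experts rate one item; 9 call it essential.
cvr = content_validity_ratio(9, 10)
print(round(cvr, 2))  # 0.8 -- meets the 0.80 rule of thumb
```

An item rated essential by all ten experts would score +1; an item rated essential by exactly half would score 0, illustrating the scale described above.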

As an alternative to the steps outlined above, the CVR computation with a 0.80 rule of thumb for quality can be replaced with another method, according to Dr. Gideon Weinstein, mathematics expert and experienced educator. His suggestion: just compute the percentage of experts who consider the item to be essential (ne/N), and the rule of thumb is 90%. Weinstein went on to explain that “50% is the same as CVR = 0, with 100% and 0% scoring +1 and -1. Unless there is a compelling reason that makes a -1 to 1 scale a necessity, then it is easier to say, ‘seek 90% and anything below 50% is bad.’”
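Weinstein’s simplification maps directly onto the CVR scale: since CVR = 2·(ne/N) − 1, a 90% essential rating corresponds to a CVR of 0.80, and 50% corresponds to 0. A quick illustration in Python (function names are illustrative):

```python
def percent_essential(n_essential, n_experts):
    """Weinstein's simpler metric: share of experts rating an item essential."""
    return n_essential / n_experts

def cvr_from_percent(p):
    """Convert the 0-to-1 percentage scale to Lawshe's -1 to +1 CVR scale."""
    return 2 * p - 1

# Hypothetical example: 9 of 10 experts rate the item essential.
p = percent_essential(9, 10)            # 0.9 -- meets the 90% rule of thumb
print(round(cvr_from_percent(p), 2))    # 0.8 -- the same item on the CVR scale
```

Either scale leads to the same keep-or-revise decisions; the percentage form is simply easier to explain to a panel.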

Use Results with Confidence

By using the Lawshe method for content validity, college faculty and staff can ensure that the items in their internally created assessments measure what they are intended to measure. When coupled with other quality indicators such as interrater reliability, assessment data can be analyzed and interpreted with much greater confidence and thus can contribute to continuous program improvement in a much deeper way.

###

About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: Roberta@globaleducationalconsulting.com

 

Top Photo Credit: Unseen Studio on Unsplash 

 

The Path to Academic Quality

academic quality

All colleges and universities want their students to succeed. That requires offering current, relevant, and robust programs. But what is the path to academic quality? How can faculty and administrators know for sure that what they are offering is meeting the needs of students? Here are five recommendations:

Embrace Technology: With the advancement of technology, universities can adopt innovative approaches for data collection, analysis, and program evaluation. For instance, leveraging artificial intelligence to analyze large data sets can help identify patterns and trends that may not be apparent with traditional methods. This can lead to more insightful recommendations for program improvement.

Create a Culture of Continuous Improvement: The culture of the university should be centered around continuous improvement. All stakeholders should be encouraged to provide feedback on the programs regularly. This feedback should be analyzed and acted upon to ensure that the programs are up-to-date, relevant, and of high quality.

Involve All Stakeholders: All stakeholders, including faculty, students, alumni, industry professionals, and accrediting agencies, should be involved in the quality assurance process. Each of these groups can offer unique perspectives and insights that can help improve the academic programs.

Develop Key Performance Indicators (KPIs): KPIs are essential metrics used to measure the success of an academic program. These metrics can include student outcomes, faculty satisfaction, and retention rates. Universities can leverage KPIs to monitor and improve the quality of their programs continually.

Invest in Faculty Development: Faculty members play a crucial role in program quality. Therefore, universities should invest in their professional development to ensure they are equipped with the latest knowledge and skills to deliver quality instruction. By providing faculty with ongoing professional development opportunities, universities can enhance program quality and ensure that students receive a high-quality education.

By having a comprehensive quality assurance system, colleges and universities can be assured that they are on the right path to academic quality.

###


Top Photo Credit: Nathan Dumlao on Unsplash 

Exceptional Academic Programs

Academic Programs


If you look at their mission statements, nearly all colleges and universities strive to serve students and support their success through innovation and high-quality course offerings. But how do they know what they offer is truly exceptional? Here are five innovative tips for how colleges and universities can ensure that they have outstanding academic programs through their quality assurance system:

Focus on student learning outcomes: The ultimate goal of any academic program is to help students learn and grow. To ensure that your programs are meeting this goal, it’s important to focus on student learning outcomes. This means regularly collecting data on student learning, such as grades, test scores, and surveys of student satisfaction. You can then use this data to identify areas where your programs are succeeding and areas where they could be improved.

Engage in continuous improvement: Quality assurance is not a one-time event. It’s an ongoing process of collecting data, analyzing it, and making changes to improve student learning. To be successful, you need to create a culture of continuous improvement within your institution. This means encouraging faculty and staff to be constantly looking for ways to improve their teaching and learning practices.

Use data to drive decision-making: The data you collect through your quality assurance system can be a valuable tool for making decisions about your academic programs. For example, if you find that students are struggling in a particular course, you can use this information to make changes to the course content or delivery. Or, if you find that a particular program is not meeting the needs of its students, you can use this information to make changes to the program or to discontinue it altogether.

Involve all stakeholders: Quality assurance is not just about the faculty and staff who teach the courses. It’s also about the students who take the courses, the alumni who graduate from the programs, and the employers who hire the graduates. To be successful, you need to involve all of these stakeholders in your quality assurance process. This means getting their input on the goals of your programs, the data you collect, and the changes you make.

Be transparent: Quality assurance should be an open and transparent process. This means sharing your data and findings with all stakeholders, including students, faculty, staff, alumni, and employers. It also means being willing to discuss the challenges you face and the changes you make to improve your programs.

By following these tips, higher education personnel can create a quality assurance system that will help ensure that the academic programs they offer are truly exceptional.

###


Top Photo Credit: Element5 Digital on Unsplash 

Quality Assurance and Continuous Program Improvement

quality assurance

An effective quality assurance system is essential to a university’s continuous program improvement. This can involve regular data collection and analysis on multiple key metrics, seeking input and recommendations from both internal and external stakeholders, utilizing high-quality assessments, and more. Here are some innovative tips for how colleges and universities can ensure that they have exceptional academic programs through their quality assurance system:

– Use digital platforms and tools to streamline the data collection and analysis process, and to provide timely feedback and reports to faculty and students. For example, online surveys, dashboards, learning analytics, and e-portfolios can help monitor student learning outcomes, satisfaction levels, and engagement rates.

– Establish a culture of quality assurance that values collaboration, innovation, and diversity. Encourage faculty and students to participate in quality assurance activities, such as peer review, self-evaluation, curriculum design, and accreditation. Provide incentives and recognition for their contributions and achievements.

– Adopt a learner-centered approach that focuses on the needs, preferences, and goals of the students. Design curricula that are relevant, flexible, and aligned with the learning outcomes and competencies expected by the employers and the society. Provide multiple pathways and options for students to customize their learning experience and demonstrate their mastery.

– Incorporate experiential learning opportunities that allow students to apply their knowledge and skills in real-world contexts. For example, internships, service-learning projects, capstone courses, and simulations can help students develop critical thinking, problem-solving, communication, and teamwork skills.

– Seek external validation and benchmarking from reputable sources, such as accreditation agencies, professional associations, industry partners, alumni networks, and international rankings. Compare your academic programs with the best practices and standards in your field and region. Identify your strengths and areas for improvement and implement action plans accordingly.

By following these tips, college and university teams can create a quality assurance system that will help ensure that their academic programs are exceptional. Most importantly, they can be confident that they are meeting the needs of their students–which should be their #1 priority.

###


Top Photo Credit: Scott Graham on Unsplash 

Survey: Nonprofit vs. For-Profit Colleges

nonprofit vs. for-profit college

A fascinating survey has just been released by Public Agenda, a nonprofit research and public engagement firm. Investigators collected data from a representative sample of graduates from both nonprofit and for-profit colleges. They received a total of 413 responses, including 217 nonprofit online alumni and 169 for-profit online alumni. Questions focused on respondents’ overall perceptions of their chosen institution and were not program-specific.

Major Takeaways

There were three big takeaways for me in the survey results:

  • Affordability, accreditation, and whether or not their credits would transfer played more of a role in choosing a college for nonprofit alumni than for graduates of for-profit programs, the survey found.
  • The only factor in which for-profits exceeded nonprofits was providing hands-on financial aid application support to students.
  • About half of nonprofit online alumni (52 percent) enrolled in college in order to get ahead in their current job, compared to 25 percent of for-profit online alumni.

Survey Leads to More Questions

This leads me to ponder some additional questions to consider:

  • Were students who chose a for-profit college so enamored by the personalized, hands-on support in securing financial aid that the institution’s accreditation status became less important?
  • Relatedly, did all of the attention they received from the for-profit institution overshadow the affordability factor?
  • Why didn’t those choosing a for-profit institution consider transfer credit policies when making their decision where to attend college? Was this the result of a “hard sell” approach from for-profit enrollment counselors, or some other reason?
  • Survey data indicate twice as many students who chose a nonprofit institution were already employed as those who chose a for-profit institution. Does this suggest that for-profits may be targeting and recruiting prospective students who are already underemployed, less financially secure, and thus potentially more receptive to the personalized financial aid assistance they receive?
  • Would survey results be consistent if disaggregated by academic program?
  • If nonprofit institutions stepped up their game and provided more direct support in the financial aid application process, would this further diminish the appeal that for-profits hold?

More Research Needed

The higher education community needs to make data-informed decisions about how best to serve students.  It would be great to see institutions include formalized research in annual goals as part of their strategic plan. Anecdotal information abounds, but there doesn’t appear to be a repository for quality research that compares the nonprofit vs. for-profit space. It’s time we made that happen.

###


 

Top Graphic Credit: pxhere.com

 

 

 

As Your Higher Education Consultant, I Need to Know Where the Snakes Are.

consultant

Birth of a Snake Analogy

You may be wondering what being a higher education consultant has to do with snakes. Let me explain. Several years ago, my husband and I bought a small farm and decided to do a complete rehab on the old house. One weekend I was by myself and noticed something on the floor as I walked from one room to the other.

Lying very still in the hallway was a small snake, roughly 10-12 inches long. I love animals but for some reason reptiles have never been at the top of my list. I am particularly not interested in having them inside my home.

My 10-second assessment convinced me it wasn’t being aggressive, so I quickly ran for the broom and dustpan. Scooping that little critter up and carrying him a safe distance away from the house shot to the top of my priorities list, and I’m happy to report that the mission was successful.

Feeling proud of myself for taking care of this unexpected intruder in a relatively calm manner I went back in and tended to my tasks. However, after about an hour I walked into the bedroom and noticed something lying on the floor.  I couldn’t believe my eyes. How did it get back in? And why would it want to come back and scare me a second time?

Upon closer inspection I then learned it wasn’t the same snake. This one’s colors were a little richer, and it was a little shorter than the first. The realization that two snakes were able to get into my home (somehow, somewhere) did not bring me joy.

Having been successful in showing my first uninvited guest out, I did the same with the second. But by the time I got back in the house I found a third in the kitchen. By the time it was all said and done, I had come across at least seven snakes in my house that weekend. Thankfully, they weren’t venomous, nor were they aggressive. But they jarred me to my bones every time I came across one.

Making a very long story short, my husband and I discovered a hole just large enough for them to have squeezed through. Why they chose to make themselves at home, I’ll never know. But we packed every crack and crevice with enough steel wool and caulk to probably withstand gale force winds.

Snakes and Consultative Support

Now, you may be wondering:

What on earth does this have to do with being a higher education consultant? 

It actually fits perfectly. Let me explain:

Before I ever agree to take on a client in need of a higher education consultant, I always have a fairly lengthy conversation with them. We talk about where they currently are in a given project and what they are struggling with. I listen carefully to determine if their needs match up with my skill sets. In other words, I want to determine if I am the best person to help them achieve their goals. If I’m not, I tell them. We part ways and I wish them well.

However, if I do take on a client, I always tell them how important it is to have open, honest communication. We must be able to trust each other. For example, they need my reassurance that everything between us is confidential. It is: I never reveal who I work for unless I receive their express permission to share that information. But just as important, I need my clients to be honest with me and tell me exactly what they’re struggling with so ugly surprises don’t pop up later on.

In other words, I need to know where the snakes are.

Regardless of whether I’m working with a College of Education, an online learning department, or an entire institution, as a higher education consultant I need to know what keeps my clients up at night or what makes their stomachs feel queasy. I want to know where the bodies are buried (figuratively). If we agree to work together, I’ll find them eventually. But it would save us both a lot of time if I knew up front what they wouldn’t want to showcase to accrediting bodies, state regulatory agencies, and the like.

On the Path to Continuous Program Improvement

Once we lay all those problem areas out on the table, we can work together to address them. As their higher education consultant, I can help the institution fill those gaps and shore up areas that it knows deep down should have been taken care of long ago, and I can support them in doing what’s necessary to ensure continuous program improvement. As long as staff follow the plan, they should be able to handle any unpleasant surprises that may arise, without having to resort to steel wool and caulk.

###


 

Top Graphic Credit: clipartspub.com

 

Educational Leaders Through the Lens of Academic Excellence

Educational leaders play a vital role in areas such as student graduation rates, teacher retention, and standardized test scores. What does it mean to lead an institution? Is leading the same as managing? What skills are essential to becoming a successful leader, and can those skills be taught?

The Primary Role of a Leader

In addition to creating a vision for the future, developing a strategic plan, and setting high but attainable expectations, the role of a leader in education is to motivate and inspire others; to model effective and ethical practice; and to facilitate leadership development in other team members.

Leadership vs. Management

Depending on the type of educational institution, it is common to see managers, leaders, and a combination of both. For example, a small elementary school may have only one building principal, who may be responsible not only for leading the faculty and staff, but also for managing all major projects and initiatives. A large university, on the other hand, will typically employ multiple staff to serve in management and leadership roles, often focusing on a particular specialty area. However, a few general statements can be made regarding the two:

Although they are intertwined on many levels, there is a difference between management and leadership in an educational environment. Successful management of projects or departments is one piece of advancing the institution’s mission. Typically, a manager is assigned to oversee a specific department, or a specific project within that department to meet a defined goal or need. It is his or her responsibility to ensure success with direct accountability to a leader–often a superintendent, dean, provost, or president.

A leader must be an effective manager, but from a macro level. A leader is the point person to drive the institution’s vision, mission, and strategic goals. He or she is often the “face” of the school, meeting with the public, potential donors, the press, or politicians. A leader must be able to see the big picture while at the same time have a working knowledge of the details. However, delving too much into the weeds of a project can cause unexpected problems. When leaders micromanage departments or projects, it signals a lack of trust to managers; it breeds confusion and suspicion and ultimately reduces efficiency and success. So, it is incumbent upon a leader to hire the right people, and then trust them to get the job done.

Essential Skills All Effective Leaders Must Have

There are some skills that all educational leaders must have in order to be successful:

  • An effective leader must be truly committed to academic excellence. By setting high expectations for ethical practice and academic outcomes, a leader can inspire others to achieve great things.
  • An effective leader must be an exceptional communicator, both verbally and in written form. It’s not enough to have great ideas—one must be able to communicate them to others to have those ideas come to fruition.
  • An effective leader must be an exceptional listener. When one person is doing all the talking, he or she rarely learns much from others in the room. By actively and purposefully listening, a leader shows respect to others; gains a better understanding of a given issue; receives suggestions for tackling a problem; and builds a stronger sense of trust.
  • An effective leader must be competent. We cannot all be experts in everything, but if we are to lead others, we must have a solid command of the subject matter or the field. Educational leaders must stay current with relevant literature, research, patterns, and trends.
  • An effective leader must have confidence. It is difficult to lead others when we don’t communicate that we truly believe the path being taken is the right one.
  • An effective leader must ensure proper recognition of managers and other team members for their contributions, particularly in the context of a significant or particularly challenging project. It’s necessary to motivate and inspire, but we must also show appreciation and recognition.
  • An effective leader must be fair. Showing favoritism, even the suggestion of it, can quickly diminish team morale and motivation. A leader must make it clear that all members of the team will be treated equally.
  • An effective leader must be prepared to make tough decisions. There are times when institutions must face difficult budget shortfalls and steps must be taken to reduce expenditures. There are also times when one or more staff members are not performing up to expectations. An effective leader must be willing and able to make the decisions necessary to ensure the overall quality and well-being of a program, a department, or an entire institution. Decisions may not always be popular, but they are necessary, and if a leader fails to make them he or she simply is not doing the job that individual was hired to do.

Can Leadership Skills be Developed?

The short answer is yes. While there are certainly personality traits that we are born with that lend themselves well to taking on a role of leadership, specific skill sets must be developed, honed, and practiced. In fact, the National Education Association (NEA) has identified a set of core competencies that are designed to equip educators with the knowledge and skills they need to become effective leaders. In addition, the Educational Leadership Constituent Council (ELCC) holds schools of education accountable for how well they prepare future school leaders under the umbrella of seven professional standards. It is extremely important for educational leaders to develop a strong base of content knowledge and to gain practice in applying that knowledge in a variety of educational settings. Moreover, the importance of mentoring support and ongoing professional development cannot be over-emphasized.

The Bottom Line

The success of an educational institution will be directly impacted by the quality of its leadership. As a community of educators we must be committed to preparing, selecting, and supporting our educational leaders through the lens of academic excellence.

 

Dr. Roberta Ross-Fisher is a national leader in quality assurance, educator preparation, and empowerment-based learning. She supports educational institutions and non-profit agencies in areas such as accreditation, competency-based education, and teacher/school leader prep programs design.  Roberta also writes about academic excellence and can be contacted for consultations, webinars, and on-site workshops through her site (www.robertarossfisher.com). 

###

 

A Golden Opportunity: Let’s Rethink Performance Evaluations (Segment #3)

Is your P-12 school committed to helping instructional staff continually improve their teaching skills? As a school leader, do you recognize that exceptional instruction leads to exceptional learning, but you’re not quite sure where to begin? If so, please check out my 3-part video series entitled, A Golden Opportunity: Let’s Rethink Performance Evaluations.

  • Segment #1 provides an introduction to performance evaluations in the context of student and school success.
  • Segment #2 focuses on the need for ongoing evaluation and targeted support, as well as criteria you may consider when evaluating performance.
  • The final segment helps you to explore how you could design your own performance evaluation model that maintains your school’s individuality, and yet also ensures quality.

I’ve also created a supplemental resource page you can use as a handout to the series.

You can access Segment #3 below:

[wpvideo ZmC1Okaz]


###

A Golden Opportunity: Let’s Rethink Performance Evaluations (Segment 2)

Is your P-12 school committed to helping instructional staff continually improve their teaching skills? As a school leader, do you recognize that exceptional instruction leads to exceptional learning, but you’re not quite sure where to begin?

If so, please check out my 3-part video series entitled, A Golden Opportunity: Let’s Rethink Performance Evaluations.

  • Segment #1 provides an introduction to performance evaluations in the context of student and school success.
  • Segment #2 focuses on the need for ongoing evaluation and targeted support, as well as criteria you may consider when evaluating performance.
  • The final segment helps you to explore how you could design your own performance evaluation model that maintains your school’s individuality, and yet also ensures quality.

I’ve also created a supplemental resource page you can use as a handout to the series.

You can access Segment #2 below:

[wpvideo Uwj9T3eM]


###