CBE for Educator Prep Programs


What is Competency-Based Education (CBE)?

Competency-based education (CBE) is quickly gaining acceptance as an effective way to facilitate powerful, authentic learning at all levels. Sometimes referred to as personalized learning, mastery learning, or proficiency-based learning, the model requires students to demonstrate what they know and are able to do, rather than just put in “seat time” and complete a prescribed set of courses. However, designing a solid CBE program is not as simple as it sounds; it requires a great deal of thought, understanding, and know-how.

Some institutions implement the CBE model very effectively. In higher education, for instance, Western Governors University and Capella University use it successfully.

This model supports students’ learning in a rich way, and as a result, graduates are able to reach their goals and achieve their dreams. The CBE model enables them to demonstrate what they know at their own pace because it helps educators personalize learning experiences.

The CBE model will be a major player in the educational arena over the next two decades at the P-12 level as well as at the collegiate level.

Essential Tenets for Educator Preparation Programs to Consider

Educator preparation programs thinking about adopting the competency-based education (CBE) model should weigh several essential tenets. I shared some of those tenets in a commentary published in the Journal of Competency-Based Education entitled “Implications for Educator Preparation Programs Considering Competency-Based Education.”

The model helps students demonstrate what they know and are able to do within the context of a set of well-articulated competencies. Moreover, teachers measure student learning through high-quality assessments. It’s a great example of academic excellence.


About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: 




Is Being Accredited Really That Important When Selecting a College?

We all hear and read about the benefits of earning a college degree: We make more money over a lifetime; we get better jobs; we receive company-paid benefits; we tend to be happier and healthier overall. However, choosing the right college or university can be quite daunting, and it’s terribly important, because not all institutions are alike, and quality can vary widely. While there are many factors to consider, such as cost, degree programs, and scheduling, one thing prospective students often overlook is whether the institution is accredited.

There are many types of accreditation; you will likely hear terms such as regional accreditation, national accreditation, functional or programmatic accreditation, and sometimes even state accreditation. Each plays an important role in quality assurance for specific programs or an entire institution, but here’s a strong recommendation:

Don’t ever take a single course from an institution that is not accredited. Never. Ever.

While accreditation is no guarantee of perfection, accredited institutions have provided certain levels of assurance to respected bodies within academia that students will be well served. Non-accredited institutions have had no one looking over their shoulder, digging deep into various academic or financial nooks and crannies; they can accept your money with absolutely no guarantee that the course or degree you complete will be worth anything at all.

Plus, if you complete courses at an unaccredited institution, there is no guarantee those courses will be accepted should you decide to transfer to another university later on. Even worse, if you go the distance and complete an entire degree at an institution that’s not accredited, you may find that many employers and graduate schools will not recognize that degree; in their eyes it will be as if you don’t have a degree at all, but you’ll still have those student loans to pay back just the same.

Here is an entertaining yet informative video that clears up some of the confusion:

ASPA 2016 Explainer

You should be able to choose a college or university that fits your particular needs:

  • faith-based
  • public
  • private
  • traditional brick & mortar
  • online
  • non-profit
  • for-profit

Regardless of which you choose, make sure the institution and its programs are accredited.



Dr. Roberta Ross-Fisher is a national leader in educator preparation, accreditation, online learning, and academic quality assurance. An accomplished presenter, writer, and educator, she currently supports higher education and P-12 schools in areas such as competency-based education, teacher preparation, distance learning, and accreditation through her company, Global Educational Consulting, LLC. She can be reached at:


Professional Dispositions: Essential Traits for Effective Teaching & School Leadership

Thanks for visiting this page.

The content of Professional Dispositions: Essential Traits for Effective Teaching & School Leadership has been incorporated into the following publication:

Key Skills and Dispositions: Essential Traits all Exceptional Teachers Must Have


Please click the link to learn more about this important topic. Thanks for being committed to academic excellence!




Dr. Roberta Ross-Fisher is a national leader in accreditation, quality assurance, teacher preparation, and empowerment-based learning. An accomplished presenter, she currently supports educational institutions and non-profit agencies in areas such as quality assurance, accreditation, competency-based education, and educator preparation.  Roberta also writes about academic excellence and can be contacted for consultations, webinars, and on-site workshops through her site ( 


Competency-Based Education to Support P-12 Student Success

The competency-based education (CBE) model has been used successfully in higher education for the past two decades, and it is starting to gain national traction at the P-12 level. Several states, particularly on the East Coast, have already come to appreciate its benefits. The Marzano Academy at Lomie G. Heard Elementary School, a new magnet charter school focusing on STEM, will open its doors this fall under the CBE model. Illinois currently has 10 school districts that will begin a pilot in academic year 2018-19 under the state’s Competency-Based High School Graduation Requirements Pilot Program.

Within CBE, learners must demonstrate what they know and are able to do through carefully designed and calibrated assessments. Expectations are clear and well-defined, and there is thoughtful, purposeful alignment between curriculum, instruction, and assessment.

It’s All About Learning

This model is truly learner-centered: Seat time becomes less important than learning time. Students are able to drive their own learning and work at their own pace within structured guidelines. They are supported through meaningful feedback and mentoring.

Parents and caregivers feel more informed about their child’s progress under the CBE model. They know what their child is learning, what the learning goals are, how much progress has been made, and the child’s level of proficiency in each skill set. This helps them partner with teachers to provide additional support at home.

Teachers recognize the positive impact the CBE model has on student learning and development. They are able to easily track the progress of each student on a daily basis, and they know exactly when a learner needs additional support.

School leaders are able to support teachers more effectively when they know exactly what their needs are. With the CBE model, they can provide strategic assistance by forming a mentoring network to support struggling students, building school-community partnerships, offering targeted professional development, and the like.

Before making a decision to develop one or more programs based on the CBE model, educators must consider the following major questions:

  • Would CBE align with our school’s mission and vision?
  • What are the benefits of CBE for our students?
  • What are the challenges and caveats of CBE?
  • What are the basic steps needed to convert our current curriculum to the CBE model?
  • How can we train and support our faculty and staff so they can implement the CBE model successfully?
  • Could our school commit to a pilot lasting at least five years so we can fully measure the impact CBE has on our learners?

The Bottom Line

Competency-based education is NOT a shortcut or an easy fix for serious school challenges. However, if built correctly and maintained properly, the CBE model can prove to be a powerful way to increase student learning, achievement, and satisfaction.


Dr. Roberta Ross-Fisher is a national expert in quality assurance, educator preparation, and empowerment-based learning. She supports educational institutions in areas such as accreditation, institutional effectiveness, competency-based education, and virtual teaching & learning.  Roberta can be contacted for consultations, webinars, and on-site workshops through her site ( 

Meeting the Needs of Learners in Today’s Universities

According to a recent piece entitled Survey: American Confidence in Higher Ed is Waning, only about 25% of the sample thinks the current higher education system is fine the way it is, and among millennials, that number drops to 13%. First of all, why do 75% believe the system is NOT meeting their needs? And why does the millennial group feel even more strongly about the current system? In other words, what do today’s learners need that our colleges and universities are not providing?

We need to take a deep dive into the survey data to learn exactly what questions were asked and what the demographics of the respondents were. For example, are we reading the results of a representative sample, or were most respondents within a particular age group? Were the questions focused on seeking a first college degree, or did they include advanced studies? That sort of thing. However, speaking in general terms, I’d say we need to focus on two things:

First, we need to revisit the relevance of the curriculum found in today’s college degree programs. Is it workforce-driven? Will what students are learning really help them develop better job skills? I see very little true collaboration between higher education institutions and specific industries; such collaboration is essential for modernizing the curriculum and ensuring that what graduates know and are able to do upon graduation will prepare them to be workforce-ready.

Second, we need to provide more structured support for those who need it throughout their programs, from matriculation to graduation. Mentoring models work wonders; this is particularly true for first-generation college students, but they really can benefit all learners. The key is to have a formal mechanism in place for continually monitoring and evaluating the progress of each learner, and to provide a safety net all along the way. Regular phone calls, emails, academic outreach, and the like can help learners stay focused, achieve manageable goals, and attain success.




Alternative Educator Preparation: A Viable Option, or a Non-Starter?

There’s an interesting article about alternative teacher preparation programs entitled Analysis Finds Alternatively Credentialed Teachers Performed Equal to Peers in First Two Years. While the results are inconclusive on several fronts, it presents some thoughtful questions to consider, including:

  • Are traditional educator preparation programs the ONLY way to train future teachers successfully? Are they the BEST way?
  • Can alternative (non-traditional) educator preparation programs support student learning in a positive way, while easing the supply-and-demand challenges faced by school districts across the nation?
  • What are the long-term impacts of educator preparation on our country’s workforce? And what are the long-term impacts on what we view as an educated society?
  • Will how teachers are prepared affect our standing in the world relative to student achievement?
  • How would we know? What research questions need to be posed?


An experienced consultant can help with these questions, and more. Reach out to me for program development, collaboration, accreditation, clinical partnerships, and other matters related to preparing educators with excellence.




Transition Points & Gateways: Stop Gaps Universities Should Consider

Each higher education institution’s program of study, regardless of major, contains specific phases of progression that each student must successfully complete before being allowed to graduate. In other words, there is a planned, purposeful order to completing a program or earning a college degree; an individual does not simply apply for admission and have complete autonomy over the courses taken, the sequence of coursework, when/where/if practica or internships are completed, and so on. The institution makes those decisions after carefully designing each program of study. It decides things such as:

  • Admission and enrollment criteria
  • General education requirements
  • Number of semester hours required for graduation
  • Minimum GPA required to pass each course
  • Clinical experiences, internships, practica
  • Exit examinations required for graduation (or state licensure, depending on the program)

Transition points are sometimes referred to as “gateways”: specific points at which a student passes from one stage in his or her program to the next. As long as a student meets the stated expectations, the journey continues and he or she moves ahead toward graduation. If the student fails to meet one or more expectations in a given stage, the institution implements a plan for remediation, additional support, or in some cases, counseling out of the program.

I have created a Transition Points framework that may be useful to some educator preparation programs. Of course, Transition Points must be tailored to fit each unique program, but they could include gateways such as:

  • Transition Point I: Applicant to Pre-Candidate Status 
    • Admission to the program
  • Transition Point II: Pre-Candidate to Candidate Status
    • Completion of Block #1 Coursework & Preparation for Formative Field Experiences
  • Transition Point III: Candidate to Pre-Graduate Status
    • Completion of Block #2 Coursework & Formative Field Experiences 
  • Transition Point IV: Pre-Graduate to Graduate Status
    • Completion of Block #3 Coursework & Culminating Clinical Experiences
  • Transition Point V: Graduate to Program Completer Status
    • Pass Required Licensure/Certification Examination(s)

Do you see the progression? When fully detailed, a complete Transition Points or Gateway table should paint a portrait of a student’s journey from matriculation to program completion; the sequence should represent a logical flow with at least some detail about minimum expectations.

I hope this has been helpful to you. Need more ideas? Want to collaborate on a project? Feel free to reach out to me.




Educator Prep: There’s a Better Way.

Numerous sources point to a teacher shortage across the United States, with some areas having a much greater need than others. With some exceptions, elementary and social studies teachers tend to be in greatest supply but in least demand, while the converse is true for special education, English language learning, mathematics, and science teachers. School districts typically have a much harder time filling teaching positions in urban districts, in Title I schools, and in remote rural areas. In many instances, a lack of experienced, qualified teachers in those areas forces districts to fill classrooms with individuals who may be well-intentioned but lack sufficient training and cultural competence to be successful. Moreover, those districts often fail to provide adequate mentoring and support during the first two years of employment, which leaves new teachers feeling isolated and without the tools to succeed. Consequently, we typically see a high turnover rate in those areas, which has a negative impact on students and the local community at large over time.

Various state departments of education have taken steps to address this problem. California has recently committed $25 million in scholarship money to help alleviate the teacher shortage using a “grow your own” model. The state is distributing this money to 25 school districts and county offices of education to help 5,000 support staff members earn their teaching credentials while continuing to work at their schools.

While the idea has some merit, I see big gaps in the approach. Specifically, the state is granting funds only to individuals who complete their teaching license requirements at one of the California State University campuses; this severely restricts the type of training these individuals will receive, and it primarily boosts enrollment at those campuses. Moreover, EdSource reports that 1,000 eligible employees can get stipends of $4,000 per year over the course of the five-year grant, which could cover all or most of the cost to enroll in those select institutions, depending on how many courses these employees take per semester. Acknowledging it could take up to five years doesn’t make a convincing case that these programs are innovative or cutting edge; in fact, they are likely just serving as feeders into existing programs. So, for continuing business as usual, these institutions are reaping the reward of 1,000 new enrollments and $25 million.

The latest initiative proposed in California is to exempt teachers who have taught at least 5 years in the state from state income tax. While an interesting idea, I don’t see it encouraging sufficient numbers of individuals to enter or remain in the teaching profession. Plus, it could have a negative impact on a state already short on cash.

The state of Nevada has attempted to alleviate its teacher shortage, which is most severe in the Clark County School District in Las Vegas. School officials in that district, reportedly the third largest in the nation, face the daunting task of hiring approximately 2,500 teachers each year. At the time of this writing, there are 672 openings for licensed teachers. The Nevada Department of Education approved an Alternative Route to Licensure (ARL) program designed to alleviate shortages across the state, but it seems to be only a partial solution in its present form. Of equal concern is that once teachers are hired, districts struggle to retain them for a variety of reasons.

In addition to approaches that focus on state funding and providing paths to licensure through nontraditional means, the Missouri Department of Elementary and Secondary Education has recently begun looking at teacher preparation itself; staff have initiated statewide conversations amongst educators regarding how new teachers should be prepared. And of course, the National Council on Teacher Quality (NCTQ) has established itself as a national leader on educator quality and preparation through research and rankings of educator preparation programs.

So what’s the answer?

The solution to having an adequate supply of qualified, well-prepared teachers who will positively impact the lives, learning, and development of their students is not simple; it is complicated, and that’s why no one has solved it yet. However, I believe one answer lies in how teachers are prepared. While many educator prep programs do a fine job, many do not, and their new teachers are simply not ready to enter the classroom and hit the ground running. They have absolutely no idea how to effectively manage a classroom, deal with an angry parent, meet the needs of EVERY learner in their class, and so on. There is an apparent disconnect between what is being taught in colleges of education and the reality of teaching in today’s classrooms. Is one reason that those responsible for preparing future teachers have little to no current teaching experience themselves? Have they set foot in a P-12 classroom in the past five years? Have they cleaned up vomit all over desks and the floor? Have they done before- and after-school bus duty? Have they had a student arrested in their class? Have they had to bring comfort to a child who is homeless? I think that while credentialed education faculty are well-intentioned, knowledgeable, and experienced, their skills may not be what’s needed in today’s classrooms.

I have been developing some specific ideas about how to train new educators, some of which challenge the current preparation model. I’m working on creating an educator preparation program that could work for new teachers as well as new educational leaders, with features unlike any other program I’ve reviewed. Some would call it an alternative program, but I really don’t like that word and would love to see it dissociated from educator preparation. Want to know more? Interested in partnering with me on a project of immense importance, built from the ground up on academic excellence? Let me hear from you…






CAEP Site Visit Logistics

Preparing for an accreditation site visit is always stressful for university faculty and staff, even under the best of circumstances. Depending on whether we’re talking about a regional accrediting body, a state compliance audit, or a discipline-specific accreditor, there are certain processes and procedures that must be followed. For the sake of brevity, this piece will focus on one discipline, teacher preparation, using the Council for the Accreditation of Educator Preparation (CAEP) as the sample accrediting body.

There are some important topics to cover during a pre-visit conference call between the site team lead, the educator preparation provider (EPP), and state representatives. By the end of this call, all parties should be “on the same page” and clear about what to expect in the upcoming site visit. Here are the topics that are essential to cover:

  • Any general questions the EPP has regarding completion of the Addendum
  • Confirm Addendum submission date
  • Review and revise draft visit schedule
  • Travel Details
    • Confirm preferred airport
    • If arrival and departure times coincide, team prefers to pick up a rental car at the airport and provide their own transportation during the site visit.
    • Otherwise, EPP will need to make ground transportation arrangements.
  • Reminder per CAEP guidelines: No receptions, banquets, poster sessions, dinners with EPP representatives, etc.
  • School Visits
    • Not required, but generally requested by the team if there are concerns regarding clinical experiences. Typically limited to 2 (from different grade levels, such as 1 elementary and 1 high school)
    • Should not require significant drive time
    • EPP should provide a guide (typically faculty) to drive and serve as host/hostess
    • Usually should take no more than 1 hour on-site at school
  • Work Room at Hotel and on Campus
    • Must be secure and private; lockable.
    • Only site team members and state representatives are to enter the work rooms.
    • Conference table large enough to accommodate all team members and state representatives
    • Printer, secure wifi, LCD or HDTV projector
    • Shredder
    • Basic office supplies (e.g., stapler, paper clips, post-its, note pads, pens, highlighters)
  • Food/Snacks
    • There should be healthy snacks and beverages (e.g., bottled water, coffee, soda) in the work room at the hotel and on campus.
    • The team will eat breakfast at the hotel each morning.
    • If at all possible, the team will want to remain on campus for lunch, with the ideal arrangement to have lunch catered either in the workroom or in an adjacent room.
    • The EPP should suggest a variety of restaurants within easy driving distance of the hotel for dinner each night.
  • Interviews
    • Generate interviewee list. Examples include:
      • Dean
      • Assessment Director
      • Field Experiences Coordinator
      • Full-Time Faculty
      • Key Adjunct Faculty
      • Current candidates representing multiple programs
      • Program completers representing multiple programs
      • Cooperating teachers from field experiences
      • Clinical supervisors
      • P-12 partners (e.g., superintendents, principals, teachers)
      • Other:
    • Interview Rooms
      • Depending on final schedule, 3 rooms may be needed simultaneously.
      • Should have a door for privacy
      • EPP representatives should not attend interviews with candidates, program completers, or cooperating teachers
      • EPP should prepare sign-in sheets for each interview.
      • A staff member should be responsible for getting all participants to sign in and should then leave the room.
      • All sign-in sheets should be sent to the site team lead.
    • Requests for Additional Information or Data
      • All requests should flow from and back to the site team lead.

There will be additional items to discuss, but these are the most essential. Remember: advance preparation is one key to a successful site visit. Do your homework and know what is required. Get organized. Appoint someone with experience to coordinate the event. Start well in advance. And if in doubt, hire a consultant. Earning accreditation is crucial to an institution’s overall success and should never be taken lightly.



About the Author: Dr. Roberta Ross-Fisher has expertise in educator preparation, CAEP accreditation, and competency-based education. A former public school teacher and college administrator, Roberta is now an educational consultant and adjunct professor.


Interview Preparation: An Essential Part of a Successful CAEP Site Visit


Let’s cut to the chase: Interview preparation is one of the best things an institution can do to ensure a successful accreditation outcome.

Preparing for an accreditation site visit is always stressful for higher education faculty and staff, even under the best of circumstances. Depending on whether it’s a regional (institutional) accrediting body, a state compliance audit, or a programmatic accreditor, there are certain processes and procedures that must be followed. While each body has its own nuances, there’s one thing every institution should do to prepare: help its interviewees get ready. This piece will focus on helping educator preparation programs prepare for a Council for the Accreditation of Educator Preparation (CAEP) site visit.

Important note: The guidance below focuses exclusively on the final months and weeks leading up to a site visit. The actual preparation begins approximately 18 months before this point, when institutions typically start drafting their Self-Study Report (SSR).

2-4 Months Prior to the Site Visit

Approximately 2-4 months prior to a site visit, the CAEP team lead meets virtually with the educator preparation provider (EPP) administrator(s) and staff. Sometimes, representatives of the state’s department of education will participate. By the end of this meeting, all parties should be “on the same page” and clear about what to expect in the upcoming site visit. This includes a general list of who the team will likely want to speak with when the time comes.

A Word About Virtual and/or Hybrid Site Reviews

The onset of Covid-19 prompted CAEP to switch from onsite reviews to a virtual format. Virtual or hybrid site reviews require a different type of preparation than reviews conducted exclusively onsite. As we start to see Covid in the rearview mirror, I think accreditors may gradually ease back into onsite reviews, or at least a hybrid model. I provided detailed guidance for onsite reviews in a previous post.

CAEP has assembled some very good guidelines for hosting effective accreditation virtual site visits, and I recommend that institutional staff familiarize themselves with those guidelines well in advance of their review.

Interviews: So Important in a CAEP Site Visit

Regardless of whether a site visit is conducted on campus or virtually, one thing holds true:

An institution can submit a stellar Self-Study Report and supporting pieces of evidence, only to fail miserably during the site review itself. I’ve seen this happen over and over again. Why? Because institutions don’t properly prepare their interviewees. Remember that the purpose of site visit interviews is twofold:

First, site team reviewers need to corroborate what an institution has stated in their Self-Study Report, Addendum, and supporting pieces of evidence. In other words:

Is the institution really doing what they say they’re doing?

For example, if the institution has stated in their written documents that program staff regularly seek out and act on recommendations from their external stakeholders and partners, you can almost bet that interviewees will be asked about this. Moreover, they’ll be asked to cite specific examples. And they won’t just pose this question to one person. Instead, site team reviewers will attempt to corroborate information from multiple interviewees.

Second, site team reviewers use interviews to follow up and answer questions that still linger after reading the documents that were previously submitted. For example, if both the Self-Study Report and the Addendum failed to provide sufficient detail regarding how program staff ensure that internally created assessments meet criteria for quality, reviewers will make that a focus in several interviews.

In most instances, the site team lead will provide a list of individuals who can respond accurately and confidently to team members’ questions. Within the educator preparation landscape, typical examples include:

However, I have seen instances where the team lead asks the institution to put together this list. Staff need to be prepared for either scenario.

Mock Visits: Essential to Site Review Interview Preparation

Just as you wouldn’t decide a month in advance to run a marathon when the farthest you’ve walked is from the couch to the kitchen, an institution imperils itself by failing to fully prepare for an upcoming site visit, whether it’s onsite, virtual, or hybrid.

I’ve come to be a big believer in mock visits. When I first started working in compliance and accreditation many years ago, I never saw their value. Truthfully, I saw them as a waste of time. In my mind, while not perfect, our institution was doing a very good job of preparing future teachers. And, we had submitted a Self-Study Report and supporting pieces of evidence which we believed communicated that good work. We took great care in the logistics of the visit and when the time came, we were filled with confidence about its outcome. There was one problem:

We didn’t properly prepare the people who were going to be interviewed.

During site visits, people are nervous. They’re terrified they’ll say the wrong thing, such as spilling the beans about something the staff hopes the site team reviewers won’t ask about. It happens. Frequently.

When nervous, some people talk rapidly and almost incoherently. Some won’t talk at all. Others will attempt to answer questions but fail to cite specific examples to back up their points. And still others may be tempted to use site visit interviews as an opportunity to air their grievances about program administrators. I’ve seen each and every one of these scenarios play out.

This is why it’s critical to properly prepare interviewees for this phase of the program review. And this can best be done through a mock site visit. Another important thing to keep in mind is that the mock visit should mirror the same format that site team members will use to conduct their program review. In other words, if the site visit will be conducted onsite, the mock visit should be conducted that same way. If it’s going to be a virtual site visit, then the mock should follow suit.

Bite the bullet, hire a consultant, and pay them to do this for you.

It simply isn’t as effective when this is done in-house by someone known in the institution. A consultant should be able to generate a list of potential questions based on the site team’s feedback in the Formative Feedback Report. In addition to running a risk assessment, a good consultant should be able to provide coaching guidance for how interviewees can communicate more effectively and articulately. And finally, at the conclusion of the mock visit, they should be able to provide institutional staff with a list of specific recommendations for what they need to continue working on in the weeks leading up to the site visit in order to best position themselves for a positive outcome.

If you’re asking if I perform this service for my clients, the answer is yes. There is no downside to preparation, and I strongly encourage all institutions to incorporate this piece into their planning and budget.

While the recommendations above may feel exhausting, they’re not exhaustive. I’ve touched on some of the major elements of site visit preparation here but there are many more. Feel free to reach out to me if I can support your institution’s CAEP site visit effort.


About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at:


Top Graphic Credit: Pexels


Teacher Effectiveness & Positive Impact: The Dynamic Duo

Shaping Lifelong Learners: The Symbiosis of Teacher Effectiveness and Positive Impact

In education, a lot of emphasis is placed on teacher effectiveness and positive impact, as it should be. It’s widely accepted that teachers are highly influential on students, and that influence doesn’t just stop at the end of the school day or even the school year. Teachers have the ability to impact students’ learning and achievement for many years.

As a society, we want to know that those responsible for instructing our children are competent, caring, reflective, and ethical. We want teachers to possess the kind of skills, knowledge, and dispositions they need to model positive behaviors and support students in their learning and development.

Principals typically are responsible for monitoring the effectiveness of teachers in their building. They come in a few times per year and formally observe and evaluate each teacher “in action” while they’re teaching a lesson. Principals then rate teachers on their effectiveness using various district-approved criteria.

In addition, colleges and universities that prepare future teachers also play an important role in ensuring their graduates will be effective in the classroom.

That said, teacher effectiveness and having a positive impact on students’ learning and development are related concepts but are not necessarily synonymous. In fact, the Council for the Accreditation of Educator Preparation (CAEP), a leading national accrediting body, requires educator preparation providers to show the extent to which program completers are having a positive impact on the learning and development of their P-12 students. However, despite publishing a guide on the topic, the accrediting body doesn’t clearly articulate that while these terms go hand in glove, they are not the same and can’t be measured in the same way.

In order to have well-rounded, successful learners, we need to see evidence of both teacher effectiveness and positive impact. Here’s a brief explanation of the differences between the two:

Measuring Effectiveness vs. Impact

No doubt about it: We need teachers to plan lessons that are aligned to state standards. They must design learning experiences that will help students grasp important skills and concepts throughout the school year. There continues to be a heavy emphasis on using high stakes standardized assessments to measure student learning and subsequently, teacher effectiveness. However, an assessment is typically not a good way to truly measure positive impact. How, for example, can a test determine a student’s love for learning or their social development?


Long-Term vs. Short-Term Outcomes

We all want to see immediate results. When we change our diet or increase our exercise, we typically expect to see outcomes pretty quickly when we climb on the scale, and we’re elated when we see those pounds going down and feel those clothes become looser. However, we may not realize the long-term impact of those efforts for many months or years later. Lowering our cholesterol, taking pressure off our joints, and the like can take quite a while to notice, and can be hard to measure. This is similar in some ways to teacher effectiveness and positive impact:


Holistic Development vs. Academic Achievement

We certainly need to support our students’ learning. They need to know facts and critical information about a variety of topics. In turn, they must be able to demonstrate what they know and are able to do within both formal and informal assessments. However, students also need to learn how to interact positively with others, solving problems and conflicts in a way that meets their needs while also treating others with respect. In other words, they need to develop life skills.


Student Engagement and Motivation

We need safe, orderly classrooms with sufficient structure, but yet we also need to create learning environments that encourage students to stretch their minds, explore their dreams, and begin the journey of becoming eager lifelong learners.


Striking the Balance: Unveiling the Dual Roles of Effective Teaching

So, a teacher can be effective in a single lesson, or over a unit of study. They can create an orderly, calm learning environment where students are well-behaved. They can create and deliver instructional lessons that are aligned to state standards, and their students can perform well on formative and summative assessments. Those are all examples of teacher effectiveness, and we certainly want that.

However, we also need our teachers to support their students as individuals, helping them to feel excited and motivated. We need teachers to encourage learners to think creatively and critically and ask questions. We want educators to empower students so they gradually take on a greater role in their own learning and decision making. Those are the kinds of influences teachers can and should have on their students, because those are skills that students will carry with them for the rest of their lives. That’s positive impact.

Beyond the Classroom: Nurturing Effective Teachers for Lasting Impact

In summary, while teacher effectiveness is an important aspect of education, having a positive impact on students’ learning and development involves a more comprehensive and long-term perspective. It extends beyond academic achievements to encompass holistic growth and lifelong learning skills. Teacher education program faculty should integrate these concepts into their coursework and clinical experiences. They should also be working in partnership with local school districts by exchanging ideas and providing professional development. Developing highly effective teachers who make a positive impact on students’ learning and development requires a concerted effort, and it doesn’t happen overnight.



About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at:

Top Photo Credit: Zainul Yasni on Unsplash 

Career-Focused Outcomes in Higher Education


In an educational landscape where scrutiny is high, academic institutions find themselves under the microscope, particularly in demonstrating their value to stakeholders. To address this, colleges and universities must articulate their commitment to preparing students for the workforce effectively. This involves not only showcasing career-focused outcomes but also ensuring a tangible return on investment. Metrics have become the tool of choice, allowing institutions to gauge success both at a macro and micro level.

From Classroom to Career

It’s so important for colleges and universities to show the academic community, as well as the public at large, that they provide good value for the money that students, donors, and taxpayers invest in them each year. One of the ways they do this is through career-focused outcomes. Higher education institutions must be able to answer questions like:

Career-Focused Outcomes

Career-Focused Outcomes Using a Macro vs. Micro Lens

Metrics like these are measured in various ways. An entire institution, for example, may view this through a broad lens, and may answer questions like these from a macro perspective. However, each academic program should be able to collect, analyze, and interpret data tailored to its specific area in order to answer the ROI question from more of a drilled-down, micro perspective.

Teacher Effectiveness and Positive Impact

In educator preparation, for example, one important indicator of a program’s quality can be found in the performance of its graduates, typically up to three years post-graduation. Teacher preparation program faculty and staff must look closely at a large number of performance indicators, two of which are teacher effectiveness and positive impact on student learning. These are related concepts, but they are not necessarily synonymous. Let’s break down the similarities and differences:


  • Focus on Student Outcomes: Both teacher effectiveness and positive impact center around achieving positive outcomes in students’ learning and development.
  • Student Progress: Both concepts involve assessing and improving students’ progress, academic achievements, and overall growth.


  • Teacher effectiveness: Typically refers to how well a teacher can facilitate learning and engage students in the educational process. It is often measured through various factors such as classroom management skills, instructional techniques, subject knowledge, and adherence to curriculum standards. Typical pieces of evidence for determining teacher effectiveness often include peer observations, principal evaluations, a review of teaching methods, lesson plans, and classroom management practices.
  • Positive Impact on Students:  Involves not only effective teaching but also fostering a supportive and motivating environment that contributes to students’ personal and academic growth. It goes beyond traditional academic metrics and may include factors like students’ social-emotional development, critical thinking skills, and overall well-being. Evidence for positive impact can include student testimonials, changes in behavior or attitudes, academic improvement, and long-term success beyond the classroom. Another way schools and states try to determine positive impact comes from value-added data, which involves measures that typically focus on quantifying the specific contribution a teacher makes to students’ academic achievement, often measured through standardized test scores.
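To make the value-added idea above concrete, here is a deliberately simplified sketch. Real value-added models use multilevel regression with demographic and prior-achievement covariates; this toy version (with entirely hypothetical scores and an assumed district-average expected growth) only illustrates the core intuition of comparing observed student growth to an expected baseline.

```python
from statistics import mean

def simple_value_added(pre_scores, post_scores, expected_growth):
    """Naive value-added estimate: a class's average pre-to-post growth
    minus the growth expected for comparable students (e.g., a district
    average). Real value-added models are far more sophisticated; this
    sketch only conveys the core idea."""
    growth = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return mean(growth) - expected_growth

# Hypothetical class: pre/post standardized test scores for five students
pre = [52, 61, 47, 70, 58]
post = [60, 70, 53, 79, 66]
print(simple_value_added(pre, post, expected_growth=6.0))  # → 2.0 points above expected
```

A positive result suggests the class grew more than comparable students; a negative one suggests less. Either way, it measures academic growth only, which is exactly why value-added data cannot capture the social-emotional dimensions of positive impact described above.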


It is very important for higher education institutions to create a well-balanced schema for answering questions related to job preparation, positive impact, and overall return on investment. They must collect and analyze data from a variety of internal and external high-quality assessments. It’s about tracking results over time and making informed decisions with a commitment to continuous improvement.

In essence, the pursuit of showcasing career-focused outcomes is a collective effort that involves the institution as a whole and each academic program individually. By embracing a holistic perspective and delving into program-specific metrics, colleges and universities can not only provide answers to pertinent questions, but also demonstrate their unwavering commitment to delivering value in the evolving landscape of higher education.



About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at:


Top Photo Credit:

Competency-Based Education: A Paradigm Shift in Higher Learning


We need a paradigm shift in higher learning. For over a century, the Carnegie Unit has been the cornerstone of American education, providing a time-based standard for student progress. However, as the landscape of education evolves, the limitations of this model become apparent, prompting educators to explore innovative alternatives. One such model gaining significant traction is Competency-Based Education (CBE). In this post, I’ll delve into the merits of CBE and offer some practical tips for higher education professionals looking to pilot this transformative approach. 

Rethinking Education in the 21st Century

The traditional education model often propels students forward collectively, irrespective of individual learning paces or abilities. The disruption caused by events like COVID-19 has underscored the need for a more adaptive and personalized approach. We know that each learner is different, and they come with a variety of learning needs as well as life and work experiences. For too long, we’ve used a cookie cutter, one-size-fits-all approach to teaching and learning — particularly at the higher education level. Enter Competency-Based Education, a paradigm that requires learners to demonstrate their understanding and skills through rigorous assessments rather than mere attendance. It also requires faculty members, administrators, and other staff to rethink their roles and how they support students through their academic journey.

Unveiling the Essence of CBE

Competency-Based Education isn’t about taking the easy route; it’s about embracing a different and more effective methodology. Instead of passively absorbing information, students are challenged to showcase their knowledge and abilities through high-quality assessments. This approach is inherently standards-based and is built on evolving educational and/or industry-specific standards. It is far different from the model most faculty members are used to, in which they alone decide what content to teach in their classes, how students will meet their expectations, and the pace at which students may progress through a course.

Key Principles of Competency-Based Education

Traditional learning and CBE learning share a common goal of wanting students to be successful. It’s how they meet that goal that’s different. Here are some key “big picture” ways in which a competency-based model differs from a traditional course-based model:

Competency-based education is a paradigm shift in higher learning.

A Paradigm Shift: Tips for Piloting CBE in Higher Education

I’ve presented at conferences on this topic, and multiple times I’ve been approached by a college dean or department chair interested in bringing the CBE model to their campus. Few realize that changing to this model — either retrofitting an existing program or creating one from scratch — requires a considerable paradigm shift not only in academics, but in infrastructure services (i.e., enrollment & admissions, registrar, bookstore, academic advising, etc.). I even had a dean once pull a pen and a small tablet out of her purse, waiting for me to give her three easy steps to CBE, as if it were a biscuit recipe. The truth is, competency-based education is a complex approach to teaching and learning. Once it’s in place, the payoff can be tremendous — but stakeholders must understand the cultural changes that must take place in order for CBE to become a long-term reality within their institutions.

Here are a few key tips for launching CBE at the higher education level:

A Long-Term Commitment to Student Success

Competency-Based Education is not a quick fix, but a powerful, long-term solution to enhance student learning, achievement, and satisfaction. It truly is a paradigm shift in higher learning. I think it’s time to take the leap into a future where education adapts to the needs of the learner.


About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: 

Top Photo Credit: Kaleidico on Unsplash 

CBE: A Transformative Approach for Higher Learning



In the dynamic landscape of education, where the needs and expectations of learners continue to evolve, Competency-Based Education (CBE) stands out as a powerful and adaptive model. Initially prevalent in P-12 schools, CBE is progressively gaining recognition and traction in higher education. For those interested in exploring new and better ways to meet the needs of learners, it’s crucial to understand the transformative potential of CBE and how to initiate this innovative model in your institution.

Understanding the Core Principles of Competency-Based Education

Demonstrative Assessment

In a CBE model, students showcase their knowledge and skills through a variety of high-quality formative and summative assessments. This approach shifts the focus from traditional testing to a more comprehensive evaluation of a student’s true understanding and application of concepts.

Measurable and Clear Expectations

CBE emphasizes measurable and clearly defined expectations. Learners are aware of the specific targets they need to reach in order to demonstrate competency or proficiency in key concepts or skills aligned with standards. This clarity empowers students to take ownership of their learning journey.

Outcome Over Seat Time

Let’s face it: We’ve all had students who showed up for class, but never answered a question and could barely stay awake. Or they sat glued to their phone throughout the period and couldn’t wait to make their exit. Unlike traditional models that rely on seat time, CBE prioritizes what students learn rather than how long they spend in a classroom. This flexibility allows students to progress at their own pace, accommodating those with diverse life or work experiences who may not require a conventional college experience.

Mentorship Model

Faculty members transition from direct instructors to mentors or learning coaches. This shift is fundamental in supporting student learning, enabling them to work independently and guiding them through their educational journey. The mentorship model fosters a personalized approach to education. Truthfully, some faculty members have a difficult time in making this transition. But for those who are able, it can be tremendously satisfying to support students on their educational journey, rather than being the sage on the stage.

Data-Driven Decisions

Instructional decisions in a CBE environment are data-driven. Regular assessments provide valuable insights into student progress, allowing faculty to tailor their support and interventions based on individual needs. This personalized approach contributes to a more effective learning experience.

Navigating CBE Implementation Challenges

Initiating CBE at the college or university level requires a comprehensive institutional commitment. This commitment involves a paradigm shift in the faculty model, changes in registration and scheduling processes, and adaptations to student support services. Here are a few practical tips to navigate these challenges:

Faculty Development

Invest in comprehensive faculty training programs to equip educators with the skills and mindset required for the mentorship role. Workshops on coaching techniques, personalized learning strategies, and outcome-oriented assessment methods can be invaluable.

Flexible Scheduling and Registration

Redefine traditional scheduling structures to accommodate the individualized pace of CBE. Implement flexible course structures and explore modular approaches to allow students to progress based on their demonstrated competencies.

Technology Integration

Leverage educational technology to facilitate personalized learning pathways. Learning management systems, data analytics tools, and adaptive learning platforms can enhance the effectiveness of CBE by providing real-time insights into student performance.

Communication and Marketing

Effectively communicate the benefits of CBE to both faculty and students. Highlight the flexibility, personalized learning experiences, and real-world applicability of competencies acquired. Develop marketing strategies to attract students who seek a non-traditional educational experience.

Accreditation Alignment

Collaborate with accrediting bodies to ensure that your institution’s competency-based instructional models align with their standards. Stay informed about modifications in regulations and actively engage in discussions with accrediting agencies to demonstrate the effectiveness and rigor of the CBE approach. While hesitant at first, many accrediting bodies such as the Higher Learning Commission (HLC), Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), and other bodies recognized by the Council for Higher Education Accreditation (CHEA) have modified their regulations to include competency-based instructional models.

Embracing the Future of Education

While the transition to Competency-Based Education may present challenges, the benefits are substantial. It provides a pathway for institutions to meet the needs of a diverse student population, acknowledging the rich experiences that learners bring to the table. Moreover, the flexibility of CBE can be a strategic advantage in attracting a broader range of students.

As pioneers in higher education, faculty, department chairs, deans, provosts, and accreditation specialists have the opportunity to shape the future of learning. By embracing the principles of CBE and strategically navigating the implementation challenges, institutions can create an environment that not only meets the evolving needs of students but also positions them as leaders in innovative education.


About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at:


Top Photo Credit: Dollar Gill on Unsplash


Quality Assurance System: The Drivetrain of Institutional Effectiveness


If you spend much time at all within the accreditation space, you’ll undoubtedly hear someone in higher education say, “Oh, we have a Quality Assurance System (QAS); we use _________.” They’ll proudly point to a license agreement they have with a company, where student work or assessment results are uploaded and stored. Some use that service to run data reports and are thrilled to share that it even “does data analysis.” Unfortunately, those well-intentioned individuals are missing the mark when it comes to a QAS.

A Quality Assurance System is really like the drivetrain of our car—without it we’d get nowhere, stuck along the side of the road. We’d know we had a problem, but without that drivetrain we may not know how to resolve our issue. We’d be wondering what to do next.

What a Quality Assurance System Isn’t

It’s important to remember that a Quality Assurance System isn’t a software program or a subscription-based website. It’s a well-planned and executed system by which institutions and individual programs monitor quality on key performance indicators. They then use insights gleaned from trendlines to make data-informed programmatic decisions.

Essential Components of a Healthy QAS

A healthy, solid quality assurance system requires a well-defined schema that involves looking at multiple data sources and being able to triangulate those data over time to look for patterns, trends, strengths, and weaknesses. And it shouldn’t just be one or two people reviewing data—there should be groups and advisory boards assigned to this task. Why? So steps can be taken to make improvements when the need arises.
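As a minimal sketch of what “triangulating data over time to look for trends” can mean in practice, the snippet below fits a least-squares trendline to term-by-term averages on a single indicator. The pass rates are hypothetical; a real QAS would run this kind of analysis across many indicators and pair the numbers with advisory-board review rather than relying on the slope alone.

```python
from statistics import mean

def trend_slope(values):
    """Least-squares slope of a series of term-by-term averages
    (terms are treated as evenly spaced: 0, 1, 2, ...).
    Positive → improving trend; negative → flag for the advisory group."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Hypothetical mean licensure-exam pass rates over six semesters
pass_rates = [82.0, 84.5, 83.0, 86.0, 88.5, 90.0]
print(round(trend_slope(pass_rates), 2))  # → 1.57 (points gained per semester)
```

The same function applied to several data sources (survey means, key-assessment scores, retention rates) lets reviewers compare trendlines side by side — which is the triangulation, as opposed to reacting to any single semester’s number.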

High Quality Assessments


A well-functioning QAS requires a blend of both proprietary and internally created high-quality assessments. We know that data are only as good as the assessments themselves. Great care must be taken when creating key assessments to ensure that each measures what it is intended to measure (content validity) and that results are consistent across multiple administrations (reliability). Surveys need a manageable number of clearly worded questions. New assessments need to be piloted according to widely accepted protocols.
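One widely used reliability statistic that pilot protocols often call for is Cronbach’s alpha, which estimates the internal consistency of a multi-item instrument. The sketch below uses only the standard library and hypothetical pilot data; a real pilot would involve far more respondents and, typically, dedicated psychometric software.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.
    item_scores: one list of scores per survey item, all over the
    same respondents, in the same respondent order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))"""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # each respondent's total
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical pilot: three Likert-scale items, five respondents each
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # → 0.89
```

Values around 0.7 or higher are commonly treated as acceptable for survey scales, though the appropriate threshold depends on the stakes of the instrument.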

Real-Life Assessment Examples

Some examples of proprietary assessments that colleges and universities often use include the SAT, ACT, GRE, edTPA, Praxis, NCLEX, and so on. In other words, these are standardized high-stakes assessments that have been developed and road-tested by assessment development companies.

Internally created assessments, on the other hand, are those an institution creates “in-house” for a variety of purposes. For example, it’s common for colleges to survey their students at the end of each semester to gauge their satisfaction with their instructors, the quality of the food in the cafeteria, advising services, and so on. Faculty within programs also develop what they consider to be key assessments–perhaps 5-7 that are required of all students–to monitor skills development as students progress through a particular licensure-track program. These are often cornerstone assessments in a select group of courses, and they can provide valuable insight into student performance as well as the quality of the program itself.

Stakeholder Input

A solid QAS depends on input from both internal and external stakeholders. Faculty, student support staff, current students, graduates, and members of the community or business and industry should serve in advisory capacities. Each individual brings a unique set of experiences and perspectives to the table, and diversity of thought can enrich programs.

Real Life Stakeholder Examples

Internal stakeholders include current and past students, faculty members, academic advisors, and so on. External stakeholders are those on the outside of the college or university. They include employers, individuals who have graduated more than a year ago, members of relevant civic groups, and so on. It’s really important to garner the perspective of those who are from within the institution as well as those who are on the outside looking in.

The Ultimate Goal: Continuous Program Improvement

And finally, a well-functioning Quality Assurance System must enable institutions to make data-informed decisions with confidence, for the purpose of continuous program improvement. Staff must be able to identify specific areas of strength, as well as specific areas for growth and improvement. They need to know if an approach or a policy is working or not. And they need a leg to stand on when it comes to making programmatic changes. That leg needs to be grounded in high quality data. Having well-functioning Quality Assurance Systems will support colleges and universities in their accreditation efforts, state program approvals, and growth. They truly are the drivetrain of institutional effectiveness.


About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: 

Top Photo Credit: Samuele Errico Piccarini on Unsplash 


Fostering Student Success: The Significance of Transition Points in Higher Education Programs



In the dynamic landscape of higher education, successful program completion involves more than just attending classes and earning credits. It requires a structured and purposeful journey through well-defined transition points or milestones. These markers delineate specific phases of progress that students must navigate to ensure they are well-prepared for the challenges that lie ahead. This is especially crucial in licensure-based programs like nursing and teacher education, where the sequential mastery of skills is paramount.

The Protective Structure of Transition Points

Transition points are not arbitrary hurdles; they are a safeguard, ensuring that students progress through a program in a planned and thoughtful manner. The structure serves to protect students and foster their success. It prevents them from signing up for an advanced level course before they have successfully completed foundational level work. Moreover, these gateways provide them with a chance to build their developing skills in key areas before engaging in field experiences. And, by adhering to established transition points, students are much more likely to graduate on time, pass licensure examinations, and get hired for a job in their chosen profession after graduation.

Key Criteria for Identifying Transition Points

Department chairs and faculty should carefully consider various criteria when determining the right transition points. For example:

[Image: Transition Points Checklist]


A Transition Points Framework

To guide educators in implementing effective transition points, a customizable framework can be immensely beneficial. For instance, in educator preparation programs, a five-point model might include:

[Image: Five-point Transition Points framework]


This framework acts as a roadmap, offering a detailed depiction of a student’s progression from matriculation to program completion. Each transition point represents a crucial phase, ensuring that students are adequately prepared before advancing to the next stage. As long as a student meets the stated expectations, the journey continues and they move ahead toward graduation. If the student fails to meet one or more expectations in a given stage, the institution implements a plan for remediation, additional support, or, in some cases, counseling out of the program.
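To make the gate-keeping logic concrete, the toy Python sketch below checks a student record against one transition point. The gate names, GPA thresholds, and criteria are entirely hypothetical illustrations, not a prescribed framework:

```python
# Hypothetical five-point model for an educator preparation program.
# Gate names and criteria are illustrative only.
TRANSITION_POINTS = [
    ("Admission to the program", {"min_gpa": 2.75}),
    ("Admission to candidacy", {"min_gpa": 3.00, "foundations_complete": True}),
    ("Approval for clinical practice", {"min_gpa": 3.00, "methods_complete": True}),
    ("Completion of clinical practice", {"portfolio_passed": True}),
    ("Program completion / licensure recommendation", {"exit_exam_passed": True}),
]

def check_gate(student, criteria):
    """Return the list of unmet expectations at a transition point."""
    unmet = []
    for key, required in criteria.items():
        if key == "min_gpa":
            if student.get("gpa", 0.0) < required:
                unmet.append(f"GPA below {required:.2f}")
        elif not student.get(key, False):
            unmet.append(key)
    return unmet

student = {"gpa": 2.90, "foundations_complete": True}
name, criteria = TRANSITION_POINTS[1]  # gate 2: admission to candidacy
gaps = check_gate(student, criteria)
# Unmet expectations trigger remediation or support rather than advancement.
print("advance" if not gaps else f"remediate: {gaps}")
```

The point of the sketch is the decision rule, not the specific thresholds: each gate names explicit, checkable expectations, and anything unmet routes the student to support instead of the next stage.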


Transition points are the linchpin of a successful higher education program, providing a structured path for students to navigate. Through careful consideration of key criteria and the implementation of a tailored framework, educators can guide students toward timely graduation, licensure success, and a seamless transition into their chosen professions. By prioritizing these markers, institutions not only protect the interests of their students but also contribute to the overall success and reputation of their programs.





Top Photo Credit: Clay Banks on Unsplash 

Examples and Exemplars in Regulatory Spaces



Embarking on the journey of launching a new program at your college or university is an exciting endeavor, but the regulatory landscape can be a daunting terrain to navigate. Many college and university personnel find themselves grappling with uncertainty about what evidence to provide and how to demonstrate compliance with specific standards set by institutional or programmatic accreditors. In an era where higher education websites offer a plethora of examples, it’s crucial to understand the distinction between examples and exemplars when it comes to the regulatory space. While examples can serve as general guides, they should not be mistaken for perfect templates. Here I shed light on this crucial distinction and provide higher education staff with actionable tips for a smoother regulatory approval process.

Understanding the Difference Between Examples and Exemplars

Before delving into the tips, it’s important to clarify the difference between examples and exemplars when working in the regulatory space. Examples are instances of documents, reports, or data submitted by other institutions to accrediting bodies. They can serve as helpful references, offering insight into the types of information that might be required. On the other hand, exemplars are not just examples; they are models of excellence. Exemplars represent the gold standard, and assuming that any document submitted by another institution is flawless can lead to significant pitfalls. Recognizing this distinction is the first step toward a more informed and successful regulatory approval process.


Alternatives to Using Examples & Exemplars in Regulatory Work

Customization is Key

While examples can provide a starting point, it’s crucial to customize documents and evidence to align with the unique characteristics of your institution and the proposed program. Copying and pasting from examples might not capture the specific nuances and strengths of your institution, potentially leading to a misrepresentation of your capabilities.

Engage in Peer Collaboration

Instead of relying solely on online examples, consider engaging in collaborative efforts with peer institutions. Sharing insights, challenges, and successful strategies with institutions facing similar regulatory processes can offer a more nuanced understanding. Peer collaboration allows for the exchange of real-world experiences and promotes a collective learning environment.

Regularly Review and Update Documentation

The regulatory landscape evolves, and so should your documentation. Rather than relying solely on outdated examples, strive to stay abreast of changes in accreditation standards and requirements. Regularly review and update your documentation to reflect any new expectations, ensuring that your submission remains relevant and compliant.

Seek Guidance from Accreditation Experts

Most institutions have dedicated accreditation liaisons or experts who can provide valuable guidance. These individuals possess an in-depth understanding of accreditation standards and can offer insights tailored to your institution’s context. Consult with them regularly to ensure your documentation meets the necessary criteria and standards. That said, some colleges and universities don’t have the luxury of full-time compliance and accreditation experts on staff, or no one on staff has experience working with a particular state agency or accrediting body. In those cases, hiring a consultant can be a wise investment.

Use Examples Judiciously

Examples can be powerful tools when used judiciously. Rather than mirroring another institution’s document entirely, extract relevant concepts, structures, and approaches that align with your institution’s context. Adapting best practices from examples can enhance the quality of your submission without compromising authenticity.



In the realm of regulatory matters, the journey to program approval requires careful consideration, strategic planning, and a nuanced approach to documentation. While examples can serve as valuable guides, they should not be misconstrued as flawless templates. The key lies in understanding the unique needs of your institution and tailoring documentation accordingly. By following these tips, higher education staff can navigate the regulatory landscape with confidence, ensuring that their submissions stand out for their authenticity and compliance.





Top Photo Credit: Gabrielle Henderson on Unsplash 


Transforming Higher Education: The Power of Student Mentoring Programs



In the ever-evolving landscape of higher education, the quest for student success remains a central concern for colleges and universities across the United States. While academic advisors play a pivotal role in guiding students through their educational journey, a more personalized and intensive approach is required to meet the needs of at-risk students. This is where student mentoring programs step in. Here I explore the concept of student mentoring in higher education — delving into its benefits, potential drawbacks, and its significant role in enhancing institutional effectiveness and accreditation efforts.

Understanding the Role of a Student Mentor

In traditional academic advising, the primary focus is on helping students chart their academic paths and assisting with course registration. However, there exists a group of students who require a more hands-on and personalized approach. These students, often referred to as at-risk, may struggle with various aspects of their college experience, be it academic, financial, or personal. A student mentor is a specially trained individual who goes beyond the traditional academic advisor’s role.

A mentor typically:

  • Interacts with students regularly: A mentor engages with the student multiple times each month through various communication channels, including email, phone calls, text messages, virtual conferences, or in-person meetings. This frequent interaction helps build a strong support system for the student.
  • Acts as a liaison: A mentor serves as a bridge between the student and various university services. If a student encounters difficulties with financial aid applications, the mentor can either assist directly or connect the student with the appropriate staff in the Financial Aid office. Similarly, if a student is struggling academically, the mentor can facilitate tutoring services.
  • Monitors student progress: If a student begins to miss classes or falls behind in their coursework, the mentor plays a proactive role in reaching out to the student. They work with the student to identify the reasons for their struggles and collaboratively develop a plan for academic success.

The Benefits of a Strong Mentoring Model

The traditional academic advising model often relies on students seeking assistance, which may not be sufficient for at-risk students. However, a strong mentorship model, where a mentor is assigned to a student upon matriculation and remains with them until graduation, offers numerous advantages:

  • Improved Student Success: A mentor’s consistent support and guidance significantly contribute to student success. At-risk students often face challenges that can derail their academic progress, and a mentor helps address these issues promptly, leading to higher achievement and improved GPA.
  • Enhanced Student Retention: By closely monitoring a student’s academic journey, a mentor can identify and address issues that may lead to dropouts. This proactive approach contributes to higher retention rates, which is a key concern for colleges and universities.
  • Greater Student Satisfaction: The personal connection and support provided by mentors lead to increased student satisfaction. Knowing there is someone dedicated to their success boosts students’ morale and confidence.
  • Improved Institutional Effectiveness: A well-structured mentorship program aligns with institutional effectiveness goals. It provides a systematic approach to monitor, support, and measure student success, helping institutions meet accreditation standards more effectively.
  • Accreditation Compliance: Accreditation bodies, such as the Higher Learning Commission (HLC), the Association for Biblical Higher Education (ABHE), and the Council for the Accreditation of Educator Preparation (CAEP), emphasize the importance of demonstrating support for student success. A strong mentorship program positions institutions to meet these requirements effectively.

Challenges and Drawbacks to a Mentoring Model

While student mentoring programs offer immense benefits, there are challenges and potential drawbacks that institutions need to consider:

  • Financial Costs: Implementing a mentorship program requires hiring and training mentors, which can strain an institution’s budget. However, the long-term benefits often outweigh the initial costs.
  • Workload for Mentors: Mentors must be dedicated and properly trained to address a wide range of student needs. The workload can be intensive, and managing a caseload of at-risk students requires effective time management and organizational skills.
  • Scalability: Scaling a mentorship program to accommodate a growing student population can be challenging. Institutions must carefully plan and allocate resources to ensure the program’s success as the student body expands.
  • Cultural Shift: Shifting from a traditional academic advising model to a mentorship program may require a cultural shift within the institution. Faculty, staff, and students need to adapt to the new approach.

Practical Steps for Implementing a Student Mentorship Program

To successfully implement a student mentorship program in your institution, consider the following practical steps:

  • Assess Student Needs: Identify the specific needs of your student population. Conduct surveys, focus groups, and data analysis to understand the challenges at-risk students face.
  • Define Mentor Roles: Clearly outline the roles and responsibilities of mentors. Determine how they will interact with students and which services they will connect students with.
  • Mentor Training: Invest in comprehensive training for mentors, covering areas such as academic support, communication skills, and campus resources. Training is crucial for ensuring mentors are well-prepared to assist students effectively.
  • Integration with Existing Services: Ensure seamless integration with existing university services, such as academic advising, financial aid, and tutoring. Mentors should collaborate with these services to provide holistic support.
  • Data and Monitoring: Implement a data-driven approach to monitor the program’s impact on student success. Regularly assess the program’s effectiveness and make adjustments as needed.
  • Student Outreach: Promote the mentorship program to incoming students and engage them from day one. Assign mentors to students upon matriculation to establish a strong support system from the start.
  • Resource Allocation: Allocate necessary resources, both in terms of personnel and budget, to support the program. Consider seeking external funding sources if needed.


In the quest for higher education excellence and student success, student mentoring programs play a pivotal role. These programs provide a more personalized, proactive, and comprehensive approach to supporting at-risk students, ultimately leading to improved retention, student satisfaction, and academic success. While there are financial and logistical challenges, the long-term benefits, including compliance with accreditation standards and institutional effectiveness goals, make student mentoring a worthwhile investment for colleges and universities.

In a rapidly changing higher education landscape, the transformational power of student mentoring programs can be the catalyst for lasting change, ensuring that all students have the opportunity to thrive and succeed in their academic pursuits.



Top Photo Credit:  Monica Melton on Unsplash

The Pillars of Data Consistency: Inter-Rater Reliability, Internal Consistency, and Consensus Building



Accreditation in higher education is like the North Star guiding the way for colleges and universities. It ensures institutions maintain the highest standards of educational quality. Yet, for higher education professionals responsible for completing this work, the journey is not without its challenges. One of the most critical challenges they face is ensuring the data consistency, or reliability, of key assessments. This is why inter-rater reliability, internal consistency, and consensus building are among the bedrocks of data-informed decision making. As the gatekeepers of quality assurance, higher education professionals should possess a working knowledge of these concepts. Below, I explain some basic concepts of inter-rater reliability, internal consistency, and consensus building:

Inter-Rater Reliability

What it is: Inter-rater reliability assesses the degree of agreement or consistency between different people (raters, observers, assessors) when they are independently evaluating or scoring the same data or assessments.

Example: Imagine you have a group of teachers who are grading student essays. Inter-rater reliability measures how consistently these teachers assign grades. If two different teachers grade the same essay and their scores are very close, it indicates high inter-rater reliability. A similar example would be in an art competition, where multiple judges independently evaluate artworks based on criteria like composition, technique, and creativity. Inter-rater reliability is vital to ensure that artworks are judged consistently. If two judges consistently award high scores to the same painting, it demonstrates reliable evaluation in the competition.
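For readers who want to quantify agreement, Cohen’s kappa is one common statistic: it measures agreement between two raters while correcting for agreement expected by chance. The minimal Python sketch below, using made-up essay grades, illustrates the idea:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Two teachers independently grade the same ten essays (invented data).
teacher_1 = ["A", "B", "B", "C", "A", "D", "B", "C", "A", "B"]
teacher_2 = ["A", "B", "C", "C", "A", "D", "B", "C", "B", "B"]
print(round(cohens_kappa(teacher_1, teacher_2), 2))  # → 0.72
```

Kappa of 1.0 means perfect agreement, 0 means no better than chance; values above roughly 0.6 to 0.8 are conventionally read as substantial agreement, though any cut-off is a judgment call.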

Importance in Accreditation: In an educational context, it’s crucial to ensure that assessments are scored consistently, especially when accreditation bodies are evaluating the quality of education. This ensures fairness and objectivity in the assessment process.

Internal Consistency

What it is: Internal consistency assesses the reliability of a measurement tool or assessment by examining how well the different items or questions within that tool are related to each other.

Example: Think about a survey that asks multiple questions about the same topic. Internal consistency measures whether these questions consistently capture the same concept. For example, let’s say a teacher education program uses an employer satisfaction survey with multiple questions to evaluate various aspects of its program. Internal consistency ensures that questions related to a specific aspect (e.g., classroom management) yield consistent responses. If employers respond in a consistent pattern across several related questions, it reflects high internal consistency in the survey.
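One widely used internal-consistency statistic is Cronbach’s alpha. The short, self-contained Python sketch below computes it for an invented set of employer responses to three related survey items:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for rows of (respondent x item) scores."""
    n_items = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each individual item (column) across respondents.
    item_vars = [variance([row[i] for row in item_scores]) for i in range(n_items)]
    # Variance of each respondent's total score across all items.
    total_var = variance([sum(row) for row in item_scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Five employers answer three related questions about classroom
# management on a 1-5 scale (invented data).
responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))  # → 0.92
```

Alpha near 1.0 indicates the items move together and likely measure the same construct; values of roughly 0.7 or higher are a common (if debated) rule of thumb for acceptable reliability.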

Importance in Accreditation: When colleges and universities use assessment tools, they need to ensure that the questions within these tools are reliable. High internal consistency indicates that the questions are measuring the same construct consistently, which is important for accurate data in accreditation.

Consensus Building

What it is: Consensus building refers to the process of reaching agreement or alignment among different stakeholders or experts on a particular issue, decision, or evaluation.

Example: In an academic context, when faculty members and administrators come together to determine the learning outcomes for a program, they engage in consensus building. This involves discussions, feedback, and negotiation to establish common goals and expectations. Another example might be within the context of institutional accreditation, where an institution’s leadership, faculty, and stakeholders engage in consensus building when establishing long-term strategic goals and priorities. This process involves extensive dialogue and agreement on the institution’s mission, vision, and the strategies needed to achieve them.

Importance in Accreditation: Accreditation often involves multiple parties, such as faculty, administrators, and external accreditors. Consensus building is crucial to ensure that everyone involved agrees on the criteria, standards, and assessment methods. It fosters transparency and a shared understanding of what needs to be achieved.


In summary, inter-rater reliability focuses on the agreement between different evaluators, internal consistency assesses the reliability of assessment questions or items, and consensus building is about reaching agreement among stakeholders. All three are essential in ensuring that data used in the accreditation process is trustworthy, fair, and reflects the true quality of the institution’s educational programs.



Top Photo Credit: Markus Spiske on Unsplash 

Persistence and Retention in Higher Education


In higher education, “persistence to graduation” and “retention” are related but distinct terms that are often used to measure and analyze student progress and institutional effectiveness. College and university personnel encounter them when working on institutional or programmatic accreditation efforts. The two terms are sometimes used interchangeably, yet they are not synonymous.

For example, the Higher Learning Commission (HLC) makes a distinction in its Teaching and Learning: Evaluation and Improvement (Criterion 4C).  In its Guiding Principle 2 (Standard IV), the Middle States Commission on Higher Education (MSCHE) requires member institutions to “…commit to student retention, persistence, completion, and success through a coherent and effective support system…”

Here’s a very quick overview of the difference between retention and persistence:


Retention refers to the percentage of students who continue their enrollment at the same institution from one academic year to the next. It measures how many students remain at the same college or university without transferring or dropping out.

Retention is primarily concerned with keeping students within the institution they initially enrolled in.


Persistence, on the other hand, is a broader term that encompasses a student’s continuous pursuit of a degree or educational goal. It measures whether a student is consistently working toward completing their program or degree, regardless of whether they stay at the same institution or transfer.

Persistence focuses on the overall progress of a student toward their educational goal, which can involve transferring to another institution, taking breaks, or pursuing part-time studies.

The Bottom Line

In summary, while both persistence and retention are crucial metrics in higher education, they differ in focus and scope:

Retention is concerned with students staying at the same institution and measures institutional success in keeping students from leaving.

Persistence is concerned with students continuously working toward their educational goals, which may include transferring to other institutions, taking breaks, or pursuing part-time studies.
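The distinction is easiest to see in a worked example. The snippet below uses invented cohort numbers purely for illustration:

```python
# A hypothetical fall cohort of 1,000 first-time students, one year later:
cohort = 1000
still_enrolled_same_school = 780   # re-enrolled at the same institution
transferred_still_pursuing = 90    # enrolled elsewhere, still pursuing a degree

# Retention counts only students who stayed at the same institution.
retention_rate = still_enrolled_same_school / cohort

# Persistence counts everyone still pursuing the credential, wherever enrolled.
persistence_rate = (still_enrolled_same_school + transferred_still_pursuing) / cohort

print(f"Retention: {retention_rate:.0%}   Persistence: {persistence_rate:.0%}")
# → Retention: 78%   Persistence: 87%
```

The 90 transfer students illustrate the gap between the two metrics: they count against the institution’s retention rate but still count toward persistence, because they remain on a path to a degree.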

Higher education institutions and accreditation bodies use these terms to assess student success and institutional performance, with the goal of improving graduation rates and the overall quality of education. Both are important to quality assurance but are determined by different data.




Leveraging Stakeholder Involvement for Higher Education Quality Assurance


In the realm of higher education, quality assurance and institutional effectiveness are paramount. Internal and external stakeholder groups, including students, faculty, alumni, employers, and community members play a pivotal role in this process. Their active involvement not only ensures transparency but also significantly contributes to accreditation efforts.

It seems that nearly everyone in higher education is aware of the need for stakeholder involvement–or say they are–but very few actually use it effectively. In this post, I delve into the importance of stakeholder involvement in higher education and provide some practical advice for colleges and universities to harness it effectively.

Why Stakeholder Involvement Is Vital

Engaging stakeholders brings diverse perspectives and valuable insights to the forefront. Here’s why their involvement is critical:

Enhanced Accountability

Stakeholder involvement fosters transparency and accountability within institutions. It ensures that decisions align with the needs and expectations of those they serve.  As members of the higher education community, we often develop “tunnel vision” and become so entrenched in our everyday institutional bubble that it’s possible to lose our perspective. As a result, we sometimes don’t consider things from a lens outside of our own. That’s where stakeholder groups can be so valuable to the accountability process.

Continuous Program Improvement

Regular feedback from stakeholders helps colleges and universities identify areas for enhancement. This feedback loop leads to ongoing program improvements, benefiting students and the broader community.  To that end, institutional accreditor Southern Association of Colleges and Schools Commission on Colleges (SACSCOC) prompts university personnel to ensure that appropriate internal and external constituents and stakeholders are involved in the planning and evaluation process as part of their overall institutional planning and effectiveness model.

Accreditation Support

Accrediting bodies often require evidence of stakeholder involvement. Comprehensive records of these interactions streamline the accreditation process and bolster institutional credibility. That doesn’t mean, however, that we should just create an advisory board of some kind in name only. Nor should we hold our obligatory annual meetings for the purpose of simply checking a box and moving on. If institutions build a culture of continuous program improvement rather than a culture of compliance, they will realize just how important stakeholders can be to their regulatory success.

Initiating and Optimizing Stakeholder Involvement

Here are practical steps for college and university personnel to initiate and optimize stakeholder involvement:

Identify Your Key Stakeholders

Identify the primary internal and external stakeholders relevant to your institution, including students, part-time and full-time faculty, alumni, employers, business and industry representatives, and community organizations. Students, of course, should be viewed as the most critical stakeholder in higher education. To underscore the importance of this group, the Higher Learning Commission adopted it as Goal #1 in its Evolve 2025: Vision, Goals, and Action Steps.  It’s essential to select individuals who genuinely want to help you improve your institution. It’s also important to build a cadre of stakeholders who represent a variety of backgrounds and perspectives.

Set Clear Objectives

Determine the specific outcomes you need from your stakeholder groups. Are you seeking input on curriculum development, program evaluation, or community engagement initiatives? Having a clear purpose guides your efforts. For example, in its 2020 Guiding Principles and Standards for Business Accreditation, the Association to Advance Collegiate Schools of Business (AACSB) specifies that stakeholders should play a central role in developing and implementing a program’s strategic plan, in its scholarship, and in its quality assurance system.

Establish Communication Channels

Create multiple communication avenues with stakeholders, such as surveys, focus groups, advisory committees, and regular meetings. Ensure these channels are accessible and user-friendly. According to the Association for Biblical Higher Education (ABHE), an accreditor for faith-based institutions, maintaining effective communication and collaboration with stakeholder groups is an essential administrative function in bringing together and allocating resources to accomplish institutional goals.

Meet Regularly

Meeting with stakeholders at least once a year is crucial. Consider more frequent interactions, such as quarterly or semi-annual meetings, to maintain engagement. Establishing positive relationships takes time, and this requires seeing stakeholders more than just once per year. Some institutions invite stakeholders to a monthly virtual meeting, supported by one or two onsite meetings. To encourage attendance and keep the momentum going, consider the value of variety: Invite students to come and speak or interact with advisory board members. Don’t overdo it but try to include at least one fun icebreaker or activity in each meeting. And above all else: Whenever possible, provide food. Educators have known about this for many years, and it’s still true today: If you feed them, they will come. 

Share Data

Share relevant data and information with stakeholders, including enrollment figures, student achievement data, and institutional goals. Providing context allows stakeholders to make informed recommendations. And don’t just sugarcoat everything–be real with your stakeholders. If you can’t trust them with data that may be less than desirable, why are they on your advisory board?

Establish a Positive Environment

Foster an open and inclusive environment where stakeholders feel valued and heard. Encourage constructive feedback and respect dissenting opinions. Hopefully, each member of the stakeholder group was selected with care because of the value they bring to the conversation. Assuming that’s the case, each person should walk away from meetings feeling as though their presence and participation mattered. It’s the job of the institutional leader to ensure that happens.

Create a Documentation Framework

Keep detailed records of stakeholder interactions, including meeting agendas, minutes, recommendations, and action items. These records serve as tangible evidence for accreditation purposes. We’ve all heard the saying, “If there’s no photo, it didn’t happen!” The same thinking applies with stakeholder meetings. If there’s no detailed record, it’s really the same as a meeting never taking place. All documents should contain enough details that someone outside the institution (such as an accreditor) could review them and understand who the members are, what the group’s purpose is, how often they meet, what they do, and how the institution’s personnel act on their recommendations.  Pro tip: Create a standard template for meeting agendas and minutes, and store all documents in a secure, university-approved cloud platform in an organized manner. Never store these items on a single user’s laptop.

Using Stakeholder Involvement Effectively

Simply hosting an annual stakeholder meeting to check off a compliance box isn’t good enough. Higher education personnel must weave their input into all facets of their institutional or programmatic structure.  The importance of this is emphasized by the 2023 standards adopted by the Middle States Commission on Higher Education, where stakeholder involvement is featured in multiple standards. To maximize the benefits of stakeholder involvement, I recommend following these guidelines:

Act on Feedback

Don’t just collect feedback; act on it. Use stakeholder recommendations to drive meaningful change within your institution, demonstrating a commitment to improvement. For example, educator preparation accreditors such as the Association for Advancing Quality in Educator Preparation (AAQEP) and the Council for the Accreditation of Educator Preparation (CAEP) both have expectations for utilizing input from teacher candidates, alumni, employers, P-12 partners, and the like.

Evaluate Impact

Regularly assess the impact of changes made based on stakeholder feedback to ensure ongoing positive progress. This is an essential component to your quality assurance system and to a continuous program improvement model. Advancing academic quality and continuous improvement are at the core of accreditation, according to the Council for Higher Education Accreditation (CHEA).

Engage Diverse Voices

Ensure your stakeholder group represents a diverse range of perspectives, leading to more innovative and well-rounded solutions. The American Association of Colleges of Nursing (AACN) emphasizes the need for multiple voices to be heard in its more recent set of Core Competences for Professional Nursing Education.

Communicate Outcomes

Keep stakeholders informed about the outcomes of their input. Sharing how their feedback has shaped decisions and improvements underscores the value of their involvement. This goes back to helping all members feel valued, heard, and respected. It also renews their commitment to your organization and their role in advancing institutional goals.

Maintain an Active Feedback Loop

Continuously refine your stakeholder involvement processes based on feedback to make the collaboration more effective and efficient. In other words, the model should be organic and evolve over time as needs change. The mission, vision, and objectives of stakeholder groups should be revisited periodically in order to gain maximum benefit.


Incorporating stakeholder involvement into higher education quality assurance is not just a best practice; it’s a necessity. By actively engaging stakeholders, colleges and universities can ensure their programs remain effective, relevant, and aligned with community needs. Moreover, documenting these interactions provides valuable evidence for accreditation, further enhancing institutional credibility.


About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at: 

Top Photo Credit: Campaign Creators on Unsplash 


Tired of subs? Grow your own teachers. But do it with excellence.


Note: Updated on June 19, 2023

National Teacher Shortage

There has been a nationwide teacher shortage in math, science, English language learning, and special education for several years, and it will only get worse unless state departments of education, teacher training programs, and local school districts work together to pilot creative, out-of-the-box ideas. Gone are the days when individuals go into teaching just to “have something to fall back on” or to work the same hours as their children—teaching is a demanding profession, and the classroom can be a tough place to be. Faced with increasing demands, low pay, long hours, and little respect, teachers are leaving the profession in droves and choosing different career paths. And decreasing enrollment within schools of education confirms that many are not even considering entering the teaching field.

States’ Efforts to Fill Classrooms

California education officials recognize this critical teacher shortage, and they are committed to finding a solution. In Accelerating the Pathway to Initial Teacher Certification, I wrote about an initiative approved by the California Commission on Teacher Credentialing that focuses on growing the number of qualified mathematics teachers. At the district level, the Los Angeles Unified School District (LAUSD) is trying to shore up its supply of special education and other hard-to-find teachers through its STEP UP and Teach program. This program provides mentoring as well as financial support to qualified candidates, often those who are already employed in the district as paraprofessionals and who have strong ties to the local community.

This “grow your own” approach is similar in many ways to other nationwide efforts such as the Educator Academy, formerly known as the Kansas City Teacher Residency project. Based on the premise that teachers are best trained on-site and under the careful mentoring of experienced teachers in real-life situations, such training is certainly workforce driven. Teacher candidates must demonstrate what they know and are able to do on a daily basis. Admission requirements into programs such as the Educator Academy are strict, admitting only those candidates who demonstrate a strong propensity for long-term success as caring, effective educators. This is as it should be—we want only the very best teaching our children and our grandchildren.

Preparation Quality and Teacher Efficacy

All these pilots share some things in common, but they are still missing something: a curriculum built by the best of the best—those educators and school leaders who have been recognized as high performing. Specifically, teacher candidates should be trained by those who have been highly successful in today’s classrooms and who understand how to meet the needs of students in 2023 and beyond.

Many higher education faculty members can talk theory but have little classroom teaching experience, and much of the time their instruction falls flat. Likewise, a program built by those who haven’t seen the inside of a P-12 school in 20 years simply cannot prepare teachers for 21st Century schools. It’s just not realistic, and yet we see those programs training new teachers by the thousands in every state across our nation. As a result, we are licensing new teachers who soon come down with a case of “What have I gotten myself into?” syndrome. Those teachers leave the classroom in droves, headed for less stressful jobs, often with more pay. That’s why about half of all new teachers leave the profession within five years of obtaining their license.

Students deserve a fully qualified, caring, and competent teacher in every class. We’ve got to do a better job making sure this happens.



Soft Skills and Dispositions: Essential Traits for Exceptional Teachers

Soft Skills and Dispositions

We often read about soft skills today but may feel confused as to what the term means. Soft skills are also commonly known as dispositions. Regardless of the term you use, soft skills and dispositions are connected to our attitudes, our work habits, and our interpersonal skills.

Being an effective teacher or school leader involves much more than simply possessing a solid command of subject matter or earning a certain grade point average (GPA). It also takes more than an ability to write lesson plans, or to maintain discipline in a classroom.

Soft Skills, Dispositions Defined

Accrediting bodies such as the Council for the Accreditation of Educator Preparation (CAEP) and the Association for Advancing Quality in Educator Preparation (AAQEP) as well as the Interstate Teacher Assessment and Support Consortium (InTASC) emphasize the role that soft skills or professional dispositions play in effective teaching and school leadership.

These bodies hold schools of education accountable for identifying, selecting, and graduating individuals who indicate a propensity for success as an educator. This includes demonstrating specific soft skills or professional dispositions.

In a white paper focusing on knowledge, skills, and dispositions sponsored by the Council of Chief State School Officers (CCSSO), the Innovative Lab Network (ILN) defined dispositions as: mindsets (sometimes referred to as behaviors, capacities, or habits of mind) that are closely associated with success in college and career.

Our Soft Skills Leave an Impact on Others

Our soft skills and dispositions make a statement about who we are, what we believe, and what kind of employee we will be.

For example, being an effective teacher requires numerous skills that are essential to teaching and learning success. Not all of these skills involve subject area expertise.

When students are asked to think back to their favorite teacher—the one who made the greatest impact on their lives—they make comments like these:

  • She always made me feel as though I mattered.
  • He had a great sense of humor!
  • She could admit when she had made a mistake.
  • He was tough, but always fair. 
  • Being in Mr. ______’s class made me want to become a teacher. 
  • She was kind of like a mom to me when my life was in such chaos.
  • She always encouraged me to keep going and told me she knew I could make it. And I did. 

Comments like these are the result of teachers who made a profound impact on their students’ lives. The impact isn’t just academic, but personal.

Soft Skills & Dispositions: Our Professional “Compass”

Soft skills or dispositions stem from our beliefs, our attitudes, and our professional “compass” that steers us through life. For example:

  • Do I really care about children?
  • Am I compassionate and empathetic?
  • Am I responsible enough to arrive on time each day?
  • Do I respond promptly to phone calls or emails from parents?
  • Do I begin each day fully prepared?
  • Am I respectful of other ideas or traditions, even if they differ from my own?
  • Do I take responsibility for my own actions?
  • Do I take the high road even when no one else is looking?


Key Soft Skills for Teachers

In its research, the Innovative Lab Network pinpointed key soft skills, or dispositions, that correlate with student success and that effective teachers possess:
  • Self-Efficacy
  • Initiative
  • Integrity
  • Intellectual Curiosity
  • Adaptability
  • Study Skills
  • Time & Goal Management
  • Collaboration
  • Communication
  • Problem Solving
  • Leadership
  • Critical Thinking
  • Self-Awareness

The Role of Grit and Self-Control

Renowned psychologist and researcher Angela Duckworth identified two key characteristics that closely predict achievement across multiple professions: grit and self-control.

In essence, grit is the ability to play the long game – to remain focused and committed to meeting long-term goals. In other words:

Grit means not giving up or moving on to something else when there are challenges or bumps in the road.


Self-control is similar to self-discipline. It refers to not allowing ourselves to act on impulses and not needing instant gratification.

In many ways, grit and self-control are related. Individuals who possess these traits can remain focused on accomplishing their long-term goals and are able to cross the finish line.

We need teachers and school leaders with grit and self-control.


What School Districts Look for When Hiring Teachers

Many school principals and human resource directors are looking to hire teachers who demonstrate professional traits and behaviors such as:

  • Adaptable, confident, & organized
  • Good communicators & lifelong learners
  • Team players but also leaders
  • Imaginative, creative, & innovative
  • Committed to students & the profession
  • Able to locate engaging resources, including technology
  • Empower and inspire students
  • Successfully manage a positive online reputation
  • Able to periodically unplug from technology & social media

It’s essential to hire teachers who will make a long-term positive impact on the achievement, success, and lives of our students. Accordingly, building principals need to provide teachers with professional development support and mentoring at all career phases to foster their soft skills.





Top Graphic Credit: Adam Winger on Unsplash


Consultants Aren’t Necessary. Until They Are.

  • “We really didn’t think we needed a consultant.”
  • “We thought we could handle it in-house.”
  • “We just didn’t have the money to pay for a consultant.” 
  • “We’re a small institution. Surely ____ will take that into consideration during the site visit.” 

I’d venture a guess that very few higher education institutions build external consulting fees into their annual budgets. Administrators make sure all the essentials are covered, such as hiring faculty and staff, facility and grounds maintenance, advertising, travel, IT infrastructure, legal fees, and the like. But hardly any ever plan for needing to hire a consultant to help with compliance and accreditation matters. 

That’s because higher education administrators never think they need outside guidance. Until they realize that they do. 

And many times, they come to this realization very late in the accreditation game. I’ve received calls from frantic department chairs, deans, and presidents whose anxiety you could practically feel through the phone.

They thought they had things under control, and then something happened that threw their plans out of orbit. Over the years, I’ve been brought in when a key faculty member, assessment coordinator, or department chair has taken a job with another university. I’ve also been called when an institution’s dean had been incompetent for many years and executive leaders allowed him to stay in that position. Those leaders thought the path of least resistance was to stay the course, and it worked for a while with others providing cover. But then they discovered by accident that the institution was scheduled for a national accreditation site review in a few months.

I’ve also been called on to help when the horse has already left the stall – when an institution has actually lost its accreditation and, by default, its state program approval. The institution had students enrolled in multiple programs but was unable to recommend them for state licensure because it was no longer authorized to do so.

As one might imagine, those situations are messy. They are uncomfortable. But they are exactly when an experienced consultant is well worth the fee. Of course, no consultant can ever guarantee a positive end result–that’s impossible–but someone with the right skill set and expertise can get an institution back on solid footing and headed in the right direction.

CHEA fellow Rachel Smith recently penned a thoughtful piece that presents the benefits and drawbacks of hiring independent contractors. She also offers some alternatives for higher education administrators to consider if for whatever reason a paid consultant just isn’t feasible. It’s a useful guide to keep handy. 

In this ever-changing landscape of state, regional, and national regulations, it can be a comfort to know that when the chips are down and the stakes are high, an experienced consultant’s fees can be money well spent. 




Top Graphic Credit:  Dan Dimmock on Unsplash

Creating a Quality Assurance System

Quality Assurance

A well-conceived, functioning quality assurance system (QAS) helps colleges and universities continuously improve their programs through data-informed decision-making. If the institution is a ship, the QAS is its steering wheel, directing the ship’s path as it moves through the water on its journey. And while a software program is often used to store, track, and analyze data, software alone is not a quality assurance system. But what exactly does a QAS consist of? And specifically, how can a quality assurance system be effectively implemented in order to facilitate continuous program improvement?

What is a Quality Assurance System?

A quality assurance system (QAS) in higher education typically involves a set of processes, policies, and procedures that are put in place to ensure that programs and services meet or exceed established quality standards. It includes a range of activities designed to monitor, evaluate, and improve the quality of academic programs, student services, and administrative functions.

To effectively implement a QAS, it’s important to start by identifying the key components that you will need. These components typically include the following:

Components of a Solid Quality Assurance System

Purpose: It may sound simplistic, but when developing a quality assurance system, higher education institutions need to take the time to articulate its purpose. Consider starting with a clear vision and mission statement that define the purpose and direction of the institution and its programs. The more well-defined an institution’s (or a program’s) vision and mission are, the easier it is to create a solid QAS.

Quality standards: Clearly defined quality standards should be established for all aspects of the program, including teaching and learning, assessment, student services, and administrative processes. These standards should be based on industry expectations as well as best practices. They should also be measurable, so that you can align specific key assessments with them in order to gauge the effectiveness of your programs. Using relevant standards serves as a foundation for building learning outcomes that specify what students should know and be able to do upon completion of their program. From there, a natural progression is to create a curriculum map that aligns courses and activities with the learning outcomes and shows how they are assessed.
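To make the idea of a curriculum map concrete, here is a minimal sketch of one as a simple data structure, along with a check for outcomes that no course yet assesses. All course names, outcomes, and assessments below are invented for illustration; a real map would use your program’s own outcomes and key assessments.

```python
# Hypothetical curriculum map: each course lists the program learning
# outcomes it addresses and the assessments used to measure them.
curriculum_map = {
    "EDU 301 Classroom Management": {
        "outcomes": ["LO1: Create positive learning environments"],
        "assessments": ["Management plan project", "Field observation rubric"],
    },
    "EDU 410 Assessment Literacy": {
        "outcomes": ["LO3: Design and interpret assessments"],
        "assessments": ["Unit assessment portfolio"],
    },
}

def outcomes_without_coverage(curriculum_map, all_outcomes):
    """Return program outcomes not yet assessed anywhere in the map."""
    covered = {o for course in curriculum_map.values() for o in course["outcomes"]}
    return [o for o in all_outcomes if o not in covered]
```

Even a lightweight check like this surfaces gaps early: if an outcome appears in your program documents but never in the map, there is no evidence anyone is assessing it.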

Assessment and evaluation: A comprehensive assessment and evaluation process should be established to measure program outcomes against your standards. This should include both internal and external assessments and evaluations. External assessments are also known as proprietary assessments, which are created by an assessment development company. These are often required for licensure-based programs. The nice thing about proprietary assessments is that they’re already standardized and have been closely examined for quality indicators such as content validity and reliability. If you opt to use internally created assessments, you must do this legwork yourself.

Data collection and analysis: A robust data collection and analysis system should be put in place to capture relevant information related to program quality. This system should be designed to generate regular reports that can be used for monitoring, evaluation, and decision-making. A well-defined data analysis plan describes how the data will be interpreted, compared, and reported. Most institutions handle data collection and analysis on an annual basis, but it can also be done at the end of each semester. I recommend creating a master cadence or a master periodicity chart to track all key assessments, how they are used, when they’re administered, when and by whom the data are collected, when and by whom the data are reviewed and analyzed, and other relevant information. Keep this chart up to date and handy in preparation for regulatory reviews such as state program approval renewals and accreditation site visits.
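The master cadence or periodicity chart described above can live in a spreadsheet, but the sketch below shows the same idea as structured records with a simple lookup for what is due in a given month. The assessment names, roles, and review months are all invented for illustration.

```python
# Hypothetical master periodicity chart: one record per key assessment,
# tracking when it is administered and who collects and reviews the data.
periodicity_chart = [
    {"assessment": "Dispositions survey", "administered": "Fall, Spring",
     "collected_by": "Program coordinator",
     "reviewed_by": "Assessment committee", "review_month": 6},
    {"assessment": "Licensure exam pass rates", "administered": "Annually",
     "collected_by": "Dean's office",
     "reviewed_by": "Faculty senate", "review_month": 9},
]

def reviews_due(chart, month):
    """List assessments whose data review falls in the given month (1-12)."""
    return [row["assessment"] for row in chart if row["review_month"] == month]
```

Keeping the chart in a queryable form like this makes it easy to answer an accreditor’s question about who reviews which data, and when, without digging through old emails.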

Communication and collaboration: Effective communication and collaboration among both internal and external stakeholders are essential to the success of a QAS. This includes regular reporting and feedback loops to ensure that all stakeholders are informed and engaged in the process. The feedback loop also provides a formal mechanism for stakeholders to make recommendations for improvement. Examples of internal stakeholders are faculty, administrators, and interdepartmental staff. Examples of external stakeholders include business and industry representatives, school district teachers and administrators, and faith-based staff (if applicable).

Dynamic, not static:  A QAS is not a one-time project or a static document. It is a dynamic system that evolves with the changing needs and demands of the institution and its programs. For this reason, I recommend that institutions revisit their QAS annually, with a comprehensive review taking place at least once every five years.

Continuous improvement: In order for a quality assurance system to truly be effective, a culture of continuous improvement should be fostered within the institution, with a focus on using data and feedback to identify areas for improvement and make necessary changes. If this is presented from the perspective of supporting everyone’s efforts at providing exceptional student learning experiences, most faculty and staff will receive it well and embrace the model. However, if the QAS is presented simply as an accreditation requirement, then in nearly all instances personnel will view it as “yet another thing they have to do” in order to “check the boxes” and “get through” the next accreditation site visit. As is the case with most initiatives, how we communicate something to others makes a huge difference in its success.

Ensuring High Quality, Continuous Improvement

Successfully implementing a QAS requires a commitment from leadership and a willingness to invest time, resources, and effort into the process. It also necessitates an action plan that outlines the steps and resources needed to implement the recommendations and monitor their impact.

A well-conceived, functioning quality assurance system can help colleges and universities ensure that their programs are of high quality and continuously improving over time. It facilitates the accountability and transparency of the institution and its programs and demonstrates their effectiveness and impact.

By providing a framework for data-informed decision-making, a QAS can help institutions make evidence-based decisions that lead to better outcomes for students and the broader community—which is our collective mission.







Accreditation Stress: It’s Real.

Accreditation Stress

Author’s Note: Updated from a previous publication. 

We can all agree: accreditation is necessary, but the stress that goes along with it is something higher education officials would love to do without.

Each accrediting body has its own standards and quality indicators, as well as its own policies and procedures, which can vary widely. However, one thing that’s common across every accrediting body is a site visit, where a review team spends a few days on campus (or virtually) conducting interviews, verifying information, and making recommendations regarding how well the institution measures up to standards.

Regardless of the accrediting body, the site visit is both expensive and exhausting. With very few exceptions, faculty, staff, and administrators shout for joy when they see a site review team leave campus and head for the airport.

Accreditation Stress is Real.

In many instances, staff involved in the accreditation process focus so much on preparing for the site visit that they aren’t ready for the emotional or physical toll it can take on them. Moreover, the stress usually doesn’t end when the site review team leaves. My experience in accreditation over the past 10 years has confirmed there’s a need for this kind of information, and yet it’s a topic I’ve never seen addressed at conferences or in professional literature.

Accreditation-related stress and anxiety are real. You might be able to function, and you may be able to hide it from others. But, how do you know if it’s starting to get the best of you? And what can you do about it?

Red Flag Alert: Some Signs the Stress is Negatively Impacting Your Life

You’re surviving, but you’re not thriving. You may be making it through each day, but the quality of your life is suffering. You’re not enjoying the things that used to make you happy. You feel guilty about taking the time to watch a sunset or to read a book. Every waking moment is spent thinking about the site visit.

Those lights in your brain just won’t shut off. You can’t sleep, even though you feel exhausted. You’re worn out physically and mentally, but you can’t allow yourself to take even a few hours off to rest.

You’re numb inside. You have no appetite and aren’t eating. You’ve even managed to shut down your emotions. It’s like you’ve gone on auto-pilot and feel like a robot.

You feel empty, like there’s a gaping hole inside. But even though the emptiness isn’t from hunger, you binge eat everything in sight. And then you still look around for more, because that huge gaping hole just can’t seem to be filled.

You become obsessed with every detail, no matter how minute it may seem. It’s those little foxes that spoil the vine. You’re determined that you’re going to make sure NOTHING is overlooked.  

You come to believe that you are ultimately responsible for the success of the site review. If you’re honest with yourself, you don’t think others are as committed to success as you are. The little voice inside you says, “If you want something done right, you’ve got to do it yourself!”

You start to resent others who don’t seem as stressed out as you are. While you hate feeling like you have the weight of the world on your shoulders, you refuse to delegate responsibility to others and then you get mad when you hear that they went to a movie or a concert over the weekend.

Drink the Stress Away: You may hear yourself saying, “I just need to take the edge off” or “I just need to relax for a while.” Having one glass of Chardonnay is one thing, but knocking back five tequila shots in 30 minutes is another.

Ups and Downs: You may self-medicate by taking a pill or two to help you sleep because even though you’re exhausted, you’re wired due to all the stress.

Caffeine overload: You may guzzle coffee, soda, or Red Bull throughout the day (or night) because, “I’ve got to keep going for just a little while longer.”

Shop ‘til Your Fingers Drop: On a whim you may go on a shopping spree and spend a ton of money on things you probably didn’t really need. Not at a brick-and-mortar store or mall—that would be far too self-indulgent. Instead, you likely visited Zappos or Amazon, where you could remain close to your computer and be right there to respond to an urgent email should one land in your Inbox.

Keep Setting the Bar Higher: You set impossible standards for yourself to meet and then criticize yourself endlessly when you don’t meet them. It’s like you’re obsessed with proving something to others—and to yourself. Except that you’re never satisfied with your performance, even when you do things well.

Slay the Dragon: You plan things down to each minute detail, leaving no stone unturned. You review things in your mind, over and over again. Sometimes you obsess about forgetting something. You’re determined to emerge victorious, regardless of the personal cost.

Accreditation Stress: The Gift that Keeps on Giving

Think the stress of getting ready for a site visit only affects you? Think again. If you have close friends, a life partner, or children, they are affected as well. It’s possible that your furry buddies at home can even detect your anxiety. You’ll know if your stress is out of balance if you hear a loved one say, “I miss you!” “I HATE your job!” or “Will this ever end?”


Moving from Surviving to Thriving: How to Manage Your Stress in a Healthy Way

Even Superman struggled at times with Kryptonite. However, he found ways to adapt and overcome those challenges, and so can you. While an accreditation site visit always leads to a certain level of stress, there are things you can do to minimize the anxiety. For example:

Prepare ahead of time: It may sound simplistic, but getting a jumpstart on the process reduces a lot of stress. If you don’t start on the process until 6 or 8 months before the site visit, you are putting yourself squarely in the crosshairs of some serious stress and anxiety.

Ideally, quality assurance should be an integral part of every program. There really shouldn’t be any significant scrambling or looking for data. Your institution should already be reviewing, analyzing, looking for trends, and making data-driven decisions to improve programs on a continual basis. You should plan on starting your self-study report (SSR) no later than 18 months prior to a scheduled site visit. The more you delay this timetable, the higher your stress level will be. Guaranteed.

Hire a consultant: Let’s face it–not everyone has a lot of expertise when it comes to writing self-study reports, gathering evidence, and preparing for site visits. In many institutions, departments are understaffed and staff often wear multiple hats. Most institutions don’t deal with accreditation matters on a regular basis, so few staff have a high level of confidence in that area.

In some schools, new faculty coordinate a site visit because more seasoned faculty refuse to do it. This is wrong on so many levels, and yet it’s a frequent occurrence. An experienced consultant could provide the kind of guidance and support that may be needed. The institution doesn’t incur the expense of paying for someone’s full-time salary, benefits, or office space. In this age of budget cuts, hiring an independent contractor can actually save money.

Provide faculty/staff training: Letting others know what to expect and getting them on board early on will greatly reduce anxiety for everyone. Plan a kickoff event, and then schedule periodic retreats/advances. Create a solid communication protocol and stick with it. When team members are fully informed and are active contributors to the process, the stress is reduced for everyone.

Delegate to others as much as possible: It’s important to have a project manager/coordinator for every major project, and that includes accreditation site visits. However, that does NOT mean that this one person needs to take on the bulk of the responsibility—quite the contrary. Instead, that person should serve as a “conduit” who facilitates the flow of information between internal and external stakeholders. That person should also play the primary role in delegating tasks to appropriate personnel. He or she maintains a schedule so that tasks are completed on time.

It’s OK to talk about it: Know that a certain amount of stress and anxiety are normal reactions to accreditation site visit preparation, but it doesn’t have to be overwhelming. Don’t be afraid to talk with your colleagues and leadership about your stress level. It’s entirely possible that others share your feelings—it might be helpful to start a small informal support group. Getting together one day a week for lunch works wonders.

Be upfront with your friends and loved ones: Prepare family and friends ahead of time. Help them to know what to expect. Include them in the celebration once it’s over. Your children, significant other, and close friends may not be writing the self-study report or creating pieces of evidence, but your support system still plays an important role behind the scenes of the site review process.

Be kind to yourself: This may sound silly but it’s really important. Purposely build one nice thing into your personal calendar each day. It may be taking a walk, working out, or reading for pleasure for 30 minutes. Regardless of what you choose, it’s crucial that you make this a part of your schedule.

Be ready when it’s over:  You may find that you can hold yourself together from start to finish, but then after the site review team packs up and leaves your institution you have a feeling of not quite knowing what to do with yourself. What you’ve focused all your energy on for 18 months is suddenly over. This can result in your emotions taking a deep dive—and it can last for several weeks.

You can greatly reduce this by planning a combination of fun activities and work activities for the four weeks after the site visit. You’ve been functioning within a very structured paradigm for several months, and suddenly having nothing to do will likely lead to additional anxiety, so it’s best to transition back slowly.

The bottom line is that while accreditation stress is definitely real, it doesn’t have to get the best of you or your team.



About the Author: A former public school teacher and college administrator, Dr. Roberta Ross-Fisher provides consultative support to colleges and universities in quality assurance, accreditation, educator preparation and competency-based education. Specialty: Council for the Accreditation of Educator Preparation (CAEP).  She can be reached at:


Top Graphic Credit:  Luis Villasmil on Unsplash




Using Data to Tell Your Story


With a few exceptions, staff from all colleges and universities use data to complete regulatory compliance and accreditation work on a regular basis. Much of the time these tasks are routine and, some might say, mundane. Once programs are approved, staff typically only need to submit an annual report to a state department of education or accrediting body unless the institution wants to make major changes such as adding new programs or a satellite campus, changing to a different educational model, and so on.

And then, typically every 7-10 years, a program or institution must reaffirm its program approval or accreditation. That process is much more complex than the work completed on an annual basis.

Regardless of whether an institution is simply completing its annual work or reaffirming its accreditation, all strategic decisions must be informed or guided by data. Many institutions seem to struggle in this area, but there are some helpful practices based on my experiences over the years:

Tips for Using Data to Tell Your Story

  • Know exactly what question(s) you are expecting to answer from your assessment data or other pieces of evidence. If you don’t know the question(s), how can you be sure you are providing the information accreditors are looking for?
  • Be selective when it comes to which assessments you will use. Choose a set of key assessments that will inform your decision making over time, and then make strategic decisions based on data trend lines. In other words, avoid the “kitchen sink” approach when it comes to assessments and pieces of evidence in general. Less is more, as long as you choose your sources carefully.
  • Make sure the assessments you use for accreditation purposes are of high quality. If they are proprietary instruments, that’s a plus because the legwork of determining elements such as validity and reliability has already been done for you. If you have created one or more instruments in-house, you must ensure their quality in order to yield accurate, consistent results over time. I talked about validity and reliability in previous articles. If you don’t make sure you are using high-quality assessments, you can’t draw conclusions from their data with any confidence. As a result, you can’t really use those instruments as part of your continuous program improvement process.
  • Take the time to analyze your data and try to “wring out” all those little nuggets of information they can provide. At a minimum, be sure to provide basic statistical information (e.g., N, mean, median, mode, standard deviation, range). What story are those data trying to tell you within the context of one or more regulatory standards?
  • Present the data in different ways. For example, disaggregate it per program or per satellite campus as well as aggregate it for the program or institution as a whole.
  • Include charts and graphs that will help explain the data visually. For example, portraying data trends through line graphs or bar graphs can be helpful for comparing a program’s licensure exam performance against counterparts from across the state, or satellite campuses with the main campus.
  • Write a narrative that “tells a story” based on key assessment data. Use these data as supporting pieces of evidence in a self-study report. Narratives should fully answer what’s being asked in a standard, but they should be written clearly and concisely. In other words, provide enough information, but don’t provide more than what’s being asked for.
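As a practical illustration of the statistics tip above, the basic descriptive measures (N, mean, median, mode, standard deviation, range) can be computed with Python’s standard library alone; the scores below are hypothetical:

```python
# Basic descriptive statistics for a set of assessment scores,
# using only Python's standard library (scores are hypothetical).
import statistics

scores = [78, 85, 85, 90, 92, 88, 76, 85, 91, 80]

print("N:", len(scores))
print("Mean:", statistics.mean(scores))
print("Median:", statistics.median(scores))
print("Mode:", statistics.mode(scores))
print("Std dev:", round(statistics.stdev(scores), 2))  # sample standard deviation
print("Range:", max(scores) - min(scores))
```

From here, the same numbers can feed the charts, disaggregations, and narrative described in the remaining tips.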

Let’s face it: Compliance and accreditation work can be tricky and quite complex. But using data from high-quality assessments can be incredibly helpful in “telling your story” to state agencies and accrediting bodies.



Top Photo Credit: Markus Winkler on Unsplash 

Telling Your Story through a Self-Study Report


Educator preparation providers (EPPs) seeking programmatic accreditation through the Council for the Accreditation of Educator Preparation (CAEP) must build a case for how their programs comply with specific standards. This is done through a combination of written narratives and supporting pieces of evidence. In most cases, these evidence artifacts are anchored by key assessment data, meeting minutes, examples of acting on input from stakeholders, continuous program improvement initiatives, and the like. All the narrative responses and pieces of evidence are assembled into a Self-Study Report (SSR).

Writing the Self-Study Report: A Daunting Process

Writing the self-study report can feel daunting and at times, overwhelming. EPPs must be able to communicate clearly and concisely how they comply with all aspects of each standard. Writers must be able to think and write from both a macro perspective (making connections across programs) as well as a micro perspective (focusing on specific aspects of each program).

Some faculty members liken writing the SSR to the stress they recall when writing their doctoral dissertation. Others think of it as trying to catch a tiger by its tail. Still others try not to think of it at all and fervently hope they won’t see their name listed as part of a CAEP writing team. Without question, this is a challenging process. However, it’s very doable and manageable, particularly if we start to look at the SSR a little differently.

The Role of Testing

No matter what state or specific licensure program is involved, EPPs must assess their teacher candidates multiple times throughout their program. When we test our candidates, we are able to get a glimpse of what they know and are able to do against a specific set of criteria at a particular moment in time. However, we know that no single assessment can provide us with the kind of information we need to make judgments about the quality of our programs. To accomplish this, we must administer a suite of high-quality assessments throughout the program. Based on our review of those test results, we can start to gain insight about specific trends, patterns, strengths, and weaknesses.

If we look at these data separately, they just don’t tell us that much. It’s when we put them all together like pieces of a jigsaw puzzle that we start to build a portrait of each program and eventually, we can draw conclusions about the EPP as a whole.

Seeing the Self-Study Report as a Portfolio

In their article focusing on student digital portfolios, co-authors Ronnie Burt and Kathleen Morris quoted international researcher and consultant Dr. Helen Barrett:

“Testing gives you a snapshot. Portfolios give you a movie.”

This is so true, and I think Barrett’s quote fits very well within the context of Self-Study Reports for accreditation. When it’s fully assembled, the SSR can be used as a showcase for institutions to “put their best foot forward” and highlight their successes. More than a showcase, though, the SSR is really like a portfolio that tracks progress over time. While not exactly the same, the two actually share a lot in common:

  1. Purpose: Both a self-study report and a portfolio are designed to demonstrate an individual or organization’s knowledge, skills, and abilities in a particular area. A self-study report aims to demonstrate how an institution meets the standards set by the accrediting body, while a portfolio showcases an individual’s achievements, skills, and experiences.
  2. Evidence: Both require the collection and presentation of evidence to support claims. In a self-study report, evidence may include data, surveys, and other documentation that demonstrates how an institution meets the accreditation standards. In a portfolio, evidence may include work samples, certificates, awards, and other materials that showcase an individual’s accomplishments.
  3. Organization: Both a self-study report and a portfolio require careful organization to present evidence effectively. An SSR written for CAEP, for example, follows a specific structure over five standards, while a portfolio may be organized according to the individual’s goals and accomplishments.
  4. Reflection: Both an SSR and a portfolio require reflection on the evidence presented. In a self-study report, reflection may involve analyzing data and identifying areas for improvement. In a portfolio, reflection may involve assessing strengths and weaknesses and identifying areas for growth.
  5. Evaluation: Both a self-study report and a portfolio require evaluation by others. CAEP site team reviewers evaluate an EPP’s compliance with the components of the five standards, while a portfolio may be reviewed by an employer, a mentor, or a peer. In both cases, the evaluation provides feedback and helps the individual or organization improve their work.

As I’ve outlined above, writing a self-study report for accreditation and creating a portfolio share similarities in their purpose, evidence collection, organization, reflection, and evaluation. Both require a thoughtful approach to presenting evidence and reflecting on accomplishments and areas for growth.

If we can start to view the self-study report a little differently and approach it more from a portfolio mindset, I think the stress level will start to diminish and the overall quality of narratives and pieces of evidence will begin to improve. Rather than submit a dry and often disjointed self-study report, we can produce a rich, substantive body of work that presents a powerful story about our programs.



Top Photo Credit:  Carlos Muza on Unsplash 


Inter-rater Reliability Ensures Consistency


In a previous article, we focused on determining content validity using the Lawshe method when gauging the quality of an assessment that’s been developed “in-house.” As a reminder, content validity pertains to how well each item measures what it’s intended to measure and the Lawshe method determines the extent to which each item is necessary and appropriate for the intended group of test takers. In this piece, we’ll zero in on inter-rater reliability.

Internally Created Assessments Often Lack Quality Control

Many colleges and universities use a combination of assessments to measure their success. This is particularly true when it comes to accreditation and the process of continuous program improvement. Some of these assessments are proprietary, meaning that they were created externally—typically by a state department of education or an assessment development company. Other assessments are internally created, meaning that they were created by faculty and staff inside the institution. Proprietary assessments have been tested for quality control relative to quality indicators such as validity and reliability. However, it’s far less common for institutional staff to confirm these elements in the assessments that are created in-house. In many cases, a department head determines they need an additional data source and so they tap the shoulder of faculty members to quickly create something they think will suffice. After a quick review, the instrument is approved and goes “live” without piloting or additional quality control checks.

Skipping these important quality control methods can wreak havoc later on, when an institution attempts to pull data and use it for accreditation or other regulatory purposes. Just as a car will only run well when its tank is filled with the right kind of fuel, data are only as good as the assessment itself. Without reliable data that will yield consistent results over multiple administrations, it’s nearly impossible to draw conclusions and make programmatic decisions with confidence.

Inter-rater Reliability

One quality indicator that’s often overlooked is inter-rater reliability. In a nutshell, this is a fancy way of saying that an assessment will yield consistent results over multiple administrations by multiple evaluators. We most often see this used in conjunction with a performance-based assessment such as a rubric, where faculty or clinical supervisors go into the field to observe and evaluate the performance of a teacher candidate, a nursing student, counseling student, and so on. A rubric could also be used to evaluate a student’s professional dispositions at key intervals in a program, course projects, and the like.

In most instances, a program is large enough to have more than one clinical supervisor or faculty member in a given course who observes and evaluates student performance. When that happens, it’s extremely important that each evaluator rates student performance through a common lens. If, for example, one evaluator rates student performance quite high or quite low in key areas, it can skew data dramatically. Not only is this grading inconsistency unfair to students, but it’s also highly problematic for institutions that are trying to make data-informed decisions as part of their continuous program improvement model. Thus, we must determine inter-rater reliability.


Using Percent Paired Agreement to Determine Inter-rater Reliability

One common way to determine inter-rater reliability is through the percent paired agreement method. It’s actually the simplest way to say with confidence that supervisors or faculty members who evaluate student performance based on the same instrument will rate them similarly and consistently over time. Here are the basic steps involved in determining inter-rater reliability using the percent paired agreement method:

Define the behavior or performance to be assessed: The first step is to define precisely what behavior or performance is to be assessed. For example, if the assessment is of a student’s writing ability, assessors must agree on what aspects of writing to evaluate, such as grammar, structure, and coherence, as well as whether specific emphasis or weight should be given to particular criteria categories. This is often already decided when the rubric is being created.

Select the raters: Next, select the clinical supervisors or faculty members who will assess the behavior or performance. It is important to choose evaluators who are trained in the assessment process and who have sufficient knowledge and experience to assess the behavior or performance accurately. Having two raters for each item is ideal—hence the name paired agreement.

Assign samples to each rater for review: Assign a sample of rubrics to each evaluator for independent evaluation. The sample size should be large enough to yield meaningful results; for example, if there are 100 students in a given class, pulling work samples from 10% of them (10 students) may be sufficient for this exercise. The samples should either be random, or representative of all levels of performance (high, medium, low).

Compare results: Compare the results of each evaluator’s ratings of the same performance indicators using a simple coding system. For each item where raters agree, code it with a 1. For each item where raters disagree, code it with a 0. This is called exact paired agreement, which I recommend over adjacent paired agreement (where ratings within one scale point of each other also count as agreement). In my opinion, the more precise we can be the better.

Calculate the inter-rater reliability score: Divide the number of agreements between the two raters by the total number of items, then multiply by 100 to express it as a percentage. For example, if two raters independently score 10 items and agree on 8 of them, their inter-rater reliability is 80%. This means the two raters were consistent in their scoring 80% of the time; a high score indicates a high level of agreement between the raters, while a low score indicates a low level of agreement.
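The calculation above can be sketched in a few lines of Python; the rubric scores below are hypothetical:

```python
# Exact percent paired agreement between two raters (a minimal sketch;
# the rubric scores below are hypothetical).
def percent_agreement(rater_a, rater_b):
    """Return the exact paired agreement between two raters as a percentage."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same number of items")
    # Code each item 1 if the raters agree exactly, 0 if they disagree.
    codes = [1 if a == b else 0 for a, b in zip(rater_a, rater_b)]
    return 100 * sum(codes) / len(codes)

# Two raters independently score the same 10 rubric items on a 1-4 scale.
rater_a = [3, 4, 2, 4, 3, 3, 4, 2, 3, 4]
rater_b = [3, 4, 2, 3, 3, 3, 4, 2, 2, 4]

print(percent_agreement(rater_a, rater_b))  # 80.0
```

With the eight agreements coded as 1s, the function returns 80%, matching the worked example above.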

Interpret the results: Finally, interpret the results to determine whether the assessment is reliable within the context of paired agreement. Of course, 100% is optimal, but the goal should be to achieve a paired agreement of 80% or higher for each item. If the inter-rater reliability score is high, it indicates that the data harvested from that assessment are likely to be reliable and consistent over multiple administrations. If the score is low, it suggests that those items on the assessment need to be revised, or that additional evaluator training is necessary to ensure greater consistency.

Like determining content validity using Lawshe, the percent paired agreement method in determining inter-rater reliability is straightforward and practical. By following these steps, higher education faculty and staff can use the data from internally created assessments with confidence as part of their continuous program improvement efforts.




Content Validity: One Indicator of Assessment Quality


Updated on April 13, 2023 to include additional CVR calculation options from Dr. Gideon Weinstein. Used with permission. 

In this piece, we will focus on one important indicator of assessment quality: Content Validity.

Proprietary vs. Internal Assessments

As part of their programmatic or institutional effectiveness plan, many colleges and universities use a combination of assessments to measure their success. Some of these assessments are proprietary, meaning that they were created externally—typically by a state department of education or an assessment development company. Other assessments are considered to be internal, meaning that they were created by faculty and staff inside the institution. Proprietary assessments have been tested for quality control relative to validity and reliability. In other words:

  • At face value, does the assessment measure what it’s intended to measure? (Validity)
  • Will the results of the assessment be consistent over multiple administrations? (Reliability)

Unfortunately, most colleges and universities fail to confirm these elements in the assessments that they create. This often produces less reliable results, and thus the data are far less usable than they could be. It’s much better to take the time to develop assessments carefully and thoughtfully to ensure their quality. This includes checking them for content validity. One common way to determine content validity is through the Lawshe method.

Using the Lawshe Method to Determine Content Validity

The Lawshe method is a widely used approach to determine content validity. To use this method, you need a panel of experts who are knowledgeable about the content you are assessing. Here are the basic steps involved in determining content validity using the Lawshe method:

  • Determine the panel of experts: Identify a group of experts who are knowledgeable about the content you are assessing. The experts should have relevant expertise and experience to provide informed judgments about the items or questions in your assessment. Depending on the focus of the assessment, this could be faculty who teach specific content, or external subject matter experts (SMEs) such as P-12 school partners, healthcare providers, business specialists, IT specialists, and so on.
  • Define the content domain: Clearly define the content domain of your assessment. This could be a set of skills, knowledge, or abilities that you want to measure. In other words, you would identify specific observable or measurable competencies, behaviors, attitudes, and so on that will eventually become questions on the assessment. If these are not clearly defined, the entire assessment will be negatively impacted.
  • Generate a list of items: Create a list of items or questions that you want to include in your assessment. This list should be comprehensive and cover all aspects of the content domain you are assessing. It’s important to make sure you cover all the competencies, behaviors, and attitudes you listed in step 2 above.
  • Have experts rate the items: Provide the list of items to the panel of experts and ask them to rate each item for its relevance to the content domain you defined in step 2. The experts should use a rating scale (e.g., 1-5) to indicate the relevance of each item. So, if it’s an assessment to be used with teacher candidates, your experts would likely be P-12 teachers, principals, educator preparation faculty members, and the like.
  • Calculate the Content Validity Ratio (CVR): The CVR is a statistical measure that determines the extent to which the items in your assessment are relevant to the content domain. To calculate the CVR, use the formula: CVR = (ne – N/2) / (N/2), where ne is the number of experts who rated the item as essential, and N is the total number of experts. The CVR ranges from -1 to 1, with higher values indicating greater content validity. Note to those who may have a math allergy: At first glance, this may seem complicated but in reality it is truly quite easy to calculate.
  • Determine the acceptable CVR: Determine the acceptable CVR based on the number of experts in your panel. There is no universally accepted CVR value, but the closer the CVR is to 1, the higher the overall content validity of a test. A good rule of thumb is to aim for a CVR of .80.
  • Eliminate or revise low CVR items: Items with a CVR below the acceptable threshold should be eliminated or revised to improve their relevance to the content domain. Items with a CVR above the acceptable threshold are considered to have content validity.

As an alternative to the steps outlined above, the CVR computation with a 0.80 rule of thumb for quality can be replaced with another method, according to Dr. Gideon Weinstein, mathematics expert and experienced educator. His suggestion: simply compute the percentage of experts who consider the item to be essential (ne/N), with a rule of thumb of 90%. Weinstein went on to explain that “50% is the same as CVR = 0, with 100% and 0% scoring +1 and -1. Unless there is a compelling reason that makes a -1 to 1 scale a necessity, then it is easier to say ‘seek 90% and anything below 50% is bad.’”
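Both the original CVR formula and the percent-essential alternative are easy to compute. Here is a minimal Python sketch, using hypothetical expert counts:

```python
# Lawshe's content validity ratio (CVR) and the simpler percent-essential
# alternative (a minimal sketch; the expert counts are hypothetical).
def cvr(n_essential, n_experts):
    """CVR = (ne - N/2) / (N/2); ranges from -1 to 1."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def percent_essential(n_essential, n_experts):
    """Alternative: the share of experts who rate the item as essential."""
    return n_essential / n_experts

# Suppose 9 of 10 experts rate an item as essential:
print(cvr(9, 10))                # 0.8 -> meets the .80 rule of thumb
print(percent_essential(9, 10))  # 0.9 -> meets the 90% rule of thumb
```

Either function can be applied item by item across the full assessment, flagging any item that falls below the chosen threshold for revision or elimination.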

Use Results with Confidence

By using the Lawshe method for content validity, college faculty and staff can ensure that the items in their internally created assessments measure what they are intended to measure. When coupled with other quality indicators such as interrater reliability, assessment data can be analyzed and interpreted with much greater confidence and thus can contribute to continuous program improvement in a much deeper way.




Top Photo Credit: Unseen Studio on Unsplash 


The Path to Academic Quality


All colleges and universities want their students to succeed. That requires offering current, relevant, and robust programs. But what is the path to academic quality? How can faculty and administrators know for sure that what they are offering is meeting the needs of students? Here are five recommendations:

Embrace Technology: With the advancement of technology, universities can adopt innovative approaches for data collection, analysis, and program evaluation. For instance, leveraging artificial intelligence to analyze large data sets can help identify patterns and trends that may not be apparent with traditional methods. This can lead to more insightful recommendations for program improvement.

Create a Culture of Continuous Improvement: The culture of the university should be centered around continuous improvement. All stakeholders should be encouraged to provide feedback on the programs regularly. This feedback should be analyzed and acted upon to ensure that the programs are up-to-date, relevant, and of high quality.

Involve All Stakeholders: All stakeholders, including faculty, students, alumni, industry professionals, and accrediting agencies, should be involved in the quality assurance process. Each of these groups can offer unique perspectives and insights that can help improve the academic programs.

Develop Key Performance Indicators (KPIs): KPIs are essential metrics used to measure the success of an academic program. These metrics can include student outcomes, faculty satisfaction, and retention rates. Universities can leverage KPIs to monitor and improve the quality of their programs continually.

Invest in Faculty Development: Faculty members play a crucial role in program quality. Therefore, universities should invest in their professional development to ensure they are equipped with the latest knowledge and skills to deliver quality instruction. By providing faculty with ongoing professional development opportunities, universities can enhance program quality and ensure that students receive a high-quality education.

By having a comprehensive quality assurance system, colleges and universities can be assured that they are on the right path to academic quality.



Top Photo Credit: Nathan Dumlao on Unsplash 

Exceptional Academic Programs


If you look at their mission statements, nearly all colleges and universities strive to serve students and support their success through innovation and high-quality course offerings. But how do they know what they offer is truly exceptional? Here are five innovative tips for how colleges and universities can ensure that they have outstanding academic programs through their quality assurance system:

Focus on student learning outcomes: The ultimate goal of any academic program is to help students learn and grow. To ensure that your programs are meeting this goal, it’s important to focus on student learning outcomes. This means regularly collecting data on student learning, such as grades, test scores, and surveys of student satisfaction. You can then use this data to identify areas where your programs are succeeding and areas where they could be improved.

Engage in continuous improvement: Quality assurance is not a one-time event. It’s an ongoing process of collecting data, analyzing it, and making changes to improve student learning. To be successful, you need to create a culture of continuous improvement within your institution. This means encouraging faculty and staff to be constantly looking for ways to improve their teaching and learning practices.

Use data to drive decision-making: The data you collect through your quality assurance system can be a valuable tool for making decisions about your academic programs. For example, if you find that students are struggling in a particular course, you can use this information to make changes to the course content or delivery. Or, if you find that a particular program is not meeting the needs of its students, you can use this information to make changes to the program or to discontinue it altogether.

Involve all stakeholders: Quality assurance is not just about the faculty and staff who teach the courses. It’s also about the students who take the courses, the alumni who graduate from the programs, and the employers who hire the graduates. To be successful, you need to involve all of these stakeholders in your quality assurance process. This means getting their input on the goals of your programs, the data you collect, and the changes you make.

Be transparent: Quality assurance should be an open and transparent process. This means sharing your data and findings with all stakeholders, including students, faculty, staff, alumni, and employers. It also means being willing to discuss the challenges you face and the changes you make to improve your programs.

By following these tips, higher education personnel can create a quality assurance system that will help ensure that the academic programs they offer are truly exceptional.



Top Photo Credit: Element5 Digital on Unsplash 

Quality Assurance and Continuous Program Improvement


An effective quality assurance system is essential to a university’s continuous program improvement. This can involve regular data collection and analysis on multiple key metrics, seeking input and recommendations from both internal and external stakeholders, utilizing high-quality assessments, and more. Here are some innovative tips for how colleges and universities can ensure that they have exceptional academic programs through their quality assurance system:

– Use digital platforms and tools to streamline the data collection and analysis process, and to provide timely feedback and reports to faculty and students. For example, online surveys, dashboards, learning analytics, and e-portfolios can help monitor student learning outcomes, satisfaction levels, and engagement rates.

– Establish a culture of quality assurance that values collaboration, innovation, and diversity. Encourage faculty and students to participate in quality assurance activities, such as peer review, self-evaluation, curriculum design, and accreditation. Provide incentives and recognition for their contributions and achievements.

– Adopt a learner-centered approach that focuses on the needs, preferences, and goals of the students. Design curricula that are relevant, flexible, and aligned with the learning outcomes and competencies expected by the employers and the society. Provide multiple pathways and options for students to customize their learning experience and demonstrate their mastery.

– Incorporate experiential learning opportunities that allow students to apply their knowledge and skills in real-world contexts. For example, internships, service-learning projects, capstone courses, and simulations can help students develop critical thinking, problem-solving, communication, and teamwork skills.

– Seek external validation and benchmarking from reputable sources, such as accreditation agencies, professional associations, industry partners, alumni networks, and international rankings. Compare your academic programs with the best practices and standards in your field and region. Identify your strengths and areas for improvement and implement action plans accordingly.

By following these tips, college and university teams can create a quality assurance system that will help ensure that their academic programs are exceptional. Most importantly, they can be confident that they are meeting the needs of their students–which should be their #1 priority.



Top Photo Credit: Scott Graham on Unsplash