I just finished reading a piece entitled "AI Could Conduct Peer Review, Report Finds"; it actually focuses on using robots to detect plagiarism, find instances of misused data, and flag statistical tests that have been applied incorrectly. This sounds like Turnitin joined at the hip with SPSS–on steroids. It may prove to be quite a handy tool.
However, I had another thought: Could robots or other forms of artificial intelligence accurately identify individuals who have a propensity for success in the classroom? In other words, could they be programmed to assess an individual’s professional dispositions? Dispositions are the “soft skills” needed to have a positive impact on the lives of students–not just academically but also developmentally, socially, and emotionally. Skills like compassion, caring, ethics, values, commitment, grit, attention to detail, organization, and collaboration cannot easily be measured, but we know them when we see them, and they make a huge difference in the classroom. Over the years I’ve seen so many individuals who had a tremendous command of their subject matter and yet were terrible teachers–they lacked the dispositions necessary for working well with students, parents, colleagues, and others.
Institutions of higher education struggle with how best to measure dispositions; it’s often cost-prohibitive or personnel-prohibitive to assess each applicant even once, much less at multiple points in a program. But what if we could build tech tools that were highly effective at evaluating the professional dispositions of prospective teachers or school leaders? If developed correctly, such tools could save schools of education huge sums of money each year and help them better identify the candidates most likely to succeed: most likely to be retained in the program, most likely to graduate, and most likely to be effective after program completion.
Of course, this would also open the door to all sorts of research studies! It would be entirely possible to examine properties such as content validity, reliability, and inter-rater reliability.
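To make the inter-rater reliability idea concrete: one common statistic for it is Cohen’s kappa, which measures how often two raters agree beyond what chance alone would produce. Here is a minimal sketch; the rubric scores are entirely hypothetical, invented only to illustrate the calculation, and a real study would use an established statistics package rather than hand-rolled code.

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters assigning categorical scores."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of candidates the two raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater scored independently at random,
    # following their own label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores for eight candidates
# (1 = meets the disposition standard, 0 = does not)
rater_a = [1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 1, 0, 0, 0, 1, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 3))  # → 0.467
```

A kappa near 1 indicates strong agreement between raters, while a value near 0 suggests the raters agree no more often than chance–exactly the kind of check an automated disposition-assessment tool would need to pass against trained human evaluators.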
What might this look like? And how would we get started? Reach out to me if you want to continue the quest for Academic Excellence – Nothing Less.