Introduction
In academia, faculty members’ employment, career advancement, and promotion depend on their scholarship and teaching effectiveness. Student evaluations of teaching (SETs), also called Faculty Course Surveys (FCS), are often considered when assessing an academic’s teaching. In 2018, the use of SETs was challenged in an arbitration case between Ryerson University and its Faculty Association. The award rendered by Arbitrator William Kaplan barred Ryerson from using student evaluations as evidence of a professor’s effectiveness in the classroom. This essay supports Arbitrator Kaplan’s decision, explaining why student evaluations of teaching are imperfect at best and biased and unreliable at worst.
My position
Student evaluations of teaching are unreliable, biased, and imperfect. While student surveys may be students’ main channel for reporting on their educational experience, the resulting data must be “carefully contextualized,” as Kaplan noted. Students bring personal biases and prejudices that can affect their evaluation of a professor’s effectiveness in the classroom. Moreover, studies have shown that student evaluations are skewed by an array of immutable personal characteristics, including race, gender, accent, age, and even a professor’s attractiveness.
Supporting points
Was the arbitrator right?
Arbitrator Kaplan’s decision to prohibit Ryerson University from using student evaluations as evidence of teaching effectiveness rested on strong testimony from expert witnesses from UC Berkeley, which showed that these evaluations were biased and unreliable. Kaplan weighed the strengths and weaknesses of Ryerson’s student survey system in arriving at his conclusion, which shows that he considered the relevant evidence and arguments presented by both parties. His decision was therefore well reasoned and well supported by the evidence, making it a reasonable and just outcome.
How right was he within the parameters of Labour Law?
Employers are required by labour law to use equitable and impartial evaluation methods when making employment-related decisions, such as promotion and tenure. The expert evidence presented in the case demonstrates that student evaluations are not fair and objective due to their susceptibility to implicit biases based on personal characteristics such as race, gender, and attractiveness. Therefore, Kaplan’s decision was well within the bounds of labour law. He prohibited using an unreliable and prejudiced evaluation method, ensuring academic employment decisions were based on fair and objective criteria.
Was the university right in its argument?
Ryerson University’s claim that student evaluations give useful feedback on teaching effectiveness is not entirely without merit, as these evaluations can provide information about various aspects of the educational process, including the instructor’s capacity for engaging students and communicating clearly. According to the evidence provided by the faculty association and the expert witnesses from UC Berkeley, however, these assessments are flawed at best and discriminatory at worst, making them an unreliable source of information for assessing instructional performance. Therefore, the university’s justification for including student evaluations in the evaluation process was unfounded, and Kaplan’s decision to forbid their use was appropriate.
Was the faculty association right?
The expert evidence presented in the case supported the faculty association’s position that student evaluations are unreliable and biased due to their susceptibility to implicit biases based on personal characteristics such as race, gender, and attractiveness. The faculty association was correct in challenging the use of student evaluations in academic employment decisions, as these evaluations can substantially impact a professor’s career prospects and result in unjust and biased outcomes. Therefore, Kaplan’s decision to prohibit their use was a victory for the faculty association and a step toward fair and impartial academic hiring practices.
Can students be unbiased in their feedback?
Student evaluations are susceptible to implicit biases based on personal characteristics such as race, gender, and attractiveness, which can result in unreliable and biased data. However, this does not inherently imply that all students’ comments are biased. It is possible for students to provide objective and useful feedback on their educational experience, provided that the evaluation method is designed to mitigate implicit biases and ensure the data collected is objective and reliable. Consequently, although student evaluations may provide feedback on teaching efficacy, they should not be the sole or primary criterion for academic employment decisions.
In addition, student teaching evaluations may not accurately reflect a professor’s teaching effectiveness. Students may evaluate their professors based on factors other than their teaching efficacy, including grading policies, workload, and punctuality. Additionally, some students may rate their instructors unjustly out of spite or to improve their grades. Consequently, student evaluations of teaching as a measure of a professor’s effectiveness are imperfect and unreliable.
If you were a faculty member, what would you suggest to improve or include students’ voices?
As a faculty member, I would recommend a more comprehensive and objective evaluation method that draws on multiple data sources, such as peer evaluations, classroom observations, and student feedback. This would provide a fuller view of teaching effectiveness while ensuring the collected data is accurate and impartial. In addition, I would suggest designing student evaluations to mitigate implicit biases by using anonymous and standardized surveys, incorporating open-ended questions, and training students on how to provide objective feedback. This would ensure that students’ perspectives are heard and that the collected data is useful and objective.
Conclusion
At best, student evaluations of teaching are flawed; at worst, they are biased and untrustworthy. Arbitrator Kaplan’s 2018 decision disallowing Ryerson from using student ratings as proof of a professor’s competence in the classroom is a step toward accurate and trustworthy evaluations of an academic’s teaching efficacy. Because student evaluations are less reliable and accurate in determining an academic’s effectiveness as a teacher, they should not be used in place of peer review models. In peer review models, professors’ colleagues evaluate their teaching performance based on their familiarity with the academic’s work and prior teaching experience. Because peer review models rest on objective criteria, such as the professor’s research publications, presentations, and teaching philosophy, they are more accurate and reliable.
References
Ryerson University v Ryerson Faculty Association, 2018 CanLII 58446 (ON LA). https://canlii.ca/t/hsqkz. Retrieved 2023-04-08.
Student evaluations in promotion and tenure – Arbitration & dispute resolution – Canada. (2019, November 29). Mondaq. https://www.mondaq.com/canada/arbitration-dispute-resolution/869344/student-evaluations-in-promotion-and-tenure
The end of student questionnaires? (2018, November 1). CAUT Bulletin. https://www.caut.ca/bulletin/2018/11/end-student-questionnaires