I became aware of the website Rate My Professors about 10 years ago, when many of my friends beginning their teaching careers would discuss it with a mixture of unease, dismissiveness and outright annoyance. For those unfamiliar, it’s a website that allows students to write anonymous reviews of professors (and colleges), featuring notations such as “tough grader” or “easy grader,” “helpful,” “funny,” good or bad at giving feedback, whether one can skip class and still pass, and so on. Basically, it’s Yelp for academia, and as with Yelp, its voluntary and anonymous nature makes Rate My Professors a notoriously unfair system.
Years after grad school, when I began working as an academic advisor, I realized just how popular Rate My Professors had become – many of the undergraduates I met would without hesitation rely on this website in determining their class schedules.
While showing students various course options for their first semester, I would watch in dismay as they – sometimes with an assisting parent beside them – would quickly look up a professor’s rating on their phone, immediately shooting down an option characterized by even one anonymous reviewer as “too hard,” “boring,” “terrible,” “unfair,” etc. That these descriptions were often unqualified didn’t seem to bother the students, who were accustomed to making snap decisions based on online information. In these situations, I would always attempt to bring the student to his or her senses, noting that the students who leave reviews on a site like Rate My Professors are typically those with strongly positive or strongly negative opinions – a merely satisfied student rarely dedicates time to a thoughtful review. And those with an illegitimate grade dispute or other axe to grind are free to leave retaliatory critiques without ever entering their names.
An even bigger problem than its dubious measure of quality is the fact that Rate My Professors has become an enabler of gender bias and the objectification of women in academia. A bone of contention for many faculty over the years has been the “chili pepper” rating – an option students can add to reviews to indicate a professor’s physical attractiveness. (Side note: according to Inside Higher Ed, a Rate My Professors spokesperson hedged that the chili pepper “is meant to reflect a dynamic/exciting teaching style.” This dubious claim, even if true, is rendered moot by the fact that most student reviewers interpret the chili pepper as a “hotness” rating, plain and simple.) This issue garnered media attention in June, when Vanderbilt neurology professor BethAnn McLaughlin tweeted at Rate My Professors, calling on the site to remove the chili pepper. The site soon complied – a decision covered in an Inside Higher Ed piece.
After the victory, McLaughlin penned an essay for the website Edge for Scholars explaining the reasoning behind her tweet, and pointing out that this small issue represents something more important. “Being a professor is an incredibly stressful job, but being a female professor is measurably more difficult,” she writes. “Females make up the majority of educators for our college students yet earn far less than our male colleagues, do more university service and still experience higher levels of sexual harassment than any other profession outside of the military.”
She goes on to discuss the unfortunate reality that many male students (and even female students) have internalized gender bias, taught early on to believe that women academics are less accomplished, intelligent and worthy of respect than men. In a university setting, this translates into less respect and more critical evaluations for female professors. While the Rate My Professors chili pepper could be applied to both men and women, McLaughlin emphasizes that this kind of rating system further encourages an already entrenched culture of sexism. Women stand to lose far more from being objectified than male professors, who may find it easier to laugh off.
In her piece, McLaughlin expresses satisfaction with the website’s decision, even though it is a small step toward confronting gender bias. “Today, the good people at RateMyProfessors took a hard look at how we are judging one another and made the right call,” she writes of the company’s decision. “They chose civility and kindness over snarky banter and retribution. They chose to show our students that the path forward is not one of pettiness and locker room banter.”
The Rate My Professors chili pepper is, of course, one small but egregious symbol of a much larger issue: gender bias in the classroom, particularly in student evaluations. proFmagazine’s own Suzette wrote last year about the problematic difference between the kinds of comments male and female professors receive on traditional course evaluations – those short course surveys students complete at the end of a semester, which are then considered during a professor’s performance and even tenure reviews. Suzette writes,
Recent statistical analyses have suggested that course evaluations do not adequately rate teacher effectiveness, but do demonstrate gender bias across a number of disciplines. Bias isn’t just a factor regarding the gender of the instructor, but is also related to the gender of the student. Male students, for instance, rate male faculty higher than female faculty. Another study shows a “small same gender preference” in that male students prefer male professors and female students prefer female professors. Some research shows that female faculty have to work significantly harder than male faculty to receive the same evaluations. Quite concerning is that this effect is most prominent for female faculty at the junior level. But of course, the issue of bias in course evaluations isn’t just one based on gender, but one that affects the position of minorities as well.
Talk to any female academic and she’ll tell you that this is a pervasive problem on campuses throughout the country. Thus far little has been done to address it, but some institutions are taking steps to change the system. A May Inside Higher Ed article details a University of Southern California decision to do away with student evaluations of teaching (SETs), due to both bias against women and minorities and evidence that these evaluations don’t actually correlate with learning outcomes. According to the article,
[Provost Michael Quick] just said, “I’m done. I can’t continue to allow a substantial portion of the faculty to be subject to this kind of bias,” said Ginger Clark, assistant vice provost for academic and faculty affairs and director of USC’s Center for Excellence in Teaching. “We’d already been in the process of developing a peer-review model of evaluation, but we hadn’t expected to pull the Band-Aid off this fast.”
In place of SETs, USC administrators plan to implement a peer-review system focused on “training, evaluation and incentives.” Student input will still be valued, but instead of simply “rating” professors and courses in response to general questions, students will be asked to specifically discuss assignments, inclusivity, feedback and the knowledge or skills they gained, and they will also rate their own level of engagement with the course. As for the peer-review aspect, Inside Higher Ed notes that “peer review will be based on classroom observation and review of course materials, design and assignments. Peer evaluators also will consider professors’ teaching reflection statements and their inclusive practices.”
While implementing such a system may be difficult, it is a necessary step toward addressing the problem of bias in evaluations at the institutional level. And while students likely won’t stop turning to sites like Rate My Professors any time soon, at least universities themselves can place the focus where it matters: not on nebulous measures of popularity, but on the strength of a faculty member’s actual effort and ability to develop teaching methods that educate, encourage and inspire students.