Are Professor Ratings and Reviews Sites Legit?
- Students and college administrators rely on professor ratings to make critical decisions.
- Professor rating sites, such as Rate My Professors, are popular destinations for students.
- Student evaluations of professors often demonstrate gender and racial bias.
- Some colleges are seeking alternative methods for measuring teaching effectiveness.
Higher education is a consumer industry, and its chief consumers — students — have a powerful voice in determining institutional quality. Student evaluations of faculty and teaching can influence college choice, course selections, faculty hiring decisions, and professional reviews related to promotion and tenure.
But just how valid are professor ratings from students? Should prospective and current students, as well as faculty leaders and college administrators, rely on these assessments to make critical decisions? If not, then what are some alternatives?
Rate My (Formerly Hot) Professors
Student reviews of faculty and teaching fall generally into two categories: public reviews catalogued on websites and private reviews administered by institutions. Let's first look at what these sites entail.
The largest and most influential professor rating site is Rate My Professors, which features more than 19 million ratings of 1.7 million professors at 7,500 schools and has the most traffic — some 4 million students per month — of any such resource.
The site welcomes honest, if entirely subjective, comments about faculty teaching but cautions students to avoid personal attacks, posting in anger, vulgarity, and details about a professor's background or appearance (more on that later). It also encourages faculty members to create their own accounts so they can engage publicly with students.
Professors receive grades for quality and difficulty, with scores falling on a 1-5 scale (5 being the highest), along with brief narratives of each student's experience in their courses. A faculty member's page will reflect their average rating, much like one's Uber score, and a school's homepage will rank professors accordingly, showing the top handful.
Students can evaluate their schools as well. Using the same 1-5 scale, they rate institutions' overall reputation, location, internet access, food, social life, happiness, opportunities, clubs, and safety, and they provide written comments summarizing their experiences. The college's homepage shows its average score.
Perusing a discussion forum at College Confidential, a popular online student hangout, reveals that students rely on Rate My Professors as much as shoppers rely on consumer opinions on Amazon. Comments include the following:
"In my experience, Rate My Professor is dead on."
"RMP has been my Holy Bible for my freshman year."
"I wish I'd found RMP before I'd taken a class with a professor who turned out to have the critical thinking skills of a turnip."
"One of the best teachers I've ever had was rated very poorly just because he gave a lot of work."
"[Rate My Professors has] become a magnet for disgruntled students to vent."
"I found [Rate My Professors] very helpful when selecting specific classes, but not useful when selecting universities."
I conducted my own rather modest experiment by asking my daughter about her experiences with two professors in the same department at her college. One, she said, was outstanding, and the other was terrible. To corroborate or refute her conclusions, I consulted each professor's page.
Sure enough, the "good" professor had an overall rating of 5, with 96% of students claiming they'd take another course with him. "The absolute best professor I've ever had," someone wrote. But the other professor? A rating of 2.6, with only 45% saying they'd take another course. One summation: "Avoid at all costs."
A word about appearance, referenced earlier. In 2018, Rate My Professors eliminated its controversial "chili pepper" metric, which students used to rate professors according to their "hotness." Five chili peppers meant you were smokin'. Faculty protested, noting that such a criterion was demeaning and generally had little to do with teaching effectiveness. Yet faculty rated as attractive boasted higher overall scores, so looking good was evidently worth something.
Two other sites offer similar content. One is Uloop, a "student marketplace" with information on job opportunities, scholarships, roommates, textbooks, study abroad programs, and off-campus housing. It contains a small section on professor ratings but no overall university scores. The other site is StudentsReview, which similarly features a limited number of faculty assessments.
The Main Issues With Rating and Reviewing Professors
Anyone who's taken college classes knows that at the end of each course, students are asked to complete an evaluation form requesting feedback on their experience. Traditionally, the professor would reserve a few minutes during the final class to allow students to complete the paper survey, taking advantage of a captive audience. Nowadays, students are more often asked to fill out these forms online on their own time.
Even with that captive audience, response rates have never been stellar. One would think every student would want to opine on their educational experience, but in truth response rates can fall below 50%. Allowing students to fill out the survey online has served only to decrease that rate. One study noted a 23% drop in completion rates for online surveys compared to in-class collection.
Other problems abound. Survey results suffer from what's called "sampling bias," meaning the students motivated to complete the surveys are often either very happy or very unhappy with the class. Using the 5-point scale, let's say a particular professor receives a slew of 1s and 5s, with results averaging out to about a 3. If another professor receives 3s across the board, the average is also a 3 — but do those two sets of evaluations really tell the same story?
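To see why an identical average can hide very different experiences, consider a quick sketch with hypothetical ratings (the numbers below are invented for illustration, not drawn from any real professor's page): a polarized set of 1s and 5s and a uniformly middling set of 3s produce the same mean but wildly different spreads.

```python
from statistics import mean, stdev

# Hypothetical ratings for two professors (illustrative values only)
polarized = [1, 1, 5, 5, 1, 5, 5, 1, 5, 1]  # students either loved or hated it
middling = [3] * 10                          # everyone felt lukewarm

# Both averages come out to 3.0...
print(mean(polarized), mean(middling))  # 3.0 3.0

# ...but the standard deviation exposes the difference
print(round(stdev(polarized), 2))  # about 2.11 -- deeply divided opinions
print(stdev(middling))             # 0.0 -- genuine consensus
```

A single headline number, in other words, throws away exactly the information — how divided students were — that a prospective student would most want to know.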
Also, faculty who assign less work and routinely award high grades tend to receive higher ratings — but does that high score equate to a valuable educational experience?
Gender and Racial Bias in Professor Ratings
An even more problematic bias plagues student evaluations. Numerous studies have revealed a persistent bias against women and people of color. In one experiment, for example, a professor, posing once as a woman and once as a man, taught the same online course and received lower evaluation scores when using a woman's name. This form of gender discrimination is particularly evident in male-dominated fields, such as STEM disciplines and economics.
At the same time, it's rather peculiar that while study after study identifies racial and gender bias in student evaluations, few articles attempt to explain exactly why these biases exist. One explanation posits that female professors are assumed to be "less qualified" than their male counterparts.
With respect to faculty of color, another article suggests that their lower ratings might stem from teaching less popular topics, but then concludes that they receive inferior scores compared to white faculty teaching the very same courses. So although the causes of bias remain somewhat vague, the problem nevertheless persists.
Why is that an issue? Who really cares what students think about teaching quality anyway? People who make decisions about appointments, promotion, and tenure, that's who. Most colleges use professor reviews when making these important personnel decisions. At Oregon State University, for instance, a faculty guide notes that promotion and tenure portfolios should include "evaluations by learners or participants of every course taught by the candidate."
If minority and female faculty both face a disadvantage resulting from biased evaluation results, might this limit opportunities for professional advancement?
Alternatives to Traditional Professor Evaluations
Despite the shortcomings of student evaluations, colleges aren't ready to dismiss them entirely. Instead, as one faculty association suggests, professor ratings "should not be used as the only evidence of teaching effectiveness," but instead "should be part of a holistic assessment that includes peer observations, reviews of teaching materials, and instructor self-reflections."
Evaluations, after all, might measure students' satisfaction, but that doesn't necessarily equate to actual quality.
In response to mounting claims of bias and other issues with student evaluations, some colleges have stopped using them altogether. The University of Southern California recently eliminated the use of evaluations in promotion and tenure decisions, and the University of Oregon replaced them with a more comprehensive system of determining teaching effectiveness.
In spring 2020, when the COVID-19 pandemic forced a nation of academics to move online, teaching and learning changed overnight. Professors and students alike, especially those unaccustomed to online education, struggled with the new pedagogy and content delivery systems.
In light of such volatility, some colleges decided to forgo administering student teaching evaluations until normal classes resume, while others are temporarily excluding survey results from faculty assessment portfolios.
Student evaluation of teaching, whether in a public forum such as Rate My Professors or at the institutional level, is an inexact science at best. Sure, a series of consistently positive or negative results might reveal the truth about a particular professor and their classes, much like consumer reviews of cars offer valuable insights about the driving experience.
But the system has too many potholes — gender and racial bias, low participation rates, the rewarding of light workloads and easy grading, and the preference for entertainment over educational quality — to yield reliable results.
Colleges might continue to devalue or eliminate professor ratings as they seek a fairer, more equitable measure of quality, but given their popularity, sites offering Yelp-like reviews of professors will continue to influence America's college students.
Feature Image: Stígur Már Karlsson / Heimsmyndir / E+ / Getty Images