Tuesday, December 04, 2012

 

Rate Your Professors and Yourself





10088274-COMS 369-03

Think back to your first term of university. You had a lot to learn in a short amount of time. Not only did you have to find your way around campus, but you also had to create your own schedule by selecting classes and professors. The process of choosing classes may have been daunting. Did you consult friends on which professor was the best? Did you ask which professor was the easiest to learn from?

It is common for students to consult with their peers before choosing their professors. They may consult with friends or fellow students in addition to university ratings and forum websites such as RateMyProfessors.com. Although online sources are popular, students may not be aware of the many biases present in online reviews. How reliable are professor evaluations, anyway?
Imagine you are presented with this real list of student comments from RateMyProfessors.com and are asked which professor you would choose:
  • “I felt stupid almost every class that I attended. I have never met such a rude teacher.”
  • “Absolutely hilarious, and he knows his stuff. Easy to get an A.”
  • “Hands down the best prof I've ever had”
  • “Decent prof, but not the best.”
  • “He is very unapproachable and his tests are ridiculously unfair.”
  • “I leave the class feeling disrespected and stupid EVERYday.”
After looking at the above list, you would probably be more inclined to enroll in the class promising the “easy A”. But how do you feel after being told that all of these reviews are for the same professor? How can you trust any of these reviews when the responses are so varied? The reality is that you may consult these types of reviews during your university career without being fully aware of varying degrees of bias.
When students give feedback on their professors, they are influenced by many different factors, including class size, exam and assignment type, workload, and the timing of the evaluation itself. Studies have shown that teachers with smaller classes consistently receive higher ratings than those with more students. The type of assignments and exams also matters; classes with multiple choice exams consistently receive lower ratings than those with written responses such as essays. Students who perceive the out-of-class workload to be higher than that of other classes will also rate professors lower. All of these factors shape student feedback, but perhaps the most important one is whether a student completes the questionnaire before or after a final exam.
If a course and professor evaluation is scheduled after a final exam, the chance of bias increases. For many students, the final exam or the submission of a final assignment is the ultimate test of their knowledge of the course material. A student may experience relief, disappointment, or euphoria following an exam, and these feelings can directly or indirectly influence the feedback a professor receives. A student who feels they did poorly on a test may use the course evaluation to retaliate against a professor, while a student who feels they did well may give a more positive score. The degree of bias present in evaluations is also intensified by how the evaluations are collected.
Forum websites such as RateMyProfessors.com (RMP) carry significantly more bias than Universal Student Ratings of Instruction (USRI) collected by an institution. The University of Calgary asks students to complete a USRI near the end of the semester before any final exams. The timing is no accident; the University is trying to limit the bias in the ratings by scheduling the reviews before a final examination. In contrast, RMP allows students to post reviews at any time, even years later. The trustworthiness of the data is questionable as the site operates on the honor system; reviewers are not even required to prove they are students.
The use of RMP amongst university students is extensive, even with so much potential for bias and error. A study into student awareness and use of RMP was conducted at Appalachian State University, surveying a total of 216 students. The results suggested that use of the site was widespread among respondents: 95% of students regarded RMP as a credible source of information, and 75% admitted to using it when selecting an instructor. These results are not entirely surprising, because students appreciate qualitative responses that give more insight into teaching style than the USRI's numeric results.

Students may perceive RMP as reliable, but the students who leave reviews will often blame the professor for poor marks instead of accepting personal responsibility. Two components contribute to a student's academic success. The first is the quality of instruction; a good professor is essential to a successful learning environment. The second is the student's motivation to learn. If a student is not doing the required readings or attending class, they are failing to take responsibility for their own education. You will ultimately reap the results of your efforts, but many students will not own up to their own mistakes.

You will never read comments on professor evaluations such as, “I wish I had attended more classes and done the required readings,” or “If I had asked more questions or seen the professor during office hours, I would have understood the material better.”

Personal responsibility is the missing piece of the puzzle. You will never get a complete picture of a professor from a review unless those reviewers provide an honest assessment of their own responsibility for their education.

If you ask someone for a professor recommendation, follow up with the question, “How dedicated were you to learning the class material?” The response will indicate whether the professor truly deserves the criticism or the praise.

Keep an open mind when it comes to reading professor reviews. Remember, one man's junk is another man's treasure. You may find that some of your favourite professors may not appeal to everyone, but that doesn't mean they don't have knowledge to impart.

References

Davison, E., & Price, J. (2009). How do we rate? An evaluation of online student evaluations. Assessment & Evaluation in Higher Education, 34(1), 51-65.

McKeachie, W. (1997). Student ratings: The validity of use. American Psychologist, 52(11), 1218-1225.



