Biased college rankings don’t tell the entire story

April 26, 2012 — by Nick Chow

U.S. News & World Report's rankings are highly influential.

As the frenzy for acceptance into a prestigious college increases, numerous magazines and news sources, such as U.S. News and Forbes, have profited from their college ranking issues. 
Unfortunately, the college ranking system has led to fierce competition, with colleges vying for higher spots on the lists. This competition has prompted some schools to secretly inflate their students' average SAT scores to boost their rankings and attract more applicants. Claremont McKenna College was recently caught exaggerating its reported SAT scores over the past six years.
For the average consumer, these college rankings are, at a glance, quite useful: they reduce a school to a single score and give the consumer the satisfaction that University X is his or her best choice. However, many of the factors used in college rankings are insubstantial and say little about a school's actual performance.
One strange statistic used by arguably the most famous college ranking source, U.S. News, is the alumni giving rate, which reflects the average percentage of living alumni who gave money to their schools. U.S. News claims this alumni giving rate "is an indirect measure of student satisfaction."
In fact, such giving is largely irrelevant, because the amount of money alumni donate does not measure student satisfaction. Most alumni donate out of general support for their alma maters, not based on their satisfaction with their education.
Furthermore, the U.S. News rankings use another extremely questionable statistic: the peer evaluation. All participating colleges choose their top three administrators to rate other institutions on a 1-5 scale, or mark "don't know."
It is extremely difficult for administrators to complete this survey objectively because there are far too many schools for them to know well, which inevitably results in "don't know" ratings for numerous smaller colleges.
In contrast, all the “top-tier” ranked colleges will likely have extremely favorable reviews, due to their already-prestigious ranking status, creating a vicious cycle that often victimizes lesser-known schools. 
This peer evaluation statistic further perpetuates the stranglehold of the already high-ranked colleges and keeps more obscure colleges from climbing the rankings. As a result, highly ranked schools attract even more top students, further cementing their dominance atop the college rankings.
A college ranking system cannot hope to encapsulate the overall quality of an institution in a single number. Colleges all have their own specialties and advantages, which cater to different types of students. Yet these rankings greatly impact how schools are perceived by prospective students: by publishing them, these magazines trick students into thinking that the highest-ranked colleges are the best fit for everyone.
In reality, though, everyone has different interests, which means no college can be one-size-fits-all. The same goes for any system that measures the college experience with superficial numbers and statistics.