Staff editorial: Colleges should release internal numerical ratings to all applicants

January 29, 2023 — by Nilay Mishra and Arnav Swamy
Photo by Leyna Chan
Understanding how an application was reviewed would ensure equity and that a holistic admissions process is truly being followed.

College admissions has increasingly turned into a game and a business dominated by rankings and acceptance rates rather than quality of education. 

Last year, Harvard University accepted a mere 3.16% of applicants, far lower than the 7.1% it accepted a decade ago. But student achievement has not changed significantly in that time; instead, tens of thousands more high school seniors are applying to more and more colleges, driving both yield and acceptance rates down.

In 2018, elite colleges like Harvard came under scrutiny for alleged discrimination in their admissions processes against applicants of certain races, especially Asian Americans. In the resulting court cases, judges analyzed admissions records for thousands of students.

According to these filings, admissions officers rated each applicant on a scale of 1 to 6 in 14 different categories, using “+” and “-” marks to further distinguish within each rating level. These categories ranged from cookie-cutter standards like academic achievement and extracurricular involvement to more abstract qualities like athletic prowess, humor and grit. Subsequent court filings have shown that such internal rating systems are widespread among prominent colleges such as Harvard, Stanford and the University of Michigan.

To make college admissions more equitable and to help students regain some of the agency over the process that they have lost in recent years, colleges should release these ratings and their associated comments to students, instead of simply delivering an accept, defer or reject decision.

This transparency may seem a little bold; after all, the longstanding assumption is that colleges do not owe students any more information than a final decision. However, releasing the metrics used in review would reduce some of the downsides of the process's inherent subjectivity: implicit bias and discrimination, as well as a growing number of colleges drifting from their stated goal of holistic admissions.

The idea of holistic admissions began at a handful of colleges in the early 20th century, but today it is a valued component of the process: Colleges look at each applicant as a whole, considering factors beyond test scores, grades and achievements, which encourages many students with lower measurables such as GPA to apply.

With colleges serving as an instrumental engine of social mobility, it is important that students facing educational barriers are evaluated in proper context, and that academic pedigree is only one of many considerations. Yet colleges likely spend more time on applications with high GPAs and test scores and mere seconds on others, skimming essays and recommendations rather than giving them a reasonable read.

This alone wouldn’t be an issue, since a solid high school GPA is of course an indicator of success in college, but holistic admissions is among the most advertised parts of the process. Colleges attract applicants with holistic admissions pitches; if they do not live up to them, they are essentially swindling applicants whose files contain potential potholes, all in an effort to lower acceptance rates and inflate perceived prestige.

Releasing numerical ratings to rejected applicants would help ensure a holistic admissions process: instead of glossing over applications with lower academic stats, admissions officers would be forced to assign a proper rating to every student, not just to the ones they feel are more likely to gain admission.

Furthermore, organizations such as Students for Fair Admissions (SFFA) have long argued that admissions officers directly or indirectly tend to give Asian American students lower personality ratings. In SFFA v. Harvard, the Supreme Court is already analyzing the numerical ratings in the records of thousands of students; releasing these records to each student every year would further curb the implicit and explicit biases of the colleges we are conditioned to praise so much.

Getting a rejection from a hoped-for college is a painful experience that tens of thousands of seniors endure each year. And the capricious part is that even if you were qualified, your admissions officer may simply not have gotten their morning coffee. In an age when more and more laws call for pay transparency across jobs, rejected applicants deserve to know how their file was analyzed and how admissions officers rated them.

Without greater transparency, college admissions will continue to feel like a roll of the dice for applicants. But by providing more information on why an applicant was rejected, colleges would demystify the process. Such knowledge could also help students improve future applications, especially in the regular round after receiving feedback from an early application. Certain factors, such as officer notes and letters of recommendation, may be too subjective and volatile to release or to standardize into a number, but ratings of more neutral qualifications would go a long way toward helping applicants understand how their files were evaluated.

We’re not trying to open a loophole in the system; after all, part of the thrill of an acceptance is cracking an elusive and strenuous admissions process. But the process is still a game, and those of us who are pawns want to know more about how it’s played.
