College Board SAT equating process lowers student scores

September 10, 2018 — by Jeffrey Xu

Reporter analyzes the results of the June SAT and their implications for the College Board.

After the June SAT scores were released in July, thousands of angry students and their parents took to social media and sent emails to The College Board, complaining that students generally scored lower on this test than on past SATs, despite getting more questions right in June. In response to the backlash, The College Board released the following statement: 
“We understand your questions about your June SAT scores. We want to assure you that your scores are accurate. While we plan for consistency across administrations, on occasion there are some tests that can be easier or more difficult than usual. That is why we use a statistical process called 'equating.'”
Essentially, equating means ensuring that a score earned on one test date represents the same level of performance as that score earned on any other date. Put simply, it is the SAT's equivalent of curving.
Equating supposedly accounts for differences in difficulty between versions of the SAT: on a harder test, a student can miss more questions and still earn the same scaled score as a student who missed fewer questions on an easier test.
However, equating does not always work as intended. While its purpose is to make the test fairer across all dates, the results can be drastic when a test is too easy, which is what happened this past June.
In the case of the June test, too many people earned high raw scores. To standardize the scaled score distribution, The College Board deducted an unusually high number of points even for missing one or two questions. According to The College Panda, in all previous administrations of the test, students lost zero to 10 points for missing a single math question. Test-takers in June, however, lost 30 points for missing just one question in the math section. As a result, some students received drastically lower scores in June than on their previous attempts at the SAT, despite answering many more questions correctly. 
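To see how this plays out, consider a minimal sketch, in Python, of two hypothetical raw-to-scaled conversion tables. The numbers are invented to mirror the 10-point and 30-point drops described above; they are not the College Board's actual tables.

```python
# Hypothetical raw-to-scaled conversion tables for the top of the SAT math
# section (58 questions, scaled 200-800). The numbers are invented for
# illustration only; they are not the College Board's actual tables.
typical_form = {58: 800, 57: 790, 56: 780, 55: 760, 54: 750}    # normal difficulty
easy_june_form = {58: 800, 57: 770, 56: 750, 55: 730, 54: 710}  # unusually easy form

def scaled_math_score(conversion_table, raw_score):
    """Look up the scaled math score for a given raw score (questions correct)."""
    return conversion_table[raw_score]

# A student who misses exactly one question (raw score 57) on each form:
print(scaled_math_score(typical_form, 57))    # 790 -- loses 10 points
print(scaled_math_score(easy_june_form, 57))  # 770 -- loses 30 points
```

On the easier form, correct answers are packed near the top of the raw-score range, so each additional miss has to carry a bigger scaled-score penalty.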
Following the June SAT fiasco, The Princeton Review published a lengthy article titled “Why You Don’t Want An Easy SAT.”
In the article, they wrote, “It is a problem, too, for high-scoring students who make the occasional careless error or who misbubble on questions that they are quite capable of answering. With a typical curve, there’s some cushion to mitigate the impact of such errors. There was no cushion on the June 2018 SAT.”
While the scaled scores of students who typically perform below the third quartile aren’t significantly affected, this problem certainly distorts the results for students who score within the top 10 percent. Unfortunately, many students at Saratoga High, and in the Bay Area in general, fall into this top 10 percent category. 
In fact, according to PrepScholar, the average old SAT score for Saratoga High students was a 1920, equivalent to a 1360 on the new SAT, which falls within the top 10.8 percent of all test-takers.
Rather than being an effective way to standardize SATs across all testing sessions, the equating process seems more like a concession that the test is inherently flawed. For most of the high-performing students at the school, equating simply means that easier tests end up being scored more harshly, and harder tests more leniently. 
But it is also true that a harder test can be more nerve-racking to students and cause them to fumble on more problems than they normally would.
Either way, equating, which exists because of inherent differences in difficulty among test sessions, works against students.
One way around the inevitably flawed SAT is to simply take the ACT. While the ACT also uses a score equating process, each ACT scaled score corresponds to a larger range of raw scores. For example, a 36 on the ACT corresponds to any score between a 1570 and a 1600 on the SAT. This means that even if a test skews slightly easy or hard, the large score ranges will likely keep a student at the same score, despite the effects of equating.
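As a rough sketch of that point, here is a hypothetical example in Python, with invented cutoffs rather than real ACT or SAT conversion tables, showing how a wider raw-score bin can absorb a one-question difference that a narrower bin would not.

```python
# Hypothetical illustration of why wider scoring bins blunt the effect of a
# slightly easier or harder form. The cutoffs are invented; they are not real
# ACT or SAT conversion tables.

def scaled_from_raw(raw, bins):
    """Return the scaled score whose raw-score range contains `raw`."""
    for low, high, scaled in bins:
        if low <= raw <= high:
            return scaled
    raise ValueError("raw score out of range")

# Narrow bins: every raw score maps to its own scaled score.
narrow_bins = [(60, 60, 36), (59, 59, 35), (58, 58, 34)]
# Wide bins: several raw scores share the same scaled score.
wide_bins = [(58, 60, 36), (55, 57, 35)]

print(scaled_from_raw(59, narrow_bins))  # 35 -- a single miss lowers the score
print(scaled_from_raw(59, wide_bins))    # 36 -- a single miss is absorbed
```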
According to an Associated Press report, the number of students taking the SAT and the number taking the ACT were both around 1.65 million in 2012, with the ACT surpassing the SAT by fewer than 2,000 test-takers. In the six years since, however, this edge has continued to grow. 
According to Education Week and The College Board, 2.03 million students took the ACT in 2017, while only 1.7 million took the SAT.
Unless The College Board takes immediate measures to directly address the SAT’s equating problems, it might see this gap widen as even more students switch over to the ACT. 