Harsh Curve on the August SAT Leads to Lower than Expected Scores
On the morning of September 6th, I sat eagerly by my phone. It was the day the College Board was scheduled to release most of the August SAT scores. I’d been meeting with several students who worked diligently over the summer, students who sacrificed time at the pool to study and gave up weekend morning adventures to take practice exams. Their hard work had been rewarded with improvement from mock to mock, and I knew they were anxious to see that improvement reflected in their official scores. I couldn’t wait to celebrate with them.
Score release day is always stressful, because no matter how hard you work, you can’t be 100% certain what the result will be. The SAT and ACT are standardized, which is a good thing in many ways: students are held to the same standards and can be compared to one another, which is what college admissions officers need. However, standardization also means that raw scores are converted to the 1600-point scale using a curve that depends on the difficulty of that particular test administration and on how students perform on it. At Applerouth, we’re in the business of teaching; we prepare our students to know the material as thoroughly as possible, but we can’t predict the curve, and that’s what I realized anew on September 6th. A number of my high-scoring students, students who’d seen trackable improvement over the summer, saw smaller score increases than they’d expected; worse, some saw no increase at all.
As a tutor, days like this deal a real blow to your confidence. Is it me? Logically, I knew that was unlikely: I’ve been tutoring these tests for six years, and my students have improved consistently in that time. The August results were a curveball, both for me and for the tutors I manage. Something strange was afoot.
After doing some research by polling our students and tutors, our Tutor Services team discovered that the curve for the Math section was uncharacteristically harsh on the August test administration.
Here’s what I mean. On the scales the College Board provides for its official practice tests (the same tests we use as mock tests to track student progress), a student who missed one question in the Math section could normally expect to score 790, a student who missed two would score 770-780, and so forth. On most of the College Board’s practice tests, a student could miss as many as eight questions and still score in the low 700s.
The August test was a different animal altogether. From what we could gather from our own students, a student who missed one question dropped to 770, and a student who missed two dropped to 750. Missing eight questions dropped a student to 660, rather than the low 700s. Looking at the data more closely, we found that several of our students actually answered more questions correctly than on previous tests and official practice tests, yet received a lower overall score. While the curve generally didn’t hurt students aiming for the middle of the scale (in fact, it rewarded their commitment to thorough work and avoiding careless errors), it was brutal to normally high-scoring students.
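To make the gap concrete, here’s a quick sketch in Python comparing the approximate scaled scores above. These figures are estimates, not official conversion data: the August values come from our students’ score reports, and the practice-test values are rounded from published practice scales, which vary slightly from test to test.

```python
# Approximate Math scaled score, keyed by number of questions missed.
# These are estimates drawn from student reports and published practice
# scales, not official College Board conversion tables.
typical_practice_scale = {0: 800, 1: 790, 2: 780, 8: 710}  # 2 missed: ~770-780
august_scale = {0: 800, 1: 770, 2: 750, 8: 660}

for missed in sorted(typical_practice_scale):
    practice = typical_practice_scale[missed]
    august = august_scale[missed]
    print(f"{missed} missed: practice ~{practice}, August ~{august} "
          f"({practice - august} points lower)")
```

The pattern is what stung high scorers: the penalty for each missed question grew the most at the top of the scale, so one careless error cost 20 points more than usual, and eight errors cost 50 points more.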
This isn’t the first time the College Board has administered a test with this sort of issue. The June 2018 SAT featured an unusually easy Math section; to compensate, the test was scored with an extremely harsh curve. The December 2018 exam had a similarly harsh scale on the Writing section, creating the same problem for high-scoring students. And for the June 2019 exam, the College Board administered 17 different test forms (which, one could argue, isn’t exactly standardized).
It seems the resources the College Board provides don’t accurately reflect what students will face on test day, and the scales attached to its practice exams don’t reflect how the official exams will be scored. How can students prepare with confidence when the tools from the testing organization itself don’t match the real thing?
The ACT’s track record for consistency is superior to that of the SAT. High-scoring students will want to bear this in mind as they weigh the SAT vs. the ACT. Ultimately, the choice of the right test will come down to a variety of individual factors. There are certain characteristics of the SAT that students may prefer to those of the ACT: more time per question, the style of Reading passages, and its alignment with Common Core principles, just to name a few. The ACT makes small adjustments from test to test (like introducing a slightly different problem type, testing a familiar concept in a new way, or including more of one question type than it has historically), but these adjustments do not hurt students’ scores with anything like the severity of these SAT inconsistencies. Additionally, the most recent practice tests that ACT, Inc. has published accurately reflect what students can expect to see; perhaps more importantly, the corresponding scales accurately reflect the scales used on official test dates. As a student, I’d rather know exactly what I’m prepping for, and the ACT simply provides a predictability that the SAT doesn’t right now.