The Testing Apologists fight back: New book makes the case for keeping the SAT and ACT in college admissions

Jed Applerouth, PhD
March 15, 2018

Johns Hopkins University Press recently published a book, Measuring Success: Testing, Grades, and the Future of College Admissions, which calls into question the merits of test-optional college admissions policies. Some of the book’s findings were also adapted into a March 8 article in the Wall Street Journal. Together, these two works ask what we are giving up when we abandon standardized academic measures, and what we are actually getting in return. This debate will likely continue to rage in the admissions world, so here’s what you should know about the findings presented in these new publications:

Do Standardized Tests like the SAT and ACT Predict College Success?

In recent years, an increasing number of colleges and universities have adopted test-optional admissions policies. Critics of testing have celebrated this trend, often attacking the SAT as a biased test that privileges the wealthy, adds minimal predictive validity to college admissions decisions, results in decreased access for underrepresented student groups, and is rendered unnecessary by the use of high school grades and other factors.

The editors of this new volume propose that these claims have, until now, largely escaped empirical scrutiny. They dig into the research underpinning the claims and conclude that much of it is spurious: based on anecdotal evidence, limited in generalizability, or methodologically flawed.

Positionality

It is important to note that both the editors of Measuring Success and a number of its contributors have deep ties to the testing industry. They have a proverbial horse in the race, just as many in the test-optional camp have positions they fiercely defend. This does not necessarily invalidate the research, but the affiliations are worth bearing in mind*.

Debunking the leading myths about standardized testing

Measuring Success starts by examining some of the most prevalent myths about college admissions testing. Drawing on meta-analyses and large-scale, representative samples, lead authors Nathan Kuncel and Paul Sackett reach the following conclusions:

  • The SAT and ACT are valid predictors of college performance
    Kuncel and Sackett found that the SAT and ACT are valid predictors of college GPA from freshman year through the completion of college. While High School Grade Point Average (HSGPA) was the single best predictor of college performance, test scores were close behind it, and the combination of HSGPA and test scores yielded the strongest predictions.
  • The SAT and ACT predict more than grades and graduation rates
    SAT and ACT scores predict course-taking behaviors: students with higher scores choose more difficult majors and more advanced courses with harsher grading standards. “Evidence suggests that students with strong test scores end up better educated, even at the same institution. They tend to pursue more challenging majors and take more difficult and advanced courses while earning the best grades.”
  • Tests have more predictive power than other measures in college admissions
    The authors acknowledge a correlation between test scores and socioeconomic status, but argue that the tests retain their statistical validity when controlling for class. Within each income category, they found tremendous score variation, almost as large as the variation across the entire sample.

Methods matter: key test-optional study flawed

If one study has bolstered the test-optional movement in recent years, it is the Defining Promise study published by former Bates Dean of Admissions Bill Hiss and his colleague, Valerie Franks. Before this research was published, test-optional studies had been limited in scope and generalizability. Hiss and Franks analyzed data on 122,916 students from 33 test-optional schools and found no significant difference in graduation rates between students who submitted test scores for admission and those who did not. Presented in 2014, the Defining Promise study influenced many colleges to adopt test-optional policies.

Several authors in Measuring Success call into question the validity of the Defining Promise study, pointing to flaws in its methodology:

Jerome A. Lucido, the dean of enrollment at the University of Southern California’s School of Education and a former trustee of the College Board, notes that 56% of the sample of “nonsubmitters” was composed of public university students who had, in fact, submitted test scores but who had been admitted on the basis of GPA and coursework alone. Similarly, Rebecca Zwick, a researcher at Educational Testing Service, notes that some students who had submitted test scores were nonetheless counted as “nonsubmitters” because they were admitted under percentile plans (e.g., Texas’s 10% admission plan, which requires public universities to automatically accept students who graduate in the top 10% of their class).

Zwick also draws attention to the problem caused by the study’s data aggregation. Hiss and Franks drew conclusions based on the collective data pool, but a “more nuanced story” appears when the data is disaggregated by school. For 13 of the 15 private schools that provided graduation data, submitters were more likely than nonsubmitters to graduate; furthermore, at 15 of the 18 private schools that provided relevant data, test submitters attained higher final college GPAs. Zwick points out that nonsubmitters were also found to be much less likely to major in STEM programs, and that Hiss and Franks offered no evidence that going test optional increased diversity at the schools studied, although that is often a stated goal of such policies.
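
Zwick’s aggregation concern is, in essence, a warning about Simpson’s paradox: a gap that exists within every school can shrink or even reverse in the pooled numbers if one group is concentrated at schools with higher overall graduation rates. The toy example below, with invented numbers not drawn from the study, is a minimal sketch of how this can happen:

```python
# Toy numbers (invented for illustration, not from Defining Promise):
# within each school, submitters graduate at a higher rate, yet the
# pooled comparison points the other way, because nonsubmitters are
# concentrated at the school with the higher overall graduation rate.
data = {
    "School A": {"submitters": (92, 100), "nonsubmitters": (900, 1000)},
    "School B": {"submitters": (620, 1000), "nonsubmitters": (60, 100)},
}

totals = {"submitters": [0, 0], "nonsubmitters": [0, 0]}
for school, groups in data.items():
    for group, (grads, enrolled) in groups.items():
        print(f"{school} {group}: {grads / enrolled:.1%} graduated")
        totals[group][0] += grads
        totals[group][1] += enrolled

for group, (grads, enrolled) in totals.items():
    print(f"Pooled {group}: {grads / enrolled:.1%} graduated")
```

In this example, submitters lead by two percentage points within each school, yet the pooled rates show nonsubmitters ahead by more than twenty. This is why disaggregating by school, as Zwick does, can tell a very different story than the combined data pool.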

Grade inflation weakens the predictive value of HSGPA and strengthens the value of testing

One claim often made in support of test-optional admissions is that HSGPA is a better indicator of freshman GPA than test scores. Perhaps the most frequently cited study offered in support of this position is Crossing the Finish Line, which was published in 2009 by William Bowen, Matthew Chingos, and Michael S. McPherson and which looked at outcomes for 150,000 students who entered one of 68 colleges in 1999.

In Measuring Success, Michael Hurwitz, Senior Director at the College Board, and Jason Lee, Research Director of the Tennessee Higher Education Commission, argue that grade inflation and grade compression have undermined the findings of Bowen et al. The US Department of Education reported that average high school grades rose from 2.68 to 2.94 between 1990 and 2000, and the College Board reported that the average GPA of SAT test takers increased from 3.27 to 3.38 between 1998 and 2016. These effects are even more pronounced among the highest-achieving students: for students with SAT scores above 1300, the College Board reported a 28% decrease in the variance of HSGPA between 1998 and 2016. Hurwitz and Lee warn that continued grade inflation and crowding at the top may increase reliance on standardized tests as an anchoring measure and may render test-optional policies unsustainable.
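
Why would crowding at the top weaken HSGPA as a predictor? When more students pile up against the 4.0 ceiling, grades carry less information about real differences in preparation. The simulation below is a minimal sketch of that mechanism, using invented parameters rather than the authors’ actual model:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Latent preparation drives college GPA; HSGPA is a noisy readout of
# the same trait, capped at 4.0. As grades inflate upward, students
# pile up at the ceiling, HSGPA variance shrinks, and its correlation
# with the college outcome drops.
ability = [random.gauss(0, 1) for _ in range(10_000)]
college_gpa = [a + random.gauss(0, 1) for a in ability]

for inflation in (0.0, 0.5, 1.0):
    hsgpa = [min(4.0, 3.0 + 0.5 * a + inflation + random.gauss(0, 0.2))
             for a in ability]
    print(f"inflation={inflation:.1f}: "
          f"r(HSGPA, college GPA) = {corr(hsgpa, college_gpa):.2f}")
```

In this toy model, each step of inflation pushes more students to a flat 4.0, and the correlation between HSGPA and the college outcome falls accordingly, which is the pattern Hurwitz and Lee describe.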

To determine how grade inflation may be affecting college admissions, Hurwitz teamed up with Meredith Welch, another College Board researcher, to replicate the Bowen et al. study with the 2009 student cohort. Hurwitz and Welch looked at data from students entering the same 68 colleges as in the original study and found that the power of HSGPA to predict 4- and 6-year graduation rates had significantly declined: “Across most selectivity groupings, the predictive value of HSGPA has diminished over time while the predictive value of the SAT has increased.”

Ironically, an increased reliance on grades over test scores may actually hurt the very students colleges say they hope to protect with such policies. High school grades have been rising for decades, but not all boats have risen equally. Hurwitz and Lee call attention to the unequal distribution of grade inflation, which is most likely to benefit students who are white, Asian, wealthy, or from private schools. They found that the rate of GPA increase in private schools between 1998 and 2016 was three times as large as the increase observed in public schools (a gain of .26 GPA points compared to .08), and the inflationary gains were greatest at schools with relatively fewer black or Hispanic students.

Schools may optimize retention and graduation rates by marrying HSGPA and test scores rather than relying on either in isolation

One argument made in Measuring Success is that the combination of HSGPA and test scores is more useful than either in isolation. One example offered is the University of Oregon, whose enrollment managers integrated test scores into their merit scholarship program in the fall of 2013. In the years since, the university has achieved record-setting average HSGPA and test scores among entering students, increased student retention, and is on track for record 4- and 6-year graduation rates. The enrollment managers found that “by accounting for both HSGPA and test score, we achieved a more nuanced understanding of the academic preparation of these students.” Students with low test scores or a low HSGPA were less likely to succeed than those with strength in both areas. Analysis of their student data revealed that “test score was a valuable academic preparation metric to consider and offset many of the weaknesses of HSGPA… A combination of the two best predicted who was most prepared to succeed at the UO academically.”
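
To make the idea of marrying the two measures concrete, the sketch below fits a simple model on simulated data; the variables and numbers are hypothetical and this is not the University of Oregon’s actual analysis:

```python
# Toy illustration (simulated data): two noisy measures of the same
# underlying preparation predict a binary "success" outcome better
# together than either does alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 5_000

prep = rng.normal(0, 1, n)                   # latent academic preparation
hsgpa = 0.6 * prep + rng.normal(0, 0.8, n)   # noisy signal of preparation
sat = 0.6 * prep + rng.normal(0, 0.8, n)     # second, partly independent signal
success = (prep + rng.normal(0, 1, n) > 0).astype(int)

for name, X in [("HSGPA only", hsgpa[:, None]),
                ("SAT only", sat[:, None]),
                ("HSGPA + SAT", np.column_stack([hsgpa, sat]))]:
    model = LogisticRegression().fit(X, success)
    auc = roc_auc_score(success, model.predict_proba(X)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

Because each measure contains noise the other does not share, the combined model recovers more of the underlying signal, which mirrors the Oregon finding that each metric offsets weaknesses of the other.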

Researchers at UGA and Michigan State suggest that self-serving reasons may drive schools towards test-optional policies

Many schools that adopt test-optional admissions policies publicly frame the change as an effort to build a more diverse student body. Measuring Success cites findings and statements from several sources suggesting that schools may also adopt these policies for more self-serving reasons:

  • In 2014, Andrew Belasco, Kelly Rosinger, and James Hearn, three researchers from the University of Georgia, examined admissions policies at 180 selective liberal arts colleges, including 32 that had test-optional admissions between 1992 and 2010. They found that schools that moved to test-optional policies did not enroll more underrepresented minorities or Pell Grant recipients. However, these schools did see increased application numbers, selectivity, and test scores, all of which benefited the schools.
  • Kyle Sweitzer, Emiko Blalock, and Dhruv Sharma, researchers from Michigan State, used a more sophisticated college-matching model (a propensity score matching analysis; a minimal sketch of the technique follows this list) to examine the effects of going test optional. The team found that the shift to test optional can be expected to lift a school’s average SAT score by about 10 points, yield roughly 100 additional applications (a change that was not statistically significant), and leave racial and ethnic diversity essentially unchanged.
  • Jerome Lucido of USC cited the publicity bump that accompanies a move to test optional and the subsequent boost in the number of applicants as a key driver in the decision to adopt these policies.
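
For readers unfamiliar with propensity score matching: the idea is to pair each school that adopted a test-optional policy with a non-adopting school that looked similar beforehand, so that outcome differences are less likely to reflect pre-existing differences between the two groups. The sketch below shows the general technique on simulated data with hypothetical covariates; it is not the Michigan State team’s actual model:

```python
# A simplified sketch of propensity score matching (simulated data;
# not the Michigan State study's model or variables).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 400

# Hypothetical pre-policy characteristics for each school.
X = np.column_stack([
    rng.normal(1150, 120, n),   # mean SAT before the policy change
    rng.normal(0.35, 0.10, n),  # acceptance rate
    rng.normal(8000, 3000, n),  # applications received
])
treated = rng.random(n) < 0.2   # True = school went test optional

# 1. Model each school's probability of adopting the policy.
scores = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each adopter to the non-adopter with the nearest score.
control_idx = np.flatnonzero(~treated)
matches = {
    t: control_idx[np.argmin(np.abs(scores[control_idx] - scores[t]))]
    for t in np.flatnonzero(treated)
}

# 3. Compare outcomes across matched pairs (here, a fake ~10-point
#    bump in average SAT is built into the simulated outcome).
outcome = rng.normal(0, 15, n) + 10 * treated
effect = np.mean([outcome[t] - outcome[c] for t, c in matches.items()])
print(f"Estimated effect on average SAT: {effect:.1f} points")
```

Matching on an estimated propensity score, rather than on raw covariates, lets one comparison school stand in for each adopter even when no school matches on every characteristic at once.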

The writers of Measuring Success acknowledge that a number of schools that have shifted to test-optional admissions have reported gains in diversity, but they also note that many schools that continue to require testing are admitting more diverse student bodies as well. They warn that schools should be careful about attributing any such gains to the policy change itself.

Conclusion

After evaluating the evidence, the editors of Measuring Success conclude that enrollment managers have “ample reason” to consider test scores a gauge of academic potential and to use them in college admissions. They acknowledge that colleges and universities must resolve the question of how much relative weight to assign to testing, but caution that admissions officers will lose important information if they abandon standardized testing wholesale.

Will this new volume slow the rise of test-optional admissions? Or is the allure of going test optional too strong? Many highly selective institutions will likely maintain their testing requirements, and this book provides them with evidence to support their positions. Many others, struggling to meet their enrollment goals, are likely to join the test-optional movement. One thing is clear: the debate between testing apologists and testing critics is unlikely to disappear any time soon.

*As the founder of a testing company, I obviously have a horse in the race, too; however, this article is a summary of the claims in Measuring Success, not a statement of my own position.
