The icon indicates free access to the linked research on JSTOR.

In recent years, an increasing number of colleges and universities have stopped requiring prospective students to take the SAT. Many argue that the tests are more a measure of parents’ income and ability to pay for test-prep classes than of a student’s potential.


In a 2009 paper for Educational Researcher, Richard C. Atkinson and Saul Geiser looked at the history of the SAT and how it slowly fell from grace.

In the early decades of the twentieth century, the standard admissions test for U.S. colleges was the College Boards. These were curriculum-based tests designed to check whether students’ work in high school had prepared them for college-level material.

In 1926, the Scholastic Aptitude Test was introduced, with a very different goal. Rather than testing students’ mastery of academic material, the SAT promised to assess their aptitude for learning. Stemming from IQ tests used to measure military enlistees’ intelligence during World War I, the SAT rested on the assumption that intelligence was a fixed, inherited attribute.

At the time, that idea was associated with a meritocratic approach toward college admissions. Advocates argued that an aptitude test would find the potential in disadvantaged students who might not have had the chance to excel in the classroom.

Over time, the College Board repeatedly revised the SAT. As the focus on IQ fell out of favor, it was renamed the Scholastic Assessment Test in 1990. In 1996, the name was dropped entirely (today, the SAT doesn’t stand for anything). But Atkinson and Geiser wrote that the point has consistently been to check students’ general analytical abilities as a way of predicting their likelihood of success at college.

A challenge to the SAT as a tool for meritocratic admissions came in the late 1990s, after California eliminated affirmative action in the state university system. University administrators examined their admissions criteria to try to explain why rates of admission for Latinos and African Americans were disproportionately low. What they found was that SAT scores were more closely tied to socioeconomic status than either high school grades or curriculum-based tests like the AP exams. Meanwhile, looking at a database of tests and student performance going back to 1968, they found that the SAT was also somewhat worse than subject tests at predicting student performance.

Atkinson and Geiser argued that it’s time for a shift back to the old notion of admitting students to college based more on their achievements than on supposed measures of their potential. Not only does it seem to be a more accurate kind of assessment, but it also encourages high school students to see their future as something they can control by working hard, rather than a matter of how essentially smart they are.

The shift now taking place at colleges doesn’t address the continuing disparity in students’ educational experiences in the first 13 years of schooling, but it does look like a step in the right direction.


JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Educational Researcher, Vol. 38, No. 9 (Dec., 2009), pp. 665-676
American Educational Research Association