May 3, 2013
“We’re Number Umpteenth!”
The Myth Of Lagging U.S. Schools
By Alfie Kohn
Beliefs that are debatable or even patently false may be repeated so often that at some point they come to be accepted as fact. We seem to have crossed that threshold with the claim that U.S. schools are significantly worse than those in most other countries. Sometimes the person who parrots this line will even insert a number — “We’re only ____th in the world, you know!” — although, not surprisingly, the number changes with each retelling.
The assertion that our students compare unfavorably to those in other countries has long been heard from politicians and corporate executives whose goal is to justify various “get tough” reforms: high-stakes testing, a nationalized curriculum (see under: Common Core “State” Standards), more homework, a longer school day or year, and so on.
But by now the premise is so widely accepted that it’s casually repeated by just about everyone — including educators, I’m sorry to say — and in the service of a wide range of prescriptions and agendas, including some that could be classified as progressive. Recently I’ve seen it used in a documentary arguing for more thoughtful math instruction, a petition to promote teaching the “whole child,” and an article in a popular on-line magazine that calls for the abolition of grades (following a reference to “America’s long steady decline in education”).
Unsurprisingly, this misconception has filtered out to the general public. According to a brand-new poll, a plurality of Americans — and a majority of college graduates! — believe (incorrectly) that American 15-year-olds are at the bottom when their scores on tests of science knowledge are compared to those of students in other developed countries.[1]
A dedicated group of education experts has been challenging this canard for years, but their writings rarely appear in popular publications, and each of their efforts at debunking typically focuses on just one of the many problems with the claim. Here, then, is the big picture: a concise overview of the multiple responses you might offer the next time someone declares that American kids come up short. (First, though, I’d suggest politely inquiring as to the evidence for his or her statement. The wholly unsatisfactory reply you’re likely to receive may constitute a rebuttal in its own right.)
1. Even taking the numbers at face value, the U.S. fares reasonably well. Results will vary depending on the age of the students being tested, the subject matter, which test is involved, and which round of results is being reported. It’s possible to cherry-pick scores to make just about any country look especially good or bad. U.S. performance is more impressive when the focus is on younger students, for example — so, predictably, it’s the high school numbers that are most often cited. When someone reduces our schools to a single number, you can bet it’s the one that casts them in the worst possible light.
But even with older students, there may be less to the bad news than meets the eye. As an article in Scientific American noted a few years back, most countries’ science scores were actually pretty similar.[2] That’s worth keeping in mind whenever a new batch of numbers is released. If there’s little (or even no) statistically significant difference among, say, the nations placing third through tenth, it would be irresponsible to cite those rankings as if they were meaningful.
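To make that concrete, here is a minimal sketch in Python, using invented mean scores and sampling errors rather than actual PISA or TIMSS figures. A league table always produces a tidy-looking order, but when the gaps between countries fall within the margin of error, the order itself tells us essentially nothing.

```python
from itertools import combinations

# Hypothetical mean scores and standard errors -- NOT real PISA or
# TIMSS results; the numbers exist only to illustrate the point.
countries = {
    "Country A": (527, 3.1),   # (mean score, standard error)
    "Country B": (524, 2.8),
    "Country C": (522, 3.4),
    "Country D": (519, 2.9),
    "Country E": (516, 3.2),
}

# The league table: rank countries by mean score.
ranked = sorted(countries.items(), key=lambda kv: kv[1][0], reverse=True)
print("Rank  Country     Mean   95% CI")
for rank, (name, (mean, se)) in enumerate(ranked, start=1):
    low, high = mean - 1.96 * se, mean + 1.96 * se
    print(f"{rank:>4}  {name:<10} {mean:>5}   [{low:.1f}, {high:.1f}]")

def distinguishable(a, b):
    """True only if the gap between two means exceeds the 95% margin of
    error for the difference -- i.e., the pair's relative rank reflects
    more than sampling noise."""
    (m1, s1), (m2, s2) = countries[a], countries[b]
    return abs(m1 - m2) > 1.96 * (s1 ** 2 + s2 ** 2) ** 0.5

pairs = list(combinations(countries, 2))
blurred = [pair for pair in pairs if not distinguishable(*pair)]
print(f"\n{len(blurred)} of {len(pairs)} pairwise gaps fall within the "
      "margin of error, despite the tidy-looking ranking.")
```

With these made-up numbers, nine of the ten pairwise gaps fall within the margin of error: the ranking looks decisive, but most of it is noise.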
Overall, when a pair of researchers carefully reviewed half a dozen different international achievement surveys conducted from 1991 to 2001, they found that “U.S. students have generally performed above average in comparisons with students in other industrialized nations.”[3] And that still seems to be the case based on the most recent data, which include math and science scores for grade 4, grade 8, and age 15, as well as reading scores for grade 4 and age 15. Of those eight results, the U.S. scored above average in five, average in two, and below average in one.[4] Not exactly the dire picture that’s typically painted.
2. What do we really learn from standardized tests? While there are differences in quality between the most commonly used exams (e.g., PISA, TIMSS), the fact is that any one-shot, pencil-and-paper standardized test — particularly one whose questions are multiple-choice — offers a deeply flawed indicator of learning as compared with authentic classroom-based assessments.[5] The former taps students’ skill at taking standardized tests, which is a skill unto itself; the latter taps what students have learned, what sense they make of it, and what they can do with it. A standardized test produces a summary statistic labeled “student achievement,” which is very different from a narrative account of students’ achievements. Anyone who cites the results of a test is obliged to defend the construction of the test itself, to show that the results are not only statistically valid but meaningful. Needless to say, very few people who say something like “the U.S. is below average in math” have any idea how math proficiency has been measured.
3. Are we comparing apples to watermelons? Even if the tests were good measures of important intellectual proficiencies, the students being tested in different countries aren’t always comparable. As scholars Iris Rotberg and the late Gerald Bracey have pointed out for years, some countries test groups of students who are unrepresentative with respect to age, family income, or number of years spent studying science and math. The older, richer, and more academically selective a cohort of students in a given country, the better that country is going to look in international comparisons.[6]
4. Rich American kids do fine; poor American kids don’t. It’s ridiculous to offer a summary statistic for all children at a given grade level in light of the enormous variation in scores within this country. To do so is roughly analogous to proposing an average pollution statistic for the United States that tells us the cleanliness of “American air.” Test scores are largely a function of socioeconomic status. Our wealthier students perform very well when compared to other countries; our poorer students do not. And we have a lot more poor children than do other industrialized nations. One example, supplied by Linda Darling-Hammond: “In 2009 U.S. schools with fewer than 10 percent of students in poverty ranked first among all nations on PISA tests in reading, while those serving more than 75 percent of students in poverty scored alongside nations like Serbia, ranking about fiftieth.”[7]
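A back-of-the-envelope sketch, again in Python with invented numbers rather than actual test data, shows why a single national figure obscures more than it reveals: hold each subgroup’s performance constant, change only the share of students living in poverty, and the “national” average drops even though not a single school has gotten any better or worse.

```python
# Invented subgroup scores, held constant across both scenarios; the
# only thing that differs between the two "countries" is child poverty.
SCORE_LOW_POVERTY_SCHOOLS = 540    # hypothetical average, affluent schools
SCORE_HIGH_POVERTY_SCHOOLS = 460   # hypothetical average, high-poverty schools

def national_average(poverty_rate):
    """Weighted average of the two fixed subgroup scores."""
    return (poverty_rate * SCORE_HIGH_POVERTY_SCHOOLS
            + (1 - poverty_rate) * SCORE_LOW_POVERTY_SCHOOLS)

for label, rate in [("Country with 10% of children in poverty", 0.10),
                    ("Country with 25% of children in poverty", 0.25)]:
    print(f"{label}: national average = {national_average(rate):.0f}")

# Identical schools, identical subgroup performance -- yet the country
# with more poor children posts a lower "national" score.
```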
5. Why treat learning as if it were a competitive sport? All of these results are presented as rankings rather than ratings, which means the question of educational success has been framed in terms of who’s beating whom. This is troubling for several reasons.
a) Education ≠ economy. If our reason for emphasizing students’ relative standing, rather than their absolute achievement, has to do with “competitiveness in the 21st-century global economy” (a phrase that issues from politicians, businesspeople, and journalists with all the thoughtfulness of a sneeze), then we would do well to ask two questions. The first, based on values, is whether we regard educating children as something that’s primarily justified in terms of corporate profits.
The second question, based on facts, is whether the state of a nation’s economy is meaningfully affected by the test scores of students in that nation. Various strands of evidence have converged to suggest that the answer is no. For individual students, school achievement is only weakly related to subsequent workplace performance. And for nations, there’s little correlation between average test scores and economic vigor, even if you try to connect scores during one period with the economy some years later (when that cohort of students has grown up).[8] Moreover, Yong Zhao has shown that “PISA scores in reading, math, and sciences are negatively correlated with entrepreneurship indicators in almost every category at statistically significant levels.”[9]
b) Why is the relative relevant? Once we’ve refuted the myth that test scores drive economic success, what reason would we have to fret about our country’s standing as measured by those scores? What sense does it make to focus on relative performance? After all, to say that our students are first or tenth on a list doesn’t tell us whether they’re doing well or poorly; it gives us no useful information about how much they know or how good our schools are. If all the countries did reasonably well in absolute terms, there would be no shame in being at the bottom. (Nor would “average” be synonymous with “mediocre.”) If all the countries did poorly, there would be no glory in being at the top. Exclamatory headlines about how “our” schools are doing compared to “theirs” suggest that we’re less concerned with the quality of education than with whether we can chant, “We’re Number One!”
c) Hoping foreign kids won’t learn? To focus on rankings is not only irrational but morally offensive. If our goal is for American kids to triumph over those who live elsewhere, then the implication is that we want children who live in other countries to fail, at least in relative terms. We want them not to learn successfully just because they’re not Americans. That’s built into the notion of “competitiveness” (as opposed to excellence or success), which by definition means that one individual or group can succeed only if others don’t. This is a troubling way to look at any endeavor, but where children are concerned, it’s indefensible. And it’s worth pointing out these implications to anyone who cites the results of an international ranking.
Moreover, rather than defending policies designed to help our graduates “compete,” I’d argue that we should make decisions on the basis of what will help them learn to collaborate effectively. Educators, too, ought to think in terms of working with – and learning from – their counterparts in other countries so that children everywhere will become more proficient and enthusiastic learners. But every time we rank “our” kids against “theirs,” that outcome becomes a little less likely.
NOTES
1. Pew Research Center for People and the Press, “Public’s Knowledge of Science and Technology,” April 22, 2013. Available at: www.people-press.org/2013/04/22/publics-knowledge-of-science-and-technology/.
2. W. Wayt Gibbs and Douglas Fox, “The False Crisis in Science Education,” Scientific American, October 1999: 87-92.
3. Erling E. Boe and Sujie Shin, “Is the United States Really Losing the International Horse Race in Academic Achievement?” Phi Delta Kappan, May 2005: 688-695.
4. National Center for Education Statistics, Average Performance of U.S. Students Relative to International Peers on the Most Recent International Assessments in Reading, Mathematics, and Science: Results from PIRLS 2006, TIMSS 2007, and PISA 2009, 2011. Available at: http://nces.ed.gov/surveys/international/reports/2011-mrs.asp.
5. See, for example, Alfie Kohn, The Case Against Standardized Testing (Heinemann, 2000); or Phillip Harris et al., The Myths of Standardized Tests (Rowman & Littlefield, 2011).
6. For example, see Iris C. Rotberg, “Interpretation of International Test Score Comparisons,” Science, May 15, 1998: 1030-31.
7. Linda Darling-Hammond, “Redlining Our Schools,” The Nation, January 30, 2012: 12. Also see Mel Riddile, “PISA: It’s Poverty Not Stupid,” The Principal Difference [NASSP blog], December 15, 2010 (http://bit.ly/hiobMC); and Martin Carnoy and Richard Rothstein, “What Do International Tests Really Show About U.S. Student Performance?”, Economic Policy Institute report, January 28, 2013 (http://www.epi.org/publication/us-student-performance-testing/).
8. Keith Baker, “High Test Scores: The Wrong Road to National Economic Success,” Kappa Delta Pi Record, Spring 2011: 116-20; Zalman Usiskin, “Do We Need National Standards with Teeth?” Educational Leadership, November 2007: 40; and Gerald W. Bracey, “Test Scores and Economic Growth,” Phi Delta Kappan, March 2007: 554-56. “The reason is clear,” says Iris Rotberg. “Other variables, such as outsourcing to gain access to lower-wage employees, the climate and incentives for innovation, tax rates, health-care and retirement costs, the extent of government subsidies or partnerships, protectionism, intellectual-property enforcement, natural resources, and exchange rates overwhelm mathematics and science scores in predicting economic competitiveness” (“International Test Scores, Irrelevant Policies,” Education Week, September 14, 2001: 32).
9. Yong Zhao, “Flunking Innovation and Creativity,” Phi Delta Kappan, September 2012: 58. Emphasis added.