ERIC®/AE Digest Series EDO-TM-98-08 August 1998
The Catholic University of America Department of Education


Seven Myths about Literacy in the United States

Jeff McQuillan, California State University, Fullerton

Adapted with permission from The Literacy Crisis: False Claims, Real Solutions (1998) by Jeff McQuillan. Portsmouth, NH: Heinemann.

Serious problems exist with reading achievement in many United States schools. However, much of the commonly accepted wisdom about the academic performance of United States students is false. The best evidence we have on reading achievement indicates that, on average, no crisis exists in United States reading. The purpose of this digest is to examine seven of the most prevalent, and most damaging, myths about literacy achievement in the United States.

Myth 1: Reading Achievement in the United States Has Declined in the Past Twenty-five Years

The best evidence on reading achievement in the United States comes from a national system of examinations established in the late 1960s by the federal government to determine how United States schoolchildren were performing in a variety of school subjects. These exams, known as the National Assessment of Educational Progress (NAEP), are important barometers of educational achievement. They are given nationally to a representative sample of United States children.

When the test was first administered in 1971, the average reading proficiency score was 208 for nine-year-olds, 255 for thirteen-year-olds, and 285 for seventeen-year-olds. The most recent administration of the test (1996) yielded average scores of 212 for nine-year-olds, 259 for thirteen-year-olds, and 287 for seventeen-year-olds. These scores indicate that, despite a few minor shifts, reading achievement has either held steady or increased over the past twenty-five years.

Myth 2: Forty Percent of U.S. Children Can't Read at a Basic Level

During the early years of the NAEP tests, the United States Department of Education released only the raw scores for each age level on its 0-to-500 scale, with no designation of which score was thought to constitute "basic knowledge" or "proficiency." The designers of the NAEP test later decided that simply reporting raw scores was no longer adequate for judging the progress of United States schools. The Department decided it would determine how well students were reading by establishing cutoff scores to define "below basic," "basic," "proficient," and "advanced" reading. The "basic" level for fourth-grade reading, for example, was fixed at a score of 208. In 1994, 40% of United States fourth-graders scored below the "basic" cutoff of 208.

The problem with this approach lies in "objectively" determining where these cutoff points should be. Glass (1978), after reviewing the various methods proposed for creating "minimal" criterion scores of performance, concluded that all such efforts are necessarily arbitrary. Of course, such arbitrary cutoff points already exist in education and many other fields, but at least they are recognized as arbitrary and not given the status of absolute or objective levels of competence. In 1991, the General Accounting Office (GAO) examined how the NAEP defined its proficiency levels and found the methods questionable (Chelimsky, 1993).

Myth 3: Twenty Percent of Our Children Are Dyslexic

Closely related to the previous misconception that 40% of our students read below the "basic" level is another portentous-sounding figure indicating that 20% of United States schoolchildren suffer from a "neuro-behavioral disorder" known as "dyslexia" (Shaywitz et al., 1996). The research most often cited to support this claim is drawn from the results of the Connecticut Longitudinal Study (CLS), a large-scale project funded in part by the National Institute of Child Health and Human Development (e.g., Shaywitz, Escobar, Shaywitz, Fletcher, & Makuch, 1992; Shaywitz, Fletcher, & Shaywitz, 1994). The CLS tracked over 400 students from kindergarten through young adulthood, periodically measuring their Intelligence Quotient (IQ), reading achievement, and mathematical abilities, among other attributes.

CLS researchers measured "reading disability" in two ways. The first used what are known as "discrepancy scores," which represent the difference between a child's actual reading achievement and the achievement predicted from his or her IQ. The idea is that if you have a high IQ but are poor at reading, then something must be wrong with you. The size of the discrepancy used in the CLS studies was that recommended by the United States Department of Education: 1.5 standard deviations. This 1.5 standard deviation figure was thus the "cutoff" used to determine who was "reading disabled" and who was not. In any given year, a little less than 8 percent of the children fall into the category of reading disabled using the 1.5 cutoff.

Two important things need to be noted about these results. First, and most importantly, the 1.5 standard deviation cutoff point is arbitrary. We could just as easily have used 1.25 or 1.75 or 0.5, each producing a different percentage of "neuro-behaviorally" afflicted children. Second, even the 8% have not been shown in this research to be "dyslexic," if by "dyslexic" we mean a "neurologically based disorder in which there is unexpected failure to read," the definition used by the CLS team (S. Shaywitz et al., 1992, p. 145; emphasis added). This is because, quite simply, no neurological measures were administered to these particular children. All that can be said from these findings is that around 8 percent of children in any given year will have a discrepancy of 1.5 standard deviations between their IQ and reading achievement, at least if they live in Connecticut.
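
To see how much the reported percentage depends on where the cutoff is drawn, consider a minimal sketch in Python (not from the CLS or this digest) that simply assumes standardized IQ-reading discrepancy scores follow a roughly normal distribution:

    from statistics import NormalDist

    # Standardized discrepancy scores (actual reading achievement minus the
    # level predicted from IQ), assumed here to be Normal(0, 1).
    discrepancy = NormalDist(mu=0.0, sigma=1.0)

    for cutoff in (0.5, 1.0, 1.25, 1.5, 1.75, 2.0):
        share = discrepancy.cdf(-cutoff)  # proportion falling below -cutoff SDs
        print(f"cutoff = {cutoff:>4} SD -> {share:5.1%} labeled 'reading disabled'")

Under that assumption, roughly 31 percent of children fall below a 0.5 standard deviation cutoff, about 11 percent below 1.25, and about 7 percent below 1.5. The exact CLS figures differ, but the pattern is the point: the apparent prevalence of the "disorder" is largely manufactured by the choice of cutoff.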

Myth 4: Children from the Baby-Boomer Generation Read Better than Students Today

Some argue that today's reading levels are dismal compared to those of the 1940s or 1950s. The evidence for this claim comes from a study of adult literacy, the National Adult Literacy Survey (NALS), which was given to a representative sample of United States adults in 1992 (Kirsch, Jungeblut, Jenkins, & Kolstad, 1993). McGuinness (1997) notes that those who learned to read between the mid-1950s and the mid-1960s have higher reading scores than those of later generations.

Can we really measure the effectiveness of schools 40 years ago by how well their graduates read today? What about the intervening 30 years of reading experience and education? We should hardly expect the reading proficiency of these adults to remain stagnant over time. Surely the reading scores of this group of 35-to-44-year-olds from when they were still enrolled in school are better indicators of how well they performed as children, since fewer intervening variables would then exist to confound the results. We do, in fact, have reading achievement scores from a representative sample of children of this age cohort in the form of the high school NAEP scores from 1971 (for those who entered first grade in 1959 and were 38 at the time of the NALS administration). Their scores are not much different from those of more recent graduates.

Myth 5: Students in the United States Are Among the Worst Readers in the World

What may come as most surprising to many people is how the United States compares internationally in reading achievement: Our nine-year-olds ranked second in the world in the most recent round of testing conducted by the International Association for the Evaluation of Educational Achievement (IEA), and our fourteen-year-olds ranked a very respectable ninth out of 31 countries. A dissenting opinion on just how well United States schoolchildren perform over time and internationally is held by Walberg (1996), who argues that reading achievement has in fact declined since the early 1970s. Walberg compared the IEA scores from 1990-91 to the first IEA test, given to 15 nations in 1970, with the scores from the two tests equated (Lietz, 1995, cited in Walberg). Walberg (1996) concluded that the scores did indeed decline, from 602 in 1970 to 541 in 1991 (using his adjusted scores).

Two problems exist with this analysis, however. First, it is not clear why two IEA tests given 22 years apart should be preferred for measuring trends in United States reading performance over the United States Department of Education's own NAEP exam, which has not only been given more frequently (nine times since 1970) but was also designed to be sensitive to a broader range of reading achievement (Binkley & Williams, 1996) than the IEA tests. Second, the IEA test has changed considerably since its first administration in 1970 (Elley, 1994). Unfortunately, the reanalysis of the scores upon which Walberg bases his comparisons is unpublished, making it difficult to know precisely how these "equated" scores were derived from what were markedly different tests.

Myth 6: The Number of Good Readers Has Been Declining

Some critics have claimed that the number of students "at the top" has been declining (e.g., Murray & Herrnstein, 1992; Coulson, 1996). While it is true that the percentage of students scoring above 700 on the SAT did decline, it was never high (2.3 percent in 1966, 1.2 percent in 1995). Also, the large demographic changes in United States schools over the past three decades have almost certainly had an influence on the scores. Bracey (1997) points out that the drops occurred primarily between 1966 and 1972; since then, the percentage of students scoring above 700 has remained stable. Moreover, two studies that attempted to control for the significant demographic shifts in the test pool since the early 1950s found that the average declines during the 1960s and 1970s were rather small (Bracey, 1997).

However, the most important point to keep in mind when discussing the SAT is that it is not given to a representative sample of United States high school students. It is a voluntary test taken by a large proportion of students in some states (e.g., New York) and by hardly any students in other states (e.g., Iowa). The NAEP tests, by contrast, are representative, and they indicate no decline in the percentage of students who score at the highest levels. Little change has occurred in the percentage of high-scoring students at any grade level, and the percentage of thirteen-year-olds scoring at the top levels has actually increased over the past three decades.

Myth 7: California's Test Scores Declined Dramatically Due to Whole Language Instruction

In addition to finding a crisis where none exists, critics have also found it necessary to produce a guilty party to blame for our greatly exaggerated woes (Levine, 1996; Stewart, 1996): "whole language." These attacks have centered primarily on California, a state that at least nominally adopted a more "holistic" view of teaching language arts in 1987, a move that supposedly led to a steep decline in reading scores.

Two points are at issue in the case of California and its reading crisis. First, did California's reading test scores really "plummet" (Stewart, 1996, p. 23) to record lows after 1987? Second, was any such decline attributable to the state's adoption in 1987 of a reading curriculum (CRTFR, 1995) that emphasized reading books while decreasing (but not eliminating) phonics and skills instruction? It turns out that the answer to both questions is "no."

The popular wisdom about California's decline stemmed mostly from the release of two sets of test scores: the 1992 and 1994 NAEP scores, and results of the state's own California Learning Assessment System (CLAS). In both the 1992 and 1994 state NAEP rankings, California fared rather poorly: In 1992, the state was in the bottom third, and in 1994, in the bottom quarter (Campbell, Donahue, et al., 1996). Although California's students clearly performed poorly compared to the rest of the nation, one must look at scores from both the beginning and the end of the period in question to show a decline. Unfortunately, state-level NAEP scores are unavailable before 1992, and the tests are not equivalent to any other standardized reading measure. As such, the NAEP data cannot tell us anything about whether scores went up or down after the implementation of the literature-based curriculum. The only test score data available both before and after the implementation of the "holistic" 1987 Language Arts Framework are the California Achievement Program scores, and they show no dramatic drops or increases.

The second part of the argument used to promote a renewed emphasis on skills instruction was that whole language caused California's (nonexistent) decline and (very real) low national ranking. Is a literature-based curriculum or whole language to blame? Another look at the 1992 NAEP data reveals that the answer appears to be "no." As part of the assessment, fourth-grade teachers were asked to characterize their methodological approach to reading as "whole language," "literature based," and/or "phonics." The average scores for each approach were then compared, and children in classrooms with a heavy emphasis on phonics clearly did the worst. Children in whole-language-emphasis classrooms (reported by 40 percent of the teachers) had an average score of 220, those in literature-based classrooms (reported by 49 percent of the teachers) had an average score of 221, and students in phonics classrooms (reported by 11 percent of the teachers) had an average score of 208 (NCES, 1994, p. 284).

Conclusion

Many things are wrong with United States schools. However, false crises and distorted views of student achievement can only distract us from the real concerns of parents, teachers, and policymakers. Instead, we need a clear understanding of what reading is and of the most important factors that influence reading achievement.

References

Binkley, M., & Williams, T. (1996). Reading literacy in the United States: Findings from the IEA reading literacy study. Washington, DC: National Center for Education Statistics.

Bracey, G. (1997). Setting the record straight: Responses to misconceptions about public education in the United States. Alexandria, VA: Association for Supervision and Curriculum Development.

Campbell, J., Donahue, P., et al. (1996). NAEP 1994 reading report card for the nation and the states. Washington, DC: U.S. Department of Education.

Chelimsky, E. (1993). National Assessment Governing Board (NAGB) achievement levels: Interim letter report. Washington, DC: General Accounting Office. (ERIC Document Reproduction Service No. ED342821)

Coulson, A. (1996). Schooling and literacy over time: The rising cost of stagnation and decline. Research in the Teaching of English, 30, 311-327.

Elley, W. (1994). Preface. In W. Elley (Ed.), The IEA study of reading literacy: Achievement and instruction in thirty-two school systems (pp. xxi-xxii). Oxford, England: Pergamon.

Glass, G. (1978). Standards and criteria. Journal of Educational Measurement, 15, 237-261.

Kirsch, I., Jungblut, A., Jenkins, L., & Kolstad, A. (1993). Adult literacy in America: A first look at the results of the National Adult Literacy Survey. Washington, DC: National Center for Education Statistics.

Levine, A. (1996). America's reading crisis: Why the whole language approach to teaching reading has failed millions of children. Parents, 16, 63-65, 68.

McGuinness, D. (1997). Why our children can't read and what we can do about it: A scientific revolution in reading. New York: The Free Press.

Murray, C., & Herrnstein, R. (1992). What's really behind the SAT-score decline? The Public Interest, 106, 32-56.

National Center for Education Statistics (NCES). (1994). Data compendium for the NAEP 1992 reading assessment of the nation and the states. Washington, DC: U.S. Department of Education.

Shaywitz, S., Escobar, M., Shaywitz, B., Fletcher, J., & Makuch, R. (1992). Evidence that dyslexia may represent the lower tail of a normal distribution of reading ability. The New England Journal of Medicine, 326(3), 145-150.

Shaywitz, S., Fletcher, J., & Shaywitz, B. (1994). Issues in the definition and classification of attention deficit disorder. Topics in Language Disorders, 14(4), 1-25.

Shaywitz, B., et al. (1996). The Yale Center for the Study of Learning and Attention: Longitudinal and neurobiological studies. Paper presented at the Annual Meeting of IDA, Dallas, TX.

Stewart, J. (1996). The blackboard bungle: California's failed reading experiment. LA Weekly, 18(14), 22-29.

Walberg, H. (1996). U.S. schools teach reading least productively. Research in the Teaching of English, 30, 328-343.


ERIC Clearinghouse on Assessment and Evaluation, 210 O'Boyle Hall,
The Catholic University of America, Washington, DC 20064 * 800 464-3742


This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract RR93002002. The opinions expressed in this report do not necessarily reflect the positions or policies of OERI or the U.S. Department of Education. Permission is granted to copy and distribute this ERIC/AE Digest.
