RESPONSE TO CRITICS

Comments in Response to Critics

Originally posted 1999 with permission from Gay Su Pinnell,
The Ohio State University

In response to the online publication, “Reading Recovery: An evaluation of benefits and costs; The claims versus the facts” by B. Grossen and G. Coulter

Reading Recovery has been documented as teaching several hundred thousand young children to read. It is not classroom instruction; it is not “whole language” instruction. It is a one-to-one tutorial for children with difficulties. That definition must be kept in mind when encountering criticisms.

Views of Reading Recovery have been confused with the debate between “whole language” and “phonics.” In fact, a balanced approach is desired for the classroom. Reading Recovery is a special program, designed specifically for at-risk first grade children. More evidence has been collected on Reading Recovery than on any other program. Detailed statistics on all children involved are collected each year. Nevertheless, several documents have emerged that attack Reading Recovery.

The “journal” Effective School Practices was recently started by an organization called the Association for Direct Instruction. The organization is new, and this volume of Effective School Practices is the only publication that has been distributed widely to school administrators. The small book is clearly a quickly printed collection of letters and articles negative to Reading Recovery. For example, there are personal letters from several individuals, including one dissatisfied parent whose child had difficulty and a former professor from San Diego, Patrick Groff, who has made a long career of advocacy for phonics. By no stretch of the imagination could this volume be called a “scientific” journal. It is a collection put together for a political agenda.

The article by Grossen, Coulter, and Ruggles does not report a study. These authors selected from the literature a collection of information, used out of context, to support a certain point of view. In fact, these authors are heavily involved in a program called “direct instruction,” formerly DISTAR, which uses a phonics approach and is sold by SRA, Inc. There are errors and flaws throughout the document. In addition, the authors’ opinions should be treated with caution because of this potential bias.

The authors claim that Reading Recovery’s evaluation is biased because persons responsible for its success collect the data. Reading Recovery has a rigorous and responsible system for collecting a large amount of data on every single child served. These data are used by teacher leaders to improve teaching and implementation. No other program collects data on every child served; most collect none. Others, including the one these authors are associated with, have some studies that were performed by people involved in the projects, but they do not document every school and every pupil. Only Reading Recovery has this extensive data collection and reports it every year. It is ridiculous to criticize teachers and program implementers for the very thing that makes the program unique and that keeps the quality high. For the controlled studies on Reading Recovery that have been published, rigorous data collection standards were maintained and reported.

They also criticize the standard of moving children to above-average levels in their classes. This standard is held because we want Reading Recovery children to be able to participate fully in classroom instruction, which in a large class of children must be pitched to the average. To raise averages in a whole school, the system must also have a comprehensive approach to the improvement of classroom instruction. But, even in classrooms where achievement is very high, a few children will always need extra help to move on with the group. In addition [and this is omitted from the article], there are rigorous criteria that assure children have independent reading skills before being released from the program; that is, there are criteria they must reach no matter what the general level of achievement is in the classroom.

To improve overall reading achievement, two things are necessary: (1) a strong staff development program that also includes reorganization of the time and management system; and (2) a strong intervention that serves the lowest children. Averages may be raised without necessarily helping the lowest group; the lowest group may be helped to average or close-to-average without substantially raising test scores. Both actions are necessary if our schools are to be effective; to date, Reading Recovery has more documentation on success than any other intervention.

Grossen repeats an error formerly published by Shanahan and Barr (1995) in Reading Research Quarterly. Those writers claimed that in the Ohio State study (published in Reading Research Quarterly) half of the data were lost. That was an error, and Shanahan and Barr have apologized for it. Letters were published in subsequent issues of Reading Research Quarterly to correct the error.

Also, the research in question concerned a statewide study rather than Columbus, Ohio, as the Grossen article implies. Grossen simply repeats the error and tries to apply it to Columbus.

The article does mention Columbus, Ohio, and there has been concern about the results there. They do not compare well with other urban centers at this time [although results change from year to year]. In Columbus, Reading Recovery is spread very thin; administrators and teacher leaders have been working hard to get more children through the program. On the other hand, Grossen and others are distorting the facts. Reading Recovery does report data on every child served. About 20% are lost through mobility. Children are not withdrawn from the program without a long testing process. It is true that in the early days of the program we set a criterion of 60 days as constituting a minimal program; we did this with the advice of the Columbus evaluation department, and the success rate is then calculated on that basis. But, taking national data, if we calculate the percentage of all children served, even for one day, the success rate of Reading Recovery is 60%. No other program can make that claim. And each one of these children has shown he or she can read in a skilled way on a variety of measures and has been approved by the classroom teacher as performing in an average range.

Reading Recovery does serve the lowest children in any given school. The procedures require looking across a range of measures. Grossen and others quote Hiebert as saying that the average entry score for children in Reading Recovery is 34.5. In one school or district that has particularly high scores, that could have been the case; however, it would be unusual. In most districts, scores vary widely. Standardized test scores for very young children are unreliable and most districts do not use them today. [Hiebert examined reports from many years ago.] Where standardized measures are used, entry scores are very low.

In the same article quoted by Grossen, Hiebert also said this: “Once a program is in place, there appears to be considerable fidelity in the results. Even when the number of tutees jumps 100%, as it did at OSU from 1986-87 to 1987-88, similar levels of oral reading were maintained with the same percentage of the cohort.” Also in that article, Hiebert acknowledges that Reading Recovery has demonstrated phenomenal growth within the United States and that “a high percentage of Reading Recovery tutees can orally read at least a first-grade level text at the end of Grade 1.”

Over a period of years, between the Reading Recovery program and Grade 4 classroom literacy programs, systemic factors such as subsequent instruction, promotion and disciplinary policies, and individual life circumstances act as intervening variables affecting a student’s progress, despite a successful early intervention. Under such circumstances, showing residual effects of any kind is a remarkable occurrence and indicates that the program has potential given support, higher coverage, and improvement in efficiency and effectiveness. An Australian study, not designed to look for a continuing effect on the progress of individuals involved in Reading Recovery, discovered such an effect as a serendipitous outcome (Rowe, 1988).

In taking the Reading Recovery clientele from the lowest achievers in the classrooms, not excluding any child for any reason, Reading Recovery can offer an educational system savings in reduction of special education referrals, retention, and remedial services, because the children served are virtually an outlier population (see Lyons, 1994). At a time when the LD population has doubled (U.S. Department of Education, 1990) and special education costs are very high (Coopers & Lybrand, 1994), we cannot ignore an approach that has shown potential for savings.

Reading Recovery is intended as a “first net.” As such, it has two positive outcomes: (1) successful completion of the program; and (2) referral for a few children. The thorough documentation and intensive work with children benefit them all, even those who do not reach the criterion for discontinuing. Of course, some children are referred to special education; Reading Recovery does not have a 100% success rate. But there are fewer children in special education than if all children at the low end of the scale were simply referred. And we know much more about these children’s strengths, and they enter special education services with more skills than they would have otherwise.

It is interesting that Grossen uses unpublished studies that had no random assignment [a hallmark of a true experimental study] to support the idea that other interventions are more effective. The research that is criticized used rigorous experimental procedures. One OSU study has won national awards; the other was approved by a national board of researchers. Both studies were published in highly rated research journals, and both showed strong effects for Reading Recovery.

Grossen and others extensively quote Rasinski’s criticisms of research on Reading Recovery. Rasinski adjusted scores to hold instructional time equal and found equivalent effects. He suggested that the factor that led to Reading Recovery’s superiority in this statewide study was time in reading and writing lessons rather than the program itself.

Intensive instructional time in Reading Recovery is one characteristic that defines this program. One of the hallmarks of Reading Recovery is engaged time. The research indicates that it is not time alone, but engaged time, that is important in instruction. Reading Recovery teachers base instructional decisions on student behaviors. They use time effectively to work on the individual student’s strengths. We argue that if Reading Recovery teachers made the most of time and engaged students to a greater extent than did other teachers in other treatments, this must be attributed to the Reading Recovery training model. But Reading Recovery training does not focus on time alone; teachers concentrate on including a wide range of reading and writing experiences every day.

Rasinski’s reasoning is faulty. He took raw post-test scores for treatment groups and reported them as percentages of Reading Recovery scores. This is a misleading and indefensible statistical procedure. Post-test scores combine pretreatment learning and exposure with any special program effects. Program duration is only properly related to the amount of gain (or learning) that occurs while students are receiving the program. The effects OSU reported represent relative differences among the various groups in the size of these learning gains. Differential effects may be analyzed relative to programmatic time, not relative to the observed post-test scores. We reported these results in terms of standardized effect estimates. Such statistics are accepted standards for judging the size of program/experiment effects across a wide range of social and behavioral research. According to the standards established in these various fields, the effect estimates range from modest to very large in favor of Reading Recovery in this study. In view of this, Grossen cannot rely on Rasinski’s opinions.
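As a general illustration of what a standardized effect estimate measures (this is the common convention in social and behavioral research, offered as a sketch rather than a formula reproduced from the OSU report), the comparison is made on learning gains rather than on raw post-test levels:

$$
d \;=\; \frac{\left(\bar{X}^{\,\text{post}}_{\text{RR}} - \bar{X}^{\,\text{pre}}_{\text{RR}}\right) \;-\; \left(\bar{X}^{\,\text{post}}_{\text{comp}} - \bar{X}^{\,\text{pre}}_{\text{comp}}\right)}{SD_{\text{pooled}}}
$$

Because each group’s starting point is subtracted out, a program cannot appear stronger simply because its students finished the year with higher raw scores; that is exactly the safeguard that reporting one group’s post-test scores as percentages of another’s does not provide.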

The unpublished “study” by Fincher, quoted heavily in Grossen and others, is not a study at all and was criticized and discarded at the time. This paper was the result of disagreements among individuals favoring competing programs within the Canton district.

Wasik and Slavin have studied intervention programs. When one compares a program like Success for All with Reading Recovery, one is comparing a comprehensive approach that includes all children in the school [from low achieving to high achieving] with a program that serves only the very lowest achieving. Slavin has found that when he isolated the tutoring programs, Reading Recovery was more effective than the tutoring program used in Success for All. Other comparisons are not appropriate. Reading Recovery, for example, can work well within a comprehensive program like Success for All and does so in many places. It also can work well within other comprehensive approaches. We need good, strong teaching in the classroom in combination with Reading Recovery.

Grossen and others criticize the cost of Reading Recovery. Costs are higher in initial implementation and become lower as the system becomes more efficient and there is higher coverage. In most places the cost is estimated as about $2500 per child. Removing Reading Recovery would not significantly lower class size but would remove this intensive help for the lowest children just when they need it.

The article claims that when children return to the classroom after being released from Reading Recovery, the rate of learning slows. Reading Recovery is individual, intensive instruction designed to help children make accelerated progress, which they do in the program. When no longer receiving individual instruction, they return to an average rate of progress. It is true that a monitoring system must be in place to assure that they do not return to a slow rate of progress; however, most children who have developed independent reading systems also have “learned how to learn” in reading and writing and continue that progress. Life circumstances or completely new tasks may need special attention.

Grossen favors strict phonics approaches and refers to “Thirty Years of NICHD Research,” which is a report of studies that, taken together, focus on one point of view. It provides a framework based on research; however, we need to think about the assumptions and design underlying this framework. It reduces complex beginning literacy education issues to a simplistic skills solution, with phonemic awareness training as a sort of “magic bullet.” The report is well-meaning, but we need to understand that there are many different bodies of research in the field of literacy education; this report reflects one point of view, and we have found the information useful. Phonics is very important. We can find support here for some [not all] of the factors involved in learning to read. The researchers, however, virtually ignore broader linguistic factors such as language comprehension and syntax.

Nevertheless, “Thirty Years of NICHD Research” tells us nothing about Reading Recovery; in fact, its conclusions support Reading Recovery. The article says: “Children who fall behind at an early age (K and Grade 1) fall further and further behind over time … The best predictor in K and 1st grade of a future reading disability in Grade 3 is a combination of performance on measures of phonemic awareness, rapid naming of letters, numbers, and objects, and print awareness … ” Reading Recovery has a very strong direct instructional approach to helping children develop this body of knowledge.

Reading Recovery is organized according to a framework of reading and writing activities. Reading Recovery teachers work intensively with children to develop phonemic awareness (the overt knowledge that words are made up of phonemes in sequence). Through saying words slowly as they write, children become aware of letter and sound relationships. Children are introduced to the visual forms of letters and instructed in how sounds are related to letters and letter clusters. Through explicit work with magnetic letters, they learn how words work and how words are related to other words (linguistic patterns). The process is highly systematic in that teachers first assess precisely what the children know about letters, sounds, and words, and then work to help them learn what they need. Children make fast progress because the teacher has a good inventory of what they already know; they are directed to the next level of learning they need. Moreover, they immediately have the opportunity to use phonics and word attack skills in reading and writing.

Therefore, even if one takes the narrow point of view espoused in the NICHD document, Reading Recovery is developing the very skills recommended. In addition, Reading Recovery students are immersed in reading and writing for meaning, and they become independent. A group of independent researchers (Wong, Groth, & O’Flahavan, 1994) conducted a study of Reading Recovery teaching and commented: “teachers trained in Reading Recovery seem to know from moment to moment what text to focus on, when and how to prompt, when to tell, when to coach, and when to allow readers to direct their own reading” (p. 23).

Grossen also refers to the article by Center, Wheldall, Freeman, Outhred, and McNaught, published in Reading Research Quarterly. This Australian study assessed the progress of 31 children receiving Reading Recovery in the first year of implementation in New South Wales. They were compared to a matched comparison group from five matched schools and to a control group consisting of low-progress students who had not yet entered Reading Recovery by the time of the testing. The researchers found superior performance for Reading Recovery at the short-term evaluation (15 weeks) but no significant differences at 30 weeks; however, by that time, the Reading Recovery group had shrunk from 31 to 22 children and the control group had shrunk from 39 to 15, largely because less able students in the control group had been admitted to the Reading Recovery group. These researchers’ analysis of the matched sample suggested that some students may have been served in Reading Recovery who would have made adequate progress without the program. This assertion, however, is based on criteria for adequate performance that they set themselves (the relationship between chronological age and test scores) rather than the rigorous procedure Reading Recovery requires for documenting a child’s competent, independent performance in reading. It is not clear whether the matched students would have met those criteria. We believe that basing any judgment on this study of so few students would be irresponsible.

Other researchers in Australia (Rowe, 1988) conducted a four-year longitudinal study comparing the nature and impact of several teacher professional development literacy programs, including Reading Recovery, on students’ literacy development. Data were received on 56,092 students from 92 schools. The researchers attempted to develop explanatory models that would specify the factors, and estimate the magnitudes, of the variables that either directly or indirectly, with other variables, influence achievement. Repeated measures on students nested within classes and repeated measures on schools were used. It appeared that students who had been identified as ‘readers at risk’ and placed in a Reading Recovery program benefited notably from participation, with some achieving beyond the 80th percentile level of their non-Reading Recovery-exposed peers. Longitudinal data indicated that the earlier gains made by Reading Recovery students who were in grades 5 and 6 during 1988 and 1989 appear to have been sustained. These findings are especially interesting given that these researchers were not looking specifically for Reading Recovery effects; its effectiveness emerged from their study. This study was published in a scholarly journal but was not included in the direct instruction volume mentioned earlier.
