
TEACHERS.NET GAZETTE
MAY 2001
Volume 2 Number 5

About Stanley Pogrow...
Dr. Pogrow is an Associate Professor of Education at the University of Arizona, where he specializes in school reform and the use of technology. He is the developer of the Higher Order Thinking Skills (HOTS) program for Title I and LD students (www.HOTS.org) and of Supermath. He was a public school teacher in inner-city schools in New York City, where he taught math at the middle school and high school levels.
 
 
Opinion...
Success for All Never Had a Research Base and Never Worked
by Dr. Stanley Pogrow

"The teacher unions continue to support the use of SFA and to this point have not been willing to reconsider this position even in light of the contrary scientific evidence that I have brought to bear."

When good and tolerant teachers talk about Success for All they generally say something like: "We do not think that it should work, but there is strong research behind it, so we should give it a try for the sake of our students." Last ?? this publication published the articulate and passionate criticisms of Success for All by retired teacher Georgia Hedrick. Was she simply an anomaly and a lone malcontent, as claimed by Robert Slavin, the co-developer of Success for All? Indeed, how could she as a teacher be so negative when the research was so positive? Was she simply an old fogey too stubborn to adapt to the latest research-validated innovation, or a pure whole-language ideologue who believed that all structure was evil? How could one teacher possibly be right given the huge volume of research evidence to the contrary?

Actually, there never was any valid supporting research. What had happened was that the developers of Success for All, Robert Slavin and Nancy Madden, were also the directors of a research center at Johns Hopkins University. From that position they were able to secure tens of millions of dollars to conduct research on their own program. They flooded publications and professional research meetings with tons of articles, speeches, and sophisticated tables showing how successful this program was. Dr. Slavin also wrote articles on the methods that should be used to evaluate programs, and published evaluations of competing programs--which of course were rated lower than his. On top of this, guess who got the contract for the U.S. Department of Education's (ED) National Center for the Study of Students Placed At-Risk? Yep, the research center run by the developers of Success for All. When Congress asked for a national study to determine the best approach for helping Title I students, guess who got the contract from ED to conduct the analysis? Yep! The same group.

In other words, the developers of Success for All had a virtual monopoly on research funding for discovering the best way to help disadvantaged students, approximately $100 million over a 10-year period. This should never have been allowed to happen, as it is fraught with all types of potential conflicts of interest. (It is the equivalent of the National Cancer Institute awarding all of its funding for research on how to cure a particular form of cancer to one of the companies that sells a drug for that cancer.)

Nor was the issue just large amounts of public money. The for-profit company Edison Inc., which takes over public schools, began to use the Success for All program.

Slavin and Madden convinced everyone of the success of their program through the sheer volume of technical reports, to the extent that until a few years ago no one looked into whether the research was actually valid. Indeed, a study commissioned by the teacher and administrator professional associations to determine which programs were in fact supported by research gave the highest marks to Success for All. (It did not help that the director of the study was a former employee of that same research center.) This study, like all the others, took the reported research at face value.

Over the past three years I have published four articles analyzing the validity of the Success for All research in the prestigious journals Educational Researcher and Phi Delta Kappan. (Two of the articles were refutations of challenges by Dr. Slavin.) My articles showed that the methodology used was invalid and cleverly produced the appearance of success while masking actual failure. (If you are a glutton for punishment, you can find the last article at www.hots.org/Articles/Kappan_sept2000.html, with the capital and lowercase letters as listed.)

The first way that failure was masked was to report effect size differences (whatever that is) in how their students were doing compared to students in other schools. THEY DID NOT REPORT HOW THEIR OWN STUDENTS WERE ACTUALLY DOING AFTER THE EARLIEST GRADES. The first actual achievement results were compiled by Dr. Richard Venezky, a noted researcher from the University of Delaware, who went back into Baltimore and reanalyzed the results. He found that Success for All students were entering the sixth grade reading three years below grade level and falling farther behind.
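To see concretely how this kind of reporting can mask failure, here is a minimal illustrative sketch in Python. Every number in it is hypothetical (not taken from any Success for All study), and the helper function is mine; the point is simply that a positive effect size relative to a comparison group can coexist with students who remain years below grade level.

def cohens_d(mean_a, mean_b, pooled_sd):
    # Standard effect-size formula: difference in means divided by the pooled standard deviation.
    return (mean_a - mean_b) / pooled_sd

# Hypothetical grade-equivalent reading scores for entering sixth graders (grade level = 6.0).
program_mean = 3.1      # program students read at roughly a third-grade level
comparison_mean = 2.8   # comparison students read slightly worse
pooled_sd = 1.2

d = cohens_d(program_mean, comparison_mean, pooled_sd)
print("Effect size vs. comparison group: d = %.2f" % d)                 # positive, so it looks like "success"
print("Distance below grade level: %.1f grade equivalents" % (6.0 - program_mean))

Reported alone, the positive effect size reads as a win; reported alongside the absolute grade-equivalent scores, the same numbers show students three years behind.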

When I pointed out in my articles that these were terrible results and not a success, Slavin's published responses were invariably either "I did not promise that all students would reach grade level" or that I was upset only because the students were not reaching grade level. (I would always reply that there is a difference between not reaching grade level and being this far behind, and that it is the latter that was not acceptable.) These results look even poorer when you consider that he removed lots of high-risk students from his samples, such as mobile students, and that in at least one case special ed students simply disappeared.

The second way they tried to make Success for All look good in the comparisons was to stack the deck in favor of their schools. First of all, a lot more money was spent on the Success for All schools in the comparisons; in some cases in the early '90s, $400,000 more was spent in their elementary schools. Second, students in Success for All schools spent far more scheduled time reading. Third, nothing was done to help the comparison schools improve using a different approach. They were just left dangling. This invalid stack-the-deck methodology is the equivalent of the Seinfeld episode in which Kramer takes up karate. Seinfeld is amazed when the ungainly Kramer becomes champion of the dojo--until he visits and discovers that Kramer is competing against 11-year-olds. Kramer looked good in comparison, but he wasn't very good.

There are two amazing things about this clearly invalid stack-the-deck methodology. The first is that despite the huge advantage given to the Success for All schools, the only grade in which they could show a relative advantage was the first grade. This is awful performance for any five- to six-year program, let alone such an expensive one. To spend so much money, have everything stacked in your favor, and then not have students progress relatively after the first grade is amazingly bad, especially given that any structured program produces gains in the first grade. The second amazing thing is that Slavin had written extensively on the importance and proper method of conducting experimental comparisons. I will leave it to you to decide why he did not follow his own recommended methodologies in his own research on his own program.

In addition to the research being invalid, the actual results were terrible. Every place where Slavin claimed success, independent studies later found that the schools and students were doing poorly. (In fact, I do not know of a single independent study that has found the program to be effective.) These districts invariably found that they could implement other approaches, at times homegrown, that did better at significantly lower cost. The districts and schools invariably dropped the program at almost all sites. However, districts were generally too embarrassed to admit the failure publicly given the high cost. Therefore, despite the failure in a given district, Success for All was able to move on unchallenged and claim success somewhere else. First there was the supposed success in Baltimore. Then there was the supposed success in Dade County. Then there was the supposed success in Memphis. Now there is a supposed success in Houston, the district that made the largest commitment to the program. Unfortunately, Success for All students are doing worse than comparison students there as well, and the number of schools using the program in Houston is shrinking dramatically.

Undeterred, Slavin and Madden continue to claim success and try to get high-profile politicians and education officials to support them. The title of their new book is One Million Children: Success for All. Really! Whenever anyone points to the past failures (which they had previously designated successes), their published excuses are: a) the schools didn't implement the program well, b) the superintendent left, c) there was a hurricane, and d) there were lots of poor kids.

Indeed, there is no way that Success for All could possibly be effective. Observers of Success for All classrooms invariably report that the timed, scripted approach forces teachers to move on regardless of whether their students understand the concepts and does not allow teachers to build upon students' conceptions. It does not allow for differences in how students conceive of ideas and information. The only concepts that can be taught this way are the simplest ones, and only to the youngest students. This is why the Mississippi study found that interest in reading among Success for All students declined after the earliest grade. Reading was just an imposed process with no place for their own individuality. Students wanting to express their individuality had to look outside of books.

The fundamental problem with Success for All is not that there are scripts. My own Higher Order Thinking Skills (HOTS) program uses partial scripts, as do most performing arts. (My next article will describe how HOTS, www.HOTS.org, combines the use of scripts with highly individualized Socratic teaching techniques.) Developing reading comprehension skills requires teachers who have a variety of techniques at their disposal, a way of deciding under which conditions to employ each, and a sense of which students are likely to benefit most from which combination of techniques. The problem is that Success for All's scripting method does not allow for this. Nor do any of the other one-size-fits-all comprehensive schoolwide reform programs, which are just as ineffective.

Georgia Hedrick and the other teachers who resisted the use of this program were right, and all the professional associations, researchers, and U.S. Department of Education officials who concluded that there was valid research supporting the use of Success for All, and of comprehensive school reform models in general, were wrong. Georgia Hedrick's instincts and observations were valid; she was an outstanding teacher who correctly rebelled at being forced to be a less effective teacher by a too-limited program. However, this is not a simple case where a teacher was right and good research was wrong. Good research describes reality. The problem here is much worse. The education establishment allowed invalid research by the ones who stood to benefit the most to be disseminated as valid research. There was no oversight by our leaders in the profession, government, or even the research community. Even when my articles began to appear, everyone looked the other way, and ED continued to fund even more such research by the same group.

The first lesson from all of this is that teachers should follow their instincts and should also constantly test those instincts. The second lesson is that the results of these personal tests are probably more valid than the published research. The third lesson is that good teachers' validated instincts should be supported by their administrators. Unfortunately, faced with accountability pressures and the reassurance that there was a research-validated quick fix and an outside organization that could provide a complete package that would make all the kids successful, many administrators became obsessed with getting all their teachers to buy into Success for All. The fact that the best teachers refused and demanded to transfer out did not deter such administrators, who generally had little or no background in curriculum and reading. Such schools inevitably became worse when those teachers left. The most important fact is that the quality of the teacher is the primary determinant of student progress. Ignoring this basic fact in favor of phony research meant that the '90s were not a decade of 'school reform' but one of 'school deform,' in which learning gaps widened again for the first time in decades and no progress was made in reducing the dropout rate among disadvantaged students.

That does not mean that teachers should just rely on their instincts. There is still a role for science and good research in teaching even the most veteran and best of teachers some new lessons. In addition, sometimes there does need to be some uniformity in what is taught (although not necessarily in how it is taught). Unfortunately, the case of Success for All demonstrates that even the most widely cited research can be wrong and little more than self-serving hype with numbers. Professional emptor is the rule. Hopefully, we as a profession will never again allow a situation in which completely invalid research is disseminated as valid gospel. Think of the damage this has caused to the professional lives of teachers and the lost potential of their students.

 
