Comments: Apples to apples evaluation of charter schools

Wouldn't the schoolwide measure be pretty much an average of the progress at each grade level? I guess it would be off by one grade, since the oldest students the first year would be gone by the second year, but it would be pretty close and simpler than comparing each grade level. Or am I missing something? Statistics is not my strong suit.

Posted by Joanne Jacobs at July 16, 2003 07:47 PM

The issue is precisely that the oldest students are gone the next year. Let's say that we are dealing with grade school, K-5, and the annual statewide tests start in grade 3 (a common situation), so our schools are rated on the performance of the kids in grades 3-5. Suppose that the charter schools do spectacularly well. The kids that enter in K are on average at the 20th percentile; by the time they are tested in 3rd grade they are at the 50th percentile, and by the time they are tested in 5th grade they have made it all the way up to the 80th percentile. The charter schools repeat this spectacular success from year to year. On the Greene et al measure they would not be making any progress; they are flat at somewhere like the 65th percentile. If the charter schools are consistently a pathetic failure, again the Greene et al measure won't show it. All that it shows is whether charter schools are improving over time relative to public schools. It is a measure of how well charter schools are improving their own performance, not of how well they are improving their students' performance.
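The point can be sketched numerically. These are purely illustrative figures, using the hypothetical trajectory above (20th percentile at entry, 50th in grade 3, 80th in grade 5, with grade 4 interpolated at the 65th):

```python
# A hypothetical steady-state charter school: every cohort enters K at the
# 20th percentile and climbs to the 80th by grade 5. The state tests only
# grades 3-5, so the school is rated on the mean over those grades.

# Illustrative percentile by grade (K=0 through 5), same every year
percentile_by_grade = {0: 20, 1: 30, 2: 40, 3: 50, 4: 65, 5: 80}

tested_grades = [3, 4, 5]

def schoolwide_tested_mean(percentiles):
    # The schoolwide number the Greene et al measure would track
    return sum(percentiles[g] for g in tested_grades) / len(tested_grades)

year1 = schoolwide_tested_mean(percentile_by_grade)
year2 = schoolwide_tested_mean(percentile_by_grade)  # trajectory repeats

print(year1, year2)  # 65.0 65.0 -- zero year-over-year "progress"
print(percentile_by_grade[5] - percentile_by_grade[3])  # 30-point cohort gain
```

The schoolwide average sits flat at the 65th percentile from year to year, even though every cohort of students gains 30 percentile points between the grade-3 and grade-5 tests.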

Posted by Bas Braams at July 16, 2003 08:57 PM

I found the report somewhat confusing; it could be written more clearly. When he writes of standard deviations, for example, I think he is referring to the test scale distributions, and not the standard deviations of his sample of score means. But he should be publishing the calculations and results of the latter statistical tests.

I don't think it is so bad that he is comparing school mean scores in one year to school mean scores in the next year. He can't get any better data than that, I'm sure. He's making do with what is available. And, they must be grade level scores. I don't think there is such a thing as a mean test score for an entire school with several grades mushed together.

I have more trouble with the assumption that he has comparable groups. The fact that he is comparing a charter school to the nearest public school (if I understood that part correctly) almost assures that they are not comparable, since presumably any kid at either school must have made a conscious, deliberate choice for one or the other ...lots of selection bias (and the kids in charter schools are probably the ones with more initiative, the more adventurous ones). I think it would be better to compare charter schools, or cities with charter schools, to public schools in other, demographically similar cities that have no charter schools, or to those entire cities ...a sort of paired cluster design.

Posted by Richard Phelps at July 16, 2003 11:32 PM

Well, OK, Bas has implied that I am being much too wimpy about this. Maybe he's right. The authors could have done a value-added study--it would have taken more time, would have been more complicated, and would have given them a somewhat smaller sample size. But, in the end, they would have had a better study. There are enough states that test in more than one grade, some even in consecutive grades, that the authors could have used synthetic cohorts.
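A sketch of the synthetic-cohort idea, with made-up school means and assuming a state that tests consecutive grades (the numbers and the state are hypothetical, not from the study):

```python
# Hypothetical school mean scores for a state testing grades 3 and 4.
# A synthetic cohort follows year-1 3rd graders into year-2 4th grade:
# roughly the same students (ignoring mobility), so the difference is
# closer to a value-added gain than a same-grade comparison is.

scores = {  # {year: {grade: school mean score}}, illustrative numbers
    1: {3: 50, 4: 62},
    2: {3: 50, 4: 64},
}

# Same-grade, year-over-year comparison (the study's approach):
# different students each year, measures the school against itself.
same_grade_change = scores[2][3] - scores[1][3]

# Synthetic-cohort comparison: year-1 3rd graders vs. year-2 4th graders.
cohort_gain = scores[2][4] - scores[1][3]

print(same_grade_change)  # 0  -- the school looks flat
print(cohort_gain)        # 14 -- the students gained substantially
```

The same data support both calculations; only the second follows (approximately) the same students and so speaks to how much they learned.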

Bas is right, I believe, that at best with the current study they are looking at how well the schools improved what they were doing in a year's time, and not how much they improved their students. This, in a way, biases the study in favor of the newer schools (the charter schools), which are still early on the learning curve. Newer schools are likely to make larger incremental gains than older schools, because they start from a lower point on the learning curve and have more room to improve.

Maybe I was being wimpy about it in part because I am astonished by, but clueless about, what is happening with MI education reports. This is the third one I have read in the past year, which amounts to all that I have read in the past year, that has been thoroughly suspect. I asked a colleague who tends toward the "establishment" side of the fence about this and he said "they have an agenda, you know." Maybe they do; I certainly wouldn't know. Maybe they are so tired of the research fraud of "the other side" that they have become attracted to research without restraint.

The worrying aspect is that dozens of education reform groups around the country accept what they get from MI as gospel, and spread it. They assume, as I used to, that an MI report was dependably valid and reliable.

Posted by Richard Phelps at July 18, 2003 07:41 PM