August 08, 2003

Advocacy research on NYC District 2

The Educational Policy Archives added a very interesting pair of articles earlier this week. The main article is Research or "Cheerleading"? Scholarship on Community School District 2, New York City, by Lois Weiner of New Jersey City University. This is followed by Reforms, Research and Variability: A Reply to Lois Weiner, by Lauren B. Resnick of the Learning Research and Development Center at the University of Pittsburgh.

New York's CSD2 covers Manhattan below Central Park (except for a small area near the Williamsburg Bridge), as well as the East Side up to a boundary varying between 96th and 100th Streets. During the period covered in the cited articles the superintendents of CSD2 were Anthony Alvarado and (after Alvarado moved to San Diego) Shelley Harwayne. District 2 is a hotbed of whole-language and constructivist mathematics teaching and is much celebrated in certain circles for those reforms. The principal cheerleaders from the education research community are Lauren Resnick and Harvard's Richard Elmore.

Anyone familiar with the District 2 area of Manhattan will know of the tremendous changes in demographics and quality of life that have occurred there over the past 25 years, and especially over the past 10-15 years. Chelsea, TriBeCa, and Alphabet City and the surrounding areas east of 2nd Ave are all unrecognizable now from the way they looked in the 1980s. The distractions of urban crime and decay have been much reduced over the same period. Any honest investigation into the success of school policies must pay careful attention to the effects of these demographic changes. Researchers Elmore and Resnick note the changes but otherwise pay scant attention to them, as Weiner's article documents well.

The reply by Lauren Resnick is of interest in its own way. Resnick characterizes Weiner's article as "... at once an analysis of data on demographics and achievement in Community School District Two (CSD2) in New York City and an attack on the research strategy (and by implication the research ethics) of the High Performance Learning Communities (HPLC) project that I co-directed, along with Richard Elmore and Anthony Alvarado". Resnick also writes (in the abstract):

The intent of the HPLC investigation was always to link scholars and practitioners in a new form of research and development in which scholars became problem-solving partners with practitioners. There are important issues about how to profitably conduct such "problem-solving" research. These issues are worth substantial attention from the communities of researchers and practitioners as collaborative research/practice partnerships proliferate. Serious studies of such partnerships are needed, going well beyond the anecdotal attacks offered by Weiner in her article.

Or, to paraphrase: "Sure, we were engaged in advocacy research. Let others do the proper work." Guilty as charged, is my assessment.

Posted by Bas Braams at 06:24 AM | Comments (0)

July 16, 2003

Apples to apples evaluation of charter schools

The Manhattan Institute Center for Civic Innovation has just released the Working Paper Apples to Apples: An Evaluation of Charter Schools Serving General Student Populations, by Jay P. Greene, Greg Forster, and Marcus A. Winters (July, 2003). In the report the authors confuse the ability of schools to improve themselves with their ability to improve their students, as this Web contribution will explain. It is an elementary error that completely invalidates the report.

I remark that in February, 2003, the same authors produced a report Testing High Stakes Tests: Can We Believe the Results of Accountability Tests?. I wrote a Web review of that report in which I explained that the authors confused the predictive power of a high-stakes test with its validity as a measure of student learning. That too was an elementary error that completely invalidated the report. (In both cases the report's conclusions are plausible, but that is beside the point.)

The present Apples to Apples report sets out to compare the performance of charter schools with that of public schools serving similar populations. (Given the wide range of educational policies in place in charter schools as well as in public schools I'm not sure that the question is all that interesting, but let's accept the question anyway.) In order to compare similar schools, the report focuses on charter schools that serve a general student population, and the control group of public schools is formed by taking for each charter school the nearest public school that also serves a general population.

The measure of performance is whatever standard statewide test is in place. Now I remind the reader of the concept of value-added assessment:

Value-added assessment employs, ideally, performance data on individual pupils over multiple years, and looks at improvements over time. It is a way to factor out the effects of different student backgrounds, because these are, one assumes, reflected in the students' initial test performance. If one doesn't have data on individual pupils, then one can use data on grades within a school. In that case the incremental performance that one cares about is that between a certain grade in one year and the next higher grade the next year, on the assumption that this involves approximately the same student population.
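Such grade-to-grade value-added assessment can be sketched in a few lines; the scores below are invented for illustration, and the function name is mine, not the report's:

```python
# Grade-level value-added: follow the same cohort from one grade to the
# next across consecutive years (all scores below are invented).
scores = {
    # (year, grade) -> mean test score for that grade in that year
    (2001, 4): 640.0,
    (2001, 5): 652.0,
    (2002, 4): 643.0,
    (2002, 5): 655.0,
}

def cohort_gain(scores, year, grade):
    """Gain of the cohort that was in `grade` in `year` and in
    `grade + 1` the next year (assumes a roughly stable population)."""
    return scores[(year + 1, grade + 1)] - scores[(year, grade)]

print(cohort_gain(scores, 2001, 4))  # 655.0 - 640.0 = 15.0
```

The gain is attributed to the school because, under the stable-population assumption, the same children sat both tests.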

Greene et al. could certainly have used such grade-to-grade value-added assessment in their work. However, they did something different. They looked at the overall performance of each school in one year and compared it to the overall school performance the next year. The school performance is measured in whatever way the state measures it: typically some average scale score or a percentile rank within the state. They did this for each tested subject separately, but not separately for each grade. They then compared the year-to-year changes in performance of the charter schools to the year-to-year changes in performance of the nearest public schools. They found, finally, a small (in fact, very small) advantage for charter schools on this measure. In the executive summary they express their observations as follows:
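The comparison just described amounts to the following computation; the numbers are invented, and this is my reading of the method, not code from the report:

```python
# Sketch of the report's school-level comparison (invented numbers):
# for each charter school, take the year-to-year change in its overall
# score and subtract the change of its nearest matched public school.
pairs = [
    # (charter yr1, charter yr2, public yr1, public yr2) overall scores
    (650.0, 654.0, 648.0, 650.0),
    (700.0, 702.0, 701.0, 702.0),
]

diffs = [(c2 - c1) - (p2 - p1) for c1, c2, p1, p2 in pairs]
advantage = sum(diffs) / len(diffs)
print(advantage)  # mean charter-minus-public change; here 1.5
```

Note that nothing in this computation tracks any pupil, or even any grade, from one year to the next.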

Measuring test score improvements in eleven states over a one-year period, this study finds that charter schools serving the general student population outperformed nearby regular public schools on math tests by 0.08 standard deviations, equivalent to a benefit of 3 percentile points for a student starting at the 50th percentile. These charter schools also outperformed nearby regular public schools on reading tests by 0.04 standard deviations, equal to a benefit of 2 percentile points for a student starting at the 50th percentile.
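The conversion from standard deviations to percentile points quoted above can be checked with a standard normal model (the function here is my own sketch, not the authors' calculation):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def percentile_gain_from_median(effect_size_sd):
    """Percentile points gained by a student starting at the 50th
    percentile (z = 0) after a shift of `effect_size_sd` SDs."""
    return 100 * normal_cdf(effect_size_sd) - 50

print(round(percentile_gain_from_median(0.08)))  # 3, matching the math claim
print(round(percentile_gain_from_median(0.04)))  # 2, matching the reading claim
```

So the arithmetic in the executive summary is internally consistent; the problem lies elsewhere, as explained next.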

And so, the authors completely confuse a measure of the improvement of schools with a measure of the improvement of student performance. Charter schools could be performing wonderfully or dismally relative to public schools in improving student performance, and it would not be seen in the whole-school year-to-year test score improvements that are the basis of this report. It would be seen, of course, in traditional value-added assessment at the pupil or grade level.
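A toy example, with invented numbers, makes the confusion concrete: a school's year-to-year average can rise purely because the incoming population changes, even when no student improves at all.

```python
# Invented scores: the same school in two consecutive years, with a
# stronger intake in year 2 but zero learning gain for any student.
year1_students = [600.0, 620.0, 640.0]  # scores of students enrolled in year 1
year2_students = [620.0, 640.0, 660.0]  # stronger intake in year 2

def mean(xs):
    return sum(xs) / len(xs)

school_change = mean(year2_students) - mean(year1_students)
print(school_change)  # 20.0: looks like "improvement" at the school level

# Yet no individual student gained anything; a pupil-level value-added
# measure would report a gain of exactly zero for this school.
```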

Posted by Bas Braams at 04:26 PM | Comments (4)