August 28, 2003

Poor marketing of NYC school reform

The New York Times has a misguided report by David Herszenhorn on NYC mayor Bloomberg's and schools chancellor Klein's poor marketing of their reforms.

When Mr. Bloomberg laid out the bulk of his education plans in a speech in Harlem in January, his proposals were received with general enthusiasm, even winning the initial support of the teachers' union president, Randi Weingarten.

But in the weeks after the mayor's speech, the administration failed to build the momentum, officials said, and instead became embroiled in an arcane debate over whether the proposed literacy curriculum had a strong enough phonics component.

Sadly, it seems entirely possible that mayor Bloomberg and chancellor Klein view the debate over how to teach reading as just arcane and petty. Their defence of the curricular mandates that chancellor Klein imposed on the New York City schools is never based on the substance of the specific choices, but only on the claimed need to have a unified curriculum throughout the city. A Freedom of Information Law request filed for New York City HOLD showed that there is no documentation, not even for the Department's internal reference, of the rationale for the specific choices of textbooks for reading and mathematics.

Posted by Bas Braams at 01:58 PM | Comments (0)

August 27, 2003

Interim report on NYS Regents Math A

Readers may recall the controversy over the difficulty of the June 2003 New York State Regents Math A exam. The results of the exam were tossed for juniors and seniors, and a panel was appointed to study what went wrong. For reference, here are links to Commissioner Mills's earlier press release and the charge to the Math A panel. Also for reference, my critique of the New York State Regents Math A exam.

The Math A Panel has now produced an interim report, and it is receiving plenty of press attention. (Go to Google News and do a search on 'regents "math a"'.) The best summary that I've seen is that of Karen Arenson in the New York Times.

The panel's interim report deals with only a very limited part of the charge, and deals with it in a disappointingly limited way. The panel clearly thought it was important to have a recommendation out before the start of the school year about a rescaling of the test. I am surprised that they only found the time to compare the June 2003 and June 2002 instances; in the six weeks that they have worked they really might have had a serious look at, say, the past six instances of the exam, both qualitatively and psychometrically. Who knows, maybe the June 2002 exam was exceptionally easy.

In fact, though, the panel's conclusions regarding the difficulty of the June 2003 instance match all the informed speculations that I've seen, including my own: Parts 1 and 2 of the exam were in line with previous instances, and Parts 3 and 4 were more difficult. For my critique I looked at August 2002, January 2003, and June 2003, and found June 2003 the hardest and January 2003 the easiest.

The interim report does not specifically criticise any officials or any actions, but I draw from it the conclusion that inexcusable errors were made in the development of this June, 2003, instance of the exam. In my earlier commentary I quoted an article by David Hoff in Education Week in which he quoted deputy commissioner James Kadamus as saying that the June, 2003, exam had more problem-solving questions than previous exams, because the state is gradually raising its expectations. I wrote then that this is a remarkable statement, because all previous reports indicated that the added difficulty of the June exam was unintended and had taken the Department entirely by surprise.

Now here is Karen Arenson, writing on the basis of the interim report of the Math A panel:

Based on field tests before the actual test was administered, the Education Department expected the average score on the June test to be 46. The expected average for the test given a year earlier was 51, slightly higher, but still below the score needed to pass, which is 65 for students who entered ninth grade in 2001 or later, and 55 for everyone else.

Did commissioner Mills know that the average scaled score of the June, 2003, exam was expected to be 5 points lower than that of June, 2002? (Arenson is mistaken, of course, to describe 51 as "slightly higher" than 46; on a scale where the passing scores are 55 and 65, a 5 point drop in the expected average is a large difference.) Public indications are that Mills did not know this.

I am still surprised that the error of the added difficulty was made in such a blatant way. For myself, I had been speculating that a subtle error had been made: the department might have used, for its psychometric evaluation of the difficulty of the test, a rather different population of students than the population that really matters. They might have had a test population with lots of bright 9th and 10th graders, and perhaps for that group the difficulty of the June 2003 exam was in line with earlier instances, while for the struggling seniors the added "problem solving" (i.e., aptitude-oriented) focus of the exam would have posed more severe problems. But apparently the department did not make a subtle error; they were just completely wrong and out of control.

Posted by Bas Braams at 07:55 PM | Comments (0)

August 24, 2003

NYC schools chancellor Klein under fire

New York City schools chancellor Joel Klein has been under some heavy and well deserved fire recently for his curricular policies. This blog entry is based on articles and opinion pieces by James Traub, Sol Stern and Andrew Wolf; and on the Web pages of New York City HOLD.

On August 2 the New York Times educational supplement offered New York's New Approach, by James Traub. (The original article has gone off-line, and the link is to a copy.) Traub focusses on the literacy part of New York's "Children First" initiative.

[...] All New York elementary and middle-school students will have lengthy "literacy blocks" each day to focus on reading as well as writing skills. Teachers will read books aloud, engage in "shared reading" with the whole class, "guided reading" with smaller groups and "independent reading" from classroom libraries whose books will be carefully calibrated by skill level.

[...] Here was a form of teaching that built on the child's innate knowledge and love of learning, required virtually no rote instruction and permitted children to acquire information and understanding as a painless byproduct of pleasurable activities. It sounded delightful. But would it be effective?

Traub presents Klein as perhaps an unwitting captive of the city's liberal consensus on pedagogical issues, and presents the deputy chancellor for teaching and learning, Diana Lam, as the real force behind the progressive pedagogy. Traub himself has no sympathy for the direction chosen by chancellor Klein:

Every new chancellor in recent years has come into office with a message of salvation for the schools. Once it was "school-based management," then it was "curriculum frameworks," and then data-driven instruction. None of it really mattered in the end, because chancellors couldn't impose their will on the system. Now, at long last, they can. Mayor Bloomberg and Mr. Klein have the power to reshape New York City schools.

But they have imposed a curriculum that scants content knowledge for personal experience and direct instruction for self-directed learning. With almost half of the city's fourth graders and two-thirds of its eighth graders reading below grade level, is this the direction they should go?

Traub's piece mentions an earlier article by Sol Stern of the Manhattan Institute: Bloomberg and Klein Rush In (City Journal, Spring 2003). There, Stern wrote:

Unless Bloomberg and his handpicked schools chancellor, Joel Klein, admit to some monumental blunders, discredited progressive methods for the teaching of the three Rs such as "whole language," "writing process," and "fuzzy math" will soon be enforced in every single classroom in 1,000 New York City schools. This is a disaster in the making, not least because the children in the targeted schools are mainly poor and minority - the very population historically most damaged by such methods.

Mr. Stern is at it again in the online pages of City Journal with Mayor Bloomberg's Diana Lam Problem. (The article also appeared as an opinion column in the New York Post: Lam Excuses.) Stern first recalls the appointment - later put on hold - of Diana Lam's husband to a $100,000 per year job as regional instructional supervisor. He then addresses a new issue in which Ms. Lam has given the impression of being ethically challenged. With reference to his earlier conclusion that Diana Lam is addicted to discredited "whole language" and "constructivist" methods for teaching reading and writing, Stern writes:

Lam responded to these criticisms in a manner that raised new questions about her competence and integrity. In a Daily News op-ed, she trumpeted the results of a recent U.S. Department of Education study comparing the reading and writing scores of New York City's 4th-graders with those of five other urban districts: Atlanta, Chicago, Houston, Los Angeles and Washington.

In those tests, the city's 4th-graders ranked at the top of the six participating districts in writing and a close second to Houston in reading. According to Lam, "the results of this assessment show our pedagogical approach is sound."

But Lam neglected to inform her readers that the tests represented a random selection of the city's 4th-graders from January through March 2002. At that time, Lam was running the Providence, R.I., school system, Joel Klein was an executive with the Bertelsman publishing company, and newly elected Mayor Bloomberg hadn't yet convinced the state Legislature to give him control of the city's schools.

[...] I leave it to others to decide whether Lam's misrepresentations about those 4th-grade tests result from a blunder or from something worse. In either case, Mayor Bloomberg and Schools Chancellor Joel Klein now have a credibility problem on their hands.

In addition to the pieces by James Traub and Sol Stern there was a scathing op-ed by Andrew Wolf in the New York Sun. (No NYC journalist has been as consistently strong on the Bloomberg and Klein educational fiasco as Andy Wolf, as witness this collection of previous columns.)

In a remarkably intellectually dishonest opinion piece that ran last week in the Daily News, Ms. Lam had the chutzpah to declare that New York's "reading plan is working." She bases her claim on the results of the National Assessment of Educational Progress test, a voluntary exam given to compare the progress of students in the nation's cities. This test was administered to a sampling of fourth-grade classes more than six months before Mr. Klein and Ms. Lam took over the old Board of Education. New York City and Houston were shown to have the most effective programs among the six largest urban centers.

Now unless Mr. Klein was lying on January 21, when he stated that the city has been "using something along the lines of 30 different reading programs," the results of the NAEP test reflect that diversity. This is certainly no more an endorsement of Ms. Lam's controversial program than it is of any of the other 29 programs then in use. And what if Ms. Lam, as many of us feel, has chosen the wrong one of the 30 alternatives? She concedes that Houston did just as well, but with a "scripted" reading program that she has specifically excluded. But many of our New York City schools used such programs. How much of New York City's success can be attributed to those schools?

The cited articles of James Traub, Sol Stern, and Andrew Wolf all address primarily the reading component of chancellor Klein's Children First initiative. For critical perspectives on the mathematics component, please see the New York City HOLD Web pages, and see also my overview page Chancellor Joel Klein's "Children First" New Standard Curriculum for NYC Public Schools.

A further issue that has not received adequate attention in the press reporting is the secrecy of Children First. As a result of Freedom of Information Law requests we know that the primary Children First working groups operated without a formal charge and did not produce reports. In a remarkable show of contempt for integrity of process and for careful policy, chancellor Klein has arranged that there is no documentation, not even for the Department of Education's internal purposes, of the rationale behind his and Ms. Lam's choices for the literacy and mathematics curricula.

Posted by Bas Braams at 09:01 AM | Comments (1)

August 16, 2003

On travel, no blogging for now

This August is a busy month for travel for me, and I don't expect to have anything new to offer here until September. For frequent education news, please check out Number 2 Pencil and Joanne Jacobs. I also recommend my standard Google education news search. For a collection of articles of some longer term interest please see the left column on this page. My Links, Articles, Essays, and Opinions Web page is a more extended, annotated version of same. Happy reading!

Posted by Bas Braams at 12:42 PM | Comments (0)

August 09, 2003

TAKS science problems

Two days ago I commented on a questionable item in the 10th grade TAKS mathematics test, for which scores were revised. The associated TEA press release also refers to a controversy over some science test items, and states that, upon review, these items were found to be correct. It is fascinating to see the items and to see how they are judged to be correct. The TEA (Texas Education Agency) put out a document, Additional Information Regarding Released Science Items, for the spring 2003 testing cycle. Four controversial items are discussed.

Grade 5 Science, Item 13. Item 13 asked students which two planets are closest to Earth. Among the possible answers: Mercury and Venus, and Mars and Venus. The correct answer varies over time, and the question is plainly wrong or crazy. To add insult to injury: the intended answer was Mars and Venus, but on the day the test was given the correct answer was Mercury and Venus. Nevertheless, the TEA insists that for the purpose of the 5th grade test the question had only one correct answer - to wit, the wrong answer.
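
As an aside, the time dependence is easy to make concrete. Here is a minimal sketch (my own, not part of any TAKS material) that assumes circular, coplanar orbits with rounded mean radii; sampling random orbital phases shows that each of Mercury, Venus, and Mars lands among the two planets nearest Earth on some of the sampled dates.

    import math
    import random

    # Mean orbital radii in astronomical units (rounded textbook values).
    ORBITS = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52}

    def distance(r1, r2, angle):
        """Distance between two bodies on circular orbits whose position angles differ by angle (radians)."""
        return math.sqrt(r1 ** 2 + r2 ** 2 - 2.0 * r1 * r2 * math.cos(angle))

    random.seed(0)
    counts = {"Mercury": 0, "Venus": 0, "Mars": 0}
    for _ in range(10000):
        # Random, independent orbital phases stand in for random observation dates.
        dists = {name: distance(ORBITS["Earth"], r, random.uniform(0.0, 2.0 * math.pi))
                 for name, r in ORBITS.items() if name != "Earth"}
        for name in sorted(dists, key=dists.get)[:2]:
            counts[name] += 1

    print(counts)  # each planet turns up among the "two closest" in a nontrivial share of the samples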

Grade 10 Science, Item 50. Item 50 looks crazy to me - they seem to be testing, in a most convoluted way, whether the student knows that the element symbol K stands for potassium. The TEA discussion indicates that the item is factually wrong to boot, but they insist that it is valid just the same.

Grade 11 Science, Items 11 and 45. Question 11 asks for the force exerted by a jumping frog on a leaf. The force has two components: one due to the weight of the frog and the other due to its acceleration. These are to be added vectorially, but the direction of the jump is not given. The TEA insists, therefore, that the correct treatment of the question must ignore the weight of the frog. Obviously the question is wrong, and the TEA is wrong to insist that it is correct. Question 45 concerns a hypothetical situation in which a force is exerted on an object but no work is done. The question asks what can be concluded, and the intended answer is that the object is and remains at rest. This is wrong; the force may be perpendicular to the direction of motion. The TEA insists, in effect, that students don't know that, and that therefore the TEA's intended answer is, for the purpose of the test, the unambiguously correct answer.
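
On the work question, a small numerical illustration (my own sketch, not the TEA's reasoning) may help: computing work as the dot product of force and displacement, a nonzero force that stays perpendicular to the motion does zero work on an object that is plainly not at rest.

    def work(force, displacement):
        """Work done by a constant force over a straight-line displacement: W = F . d."""
        return sum(f * d for f, d in zip(force, displacement))

    # An object moves 2.0 m in the +x direction while a 10 N force acts in the +y direction,
    # perpendicular to the motion (think of the normal force on a block sliding across a table).
    force = (0.0, 10.0)        # newtons
    displacement = (2.0, 0.0)  # meters

    print(work(force, displacement))  # 0.0 joules: a force is exerted, no work is done, and yet the object moves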

The TEA has a bit of a quality control problem, obviously. In connection with the earlier 10th grade math test problem, Kimberly Swygert asked if the pre-testing might not have found the error. The same question could be asked for these science test items, but I think that it is too much to ask of the psychometric process that it correct for blunders of this kind.

I suspect that for many patently wrong questions students will nevertheless do what the TEA expects of them. The pernicious effect of the bad test items is indirect. It creates among the students and the public an impression (a correct impression) that the TEA doesn't have its house in order; that questions can't be read to mean what they mean; and that one should always be prepared to second-guess the clear meaning of a question.

A closing remark: the New York State Regents testing division has similar quality control problems. I remind the reader of the earlier discussion about the June 2003 Regents Math A exam, and my related Critique of the New York State Regents Mathematics A Exam.

Posted by Bas Braams at 12:44 PM | Comments (3)

August 08, 2003

Advocacy research on NYC District 2

The Education Policy Analysis Archives added a very interesting pair of articles earlier this week. The main article is Research or "Cheerleading"? Scholarship on Community School District 2, New York City, by Lois Weiner of New Jersey City University. This is followed by Reforms, Research and Variability: A Reply to Lois Weiner, by Lauren B. Resnick of the Learning Research and Development Center at the University of Pittsburgh.

New York's CSD2 covers Manhattan below Central Park (except for a small area near the Williamsburg Bridge), as well as the East Side up to a boundary varying between 96th and 100th Street. During the period covered in the cited articles the superintendents of CSD2 were Anthony Alvarado and (after Alvarado moved to San Diego) Shelley Harwayne. District 2 is a hotbed of whole language and constructivist mathematics teaching and is much celebrated in certain circles for those reforms. The principal cheerleaders from the education research community are Lauren Resnick and Harvard's Richard Elmore.

Anyone familiar with the District 2 area of Manhattan will know of the tremendous changes in demographics and quality of life that have occurred there over the past 25 years, and especially over the past 10-15 years. Chelsea, TriBeCa, and Alphabet City and the surrounding areas east of 2nd Ave are all unrecognizable now from the way they looked in the 1980s. The distractions of urban crime and decay have been very much reduced over the same period. Any honest investigation into the success of school policies must pay careful attention to the effects of these demographic changes. Researchers Elmore and Resnick note the changes, but otherwise pay scant attention to them, as is well documented in Weiner's article.

The reply by Lauren Resnick is of interest in its own way. Resnick characterizes Weiner's article as "... at once an analysis of data on demographics and achievement in Community School District Two (CSD2) in New York City and an attack on the research strategy (and by implication the research ethics) of the High Performance Learning Communities (HPLC) project that I co-directed, along with Richard Elmore and Anthony Alvarado". Resnick also writes (in the abstract):

The intent of the HPLC investigation was always to link scholars and practitioners in a new form of research and development in which scholars became problem-solving partners with practitioners. There are important issues about how to profitably conduct such "problem-solving" research. These issues are worth substantial attention from the communities of researchers and practitioners as collaborative research/practice partnerships proliferate. Serious studies of such partnerships are needed, going well beyond the anecdotal attacks offered by Weiner in her article.

Or, to paraphrase: "Sure, we were engaged in advocacy research. Let others do the proper work." Guilty as charged, is my assessment.

Posted by Bas Braams at 06:24 AM | Comments (0)

August 07, 2003

TAKS 10th grade math problem

As reported in the Houston Chronicle, the Texas Education Agency yesterday announced a revision of the scores of the 10th grade TAKS exam that was given this spring, because of an error in one of the questions.

Readers with some knowledge of high school trigonometry may find it interesting to see the problem. The question is reproduced in the Houston Chronicle, or one can see the TEA original (look at question 8). The question shows a drawing of a regular octagon, indicating the inscribed radius as 4.0cm and the circumscribed radius as 4.6cm. The question asks for the perimeter of the octagon, to the nearest cm. The choices are 41cm, 36cm, 27cm, and 18cm.

The data are contradictory: an octagon with inscribed radius 4.0cm has a circumscribed radius of about 4.33cm. Taking the 4.0cm and 4.6cm at face value, a student might reason that the perimeter of the octagon lies somewhere between 2*pi*4.0cm and 2*pi*4.6cm, which leads to the answer 27cm in the multiple choice format. Or the student could apply trigonometry and obtain a perimeter of 26.5cm by starting from the given inscribed radius, or 28.2cm by starting from the given circumscribed radius. A fourth approach is to use Pythagoras's theorem on a right triangle that has hypotenuse 4.6cm and one leg 4.0cm; then one finds that the perimeter of the octagon must be 36.3cm. That (or rather, 36cm) was the intended answer.
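
For readers who want to check the arithmetic, here is a short computational sketch (my own verification, not TEA material). It takes the 4.0cm segment as the apothem and the 4.6cm segment as the circumscribed radius, as the figure appears to intend, and reproduces the numbers quoted above.

    import math

    n = 8                  # regular octagon
    apothem = 4.0          # the "inscribed radius" shown in the figure, in cm
    circumradius = 4.6     # the "circumscribed radius" shown in the figure, in cm

    # Consistency check: an octagon with apothem 4.0cm has circumscribed radius about 4.33cm, not 4.6cm.
    print(apothem / math.cos(math.pi / n))                     # about 4.33

    # Bounding argument: the perimeter lies between the two circle circumferences.
    print(2 * math.pi * apothem, 2 * math.pi * circumradius)   # about 25.1 and 28.9, pointing to 27cm

    # Trigonometry from each given radius separately.
    print(2 * n * apothem * math.tan(math.pi / n))             # about 26.5cm, from the inscribed radius
    print(2 * n * circumradius * math.sin(math.pi / n))        # about 28.2cm, from the circumscribed radius

    # The intended route: Pythagoras on the right triangle with hypotenuse 4.6cm and leg 4.0cm
    # gives half of one side; eight sides make up the perimeter.
    half_side = math.sqrt(circumradius ** 2 - apothem ** 2)
    print(2 * n * half_side)                                   # about 36.3cm, i.e. the intended answer of 36cm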

According to the TEA press release, "item eight on the 10th grade math test could have been read in such a way that the question had more than one correct answer". That is putting a very kind spin on their blunder - there is in fact no reading of the question under which it has just one correct answer. It is amazing that the TEA would have this test composed and reviewed by people who fail to recognize that one cannot arbitrarily specify both the inscribed and the circumscribed radius of a regular polygon. According to the TEA press release: "Each test item goes through a rigorous review process that includes a field test of the items and two separate review sessions by professional educators who have subject-area and grade-level expertise and who are recommended by their district." The TEA didn't mean that as an explanation, but for me the "professional educators" part goes a long way just the same.

[Addendum, Aug 09. Please see the figure accompanying question 8 in the exam. The line segments that I described as inner and outer radii are not, in fact, identified as such in the figure or in the question. They meet at a point that certainly appears to be the center of the octagon, but that is not labelled either. There is, therefore, a reading of the question under which it has a single correct answer. Under that reading the given data are all correct, the special point is not meant to be the center of the octagon, and the figure is simply distorted in what happens to be a highly misleading way.]

Posted by Bas Braams at 02:14 PM | Comments (1)