Education 'Miracle' Has a Math Problem. Bush Critics Cite Disputed Houston Data, by Michael Dobbs, The Washington Post, November 8, 2003 (front page). The article reports on fraudulent dropout statistics in the Houston school district during the tenure there of Rod Paige, now Secretary of Education.
This image of integrity is not supported by the Washington Post article, or by earlier reporting on the Houston dropout statistics. The WP article reports on clear fraud at one high school and continues:

Conceding that individual "indiscretions" may have occurred in a school system that serves more than 200,000 students, Paige described the Houston Independent School District as "the most evaluated school district in the history of America." He said he places great stock in the credibility of an accountability system that demands quantifiable results from administrators, teachers and children.
"The whole system for me rode on integrity," Paige said.
An investigation by state auditors showed that at least 14 other Houston high schools, including Austin, reported unusually low dropout rates in 2000-2001, although there is no evidence administrators falsified data. By reporting a dropout rate of less than 0.5 percent, school principals increase their chances of winning bonuses of as much as $10,000 and earning top accountability ratings for their campuses.
After years of relying on dropout statistics as a key component in their annual accountability studies, school officials concede that they were worthless all along. "The annual dropout rate was a crock, and we're not [using] it anymore," said Robert R. Stockwell Jr., the district's chief academic officer.
In this matter, and writing from quite a distance without local knowledge, I am a lot more sympathetic to the school officials than to the Texas Education Agency officials. The TEA decided that schools must report dropout statistics, but of course schools have no good way of knowing which students have moved to another district, switched to another school, or really dropped out - and the school officials may well decide that it is none of their business and not worth their time to try to track down students who have left.
The WP article also pays attention to score inflation in Houston on the Texas 10th grade test - a very important benchmark in their high school accountability system. I am not surprised and not disturbed that many students are held back in 9th grade. The WP article, however, mentions cases - not clear if they are isolated or part of a pattern - of students who are held back twice in 9th grade and then advanced directly to 11th grade, so that they skip the 10th grade test altogether and do not pollute its scores.
In Search of Intelligent Life at the SBOE, by Michael King (Austin Chronicle, Sep 19, 2003). The Texas State Board of Education devoted a marathon session (the first of two) to a hearing of arguments for and against the proposed adoption of high school biology textbooks. The proposed adoptions represent mainstream science, and the anti-Darwinian forces claim that the adoptions are factually in error by being insufficiently critical of evolutionary theory. The article reports on the testimony and the testifiers, and also provides good background on the relevant rules for textbook adoptions.
Two days ago I commented on a questionable item in the 10th grade TAKS mathematics test, for which scores were revised. The associated TEA press release also refers to a controversy over some science test items, and states that, upon review, these items were found to be correct. It is fascinating to see the items and to see how they are judged to be correct. The TEA (Texas Education Agency) put out an Additional Information Regarding Released Science Items document for the spring 2003 testing cycle. Four controversial items are discussed.
Grade 5 Science, Item 13. Item 13 asked students which two planets are closest to Earth. Among the possible answers: Mercury and Venus, and Mars and Venus. The correct answer varies over time, and the question is plainly wrong or crazy. To add insult to injury: the intended answer was Mars and Venus, but on the day the test was given the correct answer was Mercury and Venus. Nevertheless, the TEA insists that for the purpose of the 5th grade test the question had only one correct answer - to wit, the wrong answer.
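To see how the ranking changes, here is a minimal sketch of my own. It assumes circular, coplanar orbits with standard mean orbital radii; the configurations and the helper names are mine, chosen only for illustration. Under those assumptions the second-closest planet flips between Mars and Mercury depending on where the planets happen to be in their orbits.

    from math import cos, radians, sqrt

    # Mean orbital radii in AU, treating the orbits as circular and coplanar.
    ORBIT = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52}

    def distance(r1, a1_deg, r2, a2_deg):
        # Law of cosines on the two heliocentric position vectors.
        d = radians(a1_deg - a2_deg)
        return sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * cos(d))

    def two_closest(angles):
        # The two planets closest to Earth, with Earth fixed at longitude 0.
        dist = {p: distance(1.00, 0, ORBIT[p], angles[p])
                for p in ("Mercury", "Venus", "Mars")}
        return sorted(dist, key=dist.get)[:2]

    # Mars on Earth's side of the Sun: Mars and Venus are the two closest.
    print(two_closest({"Mercury": 180, "Venus": 0, "Mars": 0}))
    # Mars on the far side: Mercury and Venus are the two closest.
    print(two_closest({"Mercury": 0, "Venus": 0, "Mars": 180}))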
Grade 10 Science, Item 50. Item 50 looks crazy to me - they seem to be testing, in a most convoluted way, that the student knows that the element symbol K stands for potassium. The TEA discussion indicates that the item is factually wrong to boot, but they insist that it is valid just the same.
Grade 11 Science, Items 11 and 45. Question 11 asks for the force exerted by a jumping frog on a leaf. The force has two components: one due to the weight of the frog and the other due to its acceleration. These are to be added vectorially, but the direction of the jump is not given. The TEA insists that therefore the correct treatment of the question must ignore the weight of the frog. Obviously the question is wrong, and the TEA is wrong to insist that it is correct.

Question 45 concerns a hypothetical situation in which a force is exerted on an object but no work is done. The question asks what can be concluded, and the intended answer is that the object is and remains at rest. This is wrong; the force may be perpendicular to the direction of motion, as with the centripetal force in uniform circular motion. The TEA insists in effect that students don't know that, and that therefore the TEA's intended answer is, for the purpose of the test, the unambiguously correct answer. The physics of both objections is spelled out below.
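For readers who want the details, here is a short worked version of both objections. The symbols are mine (m for the frog's mass, a for its acceleration, N for the force from the leaf); the released items themselves are not reproduced here.

    % Item 11, taking the jump to be vertical (the direction is not given).
    % Newton's second law for a frog of mass m with upward acceleration a:
    \[
      N - mg = ma \qquad\Longrightarrow\qquad N = m(g + a) .
    \]
    % By Newton's third law the frog pushes on the leaf with the same
    % magnitude m(g + a), not ma: the weight cannot simply be dropped.
    % For a slanted jump, m\vec{g} and m\vec{a} must be combined as
    % vectors, so the answer depends on the unstated jump direction.

    % Item 45: the work done by a constant force F over a displacement d is
    \[
      W = F\, d \cos\theta ,
    \]
    % which vanishes for F \neq 0 and d \neq 0 whenever \theta = 90^\circ:
    % the centripetal force in uniform circular motion does no work even
    % though the object is certainly not at rest.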
The TEA has a bit of a quality control problem, obviously. In connection with the earlier 10th grade math test problem, Kimberly Swygert asked whether the pre-testing should not have caught the error. The same question could be asked about these science test items, but I think it is too much to ask of the psychometric process that it correct for blunders of this kind.
I suspect that for many patently wrong questions students will nevertheless do what the TEA expects of them. The pernicious effect of the bad test items is indirect. It creates among students and the public an impression (a correct impression) that the TEA doesn't have its house in order; that questions can't be trusted to mean what they say; and that one should always be prepared to second-guess the plain meaning of a question.
A closing remark: the New York State Regents testing division has similar quality control problems. I remind the reader of the earlier discussion of the June 2003 Regents Math A exam, and of my related Critique of the New York State Regents Mathematics A Exam.
As reported in the Houston Chronicle, the Texas Education Agency yesterday announced a revision of the scores on the 10th grade TAKS exam given this spring, because of an error in one of the questions.
Readers with some knowledge of high school trigonometry may find it interesting to see the problem. The question is reproduced in the Houston Chronicle, or one can see the TEA original (look at question 8). The question shows a drawing of a regular octagon, indicating an inscribed radius of 4.0cm and a circumscribed radius of 4.6cm, and asks for the perimeter of the octagon to the nearest centimeter. The choices are 41cm, 36cm, 27cm, and 18cm.
The data are contradictory: an octagon with inscribed radius 4.0cm has circumscribed radius of about 4.33cm. Taking the 4.0cm and 4.6cm at face value, a student might reason that the perimeter of the octagon lies somewhere between 2*pi*4.0cm and 2*pi*4.6cm, which leads to the answer 27cm in the multiple choice format. Or the student could apply trigonometry and obtain a perimeter of 26.5cm by starting from the given inscribed radius, or 28.2cm by starting from the given circumscribed radius. A fourth approach is to apply Pythagoras's theorem to the right triangle that has the circumscribed radius of 4.6cm as hypotenuse and the inscribed radius of 4.0cm as one leg; then one finds that the perimeter of the octagon must be 36.3cm. That (or rather, 36cm) was the intended answer.
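For the record, here is a small sketch of my own that reproduces all four candidate computations from the two printed radii; it uses nothing beyond the numbers quoted above.

    from math import pi, cos, sin, tan, sqrt

    r, R = 4.0, 4.6   # inscribed and circumscribed radii as printed (cm)
    n = 8             # number of sides of the regular octagon

    # Consistency check: an octagon with inscribed radius 4.0cm has
    # circumscribed radius r / cos(pi/n), about 4.33cm - not 4.6cm.
    print(r / cos(pi / n))              # ~4.33

    # Bounding by the two circles: the perimeter lies between 2*pi*r
    # and 2*pi*R, so only the 27cm choice fits.
    print(2 * pi * r, 2 * pi * R)       # ~25.1 and ~28.9

    # Trigonometry from each radius taken separately.
    print(2 * n * r * tan(pi / n))      # ~26.5 from the inscribed radius
    print(2 * n * R * sin(pi / n))      # ~28.2 from the circumscribed radius

    # Pythagoras on the half-side triangle (hypotenuse R, one leg r).
    print(2 * n * sqrt(R * R - r * r))  # ~36.3, the intended answer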
According to the TEA press release, "item eight on the 10th grade math test could have been read in such a way that the question had more than one correct answer". That is putting a very kind spin on their blunder - there is in fact no reading of the question under which it has just one correct answer. It is amazing that the TEA would have this test composed and reviewed by people who fail to recognize that one cannot arbitrarily specify both the inscribed and the circumscribed radius of a regular polygon. According to the TEA press release: "Each test item goes through a rigorous review process that includes a field test of the items and two separate review sessions by professional educators who have subject-area and grade-level expertise and who are recommended by their district." The TEA didn't mean that as an explanation, but for me the "professional educators" part goes a long way just the same.
[Addendum, Aug 09. Please see the figure accompanying question 8 in the exam. The line segments that I described as inner and outer radii are not, in fact, identified as such in the figure or in the question. They meet at a point that certainly appears to be the center of the octagon, but that is not labelled either. There is, therefore, a reading of the question under which it has a single correct answer. Under that reading the given data are all correct, the special point is not meant to be the center of the octagon, and the figure is simply distorted in what happens to be a highly misleading way.]