News, notes, and observations from the James River Valley in northern South Dakota with special attention to reviewing the performance of the media--old and new. E-Mail to MinneKota@gmail.com

Saturday, November 3, 2007

Education is caught up in the numbers racket

I spent the last three weeks in Moline, Ill., one of the Quad-Cities, on medical duty. During that time various assessment testing results from the school districts in the region were being released and reported in the news media.

The school board in Rock Island was in turmoil. Its high school was listed in a study from Johns Hopkins University as a "dropout factory." The study was based on statistics compiled by the federal department of education. However, the Rock Island School District had compiled its own statistics for submission to the state, and the numbers it produced bore no relationship to what was reported from the federal agency.

At this time, there is no explanation for the severe discrepancy, but the situation does send a message to the public. The numbers from the multitude of studies we are confronted with are often fraudulent. We cannot trust them or believe that they tell us anything truthful or significant.

I am not a statistician. As a journalist and educator, I have had to take courses in statistical procedure and statistical reasoning. Such courses introduce one to the basic principles of statistics and probability, and one spends a great deal of time studying how statistics are incompetently compiled and how they are falsely used. The fact is that numbers are constantly used incompetently and fraudulently, and too many people are too undereducated to question the basis and use of the statistics thrown at them.

Benjamin Disraeli is credited with noting that there are lies, damned lies, and statistics. We have known for a long time that we should be skeptical and careful about statistics. They often are generated by people who are totally ignorant of what comprises valid statistical assertions, or by people who deliberately use them for deceptive purposes.

The testing required by No Child Left Behind has resulted in an epidemic of false and pointless statistical assaults. The fraud has many causes.

1. As in the case of the Rock Island situation, the data gathering is incompetent. This is largely a matter of what is counted and whether the people doing the counting know how to count. A data set has to be defined scientifically and handled with care by people of competence and integrity.

2. With educational testing, the first question to be asked is whether the tests themselves are capable of measuring what they claim to measure. When kids are being tested by people who have no concept of how to write a test, the results are meaningless. Many of the tests used in NCLB assessments are the equivalent of finger-painting. They are wildly "creative," but they represent no known intellectual phenomena.

3. The inferences made from a set of test scores are often the result of fallacious reasoning.

The NCLB tests are supposed to be diagnostic. They are supposed to tell teachers, schools, and school districts how well their children are doing in terms of a general, comparative tendency. For example, the education department may say that any school in which fewer than 55 percent of students test as accomplished readers will be put on a warning list. That kind of purpose in itself shifts the focus of the test from assessment of students to punishment of the schools.

If assessment is working, it will tell the schools how each student is doing and what factors can be identified as contributing to good performance and what is causing poor performance. And poor performance can be caused by a multitude of factors: genetics, personalities, economic factors, social conditions in homes, social conditions in classrooms and schools, size of classes, curriculum materials, teacher personalities, classroom design, parental and administrative support, and on and on.

Tests should give teachers and administrators feedback on what is working and why, and what is not working and why. And if 55 percent of the students are not proficient readers, for example, testing information should include a profile of the class to determine if that percentage requirement is even relevant to a given class. To make progress with students, educators have to know what variables they are working with.

One of the successes of the old one-room schools was that teachers had the children in class over a number of years and knew in great detail the individual backgrounds and performance capabilities of each student. Testing is meant to supply some of that information that would otherwise be gathered by close observation over a long period of time. It is not enough to tell schools if they aren't doing well; they have to know why. Otherwise, the testing is pointless.

As long as politicians and administrators with little or no experience in the educational process are designing the assessments of our schools, we will learn nothing and get nowhere. The assessments and reporting of them should be in the hands of people educated and experienced in the process of education. The job of administrators should be to implement their programs and explain them to the school boards, whose job it is to represent the public interest in the process.

NCLB has no relevance to education as it actually occurs. It could be an asset if it were designed and administered by teachers. As it is, it is just another obstacle in the way of education.

[Simultaneously posted at KELOLAND.]
