February 26, 2007

On the lack of benefits of sustained silent reading for at-risk kids (p. 9)

The studies compared outcomes of our procedures to those of traditional approaches. For instance, he did studies on the extent to which young, at-risk beginning readers benefit from engaging in independent, silent reading. The answer was none. If anything, the practice tended to galvanize the mistakes they made because they received no feedback when they worked independently, so they simply practiced the same mistakes or inappropriate strategies they made when they read aloud to a teacher.

On teaching by use of examples (p. 11)

Doug did three large studies that addressed the number of differences between positive and negative examples. One study involved five groups of preschoolers. All children received the same positive examples. (A positive example of red is red; a negative, not-red.) Each group, however, received unique negative examples. For one group, positives differed in only one way from the negative (like those in the sequence for red). For the next group, the negative examples differed in two ways (which, for the example of red, would be a difference in color and possibly in the size of the circle). The next group had negatives that differed in three ways from the positives (possibly not red, larger, and oval, not round) and so forth to the fifth and final condition, which had only positives, no negatives.

As the logical analysis predicted, the children who received negatives that differed in only one way from positives outperformed all the other groups, and the poorest group was the one that was shown no negatives. Note that the vast majority of traditional teaching demonstrations do not have any negatives, or, if they do, they are greatly different from the positives in an attempt to make the learning "easier."

...

These again are picky details, but that’s what makes the difference—saving ten trials here, thirty there, avoiding inducing a misconception that will later require reteaching. The result is acceleration and greatly increased confidence of children that they will learn what the teacher tries to teach.

On the futility of Elegant Variation (pp. 11-12)

Consider the unfortunate teacher who goes out of her way to make the directions and “tasks” more interesting by varying wording. Her presentation may sound better to the naïve observer, but she has greatly compromised the clarity of what she is trying to teach. In the same way, the teacher who doesn’t know the details of the positive and negative examples will do what intuitively makes sense, which guarantees that her presentation will be more difficult for the naïve learner.

Intuition might dictate that a concept would be easier to teach if the presentation did not involve positive examples that are minimally different from negatives. That intuition, however, is supported by neither logic nor experimental outcomes. A long list of popular practices (such as first teaching students fractions that have 1 as the numerator) makes no sense logically or empirically.

On teachers' claims that they know best: The evidence says they don't (pp. 13-14)

Self-sponsored sites in Follow Through were based on the "theory" that local schools know more about their children and the problems they are experiencing than remote writers of instructional programs or sponsors who are not knowledgeable about local children. Because local teachers know more about the children, they are better prepared to orchestrate effective instruction for them. There were 15 self-sponsored districts, and many implemented in more than one school. If Follow Through had been limited to the 15 self-sponsored communities, it still would have been the largest educational experiment ever conducted.

The performance of these sites would be important to those in the educational community who believed that teacher autonomy and collaboration would transform failed schools into effective ones. Nearly all the self-sponsored sites promoted teacher autonomy and school autonomy. They permitted enormous latitude in how teachers developed material and practices to meet the needs of children, and nearly all had provisions for teachers working collaboratively to meet the children's needs. If judged on the appeal of goals and practices, these sites should have been 10s. In fact, all failed. Incredibly, the self-sponsored sites performed below the average of the sponsored sites, which was considerably below the level of children in Title 1 schools.

On teaching crack babies (p. 20)

During a visit I made to Bridgeport, a trainer mentioned the problems they were having with the low performers in another school. She indicated that these children were crack cocaine victims. I told her I had never worked with such children and wanted to see them. We drove to the other school. After I worked with some of them, I told the trainer that if these children showed the effects of their mothers’ crack cocaine habits, there must have been a lot of crack cocaine mothers back in the sixties in Champaign-Urbana, because these children performed just like the lower ones we had worked with in the preschool. They entered school very low, but could learn with careful instruction.

On schools not knowing how to motivate students (p. 32)

Instead, foundations fund limp practices like showing role models (professional athletes reading to kids) or staging events that celebrate reading. These attempts are based on the notion that if kids are motivated, they will read. In fact, they punish low performers who know they can't read by showing how important it is to read and showing the happy faces of children who do what low performers have unsuccessfully tried to do for years.

February 25, 2007

So you think the reading wars are over? Do you think phonics won? Do you think that educators were cowed by the reading research that showed that their favored methods didn't work? Do you think that educators balanced their crappy whole language programs with real phonics?

Think again.

The Feds implemented Reading First to force educators to adopt reading programs based on scientific research. The Feds offered educators lots of grant money provided they adopted reading programs consistent with the research on reading. To effect Reading First, the Feds:

sponsored three major reading academies, the Secretary’s Reading Leadership Academies (RLAs). The RLAs were held in Washington, D.C., in January and February 2002, and hosted policymakers and key education leaders from every state and territory in the nation. The academies were designed to help state leaders gear up for the implementation of Reading First, the Department’s program to improve the quality of reading instruction in kindergarten through third grade.

The RLAs included a session entitled "Theory to Practice: A Panel of Practitioners," in which:

The speakers discussed how implementing a scientifically based reading program had brought about great improvements in the reading skills of their kindergarten through third grade students.

A majority of the panel consisted of principals who had implemented either the Direct Instruction (DI) reading program or the Open Court reading program, two of only three programs that have been research-validated.

After the RLA sessions the "policymakers and key education leaders from every state and territory in the nation" had an opportunity to comment on the RLA sessions by filling out evaluation forms.

Normally, such evaluation forms are maintained in confidence and I suspect that the attendees never expected that their comments would come to light. But then a little thing happened on the way to the teachers' lounge ....

The Department of Education's internal auditing arm, the Office of the Inspector General (OIG), audited the Reading First program. (I'll take up the merits of this latest OIG audit in an upcoming post; quick take: it is laughable.) As part of that audit, they reviewed the evaluation forms submitted by the policymakers and key education leaders from every state and territory in the nation. And now as part of the OIG's Final Audit Report February 2007 you can see some of their comments in all their glory. See pages 25-39.

A few points immediately jump out. Many of these policymakers and key educators:

1. have not accepted the reading research and are not willing to abandon their beloved whole language programs.

2. were a hostile audience.

3. intensely hate DI and Open Court, i.e., the reading programs that have been validated by reading research.

4. were conspiring behind the scenes to give the impression that DoE was trying to force them to adopt specific curricula.

5. know all the clichés very well.

These are THE state-level policymakers and education leaders, not a bunch of powerless teachers or academic ideologues. These are the people who make the education policy in your state. From these comments, it is clear that they won't be abandoning their beloved whole language anytime soon. At least not willingly.

And the research for reading is much further advanced than it is for math. Consider this a preview of the math wars five years hence.

I've been reading one of the most important education books you'll likely never have a chance to read. It's by Siegfried Engelmann, and it's about Direct Instruction, the structured curriculum he began to develop in the early 1960s, how DI participated in a federal study called Project Follow Through, and how the results of that study - which demonstrated that DI produced superior outcomes for at-risk children - were essentially disappeared from the educational landscape by hostile educators and bureaucrats.

February 23, 2007

Over at edwonk’s, Mike challenges me to show some data on the failed state of our schools.

Usually, my questions end the discussion because no specific answers are forthcoming.

Let’s use some data from Pennsylvania, an average-performing state according to NAEP.

Let’s look at the results from PA’s 11th grade PSSA exam (2005).

As a preliminary matter, let’s establish that the PSSA is a valid test of student performance. PA takes its testing seriously and has conducted numerous analyses to determine the validity of its test. Those studies can be found here. Here’s another independent evaluation done by Achieve, Inc.

According to the evaluations, the PSSA correlates nicely with such respected measures of performance as the CTBS, TerraNova, SAT-9, and SAT. One consistent complaint made by the evaluators is that the PSSA lacks rigor, especially at the 11th grade level. See pages 33-35 and 50-52 of the Achieve evaluation.

The math portion of the test contains 66 questions. A student needs to answer 31 of those questions correctly to score at the basic level (not to be confused with the proficient level). This does not mean that the student needs to know the answers to 31 questions. Because the questions are multiple choice, a student only needs to know the answers to 20 of them. By filling in the rest of the bubbles at random, the student will on average receive credit for about 11 additional correct answers. So, to score at the basic level a student only needed to know 30% of the answers on a low-rigor test. Worse than that, according to Achieve Inc., 26% of the questions on the exam (almost the same number the basic student needs to know) were at the lowest level of cognitive demand ("Items require the recall of information such as a fact, definition, term or simple procedure."). Yet despite this low rigor and low cut score, 33% of PA 11th graders could not meet this threshold of performance. And that does not include all the students who had already dropped out by the 11th grade.
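The guessing arithmetic above is easy to check with a quick sketch (assuming, hypothetically, four answer choices per multiple-choice item, which the post implies but never states):

```python
# Expected PSSA math score for a student who truly knows only some answers.
# Assumption: each multiple-choice item has 4 options, so random guessing
# earns credit on 1/4 of the guessed items on average.
TOTAL_ITEMS = 66
CUT_SCORE = 31        # items correct needed to reach "basic"
CHOICES = 4

def expected_correct(known: int) -> float:
    """Known answers plus expected credit from guessing the remaining items."""
    guessed = TOTAL_ITEMS - known
    return known + guessed / CHOICES

print(expected_correct(20))               # 20 known + 46/4 guessed = 31.5
print(expected_correct(20) >= CUT_SCORE)  # True: knowing 20 (~30%) clears "basic"
```

On average, then, genuinely knowing 20 of 66 answers is enough to land at or above the basic cut score of 31.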

The analysis for the reading portion is similar. A student only needs to answer 54% of the questions correctly to score at the basic level. 27% of students couldn’t do this. According to Achieve, 81% of the test questions fall within the lowest level of cognitive demand.

If we look at all the high schools in PA we find 665 schools with reported PSSA scores for the 11th grade. 25% of those schools have 40% or more of their students performing at the “below basic” level. That seems to me to be an unconscionably large number. The data for all these schools are readily obtainable at schoolmatters. Hopefully, this'll be enough data to give Mike a small taste of the utter failure that permeates our schools. Let’s see if Mike can spin the data.

February 21, 2007

Educators have much to be embarrassed about, not the least of which is their inability to adequately educate most children. One only needs to look at sample questions from any standardized test and the percentage of students who answer incorrectly to get a good understanding of the magnitude of their failure. This failure is most profound at the elementary school level--a time when children should be learning all the basic skills they need to succeed academically in later years. Today, educators are better at making excuses and inventing phony disabilities than they are at making educated students.

I can write such things without fear of criticism because they are true. Back in the 70's we spent a half a billion dollars proving they were true. And they remain true today. Nothing has changed. The song remains the same.

Back in the 70's we conducted a research-based contest involving thousands of students learning in actual classrooms. We assembled all our high-falutin educators, gave them lots of money, and asked them to teach children in the manner of their choosing and to the best of their ability.

The experiment was Project Follow Through.

We gave educators everything they asked for. Not only did we give educators over an extra $2000 per pupil (in today's dollars), we also provided comprehensive services, which included breakfast, lunch, medical, and dental care, and social services, to each student taking part in the experiment. There would be no hungry students with toothaches, tummy aches, and bad parents to foul up the data.

Our educators came back with exactly the sort of educational programs you would expect from them. One group came back with a whole language program; another came back with a discovery learning constructivist program. A third came back with an open education model, while a fourth came back with a parental involvement model. (A description of all the major models can be found here.) In short, all the fashionable nonsense that passes for education today took part in the contest.

Also included in the experiment was a program designed by a non-educator with no formal education training other than teaching preschoolers. This program was Direct Instruction (DI).

Each program would be implemented in real schools in actual classrooms with actual teachers. Each sponsor got two years to implement and stabilize their program at their sites. The third cohort through the programs would be tested in grades K-3. That's four years' exposure to the intervention.

Most of you have heard the results, but I suspect that many of you do not know the magnitude or the lopsidedness of the results. Engelmann, the co-creator of the DI program, describes the results as follows:

The basic skills consisted of those things that could be taught by rote—spelling, word identification, math facts and computation, punctuation, capitalization, and word usage. DI was first of all sponsors in basic skills...Only two other sponsors had a positive average. The remaining models scored deep in the negative numbers, which means they were soundly outperformed by [the control group]

DI was not expected to outperform the other models on “cognitive” skills, which require higher-order thinking, or on measures of “responsibility.” Cognitive skills were assumed to be those that could not be presented as rote, but required some form of process or “scaffolding” of one skill on another to draw a conclusion or figure out the answer. In reading, children were tested on main ideas, word meaning based on context, and inferences. Math problem solving and math concepts evaluated children’s higher-order skills in math.

Not only was the DI model number one on these cognitive skills; it was the only model that had positive scores for all three higher-order categories: reading, math concepts and math problem solving. DI had a higher average score on the cognitive skills than it did for the basic skills...

Not only were we first in adjusted scores and first in percentile scores for basic skills, cognitive skills, and perceptions children had of themselves, we were first in spelling, first with sites that had a Headstart preschool, first in sites that started in K, and first in sites that started in grade one. Our third-graders who went through only three years (grades 1-3) were, on average, over a year ahead of children in other models who went through four years—grades K-3. We were first with Native Americans, first with non-English speakers, first in rural areas, first in urban areas, first with whites, first with blacks, first with the lowest disadvantaged children and first with high performers.

Clearly, these results were a profound embarrassment to our educators. Not only did a program developed by a non-educator completely and utterly trounce the educators' programs, but often their programs were beaten, and beaten badly, by the control group which received a more traditional education.

The bitterness still exists to this day. The DI program is so hated by educators they have erased it from their collective memory banks. It is a painful reminder of their professional incompetence. It dispels all their unscientific "theories" and unfounded opinions. It shows that they are a sham.

For educators, Project Follow Through was a total loss. They lost in every academic subject tested. They lost in teaching basic skills, which was expected. But they also lost in teaching higher order skills and in fostering student self esteem, which was unexpected. They lost in teaching low performers. But they also lost teaching high performers! They lost teaching everybody everything. It was a humiliating defeat.

And, they didn't just lose by a little bit. They lost by a lot, and by a lot I mean an obscene amount. For most measures, the DI program beat the control group by an effect size of a standard deviation (this means a student performing at the 20th percentile would be performing at roughly the 50th percentile after the DI intervention). Many of the educators' programs, in fact, underperformed the control group. Ouch!

Today, educators get giddy as schoolgirls when research comes back showing an educationally insignificant effect size of a quarter standard deviation, like the results of the STAR Project (class size reduction). The DI program beat them by 4x this amount. Yet, you'll never hear a good word coming from an educator when you mention DI.
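The percentile claims can be checked directly, assuming normally distributed scores. A full-standard-deviation gain actually carries a 20th-percentile student to about the 56th percentile, so the 50th-percentile figure above is, if anything, conservative; a STAR-sized quarter-SD gain moves the same student only to about the 28th:

```python
from statistics import NormalDist

def shifted_percentile(start_pct: float, effect_size_sd: float) -> float:
    """Percentile reached by a student starting at start_pct after a gain of
    effect_size_sd standard deviations, assuming normally distributed scores."""
    z = NormalDist().inv_cdf(start_pct / 100)       # starting z-score
    return 100 * NormalDist().cdf(z + effect_size_sd)

print(round(shifted_percentile(20, 1.0)))    # 56: a full-SD effect (DI-sized)
print(round(shifted_percentile(20, 0.25)))   # 28: a quarter-SD effect (STAR-sized)
```

The asymmetry is the point: near the tails, a quarter standard deviation barely moves a student's rank at all.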

The performance of the DI program in Project Follow Through dispels all the popular notions educators hold about education. When you hear an educator spout opinions on what he thinks is necessary to improve education, you can rest easy knowing that such bromides were tested in Project Follow Through.

We need more money:

The performance of the sponsors clearly debunked the notion that greater funding would produce positive results. All sponsors had the same amount of funding, which was more than a Title-1 program received. DI performed well in this context; however, the same level of funding did not result in significant improvement for the other models.

You got more money in Project Follow Through and it didn't make a whit of difference in your performance. Not only did you lose, you underperformed the control group.

Poverty kids have too many external factors affecting their performance. We need to fix those before we can educate them.

For all programs there were comprehensive services, which included breakfast, lunch, medical, and dental care, and social services. In this context, the only reasonable cause for the failure of other models was that they used inferior programs and techniques.

The kids in the DI program had the same external factors affecting them, and yet the DI program improved the academic performance by a significant amount despite the presence of these factors.

Kids have different learning styles. One program isn't suitable for all kids.

DI outcomes also debunked the myth that different programs are appropriate for children with different learning styles. The DI results were achieved with the same programs for all children, not one approach for higher performers and another for lower performers, or one for non-English speakers and other for English speakers. The programs were designed for any child who had the skills needed to perform at the beginning of a particular program. If the child is able to master the first lesson, she has the skills needed to master the next lesson and all subsequent lessons. The only variable is the rate at which children proceed through the lessons. That rate is based solely on the performance of the children. If it takes more repetition to achieve mastery, we provide more repetition, without prejudice.

I could go on all day dispelling education myths--half a billion dollars buys a lot of data-- but I think you get the point by now.

In the years subsequent to Project Follow Through, educators haven't improved their programs by one iota. Student performance remains stagnated at 1970's levels. The only thing they've been successful at was in burying the Project Follow Through data with the help of Jimmy Carter--history's greatest monster-- and his awful presidential administration. (Ironically enough, Project Follow Through was initiated by Democrats and then killed by Democrats when the results did not jibe with the interests of their special interest supporters.)

In the meantime, the DI people have quietly gone about the business of improving their program while continuing to be shunned by educators. Today, a well implemented K-5 DI school (which only teaches the scripted DI programs for half the day) can get students performing at less than the 20th percentile, children that educators write off as ineducable, up to the performance level of children in affluent schools (See the 2003 results for what was once the worst performing school in the inner city of Baltimore). They do this in a normal school day, in a normal school year, without cherry picking students, without giving homework, without narrowing the curriculum, without sacrificing recess, art class or music class, while actively discouraging the school from teaching to the state tests, by virtually eliminating the need for special education classes, and at the same funding levels available to most schools.

The Commonwealth of Virginia has been doing quite a bit of moaning about its English language learner problem under NCLB:

Officials in some school districts with a high number of immigrants are threatening to defy a federal law that requires all children to take the same reading tests, even those struggling to learn English.

This month, the U.S. Department of Education threatened sanctions against Virginia – including the possibility of withholding funds – if the state doesn't enforce the provision, which is part of the No Child Left Behind law.

The Virginia Department of Education had sought an exemption for another year, contending that the rule is unfair.

Immigrants who have been in the U.S. a short time "are simply unable to take a test written in English and produce results that are meaningful in any way," said Donald Ford, superintendent of the Harrisonburg city school division.

NCLB is scheduled to be rewritten this year and it is likely that the rules for older students who are recent immigrants will be changed to give them additional time to learn English. In my opinion, however, there really is no reason to make an exception for younger students who are recent immigrants and just entering school.

The problem is not that many of these ELL kids don't know English, but that they don't know concepts in any language. Engelmann addressed this problem in chapter 2 (pp. 61-62) of his book (no longer available online), describing low-performing ELL children such as those in the Uvalde, TX school system, which was a Project Follow Through site:

The children in Uvalde had a large range of skill variation. The lower performers were lower than any of the Portuguese children I had worked with. Teachers felt that these children were progressing slowly because instruction was in English. I had a teacher test some of the lower-performing children in Spanish on their knowledge of prepositions, colors, and words like left and right. Not one of the children knew more than a third of the words the teacher tested. I tried to make the point that if children don’t know the “concepts” in either English or Spanish, the most efficient practice is to teach in English. We knew that they would need understanding of them in English to perform on tasks the teacher would present later.

Higher performers who already know a concept in Spanish merely have to learn the associated English word, which is far easier than teaching the underlying concept to a child who doesn't know it in either language.

So, this criticism of NCLB is a non-starter with me, as is the meme that NCLB is underfunded. The underlying problem remains the poor instruction being doled out to ELL children, and that is what NCLB intends to fix.

February 20, 2007

Judging a school by its test scores isn't a bad idea, it's just a limited one. A bill in the state Legislature would create a new measuring stick -- one that charts the chances students have to engage in creative activities. That might mean acting in the school play or being in the science fair. And as this century dawns, it should also mean engaging in multidisciplinary activities that may combine math and art or science and economics.

You really do need to read the whole thing.

And, note the irony that a government commission is going to be in charge of defining what "creativity" is and how it will be judged. Yeah, that'll work.

In part one I asked you to use your higher order thinking skills to solve this problem:

You have two identical glasses, both filled to exactly the same level. One contains red dye, the other water. You take exactly one spoonful of red dye and put it in the water glass. Then you take one spoonful of the mixture from the water glass and return it to the red dye glass.

Question: Is there more red dye in the water glass than water in the red dye glass? Or is there more water in the red dye glass than red dye in the water glass? In other words, the percentage of foreign matter in each glass has changed. Has the percentage changed more in one of the glasses, or is the percentage change the same for both glasses?

I even gave you a hint:

Instead of water and red dye, think of red balls and white balls. Assume that each glass starts out with 100 balls of a single color. Now remove a number of red balls from the red-ball glass and put them in the white-ball glass. Then return the same number of balls from the glass with the “mixture” and put them in the red-ball glass. Do this with different numbers of red and white balls.

Now it's time to end your suffering and give you the answer.

To solve the problem (without resorting to math) you need to understand conservation of number--a Piagetian concept. According to Piaget, conservation of number is

the understanding that the number of objects remains the same when they are rearranged spatially.

Also according to Piaget, you should have developed this concept naturally by the age of seven:

Piaget proposed that number conservation develops when the child reaches the stage of Concrete Operations at around 7 years of age. Around this time, children also develop an understanding of other forms of conservation (e.g., weight, mass). However, number conservation is often the first form of conservation to develop. Before the stage of Concrete Operations, children may believe that the number of objects can increase or decrease when they are moved around.

In Piaget's classic example he arranged objects in two rows, then spread out the objects, and asked the child if the number had changed. Apparently, children younger than about seven don't realize that the number of objects stays the same, i.e., their number is conserved.

No doubt, as an adult you understand the concept of conservation of number and could answer Piaget's problem with the rows of objects readily. But why couldn't you answer my red dye and water example? It's based on the same concept. Liquids are made up of a fixed number of molecules whose number is conserved. So why the difficulty?

What I was trying to do was demonstrate the difference between flexible and inflexible knowledge. You understand that number is conserved and can solve problems involving concrete objects with ease. However, you may have struggled with the unfamiliar red dye and water problem, perhaps not realizing that liquids are composed of a fixed number of molecules. Even those of you who solved it by setting up a ratio may not have realized that the problem is readily solvable, without any math, by knowing that number is conserved.

Hopefully, this demonstrates that most people do not have a flexible understanding of conservation of number that is readily generalizable to unfamiliar examples. Your knowledge of conservation of number is likely not flexible. It did not develop naturally like Piaget said it would and it probably was not taught well to you either. The concept remains inflexible.
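The ball-and-glass version of the hint can be run as a literal simulation (a sketch; the counts and spoon size are hypothetical, and the result doesn't depend on them):

```python
import random

def swap_experiment(n_balls=100, spoon=10, seed=0):
    """Move `spoon` red balls into the white glass, stir, move `spoon` random
    balls back. Return (white balls in red glass, red balls in white glass)."""
    rng = random.Random(seed)
    red_glass = ["red"] * n_balls
    white_glass = ["white"] * n_balls
    # spoonful of red into the white glass
    white_glass.extend(red_glass.pop() for _ in range(spoon))
    rng.shuffle(white_glass)  # "stir" the mixture
    # spoonful of the mixture back into the red glass
    red_glass.extend(white_glass.pop() for _ in range(spoon))
    return red_glass.count("white"), white_glass.count("red")

for seed in range(5):
    white_in_red, red_in_white = swap_experiment(seed=seed)
    print(white_in_red, red_in_white)  # the two counts are always equal
```

Whatever mix the second spoonful happens to contain, every red ball it returns displaces exactly one red ball that would otherwise have stayed behind, so the two foreign counts always match: number is conserved.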

Answering the problem doesn't require higher order thinking skills or critical thinking skills or any other fancy jargon educators like to use. All it requires is a flexible understanding and application of a basic concept that any seven-year-old readily understands--conservation of number. If the basic concept/skill is well taught and the student is given sufficient practice, the student will eventually develop a flexible understanding of the concept and will be able to apply it to solve tricky problems. You don't need a super high IQ to understand such things if you were taught them beforehand.

However, if this concept is not well taught, the student must rely on other knowledge to solve the problem indirectly. Setting up a math ratio is one such route. I bet many of you math heads solved the problem this way, demonstrating your flexible understanding of the concept of ratio--another basic concept.
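For the ratio route, the bookkeeping can be done exactly with fractions (the glass and spoon volumes here are hypothetical; the equality holds for any values):

```python
from fractions import Fraction

glass = Fraction(100)   # volume of each glass (arbitrary)
spoon = Fraction(10)    # volume of one spoonful (arbitrary)

# Step 1: a spoonful of dye goes into the water glass.
water_glass_dye = spoon
water_glass_total = glass + spoon

# Step 2: a spoonful of the uniform mixture goes back.
dye_fraction = water_glass_dye / water_glass_total   # 10/110 = 1/11 dye
dye_returned = spoon * dye_fraction
water_returned = spoon - dye_returned

dye_in_water_glass = water_glass_dye - dye_returned  # dye left behind
water_in_dye_glass = water_returned                  # water carried over

print(dye_in_water_glass, water_in_dye_glass)        # 100/11 and 100/11: equal
```

And since both glasses end up back at their original volume, equal amounts of foreign matter mean equal percentages too.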

But what about higher order thinking skills? Do they even exist? Or are they just the byproduct of not receiving good instruction in basic skills?

NB: I got this problem from pp. 7-10 of chapter 4 of Engelmann's book.

To the unsuspecting reader this article in the Cincinnati Recorder starts off innocently enough:

Elementary students in the Kings Local School District won't be learning arithmetic next year - they'll be learning mathematics.

According to Angie Thompson, Kings elementary curriculum specialist, there is a difference - and it can make a child more successful in math.

Can you hear the local tutors licking their chops? I swear they are behind all of this. Who else benefits as much?

Pity the elementary students of the Kings Local School District in Cincinnati--they're in store for a bumpy ride the next few years.

It's bad enough that they're going to lose arithmetic. But they're also going to be getting "mathematics," and you know what that's going to be:

The district is getting rid of arithmetic, which consists of memorization of math concepts and procedures for solving problems, in the elementary grades.

The replacement is mathematics, a more unprocedural, inquiry-based approach to math.

First they masterfully redefine "mathematics" but then they follow it up with a brutally honest description of the snake oil--"a more unprocedural, inquiry-based approach." New math now with 60% more poison.

Students will be asked to develop a deep understanding of number concepts and how numbers relate, so they can better understand how to solve problems, why they're solving the problem and be able to find their own method to reach the solution, Thompson said.

More honesty. Students aren't going to "develop a deep understanding of [math]." We know that's impossible. They are only going to be "asked to develop a deep understanding." Many will choose not to answer. We will call these kids unengaged slackers and label them "learning disabled."

"The teachers' roles are no longer showing the procedure and having them practice over and over," she said.

"Teachers are being taught to teach in a way that's different from the way they were taught. The way they were taught was very fragmented. Nothing ever connected.

If they think that topics in the traditional curriculum didn't connect and were fragmented, wait until they get a load of the amazingly incoherent and illogical structure of the wondrous inquiry-based math curriculum that'll soon be foisted upon them. Measuring shoe sizes one day, fun with tally marks the next, followed by each student deriving the commutative law from a pile of beans and twigs the day after.

After building a solid foundation of number concepts, how numbers relate, place value and other number concepts, students will then use that knowledge to solve problems their own way and will be able to communicate their answers, Thompson said.

"As long as they can understand how they're solving it and can communicate how they're solving it, it's fine... This is what research is telling us we need to do."

Is it the research telling you to do it or the voices inside your head?

So far the first four chapters have been a big set-up for chapter five. Zig describes chapter five as:

It is a very short chapter, but it will blow you away. It is the keystone of the book because it provides incontrovertible evidence that:

(a) there was an extensive plot to suppress the findings of the evaluation; (b) the plot involved highly respected academicians in education; and (c) the motivation for suppressing the truth was simply that DI made a mockery of what “experts” predicted would happen.

Note that the details of the plot have never appeared in print before.

If you are battling naysayers who contend that the Follow Through evaluation showed that no model won the competition, or that DI is not as effective as you claim it is, have them respond to this chapter. I would love to hear anybody try to argue either position after reading the chapter.

Note that the reason I didn’t just write the chapter as an article is that I wanted the reader to share some of our experiences in working nine years on this project before the evaluation came out.

Yes, chapter 5 will piss you off because it documents that the decision makers didn’t consider kids anywhere near as important as political correctness.

Chapter four details Zig's work with deaf, autistic, retarded, and brain-injured students along with a description of how the remedial DI program Corrective Reading was developed during the Follow Through years.

There is a misprint towards the end (p. 71) of chapter four in the section on reading comprehension. I think the entire section is a valuable read, so I'll reproduce the corrected section:

Initially, we assessed the comprehension of at-risk high school students in four cities, with a 100-item test that asked open-ended questions like, “In what year did Columbus discover America?” and “How many days are in a year?”

The results were frightening. Fewer than 25 percent of the students correctly answered the question about Columbus. (Several responded, “1942.”) About the same percentage didn’t know the number of days in a year. Their answers ranged from 360 to 12.

The ideal goal for these students is to provide them with enough information to permit them to pass courses. This is an ambitious goal, because students lack so much information that it would be impossible within one or two years of densely packed instruction to fill all the gaps that have developed over more than ten years.

Following the first tryout of the initial level of the corrective comprehension program in Springfield High School, the director of the project called me and indicated that the program was a failure. Why? Because students did not comprehend the history textbook scheduled for 10th graders.

I tried to explain that there were lots of concepts and knowledge of language that these students lacked and that we focused on central skills they needed. He didn’t seem convinced, so I arranged to meet with him and the students. I indicated that I would give him concrete examples that showed the extent of their lack of comprehension.

When we convened, I directed the 20 students in the class to open their history book (United States History for High Schools) to the first chapter, “A New World and a New Nation.”

I read the first sentence of the text and then asked them questions about it. The sentence:

Today, few Americans think of their country as having been a part of a British Colonial empire, but America’s colonial history lasted over 150 years, and Britain’s influence upon America was fundamental.

I asked them about the sentence, a part at a time. Over half the students missed each of the following questions:

The sentence starts with "Today." Does that mean this day or something that would be true today, tomorrow, and yesterday? The consensus: only today.

The sentence refers to few Americans. Is that a small number or quite a few? Quite a few.

The sentence refers to their country. What’s the name of their country? Britain.

The sentence says their country was part of a British Colonial Empire. Which country was part? Britain.

The sentence says that few Americans think of their country. Whose country was that? Britain. Was a British Colonial empire something owned by England or America? America. (Most students didn’t know that Britain was England and that colonial referred to colonies, or exactly what colonies were.)

After asking about several other phrases, I indicated that the last part of the sentence says that Britain's influence upon America was fundamental.

Comprehension is billed typically as reading comprehension, but it has very little to do with reading. Students don’t understand a host of concepts and relationships involving any academic pursuit. It’s not that they can’t extract them from what they read. They can’t extract them from what they hear.

Over the years, we extended the corrective reading programs so there are three levels of decoding programs, and three parallel levels of comprehension programs. The lowest level of the comprehension strand teaches skills and information taught in the second and third level of the language program that we use with children in grades 1 and 2.

An amazing phenomenon is that a lot of high school students who are considered pretty good students place in the highest level of the decoding sequence and the lowest level of the comprehension sequence.

Reading comprehension requires two important things: being able to decode the written text and matching up the decoded text to words and concepts in the student's oral vocabulary so the student comprehends what has just been read. In many classrooms today, decoding is taught in a haphazard fashion, with phonics instruction thrown in on an ad-hoc or "as needed" basis, and the teaching of vocabulary and underlying concepts takes a back seat to "comprehension strategies."

And, we're surprised when so many kids can barely comprehend not only what they've read but what they've been told, as in what they've been taught.

February 16, 2007

Supposedly the goal of education is higher order thinking which can be defined as:

A complex level of thinking that entails analyzing and classifying or organizing perceived qualities or relationships, meaningfully combining concepts and principles verbally or in the production of art works or performances, and then synthesizing ideas into supportable, encompassing thoughts or generalizations that hold true for many situations

It's commonly thought that these higher order thinking skills can be taught directly apart from the relevant domain knowledge (i.e., a narrow portion of knowledge that deals with the specific topic of interest). The thought is that you don't need to learn (i.e., memorize) all those messy facts because you can use your fancy higher order thinking skills to figure out whatever you need to know. Thus, instructional time is concentrated on higher order thinking skills and the learning of facts is downplayed.

Let's put that theory to the test.

No doubt if you enjoy reading (or at least take the time to read) an obscure education blog you went to college, are highly educated, and are smarter than the average bear. In other words, you have higher order thinking skills in spades. Let's test how well you can use them.

Consider the following:

You have two identical glasses, both filled to exactly the same level. One contains red dye, the other water. You take exactly one spoonful of red dye and put it in the water glass. Then you take one spoonful of the mixture from the water glass and return it to the red dye glass.

Question: Is there more red dye in the water glass than water in the red dye glass? Or is there more water in the red dye glass than red dye in the water glass? In other words, the percentage of foreign matter in each glass has changed. Has the percentage changed more in one of the glasses, or is the percentage change the same for both glasses?

Use your superior higher order thinking skills and intuit an answer. First try to do it without resorting to outside sources. Then try answering it using whatever reference source is handy, such as google.

NB: This only works if you don't know the scientific principle involved. If you happen to know the right scientific principle, you're relying on your domain knowledge to answer the question, not your higher order thinking skills. Also, no fair if you know the source of this problem.

I'll let you stew on it for a while.

Partial Update: Hint--Instead of water and red dye, think of red balls and white balls. Assume that each glass starts out with 100 balls of a single color. Now remove a number of red balls from the red-ball glass and put them in the white-ball glass. Then return the same number of balls from the glass with the “mixture” and put them in the red-ball glass. Do this with different numbers of red and white balls.
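The hint can also be turned into a little simulation. This is my own sketch, not Engelmann's: it scoops balls back at random and checks that the two counts of "foreign" balls always come out equal, no matter how the returned scoop happens to fall.

```python
import random

def swap(n_balls=100, scoop=10, seed=0):
    """Scoop `scoop` red balls into the white-ball glass, stir, then
    scoop the same number of randomly chosen balls back.
    Returns (red balls in white glass, white balls in red glass)."""
    rng = random.Random(seed)
    red_glass = ['red'] * n_balls
    white_glass = ['white'] * n_balls
    # First transfer: red balls into the white glass.
    white_glass += [red_glass.pop() for _ in range(scoop)]
    # Second transfer: a random scoop of the mixture goes back.
    rng.shuffle(white_glass)
    red_glass += [white_glass.pop() for _ in range(scoop)]
    return white_glass.count('red'), red_glass.count('white')

# Every red ball missing from the red glass must be sitting in the
# white glass, and vice versa -- so the two counts are always equal.
for seed in range(20):
    foreign_in_white, foreign_in_red = swap(seed=seed)
    assert foreign_in_white == foreign_in_red
```

Both glasses end with 100 balls, so every red ball displaced from one glass is matched by a white ball displaced into the other. That is the conservation-of-number argument in miniature.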

February 15, 2007

Rightwingprof has a great post up on the problems he sees some of his new undergraduate students struggle with due to their inadequate K-12 math educations. For all the talk we hear from educrats about teaching math with understanding, it's clear that just as many students come to college without the kind of understanding needed to solve college-level applied math problems as did under the old "rote," "algorithm-based" math curriculum. Further complicating the problem, today's students also seem to be lacking in the calculation department, what with the reduced emphasis on "drill and kill."

Edspresso reports that the NEA doesn't like its own constituents. It's trying to nix a spending bill proposed by Senator Lamar Alexander that requests additional funds for merit pay.

Apparently, the NEA would rather protect bad apples than see specialists and exceptional teachers receive pay raises. Here's the comical video of the letter the NEA sent to the Senator requesting that he not vote for his own bill.

At least half of the students got four of the 11 questions wrong. Students did not receive a grade.

Remediation becomes necessary:

Each of Dublin’s four middle schools is implementing an emergency plan to tutor students this school year. Some will take up to 20 minutes of class time to review basic computation. Others will give algebra students extra homework in division.

Parents hire math tutors:

Some Dublin parents said they have spent hundreds on tutoring services.

Local Kumon centers see enrollment increases:

Abha Jindal, director of Kumon tutoring center on Riverside Drive, said more Dublin children have been coming for help in the past couple of years.

"They don’t know how to multiply or divide," she said.

Administrators start making excuses:

George Viebranz, executive director of the Ohio Mathematics and Science Coalition, said it takes time for the benefits of reform math to materialize.

Decades, apparently.

Blame somebody, like the hapless teachers:

"The challenge we face is the teachers, who are key to the success of the program, are products of the system we are trying to change," he said. "There’s a very strong element of professional development that would have to occur over a number of years."

Should have thought of that before implementing the new program.

Curriculum developer tacitly admits it was wrong:

Ken Mayer, a spokesman for the developer of Investigations, said the second edition of the program puts more emphasis on teaching standard algorithms, the traditional ways of solving math problems. That edition has just been released.

And, resorts to spouting cliches:

He said Investigations improves on the traditional "drill and kill" model, because that method leaves students ill equipped for real-world situations.

"A mechanical procedure can be obscure when they have to apply their knowledge to more and more difficult problems," he said. "(Knowing) when they should be adding, multiplying, that’s when they get in trouble."

A mechanical procedure can also be obscure when it has only been superficially taught. And, students being able to "apply their knowledge to more and more difficult problems" invariably requires the application of some procedure, preferably one that has been taught to mastery.

The article alleges that more recent cohorts score better on achievement tests in the lower grades. I'd like to see those tests.

Gather 'round, kiddies, the New York Times knows how to fix education:

The No Child Left Behind Act of 2002 — which requires states to close the achievement gap between rich and poor students in exchange for more federal dollars — is the most far-reaching educational reform since the country embraced compulsory education in the early 20th century.

But it is unlikely to succeed unless Congress strengthens the law and puts a lot more money behind it when it moves to reauthorize No Child Left Behind later this year.

Gee, that's a shocker. All we need is to throw more money at the problem. Maybe we should just raise the funding level of all Title 1 schools to the Washington, D.C. level, $25,000 per student. Just look at the smashing results they're getting.

First of all, NCLB did not require states to close the achievement gap between rich and poor students "in exchange for more federal dollars." The deal under NCLB was that states would get to keep their existing ESEA funding only if they stopped wasting it and started using the money more effectively to "close the achievement gap." This is the reason why the Feds have a role in education in the first place. NCLB merely provided additional funding to effect the new accountability measures set forth in NCLB.

So do schools need more money to close the achievement gap? There certainly isn't any hard evidence to suggest that this is true, and there are a few schools that achieve acceptable performance at existing funding levels, some at far below the typical funding level in most major cities.

The big news, however, is that the Times seems to be strongly behind NCLB:

This report reflects the growing and welcome consensus that No Child Left Behind, and the quest to improve public schooling for all children, are here to stay.

I read that as strong support for accountability measures, leaving the no-accountability nutter crowd increasingly marginalized. That's a good thing.

February 12, 2007

To close the achievement gap, we need to raise the performance of the lower performing groups. (Another way is to drag down the higher performing groups. But that's just silly.)

There exist two schools of thought.

The first school believes that the achievement gap is caused by factors external to schools, such as poverty, racism, discrimination, and the like. The theory goes that if you eliminate or ameliorate these factors, you can raise the performance of the low performing groups up to the level of the higher performing groups. Sounds good in theory, kinda like communism sounded good in theory. In practice, however, results have been elusive and by elusive I mean non-existent. Unfortunately, no amount of wishful thinking in editorial pages seems to be able to get this theory past the rainbows and magic stage of development. Needless to say, this is the dominant view of most educators.

The other school of thought is that the poor performance of the lower performing groups is the result of inferior instruction. Improve the instruction and performance will also improve. One problem with this school of thought is that improving instruction has also been an elusive task. Most efforts to improve the performance of low performers via improved instruction have failed miserably. You can count the number of successful programs that achieve consistent, reliable, and educationally significant results on the fingers of one hand. You have DI, SfA, and a few others. The "problem" with this improved instruction approach is that improved instruction won't just raise the performance of low performers; it will raise performance across the board, so the achievement gap isn't going to be reduced in real terms any time soon. But don't take my word for it; take Zig Engelmann's, the creator of one of the few instructional programs that has gotten positive results:

Following the remodeling [in 1972], we opened a learning center, which was designed to serve hard-to-teach children and school failures. One of the earliest groups, however, was not low, but was composed of six preschoolers whose parents were professionals or professors. One student was Wes' youngest son. We worked with these children as four-year-olds and five-year-olds. They went through the reading and math programs as fast as they could go at mastery, which was frighteningly fast. (There was no need for the language programs because these children were very bright.) Even though they worked for only a little more than an hour a day, they went through all four levels of the programs we had for Follow Through classes. Before they entered first grade, they performed on the fourth-grade level in both subjects. And they loved school.

We never published anything about the performance of these children, largely because the group had only six children, which meant that experimental purists would "question" the results. (Usually, at least 15 experimental children are needed for establishing outcomes that are recognized as valid.)

Although these children were awesome, their performance showed a critical difference between their potential and that of the at-risk child. When these advantaged children came to the third and fourth levels of the reading program, where the material becomes entrenched in and decorated with sophisticated language, they did not slow down. The profile for the at-risk child is different. Performance slows considerably when they reach the vocabulary-rich transition. They have parallel problems with math when the word problems become more substantive than a few pared-down sentences that present necessary information in a "familiar" format.

... [W]e don't have to worry as much about the performance of higher performers. They will tend to learn from teaching that is hideous, as many programs for the talented and gifted demonstrate. These programs provide instruction that appears to be purposely designed to teach, explain, and develop skills in the most circuitous and confusing manner possible. Certainly it's cruel to subject students of any skill level to such instruction, but in the larger scheme, far less cruel than subjecting at-risk children to certain failure. Although it would have been possible for us to work with both populations, we reconfirmed the decision not to work with higher performers but rather to show the degree to which at-risk children would catch up to higher performers with careful instruction. We figured that teaching higher performers effectively is so easy that in time, those who educate them would learn how to do it effectively. It certainly hasn't happened yet. But we felt that we needed to work with the lower performers simply because it is not easy and teachers don't know how to do it. In fact, we believed that if we didn't do it, it wouldn't happen because nobody in or out of Follow Through (with the exception of the University of Kansas) was close.

(p. 2-4)

This finding was later confirmed in a subsequent study:

To show the degree of acceleration that was possible with students at or above average, Doug Carnine and I did a formal study, which appeared in our 1978 Technical Report to National Follow Through. Thirty children in a Springfield, Oregon elementary school went through our reading program at an accelerated rate. These were not children with extremely high IQs, but all but two met the district’s criteria for "higher performer," which was that they performed at or above the district average when they began the first grade. (The district had no kindergarten.) The two exceptions were low performers who were added because the teacher felt they could benefit from the program. During grades 1 and 2, the children had daily reading lessons of about one-half hour per day and devoted another half-hour to independent work. The teaching was conducted by trainees in our practica. The classroom teacher was a star, but she did very little of the teaching. She made sure, however, that the trainees performed very well.

At the end of their second grade, children read at the middle fourth grade level according to the Stanford Achievement Test. They performed on the fifth-grade level of an oral reading test. The top ten children received a fourth-grade test that measured speed and accuracy. (We could find no test for the second or third grade.) Students performed on the seventh-grade level.

The children were not taught science as a subject, but level 3 of our reading program has stories that are heavy in science content. The class performed at the fourth-grade level in science.

(pp. 6-7)

These results shouldn't be surprising to anyone. If the quality of instruction is held constant, smart kids will always learn more and faster than their dull peers. And yet, you can't go a day without reading how the goal of education is to eliminate the achievement gap. The allure of rainbow and lollipop solutions is strong. But, based on the evidence at hand, it's not going to happen. So, get over it.

February 9, 2007

I contend one of the reasons it's so hard to discover the right way to teach is the large cycle times. A cycle is the raw clock time between full trials of teaching someone who doesn't know something to mastery. When I write a computer program I chop the problem into pieces so that my cycle times are a matter of minutes. I make small tests for subtasks that I link together in a full cycle and re-run at whatever interval is convenient - sometimes several trials a minute. In between I tweak my program to pass all the tests. Full cycle times on the larger tasks rarely last over a day and if they do it's because there are competing projects that must be handled.

Consider how long it would take to teach a kid clock addition. You know the rules:

When the seconds hit 60, roll the seconds back to zero and add one to the minutes.

When the minutes hit 60, roll the minutes back to zero and add one to the hours.

When the hours hit 12, alternate a.m. with p.m. (or vice versa); when they hit 13, roll the hours back to one; and add one to the day if you've alternated from p.m. to a.m.

When the days hit the current month limit...

It would take weeks to teach a kid who doesn't know this and have him performing to mastery--if he truly didn't know it. Even doing it Zig's way, it would take a long time. Throw in some hefty egos and unclear rules about what needs to be done, and the word 'never' comes to mind.

It would take me less than a day to write a clock addition program, probably less than half a day, maybe an hour.
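For what it's worth, the rollover rules sketch out like this in code (a toy of my own, adding one second at a time, with days and months left out):

```python
def add_seconds(hours, minutes, seconds, meridiem, delta):
    """Add `delta` seconds to a 12-hour clock time, applying the
    rollover rules one unit at a time. Days and months are ignored."""
    for _ in range(delta):
        seconds += 1
        if seconds == 60:        # seconds roll over into minutes
            seconds = 0
            minutes += 1
        if minutes == 60:        # minutes roll over into hours
            minutes = 0
            hours += 1
        if hours == 12 and minutes == 0 and seconds == 0:
            # 11:59:59 a.m. -> 12:00:00 p.m. (and vice versa)
            meridiem = 'p.m.' if meridiem == 'a.m.' else 'a.m.'
        if hours == 13:          # 12:59:59 -> 1:00:00, same meridiem
            hours = 1
    return hours, minutes, seconds, meridiem

print(add_seconds(11, 59, 59, 'a.m.', 1))   # (12, 0, 0, 'p.m.')
```

Half an hour of work, and the computer performs to mastery forever. Teaching a kid the same rules to the same standard is a different animal entirely, which is the point.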

And I get paid more than teachers? Teaching is much harder than what I do. I take my hat off to the really good teachers out there. You deserve more than you get.

W. Stephen Wilson teaches mathematics at Mayor Bloomberg's alma mater, Johns Hopkins University. Last fall he conducted an experiment on the students in his Calculus I course.

Professor Wilson administered the same final exam to last fall's students that he used for the same course in the fall of 1989. He chose that year because he was able to obtain data for both his exam and the SAT math scores for both cohorts of students.

The surprise: the 1989 students did much better than their 2006 counterparts.

We need more university professors speaking out on the decline of skills of their incoming students. Maybe then more high school teachers will get into the act as well. Then we can focus on the real rotten apples of education: the elementary and middle schools.

February 8, 2007

Proclaims Jonathan Alter in a seriously misguided and superficial Newsweek article, "Stop Pandering on Education." I must have gotten spoiled reading Engelmann's latest book on what's wrong with education, because Alter's article pales in comparison. Right off the bat, Alter explicitly tells us that he doesn't have an ounce of brains.

The crazy thing about the education debate in the United States is that anyone with an ounce of brains knows what must be done. Each political party is about half right. Republicans are right about the need for strict performance standards and wrong in believing that enduring change is possible without lots more money from Washington. Democrats are right about the need to pay teachers more but wrong to kiss up to teachers unions bent on preventing accountability.

It doesn't work that way. At all. None of these bromides is accurate. And, the breakdown by party affiliation is wrong to boot.

Yes, we need "strict performance standards" to measure school performance, but performance standards aren't going to force schools to do anything they don't want to. Schools don't want to change and if none of them change then they can all fail together. That effectively throws a wrench into the works. It's the ultimate loophole and schools know it well by now. This is the technique they're using to escape the provisions of NCLB. Put up a big stink, effect superficial changes, and tough it out until everyone fails. Then put up an even bigger stink until the political fallout becomes untenable.

Schools certainly don't need "lots more money from Washington." They have more than enough money already, especially big city school districts, which are flush with five-digit per-pupil budgets. Alter is, of course, a Democrat partisan masquerading as a neutral journalist, and throwing more money at problems is the Democrat answer to everything. Too bad this gambit doesn't solve problems like it's supposed to, since we already throw enough money at our problems to have solved all of them. Clearly we haven't--which to Democrats merely means we aren't spending enough. A nice little positive feedback system.

Teachers are paid handsomely already so there is no need to pay them more, though I'm sure you won't find teachers complaining. Teachers aren't saints because they teach children; they are merely workers providing a service like the rest of us. They are paid in line with the compensation paid to other professionals, and there is no indication that we need to attract a better class of teachers to improve instruction. Our current crop of teachers is up to the task once someone trains them how to teach, something their ed schools failed to do.

And while I'm with Alter on the need for Democrats to stop "kiss[ing] up to teachers unions bent on preventing accountability," we're past that stage now. We have accountability measures in place and there's lots of parental and taxpayer support behind such measures. Union agitation notwithstanding, accountability is here to stay. Too much money is being sucked out of taxpayer pockets nowadays for people not to care if that money is being spent wisely. And, the day taxpayers give up on accountability, you can kiss government-run public education goodbye.

[New York Governor Eliot] Spitzer seems game to fight his own party's instinct to pander. "The national Democratic Party has got to understand that real education reform is a central issue both politically and for our economic future," he told me last week. "We have to get our arms around the idea that if there's no performance, you must remove those responsible for the failure." It's a sad commentary on Democrats that they've allowed "educational accountability" to become a winning issue for the GOP.

But that failure is a failure of leadership and it is a total failure. Even if we fired all the underperforming principals and superintendents, who are we going to replace them with? The next batch of clueless educators waiting in the wings? Here's a good example of what I'm talking about:

In New York City—home to 1,400 schools, 80,000 teachers and 1.1 million students—Republican Mayor Michael Bloomberg (a huge improvement over his predecessor, Rudy Giuliani) is showing what accountability means. First, he won mayoral control of the school system, a prerequisite for getting anything done in a big city. Now his tough-minded schools chancellor, Joel Klein (a Democrat), is moving forward on an important new plan to slash administrative layers and empower individual schools. The idea is to make each principal "the CEO of the school instead of an agent of the bureaucracy," Klein says. More than 300 New York principals are signing performance contracts that give them more control in exchange for being accountable. Klein means business: "If your school gets a D or an F, I'm gonna fire your ass."

Bloomberg brought in Klein, a business guy, to fix education in NYC. Klein, being a business guy, didn't know the first thing about fixing education and did what business guys do: hire experts. But there are precious few experts in education who know what they're talking about, and Klein, being a business guy, didn't realize that he had hired a bunch of witch doctors as his experts. So now Klein is trying to transform the NYC school system into a well-oiled business which just happens to be selling a failed product (constructivist education) recommended by his "experts." Klein hasn't yet realized this, demonstrating that he really wasn't the business guru Bloomberg thought he was.

Alter concludes his article with an attack on teachers, who he thinks are the cause of our education woes:

A big accountability problem nationwide is teacher tenure, which is almost automatically awarded whether a teacher is good or not. If he's not, he gets to commit educational malpractice for the next 40 years...

It's time to move from identifying failing schools to identifying failing teachers. That sounds obvious, but until now it hasn't happened in American education. "We need a management tool that can show whether Ms. Jones can teach long division," says Margaret Spellings, Bush's sensible secretary of Education. Too many educators are still caught in what Klein calls a "culture of excuses." The excuse du jour is that NCLB is "punitive."

There are some bad-apple teachers who shouldn't be teaching anything to anyone, ever. Every profession or occupation has a few workers who aren't suited to the job. So let's fire them. Now we're left with the competent ones. Almost all of these teachers are capable of teaching children and, at today's salaries, are more than willing to do so. The only problem is that most of them don't know how to teach competently. Their schools of education didn't prepare them, and they haven't received any effective in-service training. So why point fingers at them? The failure is a failure of management. Teachers will teach what they are told to teach, or what they are permitted to teach. The problem is that they aren't being told the right things to teach or the right way to teach them. And they certainly haven't been trained to teach properly. They do it the way it's always been done (at least superficially), and it just so happens that that way doesn't work well at all.

February 5, 2007

OMB has rated the (scandalous) Reading First program as "effective," the highest rating.

The Reading First program has a strong program design and good management practices, and appears to bring about improvements in students' reading abilities. A small Department of Education staff leverages its efforts through technical assistance contracts and the efforts of States.

The program has shown improvements in the reading ability of various subgroups of students and has met performance targets; however, some groups are not improving as quickly as others.

A major evaluation of the implementation of the Reading First program has been completed and yielded positive results. A large-scale scientifically based examination is scheduled to generate an interim report in 2007 and a final report in 2008.

Only four programs in the US Department of Education were ranked "effective." Reading First was the only program in NCLB to be rated effective. In the federal government only 17% of programs are rated effective. Your tax dollars hard at work.

Chapter Three of Zig's book has been posted. This chapter focuses on how Project Follow Through was implemented at each location. Judging by the resistance it faced, it's a miracle that the project performed as well as it did.

The senior reading teacher and guru in one of our schools instigated an argument with me about reading—what it was, and how best to teach it. In the best cocktail-party style, we were polite, and the small group surrounding us was intent. The teacher’s premise was that the creativeness of teachers should not be trammeled by a lockstep program, like DI. She was well read, and quoted the literature with flourish. After the discussion went on for possibly ten minutes, one of our first-year teachers from the same school interrupted and ended the argument.

She said, “Angie, you know more about reading than I’ll ever know. You know linguistics, and all those theories I don’t understand. All I know how to do is follow the program. I do what it tells me to do in black type, and I say what it tells me to say in red type. But Angie, my kids read better than your kids, and you know it.”

About D-Ed Reckoning

The primary problem with K-12 education today is the problem of dead reckoning: an estimate based on little or no information. We don't know what a good K-12 education system is because we've never seen one in operation. A good education system is one that is capable of educating almost every child.