Commentary of the Day - June 10, 2012:
The Irrelevance of Data? Guest commentary
by Edward Carney.

There is a
familiar saying that goes,
"those who can, do; those who
can't, teach." I've never
agreed with that cynical axiom,
which seems like an unfair bit
of sniping against a truly noble
profession. But I do largely
agree with the less familiar
addendum to the saying, which
claims that "those who can't
teach, teach teachers."

It is
commonly said that there is a
crisis in American education.
No doubt it is that perception
which underpins the existence of
this very site. For some, that
perception is enough to justify
an assumption of incompetence in
the average teacher. But I
would venture to guess that the
greater share of failure
generally ascribed to teachers
is, in fact, the failure of
those structures tasked with
making our teachers better.

In my young
freelance career, I have had the
opportunity to write and edit
various columns and academic
works in the areas of education
and educational policy. I also
have the pleasure of counting as
my best friend a teacher whose
several years of experience
already range from primary
school to college instruction.
Several other people with whom I
am at least incidentally
acquainted are teachers or
aspire to be. On the basis of
all of this, one observation
continues to strike me above all
others: that there is an absurd
disconnect between the realm of
education policy and the realm
in which actual classroom
teaching goes on.

Though my
observations on the issue are various,
they are also less than
comprehensive, so I can hardly
begin to imagine how many
resources are wasted every year
on the gathering and analysis of
trends that have shockingly
little to do with educational
outcomes or the performance of
teachers on the ground. In my
experience surveying
graduate-level research, I see
work investigating methods for
how to improve students'
self-esteem, how to increase
engagement by the surrounding
community, what government
profiles are conducive to
educational reform, how
consultants can foster change in
the organizational structure of
particular schools, and so on.
What I see precious little of
are studies indicating how
educators can teach better or
how students can learn more.

To give one
particularly egregious example,
I once encountered a published
study on the topic of city
governance models and their
effect on public education. The
paper stated plainly in its
executive summary that the
research provided no evidence
that governance structures had
any influence whatsoever on
student achievement. It said
this upfront, and then the paper
went on for another 130 pages.
To my mind, if your area of
study is educational policy and
you strike on a particular
subject that has no influence on
educational outcomes, then that
is all you need to say about the
subject.

Naturally,
though, over the course of more
than 130 pages, the governance
paper went to great lengths to
argue for its own relevance. It
explained how efficient
governance structures could
increase public commitment to
education, increase funding, and
increase stability. Frankly,
though, I don't think that a
stable, well-funded public
school with poor student
performance is any better than a
chaotic, de-funded school with
equivalent performance. In
fact, I would hope that a school
with poor performance would lack
internal stability as it
struggles to restructure,
retrain, and experiment in order
to find alternatives that work.

No topic of
study within the areas of policy
and consulting should have any
bearing on classroom structures
or government mandates if it
isn't connected to definite
improvements in teacher
strategies or the academic
performance of individual
students. To the extent that
there can be said to be a need
for sprawling academic
departments in educational
policy and educational
consulting, those are the only
topics that account for that
need.

It is
frightening to think that there
are great masses of research,
ostensibly meant to influence
broad educational policies, that
have little or nothing to do
with improving
student performance. And
governments actually implement
and then reverse the associated
policies as new
research emerges to support
competing speculations about
what second-order changes might
remotely contribute to different
academic outcomes somewhere down
the line.

It is a
discomforting fact of education
that each generation of students
is also a generation of guinea
pigs. Policies change because
we guess that the new ones will
be better for students, but we
don't really know until years
later when a greater or lesser
number of those students have
failed. And even then we don't
know which specific set of
factors to praise or blame, so
we try them in different
combinations.

Upsetting
though that fact is, it is
unavoidable. But so long as
students must be guinea pigs, I
would much prefer that they be
guinea pigs who can be observed
directly, and not analyzed as an
arcane set of numbers and data
points.  Inside a classroom, a
teacher can respond to the
needs of each student day by
day, and if she is effective at
doing so, the
improvements emerge independent
of governance structures and
manipulations of self-esteem
that are unrelated to
performance. Yet policymakers
seem obsessed with finding
explanations that wholly
transcend the space between each
teacher and her students. And
they evidently believe that
every guinea pig can be
perfected at once by only
changing the maze they're made
to run through.

This is all
part and parcel of the severe
addiction
in the United States to
educational data, and it misses
the obvious solution to the
problem: if students are
performing poorly, teach
better. Of course teacher
performance is not the only
factor influencing outcomes.
Classroom culture matters;
parental and community
engagement matters; funding
matters; student background
matters. But none of these
considerations are in themselves
grounds for altering the
policies that actually govern
how teachers teach and how
administrators evaluate them.

The same
addiction to data that promotes
irrelevant policy studies is
also the foundation of an
onerous system of standardized
testing, which insists that
quantitative data alone can
determine the value and
competence of each teacher.
This remote sort of evaluation
recently named
Anderson School teacher Carolyn
Abbott the worst eighth grade
math teacher in New York City
despite the fact that her
seventh graders scored in the 98th
percentile and the eighth grade
tests on which she was being
evaluated consisted of
material that her advanced
students had learned in fifth or
sixth grade.

Context
matters. If policymakers
believe they are able to
influence academic outcomes
across classrooms, districts,
and states by changing anything
other than the level of talent
in their teachers, I expect that
they'll be consistently
frustrated in their efforts,
though not as frustrated as
their teachers are. It's no
wonder that the study of
governance structures' effects
on public schools found no
correlation to improved academic
performance. The space where
government and community meet
the school is not the space
where teaching happens. It
happens in the classroom, and it
is there that change must be
both sought and implemented.  In
the absence of that, all the
rest is
just frills.

The
Irascible Professor comments:
The IP agrees with most, but not
quite all, of what Edward has
said. Certainly, here in
the U.S. we collect enormous
amounts of data related to K-12
education.  And as Edward
notes, a good deal of that data
is seriously flawed. There
is an old saying in the data
analysis game; namely, "garbage
in, garbage out." Poorly
structured studies and
evaluation tools produce
information that basically is
worthless. The Carolyn
Abbott case clearly supports
that notion.
Unfortunately, much of the data
collected by education
researchers is flawed because
they seldom employ rigorous
research models and techniques.
Too often variables are poorly
constrained and inferences are
drawn from populations that are
not representative of all
students or teachers.

But,
Carney's assertion that the
quality of the teacher is the
most important factor in student
learning is supported by many
reasonable studies. So, at
least those data are relevant.
We know that high quality
teaching improves student
performance. One thing
that we need to know is how to
attract better teachers to the
profession. However, it is
unlikely that we will ever be
able to
staff every classroom with a
superbly talented teacher.
So, another very important
question is how we can improve
the performance of the "average"
teacher.  Politicians have
tried to do this by wielding the
stick; namely, using such
measures as calculating the
"value added" to standardized
test results.
Unfortunately, this is a
seriously flawed approach
because while it's possible to
have a standardized test, it is
not possible to have a
standardized student.
While K-12 class sizes are not
as small as we might like them
to be, they certainly are small
enough to have significant
variations in both the average
intelligence and the average
motivation of students from year
to year. Value-added
testing doesn't always account
for these factors sufficiently.
In addition, standardized
testing doesn't necessarily help
teachers to be better at
teaching. More often it
motivates teachers to learn how
to be better at preparing their
students to be better test
takers.