Despite Doubts, NAEP Panel Meets To Set First National Standards for Achievement

ESSEX JUNCTION, VT--After expressing misgivings about the process, a
group of educators, business leaders, and public officials met here
last month to set what could become the first national standards for
student achievement.

Taking what some officials called a historic step, the 71 judges
analyzed each of the questions on the 1990 National Assessment of
Educational Progress test in mathematics, and made judgments about
whether students at "basic," "proficient," and "advanced" levels of
achievement should be able to answer them correctly.

Their decisions will be compiled into a report, expected to be
released this month, outlining how students at each of the three
achievement levels in grades 4, 8, and 12 should perform on the
assessment.

If their proposals are adopted by the National Assessment Governing
Board, next spring that panel will report the results of the math
assessment--the first to include state-by-state data--by comparing how
students actually measured up to the standards. In the past, NAEP has
simply described how students performed on the assessments.

Some participants in the process, however, questioned whether the
exercise would produce improvements in education. Noting that the
procedure they used is normally employed by small groups setting a
single standard, they said it remained to be seen whether a group as
diverse as theirs could produce valid standards for three levels of
achievement in three grades.

In addition, some panelists also argued that the standards might do
little to spur policymakers to reform schools, and might in fact impede
efforts to upgrade curricula.

"If the test items were really reflecting the kind of curriculum we
have in mind, the process would probably be useful," said Thomas A.
Romberg, professor of curriculum and instruction at the University of
Wisconsin at Madison. "The difficulty is, because these are
short-answer questions that students are expected to answer in less
than a minute, on average, it may just add to the problem of getting
kids to work on more complex tasks."

To encourage students to meet the standards, he added, "teachers may
spend more time on drill and practice on trivial stuff."

Michael Glode, a member of the NAEP governing board who attended the
meeting here, said the board would consider the panelists' views in
deciding whether to adopt the standards, as well as those from a team
of researchers hired to evaluate the project. The board has also
tentatively scheduled a public hearing in Washington on the issue Oct.
30, he said.

"We're not going to stuff this down the American people or the
Congress," Mr. Glode said, adding: "I think it's going to work."

Created in 1969, NAEP is a Congressionally mandated project that
tests a national sample of students in reading, writing, mathematics,
science, and other subjects. It is currently operated by the
Educational Testing Service under contract to the U.S. Education
Department.

This year, NAEP for the first time conducted a state-level
assessment in 8th-grade math in 37 states and the District of Columbia.
These results, expected to be released next June, will provide the
first state-by-state comparisons of student achievement.

The plan to set achievement levels, adopted by the National
Assessment Governing Board in May, was aimed at providing a "new way of
looking" at the test results, according to Roy E. Truby, the board's
executive director. (See Education Week, May 23, 1990.)

"NAEP has described what American children know and don't know," he
said. "It has never said whether they know enough."

In addition, he said, the standards could also serve as a framework
that would enable President Bush and the nation's governors to refine
their goals for improving student performance.

"The White House and governors could say, 'By the year 2000, X
percent of students should be at the proficient or advanced level,'"
Mr. Truby said.

To set the achievement levels, the governing board selected a group
that would represent diverse viewpoints on what students should know
and be able to do in mathematics, according to Mr. Truby.

"If the board would do it, with its 23 members in an ivory tower,
that would not be acceptable," he said. "At the same time, we can't get
1,000 people together."

The group that met here included classroom teachers, many of whom
were Presidential scholars, district- and state-level math and testing
specialists, and mathematicians and math educators. In addition, the
panel included representatives from major corporations, a U.S. Army
recruiter, state and local school-board members, an aide to Gov.
Carroll Campbell of South Carolina, and a White House staff member.

The panelists were assigned to examine test questions for a
particular grade level, 4, 8, or 12, and asked to define "basic,"
"proficient," and "advanced" levels of performance. They then were
asked to determine the probability that students at each achievement
level should be able to answer the questions correctly.

In making such judgments, Ronald K. Hambleton, a consultant on the
project, told panel members to consider whether the skill measured by
the test question was important, as well as the degree of difficulty of
the question.

The procedure is similar to one states use in setting standards for
their tests, noted Robert E. Gabrys, chief of program assessment,
evaluation, and instructional support for the Maryland Department of
Education. But, he said, "it's a lot easier to do for a state than for
the nation."

"In a state, you know everybody," Mr. Gabrys said. "The people who
made up the [test] objectives and items are involved in
standard-setting. If you pull 50 states together, that makes it hard to
set a single standard."

Mr. Hambleton, a professor of education at the University of
Massachusetts at Amherst, added that, unlike other standard-setting
efforts, the NAEP panelists were asked to set standards for three
levels of performance, rather than a single standard separating passing
from failing.

"This process is unique," he acknowledged. "But it's an easy
extension. I don't think it in itself is a real hassle."

But Mr. Romberg of the University of Wisconsin said it was difficult
to draw a distinction between proficient and advanced performance. Out
of 191 8th-grade items he analyzed, he concluded that fewer than a
dozen were so challenging that only advanced students could be expected
to answer them.

After making judgments about the test questions, the panelists also
had the opportunity to see how students actually performed on the
assessment, which was administered earlier this year. The judges could
then adjust their ratings if they considered their standards
unrealistic.

Mr. Truby said the governing board had a "knock-down, drag-out
debate" over whether to allow the panelists to see the results; in the
end, he said, they agreed to make them available.

"It was hard to make a case for ignorance," he said.

Ann P. Kahn, former president of the National PTA, said she lowered
some standards after seeing that students performed poorly on questions
she thought those at the basic level should be able to answer.

"I want kids to stretch," she said, "but I was putting them on a
rack."

But Yolanda Rodriguez, a teacher at the Martin Luther King Jr.
Middle School in Cambridge, Mass., said she occasionally set higher
standards after seeing that students were unable to answer questions
she thought they ought to do well on.

"It's unconscionable that kids are doing that poorly," she said.
"The system has failed. Something has to propel the curriculum."

Judith D. Thayer, chairman of the New Hampshire Board of Education,
agreed that the results of the standards-setting exercise were likely
to show large gaps between student performance and expectations for
what they should know and be able to do. She predicted that the
publication of the results next spring would spur the public to support
efforts to improve schools.

"Once the American public is given information about what students
do know," she said, "more and more parents and citizens will be
involved in improving education."

But Richard M. Jaeger, one of the researchers commissioned to
evaluate the project, was less optimistic. The results will most likely
be used as "political fodder for those who would bash schools," he
said.

"Educators will resist paying attention to them," said Mr. Jaeger, a
professor of education at the University of North Carolina at
Greensboro. "If they do pay attention to them, they will find a million
reasons why they don't apply to them."
