Return-Path: <nifl-assessment@literacy.nifl.gov>
Received: from literacy (localhost [127.0.0.1]) by literacy.nifl.gov (8.10.2/8.10.2) with SMTP id j71IR3G21831; Mon, 1 Aug 2005 14:27:03 -0400 (EDT)
Date: Mon, 1 Aug 2005 14:27:03 -0400 (EDT)
Message-Id: <002801c596c7$e2abb7d0$0202a8c0@frodo>
Errors-To: listowner@literacy.nifl.gov
Reply-To: nifl-assessment@literacy.nifl.gov
Originator: nifl-assessment@literacy.nifl.gov
Sender: nifl-assessment@literacy.nifl.gov
Precedence: bulk
From: "Marie Cora" <marie.cora@hotspurpartners.com>
To: Multiple recipients of list <nifl-assessment@literacy.nifl.gov>
Subject: [NIFL-ASSESSMENT:1187] RE: high-stakes testing, state/federal
X-Listprocessor-Version: 6.0c -- ListProcessor by Anastasios Kotsikonas
X-Mailer: Microsoft Outlook, Build 10.0.2627
Content-Transfer-Encoding: 7bit
Content-Type: text/plain;
Hi Katrina, thanks for your post.
A couple of things to consider: You noted below in your post that
'standardized tests are necessary because of funding'. Actually,
standardized tests are necessary for fairness. The funding part is
quite frankly secondary - although no one would argue with your
frustrations regarding *that use of them*, myself included. I'm just
trying to get you (and all) to see these differences and be careful to
understand how the many pieces of accountability work together, or don't
work together.
And both theoretically and in reality, any type of test (including
surveys, interviews, and portfolios) can be standardized, and in the
best of all worlds, should definitely be standardized. (The challenges
for these latter assessments are steep: costly, time-consuming, huge
amounts of paper/documentation, etc.)
If you check out Phil Cackley's post, he discusses two performance-based
assessments (Best Plus and REEP) that are standardized, that provide
much more usable information for the student and teacher, and that are,
lo and behold, approved for use with the NRS. Not perfect...nothing is
with all this...but moving toward a more effective space.
Perhaps we should shift our questions away from the tests themselves.
Perhaps we should discuss what we want to measure, and then make some
suggestions and have a discussion on how best to capture
what we want to measure. We keep getting stuck in this quagmire of
misinterpretation of terms.
What do others think?
marie cora
Moderator, NIFL Assessment Discussion List, and
Coordinator/Developer LINCS Assessment Special Collection at
http://literacy.kent.edu/Midwest/assessment/
marie.cora@hotspurpartners.com
-----Original Message-----
From: nifl-assessment@nifl.gov [mailto:nifl-assessment@nifl.gov] On
Behalf Of Katrina Hinson
Sent: Monday, August 01, 2005 10:13 AM
To: Multiple recipients of list
Subject: [NIFL-ASSESSMENT:1184] RE: high-stakes testing, state/federal
I've been really quiet on this list for the last several weeks - partly
because we just welcomed a brand new baby to our family - now that I've
caught up on all the collected emails, I think I'll dive into this
discussion. A colleague and I were actually discussing "standardized"
testing issues over coffee this past Saturday as it relates to our own
program.
To answer the questions posed by Howard:
I don't like standardized tests. I never have - even as a student in
school myself. I think they are an excellent gauge of a student's ability
to memorize and regurgitate information, but not necessarily a good gauge
of a student's ability to APPLY the knowledge they have. I also think
one of the fatal flaws with standardized tests is that sometimes
students learn something simply to pass a test but then forget it as
soon as they think they don't need it any longer. Unfortunately, because
of reporting and funding, I think standardized tests, regardless of
which one a state or school uses, have become a necessary evil. I
happen to agree with others who spoke up on the list and said that
they don't really think standardized tests are the best way to go in
terms of assessing students. Like others, my own school does intake
testing before assigning a student to a class. One of the problems I've
found is that some students don't take the test seriously; they get
really low scores, are improperly placed, and then they quit coming
because they get bored. For the record, we use the TABE test.
I've seen students test
who simply opened their test booklet and just bubbled in answers - yet
when doing work in class, it was discovered that they knew far more than
the test showed. Likewise, I've had students test really high when that
was not an accurate indication of what they really knew. I've had
students, especially in the math portion of the test, score at the 11th
and 12th grade level, yet those same students could not work with
complicated fraction problems, had trouble with long division, etc., let
alone do algebra and geometry. The TABE, like any standardized test, is
going to have inherent flaws - because it uses snippets of data to "test"
a student's knowledge base, it doesn't come close to giving a real and
completely accurate picture. On a side note, I also agree with earlier
comments that the TABE is not necessarily an ideal test to "assess" a
student's reading ability. In my experience, as a GED instructor and
even as an AHS instructor, reading
ability is truly only assessed when an instructor spends some quality
one-on-one time with his or her students, gauging everything from fluency
to comprehension. The TABE, CASAS, and even the GED definitely test
comprehension skills but give a weak assessment of the students' fluency
skills. It can be assumed that if the student has trouble comprehending
what they have read, then by default they have trouble with fluency -
but that doesn't begin to tell an instructor just where the
problem might lie. Is it with word recognition, phonetics, rate, etc.?
There are a lot of questions that no standardized test can ever answer
and that the instructor is going to have to "assess" on his or her own.
My experience with CASAS is that it too doesn't give a complete picture,
BUT I do like the fact that it is "Life Skills/Employability Skills"
based. I think it's much easier to explain results to someone in their
50s or 60s in terms of CASAS than it is to give them the TABE and then
tell them that they are at a 4th grade level in a given area. I agree
that such explanations are a bit demeaning to adults who have life
experiences that the TABE does not take into account. There is a huge
difference between the 17-year-old who completed 10th grade and the
50-year-old who held a job for 20 years before the plant closed, and
those differences are NOT assessed or accounted for in assessments.
Howard asked if there was one test that was "better than sliced bread".
I think the answer to that is "no." No one test will ever give a
complete picture. I think that is also the fatal flaw in the NRS. It's
data driven only, and data is one-sided. Data like that can be skewed
because not everyone tests well; data can be misleading - students test
high or low without it being a real "indication" of their ability;
students deliberately "blow" the test because they don't understand or
appreciate the significance of it. There are a lot of factors, it seems
to me, that make "standardized" testing flawed, but because of funding
issues, such tests are necessary. I think it becomes equally necessary,
then, for instructors to go beyond the "initial" assessment done at an
intake session to truly identify the needs and abilities of their
students. I think this can be done with one-to-one interviews, surveys,
and teacher-made materials. I think that as a student enters and learns,
portfolios of work highlighting their growth are the best assessment of
their ability.
I don't think there is an easy answer or solution.
Regards
Katrina Hinson
>>> hdooley@riral.org 07/27/05 10:21 PM >>>
"Help", he says, not quite desperately. (I have procrastinated, so I am
just a "nonce" from desperation.)
As my program (staff and learners) and fellow practitioners move into
the 21st century of "no adult left behind", trying to meet the
accountability requirements of federal, state, and program parties,
trying to be evidence-based, standards-based, and so on in the jargon of
the moment, we are as you are trying to prepare our learners for
post-secondary training/education and for living-wage jobs, and, well,
frankly (as St Paul said) trying to be "all things to all people so that
some few can be saved".
In that context, I am interested in hearing and/or discussing with folks
the implementation of standardized assessments. Are they always a
necessary evil? The devil's due? Have you found ways to make them
relevant, engaging?
Perhaps (whisper, wink) you are a true believer? Is the TABE, the
BEST, or the CASAS the best thing since sliced bread?
Don't be shy. Blast me. Guide me. Lurkers, come out and play.
Theorists, practicivists welcome to proselytize.
Do you reject standardization? Are you a naturalist? Please, let
me know how to move down the "path not taken."
If your comments are "not ready for prime-time", you can reply privately
to hdooley@riral.org. Thank you.
Howard L. Dooley, Jr.
Director of Accountability, Project RIRAL
Assessment Team, Governor's Taskforce on Adult Literacy
We could learn a lot from crayons: some are sharp, some are
pretty, some are dull, some have weird names, and all are
different colors...but they all have to learn to live in
the same box.

Disclaimer: This website was developed by Quotient Inc. with funding from the U.S. Department of Education (ED), Office of Career, Technical, and Adult Education (OCTAE), under Contract No.ED-VAE-14-O-5018. The opinions expressed herein do not necessarily represent the positions or policies of the U.S. Department of Education, and no official endorsement by the U.S. Department of Education should be inferred.