An Examination of the Relative Contributions of the Four Language Modalities to
English Language Proficiency: Implications for Assessment and Instruction Across
Grade Spans and Proficiency Levels

This consortium of six states — Arizona, Colorado, Louisiana, Montana, New Mexico, and Utah — led by Arizona, in partnership with WestEd and Pacific Metrics, and supported by national experts from institutions of higher education, proposes an 18-month study that systematically examines the four language modalities (i.e., listening, speaking, reading, writing) required to be assessed under Title III of the No Child Left Behind Act of 2001 in terms of (1) their relative contribution toward determining English language proficiency (ELP), (2) their interrelationships vis-à-vis ELP, and (3) whether and how those contributions and interrelationships change across (a) grade levels, (b) language proficiency levels, and (c) English learner (EL) student subgroups. This proposal originates from states’ need to ensure accurate and valid measures of ELP and of the achievement of their EL students, and its findings will have significant implications for standards and instruction that support EL student achievement.

The proposed study will analyze data from consortium member states related to EL students’ development and attainment of ELP. The study will be conducted within a framework of validity and utility that structures and organizes the study’s activities and outcomes. All states will benefit from the project in terms of (1) knowledge related to improving measurement of student development and attainment of ELP, (2) guidance related to creating systems of support for EL students, and (3) professional development that builds educator capacity related to supporting the development of EL students’ ELP.
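To make the question of "relative contribution" concrete: many ELP composite scores are computed as a fixed weighted average of the four domain scores. The sketch below is illustrative only — the weights, scale scores, and function name are hypothetical, not taken from any consortium state — but it shows the kind of weighting scheme whose empirical justification the study would examine.

```python
def composite_elp(scores, weights):
    """Weighted composite of domain scale scores.

    The weights are illustrative; the study asks whether fixed weights
    like these actually reflect each modality's contribution to ELP.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[domain] * scores[domain] for domain in weights)

# Hypothetical literacy-heavy weighting (not any particular state's scheme)
weights = {"listening": 0.15, "speaking": 0.15, "reading": 0.35, "writing": 0.35}
scores = {"listening": 380, "speaking": 360, "reading": 350, "writing": 340}
print(composite_elp(scores, weights))  # → 352.5
```

If the study finds, say, that listening contributes more to overall ELP at early grades than such weights imply, the weighting scheme itself becomes a target for revision.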

The Arizona Department of Education (ADE) submits this proposal for Project LEAAP, featuring an analysis of curricular progressions and student performance across grades on states’ alternate assessments based on alternate academic achievement standards (AA-AAAS) for students with significant cognitive disabilities. LEAAP will allow states to examine student progress over time (absolute priority 3), in both performance and skills assessed. LEAAP is proposed by a consortium of states (competitive priority 2): Arizona, Maryland, South Dakota, and Wyoming. LEAAP is a collaborative effort (absolute priority 1) with Western Carolina University, which will manage all project activities with oversight by the ADE, and with the University of North Carolina at Charlotte. LEAAP will inform states’ future improvements in AA-AAAS systems, including accessibility and validity (competitive priority 1).

Goal 1: Conduct a retrospective study of content and performance expectations in states’ AA-AAAS. PIs will conduct an analysis of content, performance expectations, and cognitive demands of items within and across grades to determine the nature of AA-AAAS progressions across grades. Content will be evaluated relative to state standards and the Common Core State Standards.

Goal 2: Investigate and define dimensions of growth in achievement for this population, using three years of student achievement data.

Goal 4: Provide technical assistance to states on interpreting and using their findings in order to improve assessment systems. An expert panel will review curricular progressions identified in Goal 1. States and the Advisory Committee will discuss the use of findings for continuous improvement.

Goal 5: Disseminate project products and findings. LEAAP includes a multi-faceted approach to dissemination (competitive priority 3), including data collection and reporting tools for states and project findings shared through a wide range of channels.

The Illinois State Board of Education (ISBE), on behalf of the 23-state World-Class
Instructional Design and Assessment (WIDA) Consortium, proposes to develop and implement
Spanish language development (SLD) standards for students in Pre-K through twelfth grade, and
to develop a practicable, reliable, and valid Spanish language proficiency assessment system for
Kindergarten and Grades 1-2 based on those standards, for English language learners whose
first language is Spanish and for students receiving content-area instruction in Spanish regardless of
their L1. The project has four goals: 1) create academic SLD Standards for grades preK-12; 2)
develop a technology-mediated assessment, the Prueba Óptima del Desarrollo del Español
Realizado (PODER), to ensure that Spanish language development, as defined by the SLD
Standards, is assessed validly and reliably in grades K-2; 3) disseminate information on the
project; and 4) collaborate with other institutions in the development, research, and
administration of the assessments.

The proposed Spanish Academic Language Standards and Assessment (SALSA) project will
result in preK-12 SLD Standards, along with a defensible, psychometrically sound, technologically
mediated test based on those standards. PODER will give LEAs the capacity to initially screen Spanish-speaking ELLs in grades K-2 to determine their baseline Spanish language proficiency, and to monitor students’ progress in Spanish language development in classrooms where Spanish is the medium of instruction. The SLD Standards and PODER, intended for use throughout the United States and in Puerto Rico, will be developed with input from academic experts and educational professionals from a wide spectrum of programs and states. Initial piloting and field testing of PODER will occur in multiple sites, including Illinois, Colorado, New Mexico, and Puerto Rico.

Develop Instrumentation to Analyze Fidelity of Instruction for Students with Disabilities
in Relation to Standards and Assessments
and Report on Opportunity to Learn and Student Achievement

With Kansas serving as the lead state, and Georgia and Ohio serving as anchor
consortium states, two Council of Chief State School Officers’ (CCSSO) State Collaboratives on
Assessment and Student Standards (SCASSs) — in partnership with the Wisconsin Center for
Education Research (WCER) at the University of Wisconsin–Madison and WestEd, Inc. — propose a unique
project integrating research, development, collaboration, and technical assistance that will
provide a rigorous method for improving the quality and validity of state assessments
designed to assess the knowledge and skills of students with disabilities. The two state
collaboratives, Assessment for Special Education Students (ASES) and Surveys of Enacted
Curriculum (SEC), have each been working with member states for over a decade, and now are
joining forces for this project. Kansas (the lead state), Georgia, and Ohio are active in the
work of both collaboratives and see strong prospects for the project. The central goal of the
proposed project is to develop and test a method for measuring and reporting the degree to which
instruction for students with disabilities (SWDs) is aligned to grade-level state standards. This
study will result in: 1) a report for state leaders, educators, and researchers with research-based
evidence on the fidelity of instruction to state standards and assessments for students with
Individualized Education Programs (IEPs) and evidence on the effects of differences in opportunity to
learn on subsequent student achievement, and 2) focused dissemination of materials and guides
for instructional improvement via conferences and other media that will highlight (a) research
results, (b) implications for schools and districts, and (c) strategies to improve practice for
students with disabilities.

Improving the Validity of Assessment Results for ELLs with Disabilities (IVARED)

States have developed participation criteria and accommodation policies for English language learners (ELLs) and for students with disabilities, but for the most part have done little to address these issues for students who fall into both groups: ELLs with disabilities. IVARED creates a consortium of states (Arizona, Minnesota, Maine, Michigan, and Washington) to address the validity of assessment results for ELLs with disabilities in statewide accountability assessments by examining the characteristics of these students and their performance, improving the process for making participation and accommodation decisions through expert panel input and studies of decision making, and developing principles to guide the assessment of ELLs with disabilities. Through these activities, states will be able to develop validity arguments for their assessments and assessment practices for ELLs with disabilities, thereby enhancing the quality of their assessment systems for measuring these students’ achievement.

States will collaborate with each other and with the National Center on Educational Outcomes
(NCEO) to examine state data, policies, and practices for ELLs with disabilities to achieve the following
objectives: (1) Identify and describe each state’s population and relate it to assessment performance; (2)
Describe inclusion in state assessment participation and performance policies; (3) Identify promising
practices for participation, accommodations, and test score interpretation decisions; (4) Strengthen the
knowledge base of assessment decision makers to improve decisions; and (5) Disseminate project results within states and nationally.

The materials developed through these objectives will become the basis for an online training module customized to each state’s specific requirements. All partners in the project agree on the importance of addressing validity issues for ELLs with disabilities, and are committed to IVARED activities to help them move toward their own validity arguments.

The Student Accessibility Assessment System (SAAS) Project is a collaborative effort to develop and
validate a comprehensive system that meets four critical needs that must be addressed in order to improve
the assignment of test accessibility options and to improve the validity of test-based inferences about
student achievement, particularly for students with disabilities, students with other special needs, and
English language learners. These needs include: a) improving educator understanding of accessibility options that
are available in computer-based test delivery environments; b) providing tools that allow educators to
work individually with students to explore test accessibility features that may help improve access for
each individual student during testing; c) creating better opportunities for students to develop familiarity
and comfort using computer-based test accessibility tools prior to testing; and d) providing educators with
empirical evidence that a selected tool or set of tools benefits the student while performing test items. To
meet these needs, the SAAS project will develop a comprehensive, integrated assessment tool that
provides: a) users with rich information about accessibility options available for computer-based tests; b)
tools to help users make informed decisions about the use of specific accessibility options and
combinations of these options; and c) measures to assess the effectiveness of selected options for
improving student access to test content. Once the system is developed, the project will conduct a validity study
that focuses on teachers’ use of the SAAS to inform the specification of student access profiles for an
operational state test. Through this study, the project will collect evidence to: 1) examine the effect that
the SAAS has on informing decisions about student accessibility profiles; 2) better understand the factors
that teachers and students consider when defining a profile; and 3) estimate the effect that informed
assignment of accessibility features has on the validity of inferences about student achievement based on
a test score. To accomplish these goals, nine states (NH, VT, ME, RI, CT, MT, UT, SC, and MD) and
experts from the University of Oregon, Boston College, the National Center for Educational Outcomes,
CAST, Measured Progress, Nimble Assessment Systems, ETS, American Printing House for the Blind,
and Gallaudet University will work collaboratively to develop, validate and disseminate the SAAS.
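The "student access profile" at the center of the proposed validity study can be pictured as a simple per-student record of enabled accessibility options. The sketch below is purely hypothetical — the class, field, and option names are illustrative and are not part of the actual SAAS design — but it conveys what such a profile would capture for an operational test.

```python
from dataclasses import dataclass, field

@dataclass
class AccessibilityProfile:
    """Hypothetical sketch of a student access profile: which
    computer-based accessibility options a student should receive
    during testing. All names here are illustrative only."""
    student_id: str
    options: set = field(default_factory=set)  # e.g. {"text_to_speech"}

    def enable(self, option: str) -> None:
        """Record that an option was selected for this student."""
        self.options.add(option)

    def is_enabled(self, option: str) -> bool:
        """Check whether an option is part of the student's profile."""
        return option in self.options

profile = AccessibilityProfile("s001")
profile.enable("text_to_speech")
print(profile.is_enabled("text_to_speech"))  # → True
```

In the proposed study, the interesting evidence is not the data structure itself but how teachers populate it — which options they select, why, and whether informed selection improves the validity of resulting scores.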

The Utility of Online Mathematics Constructed-Response Items: Maintaining Important
Mathematics in State Assessments and Providing Appropriate Access to Students

With North Carolina serving as the lead state, four state education agencies (North Carolina, Kentucky, New Mexico, and South Carolina) will collaborate to develop an assessment designed to better measure students’ knowledge and skills in mathematics. The assessment will consist solely of constructed-response items, delivered via computer with various accommodations available, and be automatically scored. Two sets of items will be developed: one for grade 7 and one for Algebra I, aligned to the Common Core State Standards. The use of constructed-response items in mathematics will allow states to measure students’ knowledge and skills relative to their mathematics standards in ways that more closely tap into the intentions of the standards than selected-response items can.

The online delivery system will allow for both automated scoring of student responses and a variety of administration and response conditions to meet the needs of students with disabilities, English learners, and other students who may have specific access needs but are not classified in one of those two groups. The collaborating states will work with a panel of experts in mathematics education and in educating and assessing students with a variety of learning and assessment requirements. The items will be administered to students in each state, and the results will be analyzed to address the following questions:

What computer-based accommodations are appropriate and feasible to meet a broad range of student needs, and do they increase student access to test content?

Does automated scoring provide reliable, valid, and acceptable (to users) item scores, and do these qualities apply equally to responses from different student groups, particularly English learners and students with disabilities?

Does the application of measurement models associated with constructed-response items reduce error in test scores through (a) the elimination of the need to estimate the guessing parameter and (b) the ability to score for partial knowledge?
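The guessing parameter and partial-knowledge scoring named in this question come from item response theory. As a minimal sketch, assuming standard model forms (the parameter values below are illustrative, not the models the states would necessarily fit): under the three-parameter logistic (3PL) model used with multiple-choice items, a pseudo-guessing parameter c must be estimated; constructed-response items drop that floor (reducing to a 2PL form) and admit polytomous models such as the generalized partial credit model (GPCM) that award credit for partial knowledge.

```python
import math

def p_3pl(theta, a, b, c):
    """3PL: probability of a correct response on a selected-response
    item; c is the pseudo-guessing (lower-asymptote) parameter."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def p_2pl(theta, a, b):
    """2PL: constructed response removes the guessing floor (c = 0)."""
    return p_3pl(theta, a, b, 0.0)

def gpcm_probs(theta, a, thresholds):
    """Generalized partial credit model: probabilities of earning
    score 0..m on a polytomous item with m step thresholds."""
    logits = [0.0]
    for d in thresholds:                     # cumulative step logits
        logits.append(logits[-1] + a * (theta - d))
    exp_logits = [math.exp(l) for l in logits]
    total = sum(exp_logits)
    return [e / total for e in exp_logits]   # sums to 1 across scores

# At theta == b, a guessing floor of c = 0.2 inflates the probability
# of a correct answer from .5 to .6 — the error source at issue.
print(p_3pl(0.0, 1.0, 0.0, 0.2))  # → 0.6
print(p_2pl(0.0, 1.0, 0.0))       # → 0.5
print(gpcm_probs(0.0, 1.0, [-1.0, 1.0]))  # three score categories: 0, 1, 2
```

The study question is whether, empirically, removing c and scoring for partial knowledge in this way yields measurably less error in operational test scores.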