Research to Practice: The Future of the Regional Educational Labs

The challenge of creating evidence-based practice bedevils a number of fields. In medicine, for example, health care workers frequently fail to wash their hands thoroughly after patient contact, even though they are aware of the importance of doing so.

In education, the federal government has historically placed substantial responsibility for translational research in the hands of the Regional Educational Laboratories (RELs), which were established in 1966 as part of the original Elementary and Secondary Education Act (ESEA). President Lyndon Johnson, whose administration oversaw the creation of the ESEA and the RELs, had expansive aspirations for a federal role in education. The ESEA was to provide federal funding for reform, focused on children from low-income families. The knowledge base for reform was to be created by new national research and development centers, and the RELs were to be the translators of scientific knowledge for practitioners. That original vision was not realized as a result of several enduring impediments:

No science to translate. In 1999, the National Academies of Science concluded about education that “In no other field … is the research base so inadequate and little used.” If good education research was scarce in 1999, imagine what was available in 1966 when the RELs began. Thus the initial model for the RELs, under which they would perform a role similar to that of extension agents in agriculture, was conceptually inspired but practically impossible. Agricultural agents had solid science to convey to farmers. The RELs had little or no such science to convey to school administrators and teachers.

Low funding levels. Another problem with the original model for the RELs is that the level of funding fell hopelessly short of what would have been necessary to serve a customer base of individual school districts. There is an agricultural cooperative extension agent in every county in the U.S. The extension service receives over $700 million in annual direct federal support, further significant amounts from state appropriations, and indirect support from the land grant universities that house the program. In contrast, there are 10 educational labs for the entire United States with a combined budget of $70.7 million. They deploy no field representatives.

Need to lobby. A third problem for the RELs emerged over time and was a predictable consequence of low levels of funding and unrealistic definitions of mission: their business model shifted to maintaining congressional support for continued funding. To this end they formed a lobbying organization, now called the Knowledge Alliance, whose principal goal was thwarting periodic efforts to terminate the REL program. For example, the George W. Bush administration proposed the elimination of the REL program in its budget requests to Congress in 2004, 2005, and 2006. The RELs successfully lobbied to continue funding for the program at existing levels. A circumstance in which the worth of a federal program is in doubt, administration support is equivocal, and survival depends on political lobbying is not conducive to program improvement, either from the top down or the bottom up.

Who is in charge? Management of the RELs from the federal level was challenging because of conflicting statutory provisions that require federal oversight while also requiring that the RELs be responsive to their own regional boards. Over time and through several administrations, the Department of Education and Congress tried to assert more influence over the operation of the RELs to assure that they were serving federal priorities and doing justice to the research they were supposed to be translating into practice. However, there was often a conflict between satisfying federal officials and pleasing regional constituencies, particularly those who could help lobby Congress for continued funding.

Quality control. The lamentable state of education research in 1999 as described by the National Academies extended to the work of the RELs. Efforts at the federal level to create higher standards for the RELs were complicated by confused lines of authority, as described previously, as well as ambiguities in branding. The larger REL contractors, which began exclusively as RELs, came to generate far more of their income from state, foundation, and other federal sources than from their REL contracts. However, all of the corporate entities’ products tended to be viewed as REL work, though most of it was not subject to oversight by the federal office responsible for the RELs — now the Institute of Education Sciences (IES) within the U.S. Department of Education. Substantial progress on the conjoined problems of quality and branding has been made in the last few years by requiring that REL products undergo independent peer review managed by IES, that the individual REL names, e.g., REL Midwest, be restricted to use only with products produced under the REL contract, and that all written REL products be disseminated through a website shared by the RELs.

Opportunities

Some of the historical challenges to the RELs have ebbed substantially. First, and most importantly, there is now science to translate. For example, the What Works Clearinghouse, an activity of IES, has within the last 6 years conducted systematic reviews and reported the evidence on approximately 500 separate branded education interventions and programs and identified about 20 percent as having positive or potentially positive evidence of effectiveness. The standards of evidence of the Clearinghouse are clear and in keeping with those employed to examine program impact in other fields. This makes viable for the first time the original vision for the RELs as brokers and translators between the worlds of research and practice.

Second, the Obama administration is strongly committed to education reform, and the present leadership of IES sees the RELs as critical to that effort.

Third, the quality control issues that have plagued the RELs over much of their existence have been lessened considerably by the independent review procedures instituted under their current contracts.

Finally, while the principal work carried out by the RELs under their current contracts, rigorous evaluations of interventions relevant to their regions, is not, in my view, their ideal function, it caused many RELs to hire new staff with solid science credentials. This has, in turn, reinvigorated a culture of science within the organizations that house the RELs and provided a solid base for the next steps in their evolution.

What direction should that evolution take? I see three possibilities:

Status quo. The current 5-year contracts for RELs, which run through 2010, require them to carry out two principal activities: fast response reports and rigorous impact evaluations. The fast response reports, which have been produced in quantity, are quick turnaround descriptive analyses of education trends and conditions within the region. Examples include a review of professional teaching standards in the western region, a description of how states in the Midwest use their data systems in attempts to improve educational achievement, and reports of the degree of alignment between state learning standards and NAEP assessment frameworks in the Southwest. In addition, each REL is conducting at least one rigorous impact evaluation using a randomized controlled methodology. Fifteen are underway. Examples include studies of the effects of a branded character education program on student achievement and behavior, the impact of a professional development program intended to improve teachers’ use of formative assessment in the classroom, and the impact of a supplemental reading software program on student reading achievement. None of the 15 studies has been completed and reported.

Fast turnaround descriptive reports, e.g., how states are using their data systems or credentialing teachers, have potential value. However, I suspect that a generous portion of the fast turnaround reports produced to date were in response to the requirement from IES for productivity in this area rather than in response to pressing needs expressed by customers in the regions.

Having the RELs conduct randomized controlled trials that take five years to report out, eat up a substantial portion of their budgets, and in most cases have no unique relevance to their region is neither politically sustainable nor the best use of this resource. The nation needs more rigorous impact evaluations of education programs and practices. However, Congress should provide a dedicated funding stream for such work, and contracts and grants to do it should be awarded competitively. Organizations housing RELs that have built capacity for this type of work can compete for these contracts along with the contract research firms for which rigorous evaluations are bread and butter.

Analysis of statewide administrative data. The goal of having statewide longitudinal education databases in every state was pursued vigorously during the George W. Bush administration. The Obama administration has added substantially to funding for this effort through the American Recovery and Reinvestment Act of 2009, and has threatened to disqualify states that do not link their student data to individual teachers from competition for $4.3 billion in Race to the Top funds. If all goes according to plan, in the near future all states will have data warehouses with longitudinal student achievement data linked to a variety of education input variables. These administrative databases can serve as a potent fuel for research, analogous to the role that increasingly powerful radio and optical telescopes have played in astronomy.

However, having data available and being able to use it are two different things. Only a few states have the staff capacity within their state education office to conduct analyses of longitudinal data to address policy questions. This means that most policy initiatives fly blind, both in original design and subsequent appraisal. Efforts by IES to increase analytic work with state longitudinal databases, most notably through funding the national Center for the Analysis of Longitudinal Data in Education Research (CALDER), have been important and productive. However, CALDER and related academic centers are run by researchers who address questions of interest to them on their timetable. Often these questions are of little interest to state policymakers, who have no place to turn for fast and sophisticated analyses of state data to address questions of importance to them. This is an important gap that the RELs might fill.

One state may want, for example, a value-added analysis of differences in initial teacher effectiveness by teacher preparation institutions within the state. Another may be interested in identifying charter schools that perform significantly below average given the demographics of the students they serve. Still another may want to know whether its policies to increase AP participation in schools serving disadvantaged students have been associated with higher graduation and college enrollment rates in those schools.

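To make the second of these examples concrete, here is a minimal, purely illustrative sketch of how a REL analyst might flag charter schools performing well below what their demographics predict. The school names, scores, regression approach (a simple one-variable adjustment for percent low-income), and flagging threshold are all invented for illustration; a real analysis would use the state's longitudinal data and richer statistical controls.

```python
# Hypothetical sketch: flag charter schools whose achievement falls well
# below the level predicted by student demographics. All data are invented.

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# (school, percent low-income students, mean achievement score)
schools = [
    ("Charter A", 80, 42), ("Charter B", 30, 71),
    ("Charter C", 60, 38), ("Charter D", 50, 60),
    ("Charter E", 20, 78), ("Charter F", 70, 47),
]

xs = [pct for _, pct, _ in schools]
ys = [score for _, _, score in schools]
a, b = fit_line(xs, ys)

# Residual = actual score minus the score predicted from demographics.
residuals = {name: score - (a + b * pct) for name, pct, score in schools}

# Flag schools scoring more than 5 points below their predicted level
# (an arbitrary threshold chosen for this illustration).
flagged = [name for name, r in residuals.items() if r < -5]
print(flagged)
```

The point of the sketch is the workflow, not the statistics: the analysis adjusts each school's result for its student population before making comparisons, which is what distinguishes this kind of question from a simple ranking of raw scores.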
A principal challenge to the RELs in pursuing this line of work is that the previous list of worthwhile questions could have been several pages long. A budget of $6 to $7 million a year for an individual REL would cover the costs of addressing only a few questions that might be posed by state policymakers in each state in its region. Further, success in providing this service would generate substantially more demand for it. More funding for the RELs would help, but a free good of this type is likely to be oversubscribed even if more generously funded. One solution to setting priorities would be to give state departments of education credits they could expend in contracting for analytic work by the RELs, and to provide inducements for them to match their credit expenditures with funding from their administrative set-asides for other federal programs such as Title I of ESEA.

Improving process systems within schools and districts. IES has successfully pushed the academic research community to address applied questions on topics such as curriculum, teacher effectiveness, pedagogy, and instruction. The commercial marketplace is more or less successful in generating products for educators in areas in which school districts and states have budgets. Textbooks, instructional and administrative software, and assessments of student achievement are the principal instances. Where there is a substantial unmet need is in the development of process innovations; that is, modifications of existing processes and procedures to achieve superior results or greater efficiencies.

Few would argue that school districts are at the forefront of process improvements. The vast majority of individual K-12 school districts are monopolies in the sense that residents within district boundaries cannot shop for more effective or lower cost educations for their children. Further, there has been an absence of comparative data and accountability for anything other than student achievement. As a result, the pressures for process improvements in industries in which there is real competition on quality, price, efficiency, and customer satisfaction, where there are accepted comparative benchmarks for core business processes, or where productivity is required for survival are largely absent in public K-12 education.

Almost all areas of the education system are in need of effective tools to improve processes, from school lunch programs to bus transportation to teacher selection to classroom instructional practices to truancy monitoring. This space is largely unoccupied by developers because there isn’t an existing marketplace for process improvement products. This is a classic case for government-funded R&D.

For example, IES has produced a useful practice guide that describes how instruction should be aligned with findings from cognitive science, such as the value of frequent quizzing. Who is producing a checklist for use by teachers to assure that their quizzing practices meet reasonable standards as extrapolated from the research summarized in the IES practice guide?

Human resource processes in most school districts are far from empirical, even though there is substantial research on the measurable characteristics of prospective teachers that predict classroom performance. Who is developing a prediction equation for use by districts that enables the selection of teachers who have higher likelihoods of initial classroom effectiveness?

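A prediction equation of the kind described above might, in its simplest form, look like the following sketch. The predictor variables, weights, and applicant data are invented for illustration; in practice a district would estimate the weights from its own historical data linking applicant characteristics to later classroom performance.

```python
import math

# Hypothetical teacher-selection prediction equation. The predictors and
# weights below are invented; a real equation would be estimated from a
# district's own applicant and performance data.

WEIGHTS = {
    "college_gpa": 0.8,       # undergraduate GPA, 0-4 scale
    "content_exam": 0.02,     # licensure content-exam score, 0-100
    "interview_rating": 0.5,  # structured interview rating, 1-5
}
INTERCEPT = -5.0

def predicted_effectiveness(applicant):
    """Logistic-style score (0 to 1) for likely initial effectiveness."""
    z = INTERCEPT + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

applicants = [
    {"name": "A", "college_gpa": 3.6, "content_exam": 88, "interview_rating": 4},
    {"name": "B", "college_gpa": 2.9, "content_exam": 70, "interview_rating": 3},
]

# Rank applicants from highest to lowest predicted effectiveness.
ranked = sorted(applicants, key=predicted_effectiveness, reverse=True)
print([a["name"] for a in ranked])
```

The design point is that the tool turns dispersed research findings into a single, usable ranking; the statistical machinery itself is routine.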
The What Works Clearinghouse has summarized considerable data on which curricula have the best evidence of effectiveness, but many districts don’t use it. Who is developing a decision tool for districts that generates a favorability score for curriculum products based on quality and extent of evidence, size of effect, cost, and the similarity of the district’s population to the students who were studied in research?

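A decision tool of the sort just described could be as simple as a weighted score. In the sketch below, the weights, rating scales, and product data are all invented for illustration; a real tool would calibrate its inputs against What Works Clearinghouse evidence ratings and district cost data.

```python
# Hypothetical curriculum "favorability score": a weighted average of
# evidence quality, effect size, cost, and population similarity, each
# rated on a 0-1 scale. Weights and product data are invented.

WEIGHTS = {"evidence": 0.35, "effect": 0.30, "cost": 0.15, "similarity": 0.20}

def favorability(product):
    """Weighted average of the product's 0-1 ratings; higher is better."""
    return sum(WEIGHTS[k] * product[k] for k in WEIGHTS)

products = {
    # evidence: quality and extent of evidence; effect: rescaled effect size;
    # cost: 1 = inexpensive; similarity: match to the district's students.
    "Curriculum X": {"evidence": 0.9, "effect": 0.6, "cost": 0.4, "similarity": 0.8},
    "Curriculum Y": {"evidence": 0.5, "effect": 0.8, "cost": 0.9, "similarity": 0.3},
}

scores = {name: round(favorability(p), 3) for name, p in products.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

Even a crude tool like this would let a district make the tradeoffs among evidence, cost, and fit explicit rather than implicit.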
Who is developing software and a web portal that can be deployed by high schools that allow 10th graders and their parents to determine, based on the student’s course selection, grades, an online assessment, and family financial information, the student’s likely success at gaining admission to a range of postsecondary institutions, the probability that the student will need to take remedial courses if admitted, the likely net cost of attendance, and the options the student has over the remaining years of high school to improve postsecondary prospects?

Who is generating a database that local taxpayers, school boards, legislatures, and policymakers can consult to identify school districts that are outliers on dimensions such as the cost of transportation services, central administration staffing, bonding indebtedness, and the like?

The RELs can do this type of work, and it is badly needed.

Summary and Recommendations

Education practice and policy have begun a transformation to research-based practice that was anticipated with the passage of the first ESEA over 40 years ago but was thwarted until recently by underinvestment in and weak standards for education research. We now have a solid and rapidly expanding base of rigorous and relevant research on which more effective practice can be constructed.

The RELs were originally designed to be the brokers and translators of education research to practice and policy but could not fulfill that role absent a strong research base, effective mechanisms for quality control of their products, and an activist stance by the federal government in education reform. The next few years will be ones of unparalleled opportunity for the RELs to demonstrate the value of their original mission, and in so doing to be a critical component of the transformation of education to an evidence-based field. The upcoming re-competition of the REL five-year contracts, the reauthorization of the Education Sciences Reform Act and the Elementary and Secondary Education Act, and the expression of the vision of the new leadership of IES and the Department of Education will be the federal fulcrums for the evolution of the RELs. The organizations that house the existing RELs, and possible new entrants, will have an important role to play as well: articulating their own views of how they can best serve their regions, deciding whether they are in the business of simply growing their business or of advancing a mission of research-based reform, and determining whether their best prospect for the future lies in partnering with the Obama administration and the IES leadership or in continuing to lobby Congress for what has become a funding set-aside under terms that provide as little control as possible to the Department of Education.

There are three possibilities for the type of activity that the RELs can most usefully carry out within the practical limits of their funding and their broad missions of regional assistance. The first is to continue down the current path of generating fast turnaround descriptive reports on the state of education in their regions and conducting ambitious and time consuming randomized trials of the impact of education programs and practices. The second is the analysis of statewide administrative data to provide decisional and implementation support to state policymakers and administrators. The third is to develop tools to improve the quality and efficiency of workaday school district processes.

The Obama administration and the regional advocates for the RELs will likely prefer some variant of the analytic function. After all, Secretary of Education Arne Duncan and President Obama have identified the expansion and use of statewide longitudinal databases as one of their top education priorities, and the prior professional experience of John Easton, the new director of IES, was exclusively in carrying out such work for the Chicago Public Schools and the Consortium on Chicago School Research.

There are, however, other ways to beef up the analytic function at the state level. In particular, the IES grant program to establish statewide longitudinal databases, which should achieve its goal in the near future, could be transformed into a grant program to support state analysis of data for policy and implementation purposes. Congressional support for this funding stream is well established, funding levels are substantial, and giving states the financial resources to carry out or purchase their own analytics is likely to generate work that can be conducted more quickly and more responsively than a process that requires prioritization by the RELs and IES.

The R&D enterprise in education needs to invest more deeply and systematically in process innovations that will serve the practical needs of school districts and schools. We are unlikely to get dramatically better at educating students until we have a cadre of researchers whose job is to engineer more efficient and effective processes for carrying out the work of schools. Education has an increasingly strong research community, but outside of a few areas such as instructional materials, it lacks engineers. Should the RELs become the engineers for those multiple aspects of school improvement that currently do not have a commercial market, they could be a transformational force for public education.

The nation can no longer tolerate vast differences in the quality of its schools and classrooms. Residential geography and quirks of school choice and classroom assignment cannot continue to define the education destiny of individual students. We need for all of our schools to be good enough to do the job that is expected of them. This will require nothing less than relentless effort to engineer processes that assure acceptable results. Much of this work will be down in the weeds and the results of any single effort will be incremental. Examined within a short time frame, it may not look like it is going very far. But it is the accumulation and progression of those incremental improvements that will ultimately be transformational for student achievement and the nation’s future. If the RELs don’t do this work, who will?

The recommendations in this document are developed more fully and placed in the context of the history of education in Whitehurst, G. (2010). Education Research: Past, Present, and Future. San Francisco: WestEd. Available online at http://www.wested.org/cs/we/view/rs/1006. Excerpt adapted here by permission of WestEd.