Institute of Education Sciences

By Diana McCallum, Education Research Analyst, What Works Clearinghouse

It’s been more than a decade since the first What Works Clearinghouse reports were released and we have a wealth of information and resources that can help educators and leaders make evidence-based decisions about teaching and learning. Since 2005, the WWC has assessed more than 11,500 education studies using rigorous standards and has published hundreds of resources and guides across many content areas. (View the full version of the graphic to the right.)

The WWC website has already received more than 1.7 million page views this year, but if you haven’t visited whatworks.ed.gov lately, here are five reasons you might want to click over:

1) We are always adding new and updated reviews. Multiple claims about programs that work can be overwhelming and people often lack time to sift through piles of research. That’s where the WWC comes in. We provide an independent, objective assessment of education research. For example, we have intervention reports that provide summaries of all of the existing research on a given program or practice that educators can use to help inform their choices. In addition, when a new education study grabs headlines, the WWC develops a quick review that provides our take on the evidence presented to let you know whether the study is credible. In 2015, we added 43 publications to WWC and we’re adding more every month this year.

2) We’ve expanded our reach into the Postsecondary area. In late 2012, the WWC expanded its focus to include reviews of postsecondary studies, capturing emerging research on topics ranging from the transition to college to postsecondary success. To date, the WWC has reviewed over 200 studies on postsecondary programs and interventions, and this area continues to grow rapidly. In fact, several Office of Postsecondary Education grant competitions add competitive preference priority points for applicants that submit studies that meet WWC standards. (Keep an eye out for a blog post on the postsecondary topic coming soon!)

3) You can find what works using our online tool. Wondering how to get started with so many resources at your fingertips? Find What Works lets you do a quick comparison of interventions for different subjects, grades, and student populations. Want to know more about a specific intervention? We’ve produced more than 400 intervention reports that summarize the evidence on a curriculum, program, software product, or other intervention before you choose it for your classroom. Recently, we added a feature that allows users to search for interventions that have worked for different populations of students and in different geographic locations. As we mentioned in a recent blog post, the Find What Works tool is undergoing an even bigger transformation this September, so keep visiting!

4) We identify evidence-based practices to use in the classroom. The WWC has produced 19 practice guides that feature practical recommendations and instructional tips to help educators address common challenges. Practice guides (now available for download as ebooks) provide quick, actionable guidance for educators that is supported by evidence and expert knowledge within key areas. Some of our guides now feature accompanying videos and brief summaries that demonstrate recommended practices and highlight the meaning behind the levels of evidence. Practice guides are also actively disseminated during Regional Educational Laboratory (REL) Bridge events. For instance, REL Southwest held a webinar on Teaching Math to Young Children, which was based on a WWC practice guide. For more information, read a previously published blog post on practice guides.

In the coming months, we’ll post other blogs that explore different parts of the WWC and tell you about ongoing improvements. So keep visiting the What Works website, or sign up to receive emails when we release new reports or resources. You can also follow us on Facebook and Twitter.

The What Works Clearinghouse is a part of the National Center for Education Evaluation and Regional Assistance in the Institute of Education Sciences (IES), the independent research, evaluation, and statistics arm of the U.S. Department of Education. You can learn more about IES’ other work on its website or follow IES on Twitter and Facebook.

EDITOR’S NOTE: This is part of a series of blog posts about statistical concepts that NCES uses as a part of its work.

Many of the important findings in NCES reports are based on data gathered from samples of the U.S. population. These sample surveys provide an estimate of what the data would look like if the full population had participated in the survey, at a great savings in both time and cost. However, because the entire population is not included, there is always some degree of uncertainty associated with an estimate from a sample survey. For data users, knowing the size of this uncertainty is important both for evaluating the reliability of an estimate and for statistical testing to determine whether two estimates are significantly different from one another.

If differences between groups are not statistically significant, NCES uses the phrases “no measurable differences” or “no statistically significant differences at the .05 level”. This is because we do not know for certain that differences do not exist at the population level, just that our statistical tests of the available data were unable to detect differences. This could be because there is in fact no difference, but it could also be due to other reasons, such as a small sample size or large standard errors for a particular group. Heterogeneity, or large amounts of variability, within a sample can also contribute to larger standard errors.

Some of the populations of interest to education stakeholders are quite small, for example, Pacific Islander or American Indian/Alaska Native students. As a consequence, these groups are typically represented by relatively small samples, and their estimates are often less precise than those of larger groups. This lower precision is reflected in larger standard errors. For example, in the table above the standard error for White students who reported having been in 0 physical fights anywhere is 0.70, whereas the standard error is 4.95 for Pacific Islander students and 7.39 for American Indian/Alaska Native students. This means that the uncertainty around the estimates for Pacific Islander and American Indian/Alaska Native students is much larger than it is for White students. Because of these larger standard errors, differences between these groups that may seem large may not be statistically significant. When this occurs, NCES analysts may state that large apparent differences are not statistically significant. NCES data users can use standard errors to help make valid comparisons using the data that we release to the public.
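To make the comparison concrete, the sketch below runs a simple two-sided z-test on the difference between two independent group estimates. The standard errors (0.70 and 7.39) are the ones cited from the table above, but the point estimates are hypothetical, invented purely for illustration; NCES's actual testing procedures also account for the complex survey design.

```python
import math

def z_test(est1, se1, est2, se2):
    """Two-sided z-test for the difference between two independent estimates.

    Assumes independence, so the standard error of the difference is the
    square root of the sum of the squared standard errors.
    """
    se_diff = math.sqrt(se1**2 + se2**2)
    z = (est1 - est2) / se_diff
    significant = abs(z) > 1.96  # critical value at the .05 level
    return z, se_diff, significant

# Hypothetical percentages; only the standard errors come from the table.
z, se_diff, sig = z_test(est1=80.0, se1=0.70, est2=72.0, se2=7.39)
```

Here an apparent 8-point gap is not statistically significant: the large standard error for the smaller group makes the standard error of the difference about 7.4, so the z statistic falls well short of 1.96.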

Another example of how standard errors can affect whether sample differences are statistically significant can be seen when comparing changes in NAEP scores by state. Between 2013 and 2015, mathematics scores changed by 3 points for fourth-grade public school students in both Mississippi and Louisiana. However, this change was significant only for Mississippi. This is because the standard error for the change in scale scores for Mississippi was 1.2, whereas the standard error for Louisiana was 1.6. The larger standard error, and therefore larger degree of uncertainty around the estimate, factors into the statistical tests that determine whether a difference is statistically significant. This difference in standard errors could reflect the size of the samples in Mississippi and Louisiana, or other factors such as the degree to which the assessed students are representative of the population of their respective states.
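The state comparison can be sketched as a z-test of each score change against zero. This is a simplification (NAEP's actual procedures are more involved), but it shows how the same 3-point change clears the .05 threshold with a standard error of 1.2 and misses it with a standard error of 1.6.

```python
def change_is_significant(change, se, critical=1.96):
    """Test whether an estimated change differs from zero at the .05 level."""
    z = change / se
    return abs(z) > critical, z

# Score changes and standard errors from the NAEP comparison above.
ms_sig, ms_z = change_is_significant(3.0, 1.2)  # Mississippi: z = 2.5
la_sig, la_z = change_is_significant(3.0, 1.6)  # Louisiana:   z = 1.875
```

Mississippi's z statistic (2.5) exceeds 1.96, so its change is statistically significant; Louisiana's (1.875) does not, so its identical 3-point change is not.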

Researchers may also be interested in using standard errors to compute confidence intervals for an estimate. Stay tuned for a future blog where we’ll outline why researchers may want to do this and how it can be accomplished.

The mission of the Institute of Education Sciences (IES), at its core, is to create a culture in which independent, rigorous research and statistics are used to improve education. But sometimes research is seen by practitioners and policymakers as something that is done for them or to them, but not by them. And that’s something we’re hoping to change.

IES is always looking for new ways to involve educators in producing and learning about high-quality, useful research. We believe that if state and school district staff see themselves as full participants in scientific investigation, they will be more likely to make research a part of their routine practice. Simply put, we want to make it easier for educators to learn what works in their context and to contribute to the general knowledge of effective practices in education.

That’s why we’re so pleased to add the RCT-YES™ software to the IES-funded toolkit of free, user-friendly resources for conducting research. Peter Schochet of Mathematica Policy Research, Inc. led the development of the software, as part of a contract with IES held by Decision Information Resources, Inc.

RCT-YES has a straightforward interface that allows the user to specify the analyses for data from a randomized controlled trial (RCT) or a quasi-experiment. Definitions and tips in the software help guide the user, and the accompanying documentation includes a mini-course on RCTs. When the user enters information about the data set and study design, RCT-YES produces a program that runs the specified analyses (in either R or Stata) and provides a set of formatted tables.

The target users are those who have a basic knowledge of statistics and research design but lack advanced training in conducting impact studies or analyzing their data. But we expect that even experienced researchers will like the simplicity and convenience of RCT-YES and benefit from some of its novel features, such as how it reports results.

When used properly, RCT-YES provides all of the statistics needed by the What Works Clearinghouse™ (WWC) to conduct a study review. This is an important feature because the WWC often needs to contact authors—even experienced ones—to obtain additional statistics to make a determination of study quality. RCT-YES could help advance the field by increasing the completeness of study reports.

Another unique feature of the software is that it defaults to practices recommended by IES’ National Center for Education Statistics for the protection of personally identifiable information. For example, the program suppresses reporting on small-size subgroups.

While the user sees only the simplicity of the interface, the underlying estimation methods and code required painstaking and sophisticated work. RCT-YES relies on design-based estimation methods, and the development, articulation, peer review, and publication of this approach in the context of RCT-YES was the first careful step. Design-based methods make fewer assumptions about the statistical model than methods traditionally used in education (such as hierarchical linear modeling), making this approach especially appropriate for software designed with educators in mind.
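To give a flavor of what "design-based" means here, the sketch below implements the classic Neyman difference-in-means estimator for a simple randomized trial. This is a generic textbook illustration with made-up scores, not RCT-YES's actual code: the impact estimate is just the difference in group means, and its variance estimate relies on the randomization itself rather than on a parametric outcome model.

```python
import math

def neyman_estimate(treated, control):
    """Design-based (Neyman) impact estimate for a simple randomized trial.

    Impact = difference in group means; the variance estimate,
    s_t^2/n_t + s_c^2/n_c, does not assume a parametric outcome model.
    """
    n_t, n_c = len(treated), len(control)
    mean_t = sum(treated) / n_t
    mean_c = sum(control) / n_c
    # Sample variances within each randomized group
    var_t = sum((y - mean_t) ** 2 for y in treated) / (n_t - 1)
    var_c = sum((y - mean_c) ** 2 for y in control) / (n_c - 1)
    impact = mean_t - mean_c
    se = math.sqrt(var_t / n_t + var_c / n_c)
    return impact, se

# Hypothetical outcome scores for a tiny illustrative trial
impact, se = neyman_estimate([75, 82, 78, 90, 85], [70, 74, 68, 80, 73])
```

Because the estimator conditions only on the random assignment, it avoids the modeling assumptions of approaches like hierarchical linear modeling, which is what makes the design-based approach attractive for software aimed at non-specialists.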

The software is available for download from the RCT-YES website, where you can also find support videos, documentation, a user guide, and links to other helpful resources. The videos below, which are also hosted on the RCT-YES website, give a quick overview of the software.

There are many other ways that IES fosters a culture of research use in education. For instance, our 10 Regional Educational Laboratories (RELs) have research alliances that work with states and districts to develop research agendas. The RELs also host events to share best practices for putting research into action, such as the year-long series of webinars and training sessions on building, implementing, and effectively using Early Warning Systems to reduce dropping out.

IES also offers grants to states and districts to do quick evaluations of programs and policies that have been implemented in their schools. The low-cost, short-duration evaluations not only help the grantees discover what is working, but can help others who might use the same program or implement a similar policy. (We’ll announce the first round of grant recipients in the coming weeks).

Most people remember being told not to talk in class or risk a trip to the principal’s office or a note sent home. But researchers in the Reading for Understanding Research Initiative (RfU) want students to talk in class as a way to improve reading comprehension.

Five research teams in the RfU network have designed and tested new interventions intended to provide a strong foundation for reading comprehension in students from pre-kindergarten through high school. And promoting high quality language use and talk among students is a central feature of many of these interventions. The goal is to improve reading outcomes by building students’ understanding of rich syntax and academic language to express and evaluate complex ideas.

RfU researchers have conducted studies in 29 states, and interventions developed by the RfU network have been tested for efficacy with over 30,000 students (see the chart to the right for more information on the grantees and the map below to see where they conducted research).

While findings from these studies are still forthcoming, some interventions already show promise for improving reading comprehension and/or its supporting skills. New assessments have been field-tested with over 300,000 students across the country and have demonstrated their capacity to collect valid and useful information for teachers, schools, and researchers.

Support for informative and instructional talk by students was provided in a variety of ways across different academic areas, including social studies, science, and English language arts classes. Some teams developed new classroom activities to structure whole-class discussion through student debate on current topics of interest. In a program like Word Generation, students discuss a focal question designed to stimulate a range of opinions on current topics, such as ‘Should students be required to wear school uniforms?’ or ‘Are green technologies worth the investment?’ In other interventions, such as PACT, students spend time talking in pairs or small groups to reinforce a new concept or idea.

Teachers are understandably concerned about how to manage a classroom in which students are talking. As part of RfU, curricula and materials were created to help teachers to improve their skills in managing constructive student talk, and several teams also provided extensive professional development for teachers.

Attention to the importance of student talk was also evident in GISA, a computer-based assessment developed by ETS that uses a scenario-based approach. Rather than talking with their peers during the assessment, students interact with avatars in a simulation of a realistic classroom-based task.

Using student talk to improve reading comprehension is just one of many supports that have been explored by the RfU teams in their extensive body of work over the past six years. The RfU teams provided an update on their research during an event in May. You can watch a webcast of the event until July 31, 2016.

Visit the IES website to see a detailed agenda for the May event and to learn more about the work of the Reading for Understanding Research Initiative. In addition to providing an overview of the work, the abstracts include links to RfU team websites and many of these have examples of their materials. Materials for the Word Generation and PACT interventions are available for free on their websites, and several other RfU grantees will be making their materials freely available in the coming year.

Written by Karen Douglas, project lead, Reading for Understanding Research Initiative, National Center for Education Research

ERIC builds a strong education research collection by continuously seeking out new sources of rigorous content and adding them to the collection. But how does ERIC select publications for the online library?

A new video (embedded below) provides the answer to how ERIC selects new sources, including education-focused journals, grey literature reports, and conference papers. The video was developed to help answer one of the most frequently asked questions by ERIC users and to help publishers and organizations producing materials in the field of education understand what ERIC considers when evaluating potential new sources. Watch this video if you want to learn about the types of resources ERIC will and will not index, the source selection process, and how to recommend a new resource.

Twice a year, in the spring and fall, ERIC reviews journals and producers of conference papers, reports, and books as potential candidates for inclusion in ERIC, using a revised selection policy as a guide when evaluating recommended content. The revised policy was released in January 2016 to clarify the types of materials ERIC is seeking for the collection. ERIC considers resources that are focused on education research and include citations, original analyses of data, and well-formed arguments. ERIC also considers collection priorities, such as peer-reviewed and full-text materials.

We are continuously working to build a strong education research collection that includes the latest and best resources in the field. If you are a publisher of high-quality education research, have a favorite journal, or know a source of conference papers or reports not currently in ERIC, please send us your recommendations.