IES Selects ST Math for Replication Studies

ST Math has been identified by the U.S. Department of Education's Institute of Education Sciences (IES) as one of six math programs to be further investigated under what IES terms “replication studies.”

“We look forward to using ST Math to help IES realize its vision of a healthier and better-informed market in education program evaluations. ST Math is ideal as its visual approach can add value to any other math content. Our previous IES collaboration was extremely informative and established ST Math’s strong impact evidence. But it was a one-off on our program three generations ago, pre-common-core assessments, and one type of school profile. There is clearly an enormous opportunity to improve via replication over multiple settings, informed by high caliber evaluation partners,” said Andrew Coulson, Chief Data Science Officer for MIND Research Institute, the organization that created ST Math.

“At MIND, evaluation research is always about more than a ‘green light’ for ST Math as a product. It is about contributing to the field of evaluation generally - repeatable, generalizable methods, analysis and interpretation. We hope to pioneer some innovations in large-scale RCT (randomized controlled trial) methods as well - such as how to rigorously take into account digital treatment dose,” said Coulson.

A central goal of IES's mission is to identify what works, for whom, under what conditions. Unfortunately, most of the studies that have found impact contribute little toward meeting that goal. Many, if not the great majority, of these projects were carried out in a single location and/or tested in a relatively small number of settings, with few teachers and learners. Given their limited scope, it is usually impossible to judge whether the tested interventions would work with different types of students or in different education venues.

In the last few years, IES has explicitly called for and supported replications, but these far too often replicate the problem of the original study: too few settings, too few subjects, too little variation.

We are considering a different approach to replication that, hopefully, will accelerate the accumulation of knowledge about which interventions might work for whom under what conditions. This approach revolves around the systematic replication of interventions that already have strong evidence of impact. We envision supporting sets of replications that will implement and evaluate interventions in carefully chosen venues that systematically vary in student demographics, geographic locations, implementation, or technology.

The complete list of 16 programs proposed by IES for further study is available here.

The original study on ST Math that IES is seeking to replicate is a 2014 quasi-experimental study by Schenke et al., which was found to meet ESSA Tier 2 for moderate evidence of effectiveness.

A Shift in Perspective on Research

Every edtech product has a study that says it’s effective. But what are the impact results of that study really saying? How meaningful is the difference between outcomes? How do we know that difference is not due to chance? And crucially, how do educators judge whether the results are applicable to them?

As outlined in MIND’s free ebook, “Demanding More from Edtech Evaluations,” the traditional approach to edtech evaluation has been broken for quite some time. As a result, administrators are easily confused or misled, given the scarcity of credible information about program effectiveness. Yet every school year, they still need to make decisions about what tools or programs to source, select, purchase, adopt and support in their schools and districts.

Regrettably, many resort to a box-checking exercise, where they look for a study that meets their prescribed ESSA tier of evidence, and make a choice based upon that criterion. This says little about whether the selected program will work for students like theirs, in a district like theirs, under circumstances like theirs.

At MIND, we believe there needs to be more non-academic conversation about research, and that formal research findings can and should be unpacked and translated to become useful to decision makers. We further believe a high volume of effectiveness studies is the future of a healthy market of product information in education.

That’s precisely why this IES commitment to further study of ST Math is such an exciting development. Multiple studies on the current product versions and current assessments, across a variety of settings, will give administrators much more information to work with - helping them make the best possible determination about whether ST Math will make an impact for their students.

Making Research Accessible

MIND Research Institute is a nonprofit social impact organization that has specialized in neuroscience and education research for over 20 years. From edtech evaluation to studies on how the brain learns, research is part of what we do every day. Research is literally our middle name, and one of our organizational goals is to make research, generally, more meaningful to educators and administrators. We strive to make all aspects of edtech research more accessible, and to better equip educators and administrators to make informed decisions about the varied methods and resources they choose to implement in schools and classrooms. Equipping administrators with the knowledge and tools to get more from edtech evaluations right now is the focus of our ebook, “Demanding More from Edtech Evaluations.”

You can learn more about our methodology and the impact of our visual instructional program, ST Math, at stmath.com/impact.

About the Author

Liz Neiman is Vice President of Engagement at MIND, leading the marketing team's plans and activities to promote MIND's initiatives and impact. Besides education and gaming, her interests include music of all kinds (from musical theater to heavy metal), cooking and baking, and fashion. Follow her on Twitter @lizneiman.