Module 6 Resources

Topic 1: Defining Usable Innovations

The lack of adequately defined programs impedes the use of an evidence-based program (EBP) or evidence-informed innovation (EII) with good outcomes (e.g., Vernez and colleagues, 2006). Education researchers have developed standards for assessing the rigor with which innovations have been tested (e.g., the What Works Clearinghouse). However, educators are more interested in the innovations themselves than in standards for experimental rigor. To begin to address this issue, the following criteria have been developed for usable innovations; that is, innovations that are teachable, learnable, and doable, and that can be assessed in classrooms and schools to produce good outcomes for students (Fixsen, Blase, Metz, & Van Dyke, 2013). The usable innovation criteria used to determine what to support in districts are listed below.

The philosophy, values, and principles that underlie an education innovation provide guidance for all education decisions and are used to promote consistency, integrity, and sustainable effort across all districts and schools.

Clear inclusion and exclusion criteria define the population for which the education innovation is intended (e.g., middle school algebra students who have passed Advanced Math).

The criteria define who is most likely to benefit when the education innovation is used as intended.

Not every education innovation is a good fit with the values and philosophy of a district or school. In addition, many innovations were developed for particular populations of students, and applications of the innovation with different populations may not be equally effective. Thus, a good description of an education innovation and its foundations is required so that leaders and others can make informed choices about what to use.

The speed and effectiveness of implementation may depend upon knowing exactly what has to be in place to achieve the desired results for students, families, and communities; no more, and no less. Not knowing the essential innovation components leads to time and resources wasted on attempting to implement nonfunctional elements.

Practice profiles describe the core activities that allow an education innovation to be teachable, learnable, and doable in practice, and promote consistency across teachers and staff at the level of actual interactions with students.

Knowing the essential functions (criterion #2) is a good start. The next step is to express each essential component in terms that can be taught, learned, done in practice, and assessed in practice. The methods for developing operational descriptions (practice profiles) in education were established by Gene Hall and Shirley Hord as part of the Concerns-Based Adoption Model (practice profiles are called innovation configurations in CBAM).

The performance assessment relates to the education innovation philosophy, values, and principles; essential functions; and core activities specified in the practice profiles. The performance assessment needs to be a feasible method (e.g., a 10-minute classroom walkthrough observation rating) that can be done repeatedly in the context of typical education settings.

Evidence that the education innovation is effective when used as intended.

There are data to show the innovation is effective.

A performance (fidelity) assessment is available to indicate the presence and strength of the innovation in practice.

The performance assessment results are highly correlated (e.g., 0.50 or better) with intended outcomes for students, families, and society.

How well are teachers and staff saying and doing the things that are in keeping with the essential functions and the intentions behind the education innovation? If performance assessments do not exist, developing one becomes a task for a skilled Implementation Team. Note that the criterion for performance assessment specifies that a performance assessment should be highly predictive of intended outcomes: if educators use an innovation as intended, then students will benefit as intended.
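As a simple illustration of the 0.50 correlation threshold in criterion #4, the sketch below computes a Pearson correlation between fidelity scores and an outcome measure. The data are entirely hypothetical (invented classroom walkthrough ratings and outcome scores), and this is only one of many ways such an analysis could be run.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: fidelity ratings from 10-minute classroom walkthroughs
# (0-100) and a student outcome measure for the same eight classrooms.
fidelity = [62, 75, 81, 55, 90, 70, 48, 85]
outcomes = [58, 72, 78, 60, 88, 65, 50, 80]

r = pearson_r(fidelity, outcomes)
print(f"r = {r:.2f}")
print("meets 0.50 criterion" if r >= 0.50 else "below 0.50 criterion")
```

In practice an Implementation Team would use a proper statistics package and far more data; the point here is only that the fidelity-outcome link in criterion #4 is an empirical claim that can be checked, not asserted.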

Using effective innovations in context

Where evidence-based innovations can be, or need to be, used in education has been a vexing problem. This is especially true in state education agencies (SEAs) and local education agencies (LEAs), where races, cultures, languages, economic conditions, current system services and functioning, and every other aspect of human societies vary widely within and across communities and neighborhoods. From a public education point of view this is especially daunting: is a different form of an education innovation needed to accommodate the uniqueness of each education setting and system?

From an applied implementation perspective, the process of adjusting education innovations, organizations, and systems to fit and function together is expected and is part of good implementation practice. This is what Implementation Teams do. Consider a physician facing the infinite variation among individual human beings, each with his or her own unique DNA, physical characteristics, strengths, and weaknesses. Yet, for many pharmaceuticals, that variation is accounted for by a simple dosage calculation of so many milligrams per kilogram of body weight. In the same way, implementation tools and methods have been established to detect the contextual variations that matter and to accommodate those variations in the implementation process.

Effective innovations are critical to education success, but they are not enough. As noted in the formula for success, effective implementation supports and enabling system and organization contexts are also essential to improving outcomes for all students. Nevertheless, the process of improving education begins with selecting or creating effective innovations. SEAs, LEAs, and educators can select and support the implementation of innovations that meet the usable innovation criteria outlined above.

The effectiveness of WHAT we do in everyday practice is important: why waste resources on doing what does not work? The effectiveness of programs is noted in Usable Innovation criterion #4, with effectiveness tied to a measure of the presence and strength of the program in practice ("4.b. The performance (fidelity) assessment is highly correlated (e.g. 0.50 or better) with intended outcomes for children, families, individuals, and society.").

Educators are cautioned that assertions by innovation developers and researchers about the essential components of innovations are no substitute for data linking those components to outcomes. Without adequate descriptions of innovations, presumptive essential functions cannot be ruled in, and alternative explanations cannot be ruled out. Thus, educators must define "innovations" so that they meet the Usable Innovation criteria and can be taught, used in practice, and assessed for fidelity and outcomes.

WHAT are you trying to do to improve student outcomes? How well does WHAT you are trying to do meet the four criteria for a Usable Innovation?