What consumers should know about choosing a mindfulness program

“What truly marks an open-minded person is the willingness to follow where evidence leads. The open-minded person is willing to defer to impartial investigations rather than to his own predilections… Scientific method is attunement to the world, not to ourselves” (Jonathan Adler, 1998, p. 44).

With so much contemporary emphasis on accountability and research in schools and other settings, how can consumers of mindfulness programs make the best choices? This new section provides some basic information about mindfulness program research to help parents, decision-makers, teachers and other consumers understand the processes and rationale underlying scientific approaches. Some basic rules of thumb are offered to provide consumers with ways to assess research quality and help them make well-informed decisions when selecting programs.

Why is research important to mindfulness program developers, decision-makers, and teachers?

The contemporary emphasis on standards and accountability in educational, clinical, and other settings has had a substantial effect on how and what we do in schools. This accountability movement is supported by federal and state legislation. Simply put, parents, teachers, consumers, funders, and policy makers are interested in the question “What works?” so they can get the best possible outcomes from their efforts and their financial resources. Scientific research offers a way to provide evidence about the effectiveness of mindfulness and other kinds of programs that is extremely valuable for these purposes.

What is scientific research and how does it work?

Science has a formal way to investigate problems and draw conclusions about outcomes using statistical and other methods. Questions such as “Does this program work?” are subjected to an objective process so that conclusions can be free from personal bias. Sometimes we expect that a practice or intervention works based on our own personal preferences or experience. However, this is not sufficient to justify program adoption from a scientific standpoint. A number of rigorous safeguards built into the research process make its conclusions more trustworthy than those reached by less formal means. When consumers want to commit time and resources to a mindfulness program for schools or other settings, it’s important to consider the best evidence available.

The scientific research method assumes that researchers will share their methods and ideas with others so that their work can be reviewed and improved. In essence, it is a collective process in which many people are involved in furthering knowledge, reducing personal bias, and building an evidence base. The ultimate goal is that all findings become part of a public knowledge base for the good of the whole.

Physicians and other professional groups base their decisions on scientific data derived from research in order to ‘do no harm.’ We in the mindfulness community also want to avoid the risk of doing harm. Therefore, the research process provides practitioners and others a way to evaluate and choose programs that are beneficial.

How can I decide if a mindfulness program has evidence to support it?

Consider the level of evidence

A few conditions need to be met for a basic level of evidence. First, the mindfulness program must be researched using established scientific methods. Then, a report of the study (called a paper or an article) needs to be submitted to a journal and reviewed by peers (other scientists familiar with mindfulness). This process is called ‘peer review.’ Not all journals publish original research or utilize peer review, so it’s important to differentiate a research journal from an ‘opinion’ journal. Once the paper has been reviewed by peers and revised, if needed, it is ready for publication. Sometimes, mindfulness program reviews are published online but have not been given this kind of scrutiny. For example, mindfulness program providers may hire private experts or consultants to evaluate a program. These reports are not subject to the kind of blind review (i.e., reviewers are unaware of author identity) that serves to reduce bias in the scientific literature. In addition, program evaluation results may include descriptive information but not careful statistical analyses. Furthermore, some evaluation reports do not include any type of comparison group. This level of evidence, although it can be very helpful, does not meet the standard for empirical support.

Check for control or comparison groups

It’s important for a study of any program to have a control group so that readers can see how much the program affected the participants when compared to a matched group of non-participating students. Without any control or comparison group, readers can’t tell if the effects were the result of the mindfulness program or something else (e.g. something about the group itself or their experience). Imagine that a school adopted a highly innovative math curriculum and examined the results of the curriculum for students in the school’s gifted program. It would be inaccurate to assume that any observed benefits would be similar for students in other classes. Unless similar control groups are a part of the study, consumers really can’t know if something about the group itself caused the effects rather than the mindfulness intervention.

Check for replication with other groups

It’s not enough for a program to have one published study. Replication (repeating studies) is needed to build a sufficient level of evidence. Mindfulness studies are usually done with a particular group of students who participate in a particular mindfulness program, so the findings of one study can only be applied to groups that are similar in composition. Suppose benefits were found for a mindfulness program provided to White 5th grade males in an after-school setting. These benefits cannot be assumed to apply to a diverse group of high school seniors in a public school. The more studies a program has in different settings, the stronger the conclusions about the program’s effectiveness.

Make sure the research is carried out and reported ethically

Scientific researchers are required to follow ethical guidelines in the research and reporting process. One of the requirements is for researchers to clearly describe the intervention process. In mindfulness research, where definitions can be murky, providers may not always give clear and specific detail about a program’s content and practices. This limits the extent to which others can replicate their research. Researchers should not overstate the outcomes of a study or draw conclusions that are not derived from the analyses. It’s questionable to oversell program effectiveness when the results do not directly support such claims and the evidence base is limited. This is an important caveat for potential consumers.

I’m trying to choose a program to use. What are some indicators that the evidence base may be lacking?

Has the whole program been studied using scientific methods?

In some cases mindfulness programs are described as ‘empirically supported.’ Unless the whole program sequence has been researched, it’s not accurate or ethical to describe a program in this way. In other words, it’s not appropriate to say a program is empirically supported just because it includes some practices from other programs that have been studied (e.g., body scan practice). Sometimes a program may be loosely described as ‘research-based.’ Without more detail, it’s unclear what this might mean, so it’s a good idea for consumers to investigate the claim. Often the term is used vaguely and doesn’t reflect the actual level of evidence.

Are testimonials the primary form of evidence?

Testimonials about mindfulness programs from satisfied teachers and participants certainly may be helpful, but they cannot be considered research evidence. Personal anecdotes, recommendations, quotes, and the like don’t meet the standards of research, even though prospective users might find them interesting and even compelling. Remember that testimonials rarely include participants who did not have a positive experience. There is no substitute for peer review if you are serious about scientific evidence.

Are the advertised effects supported by actual research?

Mindfulness is currently very popular, and claims of its effectiveness for a variety of issues seem to appear everywhere. Despite the hype, ethical program developers are not absolved from the task of evaluating their own programs in a scientifically respectable way. Sometimes, the benefits of adult mindfulness programs (like MBSR) have been attributed to programs for youth without any real evidence that accounts for developmental differences. Conclusions from adult programs can’t be generalized to children and adolescents without further proof. Broad-brush claims like ‘improves emotion, attention, grades, retention,’ and so forth should not be made unless specific evidence from well-designed studies confirms these outcomes.

Where can I go to get some help?

Organizations like CASEL (www.casel.org) and Blueprints for Violence Prevention (http://www.colorado.edu/cspv/blueprints/) do a very good job of evaluating the research foundation of prevention programs. Consumers can also review the mindfulness research base posted on the American Mindfulness Research Association Network website (https://goamra.org/publications/mindfulness-research-monthly/).

Some take-away messages

Contemporary education and practice guidelines advocate for empirically based programs and treatments.

Although many programs and treatments are advertised as ‘empirically based’ or ‘research-based,’ these terms may be used loosely and inaccurately.

Always look for supporting documentation that shows programs have been studied in a scientifically ethical way. The best way to see this evidence is through peer-reviewed publications. Peer review helps ensure that others, not just program developers, have evaluated the evidence objectively and without compensation.

In general, studies need to be done using a program manual so that readers can understand what’s being taught. In any case, consumers (and particularly parents of minors) should clearly understand what the program entails and what’s being taught. Parents have the right to view curriculum materials in public schools. It is important that mindfulness programs delivered in public school settings be secular in nature. Program manuals make program content transparent to all.

When selecting a program, remember to check that the fit is right for your students’ age, setting, and needs.

The careful application of the scientific process to studying mindfulness programs provides a means of quality control that ultimately helps all professionals. It helps ensure that claims are legitimate, have passed the test of scientific research, and will move the field of mindfulness research forward.