An Evaluation 10 Years in the Making!

"A wise man proportions his belief to the evidence." - David Hume, Scottish historian and empiricist

November 10, 2014
by Nick Randell, Program Officer

On September 16, 2014, I completed the last phone survey of an evaluation of a five-year grant initiative at the Tower Foundation. If I haven't lost you already, I'd like to take this blog space to summarize what we learned. I'm keeping this overview at a high level. We had an article published about this evaluation, so if you crave details, email me ([email protected]) and I will send you a copy.

In 2004, the Tower Foundation issued an RFP for non-profits to implement programs with a strong evidence base. There was a pre-selected list of programs to choose from, mostly behavioral health interventions targeting young people and their families. That first year there were eight programs on the list. The RFP was issued in each of the next four years, and the list of eligible programs grew to 38 by the third year before being "right-sized" to 16 in the fifth and final year of the initiative.

Here's the list of programs that applicants actually chose (and that were funded):

Across Ages

Al's Pals: Kids Making Healthy Choices

Brief Strategic Family Therapy

Functional Family Therapy

Helping the Non-Compliant Child

Incredible Years, Classroom

Incredible Years, Parent Training

Multi-systemic Therapy

Second Step

Strengthening Families Program

Strengthening Families Program: For Parents and Youth 10-14

Trauma-Focused Cognitive Behavioral Therapy

In five years, we funded 25 program implementations for 22 agencies, generally as three-year grants. The value of the grants totaled $2.1 million, with an average award of $84,050. We encouraged applicants to develop robust implementation plans that recognized the realities of staff turnover, program certification demands, internal cultural resistance to change, and the like.

To assess the effectiveness of all this grantmaking, we developed an evaluation methodology, heavy on phone surveys, that would get at two key issues:

Was the program sustained? Specifically, was it still being offered after the grant period closed? One year after? Two?

Was the program being offered with fidelity? Without fidelity, you can't expect the results that these evidence-based programs deliver when the model is closely followed.

The evaluation included phone interviews at one- and two-year intervals after the grant period closed. For each intervention, we developed a metric to reflect fidelity to the model. A higher fidelity score reflected attention to things like certification requirements for clinicians, adherence to program scripts/manuals, the correct length and number of sessions, appropriate caseloads, and clinical supervision. We also asked questions to try to identify the factors that most contributed to the success or failure of an implementation. We called these "success drivers" and "failure drivers."

Some would question the wisdom of conducting this evaluation with foundation staff rather than engaging a third-party evaluator. We did solicit evaluation proposals for this work, and the evaluators themselves recommended that we do the evaluation internally, provided we kept the focus on sustained program delivery that maintained some minimum level of adherence to the protocols of the specific intervention. In fact, since staff had done a lot of work to understand just what these protocols entailed, we were in the best position to do this evaluation. To maintain some level of objectivity, I conducted all the phone interviews and metric scoring personally, and was not involved in grant reviews or monitoring activities for any of these grants.

So what did we learn? Two years post-grant, 16 of 25 programs were still running. Fidelity scores for these continuing programs placed eight in the "exemplary" range (score 9.0 or higher), five in the "good" range (7.0-8.9), and three in the "fair" range (5.0-6.9). Two programs improved their ratings across the two post-grant interviews, nine held their ground, and five slipped slightly. The lists below capture some of the factors that distinguished successful implementations from less successful ones.

Success Drivers:
Peer meetings, shared learning
Early attention to staff buy-in
Internal training capacity developed
Effective fidelity monitoring tools
Program value communicated to payer agency
Plan for turnover

Failure Drivers:
High staff turnover
Non-reimbursable costs too high
Certification requirements too high
State agency (for payments/referrals) not receptive
Family member participation difficult
The initiative as a whole had some pretty clear wins. In several cases, state agencies agreed to reimburse for new intervention models. One intervention in particular, Trauma-Focused Cognitive Behavioral Therapy, achieved significant penetration in Tower regions: 50 clinicians were trained to offer it in eastern Massachusetts, and 80 in western New York. Subsequent grantmaking has built on efforts to introduce research-tested social/emotional curricula in pre-K programs. To date, we have supported these implementations in over 150 classrooms.