Two-generation approaches create opportunities for parents and their children by integrating services for both generations. Advancing this work requires building evidence, a complex challenge that calls for data on both parents and children across racial and ethnic groups, as well as information sharing across multiple agencies.

To support these efforts, the Annie E. Casey Foundation hosted an event called Two-Generation Talk Back. The gathering, which took place in Baltimore, drew more than 100 researchers, evaluators, funders, providers and other experts.

Throughout the event, participants discussed strategies for evaluating two-generation approaches and covered a wide range of relevant topics, including promising programs, data sharing across systems and building capacity to work with data. Many attendees noted that programs are developing innovative solutions to the unique challenges that two-generation approaches present.

Conversations also focused on the challenges of building evidence for two-generation work.

Charlyn Harper Browne is no stranger to this topic. She serves as a senior associate at the Center for the Study of Social Policy and has authored a report for the Foundation on evidence-building for two-generation approaches. Harper Browne identifies “building causal relationships, not just correlational relationships” as one major hurdle.

Few programs have undertaken evaluations of effectiveness, according to Kathleen Dwyer, a senior social research scientist with the U.S. Department of Health and Human Services’ Administration for Children and Families. Dwyer and colleague Carli Wulff are currently working on a scan of 52 two-generation approaches. The forthcoming scan, conducted by Mathematica Policy Research, examines various dimensions of two-generation work, including services provided, programming stability and funding mechanisms.

“We know there is anxiety and interest on the part of funders to get to those evaluations quickly, to get to what works,” Dwyer says. “But because so many are just getting off the ground, it’s important to assess their readiness for evaluation of effectiveness.”

Based on the findings from the scan, Dwyer outlines three ways to begin this assessment:

1. Examine how well the program is operating.
Ask: How faithful is it to the original design? What is the level of staff commitment? How many participants does it serve, and what services are they receiving?

2. Assess the strength of the program’s logic model.
Ask: Are the quality and intensity of services sufficient to support desired outcomes?

3. Consider the maximum level of rigor an evaluation of this program could achieve.
Ask: What is the size of the target population? The capacity to serve and gather data on that population? The interest in and support for evaluation among program staff, leadership and the community?

Several event panelists also noted a lack of models for assessing and validating program effectiveness for different racial and ethnic groups.

“Sometimes we are controlling for race when race is the issue,” explains Keisha Bentley-Edwards, an assistant professor of psychology at Duke University who studies racial socialization and cultural strengths. “If you actually want to make changes, you have to look at who is doing well and who is not doing well in each of these groups.”

Despite these challenges, the evidence base for two-generation approaches continues to grow.

“One of the things I hope people come out of this day feeling is, yes, there is a small research base and, yes, there is a lot to be learned,” said T’Pring Westbrook, a senior associate at the Foundation who hosted the meeting, “but the work to date has generated many lessons and the field is benefiting from this new knowledge.”