FINAL REPORT

What Is This Report?

This summative report of the three-year project discusses the main successes and challenges, highlights lessons learned, and offers recommendations to help science museum and informal science education professionals develop similar programs in the future or improve current ones. ASTC also took the opportunity presented by such a multi-faceted project to collect data useful to the science center and museum field. In particular, observations were made on how science museums in different regions around the world interpreted the same project requirements, the unique methods used to communicate about a complex topic, the needs of and barriers to developing a youth program, and how all these components were connected across an international network. This required a thorough evaluation plan covering multiple sites and countries. A careful review was conducted of the project's two-pronged evaluation method, which combined reporting from within the participating science museums with external evaluation by local evaluators. Collecting evaluation data from these two perspectives has provided valuable information about the development, resources, and management required to execute this kind of program from the museum side, and about how those efforts shaped interaction and feedback on the public-facing side through impartial evaluation. We recognize that for a complex program like the WBT there are better evaluation formats, which are discussed in the report.

International informal-science programs may not be a new concept in the science center arena, but as we move forward as a field, it may be worth considering more of these programs on different science topics and possibly on a larger scale. The WBT advances our understanding of how international programs relate to science centers and their local initiatives. May this report serve as a tool for science centers, museums, evaluators, and other science advocates in creating successful international programs that continue to connect people around the world through the wonder and awe of science.

Evaluation Methods

Summative Evaluation

ASTC appointed an international three-person evaluation team to conduct the summative evaluation and produce a final report. The evaluation team used a meta-evaluation approach, conducting a secondary analysis of two reporting requirements that were completed by each WBT Host: (1) Progress and final reports that were submitted by each Host, using a template provided by ASTC, and (2) Evaluation reports from each Host’s local evaluation team. The evaluation team collected additional data as needed, through interviews and conversations with ASTC, the Biogen Foundation, and Hosts.

Local Evaluation

Evaluation of the WBT was conducted by local evaluation teams, each proposed by a Host and hired by ASTC, to report on program progress and visitor reactions to WBT events. Hosts were provided with overall guidance about the evaluation, and each potential evaluator then submitted a statement of work as part of the selection process.

All Hosts evaluated the Ambassador Program (12 of 12), and most evaluated Lab-in-a-Box (LIAB) kits (10 of 12). A total of 10 Hosts collected data describing WBT attendees, and nine drew conclusions about whether and how the program helped Hosts work with new audiences. The new audience(s) of interest varied based on the WBT requirement.

Additional evaluation requirements included the following: a focus on the Ambassador Program, with a suggested pre-post interview design; usability and end-user evaluation of the LIAB kits; the ways that the WBT helped Hosts make new connections with local groups; the support provided by ASTC and Biogen; and any unanticipated results from implementing the project.

Hosts also evaluated several components beyond the requirements, with the total number of components evaluated ranging from three to ten.

Looking across the three years of the project, the scope and rigor of the evaluations increased over time. Earlier evaluations relied less on data from participants to evaluate WBT components and more on the independent judgment of the evaluation team. Later evaluations, by contrast, often collected data from multiple participant groups to understand the value of the WBT.