Friday, 21 December 2012

One project finishes | another one starts better

How can we share what we learn?

There’s no doubt in my mind that there is some
excellent audience research going on in the heritage industry, but I’m equally
convinced that we could and should share it better. The question is how?

Audience research at the Science Museum has focused for
over 15 years on improving our offer for visitors.
The main people the research has influenced have therefore been internal – the
exhibition/programme/web teams who make our offer and the Trustees and
Executive Board who drive our audience-focused strategy. We have always wanted
to publish the results of our evaluation more broadly and have sometimes done
so[i]
but I have to admit this has been opportunistic. Too often as one project
finishes we are taken up in the rush of the next before sharing what’s happened
outside the Museum itself. This is now
changing. We have a new Director keen to raise the profile of the work we’re
doing, and we have started going down several paths to do just that. Here are
some of the issues that I’ve come across in doing this and some suggestions for
making sharing easier.

Which evaluation to publish?

If most evaluation publications are about the
summative conclusions at the end of a project, we miss out on sharing front-end
and formative research. I can understand why this is – developmental research
is harder to publish and is often ‘quick and dirty’ with the practical aim of
getting an exhibit to work rather than exploring the intellectual theory behind
it. Yet if we fail to share this kind of research we underplay the crucial
research and development function of audience research, one which leads to
perhaps the greatest direct impact on visitors.

Front-end research can completely change the direction
of an exhibition. Here are some examples of how it has worked for us. 'Difficult and
dull' was the audience reaction to the idea of a gallery on brain
science and genetics. The reaction challenged us to be more creative and we
reframed the exhibition around the issue of identity. The result is the
enduringly popular 'Who Am I?' gallery. For a
redevelopment in 2010, we conducted formative research with disabled groups who
asked for a more multisensory approach to content. The result was a series of case
augmentations bringing objects into the open gallery [fig 1] and increasing
engagement opportunities for all our visitors. Increasing access is a key
interest of the Audience Research Group and this work was discussed at several
conferences.[ii]

When we designed our new Launch Pad gallery [the
Museum’s signature interactive children’s gallery which presents science
phenomena through hands-on exhibits], the existing Launch Pad provided an ideal
test-bed for new ideas. Almost immediately we discovered that the current space
was pitched at the wrong audience, falling into the chasm between younger
children [for whom the scientific ideas were too complex] and older children [for
whom the design was too childish]. Moreover, although current learning
literature stresses the importance of social learning, it turned out that a lot
of the existing exhibits gave a passive, demonstration type of experience
rather than offering opportunities for interaction [fig 2].

The new Launch Pad [fig 3] was redesigned for children
aged 8-14, and accompanying adults. The exhibition team absorbed the current
learning literature [including for example, George Hein[iii],
Howard Gardner[iv],
Kevin Crowley[v]],
looked at international best practice [especially at the Exploratorium in San
Francisco] and conducted endless testing to develop exhibits, that invite
open-ended, multi-user, exploratory kinds of behaviour. New Launch Pad welcomed
its millionth visitor within 10 months.

When we came to create displays about climate change, the research
included a small-scale test of visitors’ understanding of terminology, and the researcher
started to map out the mental models that visitors were bringing to the
subject. We found that visitors’ understanding was at an altogether
different level from what the exhibition team [led by an eminent climate scientist]
had supposed. The mental model showed that visitors had a high level of prior
knowledge about the impacts of climate change, for example rising sea levels
and melting ice caps. However, there were gaps in people’s knowledge and
understanding of some of the key terms and processes.

For example, visitors believed their own prior
knowledge of the causes of climate change to be sound. However, when probed it
was apparent that they had misconceptions, such as believing that a depleted
ozone layer has a direct causal link to global warming. This knowledge of
audience understanding helped the exhibition team to create content that
extended knowledge and challenged misconceptions[vi].

When it comes to developing new ideas, prototype evaluation is crucial and we do it
with interactive exhibits, interpretation, new technologies [such as object
engagement apps] and online content. Prototyping allows us to take risks in innovation and creativity – we
can test ideas in the relative privacy of development rather than on the public
floor and succeed [or fail] interestingly [and cheaply]. I can’t help feeling
that these experiences are among the most valuable evaluations to share. An
example is the research we conducted to develop instruction labels for
interactive exhibits in Launch Pad. We realised that visitors had a problem
understanding how to use some exhibits correctly. For example, they
were using the ‘turntable’ exhibit as a roundabout rather than experimenting
with their body position to explore conservation of angular momentum. We needed
an interpretation solution that removed this barrier to engaging with the
exhibit properly. The video label project used a combination of academic
literature review and original research to create a solution. Inspired by Gelman et
al.[vii] and Stevens and Hall[viii]
who wrote about the value of presenting hints and concepts through moving-image
media, we worked
with a resident PhD student and a group of placement students to develop and
test four different prototypes of a visual label. The final video labels show a
short clip of a member of staff using the exhibit correctly, footage of the
science phenomena from the real world, and a couple of lines of simple text,
and they are played on large screens so that people can absorb the instructions
while waiting for their turn. Video labels are a simple, original
answer to a recognised problem [fig 5].

But what about summative evaluations? Should we share these?

Summative evaluations [which explore how well an
exhibition - or other kind of project - meets its objectives against its target
audiences] might look more promising for publication: they are usually more
extensive, have complex triangulated methodologies and often produce very long and
detailed reports. But I’m uncomfortable publishing these in their raw form. Not
because I mind what they say – some of our best ideas come from things that
don’t work - but because I think they are hardest for other people to make use
of as they relate so specifically to the gallery. Tracking studies, cued and
uncued observations at individual exhibits, dwell times and satisfaction
ratings are very site specific. And for me the impact isn’t in the report or
gallery being evaluated, it’s in the cumulative learning that feeds back into
the institution. I always see summative work as front-end work for the next
project, and I think we genuinely achieve that at the Science Museum. The problem is
that it’s usually done through the knowledge accumulation, training and
advocacy of an internal audience research department and that’s hard to
translate into documentary form. This is where we have to do better.

What format is best for sharing, and with whom?

The format for sharing should meet the audience’s needs.
But which audiences, and what type of information is most useful to them?

To try and answer these questions, here are some of
the ways The Science Museum is planning to share our work across the museum
sector. I think this is an area for much further discussion, and I’ve commented
on each method to contribute to the debate:

Training | Often it’s not evaluation findings but expertise
that professional colleagues are after. Here, training is the better
format and we’ve had a good response to evaluation and prototyping
workshops at ecsite and Visitor Studies Association Conferences, but there
are no cross-sector systems for encouraging and developing practitioners’
skills in research and evaluation. Should that be a priority?

Peer networks | The web is a great tool for sharing with
peers, and we have developed the Sharing Expertise section of our website
to do just that http://www.sciencemuseum.org.uk/about_us/sharing_expertise.apx
This focuses on our practice rather than evaluation, including what we’ve
learned from 15 years of running children’s sleep-overs, tips on
developing science dialogue events, and lessons from Talk Science, a five-year project delivering
teachers’ Continuing Professional Development [CPD] around contemporary
science debate [fig 6].

Conferences | We often present the newest
work at museum or science centre conferences, in the UK and the States, and
keep in touch with international best practice that way. But I wonder whether
presentations which are so momentary and ephemeral are easily embedded in the
field? And work presented to specific sectors, such as science related
institutions, may not find its way to the broader cultural sector: a missed opportunity.

Self-publishing | This is another option - we are
planning a detailed audience research website - and I do like the idea of
posting on the growing number of knowledge portals, for example: the VSA
[Visitor Studies Association], iseevidenceWikki and ASTC [Association of
Science Technology Centres] Exhibitfiles [on which we published the Launch
Pad video labels work]. But informal feedback seems to suggest that take-up of
these is patchy. How are readers and contributors to choose which to use? As a
reader I’d also like a quality filter [such as peer review] so I can be sure
case-studies are sound and generalisable.

Academic papers | Academic papers are a powerful way of
sharing learning. Writing them forces us to contextualise on-the-ground evaluation
into a general research question, and reading them helps us embed original
research in reliable academic literature. And peer review must provide
some assurance of quality. But how many of us practitioners regularly read
academic literature? Recent Wellcome Trust funded research found
that out of 29 senior science communication professionals surveyed, none
had read any of the top 10 cited literature related to their field[ix]. Nevertheless, academic rigour and reputation is important to us and I
will always look for opportunities to publish in this way.

Academic collaborations | Building relationships
with university departments can also lead to fruitful outcomes. Academics
provide access to current thinking and the newest methodologies, as well
as routes into joint research projects [and funding streams] and potential
co-publications. An informal reading group organised by staff at King’s
College, London, provides a forum for us to discuss our work with
colleagues from museums such as the V&A, British Museum and Tate, and
it is always useful to have bracing discussions about our current
thinking.

Practitioner’s guides and manuals | These provide an
alternative dissemination format. One of the best practitioner’s guides
I’ve seen is the San Francisco Exploratorium’s
Active Prolonged Engagement (APE) manual, which uses rigorous research
to tease out what makes visitors stay longer and engage more deeply at
interactive science exhibits in a ‘how to... ’
format. It was a big influence on the success of our Launch Pad
gallery.

An example of where sharing was planned for and funded
may be useful here. In 2003 the Wellcome Trust funded ‘Naked Science’, an 18-month
pilot project to inform the programme of adult controversial science dialogue
events planned for our new Dana Centre. As usual the project included
evaluation – we tested experimental events from puppet shows to science
theatre, punk science comedy to genetics board games, as well as exploring
adults’ attitudes to controversial science and working out how we would define
and measure dialogue. Unusually though, the project also included funding for
sharing our findings externally. To do this we developed a ‘do-it-yourself’
guide for anyone wanting to set up their own science dialogue events, and
produced a detailed report with all of the research findings. You can find these
on the Dana Centre website here: http://www.danacentre.org.uk/aboutus. So, might
including sharing activity in funding bids be the most effective way forward?

Is one solution a secondment to disseminate findings?

In conclusion, my feeling is that we should be working
towards a more systematic way of gathering and sharing evaluation and best
practice across the cultural field, but while that develops we are best off
sharing in all kinds of different ways for different kinds of reader. However,
this does increase the required investment of time to reflect and write, so
that we translate our research into formats that are findable and usable by
the widest audience. This time is over and above that of doing the original
research, and institutions have to commit to it. The Science Museum has delivered
some of the investment I’ve suggested in this blog by giving me a secondment to
concentrate just on writing and publishing research findings. It’s now my job to identify the most important
and interesting findings from the past ten years of audience research and
translate them into the most effective sharing formats. My question to myself
is how much can I get out there in the next six months!

About Me

Established in 2006, London Museums Group is the representative group of all museums and the museum workforce in the Greater London area.
All individuals working in and for museums and related organisations in London, whether paid or unpaid, whether directly or as freelancers, can become members of the wider London Museums Group.
Please note that the opinions expressed in these blogs are personal to the authors and do not represent an official statement from LMG.