This is a resource
file which supports the regular public program "areol" (action
research and evaluation on line) offered twice a year beginning
in mid-February and mid-July. For details email Bob Dick.

...
in which an action-research-based evaluation study is
described. The emphasis is on the use of triangulation to
achieve rigour.

Contents

  Background
  The interviews
  Group interviews
  Other data sources
  Comparing participants and other stakeholders
  Meeting with providers

The purpose of action
research is to achieve both action (that is, change) and research
(that is, understanding). In my view, effective action
research tries to work towards effective action through good
processes and appropriate participation. It tries also to
collect adequate data, and interpret it well.

At its best, action
research is done so that the action and the research enhance each
other.

This file presents the
first of two very different case studies. Both, in different
ways, illustrate some aspects of the simultaneous pursuit of
change and understanding.

Below you will find an
overview of a particular evaluation study which used an
action-research approach. The evaluation was of an action
learning program. The people conducting the program
approached us to evaluate it.

The case study
illustrates some of the features of action research applied to
evaluation. (I address them further in later sessions of
areol.) You will read:

o a brief background to
the study,

o an account of the
interviewing method used, and

o an overview of the
comparison of the interviewer findings to other data.

There were constraints:
limitations of time, money, and availability of people. The
study was therefore not done in a very participative manner.
The
participants took part mostly as informants.

The program providers
were more directly involved in the study. Even here, though,
their participation was not substantial. It was our
perception that they were committed to the program, and keen to
improve it. We judged, therefore, that the disadvantages of
low participation might not be as important in this instance as
they might often be.

(A second case study
describes a more participative approach.)

For brevity, in this
case study I'll emphasise the methods we used to check our data
and interpretations.

We gave attention to
building relationships and clarifying roles with participants and
clients. I won't detail that here. For similar reasons
I will describe only part of the evaluation in detail.

As you read the case
study, you will note

an open-ended start
to each piece of data collection -- each interview or
focus group

the use of multiple
and diverse data sources

the use of a number
of different data collection methods

a step-by-step
process in which the later steps could be designed to take
account of what we learned from the early steps

a continuing focus
on challenging the data and interpretations already collected;
in particular, when there were apparent agreements between
informants we deliberately sought exceptions; when there were
disagreements we sought explanations.

Background

A colleague and I were
approached by a staff member in the training and development unit
of an organisation. We were asked if we would evaluate a
project-based training program which the unit had set up for
organisational members. We agreed.

To simplify the
consultancy we decided to divide the tasks between us. Each of us
would carry out different parts of the evaluation. My
colleague would collect data from the non-participant
stakeholders. I would work with participants.

We could immediately see
an advantage of this division of labour: if we did our early work
independently, each of us would later be able to check our part of
the evaluation against the conclusions drawn by the
other.

I would be using an
interview method which works best with two interviewers or
more. I therefore negotiated for another colleague, Karyn
Healy, to join me. This was readily approved.

The technique we used
was convergent interviewing, described in more detail in the
archived file iview.

We explained to the
training unit that we preferred to use a "maximum diversity"
sample of participants. This provides a greater range of
data than a random sample. We asked if such a sample could
be drawn up for us. This was done: we were given a list of
names and telephone extensions of a sample of
participants.

We then asked for a
second sample, to take part in group interviews. This was
also done. This sample, too, was as diverse as
possible. There was no overlap between the two
samples.
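
(An aside for readers who think in code. The case study
doesn't say how the unit drew up the sample. One common way
to approximate a maximum diversity sample is to choose each
new person to be as unlike those already chosen as possible.
The Python sketch below is illustrative only; the function
names, attributes and data are all invented.)

  def mismatch(a, b):
      # Count the attributes on which two participants differ.
      return sum(x != y for x, y in zip(a, b))

  def max_diversity_sample(participants, k):
      # Greedily add the person whose minimum distance to those
      # already chosen is largest ("max-min" selection).
      chosen = [participants[0]]
      while len(chosen) < k:
          best = max(
              (p for p in participants if p not in chosen),
              key=lambda p: min(mismatch(p, c) for c in chosen),
          )
          chosen.append(best)
      return chosen

  # Invented attributes: (division, seniority, location)
  people = [
      ("finance", "junior", "city"),
      ("finance", "senior", "city"),
      ("operations", "junior", "regional"),
      ("hr", "senior", "city"),
      ("operations", "senior", "regional"),
  ]
  print(max_diversity_sample(people, 3))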

From time to time
throughout the evaluation we talked to people who were involved in
the delivery of the training. This included:

early negotiation
with the person who first approached us, clarifying the unit's
expectations and our roles

presenting results
in progress to members of the unit, for their comments and
feedback; this also gave the unit members a chance to provide
us with background information, and to add some colour to our
interpretation of the data we were collecting.

contact with unit
members just prior to writing the final report, to check that
we had covered the information they required

meeting with unit
members to give them a further chance to react to, and
challenge, the report.

We regarded this contact
as important. Our final recommendations were more likely to
be understood if we clarified our intentions along the way.
They were more likely to be acted on if we addressed the concerns
of unit members, and involved them in helping us interpret the
data.

The interviews

Karyn and I each
interviewed half of the interview sample. We phoned each of
them, explained who we were, and negotiated a time to meet
them. In a few instances we were unable to find a suitable
time, and instead arranged a telephone interview.

Each of us first carried
out one interview with a different informant. At the start
of each interview we explained our role in some detail. We
made clear what use would be made of the data they provided to
us. We also explained that we would take pains to protect
their identity in our reports to the training unit.

To begin the interview
proper, we said "Tell me about [the program]". We
then used attentive listening, and other verbal and non-verbal
signs of attention, to keep our informant talking for about 45
minutes. In this and similar studies, I have noticed that
most informants appear to talk freely in response to this
approach. I think that having someone listen carefully to
your every word, and show every sign of being very interested in
it, is an affirming experience for most people.

(In some similar studies
I have been given information that, if I had revealed the identity
of the person giving it, would have caused problems for them
within the organisation. With some exceptions, it is usually
not difficult to gather a lot of valuable information in a
relatively short time.)

Towards the end of the
interview we asked more specific questions. (The role of
these questions will become apparent soon.)

During the interview we
listened for important themes. At the end of each interview
we asked our informant to summarise their interview for us.
We mentally compared their summary to our recollection of the
themes, as a check.

Finally, we restated the
guarantees about anonymity, and thanked the informant
enthusiastically.

After each pair of
interviews, Karyn and I met to compare results. We made
particular note of any themes mentioned by both informants.
(In the later interviews we also noted themes mentioned by only
one of the two informants, but which had come up in earlier
interviews.)

Here is an important
feature of the technique... For each theme identified, we
developed probe questions to explore the theme further in later
interviews. Agreements and disagreements were differently
treated:

Sometimes both
informants mentioned the same topic and revealed the same
attitude. When this occurred we devised one or more
questions to seek out exceptions to that agreement.

Sometimes both
informants mentioned the same topic, but with different
attitudes. We then developed probe questions that sought
information to explain the disagreement.

In short, we actively
sought out exceptions to apparent agreements, and explanations for
apparent disagreements.
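
(Another aside in code. The rule we followed can be made
explicit in a few lines of Python. This is a sketch only --
no such program was used in the study -- and the themes,
attitudes and question wording are all invented.)

  def plan_probes(interview_a, interview_b):
      # interview_a and interview_b map a theme to an attitude,
      # e.g. "pro" or "con". For each theme both informants
      # raised, devise a probe question for later interviews.
      probes = []
      for theme in sorted(interview_a.keys() & interview_b.keys()):
          if interview_a[theme] == interview_b[theme]:
              # Agreement: seek out exceptions to it.
              probes.append("Some people feel differently about "
                            + theme + ". Can you think of exceptions?")
          else:
              # Disagreement: seek an explanation for it.
              probes.append("People seem to disagree about " + theme
                            + ". What might explain the difference?")
      return probes

  a = {"project support": "pro", "time demands": "con"}
  b = {"project support": "pro", "time demands": "pro"}
  for question in plan_probes(a, b):
      print(question)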

All interviews began in
the same open-ended way. We wanted to ensure that the
information we collected was contributed freely by the
informants. We didn't want it to be determined by the
questions we asked. As the series of interviews progressed,
the probes increased in number and detail.

In other words, we
allowed the data, and the interpretations placed upon it by our
informants, to lead us deeper into the study. You will
notice, too, that we refined questions and answers over the series
of interviews. Guided by the informants, we developed a more
in-depth perception of the program we were evaluating.

When the series of
individual interviews was nearing completion we conducted
group interviews.

Group interviews

Karyn and I held a
detailed planning session before the group interview. We
reviewed the results so far, and noted any uncertainties we
had. We then planned the group interview to begin again in
an open-ended fashion, saving the more specific questions for
later.

Our chosen process was a
particular form of focus group. Compared to many focus
groups it was modified to allow more systematic data collection
and more interpretation of the data by participants.

You will recall that the
people delivering the program assembled a second sample for
us. Before the focus group, each member of this sample was
also asked to fill in a brief questionnaire indicating which
sessions they found most satisfying, for whatever reason, and
which they found least satisfying.
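
(A further aside in code. The file doesn't give the
questionnaire's format, but summarising such responses is
simple. Everything in the sketch below, including the
session names, is invented.)

  from collections import Counter

  # Each invented response names the session found most
  # satisfying and the session found least satisfying.
  responses = [
      {"most": "project work", "least": "theory sessions"},
      {"most": "project work", "least": "report writing"},
      {"most": "team coaching", "least": "theory sessions"},
  ]

  most = Counter(r["most"] for r in responses)
  least = Counter(r["least"] for r in responses)
  print("Most satisfying: ", most.most_common())
  print("Least satisfying:", least.most_common())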

There is a description
of this form of structured focus group in another document.
Here I will provide only a very broad overview. The main
steps were:

We introduced
ourselves and described our role and the purpose of the group
interview. We gave guarantees about anonymity, and
explained what would be done with the information. We did
this in a way that established rapport with the informants.

We asked each person
to think about their reaction to the training program, jotting
down notes to themselves as they thought of an issue.

Then followed an
unstructured discussion in which everyone was encouraged to
speak. We introduced this discussion by saying that we
were keen to know what issues people agreed on; we were also
interested in noting where different people had different
experiences or different views. We explained that the
discussion wasn't intended to provide an opportunity to argue
for a point of view. We wanted them to note and explain
the disagreements as well as the agreements. During this
discussion we also asked people to note down the themes which
emerged from the discussion. We explained that we would
collect these (on an electronic whiteboard) after the
discussion.

The themes and
issues were written on the electronic whiteboard, and then
discussed further. Through this discussion the most
important of them were identified and marked.

Finally, we asked the
more specific questions which had not already been answered
earlier in the focus group, and recorded the answers.

Other data sources

From time to time during
these processes we also gained access to other data sources.
The most important of these were written accounts by participants
of the projects they had conducted, and what they had learned from
their experience. To complement this, one of us was also able
to attend a one-day workshop at which each of the participant
teams gave a spoken presentation on their project. We used
this information to refine the interpretations based on individual
and group interviews.

After the interviews and
focus groups were completed we again reviewed the data we
had. We agreed on a set of themes which captured the main
advantages and disadvantages of the program we were
evaluating.

It was time to compare
our results from participants with those our other colleague
had collected from the non-participants involved in the
program.

Comparing participants and other stakeholders

In comparing our data
and interpretations to those provided by the other evaluator, we
followed a similar approach to that used in other
comparisons. That is, we particularly noted agreements and
disagreements.
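
(A final aside in code. The comparison amounts to simple
set operations on the two lists of themes. The theme names
below are invented.)

  # Invented theme lists from the two halves of the study.
  participant_themes = {"project support", "time demands",
                        "relevance to daily work"}
  stakeholder_themes = {"project support", "cost",
                        "relevance to daily work"}

  print("Agreed on:",
        sorted(participant_themes & stakeholder_themes))
  print("Participants only:",
        sorted(participant_themes - stakeholder_themes))
  print("Other stakeholders only:",
        sorted(stakeholder_themes - participant_themes))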

In practice, there was
high agreement on the main themes. Where there were
differences, these most often reflected the differing roles and
interests of those providing the information.

Following this, we
agreed with our colleague on the way in which we would report the
results. We decided that we would report the strengths of
the program directly, as strengths. The disadvantages we
would report in the form of recommendations for improvement. Our
intention was to make it as easy as possible for those delivering
the program to absorb and understand the data and interpretations
we were to report.

Meeting with providers

A meeting was then set
up. Those attending were two of us and the team
delivering the program. In addition to the program leader,
the delivery team included program designers, project
facilitators, and presenters. The meeting lasted several
hours, and took the form of an initial report followed by an
open discussion.

We began by presenting a
summary of results, and inviting comments. More detailed
evidence was presented when it was appropriate during the
discussion. From our (the evaluators') point of view, there
were three main purposes:

providing
information to those who could make use of it; we wanted this
to be interactive so that there were more opportunities for
clarification and inquiry than with written presentations

obtaining
information which challenged or refined the data and
interpretations obtained by the evaluation so far

gauging how to
present the information so that any threat to the program
team was minimised; this, we thought, would increase
understanding and subsequent action by the program team.