What are approaches?

Approaches (on this site) refer to an integrated package of options (methods or processes). For example, 'Randomized Controlled Trials' (RCTs) use a combination of the options random sampling, control group and standardised indicators and measures.

Appreciative enquiry

A strengths-based approach designed to support ongoing learning and adaptation by identifying and investigating outlier examples of good practice and ways of increasing their frequency.

Beneficiary Assessment

An approach that focuses on assessing the value of an intervention as perceived by the (intended) beneficiaries, thereby aiming to give voice to their priorities and concerns.

Case study

A research design that focuses on understanding a unit (person, site or project) in its context, which can use a combination of qualitative and quantitative data.

Causal Link Monitoring

An approach designed to support ongoing learning and adaptation, which identifies the processes required to achieve desired results, and then observes whether those processes take place, and how.

Collaborative Outcomes Reporting

An impact evaluation approach based on contribution analysis, with the addition of processes for expert review and community review of evidence and conclusions.

Contribution Analysis

An impact evaluation approach that iteratively maps available evidence against a theory of change, then identifies and addresses challenges to causal inference.

Critical System Heuristics

An approach used to surface, elaborate, and critically consider the options and implications of boundary judgments, that is, the ways in which people/groups decide what is relevant to what is being evaluated.

Democratic Evaluation

Various ways of doing evaluation in ways that support democratic decision making, accountability and/or capacity.

Developmental Evaluation

An approach designed to support ongoing learning and adaptation, through iterative, embedded evaluation.

Empowerment Evaluation

A stakeholder involvement approach designed to provide groups with the tools and knowledge they need to monitor and evaluate their own performance and accomplish their goals.

Horizontal Evaluation

An approach that combines self-assessment by local participants with external review by peers.

Innovation History

A way to jointly develop an agreed narrative of how an innovation was developed, including key contributors and processes, to inform future innovation efforts.

Institutional Histories

A particular type of case study used to create a narrative of how institutional arrangements have evolved over time and have created and contributed to more effective ways to achieve project or program goals.

Most Significant Change

An approach primarily intended to clarify differences in values among stakeholders by collecting and collectively analysing personal accounts of change.

Outcome Harvesting

An impact evaluation approach suitable for retrospectively identifying emergent impacts by collecting evidence of what has changed and, then, working backwards, determining whether and how an intervention has contributed to these changes.

Outcome Mapping

An impact evaluation approach which unpacks an initiative’s theory of change, provides a framework to collect data on immediate, basic changes that lead to longer, more transformative change, and allows for the plausible assessment of the initiative’s contribution to results via ‘boundary partners’.

Participatory Evaluation

A range of approaches that engage stakeholders (especially intended beneficiaries) in conducting the evaluation and/or making decisions about the evaluation​.

Participatory Rural Appraisal

A participatory approach which enables farmers to analyse their own situation and develop a common perspective on natural resource management and agriculture at village level.

Positive Deviance

A strengths-based approach to learning and improvement that involves intended evaluation users in identifying ‘outliers’ – those with exceptionally good outcomes - and understanding how they have achieved these.

Qualitative Impact Assessment Protocol (QUIP)

An impact evaluation approach without a control group that uses narrative causal statements elicited directly from intended project beneficiaries.

Randomised Controlled Trials (RCT)

An impact evaluation approach that compares results between a randomly assigned control group and experimental group or groups to produce an estimate of the mean net impact of an intervention.

Realist Evaluation

An approach especially to impact evaluation which examines what works for whom in what circumstances through what causal mechanisms, including changes in the reasoning and resources of participants.

Social Return on Investment (SROI)

A participatory approach to value-for-money evaluation that identifies a broad range of social outcomes, not only the direct outcomes for the intended beneficiaries of an intervention.

Success Case Method

The Success Case Method (SCM) involves identifying the most and least successful cases in a program and examining them in detail. This approach was developed by Robert Brinkerhoff to assess the impact of organisational interventions, such as training and coaching, though the use of SCM is not limited to this context. It is a useful approach to document stories of impact and to develop an understanding of the factors that enhance or impede impact.

Utilisation-Focused Evaluation

An approach to decision-making in evaluation that involves identifying the primary intended users and uses of an evaluation and then making all decisions in terms of the evaluation design and plan with reference to these.


Interviews

Interviewing is a fundamental method for both quantitative and qualitative social research and evaluation. Interviews are conversations between an investigator (the interviewer) and a respondent (also called an 'interviewee', 'informant' or 'source') in which questions are asked in order to obtain information. Interviews seek to collect data and narrative information in order to better understand the respondent's unique perspectives, opinions and world-views.

Types of interviews

There are many different types of interview approaches and techniques. Generally speaking, all interviews fall into one of three categories: structured, semi-structured, and unstructured (or 'depth') interviews.

Structured interviews are most typically used in quantitative investigations, including survey research. In structured interviews, the interviewer presents the interviewee with a standardised set of questions, often in questionnaire form. These questions usually have pre-set answers from which the interviewee selects, rather than 'open-ended' questions. Each individual interview features the same set of questions, asked in a fixed order. All questions included in the research design are asked in each interview session. Structured questions are the most common type used in survey interviewing.

Semi-structured interviews are built around a mixed framework of general themes and pre-established questions, which can be adapted in the context of individual sessions. The interviewer is thus free to leave certain questions out, change the order of questions, or ask certain standard questions in different ways depending on context. Semi-structured interviews also rely on a combination of both open and closed questions.

Unstructured interviews – also known as ‘informal’ or ‘conversational’ interviews – are wholly qualitative, and include only topic areas and themes rather than standard questions. Unstructured interviews take the form of natural conversation between two or more people, and allow the interviewer to pursue follow-up questions or new lines of discussion as they see fit. Closed questions are avoided, and the interviewee is often asked to identify the information they feel is most important for the discussion.

In practice, these three approaches are routinely combined. Qualitative exploratory interviewing, for instance, can prove a good complement to more structured interviewing using closed questions later in an evaluation.

A large number of more specific interviewing techniques fall under this broad taxonomy, including telephone interviews, computer-assisted interviewing, elite interviewing, life histories, household surveys, and Key Informant Interviews, which are interviews with people who have particularly informed perspectives on the project. (Group interviews, including focus groups, and survey research require sufficiently specialised methodological approaches to be considered separate from general interview methodology, although many of the fundamentals overlap.)
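
One practical way to keep a semi-structured guide organised is to hold themes, core questions and optional probes together. The sketch below is purely illustrative (all theme and question text is hypothetical, not drawn from any real study):

```python
# Illustrative sketch only: organising a semi-structured interview guide
# as data, so that each theme keeps its core questions and optional
# probes together. All theme and question wording here is hypothetical.

GUIDE = [
    {
        "theme": "Background",
        "core": ["How long have you been involved with the project?"],
        "probes": ["What first brought you to it?"],
    },
    {
        "theme": "Perceived changes",
        "core": ["What changes, if any, have you noticed since the project began?"],
        "probes": ["Who was most affected?", "Can you give an example?"],
    },
]

def flatten_guide(guide):
    """Return (theme, core questions) pairs in interview order."""
    return [(section["theme"], list(section["core"])) for section in guide]

for theme, questions in flatten_guide(GUIDE):
    print(theme)
    for q in questions:
        print("  -", q)
```

Because the probes are stored but not forced into the question order, the interviewer retains the freedom, noted above, to skip or reorder them in each session.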

Whatever approach the investigator selects, the interviewing process itself follows several general stages:

First, investigators design and plan the study, determining the general approach (structured, semi-structured or unstructured), the specific technique, the research questions to be asked, and any practical, conceptual and ethical factors to consider. Interviewers also undertake any specialised training or preparation required.

Second, the interviews themselves are conducted, and their results are subsequently transcribed. (Be warned that the transcription process can take as long as, if not longer than, the interviewing itself.)

Third, the results of the interviews are analysed and interpreted using the investigator's chosen analysis option. During this stage, verification of the data and findings collected from the interviews is also required.

Fourth, results are reported.

Conducting the interview itself is as much ‘art’ as ‘science’, and requires practice. That said, the following steps provide a useful guide:

Opening up the interview: Begin with informal introductions and small talk, then properly introduce the investigation, interview format and structure to the participants, giving them the opportunity to withdraw their consent before it starts.

Administer questions: Ask the questions of the interviewee, while recording their answers by hand, audio recorder, or video. Pay close attention both to the answers being recorded and, especially for semi-structured and unstructured interviews, to the emerging themes, perspectives, opinions and underlying logic being elicited in the respondent's answers – all of these should be noted and, if appropriate, probed further. Watch out, too, for inconsistencies and diversionary answers. Respondents should always be given the space during discussions to form their own answers. The investigator should also keep an eye on the time throughout.

Closing down the interview: After all questions have been asked, the investigator should ask the respondent how they felt about the interview, and whether they have anything further to add. It is particularly useful to summarise the key points of the discussion with the respondent while they are still there. End by thanking the respondent for their time. If possible, review your notes immediately afterwards and expand on any annotations made during the interview while it is still 'fresh' in your mind.

Keep in mind that specific interview techniques will require important variations on this approach – a telephone interview is conducted very differently from an in-person one-to-one interview, as is a focus group. See the relevant options entries for more on each.

Ethics: Finally, it is the investigator's duty to adhere to proper ethical interviewing standards.

First, interviews should only be conducted with informed consent: permission should be sought from interviewees in a transparent way, and granted, prior to the interview. This step should be conducted alongside risk assessments to identify potential risks to interviewees and address them in the consent process.

Second, rights to confidentiality and anonymity must be offered where appropriate. Keep in mind that many governments have strict laws about confidentiality, and about circumstances in which important or dangerous information cannot be kept confidential (for example, in instances of severe domestic or child abuse).

Third, fair return for assistance should be considered, whether financial or material compensation. Compensation is itself a tricky ethical issue, and policies may differ by organisation and context, but it should be addressed and discussed prior to the start of the interview process.

Finally, be aware that interviewing vulnerable groups – including children, disabled people, elderly people and victims of violence – brings with it a range of unique ethical considerations. Ethical guidelines for social research prepared by professional research associations to govern their professions may serve as a useful starting point; see, for example, the Economic and Social Research Council (ESRC) Framework for Research Ethics.

The following example of a semi-structured interview guide was prepared for the World Health Organization's (WHO) training package on substance use, sexual and reproductive health as part of the M&E component of a street children project.

(WHO 2002 p.31)

Advice

Advice for CHOOSING this option (tips and traps)

In general, interviews have the following strengths:

Depth and context of data – interviews elicit insights and perspectives into people's thought-processes and the rationales behind a particular issue, and allow respondents to articulate their 'story' in their own words.

Access – Interviews, particularly one-to-one interviews, can permit the investigator to access individuals unwilling or unable to participate in focus groups or surveys.

Interviews also have the following weaknesses:

'Interviewer effect' – in which the interviewer's presence and behaviour bias the interviewee's responses – is a consistent challenge to avoid, as are unintended transgressions of individuals' comfort zones and levels of privacy.

People’s narrative explanations do not always conform to the reality of a situation, which requires consideration of reliability and triangulation.

Keep in mind that a translator may be required to assist in both the interview and the transcription if the investigator does not speak the same language as the interviewee. This requires additional coordination between the translator and interviewer to ensure all questions are delivered as intended. Translators should also be involved in the post-interview review and analysis.

While most interviews outside group settings include only one interviewer, it can often be useful to include a second interviewer. Ideally, this second interviewer can also serve as the translator if needed, and can focus on note-taking while the first interviewer focuses on questions. A second interviewer can also provide feedback and perspective after the interview is complete, and interviews often benefit from two people with different perspectives asking different types of questions.

Advice for USING this option (tips and traps)

Interviewing is a unique and somewhat intuitive skill which requires practice. The following offers some, though by no means exhaustive, guidance on how to ask good questions.

On question design:

Good questions should be clear, specific, unambiguous, and directly related to the overarching research question. They should ideally concern one issue or topic at a time, and be worded in clear, jargon-free, non-technical language. They should ask only about subjects respondents can reasonably be expected to have information on.

Questions should be neutral in tone.

Investigators must always remain aware of the danger of 'leading questions'. Leading questions are those which encourage the respondent to answer in a particular way, thus biasing the resulting answer and potentially discrediting your research ('Wouldn't you agree that this project should be recognised as a success?'). To avoid asking leading questions – which can often be difficult to spot yourself – ask a colleague to review your questions for you.

Similarly, investigators should avoid questions which presume information (e.g. asking 'At what age did you finish primary school?' presumes the respondent attended primary school), questions which contain subtle or explicit value judgments (e.g. asking 'At what age were you forced to marry?' of someone who was not forced to marry), and, of course, any questions which may be considered rude, offensive or insensitive (determining which will always depend on context, culture, and the individual).

Place questions in a logical order – for instance, beginning with an interesting and nonthreatening question which all respondents may be expected to answer confidently. (Conversely, do not open your interviewing with your most challenging or potentially embarrassing questions – rather, sensitively introduce them later in the interview after rapport has been established.)

If at all possible, pilot-test your questions on a small group from your final sample to identify potential issues, and expect to redraft them multiple times before finalizing the list.
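
As a rough first pass before that colleague review, draft questions can be screened mechanically for common leading-question phrasings. The sketch below is a toy example; the phrase list is illustrative and no substitute for human review:

```python
# Toy sketch only: a rough first-pass screen for possibly leading
# question wording. The phrase list is illustrative, not exhaustive,
# and does not replace a colleague's review of the questions.

LEADING_PHRASES = [
    "wouldn't you agree",
    "don't you think",
    "isn't it true",
    "surely",
]

def flag_leading(questions):
    """Return the draft questions containing a phrase from LEADING_PHRASES."""
    flagged = []
    for q in questions:
        lowered = q.lower()
        if any(phrase in lowered for phrase in LEADING_PHRASES):
            flagged.append(q)
    return flagged

draft = [
    "Wouldn't you agree that the project was a success?",
    "What changes have you noticed since the project began?",
]
print(flag_leading(draft))  # only the first question contains a listed phrase
```

A screen like this catches only surface wording; questions that presume information or embed value judgments still need a human reader.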

On interviewing:

Time management and scheduling are critical. Arrive at every interview with an understanding of the timing involved in the entire interview and each individual question. Keep a discreet eye on the time to ensure you have enough remaining. If possible, leave extra time at the beginning and end (in case of late arrivals and protracted closing small talk). If possible, remind respondents a day or two before the scheduled interview.

Pay close attention to your own body language, and maintain appropriate levels of eye contact. Also notice non-verbal signs from your respondents, and note them. Bring all relevant materials with you – typically a notebook, writing tools, and any recording equipment (audio or video) needed. Ensure that your recording option (note-taking, taping) is workable in the context in which you will be doing the interview. Especially for long interviews, have some water, and perhaps even food or sweets, available for you and your respondent.

Wait to establish rapport, when the interviewee appears most comfortable, before asking sensitive questions.

It is important to allow interviewees the space to express themselves the way they feel most comfortable, in order to elicit the deepest and most honest responses. Be careful to give interviewees enough time to respond, and learn to be comfortable with pauses and silence.

Always thank the respondent at the end of the interview.

Immediately following the interview, take time to retire to a quiet place and expand or complete any notes you made during the session – your conversation will never be fresher in your mind.

Continue to practice and refine your interviewing techniques.

Resources

Guides

Using Structured Interviewing Techniques: This paper from the United States General Accounting Office (GAO) explains structured interview techniques for GAO evaluators and how they should be incorporated when appropriate.