IRs & Participatory DP

Where do we stand on critical approaches to discursive psychology? Wetherell would say that a complete analysis of talk requires moving beyond a “technical” conversation analysis to consider larger surrounding discourses such as those of gender and race. To this, Schegloff might say that such categories will make themselves visible in talk if they are relevant (i.e., participants will orient to them as relevant). Wetherell would say that simply isn’t true, and that interpretive repertoires provide a way to crystallize the ways in which those things are made relevant even when they aren’t explicit.

I have quite a few questions/comments/concerns throughout this debate. Firstly, I reject the notion of a “complete” analysis, although Wetherell also says “scholarly” analysis, so we can just stop there. Assuming that relevance will make itself visible is problematic for a few reasons. One is that it doesn’t account for why/how/when people don’t participate in a conversation (which means they don’t present any talk that can then be analyzed). I appreciate that assuming these things can compromise the quality of a study, but I wouldn’t phrase this as making it “more subjective,” because for the moment I assume researchers cannot be “objective” about anything (and also that there are no “objective truths,” which puts me into DP’s relativist ontology).

Interpretive Repertoire

But more than anything, I still don’t think I get IRs. Wetherell writes:

“An interpretative repertoire is a culturally familiar and habitual line of argument comprised of recognizable themes, common places and tropes […] These interpretative repertoires comprise members’ methods for making sense in this context – they are the common sense which organizes accountability and serves as a back-cloth for the realization of locally managed positions in actual interaction (which are always also indexical constructions and invocations) and from which, as we have seen, accusations and justifications can be launched. The whole argument does not need to be spelt out in detail. Rather, one fragment or phrase (e.g. ‘on the pull’, ‘social guards were down’) evokes for listeners the relevant context of argumentation – premises, claims and counter-claims.”

The part that bothers me about IRs is that it is unclear to me how a researcher establishes the quality of an IR. I personally have never found IR-based arguments as compelling as their CA counterparts. Schegloff might call this the “objectivity” of CA, but to me it’s just a matter of things being grounded in the data. By contrast, with IRs it’s never clear to me what is and isn’t an IR. In most of what we’ve read (e.g., the scientists study by Gilbert & Mulkay) it seems that particular phrases, etc., are put together to create an IR. But how do we know that an IR is, well, a pre-existing “package” that people are intentionally drawing upon? How does the DA researcher prove that the constellation of phrases is “culturally familiar and habitual” and that it is “recognizable” (to whom?)? Is it that the phrases are visible across multiple participants throughout the talk? Is it therefore the quantity of their use that helps establish the legitimacy of an IR? But I confess I am having trouble articulating my skepticism, so I will end here.

Participatory DP

Let me therefore briefly digress into an ongoing discussion about “participatory” DP, since my final project for this course is situated in a larger constellation of such work. I propose that to do true participatory anything, you have to not be methodologically committed to “naturally-occurring” anything. In particular, to do participatory discursive psychology as an iterative research program, empirical findings cannot assume that the talk is unaffected by the researcher’s presence (i.e., such talk fails the dead social scientist test). For example, in the youth organization I work with (pseudonym: Virgo) two weeks ago, one of the youth (who is also a researcher) took out his phone to record and instructed, “If everyone is laughing please don’t say something until everyone stops laughing because it makes it really hard to transcribe what you say.” In Virgo this week, one youth said, “Should we be recording this?” and another said, “no we already have too many to transcribe and I can’t do anymore.”

To get around this, one could treat these interactions as “naturally-occurring” because they are in the context of “Participatory research talk,” which is itself a kind of talk. More compelling is to treat these interactions as “naturally-occurring” because when we say naturally-occurring we mean not premeditated, so no scripts, speeches, etc. This talk, even though affected by the researchers, is still “natural” in that it serves a communicative, spur-of-the-moment purpose.

Perhaps unlike some conversation analysts, I am still unclear on how important “naturally-occurring talk” is to DP. Early DP work is based heavily on interview data, although more recent work prizes an ethnomethodological commitment to talk that exists even in the absence of the researcher. That said, Wetherell’s very recent (2014) piece was still using interview data without taking any special care to analyze that data with regard to its interview structure.

To all this, Jessica said “First, I think the borders between “natural” and “not-natural/contrived” are murky. But, some DP and many CA folks write about this in a way that rhetorically positions ‘natural’ as more meaningful, objective, ‘less tainted.’ All things that imply the (I would argue false) notion that you can REALLY or TRULY or FULLY represent anything or record in FULL. […] Rebuttal #2 is the winner for me. I see value in considering (at least at times) talk that is not premeditated, that just is the talk that happens. Which in your example is exactly that — at least from my perspective.”

This all makes sense. What are the advantages of naturally-occurring talk? One is this notion of its “untaintedness,” which, well, just isn’t that important to me. But some of it is to argue for the generalizability of the research: the talk’s “naturalness” may support the notion that similar patterns are happening elsewhere, unprompted by the researcher. But if generalizability isn’t that important to you – which it’s not to me – this may not be a problem.

So from here it seems that the assumptions of DP and the values of participatory research can be framed as not in contention, which will be helpful moving forward.

One thought on “IRs & Participatory DP”

So, first, Silverman cautions against going too far with the distinction between ‘naturally occurring’ and researcher-generated data, suggesting that when we do, the argument falls apart. But I would suggest the benefits of naturally occurring data can lie outside of debates about objectivity (and, by the way, I would describe the data you describe in your work as naturally-occurring in their own right). One benefit is that a researcher doesn’t come to the data with a prescribed set of expectations about the dataset. For instance, in my own work with interview data (narratively oriented work), I have specific expectations that my semi-structured questioning pushes participants to unpack. Indeed, there is a problematic aspect to this, as well as a benefit (I get people to share things specific to my interest). But if I were to collect naturally occurring data around this interview, perhaps I’d come to make sense of this phenomenon, or even a different one, as people go about their business. I digress…all this to say, I think it is possible to generate a list of benefits of naturally occurring data beyond claims of objectivity. There are also limitations, with the same being true of researcher-generated data (both benefits/limitations).

Then, okay, yes – IRs are a bit messy/fuzzy. I think in many ways it is the move to the “macro” (but not quite) which is also messy/fuzzy. How do we know it is what we say it is? Because we just do. Hm…where does quality lie then?