In my previous post, I wrote about a peculiar asymmetry that I observed between the harder social sciences and the humanistic social sciences at a conference on methodology in Southeast Asia. The political scientists and economists, on the whole, were quite willing to engage critically with the anthropologists and cultural/ethnic/global studies folks. The same was not true in reverse: despite my best efforts to start a debate with the traditional area studies crowd, the conversation just seemed to stall.

Asking Questions, Being Critical, Feeling the Flow

I entertained five possibilities for why this might have been the case: representatives from the harder social sciences are (1) numerate, (2) obviously wrong, (3) American, (4) male, or (5) jerks. I’m not dismissing any of these out of hand, but my suspicion is that the answer lies in something else, more fundamental to the disciplines. (Hearing from both political scientists and non-political scientists that my experience talking about Southeast Asian studies in Freiburg is unique to neither SEA nor Freiburg only reinforces this suspicion.)

I think the difference has to do with disciplinary norms about fallibility in social research.

By fallibility, I just mean the fact that findings can be wrong and that it’s often hard to tell. There’s nothing unique to social research in this regard, of course. But some scholars of society seem to be uniquely distressed by the possibility that we could be wrong. It’s not just that we might be bad researchers, but that we’re somehow bad people too. (It’s this sort of idea that generates the triumphalist pronouncement that someone’s research is “reductionist” or “hegemonic” or “Orientalist”—labels that take on the tone of personal insults.)

My claim is that there are two different ways in which social scientists address the fallibility of their own research.

The first is the standard normal science way: trying one’s best to make all research procedures clear, objective, and replicable. (Many historians, incidentally, fall into this camp.) It involves an understanding that the audience’s job is to ensure that the presenter is not pulling a fast one. Your audience should be able to ask you questions about your research, and you should be able to answer them. The message here is “look, if you did this research yourself, you’d come up with the same results too.”

The second is the approach embraced in most of cultural anthropology and many of the thematic studies corners of Southeast Asian studies. In essence, it is to redefine the enterprise of social research in such a way as to make fallibility irrelevant. Rather than representing the social world as it is, or presenting a framework for organizing observations about the world, the task is to reflect upon the meanings (overt or hidden) that make the task of social research difficult. The message here is “look, you can’t do this research. And honestly, I’m not sure that I can either. So let’s forget about truth or accuracy. There’s no point trying to nail down the objective facts here. Here’s what I felt.”

This matters, in turn, because the norm of how you think about the fallibility of your own research shapes how you engage with the potential fallibility of other people’s research. When a political scientist presents a scatterplot, s/he is already thinking about how to face questions about its meaning. In turn, when s/he sees someone else’s research, the thought is “how do I know that this is right?” Same thing when an economist hears a cultural studies presentation which claims to represent, say, what popular culture is like in Vietnam. The instinct is to immediately think something like, “well, how do I know that this is correct? How did we get a representative sample? What does popular culture mean? How does it vary?” In short, for the harder social sciences, we think that our job as an audience is to knock the presentation around to see if it holds up.

Not so, I think, with the humanistic social sciences. If you start from the position that you yourself are incapable of faithfully recording or interpreting the social world as it “really is,” then you have no reason to believe that anyone else can either. When witnessing an economist present corruption data, I can only imagine what a critical theorist thinks. There is really no ground for interaction, because the entire enterprise of believing that one says true things about the world is a mistake. It’s at best a peculiar thing that economists do.

One comment made by a political scientist at the end of the conference really nails this. (Sorry, no names here, but it wasn’t me.) He said, “what I’d really like is a guidebook that tells me what counts as good anthropological research in Southeast Asia. I’m not an anthropologist, so how do I know it when I see it? What standards do you use?” The fact that no one would answer him—and that no one could even point him to a place where anthropologists have been debating the answer—speaks volumes.

I don’t think that it was always this way. I wasn’t around in the 1960s and 1970s, but I think that back then the debates across the disciplines in Southeast Asian studies really were more common, and likely more productive, with contributions from both sides. We’ve lost something. If we take interdisciplinarity seriously, which is what we’re supposed to do, we ought to restart the conversation.

So here’s a plea—a cry into the wilderness perhaps, but I make it nonetheless. Can we find a forum for real dialogue across disciplines on methodology in Southeast Asian studies? An AAS panel, a special issue of JSEAS, something like that? The wonderful Freiburg conference last week crystallized for me just how much further we need to go. This will require some representatives of the humanistic social sciences (you know who you are) who are willing to engage directly with the harder social sciences on their own terms. I’ll volunteer to be the representative from mainstream political science. Who’s with me?