The Workers of the World Unite Edition

1. Microfinance/Household Finance: I mentioned the Hrishipara Financial Diaries last week--it's a project Stuart Rutherford has been running in central Bangladesh for four years now. That's a truly unique data set: high-frequency data on the financial lives of households. I also mentioned that Stuart is now funding the continuation of the diaries out of his own pocket. Don't make me beg for someone to step in with more funding so this dataset gets even more valuable. It's incredibly cheap, by the way--hmm, maybe the first faiV GoFundMe? See, don't make me resort to such things!

Continuing the wave of revisiting ideas about microfinance and its impact, Bruce Wydick has "3 reasons the impact of microcredit might be bigger than we thought." Of course, the "we" in that sentence matters a lot. Mushfiq Mobarak and Vikas Dimble have a short review of microfinance research with handy links to the research we talk about most these days: evidence for ways that microfinance could innovate to increase its impact. Of course, I have to return to the binding constraint on microfinance innovation: funding appropriate for investment in innovation.

2. Replication: I know what you're thinking: "Hey, I haven't heard about the Worm Wars in a long time. What happened?" And so, let me bring you a new paper from Owen Ozier that reviews the history of the Worm Wars in an effort to understand the state of reproducibility in economics and related fields. Here is Owen's Twitter thread with some "wild things" he learned while working on the paper. And here are Annette Brown's replies (one, two, three) pointing out some longstanding errors in the literature on replication in economics--one lesson is that if you don't read the variable definitions, you're likely to draw the wrong conclusions and others won't be able to replicate your work.

Here is an interesting argument that theory constrains degrees of researcher freedom more than experiment does--that, in fact, one source of the replication crisis is a lack of theoretical frameworks around empirical research. Oh, and that empirical work needs more formal mathematical models. In case you haven't figured it out yet, this is coming from the perspective of the "behavioral sciences," which apparently do not include economics, where a lot of recent argument has been about the need for experiments to constrain degrees of freedom and about "mathiness" being a problem. And here's Dorothy Bishop on "reining in the four horsemen of irreproducibility."

Inherent variability is not one of those four horsemen, but it is a plausible source of irreproducibility that has nothing to do with bad practices or researcher misbehavior. If reactions to stimuli vary a lot based on minor contextual factors (which is in fact one of the findings of the behavioral sciences, albeit one that is itself subject to lots of questions about replication), then you should expect that the exact same experiment conducted at a different time and place with different subjects will yield different results.
Whether that's the case is the subject of this debate between Simmons/Simonsohn, McShane/Bockenholt/Hansen (not that one), and Judd and Kenney (also not that one), all hosted by Andrew Gelman. It's worth the time to read through.
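To make the inherent-variability point concrete, here's a minimal simulation (my own illustration, not from any of the linked papers, with made-up effect sizes): if the true treatment effect itself depends on context, two honestly run replications of the same design will report different estimates, with no p-hacking required.

```python
import random

def run_experiment(context_effect, n=200, seed=None):
    """Simulate one experiment where the true treatment effect depends
    on context. Returns the estimated effect: treatment mean minus
    control mean, each group of size n with standard-normal noise."""
    rng = random.Random(seed)
    treatment = [context_effect + rng.gauss(0, 1) for _ in range(n)]
    control = [rng.gauss(0, 1) for _ in range(n)]
    return sum(treatment) / n - sum(control) / n

# Identical design, different sites: minor contextual factors shift the
# true effect (0.5 at site A, 0.1 at site B), so estimates diverge even
# though both teams did everything right.
site_a = run_experiment(context_effect=0.5, seed=1)
site_b = run_experiment(context_effect=0.1, seed=2)
print(site_a, site_b)
```

The point of the sketch is only that "failure to replicate" here is baked into the data-generating process, not into anyone's research practices.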

3. Research and Communications: Taking that conversation as a jumping-off point, here's a new paper on demand effects in survey experiments. On the one hand, it may come as a relief to know that the paper doesn't find much evidence of experimenter demand effects. On the other hand, a lot of economics lab experiments are built on the idea that the experimenter can induce people to behave in certain ways with incentives--and that when those incentives don't work, it's evidence of some other important factor operating. But, "Even financial incentives to respond in line with researcher expectations fail to consistently induce demand effects." I feel like this paper could not have been published in an economics journal, because the theory constraints (I'm particularly proud of this callback).

In other backed-up research methods links, I've carried around an open tab for this very useful post from Berk Ozler on alternatives to recruiting a control group for more than a month. As usual, Berk lays out the issues and questions, and there's bonus follow-up via Susan Athey, linking to some other recent papers on related issues that I've also been carrying around in open tabs, so I'm feeling good about slaying all those giants at once.

The questions Berk is asking and the responses from Susan stirred something deep in my memory bank and led me back to this 2014(!) post from David McKenzie on whether the impact evaluation production function is best understood via O-Ring theory or Knowledge Hierarchy theory. And it seems to me, increasingly, that the answer is O-Ring.

Finally, there is another part of the production function: communicating the results of the work. That is a place where the dominant model seems to be Knowledge Hierarchy--leave the communications side to the comms experts. (OK, now I have to pause for a moment and figure out whether that explains the faiV or whether the faiV is contradictory evidence...)
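For readers who haven't seen David's post, a toy sketch of the contrast (my own stylized simplification, with invented numbers--not Kremer's or Garicano's full models): under the O-Ring view, output is the product of the quality of every task, so one weak link drags down the whole evaluation; under a simple averaging stand-in for the hierarchy view, a weak link is diluted.

```python
def o_ring_output(task_qualities):
    """O-Ring-style production: output is the product of task qualities
    (each in [0, 1]), so a single weak task dominates the result."""
    out = 1.0
    for q in task_qualities:
        out *= q
    return out

def averaged_output(task_qualities):
    """A deliberately crude stand-in for task specialization: output is
    the mean quality, so a weak task hurts only proportionally."""
    return sum(task_qualities) / len(task_qualities)

# Five tasks in an impact evaluation: design, sampling, fieldwork,
# analysis, communication. One weak link (0.5), the rest strong (0.95).
qualities = [0.95, 0.95, 0.5, 0.95, 0.95]

print(round(o_ring_output(qualities), 3))   # -> 0.407: the weak link dominates
print(round(averaged_output(qualities), 2))  # -> 0.86: the weak link is diluted
```

If the impact evaluation production function really is O-Ring, skimping on any one task--including communication--costs far more than its share.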
Here is David Evans making an argument that becoming a better communicator should be high on the list of priorities for economists. And here's a paper that finds that high-quality communication is contagious, so there are positive externalities if you follow David's advice.

I'm going to confidently predict, though, that this last link on communications will get the most clicks this week: Bullshitters: Who Are They and What Do We Know About Their Lives?