Several papers on risk preferences, including discussion of whether risk preferences are stable and how to think about them if they are not. An interesting sidenote is a comment on how much measurement error there is when using incentivized lotteries: the correlations between risk premia measured for the same individual using different experimental choices can be quite low, while correlations tend to be higher for survey measures. The authors speculate that the measurement error may be worse in developing countries: “a large share of the papers that document contradictory effects of violent conflict or natural disasters use experimental data from developing countries, but these tools were typically developed in the context of high-income countries. They may be more likely to produce noisy results in samples that are less educated, partly illiterate, or less used to abstract thinking.”
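The attenuation point can be illustrated with a toy simulation (my own sketch, not taken from the papers discussed): if each elicitation equals the individual's true preference plus independent task-specific noise, the correlation between two elicitations of the same preference falls toward zero as the noise grows, roughly as 1/(1 + noise variance). All parameter values below are illustrative assumptions.

```python
# Toy illustration of how measurement error attenuates the correlation
# between two elicitations of the same underlying risk preference.
# Parameters are illustrative, not drawn from any of the papers above.
import random
import statistics


def corr(x, y):
    # Pearson correlation, computed directly from the definitions.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


random.seed(0)
n = 10_000
# Each individual's true (latent) risk preference.
true_pref = [random.gauss(0, 1) for _ in range(n)]


def elicit(noise_sd):
    # Observed measure = true preference + independent task noise.
    return [t + random.gauss(0, noise_sd) for t in true_pref]


for noise_sd in (0.5, 1.0, 2.0):
    r = corr(elicit(noise_sd), elicit(noise_sd))
    # Classical measurement-error theory: expected correlation
    # between two noisy measures is 1 / (1 + noise_sd**2).
    print(f"noise sd {noise_sd}: simulated r = {r:.2f}, "
          f"theory = {1 / (1 + noise_sd**2):.2f}")
```

With noise as large as the true variation in preferences (the middle case), two elicitations of the same person correlate only around 0.5, which is in the spirit of the low cross-task correlations the papers report.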

A series of papers on how much the U.S. gains from trade.

The Economist on the shortcomings of applied micro: “Being alert to the shortcomings of published research need not lead to nihilism. But it is wise to be sceptical about any single result.” Matthew Kahn responds with “Is the Economist correct about the state of applied micro?”, with a special emphasis on climate economics: “The Lucas Critique continues to be ignored. Economics has no physics constants. The economy is always evolving. The Austrians were right about this. In an economy featuring forward looking optimizing decision makers, what do "reduced form" coefficient estimates mean? Which structural estimates are "structural"? Research in climate economics is plagued by this. Past correlations between extreme heat and economic growth tell me little about the future relationship between these variables and I have argued that the past correlation plays a causal role in leading to a smaller relationship between these variables in the future. Why? If we have learned from the past experience that extreme heat hurts our economy, and if we expect climate change to cause greater heat in the future, we will invest to attenuate the future effect of heat on economic growth.”

With increased discussion of registration and pre-analysis plans, I was pointed to this Heckman and Singer P&P from last year on “abducting economics” (h/t Aureo de Paula): “One of the worst examples of frequentist dogma in action is the common practice in government-sponsored research requiring that investigators specify all of their models in advance of looking at the data. The successful abductor immerses himself in the data and the conceptual issues underlying its generation and its interpretation, and reports the results of this immersion to the reader... there is often an initial stage of many investigations, especially those into fundamentally new problems where informal, even vague, reasoning takes place as initial sets of hypotheses are formulated, and making rich descriptions of the phenomena being investigated play a crucial role in framing problems. These creative acts precede the use of formal logic of any stripe... The abductive approach encourages analysts to interact with all of the available data and theory to learn and to augment it. Testing and rejecting (or corroborating) any a priori hypothesis is only a stage of an investigation. Generating and testing new hypotheses in response to rejection of initial candidate hypotheses is the central feature of the process of providing defensible explanations for surprising phenomena.”