Oct 31, 2017

For a while now I've been interested in the difficulty of incorporating the full scope of the consequences of one's actions into one's reasoning about what to do.

I initially called this consequentialist scope, and later learned that academic ethicists refer to this as cluelessness (as in, we don't have a clue about what the consequences of our actions will be).

It is also not at all obvious, however, how deep or important the phenomenon of cluelessness really is. In the context of effective altruism, it strikes many as compelling and as deeply problematic. However, mundane, everyday cases that have a similar structure in all respects I have considered are also ubiquitous, and few regard any resulting sense of cluelessness as deeply problematic in the latter cases. It may therefore be that the diagnosis of would-be effective altruists’ sense of cluelessness, in terms of psychology and/or the theory of rationality, lies quite elsewhere.

Which is to say, "everyone else isn't worried about this, so why worry about it?" The problems with this epistemic standard should be obvious.

Some reasons why cluelessness is worth thinking about:

Greater cluelessness about the consequences of our actions probably implies more epistemic humility, more Hayekian stances towards economic & social policy, and more emphasis on attacking problems at a high level of abstraction (as opposed to very specific, object-level interventions)

If cluelessness turns out to be tractable (i.e., we can grow less clueless by way of focused effort), epistemic interventions (e.g., prediction markets) are likely very important

If we are thoroughly clueless, consequentialism probably doesn't have much to say about how we should direct our behavior

I don't have a synthesized take about why cluelessness matters or what to do about it, but the topic seems crucially important and under-investigated. I intend to think more about this, and would be very grateful for any pointers to worthwhile work on the topic (my contact details are here).