Log-normal lamentations

Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.

There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew to the population: most (including the median) fall below the average, which is pulled up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population, against which we compare favourably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best.[ref]As further bad news, there may be a progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around the median in a positively skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around the median in the population of Fields medalists. And so on (especially – see infra – if the underlying distribution is something scale-free).[/ref]
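The skew introduced by thresholding can be sketched numerically (a toy simulation with illustrative IQ-style parameters, not real data): select only the right tail of a normal distribution, and the resulting group is positively skewed, with most of its members sitting below their own group’s average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full population: normally distributed scores, mean 100, sd 15.
population = rng.normal(100, 15, 1_000_000)

# Threshold-like selection: keep only those above 130 (roughly the top 2%).
elite = population[population > 130]

mean, median = elite.mean(), np.median(elite)
below_average = (elite < mean).mean()

print(f"group mean {mean:.1f}, group median {median:.1f}")
print(f"fraction of the group below its own average: {below_average:.2f}")
```

The median lands below the mean, and well over half the selected group find themselves ‘below average’ for their group, despite everyone in it being exceptional relative to the original population.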

Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better than me at some aspect of medicine (more knowledgeable, kinder, more skilled in communication, etc.), it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around one standard deviation above the mean, people many times further from the average than you will still be extraordinarily rare, even if you had a good yardstick by which to compare them to yourself.

Look at our thick-tailed works, ye average, and despair![ref]I wonder how much this post is a monument to the grasping vaingloriousness of my character…[/ref]

One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities.[ref]Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA.[/ref] This sets the bar very high.

Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper-bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
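A quick sketch of what a heavy-tailed field feels like from the inside (the log-normal parameters here are made up for illustration, not fitted to any real income or citation data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'impact' outcomes drawn from a log-normal distribution
# (sigma chosen for illustration; larger sigma means heavier tail).
outcomes = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

top_1pc = np.sort(outcomes)[-1_000:]      # the top 1% of the sample
share = top_1pc.sum() / outcomes.sum()

print(f"median outcome: {np.median(outcomes):.2f}")
print(f"mean outcome:   {outcomes.mean():.2f}")
print(f"share of the total from the top 1%: {share:.0%}")
```

The mean ends up a multiple of the median, and the top percentile captures a grossly disproportionate share of the total: a few superstars, orders of magnitude smaller returns for the typical member. A Pareto distribution would make the same point even more starkly.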

Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. All told, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can leave university with starting salaries higher than my peak expected salary, and so can give away more than ten times what I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error to their work.

A shattered visage

Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields’: suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super-scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.

So there are a few ways an Effective Altruist mindset can depress our egos:

It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.

‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact when compared to one of the (relatively common) superstars.

(Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).

Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.

What remains besides

I haven’t found a ready ‘solution’ to these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it than to defend the implausibly defensible.

In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify, as (combinatorially) it will be less frequent to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has asked elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising.[ref]Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.). First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object-level disagreement than on trying to evaluate the strength and relevance of particular biases.[/ref]

Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but this news should be broken with care.[ref]Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try to separate one’s sense of offence from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self-esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?), and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them than to overcome them.[/ref]

‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable, all else equal, that rivals choose charitable donations rather than Veblen goods as the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good.[ref]As Max Ehrmann put it:

… If you compare yourself with others, you may become vain or bitter; for always there will be greater and lesser persons than yourself.[/ref]