The fact that fundraising is effective on average today doesn’t mean that additional fundraising ‘on the margin’ would also be effective;

Fundraisers for effective charities won’t do better than other fundraisers in general;

High spending on fundraising is unappealing to donors because funders want confirmation that a charity has a useful project on which to spend the money it raises, not because spending more money on fundraising would be bad for the charity sector as a whole;

There’s plenty of money around, so what matters is finding useful things to do with it, not advocacy about how it should be spent;

Starting a fundraising group is harder than I imagine.

I find her evidence for 2 and 5 reasonable enough and agree 3 is a better explanation, though for a different reason than the one she gives [1]. I just want to take issue with 1 and then 4.

While the high average effectiveness of fundraising doesn’t translate into a high ‘marginal’ effectiveness of fundraising for the charity sector as a whole, it probably does for any individual charity. The model I have in mind here is a fixed pool of donations D and a total fundraising spend F, which is some fraction of D. Different charities choose to contribute to F, and the share of D that each charity receives is proportional to their share of F. In this scenario, any extra money that one charity raises through fundraising comes wholly at the expense of other charities. But on the margin, their fundraising ratio is still approximately D/F, which is the average cost-effectiveness of fundraising. This is too optimistic insofar as each organisation will gradually run out of donors who can easily be convinced to fund their specific cause. But when you are as small as GiveWell’s recommended charities – which receive only millions of dollars each year – I don’t think that will be a significant effect. It is also too pessimistic, insofar as total donations to the charity sector could be increased through extra fundraising.
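To make the marginal claim concrete, here is a minimal sketch of that model. The pool size D, everyone else’s fundraising spend, and the size of our charity are all made-up illustrative numbers:

```python
# Fixed-pool model: each charity's share of the donation pool D is
# proportional to its share of total fundraising spend F.
def revenue(own_spend, other_spend, pool):
    total_spend = own_spend + other_spend
    return pool * own_spend / total_spend

D = 100_000_000       # total donations available (illustrative)
F_others = 4_000_000  # everyone else's fundraising spend (illustrative)
f = 10_000            # our small charity's fundraising spend

# Return on one extra fundraising dollar, versus the sector-wide average D/F
marginal = revenue(f + 1, F_others, D) - revenue(f, F_others, D)
average = D / (F_others + f)
print(round(marginal, 1), round(average, 1))  # both ≈ 24.9
```

On these toy numbers the marginal fundraising dollar returns almost exactly the average ratio D/F, because a small charity’s own spending barely moves the denominator.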

Regarding point four, there are a range of approaches one can take when trying to help others: the least meta would be to start or fund a project that used effective methods to directly help people (let’s just call this charity); another would be to try to identify organisations who are already using effective methods to directly help people (let’s call this meta-charity); another would be to draw greater attention to the work of meta-charities who have identified such organisations (let’s call this meta-meta-charity). In support of her skepticism about meta-meta-charity, Alyssa points out that the effectiveness of more meta approaches can’t go up forever:

This logic immediately falls apart, once one takes it a step further. If fundraising offers a 3x multiplier, why not fundraise to fundraise, and get 9x? 27x? 81x? Where does the tower of meta end? The answer, of course, is that doing things everyone is already doing (like generic “fundraising”) never has a 3x return.

While I agree that ‘meta’ approaches can’t get better forever [2], this objection can’t be right either, because it seems to demonstrate that fundraising must always be useless. It can’t be the case that you should never put effort into raising money for a useful and unfunded project you are aware of, even if you are trying to get it off the ground yourself! Instead, I suggest that operating effective charities, finding effective charities, and fundraising for those effective charities are complementary inputs which each have declining marginal returns. Which one will most benefit from additional resources depends on which component is comparatively neglected.

If there are a lot of organisations trying to find effective charities, and donors lining up to give to them, but no final-level charities doing a good job, we most need altruistic entrepreneurs to start promising projects. If there are many organisations that can make cost-effective use of money, and donors keen to give to them, but donors don’t know who they are, then we most need more meta-charity. On the other hand, if there are effective charities, and we know who they are, but they still face a ‘funding gap’, there is a place for meta-meta-charity.

Alyssa is right that there is a lot of money available on Earth, or as an economist would more naturally put it, there are a lot of potentially productive inputs like capital, land and labour. Insofar as these resources are already being well allocated, and there are few, or no, effective altruist projects that still need funding, my proposal to improve how those resources are allocated through fundraising cannot work. That would be the case if the charities GiveWell and others identified were fully or nearly-fully funded, or there was nobody out there who could be convinced to fund them. But last I checked, for some reason, their recommendations still had ‘room for more funding’, which is why GiveWell recommends giving to them. And the organisation I work for has found it fairly easy to convince folks to do so.

I actually think Alyssa and I agree more than it looks. We both think that finding the most useful things to do with money is hard work. We both think there are a lot of resources out there that would like to find something better to do. Hopefully that means that, once you identify a great project, a small amount of money dedicated to fundraising should convince a lot of donors and fill its funding gap. And once that work is done, we will indeed be back hunting for the best charities to give to.

[1] Once a distaste for charities spending a lot of money on fundraising is entrenched among donors, I don’t see how it can be undermined by any particular charity or donor ‘defecting’ from the norm.

[2] A simple reason such a tower couldn’t work would be that you would quickly stop finding willing donors. There aren’t many people easily convinced to fund fundraising for fundraising.

Update: I should have included something like the anecdote below – now helpfully provided by Nick Beckstead – to explain why I am much less skeptical than Alyssa about the fundraising multiplier working on the margin. This implies a multiplier of at least 3 for a generic, simple and scalable fundraising operation:

“A data point: I have a friend who worked as a street fundraiser (aka a “chugger” or “charity mugger”). He told me he took home about 10% of what he raised, and about 20% of what he raised went to the company that did street fundraising. They would raise funds for a wide variety of different charities, about which they and the people they fundraised from knew very little. (Charities would contract with the street fundraising org, and pay about 30% of the amount the street fundraising org raised, on a commission basis, to that org.)

If what my friend has told me is true, it seems it would not be hard to make a charity that consisted simply of funding street fundraising for a charity like AMF, assuming the charity wanted to cooperate. You might have trouble getting your fundraising charity non-profit status, but that isn’t too relevant from the perspective of this argument. I think you’d have a hard time street fundraising for a charity that did street fundraising for a charity like AMF, so you could probably only get one layer of meta in this way. But it would require fairly limited marketing skills (just enough to get some people to donate to your meta-charity) and may even have very limited downside risk (my understanding is that the street fundraisers can easily raise money for many conventional causes, and the charity pays on a purely commission basis in any case).

Assuming cooperation from the object-level charity, it seems the main limiting factors for something like this would be: 1) dollars from people willing to donate to your pure fundraising charity, 2) your ability to raise those dollars, and 3) running out of the object-level charity’s room for more funding. There’s probably some unknown challenges I’m not thinking of, but I suspect (just barely) that they wouldn’t be decisive.”
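A quick sanity check of the multiplier implied by that anecdote, taking the roughly 30% commission figure at face value:

```python
commission_rate = 0.30  # charity pays ~30% of funds raised to the fundraising org
multiplier = 1 / commission_rate  # dollars raised per dollar of fundraising spend
print(round(multiplier, 2))  # 3.33
```

So each dollar the charity pays in commission corresponds to about $3.33 raised, which is where the “multiplier of at least 3” comes from.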

A result the media loves to report is that people who study economics are more likely to react like jerks when asked to respond to game theoretic predicaments like the prisoners’ dilemma. Are economists naturally mean? I can’t rule it out, but I always thought a more likely explanation was that they have just thought about these puzzles ahead of time, and simply respond with a memorised ‘correct’ answer, such as the Nash equilibrium. So I was glad to see this paper in Nature finding that anyone who has a while to think about how to react to these situations also becomes more selfish:

Cooperation is central to human social behaviour. However, choosing to cooperate requires individuals to incur a personal cost to benefit others. Here we explore the cognitive basis of cooperative decision-making in humans using a dual-process framework. We ask whether people are predisposed towards selfishness, behaving cooperatively only through active self-control; or whether they are intuitively cooperative, with reflection and prospective reasoning favouring ‘rational’ self-interest. To investigate this issue, we perform ten studies using economic games. We find that across a range of experimental designs, subjects who reach their decisions more quickly are more cooperative. Furthermore, forcing subjects to decide quickly increases contributions, whereas instructing them to reflect and forcing them to decide slowly decreases contributions. Finally, an induction that primes subjects to trust their intuitions increases contributions compared with an induction that promotes greater reflection. To explain these results, we propose that cooperation is intuitive because cooperative heuristics are developed in daily life where cooperation is typically advantageous. We then validate predictions generated by this proposed mechanism. Our results provide convergent evidence that intuition supports cooperation in social dilemmas, and that reflection can undermine these cooperative impulses.

A take-away would be that if you want someone to cooperate with you, you could put them in a situation where they need to make a decision on the spot. And if you want to come across as a naturally nice guy, go with your cooperative instincts and don’t think too much. Greater selfishness should be expected as a downside to letting people go into a detailed ‘near’ mode, where they concretely reflect on strategic choices and the likely outcomes. Calculation makes you calculating.

What about those brave behavioural economists, sent onto the front lines to study human psychology? Maybe they should ask for extra pay to compensate for the risk their work presents to their personalities.

In many religions there is a belief in ‘predestination’. While I am far from a religious scholar, predestination is roughly the idea that God has already foreseen and willed all future outcomes. Believing this threw up a curly problem for personal moral responsibility: if God had already decided who deserved to go to heaven, and who deserved to go to hell, why bother doing anything in particular? Your fate has already been sealed. In fact it was sealed long before you were born. But it turned out there still was a strong incentive to behave righteously, so long as you didn’t know which group you belonged to: every time you did the right thing, you were producing evidence for yourself that you were one of those destined for heaven rather than hell. Your virtuous acts couldn’t change the outcome at all, but they could still offer a huge relief!

The same is true for various health-affecting behaviours. My go-to example is flossing, which is correlated with a significant extension in life expectancy (e.g. this). How much of the extension is caused by flossing, and how much is due to flossing being associated with other things that improve health, like diligence? I doubt anyone knows. But all else equal, if you are someone who flosses, you should expect to live longer than someone who doesn’t. The correlation is what matters for that prediction, not causation. That sounds like a good reason to start flossing to me. Your flossing may or may not change anything, but it will give you a compelling reason to expect to be blessed with good health. The same goes for drinking in moderation, exercising regularly, and so on. So take this realisation, and use it to stay motivated to do the things you thought you should be doing, because the expected benefits are even bigger than causal studies make them sound. Incidentally, people who are convinced by this argument live on average two years longer, so I wouldn’t recommend dwelling on it for too long.

I often listen to music or use my phone while walking through town. I have heard that this is dangerous and could cause me to get killed, so I’ll attempt a back of the envelope calculation to work out if I should stop.

Firstly, the base rate of risk for being run over as a pedestrian in the UK is 0.0006% per person each year. Let’s say as a young man I face double that risk, or 0.0012%. I don’t know how much using my phone some of the time while walking raises the risk, though this data suggests a pedestrian failing to look properly was the cause of 190 road fatalities in the UK, out of a total of 385 pedestrian deaths. For the sake of argument, let’s say the risk triples. Let me know if you have a better estimate. That would result in an extra risk of death of 0.0024% each year. Dying now would cost me some 60 years of healthy life, so I should expect to lose 0.00144 years for each year I engage in this behaviour – which is around 12 hours.

As compensation, I get to listen to music, audiobooks and check my email for on average 10 minutes a day, which comes to 60 hours or so a year. I would say that time is about 50% better spent than it would be otherwise, thanks to my ability to use my phone, so I expect to gain the equivalent of 30 hours a year.

If these numbers are about right, I should be fairly comfortable listening to music or looking at my phone as a pedestrian. However, the harm is pretty close to the benefit, and someone could reasonably think the cost actually outweighs the benefit.
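The whole back-of-envelope calculation, with every figure taken from the paragraphs above, can be laid out as:

```python
# Back-of-envelope: cost and benefit of phone use while walking,
# using the figures from the text above.
base_risk = 0.0012 / 100        # annual death risk as a young male pedestrian
extra_risk = base_risk * 2      # tripling the risk adds twice the base rate
years_lost_if_killed = 60

hours_per_year = 365.25 * 24
expected_cost = extra_risk * years_lost_if_killed * hours_per_year  # hours/year

minutes_per_day = 10
hours_of_use = minutes_per_day / 60 * 365.25  # ≈ 61 hours of phone use a year
expected_benefit = hours_of_use * 0.5         # that time is ~50% better spent

print(round(expected_cost, 1), round(expected_benefit, 1))  # ≈ 12.6 vs ≈ 30.4
```

The expected loss of around 12.6 hours a year is less than half the roughly 30-hour gain, though as noted the margin is not enormous.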

Nonetheless I could do better by not using my phone in cases where the cost exceeds the benefit, for example by keeping the volume low, not having conversations which are particularly distracting, and being strict about not starting to look at my phone if I expect to cross the street soon after.

If you like this approach – and maybe even if you don’t – you’ll also like How to Gain or Lose 30 Minutes of Life Every Day, which estimates how much life you should expect to gain or lose each time you exercise, eat fruit, vegetables or meat, drink alcohol, smoke a cigarette, remain overweight, or sit at a computer for hours at a time. Gains and losses are measured using the ‘microlife’, which corresponds to half an hour of life expectancy (a ‘micromort’, by contrast, is a one-in-a-million chance of death). While the numbers are no doubt a dramatic simplification of the medical evidence, I find a concrete estimate of the benefits gives me stronger motivation to eat more vegetables, drink less, and perhaps exercise more as well. And it helps me prioritise which health enhancing activities are worth the trouble, and which are not.

Many of you will be familiar with the fact that past returns from notable stock indices, such as those in the US, are a biased indicator of the likely future returns to investing in equities. The problem is that due to war, government interference, and financial collapse, some stock markets disappeared altogether, wiping out investors. In some countries this has even happened multiple times. Historical stock indices that went to zero tend not to be remembered, and so are under-sampled. The result is ‘survivorship bias‘, a problem that shows up in many other research questions as well. When these defunct investments are put back in the sample, average returns are quite a bit lower than when you look at just, for example, the NY stock exchange.

A lesser known result is that a broader and representative sample of stock histories shows that investing over long time horizons doesn’t reduce the variability of your return. Contrary to conventional wisdom, even young savers need to diversify across different asset types and countries in order to get that effect and be confident of retiring in comfort:

“One of the most enduring questions in finance is the persistence of investment risk across time horizons. This issue of time diversification is crucial to long-term asset allocation decisions.

There is a widespread view that the longer the horizon, the more investors benefit from investing in equities. Young investors, for instance, are typically advised to allocate more to equities than those whose retirement is imminent, on the grounds that equities are less risky over long horizons. A common rule of thumb is that the percentage of stock allocation should equal 100 minus an investor’s age.

Some researchers claim to have found empirical evidence that equities are less risky over long horizons because of mean reversion. Mean reversion implies that the variance of stock returns does not grow linearly with time, contrary to a random walk. As a result, several authors have claimed that greater equity allocations are justified on the grounds that shortfall risk lessens as the horizon is extended.

This conclusion seems hardly justified. Previous findings of mean reversion have considered seventy years or so of U.S. data. For long-horizon returns, say ten years, this implies only seven truly independent observations, which seems insufficient to support robust conclusions about the risk of ten-year equity investments. The problem is that, with a fixed sample size, the number of effective observations diminishes as the investment horizon lengthens. Another problem is that markets with long histories may not represent investment risk for reasons of survivorship bias.

One solution is to expand the sample by adding cross-sectional data. We describe the distribution of long-term returns for a sample of thirty countries for which we have long series of equity prices. The empirical evidence expands on the work of Jorion and Goetzmann (1999) and substantially extends results described by Dimson, Marsh, and Staunton (2002), who analyze a century of stock market returns in fifteen countries.

The results are not reassuring. We find no evidence of long-term mean reversion in the expanded data sample. Downside risk declines very little as the horizon lengthens. In addition, U.S. equities appear systematically less risky than equities of other markets.

Mean reversion is analyzed first in terms of variance ratio tests. There is no evidence of mean reversion from variance ratio tests across this sample, taking into account statistical properties of these tests. Furthermore, markets that suffered interruption displayed mean aversion, or the opposite of mean reversion. Therefore, statistical properties such as high average returns and mean reversion may be an artifact of survival. Probabilities of losses on equities are reduced very slowly, if at all, with the horizon. In fact, shortfall measures such as value at risk (VAR) sharply increase with the horizon.

There is, however, some positive news. Diversification across assets pays. Over this century, a global stock market index would have displayed less downside risk than any single market. The conclusion is that across-country diversification is more effective than time diversification.” (HT Ben Hoskin)
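The variance ratio test the authors rely on compares the variance of q-period returns with q times the variance of one-period returns: under a random walk the ratio should be close to 1, while mean reversion would push it below 1. A minimal illustration on simulated i.i.d. returns (the 5% mean and 20% standard deviation are arbitrary assumptions, not figures from the paper):

```python
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def variance_ratio(returns, q):
    # Variance of non-overlapping q-period returns over q times the
    # one-period variance; ≈ 1 under a random walk, < 1 under mean reversion.
    blocks = [sum(returns[i:i + q]) for i in range(0, len(returns) - q + 1, q)]
    return variance(blocks) / (q * variance(returns))

random.seed(0)
# Simulated i.i.d. ("random walk") annual returns: 5% mean, 20% sd (arbitrary)
rets = [random.gauss(0.05, 0.20) for _ in range(10_000)]
vr = variance_ratio(rets, 10)
print(round(vr, 2))  # close to 1, as expected for a random walk
```

The paper’s point is that with only seventy years of data you get just seven independent ten-year blocks, so an estimate like this is far too noisy to establish mean reversion.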

GiveWell’s charity recommendations – currently Against Malaria Foundation, GiveDirectly and the Schistosomiasis Control Initiative – are generally regarded as the most reliable in their field. I imagine many readers here donate to these charities. This makes it all the more surprising that it should be pretty easy to start a charity more effective than any of them.

All you would need to do is found an organisation that fundraises for whoever GiveWell recommends, and raises more than a dollar with each dollar it receives. Is this hard? Probably not. As a general rule, a dollar spent on fundraising seems to raise at least several dollars. It’s a pretty simple and fast multiplier that obviously beats putting your money in the stock market. An independent organisation raising money for GiveWell’s top charities should do even better than a typical fundraiser, thanks to:

the strength of evidence, which is especially compelling to big donors

the independent recommendation, which looks particularly credible and removes the perception of any ulterior motive

a willingness to maximise (for example by targeting the wealthy, and focussing on regular or legacy donors)

an intrinsic motivation to do good

the freedom to choose which of the three organisations they promote, depending on who they are talking to.

Putting your money into fundraising, rather than just giving it directly, does impose additional costs on the donors you inspire, and may ‘crowd out’ gifts to other charities. However, the logic of giving to GiveWell’s top rated charities is that they make (much) better use of money than most other individuals or organisations. So if you have a fundraising ratio significantly above 1:1, these downsides shouldn’t much matter.

You might ask: if fundraising is the best thing to do, why wouldn’t AMF, SCI or GiveDirectly just spend the money you give them on fundraising? My guess is that it’s simply a bad look. If they spend too much on fundraising, it will irrationally scare off their existing and potential donors. Even if a charity should ideally spend most of its receipts on further fundraising in order to grow more quickly, the option simply isn’t available. The social norm against ‘optimising fundraising‘ is generally helpful, because intense competition between charities for donations would cause ‘rent dissipation’, and less total money would flow to charity recipients. But if your charity actually is much better than other charities, and so it’s good when you ‘take’ their money, this social norm does harm by preventing you from doing so.

So, if you are unlike most donors and are willing to have your money spent on effective fundraising, you can easily increase your impact several times over. Just help GiveWell’s top charities take their fundraising efforts ‘off the books’ by founding or giving to a separate organisation that does it for them.

This isn’t actually an impractical plan. Starting up a lean and effective fundraising organisation is difficult, but much easier than building a global team to distribute insecticide-treated bed nets. Any bright and energetic person in a rich country who went and received the necessary training would have a decent shot at getting such an organisation off the ground. If you would like to discuss the first steps required to make this happen, drop me an email (robertwiblin [at] gmail [dot] com) and I can put you in touch with a team already working on this approach, who could direct you towards legal and financial support.

In a situation where different activities have very different benefit to cost ratios, it is important to set priorities, and tackle those with the highest value first. Any individual who didn’t set priorities would achieve much less than they could; they might end up malnourished because they were busy reading their junk mail. While it is relatively easy to set priorities for a single human’s personal life – not that we always follow them – setting priorities for humanity as a whole is very difficult and requires in-depth study.

The central limit theorem suggests that the cost effectiveness of different projects ought to have a ‘log normal’ distribution, if not an even fatter-tailed one. Furthermore, there is no reason to think that (e.g.) political reform, different environmental causes, R&D for various technologies, conflict resolution, poverty reduction and so on are even in the same ball-park of cost effectiveness, so we should anticipate a large variance in the distribution. This would leave some causes orders of magnitude more important than others. What research on this topic has been done, by groups like J-PAL, GiveWell, the WHO, and so on, indeed finds that the value of different methods of improving the world varies dramatically, with some doing enormous amounts of good and others achieving next to nothing. Unfortunately, as far as I am aware – and I would love to be informed otherwise – there is no one who has taken on the role of picking out and promoting the most important tasks we face.
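The log-normal intuition is easy to demonstrate: if a project’s cost effectiveness is the product of many independent factors, its logarithm is a sum of many independent terms, which the central limit theorem makes approximately normal. A toy simulation (the ten factors and their 0.5x–2x range are arbitrary assumptions):

```python
import random

random.seed(1)

def cost_effectiveness():
    # Product of 10 independent multiplicative factors, each 0.5x-2x (arbitrary)
    ce = 1.0
    for _ in range(10):
        ce *= random.uniform(0.5, 2.0)
    return ce

projects = sorted(cost_effectiveness() for _ in range(100_000))
median = projects[len(projects) // 2]
top_1_percent = projects[int(len(projects) * 0.99)]
print(round(top_1_percent / median, 1))  # top projects are many times the median
```

Even with modest variation in each factor, the best projects end up more than an order of magnitude above the median, which is why picking the right cause matters so much.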

The Copenhagen Consensus set out to fill this gap in 2003, and produced reports that were of mixed quality, though excellent value for money and a substantial improvement on what existed before. Sadly, it is not currently planning another round of research because it is out of funding (though still taking donations). In the absence of a comprehensive and broad comparison of different causes, resources naturally flow to the most powerful or vocal interest groups, or the approaches that people intuitively guess are best. Given our terrible instincts for risks and magnitudes we don’t have regular direct experience with, it would be an extraordinary coincidence if these actually were the most valuable projects to be embarking on.

The natural home for a properly-funded and ongoing global prioritisation research project would be the World Bank, or alternatively the OECD or a university. If anyone is reading this and has some influence: global prioritisation looks like a cost effective cause to hop on. Though given the lack of research on the topic, I’ll admit it is hard to be sure!

Over at 80,000 Hours we have been looking into which research questions are most important or prone to neglect. As part of that, I was recently lucky enough to have dinner with Iain Chalmers, one of the founders of the Cochrane Collaboration. He let me know about this helpful summary of reasons to think most clinical research is predictably wasteful:

“Worldwide, over US$100 billion is invested every year in supporting biomedical research, which results in an estimated 1 million research publications per year

…

a recently updated systematic review of 79 follow-up studies of research reported in abstracts estimated the rate of publication of full reports after 9 years to be only 53%.

…

An efficient system of research should address health problems of importance to populations and the interventions and outcomes considered important by patients and clinicians. However, public funding of research is correlated only modestly with disease burden, if at all.6–8 Within specific health problems there is little research on the extent to which questions addressed by researchers match questions of relevance to patients and clinicians. In an analysis of 334 studies, only nine compared researchers’ priorities with those of patients or clinicians.9 The findings of these studies have revealed some dramatic mismatches. For example, the research priorities of patients with osteoarthritis of the knee and the clinicians looking after them favoured more rigorous evaluation of physiotherapy and surgery, and assessment of educational and coping strategies. Only 9% of patients wanted more research on drugs, yet over 80% of randomised controlled trials in patients with osteoarthritis of the knee were drug evaluations.10 This interest in non-drug interventions in users of research results is reflected in the fact that the vast majority of the most frequently consulted Cochrane reviews are about non-drug forms of treatment.

…

New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence. Many researchers do not do this—for example, Cooper and colleagues 13 found that only 11 of 24 responding authors of trial reports that had been added to existing systematic reviews were even aware of the relevant reviews when they designed their new studies.

…

New research is also too often wasteful because of inadequate attention to other important elements of study design or conduct. For example, in a sample of 234 clinical trials reported in the major general medical journals, concealment of treatment allocation was often inadequate (18%) or unclear (26%).16 In an assessment of 487 primary studies of diagnostic accuracy, 20% used different reference standards for positive and negative tests, thus overestimating accuracy, and only 17% used double-blind reading of tests.17

…

More generally, studies with results that are disappointing are less likely to be published promptly,19 more likely to be published in grey literature, and less likely to proceed from abstracts to full reports.2 The problem of biased under-reporting of research results mainly from decisions taken by research sponsors and researchers, not from journal editors rejecting submitted reports.20 Over the past decade, biased under-reporting and over-reporting of research have been increasingly acknowledged as unacceptable, both on scientific and on ethical grounds.

…

Although their quality has improved, reports of research remain much less useful than they should be. Sometimes this is because of frankly biased reporting—eg, adverse effects of treatments are suppressed, the choice of primary outcomes is changed between trial protocol and trial reports,21 and the way data are presented does not allow comparisons with other, related studies. But even when trial reports are free of such biases, there are many respects in which reports could be made more useful to clinicians, patients, and researchers. We select here just two of these. First, if clinicians are to be expected to implement treatments that have been shown in research to be useful, they need adequate descriptions of the interventions assessed, especially when these are non-drug interventions, such as setting up a stroke unit, offering a low fat diet, or giving smoking cessation advice. Adequate information on interventions is available in around 60% of reports of clinical trials;22 yet, by checking references, contacting authors, and doing additional searches, it is possible to increase to 90% the proportion of trials for which adequate information could be made available.22

…

Although some waste in the production and reporting of research evidence is inevitable and bearable, we were surprised by the levels of waste suggested in the evidence we have pieced together. Since research must pass through all four stages shown in the figure, the waste is cumulative. If the losses estimated in the figure apply more generally, then the roughly 50% loss at stages 2, 3, and 4 would lead to a greater than 85% loss, which implies that the dividends from tens of billions of dollars of investment in research are lost every year because of correctable problems.”
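The cumulative figure in that final paragraph follows from simple compounding: three stages that each lose about half leave only an eighth of the original value.

```python
surviving = 1.0
for stage_loss in (0.5, 0.5, 0.5):  # roughly 50% lost at each of stages 2, 3 and 4
    surviving *= 1 - stage_loss
total_loss = 1 - surviving
print(total_loss)  # 0.875, i.e. a greater than 85% cumulative loss
```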

His assessment was that the research profession could not be expected to fix up these problems internally, as it had not done so already despite widespread knowledge of these problems, and had no additional incentive to do so now. It needs external intervention and some options are proposed in the paper.

There is a precedent for this. The US recently joined a growing list of countries who have helped their researchers coordinate to weaken the academic publishing racket, by insisting that publicly-funded research be free and openly available within a year. So long as academics are permitted to publish publicly-funded research in pay-for-access journals, established and prestigious journals can earn big rents by selling their prestige to researchers – to help them advance their careers – in exchange for copyright on their publicly-funded research. Now that researchers aren’t permitted to sell that copyright, an individual who would refuse to do so out of principle won’t be outcompeted by less scrupulous colleagues.

Likewise, rules that require everyone receiving public money to do the public-spirited thing, for instance by checking for systematic reviews, publishing null results, pre-registering their approach to data analysis, opening their data to scrutiny by colleagues, and so on, would make it harder for unscrupulous researchers to get ahead with corner-cutting or worse chicanery.

As part of our self-improvement program at the Centre for Effective Altruism I decided to present a lecture on cognitive biases and how to overcome them. Trying to put this together reminded me of a problem I have long had with the self-improvement literature on biases, along with the literature on health, safety and nutrition: it doesn’t prioritise. Kahneman’s book Thinking, Fast and Slow is an excellent summary of the literature on biases and heuristics, but risks overwhelming or demoralising the reader with the number of errors they need to avoid. Other sources are even less helpful at highlighting which biases are most destructive.

You might say ‘avoid them all’, but it turns out that clever and effort-consuming strategies are required to overcome most biases; mere awareness is rarely enough. As a result, it may not be worth the effort in many cases. Even if it were usually worth it, most folks will only ever put a limited effort into reducing their cognitive biases, so we should guide their attention towards the strategies which offer the biggest ‘benefit to cost ratio’ first.

There is a bias underlying this scattershot approach to overcoming bias: we are inclined to allocate equal time or value to each category or instance of something we are presented with, even when those categories are arbitrary, or at least a poor signal of importance. Expressions of this bias include:

Allocating equal or similar migrant places or development aid funding to different countries out of ‘fairness’, even if they vary in size, need, etc.

Making a decision by weighing the number, or length, of ‘pro’ and ‘con’ arguments on each side.

Offering similar attention or research funding to different categories of cancer (breast, pancreas, lung), even though some kill ten times as many people as others.

Providing equal funding for a given project to every geographic district, even if the boundaries of those districts were not drawn with reference to need for the project.

Fortunately, I don’t think we need tackle most of the scores of cognitive biases out there to significantly improve our rationality. My guess is that some kind of Pareto or ’80-20′ principle applies, in which case a minority of our biases are doing most of the damage. We just have to work out which ones! Unfortunately, as far as I can tell this hasn’t yet been attempted by anyone, even the Centre for Applied Rationality, and there are a lot to sift through. So, I’d appreciate your help to produce a shortlist. You can have input through the comments below, or by voting on this Google form. I’ll gradually cut out options which don’t attract any votes.

Ultimately, we are seeking biases that have a large and harmful impact on our decisions. Some correlated characteristics I would suggest are that it:

potentially influences your thinking on many things

is likely to change your beliefs a great deal

doesn’t have many redeeming ‘heuristic’ features

disproportionately influences major choices

has a large effect substantiated by many studies, and so is less likely to be the result of publication bias.

We face the problem that more expansive categories can make a bias look like it has a larger impact (e.g. ‘cancer’ would look really bad but none of ‘pancreatic cancer’, ‘breast cancer’, etc would stand out individually). For our purposes it would be ideal to group and rate categories of biases after breaking them down by ‘which intervention would neutralise this.’ I don’t know of such a categorisation and don’t have time to make one now. I don’t expect that this problem will be too severe for a first cut.

The standard policy for blogs and online forums is for everyone to be free to add comments unless they repeatedly violate rules against swearing or personal abuse. In the past I have taken this approach on my personal blog and Facebook profile and so only blocked a handful of people over many years. This policy ensures that all comments, even those judged negatively by the original author, can be found somewhere in the resulting thread. But it has some major downsides, and I now wonder if it was a mistake.

People who write outrageous things and get banned never last long enough to do much harm. The real damage is done by frequent commenters who are uninformed, thoughtless, long-winded, mean-spirited or uncharitable. I have inadvertently wasted a lot of time over the years reading and responding to the resulting comments. While I could ignore them, that allows incorrect claims or poor character to go unchallenged. Even when I know I am wasting my time, obnoxious comments preoccupy me and lower my productivity, whether they are directed at me or at others. Many readers scan comment threads, and I imagine they find the experience similarly draining.

The worst case scenario is the ‘comment thread death spiral’. The best comments typically come from those whose time is most valuable: busy professionals who actively study or work in a given field. But comment threads are naturally dominated by those who spend much of their life on the internet commenting on blogs and often bring no particular expertise. Each foolish comment lowers the signal-to-noise ratio and reduces the attention good comments receive. This wastes everyone’s time. But it is particularly annoying for ‘busy but informed’ commenters who barely have time to read the original post, let alone wade through lengthy comment threads. They realise their remarks will be crowded out by others, or that they will have to wrangle with uninformed responses, and rationally opt out. As a result, bad comments disproportionately drive away the best ones. The average quality of comments falls and the cycle repeats. This partly explains the negative correlation between the quantity and quality of comments across blogs.

Despite the damage they do, most authors refuse to warn or block those who leave lousy comments because they do not violate social norms, and in most cases mean no harm. It is impossible to set up clear rules to specify which comments are helpful and which are not. Instead, the author must exercise a lot of responsibility and discretion, which they do not want to do because it is time-consuming and opens them up to conflict and criticism.

A nice alternative is up- and down-voting, which has worked well on Reddit and Less Wrong. This allows (anonymous) readers to notify everyone else about whether something is worth reading before they bear the cost of doing so. Modules for this are tricky to set up, and rely on a large, active and intelligent audience of voters. But they are invaluable and ought to be the default. A simpler option would be ‘highlighted comments’, which would let the author pin the best comments at the top of the page.

Where those options are unavailable, should we worry about authors choosing which comments, or commenters, remain on their websites? I think not. Most writers want to offer readers a good experience in order to attract more of them. When choosing commenters they will bear this in mind, just as they do when choosing the content of original posts. If you find their writing worthwhile out of the millions of blogs and books available, you can probably also trust them to curate comments effectively if given the chance. Where they don’t, you can seek responses elsewhere, or vote with your feet and read someone else. Personally, I feel that the benefit of not having my time wasted vastly outweighs the risk that I will be prevented from reading good responses, or have my own removed.

We don’t let strangers without interesting things to say interrupt and talk over conversations with friends and colleagues. We invite only the people we want to our seminars, parties, and so on. Despite some drawbacks this model works pretty well, and it should be more acceptable online than it is today.