Tag Archives: Charity

My last post discussed how to influence the distant future, using a framework focused on a random uncaring universe. This is, for example, the usual framework of most who see themselves as future-oriented “effective altruists”. They see most people and institutions as not caring much about the distant future, and they themselves as unusual exceptions in three ways: 1) their unusual concern for the distant future, 2) their unusual degree of general utilitarian altruistic concern, and 3) their attention to careful reasoning on effectiveness.

If few care much or effectively about the distant future, then efforts to influence that distant future don’t much structure our world, and so one can assume that the world is structured pretty randomly compared to one’s desires and efforts to influence the distant future. For example, one need not be much concerned about the possibility that others have conflicting plans, or that they will actively try to undermine one’s plans. In that case the analysis style of my last post seems appropriate.

But it would be puzzling if such a framework were so appropriate. After all, the current world we see around us is the result of billions of years of fierce competition, a competition that can be seen as about controlling the future. In biological evolution, a fierce competition has selected species and organisms for their ability to make future organisms resemble them. More recently, within cultural evolution, cultural units (nations, languages, ethnicities, religions, regions, cities, firms, families, etc.) have been selected for their ability to make future cultural units resemble them. For example, empires have been selected for their ability to conquer neighboring regions, inducing local residents to resemble them more than they resemble conquered empires.

In a world of fierce competitors struggling to influence the future, it makes less sense for any one focal alliance of organism, genetic, and cultural units (“alliance” for short in the rest of this post) to assume a random uncaring universe. It instead makes more sense to ask who has been winning this contest lately, what strategies have been helping them, and what advantages this one alliance might have or could find soon to help in this competition. Competitors would search for any small edge to help them pull even a bit ahead of others, they’d look for ways to undermine rivals’ strategies, and they’d expect rivals to try to undermine their own strategies. As most alliances lose such competitions, one might be happy to find a strategy that allows one to merely stay even for a while. Yes, successful strategies sometimes have elements of altruism, but usually as ways to assert prestige or to achieve win-win coordination deals.

Furthermore, in a world of fiercely competing alliances, one might expect to have more success at future influence via joining and allying strongly with existing alliances, rather than by standing apart from them with largely independent efforts. In math there is often an equivalence between “maximize A given a constraint on B” and “maximize B given a constraint on A”, in the sense that both formulations give the same answers. In a related fashion, similar efforts to influence the future might be framed in either of two rather different ways:

I’m fundamentally an altruist, trying to make the world better, though at times I choose to ally and compromise with particular available alliances.

I’m fundamentally a loyal member/associate of my alliance, but I think that good ways to help it are to a) prevent the end of civilization, b) promote innovation and growth within my alliance, which indirectly helps the world grow, and c) have my alliance be seen as helping the world in a way which raises its status and reputation.

This second framing seems to have some big advantages. People who follow it may win the cooperation, support, and trust of many members of a large and powerful alliance. And such ties and supports may make it easier to become and stay motivated to continue such efforts. As I said in my last post, people seem much more motivated to join fights than to simply help the world overall. Our evolved inclinations to join alliances probably create this stronger motivation.

Of course if in fact most all substantial alliances today are actually severely neglecting the distant future, then yes it can make more sense to mostly ignore them when planning to influence the distant future, except for minor connections of convenience. But we need to ask: how strong is the evidence that in fact existing alliances greatly neglect the long run today? Yes, they typically fail to adopt policies that many advocates say would help in the long run, such as global warming mitigation. But others disagree on the value of such policies, and failures to act may also be due to failures to coordinate, rather than to a lack of concern about the long run.

Perhaps the strongest evidence of future neglect is that typical financial rates of return have long remained well above growth rates, strongly suggesting a direct discounting of future outcomes due to their distance in time. For example, these high rates of return are part of standard arguments that it will be cheaper to accommodate global warming later, rather than to prevent it today. Evolutionary finance gives us theories of what investing organizations would do when selected to take a long view, and it doesn’t match what we see very well. Wouldn’t an alliance with a long view take advantage of high rates of return to directly buy future influence on the cheap? Yes, individual humans today have to worry about limited lifespans and difficulties controlling future agents who spend their money. But these should be much less of an issue for larger cultural units. Why don’t today’s alliances save more?
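To make the logic of "buying future influence on the cheap" concrete, here is a minimal sketch; the rates and horizon are my illustrative assumptions, not figures from the post. When financial returns exceed economic growth, a patient saver's share of the economy compounds:

```python
# If financial returns r exceed economy growth g, a patient saver's wealth
# grows relative to the whole economy by a factor of (1+r)/(1+g) per year.
r, g, years = 0.05, 0.02, 200  # illustrative rates and horizon

relative_share = ((1 + r) / (1 + g)) ** years  # roughly 330x over 200 years
print(f"Relative wealth multiplier after {years} years: {relative_share:.0f}x")
```

Even a modest return advantage, sustained patiently, would let an alliance with a long view come to dominate future resources, which is what makes the observed lack of such saving puzzling.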

Important related evidence comes from data on our largest, longest-term known projects. Eight percent of global production is now spent on projects that cost over one billion dollars each. These projects tend to take many years, have consistent cost and time over-runs and benefit under-runs, and usually are net cost-benefit losers. I first heard about this from Freeman Dyson, in the “Fast is Beautiful” chapter of Infinite in All Directions. In Dyson’s experience, big slow projects are consistent losers, while fast experimentation often makes for big wins. Consider also the many large, slow, and failed attempts to aid poor nations.

Other related evidence includes: the time when a firm builds a new HQ tends to be a good time to sell its stock; futurists typically do badly at predicting important events even a few decades into the future; and the “rags to riches to rags in three generations” pattern, whereby individuals who find ways to grow wealth don’t pass such habits on to their grandchildren.

A somewhat clear exception where alliances seem to pay short term costs to promote long run gains is in religious and ideological proselytizing. Cultural units do seem to go out of their way to indoctrinate the young, to preach to those who might convert, and to entrench prior converts into not leaving. Arguably, farming era alliances also attended to the long run when they promoted fertility and war.

So what theories do we have to explain this data? I can see three:

1) Genes Still Rule – We have good theory on why organisms that reproduce via sex discount the future. When your kids only share half of your genes, if you consider spending on yourself now versus on your kid one generation later, you discount future returns at roughly a factor of two per generation, which isn’t bad as an approximation to actual financial rates of return. So one simple theory is that even though cultural evolution happens much faster than genetic evolution, genes still remain in firm control of cultural evolution. Culture is a more effective way for genes to achieve their purposes, but genes still set time discounts, not culture.

2) Bad Human Reasoning – While humans are impressive actors when they can use trial and error to hone behaviors, their ability to reason abstractly but reliably to construct useful long term plans is terrible. Because of agency failures, cognitive biases, incentives to show off, excess far views, overconfidence, or something else, alliances learned long ago not to trust human long term plans, or accumulations of resources that humans could steal. Alliances have traditionally invested in proselytizing, fertility, prestige, and war because those gains are harder for agents to mismanage, or to divert via theft and big bad plans.

3) Cultures Learn Slowly – Cultures haven’t yet found good general purpose mechanisms for making long term plans. In particular, they don’t trust organized groups of humans to make and execute long term plans for them, or to hold assets for them. Cultures have instead experimented with many more specific ways to promote long term outcomes, and have only found successful versions in some areas. So they seem to act with longer term views in a few areas, but mostly have not yet managed to find ways to escape the domination of genes.
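The factor-of-two-per-generation discount in theory 1 can be translated into an annual rate with a quick sketch; the generation lengths below are my illustrative assumptions:

```python
# A discount factor of 2 per generation implies an annual discount rate
# of 2**(1/G) - 1, where G is generation length in years (illustrative).
rates = {G: 2 ** (1 / G) - 1 for G in (25, 30)}
for G, annual in rates.items():
    print(f"G={G} years: ~{annual:.1%} per year")
```

This works out to roughly 2-3% per year, which is indeed in the same ballpark as long-run real financial rates of return, as the theory suggests.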

I lean toward this third, compromise theory. In my next post, I’ll discuss a dramatic prediction from all this, one that can greatly influence our long-term priorities. Can you guess what I will say?

A key turning point in my life was when my wife declared that her biological clock said she wanted kids now. I hadn’t been thinking of kids, and the prospect didn’t inspire much passion in me; my life had focused on other things. But I wanted to please my wife, and I didn’t much object, so we had kids. I now see that as one of the best choices I’ve made in my life. I thank my wife for pushing me to it.

Stats suggest that while parenting doesn’t make people happier, it does give them more meaning. And most thoughtful traditions say to focus more on meaning than happiness. Meaning is how you evaluate your whole life, while happiness is how you feel about now. And I agree: happiness is overrated.

Parenting does take time. (Though, as Bryan Caplan emphasized in a book, less than most think.) And many people I know plan to have an enormous positive influence on the universe, far more than is plausible via a few children. But I think they are mostly kidding themselves. They fear their future selves being less ambitious and altruistic, but it’s just as plausible that they will instead become more realistic.

Also, many people with grand plans struggle to motivate themselves to follow their plans. They neglect the motivational power of meaning. Dads are paid more, other things equal, and I doubt that’s a bias; dads are better motivated, and that matters. Your life is long, most big world problems will still be there in a decade or two, and following the usual human trajectory you should expect to have the most wisdom and influence around age 40 or 50. Having kids helps you gain both.

And in addition, you’ll do a big great thing for your kids; you’ll let them exist. It isn’t that hard to ensure a reasonably happy and meaningful childhood. That’s a far surer gain than your grand urgent plans to remake the universe.

Having kids is actually the best-proven way to have a long term influence. So much so that biological evolution has focused almost entirely on it. By comparison, human cultural mechanisms to influence the future seem tentative, unreliable, and unproven, except when closely tied to having and raising kids. Let your portfolio of future influence attempts include both low-risk, as well as high-risk, approaches.

Added 2p: Of course our biases help us make our meanings, in parenting as elsewhere.

Years ago I was honored to share this blog with Eliezer Yudkowsky. One of his main topics then was AI Risk; he was one of the few people talking about it back then. We debated this topic here, and while we disagreed I felt we made progress in understanding each other and exploring the issues. I assigned a much lower probability than he to his key “foom” scenario.

Recently AI risk has become something of an industry, with far more going on than I can keep track of. Many call working on it one of the most effectively altruistic things one can possibly do. But I’ve searched a bit and as far as I can tell that foom scenario is still the main reason for society to be concerned about AI risk now. Yet there is almost no recent discussion evaluating its likelihood, and certainly nothing that goes into as much depth as did Eliezer and I. Even Bostrom’s book length treatment basically just assumes the scenario. Many seem to think it obvious that if one group lets one AI get out of control, the whole world is at risk. It’s not (obvious).

Imagine that some person or organization is now a stranger, but you are considering forming a relation with them. Imagine further that they have one of two possible reputations: presumed selfish, or presumed pro-social. Assume also that the presumption about you is somewhere between these two extremes of selfish and pro-social.

In this situation you might think it obvious that you’d prefer to associate with the party that is presumed pro-social. After all, in this case social norms might push them to treat you nicer in many ways. However, there are other considerations. First, other forces, such as law and competition, might already push them to treat you pretty nicely. Second, social norms could also push you to treat them nicer, to a degree that law and competition might not push. And if you and they had a dispute, observers might be more tempted to blame you than them. Which could tempt them to demand more of you, knowing you’d fear an open dispute.

For example, consider which gas station you’d prefer, Selfish Sam’s or Nuns of Nantucket. If you buy gas from the nuns, social norms might push them to be less likely to sell you water instead of gas, and to offer you a lower price. But you might be pretty sure that laws already keep them from selling you water instead of gas, and their gas price visible from the road might already assure you of a low price. If you start buying gas from the nuns they might start to hit you up for donations to their convent. If you switched from them to another gas station they might suggest you are disloyal. You might have to dress and try to act extra nice there, such as by talking politely and not farting or dropping trash on the ground.

In contrast, if you buy gas from Selfish Sam’s, laws and competition could assure you that you get the gas you want at a low price. And you could let yourself act selfish in your dealings with them. You could buy gas only when you felt like it, buy the type of gas best for you, and switch whenever convenient. You don’t have to dress or act especially nice when you are there, and you could buy a selfish snack if that was your mood. In any dispute between you and them, most people are inclined to take your side, and that keeps Sam further in line.

This perspective helps make sense of some otherwise puzzling features of our world. First, we tend to presume that firms and bosses are selfish, and we often verbally criticize them for this (to others if not to their faces). Yet we are mostly comfortable relying on such firms for most of our goods and services, and on bosses for our jobs. There is little push to substitute non-profits, which are more presumed to be pro-social. It seems we like the fact that most people will tend to take our side in a dispute with them, and we can feel more free to change suppliers and jobs when it seems convenient for us. Bosses are often criticized for disloyalty for firing an employee, while employees are less often criticized for disloyalty for quitting jobs.

Sometimes we feel especially vulnerable to being hurt by suppliers like doctors, hurt in ways that we fear law and competition won’t fix. In these cases we prefer such suppliers to have a stronger pro-social presumption, such as being bound by professional ethics and organized via non-profits. And we pay many prices for this, such as via acting nicer to them, avoiding disputes with them, and being reluctant to demand evaluations or to switch via competition. Similarly, the job of soldiering makes soldiers especially vulnerable to their bosses, and so soldiers’ bosses are expected to be more pro-social.

As men tend to be presumed more selfish in our culture, this perspective also illuminates our male-female relations. Men commit more crime, women are favored in child custody disputes, and in dating men are more presumed to “only want one thing.” In he-said-she-said disputes, observers tend to believe the woman. Women tend more to initiate breakups, and find it easier to get trust-heavy jobs like nursing, teaching, and child-care, while men find it easier to get presumed-selfish jobs like investors and bosses. Female leaders are more easily criticized for selfish behavior, e.g., more easily seen as “bitchy”. Women tend to conform more, and to be punished more for nonconformity.

This all makes sense if men tend to feel more vulnerable to hidden betrayal by women, e.g. cuckoldry, while women can more use law and visible competition to keep men in line. In traditional gender roles, men more faced outsiders while women more faced inside the family. Thus men needed more to act “selfish” toward outsiders to help their families.

When those who are presumed selfish want to prove they are not selfish, they must sacrifice more to signal their pro-sociality. So men are expected to do more to signal devotion to women than vice versa. Conversely, folks like doctors, teachers, or priests, who are presumed pro-social, can often get away with actually acting quite selfishly, as long as such choices are hard to document. Few with access to evidence are willing to directly challenge them.

Joseph said .. Let Pharaoh .. appoint officers over the land, and take up the fifth part of the land of Egypt in the seven plenteous years. .. And that food shall be for store to the land against the seven years of famine, which shall be in the land of Egypt; that the land perish not through the famine. And the thing was good in the eyes of Pharaoh. (Genesis 41)

[Medieval Europe] public authorities were doubly interested in the problem of food supplies; first, for humanitarian reasons and for good administration; second, for reasons of political stability because hunger was the most frequent cause of popular revolts and insurrections. In 1549 the Venetian officer Bernardo Navagero wrote to the Venetian senate: “I do not esteem that there is anything more important to the government of cities than this, namely the stocking of grains, because fortresses cannot be held if there are not victuals and because most revolts and seditions originate from hunger.” (p42, Cipolla, Before the Industrial Revolution)

63% of Americans don’t have enough saved to cover even a $500 financial setback. (more)

Even in traditional societies with small governments, protecting citizens from starvation was considered a proper role of the state. Both to improve welfare, and to prevent revolt. Today it could be more efficient if people used modern insurance institutions to protect themselves. But I can see many failing to do that, and so can see governments trying to insure their citizens against big disasters.

Of course rich nations today face little risk of famine. But as I discuss in my book, eventually when human level artificial intelligence (HLAI) can do almost all tasks cheaper, biological humans will lose pretty much all their jobs, and be forced to retire. While collectively humans will start out owning almost all the robot economy, and thus get rich fast, many individuals may own so little as to be at risk of starving, if not for individual or collective charity.

Yes, this sort of transition is a long way off; “this time isn’t different” yet. There may be centuries still to go. And if we first achieve HLAI via the relatively steady accumulation of better software, as we have been doing for seventy years, we may get plenty of warning about such a transition. However, if we instead first achieve HLAI via ems, as elaborated in my book, we may get much less warning; only five years might elapse between seeing visible effects and all jobs lost. Given how slowly our political systems typically change state redistribution and insurance arrangements, it might be wiser to just set up a system far in advance that could deal with such problems if and when they appear. (A system also flexible enough to last over this long time scale.)

The ideal solution is global insurance. Buy insurance for citizens that pays off only when most biological humans lose their jobs, and have this insurance pay enough so these people don’t starve. Pay premiums well in advance, and use a stable insurance supplier with sufficient reinsurance. Don’t trust local assets to be sufficient to support local self-insurance; the economic gains from an HLAI economy may be very concentrated in a few dense cities of unknown locations.

Alas, political systems are even worse at preparing for problems that seem unlikely anytime soon. Which raises the question: should those who want to push for state HLAI insurance ally with folks focused on other issues? And that brings us to “universal basic income” (UBI), a topic in the news lately, and about which many have asked me in relation to my book.

Yes, there are many difficult issues with UBI, such as how strongly the public would favor it relative to traditional poverty programs, whether it would replace or add onto those other programs, and if replacing how much that could cut administrative costs and reduce poverty targeting. But in this post, I want to focus on how UBI might help to insure against job loss from relatively sudden unexpected HLAI.

Imagine a small “demonstration level” UBI, just big enough for one side to say “okay, we started a UBI; now it is your turn to lower other poverty programs, before we raise UBI more.” Even such a small UBI might be enough to deal with HLAI, if its basic income level were tied to the average income level. After all, an HLAI economy could grow very fast, allowing very fast growth in the incomes that biological humans gain from owning most of the capital in this new economy. Soon only a small fraction of that income could cover a low but starvation-averting UBI.

For example, a UBI set to x% of average income can be funded via a less than x% tax on all income over this UBI level. Since average US income per person is now $50K, a 10% version gives a UBI of $5K. While this might not let one live in an expensive city, a year ago I visited a 90-adult rural Virginia commune where this was actually their average income. Once freed from regulations, we might see more innovations like this in how to spend UBI.

However, I do see one big problem. Most UBI proposals are funded out of local general tax revenue, while the income of a HLAI economy might be quite unevenly distributed around the globe. The smaller the political unit considering a UBI, the worse this problem gets. Better insurance would come from a UBI that is funded out of a diversified global investment portfolio. But that isn’t usually how governments fund things. What to do?

A solution that occurs to me is to push for a World Basic Income (WBI). That is, try to create and grow a coalition of nations that implement a common basic income level, supported by a shared set of assets and contributions. I’m not sure how to set up the details, but citizens in any of these nations should get the same untaxed basic income, even if they face differing taxes on incomes above this level. And this alliance of nations would commit somehow to sharing some pool of assets and revenue to pay for this common basic income, so that everyone could expect to continue to receive their WBI even after an uneven disruptive HLAI revolution.

Yes, richer member nations of this alliance could achieve less local poverty reduction, as the shared WBI level couldn’t be above what the poor member nations could afford. But a common basic income should make it easier to let citizens move within this set of nations. You’d less have to worry about poor folks moving to your nation to take advantage of your poverty programs. And the more that poverty reduction were implemented via WBI, the bigger would be this advantage.

Yes, this seems a tall order, probably too tall. Probably nations won’t prepare, and will then respond to a HLAI transition slowly, and only with whatever resources they have at their disposal, which in some places will be too little. Which is why I recommend that individuals and smaller groups try to arrange their own assets, insurance, and sharing. Yes, it won’t be needed for a while, but if you wait until the signs of something big soon are clear, it might then be too late.

These four emotions: scared, sad, angry, and bitter, all suggest that one has suffered or will suffer a loss. So all of them might inspire empathy and help from others. But they don’t do so equally. Consider the selfish costs of expressing empathy for these four emotions.

While a scared person hasn’t actually suffered a loss yet, the other kinds of feelings indicate that an actual loss has been suffered. So the scared person is not yet a loser, while the others are losers. When there are costs to associating with losers, those costs are lowest for the scared. For example, if it takes real resources to help someone who has suffered a loss, the scared person is less likely to need such resources.

People who are angry or bitter blame particular other people for their loss. So by expressing empathy with or helping such people, you risk getting involved in conflicts with those other people. In contrast, helping people who are just sad is less likely to get you into conflicts.

People who are angry tend to think they have a substantial chance of winning a conflict with those they blame for their loss. Anger is a more visible emotion that drives one more toward overt conflict. Angry people are visibly trying to recruit others to their fight.

In contrast, bitter people tend to think they have little chance of winning an overt conflict, at least for now. So bitter people tend to fume in private, waiting for their chance to hit back unseen. If you help a bitter person, you may get blamed when their hidden attacks are uncovered, and your support may tempt them to become angry and start an overt fight. So by helping a bitter person, you are more likely to be on the losing end of a conflict.

These considerations suggest that our cost of empathizing with and helping people with these emotions increases in this order: scared, sad, angry, and bitter. And this also seems to describe the order in which we actually feel less empathy; we feel less empathy when its costs are higher.

Note that this same order also describes who has suffered a larger loss, on average. Scared people expect to suffer the smallest loss, while bitter people suffer the largest loss. (Ask yourself which emotion you’d rather feel.) So our willingness to express empathy with those who suffer a loss is inverse to the loss they suffer. We empathize the most with those who suffer the least. Because that is cheapest.

Thanks to Carl Shulman for pointing out to me the social risks of helping bitter folk, relative to sad folk.

Added 18Feb: Interestingly, many lists of emotions don’t include bitterness or an equivalent. It is as if we’d like to pretend it just doesn’t exist.

It is usually bad for people to die, and so good for them to keep living. Overall in our society, people who weigh more for their age and gender tend to die more, and so many are concerned about an “obesity epidemic”, and seek ways to reduce people’s weight, such as by getting them to consume fewer calories. Such as from drinking sugary soda.

TIME magazine says that evil soda firms, like evil tobacco firms before them, are lying about science to distract us from their evil:

You may not have noticed it yet, but sodamakers are working hard to get you off your couch. On Aug. 9, a New York Times article revealed that Coca-Cola was quietly funding a group of scientists called Global Energy Balance Network that emphasizes the role of exercise, as opposed to diet, in fighting obesity. … This has some nutrition and obesity experts charging soda companies, whose sales of carbonated soft drinks have hit a 20-year low, with cherry-picking science to make its products more appealing. … Indeed, there isn’t strong evidence to show that exercise alone … can help people shed pounds and keep them off. … It’s not the first time science has been used to sway public perceptions about the health effects of certain behaviors; the tobacco industry famously promoted messaging based on studies that claimed to prove that “light” or “low-tar” cigarettes were less harmful than regular ones. (more)

Yes, it is true that the literature usually suggests that for most people exercise won’t do much to change their weight. However, another consistent result in the literature (e.g., here, here) is that when we predict health using both weight and exercise, it is mostly exercise that matters. It seems that the main reason that heavy people are less healthy is that they exercise less. Obesity is mainly unhealthy as a sign of a lack of exercise.

So if we cared mainly about people’s health, we should cheer this effort by soda firms to push people to exercise. Even if that also causes people to cut down less on soda. A population that exercises more doesn’t weigh much less, but it lives much longer. In fact, exercise seems to be one of the biggest ways we know of by which an individual can influence their health. (Much bigger than medicine, for example.)

I suspect, however, that what bothers most people most about fat people isn’t that they’ll die younger; it’s instead that they look ugly and low status, and so make those around them look low status by association. So we don’t want people near us to look fat. All else equal we might also want them to live longer, but that altruistic motive can’t compete much with our status motive.

So boo soda firms if you want your associates to not seem low status. But yay soda firms if you want people to live and not die (sooner).

… Global Energy Balance Network, which promotes the argument that weight-conscious Americans are overly fixated on how much they eat and drink while not paying enough attention to exercise. Health experts say this message is misleading …

Actually that message seems exactly right to me, and not at all misleading.

Have you heard about the new “effective cars” movement? Passionate young philosophy students from top universities have invented a revolutionary new idea, now sweeping the intellectual world: cars that get you from home to the office or store and back again as reliably, comfortably, and fast as possible. As opposed to using cars as shrub removers, pots for plants, conversation pits, or paperweights. While effective car activists cannot design, repair, or even operate cars, they are pioneering ways to prioritize car topics.

Not heard of that? How about “effective altruism”?

Effective altruism is about asking, “How can I make the biggest difference I can?” and using evidence and careful reasoning to try to find an answer. Just as science consists of the honest and impartial attempt to work out what’s true, and a commitment to believe the truth whatever that turns out to be, effective altruism consists of the honest and impartial attempt to work out what’s best for the world, and a commitment to do what’s best, whatever that turns out to be. …

I helped to develop the idea of effective altruism while a [philosophy] student at the University of Oxford. … I began to investigate the cost-effectiveness of charities that fight poverty in the developing world. The results were remarkable. We discovered that the best charities are hundreds of times more effective at improving lives than merely “good” charities. .. From there, a community developed. We realized that effective altruism could be applied to all areas of our lives – choosing charity, certainly, but also choosing a career, volunteering, and choosing what we buy and don’t buy. (MacAskill, Doing Good Better)

This all sounds rather vacuous; who opposes applying evidence and careful reasoning to figure out how to do better at charity, or anything? But I just gave a talk at Effective Altruism Global, and spent a few days there chatting and listening, and I’ve decided that they do have a core position that is far from vacuous.

Effective altruism is a youth movement. While they collect status by associating with older people like Peter Singer and Elon Musk, those who work and have influence in these groups are strikingly young. And their core position is close to the usual one for young groups throughout history: old codgers have run things badly, and so a new generation deserves to take over.

Some observers see effective altruism as being about using formal statistics or applying consensus scientific theories. But in fact effective altruists embrace contrarian concerns about AI “foom” (discussed often on this blog), concerns based neither on formal statistics nor on applying consensus theories. Instead this community just trusts its own judgment on what reasoning is “careful,” without worrying much if outsiders disagree. This community has a strong overlap with a “rationalist” community wherein people take classes on and much discuss how to be “rational”, and then decide that they have achieved enough rationality to justify embracing many quite contrarian conclusions.

Youth movements naturally emphasize the virtues of youth, relative to those of age. While old people have more power, wealth, grit, experience, task-specific knowledge, and crystalized intelligence, young people have more fluid intelligence, potential, passion, idealism, and a clean slate. So youth movements tend to claim that society has become lazy, corrupt, ossified, stuck in its ways, has tunnel-vision, and forgets its ideals, and so needs smart flexible idealistic people to rethink and rebuild from scratch.

Effective altruists, in particular, emphasize their stronger commitment to altruistic ideals, and also the unusual smarts, rationality, and flexibility of their leaders. Instead of working within prior organizations to incrementally change prior programs, they prefer to start whole new organizations that re-evaluate all charity choices themselves from scratch. While most show little knowledge of the specifics of any charity areas, they talk a lot about not getting stuck in particular practices. And they worry about preventing their older selves from reversing the lifetime commitments to altruism that they want to make now.

Effective altruists often claim that big efforts to re-evaluate priorities are justified by large differences in the effectiveness of common options. Concretely, MacAskill, following Ord, suggested in his main conference talk that the distribution looks more like a thick-tailed power law than a Gaussian. He didn’t present actual data, but one of the other talks there did: Eva Vivalt showed the actual distribution of estimated effects to be close to Gaussian.

But youth movements have long motivated members via exaggerated claims. One is reminded of the sixties counter-culture seeing itself as the first generation to discover sex, emotional authenticity, and a concern for community. And saying not to trust anyone over thirty. Or countless young revolutionaries seeing themselves as the first generation to really care about inequality or unwanted dominance.

When they work well, youth movements can create a strong bond within a generation that can help them to work together as a coalition as they grow in ability and influence. As with the sixties counter-culture, or the libertarians a bit later, while at first their concrete actions are not very competent, eventually they gain skills, moderate their positions, become willing to compromise, and have substantial influence on the world. Effective altruists can reasonably hope to mature into such a strong coalition.

Added 1a: The last slide of my talk presented this youth movement account. The talk was well attended and many people talked to me about it afterward, but not one told me they disagreed with my youth movement description.

Added 10a: Most industries and areas of life have a useful niche to be filled by independent quality evaluators, and I’ve been encouraged by the recent increase in such evaluators within charity, such as GiveWell. The effective altruism movement consists of far more, however, than independent quality evaluators.

Added 8Aug: OK, for now I accept Brienne Yudkowsky’s summary of Vivalt, namely that she finds very little ability to distinguish the effectiveness of different ways to achieve any given effect, but that she doesn’t speak to the variation across different kinds of things one might try to do.

In one of my first blog posts back in 2006 I said people overestimate the social value of joining a helping profession, like doctors:

Yes, if you choose to be a doctor, you will spend your time providing services that people perceive to have value, sometimes enormous value. However, you cannot take full credit for this value. (more)

Now 80,000 Hours’ Rob Wiblin, who once blogged here, says “If you want to save lives, should you study medicine? Probably not”:

Most people skilled enough to make it in a field as challenging as medicine could have a bigger social impact through an alternative career. The best research suggests that doctors do much less to improve the health of their patients than you might naturally expect. Health is more determined by lifestyle factors, and most of the treatments that work particularly well could be delivered with a smaller number of doctors than already work in the UK or USA. (more)

In contrast, 80,000 Hours is quite bullish on getting an Economics PhD.

The Boston Review asked eleven people to respond to an essay by Peter Singer on effective altruism, i.e., on using careful analysis to pick acts that do the most good, even when less emotionally satisfying. For example, one might work at a less satisfying job that earns more, so that one can donate more. Response quotes are at the end of this post.

The most common criticisms were these: five people complained that in effective altruism the people helped don’t directly participate in the decision making process, and three people complained that charity efforts targeted directly at people in need distract from efforts to change political outcomes. Taken at face value, these seem odd criticisms, as they seem to apply equally to all charity efforts, and not just to this approach to charity. Yet I doubt these people have published essays complaining about charity in general. So I’m tempted to try to read between the lines, and ask: what is their real issue?

Charity plausibly has a signaling function, at least in part. Charity can let us show others our wealth, our conformity to standard social norms, and our loyalty to particular groups. Charity can also display our reassuring emotional reactions to hearing or seeing others in need or pain. Charity can also let us assert our dominance over and higher status than the people we help, especially if we control their lives a lot in the process. (There are birds who gain status by forcing food down the throats of others who lose status as a result.)

The main complaint above, on including the helped in decisions, seems closely related to showing dominance via charity that controls. But again, how is this problem worse for effective altruism charity, relative to all other charity?

I think the key is the empathy signaling function. People who give because of emotional feelings induced by seeing or hearing those in need are seen as having friendlier and less suspect motives, and people who participate in a political process that includes those they help are also seen as treating them more as equals. In contrast, people with an abstract, distant, less emotional relation to those in need, whom they help directly rather than indirectly via politics, are seen as having less of a personal relation to those they help, and so are more plausibly trying to dominate them, or to achieve some other less relational purpose.

This interpretation, that the main objection to effective altruists is their weaker display of empathetic emotions, is also supported by two other criticisms made of Singer’s essay: two people complained that effective altruism relies too much on numbers and other abstractions, and two people complained that it can be very hard to estimate many numbers.

Imagine someone who said they were in love with you, cared about you, and wanted to live with you to help you, but who didn’t seem very emotionally engaged in this. They instead talked a lot about calculations they’d done on how you two could live your lives together well. You might suspect them of having ulterior motives, such as wanting to gain sex, money, or status from you. Maybe the same sort of thing is going on in charity. We want and expect a certain sort of emotional relation to people who help us, and to people who help the same people we help, and people who say they are trying to help but who won’t join in the usual emotions in the usual way may seem suspect. We’d be more likely to find fault with their approach, and to suspect them of bad ulterior motives.