Saturday, December 22, 2012

One of the more colorful vignettes in philosophy is Gibbard and Harper's "Death in Damascus" case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.
‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death's foresight takes into account any reactions to Death's activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won't work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if it was only through some bizarre fluke that S became possible, and a strategy might improve our chances even though we would remain almost certain to fail, but common features, such as awareness of the Great Filter, would not suffice to avoid future filters.

Thursday, December 06, 2012

[Cross-posted from Overcoming Bias; edited to remove tactless commentary 2017. Also, the possibility of reducing the misery in factory farming through genetic alteration discussed in this post does not and would not justify factory farming.]

I have spoken with a lot of people who are enthusiastic about the possibility that advanced genetic engineering technologies will improve animal welfare.

But would it really take radical new technologies to produce genetics reducing animal suffering?

Modern animal breeding is able to shape almost any quantitative trait with significant heritable variation in a population. One carefully measures the trait in different animals, and selects sperm for the next generation on that basis. So far this has not been done to reduce animals' capacity for pain, or to increase their capacity for pleasure, but it has been applied to great effect elsewhere.

One could test varied behavioral measures of fear response, and physiological measures like cortisol levels, and select for them. As long as the measurements in aggregate tracked one's conception of animal welfare closely enough, breeders could generate increases in farmed animal welfare, potentially initially at low marginal cost in other traits.
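To get a feel for how fast such selection could move, here is a minimal sketch of one generation of truncation selection on a normally distributed welfare index, using the standard breeder's equation R = h²S. The heritability and selection fraction are illustrative assumptions, not measured values for any real trait.

```python
import random
import statistics

random.seed(0)
h2 = 0.3          # assumed heritability of the measured welfare index
pop_size = 100_000
keep = 0.10       # breed only from the top 10% of the population

# One generation of truncation selection on a standard-normal phenotype.
phenotypes = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
cutoff = sorted(phenotypes)[int(pop_size * (1 - keep))]
selected = [p for p in phenotypes if p >= cutoff]

S = statistics.mean(selected)  # selection differential (population mean is ~0)
R = h2 * S                     # breeder's equation: expected response per generation

print(f"selection differential S = {S:.2f} standard deviations")
print(f"expected response R = h2*S = {R:.2f} SD per generation")
```

At these settings S comes out near 1.75 standard deviations, so even modest heritability yields roughly half a standard deviation of change per generation; compounded over dozens of generations, that is an enormous shift.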

Just how powerful are ordinary animal breeding techniques? Consider cattle:

In 1942, when my father was born, the average dairy cow produced less than 5,000 pounds of milk in its lifetime. Now, the average cow produces over 21,000 pounds of milk. At the same time, the number of dairy cows has decreased from a high of 25 million around the end of World War II to fewer than nine million today. This is an indisputable environmental win as fewer cows create less methane, a potent greenhouse gas, and require less land.

Anderson, who has bred the birds for 26 years, said the key technical advance was artificial insemination, which came into widespread use in the 1960s, right around the time that turkey size starts to skyrocket...
This process, compounded over dozens of generations, has yielded turkeys with genes that make them very big. In one study in the journal Poultry Science, turkeys genetically representative of old birds from 1966 and modern turkeys were each fed the exact same old-school diet. The 2003 birds grew to 39 pounds while the legacy birds only made it to 21 pounds. Other researchers have estimated that 90 percent of the changes in turkey size are genetic.

Moreover, breeders are able to improve complex weighted mixtures of diverse traits.

Monday, November 05, 2012

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

1. Peace
2. A nuclear war that kills 99% of the world's existing population.
3. A nuclear war that kills 100%.

2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater... If we do not destroy mankind, these thousand years may be only a tiny fraction of the whole of civilized human history.

The ethical questions raised by the example have been much discussed, but almost nothing has been written on the empirical question: given nuclear war, how likely is scenario 3?

The most obvious path from nuclear war to human extinction is nuclear winter: past posts on Overcoming Bias have bemoaned neglect of nuclear winter, and highlighted recent research. Particularly important is a 2007 paper by Alan Robock, Luke Oman, and Georgiy Stenchikov: "Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences." Their model shows severe falls in temperature and insolation that would devastate agriculture and humanity's food supply, with the potential for billions of deaths from famine in addition to the direct damage.

So I asked Luke Oman for his estimate of the risk that nuclear winter would cause human extinction, in addition to its other terrible effects. He gave the following estimate:

The probability I would estimate for the global human population of zero resulting from the 150 Tg of black carbon scenario in our 2007 paper would be in the range of 1 in 10,000 to 1 in 100,000.
I tried to base this estimate on the closest rapid climate change impact analog that I know of, the Toba supervolcanic eruption approximately 70,000 years ago. There is some suggestion that around the time of Toba there was a population bottleneck in which the global population was severely reduced. Climate anomalies could be similar in magnitude and duration. Biggest population impacts would likely be Northern Hemisphere interior continental regions with relatively smaller impacts possible over Southern Hemisphere island nations like New Zealand.

Luke also graciously gave a short Q & A to clarify his reasoning, below the fold:

Monday, September 17, 2012

Imagine there are two advanced interstellar civilizations near one another who begin outward colonization around the same time, in an otherwise uninhabited accessible universe. One civilization likes to convert star systems into lots of people leading rich, happy lives full of interest and reward. Call them the Eudaimonians. The other is solely interested in expanding its sphere of colonization as quickly as possible, and produces much less or negative welfare. Call them the Locusts. How much of a competitive advantage do the Locusts have over the Eudaimonians? How much of the cosmic commons, as Robin Hanson calls it, would wind up transformed into worthwhile lives, rather than burned to slightly accelerate colonization efforts? If the Locusts will inevitably capture almost all resources, then little could be done to avert astronomical waste, but an even, waste-free split of the accessible universe could be half as good as a Eudaimonic monopoly.

I would argue that in our universe the Eudaimonians will be almost exactly as competitive as the Locusts in rapidly colonizing the stars. The reason is that the Eudaimonians can also adopt a strategy of near-maximum colonization speed until they reach the most distant accessible galaxies, and only then divert resources to producing welfare. More below the fold.

Will our civilization ever be able to colonize the stars and avert astronomical waste? Will we create computer programs more intelligent and energy-efficient than ourselves, enabling much larger and smarter sapient populations? We don't know exactly how hard it will be to engineer interstellar probes, or build AI, and we probably won't be sure until we actually do so.

However, we can shed some light on the question of whether humanity will ever be able to colonize the stars by asking how existing methods and technologies could increase our capacities, if they were deployed widely and to their limits. Here's a thought experiment: if we imagine that we were magically frozen in roughly our current technological regime for a time, long enough for Malthusian population growth and competition, how much would our economic and scientific production grow? By Malthusian, I mean that population would keep increasing until food costs started to price people out of reproduction, with higher-income folk reproducing more, and institutions that lead to high incomes spreading through migration, imitation or conquest.

Below the fold, I consider several dimensions where existing systems could simply be scaled up to increase global output and R&D: bringing poor countries up to the standards of rich countries, increasing population, and increasing average human capital within countries to near the level of the best-endowed households. Collectively, I estimate they could increase global R&D efforts more than a hundredfold.

Monday, July 16, 2012

tl;dr: If we take possible people into account, even endorsing the Repugnant Conclusion would only provide a negligible chance of getting to exist. So in the Rawlsian original position, they would be concerned with other features of society than population.

Friday, May 11, 2012

A number of possible global catastrophic risks seem like they would do their worst damage by disrupting food production. Some examples include nuclear winter, asteroid impacts, and supervolcanoes. In addition to directly laying waste to significant areas, such events would cast ash, dust, or other materials into the atmosphere. Temperatures would fall and solar radiation for primary producers would be reduced, causing agricultural failures and wreaking havoc on wilderness ecologies. It seems clear that feasible events of this sort could cost hundreds of millions or even billions of lives. But would even extreme events of this kind actually cause human extinction, or constitute an existential risk?

There are several sources of evidence we can bring to bear on the question. We can apply the "outside view" and consider the species, including hominids and primates, that have survived past volcanic eruptions and asteroid impacts. We can examine current supplies of food sources that could provide for humans during a period of impaired solar radiation. And we can look at past and present social behavior that bears on the distribution of food and recovery from periods of severe famine. In the aggregate, it seems to me that humanity would survive one of these severe food disruptions, despite terrible quantities of death and misery.

This post will take a first-pass look at existing food sources that could be drawn upon during a "year without the Sun," or something close to it.

Thursday, May 10, 2012

If you read lists of the most costly earthquakes, hurricanes, and other natural disasters, you will find that they tend to be quite recent, with damages increasing over time. But earthquake costs have not been rising because of some geological phenomenon, i.e. earthquakes getting more frequent or higher on the Richter scale. Rather, populations and economies have been growing, so that there are more valuable things for earthquakes to destroy. This dynamic offers a powerful defense against global catastrophic risks that can be addressed by interventions with particular fixed or falling costs.

Wednesday, May 09, 2012

Temporal discounting is not about time
Economists doing cost-benefit analysis normally make use of temporal discounting, i.e. benefits further in the future count for less than those nearer to the present. In part this is done to reflect the availability of positive investment returns, but analyses normally also include an additional element of pure temporal preference.

Say that I set up a sealed habitat for some plants and cute bunny rabbits. The rabbits are placed in suspended animation, and the habitat is rocketed out of the Solar System by an automated spacecraft which will never return to interact with our world again. At a predesignated time, the rabbits will be revived and go on to live happy lives in the sealed habitat for a time and then die. With significant pure temporal preference this spacecraft is much more valuable if it is set to revive the isolated rabbits after 5 years rather than 50.

Indeed, economists typically make use of constant exponential discounting, e.g. reducing the valuation of benefits by 3% per year. At a 3% annual discount rate the value of future benefits will be cut by more than half every 23 years. After 230 years a good would be valued at less than a thousandth of an immediate counterpart. But to most people the change in activation time does not make such an overwhelming difference. Further, constant exponential discounting makes strong distinctions between different far-future periods: benefits received in 1 million years are still more than a thousand times as valuable as benefits received in 1,000,230 years.
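The arithmetic behind these figures is easy to verify. A minimal check, applying the 3% rate as a flat 0.97 factor per year of delay (matching "reducing the valuation of benefits by 3% per year"):

```python
def discount(years: float, rate: float = 0.03) -> float:
    """Constant exponential discount factor for a benefit delayed by `years`."""
    return (1 - rate) ** years

print(discount(23))   # ~0.496: value is cut by more than half every 23 years
print(discount(230))  # ~0.0009: below a thousandth after 230 years

# The ratio between benefits at 1,000,000 and 1,000,230 years depends only on
# the 230-year gap, so it equals 1/discount(230): just over a thousandfold.
print(1 / discount(230))
```

Note that the million-year ratio is computed from the 230-year gap directly; raising 0.97 to the millionth power would underflow to zero, but the constant-rate assumption makes the ratio independent of the starting date.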

But real humans mostly don't care about such distinctions. A difference of a few centuries added onto a million years is a negligible change in time: in either case they lie far beyond the current era and the proportional change is small. Favoring the earlier time for a thousandfold reduction in the goods achieved seems absurd in that context. Humans may be impatient within our own lives, care more about our children than distant descendants, and so forth, but the constant exponential discounting framework just doesn't make sense of our attitudes towards the further future.

Because of cases like this philosophers tend to reject the idea of pure temporal preference for social cost-benefit analysis, e.g. with respect to climate change, and often critique economists for persisting in making use of it. But economists are not fools, and the reasons why so many continue to do so are worth thinking about.

Tuesday, May 08, 2012

[This post is a response to cousin_it's request for counterarguments to utilitarianism.]

Saving the drowning child is not a self-sacrifice
Suppose that you live in a developed country, and earn a high income even by developed country standards. One day you are walking home from a business meeting in your best suit, worth some $1,000, and see a small child drowning in a muddy (and foul-smelling) pond off the road. No one else is around, but you could save the child, at the cost of hopelessly ruining the suit. Most people intuit that in such a case one should save the child, despite the cost of a $1,000 suit.

The utilitarian philosopher Peter Singer has often argued that since we should make a financial sacrifice for the drowning child, we should do the same to save the lives of children in poor countries, e.g. by donating to the most cost-effective public health charities identified by GiveWell. Others would generalize to saving future generations, saying that if we can reduce existential risk and avert astronomical waste to save, in expectation, trillions of happy lives, then that is even better than saving one life today.

However, the drowning child case is problematic as a justification for self-sacrifice in other contexts: it probably doesn't involve any self-sacrifice at all, but rather an expected selfish gain.

Saturday, March 24, 2012

Aggregative hedonistic utilitarians are often concerned with the expected value of pleasure minus pain going forward. For instance, they may wonder how to value the expected Astronomical Waste if humanity were rendered extinct by a sudden asteroid impact.

One important consideration is that it appears that biological life as we know it generates pain or pleasure at a very low density relative to long-term technological possibilities. As Nick Bostrom's astronomical waste paper notes, the energy output of the Sun is a number of orders of magnitude higher than the energy that goes into life on Earth, and even higher than the energy going to power animal nervous systems. Further, ultra-efficient computing substrates could run emulations of animal nervous systems at much lower cost in energy.
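The scale of the gap can be seen with a few standard physical figures; these are rough order-of-magnitude values, not precise measurements:

```python
# Rough, standard order-of-magnitude figures (watts).
sun_output_w = 3.8e26               # total solar luminosity
earth_intercepted_w = 1.7e17        # sunlight striking Earth
brain_w = 20                        # metabolic power of one human brain
all_human_brains_w = 8e9 * brain_w  # roughly 8 billion people

print(f"Sun output / Earth-intercepted sunlight: {sun_output_w / earth_intercepted_w:.0e}")
print(f"Sun output / all human brains: {sun_output_w / all_human_brains_w:.0e}")
```

The Sun emits over a billion times the power that even reaches Earth, and the power running all human brains is smaller by another factor of a million, which is the sense in which biological life generates welfare at very low density relative to the available energy.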

For particular accounts of the normative importance of pain and pleasure, one could further streamline conscious software programs to have just the right features to maximize pain or pleasure produced by a given lump of mature computing hardware ("computronium").

Call computronium optimized to produce maximum pleasure per unit of energy "hedonium," and that optimized to produce maximum pain per unit of energy "dolorium," as in "hedonistic" and "dolorous." Civilizations that colonized the galaxy and expended a nontrivial portion of their resources on the production of hedonium or dolorium would have immense impact on the hedonistic utilitarian calculus. Human and other animal life on Earth (or any terraformed planets) would be negligible in the calculation of the total. Even computronium optimized for other tasks would seem to be orders of magnitude less important.

So hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the "hedons per joule" or "hedons per computation" of hedonium (call this H), minus the expected production of dolorium, multiplied by "dolors per joule" or "dolors per computation" (call this D).
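Written out, the approximation is just an expected-value difference. The numbers in this sketch are arbitrary placeholders to show the bookkeeping, not estimates of anything:

```python
H = 1.0  # hedons per unit of hedonium computation
D = 1.0  # dolors per unit of dolorium computation (set equal to H for illustration)

E_hedonium = 0.6  # assumed expected quantity of hedonium (arbitrary units)
E_dolorium = 0.1  # assumed expected quantity of dolorium (same units)

net = E_hedonium * H - E_dolorium * D
print(net)  # positive whenever expected hedonium exceeds dolorium, given H = D
```

With H = D, the sign of the total depends only on whether more hedonium than dolorium is expected, which is why the symmetry question below matters so much.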

By symmetry, my default expectation would be that H=D. Insofar as pain and pleasure involve accessibility to conscious reflection, and connections to decision-making and memory, these features pose like demands for both pain and pleasure. Evolutionary arguments about the distribution of pain and pleasure in the lives of animals, e.g. that in the lifecycle of some organism there are more things that it is important to avoid than to approach, are irrelevant to hedonium and dolorium. Pleasure (or pain) is set to maximum, not allocated to solve a control problem for a reproduction machine.

This is important to remember since our intuitions and experience may mislead us about the intensity of pain and pleasure which are possible. In humans, the pleasure of orgasm may be less than the pain of deadly injury, since death is a much larger loss of reproductive success than a single sex act is a gain. But there is nothing problematic about the idea of much more intense pleasures, such that their combination with great pains would be satisfying on balance.

So the situation would look good for hedonistic utilitarians of this sort: all that is needed is a moderately higher (absolute as well as relative) expected quantity of hedonium than dolorium. Even quite weak benevolence, or the personal hedonism of some agents transforming into or forking off hedonium could suffice for this purpose.

Now, the "measurement" of pain and pleasure brings in definitional and normative premises. Some may say they care more about pleasure than pain or vice versa, while others build into their "unit" of pain or pleasure a moral weighting in various tradeoffs. However, if we make use of data such as the judgments and actions of agents in choice problems, quantity of neuron-equivalents involved, and so forth, the symmetry does seem to hold. I would distinguish traditional and negative-biased hedonistic utilitarians in terms of the tradeoffs they would make between the production of hedonium and dolorium.