This page explores the intersection of life extension and artificial intelligence, including questions such as whether it is preferable to achieve life extension directly (by doing research on it) or indirectly (by first advancing artificial intelligence enough that it can aid life extension research).

For instance, one could argue as follows: directly working to advance artificial intelligence capabilities is dangerous, so for the moment we should focus on AI safety research, per Differential intellectual progress. Life extension research, by contrast, does not carry comparable risk, so we can pursue it as well. Even if we make little direct progress on life extension but do succeed in creating a friendly AI, we might still achieve life extension indirectly. In other words, until friendly AI is achieved, one should work directly on life extension research; after that point, achieving it indirectly becomes more practical. (This general argument can apply to causes other than life extension as well.)

External links

Diego Caleiro’s comment, in which he notes, in the context of differential intellectual progress, that “if aging is cured, chronologically old individuals no longer would have an incentive to accelerate the intelligence explosion”.