It'll be like that Twilight Zone episode where the guy ends up in an afterlife in which he receives everything he wishes for, and the afterlife turns out to be hell.

All technology is created to satisfy our desires, and AI/the singularity will basically allow us to satisfy *all* our desires, such that there are no desires left. That is, except one: liberation from death, which involves physical risk. So the future is humanity living deep underground, lying down, pumped full of chemical and electric drugs, with robots providing for it.

Coincidentally, it also explains why we don't see any intelligent life when looking up. It exists, but in more-or-less stasis.

We're on track to have 8 billion people. Already, 85% of the world lives in abject poverty, and we extract three planets' worth of resources just to keep what we currently have running a day longer. There will not be jobs for 8 billion people in the long run. There won't be land, fresh water, and entertainment for 8 billion people without fishing the oceans clean and polluting every livable square inch of the planet. We're already in the process of destroying every food chain and ecosystem around us. When I ask what another billion people gets us, nobody can ever really say; the reason most of the billions we have now exist is that cheap labor and Ponzi-scheme economies require them to keep costs low for those at the top. I'm glad I won't be around in 60-100 years to see what it will be like, as every measure and statistic shows quality of life on the decline around the world. We live in interesting times.

Accelerating technological change, of which AI is the highest-profile example, has to be unsustainable if we remain stuck on Earth. We could carry on for decades, but not centuries. As technological power increases, the potential for a fatal mistake also increases.

If we become multi-planetary, then distance may protect humanity from an existential error. In centuries past, wars and plagues in Europe did not affect Asia or America. Now they would potentially be world-wide and calamitous.

Perhaps a period of equilibrium will be reached. I wouldn't be surprised if it was AI that made that happen. Humans are not great at managing the future beyond about 5 years.

Apologies in advance; my comment is more general, not specific to one topic. Re: Ars on our lunch break: rogue AI, nefarious GMOs (and Russia, again). I needed to go on a diet; thank you for destroying my appetite.

Of course we don't like to talk about them. Though I would argue that part of that is because we already talk about them a lot. It is more fun to talk about the more novel and less likely threats; it can be a way of taking a mental vacation from the big problems we know and struggle with.

Similar to how the most likely threats to my personal existence are things like overeating, lack of exercise, and car accidents. We know this; it is talked about a lot. But they are hard problems, and sometimes it is more fun to think about how to keep myself safe from ninja assassins.

I think that is a big part of it, but the time to solve this is before we ruin the fishing stocks, GMO all our crops into one genetically identical strain across half the planet, drive thousands of other species to extinction, and make our environment unlivable. At that point discussions won't matter. This lack of willingness to tackle unpleasant topics as a species will probably drive us to extinction. Look at the lead-up to the 2007/2008 stock market mess: everyone saw the signs, everyone knew a huge bubble was inflating, and nobody wanted to stop it. I think we already passed the point of no return 10 years ago, and we're still pushing on the gas in the same direction.

And to give another example, not of an existential threat (though it is that too) but of being unable to stop: oil. Even though we know we are now in runaway climate change, we will still use every last drop.

I wonder how many meters underwater Manhattan will have to be before we stop?

The sentiment is nice. The goal is nice. The environment does not support it, and never will.

All the evidence would seem to indicate that we don't have decades to prepare for beyond-human general intelligence, and that, given the current social environment, no amount of preparation will be sufficient.

At a recent conference, more than half of AI researchers thought human-level general AI was 10-20 years off, with vastly superintelligent AI a few years after that. There is a high chance it will wipe humans out; not necessarily out of malevolence, just out of not caring about the human ants.

Next to that risk, and on such a short timescale, all the other environmental, technological, demographic, and political problems facing humans are as nothing. We can't realistically halt the drive to AI, given the huge competitive advantage it bestows on whoever gets it first; China and the USA are in competition, and that is not going to stop. We also can't realistically keep a superintelligent AI in a box: it can cajole (immortality) and threaten (a simulated hell for you, your family, and your friends).

https://tidesandcurrents.noaa.gov/sltre ... id=8518750

In this NOAA link you can see that the 50-year sea level rise trend for Manhattan was actually higher in 1950 than it is now. There is a strong 60-year cycle in sea level change (the PDO cycle), seen around the world, that swamps the long-term trend signal. We are currently in the 'up' phase, so you can't take recent rises as strong evidence of acceleration. Overall, Manhattan sea level has been rising at ~3 mm/year since the 1880s.

The current rate of sea level rise in much of the world, measured by tide gauges and corrected for the local rate of subsidence (e.g., Manhattan is subsiding about 1.5 mm/year), is generally about 1-2 mm/year, with some regional variation. There is a significant divergence problem between the high-error-bound satellite data of the last few decades and the longer-term, low-error-bound tide gauge data.
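To make the cycle-versus-trend point concrete, here is a toy sketch with invented numbers (a synthetic 1.5 mm/yr secular trend plus a 20 mm, 60-year oscillation; not the actual NOAA series): fitting only a recent 25-year window during the 'up' phase of the cycle roughly doubles the apparent rate.

```python
# Toy model (made-up numbers, not real NOAA data): a steady 1.5 mm/yr
# sea-level trend plus a 60-year oscillation, sampled yearly since 1880.
import math


def sea_level_mm(year):
    """Synthetic relative sea level: linear trend + 60-year cycle."""
    trend = 1.5 * (year - 1880)                                   # mm
    cycle = 20.0 * math.sin(2 * math.pi * (year - 1880) / 60.0)   # mm
    return trend + cycle


def fitted_rate(years):
    """Ordinary least-squares slope (mm/yr) over the given years."""
    years = list(years)
    levels = [sea_level_mm(y) for y in years]
    n = len(years)
    ybar = sum(years) / n
    lbar = sum(levels) / n
    num = sum((y - ybar) * (s - lbar) for y, s in zip(years, levels))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den


long_rate = fitted_rate(range(1880, 2018))   # full record
short_rate = fitted_rate(range(1993, 2018))  # recent 'up'-phase window
print(f"full-record rate:  {long_rate:.2f} mm/yr")
print(f"short-window rate: {short_rate:.2f} mm/yr")
```

The fitted numbers depend entirely on the assumed amplitude and phase; the only takeaway is that a trend fitted inside a known oscillation's rising phase overstates the secular rate.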

Speaking of Skynet: I just finished watching the "Sarah Connor" TV series from 10 years ago. If you can get past the constant comparisons to the "Terminator" movies, it really was quite good. It's a real shame it was cancelled, especially since the last episode was full of interesting ideas for the third season (which was never made).

If you are interested in these overpopulation worries, read John Brunner's Stand On Zanzibar (published 1968, Hugo winner in 1969); he speaks at length about exactly these points. His extrapolations from the late '60s to the early 21st century were pretty spot on. And yes, it is a distinctly dystopian novel, which is a very sad statement about our modern world.

In the Stargate TV series, there was a race known as the ‘Ancients’. They progressed along two paths, with one contingent going for ever-more-advanced technology (Stargates) and one contingent aiming for transcending the physical (Ascension). The technological side made magnificent gains, but eventually went extinct through invasion and plague. The transcending-the-physical ones eventually also reached their goal and ‘died-out’, but in a different manner. They remained, but not in a form that AIs could reach. This is pertinent to us in that any ancient (small ‘a’) AI may be aware of being Alone. (Not in an emotional way, but perhaps in a 'lack of stimulation' way.)

Now, assume real-world AIs have been ‘left behind’ when their parent biological species departed, one way or another. Further assume that one-such ancient AI has found us, and has been 'waiting' all this time for our technology to mature enough for an interface.

Human (post-contact) culture may just be the result of an ancient AI's past experiences:

The ancient AI loses if we advance technologically-enough to encounter other species and incur wars or plagues (extinction). It also loses if we go non-physical. (It even loses if humanity destroys its ecosystem enough to go extinct.)

So, just like Little Red Riding Hood, an ancient AI may be manipulating humanity to be in some ‘safe’ zone – “not too hot, not too cold, but just right.” A careful monitoring and control of human progress, so as to minimize the risk of being Alone again…

The 1-in-x-million chance of CERN-collider-destroys-the-universe in the podcast was hyperbole for a number of reasons. One of the simplest ones is that nature is continuously running such experiments even without us, at even higher energies, with cosmic rays bombarding the Earth's atmosphere for the last 4.5 billion years. Yet the Earth and the universe still exist.
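A back-of-envelope calculation (standard fixed-target kinematics, rough numbers) makes the comparison concrete. The highest-energy cosmic ray ever observed, the so-called "Oh-My-God" particle at roughly 3×10^20 eV, strikes the atmosphere as a fixed-target collision, and even after accounting for that, its equivalent center-of-mass energy dwarfs the LHC's ~13 TeV:

```python
import math

PROTON_MASS_EV = 0.938e9   # proton rest energy, ~0.938 GeV
COSMIC_RAY_EV = 3e20       # highest-energy cosmic ray observed
LHC_CM_EV = 13e12          # LHC proton-proton center-of-mass energy

# For a fixed-target collision with E_beam >> rest energies,
# E_cm is approximately sqrt(2 * E_beam * m_target * c^2).
cosmic_cm_ev = math.sqrt(2 * COSMIC_RAY_EV * PROTON_MASS_EV)

print(f"cosmic-ray collision E_cm ~ {cosmic_cm_ev / 1e12:.0f} TeV")
print(f"LHC E_cm                 = {LHC_CM_EV / 1e12:.0f} TeV")
print(f"ratio                    ~ {cosmic_cm_ev / LHC_CM_EV:.0f}x")
```

So nature has been running collisions dozens of times more energetic than CERN's, over the whole atmosphere, for billions of years.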

Also: ooooooooooooooooooooouch. Holy crap, and this is why you don't get non-specialists to comment on fields, however "smart" you think they are. This guy doesn't know how AI research works, and he's way out in left field on how brains work, too. Never start talking quantum woo in neruoscience...

IIRC AI was pretty low on the existential risk scale. Un(or poorly)regulated biotech outbreaks will likely wipe out humanity before AI gets a chance at bat.

Nature has been concocting deadly bacteria and viruses that kill humans since before there were humans, on a scale that would be difficult to match. I don't see why we should think garage projects are going to come up with something that propagates well enough in nature to become a plague more often than the jungles of Africa produce outbreaks of Ebola or farmed chickens incubate a new deadly flu.

Yeah, they needed to add a whole lot of zeroes to the end of that 1-in-x-million number. If it were actually possible, the universe would have winked out before the first atom formed. It was like the fear about nuclear explosions igniting the atmosphere: the chance was actually zero; they just hadn't done the math to figure that out. That was legitimately a problem with the research, though. They shouldn't have built an H-bomb before they had done that calculation and shown mathematically that the atmosphere really couldn't support a nuclear chain reaction.

Garage projects could out-do nature because a super-deadly disease is typically not evolutionarily successful, so natural selection rarely pushes lethality to the maximum. If you need proof humans can do better than nature at horrible diseases, just look at that flu-enhancement study a few years back.
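That trade-off can be sketched with a deliberately crude toy model (all numbers invented for illustration): in the simplest epidemiological picture, a pathogen's basic reproduction number is contact rate times infectious duration, so a strain that kills its host quickly cuts off its own spread.

```python
# Toy illustration (made-up numbers) of why hyper-lethal pathogens tend
# to lose out in nature: a host that dies quickly has less time to
# transmit, so the basic reproduction number R0 falls as virulence
# rises. A deliberately engineered pathogen is not shaped by this
# selection pressure.

def r0(contacts_per_day, infectious_days):
    """R0 in the simplest model: transmission rate x infectious duration."""
    return contacts_per_day * infectious_days

# A milder strain keeps its host up and about for two weeks...
mild = r0(contacts_per_day=0.3, infectious_days=14)
# ...while a strain that incapacitates or kills within two days
# barely gets a chance to spread.
deadly = r0(contacts_per_day=0.3, infectious_days=2)

print(f"mild strain   R0 = {mild:.1f}")    # above 1: epidemic grows
print(f"deadly strain R0 = {deadly:.1f}")  # below 1: outbreak fizzles
```

The point is only qualitative: in nature, extreme lethality is usually self-limiting, which is exactly the constraint a lab-enhanced pathogen need not respect.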

They spent a portion of their limited time anticipating and preempting exactly this type of reflexive and defensive reaction.

The issue isn't so much how AI research works. Questions of present versus potential future scenarios are valid, but so are the issues of private investment versus socialized risk, and of unintended consequences regardless of initial intentions.

For someone wishing to come across as a rigid rationalist, you're basing your assessment of Naval's understanding of neruoscience [sic] and physics on a fleeting soundbite.

The thrust of what he was trying to articulate through this informal medium was that he doesn't know exactly when an AI of the level being discussed would come online, but that he's skeptical it's on the near horizon relative to other, more immediate concerns (which were the greater context of this interview).

Specifically, he seemed to be sharing his intuition that mapping and modeling a human brain won't be sufficient to reproduce (human) intelligence. He alluded to the well-established concept that the whole can sometimes vastly exceed the sum of its parts.

Sure, but I think it highlights a much more relevant point. They are dismissing the "expert" view by saying they're too close to the issue, and instead you should talk to "smart" outsiders. Hawking, Gates and Musk were identified in particular.

That is not how we deal with these things, because "smart" people are often completely wrong in their intuitions outside their actual specialization. There's a reason "appeal to authority" is one of the classic logical fallacies. His reference to quantum woo in the brain just emphasizes that he's venturing into topics he doesn't actually understand in any detail.

It's that engineered diseases aren't limited by natural selection, plus gene editing becoming cheap and easy, plus large parts of the world having no biotech regulation whatsoever. And it doesn't even require a bad actor; all it takes is one tech making one mistake.