Chace on Age of Em

I can’t remember ever reading a book before which I liked so much, while disagreeing with so much in it. This is partly because the author is such an amiable fellow. .. The writing style is direct, informal and engaging .. And the book addresses an important subject: the future.

As we disagree on much, I’ll just jump in and start replying.

Robin’s insistence that AI is making only modest advances, and will generate nothing much of interest before uploading arrives, seems dogmatic.

Given two events, my estimating that one is more likely to happen first seems to me no more dogmatic than Chace estimating the opposite.

Because of this claim, he is highly critical of the view that technological unemployment will be widespread in the next few decades. Fair enough, he might be right, but obviously I doubt it. He is also rather dismissive of major changes in society being caused by virtual reality, augmented reality, the internet of things, 3D printing, self-driving cars, and all the other astonishing technologies being developed and introduced as we speak.

I don’t dismiss such changes; they are welcome, and some will happen and matter. I just don’t see them as sufficient reason to think “this time is different” regarding massive job loss; the past saw changes of similar magnitudes.

He seems to think that when the first ems are created, they will very quickly be perfect replications of the target human minds. It seems to me more likely that we will create a series of approximations of the target person.

The em era starts when ems are cheaper than humans for most jobs. Yes of course imperfect emulations come first, but they are far less useful on most jobs. Consider that humans under the influence of recreational drugs are really quite good emulations of normal humans, yet they are much less valuable on most jobs. So emulations need to be even better than that to be very useful.

The humans in this world are all happy to be retired, and have the ems create everything they need. I think the scenario of radical abundance is definitely achievable, but I don’t think it’s a slam dunk, and I would imagine much more interaction – good and bad – between ems and humans than Robin seems to expect.

I don’t understand what kinds of interaction Chace thinks I expect less than he does here.

A couple of smaller but important comments. Robin thinks ems will be intellectually superior to most humans, not least because they will be modelled on the best of us. He therefore thinks they will be religious. Apart from the US, always an exceptional country, the direction of travel in that regard is firmly in the other direction.

In the book I gave citations on religious behavior correlating with work productivity. If someone has contrary citations, I’m all ears.

And space travel. Robin argues that we will keep putting off trying to colonise the stars because whenever you send a ship out there, it would always be overtaken by a later, cheaper one which benefits from better technology. This ignores one of the main reasons for doing it: to improve our chances of survival by making sure all our eggs aren’t in the one basket that is this pale blue dot.

I didn’t say no one would go into space; I pointed out that high interest rates discourage all long term projects, all else equal, including space projects.
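A minimal sketch of that discounting point, with made-up numbers (the payoff size, rates, and time horizon below are my illustrative assumptions, not figures from the post): at a high interest rate, the present value of even a large payoff far in the future is tiny, so long-term projects like star colonization compete poorly for capital, all else equal.

```python
# Present value of a future payoff: payoff / (1 + rate)^years.
# Illustrative only: compare a low "normal" rate with a high
# em-era-style rate for a payoff 200 years out.

def present_value(payoff, rate, years):
    return payoff / (1 + rate) ** years

low = present_value(1_000_000, 0.02, 200)   # ~2%/year: still worth thousands
high = present_value(1_000_000, 0.20, 200)  # 20%/year: worth almost nothing
print(low, high)
```

Note the asymmetry: multiplying the rate by ten doesn't divide the present value by ten; it drives it toward zero exponentially in the horizon length.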

As I understand Robin’s claim, the purported correlation isn’t between religiosity and intelligence, but between religiosity and productivity.

Of course the credibility of this claim depends on productivity being separable from intelligence. I think for many of us the natural intuition is to view this idea with incredulity. Surely, we think, the people with the absolute highest IQs are ‘really’ the most powerful, and any other factors that seem to outweigh this one are just an illusion or only exist due to some weird temporary accident.

Religious people have additional motives that non-religious people do not, along with all the motives that non-religious people have. So it is to be expected that religious people will be more productive, other things being equal.

Of course it does not mean that other things will actually be equal.

Alphaceph

*”Robin’s insistence that AI is making only modest advances, and will generate nothing much of interest before uploading arrives, seems dogmatic — Given two events, my estimating that one is more likely to happen first seems to me no more dogmatic than Chace estimating the opposite.”*

It would be interesting to discuss em vs AI timelines in more detail, rather than throw around intellectual insults. I personally think that it’s a close thing between human-level AGI and human-level ems, but it would be interesting to look at what evidence we could come across that would push that one way or the other.

I’d be interested in a blog post covering your latest thoughts on this. It’s been a decent amount of time since you last covered it. I find AI vs Ems more interesting than most OB topics (signalling and sociology have been overdone recently). Perhaps I’m asking for too much in a blog post though, and I really should read your book.

The comments regarding space seem to reflect an excluded middle. A quick jaunt to the Moon is not long term, it’s a few weeks. A trip to Mercury would take only a few months if there’s energy to burn. Whatever the doubling rate for the earthly economy, I can’t see it being faster than the doubling rate for a Mercury based economy (for long). And a *surface* earth based economy would be nuts compared to an *orbital* earth based one, given ems or even slightly em-like software.
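The doubling-rate comparison in that comment can be made concrete with toy numbers (the initial sizes and doubling times below are invented for illustration; nothing in the post supplies them): a much smaller economy with a shorter doubling time overtakes a much larger one surprisingly quickly.

```python
# Exponential growth: size(t) = initial * 2^(t / doubling_time).
# Hypothetical numbers: Earth starts 1000x larger but doubles every
# 15 years; a faster off-world economy doubles every 5.

def size(initial, doubling_time, t):
    return initial * 2 ** (t / doubling_time)

t = 0
while size(1000.0, 15.0, t) >= size(1.0, 5.0, t):
    t += 1
print(t)  # first year the smaller, faster-doubling economy is larger
```

The crossover year solves 1000·2^(t/15) = 2^(t/5), i.e. t = 7.5·log2(1000), so a 1000x head start buys only about 75 years under these assumed rates.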

asdf

It still doesn’t solve the fundamental Malthusian problem: there is no point in doubling the economy if per capita consumption is falling; it just quadruples the misery.

But that wasn’t the point anyway; the point was merely that starships will be unprofitable as long as there is more to gain by innovating instead.
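That overtaking argument is essentially the classic "wait calculation," which a toy model makes concrete (the distance, initial speed, and improvement rate below are all illustrative assumptions, not claims from the discussion): if achievable ship speed keeps improving, a later launch can arrive before an earlier one, so launching is only worthwhile once further waiting no longer pays.

```python
# Arrival time for a launch in year t: t + distance / speed(t),
# where achievable speed improves by a fixed factor each year.
# Illustrative numbers: ~4.25 light-years, starting at 0.01c,
# with speeds improving 3% per year.

DIST = 4.25      # light-years
V0 = 0.01        # initial speed, as a fraction of c
GROWTH = 1.03    # assumed yearly improvement in achievable speed

def arrival_year(launch_year):
    speed = V0 * GROWTH ** launch_year
    return launch_year + DIST / speed

best = min(range(200), key=arrival_year)
print(best)  # launch year that minimizes arrival time
```

Under these numbers the best launch is decades out, and every earlier ship is overtaken; in a growing economy the same logic repeats each year, which is why "wait and innovate" can dominate for a long time.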

You don’t comment on what I think is his most pertinent criticism: “The incentive to enhance the intelligence of an entity which works for you is irresistible, and once we have models of minds in silico, it will be much easier to do so.”

*”In the book I gave citations on religious behavior correlating with work productivity. If someone has contrary citations, I’m all ears.”*

But those wouldn’t seem to be the only relevant correlations – or even the main ones. What about the (strong negative) correlation between the productivity of a society and its religiosity?

People experiencing brain emulations and being highly religious? The “spiritual” would have been palpably reduced to the physical. [Would an emulation of Bryan Caplan be able to maintain his religiosity?]
