Conflicting Abstractions

My last post seems to be an example of an interesting general situation: abstractions from different fields conflicting on certain topics. In the case of my last post, the topic was the relative growth rate feasible for a small project hoping to create superintelligence, and the abstractions that seem to conflict are the ones I use, drawn mostly from economics, and the abstractions used by Bostrom, Yudkowsky, and many other futurists, drawn from computer practice and elsewhere.

What typically happens when it seems that abstractions from field A suggest X, while abstractions from field B suggest not-X? Well first, since X and not-X can’t both be true, each field would likely see this as a threat to its good reputation. If they were forced to accept the existence of the conflict, then they’d likely try to denigrate the other field. If one field is higher status, the other field would expect to lose a reputation fight, and so would be especially eager to reject the claim that a conflict exists.

And in fact, it should usually be possible to reject a claim that a conflict exists. The judgement that a conflict exists would come from specific individuals studying the questions of whether A suggests X and whether B suggests not-X. One could just suggest that some of those people were incompetent at analyzing the implications of the abstractions of particular fields. Or that they were talking past each other, misunderstanding what X and not-X mean to the other side. So one would need especially impeccable credentials to publicly make these claims and make them stick.

The ideal package of expertise for investigating such an issue would be expertise in both fields A and B. This would position one well to notice that a conflict exists, and to minimize the chance of problems arising from misunderstandings about what X means. Unfortunately, our institutions for crediting expertise don’t do well at encouraging combined expertise. For example, patrons are often interested in the intersection between fields A and B, and sponsor conferences, journal issues, etc. on this intersection. However, seeking maximal prestige, they usually prefer people with the most prestige in each separate field over people who actually know both fields simultaneously. Anticipating this, people usually choose to stay within a single field.

Anticipating this whole scenario, most people will likely avoid seeking out or calling attention to such conflicts. To seek out or pursue a conflict, you’d have to be especially confident that your field would back you up in a fight, because your credentials are impeccable and the field thinks it could win a status contest with the other field. And even then you’d have to spend time studying a field that your field doesn’t respect. Even if you won the fight, you might lose prestige in your own field.

This is unfortunate, because such conflicts seem especially useful clues to help us refine our important abstractions. By definition, abstractions draw inferences from reduced descriptions, descriptions which ignore many details. Usually that is useful, but sometimes it leads to errors, when the dropped details turn out to be especially relevant. Intellectual progress would probably be promoted if we could somehow induce more people to pursue apparent conflicts between the abstractions of different fields.

Yeah, good luck with that. Abstractions are wonderful, but don’t always promote clarity. Except possibly in pure mathematics, combining/layering abstractions (and thus abstracting them even further) risks ultimately obscuring reality. I suspect that my IQ isn’t high enough to follow some of your abstractions. This is, perhaps, why I always push you to use as-concrete-as-possible examples in your arguments. Regardless, your comments about the impact of perspectives that arise due to differing fields of expertise immediately reminded me of that old Joni Mitchell song, Both Sides Now: “I’ve looked at clouds from both sides now. From up and down, and still somehow, it’s cloud illusions I recall. I really don’t know clouds at all.” When one’s abstractions turn into clouds, perhaps one should seek more solid ground?

This reads to me as an attempt to win an argument by running off to a meta-argument in your preferred discipline, suggesting that your opponent’s argument is uninformed for social / structural reasons. It’s not very convincing.

Huh? My discussion seems to me completely symmetric regarding the two sides described. How do you see it favoring one side over the other?

Jayson Virissimo

If anything, around this part of the blogosphere, computer science is higher status than economics, so Hanson’s “attempt” would be mostly a failure (if that was his intent).

Silent Cal

I’m not sure I see this as a conflict between abstractions from different fields. Surely economics has no trouble modeling a foom; it’s just that the assumptions required are unusual for the field. I have less understanding of what abstractions the foomers are using, but I suspect there would be other computer-based abstractions giving opposite results.

In my reading, the disagreement is not about abstractions but rather about assumptions. I’m optimistic that it’s possible to lay out an economics-based model with tunable parameters that could lead to foom or no-foom, such that the foomers would accept the model and disagree about parameter values.
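The kind of tunable model Silent Cal describes can be sketched with a toy simulation. In the sketch below (all function names, parameters, and values are hypothetical illustrations, not anything proposed in the thread), capability grows as dc/dt = k·c^r, and a single exponent r, the returns to scale on self-improvement, tunes the outcome between steady growth and a foom-like blowup:

```python
# Toy "tunable" growth model: capability c grows as dc/dt = k * c**r.
# The exponent r is the key parameter: r <= 1 gives steady, bounded
# growth ("no foom"), while r > 1 gives finite-time blowup ("foom").
# All parameter names and values here are made up for illustration.

def simulate(r, k=0.05, c0=1.0, dt=0.01, steps=6000, cap=1e9):
    """Euler-integrate dc/dt = k*c**r; stop early if c reaches cap."""
    c = c0
    traj = [c]
    for _ in range(steps):
        c += k * (c ** r) * dt
        traj.append(c)
        if c >= cap:  # treat runaway growth as a "foom"
            break
    return traj

steady = simulate(r=0.9)  # diminishing returns: growth stays modest
foom = simulate(r=1.5)    # increasing returns: capability explodes
```

With these made-up numbers the r = 0.9 run ends around c ≈ 14 after the full horizon, while the r = 1.5 run hits the cap well before the horizon. On this framing the foom dispute reduces to an empirical disagreement over a parameter like r, rather than over which field’s abstractions apply.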

Ben Albert Pace

This isn’t meant as a knock-down argument, but how much relevant expertise do you have?

Silent Cal

Strictly amateur, but let me take this chance to expand on my intuition:
There was no blacksmith who invented industry and used it to take over the world. There were countries that advanced relatively quickly in industry and gained significant relative advantage. The continent of Europe pretty much did invent industry and use it to take over the world.

So, if interactions among firms during the AI transition end up being most like interactions among individuals during the industry transition, we might see some firms get rich but not to the point of hegemony. If firms are more like countries, we might see the top firms as a group come to dominate the world, but with power balanced among many firms. And if firms are more like continents, we might get a foom.

So why did industry have different distributional effects on different scales? It’s not an easy question, but it’s certainly within the field of economics and subject to economic modeling.

Ben Albert Pace

Have you read Yudkowsky’s ‘Intelligence Explosion Microeconomics’? Could you comment on how it affected your views with respect to this subject (or link to where you already have)?

No, I hadn’t seen that paper before. Section 3.9 does indeed make his views clearer to me. Too bad I don’t have a better process for informing me of such things.

Ben Albert Pace

Yudkowsky writes a paper pertaining to this topic, it has the word ‘microeconomics’ in the title, and Robin Hanson hasn’t heard of the paper… I am surprised. I suppose in far mode, I imagine things to be perfectly efficient. Still, I am quite surprised nobody told you about it.

If you sign up for the MIRI Newsletter, they tell you about all of their new papers. They don’t spam you or send lots of needless details, so I’d advise signing up for it. Or you could just start by looking at the ‘forecasting papers’ section of their site to see if there’s any relevant stuff there: http://intelligence.org/research/#forecasting

I am not in the habit of reading all papers related to the subject of intelligence explosion. I do try to be in the habit of reading papers that mention and discuss me, but for that I need more info than the title and author of the paper. I try to send a personal email to people when I post a paper mentioning and discussing them, but that isn’t a widespread custom.

Here is another interesting case of conflicting abstractions. I wonder what Robin Hanson’s take is?

Charlie

The general argument here would seem to suggest that “behavioral economists” are unpopular, because patrons prefer either psychologists or economists. Also, Kahneman’s Nobel is a bit odd in this context.

arch1

This seems to suggest that there is much value in collaborations such as the Santa Fe Institute.

[Entertaining-but-instructive aside: I just ran across a fascinating & hilarious interview with SFI cofounder Murray Gell-Mann, of whom Paul Kauffman once said he “…may know more things than any other single human being,” but who (we learn in the interview) considers himself a major league slacker: http://www.achievement.org/autodoc/printmember/gel0int-1. He touches on many topics, including academic silos, practical economics, and his own many neuroses]