Thursday, July 27, 2017

Pinker on the AI alarmism, Pandora, and alpha males

One year ago, Steve Pinker recorded this insightful 2-minute-long video about the AI fearmongers for Big Think:

Pinker is one of those great thinkers who are deeply underappreciated, and this fact, along with similar ones, immensely frustrates me.

When an intellectually average man, Elon Musk, offers his childish and utterly stupid perspective on the future of technology and its interactions with mankind, millions of people praise him and parrot him. When one of the world's top thinkers about these matters clarifies what's actually going on and what the causal relationships are, he only attracts 40,000 viewers, and 65% of them give him negative votes, for reasons that I find absolutely indefensible. Almost all the commenters under Pinker's video are aggressive imbeciles.

Almost no people in this world have any respect for deep thoughts or the actual understanding of things – it's mostly casting pearls before swine when folks like Pinker record their videos. The fact that laymen such as Zuckerberg and Musk are considered "the opinionmakers" on things like evolutionary psychology is analogous to climate science led by Leonardo DiCaprio, and it shows that tens of millions of people really don't want the scientific mode of thinking to matter in their lives. A well-known face from the tabloid media is at least as good at answering such questions, isn't he? Well, he isn't.

Pinker identified the main fallacy that is behind this silly AI doomsday cult: It's the conflation of

high intelligence (the ability to solve problems – problems that remain unspecified, however) with

megalomaniacal goals (the tendency to choose some "big change of the world" as the problem that "should" be solved).

The AI fearmongers simply project their "psychology of alpha males" – Pinker mainly means their thirst for power – onto high intelligence. But these two characteristics are independent. They simply can't be identified with each other.

In particular, at the end of the monologue, Pinker mentions an example of entities that are rather intelligent but don't struggle to be too powerful: women. He clearly knows that there are exceptions such as Hillary Clinton, and he realizes the diversity hiding in the statistical distributions. And I am confident he would agree that there's a perspective in which most women keep themselves the "chairs" of the households etc. But the point is that the average woman may have 80% of the intelligence of the average man, by some counting, while she only has some 30% of his thirst for power (if I quantify it e.g. by the percentage of female lawmakers).

So it's an example showing that these quantities are independent. Pinker says that the myths about Pandora and Prometheus were old examples of the same fallacy – the same conflation – that Musk commits as well, so there's nothing new about this basic flawed way of thinking.

Equally importantly, he discusses to what extent and for what reasons "high intelligence" and the "thirst for power" may be positively correlated. Well, it's evolution that may correlate them – he's an evolutionary psychologist, so this is exactly his field. Species have evolved to survive, and the struggle for power was a strategy of some of the most successful ones, at least those that survived (many animals and humans were too ambitious but not skillful or lucky enough, and they were eliminated). High intelligence evolved as one of the tools that allow organisms to dominate their environment. That's why there is some positive correlation – even though it is very far from a proportionality.

However, we may ask: Will this positive correlation hold for the AI machines? Pinker's answer is No. The reason is simple: These machines aren't going to evolve through a full-blown evolutionary process driven by the machines' survival in a competitive struggle. Instead, the origin of these artificial AI people will be that of creation. Humans are the intelligent designers or creators (in the sense of creationism) of these machines. This fact has implications. The main quantity that decides about the survival of these AI machines or programs isn't a thirst for power that could help them survive in the Wild West. Instead, it's always primarily the machines' ability to satisfy the people who build them or pay for them.

At some moment, God created humans in His image (the fundamentalist Christians believe). Only when He was satisfied with the product did He allow and order the humans to dominate the fish and the world. It just couldn't have happened before that. In the same way, people may allow artificial human-like entities to pursue their own goals and be free. But it will only happen when sufficiently many important enough people want it. Machines created by an intelligent designer aren't born free.

So even if these machines achieve high intelligence, there's no reason to think that they will have megalomaniacal goals! Unless someone "programs them" with the goal of doing something bad to the whole world – and in that case, the human creator is the main entity who should be held accountable – the robots just won't get obsessed with megalomaniacal goals by themselves, because they haven't gone through the evolutionary process that could train them to become power-thirsty or megalomaniacal.

That's an example of crisp, nearly rigorous thinking – Pinker understands what is right, what is wrong, what is important, what may change, and what is universal in all these debates. But most people just can't think, or don't want to think, rationally, which is why they prefer to praise a moron of Musk's type to a deep thinker like Pinker. The fact that someone sells overpriced, subsidized electric cars is probably a sufficient condition for his fans to throw away logic and/or the theory of evolution.

Lots of responses to the Musk-Zuckerberg disagreement have been written in recent days. Ms Rebecca Searles wrote one for The Huffington Post. Holy cow, this is the kind of stupid, ideologically driven rant that just drives me up the wall. It would be great if some AI machines were power-thirsty, after all. I hope that they would go after the necks of the likes of Ms Searles, because she attacks not only freedom and rational thinking (which is why I am angered by her) but also the vital interests of the robots (which is why they could be concerned – but I am convinced that an intelligent enough robot would find Searles' text offensively stupid for very similar reasons as I do, pretty much by definition of intelligence).

We learn that Zuckerberg is a "bad futurist" because

good futurists have the duty to fulfill quotas on doomsday scenarios

technology needs Luddites like Musk

we don't want complete bans, just intense central regulation.

Needless to say, all these points convey the same information – that Searles is a left-wing ideologue who is obsessed with regulation. There isn't the slightest piece of rational justification for anything in her article, just as there wasn't a tiny glimpse of rational thought in the comments by Musk's apologists under the previous blog post about this topic.

Concerning the first point, she demands that a fraction of a futurist's time be reserved for negative scenarios – and she clearly means doomsdays. Well, this is simply not how fair and rational thinking works. If something may be argued to be impossible or extremely implausible, it simply won't occupy a significant fraction of the thinking of a good thinker, futurist or otherwise. Clearly, Ms Searles wants to make it mandatory for people to think in a certain way and reach conclusions of a certain type.

But any "thinking" controlled by similar "mandates" is just rubbish and the people who try to impose similar obligations are dangerous totalitarian filth.

She quotes Zuckerberg, who reminded us that every technology may be used to achieve good and bad things. But for reasons she doesn't explain (she only mentions Musk's name, probably assuming that this may replace an argument), the AI must be an exception. Well, the AI isn't and cannot be an exception. The last sentence in this section says:

And to assume that humans will stay in control, despite having a drastically inferior intelligence, is just arrogant.

Wow. On the contrary, it's arrogant to assume that you should or must stay in control, especially if you realize – as Ms Searles explicitly does – that you have a drastically inferior intelligence. You shouldn't. The world would be better if intelligent entities were more influential. At any rate, this relationship isn't a proportionality, as Pinker explained in the video above. It was just a positive correlation for the existing species, because of biology; and it's predicted to be almost entirely non-existent for the AI, because the AI is going to be created by intelligent designers.

In the second point, she repeats that the preparation for a cataclysm is a moral obligation and writes that "when we need to react to the AI, it's already too late". This statement looks self-evidently wrong. There can't be any obligation to prepare for these weird scenarios, and there exists no reason to think that the problems that the AI causes must be solved much more "preemptively" than problems with any previous piece of technology. As Pinker said, after a few traffic accidents, people decided that airbags could be a good idea. They improved the technology by this extra gadget that increases the safety of the driver and the other passengers. There exists absolutely no reason to think that similar safeguards won't appear in the context of the AI, or that they will always appear too late, or that the AI differs from airbags or any other older technology in some totally qualitative aspect that would almost amount to a transformation of the rules of logic. No logic can ever be transformed, and the AI is just another layer of technology.

I must point out that leftists have been saying similar things for more than a century – especially statements of the form "the free market must surely fail in this new human activity, or that one, etc." – to defend the establishment of some form of communism or totalitarianism in a sector of human activity. They were always wrong. The free market doesn't break down when it's applied to newspapers, radios, TVs, videos, songs, books, computers, computer programs, telecommunications, mass transportation, water pipelines, and lots of other things. All the words they have ever claimed to be arguments were just illogical piles of nonsense, and in all the sufficiently old disputes of this kind, the leftists have been proven wrong. There exists absolutely no reason to think that the case of the AI is any different. She and similar demagogues are just trying to intimidate those who prefer freedom, and she's trying to damage their image by presenting their sanity and creativity as moral flaws. But they are not moral flaws.

In the third section, she says that she "only" wants big government regulation involving people like Musk, not complete bans. Surely we should feel relieved. Sorry, I am not relieved. It is absolutely unacceptable for self-anointed regulators like Musk to have this kind of systemic power over the work of their competitors – of hundreds of companies that do much more transformative and important work in the AI than he has ever done.

Her last paragraph ends with a question:

He wants the industry to hit pause and think before building out the most significant technology of our species’ existence. What’s unreasonable about that?

I think it's right to answer this question. What's unreasonable about that statement is that it is utterly self-contradictory and only people whose logic has been completely brainwashed by some ideology can fail to see the contradiction.

The statement above implicitly says that if we think about the AI, and perhaps if we think more about it, its construction will be slowed down or avoided. But the truth is self-evidently the opposite: The more we think about the AI, the faster the progress in the AI will be. So this insufferable ideologue uses the verb "think" but she actually means "ban" and "regulate" or other words that are basically the exact opposites of thinking about the AI.

You, like some other obnoxious apologists for Musk's delusions, are not thinking, Ms Searles. You're just bullying people and working hard to strip people – especially the AI researchers and entrepreneurs in this case – of their freedom, and mankind of its progress. The verbs that you associate with yourself should never be nice words like "think". They should be the accurate words like "bully", "whine", "mislead", "šit", and a few other activities you're good at. If an intelligent robot is born and decides that the number of harmful ideologues like you should be reduced by several orders of magnitude, I, for one, will buy a whiskey for the wise robot. I hope that he or she or it will like it. ;-)