AI: Superintelligence - paths, dangers and strategies

Request: posts to this thread should adhere generally to the title.

The thread title is borrowed from the book of the same name, which I read a few years back. The book itself is an attempt to explore possible problems and possible solutions using analysis, logic, etc.

I'm sure we all know computers and their software make the modern world go round. Sure, humans matter, but our collective nervous system is silicon-based and its inner workings are unknown to most. We rely on this nervous system to be fast, accurate and compliant.

And we ask more of it all the time. There are strong incentives to make our silicon nervous system more intelligent, even give it a brain you might say. And we little cells, who are now sort of in charge, may not be for long. OOOOhhhhh. Spooky.

A new version of AlphaGo Zero trounced an older version 100-0. The older version, which beat the best humans, learned by studying old games between humans and then playing itself. AlphaGo Zero just had the rules and played itself.

Don't know if it's mentioned in the article, but I'd bet $2 no human understands AlphaGo Zero's play. If they did, they might have a chance against it, right? Instead, it is described as a 'Go God.'

Now Go has a solution space far too large to simply look ahead more than a few moves. AlphaGo Zero must operate using many strategies. It cannot know the best move for sure in most cases.

So some random thoughts/questions:
1. AlphaGo Zero had to have a set of rules in order to teach itself winning strategies/gameplay. Humans fed AlphaGo the rules. For AIs let loose on the public (i.e., their actions affect the real world), what sort of rules do you think are being fed to AIs today? What are the ramifications? What sort of rules might be better? <----important questions

2. AlphaGo Zero's workings are now -inscrutable-. The workings of any moderately complex learning neural network are also -inscrutable-. Which is to say, no human knows how to predict [given input A, processed through the network, what will result B be?] the first time around. It is not 100% predictable because the inner workings are not understood. Is this a problem?
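To make that concrete, here's a toy sketch (Python/numpy, with made-up weights, not any real system): every parameter of a small net is right there to inspect, yet the only practical way to find output B for input A is to run the whole computation.

```python
import numpy as np

# Toy two-layer net with made-up weights. Every parameter is visible,
# yet predicting the output for a new input "by inspection" is hopeless;
# you have to run the computation end to end.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # hidden-layer weights, fully inspectable
W2 = rng.normal(size=(8, 1))   # output-layer weights, fully inspectable

def predict(x):
    h = np.maximum(0, x @ W1)  # ReLU hidden layer
    return float(h @ W2)       # scalar output B for input A

a = np.array([1.0, -0.5, 2.0, 0.3])  # input A
b = predict(a)                        # output B: determined, but not deducible by eye
print(b)
```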

2a. Human inner workings are also inscrutable. We have some glimpses of how we work, but nobody understands enough to predict, given input A, what output B will be. And we've developed a TON of hardware/software to deal with this so we don't all murder each other. These compensation mechanisms include body language, laws, pecking orders and language.

Given 2 and 2a: Are there parallels between AI and humans vis-à-vis compensation mechanisms?

With the proviso that machine learning is something of a blind spot of mine at the moment, regarding 1:

1) It depends what you want them to do, I guess. Playing games like Go is one thing, since the rules are "fixed" and quite simple in some ways and, more importantly, there is a clear objective. So I guess the question is what you mean by "better", because the precise problem arises when it's not clear what "better" means. As for the idea of AIs let loose ... it's important to be more careful there, since we're still nowhere close to a "general AI". Instead we have "AIs" that are good at doing one thing in particular, like recommending ads, for example.

2) Yep, depending on the application this is a problem, but I think there's ongoing work on trying to extract explanations from neural networks. I think it's not a problem when it is easy to verify or test a neural network's output (e.g., let it play Go). But some explanation is important when it is harder to verify or test (e.g., invest a billion bucks in this stock). Also, explanations can enhance our own understanding.
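On the "extracting explanations" point, here's a minimal sketch of one such idea (the model and numbers below are invented for illustration; real tools like LIME or SHAP are far more sophisticated): treat the model as a black box and score each input feature by how much a small nudge to it moves the output.

```python
import numpy as np

# Perturbation-based "explanation": score each feature by how sensitive
# the black-box output is to a small nudge of that feature.
def black_box(x):
    # Stand-in for an opaque trained model.
    return float(np.tanh(2.0 * x[0] - x[1] + 0.5 * x[2]))

def importance(f, x, eps=1e-3):
    base = f(x)
    scores = []
    for i in range(len(x)):
        nudged = x.copy()
        nudged[i] += eps
        scores.append(abs(f(nudged) - base) / eps)  # sensitivity to feature i
    return scores

x = np.array([0.2, -0.1, 0.4])
scores = importance(black_box, x)
print(scores)  # feature 0 dominates, mirroring its 2.0 coefficient
```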

2a) I think we tend to anthropomorphise technology too much (or perhaps the other way around ... we tend to model technology in our own image).

With the proviso that machine learning is something of a blind spot of mine at the moment, regarding 1:

First off, whoa, someone answered, thanks. After posting this turgid mass, I figured it would sink beneath the waves with nary a trace. I am only an interested observer and dabbler. I think the initial questions were more in the applied philosophy category - if such a category exists.

1) It depends what you want them to do, I guess. Playing games like Go is one thing, since the rules are "fixed" and quite simple in some ways and, more importantly, there is a clear objective. So I guess the question is what you mean by "better", because the precise problem arises when it's not clear what "better" means. As for the idea of AIs let loose ... it's important to be more careful there, since we're still nowhere close to a "general AI". Instead we have "AIs" that are good at doing one thing in particular, like recommending ads, for example.

Good point - better for whom? Obviously specialized AIs are deployed because those using them expect better results. On the commercial side this would be $$; on the government side the initial deployments would be by spooks (I would think) and maybe applied science. In both cases I find my personal space invaded, and it's not better for me as far as I can tell.

I found this article just now, and it's just some dude's opinion, but it states more concisely part of what I'm getting at.

"The biggest risk isn't that AI will develop a will of its own, but that it will follow the will of people that establish its optimization function, and if that is not well thought out - even if intent is benign - it could have quite a bad outcome," Musk said.

How he arrived at this, I don't know. But I arrive at it by noting that a particular AI system (AlphaGo) performed best at optimizing game play when freed from the constraints of human experience. Thus, for an AI which has significant impact, those that feed it the rules have the potential to benefit themselves and not others. The efficient market hypothesis waves all of this away with the notion that someone else will step in and give us better cable TV. Oh wait, that didn't happen here in the US; cable TV sucks the rankest diseased goat balls in the cosmos. But that's monopoly, not AI. Then again, Amazon, for example, is headed toward, or has arrived at, monopoly status in many areas. And the government, in its domain, is a monopoly of sorts.

2) Yep, depending on the application this is a problem, but I think there's ongoing work on trying to extract explanations from neural networks. I think it's not a problem when it is easy to verify or test a neural network's output (e.g., let it play Go). But some explanation is important when it is harder to verify or test (e.g., invest a billion bucks in this stock). Also, explanations can enhance our own understanding.

2a) I think we tend to anthropomorphise technology too much (or perhaps the other way around ... we tend to model technology in our own image).

On the anthro-thing, I tentatively posit it's the other way around, since we create the technology. The holy grail is human-like ability, just better. If I encounter a person who is unreadable and tells me nothing (except in his actions), at the very least I proceed with caution.

--

How does this analogy strike you... Is AI like the automobile? A good thing, creating a new world so to speak. Yet thousands were soon getting run over or killed in accidents. Eventually highway regulations came into being, then highway engineering, and later car safety standards, to keep the damage to an acceptable tens of thousands of deaths annually in the US.

But this leads to another quote from the article.

Musk wants the government to set regulations in place to root out threats early. "AI is a rare case where I think we need to be proactive in regulation rather than reactive," said Musk. "By the time we're reactive in AI regulation, it's too late."

All Watched Over By Machines Of Loving Grace is a series of films about how this culture has been colonised by the machines it has built. The series explores and connects some of the myriad ways in which the emergence of cybernetics - a mechanistic perspective of the natural world that took hold particularly in the 1970s, alongside emerging computer technologies - intersects with various historical events, and vice versa. The series details the interplay between the mechanistic perspective and the catastrophic consequences it has in the real world.

2. AlphaGo Zero's workings are now -inscrutable-. The workings of any moderately complex learning neural network are also -inscrutable-. Which is to say, no human knows how to predict [given input A, processed through the network, what will result B be?] the first time around. It is not 100% predictable because the inner workings are not understood. Is this a problem?

Great OP and great topic. But this is where you lose me. It is possible to decompose a neural network, examine its inner workings/states, determine the chains of causality which determine its output. Complex? Yes. Time-consuming? Perhaps. Difficult? No doubt. But inscrutable? Qua preclusive to analysis? I don't think so.

Great OP and great topic. But this is where you lose me. It is possible to decompose a neural network, examine its inner workings/states, determine the chains of causality which determine its output. Complex? Yes. Time-consuming? Perhaps. Difficult? No doubt. But inscrutable? Qua preclusive to analysis? I don't think so.

A fair point. And I confess, possessing an inscrutable brain that spits out thoughts from where I know not, I have to say this was more an intuitive statement than a well-thought-out one.

Kate Crawford, principal researcher at Microsoft Research, and Meredith Whittaker, founder of Open Research at Google, want to change that. They announced today the AI Now Institute, a research organization to explore how AI is affecting society at large. AI Now will be cross-disciplinary, bridging the gap between data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence.

AI Now will focus on four major themes:

1. Bias and inclusion (how can bad data disadvantage people)
2. Labor and automation (who doesn't get hired when AI chooses)
3. Rights and liberties (how does government use of AI impact the way it interacts with citizens)
4. Safety and critical infrastructure (how can we make sure healthcare decisions are made safely and without bias)

Well, being serious, when I read such things I tend to play down the actual impact they might have. Another learned body warning us about things to come, to be sure, and ultimately mostly ignored. These are important people in the industry, but not the most important.

Another thing - three of the four focus areas are effects, and not fundamental IMO. Only rights and liberties, in the broadest sense, strikes me as a fundamental issue: how humans stand in relation to machines and, for that matter, how humans stand in relation to other humans.

Google's Latest Self-Learning AI Is Like an "Alien Civilisation Inventing Its Own Mathematics"

Fanciful title and entirely inaccurate. Inventing a mathematical system means creating a language and deducing rules and truths, none of which a DNN (deep neural net, which is what everybody is talking about) does.

Don't know if it's mentioned in the article, but I'd bet $2 no human understands AlphaGo Zero's play. If they did, they might have a chance against it, right? Instead, it is described as a 'Go God.'

Go doesn't work that way, humans can certainly understand AG's play, otherwise we couldn't play against it. AG is certainly playing unlike any human however.

AlphaGo Zero must operate using many strategies. It cannot know the best move for sure in most cases.

It's not operating by strategy so much as recognizing patterns and using pre-trained probability weights.
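Roughly like this, in spirit (a hand-waving sketch; the scores below are invented, and this is nothing like AlphaGo's actual architecture): the net scores every candidate move, a softmax turns the scores into probabilities, and play concentrates on the high-probability moves.

```python
import numpy as np

# A policy net outputs a raw score per legal move; softmax converts
# those scores into a probability distribution over moves.
def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

move_scores = np.array([1.2, 0.3, 2.5, -0.7])  # invented raw net outputs
move_probs = softmax(move_scores)
best = int(np.argmax(move_probs))
print(best, move_probs)  # move 2 carries the most probability mass
```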

1. AlphaGo Zero had to have a set of rules in order to teach itself winning strategies/gameplay. Humans fed AlphaGo the rules. For AIs let loose on the public (i.e., their actions affect the real world), what sort of rules do you think are being fed to AIs today? What are the ramifications? What sort of rules might be better? <----important questions

Go has only a few rules; the reason DNNs work so well in this domain is that the loss function (a measure of how well or poorly the weights are functioning) is so well defined. The second reason is that it's so easy to generate data, and DNNs love data.

To answer your question: with today's technology it all comes down to the training set you feed the DNNs. The 'rules' (called features in DNNs, and a bit of a different concept) are probabilistically inferred from the data. So, if we only feed our DNNs real-world data, they will give us real-world results. Skew the data (which is done deliberately for convolutional nets, to increase your data set size) and you skew the net. But the test set should catch that.
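A toy, numbers-only illustration of that last point (invented data, and a deliberately dumb "model"): a majority-class learner fit on a skewed sample looks great on its own data, and a representative test set exposes it.

```python
import numpy as np

# Training labels are 95% positive (skewed); the test set is 50/50.
train_y = np.array([1] * 95 + [0] * 5)
test_y = np.array([1] * 50 + [0] * 50)

# "Model": always predict the training-set majority class.
majority = int(round(train_y.mean()))

train_acc = float((train_y == majority).mean())  # flattering: 0.95
test_acc = float((test_y == majority).mean())    # the test set catches it: 0.50
print(train_acc, test_acc)
```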

2. AlphaGo Zero's workings are now -inscrutable-. The workings of any moderately complex learning neural network are also -inscrutable-. Which is to say, no human knows how to predict [given input A, processed through the network, what will result B be?] the first time around. It is not 100% predictable because the inner workings are not understood. Is this a problem?

Not sure what you mean; you can view the weights that led to any particular decision. Why a weight is what it is, well, that is a result of the training and not something you can deductively understand, even though it comes from a deterministic, deductive process.
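For a single linear unit, at least, that's literally true (made-up weights below): the decision decomposes exactly into per-input contributions, even though nothing in those numbers tells you why training settled on them.

```python
import numpy as np

# Made-up weights and input for one linear unit. The decision is an exact
# sum of per-input contributions x[i] * w[i], all fully viewable.
w = np.array([0.8, -1.3, 0.4])
x = np.array([1.0, 0.5, 2.0])

contributions = x * w                  # [0.8, -0.65, 0.8]
decision = float(contributions.sum())  # 0.95
print(contributions, decision)
```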

Given 2 and 2a: Are there parallels between AI and humans vis-à-vis compensation mechanisms?

Problem is, you're jumping the gun here; DNNs are just crude pattern recognizers. Not much different from your visual cortex (I saw that too, also thinking how amazed I am at how far we've come), but this is still a long distance (in technology terms, not timeframe) from general intelligence. For example, snakes have pattern recognizers like our DNNs, but we don't call them intelligent; they behave according to hard-coded instinctual rules.