GAT_00: We're going to get to AI eventually. It probably won't take long at all to recognize we built a superior life form that we are enslaving for our own uses.

The outcome of that is...predictable.

Several years ago I met an AI researcher. His approach was that a machine was best capable of becoming aware of its environment through touch, and he had developed this kind of 'floppy' robot (his word) that could more or less slap around to get a sense of things, adjusting its grip according to what it was feeling in order to manipulate an object. Being British, of course, he constantly tested his designs to see how well they could pick up a pint.

DoBeDoBeDo: So we still haven't caught on that a great deal of science fiction was written to serve as a warning, NOT how-to guides?

Man building a "thinking" machine is inevitable, and when we finally get it right, it'll be the last thing we ever really design: if it can improve itself at an exponential or geometric rate, it would almost immediately be able to design a superior version of itself with fewer limitations. It would then be placed in charge of all design work and, eventually, all complex decisions, based on its ability to process a vastly larger volume of information, or to do simpler and more mundane things far more efficiently than humans. Humans would be reduced to doing "creative" tasks, like making art, but machines would eventually be able to do some of that as well.

This event is frequently referred to as the technological singularity, and I'm 100% confident it will happen this century.

These systems can comb through hideously large amounts of uncertain information and select the most useful parts of it. And unlike a traditional program, they can move backwards and forwards through the data, make inferences, reason under uncertainty, and efficiently home in on the best explanations. They can repeat these processes iteratively, doing the exercise more efficiently each time. Essentially, a probabilistic program learns and evolves. Or, in simpler terms, it does what you learn in an undergraduate AI class.

It sounds like what they're doing is closer to a combination of data mining with a fancy query system, so instead of having a team of experts build a system to identify terrorist hiding spots, they can just have regular army types build a system to identify the current terrorist flavor of the month.

grinding_journalist: DoBeDoBeDo: So we still haven't caught on that a great deal of science fiction was written to serve as a warning, NOT how-to guides?

As a machine learning kind-of expert, my impression is that they are missing the point. They are addressing a problem that has already been solved with various algorithmic tools: the recognition of certain situations from given examples. What we need is artificial reasoning.
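For concreteness, "recognition of certain situations from given examples" is what a basic supervised classifier does. A 1-nearest-neighbor sketch makes the point; the feature vectors and labels below are made up for illustration.

```python
# Minimal sketch of recognition from given examples: 1-nearest-neighbor.
# Feature vectors and labels are invented for illustration.

def classify(example, training_data):
    """Label a new example with the label of its closest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, label = min(training_data, key=lambda pair: sq_dist(pair[0], example))
    return label

training = [
    ((0.0, 0.0), "routine"),
    ((0.1, 0.2), "routine"),
    ((5.0, 5.1), "suspicious"),
]

print(classify((4.8, 5.0), training))  # -> suspicious
```

This kind of pattern matching interpolates between labeled examples; it never chains facts together to reach a conclusion no example directly shows, which is the gap "artificial reasoning" would have to fill.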