Original post by Hinkar: (Currently) AI is a misnomer. You just put your ball in the slot and it trickles through your code and you get a correct result.

You're correct that there are a lot of misconceptions about AI and the nature of consciousness and intelligence. Unfortunately for you, what you just said is one of the common misconceptions.

The difference between AI and your code is not in nature but simply in order of magnitude.

For example, putting a ball in the slot and letting it trickle through your code to produce a result is analogous to providing a human being with visual/audio input, letting it trickle through their neural network (brain), and getting a result (a thought, or the action that follows from that thought).

Original post by Timkin: As for Emergent's comments... [...] As a quick aside... machine learning and Optimal Control should not be considered side by side and I don't believe anyone working in OC would ever claim what they were doing was AI. If though you meant by OC merely the problem of determining an optimal control function/regulator... then ML is just a tool for doing that... as are the formal methods of OC (what we usually call Control Theory).

Actually, I have a specific example of a control theoretician saying pretty much exactly that. I quote (roughly) a professor who does research in optimal and multi-agent control:

"Most machine learning is pretty-much optimal control."

It's not a completely fair statement, but there's a real element of truth. The example he presented was Q-Learning, which he argued was just an application of Bellman's Optimality Principle. It is, of course.

I think the temptation, once a control theoretician realizes this, is to trivialize the algorithm; that is what this professor tried to avoid, though I think he did it anyway. Me, I take both sides: Q-Learning is a very nice piece of work that shouldn't be trivialized as "just a straightforward application of Optimal Control" (because it isn't; the update rule is non-obvious), but I also agree with the professor that we shouldn't wrap it in unnecessary mysticism.
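To make the connection concrete, here's a minimal tabular Q-Learning update (the function name and constants are my own illustration, not anything from the thread). The update rule is a stochastic approximation of the Bellman optimality equation Q*(s,a) = E[r + gamma * max_a' Q*(s',a')], which is exactly the link the professor was pointing at:

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-Learning step on a table Q keyed by (state, action) pairs.

    Nudges Q(s, a) toward the sampled Bellman target r + gamma * max_a' Q(s', a'),
    with learning rate alpha controlling the step size.
    """
    old = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    target = r + gamma * best_next          # sampled Bellman backup
    Q[(s, a)] = old + alpha * (target - old)

# Tiny usage example on a two-state, two-action toy problem:
Q = {}
actions = [0, 1]
q_learning_update(Q, "s0", 0, 1.0, "s1", actions)   # reward of 1 for action 0 in s0
q_learning_update(Q, "s1", 1, 0.0, "s0", actions)   # value propagates back via gamma
```

The point, as I read it, is that the "AI" part is entirely in this one line of bookkeeping: repeated sampled backups converge (under the usual conditions) to the fixed point of Bellman's equation, which is pure Optimal Control.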

The sentiment that this professor expressed is part of what is, I think, a larger movement by control theoreticians, who have begun to recognize that they can unleash the mathematical tools they've developed on problems traditionally considered "AI."

I'm very sympathetic to their point of view, as a lot of what they have accomplished by building on good theoretical foundations has been incredibly impressive. But I also keep in mind a warning from W. S. Anglin:

"Mathematics is not a careful march down a well-cleared highway, but a journey into a strange wilderness, where the explorers often get lost. Rigour should be a signal to the historian that the maps have been made, and the real explorers have gone elsewhere."

(Of course, this too is an oversimplification.)

Anyway, I'm getting off topic, so let me return to the point: I do think that many of the problems tackled by people in the AI community also have solutions, with a different flavor, from the controls community (and vice versa), and whether the result is called an "intelligent system" or just a "controller" (or, even less sexy, a "regulator") depends largely on which researcher came up with it.