5 Answers

Any program in which the decisions made at time t are impacted by the outcome of decisions made at time t-1. It learns.

A very simple construct within the field of Neural Networks is a Perceptron. It learns by adjusting the weights given to different input values based on the accuracy of the result. It is trained with a known set of good inputs. Here is an article that covers the theory behind a single-layer Perceptron network, including an introduction to the proof that networks of this type can solve specific types of problems:

If the exemplars used to train the perceptron are drawn from two linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the form of a hyperplane between the two classes.
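To make that concrete, here is a minimal sketch of the perceptron learning rule (my own illustration, not code from the linked article), trained on the logical AND function, which is linearly separable, so the algorithm is guaranteed to converge:

```python
def predict(weights, bias, x):
    # Step activation: fire if the weighted sum crosses the threshold.
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # samples: list of (inputs, expected) pairs from a known-good training set.
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, expected in samples:
            error = expected - predict(weights, bias, x)
            # Adjust each weight in proportion to its input and the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Logical AND: output 1 only when both inputs are 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_samples)
```

After training, `predict(w, b, (1, 1))` returns 1 and the other three inputs return 0: the decision surface has settled into a line (a hyperplane in two dimensions) separating the two classes, exactly as the quoted convergence result describes.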

I don't know... this is a good definition of learning systems, but artificial intelligence is a broader field, surely? If you pick up a text on AI you'll find subjects such as decision trees, search algorithms etc., which don't "learn" in the sense that you mean.
– Ben, Sep 19 '10 at 18:03


That is a very good point. True, Wikipedia notes that "AI textbooks define the field as "the study and design of intelligent agents"[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2]" en.wikipedia.org/wiki/Artificial_intelligence which is a very good definition. I have always been partial to having 'learning' as part of the definition, because the bar is much lower if you do. You don't have to mimic a level of knowledge, just the capacity to improve. Learning AI is sexy AI.
– Larry Smithmier, Sep 20 '10 at 2:37

I would consider any machine that is both useful and permanently beyond my understanding to be artificially intelligent (although I dare not suggest that such machine might exist outside of fiction lest my geekhood would be cast into doubt).

A less personal definition:

A machine can be considered artificially intelligent if it can solve classes of problem that were not envisaged by its designers.

Presumably, the architects of such a machine must endow their creation with the ability to learn, or else they must be possessors of extremely good fortune. By definition, trivial machine learning is precluded (so no, your tic-tac-toe solver doesn't count). Either way, happy + surprised should characterise the mood of that machine's engineers.

It passes the Turing Test? In other words, a human being wouldn't be able to definitely tell the actions of your code from that of another human being attempting to do the same thing. Basically, can it fool someone?

-1 Passing the Turing Test is hardly the -minimal- requirement for code to be considered an AI implementation.
– adamk, Nov 24 '10 at 22:59

@adamk Surely this depends on what your definition of "AI implementation" is? The fact that the OP stated what is the "minimal requirement" without first defining what "AI" is renders the word "minimal" redundant. How do you know what "minimal" means without a definition of "AI"? It doesn't make sense. My answer addresses one commonly-accepted test to define AI which is all you can really do. So mark my answer down, but realise your subjective view of "minimal" is no more correct than mine.
– Dan Diplo, Nov 25 '10 at 11:52

The Turing test is a measure of a machine's intelligence evaluated by a human judge conversing in natural language with both a machine and a human. If the judge cannot tell which is the machine, then the AI implementation passes the test. However, a machine's intelligence is not limited to communication via natural language. Using the same evaluation methodology, a judge could watch the moves of Garry Kasparov playing chess against Deep Blue and be incapable of telling which is the real player, 'fooling' the judge, thereby defining the machine's intelligence in this area.
– adamk, Nov 25 '10 at 18:58

Extrapolating from that: playing chess is much simpler, and therefore has lower minimal requirements for a complete implementation, than anything meeting the requirements of the Turing Test. However, I was a bit pissed last night, and although I disagree with your answer, it didn't really deserve a down vote, so I apologise for that.
– adamk, Nov 25 '10 at 19:00

Fair enough. But I would personally exclude being able to play chess from any definition of "AI" since machines excel at chess mainly due to brute-force (having a massive database of opening/closing moves) rather than through other means. I still think the general definition of being able to "fool someone" that it could be human (or other living entity) is a reasonable rule of thumb for an AI.
– Dan Diplo, Nov 26 '10 at 10:13

Assuming such a definition for AI, there are no 'minimal requirements': a Tic-Tac-Toe AI is just a simple decision tree, for example. For a small enough subset of NLP, "Hello World" is AI. There's no real answer to your question in that regard.
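To show just how small that "simple decision tree" can be, here is a minimal minimax sketch for Tic-Tac-Toe (my own illustration, not taken from any answer above). It exhaustively searches the game tree, which is tiny for this game, and plays perfectly:

```python
# The eight winning lines on a 3x3 board, indexed 0-8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Return (score, move) from `player`'s perspective:
    # +1 for a forced win, 0 for a draw, -1 for a forced loss.
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    other = 'O' if player == 'X' else 'X'
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, other)
        board[m] = None
        score = -score  # the opponent's best outcome is our worst
        if best is None or score > best[0]:
            best = (score, m)
    return best

score, move = minimax([None] * 9, 'X')
# Perfect play from an empty board is a draw (score 0).
```

The whole "AI" is a few dozen lines of exhaustive search, which is precisely why it makes a poor lower bound for what counts as artificial intelligence.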