Personally, I found the idea of an impossible singularity quite refreshing.
It does make sense that no matter how smart the system is, it still needs time to learn and master the environment.
So even an above-human artificial intelligence will not immediately cause a singularity.

Less than a week ago I learned about a potentially interesting initiative called "Digital immortality". Quoting the email I received from Josh, this initiative aims to apply strong AI to the creation of digital minds/modules, allowing us to upload human minds to a digital form so as to live indefinitely and expand the capabilities of our minds.

The Udacity project has its roots in the previously mentioned "Introduction to AI" Stanford online course. Udacity currently offers two courses: CS101 "Building a search engine" and CS373 "Programming a robotic car". Both have already started, with the first homework deadlines within a day or two, and more courses are promised in the near future. Both use Python for programming assignments.

While reading through a discussion of a real-life, AI-based application of Transactional Analysis in the Genifer Google group, it suddenly struck me that the most basic requirement for a real strong AI / AGI would be... self-consciousness! And this is exactly what is missing in the amateur AI projects I am aware of.

Really, think about it. The most productive projects come up with heaps of logic algorithms and Bayesian networks used for this and that, while none seems to focus on creating an AI "self".

For artificial intelligence, there is an important concept of software capable of rewriting (or amending) its own source code. As a modification of this basic idea, an intelligent program might be able to write other (possibly intelligent) programs.
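The second idea, a program writing and then running another program, can be sketched in a few lines of Python. This is only a toy illustration (the function names and the generated program are my own invention, and a real self-improving system would of course involve far more than string templating):

```python
def write_program():
    """Generate the source code of another program as a string."""
    return (
        "def generated(n):\n"
        "    # Sum the integers 0..n-1\n"
        "    total = 0\n"
        "    for i in range(n):\n"
        "        total += i\n"
        "    return total\n"
    )

# The "parent" program produces source code, then loads and runs it.
source = write_program()
namespace = {}
exec(source, namespace)            # compile and load the generated program
print(namespace["generated"](10))  # -> 45
```

Here the generated program is fixed, but nothing prevents `write_program` from varying the code it emits based on feedback, which is the germ of the self-amending idea.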

This is already happening. Some people call this automation pressure; others refer to the information society (a kind of post-industrial society). And this is where I see the purpose of creating an artificial intelligence.

I've started looking for a position in the AI field, considering both research and hybrid research/commercial opportunities.
I will be able to start sometime at the beginning of 2010, soon after I defend my bioinformatics PhD thesis.

In this post I'll try to figure out (primarily for myself) what Artificial Intelligence is.

Evidently, the "artificial" part requires no explanation, and the real problem is only with the "intelligence" part.

An extremely over-simplified, and actually incorrect, definition would be "Intelligence is the ability to think logically". Evidently, logic cannot be the sole basis of intelligence, at least because intelligence requires an ability to comprehend the environment, not only to deduce. Moreover, logic by itself is not the ultimate resource of intelligence: it cannot explain the environment. Even planning an experiment - a generic method of studying the environment - requires not only logic, but also some kind of stimulus to learn the environment (possibly derived from the requirement of adaptation, which in turn is one of the mechanisms of self-preservation and self-defense).

In this post, some definitions and examples are given. This is an introductory text.

First of all, I need to explain why "intelligent" is in quotes in the title. Well, it's simple: whatever the agents are at the moment of writing, they are just specific, narrow algorithms with no signs of intelligence. As soon as I come across evidence to the contrary, I will happily remove the quotes around "intelligent". But for now - the quotes stay.

What is an agent? According to the numerous sources I checked, an agent is an entity with some characteristic features. The fundamental ones are:

an agent acts on behalf of others. For example, you may hire a person to attend parents' meetings at your children's school: in this case, that would be a "parental agent" :) , who comes to the meetings on your behalf.

agents are to some extent autonomous (i.e., they enjoy some degree of autonomy). In our example, the "parental agent" is free to act and respond as he sees fit when communicating with other parents at the meeting; but at the same time he must follow the behaviour strategy you outlined for responding to official announcements by the school staff.

agents are proactive and reactive. Proactive means that an agent may exhibit initiative of his own, not (at least directly) related to his delegated tasks. Reactive means that agents respond to stimuli: e.g., given a task, an agent will try to do that task.

agents are able to learn - that is, they have memory, which influences their further actions.

agents may be cooperative - help each other or just join efforts to complete given tasks.
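The features listed above can be sketched as a toy Python class. All the names here are my own illustration, not a standard agent API; the point is only to show the five features side by side (acting on behalf of an owner, autonomy, reactivity/proactivity, memory, and cooperation):

```python
class Agent:
    """A toy agent exhibiting the features listed above."""

    def __init__(self, name, owner):
        self.name = name
        self.owner = owner   # acts on behalf of this owner
        self.memory = []     # learning: past tasks influence future actions

    def react(self, task):
        """Reactive: respond to a delegated task, remembering it."""
        self.memory.append(task)
        return f"{self.name} did '{task}' for {self.owner}"

    def proact(self):
        """Proactive: own initiative, biased by what it has learned.
        Autonomy: the agent decides this action by itself."""
        if self.memory:
            return f"{self.name} revisits '{self.memory[-1]}' on its own"
        return f"{self.name} explores the environment"

    def cooperate(self, other, task):
        """Cooperative: split a task with another agent."""
        return [self.react(task + " (part 1)"),
                other.react(task + " (part 2)")]

a = Agent("A", "Alice")
b = Agent("B", "Bob")
print(a.react("attend meeting"))
print(a.proact())
print(a.cooperate(b, "write report"))
```

Real agent frameworks are far richer, but even this sketch makes the contrast with the "narrow algorithm" agents mentioned above visible: the interesting part is not any single method, but the interplay of delegation, initiative, and memory.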