This is already happening. Some people call it automation pressure; others speak of the information society (a kind of post-industrial society). And this is where I see the purpose of creating artificial intelligence.

Let me elaborate a bit.

I am not one of the 'singularity movement' zealots: I do not believe that the first sufficiently intelligent, sentient software-hardware devices (let us refer to them as [intelligent] robots) will be capable of faster-than-human self-education and intelligence growth; this simply does not fit the factual knowledge we have about the natural processes of [self-]education. Also, as is always the case with human creations, first versions tend to be rather primitive compared to later ones. However, at some point in the future the singularists' standpoint may gain strong grounds. That is a separate, debatable topic, but I mention software-hardware devices to stress my view that AI as a software-only creature is doomed to fail due to its lack of touch with reality. That will be discussed in more detail separately.

Continuing my line of thought: the first AIs will be intelligent to the point of performing some simple specialized tasks that were previously hard to automate. The first example could be the job of a secretary. It is complicated and versatile enough to resist custom-programmed solutions; there are some programs performing basic secretary functions, but all of those functions are limited to passive, non-interactive processing. In other words, no human-robot communication takes place.

Given the existence of speech recognition and speech generation software, it appears that currently the only component missing is an actual thinking core, able to comprehend and perform tasks. This addition would be sufficient to render such a robotic assistant both intelligent and useful.

Let us imagine for a moment that such a robotic secretary has been developed and is slowly gaining market share. Clearly, the job segment of secretaries will shrink, and salaries will shrink as well under pressure from cheaper (electricity and maintenance) robotic secretaries. Of course, there cannot be 100% adoption in a short period of time, due to human nature and the overall higher attractiveness of human secretaries over their robotic counterparts :)
Taking this segment out of the job market will increase competition in other, higher-qualification areas. Also, the decrease in available low-qualification jobs (assuming that not only secretaries but other similarly simple duties will be taken over by intelligent robots) should lead to a higher overall education level of the population.

What I am leading to is that the creation of AI will help the education and creativity of people, through natural job competition. This will serve as a stimulus for mankind's further mental and creative evolution.

This is the major, immediately visible benefit of creating AI, as I see it. There are other, material-world benefits as well, which are also exciting, but they allow too wide an interpretation depending on who owns the first commercial AI systems.

Well, I'm mostly in agreement with you, except that what is needed isn't just a "thinking core". Let me explain :)

First of all, interpreting natural language must be done in a way that permits representing that information on a neural network flexible enough to represent any information with a really limited set (about 3-4) of relational elements.
Also, you need to keep in mind that the process of interpretation needs to maintain a relation between the actual syntax that was read and the already processed information. It also has to be really flexible, so that it can adapt to reading different languages, and even sentences with mixed languages, recognizing them automatically. The information used by the natural language parser also has to be flexible, allowing the AI to make new relations between the language area and the memory area.
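One way to picture that syntax-to-processed-information link is to have every parsed node keep a back-reference to the exact span of raw text it came from, with the language detected per node. This is only a minimal sketch under my own assumptions; all class names here are hypothetical, not from either project.

```python
from dataclasses import dataclass, field

@dataclass
class ParsedNode:
    concept: str            # link into the memory area (illustrative)
    span: tuple             # (start, end) character offsets into the raw input
    language: str           # detected per node, so mixed-language
                            # sentences are handled naturally

@dataclass
class ParseResult:
    raw_text: str
    nodes: list = field(default_factory=list)

    def surface_form(self, node):
        # Recover the "real read syntax" behind a processed node.
        start, end = node.span
        return self.raw_text[start:end]

# Usage: the parser itself is out of scope; we only show the linkage.
result = ParseResult("the silver car")
result.nodes.append(ParsedNode("CAR", (11, 14), "en"))
print(result.surface_form(result.nodes[0]))  # -> car
```

The point of the sketch is only that the processed representation never loses its anchor into the original surface text.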

Once you have that, yes, then you "only" need the thinking core, able to work with abstract concepts (everything is abstract) and to interconnect the different "brain areas": language, memory, time vector, vision, proprioception (in case of implementing it on a robot), etc. Also, keep in mind that each thought in the process of thinking needs to be reflected in memory, and probably in the immediate memory of the thinking core.

Thank you for your comment, which is the first human-generated non-spam comment on this site :)

I think I understand your point about 3-4 relational elements in the neural network for natural language processing. However, do you really believe language processing should be separated from the "thinking core"? I am aware that linguistic abilities have a distinct zone in the brain, but it is also clear that language "atoms" and "molecules" are indeed mapped to "thoughts"; otherwise there would be no meaningful communication. Also, the human capability of learning and using many languages suggests that "language processing" is still a part of general human intelligence; you refer to high flexibility, which, in my opinion, is also characteristic of the intelligence/thinking core. (I would also add that humans map words from different languages to each other on two levels: the "thought" level for the actual mapping, and the linguistic coherence/context level for natural speech; however, this is based solely on introspection.)

"Also, you need to have in mind that the process of interpretation needs to maintain a relation between the real read syntax and the already processed information." - isn't that also a function of the "thinking core"? If you are indeed referring to context-dependent language processing, and if we do keep language processing as a separate unit, then the most natural approach would include two-way communication: the language processor asking the "thinking core" for context, and the "thinking core" dynamically modifying the context based on what the language processor provides. Yes, keeping the "language processor" a part of the "thinking core" does not immediately solve or simplify anything, but I feel that placing it closer to "thoughts" is appropriate.
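The two-way communication I have in mind could be sketched roughly like this (a toy illustration under my own assumptions; the class names and the pronoun-resolution example are hypothetical):

```python
class ThinkingCore:
    def __init__(self):
        self.context = {}            # current discourse context

    def current_context(self):
        # The language processor asks the core for context.
        return dict(self.context)

    def observe(self, facts):
        # The core dynamically modifies the context based on
        # what the language processor provides.
        self.context.update(facts)

class LanguageProcessor:
    def __init__(self, core):
        self.core = core

    def interpret(self, sentence):
        context = self.core.current_context()
        # Toy disambiguation: resolve a pronoun using the core's context.
        subject = context.get("topic", "unknown")
        facts = {"topic": subject, "last_sentence": sentence}
        self.core.observe(facts)    # report back, closing the loop
        return facts

core = ThinkingCore()
core.observe({"topic": "the car"})
lp = LanguageProcessor(core)
facts = lp.interpret("It is silver.")
print(facts["topic"])  # -> the car
```

The essential property is the loop: interpretation both consumes and updates the core's context.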

"Also you need to keep in mind that each thought on the process of thinking needs to be reflected on memory and probably on the immediate memory of the thinking core." - self-analyzing thoughts seem too distant to be considered problems at the moment; in other words, I find it difficult to think about that case :)

Since representing language as knowledge implemented in the memory, and then using that knowledge to process language, is both difficult to implement and expensive in processor and I/O terms, I think the best approach is to use a specialized part of the thinking core to do it. The idea would be to have an index of the words and rules, cross-referenced with their memory registries, used by the specialized part to translate them into thoughts and back.

Since language is a symbolic, sequential way of expressing thoughts, the translation isn't that difficult; actually, it's like implementing a specific case of a rule (the thought being the rule, and its expression the specific case).

Then the expression, already in thoughts (a tree structure of different elements), can be stored in memory in the form of relational elements, where the element type acts as the property and its value as the attribute; using the three relational elements, each relation is represented in the three registries of the relation (holder, property and value).
Then simply have a 4th, non-relational element that defines the group of relations forming a sentence (it encapsulates the relations in a group).
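A minimal sketch of this storage scheme, as I understand it, might look as follows. Every relation is a (holder, property, value) triple, and the 4th, non-relational element groups the relations of one sentence. All names here are illustrative assumptions, not taken from any actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    holder: str      # the element that owns the relation
    property: str    # element type acting as the property
    value: str       # its value acting as the attribute

@dataclass
class SentenceGroup:
    # The 4th, non-relational element: it only encapsulates
    # the relations that together form one sentence.
    group_id: int
    relations: list

memory = []

def store_sentence(group_id, relations):
    group = SentenceGroup(group_id, list(relations))
    memory.append(group)
    return group

# "The silver fast car" flattened into relations and grouped:
g = store_sentence(1, [
    Relation("car", "color", "silver"),
    Relation("car", "speed", "fast"),
])
print(len(g.relations))  # -> 2
```

The grouping element carries no relation of its own; it exists purely to delimit which triples belong to the same sentence.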

Here, as I see it, you'd have a sophisticated chatbot. From that point it would be a matter of adding more "thinking core" algorithms (apart from the already present ones connecting memory, language, time, etc.) to check for information inconsistencies, make new concept relations, and so on.

Well, after the explanation:
Yes, language processing would be part of the thinking core; I was just specifying how it should be. Because the thinking core can't be a single part, it has to be divided into different tools with strong communication between them. The different parts will work on the same data, performing different steps in processing that data into a meaningful output (thought<->text, thought<->memory, time<->thought) and relating data with other data.

I agree with your first paragraph on language as a specialized (sub)unit of the thinking core completely. This is indeed an optimal approach.

I also concur with the second paragraph, and have to acknowledge that the idea of "via thought" translation is quite an old one. I have even once seen a kind of implementation of it: a "via Esperanto" universal translator software. Of course, Esperanto is not the language of thoughts :), and I believe that software was abandoned (I still have a copy of it somewhere, though).

I have some problems understanding your further argument.

By a "tree structure of different elements", do you really mean exactly a "tree"-type relation, or do you rather refer to elements connected in some way? I am aware that lexical parsing usually generates a tree, but I am not (yet?) a professional in that field, and thus I am not really convinced that a tree structure is a proper (or rather, nature-resembling) representation. From an unspoiled-mind point of view, I would rather assume graph-like relations among elements (nodes). Do you have a convincing link at hand to share on the subject?

Your 3rd paragraph is generally hard to understand, as it uses many concepts the reader is expected to know (or so it seems). I find it very hard to accept that all possible relations can be expressed in terms of holder, property and value. When I was developing the idea of an internal thinking space (for my DIY AI project) and had to analyze the most basic element relations, I came to quite a different conclusion.

Regarding "relations encapsulation into a group": do you refer to a technique of converting a complex structure (a "tree" in your example) into a linear, sequential structure, like a sentence? Or do you mean something different?

A chatbot it could be at this stage, if it had a large enough initial corpus of data.

This language-centered approach definitely makes sense and feels doable, but the approach itself makes me uneasy. Can newly-born humans speak? Do they have some a priori language knowledge? Clearly not. What they do have is the thinking core. It is hard to define, which makes it seem an unfeasible starting point, but I believe it is the only correct one.

With the last paragraph I agree as well. My diagram of AI has ~14 components attached to a "system bus" :), and the notion of time is one of the central ones in the initial design of "processing data packets". I feel there is a significant amount of consensus in our ideas, and I wonder if you are/were working on your own DIY AI project?

About the structure of thoughts: yes, I mean a real tree structure, not a graph. Still, the tree-like structure would only be used to represent concepts in the thoughts part. It's true that with that alone it wouldn't go much further than a chatbot, but it would be only a part of the representation. On an upper level, the different concepts represented as tree structures would be related among themselves in a graph-like structure, creating linear threads of thought that at some point could interact with other threads, creating the graph. I think relations more complex than that would not be necessary and would only confuse the programmer.
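This two-level structure (strict trees for concepts, a graph relating them on the upper level) could be sketched roughly like this; the class names are purely illustrative assumptions:

```python
class ConceptTree:
    # Lower level: each concept is a strict tree (no cycles).
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

class ThoughtGraph:
    # Upper level: concepts are related in a graph, so linear
    # threads of thought can intersect.
    def __init__(self):
        self.edges = {}  # concept label -> set of related labels

    def relate(self, a, b):
        self.edges.setdefault(a.label, set()).add(b.label)
        self.edges.setdefault(b.label, set()).add(a.label)

car = ConceptTree("car", [ConceptTree("wheel"), ConceptTree("engine")])
road = ConceptTree("road")

graph = ThoughtGraph()
graph.relate(car, road)  # two threads of thought intersect here
print("road" in graph.edges["car"])  # -> True
```

The trees stay acyclic by construction; all cross-concept connectivity lives only in the upper-level graph.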

About the nature of language... Well, it's true that language is a learned skill, but after giving it some thought, I think it's a formal way of representing thoughts, with a common underlying structure present in all languages. My approach would be to use that structure to create the thought trees: a structure that doesn't resemble any language, divided into layers, where each layer is the expansion of one concept represented in the layer above.

About the relations in memory, I think you didn't understand it well. Let's say I want to set a property, for example color, on a registry that represents a car. I'd put in the car registry a forward relation (fw property=color, value=silver), in the color registry an intermediate relation (im property=the_car, value=silver), and in the value registry (silver in that case) a backward relation (bw property=the_car, value=color). Of course, those relations can be encapsulated with other related relations to form an event record, or one of various descriptions, or whatever else is needed, so that a relation in a concept would be related by encapsulation to other relations.
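To make the car/color/silver example concrete, the triple indexing could look like the sketch below: the same fact is recorded from all three registries, so it can be retrieved starting from the holder, the property, or the value. The registry names and tuple layout are my own illustrative assumptions:

```python
# One entry per registry; each holds the relations seen from its viewpoint.
registries = {"the_car": [], "color": [], "silver": []}

def set_property(holder, prop, value):
    registries[holder].append(("fw", prop, value))   # forward relation
    registries[prop].append(("im", holder, value))   # intermediate relation
    registries[value].append(("bw", holder, prop))   # backward relation

set_property("the_car", "color", "silver")

# The fact is now reachable from any of the three entry points:
print(registries["the_car"])  # -> [('fw', 'color', 'silver')]
print(registries["silver"])   # -> [('bw', 'the_car', 'color')]
```

The cost of writing three copies buys O(1) lookup from any of the three participants of the relation.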

Also, about the general language synthesizer/parser: I agree it shouldn't be there, since language is learned, but I decided to make the language module because otherwise it would be way too hard to get the AI's knowledge to the starting point, having to make all the concept relations manually until it could read and write.

About the last paragraph... what's a DIY? I'm sorry, but I'm not a native English speaker :)
If you refer to Do It Yourself... Yes, I'm working on it :P. Currently my project is about 5.5k lines of pure code and growing, although at a slow pace, since it's a pastime project.

Which language are you programming your project in?
Are you aiming at some specific commercial application? (I would :) )
Are you using the data from projects like OpenCog, FreeHAL etc?
Would you be interested in letting other people (e.g. me :) ) contribute to your project (maybe privately, not in the open-source manner)?
Do you need a small but free and private bit of server resources?