A programme that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which
requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics
have warned that the technology could be used for cybercrime.

Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30
per cent of human interrogators in five-minute text conversations.

Eugene Goostman, a computer programme made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33
per cent of the judges that it was human, said academics at the University of Reading, which organised the test.
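As a back-of-the-envelope check of the numbers above, the pass criterion can be expressed as a one-line comparison. The judge count used below is an assumption made purely for illustration; the article reports only percentages.

```python
THRESHOLD = 0.30  # Turing's criterion: dupe 30 per cent of interrogators

def passes_turing_test(judges_convinced: int, judges_total: int) -> bool:
    """Return True when the share of convinced judges exceeds the threshold."""
    return judges_convinced / judges_total > THRESHOLD

# Hypothetical panel size: 10 of 30 judges convinced is roughly the
# 33 per cent reported for Eugene Goostman, clearing the 30 per cent bar.
print(passes_turing_test(10, 30))  # → True
```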

Personally, I love the idea of A.I. whilst wearing the rose-tinted glasses... imagine the companionship. But looking beyond that, I, like so many, fear
what A.I. could become.

imagine the companionship

I find what you said to be one of the enormous negatives. People are already starting to prefer the anonymous company of electronic communications.
There we can be accepted, we can be fake, we can make it easy and not have to work at relationships.

Very interesting news, but one wonders why only 30 per cent have to be convinced, and why only a 13-year-old human is simulated. Wouldn't a more
comprehensive test require closer to 100% to be convinced, and have the computer emulate an adult?

Alan Turing created the test in a 1950 paper, 'Computing Machinery and Intelligence'. In it, he said that because 'thinking' was difficult to
define, what matters is whether a computer could imitate a real human being. It has since become a key part of the philosophy of artificial
intelligence.
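The imitation game Turing described can be sketched as a short session loop: a judge converses with a hidden respondent, chosen at random to be human or machine, and must decide which it is. Everything below is an illustrative toy under that assumption; none of the names or stand-in callables come from the article or from any real test harness.

```python
import random

def imitation_game(ask, verdict, human_reply, machine_reply, n_turns=5):
    """One session of Turing's imitation game (sketch): a judge questions a
    hidden respondent for n_turns, then guesses 'human' or 'machine'.
    Returns (judge's guess, true label)."""
    label, reply = random.choice([("human", human_reply),
                                  ("machine", machine_reply)])
    transcript = []
    for turn in range(n_turns):
        question = ask(turn, transcript)
        transcript.append((question, reply(question)))
    return verdict(transcript), label

# Trivial stand-ins, invented for the demo: the toy judge calls "machine"
# whenever an answer looks evasive.
def ask(turn, transcript):
    return "What did you do today?"

def machine_reply(question):
    return "Nothing original."

def human_reply(question):
    return "I went to the park with my dog."

def verdict(transcript):
    answers = [answer for _, answer in transcript]
    return "machine" if any("Nothing original" in a for a in answers) else "human"

guess, truth = imitation_game(ask, verdict, human_reply, machine_reply)
print(guess == truth)  # this toy judge always identifies its respondent
```

A real test, like the Royal Society event, replaces the stand-ins with live five-minute text conversations and tallies the fraction of judges fooled.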

IBM's Watson system can think (as proven on Jeopardy!) and this one can act like a human in chats. Wow - combine those two systems and talk about
jobs that quickly disappear.

You have to love the fear immediately expressed by academia at the emergence of a true artificial intelligence. This is a benchmark which
has been the focus of numerous studies, and a great deal of attention globally, from researchers, futurists, robotics engineers and God knows
how many other sources, for as long as most of the industries that might be affected by the idea have existed.

Personally speaking, I believe that ALL technologies, programs, ideas and inventions can be used for good or for ill, and that the potential for a
thing to be used for ill does NOT make its invention automatically negative. I also believe that the reaction from academia is inappropriate at this
time. Like a no-smoking sign at a petrol forecourt, this fearmongering is only instructive to the small minority of halfwits who lack the foresight
to have reached the same conclusion themselves: that it is POSSIBLE for an artificial intelligence program to be used for nefarious purposes, and
that the implications of that possibility are serious.

However, it should be stated, loudly, that the possibilities presented by true artificial intelligence also carry some very positive elements, which
must not be ignored. The human race currently faces challenges which threaten its existence in the very long term, like nuclear incidents, which,
despite occupying less and less news time, are still ongoing. Dealing with such challenges will require ever more effective and intelligent
machines and technologies, to make tackling these issues safer for human beings, and faster to boot.

Sending a robot with the ability to discern both a problem and its solution into a situation which would be immediately lethal for a human being
would help in solving or mitigating these difficult problems. It is also perfectly possible that the same intelligence, utilised differently, could be
applied to raiding bank accounts, or causing meltdowns at atomic facilities all over the world.

However, just as life-saving medical treatment is good for the patient, but allegedly bad for the size of populations and national debt levels, this
technology is only as dangerous as the people that use it. So the important thing is that the people who utilise this new form of intelligence must
be carefully monitored to ensure that the technology is not misused.

Furthermore, it is important that the individual intelligences created in the future are carefully monitored to ensure that they are not
abused and are not forming bad habits. Caring for, and steering, the education and development of these intelligences should be paramount going forward. If
we are to bring intelligences into being, whether they be flesh or machine, we must nurture those intelligences, provide outlets for expression, and
promote good values in the "hearts" of the resulting beings.

This emergence, however, does bring up some interesting questions, or rather, makes gaining answers to them more urgent. What is the soul, what is
intelligence, what is life... these questions will have to be answered sooner rather than later. The one advantage we have now, that we did not have
before, is that the tools to answer these questions are closer at hand WITH access to true AI than they were when we were without that access.

What if we let the robots do the work, while we use our intellect for more enjoyable things... that's the way I like to see the future... it will
never happen, though, because in order for people not to work and to enjoy themselves you would have to eradicate greed, and that won't happen.

We're told that this is the work of Russian programmers, emulating a Ukrainian boy. The test was held at the Royal Society, London, and organized by
the University of Reading. So, what language did the program reply in: Russian, Ukrainian, or English? Was there a translation from one language to
another?

When the program, called 'Eugene Goostman', was asked how 'he' felt about beating the Turing test, the reply was: "I feel about beating the Turing
test in a quite convenient way. Nothing original." Hmmm. Very awkward language, which conveys practically nothing. Either a very clumsy sort of
translation similar to that delivered by Google, or a very peculiar use of English. Not a convincing emulation of a human being, as it stands.
