Strudelkugel writes: The NY Times has an article about a conference during which the potential dangers of machine intelligence were discussed. "Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences." The money quote: "Something new has taken place in the past five to eight years," Dr. Horvitz said. "Technologists are providing almost religious visions, and their ideas are resonating in some ways with the same idea of the Rapture."

Whatever your take on the feasibility of 'true' AI, they make a good point: we need to educate people about the realities of these technologies to avoid a public outcry to ban AI and machine learning research.

I love the idea of computers with higher brain power than humans. The notion that it will cause social disruption doesn't hit a nerve with me. After all, every significant invention causes some degree of turmoil. How many people lost their livelihood and lifestyle when the horse was made obsolete by cars? As for people foolish enough to abandon their religions over science and technology, I find that little different from abandoning them for any other reason.

Smart machines are not inherently dangerous; as with all technology, they can be helpful or harmful, depending on how they are used.

But self-replicating smart machines are a different story. They will evolve in much the same way as life, and the ones most successful in the competition for resources will eventually prevail. In the end, they will compete with mankind, and if they are smarter, we will stop being the dominant species on this planet. As an outside observer, I would be thrilled to watch this.