Brain Mapping and AI

When people talk about artificial intelligence, many think of things like HAL or a Star Trek-style computer system, compare that to what we have today, and conclude that artificial intelligence is impossible. Computer programmers like myself realize that the amount of traditional coding required to build a consciousness would be impossible.

However, scientists are doing research today that will fundamentally change how we build AI in the future. Sebastian Seung is working on mapping the wiring of the human brain, something he calls the connectome, and Henry Markram is working on simulating the brain one synapse at a time. Together, these scientists represent the coming of a perfect storm of technology for AI.

Today’s artificial intelligence all works the same way: if event A happens, then do action B. It is a reactionary model built around what developers call if statements. This kind of reactionary coding is not only time-consuming, but anything of real complexity requires an enormous number of hand-written rules, and this will not be the future of AI. As we start mapping the human brain, a new form of coding and development will emerge: instead of building reactions, developers will specialize in data injection.
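To make the reactionary model concrete, here is a minimal sketch of an if-statement agent. The events and actions are hypothetical examples I made up for illustration; the point is that every behavior must be anticipated and hand-coded by a developer, and anything unanticipated falls through to a default.

```python
def reactive_agent(event):
    """Map each known event to a hard-coded action (an if/elif chain)."""
    if event == "greeting":
        return "say_hello"
    elif event == "question":
        return "search_knowledge_base"
    elif event == "obstacle":
        return "turn_left"
    else:
        # Anything the developer did not anticipate ends up here.
        return "do_nothing"

print(reactive_agent("greeting"))  # a rule the developer wrote in advance
print(reactive_agent("eclipse"))   # an event no one anticipated
```

Every new capability means another branch, which is why complex behavior becomes so time-consuming to build this way.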

Imagine for a moment that you have a completely mapped virtual brain; its power would be limited only in the same way a normal human brain is limited, by its education. The difficulty will not be in building the logic but in integrating data into the virtual brain so that it can become functional. So while some scientists might believe they are simply studying the human brain, what they are actually doing is laying the groundwork for the first true AI, one capable of an incredible amount.

My respect: you managed to sum up the AI theme pretty well in a few words.
Anyhow, I would like to add that most neuroscientists are aware that an approach based solely on “if A then B” processing might be too time- and energy-consuming to represent the working scheme of the human brain.
In my opinion, we not only have to look for appropriate ways of providing information, but also for alternative algorithms and processes that might be involved in human neural networks.
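One well-known example of the kind of alternative the commenter points to is a learned model rather than a coded one. The toy perceptron below (a classic single-neuron sketch, not anything from the article) acquires the logical OR function from example data by adjusting weights, instead of a developer writing “if a or b” by hand:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights from labeled examples using the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR from data instead of coding the rule directly.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The resulting behavior lives in the numbers `w` and `b`, which is much closer to how synaptic strengths encode function in a real brain than an if/elif chain is.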

Machines could, at least perceptually, be more compassionate than humans. The primary reason is that machines work off fixed logic and would be more likely to embrace sustainability unless they were explicitly coded to be evil.