Hawkins founded the Redwood Center for Theoretical Neuroscience to study the brain full-time, and he co-authored On Intelligence with Sandra Blakeslee. In 2005 he co-founded Numenta (later known as Grok) to turn his intelligence research into a marketable product.

But he wasn’t content to keep the company’s secrets to himself, so in addition to publishing a white paper outlining the theory and its mathematics, the team released NuPIC, an open-source platform that includes the company’s algorithms and a software framework for building prediction systems with them.

says Linda Darling-Hammond, a professor of education at Stanford and founding director of the National Commission on Teaching and America’s Future. “In 1970 the top three skills required by the Fortune 500 were the three Rs: reading, writing, and arithmetic. In 1999 the top three skills in demand were teamwork, problem-solving, and interpersonal skills. We need schools that are developing these skills.”

Theorists from Johann Heinrich Pestalozzi to Jean Piaget and Maria Montessori have argued that students should learn by playing and following their curiosity.

The study found that when the subjects controlled their own observations, they exhibited more coordination between the hippocampus and other parts of the brain involved in learning and posted a 23 percent improvement in their ability to remember objects. “The bottom line is, if you’re not the one who’s controlling your learning, you’re not going to learn as well,” says lead researcher Joel Voss, now a neuroscientist at Northwestern University.

Atlas is a bipedal humanoid robot primarily developed by the American robotics company Boston Dynamics, with funding and oversight from the United States Defense Advanced Research Projects Agency (DARPA). The 6-foot (1.8 m) robot is designed for a variety of search and rescue tasks, and was unveiled to the public on July 11, 2013.

Computer networks that can’t forget fast enough can show symptoms of a kind of virtual schizophrenia, giving researchers further clues to the inner workings of schizophrenic brains.

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that the brains of people suffering from schizophrenia lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what’s meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren’t real, or drown in a sea of so many connections that they lose the ability to stitch together any kind of coherent story.

“It’s an important mechanism to be able to ignore things,” says Grasemann. “What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia.”
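The hyperlearning idea can be caricatured in a few lines of code. The following is a toy illustration only (not DISCERN, whose neural model is far more elaborate): a Hebbian association matrix with saturating weights and passive decay standing in for forgetting. At a moderate learning rate, decay prunes incidental co-occurrences and the one "real" association stands out; crank the rate up and noise accumulates faster than forgetting can clear it.

```python
import numpy as np

rng = np.random.default_rng(0)

def signal_to_noise(lr, decay=0.1, steps=200, n=20):
    """Hebbian learning with saturating weights and passive decay."""
    W = np.zeros((n, n))
    real = np.zeros((n, n))
    real[0, 1] = 1.0  # the one meaningful association in the environment
    for _ in range(steps):
        noise = rng.normal(scale=0.1, size=(n, n))   # incidental co-occurrences
        W = np.clip(W + lr * (real + noise), -1, 1)  # Hebbian update, saturating
        W *= 1 - decay                               # decay plays "forgetting"
    spurious = np.abs(W).sum() - abs(W[0, 1])
    return abs(W[0, 1]) / spurious

snr_normal = signal_to_noise(lr=0.2)
snr_hyper = signal_to_noise(lr=5.0)  # "hyperlearning": forgetting can't keep up
print(snr_normal > snr_hyper)        # the real association drowns in noise
```

Because the weights saturate, raising the learning rate cannot make the real association any stronger, but it does amplify every spurious co-occurrence, which is the flavor of the result Grasemann describes.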

Professor Ronald Arkin of the School of Interactive Computing at the Georgia Institute of Technology presented the results of an experiment in which scientists taught a group of robots to cheat and deceive. The strategy for this deceptive behavior was modeled on the behavior of birds and squirrels.

After a while, the hiding robot began deliberately knocking over obstacles just to create a diversion, then hid somewhere away from the mess it had left behind. This strategy was not originally programmed; the robot developed it on its own, through trial and error.

So, surely, we need some rules. And there we go: Arkin’s report Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture describes

the implementation of an ethical control and reasoning system potentially
suitable for constraining lethal actions in an autonomous robotic system.

Ruthless Robots:

By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink.

Imagination:
In one experiment, a supercomputer was given free access to the Internet and the ability to examine the contents of the network, with no restrictions or guidelines.

Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.
Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do on YouTube: it looked for cats.

“Nautilus” is another self-learning supercomputer. It was fed millions of newspaper articles going back to 1945, with the analysis based on two criteria: the nature of the publication and its location. Using this wealth of information about past events, the computer was asked to come up with suggestions on what would happen in the “future.”

News outlets which published online versions were also analysed, as was the New York Times’ archive, going back to 1945.
In total, Mr Leetaru gathered more than 100 million articles. Mood detection, or “automated sentiment mining”, searched for words such as “terrible”, “horrific” or “nice”.
The Egypt graph, said Mr Leetaru, suggested that something unprecedented was happening this time.

Media “sentiment” around Egypt fell dramatically in early 2011, just before the resignation of President Mubarak.

“If you look at this tonal curve it would tell you the world is darkening so fast and so strongly against him that it doesn’t seem possible he could survive.”
Similar drops were seen ahead of the revolution in Libya and the Balkans conflicts of the 1990s.
Reports were analysed for two main types of information: mood – whether the article represented good news or bad news, and location – where events were happening and the location of other participants in the story.
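The "automated sentiment mining" described here can be sketched in its crudest form: count positive and negative lexicon words per article and average over a period. The word lists and sample articles below are invented for illustration; Leetaru's actual tone dictionaries contain thousands of terms.

```python
# Crude lexicon-based tone scoring in the spirit of the study described
# above. Word lists and sample "articles" are made-up stand-ins.
POSITIVE = {"nice", "good", "calm", "stable", "hope"}
NEGATIVE = {"terrible", "horrific", "crisis", "violence", "unrest"}

def tone(article: str) -> float:
    """Tone in [-1, 1]: (positive - negative word count) / total words."""
    words = article.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

articles_2010 = ["a calm and stable year with hope for reform"]
articles_2011 = ["terrible violence as the crisis deepens",
                 "horrific unrest spreads"]

mood_2010 = sum(map(tone, articles_2010)) / len(articles_2010)
mood_2011 = sum(map(tone, articles_2011)) / len(articles_2011)
print(mood_2011 < mood_2010)  # the tonal curve darkens ahead of the event
```

Averaged over enough articles, even a lexicon this crude produces the kind of "tonal curve" whose sudden collapse Leetaru pointed to for Egypt.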

Super speech by Bruce Sterling in Austin back in June (Sterling spoke at the Turing Symposium commemorating the centenary of Turing’s birth, June 23rd 1912). I have been meaning to post about it earlier, but somehow didn’t get around to it before now…

It is well known that Turing was one of the earliest pioneers of computer science and artificial intelligence. And now we are just beginning to wrestle with the problems of the digital age (what is real and what is not) – the very problems Turing wrestled with way back in 1950 (in the ”Turing Test”).

According to Sterling, maybe the ”Turing Test” was never really about intelligence. Maybe it was about gender!

Let’s imagine that it was about an artificial computational system that wants to be a woman! So, how can we help this computational system be a real woman? Help the artificial become real. Help a guy be a real woman. And really feel like a woman does. Feel like a mother. Feel the wind like a woman feels the wind, etc. (Indeed, sexuality is, biologically speaking, older than intelligence. Intelligence rides on gender.)
Sadly, probably, we can’t help our poor artificial friend.

So, of course, the artificial system fails. And becomes terribly depressed. Maybe it even wants to kill itself?
That’s not what we wanted! Indeed, when we think about artificial intelligences we certainly don’t want them to entertain thoughts along the lines of the actual Alan Turing!

Nowadays we see Turing as a hero. We like to see him as one of us. We like to imagine that we are not hostile to people like him anymore (homosexuals). That we, as a society, have progressed a lot.

But in Bruce Sterling’s speech, we were all reminded that we are still hostile to people who are very different. Whose work will not be appreciated for 30 or 50 years. We don’t particularly enjoy people who are really confused about what is real and what is not.

And, we do not like machines that want to kill themselves, when they find out that they are not real… or that they may never know what it feels like to be a woman.

We don’t like to think about Artificial Suicidal Systems. We like IQ – but not suicidal artificial intelligence. We want machines that can inspire us towards more intelligence and awareness, not towards ”non-awareness”.

But, who ever said that it was going to be easy to be an intelligent machine? Confused and depressed about not being real.

On December 14th 2012 there was a super-interesting post on KurzweilAI.net announcing that Kurzweil is joining forces with Google.

Ray Kurzweil confirmed today that he will be joining Google to work on new projects involving machine learning and language processing…

Ambitions certainly run high:

I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science, so we can turn the next decade’s ”unrealistic” visions into reality.

As you read on, you begin to wonder if this is really the start of Arthur C. Clarke’s 2025 Braincap vision.
See my Braincap article.

It is certainly all very intriguing:
On page 156 of Kurzweil’s ”How to Create a Mind” one reads that Kurzweil has started a new company called Patterns:

…which intends to develop hierarchical self-organizing neocortical models that utilize HHMMs (Hierarchical Hidden Markov Models) and related techniques for the purpose of understanding natural language. An important emphasis will be on the ability of the system to design its own hierarchies in a manner similar to a biological neocortex.
…
Our envisioned system will continually read a wide range of material, such as Wikipedia and other knowledge resources, as well as listen to everything you say and watch everything you write (if you let it).
The goal is for it to become a helpful friend, answering your questions – before you even formulate them – and giving you useful information and tips as you go through the day.
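For the flavor of what an HHMM builds on: a plain hidden Markov model scores a word sequence by summing over hidden states with the forward algorithm, and an HHMM stacks such models so each state can expand into a sub-model of its own. Here is a minimal flat-HMM forward pass; all states and probabilities are invented for illustration:

```python
import numpy as np

# Toy flat HMM -- the building block that HHMMs stack hierarchically.
states = ["noun", "verb"]
T = np.array([[0.3, 0.7],   # P(next state | current state)
              [0.8, 0.2]])
E = np.array([[0.9, 0.1],   # P(word | state); word 0 = "dog", word 1 = "runs"
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])   # initial state distribution

def likelihood(obs):
    """P(word sequence) via the forward algorithm."""
    alpha = pi * E[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
    return alpha.sum()

# "dog runs" should score higher than "runs runs" under this model
print(likelihood([0, 1]), likelihood([1, 1]))
```

An HHMM would replace each state with a nested HMM of its own (a "noun phrase" expanding into words, and so on), which is the hierarchy Kurzweil has in mind.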

Ten years later, wireless sensor nets making automatic digital diaries and putting them directly out on the Internet for you – and what have you from Futuropolis 2058 – seem almost commonplace.

Obviously, IBM’s Watson was only the start.
In Jeopardy a question is posed, and Watson’s machinery goes to work. Its UIMA (Unstructured Information Management Architecture) deploys hundreds of subsystems, all of which attempt to come up with a response to the Jeopardy query: more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses. Finally, Watson acts as an expert system that combines the results of the subsystems, helping to figure out how much confidence it has in the answers the subsystems come up with.
Not only can Watson understand the Jeopardy queries, it can also search its 200 million pages of knowledge (Wikipedia and other sources) and come up with the correct answer faster than any human expert…
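That merge-and-rank step can be caricatured in miniature: independent scorers each rate every candidate answer, and a weighted combination yields one confidence per candidate. The scorers, weights, and passages below are toy stand-ins, not IBM’s DeepQA:

```python
# Toy DeepQA-style answer ranking: several independent scorers rate each
# candidate, and a weighted sum merges their evidence into a confidence.
PASSAGES = [
    "george washington was the first president of the united states",
    "abraham lincoln led the union during the civil war",
]

def passage_support(question, candidate):
    # Count passages mentioning the candidate alongside question terms.
    q_words = set(question.split())
    return sum(candidate in p and len(q_words & set(p.split())) >= 2
               for p in PASSAGES)

def length_prior(question, candidate):
    # Jeopardy answers tend to be short entities; a crude prior scorer.
    return 1.0 / len(candidate.split())

SCORERS = [passage_support, length_prior]
WEIGHTS = [0.8, 0.2]  # these would be learned from past games

def rank(question, candidates):
    def confidence(cand):
        return sum(w * s(question, cand) for w, s in zip(WEIGHTS, SCORERS))
    return sorted(candidates, key=confidence, reverse=True)

best = rank("first president of the united states",
            ["abraham lincoln", "george washington"])[0]
print(best)  # george washington
```

Watson runs hundreds of such scorers in parallel and learns the merge weights, but the shape of the pipeline – generate hypotheses, score evidence, combine into confidence – is the same.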
…
And that is just 2012 stuff. Kurzweil obviously won’t let us stop there.
On page 169 of ”How to Create a Mind” one reads that a better Watson should not only be able to answer a question, but also understand – pick out themes in documents and novels:

Coming up with such themes on its own from just reading the book, and not essentially copying the thoughts (even without the words) of other thinkers, is another matter.
Doing so would constitute a higher-level task than Watson is capable of today – it is what I call a Turing test-level task. (That being said, I will point out that most humans do not come up with their own original thoughts either, but copy the ideas of their peers and opinion leaders.)
At any rate this is 2012, not 2029, so I would not expect Turing test-level intelligence yet.

Intriguing indeed.

Arthur C. Clarke’s vision is surely well under way …

But even he didn’t anticipate a future where our memories belonged to the Cloud, Google or similar.
A cloud that will design its own cognitive hierarchies in a manner similar to a biological neocortex, based on our memories, and feed the result right back to us, shaping and directing our lives, as our most trusted friend.

By playing a game of colouring neurons, amateur neuroanatomists trace the wires of the retina, working together to find a neuronal ”wiring diagram”. Such a map, also known as the connectome, will help us understand how the retina serves visual perception.

Anyone can sign up to play; the only qualifications are curiosity and a zest for careful observation …
This is a new age of exploration. By recruiting enough amateur and professional scientists, we will be able to make significant breakthroughs in our understanding of the human brain.

Lead scientist on the project, Sebastian Seung, says that ”neuroscientists have long hypothesized that our memories are encoded in our connectomes, because each experience leaves a trace on the brain by altering neural connections.
We will be able to test this hypothesis by attempting to read memories from connectomes”.

Bruce Hood’s book The Self Illusion is a great book about the mental constructions that make us who we are.
According to Hood, deep down, our selves might not be all that solid.
Instead, other people influence us, and changing circumstances continually update our beliefs and our sense of self.

The self is shaped by the reflected opinions of others around us.
And Hood gives us a long list of very interesting observations and psychological experiments that illustrate that our selves are not rock-solid things.
From Jane Elliott’s experiments with a third-grade class (she convinced blue-eyed children that brown-eyed kids were smarter, or vice versa) to Solomon Asch’s conformity test (where students would rather follow the group than give the right answer) –
Hood concludes that:

We are so susceptible to group pressure, subtle priming cues, stereotyping and cultural cuing that the notion of a true, unyielding ego cannot be sustained. If it is a self that flinches and bends with tiny changes in circumstances, then it might as well be non-existent

Indeed, selves are constructed – not born, according to Hood.
People don’t remember much from before the age of four. According to Bruce Hood, the reason for this is that our selves have not been fully built at that age:

It’s not that you have forgotten what it was like to be an infant
– You were simply not ”you” at that age because there was no constructed self, and so you cannot make sense of early experiences in the context of the person to whom these events happened.

And the self is fragile. Even thinking too much about it might be a dangerous thing? We might be confused, begin to wonder if the construction, the self, can really do anything on its own? Do we, the self, have free will?

We need the self though:
Experiences are fragmented episodes unless they are woven together in a meaningful narrative. This is why the self pulls it all together.
And we also think of others as having selves. Indeed, we have not evolved to think of others as bundles of processes. Rather, we have evolved to treat others as individual selves.

And YES indeed – Curiosity has landed… absolutely mind-blowing, amazing stuff!
Make sure to watch the landing video (nasajpl)!!
Certainly, some of us still find it rather hard to understand that all of this actually worked…

And here is where it gets hilarious. The poor lonely robot, 248 million km away, turns out to be a rather sarcastic little fellow. And a poet as well…
At least that’s the impression you get when you read his Twitter ramblings: SarcasticRover.