We are the computational neuroscientists behind the world's largest functional brain model

Hello!

We're the researchers in the Computational Neuroscience Research Group (http://ctnsrv.uwaterloo.ca/cnrglab/) at the University of Waterloo who have been working with Dr. Chris Eliasmith to develop SPAUN, the world's largest functional brain model, recently published in Science (http://www.sciencemag.org/content/338/6111/1202). We're here to take any questions you might have about our model, how it works, or neuroscience in general.

edit: Also! Here is a link to the neural simulation software we've developed and used to build SPAUN and the rest of our spiking neuron models: [http://nengo.ca/] It's open source, so please feel free to download it and check out the tutorials / ask us any questions you have about it as well!

edit 2: For anyone in the Kitchener Waterloo area who is interested in touring the lab, we have scheduled a general tour/talk for Spaun at Noon on Thursday December 6th at PAS 2464

edit 3: http://imgur.com/TUo0x
Thank you everyone for your questions! We've been at it for 9 1/2 hours now, so we're going to take a break for a bit! We're still going to keep answering questions, and hopefully we'll get to them all, but the rate of response is going to drop from here on out! Thanks again! We had a great time!

edit 4: we've put together an FAQ for those interested, if we didn't get around to your question check here! http://bit.ly/Yx3PyI

(Xuan says): This is a rather hard question to answer. The definition of "Singularity" is different everywhere. If you are asking when we are going to have machines that have the same level of intelligence as a human being, I'd have to say that we are still a long ways away from that. (I don't like to make predictions about this, because my predictions would most certainly be wrong. =) )

(Terry says:) Oh, I have a pretty good hope that we'll be able to run this sized model in real-time in about 2 years. It's just a technical problem at that point, and there's lots of people who have worked on exactly that sort of problem.

The next goals are all going to be to add more parts to this brain. There are tons of other parts that we haven't got in there at all yet (especially long-term memory).

(Terry says:) Who knows. :) This sort of research is more about understanding human intelligence, rather than creating AI in general. Still, I believe that trying to figure out the algorithms behind human intelligence will definitely help towards the task of making human-like AI. A big part of what comes out of our work is finding that some algorithms are very easy to implement in neurons, and other algorithms are not. For example, circular convolution is an easy operation to implement, but a simple max() function is extremely difficult. Knowing this will, I believe, help guide future research into human cognition.
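For the curious, circular convolution itself is easy to compute outside of neurons too. Here's a minimal NumPy sketch (the dimensionality and random seed are illustrative choices, not taken from Spaun):

```python
import numpy as np

def circular_convolution(a, b):
    """Circular convolution of two vectors, computed via the FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

rng = np.random.default_rng(0)
d = 512  # a typical dimensionality for this kind of model
a = rng.normal(0, 1 / np.sqrt(d), d)
b = rng.normal(0, 1 / np.sqrt(d), d)

c = circular_convolution(a, b)
# The result is a new vector nearly orthogonal to both inputs,
# which is what makes the operation useful for binding concepts.
print(np.dot(c, a), np.dot(c, b))  # both close to 0
```

Because the operation is just a sum of pairwise products, it maps naturally onto the linear transformations that populations of neurons compute well; a sharp nonlinearity like max() does not.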

(Travis says:) You can take a look at our software and test it out for yourself! http://nengo.ca
There are a bunch of tutorials that can get you started with the GUI and with scripting, which is the recommended method.

(Terry says:) Interestingly, it turns out to be really easy to implement XOR in a 2-layer net of realistic neurons. The key difference is that realistic neurons use distributed representation: there aren't just 2 neurons for your 2 inputs. Instead, you get, say, 100 neurons, each of which responds to some combination of the 2 inputs. With that style of representation, it's easy to do XOR in 2 layers.

(Note: this is the same trick used in modern SVMs used in machine learning)

The functions that are hard to do are functions with sharp nonlinearities in them.
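To make the distributed-representation trick concrete, here's a toy version in plain NumPy (rectified-linear units stand in for realistic neuron tuning curves; this is a sketch of the idea, not the NEF's actual machinery):

```python
import numpy as np

rng = np.random.default_rng(42)
n_hidden = 100

# Distributed representation: each hidden unit responds to a random
# combination of the 2 inputs (random encoder plus bias), passed
# through a rectified-linear nonlinearity.
encoders = rng.normal(size=(n_hidden, 2))
biases = rng.uniform(-1, 1, size=n_hidden)

def hidden_activity(x):
    return np.maximum(0, x @ encoders.T + biases)

# The four XOR cases, using -1/+1 coding
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])

# A single linear readout (the second "layer"), solved by least squares
A = hidden_activity(X)
w = np.linalg.lstsq(A, y, rcond=None)[0]

print(np.sign(A @ w))  # recovers [-1, 1, 1, -1]
```

With only 2 input units and no hidden expansion this linear readout would be impossible; the 100 random mixed-selectivity units are what make XOR linearly decodable.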

How close are we to developing a reasonably validated brain theory? As Jeff Hawkins pointed out in his 2003 TED talk, there is too much data and almost no framework to organize it, but soon there will be one.

When I said reasonably validated, I meant something like the theory of evolution. Great stuff, I just hope to see something revolutionary before I die.
Can't think of a smart brain question for you guys. Why don't you tell us one piece of cool brain trivia that blows your mind?

Can you explain? I'm a 3rd year neuro major so I haven't taken a bunch of neuro classes but I thought it was binary in the sense of inhibitory and excitatory? With taking into account the frequency of activation of course but then again I'm new to this lol

(Terry says:) The current best guess seems to be that the strength of the synapse has a couple of discrete levels -- maybe something like 3 or 4 bits (basically it's how many proteins are embedded into the wall of the synapse, which gets up to at most 10 or so). But then there's also a probability of releasing neurotransmitter at all (so one synapse might have a 42% chance of signalling, while another one might be at 87%). This has more to do with the number of neurotransmitter vesicles there are and how well they can flow into that area.
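A crude sketch of that picture of a synapse (the 8 discrete levels and the 42% release probability are just the illustrative figures from the answer above, not measured values):

```python
import random

random.seed(1)

def synaptic_event(level, n_levels=8, p_release=0.42):
    """One presynaptic spike arriving at a synapse with a discrete
    strength level (a few bits' worth) and a probability of
    releasing neurotransmitter at all."""
    if random.random() < p_release:     # stochastic release
        return level / (n_levels - 1)   # graded, discrete strength
    return 0.0                          # transmission failure

# Over many spikes, the average effect is p_release * strength
events = [synaptic_event(level=5) for _ in range(10_000)]
average = sum(events) / len(events)
print(round(average, 2))  # close to 0.42 * 5/7, i.e. about 0.30
```

So even though each transmission is all-or-nothing, the effective synaptic weight averaged over many spikes is a smooth quantity.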

(Trevor says:) How would you define "reasonably validated"? We work with the Neural Engineering Framework (NEF) which we think is reasonably validated by Spaun. The fact that it performs human-like tasks with human-like performance seems like reasonable validation to us. Which isn't to say that it is the only possible brain theory; Spaun, in some ways, is throwing down the gauntlet, which we hope is picked up by other theories and frameworks.

You guys did amazing work. When I saw the paper my jaw dropped. I have a few technical questions (and one super biased philosophical one):

When you 'train your brain', how many examples, on average, did you give it? Did performance on the task correlate with the number of training sessions? How does performance compare to a 'traditional' hidden-layer neural network?

Does SPAUN use stochasticity in its modelling of the firing of individual neurons?

There is a reasonable argument to be made here that you have created a model complex enough that it might have what philosophers call "phenomenology" (roughly, a perspective with "what it is like to be" feelings). In the future it may be possible to emulate entire human brains and place them permanently in states that are agonizing. Obviously there are a lot of leaps here, but how do you feel about the prospect that your research is making a literal Hell possible? (Man, I love super loaded questions.)

Only the visual system in Spaun is trained, and that was so it could categorize the handwritten digits. More accurately, though, it grouped similar-looking digits together in a high-dimensional vector space. We trained it on the MNIST database (I think it was on the order of 60,000 training examples and 10,000 test examples).

The rest of Spaun, however, is untrained. We took a different approach than most neural network models out there. Rather than have a gigantic network which is trained, we infer the functionality of the different parts of the model from behavioural data (i.e. we look at a part of the brain, take a guess at what it does, and hook it up to other parts of the brain).

The analogy is trying to figure out how a car works. Rather than assembling random parts and swapping them out until they work, we try to figure out the necessary parts for a working car and then put those together. While this might not give us a 100% accurate facsimile, it does help us understand the system a whole lot better than traditional "training" techniques.

Additionally, with the size of Spaun, there are no techniques right now that will allow us to train that big of a model in any reasonable amount of time. =)

We did not include stochasticity in the neurons modelled in Spaun (so they tend to fire at a regular rate), although other models we have constructed show us that doing so would not affect the results.

The neurons in Spaun are simulated using a leaky integrate-and-fire (LIF) neuron model. All of the neuron parameters (max firing rate, etc.) are chosen from a random distribution, but no extra randomness is added in calculating the voltage levels within each cell.
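For readers who want to see what an LIF neuron boils down to, here's a minimal simulation (the parameter values are typical textbook choices, not Spaun's exact settings):

```python
def simulate_lif(input_current, t_max=1.0, dt=0.001,
                 tau_rc=0.02, tau_ref=0.002, v_threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron driven by a
    constant input current; returns the list of spike times."""
    v, refractory, spikes = 0.0, 0.0, []
    for i in range(int(t_max / dt)):
        if refractory > 0:
            refractory -= dt          # still recovering from a spike
        else:
            v += (dt / tau_rc) * (input_current - v)  # leaky integration
            if v >= v_threshold:
                spikes.append(i * dt)  # fire, then reset
                v = 0.0
                refractory = tau_ref
    return spikes

# A constant suprathreshold current gives regular firing, which is why
# Spaun's noise-free neurons tend to fire at a regular rate.
print(len(simulate_lif(2.0)))  # dozens of evenly spaced spikes in 1 s
print(len(simulate_lif(0.5)))  # 0: a subthreshold current never fires
```

Randomizing parameters like `tau_rc` or the gain across a population is what gives each neuron its own tuning curve, without adding noise to the voltage dynamics themselves.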

Well, I'm not sure what the benefit of putting a network in such a state would be. If there is no benefit to such a situation, then I don't foresee the need to put it in such a state. =)

Having the ability to emulate an entire human brain within a machine would drastically alter the way we think of what a mind is. There are definitely ethical questions to be answered for sure, but I'll leave that up to the philosophers. That's what they are there for, right? jkjk. =P

Hey guys, I don't know if you'll see this but I'm an undergrad with a Biology and Computer Science double major, interested in doing work like this. Do you have any advice for an undergrad trying to figure out how to get involved?

(Travis says:) Bio and comp-sci! That's great! I would say that your best bet is to find the neuroscience people at your school and start attending talks. Approaching and asking if there's a way you can get involved too is a great idea. It won't be anything fancy, but especially if you have good programming skills you'll be useful in some way right off the bat, and as you develop a rapport with the people in the lab you'll be able to work on more interesting things and have good recommendations for when you apply to grad school! And that's huge.

First off, I just want to say that I can't believe this only has 60-odd responses. This is something that I've been interested in for a long time.

A couple questions:

What programming language(s) did you use for this project?
What computer did you use? I assume it was one of the IBM or Sun Microsystems behemoths...
How familiar are you with the Blue Brain project? Do you have any contact with the group behind that?

Lastly, what's your best guess as to when we'll see the first legitimate artificial intelligence? 20 years? 50 years? Assuming that computing power continues on its average growth trend from the last 20 years.

(Xuan says): The core simulation code is in Java, mainly for cross-compatibility between different operating systems. The model itself is coded in Python (because Python is so much easier to write), and all it does is hook into the Java code and construct the model that way.

To simulate Spaun, we used both an in-house GPU server and the supercomputing resources we have available in Ontario, Canada (SHARCNET, if you want to look it up =) ). I believe it's available to all universities in Ontario.

We don't have contact with the people at the Blue Brain project, mostly because the approach they are taking is vastly different from what we are doing. I've used this example a few times now, but the approach they are taking is akin to trying to learn how a car works by replicating a working model atom by atom.

What we are doing on the other hand, is looking at the car, figuring out what each part does, and then constructing our own (maybe not 100% accurate) model of it.

Your last question is hard to answer. People always peg it as being "50 years away", but every time they make such a prediction it's still "50 years away". Also, the brain is such a complex organ that every time we think we have solved something, 10 more questions pop up. So... I'm not even going to try making a guess at all. =)

(Trevor says:) Our simulator is open source so feel free to peruse the source and run it yourself! It's Java, which we interact with through a Swing GUI and Jython scripting.

We definitely know of the Blue Brain project, but we don't have any collaborations with them; they are trying to build a brain bottom-up, figuring out all the details and simulating it. We are trying to build a brain top-down, figuring out the functions we want it to perform and building those with biologically plausible tools. Eventually I hope that both projects will meet somewhere in the middle and it will be the best collaboration ever.

Legitimate artificial intelligence is a really loaded phrase; I would argue we already have tons of legitimate AI. The fact that I can search the entire internet for anything based on a few query terms and find it in less than a second is amazing, which to me more than qualifies as legitimate. If you mean how long until we have the first artificial brain that does what a human brain does... I feel like I have almost no basis for making that guess. I would not be surprised if it happened in 10 years. I would not be surprised if it never happens.

(Terry says:) Simplicity. The core research software is just a simple Java application [http://nengo.ca], so that it can be easily run by any researcher anywhere (we do tutorials on it at various conferences, and there's tutorials online).

(Trevor says:) It wasn't really a conscious decision, we just used what we had available. We all have computers. A former lab member was very skilled in Java, so our software was written in Java. When we realized that a single-threaded program wouldn't cut it, we added multithreading and the ability to run models on GPUs. Moving forward, we're definitely going to use things like FPGAs and SpiNNaker.

(Xuan says): We currently have a working implementation of the neuron simulation code in C++ and CUDA. The goal is to be able to run the neurons on a GPU cluster. We have seen speedups anywhere from 4 to 50 times, which is awesome, but still nowhere close to real time.

This code works for smaller networks, but for a big network like Spaun (Spaun has a lot of parts, and a lot of complex connections), it dies horribly. We are still in the process of figuring out where the problem is and how to fix it.

We are also looking at other hardware implementations of neurons (e.g. SpiNNaker), which have the potential of running up to a billion neurons in real time! =O
SpiNNaker is a massively parallel system of ARM processors.

(Trevor says:) Yes, and we're in the process of doing that! Running on GPU hardware gives us a better tradeoff (in terms of effort of implementation versus efficiency improvement) than, say, reprogramming everything in C. The vast majority of time is spent in a few hot loops, so we only optimize those. Insert Knuth quote here.

(Xuan says): "Serial" computers have the advantage of being the most flexible of platforms. There are no architectural constraints (e.g. chip fan-in, chip maximum interconnectivity) that limit the implementation of whatever model we attempt to create. This made it the most logical first platform to use to get started.
Additionally, FPGA and other implementations are not quite fully mature enough to use on a large scale. We're still improving these techniques.

That said, we are currently working with other labs (see here) to get working hardware implementations that are able to run neurons in real time.

"we are currently working with other labs (see here) to get working implementations of hardware that is able to run neurons in real time." So am I, a little bit: my third-year project is to put a spiking neural network on an FPGA as a proof of concept.

Can you give us as layman of a description as you can of how this thing actually works? How does your software actually emulate biological systems? What is the architecture of the software like at a high level? What does the data look like that makes up the 'memory'?

Spaun is composed of different modules (parts of the brain, if you will) that do different things. There is a vision module, a motor module, a memory module, and a decision-making module.

The basic run-down of how it works: it gets visual input, processes that input, and decides what to do based on it. It could put the information in memory, change it in some way, move it from one part of the brain to another, and so forth. By following a set of appropriate actions it can answer basic tasks:

e.g.
- get visual input
- store in memory
- take item in memory, add 1, put back in memory
- do this 3 times
- send memory to output
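As ordinary (non-neural) pseudocode, that task is nothing more than the following; the whole point of Spaun is that every one of these steps is instead carried out by populations of spiking neurons:

```python
def counting_task(visual_input, repeats=3):
    """Plain-Python caricature of the counting task described above."""
    memory = visual_input        # get visual input, store in memory
    for _ in range(repeats):     # do this 3 times
        memory = memory + 1      # take item in memory, add 1, put back
    return memory                # send memory to output

print(counting_task(4))  # → 7
```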

The cool thing about Spaun is that it is simulated entirely with spiking neurons, the basic processing units in the brain.

The stuff in the memory modules of Spaun consists of points in a high-dimensional space. Think of a point on a 2D plane, then a point in 3D space. Now extend that to a 512D hyperspace. It's hard to imagine. =)
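One property that makes such high-dimensional spaces useful: randomly chosen vectors are almost always nearly orthogonal, so many distinct items can coexist without interfering much. A quick illustration (the dimensionality matches the answer above; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512

# Two random unit vectors in a 512-dimensional space
a = rng.normal(size=d)
a /= np.linalg.norm(a)
b = rng.normal(size=d)
b /= np.linalg.norm(b)

# In high dimensions, their similarity is almost always tiny
# (the dot product concentrates around 0 with spread ~ 1/sqrt(d))
similarity = float(np.dot(a, b))
print(similarity)  # typically within roughly ±0.1 for d = 512
```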

This is an excellent AMA. You guys are very dedicated to answering every question that gets asked! I'm curious about dreams; what does your research into the depths of the brain have to say about how we invent and process our dreams?

And were you always interested in studying the brain? What did you originally go to school for, and how did you end up where you are today?

(Xuan says): There is some research that suggests that dreams are a way for the brain to process all of the information we encounter during the day (maybe?). It is suggested that the brain does a "fast forward" of the day's events, and this is what a dream is. This is, of course, only one possible explanation.

It is possible that Spaun may one day have a "dream" state which it uses to analyze training examples and help it perform better on future tasks.

I have always been interested in the brain, although it started out in the area of linguistics. I did my undergraduate degree in Computer Engineering, and when I applied for a Master's degree, I got a response from the awesome Chris Eliasmith and said "Hell yeah!"

(Trevor says:) Thanks for the kind words! I find dreams really fascinating too! Spaun doesn't have much to say about dreams; it is always focusing on the current task at hand. In a more complicated model, it's very possible that it will need a break eventually, and sleep seems like a perfect way to get that kind of break.

As for how the brain constructs these kinds of dreams, I really recommend reading our supervisor Chris Eliasmith's upcoming book, How to Build a Brain. He presents the semantic pointer architecture, which gives a way to make compressed semantic representations of things. In my view, dreams are what happens when the brain is allowed to free associate with semantic pointers; we're not constrained by our normal sensory input, so we just try to combine and manipulate the pointers randomly.

I was always interested in studying the brain, though it was only recently that I really realized it was possible. Growing up I think I just assumed that people knew what was going on, but that's not true. I originally started a computer science degree wanting to eventually go to med school and become a neurologist, but theoretical neuroscience seemed much more interesting and much more suited to my background.

(Travis says:) I am an atheist. I would find it very difficult to believe in a soul and be a neuroscientist at the same time, since we're looking to explain the brain and don't see humans as anything special apart from having more cortex for information processing. But, personally, I think "the soul" is a good metaphor and still use the word.

(Trevor says:) Disclaimer: the views expressed by Travis DeWolf are his and his alone, and do not necessarily reflect the views of the CNRG, the CTN, or the University of Waterloo.

That said, I am an atheist. I would find it very difficult to believe in a soul and be a neuroscientist at the same time, since we're looking to explain the brain and don't see humans as anything special apart from having more cortex for information processing. But, personally, I think "the soul" is a good metaphor and still use the word.

(Terry says:) I don't affiliate with any organized religion, but I'm open to the possibility.

As a researcher, I tend to use atheism as the working hypothesis: assume that the brain is all that there is, and figure out how it works in terms of physical matter. Now, it may be that once we (100 years from now) build a complete model of a brain down to the smallest physical detail, we still find that something is missing. That could happen, and as a scientist I have to leave myself open to that possibility. If that did happen, that'd be an extremely interesting finding, and then there'd be all sorts of fun research in trying to figure out the properties of that thing that's left over (which would probably end up being called a "soul"). But, until that happens, my working assumption will be that we can investigate the world and figure stuff out about it without postulating non-physical entities. :)

(Trevor says:) Haha, that is definitely our long-term goal ;) Seriously though, computational neuroscience suffers from the same gender ratio problems as the rest of STEM. Experimental neuroscience is not as imbalanced. Hopefully everything will balance out over time!

(Terry says:) Possibly. It's still very far off, but if we do manage to figure out how (parts of) the brain work, then all sorts of interesting things like that could happen. For myself, while I enjoyed GITS, I tend to prefer books by authors like Greg Egan (Permutation City and Zendegi would be the most on-topic ones for this work). Zendegi even has a major character spending lots of time modelling the bird song-learning system, which is a pretty close analogue to one of the core parts of our Spaun model.

Ooh book recommendations too =oD
Would you describe the brain as a computer, albeit a complex one, or do any of you have your own name/explanation for the brain?
Do you think it would ever be possible to be able to make copies of memories? Like that movie... ummm... "The Final Cut" (had to look that up)

(Terry says:) It's a very very different sort of computer than we're used to. It may have 100,000,000,000 neurons all running in parallel, but each of those neurons is maybe running at the equivalent of 10Hz. So figuring out what sort of algorithms work on that sort of computer is very different from normal computer algorithms.

As for copies of memories, that's going to be extremely hard. Right now, the best theories are that long-term memories are stored by modifying the individual connection weights between neurons. However, no one seems to have any good way of measuring those in bulk. The only approach right now is to freeze a chunk of the brain, slice it into 0.1 micron-thick slices, feed it to an electron microscope, and then manually trace out the size of each connection, and guess how strong the connection is based on the size. This has been done for a small piece of one neuron, and it took years of work: [http://www.youtube.com/watch?v=FZT6c0V8fW4]

That's incredibly complex and time consuming =o0
Just made a cup of tea and a billion questions just occurred to me... here are 2:
1) Has making a brain of another animal ever been done or has there been enough research into other animals for this to be possible? (we humans are very self-interested after all)
2) What uses do you see this having? Other than medical I mean.. (I saw an article about this and they were talking about intelligent robots taking messages and doing deliveries, and I just don't think that does the project justice)

(Terry says:)
1) Not really. I'd also say that most of the parts of Spaun are things that humans share with mammals, so things like the part that recognizes numbers isn't that different from what you'd find in other mammals.

2) Medical is a big one. And that includes things like prosthetic limbs, since understanding how the brain controls a normal arm will help with artificial arms. The other big one is just trying to understand what the algorithms are that the brain uses.

It seems like your efforts have mostly been in software (indeed, this is a good approach for keeping your efforts flexible). After your research has progressed further, do you see the specific algorithms/architecture you use being compatible with conversion into specialized hardware in order to increase the size and performance of the neural nets you're able to work with? I'm specifically thinking of something along the lines of Kwabena Boahen's work.

My opinion has long been that if the goal is to achieve performance and scale equivalent to the human brain, software running on general purpose processors (or even GPUs) will take longer to reach that level than judicious use of ASICs, and I'm curious to hear your thoughts.

The great thing is that there are a whole bunch of projects right now to build dedicated hardware for simulating neurons extremely quickly. Kwabena takes one approach (using custom analog chips that actually physically model the voltage flowing in neurons), while others like SpiNNaker [http://apt.cs.man.ac.uk/projects/SpiNNaker/] just put a whole bunch of ARM processors together into one giant parallel system. We're definitely supporting both approaches.

I should also note that, while there is a lot of work building these large simulators, the question we are most interested in is figuring out what the connections should be set to in order to produce human-like behaviour. Once we get those connections figured out, then we can feed those connections into whatever large-scale computing hardware is around.

(Terry says:) We're definitely keeping a close eye on the connectome project. My hope is that it'll progress along to a point where we might be able to compare the connections that we compute are needed to the actual connections for a particular part of the brain. However, right now the main thing we can get from the connectome project is the sort of high-level gross connectivity (part A connects to part B, but not to part C) rather than the low-level details (neuron #1,543,234 connects to neuron # 34,213,764 with strength 0.275).

(Xuan says): I can freely admit that in its current state, it will not. However, the approach we are taking is more flexible than Watson. Watson is essentially a gigantic lookup table. It guesses what the question is asking and tries to find the "best match" in its database.

The approach we are taking (the semantic pointer architecture), however, incorporates context information as well. This way you can tell the system "A dog barks. What barks?", and it will answer "dog" rather than "tree" (even though "tree" is usually more similar to "bark").
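A rough sketch of that kind of context-sensitive lookup, using circular-convolution binding in the style of holographic reduced representations (the vocabulary, encoding, and seed here are made up for illustration; this is not Spaun's actual vocabulary):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 512

def unit_vec():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Bind two vectors with circular convolution (via FFT)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)

def unbind(s, a):
    """Convolve with the approximate inverse of a to undo a binding."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(s, a_inv)

# A tiny vocabulary of random concept vectors
vocab = {name: unit_vec() for name in ["dog", "tree", "barks", "agent"]}

# Encode "a dog barks": the agent role is bound to "dog"
sentence = bind(vocab["agent"], vocab["dog"]) + vocab["barks"]

# Query "what barks?": unbind the agent role, then clean up the noisy
# result by picking the most similar vocabulary item
guess = unbind(sentence, vocab["agent"])
answer = max(vocab, key=lambda name: float(np.dot(vocab[name], guess)))
print(answer)  # → dog
```

The cleanup step at the end is what keeps the answer "dog" rather than a superficially similar word like "tree": similarity is measured against the whole bound structure, not against isolated word associations.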

You're really doing Watson a disservice there. Watson incorporates cutting edge implementations of just about every niche Natural Language Processing task that has been tackled, and the very example you give (Semantic Role Labeling) is one of the most important components of Watson. As a computational linguistics researcher I would pretty confidently say that no large-scale system resolves "A dog barks. What barks?" better than Watson does.

Hey guys. What's next? What is the next place you're taking this project, or are you moving on to something else entirely?

I mean, you gonna give it some hands and let it modify and determine its own environment? Try and teach it an appreciation for Shakespeare? Teach it to talk? Steal bodies and build it a Frankenstein's Monster-esque body so it can rampage through the local countryside? Or perhaps just point it at Laurier?

(Travis says:) One of the major focuses of the lab right now is incorporating more learning into the model. A couple of us are specifically looking at hierarchical reinforcement learning and building systems that are capable of completing novel tasks using previously learned solutions, and adding learned solutions to its repertoire!

One of the profs at UWaterloo is actually working on incorporating robotics into our models, and having robot eyes / arm being controlled by the spiking neuron models built in Nengo! My main concern for this is getting it to learn how to properly high-five me asap.

(Terry says:) The project I'm currently working on is getting a bit more linguistics into the model. The goal is to be able to describe a new task to the model, and have it do that. Right now it's "hard-coded" to do particular tasks (i.e. we manually set the connections between the cortex and the basal ganglia to be what they would be if someone was already an expert at those tasks).

(Xuan says): It hasn't really. Just because Spaun is a computer simulation doesn't mean that it is entirely deterministic. There are many situations in which Spaun answers differently each time, despite having the same input parameters and the same components.

(Terry says:) I actually don't think free will has anything to do with determinism. For me, free will is "making actions in accordance with my beliefs and desires". This has nothing to do with whether or not the universe is deterministic or non-deterministic. I certainly don't want to pin my free will on quantum indeterminacy -- that'd mean that instead of my actions being in accordance with my beliefs and desires, my actions are based on quantum randomness! That's not free will at all!

Okay, so I'm just a 17 year old high school kid, but I want to major in neuroscience and have already read a substantial amount of material on the subject.

I've done a lot of research on critical periods and how it relates to neurological development and learning. What are your takes on Critical Periods versus Sensitive Periods? Does your brain model learn like an actual one does (forming synapses and such)? Do you believe that ability to onset a second critical period will lead to finding cures for autism? What is the next big question in neuroscience (What topic are people being drawn to in the field)?

(Travis says:) Hi! Thanks for the interest! :D Hmm, can you specify further what you mean by critical and sensitive periods? I'm not overly familiar with the terms.
The SPAUN model performs learning by altering the values of the connection weight matrix that hooks up all the neurons to one another. So if two neurons are communicating, and we increase their connection weight from 4 to 5, it's analogous to something like increasing the effectiveness of the neurotransmitters, but we're not simulating forming new synapses.
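A bare-bones version of that kind of connection-weight learning: a simple error-driven (delta-rule-style) update on a random toy network. Spaun's actual learning rules are more involved; this just illustrates "learning = nudging weights":

```python
import numpy as np

rng = np.random.default_rng(3)
n_pre, n_post = 10, 5

# Random initial connection weights between two small groups of neurons
weights = rng.normal(0.0, 0.1, size=(n_post, n_pre))

def learn_step(weights, pre_activity, error, learning_rate=0.1):
    """Nudge each weight in proportion to presynaptic activity and
    postsynaptic error (a delta-rule-style update)."""
    return weights - learning_rate * np.outer(error, pre_activity)

pre = rng.uniform(0.0, 1.0, n_pre)       # fixed presynaptic activities
target = rng.uniform(-1.0, 1.0, n_post)  # desired postsynaptic output

for _ in range(200):
    error = weights @ pre - target
    weights = learn_step(weights, pre, error)

print(np.allclose(weights @ pre, target, atol=1e-3))  # → True
```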
And the next big question! That will depend on what area of neuroscience you're studying! :D My focus is in motor control, currently I'm concerned with motor learning issues, things like generalizability of learned actions and developing / exploiting forward models (models of the dynamics of the environment you're operating in).
Oh, and of course Brain Computer Interfaces are sexy, something I would really love to move towards, myself, is neuroprosthetics. How awesome are they?? So awesome.

Hi guys. I'm in a lab in another part of the world where a different kind of virtual brain has been developed, where we were interested in recreating the global spatiotemporal pattern dynamics of the cortex based on empirical connectivity measured from diffusion {spectrum, tensor, weighted} imaging.

In particular, we're pretty sure transmission delays and stochastic forcing contribute significantly to form the critical organization of the brain's dynamics. Do these elements show up in your model?

I'm also pretty keen on understanding exactly how you operationalize your tasks/functions. Are they arbitrary input/output mappings or do they form autonomous dynamical systems? Does the architecture scale to tasks or behaviors with multiple time scales such as handwriting (strokes, letters, word, sentences, e.g.)? Is this a large scale application of the 90s connectionist theories on universal function approximation, or have I missed a great theoretical advance that's been made?

While I'm at it, how do you guys relate your work to Friston's free energy theory of brain function?

In general, our methods are focused more on recreating the functional outputs of the brain, rather than matching experimental data such as DTI. Where data like that comes in for us is in guiding the development of the model; making sure that what we build actually fits with, for example, the observed connectivity in the brain. So it's kind of two different ways of approaching the data, which are both important I think.

We do not have explicit transmission delays or stochastic resonance/synchronicity in our model. Our timing data arises from the neurotransmitter time constants we use in our neuron models, which we take from experimental data. We can see synchronicity in the model if we look for it, but we did not build it into the system, or use it in any of the model's computations.
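Concretely, a "neurotransmitter time constant" enters the model as an exponential filter applied to each spike train. A minimal sketch (the tau value here is just an example in the physiological ballpark, not one of Spaun's tuned constants):

```python
import math

def postsynaptic_current(spike_times, t, tau=0.005):
    """Current at time t from filtering a spike train with an
    exponential synaptic filter h(s) = exp(-s / tau) / tau."""
    return sum(math.exp(-(t - ts) / tau) / tau
               for ts in spike_times if t >= ts)

spikes = [0.010, 0.030]  # two spikes, 20 ms apart

# The current jumps at each spike and then decays with time constant
# tau, which is where the model's timing behaviour comes from.
print(postsynaptic_current(spikes, 0.010))  # 1/tau = 200.0 at the spike
print(postsynaptic_current(spikes, 0.020))  # decayed to 200 * e^-2 ≈ 27
```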

One of the most important features of the NEF methods is that we specify the functional architecture based on neural data and our theories as to what processes are occurring, and then build a model that instantiates that architecture. That is what distinguishes it most from the "90s connectionist theories", where you specify desired inputs and outputs, and hope that the learning process will find the functions that accomplish the mapping.
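
As a rough sketch of that NEF idea -- specify the function you want, then solve for connection weights, rather than training toward an input/output mapping -- here's a toy in NumPy. It's illustrative only (rectified-linear tuning curves stand in for LIF neurons, and all parameters are made up), not Nengo's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A population of rate-mode neurons encoding a scalar x in [-1, 1].
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], size=n_neurons)  # preferred directions
gains = rng.uniform(1.0, 3.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear response curves (stand-in for LIF tuning curves)."""
    j = gains * encoders * x + biases
    return np.maximum(j, 0.0)

# Solve for linear decoders that approximate f(x) = x**2 by least squares --
# this is the "specify the function, derive the connections" step.
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])  # activity matrix, 200 samples x 50 neurons
target = xs ** 2
decoders, *_ = np.linalg.lstsq(A, target, rcond=None)

estimate = A @ decoders
rmse = np.sqrt(np.mean((estimate - target) ** 2))
```

The point of the sketch: no iterative learning happens; the "weights" fall out of one least-squares solve once you've stated what the population should compute.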

I think Friston's free energy theory is a very interesting way of thinking about what is going on in the brain. However, many of the details require fleshing out. The strength of the theory is that it provides a general way of thinking about the processing occurring in the brain, but that is also its weakness; it is so general, that it is often difficult to see its specific implications or predictions for understanding or modelling the brain. To date, most of the models based on the theory have been quite simple. If more large-scale models were developed that capitalized on the theory's promise of an explanation of general brain function, that would be really cool to see.

(Travis says:) We simulate them physically, but we've actually shown that we get the same results when we simulate them probabilistically! I believe that was Terry who did that, as soon as he gets back I'll ask him to comment more on that if you're interested!

(Terry says:) Yup! The normal physical simulation just uses currents and voltages (simulated in a digital computer), but it turns out that real neurons actually have a probabilistic component: when a neuron spikes it has a certain probability of affecting the next neuron. We'd ignored that when first putting together the models, but then we tried adding the probability stuff in and it all worked fine!

We have also done some basic work where we actually physically simulated neurons (with custom computer chips that have transistors on them that mimic the cell membrane of a real neuron). That was with this project at Stanford: [http://www.stanford.edu/group/brainsinsilicon/goals.html]

Just wanted to say that you guys are absolutely amazing. I've read a bit about ANNs and such and have been interested in trying to write my own very basic ANN, but I have very little experience coding anything anywhere near that complex, let alone creating something like this. It's really mindblowing that we've gotten to this point in creating a model of the brain. I wonder what the next 5-10 years will bring.

(Travis says:) If you're interested I would recommend reading up on reinforcement learning! There are a lot of really neat demos and sample code to get you up and running quickly, for things like having a mouse learn to avoid a cat or not fall off a cliff, in Python, which is easy to get started in. Terry has actually written one that can be found here (along with a lot of other material as well): https://github.com/tcstewar/ccmsuite

Or if you're looking to start in with neurons, you can check out our page http://nengo.ca, grab Nengo, and then check out the tutorials section!
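
The kind of "learn not to fall off a cliff" demo Travis mentions can be sketched with tabular Q-learning. This is a toy 1-D world of my own (not code from ccmsuite): state 0 is a cliff, state 5 is the goal, and the agent learns to head right:

```python
import random

random.seed(1)

# 1-D world: states 0..5; state 0 is a cliff (bad), state 5 is the goal (good).
N_STATES = 6
CLIFF, GOAL = 0, 5
ACTIONS = (-1, +1)  # step left or step right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 2  # start near the middle
    while s not in (CLIFF, GOAL):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = s + a
        r = 1.0 if s2 == GOAL else (-1.0 if s2 == CLIFF else 0.0)
        best_next = 0.0 if s2 in (CLIFF, GOAL) else max(Q[(s2, a2)] for a2 in ACTIONS)
        # Standard Q-learning temporal-difference update.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy from every interior state heads right.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(1, 5)}
```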

(Trevor says:) Oh boy! Lots of people wanting to help! Well, the first step is to (attempt to) learn our software, and the theory behind it. There's a course for doing this at the University of Waterloo -- we're looking into ways that we can offer this to people outside of the university in something like Coursera (not for credit). Take an experimental neuroscience paper and try to model it!

(Terry says:) Thanks for the offer! I think there's two main ways to be involved:

1) Working on the core simulator. This is a pretty standard Java app, and is all on github [https://github.com/ctn-waterloo/nengo]. Speeding it up, making it more robust, and even just doing basic testing and QA would be incredibly useful (we try to do some of that, but there aren't enough hours in the day)

2) Building new neural models. This approach to neural modelling is pretty new, so there's lots of existing neural research that it could be applied to. When we get new people in the lab, we often just give them a bunch of different neuroscience papers to read, and if anything jumps out at them as interesting, then the first project is to try to build a model of that system. We'd definitely try to help out as best we could, if people were interested in doing something like that!

(Travis says:) It depends on how patient you are! We have 24GB of RAM, and it is very, very slow on these machines. About 2-3 hours to simulate 1 second. That's 2.5 million neurons, and there are around 10 billion in a human brain; if someone can math that out with Moore's law we could have an approximation!

At 3 hours per simulated second, that is a 10,800:1 slowdown; log2 10800 ≈ 13.4 doublings, and since each doubling takes about 1.5 years, that's roughly 20 years. So the existing model could be run in real time at the same price in 20 years, assuming no optimizations, etc.

To run in real time and also scale up to 10 billion neurons? Assuming scaling is O(n) for simplicity's sake, that means we need to run 4,000x more neurons (10B / 2.5M); log2 4000 ≈ 12 more doublings, or another 18 years.

So in 38 years, one could run the current model with 10 billion neurons in real time.

(Caveats: it's not clear Moore's law will hold that long; this assumes an equal price point, but we can safely assume a working brain would be run on a supercomputer many years before this 38-year mark; scaling issues are waved away; etc.)
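
The arithmetic above, spelled out (taking the 3-hours-per-second figure and a 1.5-year doubling period at face value):

```python
import math

SIM_SECONDS_PER_SECOND = 3 * 3600   # 3 hours of wall-clock per simulated second
DOUBLING_PERIOD_YEARS = 1.5         # assumed Moore's-law doubling time

# Years until the current 2.5M-neuron model runs in real time:
doublings_realtime = math.log2(SIM_SECONDS_PER_SECOND)       # ~13.4
years_realtime = doublings_realtime * DOUBLING_PERIOD_YEARS  # ~20

# Additional years to scale from 2.5M to 10B neurons, assuming O(n) cost:
doublings_scale = math.log2(10e9 / 2.5e6)                    # ~12
years_scale = doublings_scale * DOUBLING_PERIOD_YEARS        # ~18

total_years = years_realtime + years_scale                   # ~38
```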

(Terry says:) The biggest thing stopping us from scaling it up is that we can't just add more neurons to the model. To add a new brain part to the model, we have to take a guess as to what that brain part does, figure out how neurons can be organized to do that, and then add that to the model. The hard part is figuring out how the neurons should be connected, not simulating more neurons.

(Terry says:) The basic components have been worked on since around 2005, but it's only been since early last year that we felt we had enough components to try putting them all together into one model.

I read somewhere that maybe in about 20 or 30 years it will be possible to "program" a specific human brain, with all its experiences, its opinions and transform the "soul" or whatever it is that makes us feel alive into a programmed code. Will this ever be possible or is it just another utopian way of trying to achieve immortality?

(Terry says:) Definitely not in 20 to 30 years. Measuring the connections between neurons in the brain (which is where it is generally believed all these details are stored) is ridiculously difficult. For a contrary opinion, see Greg Egan's scifi book Zendegi.

As for the soul and whether that programmed copy of a brain would feel alive, if we ever get to that stage, I have no idea. But I think if we ever get to a stage (say, 100 years from now) where we have these simulations around and they do seem to behave just like normal people, then we might just have to accept that they are.

Can you recommend books / papers where I can learn more about the following?

Once when I was doing a great deal of typing, writing papers for grad school, I began to notice regularly making a weird kind of typo, generally with words of two or three syllables. Sometimes I would type a completely incorrect, but properly spelled, word that was weirdly related to the intended word. Other times, the misspelled word consisted of an A part and a B part. The A part was the normal word as intended. The B part was the suffix of a different word, but one also strangely related to the intended word. Strangely as in semantically, not phonetically, and semantically but not via any direction my conscious flow of thought had been taking. All my examples are at home on a spun-down drive, I wish I had them to show you.

I thought about what had to be going on in my head in terms of subsystems to support typing the paper and to generate those typos. I think there has to be: 1) A composer, thinking about the topic area and the paper I'm writing, 2) A chunker, taking the stream of thought from the composer and converting it into chunks to be handed to the typing subsystem, 2A) Retrieval by semantic keys, converting or reifying each chunk from the composer into chunks of letters/keyboard strokes to be handed to the typing/muscular control system, i.e. a semantic map, 3) Muscular control / sequencing for typing the characters retrieved in 2A.

Given that model, the typos I was seeing happened in step 2A above. A composer token was misinterpreted by the semantic mapper, with the incorrectly retrieved chunk typed properly by the muscle sequencing system.

Can you recommend books or papers that address these kinds of brain subsystems? How do I do research to learn if people have addressed the very topics I mentioned above?

And, finally, how far is your model from being able to model the behaviors I described?

(Travis says:) I think you would be very interested to read Chris' upcoming book 'How to build a brain', which talks about the Semantic Pointer Architecture (SPA), which is the foundation behind the SPAUN model.
The basic idea is that information is compressed into smaller representations that 'point' (if you're familiar with the programming term) to the full representation; but instead of being just an address, they also incorporate semantic information, so it's possible to work with the pointer itself effectively. This would be along the lines of thinking of words as wholes, and then, when you need more detailed information about the letters involved, using the pointer to pull up that info, which you can then pass along for further processing and output.
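
The binding operation behind semantic pointers can be sketched with circular convolution, in the style of holographic reduced representations. This is a toy in NumPy (my own illustration, not Spaun's actual code): two random vectors are bound into one of the same size, and an approximate inverse recovers a noisy copy of the bound item:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # dimensionality of the vectors

def vec():
    """Random unit vector standing in for a semantic pointer."""
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: combines two vectors into one of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(ab, a):
    """Circular correlation: approximately recovers b from bind(a, b)."""
    return np.real(np.fft.ifft(np.fft.fft(ab) * np.conj(np.fft.fft(a))))

def similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

word, letters = vec(), vec()
pointer = bind(word, letters)      # compressed, same-sized representation
recovered = unbind(pointer, word)  # noisy copy of `letters`
# `recovered` is far more similar to `letters` than to any unrelated vector.
```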

I understand the computer needs two hours of processing time for each second of Spaun simulation, and from what I've read the brain's processing power is roughly 100 million MIPS; what is SPAUN's estimated processing power?

I've also read that the brain would have "human-like" flaws, what type of flaws should we expect?

We've never actually measured or estimated Spaun's MIPS, so I don't have an answer for this. Sorry. =(

One of the easiest "human-like" flaws to demonstrate is its memory. Typical computer memory is super accurate: when you ask a computer to store something, you'd expect it to remember it very well. Spaun, however, exhibits more "human-like" memory. It has the ability to remember lists of numbers, but as the list gets longer, the memory gets worse. Also, things at the start and end of the list are remembered better; things in the middle are lost more easily.

(Terry says:) We ran Spaun on a pretty basic workstation: 16 hyperthreaded cores at 2.4GHz with 24GB of memory. I'm sure there are people reading this who have that sort of machine at their desk. (Indeed, if you want to, download Spaun from [http://models.nengo.ca/spaun] and Nengo (the simulator) from [http://nengo.ca] and run it yourself!)

But, when people estimate the brain's processing power at 100 million MIPS, they're really doing something like "10 billion neurons times 1,000 connections per neuron times about 10 operations per second per neuron", where the 10 operations per second is a measure of how long it takes for a neuron to respond to changes in its input. For Spaun, it'd be around 2.5 million neurons, and ~1,000 connections each, and 10 operations per second = 25 thousand MIPS.
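
Terry's back-of-envelope can be checked directly (same assumed figures: 1,000 connections per neuron, 10 operations per second per connection):

```python
# Rough MIPS estimate: neurons x connections per neuron x operations per second.
def brain_mips(n_neurons, connections_per_neuron=1_000, ops_per_second=10):
    ops = n_neurons * connections_per_neuron * ops_per_second
    return ops / 1e6  # convert operations/sec to MIPS

human = brain_mips(10e9)   # ~100 million MIPS
spaun = brain_mips(2.5e6)  # ~25 thousand MIPS
```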

Hi guys, first let me say that I periodically turned into a giddy schoolgirl when I read about SPAUN the first time. I have a couple of questions for you.

1) I'm an undergrad who wants to go into neuroscience research. Do you guys have any tips to get a leg up on the pile? I'm a sophomore bio BS major with minors in chemistry and cognitive science, and I'm working in a lab on glial signalling now....

2)... which brings me to my second question. The lab I work in is concerned with the role of astrocyte glia in the function of the nervous system. The mammalian brain is something like 50% glia by mass and while they were originally thought of as filler (hence the name) a lot of recent research is showing they fulfill vital roles in synaptic regulation such as controlling potassium and calcium concentration. I'm really interested in the emerging field of connectomics, which I imagine you guys are familiar with, but I'm worried the premise might be flawed in that it only accounts for neuronal connections. As research progresses and we see that "auxiliary" glial cells play a larger role, do you think the direction of connectome science will have to be reworked?

From my own undergraduate experience, I'd say that working in a lab is probably the best way to get a leg up (and to get experience). And it seems from your response that you are already doing that! =)

That's the awesome thing about science, when we find that the explanations we have so far are inadequate, we search for more answers. In my opinion, the connectomics project will go some ways to answering the question of how the brain works, and will definitely have to be expanded to include functions that glial cells may be contributing to the neurons.

Additionally, knowing exactly how a large network is connected may not tell us what is actually being done by this network. It's sort of like having an electrical circuit, and knowing exactly which components connect to which, but not knowing what each component does.

So yeah, the connectomics project will answer a lot of questions, but will probably bring up more. =)

I agree with Xuan, work at a lab, even volunteer if you have to! Just by being interested it's likely that you'll happen upon opportunities -- take them! If you have time.

Glia are super interesting and almost completely ignored by the theoretical community, but I think that's about to change. This paper, for example, attempts to model this. I think that including these kinds of interactions in our models is going to be increasingly important over the next while -- you're studying glia at a great time!

I don't have much to say about connectomics. It sounds cool, but I share your concern with it not capturing a lot of important details. It's figuring out some stuff though, so connectomics people, keep on keepin' on.

A handful of small questions for you. Have you, or will you, consider the possibility of the ethical implications that creating a human-like AI may have?

For example, you mention that this brain has human-like tendencies in some of its behaviours. Are those behaviours unanticipated? And if so, when your type of brain becomes more complex, would you expect there to be more human-like unintended behaviours and patterns of thought?

At which point do you think you should consider a model brain an AI entity and not just a program? And even if an AI brain is not as complex as a human's, does it deserve any kind of ethical treatment in your view? In the biological sciences there are ethical standards for the handling of any kind of vertebrate organism, including fish, even though there is still active debate over whether fish can feel pain or fear, and whether we should care if they do.

Do people in the AI community actively discuss how we should view, treat and experiment on a human-like intelligences once they've been created?

(Terry says:) These discussions are starting to happen more and more, and I do think there will, eventually, be a point where this will be an important practical question. That said, I think it's a long way off. There aren't even any good theories yet about the more basic emotional parts of the brain, so they're not included at all in our model.

Electrophysiologist (and bad computational modeler) here. Something I've never gotten about large scale non-biophysical (i.e. not Hodgkin-Huxley) brain models is: what is the point? I can see the point of one built to be as biologically realistic as possible, i.e. once we think we know all of the cellular properties of the brain, if we put together a biologically accurate model and it doesn't recapitulate brain function, then we plainly don't know everything.

However, with your simple spiking cells, put together in a minimalistic fashion... well, if it doesn't work, you just fiddle with some connection weightings, or numbers, or spiking properties, and kinda hope that it works. That is to say: your properties are weakly constrained.

If you are simply saying, "Oh we're only minimally interested in answering fundamental neuroscience questions, and are more interested in new ways of solving problems computationally" then I get you. But if that is not the case, what are you trying to learn about the brain by doing this?

(Xuan says):
In order to understand the brain (or any complex system), there are multiple ways of approaching the problem.

There is the bottom-up approach - this is similar to the approach used by the blue brain project - build as detailed and as complex a model as possible and hope something meaningful emerges.

There is the top-down approach - this is the approach used by philosophers and psychologists. These models are usually high-level abstractions of behavioural data.

Then there are approaches that come in from the middle. I.e. everything else in between.

You could say that our properties are "weakly constrained", but all of the neuron properties are within the ranges found in a real brain. The main question we were trying to answer was "can we use what we understand functionally about how the brain does things to construct a model that does these things?"

It's similar to understanding how a car works. You can either:

1) Replicate it in as much detail as possible and hope it works, or

2) Attempt to understand how each part of the car works, and what function each part has, and then construct your own version of it. The thing you construct may not be a 100% accurate facsimile, but it does tell us about our understanding of how a car works.
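
For reference, the "simple spiking cells" at issue are leaky integrate-and-fire (LIF) neurons. A minimal sketch of one, with illustrative parameters of my own choosing (not Spaun's actual values): the membrane voltage leaks toward the input current, fires a spike on crossing threshold, then resets and sits out a refractory period:

```python
def simulate_lif(current, t_total=1.0, dt=0.001,
                 tau_rc=0.02, tau_ref=0.002, v_thresh=1.0):
    """Leaky integrate-and-fire neuron driven by a constant input current.

    Returns the number of spikes fired in t_total seconds.
    """
    v, refractory, spikes = 0.0, 0.0, 0
    for _ in range(int(t_total / dt)):
        if refractory > 0:
            refractory -= dt  # still recovering from the last spike
            continue
        # Membrane voltage leaks toward the input current (Euler step).
        v += dt / tau_rc * (current - v)
        if v >= v_thresh:
            spikes += 1
            v = 0.0
            refractory = tau_ref
        v = max(v, 0.0)
    return spikes

low, high = simulate_lif(1.5), simulate_lif(3.0)
# A sub-threshold current never fires; stronger input -> higher firing rate.
```

Compared to a Hodgkin-Huxley model, there are no ion-channel dynamics here at all, which is exactly the trade-off the question is about: far less biophysical detail, far cheaper to simulate in the millions.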

First of all, huge fan of your work. It's an amazing thing you guys have accomplished!
Now for my question: I was just reading about the blue brain project, which has a goal to fully simulate a human brain by 2020. What are your thoughts on that project?

(Travis says:) The Blue Brain project really has a different goal than our work, I think. Their goal (as I understand it) is to simulate, as realistically as possible, the number of neurons in a human brain.
What we're more concerned with here is how to hook up those neurons to each other such that we get interesting function out of our models, so we're very concerned with the overall system architecture and structure. And that's how we can get out these really neat results with only 2.5 million neurons (which is just a fraction of the 10 billion a human brain has). We are definitely interested in scaling up the number of neurons we can simulate, but it's secondary to producing function.

(Travis says:) Most of the people working here in the lab have an engineering or computer science background, which comes in very handy when we're programming all our models and simulations, so that's definitely up on the list of requirements. Previous experience in modelling neural systems or machine learning is also a plus! Our current post-docs are Terry Stewart and James Bergstra, I would recommend checking out their pages! http://ctnsrv.uwaterloo.ca/cnrglab/user/19 and http://www.eng.uwaterloo.ca/~jbergstr/

(Travis says:) We were very curious to see what would happen; most of the press coverage hasn't been too far off base (from what I've read, which is not all of it!). I think that the IQ test it's referring to is the Raven's Progressive Matrices task (http://en.wikipedia.org/wiki/Raven's_Progressive_Matrices), which SPAUN definitely is capable of passing. But the fun thing about headlines is that they necessarily cut out the details :D

I am really interested in neuroscience as a career path. However, I am currently doing Nanotech. Do you have any recommendations for an efficient career/education path to start working with stuff like SPAUN? (I would be more interested in creating a piece of hardware to mimics the brain)

(Terry says:) Great! The nice thing with this field is that it's currently pretty wide open -- there's lots of possible directions to go. The core simulator that we use is open-source [http://nengo.ca], with lots of online tutorials, so there's at least some possibility for self-education and getting familiar with the types of methods that we think are the most promising.

As for a career/education path, this sort of work tends to be called "theoretical neuroscience" or "systems neuroscience", so take a look at programs with those sorts of names.

Seriously, if you want to do it, just do it. If you're still in your undergrad, find a lab around you that's doing interesting work and see if you can get involved in it. It might mean you have to volunteer for a while, and work long long hours for little immediate reward, but things like that set you apart when you go to apply to neuroscience labs for grad school.

(Travis says:) Here is a sample! http://nengo.ca/build-a-brain
The book works through the principles that we use and how to apply these yourself, with a bunch of tutorials (that I believe are also online at the nengo page), and then walks through the details of the SPAUN model. We're hoping that it encourages people to start exploring these types of models on their own!

(Travis says:) Dr. Eliasmith's book 'The Neural Engineering Framework' is definitely on all our reading lists, but we take a course with him to get through it. And it's very painful. Aside from that, as more of an introductory book I'm a fan of this bad boy by Kandel http://www.amazon.com/Search-Memory-Emergence-Science-Mind/dp/0393329372 It's an easy read / intro to neuroscience.
Most of what we do here is reading papers and then coding up the ideas / models that we develop; as things become more open access (or if you have access to a campus internet connection), you can definitely do these things on your own as well to get into the field!
For a more specific reading list, though, I would recommend checking out our lab page, looking through our members' list, and then, if someone's work interests you, sending them an email! They should be able to provide a nice set of papers related to their area. :)