
I have written on this subject on my website, but perhaps it is time for an update. The idea of taking a human brain, preparing it properly, and slicing it into thin sections that can be scanned into a computer system is not new. Several people have discussed the idea, most notably inventor/scientist Ray Kurzweil. The problems can be categorized into five groups: storage, computational power, tissue preservation and preparation, parallel scanning techniques, and basic neurophysiology. I will treat the easiest first.

Storage: It has been estimated that storing the human brain's connectivity would take 15 petabytes. To be safe, we would likely need about 200 petabytes of working storage to hold intermediate results while analyzing the connectivity in the layered samples and recreating the neural network. Today you can get two-terabyte drives. An eight-foot rack can easily provide 200 terabytes, so a good-sized room could give us the 15 petabytes (75 racks). From this it is easy to see that we are only a couple of drive generations away from the storage capacity needed. For a truly redundant and reliable system, perhaps we are three generations away; certainly within the next 10 years.
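The rack arithmetic above can be checked with a quick back-of-the-envelope calculation (the 15 PB, 200 PB, and 200 TB-per-rack figures are the estimates quoted in this post, not measured values):

```python
# Back-of-the-envelope storage estimate from the figures quoted above.
PB = 10**15
TB = 10**12

connectome = 15 * PB        # estimated final connectome size
working = 200 * PB          # working storage for intermediate results
rack_capacity = 200 * TB    # one eight-foot rack of 2 TB drives

racks_final = connectome // rack_capacity    # racks for the 15 PB result
racks_working = working // rack_capacity     # racks for the 200 PB working set

print(racks_final)    # 75 racks, matching the figure in the text
print(racks_working)  # 1000 racks for the full working set
```

Note that the 200 PB working set is the harder requirement: it needs roughly 1,000 such racks, which is why a few more generations of drive density help so much.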

The computational capability required has been estimated at 120 petaflops (a petaflop is already a rate: 10^15 floating-point operations per second). The Tianhe-1A computer in Tianjin, China runs at 2.5 petaflops and has a 2-petabyte disk storage system. Again, humanity appears to be less than 10 years away from realizing the computational capability necessary for reaching our goal.
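A rough sketch of that timeline: going from 2.5 to 120 petaflops is about a 48x gap, and if supercomputer performance keeps doubling every 1.5 to 2 years (an assumption, not a measurement), the gap closes in roughly 8 to 11 years:

```python
import math

# Timeline sketch: years to close the gap from 2.5 petaflops (Tianhe-1A)
# to the estimated 120 petaflops. Doubling times are assumed, not measured.
target = 120.0   # petaflops needed (estimate from the text)
current = 2.5    # petaflops (Tianhe-1A)

doublings = math.log2(target / current)   # ~5.6 doublings needed

for years_per_doubling in (1.5, 2.0):
    print(round(doublings * years_per_doubling, 1))  # ~8.4 and ~11.2 years
```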

Tissue preservation is progressing much more slowly, since it is not getting the kind of R&D attention that computer technology enjoys. Recent innovations in serial block-face scanning electron microscopes with built-in microtomes (Denk and Horstmann) provide section thicknesses down to 50 nm with layer alignment errors of less than 10 nm. That is satisfactory for human brain tissue, but the block face is only about 10 um by 10 um. It would have to be scaled up by a factor of 10,000 in both the X and Y dimensions, and multiple electron beams would be needed for speed. The samples were prepared in epoxy at rather high curing temperatures (60-70 C). I would think a gel near freezing would better preserve bulk brain tissue and be easier on the microtome. They were also having trouble with debris on the surface. My guess is that we are at least 20 years away from having the bulk tissue preservation techniques and the SEM equipment needed for the brain downloading task; perhaps more, if money doesn't flow in this direction.
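To see where the 10,000x figure comes from, assume (a hypothetical round number consistent with that factor) that a whole brain must fit in roughly a 10 cm block:

```python
# Scale-up sketch for the serial block-face SEM figures quoted above.
# The 10 cm brain dimension is an illustrative assumption.
block_face = 10e-6        # current block face: 10 um per side
brain_side = 0.10         # assumed block size for a whole brain: 10 cm
slice_thickness = 50e-9   # 50 nm sections

scale_xy = brain_side / block_face     # enlargement needed per axis
slices = brain_side / slice_thickness  # sections through the whole block

print(round(scale_xy))   # 10000, matching the factor in the text
print(round(slices))     # 2,000,000 sections at 50 nm
```

Two million 50 nm sections is what makes parallel (multi-beam) scanning a necessity rather than a convenience.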

Now we come to the neurophysiology issues. These are very difficult to predict at this time, since we don't know what we don't know. We have recently discovered dozens of neurotransmitters, including, of all things, carbon monoxide. Some neurotransmitters (glutamate, aspartate, and glycine) act at the synaptic junction, where they are released by vesicles, while others (nitric oxide and carbon monoxide) diffuse into the surrounding tissue and have a more global effect. Once all the neurotransmitters and receptors have been characterized and all the neural types in the brain are thoroughly understood, the whole brain can be emulated without understanding the functions of the networks themselves. It is like running the same program on a different computer: the neural networks are the program, and instead of being compiled into biological neurons it is compiled into emulated neurons that provide the same functionality. I think an upper limit of 30 years is conservative, since worldwide interest in this area seems to be awakening. Better tools are also now available, including supercomputers to emulate bits and pieces of various brain subsystems.
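The "emulated neuron" idea can be illustrated with the simplest standard neuron model, a leaky integrate-and-fire unit: it reproduces spiking behavior from its inputs without any knowledge of what the larger network computes. All parameters below are illustrative, not measured biological values:

```python
# A minimal leaky integrate-and-fire neuron: a toy instance of
# "compiling into emulated neurons." Parameters are illustrative only.
def lif_spikes(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Return spike times (ms) for a sequence of input currents."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leaky integration of input
        if v >= threshold:            # threshold crossing -> spike
            spikes.append(t * dt)
            v = 0.0                   # reset membrane after spiking
    return spikes

# A constant input drives regular, periodic firing:
print(lif_spikes([0.15] * 50))
```

Whole brain emulation, in this framing, is the claim that a sufficiently detailed catalog of such models, one per neuron type, plus the scanned connectivity, is enough; the network's function comes along for free.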

Don’t volunteer to be first! I suspect there will be many attempts before anyone is successfully resurrected.