Month: March 2007

The Council for Science and Technology report, which criticised the UK government’s record in getting research going on the health and environmental effects of nanoparticles, has led to quite a lot of media coverage, of varying degrees of inaccuracy. My favourite is this story in The Register: UK government slated by own boffins on nanotech policy. Taking its cue from the Science Minister’s line – which seemed to be, in effect, that we gave the scientists lots of money but they chose to spend it doing exciting new science rather than getting the toxicology right – The Register’s Lewis Page sagely commented:

“This has been true ever since the first mad professor set up his dungeon laboratory, of course. Any scientist worth his salt would rather work out how to make dead flesh live again than write up the safety case for doing it. Even so, it’s nice to see boffins finally admitting this.”

This is irresponsible and gratuitously stereotyping journalism, of course, but I thought it was funny.

The thrust of the report is unequivocal – the government promised an extensive program of research into the toxicology and health and environmental effects of nanomaterials, and this research has not happened. The reason for this is equally clear – money wasn’t set aside to fund it. Instead, it was decided to rely on scientists coming forward with funding proposals to be judged, in competition with proposals in other areas of science, by peer review. That this approach would prove to be completely inadequate was widely predicted at the time, and those predictions turned out to be entirely correct. As I wrote myself a year ago here: “This seems to me to be a category error – the science we need to underpin regulation isn’t necessarily good science as defined by peer review, and if the capacity to do the research isn’t there one can’t just expect it to appear spontaneously.”

The Science Minister, Malcolm Wicks, was questioned about the report on this morning’s BBC Radio 4 Today program. In his interview – downloadable here as an MP3 file (the interview is the last item) – he accepts the basic thrust of the criticism, but blames the research councils (in particular the Medical Research Council) for not being proactive, and scientists for not coming forward with the proposals. This, of course, is precisely the point. To be fair to him, he’s taking the flak for decisions made by his predecessor. A full, formal government response to the CST report will presumably follow.

One of the problems with events which aim to gauge the views of the public about emerging issues like nanotechnology is that it isn’t always easy to provide information in the right format, or to account for the fact that much of the publicly available information may be contested and controversial in ways that are difficult to appreciate unless one is deeply immersed in the subject. It’s also very difficult for anybody – lay person or expert – to judge what impact any particular development in science or technology might actually have on everyday life. Science Horizons is a public engagement project that’s trying to deal with this problem. The project is funded by the UK government; its aim is to start a public discussion about the possible impacts of future technological changes by providing a series of stories about possible futures which focus on everyday dilemmas that people may face.

The stories, which are available in interactive form on the Science Horizons website, focus on issues like human enhancement, privacy in a world with universal surveillance, and problems of energy supply. These, of course, will be very familiar to most readers of this blog. The scenarios are very simple, but they draw on the large amount of work that’s been done for the UK government recently by its new Horizon Scanning Centre, which reports to the Government’s Chief Scientist, Sir David King. This centre published its first outputs earlier this year: a Sigma Scan concentrating on broader social, economic, environmental and political trends, and a Delta Scan concentrating on likely developments in science and technology.

The idea is that the results of the public engagement work based on the Science Horizons material will inform the work of the Horizon Scanning centre as it advises government about the policy implications of these developments.

If you were able to make a nanoscale submarine to fulfill the classic “Fantastic Voyage” scenario of swimming through the bloodstream, how would you power and steer it? As readers of my book “Soft Machines” will know, our intuitions are very unreliable guides to the environment in the wet nanoscale world, and the design principles that would be appropriate on the human scale simply won’t work on the nanoscale. Swimming is a good example; on small scales water behaves not as the free-flowing liquid we are used to on the human scale, but as a medium in which viscosity dominates. To get a feel for what it would be like to try to swim on the nanoscale, one has to imagine trying to swim in the most viscous molasses. In my group we’ve been doing some experiments to demonstrate the realisation of one scheme to make a nanoscale object swim, the results of which are summarised in this preprint (PDF), “Self-motile colloidal particles: from directed propulsion to random walk”.
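The “swimming in molasses” intuition can be made quantitative with the Reynolds number, the ratio of inertial to viscous forces in a flow. As a minimal sketch – the particle size and speed below are illustrative round numbers, not figures from the preprint:

```python
# Reynolds number Re = rho * v * L / eta for a swimmer of size L moving
# at speed v through a fluid of density rho and viscosity eta.
rho = 1000.0   # density of water, kg/m^3
eta = 1e-3     # viscosity of water, Pa*s

# A micron-sized particle moving at a few body lengths per second:
L_micro = 1e-6       # 1 micron
v_micro = 10e-6      # 10 microns per second
Re_micro = rho * v_micro * L_micro / eta   # ~1e-5

# A human swimmer, for comparison:
L_human = 1.0        # metres
v_human = 1.0        # metres per second
Re_human = rho * v_human * L_human / eta   # ~1e6
```

The eleven orders of magnitude between the two numbers are why human-scale swimming strategies, which rely on inertia, simply don’t carry over to the nanoscale.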

The brilliantly simple idea underlying these experiments was thought up by my colleague and co-author, Ramin Golestanian, together with his fellow theoretical physicists Tannie Liverpool and Armand Ajdari, and was analysed theoretically in a recent paper in Physical Review Letters, “Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products” (abstract here, subscription required for full paper). If one has a particle that has a patch of catalyst on one side, and that catalyst drives a reaction that produces more product molecules than it consumes in fuel molecules, then the particle will end up in a solution that is more concentrated on one side than the other. This leads to an osmotic pressure gradient, which in turn results in a force that pushes the particle along.

Jon Howse, a postdoc working in my group, has made an experimental system that realises this theoretical scheme. He coated micron-sized polystyrene particles, on one side only, with platinum. This catalyses the reaction by which hydrogen peroxide is broken down into water and oxygen. For every two hydrogen peroxide molecules that take part in the reaction, two water molecules and one oxygen molecule result. Using optical microscopy, he tracked the motion of particles in four different situations. In three of these – uncoated control particles in both water and hydrogen peroxide solution, and coated particles in water – he found identical results: the expected Brownian motion of a micron-sized particle. But when the coated particles were put in hydrogen peroxide, they clearly moved further and faster.

Detailed analysis of the particle motion showed that, in addition to the Brownian motion that all micron-sized particles are subject to, the propelled particles moved with a velocity that depended on the concentration of the hydrogen peroxide fuel – the more fuel present, the faster they went. But Brownian motion is still present, and it has an important effect even on the fastest propelled particles. Brownian motion makes particles rotate randomly as well as jiggle around, so the propelled particles don’t go in straight lines. In fact, at longer times the effect of the random rotation is to make the particles revert to a random walk, albeit one in which the step length is essentially the propulsion velocity multiplied by the characteristic time for rotational diffusion. This kind of motion bears an interesting analogy to the way bacteria swim. Bacteria, if they are trying to swim towards food, don’t simply swing the rudder round and propel themselves directly towards it. Like our particles, they actually perform a kind of random walk in which stretches of straight-line motion are interrupted by episodes in which they change direction – this has been called “run and tumble” motion. Counterintuitively, it seems that this is a better strategy for getting around in the nanoscale world, in which the random jostling of Brownian motion is unavoidable. What the bacteria do is change the length of time for which they move in a straight line according to whether they are getting closer to or further away from their food source. If we could do the same trick in our synthetic system – changing the length of the run time – then that would suggest a strategy for steering our nanoscale submarines, as well as propelling them.
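The crossover from directed propulsion to an enhanced random walk can be sketched in a few lines of simulation. This is a minimal two-dimensional “active Brownian particle” model, with illustrative parameter values rather than the measured ones from the paper: the particle is propelled at constant speed along an orientation that itself diffuses rotationally, on top of ordinary translational Brownian motion.

```python
import numpy as np

def simulate(v, D_t=0.2, D_r=1.0, dt=1e-3, steps=200_000, seed=1):
    """Trajectory of a 2D active Brownian particle: propulsion at speed v
    along a rotationally diffusing orientation, plus translational noise.
    Parameter values are illustrative, not those of the experiment."""
    rng = np.random.default_rng(seed)
    # The orientation angle performs a 1D random walk (rotational diffusion).
    theta = np.cumsum(np.sqrt(2 * D_r * dt) * rng.standard_normal(steps))
    drift = v * dt * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    noise = np.sqrt(2 * D_t * dt) * rng.standard_normal((steps, 2))
    return np.vstack([np.zeros(2), np.cumsum(drift + noise, axis=0)])

def msd(pos, lag):
    """Time-averaged mean-squared displacement at a given lag (in steps)."""
    d = pos[lag:] - pos[:-lag]
    return float(np.mean(np.sum(d * d, axis=1)))

passive = simulate(v=0.0)   # plain Brownian particle
active = simulate(v=5.0)    # propelled particle, same noise realisation

lag = 5_000   # 5 s of simulated time, well beyond the ~1 s rotation time
ratio = msd(active, lag) / msd(passive, lag)
```

At times long compared with the rotational diffusion time 1/D_r the propelled trajectory is diffusive again, but with an enhanced effective coefficient – D_t + v²/(2 D_r) in two dimensions – so the ratio above comes out far above one, even though both particles are doing random walks at that scale.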

There can be few more potent ideas in futurology and science fiction than that of the brain chip – a direct interface between the biological information processing systems of the brain and nervous system and the artificial information processing systems of microprocessors and silicon electronics. It’s an idea that underlies science fiction notions of “jacking in” to cyberspace, or uploading one’s brain, but it also provides hope to the severely disabled that lost functions and senses might be restored. It’s one of the central notions in the idea of human enhancement. Perhaps through a brain chip one might increase one’s cognitive power in some way, or have direct access to massive banks of data. Because of the potency of the idea, even the crudest scientific developments tend to be reported in the most breathless terms. Stripping away some of the wishful thinking, what are the real prospects for this kind of technology?

The basic operations of the nervous system are pretty well understood, even if the complexities of higher-level information processing remain obscure, and the problem of consciousness is a truly deep mystery. The basic units of the nervous system are the highly specialised, excitable cells called neurons. Information is carried long distances by the propagation of pulses of voltage along long extensions of the cell called axons, and transferred between different neurons at junctions called synapses. Although the pulses carrying information are electrical in character, they are very different from the electrical signals carried in wires or through semiconductor devices. They arise from the fact that the contents of the cell are kept out of equilibrium with their surroundings by pumps which selectively transport charged ions across the cell membrane, resulting in a voltage across the membrane. This voltage can relax when channels in the membrane, which are triggered by changes in voltage, open up. The information-carrying impulse is actually a shock wave of reduced membrane potential, enabled by transport of ions through the membrane.
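The size of the membrane voltage those ion pumps maintain can be estimated with the Nernst equation, which gives the equilibrium potential for a single ion species from its concentrations on either side of the membrane. A quick sketch, using typical textbook concentrations (the exact figures vary from cell type to cell type):

```python
from math import log

# Nernst potential E = (R*T / (z*F)) * ln(c_out / c_in) for one ion species.
R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # body temperature, K

def nernst(c_out, c_in, z=1):
    """Equilibrium potential in volts for an ion of valence z."""
    return (R * T) / (z * F) * log(c_out / c_in)

# Illustrative mammalian concentrations in mM (textbook ballpark values):
E_K = nernst(c_out=5.0, c_in=140.0)     # potassium: roughly -89 mV
E_Na = nernst(c_out=145.0, c_in=12.0)   # sodium: roughly +67 mV
```

The roughly -70 mV resting potential of a real neuron sits between these two equilibrium values, because the resting membrane is far more permeable to potassium than to sodium; the nerve impulse is a transient swing towards the sodium potential as the voltage-triggered channels open.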

To find out what is going on inside a neuron, one needs to be able to measure the electrochemical potential across the membrane. Classically, this is done by inserting an electrochemical electrode into the interior of the nerve cell. The original work, carried out by Hodgkin, Huxley and others in the 1950s, used squid neurons, because they are particularly large and easy to handle. So, in principle one could get a readout of the state of a human brain by measuring the potential at a representative series of points in each of its neurons. The problem, of course, is that there are a phenomenal number of neurons to be studied – around 20 billion in a human brain. Current technology has managed to miniaturise electrodes and pack them in quite dense arrays, allowing the simultaneous study of many neurons. A recent paper (Custom-designed high-density conformal planar multielectrode arrays for brain slice electrophysiology, PDF) from Ted Berger’s group at the University of Southern California shows a good example of the state of the art – this has electrodes with 28 µm diameter, separated by 50 µm, in an array of 64 electrodes. These electrodes can both read the state of the neuron, and stimulate it. This kind of electrode array forms the basis of brain interfaces that are close to clinical trials – for example the BrainGate product.
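A back-of-the-envelope calculation, using only the numbers quoted above, shows the gulf between this state of the art and whole-brain readout. The one-neuron-per-electrode assumption is, of course, wildly optimistic:

```python
# Scale of the readout problem, using the figures quoted in the text.
neurons_in_brain = 20e9        # ~20 billion neurons
electrodes_per_array = 64      # the USC multielectrode array
pitch_mm = 0.050               # 50 micron electrode spacing

# Even granting (very optimistically) one neuron per electrode:
arrays_needed = neurons_in_brain / electrodes_per_array

# Areal density of such an array: one electrode per (50 micron)^2 cell.
electrodes_per_mm2 = 1.0 / pitch_mm**2   # 400 electrodes per mm^2
```

Something over three hundred million such arrays, in other words, before one even starts to worry about how to wire them up or where in the brain to put them.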

In a rather different class from these direct, but invasive probes of nervous system activity at the single neuron level, there are some powerful, but indirect measures of brain activity, such as functional magnetic resonance imaging or positron emission tomography. These don’t directly measure the electrical activity of neurons, either individually or in groups; instead they rely on the fact that thinking is hard work (literally) and locally raises the rate of metabolism. Functional MRI and PET allow one to localise nervous activity to within a few cubic millimeters, which is hugely revealing in terms of identifying which parts of the brain are involved in which kind of mental activity, but remains a long way away from the goal of unpicking the brain’s activity at the level of neurons.
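To put that resolution gap in numbers: a typical fMRI voxel is a couple of millimetres on a side, and cortex packs very roughly a hundred thousand neurons into each cubic millimetre – an order-of-magnitude textbook figure, not a number from the papers discussed here:

```python
# How many neurons does one fMRI voxel average over? Orders of magnitude
# only; the neuron density is a rough textbook assumption.
voxel_side_mm = 2.0       # typical fMRI voxel edge length
neurons_per_mm3 = 1e5     # rough cortical neuron density (assumption)

neurons_per_voxel = voxel_side_mm**3 * neurons_per_mm3   # several hundred thousand
```

So each voxel in a functional image lumps together the activity of something like a million neurons – hence “indirect” in the paragraph above.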

There is another approach that does probe activity at the single neuron level, but doesn’t feature the invasive procedure of inserting an electrode into the nerve itself. These are the neuron-silicon transistors developed in particular by Peter Fromherz at the Max Planck Institute for Biochemistry. These really are nerve chips, in that there is a direct interface between neurons and silicon microelectronics of the sort that can be highly miniaturised and integrated. On the other hand, these methods are currently restricted to operate in two dimensions, and require careful control of the growing medium that seems to rule out, or at least present big problems for, in-vivo use.

Fromherz’s group have demonstrated two types of hybrid silicon/neuron circuits (see, for example, this review “Electrical Interfacing of Nerve Cells and Semiconductor Chips”, abstract, subscription required for full article). One circuit is a prototype for a neural prosthesis – an input from a neuron is read by the silicon electronics, which does some information processing and then outputs a signal to another neuron. Another, inverse, circuit is a prototype of a neural memory on a chip. Here there’s an input from silicon to a neuron, which is connected to another neuron by a synapse. This second neuron makes its output to silicon. This allows one to use the basic mechanism of neural memory – the fact that the strength of the connection at the synapse can be modified by the type of signals it has transmitted in the past – in conjunction with silicon electronics.

This is all very exciting, but Fromherz cautiously writes: “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.” Among the practical problems are the fact that it seems difficult to extend the method into in-vivo applications, it is restricted to two dimensions, and the spatial resolution is still quite large.

Pushing down to smaller sizes is, of course, the province of nanotechnology, and there are a couple of interesting and suggestive recent papers which suggest directions that this might go in the future.

Charles Lieber at Harvard has taken the basic idea of the neuron gated field effect transistor, and executed it using FETs made from silicon nanowires. A paper published last year in Science – Detection, Stimulation, and Inhibition of Neuronal Signals with High-Density Nanowire Transistor Arrays (abstract, subscription needed for full article) – demonstrated that this method permits the excitation and detection of signals from a single neuron with a resolution of 20 nm. This is enough to follow the progress of a nerve impulse along an axon, giving a picture of what’s going on inside a living neuron with unprecedented resolution. But it’s still restricted to systems in two dimensions, and it only works when one has cultured the neurons one is studying.

Is there any prospect, then, of mapping out in a non-invasive way the activity of a living brain at the level of single neurons? This still looks a long way off. A paper from the group of Rodolfo Llinas at the NYU School of Medicine makes an ambitious proposal. The paper – Neuro-vascular central nervous recording/stimulating system: Using nanotechnology probes (Journal of Nanoparticle Research (2005) 7: 111–127, subscription only) – points out that if one could detect neural activity using probes within the capillaries that supply oxygen and nutrients to the brain’s neurons, one would be able to reach right into the brain with minimal disturbance. They have demonstrated the principle in-vitro using a 0.6 µm platinum electrode inserted into one of the capillaries supplying the neurons in the spinal cord. Their proposal is to further miniaturise the probe using 200 nm diameter polymer nanowires, and they further suggest making the probe steerable using electrically stimulated shape changes – “We are developing a steerable form of the conducting polymer nanowires. This would allow us to steer the nanowire-probe selectively into desired blood vessels, thus creating the first true steerable nano-endoscope.” Of course, even one steerable nano-endoscope is still a long way from sampling a significant fraction of the 25 km of capillaries that service the brain.

So, in some senses the brain chip is already with us. But there’s a continuum of complexity and sophistication in such devices, and we’re still a long way from the science fiction vision of brain downloading. In the sense of creating an interface between the brain and the world, that is clearly possible now and has in some form been realised. Hybrid structures which combine the information processing capabilities of silicon electronics and nerve cells cultured outside the body are very close. But a full, two-way integration of the brain and artificial information processing systems remains a long way off.