Month: January 2009

This entry isn’t really about nanotechnology at all; instead it’s a ramble around some mathematics that I find interesting, that suddenly seems to have become all too relevant in the financial crisis we find ourselves in. I don’t claim great expertise in finance, so my apologies in advance for any inaccuracies.

Brownian motion – the continuous random jiggling of nanoscale objects and structures that’s a manifestation of the random nature of heat energy – is a central feature of the nanoscale world, and much of my writing about nanotechnology revolves around how we should do nanoscale engineering in a way that exploits Brownian motion, in the way biology does. In this weekend’s magazine reading, I was struck to see some of the familiar concepts from the mathematics of Brownian motion showing up, not in Nature, but in an article in The Economist’s special section on the future of finance – In Plato’s Cave, which explains how much of the financial mess we find ourselves in derives from the misapplication of these ideas. Here’s my attempt to explain, as simply as possible, the connection.

The motion of a particle undergoing Brownian motion can be described as a random walk, with a succession of steps in random directions. For every step taken in one direction, there’s an equal probability that the particle will go the same distance in the opposite direction, yet on average a particle doing a random walk does make some progress – the average distance gone grows as the square root of the number of steps.

To see this for a simple situation, imagine that the particle is moving on a line, in one dimension, and either takes a step of one unit to the right (+1) or one unit to the left (-1), so we can track its progress just by writing down all the steps and adding them up, like this, for example: (+1 -1 +1 … -1). After N steps, on average the displacement (i.e. the distance gone, including a sign to indicate the direction) will be zero, but the average magnitude of the distance isn’t zero. To see this, we just look at the square root of the average value of the square of the displacement (since squaring the displacement takes away any negative signs). So we need to expand a product that looks something like (+1 -1 +1 … -1) x (+1 -1 +1 … -1). The first term of the first bracket times the first term of the second bracket is always +1 (since we either have +1 x +1 or -1 x -1), and the same is true for all the products of terms in the same position in both brackets. There are N of these, so this part of the product adds up to N. All the other terms in the expansion are one of (+1 x +1), (+1 x -1), (-1 x +1), (-1 x -1), and if the successive steps in the walk really are uncorrelated with each other these occur with equal probability, so that on average adding all these up gives us zero. So we find that the mean squared distance gone in N steps is N. Taking the square root of this to get a measure of the average distance gone in N steps, we find this (root mean squared) distance is the square root of N.
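This √N behaviour is easy to check numerically. Here’s a minimal simulation sketch in plain Python (the step and walker counts are just illustrative choices): it averages the squared displacement over many independent ±1 walks and compares the result with N.

```python
import random

random.seed(1)

N_STEPS = 1000   # steps per walk (illustrative choice)
N_WALKS = 5000   # independent walks to average over

# Average the squared displacement over many +1/-1 random walks.
total_sq = 0
for _ in range(N_WALKS):
    x = sum(random.choice((-1, 1)) for _ in range(N_STEPS))
    total_sq += x * x

mean_sq = total_sq / N_WALKS   # theory: equals N_STEPS
rms = mean_sq ** 0.5           # theory: sqrt(N_STEPS), about 31.6 here
```

With a thousand steps the root mean squared distance comes out close to √1000 ≈ 31.6, even though the average displacement itself stays near zero.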

The connection of these arguments to financial markets is simple. According to the efficient market hypothesis, at any given time all the information relevant to the price of some asset, like a share, is already implicit in its price. This implies that the movement of the price with time is essentially a random walk. So, if you need to calculate what a fair value is for, say, an option to buy this share in a year’s time, you can do this equipped with statistical arguments about the likely movement of a random walk, of the kind I’ve just outlined. A smartened-up version of the theory of random walks I’ve just explained is the basis of the Black-Scholes model for pricing options, which is what made the huge expansion of trading of complex financial derivatives possible – as the Economist article puts it “The Black-Scholes options-pricing model was more than a piece of geeky mathematics. It was a manifesto, part of a revolution that put an end to the anti-intellectualism of American finance and transformed financial markets from bull rings into today’s quantitative powerhouses… The new model showed how to work out an option price from the known price-behaviour of a share and a bond. … . Confidence in pricing gave buyers and sellers the courage to pile into derivatives. The better that real prices correlate with the unknown option price, the more confidently you can take on any level of risk.”
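To make the connection concrete, here is a sketch of the pricing idea – not the machinery banks actually use. Under the standard Black-Scholes assumptions the share price follows a (geometric) random walk, so a European call option can be priced either by the closed-form Black-Scholes formula or by simulating many terminal share prices and averaging the discounted payoff. All the parameter values below are hypothetical, chosen only for illustration.

```python
import math
import random

random.seed(2)

# Illustrative, hypothetical parameters: spot price, strike,
# risk-free rate, volatility, time to expiry (years).
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Closed-form Black-Scholes price of a European call.
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Monte Carlo: simulate many random-walk endpoints for the share
# price and average the discounted option payoff.
n = 200_000
payoffs = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    payoffs += max(ST - K, 0.0)
mc_price = math.exp(-r * T) * payoffs / n
```

The two estimates agree closely, which is the point: once you trust the random-walk statistics, the “fair” option price falls out of them.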

Surely such a simple model can’t apply to a real market? Of course, we can develop more complex models that lift many of the approximations in the simplest theory, but it turns out that some of the key results of the theory remain. The most important result is the basic √N scaling of the expected movement. For example, my simple derivation assumed all steps are the same size – we know that some days prices rise or fall a lot, sometimes not so much. So what happens if we have a random walk with step sizes that are themselves random? It’s easy to convince oneself that the derivation stays the same, but instead of adding up N occurrences of (-1 x -1) or (+1 x +1) we have N occurrences of (a x a), where the probability that the step size has value a is given by p(a). So we end up with the simple modification that the mean squared distance gone is N times the mean of the square of the step size. This is a fairly simple modification which, crucially, doesn’t affect the √N scaling.
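The same kind of simulation confirms this. In the sketch below (parameters again purely illustrative), each step is drawn from a Gaussian of width s, and the mean squared distance comes out as N times the mean squared step size:

```python
import random

random.seed(3)

N_STEPS, N_WALKS = 500, 4000

# Steps of random size: each drawn from a Gaussian of width s = 2,
# so the mean squared step size <a^2> is s^2 = 4.
s = 2.0
mean_sq_step = s * s

total_sq = 0.0
for _ in range(N_WALKS):
    x = sum(random.gauss(0.0, s) for _ in range(N_STEPS))
    total_sq += x * x
mean_sq_dist = total_sq / N_WALKS
# Theory: mean squared distance = N x <a^2> = 500 x 4 = 2000
```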

But, and this is the big but, there’s a potentially troublesome hidden assumption here, which is that the distribution of step sizes actually has a well defined, well behaved mean squared value. We’d probably guess that the distribution of step sizes looks like a bell shaped curve, centred on zero and getting smaller the further away one gets from the origin. The familiar Gaussian curve fits the bill, and indeed such a curve is characterised by a well defined mean squared value which measures the width of the curve (mathematically, a Gaussian is described by a distribution of step sizes a given by p(a) proportional to exp(-a^2/2s^2), which gives a root mean squared step size of s). Gaussian curves are very common, for reasons described later, so this all looks very straightforward. But one should be aware that not all bell-shaped curves behave so well. Consider a distribution of step sizes a given by p(a) proportional to 1/(a^2+s^2). This curve (which is known in the trade as a Lorentzian) looks bell shaped and is characterised by a width s. But, when we try to find the average value of the square of the step size, we get an answer that diverges – it’s effectively infinite. The problem is that although the probability of getting a very large step goes to zero as the step size gets larger, it doesn’t go to zero very fast. Rather than the chance of a very large jump becoming exponentially small, as happens for a Gaussian, the chance goes to zero as the inverse square of the step size. This apparently minor difference is enough to completely change the character of the random walk. One needs entirely new mathematics to describe this sort of random walk (which is known as a Levy flight) – and in particular one ends up with a different scaling of the distance gone with the number of steps.
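The difference between the two bell curves is easy to see numerically. In this sketch, Gaussian samples give a sample mean square that settles down to s², while Lorentzian (Cauchy) samples, drawn here with the standard inverse-CDF trick, give a sample mean square that is dominated by a handful of enormous jumps and keeps growing as more samples are taken:

```python
import math
import random

random.seed(4)

s = 1.0
n = 100_000

# Gaussian steps: the sample mean square settles near <a^2> = s^2.
gauss_m2 = sum(random.gauss(0.0, s) ** 2 for _ in range(n)) / n

# Lorentzian (Cauchy) steps, p(a) proportional to 1/(a^2 + s^2),
# sampled via the inverse-CDF trick a = s * tan(pi * (u - 1/2)).
cauchy_m2 = sum((s * math.tan(math.pi * (random.random() - 0.5))) ** 2
                for _ in range(n)) / n

# gauss_m2 is close to 1; cauchy_m2 is orders of magnitude larger,
# because the true mean squared step size diverges.
```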

In the jargon, this kind of distribution is known as having a “fat tail”, and it was not factoring in the difference between a fat tailed distribution and a Gaussian or normal distribution that led the banks to so miscalculate their “value at risk”. In the words of the Economist article, the mistake the banks made “was to turn a blind eye to what is known as “tail risk”. Think of the banks’ range of possible daily losses and gains as a distribution. Most of the time you gain a little or lose a little. Occasionally you gain or lose a lot. Very rarely you win or lose a fortune. If you plot these daily movements on a graph, you get the familiar bell-shaped curve of a normal distribution (see chart 4). Typically, a VAR calculation cuts the line at, say, 98% or 99%, and takes that as its measure of extreme losses. However, although the normal distribution closely matches the real world in the middle of the curve, where most of the gains or losses lie, it does not work well at the extreme edges, or “tails”. In markets extreme events are surprisingly common—their tails are “fat”. Benoît Mandelbrot, the mathematician who invented fractal theory, calculated that if the Dow Jones Industrial Average followed a normal distribution, it should have moved by more than 3.4% on 58 days between 1916 and 2003; in fact it did so 1,001 times. It should have moved by more than 4.5% on six days; it did so on 366. It should have moved by more than 7% only once in every 300,000 years; in the 20th century it did so 48 times.”

But why should the experts in the banks have made what seems such an obvious mistake? One possibility goes back to the very reason why the Gaussian, or normal, distribution is so important and seems so ubiquitous. This comes from a wonderful piece of mathematics called the central limit theorem. This says that if some random variable is made up from the combination of many independent variables, even if those variables aren’t themselves taken from a Gaussian distribution, their sum will be Gaussian in the limit of many variables. So, given that market movements are the sum of the effects of lots of different events, the central limit theorem would tell us to expect the size of the total market movement to be distributed according to a Gaussian, even if the individual events were described by a quite different distribution. The central limit theorem has a few escape clauses, though, and perhaps the most important one arises from the way one approaches the limit of large numbers. Roughly speaking, the distribution converges to a Gaussian in the middle first. So it’s very common to find empirical distributions that look Gaussian enough in the middle, but still have fat tails, and this is exactly the point Mandelbrot is quoted as making about the Dow Jones.
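This “converges in the middle first” behaviour shows up in a small experiment. Summing just five exponential random variables (a deliberately skewed, non-Gaussian choice) gives a distribution whose central region already matches the Gaussian prediction well, while the probability of a 3-sigma event remains several times the Gaussian value:

```python
import random

random.seed(5)

K = 5          # number of exponential variables summed per sample
N = 200_000    # samples

# Exp(1) has mean 1 and variance 1, so the sum of K of them
# has mean K and standard deviation sqrt(K).
mean, sd = K * 1.0, K ** 0.5

mid = tail = 0
for _ in range(N):
    x = sum(random.expovariate(1.0) for _ in range(K))
    z = (x - mean) / sd
    if abs(z) < 1.0:
        mid += 1
    if z > 3.0:
        tail += 1

p_mid = mid / N    # already close to the Gaussian value 0.683
p_tail = tail / N  # well above the Gaussian value 0.00135: the tail lags
```

The middle of the distribution looks Gaussian after only five summands; the tail is still fat.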

The Economist article still leaves me puzzled, though, as everything I’ve been describing has been well known for many years. But maybe well known isn’t the same as widely understood. Just like a lottery, the banks were trading the certainty of many regular small payments against a small probability of making a big payout. But, unlike the lottery, they didn’t get the price right, because they underestimated the probability of making a big loss. And now, their loss becomes the loss of the world’s taxpayers.

What do the public think about nanotechnology? This is a question that has worried scientists and policy makers ever since the subject came to prominence. In the UK, as in other countries, we’ve seen a number of attempts to engage with the public around the subject. This article, written for an edited book about public engagement with science more generally in the UK, attempts to summarise the UK’s experience in this area.

From public understanding to public engagement

Nanotechnology emerged as a focus of public interest and concern in the UK in 2003, prompted, not least, by a high profile intervention on the subject from the Prince of Wales. This was an interesting time in the development of thinking about public engagement with science. A consensus about the philosophy underlying the public understanding of science movement, dating back to the Bodmer report (PDF) in 1985, had begun to unravel. This was prompted, on the one hand, by sustained and influential critique of some of the assumptions underlying PUS from social scientists, particularly from the Lancaster school associated with Brian Wynne. On the other hand, the acrimony surrounding the public debates about agricultural biotechnology and the government’s handling of the bovine spongiform encephalopathy outbreak led many to diagnose a crisis of trust between the public and the world of science and technology.

In response to these difficulties, a rather different view of the way scientists and the public should interact gained currency. According to the critique of Wynne and colleagues, the idea of “Public Understanding of Science” was founded on a “deficit model”, which assumed that the key problem in the relationship between the public and science was an ignorance on the part of the public both of the basic scientific facts and of the fundamental process of science, and if these deficits in knowledge were corrected the deficit in trust would disappear. To Wynne, this was both patronizing, in that it disregarded the many forms of expertise possessed by non-scientists, and highly misleading, in that it neglected the possibility that public concerns about new technologies might revolve around perceptions of the weaknesses of the human institutions that proposed to implement them, and not on technical matters at all.

The proposed remedy for the failings of the deficit model was to move away from an emphasis on promoting the public understanding of science to a more reflexive approach to engaging with the public, with an effort to achieve a real dialogue between the public and the scientific community. Coupled with this was a sense that the place to begin this dialogue was upstream in the innovation process, while there was still scope to steer its direction in ways which had broad public support. These ideas were succinctly summarised in a widely-read pamphlet from the think-tank Demos, “See-through science – why public engagement needs to move upstream”.

Enter nanotechnology

In response to the growing media profile of nanotechnology, in 2003 the government commissioned the Royal Society and the Royal Academy of Engineering to carry out a wide-ranging study on nanotechnology and the health and safety, environmental, ethical and social issues that might stem from it. The working group included, in addition to distinguished scientists, a philosopher, a social scientist and a representative of an environmental NGO. The process of producing the report itself involved public engagement, with two in-depth workshops exploring the potential hopes and concerns that members of the public might have about nanotechnology.

The report – “Nanoscience and nanotechnologies: opportunities and uncertainties” – was published in 2004, and amongst its recommendations was a whole-hearted endorsement of the upstream public engagement approach: “a constructive and proactive debate about the future of nanotechnologies should be undertaken now – at a stage when it can inform key decisions about their development and before deeply entrenched or polarised positions appear.”

Following this recommendation, a number of public engagement activities around nanotechnology have taken place in the UK. Two notable examples were Nanojury UK, a citizens’ jury which took place in Halifax in the summer of 2005, and Nanodialogues, a more substantial project which linked four separate engagement exercises carried out in 2006 and 2007.

Nanojury UK was sponsored jointly by the Cambridge University Nanoscience Centre and Greenpeace UK, with the Guardian as a media partner, and Newcastle University’s Policy, Ethics and Life Sciences Research Centre running the sessions. It was carried out in Halifax over eight evening sessions, with six witnesses drawn from academic science, industry and campaigning groups, considering a wide variety of potential applications of nanotechnology. Nanodialogues took a more focused approach; each of its four exercises, which were described as “experiments”, considered a single aspect or application area of nanotechnology. These included a very concrete example of a proposed use for nanotechnology – a scheme to use nanoparticles to remediate polluted groundwater – and the application of nanoscience in the context of a large corporation.

The Nanotechnology Engagement Group provided a wider forum to consider the lessons to be learnt from these and other public engagement exercises both in the UK and abroad; the group reported in the summer of 2007 (the report is available here). This revealed a rather consistent message from public engagement. Broadly speaking, there was considerable excitement from the public about possible beneficial outcomes from nanotechnology, particularly in areas such as renewable energy and medicine. The more general value of such technologies in promoting jobs and economic growth was also recognised.

There were concerns, too. The questions that have been raised about potential safety and toxicity issues associated with some nanoparticles caused disquiet, and there were more general anxieties (probably not wholly specific to nanotechnology) about who controls and regulates new technology.

Reviewing a number of public engagement activities related to nanotechnology also highlighted some practical and conceptual difficulties. There was sometimes a lack of clarity about the purpose and role of public engagement; this leaves space for the cynical view that such exercises are intended, not to have a real influence on genuinely open decisions, but simply to add a gloss of legitimacy to decisions that have already been made. Related to this is the fact that bodies that commission public engagement may lack the institutional capacity and structures to act on its results.

There are some more practical problems associated with the very idea of moving engagement “upstream” – the further the science is away from potential applications, the more difficult it can be to communicate what can be complex issues, whose impact and implications may be subject to considerable disagreement amongst experts.

Connecting public engagement to policy

The big question to be asked about any public engagement exercise is “what difference has it made” – has there been any impact on policy? For this to take place there needs to be careful choice of the subject for the public engagement, as well as commitment and capacity on behalf of the sponsoring body or agency to use the results in a constructive way. A recent example from the Engineering and Physical Science Research Council offers an illuminating case study. Here, a public dialogue on the potential applications of nanotechnology to medicine and healthcare was explicitly coupled to a decision about where to target a research funding initiative, providing valuable insights that had a significant impact on the decision.

The background to this is the development of a new approach to science funding at EPSRC. This is to fund “Grand Challenge” projects, which are large scale, goal-oriented interdisciplinary activities in areas of societal need. As part of the “Nanoscience – engineering through to application” cross council priority area, it was decided to launch a Grand Challenge in the area of applications of nanotechnology to healthcare and medicine. This is a potentially very wide area, so it was felt necessary to narrow the scope of the programme somewhat. The definition of the scope was carried out with the advice of a “Strategic Advisory Team” – an advisory committee with about a dozen experts on nanotechnology, drawn from academia and industry, and including international representation. Inputs to the decision were sought through a wider consultation with academics and potential research “users”, defined here as clinicians and representatives of the pharmaceutical and healthcare industries. This consultation included a “Town Meeting” open to the research and user communities.

This represents a fairly standard approach to soliciting expert opinion for a decision about science funding priorities. In the light of the experience of public engagement in the context of nanotechnology, it would be a natural question to ask whether one should seek public views as well. EPSRC’s Societal Issues Panel – a committee providing high-level advice on the societal and ethical context for the research EPSRC supports – enthusiastically endorsed the proposal that a public engagement exercise on nanotechnology for medicine and healthcare should be commissioned as an explicit part of the consultation leading up to the decision on the scope of the Grand Challenge in nanotechnology for medicine and healthcare.

A public dialogue on nanotechnology for healthcare was accordingly carried out during the Spring of 2008 by BMRB, led by Darren Bhattachary. This took the form of a pair of reconvened workshops in each of four locations – London, Sheffield, Glasgow and Swansea. Each workshop involved 22 lay participants, with care taken to ensure a demographic balance. The workshops were informed by written materials, approved by an expert Steering Committee; there was expert participation in each workshop from both scientists and social scientists. Personnel from the Research Council also attended; this was felt by many participants to be very valuable as a signal of the seriousness with which the organisation took the exercise.

The dialogues produced a number of rich insights that proved very useful in defining the scope of the final call (its report can be found here). In general, there was very strong support for medicine and healthcare as a priority area for the application of nanotechnology, and explicit rejection of an unduly precautionary approach. On the other hand, there were concerns about who benefits from the expenditure of public funds on science, and about issues of risk and the governance of technology. One overarching theme that emerged was a strong preference for new technologies that were felt to empower people to take control of their own health and lives.

One advantage of connecting a public dialogue with a concrete issue of funding priorities is that some very specific potential applications of nanotechnology could be discussed. As a result of the consultation with academics, clinicians and industry representatives, six topics had been identified for consideration. In each case, people at the workshops could identify both positive and negative aspects, but overall some clear preferences emerged. The use of nanotechnology to permit the early diagnosis of disease received strong support, as it was felt that this would provide information that would enable people to make changes to the way they live. The promise of nanotechnology to help treat serious diseases with fewer side effects by more effective targeting of drugs was also received with enthusiasm. On the other hand, the idea of devices that combine the ability to diagnose a condition with the means to treat it, via releasing therapeutic agents, caused some disquiet as being potentially disempowering. Other potential applications, which were less highly prioritised, included the use of nanotechnology to control pathogens, for example through nanostructured surfaces with intrinsic anti-microbial or anti-viral properties; nanostructured materials to help facilitate regenerative medicine; and the use of nanotechnology to help develop new drugs.

It was always anticipated that the results of this public dialogue would be used in two ways. Their most obvious role was as an input to the final decision on the scope of the Grand Challenge call, together with the outcomes of the consultations with the expert communities. It was the nanotechnology Strategic Advisory Team that made the final recommendation about the call’s scope, and in the event their recommendation was that the call should be in the two areas most favoured in the public dialogue – nanotechnology for early diagnosis and nanotechnology for drug delivery. In addition to this immediate impact, there is an expectation that the projects that are funded through the Grand Challenge should be carried out in a way that reflects these findings.

Public engagement in an evolving science policy landscape

The current interest in public engagement takes place at a time when the science policy landscape is undergoing larger changes, both in the UK and elsewhere in the world. We are seeing considerable pressure from governments for publicly funded science to deliver clearer economic and societal benefits. There is a growing emphasis on goal-oriented, intrinsically interdisciplinary science, with an agenda set by a societal and economic context rather than by an academic discipline – “mode II knowledge production”, in the phrase of Gibbons and his co-workers in their book The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. The “linear model” of innovation – in which pure, academic science, unconstrained by any issues of societal or economic context, is held to lead inexorably through applied science and technological development to new products and services and thus increased prosperity – is widely recognised to be simplistic at best, neglecting the many feedbacks and hybridisations at every stage of this process.

These newer conceptions of “technoscience” or “mode II science” lead to problems of their own. If the agenda of science is to be set by the demands of societal needs, it is important to ask who defines those needs. While it is easy to identify the location of expertise for narrowly constrained areas of science defined by well-established disciplinary boundaries, it is much less easy to see who has the expertise to define the technically possible in strongly multidisciplinary projects. And as the societal and economic context of research becomes more important in making decisions about science priorities, one could ask who it is who will subject the social theories of scientists to critical scrutiny. These are all issues which public engagement could be valuable in resolving.

The enthusiasm for involving the public more closely in decisions about science policy may not be universally shared, however. In some parts of the academic community, it may be perceived as an assault on academic autonomy. Indeed, in the current climate, with demands for science to have greater and more immediate economic impact, an insistence on more public involvement might be taken as part of a two-pronged assault on pure science values. There are some who consider public engagement more generally as incompatible with the principles of representative democracy – in this view the Science Minister is responsible for the science budget and he answers to Parliament, not to a small group of people in a citizens’ jury. Representatives of the traditional media might not always be sympathetic, either, as they might perceive it as their role to be the gatekeepers between the experts and the public. It is also clear that public engagement, done properly, is expensive and time-consuming.

Many of the scientists who have been involved with public engagement, however, have reported that the experience is very positive. In addition to being reminded of the generally high standing of scientists and the scientific enterprise in our society, they are prompted to re-examine unspoken assumptions and clarify their aims and objectives. There are strong arguments that public deliberation and interaction can lead to more robust science policy, particularly in areas that are intrinsically interdisciplinary and explicitly coupled to meeting societal goals. What will be interesting to consider as more experience is gained is whether embedding public engagement more closely in the scientific process actually helps to produce better science.

A kind friend, who reads a lot more science fiction than I do, gave me a copy of Charles Stross’s novel Accelerando for Christmas, on the grounds that after all my pondering on the Singularity last year I ought to be up to speed with what he considers the definitive fictional treatment. I’ve nearly finished it, and I must say I especially enjoyed the role of the uploaded lobsters. But it did make me wonder what Stross’s own views about the singularity are these days. The answer is on his blog, in this entry from last summer: That old-time new-time religion. I’m glad to see that his views on nanotechnology are informed by such a reliable source.