I was reading an otherwise very dry and sober account of different definitions of rarity of organisms, written in 1984, and was struck by this odd aside:

Indeed a time can be foreseen when genetic engineering will allow huge numbers of valuable genes to be stored as part of a composite living organism, an animal with multiple features from many species or a vast polyploid plant bearing a hundred different flowers and fruits from its branches.1

The bizarre idea seems to be that in a world of disappearing species, genetic diversity could be archived by combining them in the body of a single organism.

It’s a fantasy of a universal genetic chimaera. It brings to mind pictures of a monster with the claws of a Siberian tiger, the strength of a mountain gorilla and the carapace of a sea turtle. An animal or plant Frankenstein made to blunt extinction. An Ark made flesh. An Ark of living wood.

I was wondering whether anyone knew of similar or related concepts? Perhaps in science, but maybe more likely from science fiction? It would be fascinating to know whether this suggestion was a single flash of the imagination or whether it has counterparts, a history or a context. If this rings any bells, then please leave a comment below.

==

1. Paul Munton, ‘Concepts of threat to the survival of species used in Red Data Books and similar compilations’, in Richard and Maisie Fitter (eds.), The Road to Extinction: Problems of Categorizing the Status of Taxa Threatened with Extinction, Gland: IUCN, 1987, pp. 71-88, pp. 87-88.

Halfway through my family holiday in the Black Forest I found out that Martin Heidegger had lived in the neighbouring valley. I had accidentally chosen a location only about a mile, as a raven flies, from the little hut in which the philosopher had composed Being and Time. (I say “accidentally”. A few years ago, while driving around the suburbs of Boston, I also “accidentally” found myself taking my family past Thoreau’s pond at Walden. Of course we had to stop and have a poke around. There might be something subconsciously going on with me concerning philosophers and tiny buildings.)

Heidegger had studied at the University of Freiburg. The city lies at the western edge of the southern Black Forest, a few miles from the Rhine and the French border. After the First World War, Heidegger worked as an assistant to the phenomenologist Edmund Husserl; Hannah Arendt would later accuse Heidegger, her lover and a National Socialist party member, of murdering him. That’s all in the future.

In 1917 Heidegger married Elfride Petri, and her dowry was spent on a plot of land in the hills around the ski resort town of Todtnauberg. The couple, with a young family, occupied the hut in August 1922. So began a period of great intellectual productivity. Heidegger took up a professorship at Marburg in 1923 and returned to Freiburg, on Husserl’s retirement, in 1928. Apart from a brief post-war absence, for denazification, Freiburg was Heidegger’s institutional house. But his home was in the Black Forest hills. On campus he lectured and supervised a stellar array of students, including Arendt, Hans-Georg Gadamer, Leo Strauss and Herbert Marcuse; throughout, Heidegger would return to his hut to live and think.

Martin Heidegger dressed for the part. When he left the city behind, with its university, cathedral and offices, the philosopher would don rustic clothes. Here’s a picture, where we can see Heidegger dressing down, comfortable playing the role of honest, simple farmer. When he left Todtnauberg to commute to work he put the professorial garb back on.

Todtnauberg attracts skiers in the winter and hikers in the summer. The town is proud of its humble philosopher-celebrity and has thoughtfully mapped out a guided walk, the “Martin Heidegger Panorama-Rundweg”, which takes you around the surrounding countryside and past Heidegger’s hut. There’s a pamphlet that can be picked up from the tourist office, with text in English and German. Oddly, along the Rundweg there is a series of enormous chairs, artworks that make for uncomfortable sitting. But mostly the impression is, as the pamphlet title suggests, panoramic: a picturesque vista of hills capped with beech and conifers. Halfway along, a small farm road dives to the right, and there is Heidegger’s humble abode.

The picture does not really give you a sense of just how small the hut is. It is one storey, and the roof would end below my eye level if I stood against the wall. Heidegger might have moved in for the peace and quiet, but I can’t imagine family life being quite so tranquil. Perhaps the children were told to play outside. The building is private property – indeed it apparently belongs to Heidegger’s descendants – and a discreet notice on the final path warns us that getting any closer is verboten.

A gentle wisp of smoke tells us that there’s a wood-burning stove inside, but otherwise, for most of its history, Martin Heidegger kept domestic technology to a minimum. The tourist pamphlet is rather charmingly revealing on this point:

From the hut’s simple outer appearance can be drawn conclusions to the inner appearance: the spare furnishing is still unchanged. The most modern thing in the hut is a little radio which bought Heidegger in 1962 for listening the news about the Cuba-Crisis. They got spring water from a near well. During the first years they got no electricity. In 1931 they got an electric connection.

Apparently when Martin received an offer to go to Berlin, Elfride took the opportunity to bargain with the Baden government that, in return, her hut should be connected to an electricity supply. Martin, one presumes reluctantly, accepted the modernisation, even though he stayed in post in Freiburg.

Now, of course, from an STS point of view, this attitude to technology is perhaps the most interesting aspect of seeing how Heidegger lived. When Heidegger was allowed to teach again after the Second World War, one of his first and most famous lectures concerned ‘The Question Concerning Technology’. It is one of the foundational texts of the philosophy of technology, and requires some effort to follow. A conservative, Heidegger held a view, shared by many, that the human relationship to the environment had been impoverished by industrialisation. The lecture expressed intense disgust, for example, with the canalisation of the wild Rhine, and lingered over authentic, artisanal, pre-industrial worked objects. Heidegger’s profound point was to go beyond the mere observation that we treat our environment in increasingly instrumental terms – as means to ends – and to argue that we should wonder why nature has this uncanny affordance to us at all. Nature, he says, has been framed so as to offer us means, and there are other ways we can, and should, relate and live.

On the surface this argument seems to reflect, naturally, the philosopher’s chosen lifestyle: the simple abode, the rustic friendships, and the late admission of modern technologies – electricity for the wife, and a radio only at a moment of a threat of global apocalypse.

We might simply enjoy the mild irony of the Martin Heidegger Panorama-Rundweg, a life arrayed in guided country walks, or even note the marked Schwarzwald enthusiasm for electric fences – these are time savers for today’s landowners who manage ski runs in the winter but want quick and effective containment of livestock in the summer. Indeed an electric fence now runs metres from Heidegger’s hut:

However, if you dig a little deeper, you can find out that the sweeping agrarian countryside, with its clear views of the Alps in the distance and cows grazing in the meadows, is not all the land contains. While browsing the small museum in the neighbouring town of Wieden, in the Rathaus across from the tourist leaflets, I found, propped up on a display case, this map of the area:

The map shows the land underneath Heidegger’s feet. There are mine shafts in all directions. Extraction of ores made this part of the southern Black Forest a highly industrial area. Means to many ends. While the philosopher, dressed as a farmer, sat outside his humble hut admiring the view, thinking his thoughts, below men and machines worked hard together.

John Krige (1997) has alerted us to the contribution of scientists, rather than governments, in re-organising where and how science is done, particularly in the organisation of transnational scientific cooperation. Viewed from this perspective, new and unexplored histories of science and politics can be written. Since the Second World War, the most active geopolitical region of scientific cooperation has been Europe, and this cooperation forms a significant part of what we call European integration. Yet what we know about European integration is dominated by political histories which privilege conventional ‘political’ integration, commonly thought of as Member States ceding sovereignty to the European Community. Such histories, according to Neil Rollings (2007), focus on top political figures and civil servants as the decisive actors. What is surprising is that, when we look at European scientific cooperation, it often precedes and draws after it conventional ‘political’ integration. From this perspective, scientific cooperation can be seen as a politically creative and binding force. The European Centre for Disease Prevention and Control is a good example of precisely this.

Based in Stockholm, the European Centre for Disease Prevention and Control (ECDC), an agency of the European Union, officially began life in May 2005. The only serious history of the ECDC yet written, as far as I am aware, is by Scott Greer, whose account of the ECDC’s origin follows the approach taken by many ‘politician-centric’ histories. Greer tells us that if we want to know why the ECDC came into being, we will have to know more about the activities of a top civil servant, Fernand Sauer, and a former European Commissioner for public health, David Byrne. This is misleading. The ECDC’s mission and organisational structure are based on the planning and lobbying of European epidemiologists and microbiologists during the 1990s, who developed, often in competition with one another, what they thought would be more effective ways to control and prevent disease.

In 1992, two epidemiologists, Chris Bartlett and Gijs Elzinga, proposed to the European Commission that it would be beneficial to identify gaps and duplications in all of the international surveillance and training collaborations then taking place in the European Union. Following the conclusion that there were gaps (for example in food-borne diseases) and duplications, Bartlett and Elzinga asked the European Commission to fund twice-yearly meetings and a small technical support unit so that epidemiologists from participating Member States could strategically develop the surveillance and research of communicable diseases. The Commission agreed, and what was known as the ‘Charter Group’ emerged at the beginning of 1994, bringing together heads of communicable disease centres from around the EU on a voluntary basis.

Under the Charter Group, a network of disease-specific programmes was developed, using existing national centres, such as the Réseau National de Santé Publique in France, as focal points for the European surveillance of specific diseases such as AIDS and Salmonella infection. This approach to controlling disease became known as the Network Approach. The Charter Group established European data-sets, identified emerging diseases, and assisted in the response to national outbreaks. In 1995 the Charter Group initiated a new monthly and weekly bulletin, EuroSurveillance (now under the auspices of the ECDC), as a way to bring together the editors of national surveillance bulletins from EU Member States. Also established by the Charter Group in 1995 was a European training programme for epidemiologists, the European Programme for Intervention Epidemiology Training (EPIET), to produce individuals competent to undertake epidemiological investigations at an international level; it too has since been absorbed by the ECDC.

What was distinctive about this new approach to controlling disease was that it went beyond research collaboration to coordinating and harmonising the surveillance and research of communicable diseases between existing national centres of disease control. A new mechanism for choosing what research to undertake was created: research was prioritised against the need for it on a European scale, and its urgency gauged against the increasingly integrated surveillance network. The Charter Group’s network approach was politically sanctioned in September 1998, when A Network for the Epidemiological Surveillance and Control of Communicable Diseases in the Community was established by an Act of the European Parliament and the Council of the European Union. This was complemented in 1999 by an EU-wide rapid alert system, intended to enable rapid transmission of confidential data between national health authorities in the event of an emergency.

The Charter Group was not, however, the only organised lobby for the future of communicable disease control. From 1996, another, competing vision of what the future of communicable disease control in Europe would look like was emerging. It was led by microbiologists, for instance from the European Society of Clinical Microbiology and Infectious Diseases, but most prominently Michel Tibayrenc of the Centre d’Etudes sur le Polymorphisme des Micro-organismes in France, who were in favour of a central European organisation. The institutions of the European Union paralleled this divide. The European Parliament was in favour of a centre for communicable disease as an institution of the European Union; the European Commission and the Council of Ministers, however, favoured the network approach. In October 1998, a month after the act declaring the network approach the preferred technique of surveillance, two voluble epidemiologists, Weinberg and Giesecke, attempting to staunch the flow of the idea of a central organisation, declared that the ‘idea of a central edifice seems to be politically dead’.

In the wake of this pronouncement, Michel Tibayrenc initiated discussions at the International Board of Scientific Advisors in September 1998 and created a European Centre for Infectious Disease (ECID). This was essentially a lobby group, mostly comprised of microbiologists, advocating a European centre. The ECID had a ‘scientific board’ of around 30 people and a steering committee, who advocated a European equivalent of the US Centers for Disease Control (CDC, founded 1946). The US CDC, mirroring a long history of US promotion of European integration, took a direct interest in furthering the cause of the ECID, with the head of the CDC’s parasitic diseases division, Dan Colley, sitting on the ECID’s scientific board of advisers. Furthermore, public health representatives from developing countries and former Soviet states were also supportive of a European centre for controlling communicable diseases, because the ECID promised to provide expert assistance to, and exchange information with, these nations.

Advocates of both the network approach and the ECID explicitly contested each other’s claims about the benefits of each approach. Leading epidemiologists argued that the proposed coordinating functions of a centre were already being performed by the network approach. The ECID countered that the network approach alone was not good enough: national centres of communicable disease were ill-prepared to face a major challenge such as bioterrorism and were not fulfilling their aim of preventing gaps and duplications in research. The idea of an initiating but disunited science community mirrors John Krige’s history (1989) of the origins of CERN in the early 1950s. Krige shows how two competing visions existed within the physics community of how European states should cooperate in nuclear physics research. One side advocated a network of nuclear research using existing laboratories; the other advocated a single European research laboratory that would build a nuclear accelerator to compete in power with those at Brookhaven and Berkeley in the United States. Krige points out that the two sides were not in opposition: both saw the value in cooperating, but they had different views on how to cooperate.

Tibayrenc’s vision of what a central organisation should do has been reflected to a remarkable extent in the creation of the ECDC. Tibayrenc thought a central organisation would strengthen the effectiveness of the ‘network approach’ but should also take an active research and surveillance role, and this dual function has been incorporated into today’s ECDC. A good example of how the ECDC mirrors Tibayrenc’s ideas can be seen in its first actions. The ECDC was established concomitantly with the 2005 outbreak of H5N1 influenza (bird flu) and formed part of outbreak investigation teams. In Turkey and Romania, three ECDC staff were on the ground at all times, and an ECDC scientist led the investigation in Iraq. Here, the ECDC emulated Tibayrenc’s vision of a future centre with a mobile scientific staff, and his conviction that disease within Europe can only be controlled by a European centre working in non-EU states as well as in EU Member States.

This map, produced by the ECDC, shows the distribution of the Aedes albopictus mosquito in Europe and bordering nations. The mosquito, a native of southeast Asia, is a vector of emerging diseases in Europe such as West Nile fever.

We don’t know how far the ECID’s lobbying and Tibayrenc’s efforts directly influenced the political conviction to create the ECDC. But we do know that the European Parliament was largely in favour of a centre from the start of Tibayrenc’s lobbying. We know that from 2002 the European Commissioner for public health, David Byrne, seems to have come on board with the idea, announcing in a speech to the Red Cross and Red Crescent in Berlin that ‘plans are in preparation to set up a European centre for communicable diseases, to become operational in 2005’. There were other factors too that could have triggered, or given substance to, the justification for a centre of disease control, such as the threat of bioterrorism after 9/11 or the 2003 severe acute respiratory syndrome (SARS) outbreak. But perhaps we should not place too much emphasis on these causes. They obscure the fact that the ECID had lobbied for a European centre since 1998 and had anticipated how it would work, as well as the fact that a formal European network approach to disease control, which took shape from the early 1990s, was already in place.

According to Colin Talbot (2004), agencies such as the ECDC have been created in the wake of a citizenry increasingly sceptical of experts and politicians. Talbot says that agencies gain public trust through being autonomous from centralised government. However, this approach mistakenly views agencies only through the eyes of worried politicians seeking to gain trust for expertise. As we have seen, however, the ECDC was not founded on a need to win over a sceptical citizenry; it was the culmination of many years of lobbying to improve the effectiveness of disease control. Moreover, agencies can be very different from one another. For instance, Waterton and Wynne (2004) argue that upon its inception in 1993 the European Environment Agency (EEA) was in competition with other institutions and organisations, the European Commission’s DG for the Environment in particular. Moreover, they argue that the EEA was not intended to influence policy networks (even if it had the ambition to do so). Scott Greer’s account of the ECDC differs from this in arguing that the crowded but fragmented institutional landscape of communicable disease control, rather than being a source of competition, is the raison d’être for the ECDC’s existence, and provides the sinews of its future growth. Of course, what Greer misses is that this rationale was developed independently by the Charter Group and the ECID.

This brief account of the ECDC has been all about its origins and not about the ECDC itself. But there is a reason for this. How the origin of a European agency, or any other type of science-based organisation, is perceived changes what we think of that agency in its current form. Looked at from a conventional political perspective, the ECDC looks like a weak organisation amongst what Scott Greer calls ‘a crowded institutional landscape’. However, if we acknowledge that the ECDC assimilated novel and ambitious projects to coordinate the research and surveillance of, and the training for, communicable diseases in Europe, then the ECDC represents a new way of controlling and preventing disease which did not exist prior to the 1990s. When its origins are taken into consideration, the ECDC looks not like a beginning in the European control and prevention of communicable disease, but like the culmination of a new way to govern communicable disease in Europe.

Krige, J. and Pestre, D., ‘Some Thoughts on the Early History of CERN’, in John Krige and Luca Guzzetti (eds.), The History of European Scientific and Technological Cooperation, Luxembourg: Office for Official Publications of the European Communities, 1997, pp. 36-60.

Waterton, Claire and Wynne, Brian, ‘Knowledge and Political Order in the European Environment Agency’, in Sheila Jasanoff (ed.), States of Knowledge: The Co-production of Science and Social Order, London: Routledge, 2004, pp. 87-108.

Tibayrenc, Michel, ‘A European centre to respond to threats of bioterrorism and major epidemics’, Bulletin of the World Health Organization, 79, 2001, p. 1094.

Tibayrenc, Michel, ‘The European Centre for Infectious Diseases: An adequate response to the challenges of bioterrorism and major natural infectious threats’, Infection, Genetics and Evolution, 1, 2002, pp. 179-181.

Rollings, Neil, British Business in the Formative Years of European Integration, 1945-1973, Cambridge: Cambridge University Press, 2007.

Byrne, David (reported by Twisselmann, Birte), Eurosurveillance, 6 (17), 26 April 2002.

I was in the National Archives yesterday and I came across a document showing that the Oppenheimer trial, one of the most infamous episodes concerning scientists in the Cold War, was being monitored at the highest levels in the UK. The document, a report by Lord Cherwell to Winston Churchill, at the latter’s urgent request, seems to have been overlooked in the (otherwise) exhaustive biographies of the Manhattan Project leader by Kai Bird and Martin Sherwin (American Prometheus) and Ray Monk (Inside the Centre: The Life of J. Robert Oppenheimer), and also in Charles Thorpe’s insightful and focused study Oppenheimer.

It started with a telegram, sent by Prime Minister Churchill to his close scientific adviser Lord Cherwell on 13 April 1954. This was at the height of the Oppenheimer trial, in which enemies of the physicist, convinced that he was a Communist sympathiser and angry that he had opposed the acceleration of the hydrogen bomb programme, had sought to remove Oppenheimer’s security clearance. The blow was symbolic. Edward Teller, the hydrogen bomb’s most fervent promoter, recalled in his memoirs his wish to “defrock” Oppenheimer “in his own church”.

Churchill asked Cherwell: “Let me have your views on the matter I rang you about”. Within a day Cherwell had written his three-page memorandum, reproduced below.

Cherwell had only met Oppenheimer briefly, probably three times, but was ready to offer a character assessment. “Robert Oppenheimer is a very good physicist though not perhaps quite so outstanding as some of the papers make out”, wrote Cherwell, “I first met him when he was in charge of the weapon station at Los Alamos in the Autumn of 1944 and found him friendly and unassuming and keenly interested in getting the bombs to work. When I saw him once or twice subsequently he appeared anxious to resume co-operation with England but he was excessively careful not to let out any secrets”.

Cherwell goes on to describe an encounter at an Oxford high table dinner (on the occasion of Oppenheimer’s “somewhat incomprehensible” Reith Lectures), and retails some second-hand coverage of the early stages of the Oppenheimer trial (an article in Fortune magazine that was part of the witch-hunt).

Cherwell reserves judgement somewhat, before concluding that he thought it very unlikely that Oppenheimer was a traitor: “I believe a brother of Robert Oppenheimer was at one time an avowed Communist and he may well have associated with Communists – in fact I gather he admits that this was so”, writes Cherwell, correctly, “But I should be very surprised to learn that he was taken in for long”. He concludes:

My impression is that he has vaguely left-wing sympathies and that he has a sort of feeling of guilt about having made the original bombs; and possibly his humanitarian instincts may have played some part in swaying his judgement towards a defensive rather than offensive strategy [ie radar early-warning systems rather than hydrogen bomb deterrence]. But I would consider it altogether unlikely that he should ever have betrayed any secrets – he was certainly most careful not to tell me any after the MacMahon [sic] Act was passed.

This seems to have satisfied Churchill, although he passed the document on to his Foreign Secretary and fellow confidant Anthony Eden: “This may interest you. Let me have it back”.

So what does this tell us that is new? It should not be surprising that the Oppenheimer case was being carefully watched. Britain had its own nuclear troubles: cut out from collaboration with the Americans by the McMahon Act, Britain had had to launch its own crash nuclear weapon programme, one, as Ernest Bevin famously (and probably drunkenly) announced, “with a bloody Union Jack on it”. Furthermore, it had been revealed that the British contingent to the Manhattan Project had contained an extremely effective spy, Klaus Fuchs. There was good reason, beyond Cold War paranoia, to be watchful of security issues concerning scientists and the bomb.

I was in the National Archives at Kew today researching some history of microbiological research and I came across an oddity. It was a paper written by a Martin Blank, issued by the Office of Naval Research’s London branch office in July 1975. Its title is ‘Interdisciplinary approaches to science – bioelectrochemistry and biorheology as new developments in physiology’. There are several things that are odd about it – at least surprising to me.

The paper is ONR-funded science studies – sociology and philosophy of science. The author is a physiologist who wants to know why biology is becoming more ‘interdisciplinary’ and what the causes are of the rise of ‘hybrid disciplines’. He discusses briefly the well-known examples of molecular biology and the application of X-ray crystallography to questions of biological structure, but moves on to consider his two more obscure case studies – bioelectrochemistry and biorheology (the science of flow in biological systems).

It starts with an outline of Popper and Kuhn – the usual suspects. But it moves on to cite some authors who have rather dropped out of STS view. For example, there’s a discussion of the geographer R. J. Horvath’s notion of the expansion of “machine space” as a master narrative of human history. As machines have expanded they have encroached on “space that is normally occupied or utilised by people”. Horvath had a paper in Geographical Review in 1974 that I’m tempted to chase up.

There are other odd things, not least the fact that the ONR had a London branch office. Who knew? I hadn’t heard of it before, and I wonder what else it did. I’m guessing that its brief was simply to channel ONR-funded research findings to a UK audience. Perhaps it funded more sociology and philosophy of science?

Finally, it’s very curious that the paper crops up in the file it’s in (CAB 184/285). It’s being read in the middle of a big discussion about the military withdrawal (or at least relocation) from biological warfare research, and the consequent need to find a role for the Microbiological Research Establishment at Porton Down. In the mix too is the question of how to respond to genetic engineering. Brian Balmer and I are writing a paper on this topic, which is why I came across it. I’ve no idea, yet, if US Cold War-funded science studies is going to be part of the bigger story…

I recently appeared on Resonance FM’s programme The Thread, talking about the history of electrical storage. It was a fun conversation with others including modern literature prof Steven Connor, historian of medicine (and gin) Richard Barnett and Imperial College energy policy expert Philipp Gruenewald. You can hear the programme here.

One story I told was of one of my favourite theories of society and civilisation: Heinz Wolff’s argument in ‘Society, storage and stability’. It’s rather obscure. In fact, according to Google Scholar, Wolff’s paper, which appeared in a book called Science and Social Responsibility, edited by Maurice Goldsmith and published in 1975 by the Science Policy Foundation, has never been cited. It would fail the research councils’ “Impact” test spectacularly. But it’s well worth retelling.

Before we start, let us get our Heinz Wolffs straight. It’s confession time! In the programme I got them the wrong way around, and it has taken a little research to sort things out.

In the 1970s there were two of them.

First there is Heinz Wolff, a psychotherapist at the Maudsley, an enormous psychiatric hospital in Denmark Hill. He also taught at University College Hospital. His biography is interesting. (The sources are an obituary here and a pair of interviews with Sidney Bloch that appeared in The Psychiatrist, here and here.)

Then, on the other side of London, there is the second Heinz Wolff, the bioengineer, now at Brunel, and later famous for his thick accent and avuncular appearances on TV, especially as the presenter and impresario on the Great Egg Race, in which enthusiastic engineers built Heath Robinson contraptions to transport eggs. That was what science programming used to be like.

The 1975 paper on society, storage and stability is by the egg man not the head doctor. Confusingly, Heinz Wolff the bioengineer was also working in the medical sector. He was funded by the Medical Research Council at the Clinical Research Centre in Harrow, designing monitoring equipment for hospital patients.

At first glance the storage paper looks disconnected from his bioengineering career. However, its ideas and origin make sense in terms of its context. I have argued elsewhere – including here and here – that the ‘long 1960s’ (from the late 1950s to the mid-1970s) were a period of transition for science in society. In particular, the period was marked by a distinct turning inwards: a new critical awareness of the place and roles of science, expertise and authority more generally. The sociology of scientific knowledge, the critical science of science, is one phenomenon of this turn. Another is the radical science movement marked by the activity of groups such as Science for the People.

But another was a more Establishment response that entailed worrying about some of the same issues that fired up the radicals but coming to different conclusions. Maurice Goldsmith’s Science Policy Foundation was one of these responses. It was based in Benjamin Franklin House, near the City. Honorary fellows included Julian Huxley and Lewis Mumford. It was advised by some of the great and the good, including Peter Medawar, Hermann Bondi, Asa Briggs, John Kendrew, Derek de Solla Price, Lord Snow and Alvin Weinberg. Hot postwar specialties (molecular biology, cosmology), meet critical Big Science (Price, Weinberg) and the Two Cultures (Snow).

1973 was a year of strikes, IRA bombs in London and the Cod War. In October 1973, Maurice Goldsmith gathered friends and sympathisers, along with an eclectic bunch of others, including the ex-minister of technology Tony Benn, now in opposition, to discuss science and ‘social responsibility’.

Heinz Wolff spoke, I think, towards the end of the conference. What worried him was the vulnerability of complex societies to the disruption caused by an ‘undesirable’ minority. Just as in his body ‘a very large proportion of my internal housekeeping is reduced to immunology … merely to cope with the invasion of what appears to be quite trivial numbers and masses of interfering organisms’, so the ‘more complex society becomes the more vulnerable it becomes to interference by a small number of its members’. Despite the immunological metaphor of invasion, it is clear that what Wolff is referring to here is strikers, the people who can withdraw their labour and derange the system. So, for example, he wonders out loud:

We could try to control the aberrant minority by having very draconian methods of law enforcement. We could, for instance, forbid people in certain sensitive positions to strike, and if they showed any signs of doing so we could threaten to shoot them, or just shoot them. But this, in the kind of society in which we are living, is not permissible. We must, therefore, look for different methods of ordering our society.

At this point, Wolff invites us to think outside the box, or, rather, to have more boxes:

The society I would like to see is one which involves the concept of storage, because storage and stability are almost the same thing. In the same way as a large capacitor is used in the power supply, as a storage device to smooth out variations and give stability to the circuit, so storage in the society sense is also a smoothing capacitor.
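Wolff’s capacitor analogy can be made concrete with a toy simulation (the numbers and the function are my illustration, not Wolff’s): a community draws a steady demand from a store fed by deliveries, and the larger the store’s capacity, the longer a total interruption of supply can be ridden out.

```python
# A toy illustration of storage as a smoothing capacitor (my example,
# not Wolff's): a store of a given capacity is topped up by deliveries
# and drained by a constant daily demand.
def days_survived(capacity, deliveries, demand=10):
    """Return the number of days demand is met before the store runs dry."""
    store = capacity  # start with a full store
    for day, delivered in enumerate(deliveries, start=1):
        store = min(capacity, store + delivered - demand)
        if store < 0:
            return day - 1  # demand could not be met on this day
    return len(deliveries)

# A week of normal deliveries, then a strike cuts supply entirely.
strike = [10] * 7 + [0] * 30

print(days_survived(capacity=20, deliveries=strike))   # -> 9
print(days_survived(capacity=100, deliveries=strike))  # -> 17
```

The bigger store does nothing in normal times; its whole value, like the capacitor’s, is in smoothing out the interruption.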

Implementing this idea would mean a wholesale change in how we think about and design our technological infrastructure. Technological systems needed to be redesigned so that storage was widely expanded and widely distributed. There was a distinct Small is Beautiful aspect to Wolff’s proposals here. The idea is that if all ‘small communities’ were able to store things better, whether those things were energy, water, food, and so on, then collectively society would be more robust. He gives one example (which has recent resonances):

Some years ago in London we had a strike of 700 drivers of the tankers which deliver petrol to local garages, and London more or less ground to a halt. We had a city of 12 million people apparently at the mercy of the activities of 700 people.

So far, so reactionary.

What’s interesting about the argument is that Wolff goes further, extending it into one encompassing all of human history. It becomes a Storage Theory of Civilisation. The important first steps in social and cultural development were not learning to hunt on the plains of Africa, but what happened next: what to do with all that rapidly rotting meat. In Wolff’s words:

When primitive man first roamed the earth he had no security, as such, because he had to find the food which he wanted to eat every day by hunting for it, and in consequence he developed no civilisation as such. He had no art and no culture. He then learned to store things. He was able, therefore, to decouple himself from the variability of nature, and he decoupled himself not only from the variability of food supply but also the variability of the weather. When he got cold, he stored heat in some way, either by lighting a fire or by wearing clothes or living in caves. So he increased the time constant over which he was able to operate independently from the inputs which nature provided for him.

Civilisation progressed as the means of storage became more sophisticated. The Golden Age was perhaps in the 16th century when a big household, purchasing its supplies from an annual fair, could store nearly all the resources necessary to live well and to live independently. But in recent times dependence on others for quick supply – a dependence that was chosen by foregoing the means of storage – had meant that society was increasingly unstable. The vulnerabilities revealed by the petrol strike ‘could not have happened in the 16th, 17th or 18th-centuries because people did not have this degree of continuous interdependence on each other’.

Wolff advocated a return to the ‘technological village, but a technological village with a very high storage capacity’. Build houses with big tanks for water and fuel. Pool the small town’s sewage and use the biofuel to run the ‘domestic bus services’. There was also room for ‘community greenhouses’, factories that hoarded spare parts, and even ‘sealed nuclear reactors’ just in case. Encourage ‘do-it-yourself’ and self-sufficiency. (As an aside: the self-sufficiency sitcom The Good Life was being commissioned as Wolff spoke.) But, most importantly, increase and distribute ‘storage’.

One of the tasks of historians of science and technology is to try and understand ideas, practices and devices in relation to the wider picture of historical change. So, to take one example, we are intensely interested not only in the development of Darwin’s theory of natural selection but also in its relationship to, say, Malthus’s pessimistic portrait of people and resources, and in turn to the social controversies of their time. Our field’s best work – such as, in this case, the biographies written by Adrian Desmond and Jim Moore and by Janet Browne – substantiates these links, revealing a Darwin who travelled through nineteenth-century spaces, geographical and intellectual, pulling together insights to reconceptualise his world.

When talking about the big ideas and changes of the twentieth century, the English mathematician Alan Turing is an attractive figure, not least because he is one of the few names that sparks immediate popular recognition and reaction. He, too, conjured ideas – the universal machine, machine intelligence – of profound consequence. As historians we want to ask, too, how he moved through the landscape, pulling together insights and remaking his world.

I’ve recently completed a book called Science in the Twentieth Century and Beyond (Polity 2012). It’s a survey, and it offers a conceptual tool to help us talk about the relationship of science to its wider settings. I argue that sciences solve the problems of “working worlds”, and also draw inspiration from them in other ways. Working worlds are arenas of human projects that generate problems. Our lives, as were those of our ancestors, have been organised by our orientation towards working worlds. Working worlds can be distinguished and described, although they also overlap considerably. One set of working worlds are given structure and identity by the projects to build technological systems – there are working worlds of transport, electrical power and light, communication, agriculture, computer systems of various scales and types. The preparation, mobilisation and maintenance of fighting forces form another working world of sometimes overwhelming importance for twentieth-century science. The two other contenders for the status of most significant working world for science have been civil administration and the maintenance of the human body, in sickness and in health. It is my contention that we can make sense of modern science once we see it as structured by working worlds. The historian’s task can be rephrased as the revelation and documentation of these ties between science and its working worlds.

Here I’ll give a worked case study. I think we can understand important aspects of Alan Turing’s achievements in relation to at least two working worlds. The first concerns the sources of inspiration for Turing’s universal machine. I argue, and here I draw on my other book, The Government Machine (MIT Press, 2003), that the universal machine can be examined in the light of models of civil administration. I will deal with that case at length. But I will also note here my second working world. I can interpret the changing organisation of Bletchley Park as a response to problems generated by the working world of warfare. The problem thrown into relief by the working world of 1930s warfare was speed – how to recognise and respond to the incoming bomber, how to complete novel game-changing weapons in time, and how to decrypt coded messages fast enough to be of use. The problem of speed shaped the organisation of work on radar, the atomic bomb and codebreaking. I argue in The Government Machine that Bletchley Park had to be transformed into an industrialised, almost Ford-style enterprise, with a finely arranged division of labour, very high staff numbers, an emphasis on through-put, and innovative mechanisation at bottlenecks. Turing helped push this process. It was an industrialisation of symbol manipulation. A step further took place in radar where, again in response to the problem of speed, the organisation was reframed as an “information system”, the first modern use of this term, as far as I am aware.

However, let me now give a more detailed account of Turing in relation to the first working world.

The 1936 Paper and the Universal Machine

Both Babbage’s Analytical Engine and Turing’s description of a ‘universal computing machine’ have been claimed as computers before their time. Historians are uneasy about such claims, and have, rightly, warned against the sin of retrospective judgment. The Analytical Engine and Turing’s Universal Machine were devices of their own contexts, not forecasts of later developments. But it is not retrospective to assert that both have features – important, similar features – that stem from similarities of context. In The Government Machine (2003, pp. 39-44) I argue that we should take Babbage at his word when he described the power and mechanism of the Analytical Engine as a machine for gaining control over a ‘legislative’ and ‘executive’, that is to say he interpreted mechanical computation using the language of political philosophy. Turing’s theoretical Universal Machine should also be read as inscribed with political references. If the Analytical Engine, the Universal Machine and the computer are similar it is because they were imagined in a world in which a particular bureaucratic form – an arrangement of government – was profoundly embedded.

As a boy, Alan Mathison Turing was immersed intermittently in Civil Service culture. He was conceived in Chatrapur, near Madras, where his father, Julius Mathison Turing, was employed in the Indian Civil Service. He was born in London in 1912, after which his mother stayed in England while his father travelled back to the subcontinent. His mother rejoined Julius, leaving Alan in the hands of guardians. This pattern, in which the family was separated and reunited, marked Alan’s early life. By 1921, dedication to a civil service career had made Julius Secretary to the Government Development Department of Madras (Hodges, 1983, pp. 7-10). Even though Julius was a distant father, his employment provides one direct source for Alan’s knowledge of clerical work.

Our best account of Turing’s life and work is the biography by Andrew Hodges. He traces Turing’s long interest in machines and the mind to the traumatic experience of losing a very close friend, Christopher Morcom, to bovine tuberculosis in 1930, when both were attending Sherborne School and preparing for entry to Cambridge University. The hope, prompted by intense remorse, that Morcom’s mind might linger after death, expressed in a paper on the ‘Nature of Spirit’ written for Christopher’s mother, was an early exploration of the relationship of mind and thought in a material world, one that later recurred in Turing’s classic works such as ‘Computing Machinery and Intelligence’ (1950). Hodges’ thesis is convincing. What I add here is an emphasis, hinted at but not developed in Hodges, on a key resource available to Turing.

While a fellow of King’s College, Cambridge, in the mid-1930s, Turing had begun to attack the decidability problem – the Entscheidungsproblem – set by David Hilbert. The celebrated Göttingen mathematician had pinpointed several outstanding questions, the solution of which would, he hoped, place mathematics on a sound foundation. In particular, Hilbert hoped that mathematics could be proved to be complete, consistent and decidable. It would be complete if every mathematical statement could be shown to be either true or false; consistent if no false statement could be reached by a valid proof starting from axioms; and decidable if there could be shown to be a definite method by which a decision could be reached, for each statement, whether it was true or false. But to Hilbert’s chagrin, the Austrian mathematician Kurt Gödel demonstrated in 1930 that arithmetic, and therefore mathematics, must be incomplete (or inconsistent). Gödel constructed examples of well-formulated mathematical statements that could not be shown to be either true or false. Starting with any set of axioms, there always existed more mathematics that could not be reached by deduction. This was an utterly shattering conclusion, an intellectual high-point of the twentieth century.

There remained the possibility that mathematics could still be kept respectable: perhaps, even if there existed statements that could not be proved true or false, there might still exist a method which would show (without proof) which were true and which were false – the issue of decidability. If mathematics was decidable but incomplete, then the troublesome parts could still be cut out or contained. This was the problem that fired Turing’s imagination in the summer of 1935. What is remarkable about his solution, published in the Proceedings of the London Mathematical Society in 1937 while Turing was on a sojourn at Princeton, is that Turing not only answered the decidability question (with a ‘no’), but in doing so presented the theoretical Universal Computing Machine. (The paper was received on 28th May 1936 and read on 12th November 1936, and so it will be referred to as the ‘1936 paper’.)

The inspiration seems to have come from Turing’s mentor at Cambridge, Max Newman, who wondered aloud whether the Hilbert problems could be attacked by a ‘mechanical’ process (Hodges 1983, pp. 93-96). By ‘mechanical’ Newman meant ‘routine’, a process that could be followed without imagination or thought. The start of Turing’s insight was wilfully to allow a slippage of meaning, and treat ‘mechanical’ as meaning done by machine. Turing defined a ‘computing machine’: it is ‘supplied with a “tape” (the analogue of paper) running through it, and divided into sections (called “squares”) each capable of bearing a “symbol”’ (Turing 1937, p. 231). The computing machine could scan a symbol and move up and down the tape, one square at a time, replacing or erasing symbols. The possible behaviour of a computing machine was determined by the state the machine was in and the symbol being read. He argued that such machines, differing only by their initial m-configuration, could start with blank tape and generate numbers of a class he called ‘computable’. Although the route to answering Hilbert’s question from there is interesting, it is rather involved and beside the point for this paper. What matters is Turing’s description of a ‘universal computing machine’, which could imitate the action of any single computing machine.
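The machine Turing defined is simple enough to sketch in a few lines of modern code (an illustration in contemporary idiom, not Turing’s own notation): a finite table of m-configurations determines, for each state and scanned symbol, what to write, which way to move, and which state to enter next.

```python
# A minimal sketch of a Turing 'computing machine' (illustrative, not
# Turing's notation): behaviour is fixed entirely by the current state
# and the scanned symbol, as in the 1936 paper.
from collections import defaultdict

def run(table, state, steps):
    """table maps (state, scanned symbol) -> (write, move, next state)."""
    tape = defaultdict(lambda: ' ')  # unbounded tape of blank squares
    pos = 0
    for _ in range(steps):
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape)).rstrip()

# In the spirit of Turing's first example machine, which prints
# 0 1 0 1 ... on alternate squares, cycling through four states.
table = {
    ('b', ' '): ('0', 'R', 'c'),
    ('c', ' '): (' ', 'R', 'e'),
    ('e', ' '): ('1', 'R', 'f'),
    ('f', ' '): (' ', 'R', 'b'),
}
print(run(table, 'b', 8))  # -> '0 1 0 1'
```

The universal machine is then a single such table that, given a description of any other table on its tape, imitates it square by square.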

To justify his definition of “computable” numbers, Turing had to show that they encompassed ‘all numbers which would naturally be regarded as computable’, that is to say all numbers expressible by a human computer – ‘computer’ most commonly referred to a human, not a machine, before the Second World War (Campbell-Kelly and Aspray, 1996, p.9; Light, 1999; Grier, 2005). Crucially, to make his case, Turing conjures up two types of human computer. They appear in an important bridging section in the logical structure of ‘On computable numbers with an application to the Entscheidungsproblem’, between the demonstration of the existence and restrictions of a universal computing machine, and its application to Hilbert’s problem. In the first type, much of the information of how to proceed was contained in many ‘states of mind’, equivalent to many m-configurations of a machine. This was a model of a generalist: work proceeds by the manipulation of symbols on paper, but with the emphasis on the managerial flexibility contained in the large number of states of mind. This interpretation is justified when we examine Turing’s second type:

We suppose, as in [the first type], that the computation is carried out on a tape; but we avoid introducing the “state of mind” by considering a more physical and definite counterpart of it. It is always possible for the computer to break off from his work, to go away and forget all about it, and later to come back and go on with it. If he does this he must leave a note of instructions (written in some standard form) explaining how the work is to be continued (Turing 1937, p. 253).

It is at this point that we might wonder what might have prompted Turing to make the step of creatively reinterpreting Newman’s suggestion of looking for a “mechanical” method. What sort of machine is mechanical in this way, and would be known to Turing? Historically-minded commentators have scratched around this question, but their suggestions – that Turing machines are akin perhaps to ticker-tape or teletype machines – are unconvincing. Hodges (1983, p. 109) has written: ‘His “machine” had no obvious model in anything that existed in 1936, except in general terms of the new electrical industries, with their teleprinters, television “scanning”, and automatic telephone exchange connections. It was his own invention’. I think, however, there is one clear and compelling candidate for Turing’s mechanical inspiration: the civil service.

The Civil Service as a Machine

Machines are means for building order in the world (Winner, 1977, 1980). Governments, too, claim this role. This overlap helps explain why components of government have been likened to machines. Nevertheless, the choice of metaphor, and its application, target and reception, has varied according to time and place. Otto Mayr, in a book completed when he was director general of the Deutsches Museum, but in fact a culmination of a life’s work exploring the topic, set out the early modern history of the interplay between real and metaphorical machines as models for governance. In Authority, Liberty and Automatic Machinery in Early Modern Europe (1986) he argues that discussion of the workings of authoritarian modes of governance correlated with the use of clockwork metaphor, while liberal concepts of order were exemplified by appeal to self-correcting mechanisms such as the balance. Crucially, there was interplay between metaphorical and real machines: the presence of real exemplars provided a rhetorical resource, while rhetorical popularity of certain types of mechanism might have encouraged their use. Mayr (1986, but see also 1981) argued that British engineers’ precocious development of self-governing mechanisms, such as steam governors, went hand in hand with the exploration of self-regulation in liberal economic and political thought.

In the mid-19th century, the steam engine provided just such a rhetorical resource to commentators such as the editor of the Economist, Walter Bagehot. In The English Constitution, a much-read 1867 analysis of the British political system, Bagehot divided political institutions into two camps, the ‘dignified’ parts, which ‘excite and preserve the reverence of the population’, and the ‘efficient’ ones by which the state ‘in fact, works and rules’. The pomp and ceremony of the monarch opening Parliament was an example of the ‘dignified’ portion; the discreet bureaucracy exemplified the ‘efficient’. The whole was a ‘machine’. And indeed it was an example of the most celebrated machine in Victorian culture: the steam engine. It had a ‘regulator’, a ‘safety valve’, and coal in the form of the ‘potential energy’ stored in the monarch herself. By drawing on the metaphor of the state-as-steam engine, Bagehot was able to argue for the essential role of monarchy: indeed it was the source of power for the whole apparatus of government.

The appeal to specific machines in descriptions of government is in fact of less significance than the more general phenomenon of casting government, and in particular the Civil Service, as machine-like. In The Government Machine I argued that this construction was intimately tied to the aims and achievements of 19th century reform movements. One effect of administrative reform was to resolve issues of trust and reliability: from administrative action that was underwritten by trust in the gentlemanly status of the individual civil servant to trust that resided in the operation of a system. Reform was piecemeal; nevertheless the submission of the Northcote-Trevelyan report to both Houses of Parliament in 1854 marks a turning point. While its direct impact on administrative reform was patchy, it is of great interest here for its proposed formalisation of the division of labour, a discursive distinction between generalist “intellectual” work and that of “mechanicals”. The language would prove to be very influential.

Charles Edward Trevelyan brought to the task direct experience of the stresses induced in large-scale 19th century administration from his 1830s project, planned with his brother-in-law Lord Macaulay, of Indian Civil Service reform, as well as Irish famine relief in the mid-1840s. The “Irish business” had stretched the civil servants of the Treasury to breaking point. The problem, Trevelyan diagnosed, was that civil servants were not interchangeable: since actions were underwritten by an individual’s word, there was a “degree of precariousness in the transaction of public business which ought not to exist” (Minutes of Evidence of the Select Committee on Miscellaneous Expenditure, Parliamentary Papers, 1847-1848, 18, p. 151). Three of Trevelyan’s predecessors had broken under the strain (Cohen 1941, p. 88). Trevelyan’s solution, the core of the Northcote-Trevelyan report (1854), was for entry to the upper ranks of the Civil Service to be judged systematically by examination, while the burgeoning ungentlemanly work of “copying, registering, posting accounts, keeping diaries, and so forth” would be done by interchangeable ‘supplementary clerks’, the “mechanicals”. A ‘proper distinction between intellectual and mechanical labour’ would be the founding principle of the new system. If in the natural world Darwin would replace the patronage of God with competitive examination between species, so in the administrative world the generalists would be picked not by corruptible patronage but by selection of the fittest. The rest, the “mechanical” clerks, would follow instructions, routinely and without thought.

Crucially, and oddly given the inclusion of gentlemen generalists at the top, the whole could be – and increasingly was – cast as a machine. Indeed, the Civil Service was, by the end of the 19th century, a general-purpose machine. Three sources of pressure reinforced this mechanical discourse (Agar 2003, p. 65). First, a distinction could be firmly drawn between politicians (as operators of the machine) and a supposedly interest-free, neutral Civil Service that could operate identically under both Liberal and Conservative governments. The Civil Service, discursively a machine, once set in motion would follow a predictable, reliable, and discreet path. Second, the Civil Service was labelled as a machine because, as the state grew, people were employed whom the gentlemanly elite could not automatically trust: lower-class clerks and women. Trust in the upper echelons was secured by the appeal to honourable secrecy and gentlemanly discretion; casting the “mechanical” groups as components of the machine helped resolve issues of trust by extending to the lower echelons a metaphorical reliability. Finally, labelling the Civil Service a machine appealed to a growing technocratic element in British government, not expected or even foreseen by the proponents of the Northcote-Trevelyan settlement. In particular, the metaphorical language of the government machine was wilfully and creatively reinterpreted by an expert movement of mechanisers, which gained influence in the First World War and grew to a peak of influence after the Second. The resulting social history of the mechanisation (and later computerisation) of administrative work in the 20th-century British civil service is told in detail in The Government Machine.

Back to Turing

At the heart of Turing’s 1936 paper we find this same social relationship: the generalist-mechanical split, with the generalist leaving the office and ensuring that the mechanical clerk will be trusted to follow the routine instructions. The ‘state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape’, he wrote (1937, pp. 253-254). Turing’s point is that such work is equivalent to the actions of a computing machine (in which case both generalist and mechanical would be part of the machine), and, in particular, that any such work would be replicable by a universal computing machine. Hodges (1983, p. 109) notes that ‘Alan had proved that there was no “miraculous machine” that could solve all mathematical problems, but in the process he had discovered something almost equally miraculous, the idea of a machine that could take over the work of any machine. And he had argued that anything performed by a human computer could be done by a machine’. It helps us understand the seemingly miraculous, if we remember that government – especially the civil service – had previously been constructed as a machine capable of general-purpose action. My claim is that the civil service model of generalists and mechanicals, and therefore the working world of civil administration, framed Turing’s imagination of the Universal Computing Machine.

I do not think we should be surprised that Turing’s figure of a human computer is positively bureaucratic, not only in its attention to instruction-following and the manipulation of symbols on paper, but also in its mobilisation of the generalist-mechanical split. If he knew anything about what his father did at work, then the pattern would have been a resource at hand to think by. In fact there is no need to speculate about the exact train of influence, since Turing’s post-war project to literally build a stored-program computer within the British civil service provides further evidence to support my claim.

In early 1946, Turing’s proposal for a “Proposed electronic calculator”, written between October and December 1945 (Copeland 2000, 2005), was circulating in Whitehall. This project, soon called the Automatic Computing Engine (ACE, note the nod to Babbage in “Engine”), was approved and work began, with Turing on the staff, at the National Physical Laboratory (NPL). The NPL was part of the civil service, best known for its metrological work. While the project suffered from obstacles, delays and interruptions, prompting Turing to return to academia, a simplified version, the Pilot ACE, ran its first programs in May 1950 (Copeland, 2005). Turing’s proposal opens with a discussion of speed. In particular, he notes that historically calculation was only partly mechanised: ‘Calculating machinery in the past has been designed to carry out accurately and moderately quickly small parts of calculations which frequently recur’ (Turing 1946, p. 2). Now, ‘instead of repeatedly using human labour for taking material out of the machine and putting it back at the appropriate moment’, the materialisation of the Universal Computing Machine, ‘all this will be looked after by the machine itself’. (It is worth recalling here the words of Colonel Partridge, an early mechaniser of the Civil Service: the ‘aim of every alert organisation’ should be the replacement of human by mechanical labour.) A detailed description of components of the machine follows. But what is the ‘this’ that will be ‘looked after by the machine itself’? What, precisely, will this new machine do? In Turing’s own words, the ‘Scope of the Machine’ could be stated:

The class of problems capable of solution by the machine can be defined fairly specifically. They are those problems which can be solved by human clerical labour, working to fixed rules, and without understanding (Turing 1946, p. 14).

If, as I suggest, Turing’s theoretical outline of the Universal Machine in his 1936 paper was framed by his understanding of generalist-mechanical relations, then here, in Turing’s proposal for a real machine, we can see that its capacities, too, were coterminous with clerical labour.

Conclusion

Government departments were closely involved in the first experimental stored-program computers of the late 1940s, as patrons (the Ministry of Supply provided funds for computers necessary for atomic weapons research), sites (the Department of Scientific and Industrial Research’s National Physical Laboratory housed Turing’s ACE project), and as forums for discussion. As the name of one of these – the Brunt Committee on High-speed Calculating Machines – suggests, early computers were mostly considered narrowly as mathematical aids. Turing gave a list of problems capable of solution by the ACE. Most are numerical, but at least one is administrative (‘To count the number of butchers due to be demobilised in June 1946 from cards prepared from the army records’), although Turing wrote that this work would be more efficiently done by Hollerith punched-card techniques. This reminds us that the computer qua universal machine was not a historical inevitability but instead a category that had to be recognised, articulated and accepted. That work was a historical process, not a single event, and drew on, as one of its sources, Turing’s arguments and language.

Therefore, when civil servants confronted stored-program computers in the 1950s there was a sense of looking into a mirror, but there was no immediate recognition of the reflection, nor should there have been. Indeed the mechanisers, an expert movement based in the Treasury, were keen promoters of all kinds of mechanical methods, including punched-card machines, as their interest in electronic computing as aids to administration began to grow. The punched-card work involved the production of explicit series of instructions, telling the machine operator at each stage what to punch and what to check. It was not a big step from such “programmes” for humans and machines of the earlier type to “programs” for electronic computers. The Treasury mechanisers learned much from the experimental tribulations at the NPL as well as from the more straightforward success at Lyons & Co, the tea shop business which built electronic computers called LEO (Lyons Electronic Office) for business data processing and calculation in the early 1950s. ‘Programs can be prepared’, concluded one LEO manager in a report to the Brunt committee, ‘for many clerical jobs to be carried out by automatic calculators’ (Agar 2003, p. 304). Edward Newman of the ACE team at NPL went further:

it seems possible that in due course computers will do the country’s routine clerical work, most of the work in fact of a deductive character…When used for suitable purposes, and in particular for processes which are essentially serial, some automatic computers are very fast, up to 100,000 times as fast as man. Their potential power is thus very great (Newman 1953).

Indeed, he went on:

it is unlikely that there will ever be any great reduction in the time needed for programming machines, since the organisation of a complex job whether it is done by human clerks, by punched cards, or by high-speed computers is bound to be a long business, and a programme is only a coded form of this organisation.

The grasping of Newman’s point – a programme is only a coded form of organisation – was a pivotal moment of self-awareness: if bureaucracy was the original rule-based, general-purpose machine, then this moment, in the British context, was when the civil servants saw past the unfamiliar technical guise and recognised their own mirror image. An ambitious programme of computerisation followed in the late 1950s and 1960s which continued until technical expertise was outsourced in the 1970s and 1980s.

And then we have the wider world. It used to be said that you were never more than 15 feet away from a rat. It is now true to say that we are never more than a short distance from a universal machine, miniaturised and embedded. There are computers on our desks, in our phones and in our cars. If you follow my argument you can see this as a dispersal and materialisation of a specific social relationship, that between the generalist and the mechanical civil servant. It is an extraordinary decentralisation and multiplication of what was once state power.

Bibliography

Jon Agar, The Government Machine: a Revolutionary History of the Computer, Cambridge, MA: MIT Press, 2003

Stafford H. Northcote and Charles E. Trevelyan, The Northcote-Trevelyan Report, reprinted in Public Administration 32 (1954), pp.1-16, originally signed 23 November 1853 and published as House of Commons Parliamentary Paper 1713 in February 1854. Northcote and Trevelyan also reported to the Treasury in 1853 in a paper ‘The Reorganisation of the Permanent Civil Service’, that was published, along with a letter from Benjamin Jowett and criticisms of the paper, in 1855.

Alan M. Turing, ‘On computable numbers, with an application to the Entscheidungsproblem’, Proceedings of the London Mathematical Society, Series 2, 42 (1937), pp. 230-265.

I’ve been reviewing a very good edited collection of the historian of computing Michael Mahoney’s papers. Mahoney died in 2008, but he left behind a series of papers filled with good advice to historians of science. One of his best tips is that we should pay attention to what he calls the “agenda” of a discipline or specialty if we want to understand it.

Here’s Mahoney’s definition and development of the idea:

“In tracing the emergence of a discipline, it is useful to think in terms of its agenda, that is, what practitioners of the discipline agree ought to be done, a consensus concerning the problems of the field, their order of importance or priority, the means of solving them, and perhaps most importantly, what constitute solutions. Becoming a recognized practitioner of a discipline means learning the agenda and then helping to carry it out. Knowing what questions to ask is the mark of a full-fledged practitioner, as is the capacity to distinguish between trivial and profound problems. Whatever specific meaning may attach to ‘profound’, generally it means moving the agenda forward. One acquires standing in the field by solving the problems with high priority, and especially by doing so in a way that extends or reshapes the agenda, or by posing profound problems. The standing of the field may be measured by its capacity to set its own agenda. New disciplines emerge by acquiring that autonomy. Conflicts within a discipline often come down to disagreements over the agenda: what are the really important problems? Irresolvable conflict may lead to new disciplines in the form of separate agendas.

As the Latin root indicates, agendas are about action: what is to be done?”

(Michael Mahoney, ‘Computer science: the search for a mathematical theory’, originally published in Krige and Pestre (eds), Science in the 20th Century, reprinted in Histories of Computing, Harvard University Press, 2011, p. 130)

I thought it worth abstracting this long quotation for two reasons.

First, this seems to be a really good, concrete proposal for what makes a discipline, and who is considered to be a valid practitioner. That’s a useful historiographical tool for historians of science.

Second, we might like to ask, as historians of science, what is our agenda? What do we agree ought to be done? What are the problems of the field? How should we rank them?

A new social movement in science is gathering, and it is time to give it a name. It’s a mutation of an older tradition of scientific lobbying, but it has new features and deserves some analysis.

What is it?

Let’s describe its components and features. We would include organisations like Science is Vital, which formed to campaign against cuts in science during the present austerity. There are campaigns, such as Simon Singh’s anti-libel wars against the chiropractors.

There is a cultural wing – we are thinking of the spectrum of mutual regard that spans Ben Goldacre, Brian Cox, Wired magazine in the UK, and the comedians Robin Ince and Tim Minchin. These are the geek-tastic “skeptics”, all with an immense following via social and other media which extends now into real world grassroots events such as Skeptics in the Pub.

But the geekocratic tendency is not just about love of the values of science, or protecting the resources and funds for science, or even securing greater respect for science or worrying about public understanding. None of these features is especially new; indeed we can identify many as having long historical roots tied up with the professionalisation, and popularisation, of science. The novelty is partly a stronger political focus (and especially a fetishisation of evidence-based policy-making), presented as an ‘outsider’ view while in fact being articulated by well-connected ‘insiders’.

The key text is Mark Henderson’s Geek Manifesto, published this month. It not only serves as a rallying cry for all these groups but also as an attempt to reappropriate the term ‘geek’. Yet, Mark is no DIY science activist. He is Head of Communications at the Wellcome Trust, one of the leviathans of UK science funding. ‘Geek’, as Steve Cross has pointed out, has changed its meaning quite radically.

This social movement has other features. It has its own heroes (teenagers bravely standing up against anti-vivisectionists) and villains (homoeopaths, creationists, politicians who don’t ‘get’ science). It is self-policing – the criticism of the recent ‘death of British science’ campaign is an interesting example. The embarrassment and derision stemmed from the fact that this is a social movement that is much more politically savvy than some of its grassroots.

Analysis

A social movement needs a name so that it can be tracked, discussed and perhaps supported or criticised. Hauke Riesch has described “science activists”. Proposals kicked around included “grabby geeks”, “science botherers”, the “SciNet” (a la SkyNet of Terminator fame), and the “Geek Establishment”. We like the “Geekocracy”.

So how do we account for this social movement? Is it merely, for example, a manifestation of a network? Certainly social media have provided older science lobby networks with a visibility and an immediacy of communication which is new. Perhaps without social networks the constituency of this social movement would remain local or individual and largely invisible. We in STS@UCL will be watching with interest.

There are a lot of studies on the probability of accidents in a nuclear power plant. As far as I understand, they use methods of risk analysis to calculate the failure probability of the nuclear reactor.
Here I tried a very simple empirical approach: we know the number of nuclear power reactors in the world, and we (probably) know the number of severe accidents up to now, so we can calculate the empirical failure probability of a single reactor per year. Thus we are able to calculate the probability that no reactor in the world, in the UK or in another country, will have an accident within the next 5, 10 or 20 years – or that at least one reactor will fail severely. This can be done by using the Poisson distribution.
Up to now there have been at least 4 reactor accidents at INES scale 5 or more. Chernobyl (1986) is the only one on level 7; Three Mile Island (1979) and Windscale (1957) are on level 5. The present Fukushima accident (or accidents?) is also level 5, at least at the moment (27.03.2011). There are some more accidents on level 5 and only one on level 6, but these were in other nuclear facilities, not in power reactors. One could argue that Windscale was not a civil but a military reactor, but then at Fukushima there is probably more than one reactor involved. So the number of 4 severe accidents seems quite reasonable.
The number of nuclear reactors worldwide increased drastically from 1955 until 1988, since when the number has been nearly constant. Up to the Fukushima accident there were 443 reactors operating worldwide.
By a simple graphical piecewise interpolation of the number of reactors per year, a total of about 15,000 reactor-years can be estimated. This crude number should be sufficient for the present purpose.
So the probability of one severe accident per reactor-year (ry) is
q = 4 acc / 15,000 ry
If there are N reactors in operation, the Poisson distribution gives the probability of x severe accidents within the next y years. In order to apply the Poisson distribution, the expected mean number of accidents m within this time has to be estimated:
m = q * N * y
Then the probability of x accidents, when we expect a mean value of m accidents, is given by
p(x) = (m^x / x!) * exp(-m)
Thus the probability of no accident (x = 0) is
p(0) = exp(-m)
and the probability of at least one accident is
p(x≥1) = 1 - exp(-m)
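These formulas translate directly into a short Python sketch (the function names are mine, not from the original analysis):

```python
import math

def poisson_pmf(x, m):
    """p(x) = (m^x / x!) * exp(-m): probability of exactly x accidents
    when m accidents are expected on average."""
    return (m ** x / math.factorial(x)) * math.exp(-m)

def p_at_least_one(m):
    """p(x >= 1) = 1 - p(0) = 1 - exp(-m)."""
    return 1.0 - math.exp(-m)

# Worldwide example: q = 4 accidents / 15,000 reactor-years,
# N = 443 reactors, y = 20 years
q = 4 / 15_000
m = q * 443 * 20
print(m, p_at_least_one(m))
```

The probabilities p(0), p(1), p(2), … sum to 1, which is a quick sanity check on the formula.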
Regarding the worldwide situation for the next 20 years, with 443 reactors in operation, we expect an average number of severe accidents
m = (4 acc / 15,000 ry) * 443 r * 20 y ≈ 2.34 acc
so 2.34 accidents within any period of 20 years somewhere in the world. The probability of one or more severe accidents worldwide is
p(x≥1) = 1 - exp(-2.34) = 90.36%
How is the situation for a single country? We simply have to count the number of reactors within this country and calculate the respective reactor-years.

               World      UK      US       D
reactors         443      19     104      17
reactor-years   8860     380    2080     340
mean # acc      2.34   0.100   0.549   0.089
p(x≥1)        90.36%   9.55%  42.26%   8.59%
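As a cross-check, the whole table can be recomputed in a few lines of Python (a sketch; the variable names are mine, and recomputing directly from q = 4/15,000 gives values a fraction of a percentage point away from the rounded figures above):

```python
import math

# Empirical failure rate: 4 severe accidents in ~15,000 reactor-years
q = 4 / 15_000

# Reactor counts per country and a 20-year horizon
reactors = {"World": 443, "UK": 19, "US": 104, "Germany": 17}
years = 20

results = {}
for name, n in reactors.items():
    m = q * n * years        # expected number of severe accidents
    p = 1 - math.exp(-m)     # Poisson probability of at least one
    results[name] = (m, p)
    print(f"{name:8s} reactor-years = {n * years:5d}  m = {m:.3f}  p(x>=1) = {p:.2%}")
```

Because p(x≥1) = 1 - exp(-m) is monotonic in m, the country ranking follows the reactor counts directly.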
On average more than 2 accidents are expected worldwide; the probability of at least one accident is 90% worldwide, more than 9% for the UK and more than 40% for the US.
Do you think these estimations are reasonable? Do you think a 9% probability for a Chernobyl or Fukushima accident in the UK within the next 20 years is acceptable?