
Dedication
For Carlo
2 The Odds for Tomorrow
The future is already here – for those who can detect it
Fifteen years ago, the world was in thrall to the upcoming millennium. Celebrations, big and small, were planned and held. The media were full of prospects and predictions of what humanity could reasonably expect from a mere change of digits in a time reckoning that had become global. There was no lack of good intentions, private and public. On the political side, they culminated in the solemn declaration of the Millennium Goals, echoing the end-of-year rituals that many of us undergo with apparent glee. We articulate promises to ourselves of what to do better in the coming year, while secretly admitting that the chances of realizing them are small. Nor was there any lack of positive prospects, foremost the peace dividend widely expected to channel funds towards developing countries instead of burning money to sustain the Cold War that had come to an end. Scares and apocalyptic warnings were not lacking either, but they seemed comparatively muted, except that for a short period the world was held in suspense by a looming electronic blackout to be caused by the millennium bug.
With hindsight, the millennial swerve was much ado about nothing. A page in the calendar was turned, the millennium arrived and quietly slipped into what were dubbed the zero-years. They turned out to be rich in surprises. The war in Iraq, which followed Saddam Hussein's earlier invasion of Kuwait, brought a string of entangled and serious geopolitical consequences seemingly without end. The near meltdown of the financial system, when the housing bubble burst in the United States with a host of unforeseen and widening consequences, took the world by surprise. And the Big Bang event of 9/11 continues to overshadow subsequent geopolitical developments ominously.
Meanwhile, the world continues to experience its share of epidemics and pandemics, topped by new viruses as well as viruses that return because they have not been sufficiently contained. In a globally connected world, their potential spread is assured. And despite the overwhelming scientific evidence and numerous alarming warning signals, the debate on the seriousness of the threats posed by climate change continues without leading to the decisive action that is called for.
Change or, to use a more fashionable word, transformation is the normal state of natural and human affairs, exacerbated by the increasingly intricate interaction between humans and their environment. Surprises are always to be expected. Black swans do exist, although that may surprise some people. Randomness can fool anyone. This is unwelcome news for those who highly value stability and the control it heralds, even if such control is often invoked only to reassert one's own claims. In retrospect, the presumed stability often reveals itself as the exception. Much of what appears to be stability turns out to be a mere illusion. Being in complete control is nothing but wishful thinking.
Surprises and unexpected events drive home the fact that societal developments and history are evolutionary processes that cannot be fully foreseen. What can be seen as lying ahead is partial and biased, inevitably framed through the present. Yet the inveterate search for certainty needs a temporal framing that pushes politicians, planners, analysts, intellectuals and the public to have a stab at foresight and forecasts. These are intended to give guidance and orientation by marking the major trends that are expected. Inevitably, trends are built on extrapolations of the past, deviating here and there, marked by greater or lesser swings. Every forecast is built with the awareness that it might turn out to be wrong. There are many reasons why this may happen, but one stands out: has sufficient attention been given to surprises, and to whether those many surprises could have been anticipated?
In January 1986, I participated in a workshop in Sweden devoted to surprises in the long-term patterns of world development. Somewhat paradoxically, it turned out that the surprises, or rather what we, the participants, invented to fuel the narratives we built as retrospective histories seen from the year 2075, were rather limited in the breadth and depth of their imaginative content. The idea for the workshop came from Torsten Hägerstrand, a highly original social geographer with a keen interest in how societies could be better prepared to cope with the inevitable surprises that lie ahead. The future histories, full of surprises invented by us, were mainly focused on environmental concerns, which we saw as inevitably intertwined with major political and social shifts in power, new value systems and cultural norms, and traumatic political events. The workshop's way of working was unusual. Twenty-two participants from different disciplines and countries would work in parallel in three groups. The composition of each group, however, would change in accordance with the task given to it. Yet each group had to keep a structural constant throughout the exercise: it had to include one of the three organizers, at least one woman and one historian. The overall task was to come up with different scenarios for the future trajectories of how societies would deal with scarce environmental resources, growing energy demands, water shortages, demographic changes and the like. To keep the flight of our otherwise welcome imagination closer to reality, we worked with data points and conventional-wisdom figures from the biosphere project carried out at IIASA, the International Institute for Applied Systems Analysis in Laxenburg near Vienna, one of the co-organizers of the workshop.
For me, two remarkable results emerged out of this experience. One was to experience first-hand the dynamics of small groups working in relative isolation. The location of our experiment was remote: situated by a beautiful Swedish lake, we had all the comforts of a well-run conference centre, but there was not much else. These were the days before ubiquitous internet connections. If one were to repeat the exercise today, participants would have to agree to a week-long abstention from their smartphones and from any use of the internet for purposes other than those of the workshop. We had agreed not to speak with the other participants about the ideas of our own group during mealtimes or in the sauna. Within a very short time, this generated a climate of competitiveness and group-think. At the end of the week, we were convinced that the scenarios each group had invented had a high probability of actually occurring. We became convinced that we had somehow uncovered what could happen in the real world. Filled with cognitive innocence, our three little groups, discussing intensely and in relative isolation in a beautiful natural setting, unwittingly confirmed the insights of Ludwik Fleck from 1935 about thought styles and thought collectives in science (Fleck 1935 [1979]). Fleck's work was rediscovered almost thirty years later when Thomas Kuhn acknowledged it as a precursor of his own ideas in his famous book about paradigm change in science, The Structure of Scientific Revolutions (Kuhn 1962). The wish to believe our own narrative, underpinned by plausible assumptions and based on the comforting evidence derived from ‘real data’, was overwhelming. On the final day, when we returned to our normal scientific life, we were forced to recognize how fragile the trustworthiness of the surprising futures we had invented really was.
The second result was equally sobering. Surprises turned out to oscillate between two opposites: they were either too conventional or too far-fetched. Their timing posed special problems. Quite often, we introduced them into our narratives when we could not find another plausible cause−effect relationship to serve as explanation. Internal contradictions and inconsistencies remained. As in baroque theatre, surprises often functioned as the deus ex machina. They were summoned from nowhere in order to give events a decisive turn or to cover the gaps in our understanding. The surprises we could imagine in the context in which we worked were of a (surprisingly) limited variety. In real life, variations arise from the many possible interactions. This is the interface where the human imagination meets reality. We may be reasonably good at imagining and thus anticipating surprises, with the added psychological benefit that comes from being able to say ‘See, I told you so’ when some of them actually occur. As the human brain feels challenged by real or apparent randomness, it comes up with patterns and explanations that somehow make sense. But imagination fails when surprises are not due to a single event and when the deus ex machina has to cede the stage to emergent situations that are the result of complex interactions.
Science continues to offer insights into complex adaptive systems. Nature may advance by jumps while humans stumble. The existence of tipping points and of transition phases with critical moments has become more firmly grounded in the understanding of how complex systems work. Open systems behave differently from closed systems. Most biological systems are open, non-linear and self-organized, which makes them remarkably adaptive and robust. Self-organization is characterized by adaptive behaviour through interaction. Healthy heartbeats are adaptive, as they interact with breathing and other components of the autonomic nervous system. The importance and frequency of co-evolutionary processes are increasingly recognized, and progress has been made in the identification of emergent phenomena. The complexity sciences mark an opening for grasping the future anew.
In this context, it is important to realize the crucial role played by the term ‘prediction’ in physics and the considerable historical change it has undergone. As Giorgio Parisi has shown, as a result of three revolutionary paradigm changes in physics, the notion of prediction acquired a weaker meaning. In return, the scope of physics and its applications was vastly expanded. In the days of classical mechanics, to predict was to determine the unique and reproducible result for the position and velocity of a particle according to Newton's Second Law. At the beginning of the nineteenth century, Laplace contended that an infinitely intelligent mathematician could predict the future of the universe within the classical, deterministic framework. The Newton−Laplace view turned out not so much to be wrong as useless, as it can be applied only to a restricted class of phenomena. Then came Maxwell, Boltzmann and Gibbs and their work on statistical mechanics. Prediction became restricted to the probability distribution of the system at large timescales. In quantum mechanics, the results of a single experiment are not reproducible even in principle. The most recent paradigm change came with the advent of complex systems and complexity science. The question is no longer how a particular system behaves, but what the probability distribution of its properties looks like across systems of the same class. Predictions in complex systems are about the general features of a system belonging to a given class. Prediction itself has become probabilistic (Parisi 2013).
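Schematically, and in notation of my own choosing rather than Parisi's, the three regimes can be summarized as a deterministic trajectory, a distribution over states, and a probabilistic statement about a whole class of systems:

```latex
% Classical mechanics: initial conditions fix a unique trajectory
m\,\ddot{x}(t) = F\big(x(t)\big)
\;\Longrightarrow\;
x(t),\ \dot{x}(t)\ \text{determined for all } t .
% Statistical mechanics: only a distribution over states is predicted,
% e.g. the Boltzmann weight of a state with energy E at temperature T
P(E) \;\propto\; e^{-E/k_{B}T} .
% Complexity science: probabilistic statements about a class of systems
\Pr\big(\text{property } A \,\big|\, \text{system} \in \mathcal{C}\big) .
```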
To be of wider usefulness for society, numbers and probabilities have to be accompanied by words. They must be clad in a story in order to be told. This was already the case for the prophets of former times. Many prophecies revolved around the beginning and the likely end of the world and therefore needed to be buttressed by calculations. The problem with prophecies was that they were built on only one scenario. Today's forecasters and analysts also regard the world as numerically decipherable and predictable, but they have learned that it is advisable to work with several scenarios, not just one. While prophets and their later interpreters relied on sacred texts and the Scriptures, which they believed to be built on a fixed divine plan, the sources for contemporary analysts lie in the enormous amount of data they can work on with unprecedented computational power. Welcome to the age of big data. Despite their name (from the Latin for ‘what is given’), data are no longer given, but made.
How far into the future do data allow us to see? Which kinds of data are they, and what appears on their radar? To what extent is the outcome a prediction about the future, or is it a representation of how the present is framed? Is it nothing but a useful prosthesis for the imagination? Peter Schwartz is one of those analysts who provide scenario planning and forecasts through their research-based consulting firms to government agencies and large corporations. The core building blocks for his and his colleagues' business are what he calls ‘predetermined elements’. Despite this somewhat inappropriate term, such elements can be anticipated as near-certain, since they can already be detected in their early stages. These are the futures that are not here as yet, but are somehow imminent. They are adumbrated, yet inevitable, as they already exist. Surprise sets in only because the timing and the results, and especially the consequences, of events cannot be foreseen. In such a perspective, only surprises are not predetermined. What can be anticipated is the range of possible results and the ways in which the rules of the game may change afterwards (Schwartz 2003).
According to William Gibson, another futurist, the future is already here. It's just not evenly distributed. If the introduction of any new technology or technological product can be said to embody ‘the future’, as this kind of self-limiting forecasting suggests, such a technological future needs to be carefully planted among selected groups of early adopters. They serve as testers. They give precious feedback on the flaws and potential improvements that emerge from actual use. In this sense, a technological future that is already here turns out to be nothing but a marketing segment. It serves as the launch pad for a wider diffusion of the latest product, with the aim of conquering a larger market share. Its uneven distribution serves to create future markets for the next wave of technological products. The future that is said to be already here is nothing but the potential of a future market.
Among those who make predictions on a professional and regular basis are economists. They display a curious ambivalence when insisting on the credibility of the outcomes of their own work. The vast majority of economists failed to foresee the 2008 financial crisis. Their response is in line with the humorous remark by Paul Samuelson, who did not have much faith in stock-market predictions either, when he alleged that Wall Street had predicted nine out of the last five recessions. Among the economists who recently contributed to a forecast for the next hundred years, many refused to take responsibility for their predictions. Jokingly, they referred to the undeniable fact that they will no longer be around when the time for testing their predictions arrives (Palacios-Huerta 2014). This professional caveat emptor is reiterated by big players like the OECD and the IMF. Both organizations felt obliged to engage in a post-mortem analysis of why their forecasts during the financial crisis for the peripheral countries of the Eurozone between 2007 and 2012 were excessively optimistic and blatantly wrong. While both admitted errors, they differed on the reasons why they had been mistaken. Apparently, the relevant ‘predetermined elements’ have not become much easier to detect; nor, even in retrospect, do they lend themselves to a shared interpretation.
As the participants of Torsten Hägerstrand's workshop in Sweden discovered, there are surprises that are easy to anticipate and surprises that result from complex interactions, which doggedly escaped our admittedly naive efforts to capture them with the means we had at our disposal at the time. Since then, hope has become pinned on big data. They are expected to push the time window of the present wide open and permit glimpses into what was hidden from view before. The process of ‘datafication’, i.e. not only harvesting data that are already out there but actively producing new data, is in full swing. One of the latest examples is the development and actual use of visual apparatuses that follow eye movements, and the ever-more sophisticated scanning and recording of the spatial environment. The increasing number of electronic footprints produced through mobile telephone calls and their numerous apps, or economic behaviour traceable through credit cards, by now constitutes merely the more traditional sources of a rapidly growing data-collecting pool. In this context, it is revealing that one of the slogans devoted to big data at the World Economic Forum meeting in Davos in 2013 was ‘data tell you what the world is like, not what you thought it should be’.
Yet data are not just out there, nor can what the world is like be captured so easily. Data are not simply given. They are a varying mixture of facts, of traces left by events and processes, as well as artefacts of different measurements, collected not so much for a specific scientific purpose as through more or less haphazard procedures. Most data are only proxies for events in the world. They stand in for the behaviour or traits of individuals, or for types of interaction between them and their environment. The proxy function is valuable in a double sense. Direct access to what data stand for is often not possible; nor is it necessary. The proxy function is also polyvalent, meaning that data can be used for purposes other than the often trivial original purpose for which they were collected. This permits the reuse of data, which has become the gateway to turning big data into big business. As Amazon discovered to its benefit, it is much easier to generate an enormous increase in sales by extrapolating from data than by recourse to experts. The last book a customer bought provides more and better information on what book the same customer will buy next than asking experts to make recommendations. The battle between what the clicks said and what the critics said was decided when Amazon realized that knowing why might be pleasant, but does not matter for stimulating sales. What matters is knowing what drives the clicks (Mayer-Schönberger and Cukier 2013).
From asking why to asking what
What binds data together is actually a set of weak ties: correlations. The basic sociological insight about the strength of weak ties was articulated forty years ago (Granovetter 1973). Forget about looking for cause−effect links, Viktor Mayer-Schönberger and Kenneth Cukier tell us more recently. Correlations are useful in a small-data world, but it is in the context of big data that their value really begins to show. They enable us to glean information faster, more easily and more clearly than before. In the simple case of wanting to sell more products such as books or music, shampoos or cars, data from multiple sources that show correlations in the preferences and buying habits of customers are sufficient for predicting likely preferences in the future. The technique of extracting predictions from past performance or preferences is called predictive analytics. It can be used not only for expanding business but also for preventing mechanical or structural failure. Sensors placed on machinery or materials monitor the data patterns they produce and can detect problems ahead of time, thus greatly facilitating preventative maintenance (Mayer-Schönberger and Cukier 2013).
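A minimal sketch of how such correlation-driven prediction works, with purchase histories and item names invented for illustration; real systems use vastly more data and more refined statistics:

```python
from collections import Counter, defaultdict

# Toy purchase histories; all item names are invented.
histories = [
    ["book_a", "book_b", "book_c"],
    ["book_a", "book_b"],
    ["book_b", "book_c"],
    ["book_a", "book_c", "book_d"],
]

# Count how often pairs of items appear together in one history.
co_occurrence = defaultdict(Counter)
for basket in histories:
    for item in basket:
        for other in basket:
            if other != item:
                co_occurrence[item][other] += 1

def recommend(last_purchase, k=2):
    """Suggest the k items most often bought together with the last purchase."""
    return [item for item, _ in co_occurrence[last_purchase].most_common(k)]

print(recommend("book_a"))  # e.g. ['book_b', 'book_c']
```

Nothing in this sketch asks why buyers of one book also like another; co-occurrence alone carries the prediction.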
There is one more reason for switching from asking why to asking what, a switch that pays off handsomely for those who know how to turn it into profit. Usually, the strong ties are those between cause and effect. They continue to play an important role in physics and in our daily world of small data. To stay with the comparatively simple effectiveness of technology: cause−effect links tell us that an aeroplane will carry us with high reliability from the point of departure to the airport of destination. The cause−effect links become more tenuous, however, once we move up to the next level of complexity, where we encounter systemic complexity. The system in this case is the air transportation system. Dealing with it requires coping with its bewildering pricing system, with the discomfort caused by security procedures, and with decisions about mergers between airlines or the buying and selling of landing rights. The air transport network is itself embedded in an even larger socio-technological system operating at the global level, one that is infinitely less predictable and obviously much more complex than an aeroplane (Allenby and Sarewitz 2011).
Once we move to higher levels of complexity, interactions also become more complex. More sophisticated algorithms are needed, and computational simulation takes over. But regardless of whether one deals with correlations or cause−effect links, big data or small data, prediction will take us only so far. In the end, it defies certainty. Data that have been generated and collected in the past indicate probable future behaviour or performance. This is reasonable, but it requires some constancy of the underlying structures, be they physical, social, cultural, psychological or political. The assumption that the framing conditions will not change must hold, and this, too, is governed by probabilities. Deviations in individual behaviour are tolerated. They will be compensated for over time, and probability theory can tell to what extent. In the example of detecting material fatigue, failures of detection will inevitably occur, even if the monitoring of the sensors has been carried out according to protocol. Yet the probabilities with which breakdowns of routine operations are to be expected can be calculated.
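A toy calculation of the kind alluded to here, with an assumed (and purely illustrative) per-inspection miss rate: if each inspection independently misses an incipient fault with probability p, the chance that the fault survives n inspections undetected is p to the power n.

```python
# Assumed per-inspection miss rate; the figure is invented.
p_miss = 0.05

# Probability that an incipient fault survives n independent
# inspection cycles without being detected: p_miss ** n.
for n in (1, 3, 10):
    print(f"after {n:2d} inspections: P(undetected) = {p_miss ** n:.2e}")
```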
Big data can tell us only probabilities, and predictions have a limited temporal range. The shakier the ground on which the assumptions and framing conditions rest, the more constrained the outcome. Some domains are more prone to this structural brittleness than others. When important decisions are taken on the basis of processes that are inherently unpredictable, as in politics and history, predictability inevitably decreases. When minor changes in the initial conditions or in the specification of the problem have large or discontinuous effects, predictability also shrinks. Short-term weather forecasting has reached an impressive reliability, thanks to the capacity of supercomputers to process enormous data-sets. But accuracy declines rapidly when looking further ahead. This is the divide that separates weather from climate. Weather prediction rests on a comparatively robust data-set in which correlations are sufficient to provide reliable outcomes. In contrast, climate is a highly complex system or, rather, a set of highly complex systems, coupled by non-linear dynamics through which small changes in the initial conditions cumulate into larger and larger effects. As recent experience has underlined, the global financial system, and presumably much of global business, is subject to the same unpredictability.
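The mechanism behind that divide can be seen in miniature in the logistic map, a standard textbook example of non-linear dynamics (the parameter and starting values below are chosen only for illustration): two trajectories that begin a billionth apart soon differ by order one, which is why accuracy collapses beyond a short horizon.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# Two trajectories starting 1e-9 apart diverge to order-one
# differences within a few dozen steps.
r = 4.0
x, y = 0.400000000, 0.400000001
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```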
If predictions for the non-imminent future are so difficult to make, can we at least get a better grasp of the past? Not if we keep asking the question why, which will remain unanswerable for human history. Nor does asking why make any sense for evolution once it has been realized that evolution is not about teleology. The switch to asking what and, even more challenging, how opens more promising avenues. Efforts to use big data to delve into the human past have progressed, in line with the stepping up of new data production – the ‘datafication’ of (almost) everything that Google can include in its computing power.
George Dyson has worked on the historical origins of what he calls the digital universe, retracing Alan Turing's model of universal computation and its consequences. In the years before and during the Second World War, the line of research opened by Turing was operationalized in its digital materiality at the Institute for Advanced Study in Princeton by John von Neumann. The result was a two-dimensional matrix, the first fully random-access storage, which underlies all computers in use today. It led to the propagation of codes, genetic ones as well as those of computer algorithms. Dyson recalls that in October 2005 he was invited to visit Google's headquarters in California. It felt as though he were entering a fourteenth-century cathedral while it was being built. The organization had been executing precisely the strategy that Turing had in mind: gathering all available answers, inviting all possible questions and mapping the results. For the atheist Turing, computers had become ‘mansions for the souls He creates’ (Dyson 2012). It is not known what Google's views on such matters were, but following Turing's strategy paid off handsomely.
At the time of Dyson's visit, Google had just begun its project of scanning millions of books from all over the world. The objections raised by book lovers were in line with Turing's vision of computers being more than just machines. They feared that the books might somehow lose their souls. Like DNA, books with their strings of code have some mysterious properties. The analogy goes further: ‘The author captures a fragment of the universe, unravels it into a one-dimensional sequence, squeezes it through a keyhole, and hopes that a three-dimensional vision emerges in the reader's mind’ (Dyson 2012: 312). Books take on a life of their own in their readers' minds. Are we scanning the books and leaving behind the soul, or are we scanning the souls and leaving behind the books? Dyson asked himself after this memorable visit.
An engineer revealed to Dyson what Google's strategy was all about. The aim was not to scan books so that they could be read by people. Books are scanned to be read by a digital computer, an AI, standing for Artificial Intelligence. As with all other big data, the AI also reads everything else. What it reads does not imply understanding. The what becomes a what for, its utility derived from multiple future uses. Content matters in so far as it can be put into a form in which it can relate to other content. And, as Turing and von Neumann foresaw, this AI would be successful at making improvements to itself. This marked the beginning of the digital code. Dyson is convinced that the power of the digital code is similar to that of the genetic code. Their respective power lies in their ambiguity: the transcription has to be exact, but the expression is redundant (Dyson 2012: 310).
A cultural telescope: Newton and culturomics
Google's self-appointed mission to organize the world's information has since progressed at a rapid pace. Nine years after the public announcement of the book-scanning project, more than 30 million books have been digitized. Their number now surpasses the book collection of any of the world's major libraries. The fascination with books continues, but it has taken a different form, in line with the what question of big data. Most of the authors of these millions of books are long dead. The content of their work stretches back for centuries. Yet, once these works are transformed into big data, new vistas of the history, language and culture of the human past open up. Big data can also mean long data. Thus a project was started with the data collected by Google's book project. Erez Aiden and Jean-Baptiste Michel wanted to use the data-set contained in these millions of books to explore history through a digital lens (Aiden and Michel 2013).
As with other big data, their data-set stems from many different sources and stretches over centuries, countries and cultures. The authors admit that it is riddled with errors and marred by omissions. How could one ever hope to capture changes in culture in such messy terrain? Moreover, what could one find that had not already been written and rewritten by historians, anthropologists, sociologists, geographers, musicologists, linguists, epidemiologists and others, just by re-examining what dead authors had written throughout the ages? A simple question − why do we say drove and not drived? − held the key. By considerably narrowing their scope, the pioneers of what became known as culturomics turned to language as contained in the digitized texts. And instead of delving into the subtleties of the meanings their authors had wanted to convey, Aiden and Michel settled on the numerical frequency of words as they appeared in the texts.
For some time, Zipf's law has been known to describe an astonishing range of natural and social phenomena whose frequencies of occurrence obey power laws (Zipf 1949). The inventors of culturomics embarked on charting the uncharted evolution of words through the course of history. Technically, they relied on the statistical analysis of n-grams: a 1-gram is an uninterrupted string of characters, typically a word, and an n-gram is a sequence of n such strings. Developed in cooperation with Google, the N-Gram Viewer is now widely available in its version 3.0 for books in English and in seven other languages.
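A minimal sketch of both operations, counting n-grams and eyeballing Zipf's law, assuming a local plain-text file corpus.txt (the filename is mine):

```python
from collections import Counter

# Assumed local file; any plain-text corpus will do.
tokens = open("corpus.txt").read().lower().split()

def ngrams(tokens, n):
    """Yield n-grams: sequences of n consecutive tokens."""
    return zip(*(tokens[i:] for i in range(n)))

counts = Counter(ngrams(tokens, 1))

# Zipf's law predicts frequency ~ 1/rank, so rank * frequency should
# stay roughly of the same order near the top of the list.
for rank, (gram, freq) in enumerate(counts.most_common(10), start=1):
    print(rank, " ".join(gram), freq, rank * freq)
```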
But what is a word? Is it an idea? A concept? Or is it merely what is recorded as being in use, for example in lexicons? Aiden and Michel decided on lexicons. It turned out that 52 per cent of the English language is lexical dark matter: it makes up the major part of the language actually used, but does not show up in standard references. In order to chart the uncharted, they had to make the words they selected tell a story. This is achieved by plotting frequencies in startling but simple graphs. History becomes flattened on the x-axis of annual chronology, while the frequencies displayed on the y-axis represent the selected words in their graphic dance, moving upwards and downwards. Only 6 per cent of all books are currently covered by the upgraded Google N-Gram Viewer. This proportion will change rapidly once it includes the e-book versions of the digital present. Further uncharted territory is waiting to be charted: manuscripts and letters, newspapers and unpublished material of various kinds.
With casual disdain for modesty, the authors of culturomics compare their feat of setting up a ‘cultural telescope’ to that of Galileo, ‘who kicked the earth out of its perch at the center of the universe with a telescope that was only thirty times better than the naked eye’ (Aiden and Michel 2013: 188). Comparing culturomics to Galileo's telescope sounds grand, but historical facts tell a somewhat different story. It is Isaac Newton's work that must be regarded as the predecessor of culturomics. Newton's writings on biblical subjects and his thorough immersion in ancient prophecies had been known for some time outside specialist circles in the history of science. Newton devoted considerable time and energy to technical chronology, supplementing the historical studies he had been working on for decades. In a recent remarkable feat of scholarship, historians of science Jed Buchwald and Mordechai Feingold reconstruct in detail how Newton extracted materials from the Scriptures and the classical sources, as well as from elaborate astronomical and genealogical computations of his own.
Like other early-modern natural philosophers of his time, Newton shared a passion for observing, experimenting and calculating in order to solve problems in the natural world and in mathematics. But he was also driven by the firm conviction that the cosmos reveals the presence of an active deity. Newton, one of the founders of modern physics and mathematics who probed nature's secrets, Buchwald and Feingold observe, was the same man who hunted the secrets of prophecy and of divinely guided human destiny. His modes of thought and practice were identical, whether his efforts were directed towards human history or towards unravelling the intricacies of mechanical nature (Buchwald and Feingold 2013).
In his Chronology, he put the method he had devised as an undergraduate in Cambridge for the study of optics, mechanics and mathematics to good use. He turned texts into numbers and tinkered with the data to fit his scheme. The motivation for doing so derived from his concerns about the origin of civilization. Already in the unpublished Origins of Monarchies, the data he so laboriously compiled were used to lay out the social laws he saw at work beneath the numbers. He remained sceptical towards singular pieces of evidence, as he was convinced that the deity followed law-like structures even in human affairs. Questioning the recorded time needed for the earth's population to recover from the deluge led him to shorten the chronology of ancient civilizations accordingly.
Newton's work with the big data available to him at the time was characterized by his deep-seated scepticism about the reliability of the senses, based on his calculations and his own experiences in the laboratory. The specific method he had developed to overcome the limitations of the senses consisted in increasing the number of measurements without discarding any. He forged a reliable result by taking an average among them. He was convinced that a good number could be produced by combining a multitude of bad ones. Towards the late 1670s, he turned his almost obsessive attention to prophecies, deploying a series of rules for interpretation. Newton's relentless mining of the Scriptures was based on the conviction that the prophecies, when properly analysed, had verisimilitude. He emphasized that his understanding and elucidation of prophecies was not the same as a mathematical demonstration. Nevertheless, it was ‘natural’ and ‘grounded’.
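In modern terms, Newton's intuition corresponds to a standard result, assuming what he could not yet formalize: independent errors of equal spread σ. The average of n measurements then carries an uncertainty that shrinks with the square root of n,

```latex
\bar{x} \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i ,
\qquad
\sigma_{\bar{x}} \;=\; \frac{\sigma}{\sqrt{n}} ,
```

so a multitude of ‘bad’ numbers can indeed be combined into one good one, improving with every added measurement.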
Relying on the conceptual structure and linguistic tools furnished by the slightly earlier scholar Joseph Mede, Newton's scheme allowed the sequential arrangement of visions found in the Scriptures to be replaced by a synchrony between images and symbols, irrespective of their order in the sacred texts. Textual data were synchronized with rules and laws and underpinned with numbers. Thus, the visions and symbols of the biblical prophecies were shaped into a coherent structure. The test came when this structure was imposed on past events – world history since the time of Daniel. This implied constraints for the fulfilment of the prophecies. The visions of Daniel had set a fixed point for the origins of all future history, furnishing a small set of numbers, interpreted as years, for the major turning points in the rise and demise of empires.
When Newton identified key historical dates as denoting specific prophecies, he studiously avoided speculating about their fulfilment. He was wary of playing the prophet. In this, he differed from his contemporaries. Only twice did he appear to commit himself to a few possible completion dates, each time accompanied by a disclaimer. He wanted to show the futility of such calculations and admonish those who rushed to announce the coming of the millennium. Thus, while he believed in the millennium, he carefully refrained from end-time calculations.
Newton's foray into the culturomics of his time was met with belligerent reactions on the part of humanists, both in England and in France. His claim that structures which were not tightly integrated with data and which did not entail specific consequences did not qualify as knowledge presented the ‘rare opportunity to chronologists and antiquarians to engage the greatest man on earth in intellectual combat. The weaknesses of the Newtonian ramparts were soon revealed and the triumph was the more for the little men who had challenged the giant of the age’ (Buchwald and Feingold 2013: 412). Thus, despite his reliance on numbers in his astronomical chronology and its seeming concordance with exact and experimental science, Newton remained within the constraints imposed by biblical chronology (Buchwald and Feingold 2013: 306). He had convinced himself that neither Egyptian nor Greek civilization could have existed much before the time of Solomon's reign. Ironically, his heroic failure in preserving the words of the Bible and of classical pagan texts contributed to the eventual rejection of the hold of the Scriptures over chronology, a rejection initiated long before by the great humanist Joseph Scaliger (Buchwald and Feingold 2013: 435; Grafton 1983−1993). In the end, numbers and data had been enslaved by belief in words and firm assumptions.
There is a lesson here for all those working with big data and with long data. It is definitely not about the alleged cultural divide between the humanities and the number-affine natural sciences. Scientific scepticism remains at the basis of each and every scientific claim. Overturning a paradigm or a cherished scientific belief has a greater chance of success when the challenge originates from within the respective scientific community, even if developments in other fields of knowledge may support it. Relegating chronology as derived from the Scriptures and other ancient texts to an irrelevant corner of scholarly concern resulted from advances in the interpretation of these texts, and not from the seemingly evidence-based reordering of numbers, even by the great Newton. If culturomics or the most recent developments in the digital humanities are to flourish in the future, rather than just furnish us with some quaint historical details and n-gram storytelling, cross-disciplinary research needs to become more widespread and accepted, embracing words and numbers alike. Pursued in a way that remains acutely sensitive to the context in which data are being produced and to their continuous need of interpretation, a renewed convergence can indeed be initiated by the wealth of data now at our disposal. Big data may then become ‘deep data’.
Digital analysis is capable of making things and connections visible that would otherwise remain unseen. It can uncover deeper, hidden structures. This holds for literature too. In his Stanford Literary Lab, which is officially dedicated to analysing literature with software, Franco Moretti has introduced a way of reading that is far from our familiar habits. Moretti calls it distant reading. He began to study the invisible objects that can be found in books − objects that nobody had seen before. They exist on a different scale from that typically experienced in reading. They are hidden in the relative frequencies of certain words, in their distribution and in their grammatical forms. The study of these objects is quite a different experience from reading a book. The lived experience of literature creates an imaginary world in the reader's mind which acquaints her with the fictional characters invented by the author. The reader is invited to empathize with these characters and the world they inhabit. In contrast, distant reading does not transform the text into understanding, nor does it evoke any of the emotions intended by the author. Instead of listening to what a single text has to say, digital analysis asks questions of large corpora of texts selected by the analyst. The epistemology of close reading, of trying to understand the meaning, is transformed into a mode of questioning that aims to uncover the deeper levels of linguistic structure. These are what make meaning possible (Moretti 2013).
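A minimal sketch of the kind of object distant reading looks at, the relative frequencies of common function words across several texts; the filenames below are invented stand-ins for any plain-text corpus:

```python
from collections import Counter

corpus = ["novel_a.txt", "novel_b.txt"]    # hypothetical filenames
markers = ("the", "of", "and", "a", "in")  # frequent function words

for name in corpus:
    tokens = open(name).read().lower().split()
    counts = Counter(tokens)
    # Relative frequencies form a stylistic 'fingerprint' of each text,
    # an object no close reader ever sees directly.
    profile = {w: round(counts[w] / len(tokens), 4) for w in markers}
    print(name, profile)
```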
In line with other predictive analytics, digital analysis, including that of cultural products, can do more than uncover hidden structures. It can make predictions about the future success of cultural products, based on the criteria that determined past success or the success of comparable products. This is also more profitable than evoking what dead authors can tell us. The predictive criteria may be built from the opening sentences of books, or from the analysis of what characterizes leading authors and the music in films. Data extracted from past successes across various kinds of activities, and their precise combination, can be analysed in correlation with the sales they achieved. The profit-making machinery behind big data continues to shift towards data-driven decisions. New recording technologies, like wearable electronics or Google Glass (even if shelved for the time being), merely extend the inherent potential of big data as a huge reservoir. The fascinating glimpses into digital history pale in comparison with ‘life logging’, the recording of each and every one of our movements, moods and desires. Although it is too early to tell, the mansion of the soul that Turing believed he had built may also turn out to become the soul's prison.
From prediction to performativity: the return of the unexpected
The predictive streak that links past performance with likely future performance leads to an unexpected revival of some old philosophical questions. They spring from the irrepressible wish to sometimes say ‘no’ to what the data suggest will happen next. The refusal to follow a path-dependency generated by one's own track record is likely to grow once life logging is perceived not only to widen but also to narrow one's choices. Why should I always do what the data, even if they are my data, tell me to do? Once non-compliance or even rebelliousness sets in, courting the unexpected becomes attractive. It may induce engagement in behaviour that others regard as reckless risk-taking. The more superficial the pleasure component of consumption becomes and the faster its returns diminish, the greater the need for deeper emotional reactions or anticipations. Too much predictability, if felt as constraining, may induce an addiction to trying the unexpected. In that case, the escape is an illusion. Rooted in the neurological pathways of the brain, addiction may at first be felt as liberating, but it surely unfolds its own predictable course. Advances in the neurosciences probe these largely unconscious layers of preferences and avoidances, which bring with them their own illusionary certainty.
There is a paradox about meeting the unexpected. By definition, it is always ahead or behind, around the corner, or coming through the cracks precisely at the moment when it is not expected. The wish to escape the routine tied to one's predicted future behaviour, or the reaction to something entirely foreseeable, may then reveal something about the workings of one's deeper self that one only vaguely knows about, yet recognizes when it happens. John Sloboda is a psychologist who explores the subconscious connections and disjunctions between musicians and their audiences. He asked 100 people to tell him which specific moments in recorded music triggered physical responses such as tears, shivers down their spines or butterflies in their stomachs. These ‘emotional hotspots’ were typically the moments where the music played with the listeners' expectations, such as dissonant notes that were held and then resolved. Composers and experienced practitioners may have known this for a long time, but this study is one of the first to show that listeners feel the strongest emotions in response to the unexpected (Sloboda 2014: 433). By subverting conventions, great art has often been known to evoke responses in us that run contrary to our expectations.
The small-data world may seem messy, a constant struggle to prevent a precarious order from fading away. In comparison, the big-data world appears extremely well ordered. Just as Newton's method consisted in not discarding any data, since even bad data are data to be subjected to the average, so the big-data world, as we have seen, engulfs everything it can get hold of. It no longer needs the judgement of experts and has dispensed with the question of asking why. This is what makes it big and the information it has gathered good enough. Big data are satisficing. Their power to predict may open what is, after all, only a small corridor of certainty, such as purchasing habits, even if many businesses have come to rely on it. Like a balloon, this kind of certainty about what lies in the future can burst at any time, while other, larger corridors of different kinds of uncertainty, like those connected with terrorism or geopolitical instability, may open at any moment.
One may feel steamrollered by what the AI, the digital computer and, by extension, governments, business and cyber criminals know about each of us. There are many reasons to be worried about surveillance and intrusions into privacy, about becoming exposed and defenceless when confronted with these forces. One reason for the unease is that in this process something else is also eliminated: the ‘what is not’ and the ‘what is yet to come’ − thoughts, feelings, emotions, attachments, even habits still to be formed. As algorithms and silicon chips continue to reveal the what about human beings, what cannot be captured by the data becomes more desirable. ‘It is not the “what is”, but the “what is not”: the empty space, the cracks in the sidewalk, the unspoken and the not-yet thought’ that makes us human (Mayer-Schönberger and Cukier 2013: 196). The old philosophical question about free will returns in a novel guise. In a profound sense, a measure of existential uncertainty keeps the door open for what one will do and become in the future. Creativity, while it needs certain conditions to flourish, refuses to become subject to prediction. The creative process moves from disorder towards some kind of order, which may include the order of random patterns. But it does not know in advance which order will emerge.
Knowing the odds for or against tomorrow, and the predictions that big data make possible, is grounded in performativity. Words can move markets, and so can numbers and graphs. Performativity can sweep across organizations and is a pervasive force in social life. Numbers and graphs, indicators and what they are meant to tell us, are far from innocent and objective. They have been designed to tell a story. Or, whatever their design and the way in which they are presented, it is the viewer who reads a story into them. Most often, these two strands fuse, subtly or not so subtly, into one ballistic package, telling us who we are and what we do while transforming us into what we are being told. Then people begin to behave in the ways they are shown that they will behave.
This is no longer akin to the invention of normalcy and of deviation from the norm, that ‘incredible success story’ (Ian Hacking) which arose in the wake of the understanding of probability and statistical laws in the nineteenth and the first half of the twentieth centuries. The discovery of these social and personal laws was a matter of probability, of chance, yet they were inexorable. People were normal if they conformed to these tendencies, while those at the extremes were classified as pathological. The notions of normalcy and deviance fitted into the major concerns of societal control of those days: deviancy, as manifest in crime, vagrancy, madness, prostitution, suicide and disease. Control was rooted in the idea that a deviant sub-population could be improved – controlled – by enumeration and classification (Hacking 1990). When nation-states began to classify, count and tabulate their subjects, an avalanche of printed numbers, many of which were published, followed. Ian Hacking describes the rise of new bureaucracies which gained authority and secured the continuity needed to deploy the knowledge and technology that underpinned their ascent. In order to be counted, people needed to fall into categories invented for this purpose. Class consciousness, that central Marxian concept, was buttressed by the classifications in which people came to recognize themselves. Identities were formed and transformed through bureaucratic processes that culminated in ‘making up people’ (Hacking 1986; Porter 1986; Salais 2004).
Today, it is no longer the Prussian Central Statistical Commission, nor the Napoleonic and post-Napoleonic bureaucracies, which once collected vast amounts of data in search of law-like regularities. Instead, millions of individuals who live in liberal western democracies voluntarily offer information on everything they do and on where they turn their eyes for attention. As befits an age in which the process of individualization has elevated the individual above any kind of collective, class and classification, performativity works in a highly personalized and fine-tuned mode. Through the gradual actualization from what is said to be done to what is in fact done, the description of the world is transformed into its representation. Behaviour is duly adjusted so as to conform to, and confirm, what performance, captured by indicators or other numerical forms of representation, indicates. This is where the performative power of data lies once they exit from Google's supercomputers and invade the social world.
But it is still about control, the kind of control that resides in knowing what is likely to happen in the future. When Google sought to gauge what people were thinking, it became what people were thinking. Facebook sought to map the social graph, and became the social graph. Algorithms that were developed to model fluctuations in financial markets gained control over them and became the way in which financial markets operate. Arguably, the pressure to conform to societal norms has lessened, or has merely changed form. Nobody is expected to become ‘normal’ any more, as was the case in the nineteenth and part of the twentieth centuries. On the contrary, all efforts are directed at ‘personalizing’ whatever is on offer. Personalization has become the new normality. Both are about control.
The performative power of big data is not an isolated phenomenon. It comes with other subtle and not-so-subtle techniques of self-control. The quantified-self movement sets out to monitor one's health and well-being. Measuring one's heartbeat, the number of steps taken per day or the hours slept during the night is correlated with self-reported mood swings and numerically recorded levels of well-being. The range of measurements that can be taken is extended through additional applications. Increasingly, they include the body and the analysis of body parts, like blood analysis and what it reveals about hormone levels, and other measurements of bodily function. While the self reports the daily measurements foremost to itself, it is aware that the data are to be immediately shared and compared with those of other selves. Otherwise, they would remain isolated, a single measurement point and hence meaningless.
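At its core, this correlating of self-measurements is a one-line computation; a minimal sketch with an invented week of data (statistics.correlation requires Python 3.10 or later):

```python
from statistics import correlation  # available since Python 3.10

# Invented week of self-tracking data.
steps = [4200, 9800, 7100, 12000, 3000, 8600, 10400]
mood = [3, 7, 5, 8, 2, 6, 7]  # self-reported, on a 1-10 scale

# A single Pearson coefficient is, in the end, what such comparisons
# between one self and other selves boil down to.
print(f"steps-mood correlation: {correlation(steps, mood):.2f}")
```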
The ambivalence in connecting thick and thin description
Predictive analytics and the performativity of big data are possible only because they have succeeded in linking the individual with the aggregate level. As such, this is not new. Over centuries, nation-states have aggregated information about their subjects as part of their power base. Statistics were employed to recruit young men for their armies, for taxation purposes, for maintaining public health and for many other reasons. The taming of chance combined probability theory with data, but the individual and his or her future basically remained a random variable. Social scientists, like Adolphe Quetelet and Auguste Comte, who championed a social physics in the nineteenth century, imagined societies as aggregates of individuals, inspired by the capability of physics to make predictions.
Today, the individual does not see himself or herself as an anonymous part of an aggregate. One's outlook on the world and one's self-perception are framed by the continuous construction of one's identity and uniqueness. Perception and self-awareness, self-confidence and doubts are encoded in the ‘thick’ description of the self. The term was first used by anthropologists who sought to understand the meaning of the structures that make up culture. They described the everyday activities of the tribes and members whom they studied. They interviewed informants, observed rituals, mapped households and kinship links in the attempt to discover the code with which to decode social events and relationships, cosmologies and the use of symbols. Thick description is based on commentary and interpretation of what is observed, including the conceptual structures and meanings that are assigned. Interpretation is an integral part of thickness. So is the problem of translation, which persists between the language, methods and practices of the anthropologist and the language and meanings of those whose world is described (Geertz 1973). Likewise, the individual continually has to negotiate, reconstruct and translate her or his own thick description and bring it into alignment with the descriptions of others, but also with the thin description that strips away much of what is subjectively familiar. Translation can be betrayal, but it is also mediation between different worlds and different levels of reality.
In contrast to thick description, ‘thin’ description is a more factual account that minimizes interpretation. The peculiarities and frequencies of everyday individual behaviour are fed into the storage pool of big data. Yet the conceptual structures of thin and thick description intersect. They are superimposed. Layer by layer, thick description becomes part of a more complex system. What started out as thick description becomes increasingly thinner and more abstract. How can the gap, and the difference in perspective, between the individual and the aggregate level be bridged?
The aggregate level consists of a huge amount of data collected on the basis of what millions of people do. Nothing is discarded. Whatever the source, it can and will be used for some purpose. Reuse is the norm, not the exception. The individual claims uniqueness by carving out his or her own identity. This view of the self is strengthened by what the data tell: all components, however minute, have been assembled to make up the profile of each self. One's behaviour over time has been tracked in precise detail, and one's preferences and tastes have been followed and analysed. Movements indicating with whom one likes to associate and the social environments in which one mingles have been recorded. All that is known has been assembled, analysed and remixed.
But the result is paradoxical. While everything is done to assure the individual's uniqueness, this is arrived at through comparison, by differentiating the individual's uniqueness from that of others. In order to make sense of the data, the information about an individual has to be placed in a larger frame of reference. Only then can the range of predictions of future behaviour be extended. Only then can it be predicted what one will buy next; one's preferences in eating, drinking and lifestyle; for whom one is likely to vote in the next elections; with whom one is likely to partner or get married. Individual uniqueness becomes encoded through the multiple and invisible links that big data collect and reveal in iterative and cumulative ways. The life sciences, and in particular the biomedical sciences, are on the verge of turning the huge amount of data that continues to accumulate towards the personalization of medicine. The collective net of knowledge about the individual is a social net in a deep sense. Socialization turns out to be a two-way process.
What does this perception of the uniqueness of an individual's identity, derived from and dependent on aggregate data and referenced by continuous comparison with others, do to the irreducible feeling that there is more to life than what is contained in this information? That there is something hidden in the interstices between the range of predictable behaviour that forecasts everyone as consumer, voter or prospective patient, and the irrepressible conviction, which perhaps is an illusion, that one's destiny depends on choices not yet made? That there must be room for the unexpected and unpredictable, for what escapes capture by the thin description? The wider the range of what can be predicted with reasonable − i.e. probabilistic − certainty becomes, the greater the yearning for the unexpected. The tighter the indirect and subtle mechanisms of performance indicators channel behaviour, the stronger the longing for some kind of escape route. Greater predictability is both reassuring and unsettling for the individual. Reassuring, because one knows what to expect and can act accordingly. Unsettling, because the predictability of one's behaviour, emotions and social preferences is there for everyone else to see. It increases the vulnerability of the individual because his or her motives, emotions, strategies and behaviour are exposed.
It is important to recognize that individual and aggregate levels are closely intertwined, not only in an operational mode. Most people are perfectly capable of using both registers. They are aware when switching from one mode to the other and reasonably competent in making the connection. Choices and decisions are made all the time, even if one is never certain why exactly this decision has been made. Once made, they enter the cognitive apparatus and become part of the reality to live with. In order to avoid lapsing into a state of cognitive dissonance as the result of simultaneously holding incompatible beliefs, they are defended. Big data weave all this information together into a huge tapestry of choices, values and behaviour linked to the socio-economic status that frames them. Thus, data from one's past become the trajectory for predicting the personalized reality of tomorrow. When data are made explicit by indicators and benchmarks, performativity sets in. When they remain implicit, they form the background noise of what others do and what others value, aspire to or reject in the huge aggregate cloud that simulates contemporary societies. It is possible to zoom in and to adjust the resolution to the various subcultures, the multitude of social media networks, and to the pulsations of their fluctuating interactions.
The ambivalence in linking the individual and the aggregate level is highlighted by changes in the structures of control and governance. Government has largely been replaced by various forms of governance. Indirect regulation regimes prevail in most organizations, featuring self-reporting, self-monitoring and self-control from below. Normality and norms have become personalized. The taming of chance seems to have succeeded beyond expectations. Ian Hacking borrowed this powerful term from Charles S. Peirce, who was convinced that, in the history of the universe, blind Chance stabilizes into approximate Law. But the taming applied foremost to aggregate numbers, categories and classified subgroups, the ‘making up of people’ that was the outcome of the sorting processes to which populations had been subjected. For the personal universe of the individual today, chance is far from being tamed. The probabilities that rule the world do not provide sufficiently reassuring answers for anyone who yearns for certainty. For the individual, probabilities continue to provide only a range of numbers. They may alleviate uncertainty, but they deny certainty.
In a world in which determinism has been eroded and social space has been created for the autonomous laws of chance to unfold, the individual, for better or worse, was a member of a category or a subgroup, a part of a population. Now, the individual has become personalized, thanks to big data and the computational power of big corporations and governments. But, paradoxically, he or she has also become more exposed to the laws of chance in the form of probabilities. The cunning of uncertainty has removed the coercive but also the protective cover of the social arrangements of the nineteenth and first part of the twentieth century. Risk and uncertainty show themselves in novel ways. Risks have become personalized, while a new kind of risk emerges at a higher, aggregate level: systemic risk. But risk must not be confounded with uncertainty.
Risk is not uncertainty
Donald Rumsfeld, infamous for many well-known reasons, will also be remembered for the self-exculpating sentences offered in his Memoir: ‘There are known knowns, the things we know. There are known unknowns, the things we know we don't know. There is also that third category of unknown unknowns, the things we don't know we don't know. And you can only know more about those things by imagining what they might be.’ According to Rumsfeld, one such catastrophic failure of the imagination occurred in the attacks on Pearl Harbor. He quotes Thomas Schelling, who wrote in 1962: ‘There is a tendency in our planning to confuse the unfamiliar with the improbable’ (Danner 2014; Rumsfeld 2013; Schelling 1962). But the unknown unknowns are more than a mere failure of the imagination.
An old Jewish joke asks: ‘How do you make God laugh? Tell Him about your plans.’ We rely on huge machinery for predicting, managing, assessing, exploiting and insuring against everything that stands between our plans, our anticipation and imaginaries of the future, and what actually happens. The hiatus between them is risk, which has become a central category in modern life. The late Ulrich Beck, who coined the term risk society in 1986 just a few months before the Chernobyl accident, saw the adaptation of society to face an increasing number of threats as a result of reflexive modernization. As a consequence of industrial modernity and the conscious awareness of its impact, i.e. its reflexivity, its foundational infrastructures are eroded and questioned. Modernity is engaged in continuous preventative action to avoid the negative consequences and future catastrophes emanating from the infrastructures it has put in place (Beck 1986).
Indeed, risk permeates and underpins most operations in the financial markets and hence in modern capitalism. So do risk assessment and the attempts to keep risks and their potentially negative consequences at bay through risk management. Every new technology and the next generation of products it brings to market are subject to some kind of regulation and risk assessment. It has become mandatory to anticipate and assess risks and, wherever possible, to integrate risk management into the development of technology. Explicitly or implicitly, risk management pervades all organizations and systems. It seeks to hedge bets and to minimize unwanted fallout. Wherever one looks, every decision, from the most trivial to the existential, is fraught with some kind of risk that needs to be dealt with. And so is every decision not taken. For modern life, risk has become a known unknown which one would like to know in order to deal with it adequately.
When Michael Power became alarmed by the explosion of auditing activity in the 1980s in the United Kingdom, he attributed the phenomenon to three causes: the rise of New Public Management, which increased demands for financial and VFM (i.e. Value for Money) auditing in the name of financial constraint and reform in the public sector; the closely related political demands for greater accountability and transparency; and the rise of quality-assurance practices that originated in an industrial production context. Together, these developments changed the regulatory style: organizations came to be regulated no longer directly from above, but indirectly, from below (Power 1999, 2000).
The word ‘audit’ may no longer be at the core of the regulatory practices, but there can be no doubt that regulatory systems everywhere increasingly rely upon the control of control. These are self-checking, self-controlling and self-reporting arrangements, in line with performance objectives, measurement and monitoring. Based on their feedback, the performance objectives are iteratively adjusted. Neither the design of objectives nor the definition of performance is a neutral procedure of verification. They follow the logic of performativity: a model that validates itself by making itself successful.
The systemic and institutional consequences of these developments are far-reaching. Auditing and related practices of quality assurance, of performance measurement and monitoring are not about the isolated act based on the judgement of a practitioner, but about collectively negotiated settlements. Their ultimate raison d'être and legitimation is to assure that the risks that arise are properly managed by the system. Control of control is about the risk management of the system. One of the greatest ironies is the utter failure of the internal control system in the financial sector. The intended solution to regulatory compliance was not effective where it mattered most. This only shows that financial markets are not self-controlling. Regulation has to come from outside and be enforced.
Historically, it took a long time for risk to emerge as an unknown that could knowingly be dealt with. It had to be lifted out of an obscure and potentially threatening mire of numerous dangers that rendered life insecure and posed imminent and real threats. It came with the transformation of the unknown into something known by converting danger into a risk that could be calculated and hence contained. The origins of rischio lie on the shores of the Mediterranean in the thirteenth century. Its original meaning was to consciously put something valuable up for disposition: taking a risk promised desired gains and implied possible losses. The merchandise sent across stormy seas to a faraway trading partner could arrive safely and be sold for a huge profit. If not, big losses would be incurred. In order to achieve gains, it was necessary to act decisively, to dare, to take a chance. The mercantile environment that evolved at the time was supportive. The willingness to take a risk encouraged the calculation of the promised gains in monetary terms. Likewise, damage could be valorized monetarily. Rischio emerged together with an early form of capitalism, the ‘capitalism without an adjective’ as Fernand Braudel called it.
A few centuries later, and based on the same idea of converting a danger into a risk that could be calculated and supported by vastly improved statistics and probability theory, insurance as a regular business took off. Continuously updated calculations of the probabilities of various kinds of risk allowed for adjusting the risk premium. During the nineteenth and the first half of the twentieth century, it vastly expanded, together with the proliferation of social security systems in the wake of the welfare state (Ewald 1986). Additional techniques and technologies, supportive institutions and a stream of state regulation have since contributed to converting many more potential dangers into calculable risks. But in this ongoing process of conversion, the meaning of risk also changed. Risk originally implied that the outcome could be positive or negative. Dangers, in all their threatening vagueness and with their generalized catastrophic potential, were to be brought gradually under the control of calculation and, hence, civilization. Losses remained unavoidable. Gains continued to provide the attractive force. They were the incentives to act by opening up novel spaces of opportunities.
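The underlying logic of such calculations can be made concrete in a deliberately simplified sketch; the figures are assumed for illustration rather than drawn from any historical tariff. If loss statistics suggest a probability $p$ of losing an insured cargo of value $L$, the actuarially fair premium equals the expected loss, to which the insurer adds a loading for costs and profit:

\[
\text{premium} = p \cdot L + \text{loading}, \qquad \text{e.g.} \quad 0.02 \times 10\,000 + 50 = 250.
\]

Continuously updating the statistics amounts to re-estimating $p$, and with it the premium. It is in this precise sense that a danger, once converted, becomes calculable.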
Today, the concept of risk has become impoverished and one-sided. It is now generally associated with a potentially negative and unwanted outcome which is to be avoided or at least its consequences minimized. The concept of risk is conceived as an evaluation of an uncertain loss potential. Seen from a decision-theoretic perspective, risk has now become strongly related to other concepts, such as vulnerability, robustness, and resilience, which are conceptualized in either a static or a dynamic way. The thrust behind this approach is to integrate and consider these concepts together as part of an adaptive risk management strategy (Scholz, Blumer and Brand 2012). By stripping risk of its original wider meaning, which included the lure of potential gains, it has been reduced to an object which is to be managed – through mitigation, adaptation and containment.
The reasons behind this conceptual and semantic shift, which can be observed throughout the risk literature and in the common language use of the term, are not entirely clear. Research into the psychology of risk perception has shown that gains and losses are not perceived as symmetrical. Losses are generally overestimated as they are related to assets possessed in the present, while gains lie in the future. Moreover, in a world which is predominantly monetized, the value or the valuables that are put up for disposition when taking a risky decision often become narrowed to what can be calculated in monetary terms. Other values and valuables become either eclipsed or marginalized. A woman who asks for a divorce, knowing that this decision will pose considerable economic risks for her in terms of loss of income and maintaining her lifestyle, may still want to do so because she values something else more than her economic situation. She ranks her own well-being and that of her children without the man whom she seeks to divorce higher than the economic losses she knows she will incur. In the health sector, considerable problems are known to arise when decisions need to be taken that have to weigh economic losses against more intangible values and valuables, such as the quality of life of a patient.
In general, risk management is at ease when calculating gains and losses that are expressed in the same currency. Difficulties arise when non-monetary values have to be taken into account and need to be factored into the calculation. The narratives of a daring act in defying fate or the once heroic decision to put up what one values for an ideal that ranks even higher are still a familiar trope. For most practical purposes, however, they have yielded to a cautious assessment of the unavoidable losses, i.e. the severity of adverse effects, mostly expressed in monetary terms. The concept of risk in the sense it has acquired today is therefore bound to tilt towards the negative side. Conceptually, it is no longer equipped to calculate the gains that belong to a non-monetary currency of valuables. By linking the probability dimension of risk to decision making and the decision stakes, different categories of risk-solving strategies are offered in return, as proposed by Funtowicz and Ravetz in their uncertainty-risk model (Funtowicz and Ravetz 1985). For these authors, nuclear power exhibits both high decision stakes and high system uncertainties, in contrast to low decision stakes and low system uncertainties that are found, for instance, in many familiar areas of applied science. The only problem with such a model is that it is impossible to define, and find consensus about, what high system uncertainties mean in a specific and contested case.
The attempt to brave, if not to tame, chance by taking a calculated risk has yielded to attempts to manage the ‘risk of everything’ which has become the dominant concern today. It enacts caution and emphasizes prevention. The daring act of defying fate or the wrath of the elements has become a heroic gesture consigned to the past. Risk taking has become subject to social disapproval in many areas of life. Institutionally, it has been shifted to the one area in which it is welcome as highly desirable, if not as a precondition: the financial markets and related economic activities. The original distinction between danger and risk has thereby been lost. Some risks have even been re-converted into dangers.
In 2007, Beck updated his diagnosis and expanded the typology of risks now incurred, including financial risks, linking the uncontrollability of risks to globalization. The risk society became a world risk society in the German original (Beck 2007 [2008]). As a global consensus on anticipation and prevention is nowhere in sight, a clash of risk cultures takes place. Fear is unevenly distributed and may change sides. Uncertainty always plays a part. It leaves room for not acting and for not taking the necessary preventative action.
Much of the history of technology offers a somewhat different reading. Technologies have proven rather effective in converting dangers into risks whose probabilities can be calculated and which come with the future potential of both positive and negative consequences. The real problem is that assessments of technological risks often ignore social risks. The societal discourse on risk has insistently brought back the social dimension: gains and losses for whom? Who benefits and who loses? Is the risk imposed or is it taken on voluntarily? Expecting society to become more risk-friendly, or the younger generation to be readier to fail more often, as policy makers and industry do not tire of doing, also means engaging seriously with such legitimate questions. Given the ongoing expansion and growing interdependencies of technological systems, hi-tech societies will experience an increase in risks, including systemic risks. They should not be re-converted into dangers. Instead, calculating gains and losses must integrate the social dimension, including a fair distribution of positive and negative consequences. In the end, technological risks and social risks will have to be dealt with together.
One area of assessing and managing risks remained relatively shielded from public view until it recently gained heightened visibility and attention: the risks embedded in financial institutions and transactions. This experience has demonstrated that financial markets lead other markets and that financial capital dwarfs industrial capital. As the latest financial crises show, market-based risk management was definitely not up to the task it was designed for. At the heart of modern capitalism, the crucial distinction between risk and uncertainty re-emerges as fundamental.
This distinction was first drawn in 1921 by Frank Hyneman Knight, an economist and a farmer's son from Illinois. His broad liberal arts education included chemistry, German drama and philosophy, in addition to economics. He also studied with Max Weber in Heidelberg and translated Weber's book on economic history. Knight wanted to understand entrepreneurship and the role of profit as rewarding the entrepreneur. ‘At the bottom of the uncertainty problem in economics,’ he noted, ‘is the forward-looking character of the economic process itself.’ Investment is an activity that looks into the future. Profit and loss materialize only when uncertainty is present. Competition, in investment as in sports, presupposes that several individuals or teams vie for the same prize and that the result cannot be predicted with certainty. Knight's contribution to the theory of entrepreneurship consists in making uncertainty its central feature. This is in contrast to Max Weber, for whom rationality was central, and differs also from Joseph Schumpeter, for whom entrepreneurs, driven by more than just the profit motive, were the protagonists of the innovation process (Brouwer 2002).
Knight contended that ‘profit arises out of the inherent, absolute unpredictability of things, out of the sheer, brute fact that the results of human activity cannot be anticipated and then only in so far as even a probability calculation in regard to them is impossible and meaningless’ (Knight 2002 [1921]: 311). As it cannot be predicted in advance which ventures will succeed and which ones fail, only uncertainty can explain profits and losses. Risk is calculable a priori and can be treated as a cost. Experience can teach us about damages to be expected, which can be included in cost calculations. Other risks can be insured. ‘Uncertainty, in contrast, is uninsurable, because it depends on the exercise of human judgement in the making of decisions by men and although these estimates tend to fall into groups within which fluctuations cancel out and hence approach constancy and measurability; this happens only after the fact’ (Knight 2002 [1921] quoted in Brouwer 2002: 92).
The difference between risk and uncertainty is whether the possible outcomes can be calculated in advance or not. Situations in which decision making is faced with unknown outcomes, but known ex ante probability distributions, differ in a profound sense from those in which the probability distribution of the outcome is unknown. This is genuine uncertainty. Under such conditions, as Bertrand de Jouvenel observed, knowledge of the future becomes a contradiction in terms. In a trenchant critical analysis of what he calls ‘the Ghost in the Financial Machine’, Arjun Appadurai charges overconfident financial market profiteers with exploiting Knightian uncertainty. He sees them wagering on uncertainty rather than on risk in practices like short-selling (Appadurai 2013). Are traders playing with risk or with uncertainty?
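In decision-theoretic shorthand, and as a schematic rendering rather than Knight's own notation, the distinction can be written down directly: under risk, the possible outcomes $x_i$ and their probabilities $p_i$ are known ex ante, so an expected value can be computed and priced as a cost; under Knightian uncertainty, the $p_i$ themselves are unknown, and no such expectation can be formed before the fact.

\[
\text{risk:} \quad \mathbb{E}[X] = \sum_i p_i x_i \ \text{with } p_i \text{ known ex ante}; \qquad \text{uncertainty:} \quad p_i \ \text{unknown or undefined}.
\]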
Short-selling is one of many activities which animate the financial system. Another, at once more banal and worrisome, is revealed in Flash Boys: A Wall Street Revolt. The author, Michael Lewis, chronicles the rise of high-frequency trading, where traders buy and sell in large volume at an extraordinarily fast pace. Trading is entirely automated, backed by huge investments that have gone into both hardware and software. According to Lewis, high-frequency trading is more than just playing with risk. It is designed to skim profit from other investors. He calls it a form of legalized theft. By being faster than everybody else, due to split-second automated electronic advances, high-frequency traders can anticipate the information that other investors can obtain. The feedback loops close upon themselves. The financial market is a market in which most traders are reacting to what other traders are doing, trying to outsmart and out-speed others (Surowiecki 2014).
Critics dismiss this as false and ill-informed. They object, pointing out that rational speculators, e.g. ‘value investors’ who systematically attempt to buy under-priced assets, perform a role in the ecosystem of finance and the economy. They may be parasites, but they still play an important role in price stability. They invest energy in research on detecting the ‘correct’ prices on which to speculate. By doing so, they take excessive price fluctuations out of the system. However, they do so at the price of introducing systemic risk (Thurner, Farmer and Geanakoplos 2012). The ‘grace’ that once was the driving force in Weber's spirit of capitalism has been replaced by a self-bestowed confidence to play with uncertainty for the sake of profit. Seen from the perspective of the system, uncertainty at the level of the individual speculator becomes transformed into a risk for the system.
But Knightian uncertainty, ‘the sheer, brute fact that the results of human activity cannot be anticipated’, takes its revenge. ‘Flash-crashes’ happen in the real world, driven by the internal dynamics of the market, exacerbated by the behaviour of high-frequency traders and others. In the meantime, the origin and consequences of the financial crisis of 2008 have become the study objects of a vast literature. One of the many diagnoses sees the fundamental cause of the unfolding financial and economic crisis, which has since cascaded into a global economic recession, in the widespread illusion of a perpetual money machine. The accumulation of several bubbles, their interplay and mutual reinforcement lies at the origin of this illusion, allowing financial institutions to extract wealth from an unsustainable artificial process (Sornette 2009; Sornette and Woodard 2009).
If crashes and bubbles are ubiquitous, if humans tend to be over-optimistic with respect to future prospects and if herding behaviour is common, especially in financial markets, why is it so difficult to predict these known unknowns? Or are they unknown unknowns? The answer lies in the complexity of the system, which is the real power base of uncertainty. Mechanisms of self-organization are at work in distributions of event sizes that follow power laws. Rare but large events punctuate many complex systems, natural and social ones. Outliers defy Gaussian distributions. Examples can be found across a wide range of natural and social phenomena, from the distribution of seismic events, meteorites, city sizes, material creep and epileptic seizures to, most tellingly, financial systems. Also called dragon-kings to emphasize that the phenomenon can be very different from the normal, such outliers have profound significance. While the onset of extreme events escapes prediction, dragon-kings reveal the presence of special mechanisms of self-organization. These can be phase transitions, bifurcations, catastrophes (in the sense of René Thom) or tipping points. They provide clues that allow diagnosing the maturation of a system moving towards a crisis (Sornette 2009). Where uncertainty reigns, its cunning may allow glimpses that enable advance warning.
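The contrast between a Gaussian world and a power-law world can be made visible with a small simulation. The following Python sketch uses assumed parameters and a textbook Pareto distribution, not any of Sornette's actual models; it merely illustrates how heavy tails produce outliers of an entirely different order.

import random

random.seed(42)
N = 100_000

# Gaussian sample: extreme values stay within a few standard deviations.
gaussian = [random.gauss(0, 1) for _ in range(N)]

# Power-law (Pareto) sample: heavy tail, so single draws can dominate.
alpha = 1.5  # shape parameter; for alpha <= 2 the variance is infinite
power_law = [random.paretovariate(alpha) for _ in range(N)]

print("largest Gaussian draw: ", round(max(gaussian), 1))   # typically around 4
print("largest power-law draw:", round(max(power_law), 1))  # can run into the thousands

In the Gaussian sample the largest of 100,000 draws remains within a few standard deviations of the mean; in the power-law sample a single event can dwarf all the others. That is the statistical signature of the outliers that defy Gaussian distributions.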
Dragon-kings may lead to massive losses and inflict damage on millions of people. At the other end of the scale is the everyday life world of each individual, where comparatively benign decisions need to be taken. Risk and uncertainty are an integral part of both. Illusions of certainty exist, as when a risk, for instance a false positive test result, is mistaken for certainty. There is also the example of the turkey that extrapolates from being fed well every day, unaware that Thanksgiving is approaching. Gerd Gigerenzer has spent a large part of his successful career informing experts, medical doctors and laypersons alike on how to become more risk-savvy. He has devised some simple rules for situations beset by uncertainties, in which less often means more.
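One of Gigerenzer's best-known devices for dissolving such illusions is to translate probabilities into natural frequencies. A worked example, with figures assumed for illustration (1 per cent prevalence, 90 per cent sensitivity, 9 per cent false-positive rate): of 1,000 people, 10 have the condition and 9 of them test positive, while roughly 89 of the 990 healthy people also test positive. A positive result therefore signals the condition in only about 9 of 98 cases. Bayes' rule gives the same answer:

\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \approx 0.09.
\]

Far from being a certainty, the positive test leaves roughly a nine in ten chance of a false alarm.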
Gigerenzer does not deny that many situations exist in which sufficient data and information are available (at least in retrospect) that allow, if carefully analysed, a strategic outlook. His concern is with the multitude of situations in which it is advisable to leave the calculable, probabilistic world, at least temporarily, and to settle for the gut feeling, for the momentous intuition. Such heuristics allow us to concentrate on the part of the information that matters. Experts often rely on heuristics, as they know from experience which information is relevant and which can safely be ignored. Intelligent heuristics therefore provide simple rules for situations in which uncertainty is high, little information and few data are available, and several routes can be taken. Heuristics heed Einstein's dictum to make everything as simple as possible, but not simpler (Gigerenzer 2013).
Dragon-kings seem to inhabit worlds apart from the everyday situations in which heuristics have their place. No easy answers exist to the question: how relevant is the past to the future? The unexpected is always lurking behind the next corner. Uncertainty has yielded a bit of its territory for us to peek into. Asking the what question is more predictive than asking the why question. Humans are full of contradictions. They yearn to know in advance in order to prepare and, if possible, to shape what is coming. But they are equally loath to become trapped by their own path-dependency. The odds for tomorrow lie in a future that is radically open – and thus uncertain.
3
The Cunning of Promises
Bringing the future into the present
Nobody quite knows how it started and who started it. On the Pont des Arts in Paris, the railings and iron arches connecting the grids are festooned with padlocks. This ritual has meanwhile spread to other bridges, from the Pont d'Archevêché just a few hundred metres away to Florence, London, Rome, Berlin, New York and Seoul. The padlocks have been attached with promises of love and togetherness by couples who throw the keys into the river in a demonstrative gesture of trust and defiance. An ongoing debate rages between those who consider these materialized forms of promise a sign of artistic urban revival, a necessary concession to tourists or an eyesore to be removed. Street vendors are eager to sell locks to those who come unprepared to promise endurance of their relationship, while city officials ponder the moment when the weight of the locks might endanger the stability of the bridges.
Promises are a risk-free mortgage on the future. They are a bet on it, premised on the belief that the uncertainty about their delivery will be overcome. Promises are based on trust, which creates social ties and is a glue of a remarkable kind between individuals, or between individuals and institutions. Promises can be made to oneself or to an imagined other, as to someone who is no longer around. They may also take the form of a vow offered to a deity. Promises may be vague and unspecified and yet create the weak ties that bind us into strong social cooperation. They may adopt the strong form of a legally binding contract. In Anglo-American law, for instance, excuses are recognized when the law of contract cannot be enforced due to impossibility, such as divorce and bankruptcy (Foster 1987). There is also a moral economy underlying promises. Legally binding promises are considered to be morally less valuable than those given without obligation. This points to the fact that a subtle balance of power undergirds the exchange between the parties. Promises involving a delivery expect something in return, if not in kind, then in support, loyalty or belief.
Expectations about the future matter a great deal. In game theory, different behavioural patterns emerge depending on whether a game is played only once or whether repetition is expected. Tit-for-tat, the equivalent retaliation, turns out to be a simple, but highly successful, strategy in repeated prisoners' dilemma games. The agent first cooperates, and subsequently replicates the opponent's previous action, thus determining patterns of cooperation or defection. Socially expected duration, i.e. the length of time a social interaction is expected to last, is another effective mechanism which regulates trust or its loss. The norm of reciprocity is a generalized promise, given on the condition that it will be withdrawn upon betrayal of the underlying expectation.
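How tit-for-tat plays out can be seen in a minimal simulation of the repeated prisoners' dilemma. The Python sketch below uses the conventional payoff values from the game-theory literature; the strategies and the number of rounds are assumptions made for illustration.

# One-round payoffs (row player, column player); C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the other's past moves
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliation: (9, 14)

Against a fellow cooperator, tit-for-tat sustains cooperation throughout; against a defector it loses only the opening round before retaliating, which is what makes so simple a strategy so robust.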
The cunning of uncertainty entails a temporal dimension which, to use Hannah Arendt's words, brings the future into the present. In her theory of action, heightened unpredictability is produced at the level of the mind, which can be redeemed only through the potentialities inherent in action. The two potentialities she refers to are promise in response to the indeterminacy of the future and forgiveness to counteract the irreversibility of past actions and their unintended consequences. For Arendt, the power of promise lies in the capacity to dispose of the future as though it were the present: ‘By bringing the promised future into the present we are able to create reliability and predictability which would otherwise be out of reach.’ Similarly: ‘[B]inding oneself through promises, serves to set up in the ocean of uncertainty, which the future is by definition, islands of certainty without which not even continuity, let alone durability of any kind would be possible in the relationship between men’ (Arendt 1998 [1958], quoted in Adam and Groves 2007: 237, 245).
Some of these islands of certainty need to be buttressed through the binding force of contract, while others are stabilized by the mere trust and belief that the relevant promises will be kept. Those who make a promise should be spurred to action and those who expect that promises will be kept sustain future action through their support. Thus a working space for individual and collective imaginaries is opened up. It is geared towards action. Some of us may remember from our childhood days the intensity of what we wished to happen and how we innocently schemed, as only children can, to involve our parents in this imaginary wishing space. Some of us may also remember vividly the disappointment that followed when, for instance, our parents promised to take us to the circus or to buy an ardently desired toy, only to explain afterwards that they could not keep their promise. Later in life, having gained experience and maturity, a sometimes painful sense of realism transforms the childhood wishing space into a space of mutual expectations and anticipation.
The play of promises made and kept fluctuates throughout one's life and relationships. Happy endings and the tragedies of hope and betrayal are the stuff of which novels are made. These dynamics resume at an institutional, even societal level. The temporal span of socially expected duration may lead up to a fixed point in time or may be left open. Even if the time span knows no deadline, the trust of those who believe in the promise is not endless. Once disappointment sets in, trust quickly erodes. Abruptly, the willingness to believe in future promises shrinks and the islands of certainty re-submerge in an ocean of uncertainty. Anthropologists have provided fascinating accounts of ‘Big-Man societies’, whose leaders gain standing partly on the basis of the characteristics they display and partly on the basis of the promises they make (Sahlins 1963). This affects and is affected by the social structures in which such exchanges are carried out. Anthropologists have compared social structures dominated by ‘big men’ with those of ‘great men’. Among the Baruya, a tribe in Papua New Guinea discovered only in 1951, power among the great men is inherited or merited, but the accumulation of valuables plays no role. It rests on male domination and the exchange of women between the lineages of the tribe (Godelier 1982).
Or take public life in the late Roman Empire. It was much dominated by the obligation of gift giving. The Roman Empire was held together by personal ties and patronage, cemented through massive giving. Eventually, the patterns of generosity shifted from giving to cities and fellow citizens to giving to the Christian church, with the gifts no longer gaining fame for the donor in this world, but achieving salvation in the next. Obligations to give and to whom, their underlying motivations and the resulting patterns of wealth accumulation allow fascinating glimpses into the interplay of expectations and obligations, promises and rewards, the evolution of which depended on changing uses of wealth (Brown 2012). While Cicero in De officiis sees the keeping of promises as the foundation of justice, centuries later Machiavelli, in chapter 18 of The Prince, advises princes to keep promises only when it usefully serves them in keeping the state going.
To this day, politicians continue to make promises about what they will deliver in the future in exchange for the votes they seek to reap in the present. Many have become hostages of previously made promises. Political landscapes are littered with broken promises and promises that can no longer be revoked if politicians do not want to lose whatever trust and trustworthiness remain. Examples range from ‘read my lips’ to analyses of why politicians find it so difficult to remove benefits that began as promises and later hardened into entitlements. In the world of business, credibility plays a comparably important role. Whom to believe and trust, and on what grounds, forms the basis of various kinds of partnership. It permeates deals and influences strategies. Credibility undergirds a relationship which is built on the expectation that what it comes to represent will also be reliable in the future.
Bringing the future into the present through a promise seems easy. The ingredients are words, the intentions behind them and the trust that binds those who promise to those to whom something is promised. This creates space for action, as well as loyalty and goodwill, resources that give structure to this space and allow one to draw on a much wider range of material and immaterial assets for action. But there is a price to be paid for bringing the future into the present. It is the uncertainty that comes with every promise. Delivery is never assured. The time of fulfilment may never arrive. Hovering above uncertainty, a promise can be used instrumentally for different ends. It can also be misused or its uses may change. As long as hope and trust sustain the relationship, the actual delivery remains pending, carried by good faith. Once trust collapses, so does the imagined and hoped-for future, injecting anger, disappointment or even revenge into the relations that existed before.
This familiar space, where a desired part of the future is brought into the present under an agreed-upon condition of uncertainty that comes with every promise, became institutionalized in the context of endeavours between science and society or, perhaps more accurately, society in science. What once was merely a wishing space has been transformed into a working space for the future. In close alignment with technology, science is among the most powerful and effective agents for bringing the future into the present. Scholars in science and technology studies (STS) have amply demonstrated that the processes at work are deeply implicated in the co-production of science and the social order (Jasanoff 2004). They underpin the promises made and the support that comes with the trust put into them. But uncertainty remains, oscillating between certainty based on a past record of kept promises, sometimes beyond expectations and often through routes other than planned, and teetering on the brink of despair when promises honestly cannot be kept.
Modern science arrived with the explicit promise of producing unprecedented knowledge that can be put to practical ends for the betterment of society. It came to be seen as the secular equivalent to spiritual salvation. It would improve the material conditions of life, contributing to wealth and well-being. The belief in science and its promissory engagement with society was and remains extraordinary. Initially, and throughout much of the eighteenth century, it took great strength of conviction over a long period of trial and error before promises and the beliefs that upheld them were transformed into tangible results. Economic and legal institutions had to be in place, capable of accumulating, disseminating and employing useful knowledge in novel ways. For the first time, economic outcomes were produced that created wealth, rather than redistributing it, leading to the merger of Enlightenment beliefs and industrialization (Mokyr 2009).
As a highly successful brand, science and technology became part of the definition and identity of modernity. Yet the progress of the techno-sciences, accompanied by an increased capacity of human control over nature, did not extend to a similar capacity to understand, intervene and control the social world. Already in the eighteenth century, it was apparent that scientific progress did not entail moral progress. But exaggerated ideas of the power of science to transform politics and society persist to this day (Dronamraju 1995). Technical solutions then and now are easier to implement and to adopt than solutions that entail changes to the social fabric of society, questioning established hierarchies and going against vested interests. Bertrand Russell would later even claim that science fuelled human passion with disastrous results. Science and its rationality were incapable of preventing human beings from engaging in genocide and the terrible wars that marked the twentieth century, unleashing an unprecedented technological and scientific destructive power. Nor could it prevent the other atrocities committed in that ‘short century’ (Hobsbawm 1995).
Disillusionment with science also came from other sources. Stripping away the content of many of the previously existing beliefs, science was seen to lead to the disenchantment of the world, as forcefully expressed by Max Weber. The scientific worldview came with the promise of a new certainty in telling what the world was like and how it functioned. But the scientific worldview initiated through Copernicus and Galileo and later Darwin continues its transformation. The advances brought about through new discoveries lead to paradigm changes, including the astonishing world described by quantum mechanics. Society has to get accustomed to the fact that the systematic production of new knowledge is an open-ended process. Nor is it always easy or pleasant to recognize and accept the reduced place of human beings or the close genetic relationships that bind us to other living beings in a long and shared evolutionary history.
As the previously unconditional belief in scientific progress declines, it is easy in retrospect to see that expectations were too high. The promises connected with science seemed much too vague and vastly exaggerated, so that disillusionment was bound to follow. A considerable part of disillusionment, but also of resentment, is associated with technology. Technological pessimism has a long pedigree. Repeatedly, it lends itself as a vehicle to those who want to project their anger, their personal fear of the future or the bitterness of lost hope onto what they perceive to be developments beyond their control. Science and technology then function as powerful symbols. They appear as the carriers of accelerated scientific and technological change, moving the world which they so powerfully shape beyond quotidian control and familiarity. The intrinsic orientation of the techno-sciences towards progress raises all the more suspicion, the less progress there appears to be in the social world, seemingly stuck in the old problems of how to live together. The realization that technological improvements offer no lasting escape from politics implies that the public interest cannot be defined in technological terms only (Ezrahi 1994). Any technology that promises only technological solutions without due regard as to how they are to be absorbed and appropriated, socially embedded and implanted, is bound to disappoint. The working space in which science and the social order are co-produced is still too compartmentalized.
Let us take a closer look at the promises that undergird and structure the working space in which a part of the future is expected to be brought into the present. Admittedly, the promises are vague, diverse and even contradictory. They are couched in collective imaginations which are difficult to pin down as they oscillate in their fleeting and ephemeral forms. Yet they continue to exist in the background. Implicitly, they influence the direction and ways in which the future is being brought in. Aspirations and anticipations shape the societal, political, economic and cultural conditions that enable science ‘to work’. Of the many possibilities that the production of useful knowledge harbours, only some will take actual shape and be realized in a highly selective process. Just as there are possible and impossible futures, desirable and not desirable ones, only some collective imaginaries and anticipations will actually influence future developments. Predictions about which of the many scientific and technological possibilities will transform into tangible and sustainable outcomes are keenly analysed by governments, funding agencies, venture capitalists and a host of other stakeholders that populate the public and private quarters of the working space. The outcome will depend on some kind of structural convergence with the dynamics of larger societal developments over a longer period of time, but the scale at which this takes place is undetermined.
At present, two main but countervailing tendencies can be observed. One is to negotiate the relationship between science and society through a renewed societal ‘contract’. The idea of such a contract first arose immediately after the Second World War as part of the vision of Vannevar Bush's memorandum, Science, the Endless Frontier. A new policy for funding research would foresee grants from the US federal government, establishing a contractual relationship with researchers (Guston 2013). This model has since been adopted by practically all research funding agencies, as it provides a solution to the principal–agent problem. According to this theory, the principal is the one who pays, in this case the government or ultimately the taxpayer, and the agent is the researcher, commissioned to produce useful knowledge. The relationship is asymmetrical. The agent always knows more and better than the principal, yet the principal has to assure the productivity of the agent and that investments will yield economic and social returns.
The idea of a contract, wrapped as a metaphor, implicating science as a social institution and society as an all-encompassing construct is still popular. Society will continue to support science in material and immaterial ways while science pledges to deliver benefits to society. General and vague, this metaphor retains some of its resonance but, for the practical purposes of policies aiming to boost science, technology and innovation, it is now considered far too loose. Particularly as part of the trend towards economization of science policy, for which the New Public Management is only one manifestation, it undergoes a remarkable tightening (Berman 2014). The informal vagueness of scientific promises has been replaced by formal procedures. Their objective is to formalize what is being promised in a way that enables the control of what actually has been delivered once the research project and the contractual relationship have ended. Towards this end, an array of quantitative measurements and indicators, such as ex ante impact assessment, monitoring and benchmarking, have been developed. Promises that once were generous and unspecific are incorporated into performance agreements with milestones and deadlines. The sea of uncertainty, in wh