Data in the 21st Century Introduction

Introduction to the catalogue of the exhibition Data in the 21st Century (2015-2016)

The debate around big data is usually conducted in extreme terms. The pessimists claim big data will lead to the downfall of our civilization; the optimists say it will solve all our problems. While the passionate proponents and opponents carry on their heated discussion, most of the people affected will click away when they see the word “big data.” Yet they are the ones continuously feeding data into the databases, and the ones on whose everyday lives big data is having an increasingly significant impact. It is not easy to form a nuanced opinion when a debate is being framed in extreme terms. The Data in the 21st Century exhibition therefore did not focus on utopian or dystopian artistic representations of big data, but on art that attends to the friction that arises where the unpredictable, unstable, irrational and affective world we live in clashes with the urge to represent it in the form of data.

Data in the 21st Century (December 19, 2015–February 14, 2016) included work by Martin John Callanan and Timo Arnall as well as new versions of work by Kyle McDonald and Daniel Goddemeyer, Moritz Stefaner, Dominikus Baur, and Lev Manovich. Furthermore, new work was produced for the show by PWR Studio (Hanna Nilsson and Rasmus Svensson) and Informal Strategies (Geert van Mil and Doris Denekamp). The collective Lane/You/Debackere and the duo Max Dovey and Manetta Berends created new performance pieces for Data in the 21st Century, which took place at, respectively, the opening (December 18, 2015) and closing (February 12, 2016) events, along with a lecture by leading theorist Vinay Gupta. In a book to be published in May 2016, we will look back on Data in the 21st Century with writer and artist Douglas Coupland. Through these works, performances, lectures and book, we aimed to bring nuance to the debate around big data.

There are different definitions of the term “big data.” It is usually used simply to refer to working with extremely large data sets. More precisely, it is defined as data sets that are too big to maintain with regular database management systems. Strictly speaking, the big data paradigm involves searching for correlations within and between data sets without a hypothesis guiding the search. Some thinkers contend that this amounts to an epistemological revolution.

Large enough data sets can enable connections to emerge without a need for guiding hypotheses, and those connections can have predictive value in the real world. Explaining them is not necessary. If enough data can be collected to cover every aspect of reality, then all of reality can be predicted. And if the outcome of every process in the world can be predicted, you can use the predictions to start arranging society in a theoretically perfect way. This, in brief, is the dream and the dogma of the big data paradigm.

In order to optimize big data-driven predictions, information must be continuously fed into the system. And so our behavior, our bodily functions and our thoughts are quantified with the help of technology so that algorithmic predictions can take the human factor into account. In big data’s version of reality, everything can be quantified. Things that are not quantifiable do not exist. Our need for privacy and human values like self-determination are irrelevant in the big data paradigm. Only behaviors that are quantifiable and therefore correlatable count. Those who see big data as an instrument that will enable the creation of a perfect world regard values and needs like self-determination and privacy as old-fashioned, vaguely defined remnants of a bygone world that only cloud our view of the new.

Certain big data optimists – mainly men working for Silicon Valley technology startups and software giants – forecast an even more dominant role for big data in the future. They not only see it as a necessary tool for optimizing today’s world, but believe linking it with artificial intelligence will usher in a new economic and social reality. Their hypothesis is that big data enables a higher form of intelligence. Since it facilitates more efficient predictions than we fallible humans are capable of, they reason, it will not only lead to a more “optimal” world but to an entirely new reality. They refer, for instance, to intelligent systems that are already changing the world by using big data to autonomously anticipate reality. One example is algorithmic trading, in which computers trade stocks on the basis of predictions, with no human intervention. Following this line of thought, these proponents speculate that in the future, systems will be able to do things like predict when we will need a taxi before we know it ourselves, and indicate when a person needs medication long before a doctor can interpret his or her symptoms, perhaps before the patient is even sick.

Since, according to this optimistic view of big data, the best decisions are invariably made by “all-knowing” data-driven systems, some even predict that in the future democracy will be redundant. Such prospects are garnering fierce opposition, which joins oft-heard criticisms about data ownership and big data systems’ lack of human values. The big question is: what will the quality of life be in this optimized, technopragmatic brave new world? Critics argue, for example, that the absolute truth suggested by these systems is an illusion that undervalues debate and compromise and imposes on human beings various values (such as transparency) that belong to the world of machines. Another question is: who should control these all-knowing systems in the future? Big data optimists generally prefer to leave such matters to the ethical faculties of companies like Google. Critics, though, point out that even if those companies have the best of intentions, the systems they build are perfect tools of oppression and therefore ideally suited to totalitarian regimes.

The rise of the big data paradigm has been made possible by new technologies like the Internet that facilitate the mass collection, storage and sharing of data. The resulting ability to capitalize on data and sell it to third parties is a driving force behind big data. Critics argue that big data perfectly demonstrates what’s wrong with the capitalist market system. The price of data has little to do with its value and is amorally determined by the market. In the market, data is simply a salable good like any other, and any desire we might have to control and make decisions about our own data is, at most, one of the factors affecting pricing. Of course, to minimize costs, companies prefer to collect relevant data about their customers themselves. That is child’s play for companies like Amazon, whose contact with customers takes place almost entirely online.

Data collection is frequently a byproduct of today’s automated, digitized, networked economy and society. Purchasing behavior, travel behavior, online behavior, media behavior, smartphone behavior, social media behavior – practically our every click is recorded. The data is then used to profile customer preferences, embedded in services automatically, and employed in decision-making. To an ever-greater degree, our reality is governed by data – data we produce and consciously or unconsciously make available.

It is more or less inevitable that reality will become more and more data-driven. And that is not a problem per se. The questions are how people and society will be changed through their interaction with data and what the morality and politics of the process of change are and should be. Currently, they seem to be dictated mainly by a market approach and machine values, and blind to worldly qualities and humanist values that cannot be captured in data form. How should we continue to shape our relationship to data in the 21st century? This was the central question that we addressed in Data in the 21st Century.