Information vs. In-Formation

Information in relation to computers can be described in two ways.

The most popular notion of information stems from Norbert Wiener's 1940s concept, rooted in cybernetics. Here information appears as a statistical property: time-series of measurements are treated as mathematical entities. This supports a perception of information in which everything is calculable (in the computer) and can be expressed through a model. It also comprises the idea of the black box, whose input and output can be observed but whose inner processes cannot, which allowed the computer to be conceptualized as analogous to the human brain. The black-box concept in turn served as the basis for control machinery that could employ feedback mechanisms to steer processes.

In contrast, Markus Krajewski (2007, 2011) developed a spatial notion of information, in which information is data placed in a spatial dimension, accessible through diagrammatic operations of the human brain. He starts out from the librarian's folio of the 1700s, which gets cut into single sheets (or, later, cards) holding information on each book, categorized through space and typography and thus conveying information. The form, or table, further evolves into the punched card, from which tape and disk memory emerged as forms of spatial organization. Since this historical process brings data into formation, information for Krajewski is actually In-Formation.

While Wiener's notion of information is situated closer to the command-and-control structures of the war effort, Krajewski's In-Formation is more rooted in bio-political techniques of statistical data collection and evaluation in bureaucratic and managerial practices.

Artificial Des-Intelligence or Why machines will not take over the world. At least not now.

Part I: There is no Artificial Intelligence.

It’s pattern recognition, stupid!

A friend of mine recently exclaimed that, since her Siri speech recognition has become much better compared to speech recognition ten years ago, Artificial Intelligence (AI) now has the potential to rule the world. What if there is no Artificial Intelligence at all? What if the so-called AI revolution is in fact an enhanced form of pattern recognition? I agree that today's pattern recognition recognizes patterns in language, images, orientation and similar fields with better quality. But is pattern recognition equal to intelligence, even to human intelligence?

Pattern recognition is about perception, and it is about statistical inference over a body of data. These are two areas that have improved substantially over the past decade. Not only have businesses (like Amazon or Google) developed new techniques for distributed large-scale computing using consumer hardware in large quantities; they have also developed decentralized, large-scale solutions for data storage, labeled Big Data, which form the basis for more successful statistical inference. We see how both these quantitative changes have turned into a perceived new quality of enhanced pattern recognition (EPR).
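To make the claim concrete that pattern recognition is statistical inference rather than intelligence, here is a minimal sketch with invented toy data: a nearest-centroid classifier that "recognizes" which class a new point belongs to purely by computing statistics (means and distances) over labeled examples. Nothing in it resembles understanding.

```python
# Pattern recognition as statistical inference: classify a new point
# by its distance to the per-class means of labeled example data.
# The data below is purely illustrative.
from statistics import mean

def centroid(points):
    """Component-wise mean of a list of points."""
    return tuple(mean(axis) for axis in zip(*points))

def classify(point, labeled):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    centroids = {label: centroid(pts) for label, pts in labeled.items()}
    def dist(label):
        return sum((p - q) ** 2 for p, q in zip(point, centroids[label]))
    return min(centroids, key=dist)

data = {"a": [(0.0, 0.0), (1.0, 0.0)], "b": [(5.0, 5.0), (6.0, 5.0)]}
print(classify((0.5, 0.2), data))  # → a
```

More data and better hardware make this kind of inference more accurate, which is exactly the quantitative improvement the paragraph above describes, without adding anything one would call intelligence.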

Better algorithms to search unstructured information

Three factors play into the overall growth in automation. First of all, search engine technology has grown and become better at sifting through large amounts of structured and unstructured data, especially since Google introduced tools such as MapReduce and Bigtable in the mid-2000s, and since open-source software for data mining in unstructured information collections, such as Hadoop, became available.
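The MapReduce model mentioned above can be sketched in a few lines. This is a toy single-machine illustration of the programming model only, not Google's or Hadoop's implementation: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group, here counting words in unstructured text.

```python
# Toy MapReduce: word counting over unstructured documents.
# map emits (key, value) pairs; shuffle groups by key; reduce aggregates.
from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle_and_reduce(pairs):
    groups = defaultdict(list)
    for key, value in pairs:          # shuffle: group values by key
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

docs = ["Data in formation", "data as information"]
counts = shuffle_and_reduce(map_phase(docs))
print(counts["data"])  # → 2
```

The point of the real systems is that both phases run in parallel across thousands of commodity machines, which is precisely the quantitative scaling of consumer hardware described earlier.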

→ author: Francis Hunger,
published on: 2017-Nov-15

Algorithms are made by humans

The artist Francis Hunger presents his video installation Deep Love Algorithm at the recent exhibition »Mood Swings – On Mood Politics, Sentiment Data, Market Sentiments and Other Sentiment Agencies«. In conversation with curator Sabine Winkler, he explains why we should no longer talk fearfully of algorithms.

Sabine Winkler: Your video essay Deep Love Algorithm reconstructs the evolution and history of databases as a love story between Margret, a cyborg and writer, and Jan, a journalist. Margret embodies a kind of resistant position emerging from history, and also a linkage between the human and technology. The relation of human and database (technology), told through a failing love story, is a unique approach. How did this topic evolve, or rather, how is this relationship structured, and why does it fail?

Francis Hunger: Margret is not necessarily a cyborg; actually, it is only implied that she lives longer than her appearance suggests. This does not, however, exclude the possibility that she is a cyborg. The original idea for Margret was to create a figure who travels through time; a figure who, unlike the ahistorical Samantha from the movie Her or the film character Adaline, was and is part of political struggles.

Review of »Search Routines« by Neural

»These essays constitute an added value, providing an extended account of databases’ historical development, informing the reader about a strategic past that is often overlooked.« Read the full review on neural.it.

→ author: Francis Hunger,
published on: 2017-Jun-30

Inside the Data Bank. Privatizing profits, socializing losses.

Lecture at the Kulturen des Kuratorischen in May 2017 for the seminar »Leaking Bodies (and Machines)« by Julia Kurz and Anna Jehle. The session was joined by students of Peggy Buth who work on the topic of Big Data and post-internet art.

lecture and discussion about Data Banks

While the seminar was developed around reading Updating to Remain the Same: Habitual New Media (MIT Press) by Wendy Hui Kyong Chun, Francis Hunger was invited to add a more materialist perspective. Two lectures were delivered. The first developed a historical perspective on the development of computing technology in general and discussed relational databases in closer detail. It tried to give an overview of the state of the art in computing technology and to interconnect this with the social dimension of labor and with cultural questions such as the crisis of the public/private and the cultural function of the archive. The second lecture introduced basic concepts regarding electronic infrastructure as developed by Bowker, Ruhleder and Star, and ended with a discussion of data centers, the ideology of the »cloud«, and the function and meaning of data in »big data«. The subsequent discussion evolved into a rather broad and general argument about how the mediocene emerged and influences current social relations.

Big Data as permanent future

Under the banner of big data, states and enterprises are collecting data with the intention of using it some time in the future. Seen from the perspective of databases, humans are transformed into data bodies and data potentials that are to be saved and algorithmically processed. While states, for example, allow their police to experiment with systems to predict criminal activities and their political parties to mobilise the electorate for their election campaign using big data, enterprises such as Amazon, Allianz Insurance and Deutsche Bank use their customers’ data for strategic business development purposes.

This text describes the interdependence of table making and printing process innovations during the 19th century. It is largely based on Doron Swade's article »The 'unerring certainty of mechanical agency': machines and table making in the nineteenth century«, in Campbell-Kelly's »The History of Mathematical Tables: From Sumer to Spreadsheets« (2003). I had to shorten this part of my larger essay about tables, and for the sake of saving it somewhere, it is published here.

This workshop aims to establish a notion of computing history that is oriented towards database software. During the first day we look into diverse practices of database usage and their historical and social origins.
Knowledge production by way of the library, the collection, the processing of mathematical equations in the age of human computing, and bio-political practices such as statistics, data collection, resource management and the insurance business have all informed database technologies.
Lately, notions such as big data and large-scale search engines have been added to this set of practices. During the second day the discussion focuses on tables and relations that form, and put in form, the basis of data. And we go for a database dérive, which means we go outside to observe databases in their natural habitat, to sense the infrastructural dimension of database usage today.

Close reading session of Mark Poster's text on databases as electronic interpellations. We read the text paragraph by paragraph and subsequently discuss each section until it is understood.

On our way to the Berlin Stock Exchange building. As we learned, the stock exchange has moved out, because trading is computer-based now. A few people went into the hotel to find out what data needs to be stored in their booking system in order to get a room.

The Theater des Westens has an old-fashioned paper-card ticketing system for subscriptions. But you can buy tickets online via a third-party service.

Database Dérive: ID number at a door – electric switch room for the U2 subway line

Notes: This time the database dérive took place in February, which turned out to be cold. So we went mostly into buildings and asked the businesses about their database practices. One could do this individually on any day, but here it seems that the group's supportive presence encouraged inquiring.

Some of the participants were dissatisfied because »it seems with every workshop comes a we-go-into-the-city-and-experience-it-differently session«. That made me think, and I came to understand that a dérive may be more exciting for a diverse crowd, such as the one drawn by the Galerie Wedding database dérive last year, but less interesting for art students. However, what worked well on both occasions is that the sheer fact of moving around, instead of sitting in a room, fosters communication among the participants: it is informal, but the conversation still connects to the overall topic. Even the sceptics agree.

The close reading (paragraph by paragraph) of Mark Poster's essay was much appreciated by the participants, both because the text itself is inspiring and because reading and discussing paragraph by paragraph enabled an intense debate within the group. This session also benefited from the comments of the RCPP members, who provided additional input at a high academic level.

→ author: Francis Hunger,
published on: 2016-Jan-15

Universal Concept

The database became a universal concept for software, much as the von Neumann principle did for computing hardware.