I’ve published a new article in the journal Datenschutz Nachrichten. In this article, I argue that privacy and data protection theory and law should get rid of its fixation on „personal information“. Based on a historical analysis of this fixation, I illustrate its shortcomings. Finally, I propose to focus instead on „person-related decision“ as a new basic building block for privacy and data protection theory and law.

While the article is in German (and also bears a German title: „Transparenz und Berechenbarkeit vs. Autonomie- und Kontrollverlust: Die Industrialisierung der gesellschaftlichen Informationsverarbeitung und ihre Folgen“), there is an English abstract:

The far-reaching digitization of all areas of life is blatantly obvious, and its individual and societal consequences are the subject of a broad public and scientific debate. The concept of privacy with its traditional notion of a categorical separation between ‘private’ and ‘public’ is the wrong starting point to describe, analyze or explain these consequences. Instead, a thorough analysis must take as its starting point the processes of modern societal and especially organized information processing, the organizations and institutions they produce, as well as the power relationships between organizations and the datafied individuals, groups and institutions.

After the first industrialization (of physical or manual labor), a second industrialization is now taking place: that of ‘intellectual work’, i.e. of societal information processing. It undermines the old mechanisms of distribution and control of power in society and threatens the bourgeois society’s promise of liberty by structurally rescinding individual and societal areas of autonomy. This very development and its consequences are what Datenschutz is addressing. Its function is the maintenance of contingency for the structurally and informationally weak under the terms of the industrialization of societal information processing and against the superior normalization power of organizations.

Modern, organized social information processing – the new surveillance (Gary T. Marx) – is certainly different from older forms of surveillance: within the last three or four decades it has been steadily industrialized. The industrialization of information processing is the process of socialization of ‘mental functions’: they are taken from the context of the individual and put into formalized, machine-processable procedures (Wilhelm Steinmüller); it is a transformation of a subjective into an objective process (Andreas Boes et al. with reference to Karl Marx).

Against the backdrop of this industrialization process, the presentation will challenge two assumptions commonly held in privacy research and Surveillance Studies alike: first, that one can ignore the fundamental difference between a community and a society in analyzing surveillance, and second, that social interaction theory is an adequate means for the analysis of modern, industrialized surveillance.

The community–society (Gemeinschaft–Gesellschaft) dichotomy is a very basic sociological concept (Ferdinand Tönnies, Max Weber). Social collectives that are held together by emotional ties, personal social interactions, and implicit rules are called communities, whereas societies are held together by more rational decisions, indirect interactions, impersonal roles, and mechanisms such as contracts and explicit legal rules (Christoph Lutz and Pepe Strathoff). Additionally, due to the increase in complexity, modern society has developed from a segmentary to a stratified-hierarchical and then to a functionally differentiated order (Talcott Parsons, Niklas Luhmann). In contrast, many privacy and surveillance researchers seem to share a strong commitment to identity politics, seeking an abandonment of modern society and a return to community-style social collectives. The question is, however, whether this is an adequate starting point for analyzing modern surveillance.

Most privacy and surveillance theories also share a common theoretical foundation: Erving Goffman’s sociological role theory. According to Goffman, social interaction is “all the interaction which occurs throughout any one occasion when a given set of individuals are in one another’s continuous presence.” Addressing the relationship between organization and individual with this theory is highly problematic, especially if the organization has industrialized its own information processing and decision making – there simply is no interpersonal relationship between a user (or better: usee) and Google’s computers. It is therefore highly implausible that focusing on persons as possible attackers is adequate for understanding modern industrialized information processing.

In the presentation, I talked about three issues: (1) the relationship between the sociological concepts of community and society, (2) the relationship between privacy and surveillance theories based on interpersonal relationships and modern, organized social information processing practices, and (3) the relationship between the division of labor and the concept of informed consent in privacy theories and laws.

The conclusions of my presentation relate especially to the question whether and how we can compare “ancient and modern systems of surveillance”. With respect to the first issue mentioned above, I concluded that we can compare communities with communities, but we should not compare communities with societies. The conclusion for the second issue is very similar: we can compare sensitivities towards or actions against surveillance practices and surveillance systems, but we should not compare pre-industrialized and industrialized surveillance systems and practices. My third conclusion relates more to practical considerations for governing the privacy, surveillance and data protection problem in our society: We can – and we should – demand the education of the public to enhance its understanding of modern information processing, but we certainly should not create a (legal, technical, economic, social) protection system that works if and only if the data subjects understand modern surveillance practices and the risks they pose for individuals, groups, organizations and society.

I wrote a new short article which I submitted to DANA – Datenschutznachrichten, a quarterly journal on data protection issues published by the Deutsche Vereinigung für Datenschutz. The article will be published in the 3rd or 4th issue of 2015.

Against the backdrop of the history of the purpose limitation principle in privacy and data protection discourse and law, I examine and reevaluate this principle as an artifact of a specific operationalization of privacy and data protection in the law. The term artifact on the one hand refers to something human-made; on the other hand it denotes an – often disturbing – phenomenon that occurs as a result of something like the choice of the method of measurement in social research or the algorithm used in lossy image compression. Here both readings should be merged: I will show that the purpose limitation principle is a secondary product of previous operationalization decisions as well as explicitly human-made.

The historical operationalization decisions I survey include the informed consent principle first formulated by Ruebhausen and Brim in 1965, the „phase orientation“ of German data protection law developed by Steinmüller and his colleagues in 1971 and the specific design of controllability of information processing and decision making examined by Hoffmann in 1991.

I also show that the purpose limitation principle is – contrary to popular opinion – not outdated. From the very beginning, this principle has been consciously created as a normative, but counterfactual response of the law to modern data and information processing capabilities, which are fundamentally purposeless.

I wrote a new short article which I submitted to FIfF Kommunikation, a quarterly journal on computers, informatics and society published by the Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung. The article will be published in the 2nd issue of 2015, to be released in June or July.

Based on a thorough reading of draft versions of the German Bundesdatenschutzgesetz, committee records from the German Bundestag, and earlier literature on „legal cybernetics“, the „automation of public administration“ and „automation-friendly laws“ I pointed out how a legal provision originally demanding „Datenschutz by Design“ was transformed into a pure IT security provision in a very short time at the end of the 1970s. Surprisingly, this transformative move was initiated by German data protection commissioners – federal as well as state commissioners – in cooperation with „interessierten Kreisen der Wirtschaft“ („interested industry stakeholders“).

In my view, it comes as no surprise that most research sold today as „Datenschutz by Design“ or „Privacy by Design“ is actually nothing more than „Security by Design“.

The limit of 20,000 characters was rather harsh, even though I exceeded it. Worse still, the organizers asked me to include references to other privacy theories to clarify the relationship between Datenschutz and these privacy theories. As a result, I did not have enough space for the text I really wanted to write. In the end, I’m not really happy with the text. I still think I have made clear that it is possible to analyze modern, organized information processing as industrialized, and to analyze its consequences for society, organizations, groups and individuals alike.

I’m confident that I’ll have the chance to elaborate on this in the future.

For a festschrift for Rosemarie Will, a professor of Public Law, State and Legal Theory at Humboldt-Universität zu Berlin, I wrote a short article analyzing a previously ignored paradigm shift in the debate about data protection which took place in the 1970s.

While the privacy debate takes the object of protection – privacy, Privatheit, Privatsphäre, digital intimacy – as its starting point, the first generation of data protection scholars in Germany realized at the beginning of the 1970s that the analytical perspective needed to be changed fundamentally for a thorough analysis of the data protection problem. The starting point for the analysis should not be the outdated notion of categorically separate spheres, but the specific practices of modern organized information processing and their properties.

This is still true today. We need a sound theory of the information society, or at least a well-founded theory of modern social information processing in social relationships that are characterized by structural power imbalances: between individuals and groups on the one hand and organizations on the other, between small and large organizations, between local and central government or supranational entities, between the parliament and the judiciary on the one hand and the public administration on the other. Only then can we identify problems for which there is no place in a world of categorically separated spheres, problems that cannot even be addressed there, such as »Modellifizierung« (Wilhelm Steinmüller) or the tendencies towards the industrialization of information processing.

I just submitted a short abstract to a workshop on »Privacy and Quantifiability«, which is supposed to take place in February 2015. In this abstract, I argue that the concept of privacy is the wrong starting point for any understanding of the individual and societal consequences of modern information processing. Instead, key to any understanding is the insight that the far-reaching digitization of all aspects of life has thrown the categorical distinction between »public« and »private« into the dustbin of history. The starting point for an analysis should therefore be the very process of modern, organized information processing, the specific kind of organization it produces, and the relationship between the organization and the datafied individual, group or institution.

In my paper, I will try to present an analysis of the data protection problem, based particularly on previous work by Wilhelm Steinmüller and Martin Rost, that is appropriate to the state of the art of societal information processing in organizations.

[Update] 23 November 2014: Unfortunately, we did not receive enough abstracts that adequately addressed the – admittedly very demanding – topic of the workshop. Today, the organizers decided to take the necessary action and cancel the workshop. We’re very sorry for any inconvenience caused. [/Update]

Digitization is seizing all areas of life. Why, then, is the categorical separation between „public“ and „private“ still retained? Bourgeois society is a society, not a community. Sociologically, such a finding makes a significant difference. So why is the vast majority of privacy and data protection theories based on under-complex assumptions that presuppose a community-relatedness and community-boundedness of the individual? Many passages, to cite examples, use the pronoun „we“, an identifier for a community. Who exactly is meant by this consensus-sheltering „we“? One of the central features of modern social information processing is that it is increasingly industrialized. Why, then, are individual sensitivities and needs still chosen as the starting point for a problem analysis? And why is the focus primarily on persons as potential attackers? Why should different demands be made on public and private data processors if their organizational and technical practices of information processing are now largely the same? Is this distinction between „public“ and „private“ perhaps only an artifact planted by legal thinking?

The number of works describing and explaining privacy, Privatsphäre, Privatheit, surveillance or data protection is tremendous – their quality often scientifically questionable. Assumptions are often either not disclosed, historically outmoded, or based on a widespread misunderstanding of the information-technological and sociological foundations. The constellations of actors in the underlying theories do not, or only marginally, overlap with the constellations of actors observed in the phenomenon area. The participating actors are assumed to have properties such as rationality, knowledge or practices of information processing which do not necessarily coincide with the observable and the observed. The same applies to the objectives the various stakeholders pursue with their information processing. Finally, the question of how actors and social structures, how individuals and social systems come together, is not even recognized as a problem. In short, the current discussion on privacy, Privatsphäre, Privatheit, surveillance and data protection lacks sound theoretical foundations in the context of a global society in which organizations, even in the field of information processing, act largely in an industrialized manner. We do not believe that a theoretical study of these issues has become obsolete – quite the opposite.

In the workshop, we want to formulate quality requirements for a well-founded theory of data protection for the 21st century. What demands are to be made on such a theory and its genesis from a scientific – disciplinary as well as interdisciplinary – point of view? Which phenomenon areas shall be described and explained by the theory, and which shall not? Which actor constellations and what power relations shall be described and explained by the theory, and which shall not? What assumptions about the environment – society, organization, interaction, technology, processes – are allowed, and which are not?

Please submit your abstracts by 2 November 2014.

The workshop will be held in German and English. You should therefore be able to speak either German or English and to understand both. Abstracts and papers may be submitted in either German or English.

In February 2013, we organized a little workshop – the first in our series „Fundationes“ – on the history and the theory of data protection. A few months before Edward Snowden’s revelations about the NSA’s massive Internet surveillance, 16 scholars and practitioners from different disciplines met in Berlin for a very prolific discussion. More than a year later, we finally published the workshop proceedings, encompassing reviewed academic papers, introductory presentations, and a transcript of the workshop discussions.

Wilhelm Steinmüller (1934–2013) was a pioneer of the scientific study of the social effects and consequences of information technology. As a legal scholar, he first looked at the relationships between law and electronic data processing and coined the term „Legal Informatics“. Starting from the possibilities and conditions of the use of computers in the legal domain, he turned more and more to the impact of information technology. With this new perspective, he laid the foundation for the German data protection legislation. His analytical view subsequently expanded to a system-theoretical consideration of computer science and society as a whole.

When the news of Wilhelm Steinmüller’s death on February 1, 2013 broke, former employees, colleagues and friends met on 24 June 2013 in the European Academy Berlin for a commemoration ceremony. We agreed that Steinmüller’s life and work could best be appreciated by placing the various ideas he initiated in a contemporary context and by identifying future lines of development.

For this festschrift, to be presented at a symposium in memory of Wilhelm Steinmüller on May 22, 2014, I wrote a short paper analyzing the 1971 expertise on behalf of the German Federal Ministry of the Interior, „Grundfragen des Datenschutzes“ („Fundamental Questions of Data Protection“). This expertise framed the data protection problem as a threat to freedom and liberty and as a structural attack by data processors on the decision space of individuals, groups, and other less powerful entities that are datafied, computed, simulated, predicted. The German data protection legislation adopted the regulatory goals envisioned in the expertise as well as the legal architecture presented by Steinmüller, Lutterbeck and Mallmann.

In the paper, I analyzed the assumptions underlying Steinmüller’s analysis of the data protection problem and uncovered hidden assumptions as to the instrumental character of the computer and the rationality of the data processor. In addition, I reflected on the concept of information used by the authors and found it to be still up-to-date, especially in contrast to newer concepts of information presented in scientific literature for the analysis of organizational information processing in the information society. Finally, I analyzed the process-oriented model of information processing that was mirrored by the process-oriented architecture of the German data protection law, predating a very similar approach by Daniel Solove by pretty much 35 years.

In preparation for a workshop we organized last year, I wrote a short paper in German explaining that – and how – German data protection law is based on the assumption that all organized processing of personally identifiable information is causality-based. As correlation-based methods are now booming, real-world information processing increasingly clashes with legal requirements due to the law’s implicit assumptions.

Abstract: In recent years, correlation-based information processing methods have been booming. From the perspective of data protection, however, they are particularly problematic because they do not correspond to the implicit assumptions about information processes and information process design that have, since the 1970s, determined the specific form in which phase orientation was implemented in German data protection law and how the concept of necessity is to be understood. In addition to strengthening the principles of data avoidance and data minimization to replace the toothless concept of necessity, legal requirements would need to be reformulated under a goal-oriented approach. Additionally, if correlation-based methods are used, the requirements on the data subject’s consent, on the disclosure and use of generated data, and on data security need to be much higher than in the past.

Abstract: Most scholars, politicians, and activists follow individualistic theories of privacy and data protection. In contrast, some of the pioneers of data protection legislation in Germany, like Adalbert Podlech, Paul J. Müller, and Ulrich Dammann, used a systems theory approach. Following Niklas Luhmann, the aim of data protection is (1) maintaining the functional differentiation of society against the threats posed by the possibilities of modern information processing, and (2) countering undue information power by organized social players. It is, therefore, no surprise that the first data protection law in the German state of Hesse contained rules to protect the individual as well as the balance of power between the legislative and the executive body of the state. Social networks like Facebook or Google+ do not only endanger their users by exposing them to other users or the public. They constitute, first and foremost, a threat to society as a whole by collecting information about individuals, groups, and organizations from different social systems and combining them in a centralized database. They transgress the boundaries between social systems that act as a shield against total visibility and transparency of the individual and protect the freedom and the autonomy of the people. Without enforcing structural limitations on the organizational use of collected data by the social network itself or the company behind it, social networks pose the worst totalitarian peril for western societies since the fall of the Soviet Union.

The modern history of privacy and data protection has spawned thousands of books and tens of thousands of scholarly articles. Most of them are either useless or simple rewritings of things that should already be known but often are not.

For a good start into the topic of data protection I recommend reading the following scholarly works:

Bloustein, Edward J. (Dec. 1964). “Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser”. In: New York University Law Review 39, pp. 962–1007.

Westin, Alan F. (1966). “Science, Privacy, and Freedom: Issues and Proposals for the 1970’s. Part I—The Current Impact of Surveillance on Privacy”. In: Columbia Law Review 66.6, pp. 1003–1050.