The field of artificial intelligence has a long and underappreciated prehistory in the philosophy of rational thought and computation (Dreyfus 2008). In Western culture, idealised conceptions of mental behaviour such as 'intelligence' have long been used to justify methods of political domination and control (Cave 2017). In ancient Greece, Aristotle advocated for the subjugation of women, ethnic groups, animals and plant life on the premise that the ability to reason was naturally endowed only to men of good birth. As Alison Adam (1995, 1998) has shown, similar thinking underlies pivotal work in the logicist tradition from Descartes, Kant, and George Boole, which inspired the epistemological concerns of early AI research. In the past few years, significant scholarship has exposed contemporary forms of epistemic and other social injustice produced through the widespread deployment of AI technologies, algorithms, and automated decision-making in commercial and government sectors (e.g. O'Neil 2016, Eubanks 2017, Broussard 2018, Buolamwini and Gebru 2018, Keyes 2018). Questions of power, control and inequity recur across historical periods, even as the epistemologies and technologies through which they manifest have changed.

In the mid-twentieth century, scientific conceptions of intelligence in the United States fell into orbit with another powerful new tool for control: the electronic digital computer. Of the various disciplines that emerged from this pairing, including cognitive science and computer science, the emergence of artificial intelligence and its relation to societal control remains particularly understudied by professional historians. In place of such analysis we have a set of rich but narrow participant accounts (Nilsson 2010, Newell 2000, McCarthy 2016) and two popular surveys (McCorduck 1979, Crevier 1993) derived primarily from participant interviews. Although these accounts are rich with insight into the intellectual structures of the field, reviewers and commentators have noted that they neglect to situate the development of these scientific ideas within the geopolitical, military, economic and cultural contexts from which they emerged (Mirowski 2003, Cohen-Cole 2008, Katz 2017).

In contrast to AI, the development of the computer has attracted significant historical attention. Yet, as with the history of AI, most accounts of computer history during the twentieth and twenty-first centuries have tended to focus on questions of concepts and engineering rather than questions of political agency and societal consequence (Edwards 1997). When such accounts invoke precursors, they link Plato's investigations of knowledge to Leibniz's study of rationalism, to Ada Lovelace's musings on Charles Babbage's Analytical Engine, or to Alan Turing's 1936 paper, 'On Computable Numbers'. Standard narratives thereby credit precursors with having laid the conceptual foundations for both digital computing and AI by demonstrating how 'thought' could be formalised and, later, mechanised. Only recently have alternative interpretations of the history of information technologies come to light.

Jon Agar's 2003 book The Government Machine: A Revolutionary History of the Computer argued that the early history of computing should be read as a history of public administration and vice versa. 'Several of the most important moments in the history of information technology revolve, rather curiously, around attempts to capture, reform, or redirect governmental action,' he argues, pointing to key contributions by Babbage and Turing in the nineteenth and twentieth centuries as examples (Agar 2003, pp. 7, 41, 69). Babbage described his Analytical Engine as a political machine designed to facilitate governance. Turing, likewise, took inspiration for his universal 'machine' from the general-purpose information processing structure of the British Civil Service bureaucracy in which his father worked. After having built what has been called the first working prototype of AI in 1955 (McCorduck 1979, Crevier 1993), Herbert A. Simon acknowledged this subtle but profound tie between the logics of social organization and information technology when he credited Adam Smith as the inventor of digital computing (Penn, PhD thesis). Staley has recently argued that we should complement long-standing studies that trace the precedents of cybernetics to automata and bodily hybridity with a recognition that, in the interwar period, concepts of mechanisation were reframed around social organisation; in the portrayal of cities and social systems as images of machine life and forms of mechanism, we can see the sinews of new material and social hybridities (Staley 2018).

Similarly, over the last three decades, a growing number of scholars have attended to the computer as a socio-technical object that must be studied as such. This reorientation uncovers the human agency allied with the technical, the computer's metaphoric influence and its role in the structuring of social behaviour (Adam 1995, 1998, Ensmenger 2000, Hicks 2017, Miltner 2018). This shift in the historiographical tides has yet to reach AI (Dick et al. forthcoming). Closer examination of the roots of artificial intelligence in the mid-twentieth century reveals conceptual debts to a range of disciplines oriented toward the small- and large-scale structuring of social behaviour using mathematics (Penn, PhD thesis). This includes but is not limited to operations research, management science, public administration, and personality research. The political implications of this nexus are not well understood, nor are the implications of the U.S. military's long patronage of the field, which dates back to the field's origins in the mid-twentieth century. The Logic Theory Machine, Simon's prototype of AI, was funded by the U.S. Air Force to function 'intelligently' by taking decisions in the style of a for-profit corporation, rather than a human brain (Penn, PhD thesis). The complex politics that stem from this entanglement of biological, sociological, and military notions of 'intelligence' require sustained inquiry to inform understanding of how 'artificial' versions of intelligence are now shaping and scripting logics of cultural organization.

Information and statistical technologies have grown dramatically more complex since the Cold War. So, too, have their politics. In The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Harvard business theorist Shoshana Zuboff has argued that during this period of maturation, a degenerative new form of capitalism has emerged in which global commerce makes normal use of mass surveillance, the industrial-scale manipulation of behaviour, and the hegemonic commodification of reality (Zuboff 2019). At the heart of this system lie the predictive analytical capacities of machine learning, deep learning, and other sub-domains of AI developed since the 1950s. The Facebook-Cambridge Analytica scandal, the Myanmar genocide, and Brexit are a few high-profile but very different instances of how such informational infrastructure can be abused. The AI research community has responded by examining how notions of fairness, transparency, and accountability could be built into new AI systems directly. The technology industry, likewise, has invested heavily in the creation of normative principles for how to 'democratise' AI and manage its 'disruptive' excesses (Hagendorff 2019). A survey of these principles found them to be 'closer to conventional business ethics than more radical traditions of social and political justice active today, such as prison abolitionism or workplace democracy' (Greene et al. 2019). In the words of tech critic Anand Giridharadas, such efforts aim to 'optimise the status quo' rather than to improve on it (2018).

This is the context into which our Seminar intervenes and to which it will make a significant contribution. In our view, debate about the future impact of AI has gravitated toward a 'cure' mentality at the expense of more preventative measures. Rather than question the historical, sociological or economic systems that bring about unsustainable technological dependencies, analysts typically focus on how to address what happens when these tools malfunction. This largely reactive mode is partially determined by the nature of AI technologies, which are often mathematically uninterpretable, but also by the systems, private or governmental, that control their production and deployment. The workings of these technologies and these systems are often inaccessible to knowledge, both metaphorically and literally 'black boxed'. Response is not sufficient. The very terms in which we engage with the structures and effects of AI technologies must be changed. This Seminar's comparative and critical inquiry aims to do so by revealing and examining the historical framework that underlies them. Bringing this history to light can then inform other possible sites of, and action for, change, demonstrating how alternative histories of intelligent systems empower the imagination and pursuit of alternative futures of AI and its implications for society.

The need for such accounts is urgent as AI's impact is increasingly felt in daily life. Instances of algorithmic bias have already been registered in criminal sentencing (Eubanks 2017), insurance premiums (O'Neil 2016), search engine results (Noble 2018), and deployments of facial recognition technologies (Buolamwini and Gebru 2018, Keyes 2018). This Seminar will explore how seemingly new challenges such as these converge with historical accounts of political manipulation and oppression. Looking forward, reports from Oxford University, the World Bank, and the International Bar Association estimate that as many as 47–66% of jobs could be at risk of automation over the next two decades, with developing countries hit hardest by the rapid decrease in demand for low-skill human labour (Osborne and Frey 2013, Kozul-Wright 2016, Wisskirchen et al. 2017). Alongside this disruption, AI will add an estimated 15.7 trillion dollars to global GDP by 2030 (PwC 2017), testing the vitality of Western democracies by feeding income inequality, its toxic stressor.

To provide rigorous critical perspectives on the promise and perils of AI, we must demonstrate a genealogy of its power. With this Seminar, we will render such critiques more cogent by picking apart the fault lines between promise and power, making a field most often characterised by its future more readily knowable through an understanding of its past.