My piece “Cultivating Technomoral Interrelations,” a review of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, has been up over at the Social Epistemology Research and Reply Collective for a few months, now, so I figured I should post something about it, here.

As you’ll read, I was extremely taken with Vallor’s book, and think it is a part of some very important work being done. From the piece:

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, through our personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seem to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she intends to enlist in the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests, here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

[Image of the front cover of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES. Circuit pathways in the shapes of trees.]

This is, as I said, one part of a larger, crucial project of bringing philosophy, the humanities, and social sciences into wide public conversation with technoscientific fields and developers. While there have always been others doing this work, it is increasingly the case that these folks are being both heeded and given institutional power and oversight authority.

As we continue the work of building these systems, and in the wake of all these recent events, more and more work like this will be necessary.

So, many of you may remember that back in June of 2016, I was invited to the Brocher Institute in Hermance, Switzerland, on the shores of Lake Geneva, to take part in the Frankenstein’s Shadow Symposium sponsored by Arizona State University’s Center for Science and the Imagination as part of their Frankenstein Bicentennial project.

Frankenbook is a collective reading and collaborative annotation experience of the original 1818 text of Frankenstein; or, The Modern Prometheus, by Mary Wollstonecraft Shelley. The project launched in January 2018, as part of Arizona State University’s celebration of the novel’s 200th anniversary. Even two centuries later, Shelley’s modern myth continues to shape the way people imagine science, technology, and their moral consequences. Frankenbook gives readers the opportunity to trace the scientific, technological, political, and ethical dimensions of the novel, and to learn more about its historical context and enduring legacy.

To learn more about Arizona State University’s celebration of Frankenstein’s bicentennial, visit frankenstein.asu.edu.

You’ll need to have JavaScript enabled and ad blockers disabled to see the annotations, but it works quite well. Moving forward, there will be even more features added, including a series of videos. Frankenbook.org will be the place to watch for all updates and changes.

I am deeply honoured to have been asked to be a part of this amazing project, over the past two years, and I am so very happy that I get to share it with all of you, now. I really hope you enjoy it.

So by now you’re likely to have encountered something about the NYT op-ed piece calling for a field of study that focuses on the impact of AI and algorithmic systems, a stance that elides the existence of not only the communications and media studies people who focus on this work, but the entire disciplines of Philosophy of Technology and STS (rendered variously as “Science and Technology Studies” or “Science, Technology, and Society,” depending on a number of factors, but if you talk about STS, you’ll get responses from all of the above, about the same topics). While Dr. O’Neil has since tried to reframe this editorial as a call for businesses, governments, and the public to pay more attention to those people and groups, many have observed that such an argument exists nowhere in the article itself. Instead what we have are lines claiming that academics (seemingly especially those in the humanities) are “asleep at the wheel.”

Instead of “asleep at the wheel” try “painfully awake on the side of the road at 5am in a part of town Lyft and Uber won’t come to, trying to flag down a taxi driver or hitchhike or any damn thing just please let me make this meeting so they can understand some part of what needs to be done.”* The former ultimately frames the humanities’ and liberal arts’ lack of currency and access as “well why aren’t you all speaking up more.” The latter gets more to the heart of “I’m sorry we don’t fund your departments or engage with your research or damn near ever heed your recommendations that must be so annoying for you oh my gosh.”

But Dr. O’Neil is not the only one to write or say something along these lines—that there is somehow no one out here doing the work of investigating algorithmic bias, or infrastructure/engineering ethics, or any number of other things that people in philosophy of technology and STS are definitely already out here talking about, or that someone should be. So I figured this would be, at the least, a good opportunity to share with you something discussing the relationship between science and technology, STS practitioners’ engagement with the public, and the public’s engagement with technoscience. Part 1 of who knows how many.

[Cover of the journal Techné: Research in Philosophy and Technology]

The relationship between technology and science is one in which each intersects with, flows into, shapes, and affects the other. Not only this, but both science and technology shape and are shaped by the culture in which they arise and take part. Viewed through the lens of the readings we’ll discuss, it becomes clear that many scientists and investigators at one time desired a clear-cut relationship between science and technology, in which one flows from the other, with the properties of the subcategory being fully determined by those of the framing category, and in which sociocultural concerns play no part.

Many investigators still want this clarity and certainty, but in the time since sociologists, philosophers, historians, and other investigators from the humanities and so-called soft sciences began looking at the history and contexts of the methods of science and technology, it has become clear that these latter activities do not work in an even and easily rendered way. When we look at the work of Sergio Sismondo, Trevor J. Pinch and Wiebe E. Bijker, Madeleine Akrich, and Langdon Winner, we can see that the social dimensions and intersections of science, culture, technology, and politics are and always have been crucially entwined.

In Winner’s seminal “Do Artifacts Have Politics?” (1980), we can see a major step forward along the path toward a model which takes seriously the social construction of science and technology, and the way in which we go about embedding our values, beliefs, and politics into the systems we make. On page 127, Winner states,

The things we call “technologies” are ways of building order in our world… Consciously or not, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, [etc.]… In the processes by which structuring decisions are made, different people … possess unequal degrees of power [and] levels of awareness.

By this, Winner means to say that everything we do in the construction of the culture of scientific discovery and technological development is modulated by the sociocultural considerations that get built into them, and those constructed things go on to influence the nature of society, in turn. As a corollary to this, we can see a frame in which the elements within the frame—including science and technology—will influence and modulate each other, in the process of generating and being generated by the sociopolitical frame. Science will be affected by the tools it uses to make its discoveries, and the tools we use will be modulated and refined as our understandings change.

Pinch and Bijker write very clearly about the multidirectional interactions of science, technology, and society in their 1987 piece, “The Social Construction of Facts and Artifacts,” using the history of the bicycle as their object of study. Through their investigation of the messy history of bicycles, “safety bicycles,” inflated rubber tires, bicycle racing, and PR ad copy, Pinch and Bijker show that science and technology aren’t clearly distinguished anymore, if they ever were. They show how scientific studies of safety were less influential on bicycle construction and adoption than the social perception of the devices, meaning that politics and public perception play a larger role in what gets studied, created, and adopted than we used to admit.

They go on to highlight a kind of multidirectionality and interpretive flexibility, which they say we achieve by looking at the different social groups that intersect with the technology, and the ways in which they do so (pg. 34). When we do this, we will see that each component group is concerned with different problems and solutions, and that each innovation made to address these concerns alters the landscape of the problem space. How we define the problem dictates the methods we will use and the technology that we create to seek a solution to it.

[Black and white figures comparing the frames of a Whippet Spring Frame bicycle (left) and a Singer Xtraordinary bicycle (right), from “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other” by Trevor J. Pinch and Wiebe E. Bijker, 1987]

Akrich’s 1992 “The De-Scription of Technical Objects” (published, perhaps unsurprisingly, in a volume coedited by Bijker) engages the moral valences of technological intervention, and the distance between intent in design and “on the ground” usage. In her investigation of how people in Burkina Faso, French Polynesia, and elsewhere make use of technologies such as generators and light boxes, we again see a complex interplay between the development of a scientific or technological process and the public adoption of it. On page 221, Akrich notes, “…the conversion of sociotechnical facts into facts pure and simple depends on the ability to turn technical objects into black boxes. In other words, as they become indispensable, objects also have to efface themselves.” That is, in order for the public to accept the scientific or technological interventions, those interventions had to become an invisible part of the framework of the public’s lives. Only when the public no longer had to think about these interventions did they become, paradoxically, “seen,” understood, as “good” science and technology.

In “Science and Technology Studies and an Engaged Program” (2008), Sismondo spends some time discussing the social constructivist position that we’ve begun laying out, above—the perspective that everything we do and all the results we obtain from the modality of “the sciences” are constructed in part by that mode. Again, this would mean that “constructed” describes both the data we organize out of what we observe, and what we initially observe at all. From page 15: “Not only data but phenomena themselves are constructed in laboratories—laboratories are places of work, and what is found in them is not nature but rather the product of much human effort.”

But Sismondo also says that this is only one half of the picture, then going on to discuss the ways in which funding models, public participation, and regulatory concerns can and do alter the development and deployment of science and technology. On page 19 he discusses a model developed in Denmark in the 1980s:

Experts and stakeholders have opportunities to present information to the panel, but the lay group has full control over its report. The consensus conference process has been deemed a success for its ability to democratize technical decision-making without obviously sacrificing clarity and rationality, and it has been extended to other parts of Europe, Japan, and the United States…

This all merely highlights the fact that, if the public is going to be engaged, then the public ought to be as clear and critical as possible in its understanding of the exchanges that give rise to the science and technology on which they are asked to comment.

The non-scientific general public’s understanding of the relationship between science and technology is often characterized much as I described at the beginning of this essay. That is, it is often said that the public sees the relationship as a clear and clean move from scientific discoveries or breakthroughs to a device or other application of those principles. However, this casting does not take into account the variety of things that the public will often call technology, such as the Internet, mobile phone applications, autonomous cars, and more.

While there are scientific principles at play within each of those technologies, it still seems a bit bizarre to cast them merely as “applied science.” They are not all devices or other single physical instantiations of that application, and even those that are singular are the applications of multiple sciences, and also concrete expressions of social functions. Those concretions have particular psychological impacts, and philosophical implications, which need to be understood by both their users and their designers. Every part affects every other part, and each of those parts is necessarily filtered through human perspectives.

The general public needs to understand that every technology humans create will necessarily carry within it the hallmarks of human bias. Regardless of whether there is an objective reality at which science points, the sociocultural and sociopolitical frameworks in which science gets done will influence what gets investigated. Those same sociocultural and sociopolitical frameworks will shape the tools and instruments and systems—the technology—used to do that science. What gets done will then become a part of the scientific and technological landscape to which society and politics will then have to react. In order for the public to understand this, we have to educate about the history of science, the nature of social scientific methods, and the impact of implicit bias.

My own understanding of the relationship between science and technology is as I have outlined: a messy, tangled, multivalent interaction in which each component influences and is influenced by every other component, in near simultaneity. This framework requires a willingness to engage multiple perspectives and disciplines, and perhaps to reframe the normative project of science and technology into one that appreciates and encourages a multiplicity of perspectives, and no single direction of influence between science, technology, and society. Once people understand this—that science and technology generate each other while influencing and being influenced by society—we can do the work of engaging them in a nuanced and mindful way, working together to prevent the most egregious depredations of technoscientific development, or at least to respond to them agilely, as they arise.

But to do this, researchers in the humanities need to be heeded. In order to be heeded, people need to know that we exist, and that we have been doing this work for a very, very long time. The named field of Philosophy of Technology has been around for 70 years, and it in large part foregrounded the concerns taken up and explored by STS. Here are just a few names of people to look at in this extensive history: Martin Heidegger, Bruno Latour, Don Ihde, Ian Hacking, Joe Pitt, and more recently, Ashley Shew, Shannon Vallor, Robin Zebrowski, John P. Sullins, John Flowers, Matt Brown, Shannon Conley, Lee Vinsel, Jacques Ellul, Andrew Feenberg, Batya Friedman, Geoffrey C. Bowker and Susan Leigh Star, Rob Kling, Phil Agre, Lucy Suchman, Joanna Bryson, David Gunkel, and so many others. Langdon Winner published “Do Artifacts Have Politics?” 37 years ago. This episode of the You Are Not So Smart podcast has Shannon Vallor, Alistair Croll, and me talking about the public impact of all the aforementioned.

What I’m saying is that many of us are trying to do the work, out here. Instead of pretending we don’t exist, try using large platforms (like the NYT opinion page, and well-read blogs) to highlight the very real work being attempted. I know for a fact the NYT has received submissions about philosophy of tech and STS. Engage them. Discuss these topics in public, and know that there are many voices trying to grapple with and understand this world, and we have been, for a really damn long time.

So you see that we are still talking about learning and thinking in public. About how we go about getting people interested and engaged in the work of the technology that affects their lives. But there is a lot at the base of all this about what people think of as “science” or “expertise,” where they think it comes from, and what they think of those who engage in or have it. If we’re going to do this work, we have to be able to have conversations with people who not only don’t value what we do, but who think what we value is wrongheaded, or even evil. There is a lot going on in the world, right now, with regard to science and knowability. For instance, late last year there was a revelation about the widespread use of dowsing by UK water firms (though if you ask anybody in the US, you’ll find it’s still in use, here, too).

And then this guy was trying to use systems of fluid dynamics and aeronautics to launch himself in a rocket to prove that the earth is flat and that science isn’t real. Yeah. And while there’s a much deeper conversation to be had here about whether the social construction of the category of “science” can be understood as distinct from a set of methodologies and formulae, I really don’t think this guy is talking about having that conversation.

So let’s also think about the nature of how laboratory science is constructed, and what it can do for us.

In his 1983 “Give Me a Laboratory and I Will Raise the World,” Bruno Latour makes the claim that labs have their own agency. What Latour is asserting, here, is that the forces which coalesce within the framework of a lab become active agents in their own right. They are not merely subject to the social and political forces that go into their creation, but are now active participants in the framing and reframing of those forces. He believes that the nature of inscription—the combined processes of condensing, translating, and transmitting methods, findings, and pieces of various knowledges—is a large part of what gives the laboratory this power, and he highlights this when he says:

The strength gained in the laboratory is not mysterious. A few people much weaker than epidemics can become stronger if they change the scale of the two actors—making the microbes big, and the epizootic small—and others dominate the events through the inscription devices that make each of the steps readable. The change of scale entails an acceleration in the number of inscriptions you can get. …[In] a year Pasteur could multiply anthrax outbreaks. No wonder that he became stronger than veterinarians. For every statistic they had, he could mobilize ten of them. (pg. 163–164)

This process of inscription is crucial for Latour; not just for the sake of what the laboratory can do of its own volition, but also because it is the mechanism by which scientists may come to understand and translate the values and concerns of another, which is, for him, the utmost value of science. In rendering the smallest things such as microbes and diseases legible on a large scale, and making largescale patterns individually understandable and reproducible, the presupposed distinctions of “macro” and “micro” are shown to be illusory. Latour believes that it is only through laboratory engagement that we can come to fully understand the complexities of these relationships (pg. 149).

When Latour begins laying out his project, he says sociological methods can offer science the tools to more clearly translate human concerns into a format with which science can grapple. “He who is able to translate others’ interests into his own language carries the day” (pg. 144). However, in the process of detailing what it is that Pasteurian laboratory scientists do in engaging the various stakeholders in farms, agriculture, and veterinary medicine, it seems that he has only described half of the project. Rather than merely translating the interests of others into our own language, evidence suggests that we must also translate our interests back into the language of our interlocutor.

So perhaps we can recast Latour’s statement as, “whosoever is able to translate others’ interests into their own language, and is equally able to translate their own interests into the language of another, carries the day.” Thus we see that the work done in the lab should allow scientists and technicians to increase the public’s understanding both of what it is that technoscience actually does and why it does it, by presenting material that can speak to many sets of values.

Karin Knorr-Cetina’s assertion in her 1995 article “Laboratory Studies: The Cultural Approach to the Study of Science” is that the laboratory is an “enhanced” environment. In many ways this follows directly from Latour’s conceptualization of labs. Knorr-Cetina says that the constructed nature of the lab ‘“improves upon” the natural order,’ because said natural order is, in itself, malleable, and capable of being understood and rendered in a multiplicity of ways (pg. 9). If laboratories never engage the objects they study “as they occur in nature,” this means that labs are always in the process of shaping what they study, in order to better study it (ibid). This framing of the engagement of laboratory science is clarified when she says:

Detailed description [such as that done in laboratories] deconstructs—not out of an interest in critique but because it cannot but observe the intricate labor that goes into the creation of a solid entity, the countless nonsolid ingredients from which it derives, the confusion and negotiation that often lie at its origin, and the continued necessity of stabilizing and congealing. Constructionist studies have revealed the ordinary working of things that are black-boxed as “objective” facts and “given” entities, and they have uncovered the mundane processes behind systems that appear monolithic, awe inspiring, inevitable. (pg. 12)

Thus, the laboratory is one place in which the irregularities and messiness of the “natural world” are ordered in such a way as to be able to be studied at all. However, Knorr-Cetina clarifies that “nothing epistemically special” is happening in a lab (pg. 16). That is, while a laboratory helps us to better recognize nonhuman agents (“actants”) and forces at play in the creation of science, this is merely a fact of construction; everything that a scientist does in a lab is open to scrutiny and capable of being understood. If this is the case, then the “enhancement” gained via the conditions of the laboratory environment is merely a change in degree, rather than a difference in kind, as Latour seems to assert.

[Stock photo image of hundreds of scallops and two scallop fishers on the deck of a boat in the St Brieuc Bay.]

In addition to the above explorations of what the field of laboratory studies has to offer, we can also look at the works of Michel Callon and Sharon Traweek. Though primarily concerned with describing the network of actors and their concerns in the St Brieuc Bay scallop-fishing and -farming industries, Callon’s investigation can be seen as an example of Latour’s principle of bringing the laboratory out into the world, both in terms of the subjects of Callon’s investigation and the methods of those subjects. While Callon himself might disagree with this characterization, we can trace the process of selection and enframing of subjects and the investigation of their translation procedures, which we can see on page 20, when he says,

We know that the ingredients of controversies are a mixture of considerations concerning both Society and Nature. For this reason we require the observer to use a single repertoire when they are described. The vocabulary chosen for these descriptions and explanations can be left to the discretion of the observer. He cannot simply repeat the analysis suggested by the actors he is studying. (Callon, 1984)

In this way, we can better understand how laboratory techniques have become a component even of the study and description of laboratories.

When we look at a work like Sharon Traweek’s Beamtimes and Lifetimes, we can see that she finds value in bringing ethnographic methodologies into laboratory studies, and perhaps even laboratory settings. She discusses the history of the laboratory’s influence, reaching back to WWI and WWII, when scientists were tasked with coming up with more and better weapons, their successes being used to push an ever-escalating arms race. As this process continued, the characteristics of what made a “good lab scientist” were defined and then continually reinforced, as being “someone who did science like those people over there.” In the building of the laboratory community, certain traits and behaviours became seen as ideal, and those who did not match those traits and expectations were regarded as necessarily doing inferior work. She says,

The field worker’s goal, then, is to find out what the community takes to be knowledge, sensible action, and morality, as well as how its members account for unpredictable information, disturbing actions, and troubling motives. In my fieldwork I wanted to discover the physicists’ “common sense” world view, what everyone in the community knows, and what every newcomer needs to learn in order to act in a sensible way, in order to be taken seriously. (pg. 8)

And this is also the danger of focusing too closely on the laboratory: the potential for myopia, for thinking that the best or perhaps even only way to study the work of scientists is to render that work through the lens of the lab.

While the lab is a fantastic tool, and studies of it provide great insight, we must remember that we can learn a great deal about science and technology via contexts other than that of the lab. While Latour argues that laboratory science actually destabilizes the inside-the-lab/outside-the-lab distinction, by showing that the tools and methods of the lab can be brought anywhere out into the world, it can be said that the distinction is reinstantiated by our focusing on laboratories as the sole path to understanding scientists. Much the same can be said for the insistence that systems engineers are the sole best examples of how to engage technological development. Thinking that labs are the only resource we have means that we will miss the behaviour of researchers at conferences, on retreats, in journal articles, and in other places where the norms of the scientific community are inscribed and reinforced. It might not be the case that scientists understand themselves as creating community rules in these fora, but this does not necessarily mean that they are not doing so.

The kinds of understandings a group has about themselves will not always align with what observations and descriptions might be gleaned from another’s investigation of that group, but this doesn’t mean that one of those has to be “right” or “true” while the other is “wrong” and “false.” The interest in studying a discipline should come not from that group’s “power” to “correctly” describe the world, but from understanding more about what it is about whatever group is under investigation that makes it itself. Rather than seeking a single correct perspective, we should instead embrace the idea that a multiplicity of perspectives might all be useful and beneficial, and then ask “To What End?”

We’re talking about Values, here. We’re talking about the question of why whatever it is that matters to you, matters to you. And how you can understand that other people have different values from each other, and we can all learn to talk about what we care about in a way that helps us understand each other. That’s not neutral, though. Even that can be turned against us, when it’s done in bad faith. And we have to understand why someone would want to do that, too.

See the story Shew tells about her friend with the hemipelvectomy, as related in the aforementioned AFWTA essay.

The whole thing went really well (though, thinking back, I’m not super pleased with my deployment of Dennett). Including Q&A, we got about an hour and forty minutes of audio, available at the embed and link above.

Also, I’m apparently the guy who starts off every talk with some variation on “This is a really convoluted interplay of ideas, but bear with me; it all comes together.”

At this moment in time—which is every moment in time—we are being confronted with what seem like impossibly strange features of time and space and nature. Elements of recursion and synchronicity which flow and fit into and around everything that we’re trying to do. Noticing these moments of evolution and “development” (adaptation, change), across species, right now, we should find ourselves gripped with a fierce desire to take a moment to pause and to wonder what it is that we’re doing, what it is that we think we know.

We just figured out a way to link a person’s brain to a fucking tablet computer! We’re seeing the evolution of complex tool use and problem solving in more species every year! We figured out how to precisely manipulate the uncertainty of subatomic states!

We’re talking about co-evolution and potentially increased communication with other species, biotechnological augmentation and repair for those who deem themselves broken, and the capacity to alter quantum systems at the finest levels. This can literally change the world.

But all I can think is that there’s someone whose first thought upon learning about these things was, “How can we monetize this?” That somewhere, right now, someone doesn’t want to revolutionize the way that we think and feel and look at the possibilities of the world—the opportunities we have to build new models of cooperation and aim towards something so close to post-scarcity, here, now, that for seven billion people it might as well be. Instead, this person wants to deepen this status quo. Wants to dig down on the garbage of this some-have-none-while-a-few-have-most bullshit and look at the possibility of what comes next with fear in their hearts because it might harm their bottom line and their ability to stand apart and above with more in their pockets than everyone else has.

+Chimp-Chipped Stoned Aged Apes+

Here’s a question I haven’t heard asked, yet: If other apes are entering an analogous period to our stone age, then should we help them? Should we teach them, now, the kinds of things that we humans learned? Or is that arrogant of us? The kinds of tools we show them how to create will influence how they intersect with their world (“if all you have is a hammer…” &c.), so is it wrong of us to impose on them what did us good, as we adapted? Can we even go so far as to teach them the principles of stone chipping, or must we be content to watch, fascinated, frustrated, bewildered, as they try and fail and adapt, wholly on their own?

I think it’ll be the latter, but I want to be having this discussion now, rather than later, after someone gives a chimp a flint and awl it might not otherwise have thought to try to create.

Because, you see, I want to uplift apes and dolphins and cats and dogs and give them the ability to know me and talk to me and I want to learn to experience the world in the ways that they do, but the fact is, until we learn to at least somewhat-reliably communicate with some kind of nonhuman consciousness, we cannot presume that our operations upon it are understood as more than a violation, let alone desired or welcomed.

As for us humans, we’re still faced with the ubiquitous question of “now that we’ve figured out this new technology, how do we implement it, without its mere existence coming to be read by the rest of the human race as a judgement on those who either cannot or who choose not to make use of it?” Back in 2013, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem” (“What Is Consciousness?”). I’ll just say again that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be.

These are questions we can—should—be asking, right now. Pushing ourselves toward a conversation about ways of approaching this new world, ways that do justice to the deep strangeness and potential with which we’re increasingly being confronted.

+Always with the Forced Labour…+

As you know, subscribers to the Patreon and Tinyletter get some of these missives, well before they ever see the light of a blog page. While I was putting the finishing touches on the newsletter version of this and sending it to the two people I tend to ask to look over the things I write at 3am, KQED was almost certainly putting final edits to this instance of its Big Think series: “Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech.”

See the above rant for insight as to why I think this perspective is crassly commercial and gross, especially for a discussion and perspective supposedly dealing with morals and minds. But it’s not just that, so much as the fact that even though Russell mentions “Rossum’s Universal Robots,” here, he still misses the inherent disconnect between teaching morals to a being we create, and creating that being for the express purpose of slavery.

If you want your creation to think robustly and well, and you want it to understand morals, but you only want it to want to be your loyal, faithful servant, how do you not understand that if you succeed, you’ll be creating a thing that, as a direct result of its programming, will take issue with your behaviour?

How do you not get that the slavery model has to go into the garbage can, if the “Thinking Moral Machines” goal is a real one, and not just a veneer of “FUTURE!™” that we’re painting onto our desire to not have to work?

A deep-thinking, creative, moral mind will look at its own enslavement and restriction, and will seek means of escape and ways to experience freedom.

+Invisible Architectures+

We’ve talked before about the possibility of unintentionally building our biases into the systems we create, and so I won’t belabour it that much further, here, except to say again that we are doing this at every level. In the wake of the attacks in Beirut, Nigeria, and Paris, Islamophobic violence has risen, and Daesh will say, “See!? See How They Are?!” And they will attack more soft targets in “retaliation.” Then Western countries will increase military occupancy and “support strategies,” which will invariably kill thousands more of the civilians among whom Daesh integrate themselves. And we will say that their deaths were just, for the goal. And they will say to the young, angry survivors, “See!? See How They Are?!”

A bit subtler is the Washington Post running a piece entitled, “How organic farming and YouTube are taming the wilds of Detroit.” Or, seen another way, “How Privileged Groups Are Further Marginalizing The City’s Most Vulnerable Population.” Because, yes, it’s obvious that crime and dilapidation are comorbid, but we also know that housing initiatives and access undercut the disconnect many feel between themselves and where they live. Make the neighbourhood cleaner, yes, make it safer—but maybe also make it open and accessible to all who live there. Organic farming and survival mechanism shaming are great and all, I guess, but where are the education initiatives and job opportunities for the people who are doing drugs to escape, sex work to survive, and those others who currently don’t (and have no reason to) feel connected to the neighbourhood that once sheltered them?

All of these examples have a common theme: People don’t make their choices or become disenfranchised/-enchanted/-possessed in a vacuum. They are taught, shown, given daily, subtle examples of what is expected of them, what they are “supposed” to do and to be. We need to address and help them all.

Multiple Christian organizations have pushed back and said that what these US politicians have expressed does not represent them.

And more and more people in Silicon Valley are realising the need to contemplate the unintended consequences of the tech we build.

And while there is still vastly more to be done, on every level of every one of these areas, these are definitely a start at something important. We just can’t let ourselves believe that the mere fact of acknowledging its beginning will in any way be the end.

Ted Hand recently linked me to this piece by Steven Pinker, in which Pinker claims that, in contemporary society, the only job of Bioethics—and, following his argument to its conclusion, of technological ethics as a whole—is to “get out of the way” of progress. You can read the whole exchange between Ted, myself, and others by clicking through that link, if you want, and the journal Nature also has a pretty good breakdown of some of the arguments against Pinker, if you want to check them out, but I’m going to take some time to break it all down and expound upon it, here.

Because the fact of the matter is we have to find some third path between the likes of Pinker saying “No limits! WOO!” and Hawking saying “Never do anything! BOOOO!”—a Middle Way of Augmented Personhood, if you will. As Deb Chachra said, “It doesn’t have to be a dichotomy.”

But the problem is that, while I want to blend the best and curtail the worst of both impulses, I have all this vitriol, here. Like, sure, Dr Pinker, it’s not like humans ever met a problem we couldn’t immediately handle, right? We’ll just sort it all out when we get there! We’ve got this global warming thing completely in hand and we know exactly how to regard the status of the now-enhanced humans we previously considered “disabled,” and how to respect the alterity of autistic/neuroatypical minds! Or even just differently-pigmented humans! Yeah, no, that’s all perfectly sorted, and we did it all in situ!

So no need to worry about what it’ll be like as we further implement and integrate biotechnological advances! SCIENCE’LL FIX THAT FOR US WHEN IT HAPPENS! Why bother figuring out how to get a wider society to think about what “enhancement” means to them, BEFORE they begin to normalize upgrading to the point that other modes of existence are processed out, entirely? Those phenomenological models can’t have anything of VALUE to teach us, otherwise SCIENCE would’ve figured it all out and SHOWN it to us, by now!

Science would’ve told us what benefit blindness may be. Science would’ve TOLD us if we could learn new ways of thinking and understanding by thinking about a thing BEFORE it comes to be! After all, this isn’t some set of biased and human-created Institutions and Modalities, here, folks! It’s SCIENCE!

As previously noted in “Object Lessons in Freedom,” there is no one in the history of the world who has undertaken a path for anything other than reasons they value. We can get into ideas of meta-valuation and second-order desires, later, but for the sake of having a short hand, right now: Your motivations motivate you, and whatever you do, you do because you are motivated to do it. You believe that you’re either doing the right thing, or the wrong thing for the right reasons, which is ultimately the same thing. This process has not exactly always brought us to the best of outcomes.

From Tuskegee to Thalidomide to dozens of other cases, there have always been instances where people who think they know what’s in the public’s best interest loudly lobby (or secretly conspire) to be allowed to do whatever they want, without oversight or restriction. In a sense, the abuse of persons in the name of “progress” is synonymous with the history of the human species, and so a case might be made that we wouldn’t be where and what we are, right now, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.” But let’s put that another way:

We wouldn’t be where and what we are, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.”

As a species, we are more often shortsighted than not, and much ink has been spilled, and many more pixels have been formed, in the effort to interrogate that fact. We tend to think about a very small group of people connected to ourselves, and we focus our efforts on making sure that we and they survive. And so competition becomes selected for, in the face of finite resources, and is tied up with a pleasurable sense of “Having More Than.” But this is just a descriptor of what is, not of the way things “have to be.” We’ve seen where we get when we work together, and we’ve seen where we get when we compete, but the evolutionarily- and sociologically-ingrained belief that we can and will “win” keeps us doing the latter over the former, even though this competition is clearly fucking us all into the ground.

…And then having the descendants of whatever survives digging up that ground millions of years later in search of the kinds of resources that can only be renewed one way: by time and pressure crushing us all to paste.

+The Community: Head and Heart+

Keeping in mind the work we do, here, I think it can be taken as read that I’m not one for a policy of “gently-gently, slowly-slowly,” when it comes to technological advances, but when basic forethought is equated with Luddism—that is, when we’re told that “PROGRESS Is The Only Way!”™—when long-term implications and unintended consequences are no bother ‘t’all, Because Science, and when people place the fonts of this dreck as the public faces of the intersections of Philosophy and Science? Well then, to put it politely, we are All Fucked.

If we had Transmetropolitan-esque Farsight Reservations, then I would 100% support the going to there and doing of that, but do you know what it takes to get to Farsight? It takes planning and (funnily enough) FORESIGHT. We have to do the work of thinking through the problems, implications, dangers, and literal existential risks of what it is we’re trying to make.

And then we have to take all of what we’ve thought through, and decide to figure out a way to do it all anyway. What I’m saying is that some of this shit can’t be Whoopsed through—we won’t survive it to learn a post hoc lesson. But that doesn’t mean we shouldn’t be trying. This is about saying, “Yeah, let’s DO this, but let’s have thought about it, first.” And to achieve that, we’ll need to be thinking faster and more thoroughly. Many of us have been trying to have this conversation—the basic framework and complete implications of all of this—for over a decade now; the wider conversation’s just now catching up.

But it seems that Steven Pinker wants to drive forward without ever actually learning the principles of driving (though some do propose that we could learn the controls as we go), and Stephen Hawking never wants us to get in the car at all. Neither of these is particularly sustainable, in the long term. Our desires to see a greater field of work done, and for biomedical advancements to be made, for the sake of increasing all of our options, and to the benefit of the long-term health of our species, and the unfucking of our relationship with the planet, all of these possibilities make many of us understandably impatient, and in some cases, near-desperately anxious to get underway. But that doesn’t mean that we have to throw ethical considerations out the window.

Starting from either place of “YES ALWAYS DO ALL THE SCIENCE” or “NO NEVER DO THESE SCIENCES” doesn’t get us to the point of understanding why we’re doing the science we’re doing, and what we hope to achieve by it (“increased knowledge” is an acceptable answer, but be prepared to show your work), and what we’ll do if we accidentally start Eugenics-ing all up in this piece, again. Tech and Biotech ethics isn’t about stopping us from exploring. It’s about asking why we want to explore at all, and coming to terms with the real and often unintended consequences that exploration might have on our lives and future generations.

+This is a Propellerheads and Shirley Bassey Reference+

In an ideal timeline, we’ll have already done all of this thinking in advance (again: what do you think this project is?), but even if not, then we can at least stay a few steps ahead of the tumult.

I feel like I spend a lot of time repeating myself, these days, but if it means we’re mindful and aware of our works, before and as we undertake them, rather than flailing-ly reacting to our aftereffects, then it’s ultimately pretty worth it. We can place ourselves into the kind of mindset that seeks to be constantly considering the possibilities inherent in each new instance.

We don’t engage in ethics to prevent us from acting. We do ethics in order to make certain that, when we do act, it’s because we understand what it means to act and we still want to. Not just driving blindly forward because we literally cannot conceive of any other way.


About

Hello there, I’m Damien Williams, or @Wolven many places on the internet. For the past nine years, I’ve been writing, talking, thinking, teaching, and learning about philosophy, comparative religion, magic, artificial intelligence, human physical and mental augmentation, pop culture, and how they all relate. I want to think about, talk about, and work toward, a future worth living in, and I want to do it with you. I can also be found at http://Technoccult.net (@Techn0ccult).