Technology, Culture, and Ethics

Apps

If you’re not paying attention to Evan Selinger’s work, you’re missing out on some of the best available commentary on the ethical implications of contemporary technology. Last week I pointed you to his recent essay, “The Outsourced Lover,” on a morally questionable app designed to automate romantic messages to your significant other. In a more recent editorial at Wired, “Today’s Apps Are Turning Us Into Sociopaths,” Selinger provides another incisive critique of an app that similarly automates aspects of interpersonal relationships.

Selinger approached his piece by interviewing the app designers in order to understand the rationale behind their product. This leads into an interesting and broad discussion about technological determinism, technology’s relationship to society, and ethics.

I was particularly intrigued by how assumptions of technological inevitability were deployed. Take the following, for example:

“Embracing this inevitability, the makers of BroApp argue that ‘The pace of technological change is past the point where it’s possible for us to reject it!’”

And:

“‘If there is a niche to be filled, i.e. automated relationship helpers, then entrepreneurs will act to fill that niche. The combinatorial explosion of millions of entrepreneurs working with accessible technologies ensures this outcome. Regardless of moral ambiguity or societal push-back, if people find a technology useful, it will be developed and adopted.’”

It seems that these designers have a pretty bad case of the Borg Complex, my name for the rhetoric of technological determinism. Recourse to the language of inevitability is the defining symptom of a Borg Complex, but it is not the only one exhibited in this case.

According to Selinger, they also deploy another recurring trope: the dismissal of what are derisively called “moral panics” based on the conclusion that they amount to so many cases of Chicken Little, and the sky never falls. This is an example of another Borg Complex symptom: “Refers to historical antecedents solely to dismiss present concerns.” You can read my thoughts on that sort of reasoning here.

Do read the whole of Selinger’s essay. He’s identified an important area of concern, the increasing ease with which we may outsource ethical and emotional labor to our digital devices, and he is helping us think clearly and wisely about it.

About a year ago, Evgeny Morozov raised related concerns that prompted me to write about the inhumanity of smart technology. A touch of hyperbole, perhaps, but I do think the stakes are high. I’ll leave you with two points drawn from that older post.

The first:

“Out of the crooked timber of humanity no straight thing was ever made,” Kant observed. Corollary to keep in mind: If a straight thing is made, it will be because humanity has been stripped out of it.

The second relates to a distinction Albert Borgmann drew some time ago between troubles we accept in practice and those we accept in principle. Troubles we accept in practice are troubles we need to cope with but which we should seek to eradicate — cancer, for instance. Troubles we accept in principle are those that we should not seek to abolish, even if we were able. These troubles are somehow essential to the full experience of our humanity, and they are an irreducible component of those practices that bring us deep joy and satisfaction.

That’s a very short summary of a very substantial theory. You can read more about it in that earlier post and in this one as well. I think Borgmann’s point is critical. It applies neatly to the apps Selinger has been analyzing. It also speaks to the temptations of smart technology highlighted by Morozov, who rightly noted,

“There are many contexts in which smart technologies are unambiguously useful and even lifesaving. Smart belts that monitor the balance of the elderly and smart carpets that detect falls seem to fall in this category. The problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning.”

From another angle, we can understand the problem as a misconstrual of the relationship between means and ends. Technology, when it becomes something more than an assortment of tools, when it becomes a way of looking at the world (technique in Jacques Ellul’s sense), fixates on means at the expense of ends. Technology is about how things get done, not what ought to get done or why. Consequently, we are tempted to misconstrue means as ends in themselves, and we are also encouraged to think of means as essentially interchangeable. We simply pursue the most efficient, effective means. Period.

But means are not always interchangeable. Some means are integrally related to the ends that they aim for. Altering the means undermines the end. The apps under consideration, and many of our digital tools more generally, proceed on the assumption that means are, in fact, interchangeable. It doesn’t matter whether you took the time to write out a message to your loved one or whether it was an automated app that only presents itself as you. So long as the end of getting your loved one a message is accomplished, the means matter not.

This logic is flawed precisely because it mistakes a means for an end and sees means as interchangeable. The real end, of course, in this case anyway, is a loving relationship, not simply getting a message that fosters the appearance of a loving relationship. And the means toward that end are not easily interchangeable. The labor, or, to use Borgmann’s phrasing, the trouble required by the fitting means cannot be outsourced or eliminated without fatally undermining the goal of a loving relationship.

That same logic plays out across countless cases where a device promises to save us or unburden us from moral and emotional troubles. It is a dehumanizing logic.

Even with the amazing technology we have in our pockets, we can fly through the day without remembering to send a simple “I love you” to the most important person in our lives.

Romantimatic can help.

It can help by automatically reminding you to contact the one you love and providing some helpful pre-set messages to save you the trouble of actually coming up with something to say.

Selinger has his reservations about this sort of “outsourced sentiment,” and he irenically considers the case Romantimatic’s creator makes for his app while exploring the difference between the legitimate use of “social training wheels” and the outsourcing of moral and emotional responsibility. I encourage you to read the whole thing.

“What’s really weird,” Selinger concludes, “is that Romantimatic style romance may be a small sign of more ambitious digital outsourcing to come.”

That is exactly right. Increasingly, we are able to outsource what we might think of as ethical and emotional labor to our devices and apps. But should we? I’m sure there are many for whom the answer is a resounding Yes. Why not? To be human is to make use of technological enhancements. Much of our emotional life is already technologically mediated anyway. And so on.

Others, however, might instinctively sense that the answer, at least sometimes, is No. But why exactly? Formulating a cogent and compelling response to that question might take a little work. Here, at least, is a start.

The problem, I think, involves a conflation of intellectual labor with ethical/emotional labor. For better and for worse, we’ve gotten used to the idea of outsourcing intellectual labor to our devices. Take memory, for instance. We’ve long since ceased memorizing phone numbers. Why bother when our phones can store those numbers for us? On a rather narrow and instrumental view of intellectual labor, I can see why few would take issue with it. As long as we find the solution or solve the problem, it seems not to matter how the labor is allocated between minds and machines. To borrow an old distinction, the labor itself seems accidental rather than essential to the goods sought by intellectual labor.

When it comes to our emotional and ethical lives, however, that seems not to be the case. When we think of ethical and emotional labor, it’s harder to separate the labor itself from the good that is sought or the end that is pursued.

For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous. In fact, subjective experience, generally speaking, is what we seem to be increasingly tempted to outsource. In our ethical and emotional lives, however, the labor is essential rather than accidental; it cannot be outsourced without undermining the whole project. The value is in the labor, and so is our humanity.

____________________________________________

Further Reading

Selinger has been covering this field for a while; here is a related essay.