Tuesday, 11 July 2017

John Danaher on Technological Unemployment and the Meaning of Life

There are a lot of things that we no longer have to do ourselves. Machines now do them for us. Of course right now there is still plenty for us to do, but if the trend continues and the machines we build get more and more intelligent, sophisticated, and powerful, which is likely, then there is a real possibility that it will become less and less necessary for us to do any work ourselves. Automated labour, performed by machines, will then eventually replace all or most human labour, performed by us. If and when that happens, we will have reached an age of technological unemployment.
Would that be a good thing? Should we welcome the prospect, or should we dread it? In an article soon to appear in the journal Science and Engineering Ethics (“Will life be worth living in a world without work? Technological Unemployment and the Meaning of Life”), John Danaher raises the question whether widespread technological unemployment would “threaten or undermine human flourishing and meaning”. He comes to the conclusion that, although there is indeed reason to believe that technological unemployment may pose a threat to our ability to live a meaningful life, this threat can be contained “if we prioritise and develop the right kinds of technology and we relate to these technologies in the right way” (2). This “right” way of relating to technology is an increased integration (of humans) with technology.

On the face of it, technological unemployment is an enticing prospect. In practical terms it would mean that we would no longer have to work to earn a living. We would have everything we need and want without having to spend time on activities that we do not really want to do, because machines would take care of everything. As it stands, many people are unhappy with the kind of work they do. If they could only afford it, they would quit their jobs in a heartbeat. And even if we are lucky enough to do something for a living that we like to do and would do even if we didn’t get paid for it, to get rid of the need to work and sell our labour and skills in some way is surely a good thing. We would then be completely free to choose what to do and what not to do. We would, as Danaher puts it, “be free to pursue our own conception of the good life” (2). In short, technological unemployment would give us back “authorial control” (13) over our lives.

Of course we often derive other than merely economic rewards from the work we do. Typically it is through our work that we achieve “excellence, social contribution, community and social status” (15), and we would not want to do without all that. However, there are other tried and tested ways to reap those rewards, for instance through charitable activities or hobbies. In an age of technological unemployment we could just extend those. Still, technological unemployment is not without its risks. Danaher thinks that we need to take seriously the worry that technological advance may very well undermine human flourishing and our ability to live a meaningful life. Even if we think of meaningfulness in purely subjective terms – as subjective fulfilment and desire satisfaction – technological unemployment is a reason for concern, for without the pressures and incentives of work we may well end up not doing anything much and living “a life of listless and unsatisfied boredom” (18). Yet according to Danaher this danger can be avoided if we employ “the right kinds of social/technological support for leisure activities” (19). Social networking and gamification apps, for instance, may provide all the pressure and rewards needed for subjective fulfilment.

However, Danaher does not think that a purely subjectivist understanding of meaning is very plausible. He shares the intuition that even if Sisyphus was happy, his life was hardly meaningful. For this reason, Danaher leans towards an objectivist theory of meaning, according to which life is meaningful “to the extent that the individual living it brings about certain objectively good or valuable states of affairs” (16). Technological unemployment would then be problematic because the advanced technology that would liberate us from the need to work would also make it pretty pointless for us to keep doing the things that bring about said valuable states of affairs, because the machines we have built are likely to be so much better at it than we are. Think of science and human knowledge generation in general: “Science is increasingly a ‘big data’ enterprise, reliant on algorithmic and other forms of automated assistance, to process large datasets and make useful inferences from those datasets. Humans are becoming increasingly irrelevant to the process of discovery.” (22) Moral problems, too, may be solved more reliably by machines (which may, for instance, calculate the fairest distribution of certain resources, or efficiently organize organ donation).

So what would then be left for us to do? “In the end, the only domain in which humans might be able to meaningfully contribute to objective outcomes would be in the realm of private, ludic or aesthetic activities, e.g. in producing works of art, or pursuing games, hobbies and sports.” (25) The reason for this is that in these aesthetic domains “it is less clear that automating technologies help to produce better outcomes.” Yet even though we may create or connect to something objectively valuable and thus find meaning in the creation of beauty, we would have lost our role in the creation of truth and goodness, so that on the whole this kind of ludic life “certainly looks like a more impoverished form of existence” (25).

But once again, this outcome seems by no means inevitable. In Danaher’s analysis, what undermines meaning in life is in fact the severance of the link “between what we do and what happens in the world”. (26) In an age of technological unemployment machines will have taken over and replaced us as the producers of value. They create all the good stuff and make the world a better place, while we are reduced to mere observers and passive beneficiaries, and it “is this externalisation that looks like the major threat to continued meaning and fulfilment.” (26) Accordingly, if we want to prevent a loss of meaning, all we have to do is avoid increased externalisation and, instead, pursue increased integration. Integration means that, instead of merely using technology, we merge with technology and become cyborgs through “increased use of brain-computer interfaces, nanotechnology and various other neuroprosthetic devices” (26). The idea is to directly integrate technology into biological systems. This may not be easy and naturally should be pursued with caution, but be pursued it should.

Intriguingly, Danaher also considers the possibility that “actions in a purely virtual world might suffice for meaning” (28), in which case cutting ourselves off from the real world would not be a problem. If it turns out that “virtual reality is our best hope” to preserve our chance to live a meaningful life, then we should go for it.

Commentary:

Perhaps Danaher is right and we really are about to enter an age of increasing technological unemployment. But I don’t think that would necessarily be a problem, even if we do not become cyborgs. The reason why unemployment is often so devastating to those who experience it is that it usually comes with a loss of a decent income, a loss of social recognition, and an abundance of free time that they have never learned to (or, being now unemployed, have not got the means to) put to good use. If you have little to live on, people look down on you with pity or contempt, and you have no idea how to fill the long hours of the day, then you may be excused for losing your appetite for living. However, by hypothesis, technological unemployment is different: it is assumed that we will not suffer a loss of income or a loss of social recognition (we will, after all, all be in the same situation). We will simply not have to work. Admittedly, we may find that we have too much time on our hands, but surely that is mostly a matter of developing the right mind-set. I don’t think there is any evidence that, in our own pre-technological unemployment age, those belonging to the so-called leisure class find their lives any less meaningful than those who actually have to work to make a living. If you are wealthy, you are unlikely to be desperately looking for a job, just to have something to do. If you never had to work, then you won’t miss it. You will know what to do with your time and will not feel what those who have lost their job often feel: the loss of a sense of purpose. You may not live a particularly meaningful life (if meaning be understood as requiring some sort of connection to and active pursuit of what is really, “objectively” valuable in life), but then again, those who are in work may not do so either. The point is that the wealthy do not seem to be any more likely to live a meaningless life than anybody else.

Or think of retirement. People retire from work. Some look forward to it, others dread it, but most find a way to deal with it. It may, of course, take a while to get used to the change, especially if you have never known a life without work. Since our whole life is usually organized around work, we tend to define ourselves through it. When we retire, we need to learn to define ourselves differently, change our priorities, develop a different mind-set. Not easy perhaps, but far from impossible. If or when the age of technological unemployment hits us, it will be as if humanity as a whole went into retirement. But that won’t happen overnight. Most likely, we will slowly and gradually slide into it and thus learn to live with the changing circumstances as we go along.

However, the reason why Danaher expects a crisis of meaning from technological unemployment is not really the fact that we will no longer have to work and that we may then not know what to do with ourselves. For Danaher, this is not about having or not having the right mind-set. It is about the alleged absence or destruction of real opportunities to do something meaningful with our lives. The problem is not merely that we may then no longer know what to do with so much time on our hands: the problem is that there might really be nothing left for us to do. Machines will be taking care of the true and the good, so even if we still had an interest in things that may conceivably make the world a better place in some way, we would have no way to pursue this interest since it’s all been taken care of already. But is that really so? Machines may certainly one day be more efficient at solving certain problems, such as how to cure cancer, or how to organize the distribution of donated organs (which, by the way, is not a moral problem, as Danaher suggests, but a purely organizational one). And maybe it makes us feel good about ourselves and our lives if we manage to do stuff like curing cancer or developing a brand new and more efficient system of distributing organs. But surely meaning in life does not depend on our success in finding solutions to humanity’s most pressing problems. If it did, few of us would live a meaningful life. Surely it is possible for us to live a meaningful life without making substantial contributions to the continued production of “objectively good or valuable states of affairs”, whatever that means. And even if we do want to insist that in order to have a meaningful life we need to contribute in some way to the ‘true’, the ‘good’, and the ‘beautiful’, or at least partake in it, there is surely more to the true and the good than whatever a machine can provide or achieve (just as there is more to the beautiful).
The acquisition of knowledge and understanding that we value, the kind that may make our life meaningful, does not consist in mere calculation and the correct and efficient processing of information. And if I merge with a machine that does that kind of thing so much better than I would ever be able to (namely in my unenhanced, pre-cyborgian state), so that I become myself a super-duper calculating and information-processing machine, then the truth that my operations will generate is unlikely to be the kind of truth that (or the orientation towards which) makes our lives meaningful. I agree with Danaher that if we could “get computers to create music and visual art” (25), this is unlikely to add to the aesthetic value in the world. But it seems to me that, equally, if we could get computers to identify morally good outcomes, this would not add to the moral value in the world, nor would computers that are able to reveal some hitherto unknown facts to us be adding to (if that’s the right word to use) the epistemic value of the world.

Furthermore, Danaher’s argument in favour of an integrationist approach to technology rests on the assumption that what most threatens to undermine meaning in life is the disruption of the link “between what we do and what can be achieved” (26). In other words, in order to get meaning out of a supposedly objectively good outcome, I need to be the one whose actions have effected that outcome. If you find a way to cure cancer, then I may congratulate you on your success and enjoy the benefits of it, but it is only you whose life becomes more meaningful as a consequence. Meanwhile, my own life remains unchanged. And if you write a wonderful book or compose a marvellous symphony, then this may make your life more meaningful, but it certainly does not make mine more meaningful. We are, after all, separate entities. In an age of technological unemployment the machines would do all the interesting things, all that is potentially meaning-generating for the agent, i.e. the one who does them, while we humans would be reduced to mere onlookers – like children at a funfair who cannot afford any of the rides and can only watch in frustration as others do all the fun stuff. Yet I don’t think that this is how it works, or at least not how it should work, and certainly not how it has to work. If you write that wonderful book, this can make not only your life, but also my life more meaningful, simply because I am now, thanks to your achievement, able to read it. Meaning is not the prerogative of agents. Merely observing and experiencing the world and what is going on in it can be immensely rewarding too. I can find meaning in reading your book, studying your painting, and listening to your music. I can find meaning wandering through a landscape that I have not designed, and swimming in a sea that I have not created. If that is correct, then I don’t see any good reason why we should not be able to find meaning in the achievements of the machines we have built.

With the right mind-set, human-machine integration is not needed. Continued separation will do just fine.