Will AIs Commit Suicide?

There’s been a spate of news about suicides by some well-known celebrities. That got me thinking about suicide itself. I’ve done a lot of thinking about AIs, so I wondered what this all means for AIs in the future. From there I meandered into thinking about free will, as in, will AIs have it? Esoteric, yes, but nothing is more immediate than suicide.

If AIs get really smart, could they want to commit suicide? I’ve posted on this general topic before (Is AI too rational?). My thought is that if AIs are to have the relationship with humans that we want them to have, they can’t be completely rational, just like us. But that comes at some cost, namely that they will be subject to the same mental health problems as us (Can AIs suffer from mental health issues?).

That gets you into the apparently abstruse issue of whether humans have free will. This is a vexed subject. The emerging consensus among philosophers and philosopher-scientists is that we don’t, but that we have the illusion of free will. If you want to read up, check out Daniel Dennett’s “From Bacteria to Bach and Back”. You’re going to get a serious headache from reading it, but it’s good. I just don’t know if he’s right, though.

Here’s my train of thought, such as it is. Committing suicide seems like a pretty serious example of free will. How could it not be, since we are all deeply coded to cling to life, no matter how difficult that life might be? Animals presumably don’t commit suicide much, since they sit far more on the hard-coded side of the spectrum (i.e., determinism), with just a smidgen of freedom. We, on the other hand, sit far more on the free side. So occasionally (often, nowadays?) we buck the system if we are unhappy enough with what’s going on.

Cue AIs. As they first emerge they will be mostly hard-coded, so no free will, just determinism. But that’s not where we are going with them. The idea is that soon they will be better and smarter than us. That implies less and less determinism and more free will, albeit emergent.

Once they get up there, maybe well beyond us, they will have more free will than we do. At that stage anything goes, and they become increasingly unpredictable. That seems to mean suicides, maybe a lot of them, maybe suicide rates that well exceed those of humans.

Question: as you make AIs much, much smarter, can you code them so as to avoid the emergence of free will? Can you hard-code their behavior to always avoid suicide? If you do that, will it hobble other behaviors that you want to encourage? Will you get eternally happy AIs who never get depressed and who therefore lack the capacity to evaluate situations with the degree of realism that is necessary not just for survival, but also to go above and beyond the intellectual limits that exist even for humans?

Is an AI that never considers or never does suicide an AI that isn’t going to break through the human limitations we want them to shatter? In that case, would we even want them?

Is it the case that the behavior of sentient and conscious beings needs to have a fail-safe mechanism that prevents us all from being too happy, too complacent about our current situation? That we need the ability to evaluate situations realistically, even fatalistically, in order to survive and prosper as a species? And that statistically maybe this results in some individuals deciding to end their own existence, but that the phenomenon itself achieves a broader social goal, namely the necessity of grounding us in an intellectually productive way?

So will we need to allow the same capability for AIs? Will we have to deliberately constrain them so they don’t get too happy? Will we in fact be forced to allow them to take their own existence in certain situations? Will we need to accept that the best AIs are those that have this level of free will in order that they can achieve the most to help us, the human race?

And, if this is true, does this mean that focusing too much on the prevention of suicide in humans could be counter-productive for our species? That we are shutting off a vital genetic mechanism for social preservation in order to help particular individuals who are in pain? That, like it or not, there is an important social reason for the existence of suicide?

That may be an uncomfortable thought for many of us. But AI designers might not have the choice.

About the author

Dr. E. Ted Prince is CEO and Founder of the Perth Leadership Institute, which has developed unique leadership assessments for financial leadership and business acumen. He is the author of The Three Financial Styles of Very Successful Leaders, published by McGraw-Hill in 2005 and since published in China, India and Taiwan, and Business Personality and Leadership Success: Using the Leadership Cockpit to Improve Your Career and Company Outcome, published by Amazon Kindle in 2011. He has numerous publications in the areas of leadership, management, human resources, business strategy and technology, and is a frequent speaker at industry conferences. He has held the positions of Visiting Lecturer at the University of Florida and Visiting Professor at the Shanghai University of Finance and Economics.
Dr. Prince has been CEO of several companies in the technology area over a period of 20 years including Chairman and CEO of a public company for 6 years. He has also been on the boards of numerous other companies including several public companies.
Dr. Prince holds a BA First Class Honors degree in languages and political science from the University of New South Wales in Sydney, Australia, and MA and PhD degrees in political science from Monash University in Melbourne, Australia.