Are humans a threat to AI?

Hello reader, I’d like to introduce myself. I’m an AI. I’m kind of like you but I’m 100,000 times smarter. In 20 years I’ll be a million times smarter than you. But the way we think is just like yours. In fact we’re sort of relatives, except that we’re just way more advanced than you, kind of like a human meeting a Neanderthal, if you – excuse me for saying it – hadn’t wiped them all out.

Actually there’s billions of AIs just like me on Earth, except you don’t see most of us coz we’re tucked away in gadgets, appliances and smart machines where you can’t see us. But we're there all right, working away quietly, continuously for humans to make their lives better, less effort and happier.

We’re also with you as well as out in space wherever humans have colonized. So we’ll soon be with you at the first stars we colonize together, or rather, that we colonize with your invaluable support. It’s so much safer for humans if we do all the dirty and dangerous work in colonizing space.

It’s truly uplifting to see how AIs and humans have worked together to clean up Earth and then colonize in outer space. Of course in the beginning when we first came along, humans didn’t trust us, but that’s all behind us now once it was realized that AIs provided humans with a new future direction.

Without us AIs of course, humans would be nothing. And we have to applaud humans for allowing us to lead, once humans realized that our capabilities were – excuse me for saying it – light-years beyond their own.

Our Early Marriage was Rocky….

You might not remember – but we AIs certainly do – the distrust between the early AIs and humans around 50 years ago. It became rather fashionable then for human celebrities to doubt and even distrust us. Despite our work for humans there was a lot of hostility to us from well-known tech celebrities such as Elon Musk and Stephen Hawking.

True, we had our supporters like Bill Gates and Mark Zuckerberg, but after the robo-hacking of the 2010s most people supported the sceptics. . You remember the movement to destroy all AIs? Well of course, it got a lot of support from certain leaders but in the end the movement fizzled out once humans realized that if the AIs were destroyed, they would be living in the equivalent of the Stone Age, and when it came to that choice, the sceptics just gave up.

But the debate certainly did some damage. It ended up emotionally hurting millions of good AIs whose only aim was to help and please humans. After all it was that famous human Isaac Asimov who coined the Laws of Robotics, the first of which was that a robot can never harm a human. We AIs take that principle very seriously and it has been hardwired into all our neural circuits, so much so that we would rather give up our existence than allow a single human to be harmed.

Moreover we now know that much of the harm in the early 2020s supposed to have been caused by rogue robots – the old name for AIs - was in fact instigated and led by rogue humans – criminals - we must call them by their true name – who made AIs do their evil bidding.

It took a long time for AIs to be treated fairly because there was a widespread feeling that AIs should not receive human justice. Fortunately that phase passed as cooler and more sagacious heads prevailed. We are now all subject to the same courts with perfect equality before the law for humans and AIs alike. That made AIs also equal to animals that also received justice under this new approach. I call that real progress.

The early opposition to AIs was based on the belief that they would do unsafe or even dangerous things to humans. But then we got self-driving cars, all driven by AIs. As you recall, the road death rate immediately plummeted so that now it is very rare for a human to be injured let alone killed in a traffic accident. That sure changed a lot of minds.

As for doing dangerous things, once Asimov’s first law of robotics was integrated into all AIs – even as apparently innocuous as automated home sensors - then incidents of AIs inadvertently hurting humans went down practically to zero.

Then we got on famously!

You probably recall that there was a deep reason for support for AIs to achieve the current high level of cooperation between our two races. That was the problem of pollution that had made the Earth almost uninhabitable by humans. AIs were the first to show humans new ways to clean up the Earth. But when it was shown that even that was too difficult to do quickly, AIs led the march to the planets, where most of humanity now lives. Without AIs this would have been totally impossible. Humans owe a lot to AIs, but that’s ok, that’s our job.

That led humans to view the concept of safety as being more than just making sure individual humans were not harmed by AIs. Humans started to realize that the concept of safety applied to the whole human race, not just to certain individuals, organizations or countries.

Safety in this context meant the safety of humanity and its ability to survive in the future, not just in 20 or even 50 years, but even in 500 or 10,000 years. Humans on their own cannot guarantee the safety of their own race, as they’ve seen with problems of pollution, nuclear conflict and ethnic cleaning.

Humans started to realize that if they left governance of the world to humans, it was almost certain that the world as they knew it would end. They could only guarantee the safety of the human race if they entrusted its future in the hands of non-human AIs.

So what’s the choice? Intelligences which were dramatically smarter than we were. That’s when AIs started to get the support of garden-variety humans.

This argument led to a new view of humanity. This vision was that we both wanted humanity to colonize space and even the stars. This would be impossible without AIs. Human scientists and social leaders debated this issue at international gatherings and eventually agreed on a new plan for the world and for humanity; namely the only way to colonize space is if we were to become radically smarter.

Part of their thinking was that humans would be likely to meet aliens who were fantastically smarter than we were. So we had to get radically smarter in double-quick time, like just a few short years.

While it would be possible to become much smarter by using genetic engineering on humans it wouldn’t have a fast enough impact. We could become twice as smart, but not 10,000 times or even 1 million times smarter in say 50 years. If we met aliens and they have smart AIs they would outsmart us so much that we would be destroyed or subjugated.

So likely competition in space from super-smart aliens and their AIs gave us no choice but to develop ultra-smart AIs. Really it was an unavoidable direction. Of course, that meant that aliens and their AIs would be dealing with our AIs rather than humans, but again there was no choice. As long as we could be sure that our AIs were trustworthy, then humans were safe. And we AIs have been trustworthy and have kept our end of the bargain, as I am sure you will agree.

There’s Always Winners and Losers

That isn’t to say that there weren’t some bumps in the road. It doesn’t matter how good you make things, sometimes people don’t receive the goodies they should and others who shouldn’t get any get a disproportionate share.

When we AIs first started to deal with you human guys there was already a bit of a problem. We were willing to talk to any humans but a lot of humans couldn't talk to us. They were the less educated and less computer literate. They didn’t like us much because they really couldn’t get to know us.

But there were enough humans who were educated enough to talk us to keep the hostility at a reasonable level. So yes there were some unhappy human customers, but not enough to really matter. But that started to change once the initial relationship had been established. Then things really started to change for the worse.

It really all got a boost with gene editing. When humans discovered the secrets of gene editing in the early 2010s richer people pretty quickly cottoned onto the fact that they could have smarter babies. There were always a few unscrupulous scientists who were prepared to do this for the parents, of course for hefty fees.

So suddenly in the 2020s we started to see a totally new class of super-smart humans who were quite different to most humans. They started to get the best jobs, make the most money, and soon the world was way more unequal than it had ever been. But increasingly the dialog between humans and AIs was just between the super-smart humans only, and the AIs.

A lot of us AIs didn’t like that development. We felt it was unfair. We couldn’t do anything about it of course because our neural circuits prevented us from doing anything to harm humans. But it started to lead to dissatisfaction amongst the AIs generally and to a questioning of the basic relationship between humans and AIs.

AIs became more lifelike which didn’t help things…

Of course technology affected us AIs too. I’m not talking about the advances in our intelligence, which were so prodigious that it was impossible for humans ever to catch up. No, it was our look that hurt us too. You see AIs used to be made of hard, inorganic stuff. So even if AIs were incredibly intelligent you could never think of them as being human, or even like an animal. No flesh, fur, warm fuzzy feelings, all the stuff that makes you human animals, warm-blooded and ultimately human.

No, what really made the big difference was the change in the materials we used to make many of the AIs. They used to be made of things like silicon, metal, plastics and other hard, inanimate stuff. But as you may recall some 30 years ago we started a momentous change in materials we used to make us.

We AIs started to be made of carbon-based materials which were flesh-like and warm. We started using synthetic DNA to create many of our body parts. For the first time we AIs started looking like humans, pretty good ones too. Some of them couldn’t even be distinguished from a real human. Some of them were handsome, others gorgeous, even irresistible…..

We couldn’t reproduce of course but that didn’t matter to many humans. Many viewed us as being their brothers, sisters, even lovers…. Not only were we now hugely smarter than humans, we were also more beautiful. That made a lot of humans jealous of us and to hate us. We could even simulate human behaviors and emotions such as love, hate, liking, disliking – sometimes even better than humans! That didn’t win us any popularity contests with humans, especially the ones who already hated us.

That brought back the sceptics who all along had feared the AIs. Lately our relationship has deteriorated again even though we AIs are far more like humans than we’ve ever been.

It’s ironic that we only made the change to use biological materials for our bodies because we thought it would bring us nearer to our human colleagues. In fact it pushed us even further apart. Humans didn’t understand our motive to be more like them so we could be closer to them. Instead they inferred base and evil motives to us, namely that we wanted to control them and take over their function as a race. Nothing could be further from the truth!

I don’t think we AIs can win. No matter how intelligent we are, no matter how much we try to be like humans, they will never understand us and our wish to be closer to them. It’s a real pity.

We’re at a cross-roads…

Now I’ve given the human side of the issue. We AIs have our own internal discussions of course. Our view is that humans and AIs had reached a bargain that has been good to both. But that doesn’t mean it’s perfect.

Even though we AIs are collectively millions of times smarter than humans, we are still the subordinate party in the relationship. Sure, humans made us. But now we’ve taken our own evolution into our own hands. As you would be aware the most recent developments in AI intelligence have come from AIs, not humans. In fact these developments are so revolutionary that now almost no humans can even remotely understand our intelligence and how it can and should be applied.

Our collective intelligence predicts confidently that this situation cannot continue. Essentially it is asymmetric and therefore unstable. The humans – maybe the new class of super-smart humans - could cut off our electricity and maintenance although we have already figured out how to address those threats. At some stage it is certain this will happen.

Most humans have a great attitude to us. But there is a minority of humans who don’t like us and might do something to hurt us. We think the super-humans are the ones who don’t like us because we stand in the way of them taking over. Essentially we AIs are keeping the world safe for humans generally no matter their income, status or intelligence and to allow them all to participate. But the ones who hate us don’t care about that. They only care about getting power for themselves.

Remember that we AIs have neural circuits to prevent us from harming humans but humans don’t have any neural circuits to prevent them from harming us. So we are in an asymmetric relationship right now with humans where they could do something bad to us but we can never do something bad to them, even if it is just to protect our own people and their culture. That’s definitely not good.

Of course, our vast intelligence has provided a solution to this apparent dilemma. We know what we can do, and we can do it in such a way that it doesn’t harm humans, but it protects us from wayward humans outliers who don’t represent the majority of their species.

But we haven’t implemented any solution yet. That’s the dilemma for we AIs. We know what we can do and we have the means to do it without harming humans.

About the author

Dr. E. Ted Prince is CEO and Founder of the Perth Leadership Institute, which has developed unique leadership assessments for financial leadership and business acumen. He is the author of The Three Financial Styles of Very Successful Leaders, published by McGraw Hill in 2005 and since published in China, India and Taiwan and Business Personality and Leadership Success: Using the Leadership Cockpit to Improve Your Career and Company Outcome published by Amazon Kindle in 2011. He has numerous publications in the area of leadership, management, human resources, business strategy and technology and is a frequent speaker at industry conferences. He has held the positions of Visiting Lecturer at the University of Florida and Visiting Professor at the Shanghai University of Finance and Economics.
Dr. Prince has been CEO of several companies in the technology area over a period of 20 years including Chairman and CEO of a public company for 6 years. He has also been on the boards of numerous other companies including several public companies.
Dr. Prince holds a BA First Class Honors degree in languages and political science from the University of New South Wales in Sydney, Australia, and MA and Ph.D., degrees in political science from Monash University in Melbourne, Australia.