Ruminations and Fulminations

Bad Sci-Fi Movies and Real-World AI

Continuing my theme of doing things other than fret about Donald Trump, I have spent some time fretting about other existential threats to humanity. So, that’s healthy.

Specifically, I’ve spent the last half day thinking about the threat of alien invasions and runaway artificial intelligence. One of them you can consign to the bottom of your worry list; the other probably deserves a higher spot on the list, somewhere below Donald Trump but above death panels and “radical Islamic terrorism.”

The topic of alien invasions is the overt theme of the movie I saw last night: Life, directed by Daniel Espinosa and starring, among others, Jake Gyllenhaal, Rebecca Ferguson and Ryan Reynolds. Without giving away the plot, it explores the question of what happens when humanity encounters a lifeform that turns out to be smarter and more dangerous than it appears. Suffice it to say, not all ends well for our gender- and ethnically balanced crew aboard the International Space Station.

Despite the title of this post, the movie is not actually bad; it’s suspenseful and engaging. As I watched it, though, I was struck by how shitty the science was. As the investigators probe the alien lifeform, they repeatedly demonstrate all sorts of stupid, unrealistic practices. They let a single investigator engage in isolation with the lifeform to the extent that he loses perspective. They don’t run carefully measured experiments to determine both what sustains the organism and what kills it. When it demonstrates exponential growth and unexpected abilities, the researchers don’t react with caution but instead step on the accelerator. And, when things go wrong, they discover that their failsafe mechanisms are either non-existent or simply failures. Any epidemiologist or biologist working with potentially hazardous organisms would be appalled.

The good news is that we’re not out scooping up biomass from other planets and bringing it back to Earth. There’s also every reason to think that the product of other evolutionary forces would not be particularly compatible with Earth’s biology. And, finally, there’s the fact that, despite decades of active searching, we’ve found very little sign of life – particularly intelligent life – beyond our little blue ball, even though it’s a very, very big universe. This is known as the Fermi Paradox. My best guess is that you can put this issue way, way down on your list of things to worry about.

Which brings me to the other one, the existential threat of runaway artificial intelligence.

As I was driving home from the theater, it occurred to me that the movie was actually a commentary on how we – not you or me, but some VERY smart people – are approaching the field of AI. As near as I can tell, we are using the same shitty scientific methods – the ones that would make any life science researcher cringe – to develop this technology. We have researchers all across the world laboring in secret, scientists who are less objective researchers and more would-be parents enraptured with the idea of strong AI or even the Singularity. Instead of running carefully controlled experiments and building in rigorous “kill steps,” AI is being deployed today in the real world – in Teslas, in fraud detection systems, in your washing machine, writing both press releases and news stories, in your favorite search engine, in the warehouses of your favorite retailer, as robo-calls and in a thousand other ways. And, even though these creations are demonstrating unexpectedly rapid growth and ability (an AI-driven computer recently beat the world’s best Go players – at a game widely considered incredibly hard – 60 games to none; a computer program performed a similar feat against some of the world’s best poker players), researchers are plowing onward at even faster rates.

This is perhaps not the smartest thing we’ve ever done. And, it’s not just me, your friendly blogger, who thinks so. Smart guys like Bill Gates and Elon Musk are worried about this. So are really smart guys like Stephen Hawking.

By way of fair disclosure, there are plenty of very smart people – Ray Kurzweil perhaps foremost among them – who believe the coming era of big AI will usher in an unprecedented era for humanity, giving us access to pretty much everything and an infinite lifespan to experience it. That seems like a better outcome, but this point of view is a little cultish and perhaps more optimistic than hard, objective reasons warrant. Life – whether artificial or otherwise – constantly finds ways to break out of whatever boxes it gets put into. Including the boxes we build.

If you’re inclined to read more on this, Vanity Fair coincidentally published a long interview with Musk on this topic. It is worth the 20 minutes or so it will take you, and it will give you something to worry about instead of Trump.

There. Doesn’t that make you feel better instead of worrying about the latest cluster fuck from the White House? Next week, I’ll write about the threats of pandemics and global warming. Just call me Mr. Good News.

18 thoughts on “Bad Sci-Fi Movies and Real-World AI”

I think of the tipping point concept. We’re learning and doing more and more with AI. Much of it seems helpful and good yet there is so much unknown. I’m glad to read of the camaraderie we share concerning films (and life) full of bad science – loathsome. Whenever something is quite new, I like the idea of using some extra caution but I know that’s not the way everyone rolls. So ultimately there will be that one time and that one thing that calls out to someone who will take a leap of faith and tada! We’ll all be either shouting “hooray!” or “oh shit.” So goes the human experiment.

Hmmm…. Fascinating subject. Leave it to Austin to pose the “hard questions”! Haven’t read the Musk interview yet, but who can forget Hal 9000? Then there are the questions posed by the neuroscientists as they keep asking what it is to be conscious. If millions of firing neurons in the human brain can ultimately be replicated by the circuitry in a computer, then…?

I know my place as a pedestrian smart guy and technologist, and that I should look up to these visionaries. I do look up to them, mostly. But I think they are saying “AI” when “algorithm” is the correct and proper word.

Algorithms aren’t sentient. They are not going to take over the world, and certainly won’t destroy it.

At the risk of wading into some deep waters, where’s the line between an algorithm and AI? Is it when the algorithm can pass a Turing test? Beat the reigning chess champion? The best Go player in the world? Win at Jeopardy? A poker tournament? Compose music?

To put it in terms of living creatures, is an earthworm sentient? Probably not. A dog? It seems to me that there’s something going on behind those eyes, but it’s also possible that it’s just a complex collection of stimulus-response routines. A person? I think I am, and I extend you the same assumption (though I’ve been thinking a lot lately about the question of philosophical zombies).

Based on my understanding of current AI research, we can pretty accurately model an earthworm and some significantly more complex creatures. In some narrow areas, AI is “smarter” than its human counterparts. Those don’t meet my definition of consciousness, but some of what AI does is definitely in the range of “uncanny.”

I can, I think, sidestep the whole issue of whether an algorithm has to be conscious in order to be dangerous: A poorly designed algorithm with a goal along the lines of “optimize the field for planting corn” might cause a lot of trouble if given control of an agricultural tractor and turned loose at a sporting event.

Holy smoke!! Better zip up that wet suit. Have you defined “consciousness”? We know we are “self-aware”. Can we say the same about a dog for example? Is self-awareness required for consciousness or is the test much lower, like, merely to be awake? Are you conscious of the plant in the corner that’s beyond your view, or only now that I’ve mentioned it? Can point-of-view be duplicated in a machine? Empathy? Anger?
Oops, sorry I’m going off the rails. Cocktail hour approaching. Interesting stuff.

Hey, you and PM1956 (birth date I assume) know this better than anyone. Check out pal Einstein’s Special Theory of Relativity. No physical law explains why time always points to the future. Past, present and future are not absolutes. But, hell, no one has ever explained what “time” is. Does it matter if we live in a deterministic universe or not? Trump, as I scribble this, is still President!

To the – very – limited extent I understand these things, if we do live in a deterministic universe – or a simulation – then time can run in any direction and at any speed. What I can’t wrap my head around – even slightly – is the notion that causality is an illusion, or at least more complicated than we understand (but if it is, how can we be living in a deterministic universe?). That notion was at the heart of Arrival, another recent sci-fi movie that was thought-provoking.

It isn’t a simulation at all, it is just the way the world really works. The idea that we have control or can in any way determine our future is BS. But unavoidable BS, all the same. We don’t control the world, we just live in it.

Well, old enough to have read a lot of stuff on philosophy and consciousness, but that was years ago…..(as well as reading lots of SF, and watching the occasional SF midnight movie with Austin, who really has a bad habit there…).

Once upon a time (when I was younger), I felt compelled to read until I was confident of Free Will. The more I read, the less tenable it seemed. And now, I am not sure that I care….

Yes, I haven’t seen “Arrival” but just read the wonderful novella it was based on. The experience of time as a linearity with causality is questioned. Yes, these are the unanswered questions. Exciting stuff – because we don’t know. I hear you. Super you broached such an unexpected subject. Keep it up. But, I’m missing Rachel Maddow (back in my personal good graces after her fiasco with that silly 2-page tax return).

Ezra Klein
Do you think that in 200 or 300 years, human beings will be the dominant actor on Earth?

Yuval Harari
Absolutely not. If you asked me in 50 years, it would be a difficult question, but 300 years, it’s a very easy question. In 300 years, Homo sapiens will not be the dominant life form on Earth, if we exist at all.

Given the current pace of technological development, it is possible we destroy ourselves in some ecological or nuclear calamity. The more likely possibility is that we will use bioengineering and machine learning and artificial intelligence either to upgrade ourselves into a totally different kind of being or to create a totally different kind of being that will take over.

In any case, in 200 or 300 years, the beings that will dominate the Earth will be far more different from us than we are different from Neanderthals or from chimpanzees.