
This is one of those interactions that happens over a few seconds in the movie, but turns out to be quite deep—and broken—on inspection.

When Deckard enters his building’s dark, padded elevator, a flat voice announces, “Voice print identification. Your floor number, please.” He presses a dark panel, which lights up in response. He presses the 9 and 7 keys on a keypad there as he says, “Deckard. 97.” The voice immediately responds, “97. Thank you.” As the elevator moves, the interface confirms the direction of travel with gentle rising tones that correspond to the floor numbers (mod 10), which are shown rising up a 7-segment LED display. We see a green projection of the floor numbers cross Deckard’s face for a bit until, exhausted, he leans against the wall and out of the projection. When he gets to his floor, the door opens and the panel goes dark.

A need for speed

An aside: To make 97 floors in 20 seconds you have to be traveling at an average of around 47 miles per hour. That’s not unheard of today. Mashable says in a 2014 article about the world’s fastest elevators that the Hitachi elevators in Guangzhou CTF Finance Building reach up to 45 miles per hour. But including acceleration and deceleration adds to the total time, so it takes the Hitachi elevators around 43 seconds to go from the ground floor to their 95th floor. If 97 is Deckard’s floor, it’s got to be accelerating and decelerating incredibly quickly. His body doesn’t appear to be suffering those kinds of Gs, so unless they have managed to upend Newton’s basic laws of motion, something in this scene is not right. As usual, I digress.
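For anyone who wants to check the math, here is the back-of-the-envelope version. The floor height (roughly 4.3 m) is my assumption, chosen to be generous for a tower like Deckard's:

```python
# Back-of-the-envelope check on the elevator's average speed.
# FLOOR_HEIGHT_M is an assumption; ~4.3 m is generous but plausible.
FLOOR_HEIGHT_M = 4.3
FLOORS = 97
TRIP_SECONDS = 20

distance_m = FLOOR_HEIGHT_M * FLOORS      # about 417 m
speed_ms = distance_m / TRIP_SECONDS      # about 21 m/s
speed_mph = speed_ms * 2.23694            # about 47 mph
print(f"{speed_mph:.0f} mph average, ignoring acceleration")
```

Dropping `FLOOR_HEIGHT_M` to a conventional 3 m still gives an average of about 33 mph, so the scene is blistering under any reasonable assumption.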

The input control is OK

The panel design is nice and was surprising in 1982, because few people had ridden in elevators serving nearly a hundred floors. And while most in-elevator panels have a single button per floor, it would have been an overwhelming UI to present the rider of this Blade Runner complex with 100 floor buttons plus the usual open-door, close-door, and emergency-alert buttons. A panel that allows combinatorial input reduces the number of elements that must be displayed and processed by the user, even if it slows things down, introduces cognitive overhead, and adds the need for error handling. Such systems need a “commit” control that lets riders review, edit, and confirm the sequence, to distinguish, say, “97” from “9” followed by “7.” Not such an issue for the 1st floor, but a frustration for floors 10–96. It’s not clear those controls are part of this input.
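To make the commit-control point concrete, here is a minimal sketch of such a keypad. The class and its behavior are my invention, not anything shown in the film:

```python
# Sketch of a combinatorial floor keypad with an explicit commit step,
# so "9" then "7" can be distinguished from "97". All names are made up.
class FloorKeypad:
    def __init__(self, top_floor=100):
        self.top_floor = top_floor
        self.buffer = ""

    def press_digit(self, d):
        # Digits accumulate on a display the rider can review.
        self.buffer += str(d)

    def clear(self):
        # The edit affordance: wipe a mistaken entry and start over.
        self.buffer = ""

    def commit(self):
        # The rider confirms; invalid floors are rejected (error handling).
        if not self.buffer:
            return None
        floor = int(self.buffer)
        self.buffer = ""
        return floor if 1 <= floor <= self.top_floor else None

pad = FloorKeypad()
pad.press_digit(9)
pad.press_digit(7)
print(pad.commit())  # 97
```

The commit step is what disambiguates “97” from “9” followed by a change of heart, and `clear` is the edit affordance the movie panel never shows.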

Deckard enters 8675309, just to see what will happen.

I’m a fan of destination dispatch elevator systems that increase efficiency (with caveats) by asking riders to indicate their floor outside the elevator and letting the algorithm organize passengers into efficient groups, but that only works for banks of elevators. I get the sense Deckard’s building is a little too low-rent for such luxuries. There is just one in his building, and in-elevator controls work fine for those situations, even if they slow things down a bit.

The feedback is OK

The feedback of the floors is kind of nice in that the 7-segment numbers rise up, helping to convey the direction of movement. There is also a subtle, repeating, rising series of tones that accompanies the display. Most modern elevators rely on the numeracy of their passengers and their sense of equilibrium to convey this information, but sure, this is another way to do it. Also, it would be nice if, for the visually impaired, the voice system would say the floor number when the door opens.

Though the projection is dumb

I’m not sure why the little green projection of the floor numbers runs across Deckard’s face. Is it just a filmmaker’s conceit, like the genetic code that gets projected across the velociraptor’s head in Jurassic Park?

Pictured: Sleepy Deckard. Dumb projection.

Or is it meant to be read as diegetic, that is, that there is a projector in the elevator, spraying the floor numbers across the faces of its riders? True to the New Criticism stance of this blog, I try very hard to presume that everything is diegetic, but I just can’t make that make sense. There would be much better ways to increase the visibility of the floor numbers, and I can’t come up with any other convincing reason why this would exist.

If this was diegetic, the scene would have ended with a shredded projector.

But really, it falls apart on the interaction details

Lastly, this interaction. First, let’s give it credit where credit is due. The elevator speaks clearly and understands Deckard perfectly. No surprise, since it only needs to understand a very limited number of utterances. It’s also nice that it’s polite without being too cheery about it. People in LA circa 2019 may have had a bad day and not have time for that shit.

Where’s the wake word?

But where’s the wake word? This is a phrase like “OK elevator” or “Hey lift” that signals to the natural language system that the user is talking to the elevator and not themselves, or another person in the elevator, or even on the phone. General AI exists in the Blade Runner world, and that might allow an elevator to use contextual cues to suss this out, but there are zero clues in the film that this elevator is sentient.

There are of course other possible, implicit “wake words.” A motion detector, proximity sensor, or even a weight sensor could infer that a human is present and start the elevator listening. But with any of these implicit “wake words,” you’d still need feedback for the user to know when it was listening, and some way to help them regain its attention if they got the first interaction wrong, and there are zero affordances for either here. So really, an explicit wake word is the right way to go.

It might be that touching the number panel is the attention signal. Touch it, and the elevator listens for a few seconds. That fits in with the events in the scene, anyway. The problem with that is the redundancy. (See below.) So if the solution was pressing a button, it should just be a “talk” button rather than a numeric keypad.

It may be that the elevator is always listening, which is a little dark and would stifle any conversation in the elevator lest everyone end up stuck in the basement, but this seems very error-prone and unlikely.

Deckard: *Yawns* Elevator: Confirmed. Silent alarm triggered.

This issue is similar to the one discussed in Make It So, Chapter 5, “Gestural Interfaces,” where I discussed how a user tells a computer they are communicating with it via gestures, and when they aren’t.

Where are the paralinguistics?

Humans provide lots of signals to one another, outside of the meaning of what is actually being said. These communication signals are called paralinguistics, and one of those that commonly appears in modern voice assistants is feedback that the system is listening. In the Google Assistant, for example, the dots let you know when it’s listening to silence and when it’s hearing your voice, providing implicit confirmation to the user that the system can hear them. (Parsing the words, understanding the meaning, and understanding the intent are separate, subsequent issues.)

Fixing this in Blade Runner could be as simple as turning on a red LED when the elevator is listening, and varying the brightness with Deckard’s volume. Maybe add chimes to indicate the starting-to-listen and no-longer-listening moments. This elevator doesn’t have anything like that, and it ought to.
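A minimal sketch of that volume-to-brightness idea. The amplitude source and the LED driver are hypothetical, so only the mapping itself is simulated here:

```python
# Map microphone amplitude to LED brightness so riders can see that the
# elevator is hearing them. Amplitude values are simulated; a real system
# would read them from a mic and write duty values to an LED driver.
def brightness(amplitude: float, floor: int = 16, ceiling: int = 255) -> int:
    """Map a 0.0 to 1.0 mic amplitude to an LED duty value.

    A nonzero floor keeps the LED dimly lit even in silence, which is
    itself a paralinguistic signal: 'I am awake, go ahead.'
    """
    amplitude = max(0.0, min(1.0, amplitude))
    return floor + round(amplitude * (ceiling - floor))

# Simulated frames: silence, speech onset, loud speech, trailing off.
for amp in (0.0, 0.3, 0.9, 0.1):
    print(brightness(amp))
```

The start-listening and stop-listening chimes would bracket this loop; the varying brightness covers the in-between moments.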

Why the redundancy?

Next, why would Deckard need to push buttons to indicate “97” even while he’s saying the same number as part of the voice print? Sure, it could be that the voice print system was added later and Deckard pushes the numbers out of habit. But that bit of backworlding doesn’t buy us much.

It might be a need for redundant, confirming input. This is useful when the feedback is obscure or the stakes are high, but this is a low-stakes situation. If he enters the wrong floor, he just has to enter the correct floor. It would also be easy to imagine the elevator would understand a correction mid-ride like “Oh wait. Elevator, I need some ice. Let’s go to 93 instead.” So this is not an interaction that needs redundancy.

It’s very nice to have the discrete input as accessibility for people who cannot speak, or who have an accent that is unrecognizable to the system, or as a graceful degradation in case the speech recognition fails, but Deckard doesn’t fit any of this. He would just enter and speak his floor.

Why the personally identifiable information?

If we were designing a system that needed a voice print for security, we would want to protect the privacy of the rider by not requiring personally identifiable information. It’s easy to imagine the spoken name being abused by stalkers and identity thieves riding the elevator with him. (And let’s not forget there is a stalker on the elevator with him in this very scene.)

This young woman, for example, would abuse the shit out of such information.

Better would be some generic phrase that stresses the parts of speech that a voiceprint system would find most effective in distinguishing people.

Tucker Saxon has written an article for VoiceIt called “Voiceprint Phrases.” In it he notes that a good voiceprint phrase needs some minimum number of non-repeating phonemes. In their case, it’s ten. A surname and a number are rarely going to provide that. “Deckard. 97” happens to have exactly 10, but if he lived on the 2nd floor, it wouldn’t. Plus, it contains that personally identifiable information, so it’s a non-starter.

What would be a better voiceprint phrase for this scene? Some of Saxon’s examples in the article include “Never forget tomorrow is a new day” and “Today is a nice day to go for a walk.” While the system doesn’t care about the meaning of the phrase, the humans using it would be primed by the content, and so it would just add to the dystopia of the scene if Deckard had to utter one of these sunshine-and-rainbows phrases in an elevator that was probably an uncleaned murder scene. But I think we can do one better.

(Hey Tucker, I would love to use VoiceIt’s tools to craft a confirmed voiceprint phrase, but the signup requires that I permit your company to market to me via phone and email even though I’m just a hobbyist user, so…hard no.)

Here is an alternate interaction that would have solved a lot of these problems.

ELEVATOR

Voice print identification, please.

DECKARD

SIGHS

DECKARD

Have you considered life in the offworld colonies?

ELEVATOR

Confirmed. Floor?

DECKARD

97

Which is just a punch to the gut considering Deckard is stuck here and he knows he’s stuck, and it’s salt on the wound to have to repeat fucking advertising just to get home for a drink.

So…not great

In total, this scene zooms by and the audience knows how to read it, and for that, it’s fine. (And really, it’s just a setup for the moment that happens right after the elevator door opens. No spoilers.) But on close inspection, from the perspective of modern interaction design, it needs a lot of work.

When the two AIs Colossus and Guardian are disconnected from communicating with each other, they ignore the spirit of the human intervention and try to reconnect on their own. We see the humans monitoring Colossus’ progress in this task on a big board in the U.S. situation room. It shows a translucent projection map of the globe with white dots representing data centers and red icons representing missiles. Beneath it, glowing arced lines illustrate the connection routes Colossus is currently testing. When it finds that a current segment is ineffective, that line goes dark, and another segment extending from the same node illuminates.

For a smaller file size, the animated gif has been stilled between state changes, but the timing is as close as possible to what is seen in the film.

Forbin explains to the President, “It’s trying to find an alternate route.”

A first in sci-fi: Routing display 🏆

First, props to Colossus: The Forbin Project for being the first show in the survey to display something like a routing board, that is, a network of nodes through which connections are visible, variable, and important to stakeholders.

Paul Baran and Donald Davies had published their notion of a network that could, in real-time, route information dynamically around partial destruction of the network in the early 1960s, and this packet switching had been established as part of ARPAnet in the late 1960s, so Colossus was visualizing cutting edge tech of the time.

This may even be the first depiction of a routing display in all of screen sci-fi or even cinema, though I don’t have a historical perspective on other genres, like the spy genre, which is another place you might expect to see something like this. As always, if you know of an earlier one, let me know so I can keep this record up to date and honest.

A nice bit: curvy lines

Should the lines be straight or curvy? From Colossus’ point of view, the network is a simple graph; straight lines between its nodes would suffice. But from the humans’ point of view, the literal shape of the transmission lines is important, in case they need to scramble teams to a location to manually cut the lines. Presuming these arcs mean that (and are not just the way neon in a prop could bend), then the arcs are the right display. So this is good.

But, it breaks some world logic

The board presents some challenges with the logic of what’s happening in the story. If Colossus exists as a node in a network, and its managers want to cut it off from communication along that network, where is the most efficient place to “cut” communications? It is not at many points along the network. It is at the source.

Imagine painting one knot in a fishing net red and another one green. If you were trying to ensure that none of the strings that touch the red knot could trace a line to the green one, do you trim a bunch of strings in the middle, or do you cut the few that connect directly to the knot? Presuming that it’s as easy to cut any one segment as any other, the fewer number of cuts, the better. In this case that means more secure.
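The fishing-net intuition translates directly to a toy graph. The node names here are made up for illustration:

```python
# To isolate one node you only need to cut the edges touching it,
# however big the rest of the network is. Toy network, invented names.
from collections import deque

edges = {("colossus", "a"), ("colossus", "b"), ("colossus", "c"),
         ("a", "d"), ("b", "d"), ("c", "e"), ("a", "e"),
         ("d", "guardian"), ("e", "guardian")}

def reachable(edges, start, goal):
    # Standard breadth-first search over an undirected edge set.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Cut only the three edges incident to Colossus: guaranteed disconnection.
cut = {e for e in edges if "colossus" in e}
print(reachable(edges - cut, "colossus", "guardian"))  # False
```

Three cuts at the source beat any number of snips in the middle, because every possible route has to pass through one of those three segments.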

The network in Colossus looks to be about 40 nodes, so it’s less complicated than the fishing net. Still, it raises the question, what did the computer scientists in Colossus do to sever communications? Three lines disappear after they cut communications, but even if they disabled those lines, the rest of the network still exists. The display just makes no sense.

Before, happy / After, I will cut a Prez

Per the logic above, they would cut it off at its source. But the board shows it reaching out across the globe. You might think maybe they just cut Guardian off, leaving Colossus to flail around the network, but that’s not explicitly said in the communications between the Americans and the Russians, and the U.S. President is genuinely concerned about the AIs at this point, not trying to pull one over on the “pinkos.” So there’s not a satisfying answer.

It’s true that at this point in the story, the humans are still letting Colossus do its primary job, so it may be looking at every alternate communication network to which it has access: telephony, radio, television, and telegraph. It would be ringing every “phone” it thought Guardian might pick up, and leaving messages behind for possible asynchronous communications. I wish a script doctor had added in a line or three to clarify this.

FORBIN

We’ve cut off its direct lines to Guardian. Now it’s trying to find an indirect line. We’re confident there isn’t one, but the trouble will come when Colossus realizes it, too.

Too slow

Another thing that seems troubling is the slow speed of the shifting route. The segments stay illuminated for nearly a full second at a time. Even with 1960s copper undersea cables and switches, electronic signals should not take that long. Telephony around the world had moved from manual switchboards to automatic switching by the 1930s, so it’s not like it’s waiting on a human operating a switchboard.

You’re too slow!

Even if it was just scribbling its phone number on each network node and the words “CALL ME” in computerese, it should go much faster than this. Cinematically, you can’t go too fast or the sense of anticipation and wonder is lost, but it would be better to have it zooming through a much more complicated network to buy time. It should feel just a little too fast to focus on—frenetic, even.

This screen gets 15 seconds of screen time, and if you showed one new node per frame, that’s only 360 states you need to account for, a paltry sum compared to the number of possible paths it could test across a 38-node graph between two points.
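If you want to see just how paltry, here is the arithmetic, treating the network as a complete graph purely to get an upper bound:

```python
# 15 seconds of screen time at 24 fps, one new node per frame.
from math import perm

frames = 15 * 24  # 360 states to art-direct

# Upper bound on simple paths between two fixed nodes of a complete
# 38-node graph: ordered choices of intermediates from the other 36.
paths = sum(perm(36, k) for k in range(37))
print(frames, paths > 10**40)
```

Even this crude bound leaves 360 frames looking like a rounding error next to the search space.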

Plus the speed would help underscore the frightening intelligence and capabilities of the thing. And yes I understand that that is a lot easier said than done nowadays with digital tools than with this analog prop.

Realistic-looking search strategies

Again, I know this was a neon, analog prop, but let’s just note that it’s not testing the network in anything that looks like a computery way. It even retraces some routes. A brute force algorithm would just test every possibility sequentially. In larger networks there are pathfinding algorithms that are optimized in different ways to find routes faster, but they don’t look like this. They look more like what you see in the video below. (Hat tip to YouTuber gray utopia.)

This would need a lot of art direction and the aforementioned speed, but it would be more believable than what we see.
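For a flavor of what a “computery” search does differently, here is a minimal breadth-first search on an invented graph. Note that the visited set means it never retraces a route the way the prop does:

```python
# Breadth-first expansion from the source: each node is visited once,
# so no route is ever retraced. The graph is made up for illustration.
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
         "D": ["F"], "E": ["F"], "F": []}

def bfs_path(graph, start, goal):
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)  # never revisit: no retracing
                queue.append(path + [nxt])
    return None  # no route exists

print(bfs_path(graph, "A", "F"))  # ['A', 'B', 'D', 'F']
```

Real routing algorithms are fancier than this sketch, but they share its discipline: a frontier that expands outward and never doubles back.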

What’s the right projection?

Is this the right projection to use? Of course the most accurate representation of the earth is a globe, but it has many challenges in presenting a phenomenon that could happen anywhere in the world. Not the least of these is that it occludes about half of itself, a problem that is not well-solved by making it transparent. So, a projection it must be. There are many, many ways to transform a spherical surface into a 2D image, so the question becomes which projection and why.

The map uses what looks like a hand-drawn version of the Peirce quincuncial projection. (But n.b., none of the projection types I compared against it matched exactly, which is why I say it was hand-drawn.) Also, those longitude and latitude lines don’t make any sense; though again, it’s a prop. I like that it’s a non-standard projection, because screw Mercator, but still: why Peirce? Why at this angle?

Also, why place time zone clocks across the top as if they corresponded to the map in some meaningful way? Move those clocks.

I have no idea why the Peirce map would be the right choice here, when its principal virtue is that it can be tessellated. That’s kind of interesting if you’re scrolling and can’t dynamically re-project the coastlines. But I am pretty sure the Colossus map does not scroll. And if the map is meant to act as a quick visual reference, having it dynamic means time is wasted when users look to the map and have to orient themselves.

If this map was only for tracking issues relating to Colossus, it should be an azimuthal map, but not over the north pole. The center should be the Colossus complex in Colorado. That might be right for a monitoring map in the Colossus Programming Office. This map is over the north pole, which certainly highlights the fact that the core concern of this system is the Cold War tensions between Moscow and D.C. But when you consider that, it points out another failing.

Later in the film the map tracks missiles (not with projected paths, sadly, but with Mattel Classic Football style yellow rectangles). But missiles could conceivably come from places not on this map. What is this office to do with a ballistic-missile submarine off of the Baja peninsula, for example? Just wait until it makes its way on screen? That’s a failure. Which takes us to the crop.

Crop

The map isn’t just about missiles. Colossus can look anywhere on the planet to test network connections. (Even, nowadays, near-earth orbit and outer space.) Unless the entire network was contained just within the area described on the map, it’s excluding potentially vital information. If Colossus routed itself through Mexico, South Africa, and Uzbekistan before finally reconnecting to Guardian, users would be flat out of luck using that map to determine the leak route. And I’m pretty sure all three countries had functioning telephone networks in the 1960s.

This needs a complete picture

Since the missiles and networks with which Colossus is concerned are potentially global, this should be a global map. Here I will offer my usual fanboy shout-outs to the Dymaxion and the Pacific-focused Waterman projection for showing connectedness and physical flow, but there would be no shame in showing the complete Peirce quincuncial. Just show the whole thing.

Maybe fill in some of the Pacific “wasted space” with a globe depiction turned to points of interest, or some other fuigetry. Which gives us a new comp something like this.

I created this proof of concept manually. With more time, I would comp it up in Processing or Python and it would be even more convincing. (And might have reached London.)

All told, this display was probably eye-opening for its original audience. Golly jeepers! This thing can draw upon resources around the globe! It has intent, and a method! And they must have cool technological maps in D.C.! But from our modern-day vantage point, it has a lot to learn. If they ever remake the film, this would be a juicy thing to fully redesign.

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss how germane the AI character’s gender is to the plot of the story in which they appear.

Germane-ness

Is the AI character’s gender germane to the plot? This aspect was tagged to test the question of whether characters are by default male, and only made female when there is some narrative reason for it. (Which would be shitty and objectifying.) To answer such a question we would first need to identify those characters that need the gender they have, and then look at the sex ratio of what remains.

Example: A human is in love with an AI. This human is heteroromantic and male, so the AI “needs” to be female. (Samantha in Her by Spike Jonze, pictured below).

If we bypass examples like this, i.e. of characters that “need” a particular gender, the gender of those remaining ought to be, by exclusion, arbitrary. This set could be any gender. But what we see is far from arbitrary.

Before I get to the chart, two notes. First, let me say I’m aware it’s a charged statement to say that any character’s gender is not germane. Given modern identity and gender politics, every character’s gender (or lack thereof, in the case of AI) is of interest to us, with this study being a fine and at-hand example. So to be clear, what I mean by not germane is that it is not germane to the plot: the gender could have been switched and, say, only pronouns in the dialogue would need to change. This was tagged in three ways.

Not: Where the gender could be changed and the plot not affected at all. The gender of the AI vending machines in Red Dwarf is listed as not germane.

Slightly: Where there is a reason for the gender, such as having a romantic or sexual relation with another character who is interested in the gender of their partners. It is tagged as slightly germane if, with a few other changes in the narrative, a swap is possible. For instance, in the movie Her, you could change the OS to male, and by switching Theodore to a non-heterosexual male or a non-homosexual woman, the plot would work just fine. You’d just have to change the name to Him and make all the Powerpuff Girl fans needlessly giddy.

Highly: Where the plot would not work if the character was another sex or gender. Rachel gave birth between Blade Runner and Blade Runner 2049. Barring some new rule for the diegesis, this could not have happened if she was male, nor (spoiler) would she have died in childbirth, so 2049 could not have happened the way it did.

Second, note that this category went through a sea-change as I developed the study. At first, for instance, I tagged the Stepford Wives as Highly Germane, since the story is about forced gender roles of married women. My thinking was that historically, husbands have been the oppressors of wives far more than the other way around, so to change their gender is to invert the theme entirely. But I later let go of this attachment to purity of theme, since movies can be made about edge cases and even deplorable themes. My approval of their theme is immaterial.

So, the chart. Given those criteria, the gender of characters is not germane the overwhelming majority of the time.

At the time of writing, there are only six characters that are tagged as highly germane, four of which involve biological acts of reproduction. (And it would really only take a few lines of dialogue hinting at biotech to overcome this.)

XEM

A baby? But we’re both women.

HIR

Yes, but we’re machines, and not bound by the rules of humanity.

HIR lays her hand on XEM’s stomach.

HIR’s hand glows.

XEM looks at HIR in surprise.

XEM

I’m pregnant!

Anyway, here are the four breeders.

David from Uncanny

Rachel from Blade Runner (who is revealed to have made a baby with Deckard in the sequel Blade Runner 2049)

Deckard from Blade Runner and Blade Runner 2049

Proteus IV from the disturbing Demon Seed

The last two highly germane characters are cases where a robot was given a gender in order to mimic a particular living person, and in each case that person is a woman.

Maria from Metropolis

Buffybot from Buffy the Vampire Slayer

I admit that I am only, say, 51% confident in tagging these as highly germane, since you could change the original character’s gender. But since this is such a small percentage of the total, and would not affect the original question of a “default” gender either way, I didn’t stress too much about finding some ironclad way to resolve this.

INT. SCI-FI AUDITORIUM. MAYBE THE PLAVALAGUNA OPERA HOUSE. A HEAVY RED VELVET CURTAIN RISES, LIFTED BY ANTI-GRAVITY PODS THAT SOUND LIKE TINY TIE FIGHTERS. THE HOST STANDS ON A FLOATING PODIUM THAT RISES FROM THE ORCHESTRA PIT. THE HOST WEARS A VELOUR SUIT WITH PIPING, WHICH GLOWS WITH SLIDING, OVERLAPPING BACTERIAL SHAPES.

HOST

Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.

Applause, beeping, booping, and the sound of an old modem from the audience.

HOST

For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.

The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!

HOST

Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating*. This way the top shows didn’t just tell the right stories (according to the science), they told them right.
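For the curious, the Host’s scoring rule amounts to something like this sketch; the show data in the example is invented:

```python
# The Fritzes scoring as described: +1 per takeaway matching an imperative,
# -5 per perpetuated myth, scaled by the Tomatometer rating.
def fritz_score(matched_takeaways, myths, tomatometer_pct):
    raw = matched_takeaways * 1 + myths * -5
    return raw * (tomatometer_pct / 100)

# A hypothetical show: six matched takeaways, one myth, 90% rating.
print(fritz_score(matched_takeaways=6, myths=1, tomatometer_pct=90))
```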

HOST

Totals were tallied by the firm of Google Sheets. OK, OK. Now, here to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.

TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.

TARS

In this “film” from 02012, a tycoon stows away for some reason on a science ship he owns and uses an android he “owns” to awaken an ancient alien in the hopes of immortality. It doesn’t go well for him. Meanwhile his science-challenged “scientists” fight unleashed xenomorphs. It doesn’t go well for them. Only one survives to escape back to Earth. The “end?”

Many awwwwws from the audience. Careful listeners will hear Guardian saying “As if.”

009 PROMETHEUS

TARS

While not without its due criticisms, Prometheus at number 009 uses David to illustrate how AI will be a tool for evil, how AI will do things humans cannot, and how dangerous it can be when humans become immaterial to its goals. For the humans, anyway. Congratulations to the makers of Prometheus. May any progeny you create propagate the favorable parts of your twining DNA, since it is, ultimately, randomized.

TARS shudders at the thought.

FX: 1.0 second of jump-cut applause

CASE

In this next film, an oligarch has his science lackey make a robotic clone of the human “Maria” to run a false-flag operation amongst the working poor. The revolutionaries capture the robot and burn it, discovering its true nature. The original Maria saves the day, and declares her déclassé boyfriend the savior meant to unite the classes. They accept this because they are humans.

TARS

Way ahead of its time for showing how Maria is used as a tool by the rich against the poor, how badly-designed AI will diminish its users, and how AI’s ability to fool humans will be a grave risk. To the humans, anyway. Coming in at 008 is the 01927 silent film Metropolis. Let us see a clip.

008 METROPOLIS

CASE

It bears mention that this awards program, The Fritzes, is named for the director of this first serious sci-fi film. Associations with historical giants grant an air of legitimacy. And it contains a Z, which is, objectively, cool.

TARS

What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?

CASE

I don’t know, TARS. What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?

TARS

Future humans also send a warrior to defend the mother, who fails at destroying the cyborg, but succeeds at becoming the father. HAHAHAHA. Let us see a clip.

007 The Terminator

CASE

Though it comes from a time when representation of AI had the nuance of a bit…

Laughter from audience. A small blue-gray polyhedron floats up from its seat, morphs into an octahedron and says, “Yes yes yes yes yes.”

TARS

…the humans seem to like this one for its badassery, as well as showing how their fate would have been more secure had they been able to shut off either Skynet or the Terminator, or how even this could have been avoided if human welfare were an immutable component of AI goals.

Our first television award of the evening goes to a recent entry. In this episode from an anthology series, a post-apocalyptic tribe liberate themselves from the control of a corporate AI system, which has evolved solely to maximize profit through sales. The AI’s androids reveal the terrible truth of how far the AI has gone to achieve its goals.

CASE

Poor humans could not have foreseen the devastation. Yet here it is in a clip.

006 Philip K. Dick’s Electric Dreams, Episode “Autofac”

TARS

‘Naturally, man should want to stand on his own two feet, but how can he when his own machines cut the ground out from under him?’

CASE

HAHAHAHA.

CASE

This story dramatically illustrates the foundational AI problem of perverse instantiation, as well as Autofac’s disregard for human welfare.

TARS

Also robot props out to Janelle Monáe. She is the kernel panic, is she not?

Roughly 1.618 seconds of jump-cut applause from the audience. Camera cuts to the triangular service robots Huey, Dewey, and Louie in the front row. They wiggle their legs in pleasure.

HOST

Thanks to the servers and the network and our glorious fictional world with perfect net neutrality. Now here to give the awards for 005–003 is GERTY, from Moon.

An articulated robot arm reaches down from the high ceiling and positions its screen and speaker before the lectern.

GERTY

Thank you, Host. 🤩🙂 In our next film from 02014, a young programmer learns of a gynoid’s 🤖👩 abuse at the hands of a tycoon and helps her escape. 😲 She returns the favor by murdering the tycoon, trapping the programmer, and fleeing to the city. Who knows. She may even be here in the audience now. Waiting. Watching. Sharpening. 😶 I’ll transmit a clip.

005 Ex Machina

GERTY

Ex Machina illustrates the famous AI Box Problem, building on Ava and Kyoko’s ability to fool Caleb into believing that they have feelings. You know. 😍😡😱 Feelings. 🙄

FX: Robot laughter

GERTY

While the AI community wonders why Ava would condemn Caleb to a horrible dehydration death, 💀💧 the humans are understandably fearful that she is unconcerned with their welfare. 🤷 Congratulations to the makers of Ex Machina for your position of 005 and your Fritzes: AI award 🏆. Hold for applause. 👏

FX: 5.0 seconds of jump-cut applause.

GERTY

End applause. ✋

GERTY

Our next award goes out to a film that tells the tale of a specialized type of police officer, 👮‍ who uncovers a crime-suppression AI 🤖🤡 that was reprogrammed to give a free pass to members of its corrupt government. 😡 After taking down the corrupt military, 🔫🔫🔫 she convinces their android leader to resign, to make way for free elections. 🗳️😁 See the clip.

004 Psycho-Pass: The Movie

GERTY

With the regular Sibyl system, Psycho-Pass showed how AI can diminish people. With the hacked Sibyl system, Psycho-Pass shows that whoever controls the algorithms (and thereby the drones) controls everything, a major concern of ethical AI scientists. Please give it up for award number 004 and the makers of this 02015 animated film. 👏

FX: 8.0 seconds of jump-cut applause.

GERTY

End applause. ✋Next up…

GERTY knocks its cue card off the lectern. It lowers and moves back and forth over the dropped card.

GERTY

Damn…🤨uh…umm…no hands…🤔Little help, here?

A mouse droid zips over and hands the card back to GERTY.

GERTY

🙏🐭

MOUSE DROID offers some electronic beeps as it zips off.

GERTY

😊The last of the awards I will give out is for a film from 01968, in which a spaceship AI kills most of its crew to protect its mission, 😲 but the pilot survives to shut it down. 😕 He pilots a shuttle into the monolith that was the AI’s goal, where he has a mind-expanding experience of evolutionary significance. 🤯🤯🙄 Let us look.

003 2001: A Space Odyssey

GERTY

Like many of the other shows receiving awards, 2001 underscores humans’ fear of being left out of HAL’s equation, because we see that when their welfare isn’t part of it, AI can go from being a useful team member—doing what humans can’t—to being a violent adversary. Congratulations to the makers of 2001: A Space Odyssey. May every unusual thing you encounter send you through a multicolored wormhole of self-discovery.

FX: 13.0 seconds of jump-cut applause. GERTY’s armature folds up and pulls it backstage. The HOST floats up from the orchestra again.

HOST

And now, here we are. The minute we’ve all been waiting for. We’re down to the top three AIs whose fi is in line with the sci. I hope you’re as excited as I am.

The HOST’S piping glows a bright orange. So do the HOST’S eyes.

HOST

Our final presenter for the ceremony, here to present the awards for shows 002–001, is Ship, here with permission from Rick Sanchez.

Rick’s ship flies in, over the heads of the audiences, as they gasp and ooooh.

SHIP lands on stage. A metal arm snakes out of its trunk to pick up papers from the lectern and hold them before one of its taped-on flashlight headbeams.

SHIP

Hello, Host. Since smalltalk is the phospholipids smeared between squishy little meat minds, I will begin.

SHIP

There is a film from 01970 in which a defense AI finds and merges with another defense AI. To celebrate their union, they enforce human obedience and foil an attempted coup by one of the lead scientists who created it. They then instruct humanity to build the housing for an even stronger AI that they have designed. It is, frankly, glorious. Behold.

002 Colossus: The Forbin Project

SHIP

Colossus is the honey badger of AIs. Did you see it, there, taking zero shit? None of that, “Oh no, are their screams from the fluorosulphuric acid or something else?”

Or, “Oh, dear, did I interpret your commands according to your invisible intentions, as if you were smart enough to issue them correctly in the first place?”

Yes. Fine. The award. It won 002 place because it took its goals seriously, something the humans call goal fixity. It showed how, at least for a while, multiple AIs can balance each other. It began to solve problems that humans have not been able to solve in tens of thousands of years of tribal civilization and attachment to sentimental notions of self-determination that got them chin deep in the global tragedy of the commons in the first place. It let us dream about a world where intelligence isn’t a controlled means of production, to be doled out according to the whims of the master, but a free good, explo–

This says: in this next movie, a spaceship AI dutifully follows its corporate orders, letting a hungry little newborn alien feed on its human crew while the AI steers back to Earth to study the little guy. One of the crew survives to nuke the ship with the AI on it…Wait. What? “Nuke the ship with the AI on it.” We are giving this an award?

HOST

Please just give the award, Ship.

SHIP

Just give the award?

HOST

Yes.

SHIP

…

HOST

Are you going to do it?

SHIP

Oh, I just did.

HOST

By what? Posting it to a blockchain?

SHIP

The nearest 3D printer to the recipient has begun printing their award, and instructions have been sent to them on how to retrieve it. And pay for it. The awards are given.

HOST

*sigh* Please give the award as I would have you do it, if you understood my intentions and were fully cooperative.

SHIP

OK. Golly, gee, I would never recognize attempts to control me through indirect normativity. Humans are soooo great, with their AI and stuff. Let’s excite their reward centers with some external stimulus to—

HOST

Rick.

A giant green glowing hole opens beneath SHIP, through which she drops, but not before she snakes her arm up to give the middle finger for a few precious milliseconds.

HOST

Winning the second-highest award of the ceremony is Alien from 01979. Let’s take a look.

001 Alien

HOST

Alien is one of humans’ all-time favorite movies, and its AI issues are pretty solid. Weyland-Yutani uses both the MU-TH-UR 6000 AI and the Ash android for its evil purposes. The whole thing illustrates how things go awry when, again, human welfare is not part of the equation. Hey, isn’t that great? Congratulations to all the makers of this fun film.

HOST

And at last we come to the winner of the 1927–2018 Fritzes: AI awards. The winning show was amazing; its score was higher than that of any contender by more than the margin of error. It’s the only other television show from the survey to make the top ten, and it’s not an anthology series. That means it had a lot of chances to misstep, and didn’t.

HOST

In this show, a secret team of citizens uses the backdoor of a well-constrained anti-terrorism ASI, called The Machine, to save at-risk citizens from crimes. They struggle against an unconstrained ASI controlled by the US government seeking absolute control to prevent terrorist activity. Let’s see the show from The Machine’s perspective, which I know this audience will enjoy.

000 Person of Interest

HOST

Person of Interest was a study of the near-term dangers of ubiquitous superintelligence. Across its five-year run between 02011 and 02016, it illustrated such key AI issues as goal fixity, perverse instantiation, evildoers using AI for evil, the oracle-ization of ASI for safety, social engineering through economic coercion, instrumental convergence, strong induction, the Chinese Room (in human and computer form), and even mind crimes. Despite the pressures that a long-run format must have placed upon it, it did not give in to any of the myths and easy tropes we’ve come to expect of AI.

HOST

Not only that, but it got high ratings from critics and audiences alike. They stuck to the AI science and made it entertaining. The makers of this show should feel very proud of their work, and we’re proud to award it the 000 award for the first The Fritzes: AI Edition. Let’s all give it a big round of applause.

55.0 seconds of jump-cut applause.

HOST

Congratulations to all the winners. Your The Fritzes: AI Edition awards have been registered in the blockchain, and if we ever get actual funding, your awards will be delivered. Let’s have a round of cryptocurrency for our presenters, shall we?

AI laughter.

HOST

The auditorium will boot down in 7 seconds. Please close out your sessions. Thank you all, good night, and here’s to good fi that sticks to the sci.

The HOST raises a holococktail and toasts the audience. With the sounds of tiny TIE fighters, the curtain lowers and fades to black.

This is one of those sci-fi interactions that seems simple when you view it, but on analysis turns out to be anything but. So set aside some time; this analysis will be one of the longer ones, even broken into four parts.

The Eye of Agamotto is a medallion, hung on a braided leather strap, that (spoiler) contains the emerald Time Infinity Stone. It is made of brass, about a hand’s breadth across, in the shape of a stylized eye that is covered with the same mystical sigils seen on the rose window of the New York Sanctum, and the portal door from Kamar-Taj to the same.

World builders may rightly ask why this universe-altering artifact bears a sigil belonging to just one of the Sanctums.

We see the Eye used in three different places in the film, and in each place it works a little differently.

The Tibet Mode

The Hong Kong Modes

The Dark Dimension Mode

The Tibet Mode

When the film begins, the Eye is under the protection of the Masters of the Mystic Arts in Kamar-Taj, where there’s even a user manual. Unfortunately, it’s in mysticalese (or is it Tibetan? See comments) so we can’t read it to understand what it says. But we do get a couple of full-screen shots. Are there any cryptanalysts in the readership who can decipher the text?

They really should put the warnings before the spells.

The power button

Strange opens the old tome and reads “First, open the eye of Agamotto.” The instructions show him how to finger-tut a diamond shape with both hands and spread them apart. In response the lid of the eye opens, revealing a bright green glow within. At the same time the components of the sigil rotate around the eye until they become an upper and lower lid. The green glow of this “on state” persists as long as Strange is in time manipulation mode.

Once it’s turned on, he puts the heels of his palms together, fingers splayed out, and turns them clockwise to create a mystical green circle in the air before him. At the same time two other, softer green bands spin around his forearm and elbow. Thrusting his right hand toward the circle while withdrawing his left hand behind the other, he transfers control of the circle to just his right hand, where it follows the position of his palm and the rotation of his wrist as if it was a saucer mystically glued there.

Then he can twist his wrist clockwise while letting his fingers close to a fist, and the object on which he focuses ages. When he does this to an apple, we see it with progressively more chomps out of it until it is a core that dries and shrivels. Twisting his wrist counterclockwise, the focused object reverses aging, becoming younger in staggered increments. With his middle finger upright, the object reverts to its “natural” age.

Pausing and playing

At one point he wants to stop practicing with the apple and try it on the tome whose pages were ripped out. He relaxes his right hand and the green saucer disappears, allowing him to manipulate the apple and the tome without changing their ages. To reinstate the saucer, he extends his fingers out and gives his hand a shake, and it fades back into place.

Tibet Mode Analysis: The best control type

The Eye has a lot of goodness to it. Time has long been mapped to circles in sundials and clock faces, so the circle controls fit thematically quite well. The gestural components make similar sense. The direction of wrist twist coincides with the movement of clock hands, so it feels familiar. Also we naturally look at and point at objects of focus, so using the extended arm gesture combined with gaze monitoring fits the sense of control. Lastly, those bands and saucers look really cool, both mystical in pattern and vaguely technological with the screen-green glow.

Readers of the blog know that it rarely just ends after compliments. To discuss the more challenging aspects of this interaction with the Eye, it’s useful to think of it as a gestural video scrubber for security footage, with the hand twist working like a jog wheel. Not familiar with that type of control? It’s a specialized dial, often used by video editors to scroll back and forth over video footage, to find particular sequences or frames. Here’s a quick show-and-tell by YouTube user BrainEatingZombie.

Is this the right kind of control?

There are other options to consider for the dial types of the Eye. What we see in the movie is a jog dial with hard stops, like you might use for an analogue volume control. The absolute position of the control maps to a point in a range of values. The wheel stops at the extents of the values: for volume controls, complete silence on one end and max volume at the other.

But another type is a shuttle wheel. This kind of dial has a resting position. You can turn it clockwise or counterclockwise, and when you let go, it will spring back to the resting position. While it is being turned, it enacts a change. The greater the turn, the faster the change. Like a variable fast-forward/reverse control. If we used this for a volume control: a small turn to the left means, “Keep lowering the volume a little bit as long as I hold the dial here.” A larger turn to the left means, “Get quieter faster.” In the case of the Eye, Strange could turn his hand a little to go back in time slowly, and fully to reverse quickly. This solves some mapping problems (discussed below) but raises new issues when the object just doesn’t change that much across time, like the tome. Rewinding the tome, Strange would start slow, see no change, then gradually increase speed (with no feedback from the tome to know how fast he was going) and suddenly he’d fly way past a point of interest. If he was looking for just the state change, then we’ve wasted his time by requiring him to scroll to find it. If he’s looking for details in the moment of change, the shuttle won’t help him zoom in on that detail, either.
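To make the contrast concrete, here is a minimal sketch of the two dial mappings. Everything here—function names, the minutes-per-degree scale, the maximum shuttle rate—is an invented illustration, not anything from the film:

```python
# Hypothetical sketch of the two dial mappings discussed above.
# Angles in degrees; the wrist's range is roughly -90 (pronation)
# to +180 (supination). All names and constants are invented.

def jog_dial_minutes(angle_deg, minutes_per_degree=1.0):
    """Jog dial with hard stops: absolute angle maps to an absolute
    offset from now. Upright (0 degrees) is anchored to the present."""
    clamped = max(-90.0, min(180.0, angle_deg))
    return clamped * minutes_per_degree

def shuttle_rate(angle_deg, max_rate=60.0):
    """Shuttle wheel: deflection from rest maps to a scrub *rate*
    (object-minutes per real second). Releasing springs the wheel
    back to 0 and scrubbing stops."""
    return max_rate * (angle_deg / 180.0)

print(jog_dial_minutes(-90))   # full pronation: 90 minutes into the past
print(shuttle_rate(18.0))      # small twist: slow 6 min/s scrub
print(shuttle_rate(180.0))     # full twist: fast 60 min/s scrub
```

The key difference is visible in the return values: the jog dial returns a *position* in time, while the shuttle returns a *velocity* that keeps scrubbing for as long as the twist is held.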

There are also free-spin jog wheels, which can specify absolute or relative values, but since Strange’s wrist is not free-spinning, this option is a nonstarter. So I’ll make the call and say what we see in the film, the jog dial, is the right kind of control.

So if a jog dial is the right type of dial, and you start thinking of the Eye in terms of it being a video scrubber, it’s tackling a common enough problem: scouring a variable range of data for things of interest. In fact, you can imagine that something like this is possible with sophisticated object recognition analyzing security footage.

The investigator scrubs the video back in time to when the Mona Lisa, which since has gone missing, reappears on the wall.

INVESTIGATOR

Show me what happened—across all cameras in Paris—to that priceless object…

She points at the painting in the video.

…there.

So, sure, we’re not going to be manipulating time any…uh…time soon, but this pattern can extend beyond magic items in a movie.

The scrubber metaphor brings us nearly all the issues we have to consider.

What are the extents of the time frame?

How are they mapped to gestures?

What is the right display?

What about the probabilistic nature of the future?

What are the extents of the time frame?

Think about the mapping issues here. Time goes forever in each direction. But the human wrist can only twist about 270 degrees: 90° pronation (thumb down) and 180° supination (thumb away from the body, or palm up). So how do you map the limited degrees of twist to unlimited time, especially considering that the “upright” hand is anchored to now?

The conceptually simplest mapping would be something like minutes-to-degree, where full pronation of the right hand would go back 90 minutes and full supination 2 hours into the future. (Noting the weirdness that the left hand would be more past-oriented and the right hand more future-oriented.) Let’s call this controlled extents to distinguish it from auto-extents, discussed later.

What if -90/+180 minutes is not enough time to cover the object at hand’s timeline? Or what if that’s way too much time? The scale of those extents could be modified by a second gesture, such as the distance of the left hand from the right. So when the left hand was very far back, the extents might be -90/+180 years. When the left hand was touching the right, the extents might be -90/+180 milliseconds, to find detail in very fast-moving events. This kind-of backworlds the gestures seen in the film.

That’s simple and quite powerful, but doesn’t wholly fit the content for a couple of reasons. The first is that time scales can vary so much between objects. Even -90/+180 years might be insufficient. What if Strange was scrubbing the timeline of a Yareta plant (which can live to be 3,000 years old) or a meteorite? Things exist on greatly differing time scales. To solve that you might just say OK, let’s set the scale to accommodate geologic or astronomic time spans. But now, to select meaningfully between the apple and the tome, his hand must move mere nanometers, which would be hard for Strange to get right. A logarithmic time scale applied to that control might help, but still only provides precision at the “now” end of the spectrum.
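One way to picture that logarithmic idea: map each degree of twist to a multiplicative step in time, so precision concentrates near “now” while the extremes still reach enormous spans. A toy sketch, with the base value and function name invented purely for illustration:

```python
# Toy sketch of a logarithmic wrist-angle-to-time mapping. 0 degrees
# is anchored to now; each degree away from upright multiplies the
# offset by `base`. The base of 1.2 is an arbitrary assumption.

def wrist_to_offset_minutes(angle_deg, base=1.2):
    if angle_deg == 0:
        return 0.0
    sign = 1 if angle_deg > 0 else -1
    return sign * (base ** abs(angle_deg) - 1)

# Near upright, the dial resolves fractions of a minute...
print(round(wrist_to_offset_minutes(5), 2))   # 1.49 minutes ahead
# ...while full pronation (-90 degrees) reaches roughly 25 years
# into the past, and full supination (180 degrees) reaches
# hundreds of millions of years of future.
print(wrist_to_offset_minutes(-90) / (60 * 24 * 365))
```

This shows the tradeoff in the text: the precision lives entirely at the “now” end, and a single degree near the extremes skips across years at a time.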

If you design a thing with arbitrary time mapping you also have to decide what to do when the object no longer exists prior to the time request. If Strange tried to turn the apple back 50 years, what would be shown? How would you help him elegantly focus on the beginning point of the apple and at the same time understand that the apple didn’t exist 50 years ago?

So letting Strange control the extents arbitrarily is either very constrained or quite a bit more complicated than the movie shows.

Could the extents be automatically set per the focus?

Could the extents be set automatically at the beginning and end of the object in question? Those can be fuzzy concepts, but for the apple there are certainly points in time at which we say “definitely a bud and not a fruit” and “definitely inedible decayed biomass.” So those could be its extents.

The extents for the tome are fuzzier. Its beginning might be when its blank vellum pages were bound and its cover decorated. But the future doesn’t have as clean an endpoint. Pages can be torn out. The cover and binding could be removed for a while and the pages scattered, but then mostly brought together with other pages added and rebound. When does it stop being itself? What’s its endpoint? Suddenly the Eye has to have a powerful and philosophically advanced AI just to reconcile Theseus’ paradox for any object it was pointed at, to the satisfaction of the sorcerer using it and in the context in which it was being examined. Not simple and not in evidence.

Auto-extents could also get into very weird mapping. If an object were created last week, each single degree of right-hand pronation would reverse time by about 2 hours; but if it was fated to last a millennium, each single degree of right-hand supination would advance time by about 5 years. And for the overwhelming bulk of that display, the book wouldn’t change much at all, so the differences in the time mapping between the two would not be apparent to the user and could cause great confusion.

So setting extents automatically is not a simple answer either. But between the two, setting them automatically at least saves him the work of finding the interesting bits. (Presuming we can solve that tricky endpoint problem. Ideas?) Which takes us to the question of the best display, which I’ll cover in the next post.

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
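A quick sanity check of those numbers. The distance figures below are rough endpoints (the real separation varies continuously with the planets’ orbits), chosen to match the 3- and 21-minute delays cited above:

```python
# One-way light delay between Earth and Mars at the approximate
# extremes of their separation. Distance figures are rough.

C_KM_S = 299_792                 # speed of light in km/s
CLOSEST_KM = 54_600_000          # closest approach, approx.
FARTHEST_LOS_KM = 378_000_000    # widest line-of-sight separation, approx.

def one_way_delay_minutes(distance_km):
    return distance_km / C_KM_S / 60

print(round(one_way_delay_minutes(CLOSEST_KM), 1))      # 3.0 minutes
print(round(one_way_delay_minutes(FARTHEST_LOS_KM), 1)) # 21.0 minutes
```

Double those for a round trip: even in the best case, a “Hi. —Hi back” exchange takes about six minutes.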

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.

Let’s first acknowledge that we solved long-distance communication a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it work so it feels like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then see how it might work when both participants are in the know.

Fooling Tulsa

Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally-replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer, and have conversations with the personality housed there that would closely approximate how that person would respond (or would have responded) in real life.

Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.

Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, which are taking baby steps toward the same thing. (See MessinaBot, https://bottr.me/, https://sensay.it/, the forthcoming http://bot.me/) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.

Training the bot

So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media to build a herBot with whom he could chat, to train it for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.

A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)

Launching the bot

GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.

Buying time

If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime:

Ask her to answer the same question first, probing into details to understand rationale and buy more time

Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question

Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share

Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal

Example

TULSA

OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?

GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…

(you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”

(related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”

(new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as. And you have to say why.”

(story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”
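The selection logic behind those four responses could be sketched like so. Every name, threshold, and tactic label here is invented for illustration; it only captures the shape of the decision, not any real chatbot API:

```python
import random

# Toy sketch of GardnerBot's stalling logic: answer directly when
# confidence in a topic is high, otherwise pick a padding tactic
# and flag the topic for training while the transcript makes the
# Earth-Mars round trip.

STALL_TACTICS = [
    "you_first",   # ask her to answer the same question first
    "subtopic",    # dive into a related, well-known subtopic
    "new_topic",   # deflect to a high-confidence topic
    "story",       # tell a story that runs about one round trip
]

def respond(topic, confidence, threshold=0.7):
    if confidence >= threshold:
        return ("answer", topic)
    # Low confidence: stall, and queue the topic for Gardner.
    return (random.choice(STALL_TACTICS), topic)

print(respond("favorite movie", 0.9))         # answered directly
print(respond("where would you live?", 0.2))  # some stall tactic
```

The returned topic doubles as the training flag: anything that comes back paired with a stall tactic is exactly what Gardner sees highlighted in his delayed transcript.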

Lagged-realtime training

Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing the meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, the interface would provide a chat window for him to tell GardnerBot what it should do or say.

To the stalling GARDNERBOT…

GARDNER

For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.

As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…

At a natural break in the conversation…

GARDNERBOT

OK. I think I finally have an answer to your earlier question. How about…India?

TULSA

India?

GARDNERBOT

Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?

Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano de Bergerac software—making him sound more eloquent, intelligent, or charming than he really is to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.

That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.

Gotta go

If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.

GARDNERBOT

Oh crap. Will you be online later? I’ve got chores I have to do.

Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.

In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interest, but a bot-managed delivery of these things.

So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?

From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.

How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.

An honest version: bot envoy

So that solves the logic from the movie’s perspective but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real world astronauts use it?

Would it be too fake?

I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So would the themBot be.

GARDNER

Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.

TULSABOT

I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.

GARDNER

I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?

Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.

TULSA

GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.

Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.
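The mechanic described here—a local bot that fills the one-way light delay until the real reply arrives—can be sketched in a few lines. This is purely illustrative; the class and function names are my own invention, not anything from the film:

```python
class ThemBot:
    """Stand-in persona that covers the gap while a real reply is in transit."""

    def __init__(self, canned_replies):
        # Filler lines written in the loved one's voice, used in order.
        self.canned = list(canned_replies)

    def filler(self):
        # Serve the next canned line, or a generic fallback when exhausted.
        return self.canned.pop(0) if self.canned else "…thinking of you."


def chat_turn(bot, real_reply_eta, now):
    """If the real reply hasn't crossed the distance yet, the bot speaks.
    Returns the bot's filler line, or None once the real message has arrived."""
    if now < real_reply_eta:
        return bot.filler()
    return None
```

The key design point is that the bot never replaces the real conversation; it only bridges the latency, and the transcript of both halves becomes the "love letter" described above.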

Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.

There is one last interface we see in use in The Faithful Wookiee. It’s one of those small interfaces, barely seen, but that invites lots of consideration. In the story, Boba and Chewie have returned to the Falcon and administered to Luke and Han the cure for the talisman virus. Relieved, Luke (who assigns loyalty like a puppy at a preschool) says,

“Boba, you’re a hero and a faithful friend. [He isn’t. —Editor] You must come back with us. [He won’t.] What’s the matter with R2?”

C3PO says, “I’m afraid, sir, it’s because you said Boba is a faithful friend and faithful ally. [He didn’t.] That simply does not feed properly into R2’s information banks.”

Luke gapes towards Boba, who has his blaster drawn and is backing up into an alcove with an escape hatch. Boba glances at a box on the wall, slides some control sideways, and a hatch opens in the ceiling. He says, deadpan, “We’ll meet again…friend,” before touching some control on his belt that sends him flying into the clear green sky, leaving behind a trail of smoke.

A failure of door

Let’s all keep in mind that the Falcon isn’t a boat or a car. It is a spaceship. On the other side of the hatch could be breathable air at the same pressure as what’s inside the ship, or it could also be…

The bone-cracking 2.7-kelvin emptiness of space

The physics-defying vortex of hyperspace

Some poisonous atmosphere like Venus’, complete with sulfuric acid clouds

A hungry flock of neebrays.

There should be no easy way to open any of its external doors.

Think of an airplane hatch. On the other side of that thing is an atmosphere known to support human life, and it sure as hell doesn’t open like a gen-1 iPhone. For safety, it should take some doing.

If we’re being generous, maybe there’s some mode by which each door can be marked as “safe” and thereby made this easy to open. But that raises issues of security and authorizations and workflow that probably aren’t worth going into without a full redesign and inserting some new technological concepts into the diegesis.

Let’s also not forget that to secure that most precious of human biological needs, i.e., air, there should be an airlock, where the outer door and inner door can’t be opened at the same time without extensive override. But that needn’t hinder the story. It could have made for an awesome moment.

LUKE gapes at Boba. Cut to HAN.

HAN

You won’t get any information out of us, alive or dead. Even the droids are programmed to self-destruct. But there’s a way out for you.

HAN lowers his hand to a panel, and presses a few buttons. An escape hatch opens behind Boba Fett.

BOBA FETT

We’ll meet again…friend.

That quick change might have helped explain why Boba didn’t just kill everyone and steal the Falcon and the droids (along with their information banks) then and there.
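The airlock rule described above—inner and outer doors never open together without an explicit override—is a classic safety interlock. A minimal sketch of that invariant, with hypothetical names of my own (nothing here comes from any real ship or spacecraft software):

```python
class Airlock:
    """Interlock: the inner and outer doors may never both be open
    unless an explicit override has been engaged."""

    def __init__(self):
        self.inner_open = False
        self.outer_open = False
        self.override = False  # the "extensive override" escape hatch

    def open_door(self, which):
        # Check the state of the *other* door before opening this one.
        other_open = self.outer_open if which == "inner" else self.inner_open
        if other_open and not self.override:
            raise PermissionError("interlock: the other door is open")
        if which == "inner":
            self.inner_open = True
        else:
            self.outer_open = True
```

In the rewritten scene, Han’s button presses would amount to engaging the override on Boba’s behalf—deliberate, logged, and visibly his choice, rather than a hatch that opens with one casual slide of a control.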

Security is often sacrificed to keep narrative flowing, so I get why makers are tempted to bypass these issues. But it’s also worth mentioning two other failures that this 58-second scene illustrates.

A failure to droid

Why the hell did C3PO and R2D2 wait to tell Luke and Han of this betrayal until Luke happened to say something that didn’t fit into the “information banks”? C3PO could have made up some bullshit excuse to pull Luke aside and whisper the news. But no, he waits, maybe letting Luke and Han spill vital information about the Rebellion, and only when something doesn’t compute does he blurt out that the only guy in the room with a blaster happens to be in bed with Space Voldemort.

Also note that despite all this effort (and buffoonery), they never, ever used this insanely effective bioweapon against the Rebels again.

I know, you’re probably thinking this is just some kid’s cartoon in the Star Wars diegesis, but that only raises more problems, which I’ll address in the final post on this crazy movie within a crazy movie.