What Movie UIs Say About the Future

A 3D FX and UI designer examines UI concepts in futuristic movies.

Article No. 502 | March 8, 2010 | by Tony Walt

While I was doing research for a virtual user interface I was creating in 3D, I spent some time looking at the virtual UIs that have come out of Hollywood. A lot of money and thought goes into their development, so I figured they would make good reference material for my project. While you can't take the virtual UIs in movies at face value, they do contain some nuggets of information about what the future might hold.

Complexity

I've noticed that UIs in feature films are continually getting more elaborate and complex. Meanwhile, real-world interfaces are getting simpler and more intuitive. It seems an odd contradiction that the futuristic UIs we dream up for movies follow one path, while real-world ones head down another.

But the reason for this is simple. Complexity conveys the impression that a system is robust and advanced, and a character's mastery of a complex system is more impressive than it would be if the system were simple and intuitive. No matter how complex the system gets, the hero can always operate it expertly, leaving the audience dazzled by the UI and the character's skill. In the real world, though, users are more often like Mr. Magoo than like Tony Stark or an MI5 agent. So while high-aptitude, heavily trained users might be the fantasy for UX professionals, it's not the world we live in. The trend toward complexity in movie UIs doesn't give us much of a preview of the world to come.

Gestural UIs

The most notable use of gestural UIs that I can think of was in Minority Report. It's impressive to see Tom Cruise moving his arms around to call up and manipulate video. But the large, intricate motions he makes wouldn't work in actual practice. Our arms get tired, and it is hard to make such intricate motions precisely without any form of tactile feedback. Another issue with this method is that all of the commands Tom Cruise employs are completely memorized. Systems that don't show their commands rely entirely on memorization and training; this is faster for an expert, but takes a long time to master, and recalling commands, especially under stress, can be very challenging.

Gestural UIs will be a part of our future. They are already present in several devices, such as the iPhone and some video game systems, and they're in development for televisions. In order to be successful, these UIs will have to be supplemented with menus or be extremely intuitive. If they are to be a major part of the overall interface, they will need to be driven by lazy or small motions that won't tire out a user. The exception here would be something like the Wii, where the gestures are more engaging and getting tired is part of the game.

The Xbox Project Natal is a new gaming system that will be gesture driven. Unlike the Wii (which uses a remote with an accelerometer to capture movements), Natal will use a camera to detect motion. This may not go over well with users, as it doesn't provide any sort of physical feedback. Holding a prop steering wheel, as you would with the Wii, feels more engaging than miming an imaginary one, as you would with Natal.

Eye Tracking UIs

This concept can be seen in the movie Iron Man. Tony Stark accesses various widgets just by looking at them. The concept is universal and cross-cultural: just look at something to activate it. My concern is how the system knows the difference between someone glancing over an item and intentionally focusing on it. The idea of "hover intent" isn't directly applicable, since the human eye doesn't sweep across a UI and come to rest on a particular spot the way a mouse does. Our eyes dart from spot to spot, with temporary pauses as they pass over a screen. This could be worked out by triggering a timer based on the eye movements: only a gaze that rests in one place long enough would count as a selection. Another issue is that temporary distractions that cause us to look away from the UI could potentially close the applications we were working in. Interactive billboards are a very likely candidate for this technology; in fact, a few of them already exist.
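The dwell-timer idea sketched above can be made concrete in a few lines. This is a hypothetical illustration only (the class name and thresholds are invented, not any real eye tracker's API): a stream of gaze samples produces an activation only once the gaze has rested within a small radius for long enough, so quick glances and darting saccades are ignored.

```python
import math

class DwellDetector:
    """Fire an activation only after the gaze rests on one spot long
    enough to signal intent (a hypothetical dwell-time scheme)."""

    def __init__(self, radius_px=40.0, dwell_ms=600.0):
        self.radius_px = radius_px  # how far the eye may wander within one fixation
        self.dwell_ms = dwell_ms    # how long the gaze must rest to count as a "click"
        self._anchor = None         # (x, y) of the current candidate fixation
        self._start_ms = None       # timestamp when that fixation began

    def feed(self, x, y, t_ms):
        """Feed one gaze sample; return the fixation point when the
        dwell threshold is crossed, else None."""
        if self._anchor is None:
            self._anchor, self._start_ms = (x, y), t_ms
            return None
        ax, ay = self._anchor
        if math.hypot(x - ax, y - ay) > self.radius_px:
            # The eye darted away: restart the timer at the new spot.
            self._anchor, self._start_ms = (x, y), t_ms
            return None
        if t_ms - self._start_ms >= self.dwell_ms:
            hit = self._anchor
            self._anchor = self._start_ms = None  # require a fresh dwell next time
            return hit
        return None

detector = DwellDetector()
events = []
# Simulated gaze: a quick glance at (300, 300), then a real dwell near (100, 100).
for x, y, t in [(300, 300, 0), (100, 100, 100), (104, 98, 400), (99, 103, 800)]:
    hit = detector.feed(x, y, t)
    if hit:
        events.append(hit)
print(events)  # only the sustained fixation registers: [(100, 100)]
```

The same structure would also address the distraction problem: looking away merely resets the timer rather than issuing a command.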

Voice Activated UIs

Probably the most famous incarnation of this is Star Trek. The ship's crew can issue almost any command verbally, and the ship complies. This technology is already present in most cell phones, some cars, and some computer programs. If you own any of these systems, you may already know some of the current technological pitfalls. The systems struggle when you speak fast or issue long commands. They also rely heavily on you speaking with the proper inflection (which is hard to do when you are panicked, distracted, or sick), and the user must have commands memorized. I often use only a couple of commands in my car, because they are the only ones I can recall while flying down an interstate full of cars. The only alternative is asking for a list of commands, which is lengthy and distracting.

On the other hand, these systems are extremely useful when you can't use your hands or have a handicap that prevents you from interacting with the system normally. The key in the future will be an easy method of retrieving commands and keeping voice commands short and simple.
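One way to keep a voice vocabulary short and forgiving, as suggested above, is to fuzzily match whatever the recognizer heard against a small list of canonical phrases, so imperfect speech or transcription still lands on the intended command. The command names below are invented for illustration; the matching uses Python's standard difflib.

```python
import difflib

# A hypothetical in-car command set: each canonical command is short.
COMMANDS = ["call home", "play music", "navigate home"]

def match_command(heard, cutoff=0.6):
    """Map a (possibly garbled) transcription to the closest known
    command, or None if nothing is close enough to trust."""
    hits = difflib.get_close_matches(heard.lower(), COMMANDS, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(match_command("call hom"))                # -> call home
print(match_command("play some music"))         # -> play music
print(match_command("open the pod bay doors"))  # -> None (nothing close enough)
```

The cutoff is the key trade-off: set too low, it misfires on unrelated speech; set too high, it demands the exact memorized phrasing the article warns about.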

Stereoscopy / Holographic UIs

Most recently, you can see these in the movies Avatar and District 9. In Avatar, human brains are projected in 3D, allowing the doctor to look around at all parts for any abnormalities. In District 9, the alien ships are piloted with holographic UIs, something that is especially useful in navigation. These are a great idea, as they help separate content from UI and separate what is important at the moment from what isn't. This can be faked in 2.5D systems, as is done now, but full dimensionality enhances the effect and allows the user to create better groupings and spatial mappings. One trick to this system will be finding the right uses for the technology. Novelty will not be a good reason to make a UI 3D; dealing with geography, multiple dimensions, and multiple axes will be.

While the keyboard and mouse work just fine as 2D inputs, these UIs will benefit greatly from other forms of 3D input such as multiple cameras comparing imagery to locate users in 3D space, manipulating a device in 3D that contains an accelerometer, or perhaps other current methods of 3D motion capture used in films and games today. In any of these methodologies, feedback will be important. Users will need to feel some sort of resistance to know they have pushed a holographic button. A simple visual indication won’t be satisfying enough. Perhaps a glove that provides feedback will be the solution.

Transparent UIs

Many of the movies that have come out lately feature transparent UIs. They are very visually stimulating, and they work for something like a HUD in a jet fighter, where you need the UI laid on top of the elements behind it. However, transparency doesn't work for a typical screen. It creates too many distractions when you combine the elements on the screen with the complex visual scene and motion occurring behind them.

Large Fonts

Jakob Nielsen included the use of large fonts in his list of top-10 movie UI bloopers. I don't agree with him on this one. His reasoning, that fonts are unnecessarily large so that people in the audience can read them, is sound as far as it goes. However, our culture of computer users is going from a "leaning forward" posture to a "laid-back" one. As we buy larger monitors and find more UIs on our television screens, paired with wireless input devices, we'll need those larger fonts to read our screens from farther away. Instead of sitting at a desk to interact with a computer, we are doing it more and more from our couches.

Adaptive UIs

The only really good adaptive UI I can think of is the Omega widget in Tony Stark's final Iron Man suit. In the movie, the Omega widget is a single widget that contains all of the information from the previous widgets; however, it shows only the information and options that are currently pertinent.

The easiest UIs are ones where each command has a unique button, but the number of buttons shown is limited to only the current options. This methodology allows a tremendous amount of information and commands to be available without cluttering the user's screen. Adobe is currently using this approach in Catalyst, and I've seen it in sneak previews of Adobe Rome.
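The "show only the current options" idea can be sketched as commands paired with availability predicates over the application state: the UI renders just the commands whose predicate holds right now. The command names and state keys below are invented for illustration; this is not Adobe's actual mechanism.

```python
# Each command declares when it is pertinent via a predicate on the app state.
COMMANDS = [
    ("Save",   lambda s: s["dirty"]),              # unsaved changes exist
    ("Undo",   lambda s: s["history"] > 0),        # something to undo
    ("Paste",  lambda s: s["clipboard"] is not None),
    ("Export", lambda s: not s["dirty"]),          # only export a saved document
]

def visible_commands(state):
    """Return only the command labels that apply in this state."""
    return [label for label, applies in COMMANDS if applies(state)]

state = {"dirty": True, "history": 3, "clipboard": None}
print(visible_commands(state))  # -> ['Save', 'Undo']
```

The full command set can grow arbitrarily large while the user only ever sees the handful of buttons that matter at the moment, which is exactly the Omega-widget effect described above.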

About the Author(s)

Tony Walt is the design and technical director of Rich Media at EffectiveUI, where he oversees interface development and rich media integration. Tony believes in creating unique, immersive, user-friendly experiences and thrives on pushing the bounds of user interaction models while creating unexpected experiences. Tony has worked on a variety of projects, including for the following clients: Wells Fargo, Qwest, Microsoft, The Discovery Channel, Audi, The Learning Channel, Adobe, Oakley and T-Mobile.

Comments

I don't think the concept of complex UIs/UX ("UIX") portrayed in movies is an unrealistic possibility for average users of The Future. A movie is someone's take on the evolution of UIX and, yes, it makes good movie-money sense to show an expert user accomplishing complex tasks with tools from their future. But can't we assume that *all* users of The Future will have an evolved baseline comprehension of a UIX experience? What seems complex to us now may not seem that way to users in The Future.

How movies predict The Future UIX will always seem like a positive feedback loop when zoomed in. We can't bypass the random potential that time offers. If we zoom out from now and zoom in 1,000 years into The Future and look at a UIX, it'd be a snapshot of all the incremental nudges brought about by centuries' worth of design inspirations, technological advances, psychological considerations, good ol' inventiveness, and all the other factors that shape our environment.

If we were to snapshot a UIX from The Present and take it to a user in The Past, would the learning required to comprehend that system be greater than it would be for a user from The Present or The Future? It seems the further back in time you take a UIX, the greater the amount of learning required of the end user. What happens if we take the converse example? If we took a UIX (or its equivalent) from The Past and presented it to users from The Present/Future, would less learning be required? Is that because The Future user has evolved from the UIX of The Past? Do we eventually reach a threshold where two UIX snapshots from different points in time are so far removed that users cannot draw on any context in which to understand them, thereby negating (if not entirely erasing) all the learning and evolution that time has provided?

I looked a bit more into it, and indeed 180-degree phase-shifted photons do cancel; the rainbow effect on bubbles is an everyday example of this phenomenon.

Anyway, the elephant in the room is that emitting black light has nothing to do with rendering a 3-dimensional image.

A more reasonable stab at an answer would be a material, like the phosphor in CRTs, that only lights up when hit by waves from two directions.

Thus, a cube of glass and phosphor, or a chamber or other contained area filled with phosphor 'smoke', could work: two emitters could paint the image with beams that cross over at the computed points making up the image.

Although perhaps we need to go back to 1991 and look at the arcade game Time Traveller for a more immediate solution. Imagine combining the parabolic mirror floating effect with modern 3D screens.

You can't cancel signals by emitting a "frequency-shifted" version of it, that's as if you could cancel red by emitting blue. Net frequency doesn't exist like that.

What you are looking for is a 180-degree phase shift, but good luck with that. Light waves are in the nanometre range; try making a phase-adjustable emitter that precise. Light is emitted in all directions from a screen, otherwise it has no viewing angle; how are you going to measure and re-emit photons for all of those directions? And what about the latency of the screening device? You're up against the speed of light. It's generally not possible.
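As an aside, the phase-versus-frequency point in this comment matches the textbook superposition identity for two equal-amplitude waves:

```latex
A\sin(\omega t) + A\sin(\omega t + \varphi)
  = 2A\cos\left(\frac{\varphi}{2}\right)\sin\left(\omega t + \frac{\varphi}{2}\right)
```

The sum vanishes for every instant t only when cos(φ/2) = 0, i.e., at a 180-degree shift (φ = π). Two waves of different frequencies, by contrast, merely produce beats; they never cancel everywhere, which is why "frequency-shifted cancellation" cannot work.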

You should review how light interference in the double-slit experiment works. It was puzzling back then for a reason; it seems the answer still eludes you for now.

Until somebody "invents" black light, transparent or 3D UIs will never progress beyond the rather excellent Star Wars interpretation. It is a personal bugbear of mine to shout at TV programmes and films when they decide to add their own black to projected UIs, when in reality black can only ever come from what is directly behind the projection; therefore, bright room = no black and a very confusing image. This is why a reliable and clear projected UI can never go beyond a single "monochrome" image.

The 'Johnny Mnemonic' interface with VR goggles and gloves is a favorite of mine. I feel that the obsession with screens is, or should be, a thing of the past. Augmented reality overlaid on vision is, IMHO, the ultimate. I have seen a blog discussion about a 'display in a contact lens'; I like the idea, but am unsure if it invades personal space just a little. Somewhere between the VR geek helmet of Johnny Mnemonic and the contact lens display is the sweet spot I have been waiting for.

Personally, I love looking at UIs in movies. Yes, they're a product of the now, but I think movies, in general, are always from the perspective of when they're created. Film UIs are just the same. Some are important and really do change how we see technology (think: 2001); others are fun candy, not to be taken too seriously.

I posted a follow-up to this post on my blog, where I talk a bit about the issue of complexity that Tony brings up. I think his point is totally right: film UIs are out of sync with current trends toward simplicity. But perhaps, in some instances (like within a secret intelligence agency), complexity may be appropriate.

I just want to pick up on what Robert said there regarding the absurdity of movie UIs.

It's my belief that what most people perceive as futuristic is something we've already seen, i.e., prior art, either prochronistically or in a fictional future setting.

Look at it this way. If you took an iPhone back to 600 BC, would it look futuristic to people of that time? Would their ideas of what the future looks like coincide with the design of that object? Essentially, would it look like it came from the future as they imagine it? Unlikely.

We are no different in 2010. We determine whether something possesses futuristic qualities based on identifying an evolution of the state-of-the-art and by drawing from visions depicted in movies and fictional media.

Therefore the whole process is a positive feedback loop, where designers mimic stuff seen in movies in order to produce something that appears futuristic, and movie-makers advance stuff seen in real life.

Designers are thus led by the response of the consumer to a design, and so it's arguable that we are condemned to create the absurd things seen in movies.

There is also some scientific research on this topic: "A Survey of Human-Computer Interaction Design in Science Fiction Movies." It was done by the AI Group of Saarland University, and slides from a presentation talk are available online.

Most UIs in film are absurd because, above all else, the job of film visuals is to be 'cinematic'. That is, to be interesting to look at, help tell the story, and reveal character or organisation traits.

The UI in James Bond is unnecessarily complex because it needs to convey the fact that MI6 has clever technology at their disposal that you and I couldn't possibly understand.

Minority Report's visuals were distilled from a think-tank that Spielberg put in place to discuss what the future would be like. The trouble is, while gestural UI was very much on the cards, it wasn't very 'cinematic' (read: 'cool'), so we get Mr Cruise waving his hands in the air like a teenage girl at a Jonas Bros gig.

I love watching these clever attempts at UI in movies (I've even designed a few), but at the end of the day, they're pure theatre and not meant to be taken seriously. The danger comes when we think they are and try to replicate them in our designs, because the reality never lives up to the promise.