There’s an imposing wall dividing real-world creation and digital design. To transfer a paper design to a computer, you need training and experience in technically demanding computer-aided design (CAD) programs.

Instead, imagine if we could mold digital designs in three dimensions as easily as we mold clay. Intuitive, powerful, and immersive interfaces would open the field to more people and inject more serendipity and improvisation into digital design.

We may be entering a new era of computer interfaces, one where the standard two-dimensional screen, keyboard, and mouse are enhanced by more instinctive 3D modes of interaction that more closely mimic real-world design methods.

In sci-fi, 3D interfaces tend to be holographic: a surface projects 3D schematics into the air, where people can manipulate the images and spin them around their axes. Though some researchers are working on such holographic interfaces, they remain elusive.

Increasingly, instead of holograms, 3D interfaces are powered by a suite of devices working in tandem to produce the illusion of depth in augmented reality.

Take the recently developed Gravity Sketch tablet. The device combines a drawing pad with an embedded Arduino, an infrared stylus, and a pair of augmented reality glasses to let users draw and design in 3D. Through the AR glasses, designs appear to hover in the air over the tablet, where they can be rotated, edited, and augmented with the stylus.

Gravity Sketch was invented by four students at London’s Royal College of Art—Daniela Paredes Fuentes, Pierre Paslier, Oluwaseyi Sosanya, and Guillaume Couche—who say they began by questioning the creative process, the disconnect between our unlimited imagination and our limited digital tools.

They surveyed a group of creators and found that, for many, the creative process begins with a simple pad of paper and pencil. Not until they try to translate that initial vision into a final, producible product do they shift to the computer.

The Gravity Sketch team hopes their invention can bring the two creative modes closer together and, in doing so, lessen the obstacles between pure vision and practical result. As they put it: “The tools that are commonly used for drawing, designing and making in 3D have a heavy influence on the output of the initial idea the creator set out to bring to life.”

How does Gravity Sketch work?

Users make sketches on a gridded perspex pad using an infrared stylus. The pad’s Arduino chip and Unity software track the stylus and convert sketches to 3D, sending the resulting images to a pair of Laster augmented reality glasses. The glasses display the sketch as a 3D object to be manipulated by one or more users.
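The pipeline described above can be sketched in a few lines of Python. This is a minimal illustration under assumed names and numbers, not Gravity Sketch’s actual code: 2D stylus samples on the pad, plus a height reading from the tracker, become points in a 3D stroke that a renderer could hand to the glasses.

```python
# Illustrative sketch of a stylus-to-3D-stroke pipeline (assumed names and
# data, not Gravity Sketch's actual implementation).

from dataclasses import dataclass, field

@dataclass
class StylusSample:
    x: float  # position on the pad, in millimetres
    y: float
    z: float  # height above the pad, e.g. from the infrared tracker

@dataclass
class Stroke:
    points: list = field(default_factory=list)

    def add(self, sample: StylusSample) -> None:
        # Convert pad coordinates to the world space the glasses render in.
        # Here the mapping is a simple scale; a real system would calibrate
        # the pad, the tracker, and the glasses against each other.
        scale = 0.001  # millimetres to metres
        self.points.append((sample.x * scale,
                            sample.y * scale,
                            sample.z * scale))

# Simulated input: the stylus rises as it moves, producing a 3D curve.
stroke = Stroke()
for i in range(5):
    stroke.add(StylusSample(x=10.0 * i, y=5.0 * i, z=2.0 * i))

print(stroke.points[-1])  # the most recent 3D point of the stroke
```

In a real device the sample stream would arrive over serial from the Arduino and the stroke would be rendered each frame; the structure, though, is the same: sample, transform, accumulate, display.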

The project is young, first begun in October 2013. While the team’s demo video clearly illustrates the direction and ease of use they’re gunning for, to be as useful as paper and pencil, Gravity Sketch will need to be as intuitive as paper and pencil. And if not quite as cheap, it’ll need to be affordable and, of course, portable.

Though that may be easier said than done, the team’s focus on augmented reality is in the right direction. In the near future, 3D interfaces will likely rely more on augmented reality than the holographic displays of Iron Man, Star Wars, and Minority Report (among many others).

A number of companies are likewise working the problem from other angles.

CastAR’s augmented reality glasses, for example, project a stereoscopic, 3D image onto a table covered in the reflective material used in traffic signs. The material reflects light directly back at users, creating a 3D image that naturally adapts as they move around it.

Like Gravity Sketch, zSpace uses a stylus and glasses, paired here with a monitor. As in 3D movies and television, the glasses are polarized so that each eye sees video frames rendered from a slightly different perspective. zSpace adds motion sensors for head tracking, however, and shifts the images accordingly, so users can peer around to either side of an on-screen object.
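The stereo-plus-head-tracking idea can be made concrete with a short sketch. The names and numbers below are illustrative assumptions, not zSpace’s API: each eye gets a virtual camera offset by half the interpupillary distance from the tracked head position, so the two polarized frames differ slightly and shift as the head moves.

```python
# Illustrative stereo rendering with head tracking (assumed values, not
# zSpace's actual API): each eye's camera sits half the interpupillary
# distance (IPD) to either side of the tracked head position, along the
# head's "right" direction vector.

IPD = 0.063  # a commonly cited average IPD in metres (assumed here)

def eye_positions(head_pos, right_vec, ipd=IPD):
    """Return (left_eye, right_eye) camera positions for one frame."""
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_vec))
    right = tuple(h + half * r for h, r in zip(head_pos, right_vec))
    return left, right

# As the head tracker reports a new position each frame, the eye cameras
# follow it, so the rendered object appears fixed in space.
for head_x in (0.0, 0.05, 0.10):  # head drifting to the right
    left, right = eye_positions((head_x, 0.0, 0.6), (1.0, 0.0, 0.0))
    # each polarized frame would be rendered from its eye's position
```

Without the head tracking, the stereo pair alone would give depth but the scene would stay glued to the screen; updating both cameras from the tracked head position is what lets users look around the object.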

Meanwhile, Meta SpaceGlasses do away with tablet and stylus in favor of Kinect-like depth sensing. Folks use their hands to interact with 3D images in front of their noses.

These interfaces, and those that follow them, will likely find a variety of applications.

Gravity Sketch, for example, may let surgeons upload an image of a bone or ligament and sketch required surgical fixtures directly onto it. Other applications include hooking up to a 3D printer for even faster parts prototyping, augmented reality gaming, or gathering and using information hands-free in industrial settings.

Whatever the ultimate application, augmented reality will hopefully make the conversion of thoughts to bits to atoms more rapid, seamless, and simple.

Jason is managing editor of Singularity Hub. He cut his teeth doing research and writing about finance and economics before moving on to science, technology, and the future. He is curious about pretty much everything, and sad he'll only ever know a tiny fraction of it all.