The end of typing as we know it may be a gesture, a blink, or a thought away

When Christopher Latham Sholes invented the modern QWERTY keyboard in the 1870s, could he have known that the essentials of his design would persist for nearly 150 years, well into the Digital Age? It's remarkable. In a high-tech world where change is rapid and constant, the computer keyboard has been the undisputed monarch of input devices from the very beginning. We've seen a few pretenders to the throne, but for everyday computing, the keyboard is still king (with the mouse as court jester, I suppose).

Only very recently -- in the last few years -- has the reign of the keyboard been seriously threatened. Continuing innovations in areas like speech recognition and gesture control are merging with even newer technologies to open up possibilities. Here we take a look at some existing and emerging concepts that may point the way to a postkeyboard world.

It's an apparently mandatory scene in contemporary science fiction films, from "Minority Report" to "Ender's Game" to "The Avengers": The steely-eyed heroes stand in front of a translucent display of 3D holograms and call up schematics by waving their hands -- swiping, pulling, and pinching -- to access the data they need.

Part of that technology is already here in the form of motion control and gesture recognition systems, in use with gaming peripherals like Microsoft's Kinect and in high-end research environments. For everyday office use, the Leap Motion controller is designed to work in conjunction with a standard keyboard, tracking finger movements to within 1/100 of a millimeter. Stand-alone 3D holograms are still a ways off, though.

Speech recognition is, of course, another significant existing technology that's been making forays into King Keyboard's realm for several decades. Why type in words when you can simply speak them aloud? The answer is simple: For a looong time, speech recognition was a notoriously dodgy software proposition -- about as accurate as a six-year-old Little League pitcher.

That's all changed in recent years, thanks to astounding advances in artificial intelligence and "deep learning" algorithms. These natural-language user interface systems now power everything from hands-free computing for the disabled to in-car voice commands to mobile virtual assistants like Siri, Cortana, and Google Now. Recently, Skype took it to the next level with real-time translation -- between English and Spanish -- powered by machine learning and natural-language interface technology.

Then there are the postkeyboard ideas that drift into the land of Zen. Late last year, the AirType project made the rounds of gadget-flavored blogs, depicting what is essentially a keyboard-less keyboard.

Still in early prototype phase, the AirType (now called Noki, apparently) uses a set of sensors that wrap around your hands and track your finger movements as you type -- on a keyboard that isn't there at all. You can type on any surface, or none, and the sensors are designed to learn your particular typing habits. Dynamic text correction and prediction help, too, but with no visual keyboard, you'll have to be a proficient touch (no-touch?) typist to make this work -- check out the demo video.

That brings us to the similar but significantly older idea of projection keyboards, which do just what the name suggests: project a virtual keyboard onto a surface and track your finger movements as you type. Depending on the configuration, projection keyboards typically use a combination of lasers, infrared beams, and sensors to replicate a traditional QWERTY layout on a flat surface.
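The software side of that trick is simpler than the optics: once the sensors report where a fingertip landed on the projection plane, mapping that coordinate to a character is essentially a grid lookup. Here's a minimal sketch of the idea -- the layout, row offsets, and key dimensions are illustrative assumptions, not any vendor's actual spec:

```python
# Illustrative sketch: map a fingertip (x, y) on the projected plane to a key.
# Key size and row offsets are made-up values for demonstration only.

KEY_SIZE_MM = 18.0  # assumed width/height of each projected key

ROWS = [
    ("qwertyuiop", 0.0),   # (keys in the row, horizontal offset of the row in mm)
    ("asdfghjkl",  9.0),   # middle row is staggered, as on a physical keyboard
    ("zxcvbnm",    27.0),
]

def key_at(x_mm, y_mm):
    """Return the character under a fingertip, or None if it's off the layout."""
    row_idx = int(y_mm // KEY_SIZE_MM)
    if not 0 <= row_idx < len(ROWS):
        return None
    keys, offset = ROWS[row_idx]
    col_idx = int((x_mm - offset) // KEY_SIZE_MM)
    if not 0 <= col_idx < len(keys):
        return None
    return keys[col_idx]
```

The hard part in a real product is upstream of this lookup: deciding, from camera or infrared data, that a "tap" actually happened rather than a finger merely hovering.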

The idea is to optimize portability by providing a full-size keyboard for mobile devices, while at the same time eliminating the pesky tradition of, you know, physical matter. (You can get projection pianos too, by the way.) It's all well and good, and the technology is improving every year. But surely we can all agree that the concept reached its zenith with the frankly awesome R2-D2 Virtual Keyboard.

Still another variation on the theme, The Ring is a wearable input device that brings finger-tracking down to the single-digit level. Riding high after a successful Kickstarter campaign, the gadget made a splash at this year's Consumer Electronics Show in Las Vegas. When paired via Bluetooth with your smartphone or smart watch -- or even your networked appliances -- The Ring allows you to create shortcut commands for various tasks, which you execute with a wiggle of your finger.

You can draw letters in the air to send texts or spell out "TV" to turn on the living room television. The Ring uses gesture recognition paired with miniaturized light and haptic feedback to let you communicate with your various devices; LED flashes and vibrations let you know when you've successfully sent a text, made a payment, or what-have-you. The Japanese company behind The Ring hopes to have it on shelves sometime this summer, with a price tag of around $130.
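Under the hood, that workflow amounts to a registry that binds recognized gestures to actions and reports success or failure back through the ring's LED and vibration motor. A toy sketch of the pattern -- the class, method names, and feedback codes here are hypothetical, not The Ring's actual API:

```python
# Hypothetical sketch of a gesture-to-shortcut registry, loosely modeled on
# the workflow described above. Names and return codes are invented.

class GestureShortcuts:
    def __init__(self):
        self._commands = {}

    def register(self, gesture_name, action):
        """Bind a recognized gesture (e.g. drawing "TV" in the air) to an action."""
        self._commands[gesture_name] = action

    def on_gesture(self, gesture_name):
        """Run the bound action; the return value stands in for LED/vibration feedback."""
        action = self._commands.get(gesture_name)
        if action is None:
            return "unknown-gesture"   # e.g. an error vibration pattern
        action()
        return "ok"                    # e.g. a confirmation LED flash

shortcuts = GestureShortcuts()
shortcuts.register("TV", lambda: print("turning on the living room TV"))
```

The interesting engineering problem, of course, is the part glossed over here: reliably classifying a wobbly air-drawn "TV" as the gesture named "TV" in the first place.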

If a ring isn't your thing, there may be other future jewelry options. The Cicret Bracelet, from a small company out of France, combines elements of projection keyboards, wearable computers, and mobile tech to imagine a new kind of input device. By way of a miniaturized projector and proximity sensors, the Cicret lets you use your own skin as a touchscreen.

The device is still in the early prototype phase, but the design team says you will be able to tap and swipe as you would on any other touchscreen. The wristband is designed to run a stand-alone version of Android OS, or it can be linked via Bluetooth with the phone in your pocket, with Wi-Fi and a micro USB port also built in. The downside: Scratch an itch, and you might mistakenly group text your address book. But that's progress for you.

The prevailing view in recent years is that our postkeyboard future is very likely to be informed by these kinds of wearable "always on" input devices: rings, bracelets, smart watches, or everyone's favorite Next Big Thing, smart glasses.

Smart glasses may be the most promising wearable input device in that their proximity to the eye -- our primary sensory apparatus -- opens up a lot of options. Most smart glasses employ OHMD (optical head-mounted display) technology, which essentially bounces a kind of picture-in-picture display into your field of vision. Combined with other alternate input device solutions, like Google Glass' voice command system, OHMD can provide truly mobile hands-free viewing.

Meanwhile, companies like Tobii in Sweden are deploying eye-tracking systems that are already in use as computer input device solutions. By tracking your eye movements and determining where your gaze lingers, such systems can be used to execute commands with traditional workstation setups, through your smart glasses, in your car, or even in old-school video games.
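The usual trigger in gaze-based interfaces is "dwell time": a command fires once your gaze has rested on one target long enough to count as deliberate rather than a passing glance. A minimal sketch of that logic, with an assumed threshold value (real systems tune this carefully and add fancier filtering):

```python
# Illustrative dwell-time selection, the common trigger scheme in
# gaze-based interfaces. The threshold below is an assumed value.

DWELL_THRESHOLD_S = 0.8  # assumed: how long a gaze must linger to "click"

def detect_dwell(samples, threshold_s=DWELL_THRESHOLD_S):
    """samples: time-ordered list of (timestamp_s, target_id) gaze fixations,
    with target_id None when the gaze is on no target.
    Returns the first target the gaze rests on for >= threshold_s, else None."""
    current = None     # target the gaze is currently on
    start_time = None  # when the gaze first landed on it
    for t, target in samples:
        if target != current:
            current, start_time = target, t   # gaze moved; restart the clock
        elif target is not None and t - start_time >= threshold_s:
            return target                     # lingered long enough: trigger
    return None
```

Too short a threshold and every stray glance becomes a click (the classic "Midas touch" problem); too long and the interface feels sluggish -- which is why real products make this tunable.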

While wearable gizmos and eye-tracking tech have great potential, gesture and speech recognition are still considered the two bedrock technologies most likely to power postkeyboard input devices. After all, the idea is to interact with our computers as efficiently and naturally as we do with one another. And we humans communicate primarily through spoken words and body language -- plus, passive-aggressive office behavior, depending on the situation.

On the other hand, if you're comfortable with a more cyborgian vision of the future, maybe we can shortcut all that entirely and aim for direct brain-to-chip transmission. Research into BCI (brain-computer interface) technology goes back almost 50 years, but recent developments are bringing it out of the labs and onto retail shelves. For instance, NeuroSky's EEG biosensors allow users to trigger computer interface options by, quite literally, thinking about them. Dermal pads monitor your level of concentration and input commands when you reach a particular threshold. There's already a Google Glass peripheral in development.
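That threshold scheme is easy to picture in code: smooth the noisy concentration readings, fire a command when the smoothed value crosses the line, and re-arm only after it drops back below so one burst of focus doesn't trigger a dozen commands. A sketch, with an assumed window size and threshold (not NeuroSky's actual parameters):

```python
# Illustrative sketch of threshold-triggered input as described above:
# a smoothed "concentration" score (0-100) fires a command at a set level.
# The smoothing window and threshold are assumptions, not NeuroSky's values.

from collections import deque

class ConcentrationTrigger:
    def __init__(self, threshold=70.0, window=5):
        self.threshold = threshold
        self.readings = deque(maxlen=window)  # most recent raw sensor values
        self.armed = True  # fire once per crossing, re-arm when focus drops

    def feed(self, value):
        """Feed one raw reading; return True when a command should fire."""
        self.readings.append(value)
        avg = sum(self.readings) / len(self.readings)
        if avg >= self.threshold and self.armed:
            self.armed = False   # don't re-fire until the average falls again
            return True
        if avg < self.threshold:
            self.armed = True
        return False
```

The arm/disarm dance is the important bit: without it, holding your concentration above the threshold would spam the same command on every sensor reading.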

With the advent of magic rings, smart glasses, and mind-reading input devices, the keyboard may be finally settling into its twilight years. Fare thee well, gentle friend. Thanks for the carpal tunnel syndrome.