Tag Archives: augmented reality

Out of five big innovations that IBM Research predicts will change our lives in the next five years, one in particular caught our eye, since it might just require some of our precision optics: hyperimaging technology. Here is an introduction to this burgeoning optoelectronics opportunity.

“More than 99.9 percent of the electromagnetic spectrum cannot be observed by the naked eye. Over the last 100 years, scientists have built instruments that can emit and sense energy at different wavelengths.” – IBM Research

Hyperimaging technology is special because it combines multiple bands of the electromagnetic spectrum to extend what we can see beyond visible light; in other words, it will allow us to perceive qualities invisible to the naked eye, perhaps approaching Superman-style vision.

Existing tools can illuminate objects and penetrate opaque environmental conditions using different frequencies of the electromagnetic spectrum, such as radio waves, microwaves, millimeter waves, infrared, and x-rays, and reflect that energy back to us. However, each of these instruments sees only its own specific portion of the electromagnetic spectrum.

IBM is building a portable hyperimaging platform that “sees” across numerous portions of the electromagnetic spectrum collectively, to potentially enable a host of practical applications that are part of our everyday experiences.
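To make the idea concrete, here is a minimal toy sketch, entirely our own illustration and not IBM’s implementation, of how several single-band images could be fused into one composite that reveals features no single band shows on its own:

```python
import numpy as np

def fuse_bands(bands, weights=None):
    """Fuse single-band images (2-D arrays of the same shape, values
    in 0..1) into one normalized composite image."""
    stack = np.stack(bands, axis=0).astype(float)
    if weights is None:
        # Equal weighting by default; a real system would tune this.
        weights = np.ones(len(bands)) / len(bands)
    composite = np.tensordot(np.asarray(weights), stack, axes=1)
    # Rescale to 0..1 so faint bands still contribute visibly.
    lo, hi = composite.min(), composite.max()
    return (composite - lo) / (hi - lo) if hi > lo else composite

# Toy "bands": visible light shows one feature, infrared another.
visible = np.zeros((4, 4)); visible[0, 0] = 1.0
infrared = np.zeros((4, 4)); infrared[3, 3] = 1.0

fused = fuse_bands([visible, infrared])
# The composite now contains both features at full strength.
```

A real hyperimaging platform would also have to register the bands geometrically and weight them adaptively; the point here is only the fusion step itself.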

How will hyperimaging affect our daily lives? In five years, it could help identify the nutritional value of food, detect fraudulent drugs, deepen the augmented reality experience, or make driving conditions clearer. For example, using millimeter wave imaging alongside a camera and other sensors, hyperimaging technology could help a car see through fog or detect hazardous, hard-to-see road conditions such as black ice. Cognitive computing technologies will then be able to draw conclusions from the hyperimaging data and recognize whether an object in the road is a cardboard box or an animal.

In all, it sounds like a promising and cool new technology on the horizon. Check out the rest of IBM’s “5 in 5” predictions here.

While those of us who are adults now primarily remember video games as joystick- or button-controlled pastimes, children growing up today will most likely remember a completely different experience: motion-control games such as the Wii and Kinect or, perhaps even more likely, games controlled by the position of their eyes and hands. Given the intense resources currently focused on augmented and virtual reality, gaming in alternate realities looks ready to explode into the mass market.

Goldman Sachs recently published a report estimating that AR and VR could become an $80 billion market by 2025, roughly the size of the desktop PC market today. The reasoning behind this growth is that AR and VR will be used not only for gaming but in a wide variety of practical applications across sectors such as healthcare, real estate, and education.

However, the most commercially anticipated VR and AR area is gaming. Given the recent launches of both the Oculus Rift and HTC Vive (and the PlayStation VR later this year), the VR space is going to quickly become a tech battleground. In fact, there will also be battles amongst the companies streaming content to VR such as Netflix, Hulu, and Amazon. Stay tuned.

Wearable technology is a trending term now used for a wide array of products, from fitness trackers and smart watches to the latest augmented reality glasses, all of which connect wirelessly to your smartphone or computer from their place on your body. The growing wearable market is expected to reach over $70 billion by 2025 (IDTechEx). Indeed, wearables are on the rise; meanwhile, innovators are already thinking hard about the next phase of this category, testing out personal technology concepts that push the envelope.

Auger Loizeau’s Audio Tooth Implant

Further emphasizing the cyborg-like qualities of wearable technologies are implantable wearables: that’s right, connected devices inside your body. Pictured here is a tooth implant which, in spy-like fashion, is embedded with a miniature audio output and receiver to bring communication capabilities to its user’s mouth. A modified mobile telephone or dedicated device is used to receive the long-range signal.

Project Underskin

There are also devices that can be embedded just below the surface of the skin to detect vitals or unlock a smart door. Devices such as these will send internal data or images to an app and will likely be able to run on energy from our bodies. Depending on the device, they could serve an array of purposes, including monitoring diseases, communicating with doctors, and even treating ailments by releasing medication into the body via remote control.

Nest thermostat

In yet another take on personal sensors, tech writers have coined the concept of “senseables”: a series of sensors embedded throughout an environment that provide users with instant data feedback to customize their experience. Cameras assembled with active alignment may well be needed to realize this technology. For instance, Audi has recently unveiled Pre-Sense, in which a number of sensors embedded inside a car measure a driver’s emotions, body language, and involuntary reactions. This data is then used to automatically adjust safety mechanisms within the car; for example, if a driver is distracted, the car’s safety controls will ensure it does not drift into an adjacent lane. Similarly, sensors embedded in the Nest thermostat automatically adjust the temperature when particular events in the environment are detected.

So going beyond wearable cameras and smart watches toward implanted and surrounding sensing technologies is not just science fiction; it will soon be part of our reality.

Innovation is happening across the virtual reality and augmented reality universe, and VC firms are investing in it significantly. Much like the “holodeck” in Star Trek, a room that can transform into any location in the universe via holographic imagery, these wearable alternate-reality devices plunge users into another world. Oculus Rift is currently the leading virtual reality product, and, as recently reported in Wired, Microsoft has been developing what it calls HoloLens, an augmented reality headset that layers a multi-dimensional cyber world on top of the real world. These systems create an amazing array of opportunities to collaborate, visualize, create, experiment, and, of course, play.

HoloLens Augmented Reality Headset (Wired)

The depth camera on the new HoloLens has a field of vision spanning 120 by 120 degrees, so it can sense your hands even when they are nearly outstretched. As many as 18 sensors flood the device with data every second, all managed by an onboard CPU. Users control the device through gesture recognition, voice, and gaze. Scenes might be anything from a 3D video game to the landscape of Mars. In fact, the Mars hologram was so impressive that NASA has signed on to use the system right away so that agency scientists can collaborate on a mission with it.

In addition to Oculus Rift, other virtual reality systems include the Zeiss VR One and the Samsung Gear VR. The HoloLens, still in development, is being touted as ambitious and bold: a unique, groundbreaking augmented reality system that combines reality with virtual surroundings. People will expect a thrilling ride when it arrives, and it will be a delight to see the inventive applications developers come up with to make the most of this technology.

In Gartner’s 2014 Hype Cycle for Emerging Technologies, there are several exciting areas in which technologies depend on optical components and camera modules for key functions, functions that likely depend on image clarity and require active alignment for their optics. The most prominent are gesture control, virtual reality, augmented reality, and autonomous vehicles. Of those, the most advanced on the cycle is gesture control technology; according to Gartner, its “plateau of productivity,” in which mainstream adoption begins to take place, will be reached in 2-5 years.

Gartner’s 2014 Hype Cycle for Emerging Technologies

Gesture control technology has been embraced by companies ranging from small venture-funded start-ups to large corporations looking for the next big thing. Some, such as Samsung, are partnering with these start-ups to incorporate gesture recognition into their next-generation televisions and other electronics. Others are forging ahead with their own cutting-edge products; for example, Intel has recently publicized its wide-ranging RealSense Technology, with which a camera in the computer can see in 3D, recognize gestures, and take refocusable photos.

At Kasalis, we are fostering innovation at the intersection of software and optics, providing precision active alignment for optics that can then be used to clearly and accurately translate hand movements, or gestures, into commands the software can understand. We are thrilled that camera module and optical quality has become a top priority for the most cutting-edge technologies, and delighted that our technology plays a key supporting role in their advancement.

The EyeRing system, invented at MIT Media Lab’s Fluid Interfaces Group, is a chunky ring device with a camera module mounted on it that can aid the blind by providing audio responses about what is in front of them. For example, in the EyeRing video (below), a man is shopping and commands the ring to detect the color of the shirt he holds. The image is sent through the system and the result is translated into words; the EyeRing responds, “grey.”

When the camera snaps a photo, it is sent to a Bluetooth-linked smartphone. A special Android app processes the image using computer-vision algorithms and then uses a text-to-speech module to communicate the results through earphones. So far, the device is capable of detecting currency type, color, and the amount of open space ahead (the “Virtual Walking Cane”).
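The color-naming step can be sketched in a few lines. This is a simplified stand-in for the MIT team’s actual computer-vision code, using a small hypothetical palette of our own: average the sampled pixels, then report the nearest named reference color.

```python
# A small, hypothetical palette of named reference colors (RGB).
# EyeRing's real palette and matching algorithm are not public here.
PALETTE = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "grey": (128, 128, 128),
    "red": (220, 30, 30),
    "green": (30, 160, 60),
    "blue": (40, 60, 200),
}

def name_color(pixels):
    """Average the sampled RGB pixels, then return the name of the
    closest palette color by squared Euclidean distance."""
    n = len(pixels)
    avg = tuple(sum(p[i] for p in pixels) / n for i in range(3))

    def dist(rgb):
        return sum((a - b) ** 2 for a, b in zip(avg, rgb))

    return min(PALETTE, key=lambda name: dist(PALETTE[name]))

# Pixels sampled from a grey shirt:
print(name_color([(120, 125, 130), (135, 130, 128)]))  # prints "grey"
```

The returned name would then be handed to the text-to-speech module, which is how the ring answers “grey” in the shopping demo.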

The camera module sends an image to the mobile phone app, which then translates the image into words and tells the user what it sees. Image: MIT

The EyeRing camera will identify currency for the vision impaired. Image: MIT

Although commercialization is likely at least two years away, the potential for this type of technology to help the blind to “see” what is in front of them is huge. The team is currently working on the next prototype, incorporating more advanced capabilities for the device, such as potentially reading non-braille words, taking real-time video, and adding sensors and a microphone. The design will also be streamlined to be smaller and have a lower center of gravity. While finger-worn devices are not new, most of the existing ones have been designed for people with sight, so this is truly an exciting breakthrough for the visually impaired.