Nvidia May Be Lone Rider on Next Big Technology Wave

Dell’s most forward-looking people spoke about the future at Dell World a few weeks ago. One of the sessions I attended dovetailed with something that now appears glaringly obvious: Robots likely will be the next big technology wave. Dell is not alone. Nvidia also figured out early that autonomous cars were going to be a thing, and it largely pivoted from mobile device efforts that were going nowhere to self-driving cars. Last week, Nvidia announced Isaac, which is based on its Jetson platform and targets robotics. Once again, Nvidia has anticipated the future and, in its segment, is largely going it alone. Applying what it learned developing autonomous vehicles gave the company a huge jump on this segment, and its initial offering looks surprisingly mature as a result. I’ll share some observations about what Nvidia’s Isaac is going to enable, and close with my product of the week: Cinego, a movie-watching solution that provides a big-screen experience on your head and actually is damn comfortable.
Self-driving cars are basically robots that carry people. They are very advanced, because these robots must be able to deal with a massive variety of changing conditions in real time. Using a blend of cameras and technologies like lidar, they must look for and anticipate problems, respond to them in milliseconds, and ensure the safety of the vehicle, its passengers, and anyone near the vehicle. In terms of being able to take in information and make decisions, they are far more advanced and faster than most defense systems, most computer systems, and most traffic control systems. Otherwise, they wouldn’t be safe on the road.

One of the things Nvidia realized it needed late in the process was the ability to create electronic simulations of various traffic, road and weather conditions, and to train the autonomous driving computers at computer speed. Previously, training had been done at human speed on real roads, which significantly limited the system’s learning speed and created potentially life-threatening risks. Training on a virtual system entails little or no risk, so the result of the pivot to simulation was a massive increase in system capabilities.
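To see why training at computer speed matters so much, consider a back-of-the-envelope sketch. The numbers below are purely hypothetical stand-ins (Nvidia has not published these figures); the point is only that a simulator running faster than real time, multiplied across many parallel instances, compounds into orders of magnitude more driving experience than a single test car can ever accumulate on real roads:

```python
# Illustrative only: all figures are assumed, not from Nvidia.
REAL_MILES_PER_HOUR = 30   # a single test car driving real roads
SIM_SPEEDUP = 1_000        # assumed: one simulator runs ~1,000x real time
SIM_INSTANCES = 100        # assumed: simulators running in parallel

def real_miles(hours: float) -> float:
    """Driving experience (miles) gathered by one car on real roads."""
    return REAL_MILES_PER_HOUR * hours

def simulated_miles(hours: float) -> float:
    """Driving experience (miles) gathered across all simulator instances."""
    return REAL_MILES_PER_HOUR * SIM_SPEEDUP * SIM_INSTANCES * hours

# One day of wall-clock time:
advantage = simulated_miles(24) / real_miles(24)
print(f"Simulation gathers {advantage:,.0f}x more experience per day")
```

Under these assumed numbers, one day of simulation yields 100,000 times the experience of one day of road testing, with none of the physical risk, which is the crux of the argument for virtual training.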
Nvidia has applied these same tools to Isaac, so its robotic solution starts out years ahead of where it otherwise might be. The end result is a robotic intelligence system with much of the power of Nvidia’s autonomous vehicle platform, giving it the ability to navigate, see and make decisions. Even voice command is built in, given that you largely will interface with an autonomous vehicle by voice. Autonomous cars can read signs, so robots based on this technology should be able to read as well. Using this system, developers should be able to give a robot the ability to respond to commands, read labels on food packaging and medicine bottles, and, over time, perform many of the same tasks as a caregiver.