NVIDIA DRIVE Touts Commercially Available Automated Driving System

It is now easier for developers and manufacturers to view raw sensor data in real time

Santa Clara-based NVIDIA has unveiled NVIDIA DRIVE AutoPilot, a tool it claims is the world’s first commercially available Level 2+ automated driving system.

Announced at CES in Las Vegas, DRIVE AutoPilot combines several of NVIDIA’s hardware and software projects into one AI technology package, which the company hopes will enable car manufacturers to develop autonomous vehicles at scale on a faster timeline.

Rob Csongor, VP of Autonomous Machines at NVIDIA, commented: “A full-featured, Level 2+ system requires significantly more computational horsepower and sophisticated software than what is on the road today.

“NVIDIA DRIVE AutoPilot provides these, making it possible for carmakers to quickly deploy advanced autonomous solutions by 2020 and to scale this solution to higher levels of autonomy faster.”

Using deep neural networks to analyse data from 360-degree surround cameras and other external sensors on the vehicle, NVIDIA DRIVE enables a vehicle to evade obstacles in its environment, navigate high-speed lane changes and perform automatic emergency braking.

DRIVE AutoPilot does not make a car fully autonomous.

Instead, it provides driver-assistance features such as lane changing, merging, parking assistance and pedestrian detection: all building blocks of autonomous driving, but the driver is still very much required.

Mapping Software

NVIDIA’s DRIVE Localisation software makes use of third-party maps to determine where a vehicle is in real time, using low-cost sensors instead of costly LIDAR technology.

DRIVE Localisation uses a front-facing camera in conjunction with a Global Navigation Satellite System (GNSS) receiver, the vehicle’s speedometer and an Inertial Measurement Unit (IMU) to gather real-time data on the vehicle’s environment and position.
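Between GNSS fixes, a system like this can propagate the vehicle's position from the speedometer and IMU alone. The sketch below is a minimal dead-reckoning step, not NVIDIA's implementation; the function name and time step are invented for illustration.

```python
import math

def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """Advance a 2D pose estimate one time step using the speedometer
    (speed, m/s) and the IMU's yaw rate (rad/s). A GNSS fix would
    periodically correct the drift this integration accumulates."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Drive straight east at 10 m/s for one second (10 steps of 0.1 s):
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(*pose, speed=10.0, yaw_rate=0.0, dt=0.1)
print(pose)  # → roughly (10.0, 0.0, 0.0)
```

In practice the GNSS, speedometer and IMU streams would be fused with a filter (for example a Kalman filter) rather than integrated naively like this.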

Rather than go through the process of building its own high-definition maps, NVIDIA has turned to third-party map providers Baidu, HERE, Zenrin, TomTom and NavInfo. DRIVE Localisation matches landmarks identified by the vehicle’s sensor systems against the features stored in these HD maps.

The result is a highly accurate localisation system for vehicle navigation which, thanks to its low-cost sensors and readily available third-party mapping technology, can be installed in vehicles currently on the market.
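The landmark-matching idea above can be sketched as a least-squares alignment: given pairs of landmarks (one position as detected in the vehicle frame, one as stored in the HD map), solve for the rigid transform that best overlays them. This is a standard Kabsch/Procrustes fit under assumed perfect correspondences, not NVIDIA's algorithm; all names here are illustrative.

```python
import numpy as np

def align_to_map(detected, map_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    landmarks detected in the vehicle frame onto their matched HD-map
    landmarks. Rows of the two arrays are corresponding pairs."""
    cd, cm = detected.mean(axis=0), map_pts.mean(axis=0)
    H = (detected - cd).T @ (map_pts - cm)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cm - R @ cd
    return R, t

# Three landmarks seen by the vehicle, offset from the map by (2, -1):
map_pts = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 3.0]])
detected = map_pts - np.array([2.0, -1.0])
R, t = align_to_map(detected, map_pts)
print(np.round(t, 3))  # recovers the offset, ≈ [2, -1]
```

The recovered transform is the correction that pins the vehicle's pose to the map, which is what lets camera-detected landmarks substitute for LIDAR.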

In The Cockpit

A key component of NVIDIA’s autonomous hardware is its DRIVE AGX Pegasus kit, which can deliver 320 trillion deep learning operations per second (TOPS) using two Turing-architecture Tensor Core GPUs. Pegasus also contains two Xavier system-on-chip processors, each of which can deliver 30 trillion operations per second.
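A quick back-of-the-envelope check on those figures, assuming (the article does not say) that the 320 TOPS total counts both Xavier SoCs at 30 TOPS each and the two GPUs supply the remainder equally:

```python
# Hypothetical split of Pegasus's 320 TOPS across its processors.
total_tops = 320
xavier_tops = 2 * 30                 # two Xavier SoCs at 30 TOPS each
gpu_tops = total_tops - xavier_tops  # remainder attributed to the GPUs
print(gpu_tops, gpu_tops / 2)        # → 260 130.0
```

Under that assumption each Turing GPU would account for roughly 130 TOPS, an order of magnitude above each Xavier SoC.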

The DRIVE software suite rolled out by NVIDIA also brings new capabilities to the internal systems of a vehicle. Most notably, DRIVE IX visualises the data computed from the external sensors in a display that developers and manufacturers can view in real time.

A developer can also view the planned route of the vehicle, signified by yellow arrows on the display, while the mapping component gives a clear picture of the objects surrounding the vehicle.

A novel feature also included in NVIDIA’s sensor system is driver monitoring, which tracks the driver’s facial expressions. Using a driver-facing camera in conjunction with deep neural networks, the system tracks driver fatigue and distraction levels.

Developers and drivers can view the raw data on a display showing the eye- and face-detection process. If the driver appears drowsy, the system also displays a cup of coffee as a signifier, and attention to the road is measured by a timer. If the deep neural network algorithms detect either metric moving into a dangerous range, the video display is tinted red and the driver is alerted.
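The timer-and-threshold logic described above can be sketched as follows. This is a toy illustration, not DRIVE IX code: the thresholds, function name and per-frame inputs are all invented, and the real system's drowsiness score comes from its neural networks rather than being supplied directly.

```python
GAZE_OFF_ROAD_LIMIT_S = 2.0   # hypothetical distraction threshold
DROWSINESS_LIMIT = 0.8        # hypothetical 0..1 drowsiness threshold

def monitor(frames, dt=0.1):
    """frames: per-frame (gaze_on_road: bool, drowsiness: float) tuples.
    Returns one status per frame; an alert would tint the display red."""
    off_road_timer = 0.0
    statuses = []
    for gaze_on_road, drowsiness in frames:
        # The attention timer resets whenever the gaze returns to the road.
        off_road_timer = 0.0 if gaze_on_road else off_road_timer + dt
        if off_road_timer > GAZE_OFF_ROAD_LIMIT_S:
            statuses.append("distracted")
        elif drowsiness > DROWSINESS_LIMIT:
            statuses.append("drowsy")   # cue for the coffee-cup icon
        else:
            statuses.append("ok")
    return statuses

# 2.5 seconds of looking away (25 frames) trips the distraction alert:
frames = [(False, 0.1)] * 25
print(monitor(frames)[-1])  # → distracted
```

The key design point is hysteresis through the timer: a momentary glance away resets cleanly, while sustained inattention accumulates until it crosses the alert threshold.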