Apple quietly wheels out 'Voxelnet' driverless car tech paper

We'd ask them what it means, but, uh...

Apple researchers have released a paper about a "trainable deep architecture", setting out the fruity firm's plans to make autonomous vehicles better at detecting cyclists and pedestrians.

The paper, jointly authored by Apple researchers Yin Zhou and Oncel Tuzel, details a system the pair call Voxelnet. A voxel is the 3D equivalent of a pixel: a small cube in a regularly spaced three-dimensional grid.

Voxelnet, say the Apple twosome, divides "a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer".
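For the curious, that first step — bucketing raw LIDAR points into equally spaced voxels — can be sketched in a few lines. This is our own illustrative mock-up, not Apple's code: the voxel size, the grid origin and the use of NumPy are all assumptions for demonstration purposes.

```python
# Illustrative sketch (not Apple's implementation): group raw 3D points
# into equally spaced voxels, the first stage described in the paper.
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Group an (N, 3) array of LIDAR points by the voxel each falls into.

    Returns a dict mapping integer voxel coordinates (i, j, k) to a list
    of the points inside that voxel. Voxel size is an assumed parameter.
    """
    # Integer voxel index for each point: floor(coordinate / voxel size).
    indices = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for idx, point in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(point)
    return voxels

# Toy point cloud: the first two points share a voxel, the third sits apart.
cloud = np.array([[0.05, 0.05, 0.05],
                  [0.10, 0.15, 0.05],
                  [1.00, 1.00, 1.00]])
voxels = voxelize(cloud)
print(len(voxels))  # 2 occupied voxels
```

In the paper's pipeline, the points in each occupied voxel would then be fed through the VFE layer to produce a single feature vector per voxel.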

The paper goes on to claim, unsurprisingly, that in Apple's practical tests the new system outperformed existing "LIDAR-based 3D detection methods by a large margin".

Table 2 from the paper

LIDAR-based sensor suites are a standard fit nowadays for self-driving cars and for existing road vehicles modified to serve as driverless car testbeds. Apple's proposal effectively involves putting its own software suite on the end of the LIDAR sensor itself, which the company claims greatly increases the sensor's effectiveness at spotting obstacles.