Interview - Baselabs' Eric Richter on sensor fusion : Page 3 of 4

June 07, 2019 //By Christoph Hammerschmidt

The automation of driving requires an immense amount of software in the vehicle, not least in the area of sensor data fusion. The software company Baselabs has gained a strong position in this field. eeNews Europe talked with Baselabs co-founder and director of customer relations Eric Richter about the software requirements and the role artificial intelligence will play in cars.

eeNews Europe: Talking about radar: I assume that a radar sensor from company X delivers a point cloud with a different structure than one from company Y. With cameras, the differences are perhaps even more pronounced, since part of the preprocessing is already carried out inside the camera. Developers then have to deal with completely different data.

Richter: Exactly. This is an important issue. There are different data levels for each sensor; even the terms are not precisely defined. Many speak of raw-data or feature-level data, detection-level data and object-level data; these are the usual three to four levels that are distinguished. The exact definition differs slightly from manufacturer to manufacturer. For us, it is important to take a close look at which level a sensor delivers. The two highest levels, object level and detection level, have existed for the longest time; this is where we have already made the most progress with our product range. Newer approaches that we are also developing at Baselabs, such as the Dynamic Grid, a new algorithmic procedure, primarily address the lower levels, i.e. the feature and raw-data levels.
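The level hierarchy Richter describes can be sketched as a simple enumeration. This is only an illustration of the terminology in the interview, not Baselabs' actual data model; the exact names vary from manufacturer to manufacturer, as he notes.

```python
from enum import Enum

class SensorDataLevel(Enum):
    """Rough taxonomy of sensor interface levels (terminology varies by vendor)."""
    RAW_FEATURE = "raw / feature level"   # e.g. unprocessed point clouds
    DETECTION = "detection level"         # individual detections per measurement cycle
    OBJECT = "object level"               # tracked objects with state estimates

# Classical fusion products mostly consume the two highest levels;
# approaches such as the Dynamic Grid target the lower ones.
```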

eeNews Europe: Dynamic Grid? Please explain.

Richter: This is our term for this group of methods. The background: you have to reliably determine the free space around the vehicle in order to calculate the trajectory you want to travel. So far, occupancy grids have mainly been used for this. However, these methods have some decisive disadvantages; above all, they are not able to distinguish between static and dynamic objects. At the higher SAE autonomy levels, Level 3 and above, for functions such as a motorway pilot, this causes difficulties. This is where the new group of methods that we call Dynamic Grid comes in. For each space element, i.e. per grid cell, it is determined not only whether the cell is occupied by another vehicle, but also in which direction the object is moving and at what speed. This method therefore helps to distinguish between dynamic and static objects, and it can directly process point clouds from lidar sensors or high-resolution radar images.
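The core idea, each cell carrying a velocity estimate in addition to an occupancy probability, can be illustrated with a minimal sketch. This is an assumption-laden toy model, not Baselabs' Dynamic Grid implementation; the class names, fields, and thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GridCell:
    """Toy dynamic-grid cell: occupancy probability plus a velocity estimate.

    A classical occupancy grid stores only p_occupied; adding (vx, vy)
    per cell is what allows static and dynamic objects to be told apart.
    """
    p_occupied: float = 0.0   # occupancy probability in [0, 1]
    vx: float = 0.0           # estimated velocity component, m/s
    vy: float = 0.0           # estimated velocity component, m/s

    def is_occupied(self, threshold: float = 0.5) -> bool:
        return self.p_occupied >= threshold

    def is_dynamic(self, speed_threshold: float = 0.5) -> bool:
        # A cell counts as dynamic if it is occupied and its estimated
        # speed exceeds a small threshold (threshold chosen arbitrarily here).
        speed = (self.vx ** 2 + self.vy ** 2) ** 0.5
        return self.is_occupied() and speed > speed_threshold

def free_cells(grid: list[list[GridCell]]) -> list[tuple[int, int]]:
    """Indices of cells considered free space, e.g. for trajectory planning."""
    return [(i, j)
            for i, row in enumerate(grid)
            for j, cell in enumerate(row)
            if not cell.is_occupied()]
```

In a real system, the per-cell velocity would come from recursively filtering successive lidar point clouds or radar images rather than being set directly; the sketch only shows what the extra state buys over a static occupancy grid.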
