CARLA 0.8.0 Release

In this release we have focused on increasing speed, but there are many other
improvements too, including a great community contribution: a Lidar
implementation!

We have also upgraded to Unreal Engine 4.18; if you are building from source,
you need to upgrade your Unreal Engine version. We have also made some changes
to the client-server communication, so previous versions of the client won’t
work with this release. As usual, the updated Python API is shipped in the same
package as the release.

The CARLA Documentation has also been updated to a more tutorial-like format,
hoping to help new users get started with CARLA.

Faster CARLA!

We have been working very hard to make CARLA faster. We mostly focused on
improving the performance of capturing images, so the greatest gains are seen
when running CARLA with cameras and no agents (no other vehicles or
pedestrians), where it is up to 45% faster.

We have improved several aspects of the scene to increase the render speed:
optimized shaders, reduced textures, enhanced level of detail of trees, and
reduced the background to a matte painting. Parallel to this, we reworked how
the images are captured to reduce latency (more info on this below).

Quality levels

In this release we also introduce Quality Level settings. Configurable for
each episode, they control the visual quality of the simulation. We have
currently implemented the Low and Epic modes, but we plan to add
intermediate levels in the future. The Low mode aggressively reduces the
quality but runs considerably faster. This is especially useful in cases where
visual quality matters less than running a huge number of episodes in a short
time, as typically happens when testing reinforcement learning methods.
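As a rough illustration of how a quality level might be requested per episode, here is a minimal sketch that serializes settings into a CarlaSettings.ini-style string. The section and key names (`CARLA/QualitySettings`, `QualityLevel`, etc.) are assumptions for illustration, not a guaranteed copy of the actual configuration file format; check the CARLA documentation for the real keys.

```python
# Illustrative sketch only (not the actual CARLA client API): building a
# CarlaSettings.ini-style block where a per-episode quality level is set.
def make_settings_ini(quality_level="Low", vehicles=20, pedestrians=30):
    """Serialize hypothetical episode settings into an INI-style string."""
    if quality_level not in ("Low", "Epic"):
        raise ValueError("only Low and Epic are implemented in 0.8.0")
    return "\n".join([
        "[CARLA/QualitySettings]",          # assumed section name
        f"QualityLevel={quality_level}",
        "[CARLA/LevelSettings]",            # assumed section name
        f"NumberOfVehicles={vehicles}",
        f"NumberOfPedestrians={pedestrians}",
    ])
```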

| Sensors       | 0.7.1  | 0.8.0 Epic mode | 0.8.0 Low mode |
|---------------|--------|-----------------|----------------|
| No cameras    | 22 FPS | 26 FPS          | 60 FPS         |
| One camera    | 9 FPS  | 13 FPS          | 45 FPS         |
| Three cameras | 4 FPS  | 6 FPS           | 15 FPS         |

Table: Comparison between CARLA versions, measured in a typical scenario with
20 vehicles and 30 pedestrians on Town01, running on a machine with Ubuntu
16.04 and a GTX1080 GPU.
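The gains in the table can be read as relative speedups over 0.7.1; for instance, the one-camera scenario in Epic mode goes from 9 to 13 FPS, roughly the headline ~45% figure. A quick check:

```python
# FPS numbers taken directly from the comparison table above.
fps = {
    "No cameras":    {"0.7.1": 22, "Epic": 26, "Low": 60},
    "One camera":    {"0.7.1": 9,  "Epic": 13, "Low": 45},
    "Three cameras": {"0.7.1": 4,  "Epic": 6,  "Low": 15},
}

def speedup(scenario, mode):
    """Relative gain of a 0.8.0 mode over the 0.7.1 baseline."""
    old, new = fps[scenario]["0.7.1"], fps[scenario][mode]
    return (new - old) / old

for scenario in fps:
    print(scenario,
          f"Epic: {speedup(scenario, 'Epic'):+.0%}",
          f"Low: {speedup(scenario, 'Low'):+.0%}")
```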

Improving CARLA performance is our top priority right now. We are not yet
satisfied with the current speed, and we will keep working to make CARLA faster
in the coming releases.

Fully free and distributable assets

We have created our own pedestrian 3D models, completely free to use and
distribute. We have also removed all dependencies on Epic’s Automotive
Materials, so it is no longer necessary to download that package and link it
when building CARLA from source; of course, it is still possible to add it to
the project, but this is now an optional step. Our release version no longer
uses any proprietary assets; all of CARLA is available in our GitHub build.

Measurements changed their units

This is an important change that may affect a lot of client code; if you are
developing your own CARLA client, you should take this into account when
upgrading to the 0.8.0 release.

CARLA now uses SI units everywhere, so we can finally say goodbye to the
strange units we were carrying around (yes, we were using units like (km/h)/s
for acceleration).

The affected measurements are:

- Location
- Speed
- Acceleration
- Collision
- Angle
- Time-stamps

All units are now consistent and derived from the SI system: measurements
received by the client use meters, seconds, and kilograms. We only kept
time-stamps in milliseconds, for practical reasons.
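For client code written against earlier releases, upgrading mostly means dropping old unit conversions. A couple of hypothetical helpers (not part of the CARLA API) show the factors involved for the units the old client used, such as km/h for speed and (km/h)/s for acceleration:

```python
# Hypothetical helpers for porting pre-0.8.0 client code to SI units;
# these are not part of the CARLA API, just the conversion factors.
def kmh_to_ms(speed_kmh):
    """km/h -> m/s (old speed unit to SI)."""
    return speed_kmh / 3.6

def kmh_per_s_to_ms2(accel):
    """(km/h)/s -> m/s^2 (the old acceleration unit to SI)."""
    return accel / 3.6
```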

Community contribution: Ray-cast Lidar

In this release we include a great contribution by Anton Pechenko (Yandex): a
ray-cast implementation of a rotating Lidar. Similar to Velodyne models
HDL-32E or VLP-16, this Lidar implementation is fully configurable: number of
channels, points per second, rotation frequency, range, etc. The Lidar can be
attached to the vehicle in the same way cameras are. For more details, see the
Lidar documentation.

On the client side, a point cloud is received for each Lidar. We have added
basic methods to export this point cloud in PLY format, so it can be easily
visualized in external software such as MeshLab.
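The PLY format mentioned above is simple enough to write by hand; here is a minimal sketch of an ASCII PLY exporter for an (x, y, z) point cloud. The client ships its own export methods, so this is only an illustration of what such a file looks like.

```python
def save_ply(points, path):
    """Write a list of (x, y, z) points as an ASCII PLY file,
    viewable in tools such as MeshLab."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    body = "\n".join(f"{x:.4f} {y:.4f} {z:.4f}" for x, y, z in points)
    with open(path, "w") as f:
        f.write(header + "\n" + body + "\n")
```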

Depth-based point cloud

We have added some methods to our Python API to extract a 3D point cloud from
a depth image, as well as code to transform these 3D points to world
coordinates. The point cloud format is shared with the Lidar output, so the
transformations are compatible with both, and it can likewise be saved in PLY
format. A sample of how to use it can be found in point_cloud_example.py.
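The depth-to-point-cloud conversion is a standard pinhole back-projection. Here is a self-contained sketch in pure Python, not the API's own implementation; it assumes square pixels, the principal point at the image center, and a 90-degree horizontal field of view (CARLA's camera default):

```python
import math

def depth_to_points(depth, fov_deg=90.0):
    """Back-project a depth image (2D list, meters) into 3D camera-space
    points using a pinhole model: X = (u - cx) * d / f, Y = (v - cy) * d / f,
    Z = d, with focal length f derived from the horizontal FOV."""
    h, w = len(depth), len(depth[0])
    f = w / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    cx, cy = w / 2.0, h / 2.0
    points = []
    for v in range(h):
        for u in range(w):
            d = depth[v][u]
            points.append(((u - cx) * d / f, (v - cy) * d / f, d))
    return points
```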

Asynchronous captures

As mentioned above, as part of our set of performance improvements, images are
now captured in Unreal’s render thread. By doing so we reduce latency and
avoid blocking the main thread as much as possible. This significantly reduces
the overhead of the cameras, but introduces small changes in the way images
arrive at the client.

In asynchronous mode, images may arrive up to two frames later than the
measurements. In Unreal Engine, the renderer has its own dedicated thread that
runs slightly behind the game thread; by reducing synchronization between the
two, the whole simulation runs faster.

In synchronous mode, images are still captured in the render thread, but at
the moment of sending the measurements, the main thread is blocked until all the
images are ready. This introduces some overhead, so the performance gain in this
mode is not as high.
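A client running in asynchronous mode that still needs aligned data can pair measurements with images by frame number, tolerating the up-to-two-frame lag described above. The buffer below is a hypothetical sketch, not part of the CARLA API:

```python
from collections import deque

class FrameMatcher:
    """Pairs measurements with images that may arrive up to `max_lag`
    frames later, as happens in asynchronous capture mode."""
    def __init__(self, max_lag=2):
        self.max_lag = max_lag
        self.pending = deque()  # (frame, measurement) awaiting an image

    def add_measurement(self, frame, measurement):
        self.pending.append((frame, measurement))

    def add_image(self, frame, image):
        """Return (measurement, image) if this image completes a pair."""
        # Discard measurements too old to ever be matched.
        while self.pending and self.pending[0][0] < frame - self.max_lag:
            self.pending.popleft()
        if self.pending and self.pending[0][0] == frame:
            return self.pending.popleft()[1], image
        return None
```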

More changes

There have been a number of other small changes and fixes; for the full list
of release notes, see the CHANGELOG on our GitHub.