I am also interested in this question and curious to know what you have tried or found out. I am a user like you, not a developer.

I took the map of a city, removed "track", "footpath" etc. ways, and reduced the size considerably, leaving only primary, secondary, tertiary and a few other highway types. Routing was indeed faster, by about 10%, with a bigger effect on the fastest weighting than on shortest. It also depends on whether you use CH (contraction hierarchies) or not.
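For reference, this kind of trimming can be sketched with osmium-tool (an assumption on my side that you have osmium installed; the file names are placeholders, and you would add whatever other highway classes you want to keep):

```shell
# Keep only ways tagged with the main highway classes; everything else
# (tracks, footpaths, ...) is dropped from the extract.
# city.osm.pbf / city-trimmed.osm.pbf are placeholder file names.
osmium tags-filter city.osm.pbf \
  w/highway=primary,secondary,tertiary \
  -o city-trimmed.osm.pbf
```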

Consider this scenario: I always run the Route API to calculate ETAs on paths within a fixed polygon (São Paulo), yet my dataset is the whole planet-osm.
Would the routing algorithm have a better response time on a trimmed dataset (Brazil-only instead of planet-osm)?

It makes sense to me that a trimmed dataset requires less memory, but I would like to know about the performance. Can I use the measurement action to quantify it?
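If it helps, a Brazil-only extract can be cut from the planet file with osmium-tool before the GraphHopper import (a sketch; it assumes osmium is installed and that `brazil.poly` is a polygon file describing your area):

```shell
# Cut everything outside the polygon from the planet file,
# then feed the smaller PBF to the GraphHopper import as usual.
osmium extract --polygon brazil.poly planet-latest.osm.pbf -o brazil.osm.pbf
```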

Hello karussell.
Just to let you know, I did load testing with smaller datasets and got some performance improvements from it.

However…
I ran the performance tests on GraphHopper 0.9 (my current production version) and GraphHopper 0.12 (the version we will update to).
In my load tests, using the same configuration and the same data, version 0.9 was faster than 0.12.

Usually we improve performance. But performance tests are really tedious to get right, and it is complex to improve all scenarios. So in order to know whether this is really a problem, you need to send us a reproducible measurement.

Preferably you would run:

./graphhopper.sh measurement area.pbf

and then you can see in the resulting properties files what is going on and whether the difference between the two versions is reproducible.
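To compare the two runs, you can diff the resulting properties files or pull out individual keys. The sketch below uses hypothetical file names and numbers, and the `routing.mean` key is an assumption — check your own measurement output for the exact property names your version writes:

```shell
# Two mocked-up measurement result files with hypothetical numbers,
# standing in for the files produced by ./graphhopper.sh measurement
cat > measurement-0.9.properties <<'EOF'
routing.mean=95.0
EOF
cat > measurement-0.12.properties <<'EOF'
routing.mean=110.0
EOF

# Print the mean routing time from each run side by side
for f in measurement-0.9.properties measurement-0.12.properties; do
  echo "$f: $(grep '^routing.mean=' "$f")"
done
```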