This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS) which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically estimate the required excavation volume when widening a road from point cloud data acquired by an LMMS. Firstly, the input point cloud is down-sampled to a uniform grid and outliers are removed. For each of the resulting grid points, both on and off the road, the local surface normal and 2D slope are estimated. Normals and slopes are subsequently used to separate road from off-road points, which enables the estimation of the road centerline and road boundaries. In the final step, the points on the left and right side of the road are divided into 1-m slices up to a distance of 4 m, perpendicular to the roadside. Determining and summing the volume of each slice enables the estimation of the required excavation for widening the road on the left or on the right side. The procedure, including a quality analysis, is demonstrated on a stretch of a mountain road that is approximately 132 m long as sampled by a Lynx LMMS. The results in this particular case show that the required excavation volume on the left side is 8% more than that on the right side. In addition, the error in the results is assessed in two ways. First, by adding up estimated local errors, and second, by comparing results from two different datasets sampling the same piece of road, both acquired by the Lynx LMMS. Results of both approaches indicate that the error in the estimated volume is below 4%. The proposed method is relatively easy to implement and runs smoothly on a desktop PC.
The whole workflow of the LMMS data acquisition and subsequent volume computation can be completed in one or two days and provides road engineers with much more detail than traditional single-point surveying methods such as Total Station or GPS profiling. A drawback is that an LMMS system can only sample what is within the view of the system from the road.

There are various modern surveying and mapping techniques that make it possible to quickly obtain 3D surface geometry data. Notably, in recent years, there has been rapid development in Light Detection and Ranging (LiDAR) systems, for both airborne and mobile applications. In this study, a car-based system is used, which is generally referred to as a Laser Mobile Mapping System (LMMS). Such a system typically integrates one or more LiDAR profilers with a Global Navigation Satellite System (GNSS) and an odometer for positioning, and an Inertial Measurement Unit (IMU) for attitude control [1]. While driving, the LiDAR profilers determine distances between the car and its surroundings. By combining these distances with the position and attitude of the vehicle, a 3D point cloud is obtained that represents the area surrounding the road as visible from the car. Traditionally, the road and roadside geometry are measured by surveyors on a point-by-point basis using Total Station or GNSS. Unlike these methods, LMMS does not measure the 3D positions of well-defined points in the terrain. Rather, it acquires many random 3D points by sampling the whole road surface and the surface of the roadside visible to the LMMS system. The quality of the individual 3D LMMS points is worse than the quality provided by Total Station or GNSS, but an LMMS is able to provide full coverage instead of acquiring only single points. In addition, the acquisition is fast and automatic, and there is no need for surveyors to leave the car. These latter advantages also distinguish LMMS from static Terrestrial Laser Scanning, in which a point cloud is obtained from a panoramic scanner mounted on a tripod [2–4]. Such panoramic scans need to be combined with GNSS positioning data in post-processing to acquire a georeferenced point cloud. To conclude, the LMMS is currently the fastest ground-based method for acquiring 3D surface information over large areas.
It has already been applied for highway surveying [5,6], sandy coast morphology and riverine erosion measurements [7,8], railway monitoring and rail geometry extraction [7,9], road environment management [8,10–12], and highway documentation [13–17]. Road management starts with the planning phase and ends with the rehabilitation or maintenance phase [7]. Once the road has been constructed, road documentation is needed for an increasing number of other applications, such as noise modeling, road safety, road maintenance, location-based services and navigation [13]. Road documentation and management mainly consist of recording the road geometry and monitoring the road environment [14,18]. Road geometry refers to parameters used for the design of a road, such as design speed, stopping sight distance, line of sight, number of lanes, lane width, longitudinal and transverse slope, road pavement materials and so on. Road environment refers to the surroundings of the road on both sides, including buildings, trees, vegetation, traffic signs, traffic light poles and other objects.

The information discussed above plays an important role in the ongoing maintenance of the road, especially for mountain roads where there is a risk for rock and stone fall. Additionally, the geometric features observed on a mountainous road could be helpful for monitoring the flow of rainwater in the case of heavy rain and may be used to assist natural disaster prevention [19]. Moreover, steep and unstable road sides may cause landslides, resulting in further road damage [20].

This paper presents an approach for automatically estimating the roadside material volume to be excavated for road widening. Firstly, LMMS point cloud data of a mountainous area are downsampled, and outliers and noisy points are removed. Based on these data, normal vectors and 2D slopes are estimated at every point. Then, an automatic iterative floating window approach, taking advantage of point height, normal vector and slope, is used to filter and segment the road points. After that, a local neighbourhood feature is defined based on the vectors between a query point and its neighbouring points to obtain the outline and skeleton of the road. These steps finally allow us to compute the volume of the roadside material that would have to be moved to widen the road by 4 m. In addition, an analysis of the quality of the results is presented, notably by comparing results from two different datasets, both sampling the region of interest. Finally, conclusions and future work are discussed.

2. Methodology

This section explains the procedure for estimating the amount of roadside material that has to be removed for road widening. It consists of five steps. (1) Point cloud data preprocessing; the original point cloud data have a very high point density and need to be downsampled before processing. Additionally, outliers are removed, and a Digital Surface Model (DSM) is generated; (2) Surface normal estimation; (3) Slope and aspect estimation; (4) Road detection and segmentation; taking the normal and slopes as estimated in steps (2) and (3) as input, an automatic iterative filtering approach is used to segment the road points from the downsampled point cloud data; (5) Volume computation; the material volume that needs to be moved to widen the road is computed based on step (4). A flowchart of the procedure is shown in Figure 1.

2.1. Point Cloud Data Preprocessing

The raw point cloud data have a very high point density, so for quick and efficient processing, downsampling is a necessity. The approach followed here is to represent the point cloud data by voxels [21–23]. In this paper, a uniform-sized voxel filtering approach was used, as implemented in the Point Cloud Library (PCL) [22]. The main concept of the approach is to create a 3D voxel grid with a given voxel size. All points within a voxel are represented by their centroid [13].
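The voxel-centroid idea can be sketched as follows. This is a minimal NumPy illustration of uniform voxel filtering (the paper uses PCL's implementation); the function name and parameters are chosen here for illustration only.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling into one voxel by their centroid.

    points: (N, 3) array of coordinates; voxel_size: voxel edge length in m.
    A minimal sketch of uniform voxel filtering in the spirit of PCL's
    VoxelGrid filter.
    """
    # Integer voxel index for every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; 'inverse' maps each point to its voxel row
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()          # guard against (N, 1)-shaped output
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)  # sum the points per voxel
    return centroids / counts[:, None]     # divide by counts -> centroids
```

With a 1-m voxel, two points inside the same voxel collapse to their mean, while a point in another voxel is kept separately.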

The outliers of the downsampled point cloud data have to be removed before estimating geometric features from the point cloud data. In this procedure, the query point neighbourhood concept is introduced, as shown in Figure 2. Here, P represents the set of points within radius Rquery of query point pquery, so P is defined as:
(1) P = {pi | ‖pi − pquery‖ ≤ Rquery}
where pi represents the individual points from the point cloud.

Outliers are measurements located at jump edges or discontinuous boundaries where there should be no points [24]. There is abundant literature on approaches to remove outliers [25–27]. The approach used in this paper is based on a statistical analysis of each point’s neighbourhood [28,29]. For each point pquery ∈ P, the mean distance d̄ to its k closest neighbours is calculated. Then, the mean and standard deviation of these mean distances over all points in the point cloud are determined. The main objective is to retain only those points whose mean distance to their k closest neighbours is similar to the global average mean distance. As this describes a measure of the underlying point cloud density surrounding a point, the criterion for keeping a point is formulated as follows:
(2) P* = {pq ∈ P | (μk − ασk) ≤ d̄ ≤ (μk + ασk)}
where α is a desired density restrictiveness factor, μk and σk are the mean and standard deviation of the mean neighbour distances over all query points respectively, and P* is the set of remaining points.
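The criterion of Equation (2) can be illustrated with a brute-force sketch. A production pipeline would use a k-d tree, as PCL's StatisticalOutlierRemoval does; the function name and defaults below are hypothetical.

```python
import numpy as np

def remove_outliers(points, k=8, alpha=1.0):
    """Statistical outlier removal following Eq. (2).

    Keep points whose mean distance d-bar to their k nearest neighbours
    lies within alpha standard deviations of the global mean of those
    distances. Brute force (O(N^2)); fine for a small demonstration.
    """
    # All pairwise distances
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # Mean distance to the k nearest neighbours, excluding the point itself
    d_mean = np.sort(dist, axis=1)[:, 1:k + 1].mean(axis=1)
    mu, sigma = d_mean.mean(), d_mean.std()
    keep = np.abs(d_mean - mu) <= alpha * sigma   # Eq. (2) criterion
    return points[keep]
```

For a regular grid of points plus one far-away outlier, the grid survives the filter and the outlier is removed.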

2.2. Local Surface Normal Estimation

The surface normal at a discrete point is a vector perpendicular to the tangential plane of the local surface at that point. Various methods exist to estimate the normal at a given point of 3D point cloud data [29–33]. The simplest is based on 3D plane fitting: the problem of determining the normal at a point becomes a least-squares 3D plane estimation problem for a suitable spatial neighbourhood. Figure 3 depicts the concept of normal estimation in discrete 3D point cloud data.

When estimating a local normal using least-squares, the goal is to approximate the tangent plane at a point of interest and take that plane’s normal. The best fitting plane is obtained by determining those planar parameters that minimize the squared distances between suitable neighbouring points and the plane to be estimated. Suppose we have a point of interest with Euclidean coordinates (x, y, z)T and a set of k neighbouring points. The least-squares method yields a normal vector n⃗ = (nx, ny, nz)T, for which the error is minimized.
(3) error = ∑i=1k (piT n⃗ − d)2

Additionally, |n⃗| = 1, where pi = [xi, yi, zi]T is a neighbourhood point, d is the distance from the fitted plane to the origin, and k is the predefined number of considered neighbourhood points.

The solution of Equation (3) for n⃗ is in general given by the eigenvector corresponding to the smallest eigenvalue of the matrix
(4) M = (1/k) ∑i=1k (pi − p̄)(pi − p̄)T
where
(5) p̄ = (1/k) ∑i=1k pi
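The plane fit of Equations (3)–(5) reduces to an eigendecomposition of the covariance matrix M. A minimal sketch for a single query point might look as follows; the upward-orientation step at the end is an added convention for illustration, not part of the equations.

```python
import numpy as np

def estimate_normal(neighbors):
    """Least-squares plane normal from k neighbouring points, Eqs. (3)-(5).

    The normal is the eigenvector of the covariance matrix M that
    corresponds to the smallest eigenvalue.
    """
    p_bar = neighbors.mean(axis=0)                 # Eq. (5): centroid
    centered = neighbors - p_bar
    M = centered.T @ centered / len(neighbors)     # Eq. (4): covariance
    eigvals, eigvecs = np.linalg.eigh(M)           # eigenvalues ascending
    n = eigvecs[:, 0]                              # smallest-eigenvalue vector
    # Orient the normal consistently upward (illustrative convention)
    return n if n[2] >= 0 else -n
```

For points sampled from the horizontal plane z = 0, the estimated normal is the vertical unit vector.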

2.3. Local Slope Computation

The 2D slope, also known as the 2D gradient, is the vector field of the surface height. The vector points in the direction of the greatest change in height, and its magnitude equals the rate of change. Based on the previously generated gridded DSM, the height at every grid point is interpolated from neighbouring points by inverse distance interpolation with power 2. If the grid size is dgrid, then the slope Si in the direction of each of the eight neighbouring grid cells is given by:
(6) Si = (Hi − hquery)/di
Here, Hi is the elevation of the i-th neighbour of the query point, hquery is the elevation of the query point itself, and di is the distance between the i-th neighbour and the query point: di equals dgrid for the four edge neighbours and √2·dgrid for the four diagonal neighbours.
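Equation (6) can be sketched on a gridded DSM as follows; the helper below, with its name and the choice to return the maximum absolute slope, is a hypothetical illustration for an interior grid cell.

```python
import numpy as np

def max_slope(dsm, i, j, d_grid):
    """Slope at interior grid cell (i, j) of a gridded DSM, after Eq. (6).

    For each of the 8 neighbouring cells, S_i = (H_i - h_query) / d_i,
    with d_i = d_grid for edge neighbours and sqrt(2)*d_grid for diagonal
    neighbours. Returns the maximum absolute slope (rise over run).
    """
    h = dsm[i, j]
    slopes = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue                      # skip the query cell itself
            dist = d_grid * np.hypot(di, dj)  # 1 or sqrt(2) grid units
            slopes.append((dsm[i + di, j + dj] - h) / dist)
    return max(abs(s) for s in slopes)
```

For a DSM that rises by 1 m per 1-m cell in one direction, the maximum slope is 1 (i.e., 45 degrees).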

2.4. Road Detection

Based on the normal vectors and slopes that were computed at each point in the previous steps, an automatic iterative point cloud data filtering approach is used to detect road surface points from the point cloud data. The main steps are as follows:
(1) Input an initial grid and window size;

(2) Generate a virtual reference 3D gridded layer. The elevation of each grid point is interpolated from its neighbouring points, as are the grid point’s unit normal vector and its direction to the zenith. Based on this layer, a window of predefined size is created that moves over the grid and the point cloud;

(3) In the current moving window, compare for each grid point the elevation and normal vector direction of the 3D layer with those of the point cloud, i.e., calculate the height and angle differences between the 3D layer and the point cloud and verify whether the differences exceed the thresholds;

(4) If the differences are less than the distance and direction thresholds, the point is accepted as a road point; otherwise, the point is regarded as an off-road point;

(5) Go to step (2) and generate a new virtual layer using a smaller grid size, then iteratively process the point cloud again;

(6) The loop ends when the grid size reaches a pre-defined smallest size.
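A single filtering pass of the comparison in the steps above can be sketched as follows. The surrounding iteration over shrinking grid sizes and the interpolation of the reference layer are omitted; all names, the vectorized form, and the per-point reference arrays are illustrative assumptions.

```python
import numpy as np

def classify_road(points, normals, ref_height, ref_normal,
                  h_thresh=0.3, angle_thresh_deg=15.0):
    """One pass of the iterative road filter (Section 2.4), sketched.

    A point is accepted as a road point when both its height difference
    to the virtual reference layer and the angle between its normal and
    the reference normal are below the thresholds. ref_height/ref_normal
    hold the interpolated reference values at each point's location.
    """
    dh = np.abs(points[:, 2] - ref_height)
    # Angle between unit normals via the dot product
    cos_a = np.clip(np.sum(normals * ref_normal, axis=1), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_a))
    return (dh <= h_thresh) & (angle <= angle_thresh_deg)
```

A flat point near the reference layer is accepted as road; a point 1 m above it (or with a near-horizontal normal) is rejected as off-road.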

2.5. Calculation of Excavation Volume

In this step, based on the segmented road points from step 4, the road’s outlines are determined. First, a local feature descriptor is defined based on the neighbourhood concept.

As shown in Figure 4, let P = {p1, p2, …, pk} be the set of neighbours of pquery within radius Rquery. The local vector r⃗i at query point pquery is defined as r⃗i = pi − pquery. The edge descriptor is defined as the sum of these local vectors:
(7) Edes = ∑i=1k r⃗i

If the query point is indeed an edge point, the value ‖Edes‖ is greater than that of a non-edge point in the road point cloud, and the descriptor points toward the inner body of the road.

After the road outline is determined, the road central line is estimated based on the location of the road edges. Now suppose the road needs to be widened by 4 m. As a consequence, a certain volume of roadside material has to be removed or added to extend the flat road surface.

As shown in Figure 5, roadsides were divided into slices. For each slice, the volume is computed. Summing all of the sliced volumes together gives the total volume that needs to be excavated or filled. Note that the sign of a sliced volume indicates whether material needs to be removed (positive sign) or added (negative sign). Based on the road central line and edge lines, as well as the expanded width of the road, the expanded edge points are found.

To minimize the sampling error, the resolutions in both the road parallel and the road perpendicular direction are defined based on the point cloud resolution, as shown in Figure 6.

Suppose VTL and VTR denote the total volume on the left and right side respectively, and ΔviL and ΔviR represent the volume of the i-th slice on the left and right side. NL and NR denote the numbers of slices. Thus, the total volume to be moved on each side of the road is determined by:
(8) VTL = ∑i=1NL ΔviL and VTR = ∑i=1NR ΔviR

If, for example, VTL > VTR, we could choose to extend the road on the right side to save time and expenses.

3. Implementation and Testing of the Method

3.1. Software Implementation

The methodology described above is implemented on an ordinary Dell desktop computer with an Intel Xeon 3.6 GHz CPU and 16 GB of RAM. The software is implemented in C++. In addition, the statistical outlier filtering tool of the Point Cloud Library (PCL) is used in the processing. The whole processing took 23.184 s for the tested dataset.

3.2. Data Description

The point cloud data studied in this paper was acquired by the University of Vigo, Spain. The approximate location of the studied road is shown in Figure 7.

The entire study area is shown in Figure 8. The study area contains a road in a mountainous region. An overhead view of the dataset is shown in Figure 8a. The mobile LiDAR system selected for this work was the Lynx Mobile Mapper from OPTECH. The Lynx uses two LiDAR sensors to collect survey-grade LiDAR data at 500,000 measurements per second with a 360° FOV (per scanner) [33,34]. The system incorporates the POS LV 520 unit produced by Applanix, which integrates an Inertial Navigation System with two GNSS antennas, providing an accuracy of 0.015° in heading, 0.005° in roll and pitch, 0.02 m in the X, Y dimension and 0.05 m in the Z axis. All those data are determined by differential GPS post-processing after data collection using GPS base station data. The coordinate system used for this work is UTM-WGS84. The original point cloud dataset contains 5,838,794 points and has an average point density of 2084 points per square meter. It covers a 132-m stretch of road. In Figure 8c, we can see that the road was constructed in a mountainous area and has steep embankments on either side.

3.3. Geometric Computations

3.3.1. Slope Computation

As depicted in Figure 9, the slope in this area varies from 0 to 88.1 degrees. The figure is colour coded, with red indicating a large slope and blue indicating a small slope. Points with small slopes are mainly road points. The points off the road have larger slope values, corresponding to the steepness of the roadside. The dots on the road encircled in red have larger slope values than other road points; these are in fact traffic cones that can also be seen in Figure 8d. There is a known landslide site located within the light green circle, which has a smaller slope value that stands in contrast to the steep roadside at its neighbouring points.

3.3.2. Road Detection and Segmentation

Road points were identified according to the method outlined in Section 2. The minimum virtual grid size was set to 0.1 m, because the original point cloud dataset has a very high point density. Additionally, the height threshold was set to 0.3 m and the angle threshold to 15 degrees. In this processing, a total of 42,717 road points were extracted and segmented, as shown in Figure 10.

3.3.3. Volume Computation

First, based on the segmented road points, the road outline was determined following the method of Section 2.5. In this paper, the local descriptor threshold was set at 1.5, which means that all points with an edge descriptor value greater than 1.5 were regarded as road outline points.

The extraction results are shown in Figure 11. In total, 429 outline points were identified. Figure 11a depicts the extracted road central line, and Figure 11b additionally illustrates the road lines of a projected road expansion by 4 m on each side.

After the extraction of the points above the possible location of the widened road (i.e., 4 m left and right of the current road), the vertical distances between points representing the current surface and the expanded road planes were determined, and the excavation volume was estimated. Figure 12 shows two profiles of the current surface height at a distance of 4 m left and right from the current roadsides. The horizontal axis follows the road starting from its lowest point. In this paper, the resolution in the road parallel direction was set to 1 m and in the road perpendicular direction to 0.5 m. Figure 12 shows the overall height increase of the surface profiles in the road parallel direction.

Figure 13 shows the volumes of the slices which were computed as described in Figure 6. Because of some low water drain elements on the road side, there are values that are below 0, which means that if the road is widened by 4 m, some of the volume should be moved to those locations to match the road surface height. The arrow points to a location that has a greater surface height resulting in a slice with a larger volume.

Figure 14 shows the cumulative volume in the road parallel direction. At the left side of the road, 542.22 m3 needs to be excavated compared to 462.35 m3 on the right side of the road. Because the studied road is not straight, the road parallel direction distances for the two road sides differ. The right side is 137.2 m long, whereas the left side is 124.1 m long. At the location indicated by the arrow in both Figures 13 and 14, there is one slice with a particularly large volume which causes a steep rise in the cumulative volume. As shown in the cumulative volume estimation results, the material volume that needs to be excavated on the left side is 8% greater than on the right side. When applied in real applications, such results may help engineers to optimize road widening design to minimize the time and costs of the project.

4. Quality Discussion and Validation

There is no data available from other sensors that can be used to verify the computed results. Instead the quality of the results is analyzed by considering the quality of the input data in combination with an analysis of how this quality propagates into the final volume computations. In addition, the excavation volumes were determined from a second LMMS dataset, acquired in a second run by the same system on the same day. Moreover, a possible measurement plan for further validation of the results is sketched.

4.1. Discussion on the Quality of the Results

Since the total volume is computed by summing up slices, the squared total error equals the sum of the squared errors in the determination of each sliced volume. The random error in the computation of a sliced volume consists of a variance component caused by random measurement errors in the original point cloud, denoted σpts, and a variance component corresponding to the surface roughness, denoted σr. Using the law of error propagation, the relationship between these errors is given by Equation (9).
(9) σTotal2 = ∑i=1k σi2 = ∑i=1k (σi,pts2 + σi,r2)
where σTotal is the total error of the volume computation, σi is the random error in the estimation of the volume of the i-th slice, σi,pts and σi,r denote the point cloud measurement error and the roughness of slice i respectively, and k is the number of slice volumes, here equal to 132.

Thus, the error in the volume of a single slice is studied first. According to the specifications of the Lynx LMMS and previous error studies [34,35], the range precision and range accuracy are 8 mm and ±10 mm, respectively.

As can be seen in Figure 15, such a single slice is divided into 1-m by 0.5-m blocks in the road parallel and the road perpendicular direction, respectively. In each block, the mean and standard deviation of the point heights in that block are computed. A slice from the north side slope of the road was randomly selected to compute the mean and standard deviation of the points in each block of the slice. A side view of both the original and the downsampled point cloud is shown in Figure 16.

The resulting standard deviation (st.dev) values for the eight blocks that together form the slice depicted in Figure 15 are given in Table 1. The average number of points per block is reduced from 488 to 36. This table also clearly demonstrates the purpose of the downsampling strategy: close to the road, point density is very high and therefore the reduction in the number of points is high as well. Further from the road, the point density drops and a much larger fraction of the original points is maintained. On top of that, the geometry of the terrain with respect to the lasers on the car has a strong influence on the point density.

To summarize the results from Table 1, we determine the differences between the means per block from the original data and the reduced data. The mean of the absolute differences equals 0.18 m. Further validation is needed to verify which means are actually better: the means from the original data are computed based on more points, but some parts of the surface may also be overrepresented in the original point cloud due to local variations in scanning geometry induced by local relief variations. For both the original and the reduced blocks, the st.dev values are comparable, between 0.5 and 0.6 m. These st.dev values are larger than the absolute differences between full and reduced data, and also much larger than the quality of the individual points. Therefore, it is concluded that these values are dominated by surface relief which is also clear from Figure 16.

Assuming a st.dev value per block of 0.55 m, the st.dev per slice equals 1.56 m. Assuming 132 slices, this results in a st.dev for the total volume on one road side of 17.9 m. This st.dev value corresponds to an error below 4%, when compared to a value of 500 m3 of total excavation volume. As the current error is dominated by surface relief, a reduction in the error could be obtained by decreasing the block size.
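Under the stated assumptions (independent block errors of 0.55, eight blocks per slice, 132 slices, and a total volume of roughly 500 m³), the propagation of Equation (9) can be reproduced numerically:

```python
import math

# Error propagation of Section 4.1: independent block errors add in
# quadrature (Eq. (9)). The numbers below are taken from the text.
sigma_block = 0.55          # st.dev per block
blocks_per_slice = 8
n_slices = 132

sigma_slice = math.sqrt(blocks_per_slice) * sigma_block   # approx. 1.56
sigma_total = math.sqrt(n_slices) * sigma_slice           # approx. 17.9

relative_error = sigma_total / 500.0   # compared to ~500 m3 total volume
```

This reproduces the quoted figures and confirms the relative error stays below 4%.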

4.2. Validation Using Data from a Second Run

To validate the results shown in Section 3, the same method is applied in this section to a second dataset obtained with the same LMMS on the same day. The differences in outcome are compared to the error budget derived in Section 4.1.

4.2.1. Description of the Point Cloud Obtained in the Second Run

For the second run, the same system was used but the position of the car on the road was different, as will be shown below. As for the first dataset, the data of the second run consists of a georeferenced point cloud and of a dataset giving the trajectory of the LMMS car during data acquisition. The point cloud of the second run cropped to the same piece of road consists of 6,374,830 points, and has a point density of ∼2,000 points per square meter. A side view of the second run point cloud is shown in Figure 17.

4.2.2. Computation Results

Following the same methodology as described in Section 2, the data of the second run was processed, and the excavation volumes for both road sides were computed. The results are shown in Figure 18.

A comparison of the results from both datasets is given in Table 2. The results show that the differences in excavation volume for the two sides of the road are within the error budget derived in Section 4.1, which was determined as 4% of the total excavation volume.

4.2.3. Comparison Analysis

As can be seen from Table 2, there are some differences between the excavation volumes computed from the original point cloud data and those computed from the second-run data. Recall that volumes are determined from 1 m slices that are further divided into eight blocks, compare Figure 15. To gain insight into the differences between the outcomes of the first and the second run, Figure 19 shows the height difference, in meters, per block of 1.0 m in the road parallel direction and 0.5 m in the road perpendicular direction. The dotted red line is the extracted center line of the studied road. The purple and chocolate lines in Figure 19 depict the trajectories of the LMMS while collecting the original and the second-run point cloud data, respectively.

As shown in Figure 19, most of the blocks have approximately the same height, which demonstrates that the two datasets are consistent. Only the purple circles indicate locations where local height differences in the order of 1–2 m occur. Examining the two point clouds in detail indicates that at those locations hardly any points were sampled in one of the two runs. This local under sampling is probably caused by limited visibility of the roadside from the location of the LMMS acquisition.

This effect is illustrated in Figure 20, which shows a schematized cross section of the roadside geometry. From trajectory 1, the purple area is invisible to the LMMS and is therefore not sampled. However, the area can be scanned from trajectory 2, and vice versa. A good solution would be to combine the data from both runs such that the two point clouds supplement each other in such situations.


4.3. Proposal for Further Validation

There are several options to further validate the results of the methodology proposed in this paper in a field experiment. A general idea is to locally use other, preferably superior, measuring methods to sample the geometry of a piece of the road and roadside considered, and to repeat the computations with these superior data. A traditional method would be to use a total station or RTK-GPS to measure some profiles of 3D road surface points in a local georeferenced datum, import the obtained data into modeling software such as AutoCAD or 3ds Max, construct a local road model and compute the volume. This method should give accurate results but is labor intensive. A totally different approach would be to perform measurements directly before a planned road extension. In this way, the real volume of the excavated material can be measured and compared to the results of the analysis of the corresponding LMMS data.

5. Conclusions

In this paper, a method is proposed for estimating the excavation volume of a planned road widening from a Laser Mobile Mapping point cloud. Starting with an LMMS point cloud sampling a mountainous road, we used uniform-size voxels to downsample the point cloud data and remove outliers. Then, local normals and 2D slopes were estimated at each resulting grid point to separate road from off-road points. Finally, the volume to be excavated for widening the road by 4 m on both sides was computed. It was shown on LMMS data representing a mountain road in Spain that the volume to be excavated on the left side differs by 8% from that on the right side. A more detailed analysis of one slice of data indicates that the error in the estimated excavation volume is below 4%. The results were partly validated by a comparison to the results from analyzing a second point cloud obtained by the same system on the same day, but from a different trajectory. The excavation volumes estimated from the two datasets differed by 2.5%–3.5%.

A further step would be to use the proposed method to determine the division of a total widening of, e.g., 4 m into x meters on the right and (4 − x) meters on the left, with 0 m ≤ x ≤ 4 m, that minimizes the moved volume over a stretch of, say, 100 m of road. Further research is also needed to determine an optimal block size: in this paper, blocks of size 0.5 m by 1 m are used; reducing the block size will decrease the effect of surface relief on the error, but will increase the effect of measurement noise and varying point densities.

The authors would like to thank the three anonymous reviewers for their comments in improving the manuscript. Also, the authors gratefully acknowledge financial support from the China Scholarship Council. The authors would also like to thank the support from project p10-TIC-6114 JUNTA ANDALUCIA. This paper is partly supported by IQmulus (FP7-ICT-2011-318787), a project aiming at a High-volume Fusion and Analysis Platform for Geospatial Point Clouds, Coverages and Volumetric Data Sets.

Figure 3.

Normal estimation at a query point using a local plane fitting approach.

Figure 4.

Illustration of local neighbourhood feature descriptor.

Figure 5.

Computation of excavation volume.

Figure 6.

Slice volume computational geometry.

Figure 7.

Approximate location of the studied road.

Figure 8.

Original point cloud dataset of the study area. (a) Original laser mobile mapping systems (LMMS) point cloud data in 3D view; (b) Original LMMS point cloud data in 3D side view; (c) Study area in Google Street View; (d) Data collection with LMMS.