Bottom Line:
To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.

ABSTRACT:
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms that rely on a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera within the framework of a graph structure-based SLAM. The 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
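The hybrid depth-assignment step described above can be sketched as follows. Under the vertical flat-wall assumption, the laser range measured at the column of an image feature supplies that feature's depth, and back-projecting the pixel through the inverse intrinsic matrix yields its 3D coordinates. This is a minimal illustration, not the paper's implementation; the intrinsic values and the function name are hypothetical.

```python
import numpy as np

# Hypothetical camera intrinsics (the paper's Table 2 values are not reproduced here).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
K_C = np.array([[fx, 0.0, cx],
                [0.0, fy, cy],
                [0.0, 0.0, 1.0]])

def feature_point_3d(u, v, laser_depth):
    """Back-project pixel (u, v) using the laser depth measured at the
    same image column, valid under the vertical flat-wall assumption."""
    ray = np.linalg.inv(K_C) @ np.array([u, v, 1.0])  # ray with unit Z
    return laser_depth * ray  # 3D point in the camera frame (Z forward)

# A feature at pixel (400, 180) whose column's laser return reads 3 m:
p = feature_point_3d(400.0, 180.0, 3.0)  # → [0.4, -0.3, 3.0]
```

Features that violate the flat-wall assumption receive inconsistent depths and are rejected later as outliers during feature matching, which is why the assumption can be relaxed in practice.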

f8-sensors-15-15830: Camera's intrinsic calibration results. The left side of the figure shows the 3D geometric reconstruction of the checkerboard from the camera calibration, and the right side shows the checkerboard images used for calibration.

Mentions:
Calibration was performed for the hybrid camera and laser scanner setup. To obtain the camera's intrinsic parameters, intrinsic calibration was conducted using Zhang's method [36,37], implemented with the open-source Camera Calibration Toolbox [38]. A total of 38 checkerboard images were used to perform the intrinsic parameter calibration. The checkerboard pattern consists of 13 vertical by 10 horizontal lines, with a grid size of 41.3 mm × 41.3 mm. Figure 8 shows the intrinsic calibration results: the right side of Figure 8 is the image dataset used for calibration, and the left side shows the 3D geometric reconstruction of the checkerboard obtained using the intrinsic parameters from the camera calibration. The camera's intrinsic parameters are listed in Table 2 and are used to construct K_C in Equation (7).
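The calibration target geometry above can be made concrete. Zhang's method matches detected image corners against the board's known planar 3D coordinates; a 13 × 10 line checkerboard has 12 × 9 inner corners, each spaced 41.3 mm apart. The following sketch builds that object-point set (the array layout is an assumption of this illustration, not the toolbox's internal representation).

```python
import numpy as np

# A checkerboard with 13 x 10 lines has 12 x 9 inner corners,
# which are the points detected in each of the 38 calibration images.
cols, rows = 12, 9
square_mm = 41.3

# Planar 3D coordinates of the corners in the board frame (Z = 0).
grid = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2).astype(float) * square_mm
object_points = np.hstack([grid, np.zeros((grid.shape[0], 1))])

print(object_points.shape)  # (108, 3): 108 corners per view
```

Each view contributes one such set of 108 correspondences; with 38 views, Zhang's method has ample constraints to estimate the intrinsic parameters that populate K_C.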
