1. Introduction
As the most important sensor in the robot perception system, LiDAR plays an indispensable role in object detection, localization, and mapping in the field of unmanned driving. According to its working principle, LiDAR can be divided into conventional LiDAR, solid-state LiDAR, and hybrid solid-state LiDAR. Among them, conventional LiDAR, with the longest development time, is also the most widely used.
However, conventional LiDAR is limited by its rotary-scanning working principle, so the point cloud compensation problem within one sweep must be considered. For example, Figure 1 shows the positions of a vehicle carrying a LiDAR at three moments. Assuming that all three moments fall within one LiDAR scanning cycle, for obstacle
[figure(s) omitted; refer to PDF]
Furthermore, if objects in the scene move during scanning, the point cloud of the moving object will be distorted, as shown in Figure 2. In this case, while the LiDAR is moving, the object being measured is also moving, which increases the measurement deviation. Most LiDAR downstream tasks choose to ignore this problem. For example, the well-known LOAM [1] framework calibrates only the distortion caused by the LiDAR's own movement. However, we find that the negative impact of the distortion caused by the movement of objects in the scene cannot be ignored in the various applications of LiDAR.
[figure(s) omitted; refer to PDF]
Based on the above problems, we propose a LiDAR motion compensation method. Through a new feature point selection method, the planar entropy and linear entropy of the points are first calculated, and then Soft-NMS-Select is used to obtain the feature description of the current sweep. Next, prior information about the LiDAR's own motion is obtained by two linear interpolations to improve the dedistortion effect. At the same time, the coordinates and normal vectors of the center points of the point cloud clusters are calculated by Euclidean clustering, the movement posture of objects in the scene is estimated, and the point clouds of moving objects are compensated as well, so that the scene information can be described more accurately.
In addition, we select two laser SLAM algorithms, A-LOAM and LEGO-LOAM, as benchmarks and compare the absolute pose error (APE) to quantitatively demonstrate the effectiveness of the proposed method. The rest of this paper is structured as follows: Section 2 introduces the methods commonly used to calculate motion pose and the general motion compensation methods of laser SLAM. Section 3 introduces the proposed motion compensation algorithm in detail. Section 4 provides experimental results in a real scene. Section 5 discusses the limitations and deficiencies of this method. Finally, we summarize this article.
2. Related Works
When solving for motion pose, it is often necessary to match the point clouds of adjacent frames. As one of the mainstream registration algorithms, the ICP algorithm [2, 3] iteratively optimizes the transformation between the source point cloud and the target point cloud by minimizing an error measure between the two. Building on ICP matching, other scholars further proposed point-to-line ICP [4], point-to-plane ICP [5], and generalized ICP [6], making great contributions to the accuracy of point cloud registration. As registration accuracy improved, the ICP algorithm was used in LiDAR odometry to reduce the accumulated error of continuous registration by combining a closed-loop mechanism with the pose construction process [7]; improving ICP by downsampling and rejecting poor point cloud matches noticeably reduces the drift of LiDAR odometry. Although the ICP algorithm can compute point cloud transformations very accurately, it struggles to meet the real-time requirements of automatic driving. Therefore, to improve its efficiency, some works employ parallel computing [8–11]. When ICP is used for scan matching, serious motion distortion occurs if the scanning motion is relatively slow. Therefore, researchers use laser intensity returns to create visual images and match visually distinct features between images [12] to recover the motion of ground vehicles [13–16]. However, these methods require dense point cloud images. In [17–19], sparse point cloud images can be used for motion recovery, and point clouds can be matched against the geometric structure of local point clusters [19].
After interframe matching is completed, LiDAR motion compensation is required. Mainstream laser SLAM algorithms [20–22] unify the point cloud within a sweep to one timestamp, and this unified time point is often the start time of the scan. Then, taking the result of the last interframe laser odometry as the motion between the current two frames and assuming that the current frame also moves at a uniform speed, the position and pose of each point relative to the starting time can be estimated. That is, the motion of
3. Methodology
3.1. Selection of Point Cloud Feature Points
The LiDAR used in this paper is a Velodyne VLP-16, and a single sweep contains more than 30,000 laser points. Using all points for point cloud registration would incur a large computing overhead. Therefore, we borrow the idea of LOAM and select key points in the scene to participate in the subsequent registration. In addition, before computing the feature description of the point cloud in a sweep, we do not apply any downsampling filter as preprocessing, because we found that filtering reduces the subsequent compensation effect for moving objects.
Before calculating the feature description of the point cloud, PCA is performed on each point's local neighborhood, and the eigenvalues of its covariance matrix are calculated and arranged in descending order as follows:
It can be seen from the above formula that the flatness depends on the size of the third principal component in the overall composition. Approximately, as the value of the third principal component changes from large to small, the value of the flatness characteristic
This indicates that part point clouds with a larger
[figure(s) omitted; refer to PDF]
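Since the exact formulas are elided in the extraction above, the following minimal Python sketch shows the standard eigenvalue-based flatness and linearity descriptors assumed here; the entropy form is likewise an assumption, and the paper's exact expressions may differ:

```python
import numpy as np

def pca_eigenvalues(neighborhood):
    """Eigenvalues of the neighborhood covariance matrix, sorted descending
    (lambda1 >= lambda2 >= lambda3), as required by the formula above."""
    pts = np.asarray(neighborhood, float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def flatness(eigvals):
    # share of the third principal component in the total variance:
    # a small lambda3 indicates a locally planar neighborhood
    l1, l2, l3 = eigvals
    return l3 / (l1 + l2 + l3)

def linearity(eigvals):
    # dominance of the first principal component: close to 1 for a line
    l1, l2, l3 = eigvals
    return (l1 - l2) / l1

def neighborhood_entropy(eigvals):
    # Shannon entropy of the normalized eigenvalue spectrum (assumed form)
    p = np.asarray(eigvals, float)
    p = p[p > 0] / p[p > 0].sum()
    return float(-(p * np.log(p)).sum())
```

A planar patch yields flatness near zero, and a linear structure yields linearity near one, matching the qualitative behavior described in the text.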
In addition, we find that introducing entropy into the local feature description improves the consistency of key point selection between adjacent frames. Because entropy represents the uncertainty of information, selecting points with larger entropy values screens out more significant neighborhood features, which is specifically expressed as
According to the above analysis, points with significant features are characterized by low flatness or linearity and high entropy. Therefore, by introducing flatness and linearity into the above equation, the flatness entropy and linearity entropy can be obtained as follows:
Therefore, the feature point selection problem can be described as
As shown in Figure 4, key points in areas 2 and 3 do not cover a large number of scene features, so the accuracy of subsequent registration is reduced. In this paper, a Soft-NMS-Select method is proposed to solve this problem. Firstly, a neighborhood radius
[figure(s) omitted; refer to PDF]
Therefore, this paper uses a soft method to suppress nonmaxima to achieve uniform selection. The pseudo-code of Soft-NMS-Select is shown in Algorithm 1.
Algorithm 1: Soft-NMS-Select.
Input: key points set (C), radius
Output: new key points set (
Create
C = C.sorted(max
while C.size > 0 or
    if  do
        C.remove(
    else
        if  do
            C.remove(
        else
            C.remove(
        end if
    end if
end while
return key points set (
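The suppression loop of Algorithm 1 can be sketched in Python as follows. Because the exact conditions in the pseudocode are elided, the Gaussian score decay, the score threshold, and all parameter defaults here are illustrative assumptions:

```python
import numpy as np

def soft_nms_select(points, scores, radius, score_thresh=0.05, max_keep=200, sigma=0.5):
    """Sketch of Soft-NMS-Select: repeatedly keep the best-scoring key point,
    then decay (rather than delete) the scores of its neighbours within
    `radius`, so the kept key points spread evenly over the scene."""
    pts = np.asarray(points, float)
    s = np.asarray(scores, float).copy()
    alive = np.ones(len(pts), dtype=bool)
    keep = []
    while alive.any() and len(keep) < max_keep:
        idx = np.flatnonzero(alive)
        i = idx[np.argmax(s[idx])]
        if s[i] < score_thresh:          # remaining candidates too suppressed
            break
        keep.append(int(i))
        alive[i] = False
        idx = np.flatnonzero(alive)
        d = np.linalg.norm(pts[idx] - pts[i], axis=1)
        # close neighbours are penalised heavily, distant ones barely at all
        s[idx] *= 1.0 - np.exp(-(d / radius) ** 2 / sigma)
    return keep
```

Unlike hard non-maximum suppression, a near-duplicate neighbor is not discarded outright; its score is merely reduced, so it can still be selected later if no better candidate covers its region.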
Using different radii
[figure(s) omitted; refer to PDF]
3.2. Dedistortion Based on LiDAR Odometry
After obtaining the key points of the point cloud, odometry information needs to be calculated from the registration result, so as to obtain the LiDAR movement posture at each moment. The key points at moment
[figure(s) omitted; refer to PDF]
It can be described as follows:
Then, singular matrix
The optimal solution can be obtained through continuous iteration
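The closed-form step referred to above (building a cross-covariance matrix from the matched key points, taking its SVD, and recovering the optimal rotation and translation) is the standard Kabsch/Umeyama solution run inside each iteration; a minimal sketch:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form (R, t) minimising sum ||R p_i + t - q_i||^2 by SVD of the
    cross-covariance matrix of the two matched point sets."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Iterating this solve after re-associating nearest correspondences yields the registration result from which the odometry is read off.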
Generally, an important premise of using laser odometry to remove LiDAR distortion is the uniform-motion model assumption: the LiDAR is assumed to move uniformly between adjacent sweeps, covering equal motion in equal time, using
The above formula can be used to calibrate the point cloud at the intermediate time to the coordinate system at
As for the rotating attitude, classical Euler angle
Let
The point cloud effect after the two interpolations is shown in Figure 7. Compared with a single linear interpolation, the calibration effect when the LiDAR carrier accelerates or decelerates is improved, because a first-order difference is used to estimate the rate of change.
[figure(s) omitted; refer to PDF]
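The interpolation scheme described above (linear interpolation of the translation, spherical interpolation of the rotation) can be sketched as follows; the quaternion convention (w, x, y, z) and the function names are our assumptions:

```python
import numpy as np

def slerp(q0, q1, s):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly parallel: plain lerp is stable
        q = q0 + s * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)

def deskew_point(p, s, t01, q01):
    """Transform a point scanned at fraction s of the sweep (s in [0, 1]) into
    the sweep-start frame, assuming the inter-sweep motion (t01, q01) is
    traversed at constant velocity."""
    t = s * np.asarray(t01, float)               # linear interpolation of translation
    w, x, y, z = slerp([1.0, 0, 0, 0], q01, s)   # partial rotation via slerp
    R = np.array([                               # quaternion -> rotation matrix
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ np.asarray(p, float) + t
```

Quaternion slerp avoids the gimbal-lock and non-uniform-rate problems of interpolating Euler angles directly.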
3.3. Moving Object Compensation Based on Euclidean Clustering
Another innovation of this method is to calculate the movement posture of objects scanned by the LiDAR, so as to realize a rough compensation. Therefore, after the key point set of the scene is extracted, the point cloud in the current sweep needs to be clustered. Due to the randomness of the scene, the number of categories cannot be predicted in advance, so this paper selects Euclidean clustering to process the point cloud. Euclidean clustering is a clustering algorithm based on Euclidean distance; that is, the Euclidean distance between points is calculated to determine whether some points belong to the same class, and the distance threshold is set to
Algorithm 2: Euclidean clusters and extraction center.
Input: Point cloud (C) = {
Output: Point set (P) = {...
create P
create K
for c in C and c not in (P) do
    find k nearest neighbour
    for k in K do
        if  and c not in (P) then
            P.append(k)
        end if
    end for
    calculate M =
end for
return set(P), center M
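Algorithm 2 can be sketched in Python as a region-growing procedure; the brute-force neighbour search (a KD-tree would be used in practice) and the normal estimate via the smallest covariance eigenvector are illustrative assumptions:

```python
import numpy as np

def euclidean_clusters(points, dist_thresh, min_size=10):
    """Region-growing Euclidean clustering: points closer than dist_thresh,
    directly or through a chain of neighbours, fall into the same cluster."""
    pts = np.asarray(points, float)
    unvisited = set(range(len(pts)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            d = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.flatnonzero(d < dist_thresh):
                if j in unvisited:
                    unvisited.remove(j)
                    cluster.append(int(j))
                    frontier.append(int(j))
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters

def cluster_center_and_normal(pts):
    """Centroid plus the normal, taken as the eigenvector of the smallest
    eigenvalue of the cluster's scatter matrix."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)
    centered = pts - c
    _, vecs = np.linalg.eigh(centered.T @ centered)
    return c, vecs[:, 0]   # eigh sorts ascending: column 0 = smallest
```

Each cluster's center and normal then serve as the per-object state tracked between adjacent sweeps.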
The effect of realizing Euclidean clustering of objects in the scene by using the above methods is shown in Figure 8.
[figure(s) omitted; refer to PDF]
The coordinate transformation of the central point set to its adjacent sweep inner central point set can be described as follows:
The actual center set is
[figure(s) omitted; refer to PDF]
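The per-cluster motion estimate and rough compensation described above can be sketched as follows. The Rodrigues rotation between the two sweeps' normals and the translation-only de-skew are our simplified reading of the elided equations:

```python
import numpy as np

def rotation_between(n0, n1):
    """Smallest rotation taking unit normal n0 onto n1 (Rodrigues formula;
    the antiparallel case is left unhandled in this sketch)."""
    n0, n1 = np.asarray(n0, float), np.asarray(n1, float)
    v = np.cross(n0, n1)
    c = float(np.dot(n0, n1))
    if np.linalg.norm(v) < 1e-12:
        return np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def compensate_cluster(points, stamps, c0, c1):
    """Rough de-skew of a moving cluster: subtract, per point, the fraction of
    the cluster's inter-sweep centre translation (c1 - c0) accrued by its
    scan stamp s in [0, 1]. The rotational part, estimated from the normal
    deviation via rotation_between, would be applied about the centre."""
    t = np.asarray(c1, float) - np.asarray(c0, float)
    s = np.asarray(stamps, float)[:, None]
    return np.asarray(points, float) - s * t
```

This treats each cluster as a rigid body moving at constant velocity within the sweep, mirroring the uniform-motion assumption already made for the LiDAR itself.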
The effect of compensating the moving objects in the scene by using the above methods is shown in Figure 10.
[figure(s) omitted; refer to PDF]
Since this method must solve the motion posture of every clustered point cloud cluster in the scene, which incurs a large computing overhead, the point cloud clusters must be limited for different scenes. Since an unmanned driving scene is taken as the benchmark in this paper, the point cloud cluster is set as
4. Results
In this part, we first use three SLAM algorithms, A-LOAM, LEGO-LOAM, and T-LOAM, as benchmarks for experimental comparison. Then, we apply the point cloud dedistortion method proposed in this paper to the original versions of the above three algorithms and obtain their trajectories to quantitatively analyze the performance differences between the proposed method and the original methods. The experimental site is a campus environment; a PIX unmanned vehicle carrying a VLP-16 and RTK was used to obtain external information, and the algorithms ran on Ubuntu 18.04 (ROS) with an AMD 3950X. All the algorithms are implemented in C++.
In the campus scene experiment, we avoided driving the unmanned vehicle through sparse environments and ensured the presence of a certain number of moving objects (such as pedestrians and bicycles) in the scene. In other words, stable feature points can be obtained for interframe matching, while some moving objects exist and thus generate distorted point clouds.
We show the absolute pose error (APE) of the two benchmark algorithms before and after adding the proposed point cloud compensation stage, as shown in Figure 11.
[figure(s) omitted; refer to PDF]
Because the reference algorithms lack correction for the distortion of moving objects in the scene, the comparison shows that the dedistortion method proposed in this paper can effectively lower the APE of the reference algorithms and improve their positioning accuracy in every direction. Meanwhile, we also compared the motion trajectories of the two benchmark algorithms and the improved algorithms against the RTK reference, as shown in Figure 12. It can be seen intuitively that both benchmark algorithms achieve better coincidence with the real trajectory after using the dedistortion method proposed in this paper.
[figure(s) omitted; refer to PDF]
In addition, the method proposed in this paper can further improve the mapping result. We overlaid the 3D map built by the improved LEGO-LOAM on a satellite map, as shown in Figure 13. It can be seen that with the improved motion compensation algorithm, more accurate 3D scene information is obtained, and the constructed map aligns accurately with the actual map.
[figure(s) omitted; refer to PDF]
Table 1 shows the ablation experiments of the proposed method. Since the motion compensation of the LiDAR itself is loosely coupled with the dedistortion of moving objects' point clouds, we compare the performance of each module alone with that of the whole algorithm to illustrate the effectiveness of each module.
Table 1
Comparison of ablation results.
Method | LiDAR compensation | Motion compensation | Max | Mean | Median | RMSE | SSE | Std
A-LOAM | — | — | 67.8541 | 14.4364 | 10.4834 | 20.7422 | 611797 | 14.8919
A-LOAM | √ | — | 52.6475 | 11.2146 | 8.99951 | 16.1594 | 425757 | 12.2473
A-LOAM | √ | √ | 36.6778 | 8.12781 | 5.76563 | 11.2797 | 180924 | 7.82116
LEGO-LOAM | — | — | 14.8682 | 4.08091 | 2.9907 | 5.8075 | 47959.9 | 4.13198
LEGO-LOAM | √ | — | 12.6464 | 3.25944 | 2.91616 | 4.45671 | 31614.5 | 3.25294
LEGO-LOAM | √ | √ | 12.5331 | 2.93791 | 2.85014 | 4.15562 | 24556.8 | 2.93902
T-LOAM | — | — | 8.1165 | 2.1564 | 2.1168 | 3.9515 | 26546.4 | 2.16445
T-LOAM | √ | — | 6.5941 | 1.9594 | 2.2661 | 3.5941 | 21646.1 | 1.94865
T-LOAM | √ | √ | 5.3911 | 1.9849 | 2.6489 | 3.1646 | 19816.7 | 1.89446
(APE statistics in cm.)
Table 1 shows the overall performance improvement brought by the two modules. Due to the large number of moving objects in the scene, interframe matching is inaccurate, so the original version of A-LOAM performs poorly. With the point cloud compensation method proposed in this paper, the attitude of the LiDAR itself can be estimated more reliably, and the errors caused by the point cloud distortion of moving objects can be calibrated to a certain extent. This reduces the APE and yields a more accurate positioning result.
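For reference, the APE summary statistics reported in Table 1 (max, mean, median, RMSE, SSE, std, the same set produced by common trajectory-evaluation tools such as evo) can be computed from the per-pose error magnitudes as:

```python
import numpy as np

def ape_stats(errors):
    """Summary statistics of absolute pose errors; SSE is the sum of
    squared errors over all evaluated poses."""
    e = np.asarray(errors, float)
    return {
        "max": float(e.max()),
        "mean": float(e.mean()),
        "median": float(np.median(e)),
        "rmse": float(np.sqrt(np.mean(e ** 2))),
        "sse": float(np.sum(e ** 2)),
        "std": float(e.std()),
    }
```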
5. Limitations
In this experiment, the feature points of the scene are used as the basis for interframe matching, the movement posture is obtained by calculating the movement of the Euclidean clustering center points and the deviation of the normal vectors, and then the point clouds of moving objects are dedistorted. In practice, in a sparse scene, most of the feature points selected by the Soft-NMS-Select algorithm may lie on the point cloud clusters of moving objects. In this case, the algorithm performs worse than the benchmark, because point cloud matching depends heavily on the selection of feature points within the scene: if most of the feature points are themselves in motion, the odometry will be extremely inaccurate. In addition, due to the extra computational overhead of Euclidean clustering, the algorithm takes more time in most scenarios. Moreover, not all objects involved in Euclidean clustering are actually in motion, so the algorithm performs some unnecessary computation.
6. Conclusion
In this paper, we propose a LiDAR motion compensation method. We calculate the flatness and linearity of the point cloud, introduce an entropy value to screen the feature points carrying more information, and then optimize the odometry by obtaining prior information of the LiDAR's own motion through quadratic interpolation. Then, Euclidean clustering is performed on the current scanning scene, and cluster center points and normal vectors are calculated to represent moving objects. By eliminating the distortion of moving objects' point clouds, higher quality scanning results can be obtained, and positioning and mapping results can be further improved. We conducted localization and mapping experiments using a VLP-16 in a real campus scene and evaluated our method against benchmark LiDAR SLAM algorithms. The results show that the proposed method can effectively reduce the APE and improve the accuracy of pose estimation.
Acknowledgments
The authors would like to thank the National Natural Science Foundation of China (51974229), the Shaanxi Province Science and Technology Department (2021TD-27), and the National Key Research and Development Program, China (2018YFB1703402), for their support in this research.
[1] J. Zhang, S. Singh, "LOAM: Lidar odometry and mapping in real-time," Robotics: Science and Systems, vol. 2 no. 9, 2014.
[2] Y. Chen, G. Medioni, "Object modelling by registration of multiple range images," Image and Vision Computing, vol. 10 no. 3, pp. 145-155, DOI: 10.1016/0262-8856(92)90066-C, 1992.
[3] Z. Zhang, "Iterative point matching for registration of free-form curves and surfaces," International Journal of Computer Vision, vol. 13 no. 2, pp. 119-152, DOI: 10.1007/BF01427149, 1994.
[4] A. Censi, "An ICP variant using a point-to-line metric," 2008 IEEE International Conference on Robotics and Automation, pp. 19-25, DOI: 10.1109/ROBOT.2008.4543181, .
[5] K. L. Low, "Linear least-squares optimization for point-to-plane ICP surface registration," Chapel Hill, University of North Carolina, vol. 4 no. 10, 2004.
[6] A. Segal, D. Haehnel, S. Thrun, "Generalized-ICP," Robotics: science and systems, vol. 2 no. 4, 2009.
[7] D. Kovalenko, M. Korobkin, A. Minin, "Sensor aware lidar odometry," 2019 European Conference on Mobile Robots (ECMR),DOI: 10.1109/ECMR.2019.8870929, .
[8] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, "Kinectfusion: real-time dense surface mapping and tracking," 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp. 127-136, DOI: 10.1109/ISMAR.2011.6092378, .
[9] A. Nüchter, Parallelization of Scan Matching for Robotic 3D Mapping, 2007.
[10] D. Qiu, S. May, A. Nüchter, "GPU-accelerated nearest neighbor search for 3D registration," International conference on computer vision systems, pp. 194-203, DOI: 10.1007/978-3-642-04667-4_20, .
[11] D. Neumann, F. Lugauer, S. Bauer, J. Wasza, J. Hornegger, "Real-time RGB-D mapping and 3-D modeling on the GPU using the random ball cover data structure," 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 1161-1167, DOI: 10.1109/ICCVW.2011.6130381, .
[12] H. Bay, T. Tuytelaars, L. Van Gool, "Surf: speeded up robust features," European conference on computer vision, pp. 404-417, DOI: 10.1007/11744023_32, .
[13] H. Dong, T. D. Barfoot, "Lighting-invariant visual odometry using lidar intensity imagery and pose interpolation," Field and Service Robotics, pp. 327-342, DOI: 10.1007/978-3-642-40686-7_22, 2014.
[14] S. Anderson, T. D. Barfoot, "RANSAC for motion-distorted 3D visual sensors," 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2093-2099, DOI: 10.1109/IROS.2013.6696649, .
[15] C. H. Tong, T. D. Barfoot, "Gaussian process Gauss-Newton for 3D laser-based visual odometry," 2013 IEEE International Conference on Robotics and Automation, pp. 5204-5211, DOI: 10.1109/ICRA.2013.6631321, .
[16] S. Anderson, T. D. Barfoot, "Towards relative continuous-time SLAM," 2013 IEEE International Conference on Robotics and Automation, pp. 1033-1040, DOI: 10.1109/ICRA.2013.6630700, .
[17] R. Zlot, M. Bosse, "Efficient large-scale 3D mobile mapping and surface reconstruction of an underground mine," Field and Service Robotics, pp. 479-493, DOI: 10.1007/978-3-642-40686-7_32, 2014.
[18] M. Bosse, R. Zlot, P. Flick, "Zebedee: design of a spring-mounted 3-d range sensor with application to mobile mapping," IEEE Transactions on Robotics, vol. 28 no. 5, pp. 1104-1119, DOI: 10.1109/TRO.2012.2200990, 2012.
[19] M. Bosse, R. Zlot, "Continuous 3D scan-matching with a spinning 2D laser," 2009 IEEE International Conference on Robotics and Automation, pp. 4312-4319, DOI: 10.1109/ROBOT.2009.5152851, .
[20] T. Shan, B. Englot, "Lego-loam: lightweight and ground-optimized lidar odometry and mapping on variable terrain," 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758-4765, DOI: 10.1109/IROS.2018.8594299, .
[21] P. Zhou, X. Guo, X. Pei, C. Chen, "T-loam: truncated least squares lidar-only odometry and mapping in real time," IEEE Transactions on Geoscience and Remote Sensing, vol. 99, 2021.
[22] S. W. Chen, G. V. Nardari, E. S. Lee, C. Qu, V. Kumar, "Sloam: semantic lidar odometry and mapping for forest inventory," IEEE Robotics and Automation Letters, vol. 5 no. 2,DOI: 10.1109/LRA.2020.3002203, 2020.
Copyright © 2022 Peilin Qin et al. This work is licensed under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
Abstract
LiDAR plays a pivotal role in the field of unmanned driving, but in actual use, it is often accompanied by errors caused by point cloud distortion, which affect the accuracy of various downstream tasks. In this paper, we first describe the features of the point cloud and propose a new feature point selection method, Soft-NMS-Select; this method obtains a uniform feature point distribution and effectively improves the result of subsequent point cloud registration. Then, point cloud registration is completed using the screened feature points, and the odometry information is obtained. For the motion distortion generated within a sweep, prior information about the LiDAR's own motion is obtained by two linear interpolations, thereby improving the effect of motion compensation. Finally, for the distortion caused by the motion of objects in the scene, Euclidean clustering is used to obtain the positions and normal vectors of the center points of the point cloud clusters, and the motion pose of each object is calculated from the offset between adjacent sweeps so that its distortion can be eliminated. Essentially, our method is a general point cloud compensation method applicable to all uses of LiDAR. We insert this method into three SLAM algorithms to illustrate its effectiveness. The experimental results show that this method can significantly reduce the APE of the original SLAM algorithms and improve the mapping result.