An Effective Camera-to-Lidar Spatiotemporal Calibration Based on a Simple Calibration Target
| Main Authors | |
|---|---|
| Format | Online Article, Text |
| Language | English |
| Published | MDPI, 2022 |
| Subjects | |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9329985/ https://www.ncbi.nlm.nih.gov/pubmed/35898082 http://dx.doi.org/10.3390/s22155576 |
| Summary | In this contribution, we present a simple and intuitive approach for estimating the exterior (geometrical) calibration of a Lidar instrument with respect to a camera as well as their synchronization offset (temporal calibration) during data acquisition. For the geometrical calibration, the 3D rigid transformation of the camera system was estimated with respect to the Lidar frame on the basis of established 2D-to-3D point correspondences. The 2D points were automatically extracted from the images by exploiting an AprilTag fiducial marker, while the detection of the corresponding Lidar points was carried out by estimating the center of a custom-made retroreflective target. Both the AprilTag and the Lidar reflective target were attached to a planar board (calibration object) following an easy-to-implement set-up, which yielded high accuracy in the determination of the center of the calibration target. After the geometrical calibration procedure, the temporal calibration was carried out by matching the position of the AprilTag to the corresponding Lidar target (after projection onto the image frame) during the recording of a steadily moving calibration target. Our calibration framework was released as open-source software implemented on the ROS platform. We applied our method to the calibration of a four-camera mobile mapping system (MMS) with respect to an integrated Velodyne Lidar sensor and evaluated it against a state-of-the-art chessboard-based method. Although our method was a single-camera-to-Lidar calibration approach, the consecutive calibration of all four cameras with respect to the Lidar sensor yielded highly accurate results, which were exploited in a multi-camera texturing scheme for city point clouds. |
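The geometric part of the procedure described in the summary reduces to a perspective-n-point (PnP) problem: given AprilTag centers detected in the image and the corresponding reflective-target centers expressed in the Lidar frame, estimate the rigid camera-from-Lidar transform. The sketch below illustrates this step with OpenCV's `solvePnP` on synthetic correspondences; the point values, intrinsics, pose, and noise level are placeholders for illustration and are not the paper's data or its ROS implementation.

```python
import numpy as np
import cv2

# Synthetic stand-ins for the real measurements: in the paper's setup the 3D
# points would be reflective-target centers in the Lidar frame and the 2D
# points the AprilTag centers detected in the images.
rng = np.random.default_rng(0)
pts_lidar = rng.uniform([-2.0, -1.0, 4.0], [2.0, 1.0, 8.0], size=(12, 3))

# Assumed ground-truth camera-from-Lidar pose and intrinsics, used here only
# to fabricate the 2D observations so the sketch is self-contained.
rvec_gt = np.array([0.05, -0.10, 0.02])
tvec_gt = np.array([0.10, -0.05, 0.30])
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

pts_img, _ = cv2.projectPoints(pts_lidar, rvec_gt, tvec_gt, K, dist)
pts_img = pts_img.reshape(-1, 2) + rng.normal(0.0, 0.3, size=(12, 2))

# Estimate the rigid transform that maps Lidar-frame points into the camera
# frame so that they reproject onto the observed pixel positions (PnP).
ok, rvec, tvec = cv2.solvePnP(pts_lidar, pts_img, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3] = R
T_cam_lidar[:3, 3] = tvec.ravel()
print("camera-from-Lidar transform:\n", T_cam_lidar)
```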
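For the temporal part, the summary describes matching the AprilTag position to the projected Lidar target while the board moves steadily. A minimal way to realize that idea, assuming timestamped pixel tracks from both sensors are already available, is a grid search over candidate time offsets that minimizes the mean pixel distance between the two tracks; the trajectory model, sampling rates, and 40 ms offset below are illustrative assumptions only.

```python
import numpy as np

def track(t):
    # Assumed smooth pixel trajectory of the moving calibration target.
    return np.stack([640.0 + 120.0 * np.sin(0.5 * t),
                     360.0 + 60.0 * np.cos(0.5 * t)], axis=1)

def mean_track_distance(t_cam, uv_cam, t_lidar, uv_lidar, dt):
    # Mean pixel distance between the camera track and the projected Lidar
    # track after shifting the Lidar timestamps by dt (linear interpolation).
    u = np.interp(t_cam, t_lidar + dt, uv_lidar[:, 0])
    v = np.interp(t_cam, t_lidar + dt, uv_lidar[:, 1])
    return float(np.mean(np.hypot(uv_cam[:, 0] - u, uv_cam[:, 1] - v)))

# Synthetic timestamped observations; the true offset is taken to be +40 ms.
true_offset = 0.04
t_cam = np.arange(0.0, 10.0, 1.0 / 30.0)        # camera frames, 30 Hz
t_lidar = np.arange(0.0, 10.0, 1.0 / 10.0)      # Lidar sweeps, 10 Hz
uv_cam = track(t_cam)
uv_lidar = track(t_lidar + true_offset)         # Lidar stamps lag the motion

# Grid search over candidate offsets; the minimizer is the temporal calibration.
offsets = np.arange(-0.2, 0.2, 0.001)
errors = [mean_track_distance(t_cam, uv_cam, t_lidar, uv_lidar, dt)
          for dt in offsets]
print("estimated time offset [s]:", offsets[int(np.argmin(errors))])
```

In practice the search could be refined with sub-millisecond interpolation around the minimum, but a coarse grid is usually sufficient to expose a constant camera-to-Lidar timestamp offset.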