Reconstruction of Indoor Environments Using LiDAR and IMU

Authors: Vetle Smedbakken Sillerud and Fredrik Kristoffer Johanssen

Link to the thesis: Download

Today there is a trend towards reconstructing 3D scenes that change over time, in both image-based and point cloud-based reconstruction systems. The main challenge in point cloud-based systems is the lack of data: most existing data sets are generated from 3D-reconstructed meshes, but the density of these reconstructions is unrealistic.

Point cloud from a fixed LiDAR scan (left) and from a LiDAR sweep (right)

In order to do proper research in this field, it must be possible to generate real data sets of high-density point clouds. To address this challenge, we have been supplied with a VLP-16 laser scanner and a Tinkerforge IMU Brick 2.0. In our final setup, the IMU is positioned at the top center of the VLP-16 using a 3D-printed mounting plate, and the assembly is fastened to a tripod so that it can be moved about well-defined axes.

Because most laser scanners acquire points sequentially, these devices do not have the same notion of a frame as cameras, where all data are captured at the same instant. To handle this, we divide one scan, i.e., a 360° LiDAR sweep, into data packets and transform each packet into global space using its associated pose. We compensate for the mismatch in sampling frequency between the VLP-16 and the IMU by linear interpolation between the acquired orientations.

We generate subsets of the environment by changing the laser scanner orientation at static positions, and we estimate the translation between static positions using point-to-plane ICP. The registration of these subsets is also done with point-to-plane ICP.

We conclude that, at the subset level, our reconstruction system can produce high-density point clouds of indoor environments with a precision that is mostly limited by the inherent uncertainties of the VLP-16. We also conclude that the registration of several subsets obtained from different positions preserves both the visual appearance and the reflective intensity of objects in the scene. Our reconstruction system can thus be used to generate real data sets of high-density point clouds.
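
To make the per-packet transformation concrete, the sketch below shows one way this step could look in code: the IMU orientation is linearly interpolated (and renormalized) at a packet timestamp, and the packet is then rotated and translated into global space. This is only an illustrative sketch, not the thesis implementation; the quaternion layout (w, x, y, z), the timestamp arrays, and the `packet_to_global` helper are hypothetical, and the translation is assumed to come from the ICP-based position estimate described above.

```python
import numpy as np

def interpolate_orientation(t_query, t_imu, quats_imu):
    """Linearly interpolate IMU quaternions (rows of w, x, y, z, assumed
    sign-consistent) at a packet timestamp and renormalize the result."""
    q = np.array([np.interp(t_query, t_imu, quats_imu[:, i]) for i in range(4)])
    return q / np.linalg.norm(q)

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def packet_to_global(points_local, t_packet, t_imu, quats_imu, translation):
    """Rotate one LiDAR data packet (N x 3, sensor frame) into global space
    using the interpolated orientation, then apply the position estimate."""
    R = quat_to_matrix(interpolate_orientation(t_packet, t_imu, quats_imu))
    return points_local @ R.T + translation
```

Applying this to every packet of a 360° sweep and concatenating the results yields one subset expressed in a common global frame.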
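
The subset registration can be sketched in a similar spirit with an off-the-shelf point-to-plane ICP. The example below uses Open3D purely for illustration; the thesis does not state which ICP implementation was used, and the file paths, normal-estimation radius, and correspondence distance are assumptions.

```python
import numpy as np
import open3d as o3d

def register_subsets(source_path, target_path, init=np.eye(4)):
    """Estimate the rigid transform aligning one subset onto another
    with point-to-plane ICP (illustrative Open3D version)."""
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)

    # Point-to-plane ICP needs surface normals on the target cloud.
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

    result = o3d.pipelines.registration.registration_icp(
        source, target,
        0.05,   # assumed maximum correspondence distance in metres
        init,   # initial guess, e.g. from the known tripod positions
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 transform mapping source into target
```

Because ICP converges only locally, a rough initial guess for the transform between the static positions (the `init` argument) helps the point-to-plane alignment settle in the correct minimum.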