Tag Archives: LIDAR

Point Cloud Based Room Reconstruction

Download PDF on request.

Pipeline for Point Cloud Processing focused on Room Reconstruction

Scanning and modeling interior rooms and spaces are important topics in computer vision. The main idea is to capture and analyze the geometry of these interior environments.

Despite the considerable amount of work put into indoor reconstruction over the past years, all implementations suffer from various limitations, and we have yet to find a method that works in every scenario. In our approach we focus on encoding points to reduce the space a point cloud takes on disk, and on introducing a 3D meshing technique that adds coherence and readability to the point cloud. This is why we suggest the following approach.

When scanning a closed room, the main features are the walls, floor and ceiling, so it would be interesting to detect (and compress) them. These surfaces bound the room and are likely to contain a high density of points.

It is therefore relevant to detect the main planar components in order to determine the boundaries of the point cloud. In a second step we can replace them with a more simplistic modelization, such as a surface defined by only 4 vertices and 4 edges, potentially replacing thousands of points. This also allows us to filter out points scanned through windows, which lie outside the room.
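The wall-to-quad replacement can be sketched as follows. This is a minimal illustration, not our full pipeline: it fits a plane to a set of points by SVD and returns the bounding rectangle of their in-plane projection as the 4 replacement vertices (function names are illustrative).

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point set via SVD.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def plane_to_quad(points):
    """Replace a roughly planar patch (e.g. a wall) with 4 corner
    vertices: project the points onto the fitted plane and take the
    bounding rectangle in plane coordinates."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    u, v = vt[0], vt[1]                                # in-plane axes
    local = (points - centroid) @ np.stack([u, v]).T   # (N, 2) coords
    lo, hi = local.min(axis=0), local.max(axis=0)
    corners2d = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                          [hi[0], hi[1]], [lo[0], hi[1]]])
    # Lift the 2D rectangle corners back into 3D.
    return centroid + corners2d @ np.stack([u, v])     # (4, 3)
```

In practice the planar patches would first be isolated by segmentation; a robust fit (e.g. RANSAC) would replace the plain SVD when outliers are present.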

We can encode this 3D mesh box as a graph, since the point cloud might be incomplete and suffer from occlusion. A graph-based architecture makes a strong starting point for developing other features. To build it, we need to detect all relevant planar components in our point cloud; this time we initiate the segmentation with a region-growing method in order to avoid the issue shown in Figure 7, ensuring that each plane resulting from the plane segmentation is linked to exactly one main planar component fed into the graph approach. Our graph structure is kept to the most simplistic entities so that it can be applied to a wide variety of scenarios: faces are the planes, edges are intersections of two planes, and corners/vertices are intersections of three planes.
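As a toy illustration of that face/edge/corner structure (the dictionary layout and helper name are illustrative, not our actual data model), three planes in Hessian normal form n·x = d intersect in a corner that can be recovered with a 3×3 linear solve:

```python
import numpy as np

def corner_from_planes(planes):
    """Intersect three planes, each given as (unit normal n, offset d)
    with n . x = d. Returns the corner point, or None if the normals
    are (near-)degenerate, e.g. two parallel walls."""
    N = np.array([n for n, _ in planes])
    d = np.array([d for _, d in planes])
    if abs(np.linalg.det(N)) < 1e-8:
        return None
    return np.linalg.solve(N, d)

# Hypothetical minimal room graph: faces are plane ids, edges link two
# faces, corners link three faces.
room = {"faces": {}, "edges": [], "corners": []}

# Axis-aligned example: floor z = 0 and two walls x = 0, y = 0.
floor = (np.array([0.0, 0.0, 1.0]), 0.0)
wall_x = (np.array([1.0, 0.0, 0.0]), 0.0)
wall_y = (np.array([0.0, 1.0, 0.0]), 0.0)
for i, p in enumerate([floor, wall_x, wall_y]):
    room["faces"][i] = p
room["edges"] += [(0, 1), (0, 2), (1, 2)]   # pairwise intersections
corner = corner_from_planes([floor, wall_x, wall_y])
room["corners"].append(((0, 1, 2), corner))  # triple intersection
```

Because the graph stores only plane parameters and adjacency, missing points due to occlusion do not prevent corners from being recovered: they come from the plane equations, not from scanned points.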

After detecting the walls and replacing them with simple faces, we add back the features lost by the plane approximation using height maps: textures that model the height of the wall at each coordinate (X, Y). This operation opens up the domain of image processing, and we are able to generate both high-resolution and low-resolution versions of the room.
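A minimal sketch of such a height-map rasterization, assuming the wall's plane frame (an origin on the plane, orthonormal in-plane axes u, v, and the normal) has already been estimated by the plane fit; all names are illustrative:

```python
import numpy as np

def wall_height_map(points, origin, u, v, normal, res=64):
    """Rasterize a wall's geometric detail into a height-map texture.
    'height' is the signed distance of each point along the plane
    normal; each texture cell averages the heights of the points
    falling into it."""
    local = (points - origin) @ np.stack([u, v]).T   # (N, 2) in-plane coords
    height = (points - origin) @ normal              # signed offset from plane
    lo, hi = local.min(axis=0), local.max(axis=0)
    # Map in-plane coordinates to integer pixel indices.
    ij = ((local - lo) / (hi - lo + 1e-12) * (res - 1)).astype(int)
    hmap = np.zeros((res, res))
    count = np.zeros((res, res))
    np.add.at(hmap, (ij[:, 0], ij[:, 1]), height)    # accumulate heights
    np.add.at(count, (ij[:, 0], ij[:, 1]), 1)        # points per cell
    filled = count > 0
    hmap[filled] /= count[filled]                    # average per cell
    return hmap
```

Once the detail lives in a 2D texture, standard image-processing tools (filtering, down-sampling) yield the high- and low-resolution variants mentioned above.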

Reconstruction of Indoor Environments Using LiDAR and IMU

Today there is a trend towards reconstruction of 3D scenes with movement over time, in both image-based and point-cloud-based reconstruction systems. The main challenge in point-cloud-based systems is the lack of data: most existing data sets are made from 3D-reconstructed meshes, but the density of these reconstructions is unrealistic.

Point cloud from a fixed LIDAR scan (left) to a LIDAR sweep (right)

In order to do proper research in this field, it must be possible to generate real data sets of high-density point clouds. To deal with this challenge, we have been supplied with a VLP-16 laser scanner and a Tinkerforge IMU Brick 2.0. In our final setup, we position the IMU at the top center of the VLP-16 using a 3D-printed mounting plate. This assembly is fastened to a tripod in order to move it about well-defined axes.

Because most laser scanners acquire points sequentially, these devices do not have the same concept of a frame as images, where all data are captured at the same instant. To deal with this issue we divide one scan, i.e., a 360° LiDAR sweep, into data packets and transform each packet into global space using its associated pose. We compensate for the mismatch in sampling frequency between the VLP-16 and the IMU by linear interpolation between the acquired orientations. We generate subsets of the environment by changing the laser scanner orientation in static positions and estimate the translation between static positions using point-to-plane ICP. The registration of these subsets is also done using point-to-plane ICP.

We conclude that at subset level, our reconstruction system can reconstruct high-density point clouds of indoor environments with a precision that is mostly limited by the inherent uncertainties of the VLP-16. We also conclude that the registration of several subsets obtained from different positions preserves both the visual appearance and the reflective intensity of objects in the scene. Our reconstruction system can thus be utilized to generate real data sets of high-density point clouds.
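The per-packet transform can be illustrated as follows. This is one concrete reading of the interpolation step, using normalized linear interpolation (nlerp) between quaternion orientations bracketing the packet timestamp; the helper names and the (timestamp, quaternion) list format are illustrative, not the actual driver code.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def interp_orientation(t, t0, q0, t1, q1):
    """Normalized linear interpolation (nlerp) between two IMU
    orientations bracketing a packet timestamp t."""
    a = (t - t0) / (t1 - t0)
    if np.dot(q0, q1) < 0:          # take the shorter arc
        q1 = -q1
    q = (1 - a) * q0 + a * q1
    return q / np.linalg.norm(q)

def packet_to_global(points, t, imu):
    """Rotate one LiDAR data packet into global space using the
    orientation interpolated from the two nearest IMU samples.
    'imu' is a time-sorted list of (timestamp, quaternion) pairs."""
    times = [s[0] for s in imu]
    i = np.searchsorted(times, t) - 1          # bracketing sample pair
    q = interp_orientation(t, imu[i][0], imu[i][1],
                           imu[i + 1][0], imu[i + 1][1])
    return points @ quat_to_rot(q).T
```

For the small inter-sample rotations produced by a 100+ Hz IMU, nlerp is a close approximation to slerp; the per-position translation estimated by point-to-plane ICP would then be added on top of the rotated points.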