🐼 3D scene reconstruction with Kinect®

Acquisition of a three-dimensional model through Microsoft Kinect®

A project developed in collaboration with Eng. Marco Gazzoli (LinkedIn).

Figure: a 3D acquisition and reconstruction of my bedroom.

The goal of the project is the construction of a three-dimensional model of an environment from the data acquired by the Microsoft Kinect® sensor. The algorithm was implemented with the open-source Point Cloud Library, PCL (link).
The final result is the reconstruction of a scene (typically indoor), represented as a cloud of points in 3D space, colored according to the image returned by the RGB camera.
The ultimate goal, the complete reconstruction of the scene, is achieved by aligning portions of the scene obtained through progressive scans, since each Kinect® scan captures only a small area of the real scene.
The basic steps to align successive scans are the following:
  • composition of a 3D point cloud from the acquired RGB image and depth image; each acquisition captures only a portion of the scene;
  • filtering of the point cloud to remove outliers due to large acquisition errors;
  • downsampling, to reduce the number of points on which the following computations operate;
  • smoothing, to reduce the measurement error of the sensor;
  • estimation of the normal at each point;
  • extraction of descriptive keypoints through a 3D implementation of SIFT features;
  • description of the extracted keypoints;
  • matching of keypoints from different acquisitions, in order to determine a rough initial roto-translation matrix between the scan under analysis and the last aligned scan;
  • refinement of the roto-translation matrix by comparing the entire point clouds rather than only the keypoints;
  • application of the resulting matrix to build the cloud of the complete scene.
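The first step, composing a point cloud from the depth image, amounts to back-projecting each pixel with the pinhole camera model. A minimal Python/NumPy sketch is below (the project itself used PCL in C++); the intrinsic parameters are illustrative defaults commonly cited for the Kinect, not values from the project.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# Illustrative intrinsics for a 640x480 Kinect depth frame (assumed values).
FX = FY = 525.0
CX, CY = 319.5, 239.5
```

Pairing each back-projected point with the RGB pixel at the same (calibrated) image coordinates yields the colored cloud described above.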
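The downsampling step corresponds to PCL's voxel-grid filter: points falling in the same cubic cell are replaced by their centroid. A self-contained NumPy sketch of that idea (not the project's actual PCL code):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same voxel by their centroid,
    mimicking the behavior of a voxel-grid filter."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)       # accumulate points per voxel
    return sums / counts[:, None]          # centroid of each voxel
```

The voxel size trades accuracy for speed: larger cells mean fewer points for the later normal estimation and alignment stages.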
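Normal estimation at each point is classically done by PCA on the point's local neighborhood: the surface normal is the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue. A minimal sketch of that computation for a single neighborhood (the neighborhood search itself, e.g. via a k-d tree, is omitted):

```python
import numpy as np

def estimate_normal(neighbors):
    """PCA normal of a local surface patch: the eigenvector of the
    covariance matrix associated with the smallest eigenvalue."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    return eigvecs[:, 0]                    # direction of least variance
```

The sign of the normal is ambiguous; in practice it is flipped to point toward the sensor viewpoint.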
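The last three steps, estimating a rough roto-translation from matched keypoints and then refining it on the full clouds, can be sketched as follows. The closed-form solve is the standard Kabsch/SVD algorithm, and the refinement loop below is a point-to-point ICP iteration under the assumption that this matches the project's approach; the brute-force nearest-neighbor search is for illustration only (a k-d tree would be used in practice).

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t ~ dst_i for paired points
    (Kabsch algorithm via SVD): the form of the initial alignment
    computed from matched keypoints."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp_refine(src, dst, iters=20):
    """Refine an alignment on the entire clouds: repeatedly match each
    source point to its nearest destination point and re-solve R, t."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbors; fine for small illustrative clouds.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Applying the final (R, t) to every point of the new scan, as in the last step of the list, merges it into the cloud of the complete scene.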

The project aimed to replicate Kinect Fusion® (link) and its open-source version KinFu (link). The project presentation is available here (link).

References:

[1] R. A. Newcombe et al., "KinectFusion: Real-Time Dense Surface Mapping and Tracking," IEEE ISMAR, 2011. http://dx.doi.org/10.1109/ISMAR.2011.6092378