This application claims the benefit of Taiwan application Serial No. 108148098, filed Dec. 27, 2019, the disclosure of which is incorporated by reference herein in its entirety.
The disclosure relates in general to a method, a system, and a computing device for reconstructing three-dimensional planes.
Reconstructing the three-dimensional planes of a scene with a mobile device is useful for architectural design, interior design, space layout, and similar applications. In these applications, only the planar structures are required in the reconstructed 3D model.
However, existing approaches cannot automatically exclude non-planar objects or moving objects. Many approaches are limited to static scenes and cannot reconstruct a correct 3D scene model when walking people or moving objects are present.
The 3D scene model reconstructed by existing methods usually contains a huge point cloud that occupies a large amount of storage. Most vertices in the point cloud are unnecessary for planar structures, because each plane requires only three or four vertices.
Therefore, a new method is developed for scanning and reconstructing three-dimensional planar structures in a dynamic scene, solving the issues mentioned above and improving efficiency and quality.
According to one embodiment, a method for reconstructing three-dimensional planes is provided. The method includes the following steps: obtaining a series of color information, depth information and pose information of a dynamic scene by a sensing device; extracting a plurality of feature points according to the color information and the depth information, and marking part of the feature points as non-planar objects, including dynamic objects and fragmentary objects; computing a point cloud according to the unmarked feature points and the pose information, and instantly converting the point cloud to a three-dimensional mesh; and growing the three-dimensional mesh to fill the vacancy corresponding to the non-planar objects according to information of the three-dimensional mesh surrounding or adjacent to the non-planar objects.
According to another embodiment, a system for reconstructing three-dimensional planes is provided. The system includes a sensing device and a computing device. The sensing device is configured to obtain a series of color information, depth information and pose information of a dynamic scene. The computing device includes a feature point extraction unit, a non-planar objects marking unit, a mesh computing unit and a mesh filling unit. The feature point extraction unit is configured to extract a plurality of feature points according to the color information and the depth information. The non-planar objects marking unit is configured to mark part of the feature points as non-planar objects, including dynamic objects and fragmentary objects. The mesh computing unit is configured to compute a point cloud according to the unmarked feature points and the pose information, and to instantly convert the point cloud to a three-dimensional mesh. The mesh filling unit is configured to grow the three-dimensional mesh to fill the vacancy corresponding to the non-planar objects according to information of the three-dimensional mesh surrounding or adjacent to the non-planar objects.
According to an alternate embodiment, a computing device is provided. The computing device includes a feature point extraction unit, a non-planar objects marking unit, a mesh computing unit and a mesh filling unit. The feature point extraction unit is configured to extract a plurality of feature points according to color information and depth information. The non-planar objects marking unit is configured to mark part of the feature points as non-planar objects, including dynamic objects and fragmentary objects. The mesh computing unit is configured to compute a point cloud according to the unmarked feature points and pose information, and to instantly convert the point cloud to a three-dimensional mesh. The mesh filling unit is configured to grow the three-dimensional mesh to fill the vacancy corresponding to the non-planar objects according to information of the three-dimensional mesh surrounding or adjacent to the non-planar objects.
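By way of illustration only, the following Python/NumPy sketch shows how the "compute a point cloud" operation named in the above embodiments might back-project unmarked feature points into world coordinates using the depth information and the pose information. The pinhole camera model, the intrinsic parameter values, and the function name backproject_to_world are assumptions introduced for this sketch and are not part of the disclosure.

import numpy as np

def backproject_to_world(feature_uv, depth, pose, fx, fy, cx, cy):
    """Back-project pixel feature points (u, v) with depth into world coordinates.

    feature_uv : (N, 2) integer pixel coordinates of unmarked feature points
    depth      : (H, W) depth map in meters
    pose       : (4, 4) camera-to-world transform for this frame
    """
    u, v = feature_uv[:, 0], feature_uv[:, 1]
    z = depth[v, u]                                          # depth at each feature point
    valid = z > 0                                            # drop points with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                                    # pinhole back-projection
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # homogeneous camera coordinates
    pts_world = (pose @ pts_cam.T).T[:, :3]                  # transform into the world frame
    return pts_world

# Tiny synthetic example: a flat wall 2 m in front of an identity-pose camera.
depth = np.full((480, 640), 2.0)
feats = np.array([[100, 120], [320, 240], [500, 400]])
cloud = backproject_to_world(feats, depth, np.eye(4), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud)

The resulting world-frame points would then be accumulated into the point cloud that the mesh computing unit converts to the three-dimensional mesh.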
The above and other aspects of the invention will become better understood with regard to the following detailed description of the preferred but non-limiting embodiment(s). The following description is made with reference to the accompanying drawings.
Referring to
As indicated in
In step S102, a plurality of feature points are extracted by the feature point extraction unit 210 according to the color information S1 and the depth information S2, and part of the feature points are marked as non-planar objects (including dynamic objects and fragmentary objects) by the non-planar objects marking unit 220. In the present step, the non-planar objects marking unit 220 checks the series of color information and depth information against the corresponding feature points, and then marks and deletes the feature points of the frames determined to be dynamic objects or fragmentary objects. Examples of dynamic objects include people at work or moving vehicles; examples of fragmentary objects include scattered stationery items. The procedures for marking dynamic objects and fragmentary objects are disclosed below with detailed flowcharts.
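As a rough illustration of step S102, the sketch below detects feature points on a color frame, attaches the corresponding depth values, and deletes points that caller-supplied tests mark as dynamic or fragmentary. The use of OpenCV's ORB detector and the names extract_and_filter_features, is_dynamic, and is_fragmentary are assumptions of this sketch; the disclosure does not prescribe a particular feature detector, and the actual marking procedures are those described in the flowcharts that follow.

import cv2
import numpy as np

def extract_and_filter_features(color, depth, is_dynamic, is_fragmentary):
    """Detect feature points on the color frame, attach depth, and drop those
    that the supplied tests mark as dynamic or fragmentary objects."""
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)          # assumed detector; any detector could be used
    keypoints = orb.detect(gray, None)
    kept = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = float(depth[v, u])
        if z <= 0:
            continue                               # no depth reading at this pixel
        if is_dynamic(u, v) or is_fragmentary(u, v):
            continue                               # marked feature points are deleted
        kept.append((u, v, z))
    return np.array(kept)

# Usage with synthetic data and trivial marking rules (left half treated as "dynamic").
color = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 1.5, dtype=np.float32)
pts = extract_and_filter_features(color, depth,
                                  is_dynamic=lambda u, v: u < 320,
                                  is_fragmentary=lambda u, v: False)
print(len(pts), "feature points kept")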
Referring to
Referring to
In step S1022, the feature points F1 whose confidence degrees S4 are smaller than a threshold are marked as dynamic objects and are deleted by the dynamic objects marker 222, and the feature points O1 of the non-dynamic objects are retained. Then, whether the non-dynamic objects are fragmentary objects is checked.
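A minimal sketch of this thresholding is given below, assuming the confidence degrees S4 have already been accumulated for each feature point (for example, from how consistently a point reappears across recent frames; this is an assumption of the sketch rather than a statement of the disclosed computation). The function name mark_dynamic and the 0.5 threshold are likewise illustrative.

import numpy as np

def mark_dynamic(points, confidence, threshold=0.5):
    """Split feature points by confidence degree.

    points     : (N, 3) feature points F1
    confidence : (N,) confidence degree S4 per point (assumed precomputed)
    Returns (static_points, dynamic_points); low-confidence points are treated
    as dynamic objects and excluded from further mesh computation.
    """
    dynamic_mask = confidence < threshold
    return points[~dynamic_mask], points[dynamic_mask]

# Example: five points, two of which were observed inconsistently across frames.
pts = np.random.rand(5, 3)
conf = np.array([0.9, 0.2, 0.8, 0.1, 0.7])
static_pts, dynamic_pts = mark_dynamic(pts, conf, threshold=0.5)
print(len(static_pts), "static,", len(dynamic_pts), "dynamic")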
Referring to
In step S1024, the feature points F1 corresponding to an object whose three-dimensional size S5 is smaller than a size threshold are marked as a fragmentary object and are deleted by the fragmentary object marker 224. After steps S1022 and S1024 are performed, the feature points O2 of non-dynamic and non-fragmentary objects are retained, and the point cloud and the mesh are then calculated.
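The following sketch illustrates the size test, under the assumption that the non-dynamic feature points have already been grouped into objects by some clustering or segmentation step (the grouping method itself is not specified here). The label array, the bounding-box-diagonal measure of the three-dimensional size S5, and the 0.3-meter threshold are illustrative assumptions only.

import numpy as np

def mark_fragmentary(points, labels, size_threshold=0.3):
    """Drop feature points belonging to objects whose 3-D extent is small.

    points         : (N, 3) world-frame feature points of non-dynamic objects
    labels         : (N,) object label per point (assumed to come from any
                     clustering or segmentation step)
    size_threshold : objects whose bounding-box diagonal is below this value
                     (in meters) are treated as fragmentary and removed.
    """
    keep = np.ones(len(points), dtype=bool)
    for lbl in np.unique(labels):
        mask = labels == lbl
        extent = points[mask].max(axis=0) - points[mask].min(axis=0)
        if np.linalg.norm(extent) < size_threshold:   # small object -> fragmentary
            keep[mask] = False
    return points[keep]

# Example: a large wall segment (label 0) and a small desk object (label 1).
pts = np.vstack([np.random.rand(50, 3) * [3.0, 2.5, 0.05],        # roughly 3 m x 2.5 m wall
                 0.1 * np.random.rand(10, 3) + [1.0, 1.0, 0.5]])   # roughly 10 cm clutter
lbls = np.array([0] * 50 + [1] * 10)
print(len(mark_fragmentary(pts, lbls)), "points kept")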
The feature points F1 marked as dynamic objects or fragmentary objects will be excluded, and will not be used to construct or modify the three-dimensional mesh. Refer to the step S103 of
Refer to
In step S1032 as indicated in
In step S1033 as indicated in
Details of the step of S104 of
Thus, during the computing process, there is no need to store a large volume of point-cloud data, and the construction of the three-dimensional planar structure requires only a small amount of memory and processor resources. As disclosed above, the method, the system, and the computing device for reconstructing three-dimensional planes of the disclosure obtain three-dimensional planar structures by eliminating dynamic objects and fragmentary objects through the analysis of the color information, the depth information, and the feature points. Moreover, after a local point cloud is obtained, it is instantly converted to a three-dimensional mesh, so that the memory required for storing the point cloud is reduced. Meanwhile, since the three-dimensional mesh is continuously updated according to the newly generated point cloud, processing efficiency is increased.
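To make the vacancy-filling idea of step S104 more concrete, the sketch below fits a plane to the mesh vertices that border a hole left by a removed non-planar object and projects candidate fill points onto that plane. The least-squares plane fit via SVD and the function name fill_vacancy_with_plane are assumptions for illustration; the disclosure requires only that the mesh be grown according to information of the three-dimensional mesh surrounding or adjacent to the non-planar objects.

import numpy as np

def fill_vacancy_with_plane(boundary_vertices, hole_samples):
    """Grow a planar patch over a vacancy using only the surrounding mesh.

    boundary_vertices : (N, 3) mesh vertices bordering the hole left after a
                        non-planar object was removed
    hole_samples      : (M, 3) rough positions inside the hole to be filled
    Returns the hole samples projected onto the best-fit plane of the boundary,
    i.e. new vertices lying on the surrounding planar structure.
    """
    centroid = boundary_vertices.mean(axis=0)
    # Plane normal = direction of least variance of the boundary vertices (via SVD).
    _, _, vh = np.linalg.svd(boundary_vertices - centroid)
    normal = vh[-1]
    # Project each sample onto the plane through the centroid with that normal.
    offsets = (hole_samples - centroid) @ normal
    return hole_samples - np.outer(offsets, normal)

# Example: boundary vertices on the z = 0 plane; noisy samples are snapped back onto it.
boundary = np.random.rand(40, 3) * [2.0, 2.0, 0.0]
samples = np.random.rand(5, 3) * [1.0, 1.0, 0.2] + [0.5, 0.5, -0.1]
print(fill_vacancy_with_plane(boundary, samples))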
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.