The present disclosure belongs to the technical field of cartography, and in particular, to an indoor structure segmentation method based on a laser measurement point cloud.
With the rapid development of economic activity, the number of buildings has increased substantially, and human indoor activity has grown accordingly. In addition, building interiors have gradually become larger and structurally more complex.
Therefore, the need to understand and interpret indoor environments is also increasing, and indoor scene understanding and recognition has attracted extensive attention. Indoor structure extraction and segmentation, as a basic component of indoor scene understanding, is of great significance for indoor environment cognition and understanding.
It is time-consuming and laborious to segment and recognize indoor structures manually with traditional measuring means. With the development of laser sensor technology, segmentation and recognition of indoor structures based on laser point clouds offers high speed and high precision, and has become a research hotspot of indoor scene understanding and recognition. However, indoor structure segmentation based on indoor three-dimensional point clouds still faces several problems: large missing wall regions caused by mutual occlusion of indoor facilities, noise and measurement errors caused by strongly reflective surfaces such as indoor windows and glass, and over-segmentation of surfaces caused by the difficulty of fitting surface structures. These problems greatly complicate the automatic segmentation and recognition of indoor structures.
In order to solve the above technical problems, the present disclosure provides a technology to realize automatic segmentation of indoor structures by taking an indoor three-dimensional point cloud containing noise and occlusion as an input, and provides an effective data basis for subsequent indoor scene understanding and three-dimensional modeling.
A technical solution adopted by the present disclosure is an indoor structure segmentation method based on a laser measurement point cloud, including the following steps:
Preferably, a method for classifying the supervoxels in step 1.2 is as follows:
Preferably, a method for calculating the distance from the supervoxel to the model and normalizing the distance in step 5.1 is as follows:
represents a shortest distance from the surface point pk to the plane model mi ∈ MS; ||pk − pt|·sin θ − r| represents the distance from pk to the surface model mi ∈ MC, regarded as the difference between the minimum distance from the point to the central axis of the cylindrical model and the radius of the cylinder; γth represents the distance threshold and is influenced by the precision of a three-dimensional point cloud acquisition device: if the acquisition device is based on fixed-station scanning, γth is valued within 0.5-1 cm; and if the acquisition device is a mobile measuring device, γth is valued within 2-5 cm;
Preferably, a method for calculating the distance from the surface point to the model and normalizing the distance in step 5.2 is as follows:
represents a shortest distance from the surface point pi to the plane model mi ∈ MS; ||pi − pt|·sin θ − r| represents the distance from pi to the surface model mi ∈ MC, regarded as the difference between the minimum distance from the point to the central axis of the cylindrical model and the radius of the cylinder; γth represents the distance threshold and is influenced by the precision of the three-dimensional point cloud acquisition device: if the acquisition device is based on fixed-station scanning, γth is valued within 0.5-1 cm; and if the acquisition device is a mobile measuring device, γth is valued within 2-5 cm; and
Preferably, a process of searching for a matrix x* in step 5.3 is as follows:
Compared with the prior art, the present disclosure has the following beneficial effects: a simple and effective indoor structure extraction and segmentation technique is provided, which ensures stable segmentation of indoor surface structures. Existing point cloud segmentation methods focus on planar structures rather than surface structures. Although the RANSAC method can achieve surface segmentation through regular model fitting, it is affected by random-sampling uncertainty and cannot effectively counteract noise and outliers in the data, so it readily produces false surfaces. To ensure sampling consistency within the same structure, the method fits models with multi-resolution supervoxels as the unit, avoiding false models. Based on the framework of "pre-segmentation, separate surface fitting, and model matching", indoor structure segmentation and extraction can be realized quickly and efficiently. The method is also suitable for indoor positioning and indoor mapping.
In order to facilitate the understanding and implementation of the present disclosure by those of ordinary skill in the art, the present disclosure is further described in detail in combination with the accompanying drawings and embodiments. It should be understood that the embodiments described here are only used to illustrate and explain the present disclosure, not to limit the present disclosure.
Indoor scene understanding from point clouds has long been a research focus, and indoor structure segmentation and extraction has always been its prerequisite and foundation. In this context, the present disclosure provides an indoor structure segmentation method based on a laser measurement point cloud, which transforms indoor structure segmentation into supervoxel segmentation and matching through supervoxel pre-segmentation, thereby refining the problem. The method of the present disclosure includes the following steps.
Step 1, An indoor three-dimensional point cloud is input, and the point cloud is pre-segmented based on multi-resolution supervoxels to extract plane supervoxels. The step includes the following sub-steps.
Step 1.1, Supervoxel segmentation of the point cloud is performed using a TBBPS [1] supervoxel segmentation method based on a given initial resolution r to generate a supervoxel set ci ∈C. Each segmented point cloud cluster represents a supervoxel.
Step 1.2. For each supervoxel ci ∈C, a central point pc of the supervoxel is calculated using Σj=1npj/n, and a covariance matrix C3×3 of a point set about the central point pc is solved according to coordinates of the central point. Three eigenvalues λ1C, λ2C and λ3C of C3×3 are calculated, where λ1C≤λ2C≤λ3C. If the eigenvalues of ci satisfy Formula 1 and Formula 2, the supervoxel ci is stored in the plane supervoxel set PC. Otherwise, the supervoxel ci is stored in the surface supervoxel set NPC:
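The eigenvalue test of step 1.2 can be sketched as follows. Formulas 1 and 2 are not reproduced in this excerpt, so the planarity criterion below (the smallest eigenvalue being small relative to the eigenvalue sum) and the threshold eps are stand-in assumptions, not the disclosure's exact formulas:

```python
import numpy as np

def is_planar_supervoxel(points, eps=0.01):
    """Classify a supervoxel as planar from the eigenvalues of its covariance.

    points: (n, 3) array of the supervoxel's points.
    eps: assumed threshold standing in for Formulas 1 and 2.
    """
    pc = points.mean(axis=0)                            # central point pc
    C = np.cov((points - pc).T)                         # covariance C3x3 about pc
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(C))         # l1 <= l2 <= l3
    # Low "surface variation": nearly all spread lies in the plane spanned
    # by the two largest eigenvectors.
    return l1 / max(l1 + l2 + l3, 1e-12) < eps
```

A planar patch yields l1 ≈ 0, so the ratio is near zero; a volumetric cluster yields three comparable eigenvalues and a ratio near 1/3.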
Step 1.3, The current resolution r is modified to r·rdio. For the points in the NPC, the TBBPS [1] supervoxel segmentation method is applied again at the current resolution r to generate a new supervoxel set ci′ ∈ C′. The NPC is then emptied.
Step 1.4, Step 1.2 is repeated for the new set C′.
Step 1.5, If the NPC is not empty after execution, steps 1.3 and 1.4 are repeated. Iteration is repeated until the NPC is empty or the current resolution r is less than a given resolution threshold rmin to obtain the plane supervoxel set PC.
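The multi-resolution loop of steps 1.1-1.5 can be sketched as follows. The TBBPS segmentation itself is not reproduced here, so a plain voxel grid stands in for it, the planarity test is the assumed eigenvalue criterion (Formulas 1 and 2 are not in this excerpt), and the resolutions r, rdio and r_min are illustrative values:

```python
import numpy as np

def grid_supervoxels(points, r):
    """Stand-in for TBBPS [1]: cluster points by an r-sized voxel grid.
    (TBBPS additionally refines boundaries; a grid suffices to show the loop.)"""
    keys = np.floor(points / r).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    return [points[inv == k] for k in range(inv.max() + 1)]

def planar(sv, eps=0.01):
    """Assumed form of the Formula 1/2 planarity test."""
    lam = np.sort(np.linalg.eigvalsh(np.cov((sv - sv.mean(axis=0)).T)))
    return lam[0] / max(lam.sum(), 1e-12) < eps

def multires_presegmentation(points, r=0.5, rdio=0.5, r_min=0.05):
    """Steps 1.1-1.5: classify supervoxels, re-segment the non-planar
    remainder at finer resolutions until NPC is empty or r < r_min."""
    PC = []
    NPC = grid_supervoxels(points, r)                   # step 1.1
    while True:
        kept = []
        for sv in NPC:                                  # steps 1.2 / 1.4
            if len(sv) < 4:                             # too small to classify
                continue
            (PC if planar(sv) else kept).append(sv)
        NPC = kept
        r *= rdio                                       # step 1.3
        if not NPC or r < r_min:                        # step 1.5 stop rule
            break
        NPC = [s for sv in NPC for s in grid_supervoxels(sv, r)]
    return PC, NPC
```

On planar input the loop terminates after the first pass with an empty NPC; on curved input it refines until the resolution floor is reached.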
Step 2, A surface supervoxel interior point set is extracted based on the TBBPS segmentation method. The step includes the following sub-steps.
Step 2.1, Supervoxel segmentation of the point cloud is performed using the TBBPS [1] segmentation method based on a given resolution r to generate a supervoxel set of the point cloud ci ∈C. Each segmented point cloud cluster represents a supervoxel.
Step 2.2, For each supervoxel ci ∈C, a central point pc of the supervoxel is calculated using Σj=1npj/n, and a covariance matrix C3×3 of a point set about the central point pc is solved according to coordinates of the central point. Three eigenvalues λ1C, λ2C and λ3C of C3×3 are calculated. If the eigenvalues of ci do not satisfy Formula 1 or Formula 2, the supervoxel ci is stored in the surface supervoxel set NPC. The point set in each surface supervoxel ci ∈NPC is stored in the point set NP.
Step 3, A plane model is extracted by taking the plane supervoxel pci ∈PC as a unit. The step includes the following sub-steps.
Step 3.1, In the PC, the curvature Curve of each pci ∈ PC is calculated as shown in Formula 3, and the curvatures Curve are sorted in ascending order:
Step 3.2, A supervoxel pcs ∈PC with a minimum curvature is selected, a central point and normal vector of pcs are calculated, and a plane model ηs is established. The plane model is shown in Formula 4:
Step 3.3, The other supervoxels pci ∈ PC are traversed. If the included angle between the normal vectors of pci and ηs is less than an angle threshold θth, and the distance from pci to ηs along the normal vector of ηs is less than a distance threshold γth, pci is classified into ηs.
Step 3.4, If the number of supervoxels in ηs is greater than a number threshold Nmax, ηs is retained. Otherwise, ηs is deleted, the supervoxel pcs′ ∈ PC with the second minimum curvature is found, a plane model ηs′ is established, and steps 3.3 and 3.4 are repeated.
Step 3.5, For unclassified supervoxels in the PC, steps 3.2 to 3.4 are run iteratively until all the supervoxels are classified.
Step 3.6, The model ηmax with the largest number of supervoxels is retained and stored in a plane model set MS, and the remaining models are deleted.
Step 3.7, Steps 3.2 to 3.6 are repeated until all of the supervoxels in the PC are classified or no new model is added to the plane model set MS.
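Steps 3.1-3.7 amount to curvature-ordered seeded region growing over supervoxels. The sketch below assumes Curve = λ1C/(λ1C + λ2C + λ3C) for Formula 3 (not reproduced in this excerpt), uses illustrative thresholds, and simplifies the handling of undersized models relative to step 3.4:

```python
import numpy as np

def sv_features(sv):
    """Centroid, unit normal (eigenvector of the smallest eigenvalue) and
    curvature lam1/(lam1+lam2+lam3) (assumed form of Formula 3)."""
    c = sv.mean(axis=0)
    lam, vec = np.linalg.eigh(np.cov((sv - c).T))       # lam ascending
    return c, vec[:, 0], lam[0] / max(lam.sum(), 1e-12)

def grow_planes(supervoxels, angle_th=np.deg2rad(10.0), dist_th=0.02, n_min=2):
    """Steps 3.2-3.7 in sketch form: seed at the minimum-curvature supervoxel,
    absorb supervoxels whose normal angle and plane distance fall within the
    thresholds, keep models with at least n_min members, repeat on the rest."""
    feats = [sv_features(sv) for sv in supervoxels]
    unassigned = set(range(len(supervoxels)))
    models = []                                         # (normal, point, member ids)
    while unassigned:
        seed = min(unassigned, key=lambda i: feats[i][2])        # step 3.2
        pm, nm = feats[seed][0], feats[seed][1]
        members = [i for i in unassigned
                   if np.arccos(min(1.0, abs(feats[i][1] @ nm))) < angle_th
                   and abs((feats[i][0] - pm) @ nm) < dist_th]   # step 3.3
        if len(members) >= n_min:                                # step 3.4
            models.append((nm, pm, members))
            unassigned -= set(members)
        else:
            unassigned.discard(seed)    # simplified: drop a seed that grows no plane
    return models
```

Normals from eigen-decomposition are sign-ambiguous, hence the absolute value in the angle test.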
Step 4, Surface models are fitted with the RANSAC method [2], taking the surface points npi ∈ NP as units and the cylindrical model as the model primitive, and the surface model set MC is saved.
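RANSAC cylinder fitting in full generality also estimates the axis direction; the sketch below simplifies by assuming near-vertical cylinders (a common case for indoor columns), so the fit reduces to a RANSAC circle in the XY projection. The function names, thresholds and the vertical-axis assumption are illustrative, not the disclosure's exact procedure:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumscribed circle of three 2D points; returns (center, radius) or None."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None                                      # collinear sample
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(p1 - center)

def ransac_cylinder(points, dist_th=0.01, iters=200, seed=0):
    """RANSAC fit of a vertical cylinder: sample 3 points, fit a circle to
    their XY projection, keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    xy = points[:, :2]
    best = (None, 0.0, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        i, j, k = rng.choice(len(xy), 3, replace=False)
        fit = circle_from_3pts(xy[i], xy[j], xy[k])
        if fit is None:
            continue
        center, r = fit
        # Inlier test mirrors the step-5 distance: | dist-to-axis - radius |.
        inliers = np.abs(np.linalg.norm(xy - center, axis=1) - r) < dist_th
        if inliers.sum() > best[2].sum():
            best = (center, r, inliers)
    return best
```

For tilted cylinders one would instead sample oriented point pairs to hypothesize the axis before the circle fit, as in normal-based RANSAC primitive fitting.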
Step 5, Based on α-expansion optimization, the supervoxel units and surface point units extracted in steps 1 and 2 are allocated to an optimal model (the surface or plane model established in steps 3 and 4), so as to classify and segment the units, thus realizing extraction and segmentation of the indoor structure. The step includes the following sub-steps.
Step 5.1, The distance dist(pc, mi) from each supervoxel pc ∈ PC to each model is calculated, as shown in Formula 5: it is the sum of the distances from each point pk ∈ pc to the model mi, where the distance from a point to a model is shown in Formula 6:

dist(pc, mi) = Σpk∈pc distance(pk, mi)
represents a shortest distance from the surface point pk to the plane model mi ∈ MS; ||pk − pt|·sin θ − r| represents the distance from pk to the surface model mi ∈ MC, regarded as the difference between the minimum distance from the point to the central axis of the cylindrical model and the radius of the cylinder; γth represents the distance threshold and is influenced by the precision of a three-dimensional point cloud acquisition device: if the acquisition device is based on fixed-station scanning, γth is valued within 0.5-1 cm; and if the acquisition device is a mobile measuring device, γth is valued within 2-5 cm.
After calculation, the distance from the supervoxel to the model dist (pc, mi) is normalized to 0-1. A normalization equation is shown in Formula 7. In the formula, max (dist) and min(dist) respectively represent maximum and minimum distances from the supervoxel to the model.
Step 5.2, The distance distance(pi, mi) from each surface point pi ∈ NP to each model mi is calculated, as shown in Formula 8. The distance from each surface point pi to the plane model mi ∈ MS can be regarded as the shortest distance from the point to the plane, namely
where nm and pm respectively represent the normal vector and central point coordinates of the plane model, and pi represents the coordinates of the surface point. The distance from pi to the surface model mi ∈ MC can be regarded as the difference between the minimum distance from the point to the central axis of the cylindrical model and the radius of the cylinder, namely ||pi − pt|·sin θ − r|, where pt and r respectively represent the central point and radius of the cylindrical model, and θ represents the included angle between the vector from pt to pi and the axial direction vector of the cylinder. Since the fitting accuracy of the plane model is higher than that of the surface model, the distance from the point to the surface model is multiplied by a weight W to balance the fitting error. If the distance from the point to the model is greater than twice the distance threshold γth, the point can be regarded as an outlier of the model, and the distance is therefore set to 2·γth.
represents a shortest distance from the surface point pi to the plane model mi ∈ MS; ||pi − pt|·sin θ − r| represents the distance from pi to the surface model mi ∈ MC, regarded as the difference between the minimum distance from the point to the central axis of the cylindrical model and the radius of the cylinder; γth represents the distance threshold and is influenced by the precision of the three-dimensional point cloud acquisition device: if the acquisition device is based on fixed-station scanning, γth is valued within 0.5-1 cm; and if the acquisition device is a mobile measuring device, γth is valued within 2-5 cm.
After calculation, the distance distance(pi, mi) from each surface point pi ∈ NP to each model mi is normalized to 0-1. The normalization method, shown in Formula 9, takes the same min-max form as Formula 7 in step 5.1.
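The point-to-model distances of steps 5.1-5.2 can be sketched as follows. The cylinder term is computed via the cross product with the unit axis, which equals |p − pt|·sin θ when θ is the angle between p − pt and the axis; the weight value for W and the model encoding are assumptions, since the excerpt does not fix them:

```python
import numpy as np

def point_model_distance(p, model, gamma_th=0.02, w=0.8):
    """Formula-6/8-style distance from a point to a plane or cylinder model,
    capped at 2*gamma_th so far-away points are treated as outliers.
    gamma_th and the surface weight w are assumed values."""
    if model["type"] == "plane":
        d = abs((p - model["pm"]) @ model["nm"])         # |nm . (p - pm)|
    else:                                                # cylinder
        v = p - model["pt"]
        # Perpendicular distance to the axis (unit vector), i.e. |p-pt|*sin(theta);
        # subtract the radius, then weight to balance plane vs. surface accuracy.
        d_axis = np.linalg.norm(np.cross(v, model["axis"]))
        d = w * abs(d_axis - model["r"])
    return min(d, 2 * gamma_th)                          # outlier cap

def normalize(dists):
    """Min-max normalization to [0, 1] (the Formula 7 form)."""
    dists = np.asarray(dists, dtype=float)
    span = dists.max() - dists.min()
    return (dists - dists.min()) / span if span > 0 else np.zeros_like(dists)
```

A supervoxel-to-model distance (Formula 5) is then just the sum of `point_model_distance` over the supervoxel's points.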
Step 5.3, Unit segmentation (supervoxel units and surface point units, N units in total) has been completed in steps 1 and 2, model fitting (surface models and plane models, K models in total) has been completed in steps 3 and 4, and the matching distance between each unit and each model has been calculated in steps 5.1 and 5.2. Therefore, the structure segmentation of the point cloud can be regarded as searching for an N-row, K-column matrix x* that minimizes the matching error while ensuring a minimum number of models, as shown in Formula 8.
The α-expansion optimization algorithm is used to minimize Formula 8, so as to find the optimal corresponding model for each unit.
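Implementing α-expansion itself (typically via graph cuts) is beyond a short sketch, but the objective it minimizes in step 5.3 can be written down directly: the summed unit-to-model matching cost of the chosen labels plus a penalty on the number of models actually used, which enforces a minimal model count. The per-model penalty value and the greedy baseline below are assumptions for illustration:

```python
import numpy as np

def labeling_energy(D, labels, model_penalty=1.0):
    """Energy of a candidate assignment (the quantity alpha-expansion minimizes).

    D: (N units x K models) matrix of normalized matching distances.
    labels: length-N integer array; labels[i] is the model chosen for unit i.
    model_penalty: assumed cost per model in use (the model-count term).
    """
    n = np.arange(len(labels))
    data_term = D[n, labels].sum()                  # matching error of chosen models
    label_term = model_penalty * len(set(labels))   # one cost per model in use
    return data_term + label_term

def greedy_assignment(D):
    """Baseline: each unit takes its nearest model. Alpha-expansion improves on
    this by also trading matching error against the model-count penalty."""
    return D.argmin(axis=1)
```

Merging all units onto one model lowers the label term but raises the data term; the optimizer balances the two.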
Step 5.4, The segmented units are marked with different colors, and the segmentation results are output to complete indoor structure segmentation.
When implemented, the above method can be run automatically by computer software technology, and a device running the method of the present disclosure shall also fall within the scope of protection.
The above descriptions are merely one embodiment of the present disclosure and are not intended to limit the present disclosure. Any modifications, improvements, etc. made within the spirit and principle of the present disclosure should be included within the protection scope of the present disclosure.
References Cited

U.S. Patent Documents:

Number | Name | Date | Kind
---|---|---|---
20160154999 | Fan | Jun 2016 | A1
20170116781 | Babahajiani | Apr 2017 | A1
20230186647 | Mahata | Jun 2023 | A1

Foreign Patent Documents:

Number | Date | Country
---|---|---
112288857 | Jan 2021 | CN
117197159 | Dec 2023 | CN
118135220 | Aug 2024 | CN

Prior Publication Data:

Number | Date | Country
---|---|---
20240303916 A1 | Sep 2024 | US