The present disclosure relates to image processing techniques, and particularly, to a three-dimensional point cloud model reconstruction method, a computer readable storage medium and a device.
In present production applications, a main bottleneck in the development of three-dimensional techniques, such as computer aided design, reverse engineering, virtual reality, and three-dimensional animation and games, is that there is still no convenient method for quickly obtaining a three-dimensional model stored in the computer. In recent years, the three-dimensional laser scanner has been widely applied by virtue of its advantage of conveniently and flexibly acquiring three-dimensional surface data of a real object. After more than twenty years of research, the technique of reconstructing a three-dimensional model from relatively complete point cloud data has matured. However, it is restricted from becoming a universal reverse reconstruction technique mainly because large-scale data missing usually occurs in the point cloud data, which cannot be solved by existing hardware scanning devices.
The so-called point cloud model generally refers to a set of three-dimensional coordinate points on the surface of an object to be measured, calculated by a three-dimensional scanning device that emits scanning light onto the surface of the object and receives the reflected light. The so-called point cloud three-dimensional reconstruction means completing and reconstructing, based on certain point cloud model data, mesh data representing the source model, for the convenience of computer rendering and user interaction. However, how to directly and quickly obtain a practical point cloud model by scanning the point cloud data is still a problem. The difficulty of point cloud data processing is that the point cloud data is usually dispersed, with significant missing data, noise, and outliers. In addition, point cloud completion is a recognized ill-posed problem, wherein all methods deduce unknown information from already existing information, and no method can ensure that a region completed using an algorithm is consistent with the source model.
The point cloud completion method in the prior art fits and fills the lost information using a quadric surface according to local information of the point cloud model. This method can well fill holes in a small area, but cannot solve the large-scale data missing problem. Considering more global information is an important way to improve the existing completion methods.
The embodiments of the present disclosure provide a three-dimensional point cloud model reconstruction method, which includes:
1) sampling and WLOP-consolidating an input point set to generate an initial surface point set, and copying the initial surface point set as an initial position of an interior skeleton point set, to establish a correspondence relation between surface points and skeleton points;
2) moving points in the interior skeleton point set inwards along a direction opposite to a normal vector thereof, to generate interior points;
3) using a self-adaptive anisotropic neighborhood as a regularization term to perform an optimization of the interior points, and generating skeleton points;
4) performing a consolidation and completion of the initial surface point set using the skeleton points, to generate consolidated surface points;
5) reconstructing a three-dimensional point cloud model according to the skeleton points, the surface points and the correspondence relation between the surface points and the skeleton points.
In addition, an embodiment of the present disclosure further provides a computer readable storage medium containing computer readable instructions, wherein, when executed, the computer readable instructions enable a processor to perform at least the operations in the above three-dimensional point cloud model reconstruction method.
Meanwhile, the present disclosure further provides a device, which includes:
a scanning module for scanning a model to acquire an input point set;
a memory for storing program instructions;
a processor connected to the scanning module and the memory, for executing the program instructions in the memory, and processing the input point set according to the steps of the above three-dimensional point cloud model reconstruction method.
In order that the aforementioned and other objectives, features and advantages of the present disclosure can be more easily understood, preferred embodiments will be described in detail with reference to the accompanying drawings.
In order to more clearly describe the technical solutions in the embodiments of the present disclosure or the prior art, the accompanying drawings to be used in the descriptions of the embodiments or the prior art will be briefly introduced as follows. Obviously, the accompanying drawings in the following descriptions just illustrate some embodiments of the present disclosure, and a person skilled in the art can obtain other accompanying drawings from them without paying any creative effort.
Next, the technical solutions in the embodiments of the present disclosure will be clearly and completely described with reference to the accompanying drawings of the embodiments of the present disclosure. Obviously, the embodiments described herein are just some of the embodiments of the present disclosure rather than all of them. Based on the embodiments of the present disclosure, any other embodiment obtained by a person skilled in the art without paying any creative effort shall fall within the protection scope of the present disclosure.
An embodiment of the present disclosure provides a three-dimensional point cloud model reconstruction method. As illustrated in the accompanying drawing, the method includes the following steps (a schematic sketch in code is given after the list):
step S101: sampling and WLOP-consolidating an input point set to generate an initial surface point set, and copying the initial surface point set as an initial position of an interior skeleton point set, to establish a correspondence relation between surface points and skeleton points;
step S102: moving points in the interior skeleton point set inwards along a direction opposite to a normal vector thereof, to generate interior points;
step S103: using a self-adaptive anisotropic neighborhood as a regularization term to perform an optimization of the interior points, and generating skeleton points;
step S104: performing a consolidation and completion of the initial surface point set using the skeleton points, to generate consolidated surface points;
step S105: reconstructing a three-dimensional point cloud model according to the skeleton points, the surface points and the correspondence relation between the surface points and the skeleton points.
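As a minimal sketch of the bookkeeping behind step S101 (the WLOP consolidation itself is sketched further below; the function and variable names here are illustrative assumptions rather than terms from the disclosure), the following Python fragment shows how copying the down-sampled surface set fixes the surface-skeleton correspondence through a shared row index:

```python
# Illustrative sketch only: random down-sampling plus the copy that pairs each
# surface point with an identically placed initial skeleton point (step S101).
import numpy as np

def init_deep_points(input_points: np.ndarray, n_samples: int, seed=None):
    """Randomly down-sample the input and pair each surface point with an
    identically placed skeleton point (correspondence = same row index)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(input_points), size=n_samples, replace=False)
    surface = input_points[idx].copy()   # P: initial surface point set
    skeleton = surface.copy()            # Q: initial interior skeleton point set
    return surface, skeleton             # row i of P corresponds to row i of Q

# usage: 10,000 scanned points reduced to 1,000 deep-point candidates
points = np.random.rand(10000, 3)
P, Q = init_deep_points(points, 1000, seed=0)
```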
An embodiment of the present disclosure provides a computer readable storage medium containing computer readable instructions, wherein when being executed, the computer readable instructions enable a processor to at least perform the three-dimensional point cloud model reconstruction method.
Meanwhile, the present disclosure provides a device. As illustrated in the accompanying drawing, the device includes:
a scanning module 321 for scanning a model to acquire an input point set;
a memory 323 for storing program instructions;
a processor 323 connected to the scanning module and the memory, for executing the program instructions in the memory, processing the input point set according to the following steps, and outputting a model:
1) sampling and WLOP-consolidating an input point set to generate an initial surface point set, and copying the initial surface point set as an initial position of an interior skeleton point set, to establish a correspondence relation between surface points and skeleton points;
2) moving points in the interior skeleton point set inwards along a direction opposite to a normal vector thereof, to generate interior points;
3) using a self-adaptive anisotropic neighborhood as a regularization term to perform an optimization of the interior points, and generating skeleton points;
4) performing a consolidation and completion of the initial surface point set using the skeleton points, to generate consolidated surface points;
5) reconstructing a three-dimensional point cloud model according to the skeleton points, the surface points and the correspondence relation between the surface points and the skeleton points.
The present disclosure mainly proposes a new point cloud representation to solve the problems of denoising and hole filling for point cloud models.
In order to better complete the point cloud data, extracting a skeleton from the point cloud is an important step. Since the skeleton contains semi-global topology and volume information of the object, the extraction of the point cloud skeleton is actually a process of understanding the shape of the original object. Once the skeleton of the point cloud is acquired, we can fill the holes and enhance the point cloud data using the skeleton. Different from existing methods of extracting a point cloud skeleton, the skeleton extracted by the present disclosure is a combined skeleton with both one-dimensional curves and two-dimensional slices, rather than a conventional one-dimensional curve skeleton. By comparison, the one-dimensional curve skeleton is more abstract, while the combined skeleton has a richer representational capability. Specifically, the method generates one-dimensional curves in tubular structure regions of the model, and generates two-dimensional slices in other regions. By using this characteristic of the combined skeleton, a targeted strategy can be adopted during point cloud completion. Thus, how to extract a combined skeleton from the point cloud is one of the key points of the present disclosure.
The closest solutions: the present disclosure proposes a new point cloud representation, which concerns point cloud denoising and consolidation, point cloud skeleton extraction, and point cloud completion and reconstruction in the prior art, and makes improvements over the prior art in each of these three directions. Next, the best prior-art methods in the three directions will be respectively introduced and compared with the present method.
The best prior-art method for point cloud denoising and consolidation is WLOP, which is briefly introduced as follows.
The locally optimal projection (LOP) projects (down-samples) a dispersed point cloud onto a new point set which better represents the underlying model of an object. In addition to the denoising function of conventional filtering algorithms, it can also simplify and homogenize the point cloud while facilitating the calculation of the normal vectors and the topological structure of the point cloud. The algorithm has the advantages of being parameter-free, robust and highly efficient, without using the normal vector information of the points. The principle of the algorithm is as follows:
a dispersed point set C={cj}j∈J⊂R3 is given, and LOP defines a projection point set P={pi}i∈I⊂R3. Assuming that k iterative operations need to be performed, k=0, 1, 2, . . . , then the following optimization equation is available, for the purpose of finding the P that minimizes it:
wherein the point set P is obtained by randomly down-sampling the original point set C. Generally, the effect of the algorithm improves when the number of points of P is reduced, but if the number of points is too small, details of the prototype may be lost. ∥·∥ denotes the L2 norm (2-norm), and μ(r)=e^(−r²/(h/4)²) is a rapidly decaying distance weighting function, wherein h is the neighborhood support radius.
Density Weighted Locally Optimal Projection (WLOP): observations show that in some situations the LOP algorithm cannot avoid point aggregation or oscillation, and the effect of global homogenization is not good enough. This is because the function of the data term E1 is to project each point to the central position of its local distribution. When the input point set is highly non-homogeneous, the final result is that dense areas remain relatively dense regardless of the selection of the initial point set P0, and the calculation of the normal vectors will be adversely influenced. Thus, WLOP adds weighting functions describing the density of the point cloud:
wherein qj∈C and pi∈P. At dense parts of the point cloud, the “attractive force” can be decreased by vj, and the “repulsive force” can be increased by wi. The two weighting functions are added into LOP to solve the optimization equation:
and pi^(k+1) indicates the coordinates of the ith point in the (k+1)th iteration. On the other hand, certain outliers outside the model may sometimes be obtained when the point cloud is scanned. In that case, those outliers, which have a relatively low density, can be eliminated quickly using the density definition.
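Purely as an illustration of how such a density-weighted iteration can be organized (a sketch under the assumption that the repulsion kernel follows the published WLOP form with η(r)=−r, which the passage above only summarizes; none of the names below come from the disclosure):

```python
# Minimal sketch of one WLOP-style iteration on small point sets.
import numpy as np

def mu(r, h):
    """Rapidly decaying distance weight with support radius h (see above)."""
    return np.exp(-(r / (h / 4.0)) ** 2)

def wlop_iteration(P, C, h, balance=0.45, eps=1e-12):
    """Return the updated projected set P given the input set C."""
    d_pc = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)   # |P| x |C|
    d_pp = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)   # |P| x |P|
    d_cc = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)   # |C| x |C|

    # density weights: v for input points, w for projected points
    v = mu(d_cc, h).sum(axis=1)
    w = mu(d_pp, h).sum(axis=1)

    # attraction: weighted local median of the input, damped by 1/v in dense input
    alpha = mu(d_pc, h) / (d_pc + eps) / v[None, :]
    attract = (alpha @ C) / (alpha.sum(axis=1, keepdims=True) + eps)

    # repulsion: push projected points apart, amplified by w in dense sample areas
    beta = mu(d_pp, h) / (d_pp + eps) * w[None, :]
    np.fill_diagonal(beta, 0.0)
    diff = P[:, None, :] - P[None, :, :]
    repulse = (beta[:, :, None] * diff).sum(axis=1) / (beta.sum(axis=1, keepdims=True) + eps)

    return attract + balance * repulse
```

This is quadratic in memory and is meant only to make the attraction/repulsion split concrete; a practical implementation would restrict both sums to local neighborhoods.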
WLOP in the prior art is introduced as above, and the present disclosure inherits the idea of WLOP in the aspect of denoising, but has the following distinctions.
1. The objectives of this solution include data completion in addition to data denoising. That is, a sample point pi may be located at a position having no input data points nearby. Thus, two situations may be considered.
(1) If there are many input points near the sample point pi, the data fitting term (“attractive force”) of WLOP is still used.
(2) If the sample point pi does not have enough input points as the basis for data fitting, our newly defined volume preservation term is used, i.e., the sample point pi shall remain consistent with its adjacent points pj in “volume” as far as possible. In a specific operation, it is assumed that a skeleton point set Q={qi}i∈I⊂R3 has been calculated and that the sample points are in one-to-one correspondence with the skeleton points. The volume information of the sample point pi is represented by the connection line to the corresponding skeleton point qi. The detailed algorithm will be introduced later.
In fact, the algorithm of this solution does not rigidly sort the sample points P into the two situations, but adds a weighting function to mix the data fitting term and the volume preservation term. That is, the weight of the data fitting term is higher when the sample point pi is close to the input data points and the surrounding volume variation is large; and the weight of the volume preservation term is higher when the sample point pi is far away from the input data points and the surrounding volume is consistent. Thus, whereas WLOP merely keeps a smooth surface based on existing data, this solution adds a volume preservation (smoothing) term, so that a sufficient consolidation can be carried out even where there is no data, as illustrated in the accompanying drawings.
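The exact mixing function is not given in the text above; the following is only a schematic sketch of the idea, with hypothetical per-point scores for data support and volume variation:

```python
# Schematic sketch: blend a data-fitting displacement with a volume-preservation
# displacement per sample point (the weighting below is an assumption).
import numpy as np

def blended_step(fit_step, volume_step, support, volume_variation, eps=1e-12):
    """support[i] in [0,1]: how much input data lies near sample i;
    volume_variation[i] in [0,1]: how strongly the local depth |p-q| varies."""
    w_fit = support * volume_variation                  # trust the data term here
    w_vol = (1.0 - support) * (1.0 - volume_variation)  # trust volume smoothing here
    w = w_fit / (w_fit + w_vol + eps)
    return w[:, None] * fit_step + (1.0 - w[:, None]) * volume_step
```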
2. Direction-constrained WLOP is proposed. The present disclosure proposes projecting the “attractive force” and the “repulsive force” in WLOP onto the normal vector and the tangent plane of the model surface, respectively, which brings two major advantages. Firstly, the “attractive force” is only effective in the direction of the normal vector of the surface point (the meaning of the surface point is substantially the same as that of the sample point), i.e., the surface point can only move along the positive and negative directions of the normal vector under the effect of the “attractive force”. Thus, compared with WLOP, the sample point can go beyond the boundary of the input model, i.e., enter a region not supported by the input points, and hole filling then becomes possible. Otherwise, when the calculation is made using the original “attractive force”, the sample point will still be attracted back by the “attractive force” of large weight even if it is pushed out of the boundary of the model by the “repulsive force”, and the effect of hole filling cannot be achieved. Secondly, the “repulsive force” is only effective in the direction of the tangent plane of the surface point, i.e., the surface point can only move along the tangent plane under the effect of the “repulsive force”, which ensures that the sample point moves along the current local plane regardless of whether data exist there. In terms of the effect of this technique, the sample point distribution is smoother than that of WLOP in regions having data, and in regions having no data, it is equivalent to gradually extrapolating the existing data into those regions to achieve a smooth hole filling, as shown in the accompanying drawings.
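A small sketch of this direction constraint, treating both forces as displacement vectors and assuming unit-length normals (an illustration, not the disclosed formula):

```python
# Keep the attraction component along the normal line and the repulsion
# component within the tangent plane before applying the update.
import numpy as np

def constrained_step(attract_step, repulse_step, normals, eps=1e-12):
    n = normals / (np.linalg.norm(normals, axis=1, keepdims=True) + eps)
    along_normal = np.sum(attract_step * n, axis=1, keepdims=True) * n
    in_tangent = repulse_step - np.sum(repulse_step * n, axis=1, keepdims=True) * n
    return along_normal + in_tangent
```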
3. New normal vector calculation mode. As can be seen from the above text, the direction-constrained WLOP proposed by us depends on the normal vector of the surface point, and the local tangent plane is also determined by the normal vector. To be noted, our method does not consider the orientation (inward or outward) of the normal vector. Thus, in regions having data, the conventional PCA algorithm can be used to continuously calculate and update the normal vectors of the sample points. The key technical difficulty is that in regions having no data, the normal vector calculated directly using PCA is problematic. In a case where data is seriously missing, e.g., a sphere model of which only half has been scanned, if the normal vector of a sample point is calculated directly using PCA, the sample point will endlessly escape along the tangent plane at the boundary of the model, and a closed hole filling cannot be achieved. With respect to this defect, we for the first time propose correcting the normal vector of a sample point having no data support using information of the connection line between the surface point and the skeleton point; the specific algorithm will be introduced later. By using this technique, even in a region where data is seriously missing, the sample points can still well complete the task of hole filling with the aid of the skeleton points.
The best prior-art method for point cloud skeleton extraction is the L1-medial skeleton method, which is briefly introduced as follows:
The core of the L1-medial skeleton method is the local L1 median, which can be generalized with the following optimization formula:
wherein Q={qj}j∈J⊂R3 is an input point set, X={xi}i∈I⊂R3 is a point set initially down-sampled from Q, and the numbers of points satisfy |I|<<|J|. After the above formula is minimized, X becomes a regular “skeleton point set” located at the center of the input model, as illustrated in the accompanying drawings.
Generally, the distribution condition of points in space is described by their degree of dispersion or concentration. On the skeleton branches, the skeleton points are distributed relatively dispersedly in the respective dimensions, while on the skeleton trunk, the skeleton points are distributed continuously and densely along a one-dimensional straight line. As to three-dimensional samples, we describe the degree of dispersion of the sample points by means of the eigenvalues of the covariance matrix. For any sample point xi, i∈I, its covariance matrix is calculated as:
Ci=Σi′∈I\{i} μ(∥xi−xi′∥)(xi−xi′)T(xi−xi′),
wherein the distance weighting function μ plays the same role as in the local L1 median, both serving to consider the local distribution of the sample points rather than the global distribution. xi is a row vector, and Ci is a 3×3 matrix. Next, we calculate the three sorted eigenvalues λi0≤λi1≤λi2 of the matrix and the corresponding eigenvectors {vi0, vi1, vi2}. These three eigenvectors form an orthogonal frame, and they are the three major directions of the point set. The eigenvalue corresponding to one major direction indicates how densely the points are distributed along that direction, and may also be referred to as the energy in that direction. If the three eigenvalues are close to each other, it means that the points are distributed with similar density along the three major directions, i.e., the distribution of the point set is dispersed in three-dimensional space. On the contrary, if the maximum eigenvalue is much larger than the other two eigenvalues, it means that the point set is distributed densely along a certain direction. We can define a distribution metric to describe the distribution condition of the point set:
With this point distribution measure available, the regularization term with the point distribution metric in formula (1) can be defined as follows:
wherein {γi}i∈I is a set of balance parameters for controlling the balance between the attractive force of the input points and the repulsive force among the sample points.
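The covariance construction and the distribution metric just described can be transcribed directly; the ratio of the largest eigenvalue to the eigenvalue sum follows the L1-medial skeleton formulation that this passage summarizes (the displayed formulas themselves are not reproduced in the text, and the name sigma below is our own notation):

```python
# Weighted covariance per sample point and the eigenvalue-ratio distribution
# metric: ~1/3 means an isotropic (dispersed) neighborhood, ~1 means the
# neighborhood concentrates along a single direction.
import numpy as np

def distribution_metric(X, h):
    mu = lambda r: np.exp(-(r / (h / 4.0)) ** 2)
    sigma = np.empty(len(X))
    for i, xi in enumerate(X):
        diff = xi - np.delete(X, i, axis=0)          # (N-1, 3) row vectors xi - xi'
        w = mu(np.linalg.norm(diff, axis=1))
        Ci = (w[:, None] * diff).T @ diff            # weighted 3x3 covariance matrix
        lam = np.linalg.eigvalsh(Ci)                 # eigenvalues, ascending order
        sigma[i] = lam[2] / (lam.sum() + 1e-12)      # largest / sum of all three
    return sigma
```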
Now, formula (1) is solved. When the gradient of the energy function is 0, the sample points satisfy the following relation:
after rearrangement, it can be obtained that:
This formula can be regarded as an equation in which X is the unknown, i.e., AX=BQ. Only when the balance parameter satisfies 0≤γi<1/2 can it be ensured that A is a strictly diagonally dominant and therefore nonsingular matrix. The solution of the equation is then X=A⁻¹BQ.
By using the fixed-point iteration method, when the point set of the current iteration is given as Xk={xik}, k=0, 1, . . . , the point set of the next iteration is obtained as Xk+1=A⁻¹BQ, wherein A and B are evaluated at Xk.
As can be seen from the above formula, the local repulsive force can be automatically calculated and adjusted as the distribution metric varies within (0, 1].
The L1-medial skeleton is introduced as above, and the present disclosure inherits its idea in the calculation method of the point cloud skeleton, but has the following distinctions:
An important application of point cloud skeleton extraction is point cloud data completion. In a case where the data missing is serious, the point cloud skeleton helps ensure that the completed data well maintains the original topological structure of the model. Observations show that using only a one-dimensional curve skeleton is adverse to three-dimensional shape completion, and a two-dimensional curved surface skeleton is more convenient for representing the non-cylindrical parts of an object, as illustrated in the accompanying drawings.
In the specific algorithm, the solution makes two improvements to the L1-medial skeleton:
(1) The surface points are preliminarily moved inwards using information of the normal vectors of the point cloud, so as to obtain a point set inside the object. The main idea of the inward movement of the surface points is that a surface point moves in a direction opposite to its normal vector (i.e., a direction toward the interior of the object) until it encounters a point moving inwards from the opposite side of the object. This preprocessing operation has two advantages: firstly, it avoids the defect of the prior art that the calculated skeleton might be located outside the object. Once the point cloud skeleton is located outside the object, a disastrous influence on the point cloud completion will be caused, which destroys the entire topological structure of the model. Secondly, it avoids the defect that in the L1-medial skeleton the neighborhood ambiguity is resolved by gradually expanding the neighborhood, which greatly reduces the computational cost of the algorithm and the time for the user to adjust the algorithm parameters (initial neighborhood and neighborhood growth rate).
(2) Since the objective of this solution is to form a mixed-type skeleton combining one-dimensional curves and two-dimensional curved surfaces, the point distribution weighting function in the weighted regularization term of the L1-medial skeleton is transformed. The main idea is to replace the weighted regularization term of the L1-medial skeleton with an anisotropic regularization term. The weighted regularization term controls the magnitude of the repulsive force according to the local point distribution condition, while the anisotropic regularization term projects the repulsive force onto each local major direction according to the distribution of the interior points and the distribution of the lines connecting the interior points and the exterior surface points. If the local points are distributed as a one-dimensional curve, the repulsive force will only act along the one-dimensional curve; and if the local points are distributed as a two-dimensional curved surface, the repulsive force will only act within the two-dimensional curved surface. The result is a homogeneously distributed mixed-type skeleton combining one-dimensional curves and two-dimensional curved surfaces, while avoiding the use of the bridging points in the L1-medial skeleton.
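The projection behaviour described in item (2) can be illustrated with the following sketch (the eigenvalue-ratio test and the choice of how many principal directions to keep are assumptions for illustration; the disclosure does not reproduce the formula here):

```python
# Restrict a repulsion vector to the locally dominant principal direction(s):
# curve-like neighborhoods keep one direction, sheet-like ones keep two.
import numpy as np

def anisotropic_repulsion(force, neighbors, curve_threshold=0.9):
    """force: (3,) raw repulsion at one interior point; neighbors: (k,3) local points."""
    centered = neighbors - neighbors.mean(axis=0)
    lam, V = np.linalg.eigh(centered.T @ centered)   # ascending eigenvalues/vectors
    lam, V = lam[::-1], V[:, ::-1]                   # descending: principal axis first
    ratio = lam[0] / (lam.sum() + 1e-12)
    axes = V[:, :1] if ratio > curve_threshold else V[:, :2]
    return axes @ (axes.T @ force)                   # project onto the kept subspace
```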
As to point clouds with large-scale data missing, the Poisson reconstruction algorithm in the prior art is at present the best method for completion and reconstruction.
Such a conventional implicit curved surface reconstruction algorithm achieves a good fit in regions locally having data, but in regions where data is missing globally on a large scale it can only ensure that the output mesh is closed. Another class of methods completes the lost data using the point cloud skeleton to maintain the global topological structure of the object, and then performs a curved surface reconstruction in the conventional manner. The method adopted by the present disclosure, as well as the L1-medial skeleton and the ROSA skeleton mentioned above, belongs to this second class. As illustrated in the following two figures, our method uses the skeleton, i.e., semi-global information, to better recover the entire topological structure of the object, particularly for seriously missing data or slice-type data.
The present disclosure mainly has the following technical contents:
(1) A new point cloud representation, the deep point. Each surface point in this representation corresponds to a skeleton point. It is a representation combining information of the local surface and information of the semi-global volume of the object.
(2) A method for extracting a mixed-type point cloud skeleton combining one-dimensional curves and two-dimensional curved surfaces, which is a key technique of the deep point construction.
(3) A point cloud consolidation method that improves WLOP using deep points, which achieves information supplementation and enhancement in regions having no input data.
(1) Point cloud preprocessing, i.e., point cloud down-sampling and WLOP consolidation. For an input point set C={cj}j∈J⊂R3, we obtain a point set P={pi}i∈I⊂R3 through random sampling, and perform a smoothing consolidation thereof using the conventional WLOP algorithm introduced above. Next, we obtain the initial positions of an interior skeleton point set Q={qi}i∈I⊂R3 by copying the surface point set P. At this point, the two point sets are completely identical, and no operation of adding or deleting points or changing their serial numbers will be performed on the two point sets in the subsequent algorithm, which ensures that the surface points always remain in one-to-one correspondence with the skeleton points.
(2) Deep point definition. In the embodiment of the present disclosure, a deep point is formed by a point pair <pi, qi>, wherein pi and qi are located on the surface and on the interior skeleton of the model, respectively. A complete deep point set <P,Q>={<pi, qi>}i∈I⊂R6 shall meet the following characteristics (a schematic data-structure sketch is given after the list):
the surface point set P is located on the implicit curved surface of the model and uniformly distributed;
the skeleton point set Q forms a mixed-type skeleton composed of a one-dimensional curve and a two-dimensional curved surface;
the direction mi=(pi−qi)/∥pi−qi∥ of the deep point pair is consistent with the normal vector of the surface point.
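A minimal container for the deep point pairs with the three characteristics listed above (purely illustrative; the class, field and method names are our own, not terms from the disclosure):

```python
# Surface and skeleton points always share the row index i; the pair direction
# m_i and the pair length double as orientation and volume-thickness cues.
import numpy as np
from dataclasses import dataclass

@dataclass
class DeepPoints:
    surface: np.ndarray   # P, shape (N, 3), points on the model surface
    skeleton: np.ndarray  # Q, shape (N, 3), points on the mixed 1D/2D interior skeleton

    def directions(self) -> np.ndarray:
        """Unit directions m_i = (p_i - q_i) / ||p_i - q_i|| of the deep points."""
        m = self.surface - self.skeleton
        return m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-12)

    def thickness(self) -> np.ndarray:
        """Local volume thickness ||p_i - q_i|| carried by each deep point."""
        return np.linalg.norm(self.surface - self.skeleton, axis=1)
```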
(3) Neighborhood size determination. The average distance r between each sample point and its closest adjacent point after the WLOP consolidation is calculated as follows and taken as the reference for the main parameters.
Firstly, the neighborhood of a surface point is defined as P̃i={pi′|∥pi′−pi∥<σpr}, wherein the default value of σp is 5. The neighborhood of an interior point is Q̃i={qi′|∥qi′−qi∥<σqr}, wherein the default value of σq is 2. The neighborhoods of the surface points are static and only need to be calculated once, while the neighborhoods of the interior points are dynamic and need to be recalculated in each iteration.
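The neighborhood bookkeeping just described can be sketched as follows (a sketch assuming SciPy's cKDTree is available for the nearest-neighbor and radius queries; the defaults 5 and 2 are those stated above):

```python
# Average nearest-neighbor spacing r and radius neighborhoods built from it.
import numpy as np
from scipy.spatial import cKDTree

def average_spacing(P):
    d, _ = cKDTree(P).query(P, k=2)       # k=2: each point plus its nearest neighbor
    return d[:, 1].mean()                 # mean distance to the nearest neighbor

def radius_neighborhoods(points, radius):
    tree = cKDTree(points)
    return tree.query_ball_point(points, r=radius)   # list of neighbor-index lists

# usage (P: consolidated surface points, Q: interior points)
# r = average_spacing(P)
# surface_nbrs  = radius_neighborhoods(P, 5.0 * r)   # static, computed once
# interior_nbrs = radius_neighborhoods(Q, 2.0 * r)   # dynamic, recomputed per iteration
```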
(4) The surface points move inwards to form interior points. That is, the copies of the surface point set functioning as the initial positions of the skeleton points move inward. The point set Q={qi}i∈I⊂R3 is preliminarily moved inwards using information of the normal vectors of the point cloud, to obtain an interior point set inside the object.
The main idea of the inward movement is to cause each point to move in a direction opposite to its normal vector (i.e., toward the interior of the object) until it encounters a point moving inwards from the opposite side of the object. Firstly, the condition for stopping the inward movement of a point is defined such that, within the neighborhood Q̃i of the point, the maximum angle between the normal vector ni of the sample point and the normal vectors ni′ of its neighboring points, maxi′∠(ni, ni′), is smaller than a threshold.
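As a minimal sketch of the inward-movement loop (the step size is an assumption, and the stopping test below is written as "some neighbor's normal disagrees strongly with ni", i.e., the encounter with points arriving from the opposite side described first in the paragraph; the threshold form stated at the end of the paragraph may be expressed differently in the disclosed algorithm):

```python
# Move each skeleton copy q_i along -n_i until its dynamic neighborhood
# contains a point whose normal points away from n_i.
import numpy as np
from scipy.spatial import cKDTree

def move_inward(Q, normals, step, radius, max_angle_deg=90.0, max_iters=100):
    Q = Q.copy()
    cos_thresh = np.cos(np.deg2rad(max_angle_deg))
    active = np.ones(len(Q), dtype=bool)
    for _ in range(max_iters):
        if not active.any():
            break
        tree = cKDTree(Q)                              # dynamic neighborhood: rebuilt each pass
        for i in np.where(active)[0]:
            nbrs = tree.query_ball_point(Q[i], r=radius)
            met_other_side = any(np.dot(normals[i], normals[j]) < cos_thresh
                                 for j in nbrs if j != i)
            if met_other_side:
                active[i] = False                      # stop this point
            else:
                Q[i] -= step * normals[i]              # step opposite to the normal
    return Q
```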
(5) The interior points contract to form the skeleton points. Firstly, the point set obtained in step (4) is copied as an interior point set H={hi}i∈I, and the point set Q is further contracted and regularized to form the skeleton points, i.e., the following optimization is solved:
Different from the conventional L1-medial skeleton, in the embodiment of the present disclosure a self-adaptive anisotropic neighborhood is used as the regularization term. The purpose is that in cylindrical parts, the neighborhood range takes the shape of a thin and long ellipsoid; in quasi-planar parts, it takes the shape of a large and flat ellipsoid; and at the end points and the edges of planar parts, it takes the shape of a small ellipsoid, so as to avoid excessive contraction of the boundary and to keep the overall shape. For this purpose, we perform a principal component analysis (PCA) on the neighborhood Q̃i of qi to obtain the three principal axes {vi1, vi2, vi3} of the neighborhood ellipsoid, which have lengths {li1, li2, li3}. Next, the normal vectors of the points within the neighborhood Q̃i are projected onto each principal axis, and the length of each axis is determined through the following equation:
wherein m=1, 2, 3, and the default value of the parameter is 0.1.
Thus, returning to the above optimization equation, it may be defined that:
By analyzing the optimization, the following analytic expression can be deduced:
and the default value of the parameter is 0.4.
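The axis-length equation referenced in step (5) is not reproduced in the text above; the sketch below is therefore only one plausible realization of the described behaviour (axes aligned with the neighborhood normals are shortened, axes perpendicular to them keep roughly their PCA length, so cylinders get thin and long ellipsoids and quasi-planar parts get large flat ones), using the stated default 0.1 as a hypothetical lower bound on the shrinking factor:

```python
# Adaptive anisotropic neighborhood axes from PCA plus normal projections.
import numpy as np

def adaptive_axes(neighbors, neighbor_normals, delta=0.1):
    """Return (axes, lengths) of the neighborhood ellipsoid around one interior point."""
    centered = neighbors - neighbors.mean(axis=0)
    lam, V = np.linalg.eigh(centered.T @ centered)        # ascending eigenvalues
    lengths = np.sqrt(np.maximum(lam, 0.0))[::-1]         # descending PCA axis lengths
    axes = V[:, ::-1]                                     # columns: v1, v2, v3
    proj = np.abs(neighbor_normals @ axes).mean(axis=0)   # mean |n . v_m| per axis
    lengths = lengths * (delta + (1.0 - delta) * (1.0 - proj))  # shrink normal-aligned axes
    return axes, lengths
```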
(6) Consolidation and completion of the surface points. Different from the conventional WLOP algorithm, our optimization has three terms:
The first term is a data fitting term, wherein the local input points are taken as the neighborhood and are weighted by a density function, i.e., the weight of the data fitting term increases when the surface point is closer to the source points. The second term R̂(P) is a regularization term, which is defined using any two orthogonal vectors on the tangent plane perpendicular to the normal vector. The third term G(P) is a shape (volume) preservation term, which is defined using the average volume thickness of the current point and γ(pi)=(1+var({∥pi′−qi′∥}i′∈Ii)), wherein i′ denotes an adjacent point of the sample point.
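The disclosed form of G(P) is not reproduced in the text above; purely as an illustration of a shape (volume) preservation penalty in the spirit described earlier, the following sketch asks each deep-point thickness ∥pi−qi∥ to stay close to the average thickness of its neighbors:

```python
# Illustrative volume-preservation energy over deep-point thicknesses.
import numpy as np

def volume_preservation_energy(P, Q, neighborhoods):
    """P, Q: (N,3) surface and skeleton points; neighborhoods: list of index lists."""
    t = np.linalg.norm(P - Q, axis=1)                   # per-point volume thickness
    energy = 0.0
    for i, nbrs in enumerate(neighborhoods):
        others = [j for j in nbrs if j != i]
        if others:
            energy += (t[i] - t[others].mean()) ** 2    # deviation from local mean thickness
    return energy
```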
(7) Normal vector correction. In each iteration, the normal vector obtained by the PCA calculation is corrected using the orientation of the deep point and the distribution condition of the input points around the point, i.e., the normal vector used in step (6) is corrected to obtain the corrected normal vector.
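The correction formula itself is not reproduced above; the following sketch is only one plausible reading, in which the PCA normal is blended with, or replaced by, the deep-point direction mi=(pi−qi)/∥pi−qi∥ wherever the surrounding input data are too sparse to trust PCA (the support score is a hypothetical input):

```python
# Blend PCA normals with deep-point directions according to local data support.
import numpy as np

def corrected_normals(pca_normals, P, Q, support, eps=1e-12):
    """support[i] in [0,1]: how well input data cover the vicinity of p_i."""
    m = P - Q
    m /= (np.linalg.norm(m, axis=1, keepdims=True) + eps)
    # pick the hemisphere of m_i agreeing with the PCA normal (orientation is ignored)
    m *= np.sign(np.sum(m * pca_normals, axis=1, keepdims=True) + eps)
    n = support[:, None] * pca_normals + (1.0 - support[:, None]) * m
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + eps)
```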
The present disclosure proposes a new point cloud representation, i.e., the deep point, in which the surface points are in one-to-one correspondence with the skeleton points. It is a representation that combines information of the local surface and information of the semi-global volume of the object, and can be applied to aspects such as point cloud completion and reconstruction, feature extraction, point cloud deformation, and point cloud registration.
The present disclosure further proposes a method for extracting a mixed-type point cloud skeleton combining one-dimensional curves and two-dimensional curved surfaces, and this method is the key technique for the deep point construction. Compared with the conventional one-dimensional curve skeleton, the mixed-type skeleton has a richer representational capability, and can be applied to aspects such as point cloud completion, three-dimensional matching and search, and three-dimensional animation.
A point cloud consolidation method which improves the WLOP algorithm using deep points. The conventional WLOP algorithm can only perform denoising and smoothing in regions having input data, while the present method can also perform data supplementation and enhancement in regions having no input data.
A technical concept of performing joint consolidation of the surface points and the skeleton points in the process of forming the deep points. That is, the formation of the skeleton points depends on relational information, such as the surface neighborhoods, provided by the consolidated surface points, while the completion and enhancement of the surface points depends on information, such as volume and orientation (inward or outward), provided by the formed skeleton points. The two are complementary to each other, and the iterative consolidation can be performed repeatedly.
Compared with existing methods for point cloud denoising and skeleton extraction, the present disclosure proposes a point cloud skeleton representation more beneficial to point cloud completion, and combines it with the concept of the deep point, so that the existing best method for point cloud denoising, i.e., WLOP, can perform point cloud completion at the same time as denoising. Compared with a three-dimensional reconstruction made directly, a three-dimensional reconstruction based on the completed and consolidated point cloud can better deal with situations of large-scale data missing, and the reconstructed model is more accurate in topological structure.
A person skilled in the art shall understand that the embodiments of the present disclosure can be provided as a method, a system or a computer program product. Therefore, the present disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure can take the form of a computer program product implemented on one or more computer usable storage mediums (including, but not limited to, a magnetic disc memory, a CD-ROM, an optical storage, etc.) containing computer usable program codes therein.
The present disclosure is described with reference to a flow diagram and/or block diagram of the method, device (system) and computer program product according to the embodiments of the present disclosure. It shall be understood that each flow and/or block in the flow diagram and/or block diagram and a combination of the flow and/or block in the flow diagram and/or block diagram can be realized by the computer program instructions. These computer program instructions can be provided to a general computer, a dedicated computer, an embedded processor or a processor of other programmable data processing device to generate a machine, such that the instructions performed by the computer or the processor of other programmable data processing devices generate the device for implementing the function designated in one flow or a plurality of flows in the flow diagram and/or a block or a plurality of blocks in the block diagram.
These computer program instructions can also be stored in a computer readable memory capable of directing the computer or other programmable data processing devices to operate in a specific manner, such that the instructions stored in the computer readable memory generate a manufactured article including an instruction device that implements the function(s) designated in one flow or a plurality of flows in the flow diagram and/or a block or a plurality of blocks in the block diagram.
These computer program instructions can also be loaded onto the computer or other programmable data processing devices, such that a series of operation steps is executed on the computer or other programmable devices to generate the processing realized by the computer, therefore the instructions executed on the computer or other programmable devices provide the steps for implementing the function designated in one flow or a plurality of flows in the flow chart and/or a block or a plurality of blocks in the block diagram.
The above are only the preferred embodiments of the present disclosure, and are not intended to limit the present disclosure. For a person skilled in the art, the embodiments of the present disclosure can be modified and changed in various ways. Any modifications, equivalent substitutions and improvements made within the spirit and principle of the present disclosure shall be included in the protection scope of the present disclosure.
This application is a continuation of International Application No. PCT/CN2015/100005, filed on Dec. 30, 2015, which is hereby incorporated by reference in its entirety.