Pure pose solution method and system for multi-view camera pose and scene

Information

  • Patent Grant
  • Patent Number
    12,094,162
  • Date Filed
    Tuesday, December 31, 2019
  • Date Issued
    Tuesday, September 17, 2024
Abstract
A pure pose solution method and system for a multi-view camera pose and scene are provided. The method includes: a pure rotation recognition (PRR) step: performing PRR on all views, and marking views having a pure rotation abnormality, to obtain marked views and non-marked views; a global translation linear (GTL) calculation step: selecting one of the non-marked views as a reference view, constructing a constraint tr=0, constructing a GTL constraint, solving a global translation {circumflex over (t)}, reconstructing a global translation of the marked views according to tr and {circumflex over (t)}, and screening out a correct solution of the global translation; and a structure analytical reconstruction (SAR) step: performing analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose. The method and system can greatly improve the computational efficiency and robustness of multi-view camera pose and scene structure reconstruction.
Description
CROSS REFERENCE TO THE RELATED APPLICATIONS

This application is the national phase entry of International Application No. PCT/CN2019/130316, filed on Dec. 31, 2019, which is based upon and claims priority to Chinese Patent Application No. 201911267354.1, filed on Dec. 11, 2019, the entire contents of which are incorporated herein by reference.


TECHNICAL FIELD

The present disclosure relates to the field of computer vision, and in particular, to a pure pose solution method and system for a multi-view camera pose and scene.


BACKGROUND

Reconstruction of camera pose and scene structure has always been at the core of structure-from-motion reconstruction in computer vision. In the conventional multi-view geometric description, the reconstruction of camera pose and scene structure requires initialization of global parameters and bundle adjustment (BA). On the one hand, the initialization of global parameters provides initial values for the BA and mainly includes initialization of the global attitude, the global translation, and three-dimensional (3D) scene point coordinates, where the difficulty lies in the initialization of the global translation. The conventional global translation method generally takes the relative translation of two views as an input and optimizes the global translation by minimizing an algebraic error; abnormalities may occur in cases such as pure camera rotation or collinear motion. On the other hand, the objective of the BA is to minimize a re-projection error. Its parameter space includes 3D scene point coordinates, pose parameters, camera parameters, and others. In the case of m 3D scene points and n images, the dimensionality of the parameter space to be optimized is 3m+6n. There are generally a large number of 3D scene points, resulting in a very high-dimensional optimization problem.


A patent (Patent Publication No. CN106408653A) discloses a real-time robust bundle adjustment method for large-scale 3D reconstruction. The current mainstream BA method is a nonlinear optimization algorithm considering the sparsity of a parametric Jacobian matrix, but the method still fails to meet the real-time and robustness requirements in large-scale scenes.


SUMMARY

To solve the defects in the prior art, an objective of the present disclosure is to provide a pure pose solution method and system for a multi-view camera pose and scene.


The present disclosure provides a pure pose solution method for a multi-view camera pose and scene, where the method uses initial attitude values of views as an input and includes:


a pure rotation recognition (PRR) step: performing PRR on all views and marking views having a pure rotation abnormality to obtain marked views and non-marked views;


a global translation linear (GTL) calculation step: selecting one of the non-marked views as a reference view, constructing a constraint tr=0, constructing a GTL constraint, solving a global translation {circumflex over (t)}, reconstructing a global translation of the marked views according to tr and {circumflex over (t)}, and screening out a correct solution of the global translation; and


a structure analytical reconstruction (SAR) step: performing analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose.


Preferably, the PRR step includes the following steps:


step 1: for a view i (1≤i≤N) and a view j∈Vi, calculating θi,j=∥[Xj]xRi,jXi∥ by using all image matching point pairs (Xi,Xj) and a relative attitude Ri,j of the dual views (i,j), and constructing sets Θi,j and Θi=∪j∈ViΘi,j, where a proportion of elements in Θi that are greater than δ1 is denoted by γi;


step 2: if γi<δ2, marking the view i as a pure rotation abnormality view, recording a mean value of elements in the set Θi,j as θ̄i,j, letting l=argminj∈Vi{θ̄i,j}, and constructing a constraint ti=tl;


where if a 3D point XW=(xW, yW, zW)T is visible in n (≤N) views, for i=1, 2, …, n, Vi is a set composed of all co-views of the view i; Xi and Xj represent normalized image coordinates of the point XW on the view i and the view j, respectively; δ1 and δ2 are specified thresholds; Ri and ti represent a global attitude and a global translation of the view i, respectively; Ri,j (=RjRiT) and ti,j represent a relative attitude and a relative translation of the dual views (i,j), respectively; and [Xj]x represents an antisymmetric matrix formed by the vector Xj; and


step 3: repeating step 1 to step 2 for all the views.
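To make the PRR step concrete, the following sketch (an illustrative Python implementation, not the patent's reference code; the threshold values `delta1` and `delta2` and the data layout are assumptions) computes θi,j=∥[Xj]xRi,jXi∥ over all matching pairs, the proportion γi of residuals above δ1, and, when the view is marked, the co-view l minimizing the mean residual:

```python
import numpy as np

def skew(v):
    """Antisymmetric (cross-product) matrix [v]_x of a 3-vector v."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def detect_pure_rotation(matches, R_rel, delta1=1e-3, delta2=0.5):
    """matches[j] = list of (X_i, X_j) normalized image point pairs shared with
    co-view j; R_rel[j] = relative attitude R_{i,j}. Returns (marked, l): marked
    is True if view i shows a pure rotation abnormality, and l is then the
    co-view with the smallest mean residual (for the constraint t_i = t_l)."""
    theta_all = []
    mean_theta = {}
    for j, pairs in matches.items():
        thetas = [np.linalg.norm(skew(Xj) @ R_rel[j] @ Xi) for Xi, Xj in pairs]
        theta_all.extend(thetas)
        mean_theta[j] = float(np.mean(thetas))
    gamma = np.mean([t > delta1 for t in theta_all])  # proportion above delta1
    if gamma < delta2:                                # pure-rotation abnormality
        l = min(mean_theta, key=mean_theta.get)       # argmin of mean residuals
        return True, l
    return False, None
```

Under pure rotation the matched rays satisfy Xj˜Ri,jXi, so every residual vanishes and γi falls below δ2.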


Preferably, the GTL calculation step includes the following steps:


step 1: for a current 3D point, selecting views (ζ,η)=argmax1≤i,j≤n{θi,j}, where ζ is a left baseline view, and η is a right baseline view;


step 2: for all the non-marked views (excluding the reference view), constructing a GTL constraint according to the form of Btη+Cti+Dtζ=0;


where normalized image coordinates of the 3D point XW on the view i satisfy Xi˜custom character+custom charactercustom characterYi, ˜ represents an equation under homogeneous coordinates, aT=−([Xη]xcustom character)T[Xη]x, and the superscript T represents transposition of a matrix or vector. To solve the global translation linearly, different target function forms can be defined, for example, (I3−Xie3T)Yi=0 and [Xi]xYi=0, where I3 represents the 3×3 identity matrix and e3=(0,0,1)T represents its third column vector. In addition, because the relative translation ti,j can be expressed in different forms with respect to the global translation, for example, ti,j=Rj(ti−tj) and ti,j=tj−Ri,jti, the matrices B, C, and D correspondingly take different forms:


(1) for the target function [Xi]xYi=0 and the relative translation ti,j=Rj(ti−tj): B=[Xi]xcustom character, C=[Xi]xcustom character, D=−(B+C);


(2) for the target function (I3−Xie3T)Yi=0 the relative translation ti,j=Rj(ti−tj): B=(I3−Xie3T)custom character, C=(I3−Xie3T)custom character, D=−(B+C);


(3) for the target function [Xi]xYi=0 and the relative translation ti,j=tj−Ri,jti: B=[Xi]xcustom character, C=[Xi]xcustom character, D=−(custom character+custom character); and


(4) for the target function (I3−Xie3T)Yi=0 and the relative translation ti,j=tj−Ri,jti: B=(I3−Xie3T)custom character, C=(I3−Xie3T)custom character, D=−(custom character+custom character);


step 3: repeating step 1 to step 2 for other 3D points, constructing a linear equation, and solving the global translation {circumflex over (t)};


step 4: reconstructing the global translation of the marked views according to ti=tl by using {circumflex over (t)} and tr; and


step 5: screening out the correct solution of the global translation t according to custom character≥0.
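The GTL step amounts to stacking the per-point linear blocks and extracting a null-space direction. The sketch below (illustrative Python; the 3×3 blocks B, C, D are taken as inputs because their exact entries depend on the chosen target function, and the function name is an assumption) fixes tr=0 by deleting the reference view's columns and solves the stacked homogeneous system by SVD, leaving sign and scale to the subsequent cheirality screening:

```python
import numpy as np

def solve_global_translation(blocks, n_views, ref=0):
    """blocks: list of (eta, i, zeta, B, C, D); each contributes the 3-row
    constraint B t_eta + C t_i + D t_zeta = 0. Fixing t_ref = 0 removes the
    reference view's columns; the remaining homogeneous system is solved (up
    to sign and scale) by the right singular vector of the smallest singular
    value. Returns an (n_views, 3) array with row `ref` equal to zero."""
    cols = [v for v in range(n_views) if v != ref]
    col_of = {v: 3 * k for k, v in enumerate(cols)}
    A = np.zeros((3 * len(blocks), 3 * (n_views - 1)))
    for r, (eta, i, zeta, B, C, D) in enumerate(blocks):
        for v, M in ((eta, B), (i, C), (zeta, D)):
            if v != ref:                      # t_ref = 0: its columns vanish
                A[3 * r:3 * r + 3, col_of[v]:col_of[v] + 3] += M
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                                # null-space direction
    T = np.zeros((n_views, 3))
    for v in cols:
        T[v] = t[col_of[v]:col_of[v] + 3]
    return T   # sign/scale resolved afterwards by the cheirality screening
```

With enough independent blocks the reduced system has a one-dimensional null space, and the recovered translations match the true ones up to a common scale and sign.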


Preferably, an optional camera pose optimization step is added between the GTL calculation step and the SAR step:


expressing image homogeneous coordinates fi of the 3D point XW on the view i as follows:

fi˜custom character+custom character


where ˜ represents an equation under homogeneous coordinates, custom character=∥[Xη]xcustom character∥/custom character, and a re-projection error is defined as follows:







εi=fi/(e3Tfi)−{tilde over (f)}i






where {tilde over (f)}i represents image coordinates of a 3D point on the view i and a third element is 1. For all views of the 3D point, a re-projection error vector ε is formed. For all 3D points, an error vector Σ is formed. A target function of global pose optimization is described as arg min ΣTΣ, and an optimization solution of the global pose is calculated accordingly. It should be noted that the camera pose optimization step may be replaced with another optimization algorithm, such as a classic BA algorithm; in this case, the 3D scene point coordinates may adopt an output result of the classic BA algorithm or may be obtained by using the following SAR step.
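The re-projection error above can be evaluated directly. A minimal sketch (illustrative Python, not the patent's reference code; the function names are assumptions) that forms εi=fi/(e3Tfi)−f̃i and the scalar objective ΣTΣ:

```python
import numpy as np

def reprojection_residuals(f, f_tilde):
    """Per-view residuals eps_i = f_i / (e3^T f_i) - f~_i, where f (n x 3)
    holds the homogeneous predictions f_i and f_tilde (n x 3) the measured
    image coordinates with third element 1."""
    return f / f[:, 2:3] - f_tilde

def total_cost(f, f_tilde):
    """The scalar objective Sigma^T Sigma formed by stacking all residuals."""
    eps = reprojection_residuals(f, f_tilde).ravel()
    return float(eps @ eps)
```

In a full pipeline this objective would be minimized over the pose parameters with any nonlinear least-squares solver, which is the arg min ΣTΣ problem described in the text.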


Preferably, the SAR step includes:


performing analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose;


for a current 3D point, calculating a depth of field in the left baseline view ζ:

{circumflex over (z)}ζW=Σ1≤j≤n,j≠ζ ωζ,j dζ(ζ,j)







calculating a depth of field in the right baseline view η:

{circumflex over (z)}ηW=Σ1≤j≤n,j≠η ωj,η dη(j,η)








where dη(j,η)=∥[Rj,ηXj]xtj,η∥/θj,η, and ωζ,j and ωj,η represent weighting coefficients. For example, in analytical reconstruction of a 3D point based on the depth of field in the left baseline view, it is specified that








ωζ,j=θζ,j/(Σ1≤j≤n,j≠ζ θζ,j),





and in this case, coordinates of the current 3D feature point are as follows:

XW=custom character+custom character


Coordinates of all the 3D points can be obtained through analytical reconstruction in this way. Similarly, the coordinates of the 3D points can be reconstructed analytically based on the depth of field in the right baseline view, and an arithmetic mean of the two sets of coordinate values of the 3D points can be calculated.
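As a rough illustration of the weighted depth-of-field averaging, the sketch below (illustrative Python; the pairwise depth estimates d and residuals θ are taken as precomputed inputs, and the back-projection convention x_cam = R·XW + t is an assumption, since the patent's closed-form expression for XW is not reproduced here):

```python
import numpy as np

def weighted_depth(d, theta):
    """z_hat = sum_j w_j d_j with weights w_j = theta_j / sum_j theta_j, i.e.
    the theta-weighted mean of the pairwise depth estimates d_j (one per
    co-view j of the baseline view)."""
    w = np.asarray(theta, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(d, dtype=float))

def backproject(z, X, R, t):
    """Recover world coordinates from a depth z along the normalized image
    ray X of a view with global pose (R, t), assuming the convention
    x_cam = R X_w + t (an assumed convention, not the patent's formula)."""
    return R.T @ (z * np.asarray(X, dtype=float) - np.asarray(t, dtype=float))
```

Weighting by θζ,j gives more influence to view pairs with wider effective baselines, which is the stated rationale for the weighted reconstruction.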


The present disclosure provides a pure pose solution system for a multi-view camera pose and scene, including:


a PRR module configured to perform PRR on all views, and mark views having a pure rotation abnormality to obtain marked views and non-marked views;


a GTL calculation module configured to select one of the non-marked views as a reference view, construct a constraint tr=0, construct a GTL constraint, solve a global translation {circumflex over (t)}, reconstruct a global translation of the marked views according to tr and {circumflex over (t)}, and screen out a correct solution of the global translation; and


an SAR module configured to perform analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose.


Preferably, the PRR module includes the following modules:


a module M11 configured to: for a view i (1≤i≤N) and a view j∈Vi, calculate θi,j=∥[Xj]xRi,jXi∥ by using all image matching point pairs (Xi,Xj) and a relative attitude Ri,j of dual views (i,j) and construct sets Θi,j and Θi=∪j∈ViΘi,j,





where a proportion of elements in Θi that are greater than δ1 is denoted by γi;


a module M12 configured to: if γi<δ2, mark the view i as a pure rotation abnormality view, record a mean value of elements in the set Θi,j as θ̄i,j, let l=argminj∈Vi{θ̄i,j}, and construct a constraint ti=tl;


where if a 3D point XW=(xW, yW, zW)T is visible in n (≤N) views, for i=1, 2, …, n, Vi is a set composed of all co-views of the view i; Xi and Xj represent normalized image coordinates of the point XW on the view i and the view j, respectively; δ1 and δ2 are specified thresholds; Ri and ti represent a global attitude and a global translation of the view i, respectively; Ri,j (=RjRiT) and ti,j represent a relative attitude and a relative translation of the dual views (i,j), respectively; and [Xj]x represents an antisymmetric matrix formed by the vector Xj; and


a module M13 configured to repeat operations of the module M11 to the module M12 for all the views.


Preferably, the GTL calculation module includes:


a module M21 configured to: for a current 3D point, select views (ζ,η)=argmax1≤i,j≤n{θi,j}, where ζ is a left baseline view, and η is a right baseline view;


a module M22 configured to: for all the non-marked views (excluding the reference view), construct a GTL constraint according to the form of Btη+Cti+Dtζ=0;


where normalized image coordinates of the 3D point XW on the view i satisfy Xi˜custom character+custom charactercustom characterYi, ˜ represents an equation under homogeneous coordinates, aT=−([Xη]xcustom character)T[Xη]x, and the superscript T represents transposition of a matrix or vector;


In addition, because the relative translation ti,j has different forms with respect to the global translation, matrices B, C, and D also have different forms correspondingly:


(1) for the target function [Xi]xYi=0 and the relative translation ti,j=Rj(ti−tj): B=[Xi]xcustom character, C=[Xi]xcustom character, D=−(B+C);


(2) for the target function (I3−Xie3T)Yi=0 and the relative translation ti,j=Rj(ti−tj): B=(I3−Xie3T)custom character, C=(I3−Xie3T)custom character, D=−(B+C);


(3) for the target function [Xi]xYi=0 and the relative translation ti,j=tj−Ri,jti: B=[Xi]xcustom character, C=[Xi]xcustom character, D=−(custom character+custom character); and


(4) for the target function (I3−Xie3T)Yi=0 and the relative translation ti,j=tj−Ri,jti: B=(I3−Xie3T)custom character, C=(I3−Xie3T)custom character, D=−(custom character+custom character);


a module M23 configured to repeat operations of the module M21 to the module M22 for other 3D points, construct a linear equation, and solve the global translation {circumflex over (t)};


a module M24 configured to reconstruct the global translation of the marked views according to ti=tl by using {circumflex over (t)} and tr; and


a module M25 configured to screen out the correct solution of the global translation t according to custom character≥0.


Preferably, the system further includes a camera pose optimization module configured to: express image homogeneous coordinates fi of the 3D point XW on the view i as follows:

fi˜custom character+custom character


where ˜ represents an equation under homogeneous coordinates, custom character=∥[Xη]xcustom character∥/custom character, and a re-projection error is defined as follows:







εi=fi/(e3Tfi)−{tilde over (f)}i






where e3T=(0,0,1), {tilde over (f)}i represents image coordinates of a 3D point on the view i and a third element is 1. For all views of the 3D point, a re-projection error vector ε is formed. For all 3D points, an error vector Σ is formed. A target function of global pose optimization is described as arg min ΣTΣ, and an optimization solution of the global pose is calculated accordingly.


Alternatively, the camera pose optimization step is replaced with a classic BA algorithm; in this case, the 3D scene point coordinates adopt an output result of the classic BA algorithm or are obtained by using the SAR step.


Preferably, the SAR module is configured to:


perform analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose;


for a current 3D point, calculate a depth of field in the left baseline view ζ:

{circumflex over (z)}ζW=Σ1≤j≤n,j≠ζ ωζ,j dζ(ζ,j)








calculate a depth of field in the right baseline view η:

{circumflex over (z)}ηW=Σ1≤j≤n,j≠η ωj,η dη(j,η)








where dη(j,η)=∥[Rj,ηXj]xtj,η∥/θj,η, and ωζ,j and ωj,η represent weighting coefficients; and


perform analytical reconstruction to obtain coordinates of all the 3D points based on the depth of field in the left baseline view; or perform analytical reconstruction to obtain the coordinates of the 3D points based on the depth of field in the right baseline view; or calculate an arithmetic mean of the two sets of coordinate values of the 3D points.


Compared with the prior art, the present disclosure has the following beneficial effects.


The present disclosure solves the bottleneck problem of traditional initial value and optimization methods and can substantially improve the robustness and computational speed of the camera pose and scene structure reconstruction.





BRIEF DESCRIPTION OF THE DRAWINGS

Other features, objectives, and advantages of the present disclosure will become more apparent by reading the detailed description of non-limiting embodiments with reference to the following accompanying drawings.


FIGURE is a flowchart of the present disclosure.





DETAILED DESCRIPTION OF THE EMBODIMENTS

The present disclosure is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present disclosure, but they do not limit the present disclosure in any way. It should be noted that several variations and improvements can also be made by a person of ordinary skill in the art without departing from the ideas of the present disclosure. These all fall within the protection scope of the present disclosure.


As shown in FIGURE, the present disclosure provides a pure pose solution method for a multi-view camera pose and scene, where the method uses initial attitude values of views as an input and includes the following steps:


PRR step: Perform PRR on all views and mark views having a pure rotation abnormality to obtain marked views and non-marked views.


GTL calculation step: Select one of the non-marked views as a reference view, construct a constraint tr=0, construct a GTL constraint, solve a global translation {circumflex over (t)}, reconstruct a global translation of the marked views according to tr and {circumflex over (t)}, and screen out a correct solution of the global translation.


SAR step: Perform analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose.


The PRR step includes the following steps:


Step 1: For a view i (1≤i≤N) and a view j∈Vi, calculate θi,j=∥[Xj]xRi,jXi∥ by using all image matching point pairs (Xi,Xj) and a relative attitude Ri,j of dual views (i,j) and construct sets Θi,j and Θi=∪j∈ViΘi,j,





where a proportion of elements in Θi that are greater than δ1 is denoted by γi.


Step 2: If γi<δ2, mark the view i as a pure rotation abnormality view, record a mean value of elements in the set Θi,j as θ̄i,j, let l=argminj∈Vi{θ̄i,j}, and construct a constraint ti=tl.


If a 3D point XW=(xW, yW, zW)T is visible in n (≤N) views, for i=1, 2, …, n, Vi is a set composed of all co-views of the view i; Xi and Xj represent normalized image coordinates of the point XW on the view i and the view j, respectively; δ1 and δ2 are specified thresholds; Ri and ti represent a global attitude and a global translation of the view i, respectively; Ri,j (=RjRiT) and ti,j represent a relative attitude and a relative translation of the dual views (i,j), respectively; and [Xj]x represents an antisymmetric matrix formed by the vector Xj.


Step 3: Repeat step 1 to step 2 for all the views.


The GTL calculation step includes the following steps:


Step 1: For a current 3D point, select views (ζ,η)=argmax1≤i,j≤n{θi,j}, where ζ is a left baseline view and η is a right baseline view.


Step 2: For all the non-marked views (excluding the reference view), construct a GTL constraint according to the form of Btη+Cti+Dtζ=0.


Normalized image coordinates of the 3D point XW on the view i satisfy Xi˜custom character+custom charactercustom characterYi, ˜ represents an equation under homogeneous coordinates, aT=−([Xη]xcustom character)T[Xη]x, and the superscript T represents transposition of a matrix or vector. To solve the global translation linearly, different target function forms can be defined, for example, (I3−Xie3T)Yi=0 and [Xi]xYi=0, where I3 represents the 3×3 identity matrix and e3=(0,0,1)T represents its third column vector. In addition, because the relative translation ti,j can be expressed in different forms with respect to the global translation, for example, ti,j=Rj(ti−tj) and ti,j=tj−Ri,jti, the matrices B, C, and D correspondingly take different forms:


(1) for the target function [Xi]x Yi=0 and the relative translation ti,j=Rj(ti−tj): B=[Xi]xcustom character, C=[Xi]xcustom character, D=−(B+C);


(2) for the target function (I3−Xie3T)Yi=0 and the relative translation ti,j=Rj(ti−tj): B=(I3−Xie3T)custom character, C=(I3−Xie3T)custom character, D=−(B+C);


(3) for the target function [Xi]xYi=0 and the relative translation ti,j=tj−Ri,jti: B=[Xi]xcustom character, C=[Xi]xcustom character, D=−(custom character+custom character); and


(4) for the target function (I3−Xie3T)Yi=0 and the relative translation ti,j=tj−Ri,jti: B=(I3−Xie3T)custom character, C=(I3−Xie3T)custom character, D=−(custom character+custom character).


Step 3: Repeat step 1 to step 2 for other 3D points, construct a linear equation, and solve the global translation {circumflex over (t)}.


Step 4: Reconstruct the global translation of the marked views according to ti=tl by using {circumflex over (t)} and tr.


Step 5: Screen out the correct solution of the global translation t according to custom character≥0.


An optional camera pose optimization step is added between the GTL calculation step and the SAR step:


Express image homogeneous coordinates fi of the 3D point XW on the view i as follows:

fi˜custom character+custom character


where ˜ represents an equation under homogeneous coordinates, custom character=∥[Xη]xcustom character∥/custom character, and a re-projection error is defined as follows:







εi=fi/(e3Tfi)−{tilde over (f)}i






where {tilde over (f)}i represents image coordinates of a 3D point on the view i and a third element is 1. For all views of the 3D point, a re-projection error vector ε is formed. For all 3D points, an error vector Σ is formed. A target function of global pose optimization is described as arg min ΣTΣ, and an optimization solution of the global pose is calculated accordingly. It should be noted that the camera pose optimization step may be replaced with another optimization algorithm, such as a classic BA algorithm; in this case, the 3D scene point coordinates may adopt an output result of the classic BA algorithm or may be obtained by using the SAR step.


The SAR step includes:


performing analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose.


For a current 3D point, a depth of field in the left baseline view ζ is calculated as follows:

{circumflex over (z)}ζW=Σ1≤j≤n,j≠ζ ωζ,j dζ(ζ,j)








A depth of field in the right baseline view η is calculated as follows:

{circumflex over (z)}ηW=Σ1≤j≤n,j≠η ωj,η dη(j,η)








where dη(j,η)=∥[Rj,ηXj]xtj,η∥/θj,η, and ωζ,j and ωj,η represent weighting coefficients. For example, in analytical reconstruction of a 3D point based on the depth of field in the left baseline view, it is specified that








ωζ,j=θζ,j/(Σ1≤j≤n,j≠ζ θζ,j),





and in this case, coordinates of the current 3D feature point are as follows:

XW=custom character+custom character


Coordinates of all the 3D points can be obtained through analytical reconstruction in this way. Similarly, the coordinates of the 3D points can be reconstructed analytically based on the depth of field in the right baseline view, and an arithmetic mean of the two sets of coordinate values of the 3D points can be calculated.


Based on the foregoing pure pose solution method for a multi-view camera pose and scene, the present disclosure further provides a pure pose solution system for a multi-view camera pose and scene, including:


a PRR module configured to perform PRR on all views, and mark views having a pure rotation abnormality, to obtain marked views and non-marked views;


a GTL calculation module configured to select one of the non-marked views as a reference view, construct a constraint tr=0, construct a GTL constraint, solve a global translation {circumflex over (t)}, reconstruct a global translation of the marked views according to tr and {circumflex over (t)}, and screen out a correct solution of the global translation; and


an SAR module configured to perform analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose.


Those skilled in the art are aware that, in addition to being implemented as pure computer-readable program code, the system and each apparatus, module, and unit thereof provided in the present disclosure can realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, or embedded microcontrollers by logically programming the method steps. Therefore, the system and each apparatus, module, and unit thereof provided in the present disclosure can be regarded as a hardware component, and the apparatuses, modules, and units included therein for realizing various functions can be regarded as structures within the hardware component; alternatively, the apparatuses, modules, and units for realizing the functions can be regarded as software programs for implementing the method or as structures within the hardware component.


The specific embodiments of the present disclosure are described above. It should be understood that the present disclosure is not limited to the above specific implementations, and a person skilled in the art can make various variations or modifications within the scope of the claims without affecting the essence of the present disclosure. The embodiments in the present disclosure and features in the embodiments may be arbitrarily combined with each other in a non-conflicting manner.

Claims
  • 1. A pure pose solution method for a multi-view camera pose and scene, comprising: a pure rotation recognition (PRR) step, wherein the PRR step comprises performing a PRR on views, and marking views having a pure rotation abnormality of the views to obtain marked views indicating pure rotation abnormality and non-marked views;a global translation linear (GTL) calculation step, wherein the GTL calculation step comprises selecting one of the non-marked views as a reference view, constructing a constraint tr=0, constructing a GTL constraint, solving a global translation {circumflex over (t)}, reconstructing a global translation of the marked views according to tr and {circumflex over (t)}, and screening out a correct solution of the global translation; anda structure analytical reconstruction (SAR) step, wherein the SAR step comprises performing an analytical reconstruction on coordinates of 3D points according to a correct solution of a global pose,wherein the PRR step further comprises:step 1: for a view i (1≤i≤N) and a view j (j∈Vi), calculating θi,j=∥[Xj]xRi,jXi∥ by using image matching point pairs (Xi,Xj) and a relative attitude Ri,j of dual views (i,j), and constructing a set Θi,j and a set
  • 2. The pure pose solution method according to claim 1, wherein the PRR step further comprises: step 2: when γi<δ2, marking the view i as a pure rotation abnormality view, recording a mean value of elements in the set Θi,j as θi,j, letting
  • 3. The pure pose solution method according to claim 2, wherein the GTL calculation step comprises: step 3: for a current 3D point, selecting views
  • 4. The pure pose solution method according to claim 3, further comprising a camera pose optimization step between the GTL calculation step and the SAR step, wherein the camera pose optimization step comprises: expressing image homogeneous coordinates fi of the 3D point XW on the view i, wherein fi˜+wherein ˜ represents the equation under homogeneous coordinates, =∥[Xη]x∥/, and a re-projection error is defined, wherein
  • 5. The pure pose solution method according to claim 3, wherein the SAR step further comprises: performing an analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose;for the current 3D point, calculating a depth of field in the left baseline view , wherein
  • 6. An application specific integrated circuit comprising a pure pose solution system for a multi-view camera pose and scene, comprising: a pure rotation recognition (PRR) module, wherein the PRR module is configured to perform a PRR on views, and mark views having a pure rotation abnormality of the views to obtain marked views and non-marked views;a global translation linear (GTL) calculation module, wherein the GTL calculation module is configured to select one of the non-marked views as a reference view, construct a constraint tr=0, construct a GTL constraint, solve a global translation {circumflex over (t)}, reconstruct a global translation of the marked views according to tr and {circumflex over (t)}, and screen out a correct solution of the global translation; anda structure analytical reconstruction (SAR) module, wherein the SAR module is configured to perform an analytical reconstruction on coordinates of 3D points according to a correct solution of a global pose,wherein the PRR module further comprises:a module M1, wherein the module M1 is configured to: for a view i (1≤i≤N) and a view j (j∈Vi), calculate θi,j=∥[Xj]xRi,jXi∥ by using image matching point pairs (Xi,Xj) and a relative attitude Ri,j of dual views (i,j) and construct a set Θi,j and a set
  • 7. The pure pose solution system according to claim 6, wherein the PRR module further comprises: a module M2, wherein the module M2 is configured to: when γ1<δ2, mark the view i as a pure rotation abnormality view, record a mean value of elements in the set Θi,j as θi,j, let
  • 8. The pure pose solution system according to claim 7, wherein the GTL calculation module comprises: a module M4, wherein the module M4 is configured to: for a current 3D point, select views
  • 9. The pure pose solution system according to claim 8, further comprising a camera pose optimization module, wherein the camera pose optimization module is configured to: express image homogeneous coordinates fi of the 3D point XW on the view i, wherein fi˜+wherein ˜ represents the equation under homogeneous coordinates, =∥[Xη]x∥/, and a re-projection error is defined, wherein
  • 10. The pure pose solution system according to claim 8, wherein the SAR module is further configured to: perform an analytical and weighted reconstruction on a multi-view 3D scene structure according to a camera pose;for the current 3D point, calculate a depth of field in the left baseline view , wherein
  • 11. The pure pose solution system according to claim 6, wherein the PRR module, the GTL calculation module, and the SAR module further comprise at least one of computer-readable program code, a logic gate, a switch, an application-specific integrated circuit, a programmable logic controller, or an embedded microcontroller.
Priority Claims (1)
Number Date Country Kind
201911267354.1 Dec 2019 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2019/130316 12/31/2019 WO
Publishing Document Publishing Date Country Kind
WO2021/114434 6/17/2021 WO A
US Referenced Citations (6)
Number Name Date Kind
11704787 Dwivedi Jul 2023 B2
20170083751 Tuzel Mar 2017 A1
20180071032 de Almeida Barreto Mar 2018 A1
20190130590 Volochniuk et al. May 2019 A1
20220198695 Luo Jun 2022 A1
20230186519 Wu Jun 2023 A1
Foreign Referenced Citations (5)
Number Date Country
106289188 Jan 2017 CN
106408653 Feb 2017 CN
107507277 Dec 2017 CN
108038902 May 2018 CN
109993113 Jul 2019 CN
Related Publications (1)
Number Date Country
20230041433 A1 Feb 2023 US