Method for implementing high-precision orientation and evaluating orientation precision of large-scale dynamic photogrammetry system

Information

  • Patent Grant
  • Patent Number
    9,857,172
  • Date Filed
    Monday, September 25, 2017
  • Date Issued
    Tuesday, January 2, 2018
Abstract
The present invention provides a method for implementing high-precision orientation and evaluating orientation precision of a large-scale dynamic photogrammetry system, including steps: a) selecting a scale bar, arranging code points at two ends of the scale bar, and performing length measurement on the scale bar; b) evenly dividing a measurement space into multiple regions, sequentially placing the scale bar in each region, and photographing the scale bar by using left and right cameras; c) performing preliminary relative positioning and orientation of the photogrammetry system according to the images shot by the left and right cameras; d) limiting self-calibration bundle adjustment by using multiple length constraints, the adjustment parameters including principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameters and spatial point coordinates; and e) performing traceable evaluation of orientation precision of the photogrammetry system. The present invention can effectively reduce the relative error in length measurement.
Description
FIELD OF THE INVENTION

The present invention relates to a method for implementing high-precision orientation and evaluating orientation precision, and more particularly to a method for implementing high-precision orientation and evaluating orientation precision of a large-scale dynamic photogrammetry system.


BACKGROUND OF THE INVENTION

A dynamic photogrammetry system includes two or more cameras that can capture images at a high speed. The cameras synchronously capture images of a target point to be measured. By calibrating the projection imaging models of the cameras, spatial light rays may be restored from image plane points, and spatial coordinates of the point to be measured may be calculated from the intersection of the spatial light rays. Imaging models of cameras include interior and exterior orientation parameter types. Interior orientation parameters include the principal point, the principal distance and various distortion parameters of different orders. Exterior orientation parameters include the position and directional angles of the camera in a spatial coordinate system. Solving for the exterior orientation parameters constitutes positioning and orientation of the photogrammetry system.


Currently, it is generally recognized that a self-calibration bundle adjustment method can provide the highest positioning and orientation precision, because this technology considers the influence of the interior orientation parameters on the measurement result during adjustment, and the obtained interior and exterior orientation parameter results are optimal results that match the measurement environment and the measurement network. The most commonly used self-calibration methods in the field of stereo vision measurement are Tsai's two-stage calibration method and Zhang Zhengyou's plane calibration method. Commonly used calibration objects include a three-dimensional calibration object, a planar checkerboard calibration plate, a flat dot calibration plate, and so on. The essence of such calibration methods is to generate a measurement network having several control points through relative movement of the calibration object and the camera, so as to implement self-calibration bundle adjustment of a single camera, and then to solve the orientation relationship between the two cameras in a same coordinate system.


Such methods have been applied to machine vision, structured light measurement, and high-precision large-scale stereo vision measurement systems. Such self-calibration methods are suitable for small measurement ranges (generally smaller than 1 m*1 m), because the precision and the size of the calibration object are inversely proportional to each other. When such methods are used for structured light measurement in a small space or microscopic measurement, a desirable calibration precision can be obtained.


Such self-calibration methods have the following defects: the focus of the adjustment process is the interior orientation parameters, and there is no direct optimization or evaluation of the orientation relationship between cameras; the coordinate precision of target points on the calibration plate that are used as control points leads to a system error; differences between the target points on the calibration plate and the measurement points in material and imaging quality lead to a system error; and apart from image plane error or spatial coordinate error estimation, there are no reliable objective spatial evaluation indicators. Therefore, such methods are not suitable for large-scale measurement occasions having high requirements on precision and precision traceability.


SUMMARY OF THE INVENTION

To solve the above-mentioned technical problem, the present invention provides a method for implementing high-precision orientation and evaluating orientation precision of a large-scale dynamic photogrammetry system, including steps:


a) selecting a scale bar, arranging code points at two ends of the scale bar, and performing length measurement on the scale bar;


b) evenly dividing a measurement space into multiple regions, sequentially placing the scale bar in each region, and photographing the scale bar by using left and right cameras, where continuous photographing is carried out in the following manner:


b1) adjusting left and right cameras so that fields of view of the left and right cameras fully cover the scale bar, synchronously moving the left and right cameras along a perpendicular direction of a line connecting the code points at the two ends of the scale bar, and at the same time rotating the scale bar by using the perpendicular direction of the line connecting the code points at the two ends of the scale bar as an axis of rotation; and


b2) adjusting the left and right cameras, synchronously moving the left and right cameras along the perpendicular direction of the line connecting the code points at the two ends of the scale bar in a direction opposite to that in the step b1), and at the same time rotating the scale bar in a direction opposite to that in the step b1);


c) performing preliminary relative positioning and orientation of the photogrammetry system according to images shot by the left and right cameras;


d) limiting self-calibration bundle adjustment by using multiple length constraints, adjustment parameters including principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameter and spatial point coordinate;


e) performing traceable evaluation of orientation precision of the photogrammetry system.


Preferably, a distance between the left and right cameras is the same as a region length according to which the measurement space is evenly divided.


Preferably, the photographing process of the step b) is repeated multiple times.


Preferably, the step c includes steps:


c1) determining a solution space of a fundamental matrix by using a five-point method: automatically selecting five imaging points in a position of the scale bar, where positions of the imaging points are a center and four vertexes of the shot image, and the automatic selection is performed using the following algorithm:

abs(x_li)+abs(y_li)+abs(x_ri)+abs(y_ri) → min
(x_li)+(y_li)+(x_ri)+(y_ri) → max
(−x_li)+(y_li)+(−x_ri)+(y_ri) → max
(−x_li)+(−y_li)+(−x_ri)+(−y_ri) → max
(x_li)+(−y_li)+(x_ri)+(−y_ri) → max


where [x_li y_li] represents the image plane coordinates of the ith point captured by the left camera, and [x_ri y_ri] represents the image plane coordinates of the ith point captured by the right camera; and


c2) obtaining a solution of an essential matrix by using an elimination method.


Preferably, the obtaining of the solution of the essential matrix in the step c2) includes the following key steps:


c21) building a tenth-degree polynomial of an unknown number w;


c22) solving a real root of the tenth-degree polynomial in the step c21; and


c23) judging the solution of the essential matrix.


Preferably, in a method of building the tenth-degree polynomial of w in the step c21, a Toeplitz matrix is used:


c211) describing all polynomials as an array:

a_m·x^m + a_{m−1}·x^{m−1} + … + a_2·x^2 + a_1·x + a_0  ↔  A = [a_m a_{m−1} … a_2 a_1 a_0]^T
b_n·x^n + b_{n−1}·x^{n−1} + … + b_2·x^2 + b_1·x + b_0  ↔  B = [b_n b_{n−1} … b_2 b_1 b_0]^T; and


c212) performing programming by using the Toeplitz matrix to compute polynomial multiplication, with a formula being:






$$
C = T\,B =
\begin{bmatrix}
a_m & 0 & 0 & \cdots & 0 \\
a_{m-1} & a_m & 0 & \cdots & 0 \\
a_{m-2} & a_{m-1} & a_m & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & a_0 & a_1 & \cdots & a_m \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & a_0
\end{bmatrix}
\begin{bmatrix} b_n \\ b_{n-1} \\ \vdots \\ b_1 \\ b_0 \end{bmatrix}
$$







where T is a Toeplitz matrix corresponding to a polynomial A, a quantity of columns in T is equal to a quantity of elements in B, and a quantity of rows in T is equal to (m+1)+(n+1)−1.


Preferably, the step c212 may further include: performing programming by using the Toeplitz matrix to compute polynomial division, specifically including the following steps:


c213) describing n as n=T(d)q+r by using a Toeplitz matrix of d, where it is set that a numerator polynomial is n, a denominator polynomial is d, a quotient is q, and a remainder is r; and


c214) calculating an optimal solution of the quotient of the polynomial division: q = (TᵀT)⁻¹Tᵀn.


Preferably, a method for solving the real root of the tenth-degree polynomial in the step c22 is:


c221) building a Sturm sequence of the tenth-degree polynomial, with a formula being:

p_0(x) = p(x)
p_1(x) = p′(x)
p_2(x) = −rem(p_0, p_1)
p_3(x) = −rem(p_1, p_2)
…
0 = −rem(p_{m−1}, p_m)


where rem represents calculating the remainder of the polynomial, and P(x) is a known tenth-degree polynomial;


c222) searching all single intervals of the polynomial by recursive dichotomy in combination with the Sturm's Theorem; and


c223) after the single intervals are obtained, calculating a numerical solution of a real root of |C(w)|=0 in each single interval by dichotomy.


Preferably, the step d includes the following steps:


d1) obtaining a camera imaging model;


d2) building an error equation of the photogrammetry system;


d3) performing adaptive proportional adjustment of error equation terms;


d4) performing a partitioning operation on the normal equation;


d5) obtaining self-calibration bundle adjustment with length constraints; and


d6) evaluating precision of the photogrammetry system.


Preferably, the step e performing traceable evaluation of orientation precision of the photogrammetry system is performed by using direct linear transformation, and includes the following steps:


e1) obtaining an error equation of a single imaging point;


e2) when the photogrammetry system includes two cameras, obtaining an error equation set corresponding to a spatial point [X Y Z];


e3) obtaining a corresponding normal equation and coordinate correction amount, and optimizing spatial point coordinates through multiple iterations;


e4) obtaining a measured value of a corresponding length of the scale bar through the step e3;


e5) obtaining an average value of measurement results of lengths of the scale bar in all orientations;


e6) performing scaling by using a ratio of an average length to a gage length as an orientation result, and performing an uncertainty analysis on the length measurement result, to obtain an orientation precision evaluation;


e7) obtaining a quality parameter of a measurement instrument;


e8) calculating a relative error of each length; and


e9) calculating an uncertainty of the relative error in length measurement.
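The length-statistics portion of these steps (e5, e6, e8, e9) can be sketched in a few lines; the function name and the sample lengths below are illustrative, not from the patent:

```python
import statistics

def evaluate_orientation_precision(measured_lengths, gauge_length):
    # e5) average the measured scale-bar lengths over all orientations
    mean_len = statistics.mean(measured_lengths)
    # e6) scale the orientation result by the gauge-to-average ratio
    scale = gauge_length / mean_len
    # e8) relative error of each (rescaled) length measurement
    rel_errors = [(L * scale - gauge_length) / gauge_length
                  for L in measured_lengths]
    # e9) uncertainty of the relative error (sample standard deviation)
    uncertainty = statistics.stdev(rel_errors)
    return scale, rel_errors, uncertainty
```

By construction the rescaled relative errors average to zero, so the uncertainty of step e9) captures their spread.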


Based on the above, according to the method for high-precision positioning and orientation of a large-scale multi-camera photogrammetry system of the present invention, the photogrammetry system remains stable, positioning and orientation are completed at the measurement site, the calibration object is convenient to process and measure, and the calibration process is simple. Once the orientation process is completed, traceable evaluation of the length measurement error of the photogrammetry system can be provided. The present invention achieves the following technical effects:


1. In the present invention, the measurement space is divided into regions so that the distance between the left and right cameras is equal to the region length of the divided space, and the gauge attitude and the camera position are adjusted at the same time during the photographing process, making the length orientation field more reliable.


2. The present invention provides a method for numerical computation of relative orientation by using a five-point method and a technique for feasible-solution judgment by using spatially orthogonal length constraints.


3. Multi-camera self-calibration bundle adjustment is indirect adjustment with multiple constraint conditions. Reference lengths at multiple positions and attitudes are used as three-dimensional constraint conditions to control the entire adjustment, so that the correlation between interior and exterior orientation parameters can be decoupled, thereby implementing full-parameter self-calibration bundle adjustment (including principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameters and spatial point coordinates).


4. When the orientation process is completed, traceable precision evaluation of the entire positioning and orientation process is performed by using a statistical result of multiple length measurement errors.





BRIEF DESCRIPTION OF THE DRAWINGS

To describe the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings that need to be used in the embodiments or the prior art are briefly introduced. It would be obvious that the drawings in the following description are merely some embodiments of the present invention, and those of ordinary skill in the art may further obtain other drawings according to these drawings without creative efforts.



FIG. 1 is an experimental assembly of a method for high-precision positioning and orientation of a large-scale multi-camera photogrammetry system according to the present invention;



FIG. 2 is a schematic experimental diagram of the system;



FIG. 3 shows images of a scale bar in two cameras in the system; and



FIG. 4 is a diagram showing the relative positions between scale bars at different positions and attitudes in space and two cameras.





DETAILED DESCRIPTION OF THE INVENTION

The objectives and functions of the present invention and methods for achieving such objectives and functions will be illustrated through exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below, but may be implemented in different forms. The essence of this specification is merely for the purpose of helping those skilled in the art to comprehensively understand the specific details of the present invention.


The present invention provides a large-scale dynamic photogrammetry system. The system includes a camera 101, a power source 102, a network cable 103, a signal line 104 and a host computer 105. The power source 102 supplies power to the camera 1, the camera 2 and the host computer 105. A network cable 103a is connected between the camera 1 and the host computer 105, and a network cable 103b is connected between the camera 2 and the host computer 105, for providing communication between the cameras and the host computer. A signal line 104a is connected to the camera 1 and the host computer 105, and a signal line 104b is connected to the camera 2 and the host computer 105, for transmitting signals between the host computer 105 and the camera 101. The host computer 105 is further equipped with a USB signal adapter. The host computer 105 sends a synchronization signal via the USB signal adapter 106, and controls the two cameras 101 via the signal line 104 to synchronously capture images.



FIG. 2 is a schematic experimental diagram of the system. The measurement space 201 mainly includes two cameras 202 and a scale bar 203. The measurement space 201 provides an experimental verification environment for the method for implementing high-precision orientation and evaluating orientation precision of the system.


The present invention provides a method for implementing high-precision orientation and evaluating orientation precision of a large-scale dynamic photogrammetry system, including steps:


a) selecting a scale bar, arranging code points at two ends of the scale bar, and performing length measurement on the scale bar.


b) evenly dividing a measurement space into multiple regions, sequentially placing the scale bar in each region, and photographing the scale bar by using left and right cameras, where continuous photographing is carried out in the following manner:


b1) adjusting left and right cameras so that fields of view of the left and right cameras fully cover the scale bar, synchronously moving the left and right cameras along a perpendicular direction of a line connecting the code points at the two ends of the scale bar, and at the same time rotating the scale bar by using the perpendicular direction of the line connecting the code points at the two ends of the scale bar as an axis of rotation; and


b2) adjusting the left and right cameras, synchronously moving the left and right cameras along the perpendicular direction of the line connecting the code points at the two ends of the scale bar in a direction opposite to that in the step b1), and at the same time rotating the scale bar in a direction opposite to that in the step b1);


c) performing preliminary relative positioning and orientation of the photogrammetry system according to images shot by the left and right cameras, where in the photographing process, a distance between the left and right cameras is the same as a region length according to which the measurement space is evenly divided, and preferably, in some embodiments, the photographing process of the step b) is repeated multiple times.


d) limiting self-calibration bundle adjustment by using multiple length constraints, adjustment parameters including principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameter and spatial point coordinate.


e) performing traceable evaluation of orientation precision of the photogrammetry system.


Based on the method of the step b, the scale bar of the present invention is designed and calibrated through the following operations:


The main body of the scale bar is a carbon fiber bar, and two backlight reflection code points are fixed at two ends of the bar as endpoints for length measurement. During calibration, the scale bar at a particular position is imaged in two cameras. As shown in FIG. 3, white point areas are backlight reflection points 301 adhered at two ends of the scale bar.


As shown in FIG. 4, the three-dimensional space includes coordinates 402 of the two cameras and multiple scale bars 401 at different positions, and backlight reflection points 403 are adhered at two ends of the scale bar and used for calibrating the length of the scale bar. The number of images shot should be not less than 60.


Step c:


According to an embodiment of the present invention, the step c includes the following steps:


c1) determining a solution space of a fundamental matrix by using a five-point method: automatically selecting five imaging points in a position of the scale bar, where positions of the imaging points are a center and four vertexes of the shot image, and the automatic selection is performed using the following algorithm:

abs(x_li)+abs(y_li)+abs(x_ri)+abs(y_ri) → min
(x_li)+(y_li)+(x_ri)+(y_ri) → max
(−x_li)+(y_li)+(−x_ri)+(y_ri) → max
(−x_li)+(−y_li)+(−x_ri)+(−y_ri) → max
(x_li)+(−y_li)+(x_ri)+(−y_ri) → max  (3.1)


where [x_li y_li] represents the image plane coordinates of the ith point captured by the left camera, and [x_ri y_ri] represents the image plane coordinates of the ith point captured by the right camera;


The point satisfying the above formulas is a point closest to the expected position.
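As an illustration of the selection criteria in formula (3.1), a minimal sketch, assuming principal-point-centred image coordinates stored as N×2 arrays (all names here are hypothetical, not from the patent):

```python
import numpy as np

def select_five_points(left_pts, right_pts):
    # left_pts, right_pts: N x 2 arrays of corresponding image-plane points,
    # assumed centred on the principal point so that (0, 0) is the image centre.
    xl, yl = left_pts[:, 0], left_pts[:, 1]
    xr, yr = right_pts[:, 0], right_pts[:, 1]
    return [
        int(np.argmin(np.abs(xl) + np.abs(yl) + np.abs(xr) + np.abs(yr))),  # centre criterion
        int(np.argmax(xl + yl + xr + yr)),    # first corner criterion of (3.1)
        int(np.argmax(-xl + yl - xr + yr)),   # second corner criterion
        int(np.argmax(-xl - yl - xr - yr)),   # third corner criterion
        int(np.argmax(xl - yl + xr - yr)),    # fourth corner criterion
    ]
```

Each criterion picks the correspondence pushed farthest toward one expected position (the centre or one of the four corners).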


The corresponding image point satisfies:

X′ᵀEX = 0  (3.2)


where E is an essential matrix, and X and X′ are respectively coordinates obtained by normalizing respective intrinsic parameters of corresponding points on left and right images.


Five points are selected and substituted into the above formula, to obtain the following equation set:










$$
\begin{bmatrix}
X'_1X_1 & X'_1Y_1 & X'_1Z_1 & Y'_1X_1 & Y'_1Y_1 & Y'_1Z_1 & Z'_1X_1 & Z'_1Y_1 & Z'_1Z_1 \\
X'_2X_2 & X'_2Y_2 & X'_2Z_2 & Y'_2X_2 & Y'_2Y_2 & Y'_2Z_2 & Z'_2X_2 & Z'_2Y_2 & Z'_2Z_2 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
X'_5X_5 & X'_5Y_5 & X'_5Z_5 & Y'_5X_5 & Y'_5Y_5 & Y'_5Z_5 & Z'_5X_5 & Z'_5Y_5 & Z'_5Z_5
\end{bmatrix}
\begin{bmatrix} e_{11} \\ e_{12} \\ e_{13} \\ e_{21} \\ e_{22} \\ e_{23} \\ e_{31} \\ e_{32} \\ e_{33} \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\tag{3.3}
$$







A fundamental system of solutions of the above formula is obtained by singular value decomposition (SVD), thus extending a solution space of the essential matrix:

E = wE_w + xE_x + yE_y + E_z  (3.4)
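A minimal sketch of how the four-dimensional solution space of (3.4) can be extracted by SVD, assuming the five normalized correspondences are given as 5×3 arrays (function and variable names are illustrative):

```python
import numpy as np

def essential_solution_space(XL, XR):
    # Rows of the 5x9 matrix of (3.3): coefficients of e11..e33 in X'^T E X = 0,
    # with X from the left image and X' from the right image.
    M = np.array([np.outer(xr, xl).ravel() for xl, xr in zip(XL, XR)])
    _, _, Vt = np.linalg.svd(M)
    # The last four right singular vectors span the null space,
    # giving the basis matrices Ew, Ex, Ey, Ez of (3.4).
    return Vt[5:].reshape(4, 3, 3)
```

Every matrix in the returned basis annihilates all five correspondences, as required by (3.2).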


c2) obtaining a solution of an essential matrix by using an elimination method;


The essential matrix has the following two natures:

|E| = 0
2EEᵀE − tr(EEᵀ)E = 0  (3.5)


Ten equations in the unknown numbers w, x, y may be obtained by using the above formulas. The ten equations are rewritten by using x, y, x², xy, y², x³, x²y, xy² and y³ as unknown variables and w as a known variable, to eliminate x and y. The following equation set is obtained:












$$
C(w)_{10\times 10}
\begin{bmatrix} 1 & x & y & x^2 & xy & y^2 & x^3 & x^2y & xy^2 & y^3 \end{bmatrix}^T
= 0
\tag{3.6}
$$







where C(w) is a polynomial matrix of w.


The homogeneous linear equation set described by the formula (3.6) has a non-zero solution, and therefore the determinant of C(w) is zero. Theoretically, the polynomial expressed by the determinant of C(w) is described, and the roots of this polynomial are solved, thus obtaining numerical solutions of w and C(w); further, the values of x and y can be obtained by calculating a non-zero solution of the formula (3.6). This inevitably involves describing the determinant of a 10th-order symbolic matrix as a polynomial, and solving the real roots of a higher-degree polynomial.


According to an embodiment of the present invention, the obtaining of the solution of the essential matrix in the step c2) includes the following key steps:


c21) building a tenth-degree polynomial of an unknown number w;


A solution of the determinant is calculated by programming using the following method which is suitable for both symbolic polynomial and numerical operations:


m=1;
for k=1:n−1
    for i=k+1:n
        for j=k+1:n
            A(i,j)=(A(k,k)*A(i,j)−A(i,k)*A(k,j))/m;
        end
    end
    m=A(k,k);
end
return A(n,n);


The above algorithm involves polynomial multiplication and division.
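The elimination loop above is the fraction-free (Bareiss-style) determinant computation; a runnable numeric sketch using exact rational arithmetic (illustrative, not the patent's symbolic-polynomial version):

```python
from fractions import Fraction

def bareiss_det(A):
    # Fraction-free elimination as in the loop above: every division by m
    # is exact, so no rounding error accumulates.
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    m = Fraction(1)
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[k][k] * A[i][j] - A[i][k] * A[k][j]) / m
        m = A[k][k]
    return A[n - 1][n - 1]
```

Replacing the rational entries by polynomial objects with the multiplication and division routines of steps c211)–c214) yields the symbolic determinant of C(w).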


According to an embodiment of the present invention, in a method of building the tenth-degree polynomial of w in the step c21, programming and computation are performed by using a Toeplitz matrix, with a method being:


c211) describing all polynomials as an array:

a_m·x^m + a_{m−1}·x^{m−1} + … + a_2·x^2 + a_1·x + a_0  ↔  A = [a_m a_{m−1} … a_2 a_1 a_0]^T
b_n·x^n + b_{n−1}·x^{n−1} + … + b_2·x^2 + b_1·x + b_0  ↔  B = [b_n b_{n−1} … b_2 b_1 b_0]^T  (3.7)


c212) performing programming by using the Toeplitz matrix to compute polynomial multiplication, with a formula being:









$$
C = T\,B =
\begin{bmatrix}
a_m & 0 & 0 & \cdots & 0 \\
a_{m-1} & a_m & 0 & \cdots & 0 \\
a_{m-2} & a_{m-1} & a_m & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & a_0 & a_1 & \cdots & a_m \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & a_0
\end{bmatrix}
\begin{bmatrix} b_n \\ b_{n-1} \\ \vdots \\ b_1 \\ b_0 \end{bmatrix}
\tag{3.8}
$$







where T is a Toeplitz matrix corresponding to a polynomial A, a quantity of columns in T is equal to a quantity of elements in B, and a quantity of rows in T is equal to (m+1)+(n+1)−1.
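A small numeric sketch of formula (3.8), assuming coefficient arrays ordered from the highest degree down (function name illustrative):

```python
import numpy as np

def poly_mul_toeplitz(A, B):
    # A = [a_m ... a_0], B = [b_n ... b_0]; per (3.8), T has len(B) columns
    # and (m+1)+(n+1)-1 rows, and C = T @ B gives the product coefficients.
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    rows = len(A) + len(B) - 1
    T = np.zeros((rows, len(B)))
    for col in range(len(B)):
        T[col:col + len(A), col] = A  # shift A down by one row per column
    return T @ B
```

For example, (x+2)(x+3) = x²+5x+6 corresponds to multiplying the coefficient arrays [1, 2] and [1, 3].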


According to an embodiment of the present invention, the step c212 may further include: performing programming by using the Toeplitz matrix to compute polynomial division, specifically including the following steps:


c213) describing n as n=T(d)q+r by using a Toeplitz matrix of d,


where it is set that a numerator polynomial is n, a denominator polynomial is d, a quotient is q, and a remainder is r; and


c214) calculating an optimal solution of the quotient of the polynomial division: q = (TᵀT)⁻¹Tᵀn.


The method of solving the polynomial division by using the Toeplitz matrix does not have the error accumulation phenomenon of a long division method and can avoid the ill-conditioned problem of division of a large number by a small number. The obtained result is an optimal solution in the least squares sense.
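Steps c213)–c214) can be sketched as follows; `np.linalg.lstsq` is used here as a numerically stable equivalent of the normal-equation form q = (TᵀT)⁻¹Tᵀn (names illustrative):

```python
import numpy as np

def poly_div_toeplitz(n_coeffs, d_coeffs):
    # n = T(d) q + r (step c213); the least-squares quotient of step c214)
    # is computed with lstsq, equivalent to q = (T^T T)^{-1} T^T n.
    n_coeffs = np.asarray(n_coeffs, float)
    d_coeffs = np.asarray(d_coeffs, float)
    q_len = len(n_coeffs) - len(d_coeffs) + 1
    T = np.zeros((len(n_coeffs), q_len))
    for col in range(q_len):
        T[col:col + len(d_coeffs), col] = d_coeffs  # Toeplitz matrix of d
    q, *_ = np.linalg.lstsq(T, n_coeffs, rcond=None)
    return q
```

When the division is exact the least-squares quotient equals the exact quotient, e.g. (x²+5x+6)/(x+3) = x+2.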


c22) solving a real root of the tenth-degree polynomial in the step c21;


Solving the real roots of a higher-degree polynomial can rely only on numerical methods such as Newton's method and dichotomy, and the key problem lies in determining the single (root-isolating) intervals.


According to an embodiment of the present invention, single intervals are searched by using the Sturm's Theorem in the present invention. For a known polynomial P(x), its Sturm sequence may be determined:

p_0(x) = p(x)
p_1(x) = p′(x)
p_2(x) = −rem(p_0, p_1)
p_3(x) = −rem(p_1, p_2)
…
0 = −rem(p_{m−1}, p_m)  (3.9)


where rem represents calculating the remainder of the polynomial.


The Sturm's Theorem describes the number of real roots of the polynomial P(x) within an interval (a, b]. Assuming that the number of sign changes of the function values of the Sturm sequence at the endpoint a is c(a) and the number of sign changes at the endpoint b is c(b), the number of real roots of P(x) within this half-open interval is c(a)−c(b). In the present technology, a recursive dichotomy technique is designed, and all single intervals are searched in a wide numerical range (a, b], with pseudo-code of the corresponding function being shown as follows:


determining the Sturm sequence sturmSequence of the polynomial P(x);


function rootRegion=rootIntervalsDetect(a, b, sturmSequence)
    determine the number of real roots, numRoot, within (a, b] by using the Sturm's Theorem;
    rootRegion=[ ];
    if numRoot<1
        % no real root in (a, b]: return the empty set
    elseif numRoot==1
        rootRegion=[rootRegion; a b];
    elseif numRoot>1
        a1=a; b1=(a+b)/2;
        a2=(a+b)/2; b2=b;
        o1=rootIntervalsDetect(a1, b1, sturmSequence);
        rootRegion=[rootRegion; o1];
        o2=rootIntervalsDetect(a2, b2, sturmSequence);
        rootRegion=[rootRegion; o2];
    end
end


After single intervals are obtained, the real root of |C(w)|=0 is solved in each single interval by dichotomy. The solved w is substituted into the expression of C(w) in the formula (3.6), unknown terms about x and y are solved by SVD, and values of x and y are further obtained. The solved w, x and y are substituted into the formula (3.4), to obtain the essential matrix E, the number of solutions of which is the same as the number of real roots of w.
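The chain of steps c221)–c223) — Sturm sequence, recursive interval isolation, and bisection ("dichotomy") — can be sketched as follows (a simplified illustration without the patent's safeguards; coefficient arrays are highest-degree-first):

```python
import numpy as np

def sturm_sequence(p):
    # Sturm sequence (3.9): p0 = p, p1 = p', p_{k+1} = -rem(p_{k-1}, p_k)
    seq = [np.array(p, float), np.polyder(np.array(p, float))]
    while len(seq[-1]) > 1:
        _, r = np.polydiv(seq[-2], seq[-1])
        r = np.trim_zeros(np.atleast_1d(r), 'f')
        if r.size == 0:
            break
        seq.append(-r)
    return seq

def sign_changes(seq, x):
    # number of sign changes c(x) of the sequence values at x (zeros skipped)
    vals = [v for p in seq if (v := np.polyval(p, x)) != 0]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

def root_intervals(seq, a, b):
    # step c222): recursive dichotomy until each piece of (a, b] holds one root
    n = sign_changes(seq, a) - sign_changes(seq, b)
    if n == 0:
        return []
    if n == 1:
        return [(a, b)]
    mid = (a + b) / 2
    return root_intervals(seq, a, mid) + root_intervals(seq, mid, b)

def bisect_root(p, a, b, tol=1e-12):
    # step c223): plain bisection inside a single interval
    fa = np.polyval(p, a)
    while b - a > tol:
        mid = (a + b) / 2
        fm = np.polyval(p, mid)
        if fa * fm <= 0:
            b = mid
        else:
            a, fa = mid, fm
    return (a + b) / 2
```

For instance, p(x) = x³ − 7x + 6 = (x−1)(x−2)(x+3) is isolated into three single intervals over (−10, 10], and bisection recovers the roots −3, 1, 2.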


c23) judging the solution of the essential matrix.


Each w has a camera position relationship and corresponding reconstructed spatial point coordinates. In special cases, for example, imaging of a planar scene, a feasible solution cannot be judged according to the conventional criterion of smallest image plane error, because there are two solutions having similar image plane errors. To fundamentally solve the root judgment problem, the present technology uses spatial information as a constraint and a length ratio of two scale bars at orthogonal positions as the criterion of judgment. The essential matrix whose reconstructed space length ratio is closest to 1 is the final feasible solution.


Further, the translation amount and the spatial coordinates are scaled according to the ratio of the space length to the reconstructed length. The obtained exterior orientation parameter and spatial coordinates may be used as initial parameter values for subsequent self-calibration bundle adjustment.


Step d:


According to an embodiment of the present invention, the step d includes the following steps:


d1) obtaining a camera imaging model;


The camera orientation is described by using [X0 Y0 Z0 Az El Ro]ᵀ, the coordinates of a spatial point are [X Y Z]ᵀ, and the corresponding coordinates [X̄ Ȳ Z̄]ᵀ in the camera coordinate system are described as:















$$
\begin{bmatrix} \bar{X} \\ \bar{Y} \\ \bar{Z} \end{bmatrix}
= R\left(\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} - \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix}\right)
= \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}
\left(\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} - \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix}\right)
\tag{4.1}
$$










where











$$
R = \begin{bmatrix}
\cos(Ro)\cos(Az)+\sin(Ro)\sin(El)\sin(Az) & \cos(Ro)\sin(Az)-\sin(Ro)\sin(El)\cos(Az) & \sin(Ro)\cos(El) \\
-\sin(Ro)\cos(Az)+\cos(Ro)\sin(El)\sin(Az) & -\sin(Ro)\sin(Az)-\cos(Ro)\sin(El)\cos(Az) & \cos(Ro)\cos(El) \\
\cos(El)\sin(Az) & -\cos(El)\cos(Az) & -\sin(El)
\end{bmatrix}
\tag{4.2}
$$
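A direct transcription of (4.2), assuming the angles are in radians (function name illustrative); the orthogonality of R is easy to verify numerically:

```python
import numpy as np

def rotation_matrix(Az, El, Ro):
    # Transcription of the rotation matrix R of formula (4.2).
    cA, sA = np.cos(Az), np.sin(Az)
    cE, sE = np.cos(El), np.sin(El)
    cR, sR = np.cos(Ro), np.sin(Ro)
    return np.array([
        [ cR * cA + sR * sE * sA,  cR * sA - sR * sE * cA,  sR * cE],
        [-sR * cA + cR * sE * sA, -sR * sA - cR * sE * cA,  cR * cE],
        [ cE * sA,                -cE * cA,                 -sE    ],
    ])
```

For any angle triple, R·Rᵀ = I and |det R| = 1, as a rotation matrix requires.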







Linear projection coordinates of this spatial point are:













$$
x_l = -f\,\frac{\bar{X}}{\bar{Z}} = -f\,\frac{a_1(X-X_0)+b_1(Y-Y_0)+c_1(Z-Z_0)}{a_3(X-X_0)+b_3(Y-Y_0)+c_3(Z-Z_0)},\qquad
y_l = -f\,\frac{\bar{Y}}{\bar{Z}} = -f\,\frac{a_2(X-X_0)+b_2(Y-Y_0)+c_2(Z-Z_0)}{a_3(X-X_0)+b_3(Y-Y_0)+c_3(Z-Z_0)}
\tag{4.3}
$$







An object distance of it relative to the imaging system is:

s′ = −Z̄  (4.4)
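The linear projection chain of (4.1) and (4.3) can be sketched as follows (names illustrative; distortion terms omitted):

```python
import numpy as np

def project(point, cam_pos, R, f):
    # (4.1): rotate the camera-centred vector into the camera frame,
    # (4.3): pinhole projection x_l = -f*Xbar/Zbar, y_l = -f*Ybar/Zbar.
    Xb, Yb, Zb = R @ (np.asarray(point, float) - np.asarray(cam_pos, float))
    return np.array([-f * Xb / Zb, -f * Yb / Zb])
```

With R = I, a camera at the origin, f = 0.05 and the point (1, 2, −10), the projection is (0.005, 0.01).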


Research shows that the radial distortion parameters of a spatial point with an object distance of s′ on the imaging plane of the imaging system are:










$$
\begin{aligned}
k_{1ss'} &= \alpha_s\,\frac{C_s^2}{C_{s1}^2}\,k_{1ss_1} + (1-\alpha_s)\,\frac{C_s^2}{C_{s2}^2}\,k_{1ss_2} \\
k_{2ss'} &= \alpha_s\,\frac{C_s^4}{C_{s1}^4}\,k_{2ss_1} + (1-\alpha_s)\,\frac{C_s^4}{C_{s2}^4}\,k_{2ss_2} \\
k_{3ss'} &= \alpha_s\,\frac{C_s^6}{C_{s1}^6}\,k_{3ss_1} + (1-\alpha_s)\,\frac{C_s^6}{C_{s2}^6}\,k_{3ss_2} \\
\alpha_s &= \frac{s_2-s'}{s_2-s_1}\cdot\frac{s_1-f}{s'-f}
\end{aligned}
\tag{4.5}
$$
where f is the focal length of the imaging system; k_{1ss1}, k_{2ss1}, k_{3ss1} and k_{1ss2}, k_{2ss2}, k_{3ss2} are the calibration results of the radial distortion parameters at the two object distances s1 and s2; and C_{s1} and C_{s2} are respectively the image distances given by the Gaussian imaging formula at the two distances.
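Formula (4.5) can be sketched as a small interpolation routine; all argument names below are illustrative assumptions:

```python
def radial_distortion_at(s_prime, f, s1, s2, Cs, Cs1, Cs2, k_s1, k_s2):
    # alpha_s of (4.5): interpolation weight between the two calibration distances
    alpha = (s2 - s_prime) / (s2 - s1) * (s1 - f) / (s_prime - f)
    # k_{i,ss'} with the image-distance ratios raised to the powers 2, 4, 6
    return tuple(
        alpha * (Cs / Cs1) ** (2 * (i + 1)) * k_s1[i]
        + (1 - alpha) * (Cs / Cs2) ** (2 * (i + 1)) * k_s2[i]
        for i in range(3)
    )
```

At s′ = s1 (with C_s = C_{s1}) the weight α_s is 1 and the first calibration's coefficients are returned unchanged, as the formula requires.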


Assume that the principal point offsets of the camera are x_p and y_p, and that the eccentric distortion and in-plane distortion parameters corresponding to this spatial point are P1, P2, B1, B2. The image point distortion amount at (x_l, y_l) is:












$$
\begin{aligned}
\Delta x &= \bar{x}\,(K_{1ss'}r^2 + K_{2ss'}r^4 + K_{3ss'}r^6) + P_1(2\bar{x}^2 + r^2) + 2P_2\bar{x}\bar{y} + B_1\bar{x} + B_2\bar{y} \\
\Delta y &= \bar{y}\,(K_{1ss'}r^2 + K_{2ss'}r^4 + K_{3ss'}r^6) + P_2(2\bar{y}^2 + r^2) + 2P_1\bar{x}\bar{y} \\
\bar{x} &= x - x_p,\quad \bar{y} = y - y_p,\quad r = \sqrt{\bar{x}^2 + \bar{y}^2}
\end{aligned}
\tag{4.6}
$$







Therefore, the final image point coordinates of the spatial point on the camera image plane at this position are:

x = xl + xp − Δx
y = yl + yp − Δy  ⇒  L̃ = φ(X̃)  (4.7)


The exterior orientation parameter and the spatial coordinates are correlated to the radial distortion parameter through s′.
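The imaging chain (4.2)-(4.7) can be sketched end to end as follows. This is a minimal illustrative sketch: the exact Az-El-Ro rotation parameterization is an assumption (the patent does not spell the matrix out in this section), the distortion is evaluated at the linear projection for simplicity, and all numeric values are made up:

```python
import math

def rotation_matrix(az, el, ro):
    # One common Az-El-Ro parameterization -- an assumption for illustration only.
    ca, sa = math.cos(az), math.sin(az)
    ce, se = math.cos(el), math.sin(el)
    cr, sr = math.cos(ro), math.sin(ro)
    return [[ca*cr + sa*se*sr, -sa*cr + ca*se*sr, ce*sr],
            [sa*ce,             ca*ce,            -se  ],
            [-ca*sr + sa*se*cr, sa*sr + ca*se*cr, ce*cr]]

def project(P, C, R, f, xp, yp, K123, P12, B12):
    """Formulas (4.2)-(4.7): rotate into the camera frame, project,
    apply distortion, and shift by the principal point."""
    d = [P[i] - C[i] for i in range(3)]                          # (4.2)
    Xb, Yb, Zb = (sum(R[r][c] * d[c] for c in range(3)) for r in range(3))
    xl, yl = -f * Xb / Zb, -f * Yb / Zb                          # (4.3)
    # Distortion (4.6) is evaluated at the linear projection here; in the
    # patent, x-bar and y-bar are measured coordinates minus the principal point.
    xb, yb = xl, yl
    r2 = xb * xb + yb * yb
    K1, K2, K3 = K123
    P1, P2 = P12
    B1, B2 = B12
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    dx = xb * radial + P1 * (2*xb*xb + r2) + 2*P2*xb*yb + B1*xb + B2*yb
    dy = yb * radial + P2 * (2*yb*yb + r2) + 2*P1*xb*yb
    return xl + xp - dx, yl + yp - dy                            # (4.7)

x, y = project([0.1, 0.2, 3.0], [0.0, 0.0, 0.0],
               rotation_matrix(0.0, 0.0, 0.0),
               0.025, 1e-4, -1e-4,
               (1e-5, 1e-7, 1e-11), (1e-6, 1e-6), (1e-6, 1e-6))
```

With all angles zero the rotation is the identity, so the result reduces to the pinhole projection plus the (tiny) distortion and the principal point shift.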


d2) building an error equation of the photogrammetry system;


For a dual-camera photogrammetry system, using (xij, yij) to represent the imaging coordinates of the jth spatial point in the ith image, the error equation is:












l̂ij = lij + vij:

[ xij − φx(Xij0) ]   [ vxij ]
[ yij − φy(Xij0) ] + [ vyij ] = Aij·δi + Bij·δ̇j,  i = 1, 2;  j = 1, 2, …, n  (4.8)

δi = [ΔX0i ΔY0i ΔZ0i ΔAzi ΔEli ΔRoi Δxpi Δypi Δfi ΔK1i ΔK2i ΔK3i ΔP1i ΔP2i ΔB1i ΔB2i]ᵀ
δ̇j = [ΔXj ΔYj ΔZj]ᵀ







where Xij0 denotes all parameters related to the imaging point (xij, yij), (vxij, vyij) is the residual error, and Aij and Bij are the Jacobian matrices of the imaging model (4.7) with respect to, respectively, the exterior orientation and camera imaging parameters of the ith image and the coordinates of the jth spatial point.
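Because the analytic derivatives that fill Aij and Bij are numerous and easy to get wrong, they are commonly validated against central finite differences. The following is a generic sketch of such a check (not a procedure stated in the patent; `model` stands for any imaging function such as φ in (4.7)):

```python
def numeric_jacobian(model, params, eps=1e-6):
    """Central-difference Jacobian of a vector-valued model with respect to
    its parameter vector; useful for validating analytic derivatives such
    as those assembled into Aij and Bij."""
    base = model(params)
    J = [[0.0] * len(params) for _ in base]
    for j in range(len(params)):
        hi, lo = list(params), list(params)
        hi[j] += eps
        lo[j] -= eps
        fhi, flo = model(hi), model(lo)
        for i in range(len(base)):
            J[i][j] = (fhi[i] - flo[i]) / (2 * eps)
    return J

# Example: for a linear model the Jacobian is recovered (up to rounding).
J = numeric_jacobian(lambda p: [2*p[0] + 3*p[1], -p[0] + 4*p[1]], [0.5, -1.0])
```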










Aij = [ ∂xij/∂X0i  ∂xij/∂Y0i  ∂xij/∂Z0i  ∂xij/∂Azi  ∂xij/∂Eli  ∂xij/∂Roi  ∂xij/∂xp  ∂xij/∂yp  ∂xij/∂f  ∂xij/∂K1ss2  ∂xij/∂K2ss2  ∂xij/∂K3ss2  ∂xij/∂P1  ∂xij/∂P2  ∂xij/∂B1  ∂xij/∂B2
        ∂yij/∂X0i  ∂yij/∂Y0i  ∂yij/∂Z0i  ∂yij/∂Azi  ∂yij/∂Eli  ∂yij/∂Roi  ∂yij/∂xp  ∂yij/∂yp  ∂yij/∂f  ∂yij/∂K1ss2  ∂yij/∂K2ss2  ∂yij/∂K3ss2  ∂yij/∂P1  ∂yij/∂P2  ∂yij/∂B1  ∂yij/∂B2 ]  (4.9)

Bij = [ ∂xij/∂Xj  ∂xij/∂Yj  ∂xij/∂Zj
        ∂yij/∂Xj  ∂yij/∂Yj  ∂yij/∂Zj ]













Each term in Aij is solved through the following process:



















∂Cs/∂s′ = −f^2/(s′ − f)^2

∂αs/∂s′ = −[(S1 − f)(S2 − f)] / [(S2 − S1)(s′ − f)^2]

∂K1ss′/∂s′ = (∂αs/∂s′)·(Cs^2/Cs1^2)·K1ss1 + 2Cs·(∂Cs/∂s′)·(αs/Cs1^2)·K1ss1 + (−∂αs/∂s′)·(Cs^2/Cs2^2)·K1ss2 + 2Cs·(∂Cs/∂s′)·((1−αs)/Cs2^2)·K1ss2

∂K2ss′/∂s′ = (∂αs/∂s′)·(Cs^4/Cs1^4)·K2ss1 + 4Cs^3·(∂Cs/∂s′)·(αs/Cs1^4)·K2ss1 + (−∂αs/∂s′)·(Cs^4/Cs2^4)·K2ss2 + 4Cs^3·(∂Cs/∂s′)·((1−αs)/Cs2^4)·K2ss2

∂K3ss′/∂s′ = (∂αs/∂s′)·(Cs^6/Cs1^6)·K3ss1 + 6Cs^5·(∂Cs/∂s′)·(αs/Cs1^6)·K3ss1 + (−∂αs/∂s′)·(Cs^6/Cs2^6)·K3ss2 + 6Cs^5·(∂Cs/∂s′)·((1−αs)/Cs2^6)·K3ss2  (4.10)







It can be seen from the relationship described by the formula (4.4) that:














∂s′/∂X0 = a3

∂Δx/∂s′ = x̄·[(∂K1ss′/∂s′)·r^2 + (∂K2ss′/∂s′)·r^4 + (∂K3ss′/∂s′)·r^6]
∂Δy/∂s′ = ȳ·[(∂K1ss′/∂s′)·r^2 + (∂K2ss′/∂s′)·r^4 + (∂K3ss′/∂s′)·r^6]

∂Δx/∂X0 = (∂Δx/∂s′)·(∂s′/∂X0)
∂Δy/∂X0 = (∂Δy/∂s′)·(∂s′/∂X0)  (4.11)







Through the description of the formula (4.3), partial derivatives of xl and yl relative to X0 can be solved:
















∂xl/∂X0 = f·(a1·Z̄ − a3·X̄)/Z̄^2
∂yl/∂X0 = f·(a2·Z̄ − a3·Ȳ)/Z̄^2  (4.12)







Therefore, with reference to the formula (4.7), a partial derivative of an observed value relative to the term X0 in the exterior orientation parameters is:















∂x/∂X0 = ∂xl/∂X0 − ∂Δx/∂X0
∂y/∂X0 = ∂yl/∂X0 − ∂Δy/∂X0  (4.13)







Likewise, partial derivatives of the observed value relative to other exterior orientation parameters can be obtained:

















∂s′/∂Y0 = b3

∂Δx/∂Y0 = (∂Δx/∂s′)·(∂s′/∂Y0)
∂Δy/∂Y0 = (∂Δy/∂s′)·(∂s′/∂Y0)

∂xl/∂Y0 = f·(b1·Z̄ − b3·X̄)/Z̄^2
∂yl/∂Y0 = f·(b2·Z̄ − b3·Ȳ)/Z̄^2

∂x/∂Y0 = ∂xl/∂Y0 − ∂Δx/∂Y0
∂y/∂Y0 = ∂yl/∂Y0 − ∂Δy/∂Y0  (4.14)






















∂s′/∂Z0 = c3

∂Δx/∂Z0 = (∂Δx/∂s′)·(∂s′/∂Z0)
∂Δy/∂Z0 = (∂Δy/∂s′)·(∂s′/∂Z0)

∂xl/∂Z0 = f·(c1·Z̄ − c3·X̄)/Z̄^2
∂yl/∂Z0 = f·(c2·Z̄ − c3·Ȳ)/Z̄^2

∂x/∂Z0 = ∂xl/∂Z0 − ∂Δx/∂Z0
∂y/∂Z0 = ∂yl/∂Z0 − ∂Δy/∂Z0  (4.15)







The expressions for the partial derivatives of the linear terms xl and yl with respect to the different angles are complex, and reference can be made to the relevant literature. Herein, only the partial derivatives of Δx and Δy with respect to the angles are analyzed.














∂s′/∂Az = −[cos(El)·cos(Az)·(X − X0) + cos(El)·sin(Az)·(Y − Y0)]

∂Δx/∂Az = (∂Δx/∂s′)·(∂s′/∂Az)
∂Δy/∂Az = (∂Δy/∂s′)·(∂s′/∂Az)

∂x/∂Az = ∂xl/∂Az − ∂Δx/∂Az
∂y/∂Az = ∂yl/∂Az − ∂Δy/∂Az  (4.16)











∂s′/∂El = −[−sin(El)·sin(Az)·(X − X0) + sin(El)·cos(Az)·(Y − Y0) − cos(El)·(Z − Z0)]

∂Δx/∂El = (∂Δx/∂s′)·(∂s′/∂El)
∂Δy/∂El = (∂Δy/∂s′)·(∂s′/∂El)

∂x/∂El = ∂xl/∂El − ∂Δx/∂El
∂y/∂El = ∂yl/∂El − ∂Δy/∂El  (4.17)



















∂s′/∂Ro = 0,  ∂Δx/∂Ro = 0,  ∂Δy/∂Ro = 0

∂x/∂Ro = ∂xl/∂Ro
∂y/∂Ro = ∂yl/∂Ro  (4.18)







The present technology uses one-sided self-calibration bundle adjustment; the interior parameters participating in the adjustment are xp, yp, f, P1, P2, B1, B2 and the one-sided radial distortion parameters K1ss2, K2ss2 and K3ss2 corresponding to the distance S2.














∂Δx/∂xp = −(K1ss′·r^2 + K2ss′·r^4 + K3ss′·r^6) + x̄·(−2x̄)·(K1ss′ + 2K2ss′·r^2 + 3K3ss′·r^4) + P1·(−6x̄) + 2P2·(−ȳ) − B1

∂Δy/∂xp = ȳ·(−2x̄)·(K1ss′ + 2K2ss′·r^2 + 3K3ss′·r^4) + P2·(−2x̄) + 2P1·(−ȳ)

∂Δx/∂yp = x̄·(−2ȳ)·(K1ss′ + 2K2ss′·r^2 + 3K3ss′·r^4) + P1·(−2ȳ) + 2P2·(−x̄) − B2

∂Δy/∂yp = −(K1ss′·r^2 + K2ss′·r^4 + K3ss′·r^6) + ȳ·(−2ȳ)·(K1ss′ + 2K2ss′·r^2 + 3K3ss′·r^4) + P2·(−6ȳ) + 2P1·(−x̄)

∂x/∂xp = 1 − ∂Δx/∂xp
∂y/∂xp = −∂Δy/∂xp  (4.19)

















∂x/∂yp = −∂Δx/∂yp
∂y/∂yp = 1 − ∂Δy/∂yp  (4.20)

















∂x/∂f = −X̄/Z̄
∂y/∂f = −Ȳ/Z̄  (4.21)



















∂K1ss′/∂K1ss2 = (1 − αs)·Cs^2/Cs2^2
∂K2ss′/∂K2ss2 = (1 − αs)·Cs^4/Cs2^4
∂K3ss′/∂K3ss2 = (1 − αs)·Cs^6/Cs2^6

∂x/∂K1ss2 = −x̄·r^2·(∂K1ss′/∂K1ss2)
∂x/∂K2ss2 = −x̄·r^4·(∂K2ss′/∂K2ss2)
∂x/∂K3ss2 = −x̄·r^6·(∂K3ss′/∂K3ss2)  (4.22)


















∂y/∂K1ss2 = −ȳ·r^2·(∂K1ss′/∂K1ss2)
∂y/∂K2ss2 = −ȳ·r^4·(∂K2ss′/∂K2ss2)
∂y/∂K3ss2 = −ȳ·r^6·(∂K3ss′/∂K3ss2)  (4.23)

















∂x/∂P1 = −(2x̄^2 + r^2)
∂y/∂P1 = −2x̄ȳ  (4.24)






















∂x/∂P2 = −2x̄ȳ
∂y/∂P2 = −(2ȳ^2 + r^2)  (4.25)

















∂x/∂B1 = −x̄
∂y/∂B1 = 0  (4.26)

















∂x/∂B2 = −ȳ
∂y/∂B2 = 0  (4.27)







Each term in Bij is solved through the following process:















∂x/∂X = −∂x/∂X0,  ∂x/∂Y = −∂x/∂Y0,  ∂x/∂Z = −∂x/∂Z0  (4.28)















∂y/∂X = −∂y/∂X0,  ∂y/∂Y = −∂y/∂Y0,  ∂y/∂Z = −∂y/∂Z0  (4.29)







d3) performing a partitioning operation on the normal equation;


Assuming that a dual-camera photogrammetry system performs photographing and measurement on n scale bar positions, the error equation set is:











[ v1,11 ]   [ l1,11 ]   [ A1,11    0    B1,11    0     ⋯     0   ]
[ v1,12 ]   [ l1,12 ]   [ A1,12    0      0    B1,12   ⋯     0   ]   [ δ1  ]
[   ⋮   ]   [   ⋮   ]   [   ⋮      ⋮                   ⋱         ]   [ δ2  ]
[ v1,n2 ] + [ l1,n2 ] = [ A1,n2    0      0      0     ⋯   B1,n2 ] · [ δ̇11 ]
[ v2,11 ]   [ l2,11 ]   [   0    A2,11  B2,11    0     ⋯     0   ]   [ δ̇12 ]
[   ⋮   ]   [   ⋮   ]   [   ⋮      ⋮                   ⋱         ]   [  ⋮  ]
[ v2,n2 ]   [ l2,n2 ]   [   0    A2,n2    0      0     ⋯   B2,n2 ]   [ δ̇n2 ]

⇒  v + l = A·δ  (4.30)







where the subscript i,jk denotes the kth endpoint (k = 1, 2) of the jth reference length (j = 1, 2, …, n) as imaged by the ith camera (i = 1, 2).


Assuming vᵀv → min, the parameter increment δ to be solved is obtained from:

(AᵀA)·δ = Aᵀl  ⇒  N·δ = W  (4.31)


In view of the sparsity of A, a regular partitioned description may be made for AᵀA and Aᵀl according to the indexes of the image and the target point:











AᵀA = N = [ N11(32×32)   N12(32×6n)
            N21(6n×32)   N22(6n×6n) ]

Aᵀl = W = [ w1(32×1)
            w2(6n×1) ]

N11 = [ Σ(j=1..n) Σ(k=1,2) A1,jkᵀ·A1,jk                    0
                    0                    Σ(j=1..n) Σ(k=1,2) A2,jkᵀ·A2,jk ]

N22 = diag( Σ(i=1,2) Bi,11ᵀ·Bi,11,  Σ(i=1,2) Bi,12ᵀ·Bi,12,  …,  Σ(i=1,2) Bi,n1ᵀ·Bi,n1,  Σ(i=1,2) Bi,n2ᵀ·Bi,n2 )

N12 = [ A1,11ᵀ·B1,11   A1,12ᵀ·B1,12   ⋯   A1,n2ᵀ·B1,n2
        A2,11ᵀ·B2,11   A2,12ᵀ·B2,12   ⋯   A2,n2ᵀ·B2,n2 ]

N21 = N12ᵀ

w1 = [ Σ(j=1..n) Σ(k=1,2) A1,jkᵀ·l1,jk
       Σ(j=1..n) Σ(k=1,2) A2,jkᵀ·l2,jk ]

w2 = [ Σ(i=1,2) Bi,11ᵀ·li,11
       Σ(i=1,2) Bi,12ᵀ·li,12
               ⋮
       Σ(i=1,2) Bi,n2ᵀ·li,n2 ]
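The block structure above is what makes the adjustment tractable: N22 is block-diagonal over the scale-bar endpoints, so the point part can be eliminated first. The following is a minimal dense NumPy sketch of this elimination (Schur complement), with small made-up blocks standing in for the 32×32 camera part and the 6n×6n point part:

```python
import numpy as np

def solve_partitioned(N11, N12, N22_blocks, w1, w2):
    """Solve [[N11, N12], [N12.T, N22]]·[d1; d2] = [w1; w2] by first
    eliminating the block-diagonal point part N22 (Schur complement)."""
    inv_blocks = [np.linalg.inv(B) for B in N22_blocks]      # cheap: small blocks
    m = len(inv_blocks)
    N22inv = np.block([[inv_blocks[i] if i == j else np.zeros_like(inv_blocks[i])
                        for j in range(m)] for i in range(m)])
    S = N11 - N12 @ N22inv @ N12.T                           # reduced camera system
    d1 = np.linalg.solve(S, w1 - N12 @ N22inv @ w2)
    d2 = N22inv @ (w2 - N12.T @ d1)
    return d1, d2

# Small made-up symmetric positive-definite system.
rng = np.random.default_rng(0)
N11 = 5.0 * np.eye(4) + 0.1
N12 = 0.1 * rng.standard_normal((4, 6))
blocks = []
for _ in range(2):
    Q = rng.standard_normal((3, 3))
    blocks.append(Q.T @ Q + 3.0 * np.eye(3))
N22 = np.zeros((6, 6))
N22[:3, :3], N22[3:, 3:] = blocks[0], blocks[1]
N = np.block([[N11, N12], [N12.T, N22]])
w = rng.standard_normal(10)
d1, d2 = solve_partitioned(N11, N12, blocks, w[:4], w[4:])
```

The partitioned solve must agree with a direct solve of the full system, which is a convenient correctness check.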





d4) obtaining self-calibration bundle adjustment with length constraints;


Constraint conditions are described by using the following formula:












C(n×(32+6n))·δ((32+6n)×1) + Wx(n×1) = 0  (4.32)

C = [ 0  0  ∂L1/∂c11  ∂L1/∂c12      0          0       ⋯      0          0
      0  0      0         0      ∂L2/∂c21  ∂L2/∂c22    ⋯      0          0
      ⋮  ⋮                            ⋱    ∂Lj/∂cj1  ∂Lj/∂cj2
      0  0      0         0          0          0      ⋯  ∂Ln/∂cn1  ∂Ln/∂cn2 ]

Wx = [ L − L1
       L − L2
         ⋮
       L − Lj
         ⋮
       L − Ln ]













where L is the measured length value, Lj is the length of the scale bar at the jth position as reconstructed by the photogrammetry system, and the partial derivative of the length with respect to the spatial coordinates of the corresponding endpoint is solved as:










∂Lj/∂cj1 = [ (xj1 − xj2)/Lj   (yj1 − yj2)/Lj   (zj1 − zj2)/Lj ]

∂Lj/∂cj2 = [ −(xj1 − xj2)/Lj   −(yj1 − yj2)/Lj   −(zj1 − zj2)/Lj ]





Then, the least-squares solution of the formula (4.31) under the constraint conditions described by the formula (4.32), together with the corresponding correlate (Lagrange multiplier) vector, is:











δ = (N^−1 − N^−1·Cᵀ·Ncc^−1·C·N^−1)·W − N^−1·Cᵀ·Ncc^−1·Wx

Ncc = C·N^−1·Cᵀ

Ks = Ncc^−1·(C·N^−1·W + Wx)  (4.33)
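The constrained solution (4.33) can be sanity-checked on a small system: the recovered increment must satisfy the constraints exactly, together with the stationarity condition of the underlying minimization. A minimal NumPy sketch with made-up matrices standing in for the (32+6n)-dimensional system:

```python
import numpy as np

def constrained_solution(N, W, C, Wx):
    """Least-squares increment under the linear constraints C·delta + Wx = 0,
    following formula (4.33); Ks is the correlate (Lagrange multiplier) vector."""
    Ninv = np.linalg.inv(N)
    Ncc = C @ Ninv @ C.T
    Nccinv = np.linalg.inv(Ncc)
    delta = (Ninv - Ninv @ C.T @ Nccinv @ C @ Ninv) @ W - Ninv @ C.T @ Nccinv @ Wx
    Ks = Nccinv @ (C @ Ninv @ W + Wx)
    return delta, Ks

# Small made-up symmetric positive-definite normal matrix and constraints.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 5))
N = M.T @ M + 5.0 * np.eye(5)
W = rng.standard_normal(5)
C = rng.standard_normal((2, 5))
Wx = rng.standard_normal(2)
delta, Ks = constrained_solution(N, W, C, Wx)
```

Algebraically, (4.33) is the KKT solution of minimizing (1/2)·δᵀNδ − Wᵀδ subject to C·δ + Wx = 0, so C·δ + Wx = 0 and N·δ + Cᵀ·Ks = W must both hold.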







d5) evaluating precision of the photogrammetry system;


The mean error of unit weight for the photogrammetry system is:











ŝ0 = √( VᵀV / (3n − 32) )  (4.34)







To ensure the orientation precision, it is recommended that scale bars be placed at 60 or more positions.


An error covariance matrix of interior and exterior orientation parameters of the two cameras and spatial coordinates of feature points of the scale bar is:










C((32+6n)×(32+6n)) = ŝ0^2·( N^−1 − N^−1·Cᵀ·Ncc^−1·C·N^−1 )((32+6n)×(32+6n))  (4.35)







d6) performing adaptive proportional adjustment of error equation terms;


The unknown variables to be solved vary considerably in order of magnitude, especially the interior parameters. For example, the terms K1, P1, P2, B1 and B2 are generally of order 10^−5, the term K2 of order 10^−7, and the term K3 of order 10^−11. Such a considerable difference in order of magnitude leads to ill-conditioning of the matrix A in the error equation (4.30). Due to common problems in numerical computation, such as finite machine precision, loss of low-order digits, and division of a large number by a small number, the computational result of the normal equation may be greatly affected, often leading to problems such as matrix rank deficiency. To unify the orders of magnitude of all the terms in the error equation, an adaptive proportional adjustment technique is designed herein.


Each time the adjustment iteration process begins, the order of magnitude Mj and an adjustment coefficient kj of each column of the error equation are first calculated, using the following method:

Mj = average(round(−log10|A(:,j)|))  ⇒  kj = 10^Mj  (4.36)


Then, the computational result of each column is multiplied by a corresponding proportionality coefficient, to obtain an adjusted error equation:







A′ = A·[ k1   0   ⋯   0
          0   k2  ⋯   0
          ⋮        ⋱   ⋮
          0   0   ⋯   ku ]






The relationship between the unknown variable δ′ obtained from the formula (4.31) and the δ to be solved is:











δ = [ k1   0   ⋯   0
       0   k2  ⋯   0
       ⋮        ⋱   ⋮
       0   0   ⋯   ku ]·δ′  (4.37)







As verified by both simulation and experiments, such adaptive proportional adjustment eliminates the ill-conditioning of the normal equation in numerical computation, and provides higher adjustment stability and adjustment precision.
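The adaptive proportional adjustment of (4.36)-(4.37) amounts to scaling the columns before solving and unscaling the solution afterwards. A minimal NumPy sketch on a deliberately badly scaled made-up system (it uses log10, matching kj = 10^Mj):

```python
import numpy as np

def scaled_lstsq(A, l):
    """Rescale each column of A to unit order of magnitude (4.36), solve in
    the scaled variable (4.37), then map back: delta = k * delta_prime."""
    M = np.round(-np.log10(np.mean(np.abs(A), axis=0)))
    k = 10.0 ** M                      # per-column adjustment coefficients
    Ap = A * k                         # A' = A · diag(k1 … ku)
    delta_prime, *_ = np.linalg.lstsq(Ap, l, rcond=None)
    return k * delta_prime

# Columns differ by 7 orders of magnitude; the true solution is known
# by construction, so the round trip can be verified.
A = np.array([[1.0, 1e-7], [2.0, 3e-7], [3.0, 5e-7]])
l = A @ np.array([0.5, 4e6])
delta = scaled_lstsq(A, l)
```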


Step e


After the orientation process is completed, interior and exterior orientation parameters of the two cameras are fixed, and then spatial coordinates of points to be measured are solved by using a least-squares technique. In the process of solving the spatial coordinates, because there is no correlation between the measured points, the process may be performed point by point, thereby avoiding the time consumption caused by large-scale matrix operations and improving the performance of dynamic measurement. This is in essence a direct linear transformation method.
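The point-by-point intersection described above can be sketched as a linear, direct-linear-transformation-style two-view triangulation; the projection matrices and the point below are illustrative assumptions, not calibration results from the patent:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear two-view intersection: each observation (u, v) under a 3x4
    projection matrix P gives two homogeneous equations in [X, Y, Z, 1]."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    X = np.linalg.svd(A)[2][-1]        # smallest right singular vector
    return X[:3] / X[3]

def pix(P, Xw):
    # Project a world point and return inhomogeneous image coordinates.
    h = P @ np.append(Xw, 1.0)
    return (h[0] / h[2], h[1] / h[2])

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # baseline along X
Xtrue = np.array([0.3, -0.2, 4.0])
X = triangulate(P1, P2, pix(P1, Xtrue), pix(P2, Xtrue))
```

Because each point is solved independently, a set of measured points can be processed in a simple loop (or in parallel), which is what makes the dynamic measurement fast.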


According to an embodiment of the present invention, a specific method for performing traceable evaluation of orientation precision of the photogrammetry system is:


e1) describing an error equation of a single imaging point;












l̂ij = lij + vij:

[ xij − φx(Xij0) ]   [ vxij ]
[ yij − φy(Xij0) ] + [ vyij ] = Bij·[ΔXj ΔYj ΔZj]ᵀ = Bij·δ̇j,  i = 1, 2;  j = 1, 2, …, n  (5.1)







The sparse matrix Bij is the matrix of partial derivatives of the image plane coordinates with respect to the spatial coordinates, and is solved by using the formulas (4.28) and (4.29).


e2) when the photogrammetry system includes two cameras, an error equation set corresponding to a spatial point [X Y Z] is:












[ l1 ]        [ v1 ]        [ B1 ]        [ ΔX ]
[ l2 ](4×1) + [ v2 ](4×1) = [ B2 ](4×3) · [ ΔY ]   ⇒  l + v = B·δ̇  (5.2)
                                          [ ΔZ ]







e3) a corresponding normal equation and coordinate correction amount are:

(BᵀB)·δ̇ = Bᵀl  ⇒  δ̇ = (BᵀB)^−1·Bᵀl  (5.3)


e4) the orientation precision evaluation is an evaluation of the length measurement, and is performed by using data from the orientation process. The coordinates of the 2n scale-bar endpoints are reconstructed by using the formula (5.3), and the measured value of the length of the scale bar at each of the n positions is then solved:


L1, L2, …, Ln


e5) an average value of measurement results is:










L̄ = ( Σ(j=1..n) Lj ) / n  (5.4)







e6) performing scaling by using the ratio of the gauge length to the average length as the scale of the orientation result (translation amount of the exterior orientation parameters):












s = L/L̄

T1′ = s·T1
T2′ = s·T2  (5.5)

L1′ = s·L1,  L2′ = s·L2,  …,  Ln′ = s·Ln  (5.6)







and performing an uncertainty analysis on the length measurement result, to obtain an orientation precision evaluation:










u(Li′) = √( Σ(i=1..n) (Li′ − L)^2 / (n − 1) ) = √( Σ(i=1..n) (ΔLi′)^2 / (n − 1) ) = s·u(Li)  (5.7)







e7) a quality parameter of the measurement instrument is described as a multiple of the uncertainty of the length:

B = k·u(Li′)  (5.8)


e8) a relative error of each length is:










Δri′ = (Li′ − L)/L = (Li − L̄)/L̄ = Δri  (5.9)







e9) the relative error in length measurement is equal to the relative error in length measurement before scaling is performed, and therefore, an uncertainty of the relative error in length measurement is also equal to an uncertainty of the original relative error in length measurement:










u(Δri′) = u(Δri) = u(Li)/L̄  (5.10)
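Steps e5) to e9) can be sketched end to end as follows; the reconstructed scale-bar lengths and the gauge length are made-up example numbers, not experimental data from the patent:

```python
import math

def evaluate_orientation(lengths, gauge):
    """Scale the reconstruction so that the mean bar length matches the gauge
    length (5.4)-(5.6), then report the length uncertainty (5.7) and the
    relative errors with their uncertainty (5.9)-(5.10)."""
    n = len(lengths)
    mean = sum(lengths) / n                                   # (5.4)
    s = gauge / mean                                          # (5.5)
    scaled = [s * L for L in lengths]                         # (5.6)
    u = math.sqrt(sum((L - gauge) ** 2 for L in scaled) / (n - 1))  # (5.7)
    rel = [(L - gauge) / gauge for L in scaled]               # (5.9)
    u_rel = u / gauge                                         # (5.10), mean of scaled == gauge
    return s, scaled, u, rel, u_rel

s, scaled, u, rel, u_rel = evaluate_orientation(
    [949.95, 950.08, 950.02, 949.99, 950.06], gauge=950.00)
```

After scaling, the mean of the reconstructed lengths equals the gauge length by construction, so the mean relative error vanishes and only its spread (5.10) remains as the precision indicator.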







According to the method for implementing high-precision orientation and evaluating orientation precision of a large-scale dynamic photogrammetry system of the present invention, the following verification experiment was carried out:


A photogrammetry system used in the experiment includes two industrial cameras. As shown in FIG. 3, a scale bar having a length of about 950 mm is used for orientation, and the reference length is indicated by backlight reflection code points at the two ends of the scale bar. The volume of the measurement environment is about 3.6 m×2.4 m×2.4 m. The scale bar is moved in the measurement environment and imaged, self-calibration positioning and orientation of the photogrammetry system is implemented by using the method of the present invention, and the orientation precision of the system is evaluated by using the foregoing evaluation method. Results of three orientation experiments are shown in Table 1 and Table 2. Table 1 shows the orientation result of self-calibration bundle adjustment aimed only at the smallest image plane error. Table 2 shows the orientation result of self-calibration bundle adjustment with multiple length constraints according to the present technology.









TABLE 1
orientation result of conventional self-calibration bundle adjustment

                                standard deviation of          length measurement error
                                image plane error
                                direction x     direction y    standard     relative error
                                                               deviation    uncertainty
orientation      camera 1       2.85E−05        4.24E−04       3.800        1/250
experiment 1     camera 2       2.32E−05        3.43E−04
orientation      camera 1       1.60E−05        4.72E−04       6.271        1/150
experiment 2     camera 2       1.53E−05        4.92E−04
orientation      camera 1       1.71E−05        3.02E−04       3.474        1/270
experiment 3     camera 2       1.43E−05        2.88E−04
















TABLE 2
orientation result of self-calibration bundle adjustment with multiple length constraints

                                standard deviation of          length measurement error
                                image plane error
                                direction x     direction y    standard     relative error
                                                               deviation    uncertainty
orientation      camera 1       1.80E−04        4.60E−04       0.060        1/15800
experiment 1     camera 2       1.85E−04        4.18E−04
orientation      camera 1       2.15E−04        5.43E−04       0.064        1/14800
experiment 2     camera 2       2.28E−04        5.32E−04
orientation      camera 1       1.47E−04        3.68E−04       0.051        1/18700
experiment 3     camera 2       1.89E−04        3.34E−04









It can be seen from a comparison of the standard deviations of the image plane errors of the three orientation experiments in Table 1 and Table 2 that, in the conventional self-calibration method, the error in the direction x of the image plane is reduced to a negligible degree, and is smaller than the image plane error in the other direction by one order of magnitude. This is obviously incorrect, because the error levels in the two imaging directions should be approximately equal. Such an adjustment method leads to an incorrect self-calibration result and an incorrect positioning and orientation result, and the corresponding length error is large. For the self-calibration adjustment technique with multiple length constraints according to the present invention, owing to the constraint of the space length, the image plane error is no longer the only factor that determines the adjustment result; the image plane error is more reasonable, and the corresponding length error is significantly reduced.


The experimental results indicate that using the relative error in length measurement as the evaluation indicator avoids the analytical complexity caused by the reference length measurement error. If the length measurement is traceable, the precision of the photogrammetry system can also be evaluated according to an absolute measurement error, and the result of the evaluation is then also traceable.


Based on the above, in the present invention, on-site positioning and orientation of a dual-camera large-scale dynamic photogrammetry system is completed by using a single scale bar. The measurement space is divided into regions so that the distance between the left and right cameras is equal to the region length of the divided space, and the scale bar attitude and the camera position are adjusted at the same time during the photographing process, making the length orientation field more reliable. Preliminary orientation is performed by combining a five-point method with the space length ratio, so that the solution-loss problem in special cases can be overcome, and a feasible solution can be accurately determined among multiple solutions. In addition, the correlation between parameters can be overcome by using self-calibration bundle adjustment with multiple space length constraints, thereby improving the overall orientation precision of the photogrammetry system. The statistical result of the length measurement error provides a traceable objective evaluation indicator. The experimental result shows that the relative error in length measurement of the present technology reaches 1/15000.


The above descriptions are merely preferred embodiments of the present invention, but are not intended to limit the scope of implementation of the present invention. Hence, any equivalent variations or modifications made in accordance with the structures, features and principles described in the claims of the present invention shall fall within the scope of the claims of the present invention.

Claims
  • 1. A method for implementing high-precision orientation and evaluating orientation precision of a large-scale dynamic photogrammetry system, comprising steps: a) selecting a scale bar, arranging code points at two ends of the scale bar, and performing length measurement on the scale bar;b) evenly dividing a measurement space into multiple regions, sequentially placing the scale bar in each region, and photographing the scale bar by using left and right cameras, wherein continuous photographing is carried out in the following manner:b1) adjusting left and right cameras so that fields of view of the left and right cameras fully cover the scale bar, synchronously moving the left and right cameras along a perpendicular direction of a line connecting the code points at the two ends of the scale bar, and at the same time rotating the scale bar by using the perpendicular direction of the line connecting the code points at the two ends of the scale bar as an axis of rotation; andb2) adjusting the left and right cameras, synchronously moving the left and right cameras along the perpendicular direction of the line connecting the code points at the two ends of the scale bar in a direction opposite to that in the step b1), and at the same time rotating the scale bar in a direction opposite to that in the step b1);c) performing preliminary relative positioning and orientation of the photogrammetry system according to images shot by the left and right cameras;d) limiting self-calibration bundle adjustment by using multiple length constraints, adjustment parameters comprising principal point, principal distance, radial distortion, eccentric distortion, in-plane distortion, exterior orientation parameter and spatial point coordinate;e) performing traceable evaluation of orientation precision of the photogrammetry system.
  • 2. The method according to claim 1, wherein a distance between the left and right cameras is the same as a region length according to which the measurement space is evenly divided.
  • 3. The method according to claim 1, wherein the photographing process of the step b) is repeated multiple times.
  • 4. The method according to claim 1, wherein the step c) comprises the following steps:
    c1) determining a solution space of a fundamental matrix by using a five-point method: automatically selecting five imaging points in a position of the scale bar, wherein positions of the imaging points are the center and four vertexes of the shot image, and the automatic selection is performed using the following criteria:
      abs(x_li) + abs(y_li) + abs(x_ri) + abs(y_ri) → min
      (x_li) + (y_li) + (x_ri) + (y_ri) → max
      (−x_li) + (y_li) + (−x_ri) + (y_ri) → max
      (−x_li) + (−y_li) + (−x_ri) + (−y_ri) → max
      (x_li) + (−y_li) + (x_ri) + (−y_ri) → max
    wherein [x_li y_li] represents image plane coordinates of the i-th point captured by the left camera, and [x_ri y_ri] represents image plane coordinates of the i-th point captured by the right camera; and
    c2) obtaining a solution of an essential matrix by using an elimination method.
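Outside the claim language, the automatic selection criteria of claim 4 can be sketched as follows. This is a minimal illustration only: the function name and the test coordinates are hypothetical, and the criteria are read as picking one point nearest the image center and one nearest each of the four image corners.

```python
import numpy as np

def select_five_points(xl, yl, xr, yr):
    """Pick five point indices per the claim-4 scoring criteria:
    minimum absolute-coordinate sum (image center) and maximum
    signed sums toward each of the four image corners."""
    xl, yl, xr, yr = map(np.asarray, (xl, yl, xr, yr))
    center = np.argmin(np.abs(xl) + np.abs(yl) + np.abs(xr) + np.abs(yr))
    ne = np.argmax(xl + yl + xr + yr)       # toward upper-right vertex
    nw = np.argmax(-xl + yl - xr + yr)      # toward upper-left vertex
    sw = np.argmax(-xl - yl - xr - yr)      # toward lower-left vertex
    se = np.argmax(xl - yl + xr - yr)       # toward lower-right vertex
    return [int(center), int(ne), int(nw), int(sw), int(se)]
```

With candidate points at the center and the four corners of a normalized image plane, the five criteria each select the expected point.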
  • 5. The method according to claim 4, wherein the obtaining of the solution of the essential matrix in the step c2) comprises the following key steps:
    c21) building a tenth-degree polynomial of an unknown number w;
    c22) solving a real root of the tenth-degree polynomial in the step c21); and
    c23) judging the solution of the essential matrix.
  • 6. The method according to claim 5, wherein in a method of building the tenth-degree polynomial of w in the step c21), a Toeplitz matrix is used:
    c211) describing all polynomials as arrays:
      a_m·x^m + a_(m−1)·x^(m−1) + … + a_2·x^2 + a_1·x + a_0, A = [a_m a_(m−1) … a_2 a_1 a_0]^T
      b_n·x^n + b_(n−1)·x^(n−1) + … + b_2·x^2 + b_1·x + b_0, B = [b_n b_(n−1) … b_2 b_1 b_0]^T; and
    c212) performing programming by using the Toeplitz matrix to compute polynomial multiplication, with a formula being:
  • 7. The method according to claim 6, wherein the step c212) further comprises: performing programming by using the Toeplitz matrix to compute polynomial division, specifically comprising the following steps:
    c213) describing n as n = T(d)q + r by using a Toeplitz matrix of d, wherein it is set that a numerator polynomial is n, a denominator polynomial is d, a quotient is q, and a remainder is r; and
    c214) calculating an optimal solution of a quotient of the polynomial division: q = (T^T T)^(−1) T^T n.
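The Toeplitz-matrix treatment of polynomial multiplication (claim 6) and division (claim 7) can be sketched as below. This is an illustrative reading, not the patented implementation: coefficients are ordered highest degree first as in claim 6, multiplication is T(a)·b, and the claim-7 quotient q = (T^T T)^(−1) T^T n is computed as a least-squares solve (exact when the remainder is zero).

```python
import numpy as np

def conv_matrix(a, n_b):
    """Toeplitz (convolution) matrix T(a) such that T(a) @ b gives
    the coefficients of the product a(x)*b(x), highest degree first."""
    a = np.asarray(a, dtype=float)
    m = len(a)
    T = np.zeros((m + n_b - 1, n_b))
    for j in range(n_b):
        T[j:j + m, j] = a      # each column is a shifted copy of a
    return T

def poly_mul(a, b):
    """Polynomial multiplication via the Toeplitz matrix (claim 6)."""
    return conv_matrix(a, len(b)) @ np.asarray(b, dtype=float)

def poly_div(n_coef, d_coef):
    """Quotient of n(x)/d(x) as the least-squares solution of
    n = T(d) q, i.e. q = (T^T T)^(-1) T^T n (claim 7, step c214)."""
    n_coef = np.asarray(n_coef, dtype=float)
    q_len = len(n_coef) - len(d_coef) + 1
    T = conv_matrix(d_coef, q_len)
    q, *_ = np.linalg.lstsq(T, n_coef, rcond=None)
    return q
```

For example, (x + 1)(x + 1) yields coefficients [1, 2, 1], and dividing [1, 2, 1] by [1, 1] recovers [1, 1].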
  • 8. The method according to claim 5, wherein a method for solving the real root of the tenth-degree polynomial in the step c22) is:
    c221) building a Sturm sequence of the tenth-degree polynomial, with the formulas being:
      p_0(x) = p(x)
      p_1(x) = p′(x)
      p_2(x) = −rem(p_0, p_1)
      p_3(x) = −rem(p_1, p_2)
      …
      0 = −rem(p_(m−1), p_m)
    wherein rem represents calculating the remainder of the polynomial division, and p(x) is the known tenth-degree polynomial;
    c222) searching all single intervals of the polynomial by recursive dichotomy in combination with Sturm's Theorem; and
    c223) after the single intervals are obtained, calculating a numerical solution of a real root of |C(w)| = 0 in each single interval by dichotomy.
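The root-isolation scheme of claim 8 (Sturm sequence, recursive dichotomy to single-root intervals, then bisection) can be sketched as follows. Function names are illustrative; this sketch assumes simple roots that do not fall exactly on interval endpoints, and a second-degree example stands in for the tenth-degree polynomial of the claim.

```python
import numpy as np

def sturm_chain(p):
    """Sturm sequence of p (coefficients, highest degree first):
    p0 = p, p1 = p', p_{k+1} = -rem(p_{k-1}, p_k), per step c221)."""
    p = np.asarray(p, dtype=float)
    chain = [p, np.polyder(p)]
    while len(chain[-1]) > 1:
        _, r = np.polydiv(chain[-2], chain[-1])
        r = np.trim_zeros(-r, 'f')
        if r.size == 0:
            break
        chain.append(r)
    return chain

def sign_changes(chain, x):
    """Count sign changes in the Sturm chain evaluated at x."""
    signs = [v for v in (np.polyval(c, x) for c in chain) if abs(v) > 1e-12]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def isolate_and_bisect(p, a, b, tol=1e-10):
    """Recursive dichotomy (step c222): split [a, b] until each piece
    holds one real root (Sturm count), then refine it by bisection
    (step c223)."""
    chain = sturm_chain(p)
    def recurse(lo, hi):
        n = sign_changes(chain, lo) - sign_changes(chain, hi)
        if n == 0:
            return []
        if n == 1:
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if np.polyval(p, lo) * np.polyval(p, mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return [0.5 * (lo + hi)]
        mid = 0.5 * (lo + hi)
        return recurse(lo, mid) + recurse(mid, hi)
    return recurse(a, b)
```

For p(x) = x² − 1 on [−2, 2], the procedure isolates and refines the two real roots ±1.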
  • 9. The method according to claim 1, wherein the step d) comprises the following steps:
    d1) obtaining a camera imaging model;
    d2) building an error equation of the photogrammetry system;
    d3) performing adaptive proportional adjustment of error equation terms;
    d4) performing a partitioning operation on the normal equation;
    d5) obtaining self-calibration bundle adjustment with length constraints; and
    d6) evaluating precision of the photogrammetry system.
  • 10. The method according to claim 1, wherein the step e) of performing traceable evaluation of orientation precision of the photogrammetry system is performed by using direct linear transformation, and comprises the following steps:
    e1) obtaining an error equation of a single imaging point;
    e2) when the photogrammetry system comprises two cameras, obtaining an error equation set corresponding to a spatial point [X Y Z];
    e3) obtaining a corresponding normal equation and coordinate correction amount, and optimizing spatial point coordinates through multiple iterations;
    e4) obtaining a measured value of a corresponding length of the scale bar through the step e3);
    e5) obtaining an average value of measurement results of lengths of the scale bar in all orientations;
    e6) performing scaling by using a ratio of the average length to a gage length as an orientation result, and performing an uncertainty analysis on the length measurement result, to obtain an orientation precision evaluation;
    e7) obtaining a quality parameter of a measurement instrument;
    e8) calculating a relative error of each length; and
    e9) calculating an uncertainty of the relative error in length measurement.
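The averaging, scaling, and relative-error steps e5) through e9) admit a simple numerical sketch. This is an illustrative reading only: the function name is hypothetical, and the uncertainty of the relative error is taken here as the sample standard deviation of the per-length relative errors, which is one common choice rather than the patent's prescribed formula.

```python
import numpy as np

def orientation_scale_and_error(measured_lengths, gage_length):
    """Steps e5)-e9), sketched: average the measured bar lengths,
    form the gage/average scale factor for the orientation result,
    then report per-length relative errors and their spread."""
    L = np.asarray(measured_lengths, dtype=float)
    mean_len = L.mean()                      # e5) average measured length
    scale = gage_length / mean_len           # e6) scaling of the orientation
    rel_err = (L * scale - gage_length) / gage_length   # e8) relative errors
    u = rel_err.std(ddof=1)                  # e9) spread of the relative error
    return scale, rel_err, u
```

For two measurements 999.9 mm and 1000.1 mm against a 1000 mm gage length, the scale factor is 1.0 and the relative errors are ±1×10⁻⁴.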
US Referenced Citations (5)
Number Name Date Kind
20130223693 Chamberlain Aug 2013 A1
20150085297 Hughes Mar 2015 A1
20160077515 Pfeffer Mar 2016 A1
20170094251 Wolke Mar 2017 A1
20170191822 Becker Jul 2017 A1