Method for assessing a camera calibration

Information

  • Patent Grant
  • 11748910
  • Patent Number
    11,748,910
  • Date Filed
    Thursday, August 26, 2021
  • Date Issued
    Tuesday, September 5, 2023
  • CPC
  • Field of Search
    • CPC
    • G06T7/85
    • G06T2207/30208
    • G06T5/006
    • G06T7/80
    • H04N17/002
  • International Classifications
    • G06K9/00
    • G06T7/80
    • Disclaimer
      This patent is subject to a terminal disclaimer.
      Term Extension
      50
Abstract
A method for assessing a camera calibration, in which a first quality measure is ascertained. A statistical error is assessed using the first quality measure. An expected value for a mapping error is ascertained: optimal model parameters and their covariance matrix are initially accessed, a matrix of the mapping error is then determined, and finally the expected value of the mapping error is ascertained.
Description
CROSS REFERENCE

The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 102020211507.9 filed on Sep. 14, 2020, which is expressly incorporated herein by reference in its entirety.


FIELD

The present invention relates to a method for assessing a camera calibration and to a system for carrying out the method.


BACKGROUND INFORMATION

Photographic apparatuses, referred to as cameras, are able to record static or moving images on photographic film or electronically on a storage medium, or to convey them via an interface.


A measuring process referred to as calibration is used to establish the deviation of one measuring device with respect to another device, a reference device. This deviation is then taken into account in the subsequent use of the measuring device to correct the read values. Within the scope of the calibration of a camera, its mapping behavior is thus established as compared to the mapping behavior of a reference camera.


A prerequisite for the use of camera systems as measuring instruments is that their geometric mapping behavior is precisely known. Specifically, this means that the mapping function, i.e., the projection of a point from the three-dimensional world into the two-dimensional image, p: \mathbb{R}^3 \to \mathbb{R}^2, x = (x, y, z)^T \mapsto u = (u, v)^T, must be known. Errors in the determination of the parameters of this mapping function or the selection of an unsuitable model may have a serious impact on all subsequent processing steps.
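By way of illustration, the following is a minimal sketch of such a mapping function in Python; the distortion-free pinhole model and the parameter values used here are a common textbook choice for illustration, not the specific model assumed by the present method.

```python
import numpy as np

def project_pinhole(x, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Map a 3D point x = (x, y, z)^T to a pixel u = (u, v)^T using a
    distortion-free pinhole model; fx, fy, cx, cy are the model parameters."""
    X, Y, Z = x
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

# A point 1 m in front of the camera maps close to the principal point.
u = project_pinhole(np.array([0.1, -0.05, 1.0]))
```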


With the ever increasing use of camera systems comes an increased need for methods for determining these model parameters. These methods should also be usable by laypersons.


Two known problems in camera calibration are, firstly, systematic errors, in which the model is unable to describe the actual mapping behavior, or describes it only insufficiently exactly, and, secondly, high or unknown residual parameter uncertainties, which are typically the result of too few measurements or observations. Recognizing these two types of errors requires expert knowledge and experience, for example with regard to how similar camera systems behave, or complex control experiments such as are known in photogrammetry.


SUMMARY

In accordance with the present invention, a method and a system for assessing a camera calibration are provided. Specific embodiments of the present invention are disclosed herein.


In accordance with an example embodiment of the present invention, the method is used to assess a camera calibration; for the purpose of assessing a statistical error, a first quality measure is ascertained, which enables an assessment of that statistical error.


In the method, an expected value for a mapping error is ascertained: optimal model parameters and their covariance matrix are initially accessed, then a matrix of the mapping error is determined, and finally the expected value of the mapping error is ascertained.


A mapping error is understood to mean the difference in the mapping behavior between the camera model estimated typically within the scope of the calibration and the true camera.


The expected value of the mapping error is the so-called uncertainty metric, and this uncertainty metric is the first quality measure considered here. It quantifies how uncertain one is regarding the estimated model parameters and therefore corresponds at the same time to a measure of the statistical error to be expected: high parameter uncertainty statistically results in higher errors. The definition of the uncertainty metric involves determining an expected value of a mapping error; the expected mapping error is thus deduced from the uncertainty in the model parameters.


The mapping error used may, for example, be a mean square error in the image space. Alternatively, a mean square error in a local image area may also be used or else the error in a specific application of the calibrated camera.


The matrix determined may, for example, be a matrix that describes the increase of the mapping error as a function of the error in the model parameters.


In accordance with an example embodiment of the present invention, the method makes it possible to assess or to estimate the aforementioned statistical error and thus to take this error into account.


In addition to the statistical error, there is also a so-called systematic error, for the assessment of which a second quality measure is ascertained in an embodiment of the method. Thus, a specific example embodiment of the present invention is provided in which two informative error and uncertainty measures, based purely on the photos or measurements made for the calibration, are taken into account. An embodiment therefore provides two quality measures, and a method for determining them, which quantitatively assess firstly systematic errors and secondly residual uncertainties. The combination of these quality measures permits both direct feedback to the person carrying out the calibration and the assessment of existing or newly designed calibration methods and structures.


The second quality measure may be ascertained by virtually segmenting an entire calibration object, which includes at least one calibration object, into smaller calibration objects; a detector noise is estimated for each of these calibration objects, the estimates are combined to form a total estimate, and this total estimate is compared with an estimate of the detector noise of the entire calibration object.


With regard to detector noise, the following applies: a detector or feature detector is an image-processing algorithm that extracts the position, i.e., the image coordinates, of distinctive points in the image (features). In camera calibration, the features are typically the corners of the checkerboard-like calibration target; reference is therefore made to corner detectors. The detection of features is generally not perfect, i.e., the extracted image coordinates (u, v) deviate from the true image coordinates. These deviations are typically described as uncorrelated noise, the so-called detector noise.


Following the detailed description of the quality measures, they are also demonstrated here in a feasibility study, a so-called proof of concept.


The assessment of a camera calibration typically takes place with respect to the errors remaining after the calibration, namely the residues, i.e., the differences between the observations and the prediction by the estimated model. In most cases, the root of the mean error square sum, the so-called root mean squared error (RMSE), or a similar measure is specified, which reflects the mean error on the calibration data set.


The following applies:











RMSE = \sqrt{\frac{1}{n_{obs}} \sum_{i=1}^{n_{obs}/2} \left\| u_i - \hat{u}_i \right\|^2},  (1)








u_i being the observed pixels and û_i being the pixels estimated on the basis of the model. The number of individual observations is n_obs, each observed pixel u_i = (u, v)^T contributing two observations.
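A minimal sketch of equation (1) in Python, assuming the observed and estimated pixels are given as N x 2 arrays; the two observations per pixel enter via n_obs = 2N:

```python
import numpy as np

def rmse(u_obs, u_pred):
    """Root mean squared error per equation (1).

    u_obs, u_pred: arrays of shape (N, 2) holding observed and
    model-estimated pixel coordinates; n_obs = 2 * N, since each
    pixel contributes two observations."""
    residues = u_obs - u_pred            # shape (N, 2)
    n_obs = 2 * len(u_obs)
    return np.sqrt(np.sum(residues ** 2) / n_obs)
```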


The residues are made up of a systematic error, for example resulting from a non-modeled distortion, and a stochastic error resulting from the so-called detector noise: the detection of the checkerboard corners in the accompanying figures is subject to random and uncorrelated errors.


The following applies asymptotically for the RMSE:











RMSE = \underbrace{\sigma \sqrt{1 - \frac{n_{param}}{n_{obs}}}}_{\text{statistical error}} + \underbrace{\epsilon_{bias}}_{\text{systematic error}},  (2)









and:














RMSE^2 = \underbrace{\sigma^2 \left(1 - \frac{n_{param}}{n_{obs}}\right)}_{\text{statistical error}} + \underbrace{\epsilon_{bias}^2}_{\text{systematic error}},  (3)







the terms being labeled according to the type of error, σ describing the detector noise, n_param describing the number of parameters of the model, and ε_bias describing the systematic error. In general, however, it remains unclear how high the individual contributions (σ and ε_bias) are. This presents a general problem, since σ is typically different for various camera-optics combinations. Thus, empirical values from earlier calibrations of the same camera system, typically with the same settings such as aperture and focus, are typically required for evaluating a camera system on the basis of the RMSE. Moreover, the RMSE includes no information regarding the accuracy with which the model parameters may be estimated on the basis of the present data.


The covariance of the parameters from the optimization is typically utilized to evaluate the uncertainty of the estimated parameters: the smaller the variance of a parameter, the more certain one is regarding its value. The variances of individual parameters are poorly suited as a quality measure, however, since there is a multitude of camera models having different parameters, which results in a lack of comparability. To improve on this, there is already a method for propagating the parameter uncertainty via a Monte-Carlo simulation into the image space and in this way estimating a maximum uncertainty in the image. Moreover, a method has been introduced in which the uncertainty of the model parameters, here quantified via the approximated Hessian matrix, is weighted with the influence of the parameters on the camera model.
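For illustration, a rough sketch of how such a Monte-Carlo propagation into the image space can look in principle; the function project(points, theta) is a hypothetical stand-in for whichever camera model is used, and the sample count is arbitrary:

```python
import numpy as np

def mc_pixel_uncertainty(project, theta_hat, cov_theta, points,
                         n_samples=500, seed=0):
    """Propagate parameter uncertainty into the image space by sampling.

    project:   hypothetical camera model, (points (N, 3), theta) -> pixels (N, 2)
    theta_hat: estimated model parameters, shape (P,)
    cov_theta: parameter covariance matrix, shape (P, P)
    Returns the maximum pixel standard deviation over the test points."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(theta_hat, cov_theta, size=n_samples)
    pixels = np.stack([project(points, s) for s in samples])  # (n_samples, N, 2)
    std_uv = pixels.std(axis=0)                               # (N, 2)
    return np.linalg.norm(std_uv, axis=1).max()
```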


The method provided herein in its different specific embodiments differs significantly from the aforementioned approaches as is described in detail below.


The method in accordance with the present invention makes it possible, at least in one example embodiment, to provide an informative error and uncertainty measure for assessing calibration body-based camera calibrations. In one specific embodiment of the present invention, it covers both contributions to the potential error: firstly, systematic (model) errors and, secondly, a residual uncertainty or variance. These measures allow a user, through direct feedback, to assess and to improve the calibration in a targeted manner. This may take place, for example, by selecting a more suitable model in the case of a systematic error or by recording additional data in the case of excessive uncertainty.


The error feedback may also be used to estimate errors in subsequent applications, for example, self-localization, triangulation, etc., and to request best possible additional measurements. In addition, the quality measures may also be used for the purpose of assessing, in principle, existing or new calibration methods and calibration structures.


The quality measures provided herein allow for the quantitative assessment of calibrations, the quality measures presupposing no empirical values regarding the present camera or the camera model. Thus, they allow, in particular, laypersons to assess and to potentially directly improve the calibration result. Further advantages are explained in detail below.


Further advantages and embodiments of the present invention result from the description herein and from the figures.


It is understood that the features cited above and those still to be explained below are usable not only in each indicated combination, but also in other combinations or when considered alone, without departing from the scope of the present invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an overview of embodiments of the method presented for an error and uncertainty feedback for a camera calibration, in accordance with the present invention.



FIG. 2 shows in a flowchart one possible sequence of the method in accordance with the present invention.



FIG. 3 shows visualizations of the method presented.



FIG. 4 shows in a flowchart one embodiment of the method in accordance with an example embodiment of the present invention.



FIG. 5 shows an example of the use of an uncertainty metric.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

The present invention is schematically represented in the figures based on specific embodiments of the present invention and is described in greater detail below with reference to the figures.



FIG. 1 shows in three representations an overview of the method described for an error and uncertainty feedback for a camera calibration. A first representation shows a camera calibration based on three checkerboard patterns, model parameters being estimated. The results are model parameters, RMSE and a covariance matrix. A second representation 12 illustrates a method for recognizing systematic errors. Incorporated in this representation is the RMSE (arrow 16). An arrow 18 illustrates the adaptation of the model. A third representation 20 illustrates a method for determining the uncertainty in the mapping behavior.


The starting point is a calibration body-based camera calibration, in which images of well-defined calibration bodies are recorded from various perspectives. On the basis of these images, (i) the position of the camera relative to, for example, multiple calibration bodies in each image is estimated with the aid of extrinsic parameters, and (ii) the model parameters of the camera are estimated with the aid of intrinsic parameters θ. The estimate takes place, for example, via a so-called bundle adjustment, a calibration cost function, typically the reprojection error, being optimized with the aid of a non-linear least squares method. The results of the optimization are the optimal model parameters θ̂, the residues, as well as the covariance matrix of the model parameters Σ_θ. In connection with such a standard camera calibration, it is possible to use the methods presented herein.
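In such a least-squares setting, Σ_θ is commonly approximated from the Jacobian of the residues at the optimum; the following is a minimal sketch of this standard Gauss-Newton approximation, under the assumption that the flattened residue vector and its Jacobian with respect to the parameters are available (the present method does not prescribe how Σ_θ is obtained):

```python
import numpy as np

def parameter_covariance(residues, J):
    """Gauss-Newton approximation Sigma_theta ~ sigma^2 (J^T J)^{-1}.

    residues: flattened residue vector of the calibration, shape (n_obs,)
    J:        Jacobian of the residues w.r.t. the parameters,
              shape (n_obs, n_param)"""
    n_obs, n_param = J.shape
    sigma2 = residues @ residues / (n_obs - n_param)  # noise variance estimate
    return sigma2 * np.linalg.inv(J.T @ J)
```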


To recognize systematic errors (ε_bias in equation (2)), it is provided to determine an independent estimate of the detector noise, identified below by σ̂. This may then be related to the estimate from the calibration, σ_calib. In a calibration without systematic errors, the ratio should be close to one, since ε_bias vanishes. A value deviating therefrom indicates non-modeled properties or other errors.


For an independent estimate of the detector noise, the influence of systematic errors must be minimized. The method presented is based on the finding that systematic errors, such as a non-modeled distortion, are expressed to a lesser degree locally in the image. The calibration object is therefore segmented virtually into V physically smaller calibration objects, whose poses, i.e., positions and orientations, are each estimated independently. Assuming that the estimates are locally free of systematic errors, namely bias-free, an estimate of the detector noise results for each of the virtual calibration objects according to equation (3):











RMSE_v = \sigma_v \sqrt{1 - \frac{n_{param,virt}}{n_{obs,virt}}},  (4)








with v ∈ {1 . . . V}. In a first variant of this method (1A), the checkerboard calibration bodies used for the calibration are segmented into individual tiles, whose poses are then optimized separately. After the calculation of the RMSE values (RMSE_v) on the basis of the residues of each pose optimization, σ_v is then calculated; in this case n_obs,virt = 8 applies, since each tile contributes 4 corners and therefore 8 observations, and n_param,virt = 6 applies, since a pose has 6 degrees of freedom. The individual estimates σ_v are then combined to form a total estimate σ̂; this takes place by means of averaging. Once the calibration is carried out, RMSE_calib may then be determined according to equation (1). With equation (2) and assuming that no systematic errors are present, the result is










\sigma_{calib} = \frac{RMSE_{calib}}{\sqrt{1 - \frac{n_{param}}{n_{obs}}}}.  (5)







The ratio σ_calib/σ̂ is then determined. If the calibration was bias-free, this value should be close to one and, in practice, below a threshold value τ_ratio. The method is summarized once again below with reference to the flowchart in FIG. 2.


In a first step 100, σ_calib is determined on the basis of the calibration residues u_i − û_i with i ∈ {1 . . . n_obs/2} according to equations (1) and (5).


In a second step 102, the pose parameters of each of the V virtual calibration objects are optimized, the parameters of the mapping behavior of the camera remaining unchanged. The residual errors (residues) are determined after the optimization (step 104).


In a fourth step 106, σ_v is determined from the residues according to equation (4), the residues of all V virtual calibration bodies being used to calculate the RMSE values.


In a fifth step 108, the quality measure for recognizing systematic errors, σ_calib/σ̂, is determined.


In a sixth step 110, the calibration is considered to be free of systematic errors (bias-free) if σ_calib/σ̂ ≤ τ_ratio.
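A minimal sketch of steps 100 through 110, assuming the per-tile RMSE_v values from the separate pose optimizations are already available; the defaults follow the tile geometry described above (8 observations, 6 pose parameters), while the threshold τ_ratio = 1.1 is purely illustrative:

```python
import numpy as np

def detect_systematic_error(rmse_calib, n_param, n_obs, rmse_tiles,
                            n_param_virt=6, n_obs_virt=8, tau_ratio=1.1):
    """Ratio test sigma_calib / sigma_hat per the flowchart of FIG. 2.

    rmse_calib: RMSE of the full calibration, equation (1)
    rmse_tiles: per-tile RMSE_v values from the separate pose optimizations
    Returns (ratio, is_bias_free)."""
    # Equation (5): noise implied by the calibration residues.
    sigma_calib = rmse_calib / np.sqrt(1.0 - n_param / n_obs)
    # Equation (4) solved for sigma_v, then averaged to sigma_hat.
    sigma_v = np.asarray(rmse_tiles) / np.sqrt(1.0 - n_param_virt / n_obs_virt)
    sigma_hat = sigma_v.mean()
    ratio = sigma_calib / sigma_hat
    return ratio, ratio <= tau_ratio
```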



FIG. 3 shows a visualization of the method for recognizing systematic errors. By separately optimizing local groups of data points, a virtually bias-free estimate of the detector noise is achieved (reference numeral 150). Results are shown for two sample cameras, which have been calibrated using camera models of varying complexity (reference numeral 152). For camera 1 (160), a pinhole camera model including two radial distortion parameters is required; in simpler models, the method recognizes systematic errors. For camera 2 (162), a fisheye model is required.



FIG. 3 shows by way of example the selection of a suitable camera model using this method.


In fact, a systematically disrupted mapping function results in σ̂ being greater than the actual detector noise level σ. This presents no problem, however, since σ_calib is influenced even more heavily by systematic errors. In general, robust estimation methods, such as M-estimators (maximum-likelihood estimators including a robust cost function), should be used both for determining the calibration parameters, RMSE_calib, and σ̂. What is important here is that all values are determined on the basis of similar and compatible methods.


The second type of error in a camera calibration occurs as a result of residual uncertainty in the estimated model parameters. The aim of the second quality measure, or uncertainty measure, is to quantify the uncertainty in the mapping behavior of the camera. Since only the uncertainty of the parameters results from the calibration, it must be quantified how a parameter error Δθ impacts the mapping behavior of the camera.


For this purpose, a mapping error K(θ̂, Δθ) is defined, which describes the difference in the mapping behavior of two camera models p_C(x; θ̂) and p_C(x; θ̂ + Δθ). In this regard, reference is made to FIG. 1. For each individual pixel u_i, i = 1, . . . , n in the image, the associated eye ray r_i = p_C^{-1}(u_i; θ̂) is determined by back projection using camera model 1. Subsequently, 3D points on these eye rays are selected and projected back into the image using camera model 2. The mapping error K(θ̂, Δθ) then quantifies the mean difference between the coordinates of the original pixels u_i and of the back-projected pixels ũ_i, i = 1, . . . , n:










K(\hat{\theta}, \Delta\theta) = \frac{1}{2n} \sum_{\text{pixel } i}^{n} \left\| u_i - p_C(x_i; \hat{\theta} + \Delta\theta) \right\|^2  (6)

= \frac{1}{2n} \sum_{\text{pixel } i}^{n} \left\| \Delta u_i \right\|^2,  (7)








where x_i = p_C^{-1}(u_i; θ̂) is the 3D point determined by inverse projection of pixel u_i. The mapping error may be expressed as follows via a Taylor approximation up to the 2nd order:

K(\hat{\theta}, \Delta\theta) \approx \Delta\theta^T H \Delta\theta,  (8)

the matrix

H = \frac{1}{2n} J_{res}^T J_{res}

being defined via the Jacobian matrix J_{res} = d\,res/d\Delta\theta of the residue vector res(\hat{\theta}, \Delta\theta) = (\Delta u_1^T, \ldots, \Delta u_N^T)^T.
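A sketch of the computation of the residues and of H per equations (6) through (8), assuming hypothetical functions project(x, theta) and backproject(u, theta) for the camera model; the Jacobian is approximated numerically by central differences:

```python
import numpy as np

def mapping_error_residues(project, backproject, u, theta_hat, dtheta):
    """Residue vector res(theta_hat, dtheta) = (du_1^T, ..., du_N^T)^T.

    project(x, theta):     hypothetical forward model, 3D point -> pixel (2,)
    backproject(u, theta): hypothetical inverse model, pixel -> 3D point on eye ray
    u: pixels of shape (n, 2); theta_hat, dtheta: parameter vectors."""
    x = np.array([backproject(ui, theta_hat) for ui in u])   # fixed eye-ray points
    u_tilde = np.array([project(xi, theta_hat + dtheta) for xi in x])
    return (u - u_tilde).ravel()

def mapping_error_matrix(project, backproject, u, theta_hat, eps=1e-6):
    """H = (1/2n) J_res^T J_res at dtheta = 0, via central differences."""
    n, p = len(u), len(theta_hat)
    J = np.zeros((2 * n, p))
    for k in range(p):
        step = np.zeros(p)
        step[k] = eps
        r_plus = mapping_error_residues(project, backproject, u, theta_hat, step)
        r_minus = mapping_error_residues(project, backproject, u, theta_hat, -step)
        J[:, k] = (r_plus - r_minus) / (2 * eps)
    return J.T @ J / (2 * n)
```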


It may then be mathematically deduced that the expected value of the mapping error of a calibration result p_C(x; θ̂) with covariance matrix Σ_θ, as compared to the true (unknown!) camera model p_C(x; θ), is provided by:

\mathbb{E}[K] = trace(\Sigma_\theta^{1/2} H \Sigma_\theta^{1/2}).  (9)


This means that the expected mapping error of the calibration result as compared to the true camera model may be predicted, even though the true camera model is unknown. This expected value trace(\Sigma_\theta^{1/2} H \Sigma_\theta^{1/2}) is the uncertainty metric.


The calculation of the uncertainty metric takes place specifically as is explained with reference to the accompanying flowchart in FIG. 4.


In a first step 200, the calibration body-based calibration is carried out and the optimal model parameters θ̂ and covariance matrix Σ_θ are determined.


In a second step 202, matrix H of the mapping error is determined:

    • implement the mapping error K(θ̂, Δθ) according to equation (7) as a function of the estimated model parameters θ̂ and of a parameter error Δθ.
    • determine the Jacobian matrix J_res = d res/dΔθ of the mapping error K at the point of the optimal model parameters θ̂ via numerical approximation.







    • determine H = \frac{1}{2n} J_{res}^T J_{res}.






In a third step 204, \mathbb{E}[K] = trace(\Sigma_\theta^{1/2} H \Sigma_\theta^{1/2}) is determined.
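Given H and Σ_θ, the metric itself is a single line; this minimal sketch uses the trace identity stated in variant (2F) below to avoid a matrix square root:

```python
import numpy as np

def uncertainty_metric(cov_theta, H):
    """Expected mapping error E[K] per equation (9).

    trace(Sigma^{1/2} H Sigma^{1/2}) = trace(Sigma H) by the cyclic
    property of the trace, so no matrix square root is needed."""
    return np.trace(cov_theta @ H)
```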


One example of the use of the uncertainty metric is given below. Depending on the data set, a residual uncertainty remains after the calibration. This is a function of the number of recordings and of their information content. Examples of informative and less informative recordings are shown above. The uncertainty metric presented indicates, after a calibration, how high the error to be expected in the image is. This error decreases with the number of data points and with the information content of the data.



FIG. 5 in the following section shows by way of example how the metric quantifies the residual uncertainty for calibrations using different data sets.


This method may be clearly differentiated from the existing methods for quantifying the uncertainty.


The calculation of the metric presented requires no complex Monte-Carlo simulation. Instead of a maximum error in a selected set of points, it provides a mean error across all pixels. The method provided herein makes it possible to consider possible compensations using extrinsic parameters, as well as other application-specific adaptations of the mapping error to be predicted.


The observability indicates the increase of the calibration cost function in the most poorly observable parameter direction. In contrast, the method provided herein considers the uncertainty of all parameter directions (not only the most poorly observable parameter direction). Moreover, the metric provided herein is more readily interpretable: the error to be expected is determined in the image space, while the observability indicates an increase of the calibration cost function.


The behavior of both measures is represented below based on real experiments.


In FIG. 3, it is initially shown for two cameras how it may be decided, purely on the basis of the ratio σ_calib/σ̂, whether a mapping model is sufficient. For this purpose, the two cameras have each been calibrated using camera models of increasing complexity. The graphic representations at the bottom of FIG. 3 show camera models of increasing complexity on the x-axis and the corresponding values of σ_calib (dashed line) and σ̂ (solid line) on the y-axis. The values σ_calib and σ̂ here have been determined by robust estimation on the basis of the median absolute deviation (MAD) method and are therefore labeled calibration MAD and detector MAD. It may be seen how the values of calibration MAD and detector MAD become nearly identical with increasing model complexity (ratio approximately one). This shows that these models are able to describe the mapping behavior free of systematic errors under the given observations.


The absolute values in this case are not significant and may fluctuate widely. For this reason, expert knowledge and empirical values for the specific camera model were always required in the past. The virtual segmentation of the calibration body into individual tiles (variant 1A) is shown once again at the top of FIG. 3.



FIG. 5 shows by way of example the application of the uncertainty measure. In general, the residual uncertainty of a calibration is a function of the calibration data set: the more recordings and the more informative the recordings, the smaller the residual uncertainty. FIG. 5 shows the calibration of camera 1 (FIG. 3, 160).


A pinhole camera model including two radial distortion parameters has been used. Recordings in which the calibration body appears large in the image and in which the body has extreme angles of inclination relative to the camera are particularly informative. Less informative are recordings in which the body is far away and is positioned frontoparallel to the camera. Laypersons frequently take unsuitable (uninformative) recordings. In return, the uncertainty metric provided herein offers direct feedback (FIG. 5, bottom): when calibrating with few and uninformative recordings, the expected mapping error is significantly higher than with many informative recordings. Depending on the application, it may then be decided whether the calibration is already sufficiently exact or whether more measurements are required. The absolute value of the metric is, in particular, independent of the selected camera model and is therefore comparable across a wide variety of calibrations.


The quality measures provided herein allow for the quantitative assessment of calibrations, the quality measures presupposing no empirical values with respect to the present camera or the camera model. Thus, they allow lay users, in particular, to assess and to potentially directly improve the calibration result. Further advantages are cited in detail below.


Recognition and quantification of systematic errors:


Previously used measures, such as the RMSE, contain a mixture of stochastic errors (detector noise) and systematic errors. Since the stochastic error varies depending on the camera and on the corner detector, it is not apparent from the previously used measures whether a systematic error or merely strong noise is present. The method provided herein enables the independent estimation of the noise and thus the decoupling of the two portions.


Virtually all types of systematic errors, such as an inadequate camera model, a false image assignment, errors in the calibration body, etc., may be recognized with the method.


The recognition and quantitative assessment take place without the addition of new data or a reference experiment, which is otherwise customary in photogrammetry but requires great additional effort.


The measure presented is independent of the underlying camera model and may therefore be generally used.


The measure presented presupposes no empirical values for the examined cameras, as would be required, for example, in an analysis based purely on the RMSE.


Quantification of the Residual Uncertainty


The residual uncertainty is typically indicated via the (co)variance of the camera model parameters, or via the sum of these variances. There is, however, a multitude of different camera models, ranging from simple pinhole cameras including three parameters up to local camera models including approximately 10^5 parameters. The specification of parameter uncertainties is therefore difficult to interpret and is not comparable across camera models. The method provided propagates the parameter uncertainty into the image space and thus offers an interpretable and comparable measure for the uncertainty in the image.


The method enables a flexible adaptability of the reference experiment: depending on the application, for example, the expected mean error in a particular image area may be predicted. Instead of the error in the image space, the error in the angles of the camera eye rays may also be predicted. A specific application, for example triangulation or self-localization, may also be defined as a reference experiment. This then provides, for example, the expected squared triangulation error.


The metric may be utilized to request best possible additional measurements. In this way, the expected error in the image space may be reduced as quickly as possible.


The calculation of the metric provided herein requires no complex Monte-Carlo simulation. Instead of a maximum error in a selected set of points, it provides a mean error across all pixels. The method provided herein makes it possible to consider possible compensations using extrinsic parameters, as well as other application-specific adaptations of the mapping error to be predicted.


The observability indicates the increase of the calibration cost function in the most poorly observable parameter direction. In contrast, the method provided herein considers the uncertainty of all parameter directions (not only the most poorly observable parameter direction). Moreover, the metric provided herein is more readily interpretable: the error to be expected is determined in the image space, whereas the observability indicates an increase of the calibration cost function.


Recognition of Systematic Errors


(1B) In general, various methods for estimating the values σ_calib and σ_v may be used. In practice, the use of robust estimators, such as M-estimators or the median absolute deviation, is particularly important.


(1C) The calibration object may be segmented in various ways into virtual calibration objects. The only general requirement is that more than six independent observations be present, so that the estimate is overdetermined. In the case of exactly six observations, the pose parameters could be estimated, but σ_v could not be determined, since RMSE_v = 0 would apply. Groups of six corners, for example, could also be used.


(1D) The method is not limited to planar or checkerboard-like calibration bodies. Calibration bodies that include, for example, circular markings not situated on one plane may also be used. The only prerequisite is that the relative position of the individual markings to one another is known. When calibrating only one camera, even knowledge of the overall dimensions (scale) of the calibration body is not required in some cases.


(1E) Alternative quality measures may also be calculated from σ_calib and σ̂, for example










g_1 = \frac{\hat{\sigma}}{\sigma_{calib}}  (10)

g_2 = \sigma_{calib} - \hat{\sigma}  (11)

g_3 = 100\% \cdot \frac{\sigma_{calib}}{\hat{\sigma}}  (12)

g_4 = \frac{\sigma_{calib} - \hat{\sigma}}{\hat{\sigma}},  (13)







or more generally

g_5 = f(\sigma_{calib}, \hat{\sigma}).  (14)
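A short sketch of the measures (10) through (13); equation (14) merely states that any function of the two estimates may serve as a quality measure:

```python
def quality_measures(sigma_calib, sigma_hat):
    """Alternative quality measures per equations (10)-(13)."""
    g1 = sigma_hat / sigma_calib                 # (10)
    g2 = sigma_calib - sigma_hat                 # (11)
    g3 = 100.0 * sigma_calib / sigma_hat         # (12), in percent
    g4 = (sigma_calib - sigma_hat) / sigma_hat   # (13)
    return g1, g2, g3, g4
```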


(1F) In addition, there is the possibility of computing the bias term ε_bias directly. For this purpose, RMSE_calib and σ̂ are inserted in equation (2), which is transformed to










\epsilon_{bias} = RMSE_{calib} - \hat{\sigma} \sqrt{1 - \frac{n_{param}}{n_{obs}}}.  (15)







Thus, further alternative quality measures may be defined, such as










g_6 = \frac{\epsilon_{bias}}{RMSE_{calib} - \epsilon_{bias}}  (16)

g_7 = 100\% \cdot \frac{\epsilon_{bias}}{RMSE_{calib} - \epsilon_{bias}}  (17)

g_8 = \frac{\epsilon_{bias}}{RMSE_{calib}}  (18)

g_9 = \frac{RMSE_{calib}^2 - \hat{\sigma}^2 \left(1 - \frac{n_{param}}{n_{obs}}\right)}{RMSE_{calib}^2}  (19)

g_{10} = 100\% \cdot \frac{\epsilon_{bias}}{RMSE_{calib}}.  (20)







(1G) A further manner of defining a quality measure results from predicting the RMSE,











\widehat{RMSE}_{calib} = \hat{\sigma} \sqrt{1 - \frac{n_{param}}{n_{obs}}},  (21)







and relating this to RMSE_calib:










g_{11} = \frac{RMSE_{calib}}{\widehat{RMSE}_{calib}}  (22)

g_{12} = \frac{RMSE_{calib}^2}{\widehat{RMSE}_{calib}^2}  (23)

g_{13} = \frac{RMSE_{calib} - \widehat{RMSE}_{calib}}{\widehat{RMSE}_{calib}}  (24)

g_{14} = \frac{RMSE_{calib} - \widehat{RMSE}_{calib}}{RMSE_{calib}}.  (25)







Some of the above formulations may be mathematically equivalent and differ only in the derivation.


Quantification of the Residual Uncertainty


(2B) In order to accelerate the calculation, it is possible to determine the mapping error K(θ̂, Δθ) across a subset of pixels instead of across all pixels (for example, a homogeneous grid distributed across the image area).


(2C) In particular application scenarios, only limited image areas are relevant. In this case, the mapping error K(θ̂, Δθ) may be determined only across pixels in this image area instead of across all pixels. Moreover, image areas may also be weighted to various degrees.


(2D) Depending on the application, it may be taken into account in the case of the mapping error K(θ̂, Δθ) that differences in the intrinsic parameters may frequently be compensated for by a different selection of the extrinsic parameters (of the position of the camera). In such cases, the mapping error is therefore less than the error indicated by equation (6). The alternative formulation taking the extrinsics into account is:









K = \frac{1}{N} \sum_{\text{pixel } i}^{N} \left\| u_i - p_C([R|t]\,x_i; \hat{\theta} + \Delta\theta) \right\|^2  (26)

with 3D points x_i = p_C^{-1}(u_i; \hat{\theta}),  (27)

and extrinsics [R|t] = \arg\min_{[R|t]} K.  (28)
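A hedged sketch of the compensated error (26) through (28), minimizing over a 6-parameter pose; project(x, theta) is again a hypothetical stand-in for the camera model, and the derivative-free minimizer is an illustrative choice, not a prescription of the method:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def compensated_mapping_error(project, u, x, theta_hat, dtheta):
    """K per equations (26)-(28): mapping error remaining after the best
    rigid transform [R|t] has absorbed what the extrinsics can compensate.

    project(x, theta): hypothetical camera model, 3D points (N, 3) -> pixels (N, 2)
    u: original pixels (N, 2); x: back-projected 3D points (N, 3), eq. (27)."""
    def K(pose):  # pose = (rotation vector (3,), translation (3,))
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        x_t = x @ R.T + pose[3:]
        return np.mean(np.sum((u - project(x_t, theta_hat + dtheta)) ** 2, axis=1))
    result = minimize(K, np.zeros(6), method="Nelder-Mead")  # equation (28)
    return result.fun
```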







(2E) Instead of the mean error in the image, it is possible to predict the mean error in the angles of the eye rays. The mapping error K(θ̂, Δθ) in this case quantifies not the mean difference in the image space (equation (7)), but the mean difference in angles.


(2F) The expression \mathbb{E}[K] = trace(\Sigma_\theta^{1/2} H \Sigma_\theta^{1/2}) may also be formulated as follows (mathematically equivalent):

\mathbb{E}[K] = trace(\Sigma_\theta H),  (29)
\mathbb{E}[K] = trace(H \Sigma_\theta),  (30)
\mathbb{E}[K] = \sum_{i=1}^{N} \lambda_i,  (31)


where λ_i are the eigenvalues of the matrix \Sigma_\theta^{1/2} H \Sigma_\theta^{1/2}.
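These equivalences follow from the cyclic property of the trace; a quick numerical check on random matrices:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
cov = A @ A.T + 5 * np.eye(5)            # a positive definite Sigma_theta
H = np.diag(rng.uniform(0.1, 1.0, 5))    # a simple positive semidefinite H

S = sqrtm(cov).real
M = S @ H @ S
assert np.allclose(np.trace(M), np.trace(cov @ H))            # (29)
assert np.allclose(np.trace(M), np.trace(H @ cov))            # (30)
assert np.allclose(np.trace(M), np.linalg.eigvalsh(M).sum())  # (31)
```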


(2G) Instead of the expected value of the mean square error \mathbb{E}[K] (in pixel² units), the square root thereof, \sqrt{\mathbb{E}[K]} = \sqrt{trace(\Sigma_\theta^{1/2} H \Sigma_\theta^{1/2})} (in pixel units), may be used.


This is potentially even more easily interpretable.


(2H) More generally, an arbitrary function f(\mathbb{E}[K]) may be used.

Claims
  • 1. A method for assessing a camera calibration, the method comprising the following steps: ascertaining a first quality measure; and assessing a statistical error using the first quality measure; wherein the ascertaining of the first quality measure includes ascertaining an expected value for a mapping error, with optimal model parameters and their covariance matrix being accessed, then a matrix of a mapping error being determined, and finally the expected value of the mapping error being ascertained.
  • 2. The method as recited in claim 1, wherein the assessing includes an assessment of a calibration measuring set-up or of a calibration data set.
  • 3. The method as recited in claim 1, further comprising, based on the first quality measure, updating the calibration with additional measurements.
  • 4. The method as recited in claim 1, wherein the assessing includes an assessment of a parameter selection.
  • 5. The method as recited in claim 1, further comprising: ascertaining a second quality measure with which a systematic error is assessed.
  • 6. The method as recited in claim 5, wherein the second quality measure is ascertained by: segmenting an entire calibration object, which includes at least one calibration object, virtually into calibration objects; for each of the calibration objects, estimating a detector noise; combining the estimated noises of each of the calibration objects to form a total estimate; and comparing the total estimate with an estimate of the detector noise of the entire calibration object.
  • 7. The method as recited in claim 5, wherein the assessing includes an assessment of a model selection.
  • 8. The method as recited in claim 1, wherein a user is given a direct feedback in order for the user to assess the calibration.
  • 9. The method as recited in claim 1, wherein a mean square error in an image space is used as the mapping error.
  • 10. The method as recited in claim 1, wherein a matrix is determined, which describes an increase of the mapping error as a function of an error in the model parameters.
  • 11. A method for a camera, the method comprising: ascertaining a matrix of a mapping error based on a covariance matrix of optimal model parameters; and ascertaining an expected value of the mapping error based on the ascertained matrix of the mapping error, the expected value being a first quality measure of an expected statistical error of a calibration of the camera.
  • 12. The method as recited in claim 11, wherein the mapping error is a difference between a camera model of the calibration and an actual behavior of a camera.
  • 13. The method as recited in claim 11, wherein the ascertained expected value is a metric of an uncertainty of the model parameters.
  • 14. The method as recited in claim 11, wherein the mapping error is a mean square error in an image area of the camera.
  • 15. The method as recited in claim 11, wherein the ascertained matrix of the mapping error represents an increase of the mapping error as a function of an error in the model parameters.
  • 16. The method as recited in claim 11, further comprising: modifying the calibration based on the first quality measure; and responding, by the camera, to a surrounding environment by generating and outputting an image representation of the surrounding environment based on the modified first quality measure.
  • 17. The method as recited in claim 11, further comprising: ascertaining a second quality measure, which is a representation of a detector noise of the model.
  • 18. The method as recited in claim 17, wherein the ascertainment of the second quality measure is performed by: segmenting a calibration object into a plurality of virtual calibration objects; for each of the virtual calibration objects, estimating a respective detector noise; forming a total estimate based on a combination of the respective estimated detector noises; and comparing the formed total estimate to an estimate of detector noise of the calibration object as a whole.
  • 19. The method as recited in claim 11, further comprising: modifying the calibration based on the first quality measure and the second quality measure; and responding, by the camera, to a surrounding environment by generating and outputting an image representation of the surrounding environment based on the modified first quality measure.
Priority Claims (1)
Number Date Country Kind
102020211507.9 Sep 2020 DE national
US Referenced Citations (12)
Number Name Date Kind
10771776 Schilling Sep 2020 B2
11361469 Raag Jun 2022 B2
11367263 Devernay Jun 2022 B1
20040073360 Foxlin Apr 2004 A1
20060258938 Hoffman Nov 2006 A1
20070146391 Pentenrieder Jun 2007 A1
20080297502 Simard Dec 2008 A1
20110310255 Medeiros Dec 2011 A1
20180253107 Heinla Sep 2018 A1
20210225026 Amthor Jul 2021 A1
20220086425 Hagemann Mar 2022 A1
20220270358 Cox Aug 2022 A1
Non-Patent Literature Citations (1)
Entry
Peng et al. “Calibration Wizard: A Guidance System for Camera Calibration Based on Modelling Geometric and Corner Uncertainty”, computer vision foundation, ICCv papers, 2019. (Year: 2019).
Related Publications (1)
Number Date Country
20220084250 A1 Mar 2022 US