The disclosure relates to image reconstruction, and, more particularly, to registration to support the superposition of a plurality of mutually offset images of a scene.
When attempting to image a scene for which single images have a low signal-to-noise ratio (SNR), it can be necessary to collect and superimpose multiple images so as to produce a composite image having an increased SNR. However, in some imaging scenarios the physical subject being imaged may not be static, and/or the means of imaging the subject may not be static, thereby causing misalignment of the superimposed images, and degrading the quality of the composite image.
An example would be attempting to image a very small, dim feature using a microscope, where the feature may be subject to Brownian motion and/or the microscope stage may be unavoidably affected by vibrations. Even if the feature movements are small in absolute terms, the microscope may magnify these movements such that the feature will appear to move sporadically within the field of view, causing a composite image of the feature to appear blurry, and degrading the efficacy of the microscope as a tool to image the feature. Similar examples arise, for example, in the fields of astronomy and navigation.
This problem is illustrated by
When this problem arises, it is desirable to make an attempt to “register” the individual images, before they are combined into the composite image, by estimating the X and Y locations of the feature or features within each of the images. These location estimates then inform how to correctively shift each image, so that the final locations of the features are in near alignment, and signal from the features directly overlaps when the images are superimposed to form the composite image. Each of these X and Y locations can be interpreted as a combination of an average X and Y location that is common to all of the images plus some absolute shift which changes between the images. The registration process then entails estimating the absolute shifts, so that reverse shifts can be applied to the images, so that all of the features in all of the images are located at their average locations.
The general problem can be defined as follows:
The ultimate goal is to estimate the absolute shifts of the individual noisy images, so that they can be “registered” and superimposed to form a high-quality composite image. In many cases, it is not necessary to estimate the absolute shifts of the images, because only the relative translations of the images are needed to form a high-quality composite image. The image alignment process will be deemed successful when the images are translated so that all of the features in the images are in the same final location. If the absolute signal placement matters for a specific application, a final shift may be estimated from the superimposed, composite image, which will have much higher SNR as compared to the individual images.
This is illustrated in
Of course, it is often desirable for these absolute shift estimations to be made rapidly, and without requiring an excess of computing resources.
What is needed, therefore, is a method and apparatus for accurately and efficiently estimating the positional shifts between a plurality of images of a feature, including images where the precise nature of the feature or features is not known a priori.
The present disclosure is a method and apparatus for accurately and efficiently estimating the positional shifts between a plurality of images of a feature, including images where the precise nature of the feature or features is not known a priori.
The disclosed method addresses this problem by using image cross correlations to estimate all of the relative shifts, ({right arrow over (δ)}p−{right arrow over (δ)}q), and extracts estimates of the absolute shifts, {right arrow over (δ)}n, by relating the relative estimates and absolute shifts in a linear system of equations and solving the system through a tailored robust regression technique. The regression is significantly expedited by tailoring the core matrix arithmetic to take maximum advantage of the sparsity and symmetry of the linear system of equations.
In some cases, applying pre-processing to the image input data can improve the performance of shift-estimating calculations, for example by improving the validity of the assumptions used in the calculations or tempering the impact of the noise. The choice of what pre-processing should be applied is highly context specific, and depends critically on the nature of the noise and of the signal. Some examples of pre-processing include various forms of smoothing, background subtraction, application of filters, regularization, spatial transformations, up-sampling/down-sampling, and cropping.
Once any pre-processing has been applied, the next step is to estimate the relative shifts between each pair of images Mp and Mq, {right arrow over (d)}pq=({right arrow over (δ)}p−{right arrow over (δ)}q), by computing 2D cross-correlations between all possible combinations of the measured images, and then “peak-picking” the locations of the correlation peaks, which correspond to estimates of the relative shifts. It is not necessary to calculate autocorrelations (p=q) or reflected cross-correlations ({right arrow over (d)}qp).
The method that is used for the peak-picking (aka peak-finding) will be application specific. Some examples include finding the maximum intensity cross correlation pixel or finding the centroid of a region of generally high intensity. The peak-finding may benefit from applying pre-processing to the cross-correlations, such as masking, smoothing, and other filtering. Depending on the selected technique, the peak-finding process may yield more than one peak estimate for some of the cross-correlations. In embodiments, all of the estimates are retained, which can lead to improved results.
The process of calculating these cross correlations and peak finding results in distinct estimates of the relative shifts {right arrow over (d)}pq=({right arrow over (δ)}p−{right arrow over (δ)}q). The next step is to use the relative shift estimates to determine estimates for the absolute shifts {right arrow over (δ)}n. Each {right arrow over (d)}pq estimate can be considered to be a measurement of a linear combination of the full list of (unknown) {right arrow over (δ)}n values. The underlying {right arrow over (δ)}n values are determined by solving the corresponding system of linear equations by applying a regression algorithm.
Most regression algorithms are designed for application to a 1D list of independent variables. When these algorithms are applied to a 2D system, the 2D system is typically represented as the concatenation of the 1D horizontal component sub-system and the 1D vertical component subsystem. The resulting linear system can be expressed as AD=E. Due to the nature and quantity of relative shift estimates, the present system of linear equations is overdetermined, in that there is no solution for D which satisfies all of the equations, and equivalently the naïve matrix solution D=A−1E is wholly invalid.
To counter this common circumstance, a standard approach is to search instead for a “least squares” solution, i.e. determine the D that minimizes the sum of the squares of the fit residuals: Σi((AD)i−Ei)2. In standard practice, this is solved through the corresponding “normal equation,” ATAD=ATE. The solution D=(ATA)−1(ATE) is mathematically equivalent to finding the least squares solution. The main benefit of the normal equation is that ATA is not overdetermined, and has full rank so long as A has full rank, thus guaranteeing the existence of a unique solution. Moreover, the normal equation returns the problem to a matrix arithmetic problem for which solutions are readily calculated.
Outliers in the relative shift estimates forming E present a challenge for a least squares solution, because entries of E which are inconsistent with the majority of the estimates will have a much larger contribution to the sum Σi((AD)i−Ei)2. Regression techniques which are specifically designed to handle these outliers are referred to as “robust”. A technique that is applied by many such robust regression implementations is to generate a weighted least squares solution instead of an ordinary least squares solution. Given a list of nonnegative weights wi corresponding to entries in E, the weighted least squares solution minimizes Σiwi·((AD)i−Ei)2, wherein the square of the residuals has a “weighted” contribution to the sum. The corresponding weighted normal equation is ATWAD=ATWE, whose solution is similarly D=(ATWA)−1(ATWE), where W is the square matrix having the weights wi inserted along its diagonal, with all other matrix values being zero.
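By way of illustration, the following is a minimal sketch of solving the weighted normal equation for a generic system in NumPy. It uses the standard concatenated representation, not the tailored implementation described later in this disclosure; the function name and the sample values in the comments are hypothetical.

```python
import numpy as np

def weighted_least_squares(A, E, w):
    """Solve the weighted normal equation A^T W A D = A^T W E for D.

    A : (m, n) coefficient matrix
    E : (m,)  measurement vector
    w : (m,)  nonnegative weights (the diagonal of W)
    """
    # Forming W explicitly is wasteful; scale the rows of A and entries of E instead.
    Aw = A * w[:, None]          # equivalent to W @ A
    lhs = A.T @ Aw               # A^T W A
    rhs = A.T @ (w * E)          # A^T W E
    return np.linalg.solve(lhs, rhs)

# Tiny usage example with made-up numbers:
# A = np.array([[1., -1.], [1., 0.]]); E = np.array([0.5, 1.0]); w = np.array([1.0, 0.2])
# D = weighted_least_squares(A, E, w)
```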
According to the present disclosure, the execution speed and estimation quality of the robust regression is implicitly improved by tailoring the robust regression implementation in the following ways:
In summary, embodiments of the method of the present disclosure include the following steps:
Advantages of these steps, in embodiments, are as follows:
relative shifts, which means that each estimate need not be perfect.
A first general aspect of the present disclosure is a method of combining a plurality of images of a scene having random mutual offsets to form a composite image having increased signal-to-noise. The method includes obtaining a plurality of images of a scene, the images having unknown, random vertical and horizontal offsets relative to a center of the images, each of the images being a combination of signal and noise, the center of the images being defined such that averages of both the horizontal offsets and the vertical offsets are zero, forming a plurality of cross-correlations of the images, where each of the images is cross-correlated with each of the other images, while all auto-correlations and reflections are omitted, each of the cross-correlations being a cross-correlation of an associated pair of the images, for each of the cross-correlations, picking at least one peak as an estimate of a relative offset between its associated images, each of said relative offsets comprising a relative horizontal offset component and a relative vertical offset component, assigning a weight to each of the picked peaks.
The method further includes calculating a matrix H having rows and columns corresponding to the images, wherein for each of the cross-correlations, a value of H at a row and column corresponding to the pair of images associated with the cross-correlation is equal to a sum of all weights assigned to peaks picked in the cross-correlation, and wherein all other values within H are zero, calculating a matrix G that is equal to a sum of H and its transpose HT.
The method further includes calculating a matrix G′ wherein each diagonal value of G′ is equal to a sum of the values of the corresponding row of G, and all other values of G′ are zero, forming a matrix Q filled with ones having a dimensionality equal to a dimensionality of G, calculating a matrix L that is equal to Q+G′−G.
The method further includes forming a 2-column matrix E′, wherein the rows of E′ correspond with the peaks that were picked for the cross-correlations, and wherein a first of the columns of E′ comprises the relative horizontal offset components of the picked peaks, and a second of the columns of E′ comprises the relative vertical offset components of the picked peaks.
The method further includes calculating a matrix Vhorz having rows and columns corresponding to the images, wherein for each of the cross-correlations, a value of Vhorz at a row and column corresponding to the pair of images associated with the cross-correlation is equal to a sum of all of the weights of the peaks that were picked for the cross-correlation multiplied by the value of the first column of E′ in the row that corresponds to the picked peak.
The method further includes calculating a matrix Vvert having rows and columns corresponding to the images, wherein for each of the cross-correlations, a value of Vvert at a row and column corresponding to the pair of images associated with the cross-correlation is equal to a sum of all of the weights of the peaks that were picked for the cross-correlation multiplied by the value of the second column of E′ in the row that corresponds to the picked peak, calculating matrices Zhorz=Vhorz−(Vhorz)T and Zvert=Vvert−(Vvert)T.
The method further includes calculating a 2-column matrix R, wherein the rows of R correspond with the images of the scene, wherein each of the values of R in a first of the columns of R is equal to a sum of all of the values of Zhorz in the corresponding row thereof, and each of the values of R in a second of the columns of R is equal to a sum of all of the values of Zvert in the corresponding row thereof.
The method further includes solving for a 2-column matrix D′=L−1R, where the values in a first of the columns of D′ are estimates of the horizontal offsets of the images relative to the center of the images, and the values in a second of the columns of D′ are estimates of the vertical offsets of the images relative to the center of the images.
Finally the method includes shifting each of the images horizontally and vertically according respectively to the first and second columns of D′, and adding the shifted images together to form the composite image.
Embodiments further include applying pre-processing to the images before forming the plurality of cross-correlations. In some of these embodiments, the pre-processing includes at least one of smoothing, background subtraction, filtering, regularization, spatial transformation, up-sampling, down-sampling, and cropping.
In any of the above embodiments, picking at least one peak for at least one of the cross-correlations can include finding a maximum intensity pixel of the cross-correlation.
In any of the above embodiments, picking at least one peak for at least one of the cross-correlations can include finding a centroid of a region of relatively high intensity in the cross-correlation.
In any of the above embodiments, the weights can be determined according to a degree of confidence that the corresponding peak is an outlier.
In any of the above embodiments, the values of the weights can be iteratively updated until a corresponding fit stabilizes.
In any of the above embodiments, for each of the peaks, the weight that is applied to the peak can be determined by unsupervised anomaly detection according to a degree of confidence that the peak is an outlier.
A second general aspect of the present disclosure is a method of combining a plurality of images of a scene having random mutual offsets to form a composite image having increased signal-to-noise. The method includes obtaining a plurality of N images of a scene, the images having unknown, random vertical δnvert and horizontal δnhorz offsets relative to a center of the images, where n=1 to N, each of said offsets being expressed as a vector {right arrow over (δ)}n=(δnhorz, δnvert), each of the images being a combination of signal and noise.
The method further includes forming a plurality of cross-correlations Xp,q of the images, where each of the images is cross-correlated with each of the other images, while all auto-correlations Xp,p and reflections Xq,p are omitted, resulting in
(N²−N)/2 cross-correlations Xp,q. For each of the cross-correlations Xp,q, the method further includes picking at least one peak as an estimate {right arrow over (d)}k of a relative offset {right arrow over (d)}p,q={right arrow over (δ)}p−{right arrow over (δ)}q=(δphorz−δqhorz, δpvert−δqvert) between image p and image q, where the total number K of estimates {right arrow over (d)}k is equal to or greater than (N²−N)/2,
and wherein at least one peak is picked for each of the cross-correlations Xp,q.
The method further includes forming a K×2 matrix E′, wherein for k=1 to K, E′k,1=dkhorz and E′k,2=dkvert, solving for an N×2 matrix D′, where for each n=1 to N, D′n,1 is an estimate of δnhorz and D′n,2 is an estimate of δnvert, by selecting the center of the images such that Σn=1ND′n,1=0 and Σn=1ND′n,2=0, and applying a weighted regression to an equation D′=(AintTWintAint)−1(AintTWintE′).
Applying said weighted regression includes solving an equation AintTWintAint=L according to the following steps:
The method further includes solving an equation AintTWintE′=R according to the following steps:
Finally, the method includes shifting each of the images n=1 to N horizontally and vertically according respectively to D′n,1 and D′n,2, and adding the shifted images together to form the composite image.
Embodiments further include applying pre-processing to the images before forming the plurality of cross-correlations. In some of these embodiments, the pre-processing includes at least one of smoothing, background subtraction, filtering, regularization, spatial transformation, up-sampling, down-sampling, and cropping.
In any of the above embodiments, picking at least one peak for at least one of the cross-correlations Xp,q can include finding a maximum intensity pixel of Xp,q.
In any of the above embodiments, picking at least one peak for at least one of the cross-correlations Xp,q can include finding a centroid of a region of relatively high intensity in Xp,q.
In any of the above embodiments, applying the weighted regression can include applying an iterative re-weighted least squares regression wherein the values of wk are iteratively updated until a corresponding fit stabilizes.
In any of the above embodiments, applying the weighted regression can include determining the weights by unsupervised anomaly detection, wherein for each value of k=1 to K, the value of wk is determined according to a degree of confidence that {right arrow over (d)}k is an outlier.
A third general aspect of the present disclosure is a non-transitory computer readable storage medium having instructions stored thereon that, when executed by a computing device, combine a plurality of images of a scene according to a process. The process includes obtaining a plurality of images of a scene, the images having unknown, random vertical and horizontal offsets relative to a center of the images, each of the images being a combination of signal and noise, the center of the images being defined such that averages of both the horizontal offsets and the vertical offsets are zero, forming a plurality of cross-correlations of the images, where each of the images is cross-correlated with each of the other images, while all auto-correlations and reflections are omitted, each of the cross-correlations being a cross-correlation of an associated pair of the images, for each of the cross-correlations, picking at least one peak as an estimate of a relative offset between its associated images, each of said relative offsets comprising a relative horizontal offset component and a relative vertical offset component, assigning a weight to each of the picked peaks.
The process further includes calculating a matrix H having rows and columns corresponding to the images, wherein for each of the cross-correlations, a value of H at a row and column corresponding to the pair of images associated with the cross-correlation is equal to a sum of all weights assigned to peaks picked in the cross-correlation, and wherein all other values within H are zero.
The process further includes calculating a matrix G that is equal to a sum of H and its transpose HT and calculating a matrix G′ wherein each diagonal value of G′ is equal to a sum of the values of the corresponding row of G, and all other values of G′ are zero.
The process further includes forming a matrix Q filled with ones having a dimensionality equal to a dimensionality of G, calculating a matrix L that is equal to Q+G′−G, and forming a 2-column matrix E′, wherein the rows of E′ correspond with the peaks that were picked for the cross-correlations, and wherein a first of the columns of E′ comprises the relative horizontal offset components of the picked peaks, and a second of the columns of E′ comprises the relative vertical offset components of the picked peaks.
The process further includes calculating a matrix Vhorz having rows and columns corresponding to the images, wherein for each of the cross-correlations, a value of Vhorz at a row and column corresponding to the pair of images associated with the cross-correlation is equal to a sum of all of the weights of the peaks that were picked for the cross-correlation multiplied by the value of the first column of E′ in the row that corresponds to the picked peak.
The process further includes calculating a matrix Vvert having rows and columns corresponding to the images, wherein for each of the cross-correlations, a value of Vvert at a row and column corresponding to the pair of images associated with the cross-correlation is equal to a sum of all of the weights of the peaks that were picked for the cross-correlation multiplied by the value of the second column of E′ in the row that corresponds to the picked peak.
The process further includes calculating matrices Zhorz=Vhorz−(Vhorz)T and Zvert=Vvert−(Vvert)T, and calculating a 2-column matrix R, wherein the rows of R correspond with the images of the scene, wherein each of the values of R in a first of the columns of R is equal to a sum of all of the values of Zhorz in the corresponding row thereof, and each of the values of R in a second of the columns of R is equal to a sum of all of the values of Zvert in the corresponding row thereof.
The process further includes solving for a 2-column matrix D′=L−1R, where the values in a first of the columns of D′ are estimates of the horizontal offsets of the images relative to the center of the images, and the values in a second of the columns of D′ are estimates of the vertical offsets of the images relative to the center of the images.
Finally, the process includes shifting each of the images horizontally and vertically according respectively to the first and second columns of D′, and adding the shifted images together to form the composite image.
The features and advantages described herein are not all-inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and not to limit the scope of the inventive subject matter.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The present disclosure is a method and apparatus for accurately and efficiently estimating the absolute positional shifts between a plurality of images of a feature, including images where the precise nature of the feature or features is not known a priori.
The disclosed method addresses this problem by using image cross correlations to estimate all of the relative shifts, ({right arrow over (δ)}p−{right arrow over (δ)}q), and extracts estimates of the absolute shifts, {right arrow over (δ)}n, by relating the relative estimates and absolute shifts in a linear system of equations and solving the system through a tailored robust regression technique. The regression is significantly expedited by tailoring the core matrix arithmetic to take maximum advantage of the sparsity and symmetry of the linear system of equations, as is explained in more detail below.
In some cases, applying pre-processing to the image input data can improve the performance of shift-estimating calculations, for example by improving the validity of the assumptions used in the calculations or tempering the impact of the noise. The choice of what pre-processing should be applied is highly context specific, and depends critically on the nature of the noise and of the signal. Some examples of pre-processing include various forms of smoothing, background subtraction, application of filters, regularization, spatial transformations, up-sampling/down-sampling, and cropping.
If the raw images do not have flat, zero-mean background structure, then measuring and subtracting the background structure can be a powerful step that can remove deterministic sources of spurious correlation features. The background subtraction may be performed using a high-pass spatial filter, mean subtraction, or by some other method that is customized to the specific nature of the background structure and the signal structure.
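As one illustration of such background subtraction, a brief sketch using a Gaussian high-pass filter is given below. The filter width sigma is a hypothetical, application-specific parameter, and this is only one of the many possible background-removal methods mentioned above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtract_background(image, sigma=20.0):
    """Remove slowly-varying background structure with a Gaussian high-pass filter.

    The low-pass (blurred) copy estimates the background; subtracting it leaves
    an approximately flat, zero-mean residual containing the signal and noise.
    """
    background = gaussian_filter(image.astype(float), sigma=sigma)
    return image - background
```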
Once any pre-processing has been applied, the next step is to estimate the relative shifts between Mp and Mq, {right arrow over (d)}pq=({right arrow over (δ)}p−{right arrow over (δ)}q), by computing 2D cross-correlations between all possible combinations of the measured images, and then “peak-picking” the locations of the correlation peaks, which correspond to estimates of the relative shifts. Each of the 2D cross-correlations provides an estimate of the relative shift between two images that is agnostic to the image content. In the absence of noise, each cross-correlation result would have a peak at a location corresponding to {right arrow over (d)}pq. Accordingly, estimating the position of the peak in the cross correlation (i.e. peak-finding) yields an estimate of the relative shift between the cross-correlated images. An example of such cross-correlation peaks applied to five images in a moderately high SNR regime is illustrated in
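A minimal sketch of computing the cross-correlations for all retained (p,q) pairs via FFTs is shown below. FFT-based correlation is circular, so in practice the images may need to be padded or windowed; the function and variable names are hypothetical.

```python
import numpy as np
from itertools import combinations

def cross_correlations(images):
    """Compute the 2D cross-correlation X_pq for every pair p < q via FFTs.

    Returns a dict mapping (p, q) to the correlation array, with the zero-shift
    term centered so that the peak location encodes the relative shift d_pq.
    """
    ffts = [np.fft.fft2(im) for im in images]
    out = {}
    for p, q in combinations(range(len(images)), 2):   # skip p == q and (q, p)
        corr = np.fft.ifft2(ffts[p] * np.conj(ffts[q])).real
        out[(p, q)] = np.fft.fftshift(corr)            # center the zero-shift bin
    return out
```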
In order to extract as much information as possible from the cross correlations, and to corroborate their different estimates, the correlate-and-peak-find process is performed for as many combinations of (p,q) as possible. Naïvely, for a total of N images there are N² such combinations. However, N of them are autocorrelations whose peaks are trivially located (p=q⇒{right arrow over (d)}pq={right arrow over (0)}), and (N²−N)/2
(about half) are reflection cross-correlations, and are therefore redundant (the correlation results from (p,q) and (q,p) are just reflections of each other). As is illustrated in
(p,q) pairs. In
While calculating the cross correlation of two images is a well-defined process, the task of extracting estimates of the locations of its peaks is much less prescribed. In the absence of noise, the cross correlation of two equivalent-but-shifted images is a translated version of the signal autocorrelation, which necessarily takes a maximum intensity at its center. Peak-finding exploits the presence of this peak to estimate the relative shift. In the presence of noise, the location of the cross correlation's peak is either perturbed or overpowered by a spurious noise peak in an unrelated location. The presence of these perturbed peaks and outlier peaks is the primary motivation behind the robust regression techniques of the present disclosure. The characteristics of each peak are dependent on its underlying signal structure, so implementations of peak-finding are application specific. Some example approaches include finding the maximum intensity cross correlation pixel, and finding the centroid of a region of generally high intensity. The peak-finding will benefit in some cases from pre-processing of the correlations, for example by applying masking, smoothing, and/or other filtering.
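The two example approaches can be sketched as follows. The helper names and the centroid threshold are hypothetical, and a real implementation would add the masking, smoothing, or other filtering mentioned above.

```python
import numpy as np

def peak_max_pixel(corr):
    """Nearest-pixel peak estimate: location of the maximum-intensity pixel,
    expressed as a (row, col) offset from the center of the correlation array."""
    center = np.array(corr.shape) // 2
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - center

def peak_centroid(corr, threshold_frac=0.8):
    """Sub-pixel peak estimate: intensity-weighted centroid of the region whose
    values exceed a fraction of the maximum (threshold_frac is a free parameter)."""
    center = np.array(corr.shape) // 2
    mask = corr >= threshold_frac * corr.max()
    rows, cols = np.nonzero(mask)
    weights = corr[rows, cols]
    centroid = np.array([np.average(rows, weights=weights),
                         np.average(cols, weights=weights)])
    return centroid - center
```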
In various embodiments, the relative shift estimates can be either nearest-pixel estimates, or can offer sub-pixel accuracy, whereby non-integer values are obtained for {right arrow over (δ)}n. Pixel-rounded estimates can be less ideal, because of the implicit rounding error, but sub-pixel relative shift estimates need not be inherently more accurate.
The process for calculating cross correlations and estimating peaks involves many independent operations (one for each (p,q)), and as such affords an opportunity for parallelized computing, e.g. through central processing unit (CPU) multithreading, or through exploiting other architectures like graphics processing units (GPUs).
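For instance, the per-pair work can be distributed across CPU processes with Python's standard library, as in the sketch below. It assumes a hypothetical correlate_and_pick(image_p, image_q) callable that returns the relative-shift estimate(s) for one pair, and that the images and the callable are picklable.

```python
from itertools import combinations
from concurrent.futures import ProcessPoolExecutor

def correlate_and_pick_parallel(images, correlate_and_pick, max_workers=None):
    """Run the independent correlate-and-peak-find jobs, one per (p, q) pair,
    across multiple CPU processes."""
    pairs = list(combinations(range(len(images)), 2))
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(correlate_and_pick,
                                (images[p] for p, q in pairs),
                                (images[q] for p, q in pairs)))
    return dict(zip(pairs, results))
```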
With reference to
At their core, robust regression techniques search for a consistent trend present in most of the estimates, and treat deviants from that global trend as outliers, and subsequently de-weight them. This de-weighting scheme removes the need to arbitrarily down-select to a single estimate when multiple estimates are similarly viable. An analogous 1D example is illustrated in
Generating multiple estimates of the same relative shift should be done sparingly though, since it will eventually obfuscate the true structure underlying the whole body of estimates by dilution. Equivalently, the multiple instantaneous estimates 303, 304 may create a more challenging initial condition for the regression approach to recover from. Thresholds for how similar two peaks must be within the same cross-correlation to justify retaining both of them, and corresponding limits on multiple-estimate prevalence, are, of course, context and implementation specific.
The result of calculating these cross correlations and performing peak finding is a total of K estimates of the relative shifts {right arrow over (d)}pq=({right arrow over (δ)}p−{right arrow over (δ)}q), where K will be greater than (N²−N)/2
if there are multiple estimates for any of the cross-correlation peaks. The next step is to use these relative shift estimates to determine estimates for the absolute shifts {right arrow over (δ)}n. Each {right arrow over (d)}pq estimate can be considered to be a measurement of a linear combination of the full list of (unknown) {right arrow over (δ)}n values. The underlying {right arrow over (δ)}n values are determined by solving the corresponding system of linear equations by applying a regression algorithm. According to the present disclosure, the regression algorithm is “tailored” so as to be optimized for efficient application to the problem described herein.
Due to the unknown structure of the signal, any definition of the “absolute” shift is necessarily arbitrary. As such, absolute shifts can only be determined up to an arbitrary, global reference point. If the resulting absolute signal placement matters for a specific application, a final shift may be estimated from the superimposed image, which will have much higher SNR due to the improved alignment. Thus, for all intents and purposes, {right arrow over (δ)}n is interchangeable with ({right arrow over (δ)}n−{right arrow over (c)}) for any constant reference {right arrow over (c)}.
However, due to this ambiguity, the linear system mentioned thus far does not have full rank. Accordingly, an additional constraint is required to render the linear system full rank, and thereby solvable. This amounts to selecting an arbitrary “zero” location within the scene from which the offsets {right arrow over (δ)}n will be measured. In embodiments, the additional constraint that is imposed is that the sum of the absolute shifts must be equal to zero: Σn{right arrow over (δ)}n={right arrow over (0)}, and must, therefore, also average to zero. Almost any other similar static constraint can be applied. However, requiring that the sum of the absolute shifts is zero is the least complicating choice.
Most regression algorithms are designed for application to a 1D list of independent variables. When these algorithms are applied to a 2D system, the 2D system is typically represented as the concatenation of the 1D horizontal component sub-system and the 1D vertical component subsystem. This corresponds to concatenating and zero-padding the numeric matrix representations appropriately to capture the simultaneous, independent component equations. The resulting linear system can be expressed as AD=E, where
where the multiple picked peaks produce 3 extra estimates. Each value of k corresponds with a cross correlation between images pk and qk, but due to the additional peaks that are picked for some of the cross-correlations, some values of k share the same values of pk and qk. The (K+1)th row of Aint establishes the additional constraint that arises from selection of an arbitrary “center” of the images from which all of the individual offsets {right arrow over (δ)}n are to be measured. In embodiments, the center is chosen such that the sum of all of the offsets of all of the images is zero.
Due to estimation error and the presence of outliers, this system of linear equations is overdetermined, in that there is no solution for D which satisfies all of the equations, and equivalently the naïve matrix solution D=A−1E is wholly invalid. To counter this common circumstance, a standard approach is to search instead for a “least squares” solution, i.e. determine the D that minimizes the sum of the squares of the fit residuals: Σi((AD)i−Ei)2. In standard practice, this is solved through the corresponding “normal equation,” ATAD=ATE. The solution D=(ATA)−1(ATE) is mathematically equivalent to finding the least squares solution. The main benefit of the normal equation is that ATA is not overdetermined, and has full rank so long as A has full rank, thus guaranteeing the existence of a unique solution. Moreover, the normal equation returns the problem to a matrix arithmetic problem that nearly all computing hardware and coding languages are well equipped to solve.
Outliers in the relative shift estimates forming E present a challenge for a least squares solution, because entries of E which are inconsistent with the majority of the estimates will have a much larger contribution to the sum Σi((AD)i−Ei)2. Regression techniques that are intentionally designed to handle these outliers are referred to as “robust”. A technique that is applied by many such robust regression implementations is to generate a weighted least squares solution instead of an ordinary least squares solution. Given a list of nonnegative weights wi corresponding to entries in E, the weighted least squares solution minimizes Σiwi·((AD)i−Ei)2, wherein the square of the residuals has a “weighted” contribution to the sum.
With reference to
Examples of weighted robust regression algorithms include:
Since these approaches are all directed to solving the (weighted) least squares problem, the aforementioned normal equation constitutes some or much of their operation. Because of this commonality, the method of the present disclosure implements improvements to these core computations, which can then be applied to implicitly improve the execution speed of the robust regression. In particular, the robust regression implementation of the present disclosure is tailored in the following ways:
According to the present disclosure, each of the relative shifts {right arrow over (d)}pq represents a pair of estimates, the horizontal and vertical components, which are intrinsically tied to each other. Due to that direct component relationship and the internal correspondence within the linear system AD=E, there is an inherent symmetry in the contents of the coefficients matrix A. As can be seen in
Because of this, the linear system can be represented by an equivalent matrix equation AintD′=E′, where D′ and E′ are N×2 and (K+1)×2 matrices, respectively. According to this approach, instead of concatenating the horizontal and vertical components into a long vector, as with D and E, they now form two separate columns of D′ and E′, where the rows of each represent the pairs of δnhorz and δnvert, or dkhorz and dkvert, respectively.
This represents a departure from standard linear system representations that may not be compatible with conventional robust regression algorithms.
Modifications of such conventional algorithms that may be necessary to restore compatibility can include changes to bookkeeping items, to the residual analysis, to the anomaly detection details, and/or to the weight generation.
Notably, the weights matrix W is also affected. Rather than independently determining the weights for the horizontal and vertical components of a relative shift estimate, this approach forces the weights of the respective horizontal and vertical components to be equal, i.e. the weights are applied on a per-estimate-vector basis, rather than on a per-estimate-component basis. This requirement is consistent with the nature of the specific problem of image registration, in that a peak that is “found” in a cross-correlation will either be a signal correlation, i.e. a “true” peak, or a noise correlation, i.e. a “false” peak, such that both of the horizontal and vertical components of the peak will be either true or false. This intrinsic set of weights is then captured in a (K+1)×(K+1) matrix Wint, which includes the estimate-vector weights wk along its diagonal, and is terminated with the constraint equation's weight.
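A sketch of assembling E′ and the associated pair bookkeeping might look as follows. The function name and the tuple layout of the peak estimates are assumptions for illustration; the per-estimate-vector weights wk and the constraint weight wK+1 are supplied separately to the constructions sketched further below.

```python
import numpy as np

def build_intrinsic_inputs(peak_estimates):
    """Assemble the (K+1)x2 matrix E' and the (p, q) pair bookkeeping.

    peak_estimates : list of (p, q, d_horz, d_vert) tuples, one per picked peak.
    The final, all-zero row of E' is the right-hand side of the zero-sum
    constraint on the absolute shifts.
    """
    E_prime = np.array([[dh, dv] for (_, _, dh, dv) in peak_estimates] + [[0.0, 0.0]])
    pairs = [(p, q) for (p, q, _, _) in peak_estimates]
    return E_prime, pairs
```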
According to the present method, the normal equation is equivalently transformed to yield a solution for the new system: D′=(AintTWintAint)−1(AintTWintE′). This equation is illustrated in
Analytically Condensing the Weighted Normal Equation
The approach described above for exploiting the dimensional symmetry of the problem reduces the amount of computer memory that is required to store A by a factor of four, and enables the direct use of the intrinsic Aint. Nevertheless, Aint can still be very tall (dimensions (N²)×N). As a result, matrix arithmetic operations necessitated by the normal equation can become slow at even moderately large N. Because Aint is rather sparse (only 1/N of the entries are nonzero) and has a great deal of structure to it, the present method is able to further increase the computational speed and decrease the computational storage demands by performing several of the intermediate operations analytically. In particular, according to embodiments of the present disclosure, the coefficients matrix AintTWintAint and the inhomogeneous matrix AintTWintE′ are determined directly by circumventing conventional computational approaches. According to this approach, the inputs are the matrix E′ and the list of weights wk (1≤k≤K denotes estimate weights, and wK+1 is the constraint weight).
Analytic Shortcut to Determining the Coefficients Matrix
The normal equation's coefficients matrix AintTWintAint is traditionally calculated through a plurality of matrix transposition and multiplication operations. However, according to embodiments of the present disclosure, the coefficients matrix is calculated analytically, thereby producing the same resulting matrix while considerably reducing the demands placed on the available computing resources, and thereby reducing the execution time. According to the present disclosure, AintTWintAint=L is computed according to the following steps:
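The individual steps referred to above are the H, G, G′, and Q constructions enumerated in the first general aspect. The sketch below mirrors that description, under the assumption of a unit constraint weight (made adjustable here through an optional parameter); the function and variable names are hypothetical.

```python
import numpy as np

def coefficients_matrix_L(pairs, weights, n_images, constraint_weight=1.0):
    """Analytic construction of L = A_int^T W_int A_int from the H, G, G', Q
    description (sketch; the constraint row is all ones, scaled by constraint_weight).

    pairs   : list of (p, q) image-index pairs, one per picked peak
    weights : list of the corresponding per-estimate-vector weights w_k
    """
    H = np.zeros((n_images, n_images))
    for (p, q), w in zip(pairs, weights):
        H[p, q] += w                      # sum of peak weights for each (p, q)
    G = H + H.T
    G_diag = np.diag(G.sum(axis=1))       # G': row sums of G placed on the diagonal
    Q = constraint_weight * np.ones((n_images, n_images))
    return Q + G_diag - G                 # L = Q + G' - G
```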
According to the present disclosure, the normal equation's inhomogeneous matrix AintTWintE′ is calculated analytically in a similar manner to produce the same result with similar reductions in computing resource demands and execution time. Specifically, the inhomogeneous matrix AintTWintE′=R is calculated according to the following steps:
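Similarly, a sketch of the Vhorz/Vvert, Z, and R construction described in the first general aspect is shown below. Both components are carried in one three-dimensional array, which is an implementation convenience rather than part of the disclosure; the constraint row of E′ is zero, so it contributes nothing here.

```python
import numpy as np

def inhomogeneous_matrix_R(pairs, weights, E_prime, n_images):
    """Analytic construction of R = A_int^T W_int E'.

    E_prime : (K, 2) or (K+1, 2) array whose first K rows hold (d_horz, d_vert)
    """
    V = np.zeros((n_images, n_images, 2))
    for k, ((p, q), w) in enumerate(zip(pairs, weights)):
        V[p, q, :] += w * E_prime[k, :]     # weighted relative-shift components
    Z = V - V.transpose(1, 0, 2)            # Z = V - V^T for each component
    return Z.sum(axis=1)                     # (N, 2): row sums give R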
The result of the above steps is a consistent, full rank, linear system solution of the form D′=L−1R, as illustrated in
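A short sketch of solving for D′ and forming the composite image follows. It uses np.linalg.solve rather than an explicit inverse, and a nearest-pixel np.roll shift; these are implementation choices, with sub-pixel shifts requiring interpolation instead, and the sign convention of the corrective shift depends on the correlation and peak-picking conventions used upstream.

```python
import numpy as np

def register_and_stack(images, L, R):
    """Solve the condensed normal equation for D' and form the composite image."""
    D_prime = np.linalg.solve(L, R)          # preferable to forming L^-1 explicitly
    composite = np.zeros_like(images[0], dtype=float)
    for image, (d_horz, d_vert) in zip(images, D_prime):
        # Reverse-shift each image so its features land at their average location
        # (rows are vertical, columns are horizontal).
        shifted = np.roll(image, (-int(round(d_vert)), -int(round(d_horz))), axis=(0, 1))
        composite += shifted
    return D_prime, composite
```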
Further Consideration of Residuals and Weights
Alongside the algorithm tailoring described above for solving variants of the linear system, common mechanisms in the periphery can play similarly critical roles in the overall performance of the robust algorithm's behavior. Two of the more important and global features are the handling of fit residuals and the generation of weight values.
According to the present disclosure, calculation of the fit residuals goes beyond merely evaluating E−AD, because the matrix A is no longer explicitly calculated. Instead, the residual for each estimate k is calculated as (E′)k−((D′)pk−(D′)qk), using only the row of E′ corresponding to that estimate and the rows of D′ corresponding to its associated pair of images pk and qk.
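A sketch of this residual calculation, reusing the pair bookkeeping from the earlier sketches, might be:

```python
import numpy as np

def fit_residuals(E_prime, D_prime, pairs):
    """Residual for each estimate k: (E')_k - ((D')_{p_k} - (D')_{q_k}),
    computed without ever forming the matrix A explicitly. Returns a (K, 2)
    array of horizontal and vertical residual components."""
    p_idx = np.array([p for p, q in pairs])
    q_idx = np.array([q for p, q in pairs])
    predicted = D_prime[p_idx, :] - D_prime[q_idx, :]
    return E_prime[:len(pairs), :] - predicted
```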
In addition to tailoring the calculation of the residuals, embodiments of the present disclosure also tailor their subsequent use. Since most robust regression algorithms are constructed to process a list of scalar residual measurements, one direct way to re-establish compatibility with the surrounding algorithm is to convert the coupled horizontal and vertical residuals to a scalar residual representation. This can be done, for example, through a root-sum-square (RSS) operation, or by taking the maximum or average of their individual magnitudes. However, this must be done with care, since many algorithms assume some form of distribution on the residuals to enable anomaly detection techniques, so the statistical analysis may need tailoring to compensate.
Similarly, the robust regression's implementation may try to determine weights for each estimate-component, but as previously mentioned, this must be forced to generate weights for each estimate-vector. In embodiments, a scalar representation of the 2D raw residuals is supplied to algorithms that estimate weights using the residuals. In various embodiments, algorithms that estimate the weights through some means of omission of estimates are forced to omit the corresponding pairs of horizontal and vertical component estimates in tandem. The weight determination process is artificially prevented from giving the constraint equation zero weight. Any constraint weight wK+1>0 will deliver exactly the same weighted normal equation solution, so this demand is minimal.
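Tying these pieces together, one possible iteratively re-weighted least squares loop is sketched below. It reuses the helper functions from the earlier sketches; the Tukey bisquare weight function and the MAD-based scale estimate are illustrative choices, not requirements of the disclosure, and the constraint weight is simply held fixed at a positive value.

```python
import numpy as np

def irls_register(pairs, E_prime, n_images, n_iter=20, c=4.685, eps=1e-12):
    """Sketch of an iteratively re-weighted least squares loop with one weight
    per estimate vector, derived from the scalar (root-sum-square) residual."""
    weights = np.ones(len(pairs))
    for _ in range(n_iter):
        L = coefficients_matrix_L(pairs, weights, n_images)       # constraint weight = 1
        R = inhomogeneous_matrix_R(pairs, weights, E_prime, n_images)
        D_prime = np.linalg.solve(L, R)
        res = fit_residuals(E_prime, D_prime, pairs)
        r = np.sqrt((res ** 2).sum(axis=1))                        # scalar residual per estimate vector
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + eps # robust scale via MAD
        u = r / (c * scale)
        new_weights = np.where(u < 1.0, (1.0 - u ** 2) ** 2, 0.0)  # Tukey bisquare weights
        if np.allclose(new_weights, weights, atol=1e-6):
            break                                                  # fit has stabilized
        weights = new_weights
    return D_prime, weights
```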
Following is an example calculation based on a tailored “iteratively re-weighted least squares fit” algorithm applied to the cross-correlation estimates presented in
The next step is to calculate G=H+HT, as illustrated in
Calculation of Vhorz is illustrated in
Calculation of the elements of R is illustrated in
To summarize, with reference to
The method continues by forming a plurality of cross-correlations of the images 602, where each of the images is cross-correlated with each of the other images, while all auto-correlations and reflections are omitted, each of the cross-correlations being a cross-correlation of an associated pair of the images;
For each of the cross-correlations, at least one peak is picked 604 as an estimate of a relative offset between its associated images, each of said relative offsets comprising a relative horizontal offset component and a relative vertical offset component. Weights are assigned 606 to each of the picked peaks.
A matrix H is then calculated 608 having rows and columns corresponding to the images, wherein for each of the cross-correlations, a value of H at a row and column corresponding to the pair of images associated with the cross-correlation is equal to a sum of all weights assigned to peaks picked in the cross-correlation, and wherein all other values within H are zero, and a matrix G is calculated 610 that is equal to a sum of H and its transpose HT;
A matrix G′ is calculated 612, wherein each diagonal value of G′ is equal to a sum of the values of the corresponding row of G, and all other values of G′ are zero. A matrix Q is then formed that is filled with ones and has a dimensionality equal to a dimensionality of G, and a matrix L is calculated 614 that is equal to Q+G′−G.
A 2-column matrix E′ is then formed 616, wherein the rows of E′ correspond with the peaks that were picked for the cross-correlations, and wherein the first column of E′ comprises the relative horizontal offset components of the picked peaks, and the second column of E′ comprises the relative vertical offset components of the picked peaks;
Matrices Vhorz and Vvert are calculated 618 having rows and columns corresponding to the images, wherein for each of the cross-correlations, values of Vhorz and Vvert at rows and columns corresponding to the pair of images associated with the cross-correlation are equal to sums of all of the weights of the peaks that were picked for the cross-correlation multiplied by the value of the first and second columns of E′ respectively in the row that corresponds to the picked peak. Matrices Zhorz=Vhorz−(Vhorz)T and Zvert=Vvert−(Vvert)T are then calculated 620.
A 2-column matrix R is calculated 622, wherein the rows of R correspond with the images of the scene, wherein each of the values of the first column of R is equal to a sum of all of the values of Zhorz in the corresponding row thereof, and each of the values of the second of the columns of R is equal to a sum of all of the values of Zvert in the corresponding row thereof.
A 2-column matrix D′=L−1R is then solved for 624, where the values in the first column of D′ are estimates of the horizontal offsets of the images relative to the center of the images, and the values in the second column of D′ are estimates of the vertical offsets of the images relative to the center of the images.
Finally, each of the images is shifted horizontally and vertically according respectively to the first and second columns of D′ 626, and the shifted images are added together 628 to form the composite image.
It should be noted that, in embodiments, steps 616-622 will either be executed after steps 608-614, as shown in
The foregoing description of the embodiments of the disclosure has been presented for the purposes of illustration and description. Each and every page of this submission, and all contents thereon, however characterized, identified, or numbered, is considered a substantive part of this application for all purposes, irrespective of form or placement within the application. This specification is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of this disclosure.
Although the present application is shown in a limited number of forms, the scope of the disclosure is not limited to just these forms, but is amenable to various changes and modifications. The disclosure presented herein does not explicitly disclose all possible combinations of features that fall within the scope of the disclosure. The features disclosed herein for the various embodiments can generally be interchanged and combined into any combinations that are not self-contradictory without departing from the scope of the disclosure. In particular, the limitations presented in dependent claims below can be combined with their corresponding independent claims in any number and in any order without departing from the scope of this disclosure, unless the dependent claims are logically incompatible with each other.