A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Automatically obtaining precision three-dimensional information relative to a surface or object is vital to many industries and processes. For example, in the electronics assembly industry, precision three-dimensional information relative to an electrical component on a circuit board can be used to determine whether the component is placed properly. Further, three-dimensional information is useful in the inspection of solder paste deposits on a circuit board prior to component mounting, in order to ensure that a proper amount of solder paste is deposited in the proper location on the circuit board. Further still, three-dimensional information is useful in the inspection of semiconductor wafers and flat panel displays. Finally, as the precision of such three-dimensional information improves, it is becoming useful for a variety of additional industries and applications. However, as the precision of three-dimensional information acquisition improves, it becomes increasingly important to compensate for the various causes of minor system disturbances. Calibration of systems for obtaining three-dimensional information is thus becoming increasingly important.
The calibration process for a three-dimensional structured light measurement sensor should compensate the projectors and cameras for the usual optical non-idealities, including lens geometric distortion, obliquity/keystone effects, rotation errors, and line of sight errors. Some of these non-idealities will not change appreciably over time and therefore will not affect measurement accuracy. However, following a sensor calibration, other non-idealities may drift appreciably over time periods of minutes to days and will affect measurement performance. For example, line of sight may change appreciably due to thermal expansion caused by environmental changes.
Sophisticated methods exist to accurately calibrate three-dimensional sensors, but they typically require precision equipment such as motion systems and calibration artifacts. It is also relatively time-consuming to acquire the necessary images of calibration artifacts and analyze them.
A method of calibrating a three-dimensional measurement system having a plurality of cameras and at least one projector is provided. The method includes performing a full calibration for each camera/projector pair where the full calibration generates at least two sets of correction matrices. Subsequently, an updated calibration is performed for each camera/projector pair. The updated calibration changes less than all of the sets of correction matrices.
In accordance with the various embodiments described herein, a precise, full calibration process is provided that compensates for the usual non-idealities, facilitates fast run-time processing, and generates a set of reference data that can subsequently be applied to measure calibration drift. This full calibration, while time-consuming, need only be performed once, or at least infrequently, since it compensates for non-idealities that will not change appreciably over time. Also in accordance with embodiments described herein, a calibration update process is provided that measures the amount of calibration drift and, if required, updates only those portions of the full calibration that are expected to drift over time. The calibration update process is relatively quick to perform and requires only a relatively simple calibration artifact.
For each camera and projector pair, the full calibration process generates two sets of corrections for each of the X, Y, and Z directions that are then applied during the measurement process. The first set of corrections includes calibration corrections that may drift over shorter time intervals, for example, on the order of minutes to a couple of days. The second set of corrections includes calibration corrections that may drift over longer time intervals, for example, on the order of several months, or not at all. The first set of corrections is updated by the calibration update process, while the second set of corrections is updated only during subsequent full calibrations.
Method 100 begins at block 102 where a camera calibration is performed. Calibrating the cameras 118, 120, 122, and 124 includes obtaining images of a well-characterized calibration target 114 (shown in
During the camera calibration process, described with respect to block 102, the images are analyzed by controller 126 or other suitable processing logic to establish the geometrical transformation between pixel space and physical space. During this analysis, the relationship between the X, Y coordinates of the calibration target 114, the camera pixel coordinates, α and β, and the target location, Z, can be described by Equation 1:
(X,Y)=f1(α,β,Z) Equation 1
The X, Y coordinates are accurately known from the well-characterized calibration target 114 (e.g., a lithographically patterned checkerboard or diamond pattern), where the Z location of the calibration target is known accurately, for example, by using a precision translation stage with encoders, and where f1(·) is an arbitrary trivariate function.
During the projector calibration process, described with respect to block 104, a series of patterns is projected onto a target and the camera acquires images of the patterns through the Z range. For example, three sinusoidal patterns with phases of 0, 120, and 240 degrees may be projected onto the target at each Z location. See
The relationship between the phase, φ, the camera pixel coordinates, α and β, and the target location, Z, can be described by Equation 2:
φ=f2(α,β,Z) Equation 2
The Z location of the calibration target is known accurately, for example, by using a precision translation stage with encoders, and f2(·) is an arbitrary trivariate function.
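By way of a non-limiting illustration, the wrapped phase φ at each pixel can be recovered from the three phase-shifted patterns described above using the standard three-step phase-shift formula. The following minimal sketch assumes NumPy and an intensity model I_k = A + B·cos(φ + θ_k) with θ_k = 0, 120, and 240 degrees; the function and array names are illustrative and are not part of the disclosure.

```python
# Minimal sketch, assuming NumPy and the three-step phase-shift model
# I_k = A + B*cos(phi + theta_k) with theta_k = 0, 120, 240 degrees.
# Names are illustrative, not from the disclosure.
import numpy as np

def wrapped_phase(i0: np.ndarray, i120: np.ndarray, i240: np.ndarray) -> np.ndarray:
    """Per-pixel wrapped phase in (-pi, pi] from three phase-shifted images."""
    # Standard three-step formula:
    #   tan(phi) = sqrt(3) * (I240 - I120) / (2*I0 - I120 - I240)
    return np.arctan2(np.sqrt(3.0) * (i240 - i120),
                      2.0 * i0 - i120 - i240)
```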
During the run-time process of reconstructing the X,Y,Z coordinates for each camera pixel, Equations 3 and 4:
(X,Y)=f1(α,β,Z) Equation 3
φ=f2(α,β,Z) Equation 4
are intractable for reconstruction because Z is an independent variable, yet it is also an unknown. Instead, functions of the form of Equation 5 must be provided:
(X,Y,Z)=f3(α,β,φ) Equation 5
An important insight that facilitates a practical reconstruction is that for any pixel of a given calibration image, one can associate the quantities (X, Y, Z, α, β, φ) without any assumptions about the functional forms. From a suitably-large set of such sextuplets, one can re-assign the independent and dependent variables by regression, with trivariate polynomials being a suitable fitting function. Thus, one computes the height Z by:
Z=(wrap height)*φc/2π Equation 6
where wrap height is the nominal scaling from phase φc to height Z, the corrected phase is:
φc=P(α,β,φ) Equation 7
and the corrected lateral positions X and Y by:
X=Q1(α,β,φ) Equation 8
Y=Q2(α,β,φ) Equation 9
P(·), Q1(·), and Q2(·) are trivariate polynomials found by the regressions. The regressions may be done once at calibration time, with the results stored in memory 130 of controller 126 or any other suitable memory for later application at runtime.
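As a non-limiting sketch of how such regressions might be implemented, the following assumes second-order trivariate polynomials fit by ordinary least squares with NumPy; the disclosure fixes neither the polynomial degree nor the fitting method, and all names are illustrative. The corrected-phase target for P(·) follows from Equation 6 as φc = 2πZ/(wrap height). The same machinery applies to fitting f1(·) and f2(·) during camera and projector calibration.

```python
# Hedged sketch of the calibration-time regression over sextuplets
# (X, Y, Z, alpha, beta, phi). Degree-2 polynomials are an assumption.
import numpy as np

def monomial_exponents(degree):
    """All exponent triples (i, j, k) with i + j + k <= degree."""
    return [(i, j, k)
            for i in range(degree + 1)
            for j in range(degree + 1 - i)
            for k in range(degree + 1 - i - j)]

def design_matrix(alpha, beta, phi, degree=2):
    """Columns are the monomials alpha**i * beta**j * phi**k."""
    a, b, p = (np.ravel(np.asarray(v, dtype=float)) for v in (alpha, beta, phi))
    return np.column_stack([a**i * b**j * p**k
                            for i, j, k in monomial_exponents(degree)])

def fit_trivariate(alpha, beta, phi, x, y, z, wrap_height, degree=2):
    """Least-squares coefficients for P, Q1, Q2 from calibration sextuplets."""
    m = design_matrix(alpha, beta, phi, degree)
    phi_c = 2.0 * np.pi * np.ravel(z) / wrap_height  # target phase, per Equation 6
    coef_p, *_ = np.linalg.lstsq(m, phi_c, rcond=None)
    coef_q1, *_ = np.linalg.lstsq(m, np.ravel(x), rcond=None)
    coef_q2, *_ = np.linalg.lstsq(m, np.ravel(y), rcond=None)
    return coef_p, coef_q1, coef_q2
```

The fitted coefficients would then be stored, as the text notes, in memory 130 of controller 126 for application at runtime.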
There are several advantages of this approach. One advantage is that telecentricity does not cause singularities or require special cases in the treatment, as it does for processes that rely on determination of an effective pupil height. Instead, the associated correction coefficients are very small or zero. Another advantage is that pupil distortions are automatically modeled and accounted for. Yet another advantage is that the rectification may be applied using only multiplications and additions, which facilitates high-speed processing; no divisions or iterations are required.
To facilitate the calibration update process, the full calibration process generates two sets of corrections for each of the X, Y, and Z directions and a set of reference data. When sinusoidal phase-shifted patterns are used for reconstruction, the Z position is directly proportional to the corrected phase. In one embodiment, the first set of phase corrections are offset corrections relative to a reference plane at nominal best focus (Z=0), and the second set of phase corrections are scaling factors that account for fringe direction and changes in triangulation angle across the field of view. The first set of X corrections are offset corrections at nominal best focus (Z=0), and the second set of X corrections characterize the X shifts through the working volume. Similarly, the first set of Y corrections are offset corrections at nominal best focus (Z=0), and the second set of Y corrections characterize the Y shifts through the working volume. Additionally, the full calibration process generates several sets of reference data that are used during the calibration update process.
More specifically, the corrected phase, φc, is calculated by Equation 10:
φc=P(α,β,φ)=dφ0+φ·dφc/dφ Equation 10
φc, dφ0, φ, and dφc/dφ are all two-dimensional matrices that match the size of the image. Matching the size of the image means that the matrix has the same number of elements as pixels in the image. Thus, in the example shown above, there are four different matrices, each having the same number of elements as pixels in the image. The first phase correction matrix dφ0 is the phase offset at Z=0 relative to a flat reference surface for each pixel coordinate (α,β) in the image. Each element of the φ matrix is the nominal phase at that pixel location generated by the phase shift method. The second phase correction matrix dφc/dφ is a unitless matrix that compensates the wrap height of each pixel for fringe direction and triangulation angle changes within the field of view. φ·dφc/dφ is the element-by-element product of the two matrices, which is also called the Hadamard product.
The height at each pixel is then given by Equation 11.
Z=(wrap height)*φc/2π Equation 11
The corrected X location for each pixel is calculated by Equation 12:
X=Q1(α,β,φ)=Xnom+dX0+φ·dX/dφ Equation 12
where X, Xnom, dX0, φ, and dX/dφ are all two-dimensional matrices that match the size of the image. Each element of the Xnom matrix is the nominal X image coordinate based on pixel coordinate and nominal resolution. The first X correction matrix dX0 is the X offset for each pixel coordinate (α,β) in the image that corrects for geometric distortions at Z=0. The second X correction matrix dX/dφ characterizes lateral shifts as a function of nominal phase. φ·dX/dφ is the element-by-element product of the two matrices.
The corrected Y coordinate for each pixel is calculated by a process similar to that used for the X corrections, as set forth in Equation 13:
Y=Q2(α,β,φ)=Ynom+dY0+φ·dY/dφ Equation 13
where Y, Ynom, dY0, φ, and dY/dφ are all two-dimensional matrices that match the size of the image. Each element of the Ynom matrix is the nominal Y image coordinate based on pixel coordinate and nominal resolution. The first Y correction matrix dY0 is the Y offset for each pixel coordinate (α,β) in the image that corrects for geometric distortions at Z=0. The second Y correction matrix dY/dφ characterizes lateral shifts as a function of nominal phase. φ·dY/dφ is the element-by-element product of the two matrices.
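A minimal run-time sketch of Equations 10 through 13, assuming NumPy and illustrative matrix names, shows how the rectification reduces to element-wise multiplications and additions as noted above (the division by 2π folds into a precomputed constant):

```python
# Hedged sketch of run-time reconstruction per Equations 10-13.
# All matrix names mirror the text but are illustrative.
import numpy as np

def reconstruct(phi, dphi0, dphic_dphi, x_nom, dx0, dx_dphi,
                y_nom, dy0, dy_dphi, wrap_height):
    """All inputs are 2-D matrices matching the image size,
    except wrap_height, which is a scalar."""
    z_scale = wrap_height / (2.0 * np.pi)   # precomputed constant
    phi_c = dphi0 + phi * dphic_dphi        # Equation 10 (Hadamard product)
    z = z_scale * phi_c                     # Equation 11
    x = x_nom + dx0 + phi * dx_dphi         # Equation 12
    y = y_nom + dy0 + phi * dy_dphi         # Equation 13
    return x, y, z
```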
The full calibration process will be further explained with respect to
To calibrate the projector 128, the camera calibration target 114 may be replaced by the projector calibration target as shown in
Reference data for the calibration update process is then generated by projecting a reference pattern onto the projector calibration target located at Z=0 as shown in
In another embodiment, the same target is used for both the camera calibration and the projector calibration. For camera calibration, the target may be illuminated by diffuse back lighting to view the camera calibration pattern. The surface of this target may also be slightly roughened so that it diffusely scatters light when it is illuminated from the front by the projector.
After the full calibration process is complete, the calibration update process may be performed relatively quickly whenever required by adjusting the relative position of a calibration update target at Z=0, the nominal best focus of the sensor. Update correction matrices are then measured and compared against the reference data, and the first sets of correction matrices are updated according to Equations 14 through 16:
dX0=dX0,full+(dX0,upd−dX0,ref) Equation 14
dY0=dY0,full+(dY0,upd−dY0,ref) Equation 15
dφ0=dφ0,upd Equation 16
where dX0,full and dY0,full are the first sets of X and Y correction matrices, respectively, generated during the full calibration process. The difference matrices (dX0,upd−dX0,ref) and (dY0,upd−dY0,ref) are a measure of the calibration drift. Even if there is some distortion of the reference pattern by the projection optics, the same distortion is present during both the full calibration and calibration update processes and is canceled by taking the difference of the reference and update correction matrices. If the residuals of these difference matrices are sufficiently low, then the first sets of correction matrices remain those generated during the full calibration process:
dX0=dX0,full
dY0=dY0,full
dφ0=dφ0,full
where dφ0,full is the first phase correction matrix generated during the full calibration process.
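A hedged sketch of this update logic, assuming a simple maximum-absolute-value residual test (the disclosure specifies neither the residual metric nor the threshold), might proceed as follows; all names are illustrative:

```python
# Hypothetical calibration update per Equations 14-16. The residual
# metric and tolerance are assumptions, not from the disclosure.
import numpy as np

def update_corrections(full, ref, upd, tol):
    """full/ref/upd are dicts of 2-D matrices keyed 'dx0', 'dy0', 'dphi0'."""
    drift_x = upd["dx0"] - ref["dx0"]   # (dX0,upd - dX0,ref)
    drift_y = upd["dy0"] - ref["dy0"]   # (dY0,upd - dY0,ref)
    residual = max(np.max(np.abs(drift_x)), np.max(np.abs(drift_y)))
    if residual <= tol:
        # Drift negligible: keep the first sets from the full calibration.
        return full["dx0"], full["dy0"], full["dphi0"]
    # Equations 14-16: fold the measured drift into the first correction sets.
    return (full["dx0"] + drift_x,
            full["dy0"] + drift_y,
            upd["dphi0"])
```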
A full calibration process has been disclosed that provides first and second sets of calibration corrections. A simple calibration update process has been disclosed that updates the first sets of corrections. In a more general sense, the calibration update process updates only a portion of the full calibration. Although specific patterns have been disclosed for camera, projector, reference, and update calibrations, the invention is not limited to those specific patterns.
Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
The present application is based on and claims the benefit of U.S. Provisional Patent Application Ser. No. 62/095,329, filed Dec. 22, 2014, the content of which is hereby incorporated by reference in its entirety.