The ability to replicate the exterior surface of an article accurately in three-dimensional space is becoming increasingly useful in a wide variety of fields. Industrial and commercial applications include reverse engineering, part inspection and quality control, and providing digital data suitable for further processing in applications such as computer aided design and automated manufacturing. Educational and cultural applications include the reproduction of three-dimensional works of art, museum artifacts and historical objects, facilitating detailed study of valuable and often fragile objects without the need to physically handle them. Medical applications for full and partial scanning of the human body continue to expand, as do commercial applications that provide high-resolution 3D representations of products for internet retail catalogs.
In general, three-dimensional non-contact scanning involves projecting radiant energy, for example laser light or projected white light structured in patterns, onto the exterior surface of an object, and then using a CCD array, CMOS array, or other suitable sensing device to detect radiant energy reflected by the exterior surface. The energy source and energy detector typically are fixed relative to each other and spaced apart by a known distance to facilitate locating the point of reflection by triangulation. In one approach known as laser line scanning, a planar sheet of laser energy is projected onto the object's exterior surface as a line. The object or the scanner can be moved to sweep the line relative to the surface to project the energy over a defined surface area. In another approach known as white light projection or referred to more broadly as structured light, a light pattern (typically patterned white light stripes) is projected onto the object to define a surface area without requiring relative movement of the object and scanner.
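The triangulation described above can be illustrated with a short sketch. The geometry (a known baseline plus two observation angles) is the standard textbook formulation, and the numbers are illustrative rather than drawn from any particular scanner:

```python
import math

def triangulate(b, alpha, beta):
    """Return the perpendicular distance from the baseline to the observed
    point, given baseline length b and the angles alpha (at the source) and
    beta (at the detector), both measured from the baseline.

    Law of sines: range from detector = b * sin(alpha) / sin(alpha + beta);
    perpendicular distance = that range * sin(beta)."""
    return b * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Illustrative numbers: 100 mm baseline, equilateral viewing geometry.
d = triangulate(b=100.0, alpha=math.radians(60), beta=math.radians(60))
print(round(d, 3))  # 86.603
```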
Three-dimensional non-contact scanning systems obtain measurements of objects, such as manufactured components, at the micron scale. One example of such a three-dimensional non-contact scanning system is sold under the trade designation CyberGage® 360 by LaserDesign Inc., a business unit of CyberOptics Corp. of Golden Valley, Minn. It is desirable for these and other scanning systems to provide measurement stability. However, it is currently difficult to create a three-dimensional, non-contact scanning system that consistently generates accurate measurements while coping with frequent imaging use, aging components, and the many challenges that arise from imaging at such fine granularity. Scanners, and components thereof such as cameras and projectors, often experience mechanical drift with respect to their factory settings. Accuracy can be significantly impacted by the effects of temperature and age on both the cameras and projectors. For example, temperature can affect magnification of the camera, which may negatively impact the geometrical accuracy of measurements. These and other sensor opto-mechanical drifts ultimately permeate through the scanning system and impact imaging performance. Further, the effects of mechanical drift are exacerbated in systems that use multiple sensors.
A three-dimensional non-contact scanning system is provided. The system includes a stage and at least one scanner configured to scan an object on the stage. A motion control system is configured to generate relative motion between the at least one scanner and the stage. A controller is coupled to the at least one scanner and the motion control system. The controller is configured to perform a field calibration where an artifact having features with known positional relationships is scanned by the at least one scanner in a plurality of different orientations to generate sensed measurement data corresponding to the features. Deviations between the sensed measurement data and the known positional relationships are determined. Based on the determined deviations, a coordinate transform is calculated for each of the at least one scanner where the coordinate transform reduces the determined deviations.
Embodiments of the present invention generally perform a coordinate transform to reduce errors caused by mechanical drift and other measurement inaccuracies. In features of the present invention where multiple scanners are used (e.g. scanners 102(a) and 102(b)), a coordinate transform maps each of the scanner coordinate systems to a world coordinate system. More specifically, but not by limitation, a calibration artifact is used to measure the effects of sensor opto-mechanical drift. Differences between each scanner's reported measurements and known information about the calibration artifact can be used to generate a coordinate transformation for each scanner that reduces the differences.
Particular embodiments provided herein calibrate scanning system 100 by use of field transform logic 122, which generally maps data from each scanner coordinate system to a coordinate system that is tied to rotary stage 110. Specifically, measurements of a calibration artifact, placed on the rotary stage, are compared to the accurately known geometry of that artifact. One particular system that uses field transform logic 122 also uses one or more ball bars to calibrate axis orthogonality of rotary stage 110 and to generate correction results. For instance, a measuring volume can be defined and one or more ball bars (with accurately known geometry) can be positioned in the defined volumetric space. When the scanning system does not experience scale errors or drift, and when the system axes are orthogonal, the ball bar lengths are reported correctly.
Ball bar 130(a) is illustratively shown as being positioned radially near the top edge of measurement volume 136. Further, ball bar 130(b) is positioned radially near the bottom edge of measurement volume 136. In addition, ball bar 130(c) is shown as being positioned vertically near a vertical edge of the cylinder that defines measurement volume 136. During field calibration, the user may use a single ball bar 130 placed sequentially at the several positions (a, b, c) or may use three ball bars 130(a, b, c) simultaneously. Note that the ball bars do not need to be precisely positioned relative to rotary stage 110. Accordingly, a user may place the calibration artifact in the sensing volume at an arbitrary position and the system will sweep the calibration artifact through most, if not all, of the sensing volume for the various scans. This means that the calibration artifact need not be placed in a pre-determined position or orientation on the stage for effective calibration.
In operation of system 100, in one embodiment, a first scan is performed and first measurement data 120 is generated for each of the ball bars 130 and their corresponding angular positions on rotary stage 110. By measuring the ball bars 130 at several different stage 110 positions, it is possible to collect data from much of the measurement volume 136. If the scanner(s) 102 have been perturbed from their original factory calibrated state (e.g. errors in scale or axes orthogonality) then several anomalies may be found in the measurement data 120; for instance the ball bar 130 lengths may be incorrect or seem to vary as the stage 110 rotates, the individual balls may seem to orbit rotary axis 126 in an ellipse, the balls may seem to orbit an axis which is displaced from the rotary stage axis, or the balls may seem to wobble in their orbit around axis 126. By noting these errors, data processor 118 may calculate a spatial mapping (such as a projective transform) from scanner 102 measurement space to a corrected world coordinate system.
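One of the anomaly checks described above, the apparent variation in a ball's orbit radius as the stage rotates, can be sketched as follows. The data and the injected scale error are synthetic, for illustration only:

```python
import numpy as np

# A ball center measured at many stage angles should keep a constant radius
# about the rotary axis (Y here). A large spread, or a systematic ellipse,
# flags drift. Synthetic orbit: radius 80 mm, fixed height, 12 stage angles.
angles = np.radians(np.arange(0, 360, 30))
r_true = 80.0
centers = np.stack([r_true * np.cos(angles),
                    np.full_like(angles, 12.5),   # height stays fixed
                    r_true * np.sin(angles)], axis=1)
centers[:, 0] *= 1.004                            # injected x-scale drift

radii = np.hypot(centers[:, 0], centers[:, 2])    # distance from the Y axis
print(radii.max() - radii.min() > 0.1)            # True: the drift shows up
```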
Ball bars, as used in accordance with features described herein, are advantageous in that they are robust and inexpensive. Moreover, any number of ball bars with any known measurements, in any orientation with respect to rotary stage 110, can be used. However, the use of ball bars may require repositioning of the bars to properly capture complete measurement data.
Ball plate 200 illustratively includes any number of spheres 202(n1), 202(n2), 202(n3) . . . 202(ni). In the illustrated example, ball plate 200 includes 10 spheres 202 that project from both sides of plate 200. Spheres 202 are visible from all angles when viewing plate 200 with, for instance, sensing assemblies 102(a) and 102(b). In one embodiment, the centers of the spheres are substantially coplanar. However, embodiments of the present invention can be practiced where the calibration artifact is not a plate, but is instead a constellation of balls that do not have coplanar centers. Each sphere 202 of plate 200 is precisely measured at the time of manufacture of plate 200. Therefore, the measured diameter and X, Y, Z center position of each sphere 202 can be used as known data in performing field calibration of system 100. The algorithm described below treats the ball plate as a set of ball bars, where any pair of balls acts as a separate ball bar. Effectively, the illustrated example of ball plate 200 provides 45 ball pairs (e.g. 45 measurements of distance between sphere centers, such as by effectively providing 45 ball bars manufactured into plate 200). In one embodiment, ball plate 200 includes a first plurality of balls having a first diameter and a second plurality of balls having a second diameter that is larger than the first diameter, in order to unambiguously determine ball plate orientation in the scan data.
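The treatment of the ball plate as a set of virtual ball bars can be sketched as follows. The sphere-center table here is invented, standing in for the certified manufacturing measurements:

```python
from itertools import combinations
import math

# Hypothetical center table for a 10-sphere ball plate: (x, y, z) positions
# in millimeters. Real certified values for ball plate 200 would go here.
centers = [(20.0 * i, 0.0, 0.0) for i in range(5)] + \
          [(20.0 * i, 40.0, 0.0) for i in range(5)]

def ball_bar_lengths(centers):
    """Treat every pair of spheres as a virtual ball bar and return the
    reference center-to-center distance for each pair."""
    lengths = {}
    for (i, a), (j, b) in combinations(enumerate(centers), 2):
        lengths[(i, j)] = math.dist(a, b)
    return lengths

reference = ball_bar_lengths(centers)
print(len(reference))  # 10 spheres -> C(10, 2) = 45 virtual ball bars
```

During calibration, each sensed pair distance would be compared against the corresponding entry in `reference`.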
It is also noted that the present disclosure provides improved features for obtaining known, accurate measurements of a calibration artifact. In an embodiment where the calibration artifact is a ball plate 200, ball plate 200 can include machine-readable visual indicia 204.
As discussed above, a variety of transforms can be performed by system 100 to map from uncorrected to corrected space. These transforms and their associated operations of system 100 are further discussed below.
At block 304, the method illustratively includes collecting raw data that corresponds to scanner coordinates. Collecting raw data generally refers to the sensing of surface properties of an object that is imaged or otherwise detected by one or more cameras in a scanner. As noted above, each scanner has its own coordinate system, and therefore raw measurement data is dependent on that coordinate system. As indicated by block 320, collecting raw data includes scanning the calibration artifact with a scanner such as scanner 102(a) and/or 102(b). For instance, system 100 senses the calibration object relative to the particular scanner's coordinate system. Collecting raw data further illustratively includes collecting data from multiple stage positions. For instance, a rotary stage is rotated to a variety of angular positions. Rotation of the rotary stage allows all surface features of the object to be viewed. In addition, the precise position of the rotary stage is determined with a position encoder that is coupled to the stage.
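The collection loop of block 304 might be sketched as follows. `FakeScanner` and `FakeStage` are stand-in interfaces invented for illustration, not an actual scanner API:

```python
class FakeStage:
    """Stand-in for the rotary stage with its position encoder."""
    def __init__(self):
        self._angle = 0.0
    def rotate_to(self, deg):
        self._angle = deg
    def encoder_angle(self):
        return self._angle

class FakeScanner:
    """Stand-in scanner; a real one would return a point cloud."""
    def scan(self):
        return []

def collect_raw_data(scanner, stage, n_positions=12):
    """Scan the artifact at several stage angles (block 320 in spirit),
    tagging each point cloud with the precise encoder reading."""
    scans = []
    for i in range(n_positions):
        stage.rotate_to(i * 360.0 / n_positions)
        scans.append({"angle_deg": stage.encoder_angle(),
                      "points": scanner.scan()})  # scanner coordinates
    return scans

scans = collect_raw_data(FakeScanner(), FakeStage())
print(len(scans), scans[3]["angle_deg"])  # 12 90.0
```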
At block 306, the method illustratively includes obtaining known artifact measurement data. It is first noted that known artifact measurement data can include any measurement data for the artifact that is precisely known to be accurate (e.g. measured with accurate instrumentation at the time of manufacture of the artifact). In one example of block 330, a QR Code® is sensed by the scanning system. Based on the sensed QR Code®, the current artifact being imaged is identified. While a QR Code® is one type of visual indicia that can be provided on a surface of the calibration artifact for sensing, a variety of other visual indicia can also or alternatively be used. A matrix code (such as a QR Code®) may contain both the artifact identifying information and the actual artifact measurement data (the ball X, Y, Z positions and diameters). Further, other types of identifiers can also be used in accordance with embodiments of the present invention, such as RFID tags. Block 330 may further include querying a database for the known artifact measurement data corresponding to the identified calibration artifact, as illustratively shown at block 332. At block 334, other mechanisms for obtaining known artifact measurement data can be used in addition or alternatively to those discussed above. For instance, an operator can manually input known measurement data for the artifact being imaged. In a particular embodiment, the three-dimensional, non-contact scanning system automatically identifies the artifact based on sensed visual indicia (e.g. a QR Code®) and further automatically retrieves the relevant data.
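The identification-and-lookup flow of blocks 330-332 can be sketched as a simple keyed query. The artifact ID, database layout, and values below are all invented for illustration:

```python
# Hypothetical lookup: the scanned indicia yields an artifact ID, which keys
# into a database of certified measurements. Serial number and geometry are
# made up; a real record would hold the full certified center/diameter table.
ARTIFACT_DB = {
    "BP200-0042": {
        "type": "ball_plate",
        "sphere_centers_mm": [(0.0, 0.0, 0.0), (20.0, 0.0, 0.0)],
        "sphere_diameters_mm": [6.35, 6.35],
    },
}

def known_measurements(artifact_id):
    """Return certified geometry for the identified artifact (block 332)."""
    record = ARTIFACT_DB.get(artifact_id)
    if record is None:
        # Fall through to manual entry (block 334) in a real system.
        raise KeyError(f"artifact {artifact_id!r} not found in database")
    return record

print(known_measurements("BP200-0042")["type"])  # ball_plate
```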
Continuing with block 308, the method includes comparing the collected raw data (e.g. raw data that is sensed using the scanner coordinate system) to the obtained known calibration artifact measurement data. Of course, a variety of comparisons can be done across the two data sets. In one embodiment, degrees of freedom of the scanning system are identified and used to calculate errors between the collected raw data and the known artifact data, in accordance with block 310. Further, for instance, one or more point clouds can be generated. A point cloud, as used herein, generally includes a collection of measurement properties of a surface of an imaged object. These surface measurements are converted to a three-dimensional space to produce a point cloud. It is noted that point clouds that are generated are relative to their respective sensing system. For instance, scanner coordinate systems (e.g. coordinates in three-dimensional space within the scanner's field of view) can vary, especially as a system ages and experiences mechanical drift. As such, calculated deviations between measured surface positions and expected surface positions can provide the system with an indication that a particular coordinate system (of one of the scanners) has drifted over time and requires re-calibration in the field.
Calculating errors, as shown at block 310, generally includes calculating variations between scanner data tied to a scanner coordinate system and known measurement data for a calibration artifact being imaged within the scanner coordinate system. Several examples of error calculations that can be performed in accordance with block 310 will now be discussed. At block 336, a distance error is calculated. A distance error generally includes a calculated difference between the collected raw measurement distance (e.g. the sensed distance between two sphere 202 centers in ball plate 200) and the obtained accurate measurement distance. Calculating errors also illustratively includes calculating a rotation radius error, as shown at block 338. For instance, spheres or balls of a calibration artifact will rotate within the scanning system at a constant radius (e.g. on a stage with minimal wobble). As such, when calibration errors occur due to mechanical drift, for instance, block 338 includes calculating a variation in radius for each artifact feature (e.g. sphere or ball) at each angle of rotation around the rotary stage. In accordance with block 340, calculating errors also illustratively includes calculating errors or variations in position along the axis of rotation (e.g. Y axis of rotation 126) of the calibration artifact as it rotates on the stage. As a calibration artifact is rotated about the axis of rotation, the calibration artifact passes around an orbit of the rotation, defined in part by the measurement volume. The method illustratively includes calculating errors in chord length of calibration artifact features as they rotate around the orbit, as indicated at block 342.
For instance, when there is no mechanical drift or other measurement inaccuracy, the total orbit distance traveled by the calibration artifact should match the measured chord distance of the balls as they rotate. As an example only, and not by limitation, measured chord length can be compared to known measurements to calculate errors by using the following equation:
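The referenced equation is not reproduced in this text. A plausible form of the comparison, assuming only the standard chord geometry (a rotation by Δθ at radius r subtends a chord of 2r·sin(Δθ/2)), can be sketched as follows; the form and the numbers are assumptions, not the patent's own equation:

```python
import math

def chord_error(measured_chord, radius, delta_theta):
    """Difference between the measured chord between two stage positions and
    the chord expected from ideal rotation: 2 * r * sin(delta_theta / 2).
    Assumed form; the patent's equation itself is not reproduced here."""
    expected = 2.0 * radius * math.sin(delta_theta / 2.0)
    return measured_chord - expected

# A drift-free system reports the geometric chord exactly, so the error is 0:
ideal_chord = 2.0 * 50.0 * math.sin(math.radians(90) / 2.0)
err = chord_error(ideal_chord, radius=50.0, delta_theta=math.radians(90))
print(abs(err) < 1e-9)  # True
```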
Of course, a variety of additional or alternative error calculations can be used. Other error calculations are shown at block 344.
Continuing with block 312, the method illustratively includes generating a spatial mapping such as a projective transform to minimize a sum of the calculated errors. A variety of techniques can be used to generate a coordinate transform, in accordance with embodiments of the present invention. In one embodiment, block 312 includes determining an appropriate algorithm to use in generating the coordinate transform. For instance, where the method determines, at block 310, that the calculated errors are relatively small, a coordinate transform can be employed to convert points in scanner coordinates to points in world coordinates using a rigid body or affine transform. Equation 2A is an example of an affine transform matrix array:
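Equation 2A is not reproduced in this text. As an illustration of the standard homogeneous affine form (a 4x4 matrix whose last row is [0 0 0 1]), with invented drift values:

```python
import numpy as np

# Illustrative affine transform in homogeneous coordinates. The small scale,
# shear, and translation terms are made up to mimic mechanical drift.
A = np.array([[1.001,  0.0,   0.0,   0.05],
              [0.0,    0.999, 0.002, -0.10],
              [0.0,   -0.002, 1.000,  0.02],
              [0.0,    0.0,   0.0,    1.0]])

def affine_to_world(A, x_c):
    """Map a scanner-coordinate point [x, y, z] to world coordinates by
    appending the homogeneous 1 and applying the matrix."""
    x_h = np.append(np.asarray(x_c, dtype=float), 1.0)  # [x, y, z, 1]^T
    return (A @ x_h)[:3]

print(affine_to_world(A, [10.0, 20.0, 30.0]))
```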
If errors are larger, a projective array as shown in Equation 2B can be used:
As shown in Equations 2A and 2B, XW is the world position (i.e. a position tied to the rotary stage) and XC is the point position in the scanner coordinate system, [x, y, z, 1]T. Both equations map from XC to XW.
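A full projective transform differs from the affine case only in that the last row of the matrix is unconstrained, so a homogeneous divide by w is required after multiplication. The perspective term below is invented purely for illustration:

```python
import numpy as np

def projective_to_world(P, x_c):
    """Apply a 4x4 projective transform to a scanner point [x, y, z]:
    unlike the affine case, the result must be renormalized by w."""
    x_h = np.append(np.asarray(x_c, dtype=float), 1.0)
    w_h = P @ x_h
    return w_h[:3] / w_h[3]          # homogeneous divide

# Sketch values only; a real P would come from the least-squares fit below.
P = np.eye(4)
P[3, 2] = 1e-4                       # mild made-up perspective term
print(projective_to_world(P, [0.0, 0.0, 100.0]))
```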
The values in the transform matrices may be calculated using a least squares algorithm, as illustrated at block 348. One example of a least squares algorithm that can be used in accordance with block 312 is the Levenberg-Marquardt algorithm. In this and similar algorithms, the sum of the squares of the deviations (e.g. errors) between the sensed measurement values and the obtained known calibration measurement values is minimized.
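For the affine case, minimizing the sum of squared deviations reduces to a linear least-squares problem that can be solved in closed form. The sketch below uses synthetic sphere-center correspondences and an invented drift; Levenberg-Marquardt, as named above, would be needed for the general nonlinear case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scanner-coordinate sphere centers and their "known" world
# positions, related by a small made-up affine drift (illustration only).
A_true = np.array([[1.002, 0.0,   0.0,   0.1],
                   [0.0,   0.998, 0.0,  -0.2],
                   [0.0,   0.0,   1.001, 0.05]])
X_c = rng.uniform(-50, 50, size=(20, 3))
X_h = np.hstack([X_c, np.ones((20, 1))])   # homogeneous [x y z 1]
X_w = X_h @ A_true.T                       # "known" world centers

# Least-squares estimate of the 3x4 affine block: this minimizes the sum of
# squared deviations between sensed and known values, as block 348 describes.
A_fit, *_ = np.linalg.lstsq(X_h, X_w, rcond=None)
A_fit = A_fit.T
print(np.allclose(A_fit, A_true, atol=1e-8))  # True (noiseless data)
```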
Further, in example systems where it is determined that mechanical drifts are large (e.g. error calculations are indicative of large deviations between scanner coordinate system measurement outputs and known measurements), generating a coordinate transform illustratively includes using tri-variate functions such as polynomials. A polynomial allows correction of non-linear errors that can occur if there is a large mechanical change in the sensing system. This is shown at block 350. As such, the transform is no longer a linear algebraic equation. Rather, in one example, a set of three polynomials of the following form is used, where the W subscript indicates world coordinates and the C subscript indicates scanner coordinates:
xW = Fx(xC, yC, zC)   Equation 3

yW = Fy(xC, yC, zC)   Equation 4

zW = Fz(xC, yC, zC)   Equation 5
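Fitting the tri-variate polynomials FX, FY, FZ by least squares might look like the following sketch. The monomial basis, synthetic data, and injected nonlinear drift are all illustrative assumptions, not specifics from the text:

```python
import numpy as np

def trivariate_design(X, degree=2):
    """Monomial design matrix in (xC, yC, zC) up to the given total degree."""
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    cols = []
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            for k in range(degree + 1 - i - j):
                cols.append(x**i * y**j * z**k)
    return np.stack(cols, axis=1)

# Fit one polynomial per world axis against synthetic correspondences; real
# data would come from the ball-plate scans described above.
rng = np.random.default_rng(1)
X_c = rng.uniform(-1, 1, size=(200, 3))
X_w = X_c + 0.01 * X_c**2                  # made-up nonlinear drift
D = trivariate_design(X_c)
coeffs, *_ = np.linalg.lstsq(D, X_w, rcond=None)
residual = D @ coeffs - X_w
print(np.max(np.abs(residual)) < 1e-9)     # degree-2 model recovers it exactly
```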
At block 314, the method illustratively includes correcting the scanning system based on the coordinate transform that is generated. In one embodiment, block 314 includes using the projective transform to map the data obtained using the scanner coordinate system (where that coordinate system is determined to produce measurement inaccuracies, e.g. at block 310) to a world coordinate system that is tied to the rotary stage. For instance, in addition to determining deviations (e.g. errors) between measurements sensed by the factory-calibrated scanner coordinate system and the measurements known to be accurate at the various precise positions of a stage, systems and methods in accordance with embodiments herein calibrate the scanner coordinate system in the field using the coordinate transform. As such, deviations can be used to correct mechanical drifts within each of the scanner coordinate systems, as each coordinate system varies individually. Mapping a scanner coordinate system to a world system, based on the transform, is indicated at block 352.
With the coordinate transforms determined for each scanner, the system can use them to more accurately sense objects placed within the scanning volume. The field calibration described above can be performed at any suitable interval, such as after a certain number of objects have been scanned, or at the end or beginning of a shift.
While embodiments described thus far have focused on a single operation that obtains the requisite spatial mapping to correct the coordinate system for each scanner, embodiments of the present invention also include iteration of the method. The general result of the process is to obtain a spatial mapping from scanner coordinates (uncorrected) to world coordinates (corrected). For example, the equation XW=PXC provides a projective transform, P, that maps the scanner coordinates (XC) to world coordinates (XW).
The calculation of P is, in one embodiment, based on the measured center positions of a number of spheres. First, points on the surface of each sphere are measured in the scanner coordinate system. Then, for each sphere, the center of the identified surface is calculated (still in the scanner coordinate system). Finally, the sphere centers are provided to a least squares solver that minimizes errors in order to obtain P. In some instances, however, the required correction is large enough that the measured sphere surfaces are distorted, producing a small but meaningful error in finding the true sphere centers. The iterative technique remedies this problem.
The iterative technique proceeds as follows. First, (1) the surface of each sphere is found in the scanner coordinate system. Next, (2) for each sphere, the center is calculated (in the scanner coordinate system). Next, (3) the sphere centers are used to calculate P. On the first iteration, this estimate of P is close to correct, but not exact. Next, (4) P is applied to the sphere surfaces found in step 1 (the surfaces are now approximately corrected). Next, (5) the centers of the corrected sphere surfaces are found. Next, (6) the corrected center positions of the spheres are moved back to the scanner coordinate system: XC=P−1XW, where P−1 is the inverse of the P transform. Finally, steps 3-6 are repeated using the more accurately estimated sphere centers to obtain a better estimate of P.
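The six steps above might be sketched as follows. `solve_P` is a placeholder for the least-squares solver of block 312, the algebraic sphere fit is one standard choice (the text does not specify a fitting method), and the data in the sanity check is synthetic:

```python
import numpy as np

def fit_sphere_center(pts):
    """Algebraic least-squares sphere fit: ||p||^2 = 2 p.c + d is linear in
    the unknown center c and d = r^2 - ||c||^2; returns the center only."""
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]

def apply_transform(P, pts):
    """Apply a 4x4 homogeneous transform to an (n, 3) point array."""
    h = np.hstack([pts, np.ones((len(pts), 1))]) @ P.T
    return h[:, :3] / h[:, 3:4]

def refine_P(surfaces_c, known_centers_w, solve_P, n_iter=3):
    """Steps 1-6 of the iterative technique. `solve_P` stands in for the
    least-squares solver of block 312 and is not implemented here."""
    centers_c = [fit_sphere_center(s) for s in surfaces_c]        # steps 1-2
    P = solve_P(centers_c, known_centers_w)                       # step 3
    for _ in range(n_iter):
        corrected = [apply_transform(P, s) for s in surfaces_c]   # step 4
        centers_w = [fit_sphere_center(s) for s in corrected]     # step 5
        Pinv = np.linalg.inv(P)                                   # step 6
        centers_c = [apply_transform(Pinv, c[None, :])[0] for c in centers_w]
        P = solve_P(centers_c, known_centers_w)                   # repeat 3
    return P

# Sanity check of the sphere fit on exact synthetic surface points
# (center (3, -1, 5), radius 2):
t, p = np.meshgrid(np.linspace(0.1, np.pi - 0.1, 12),
                   np.linspace(0.0, 2 * np.pi, 12))
pts = np.stack([3 + 2 * np.sin(t) * np.cos(p),
                -1 + 2 * np.sin(t) * np.sin(p),
                5 + 2 * np.cos(t)], axis=-1).reshape(-1, 3)
print(np.allclose(fit_sphere_center(pts), [3.0, -1.0, 5.0]))
```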
Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
The present application is based on and claims the benefit of priority of U.S. provisional patent application Ser. No. 62/307,053, filed Mar. 11, 2016, the contents of which are hereby incorporated by reference in their entirety.
Number | Date | Country
---|---|---
62307053 | Mar 2016 | US