Additive manufacturing machines, commonly referred to as 3D printers, may be used to produce three-dimensional objects. In some examples, the three-dimensional objects are produced in layers using build material.
The figures are not to scale. Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. While the drawings illustrate examples of printers and associated controllers, other examples may be employed to implement the examples disclosed herein.
Disclosed herein are example methods, apparatus, systems, and articles of manufacture for calibrating a stereo vision system. In general, a stereo vision system includes the use of two cameras, spaced apart from each other, that obtain digital images of a common area, referred to herein as a field of view (FoV). Stereo vision systems are used to extract depth information, such as 3D features, of the scene in the FoV.
Disclosed herein are example additive manufacturing (AM) machines, commonly referred to as 3D printers, that utilize stereo vision systems to extract depth information relating to the build process being performed by the additive manufacturing system. The stereo vision system may be used, for example, to ensure the spread of build material (e.g., metallic powder) is sufficiently even or level, thereby increasing the accuracy of a 3D object being generated. Additionally or alternatively, the stereo vision system may be used to ensure that each hardened layer of the 3D object is sufficiently level or even. As the accuracy of the stereo vision system affects the accuracy of the depth information extraction process, proper calibration of the stereo vision system ensures accurate results during the depth information extraction process.
Traditionally, a stereo vision system is calibrated by holding a calibration target in the FoV and taking a plurality of images with the cameras with the calibration target in different orientations. The calibration target may include, for example, a known grid or pattern of features. The images of the calibration target are analyzed by a processor and may be used to determine one or more of extrinsic parameters of the cameras, intrinsic parameters of the cameras, and distortion coefficients. These parameters may then be used to extract depth information from the FoV during use of the stereo vision system. However, the process of manually holding the calibration target and moving the calibration target through the plurality of positions is time consuming and cumbersome. Further, this process is subject to human error and often has to be repeated.
Disclosed herein are example automated calibration target stands that support a calibration target and move the calibration target to the plurality of positions for calibrating a stereo vision system. An example automated calibration target stand includes a platform to which the calibration target is to be mounted. The platform is tiltable and rotatable using two motors. The example automated calibration target stand, along with the calibration target, may be placed into the AM machine in the FoV of the stereo vision cameras. The example automated calibration target stand moves the calibration target to the plurality of positions (e.g., tilts, rotations, etc.) and the stereo vision cameras obtain images of the calibration target in the different positions. The example automated calibration target stand enables high precision in positioning the calibration target and enables high repeatability of the same positions and/or sequence of positions.
In some examples, the automated calibration target stand is part of a calibration system that includes a calibrator. The calibrator manages the position(s) and/or sequence of positions and controls the movement of the automated calibration target stand. In some examples, the calibrator is implemented as a software program or application operated on a computer. In some examples, the automated calibration target stand is placed into an AM machine and plugged (e.g., via a cord) into the computer. The calibrator executes a calibration process that instructs the automated calibration target stand to move to the position(s) and/or sequence of positions and instructs the stereo vision cameras to obtain images when the calibration target is in the desired positions.
In some examples, the example calibration system can be used to identify positions and/or sequences of positions that result in relatively low calibration error. These positions and/or sequences of positions can be saved and repeated in subsequent calibrations. As such, this process can be used to iteratively refine the calibration process and determine more effective positions and/or sequences of positions that result in a more accurate calibration process. Also disclosed herein are example methods, apparatus, systems, and articles of manufacture for selecting parameters of a calibration target, determining the boundaries of the tilt positions, and determining the positions and/or sequence of positions to be used while calibrating a stereo vision system.
In the illustrated example, a computing device 112 is provided that controls the printer 100 to build a 3D object according to a 3D model or build file. In particular, the computing device 112 controls the operations of the printer 100, such as controlling the build platform 102, the roller 106, the dispenser 109, the light 110, and/or any other part of the printer 100. In some examples, the computing device 112 is part of the printer 100 (e.g., coupled to or built into a housing of the printer 100). In other examples, the computing device 112 is separate from the printer 100 and may be electrically coupled to the printer 100 (e.g., via a cord) to interface with the components of the printer 100. The computing device 112 is illustrated twice in
To enhance the accuracy of the building process, the example printer 100 includes an example stereo vision system 120. In the illustrated example, the stereo vision system 120 includes a first camera 122 and a second camera 124 that are spaced from each other and aimed at a common area and have the same FoV. In this example, the FoV includes the build platform 102. The first and second cameras 122, 124 take digital images of the FoV and an image analyzer 126 extracts 3D information based on the image set (one image from each of the cameras 122, 124). In the illustrated example, the image analyzer 126 of the stereo vision system 120 is implemented as a program or application that may be executed by a processor of the computing device 112. To extract 3D depth information, the image analyzer 126 measures the change (e.g., in number of pixels) in a position of a common point or feature between the two images and determines the Z (vertical) height of the point or feature relative to other points or features based on the change. As such, the example stereo vision system 120 can be used to determine the position of a feature in the FoV including changes in the Z height of any objects and/or surfaces in the FoV.
In some examples, the stereo vision system 120 is used to ensure that each layer of the powder material has a substantially uniform thickness. Uneven spreads may cause uneven layers in the generated 3D object and, as a result, can lead to defects and/or undesired features (e.g., a void) in the 3D object. Therefore, in some examples, after each spread of the powder material, the stereo vision system 120 obtains an image set of the layer of powder material. If the image analyzer 126 measures a relatively large variance or anomaly in the Z height of the powder spread, the printer 100 may re-spread the layer of powder material and/or take another course of action. Additionally or alternatively, the stereo vision system 120 may obtain images of the hardened layers after each of the layers is created to ensure each layer is sufficiently even or flat. Therefore, the accuracy of the building process relies on accurate depth measurements from the stereo vision system 120. Thus, proper calibration of the stereo vision system 120 ensures the accuracy of the stereo vision measurements.
In the illustrated example of
Referring briefly to
In the illustrated example of
In
Referring back to
In the illustrated example of
In some examples, prior to beginning a calibration sequence, the calibration executor 142 instructs the user where to position the stand 132 on the build platform 102. For example, the build platform 102 may include guide marks (e.g., a grid with XY coordinates) and/or positioning hardware (e.g., indexing pins and holes, matched projections and depressions, edges to position the base 206 against, etc.) that enable a user to accurately place the stand 132 in a specific XY location on the build platform 102. The initial position may be displayed on a display screen of the computing device 112, for example. In other examples, the stand 132 may be placed anywhere on the build platform 102 and, during the calibration process, the calibrator 134, which determines where the calibration target 136 is located in space, may calculate the transformations that yield the desired set of motions. In some such examples, the calculation accounts for the rigid body motion between the actual and expected initial position of the calibration target 136, based on an initial image pair. The expected initial position of the calibration target 136 may be the center of the build platform 102, with the dot grid rows and columns in the Y and X directions, for example.
As disclosed herein, a calibration process may include obtaining images of the calibration target 136 in multiple positions, where each position is defined by a specific orientation of the calibration target 136 (defined by the rotational angles about the X, Y, and/or Z axes) and/or a specific location in the FoV (defined by the XYZ location of the calibration target 136 in the FoV relative to the first and second cameras 122, 124). As a result, multiple images of the calibration target 136 may be obtained with the calibration target 136 in different orientations and locations in the FoV. In some examples, the stand 132 is placed in a first XY location on the build platform 102 and a calibration sequence (e.g., a first position or set of positions) is performed with the stand 132 (by rotating the platform 200 about the X and Z axes in
Also, as disclosed above, a position of the calibration sequence may include a specific height or depth (along the Z axis) relative to the first and second cameras 122, 124. Therefore, in addition to or as an alternative to changing the tilt, rotation, and/or XY location of the calibration target 136, the stand 132 may be moved vertically (linearly) upward and/or downward, which enables more calibration images throughout the volume of space in the FoV and, thus, better calibration results. For example, the calibration executor 142 may instruct the platform controller 114 to move the build platform 102 up or down while the stand 132 is disposed on the build platform 102, thereby moving the calibration target 136 linearly toward or away from the first and second cameras 122, 124. In some examples, the build platform 102 is moved down to a level where the platform 200 (
In some examples, the calibrator 134 includes a position determiner 148 that determines and/or otherwise selects the calibration position(s) that should be used for calibrating the stereo vision system 120. Each position may be defined by a specific orientation (rotation about the XYZ axes) of the calibration target 136 and/or location (in the XYZ frame) of the FoV. In some examples, the position determiner 148 determines the position(s) based on a type of the stereo vision system 120, a type of the printer 100, a size of the FoV, a time since a last calibration, and/or any other parameter of the stereo vision system 120 and/or the printer 100. In some examples, the position determiner 148 selects a position sequence based on a standard or guideline sequence used in other stereo vision calibration processes. Additionally or alternatively, a position sequence may be established and/or otherwise created via user input. For example, a user may manually enter the desired position(s) (e.g., defined by the rotational angles about the XYZ axes and/or location in the XYZ reference frame) to be included in the calibration process. Other example processes for determining the positions of a calibration sequence are disclosed in further detail herein.
The images obtained by the first and second cameras 122, 124 of the calibration target 136 are analyzed by a parameter determiner 150 of the calibrator 134 to determine various parameters of the stereo vision system 120 that are used during the depth extraction process, as disclosed in further detail herein. To ensure the accuracy of the calibration process, the calibrator 134 of
In Equation 1, Pi is the projection matrix of a camera for the i-th calibration view, xj is the j-th detected 2D grid point in the image, and Xj is the corresponding planar point of the 3D world. In some examples, the calibration error calculator 152 calculates the quality of the calibration process by taking the root mean square (RMS) of the target reprojection error values of all the calibration images for the sequence using Equation 2 below.
The RMS calibration error value is a useful measure of how well the calculated camera parameters correspond to the actual system setup. In some examples, only the results of a calibration sequence having an RMS calibration error value that meets a threshold error value are considered reliable. For example, if the RMS calibration error value is greater than a threshold error, the calibration sequence may not be considered reliable and the calibration sequence may need to be performed again. If the RMS calibration error value meets the RMS calibration error threshold (e.g., is below the threshold), the results may be considered reliable, and the parameters of the stereo vision system 120 may be calculated with confidence. In some examples, an RMS calibration error value of 0.1 is considered acceptable, while lower values, such as 0.03, are considered excellent. The example calibration system 130 disclosed herein can be used to achieve RMS calibration errors down to 0.02 and even lower because of the accuracy, reliability, and repeatability of positions that can be performed with the example stand 132. With better calibration results, the example stereo vision system 120 can detect smaller variations in Z height more precisely (e.g., down to 4-5 microns).
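The error metric described above may be sketched as follows; this is an illustrative numpy implementation, and the function names, array shapes, and homogeneous-coordinate convention are assumptions rather than part of the calibrator 134:

```python
import numpy as np

def reprojection_rms(P, world_pts, image_pts):
    """RMS target reprojection error for one calibration view.

    P          : 3x4 camera projection matrix for this view (Pi above).
    world_pts  : (n, 3) planar 3D grid-point coordinates (Xj above).
    image_pts  : (n, 2) detected 2D grid points in the image (xj above).
    """
    n = world_pts.shape[0]
    homog = np.hstack([world_pts, np.ones((n, 1))])   # homogeneous coordinates
    proj = (P @ homog.T).T                            # project into the image
    proj = proj[:, :2] / proj[:, 2:3]                 # perspective divide
    residuals = np.linalg.norm(proj - image_pts, axis=1)
    return np.sqrt(np.mean(residuals ** 2))

def sequence_rms(per_view_errors):
    """Overall calibration quality: RMS of the per-view error values."""
    e = np.asarray(per_view_errors, dtype=float)
    return np.sqrt(np.mean(e ** 2))
```

A sequence whose `sequence_rms` value exceeds a chosen threshold (e.g., 0.1) would be repeated, as described above.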
In some examples, the calibrator 134 includes a position optimizer 154 that identifies a position or a combination of positions that result in minimal calibration errors. The example position optimizer 154 may analyze the reprojection error value(s) of a position or a combination of positions of previously performed calibrations. In some examples, the position optimizer 154 compares the reprojection error value(s) to a reprojection error threshold. If the reprojection error value does not meet the reprojection error threshold (e.g., is greater than the threshold), the image set for that position may be retaken or the position may be changed. If the new position produces a better result, the new position is saved such that the calibration sequence results in a more accurate calibration. The position optimizer 154 may save (e.g., in the database 144) these positions and/or combinations of positions as optimal positions to be used in subsequent calibrations. After multiple iterations, a calibration process may be refined repeatedly until a relatively accurate sequence of positions is achieved. An example process of minimizing calibration error is disclosed in connection with
In some examples, the position optimizer 154 determines certain ones of the image sets to remove or delete from the analysis (e.g., because of a high error value that negatively affects the calibration process). Additionally or alternatively, the position optimizer 154 may determine certain positions and/or areas in the FoV that need additional calibration data. For example, the position optimizer 154 may identify that a certain tilt angle produced high error results. The position optimizer 154 may determine one or a plurality of additional positions around the tilt angle so as to add more calibration data for this specific region or area.
As mentioned above, the parameter determiner 150 uses the results of the calibration process to estimate certain parameters of the stereo vision system 120 that may be used to establish a measurement reference frame in the 3D world coordinates, correct for lens distortions, and/or extract quantitative depth information from stereo image data during the printing process. As such, the calibration process directly impacts the accuracy of the stereo depth extraction technique. The parameters include intrinsic parameters of the cameras 122, 124, extrinsic parameters of the cameras 122, 124, and/or lens distortion coefficients of the cameras 122, 124. The lens distortion coefficients are used to correct for lens distortion, such that measurements taken from the images are reliable and accurate. The intrinsic parameters include the camera-specific geometric and optical characteristics, such as the equivalent lens focal length measured in pixels, the coordinates of the true optical center, and/or the pixel skew coefficient. Extrinsic parameters include the relative position and orientation of cameras 122, 124 in the 3D world coordinates, such as rigid body translation and rotation vectors of the cameras 122, 124. Together, the intrinsic and extrinsic parameters define the geometry used to determine the relationship between measured pixel disparity values and quantifiable Z height used by the image analyzer 126 when analyzing image sets. For example, Equation 3 below illustrates the stereo vision geometry used to extract depth information:
In Equation 3, Z is the perpendicular distance from the cameras 122, 124 to the calibration target 136 (in meters), f is the lens focal length (in pixels), B is the baseline distance between the cameras 122, 124 (in meters), and D is the disparity between common features in stereo images (in pixels). The lens focal length f and the baseline distance B are the parameters determined via the stereo calibration process using the parameter determiner 150.
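A minimal sketch of the depth computation of Equation 3, with the variable names taken from the description above:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Z = f * B / D : perpendicular distance (in meters) from the cameras
    to a feature, computed from the feature's pixel disparity.

    f_px         : lens focal length f, in pixels (from calibration).
    baseline_m   : baseline distance B between the cameras, in meters.
    disparity_px : disparity D between common features, in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px
```

For example, a calibrated focal length of 1400 pixels, a 0.2 m baseline, and a measured disparity of 700 pixels would place the feature 0.4 m from the camera pair; the numeric inputs here are illustrative, not parameters of the printer 100.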
In some examples, the parameter determiner 150 determines the parameters algorithmically by analyzing a sequence of calibration images. In particular, the images depict a planar view of the calibration target 136 containing a known grid pattern and positioned at different tilts and orientations within the FoV of the cameras 122, 124. In some examples, the parameter determiner 150 analyzes the images based on the pinhole camera model to account for Seidel lens distortions. The pinhole camera model sets up a parametric fitting process to solve the correspondence problem between the 3D world coordinates and the analogous 2D image points. Moving the calibration target 136 around the FoV and, in some examples, introducing extreme amounts of out-of-plane tilt, improves the parametric fitting process by providing additional correspondence information between the 3D world coordinates and the 2D image points. The outputs of the pinhole correspondence problem are the intrinsic and extrinsic parameters. The Seidel modification leverages the constraint that the planar grid pattern should appear uniform in corrected calibration images to calculate the coefficients required to remove lens distortions from subsequent images taken with either of the cameras 122, 124. In other examples, the parameter determiner 150 may utilize other camera models or methodologies, such as a Direct Linear Transform (DLT), Tsai's method, and/or Zhang's method. In some examples, the calibration results remain valid as long as the camera focus, aperture setting, and relative positioning of the cameras 122, 124 remain the same. However, if adjustments are made to the cameras 122, 124, the stereo vision system 120, and/or the printer 100, the stereo vision system 120 should be recalibrated. The stereo vision system 120 may be calibrated before or after use.
For example, the stereo vision system 120 could be calibrated after the stereo vision system 120 obtains images of the build process, and the determined parameter(s) may be used to analyze the images afterwards.
In some examples, the calibration error is inversely proportional to the number of dots contained in a calibration target. Therefore, in some instances, increasing the number of dots tends to improve calibration quality. However, there is a point of diminishing return when the dots become too small, or too closely spaced, to be accurately identified and processed by the calibrator 134.
In some examples, to achieve sufficient resolution of circular features for the purpose of image analysis, a standard of 9×9 pixels of resolving power is used, which translates to an inner dot diameter d of at least 9 pixels (i.e., 9 pixels multiplied by the camera spatial resolution). In some calibration targets, such as the calibration target 136 shown in
d≥9*(Camera Spatial Resolution) Equation 4
In this example, the optimal inner diameter ratio d/D is 0.4. This allows Equation 4 to be rewritten as Equation 5 below.
0.4D≥9*(Camera Spatial Resolution) Equation 5
Equation 6 gives the final expression for the minimum dot diameter as a function of camera spatial resolution.
D≥22.5*(Camera Spatial Resolution) Equation 6
Equation 6 provides a result that is sufficient for determining the minimum dot size requirement when camera spatial resolution is a limiting design factor. However, in other examples, the resolution of a camera may be much higher. For example, the printer 100 of
D≥22.5*(Printer Spatial Resolution)=22.5*(0.085 mm)=1.9 mm Equation 7
This result means that if the spatial resolution of the stereo vision system 120 is below 85 microns per pixel, the minimum dot diameter should still be about 2 millimeters (mm) to assure that the circular features are accurately printed. If the spatial resolution of the stereo vision system 120 is above 85 microns per pixel, or if special high-resolution printing is being implemented, then the expression for minimum dot diameter as a function of camera spatial resolution provides an appropriate minimum dot diameter.
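The dot-diameter selection logic of Equations 6 and 7 may be sketched as follows; the helper name and the default printer resolution of 0.085 mm are illustrative assumptions:

```python
def min_dot_diameter_mm(camera_res_mm_per_px, printer_res_mm=0.085):
    """Minimum printed dot diameter D, in mm, per Equations 6 and 7.

    The factor 22.5 combines the 9-pixel resolving requirement with the
    d/D = 0.4 inner-diameter ratio (9 / 0.4 = 22.5). Whichever of the
    camera or the printer has the coarser spatial resolution is the
    limiting design factor.
    """
    limiting_resolution = max(camera_res_mm_per_px, printer_res_mm)
    return 22.5 * limiting_resolution
```

With a camera resolution of 0.048 mm per pixel, the printer resolution governs and the minimum diameter is 22.5 × 0.085 ≈ 1.9 mm, matching Equation 7; a coarser camera (e.g., 0.120 mm per pixel) would instead require 22.5 × 0.120 = 2.7 mm.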
When determining dot spacing, the goal is to avoid a spacing that results in two or more dots being processed as a single dot by the calibrator 134. The lower limit of dot spacing depends on the spatial resolution of the stereo vision system 120. The edge-to-edge dot spacing impacts the distinguishability of the individual dots, specifically when the calibration target 136 is subject to out-of-plane tilt (e.g., about the X (horizontal) axis). An example geometry for determining the minimum edge-to-edge dot spacing S is illustrated in
P>Camera Spatial Resolution Equation 8
Solving for P in terms of the dot spacing S, according to the geometry shown in
P=S sin(θcamera−θgrid) Equation 9
Solving for S to determine the minimum edge-to-edge dot spacing for a given camera system is given by Equation 10 below.
Considering an extreme case, where the minimum difference between the camera and the grid angles is less than 1 degree and the spatial resolution of the stereo vision system 120 is 48 microns per pixel, the above criterion suggests adhering to the dot spacing S shown below.
Alternatively, the above expression can be used to determine a limit on tilt angle that retains dot distinguishability for a given spacing. For instance, dropping the pixel⁻¹ term from the units does not change the meaning of the above result.
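Assuming the spacing criterion follows from Equations 8 and 9 as S > (Camera Spatial Resolution)/sin(θcamera − θgrid), the calculation may be sketched as follows; the function name and units are illustrative:

```python
import math

def min_dot_spacing(camera_res, tilt_margin_deg):
    """Minimum edge-to-edge dot spacing S.

    Rearranging P = S * sin(theta_camera - theta_grid) > camera_res
    (Equations 8 and 9) gives S > camera_res / sin(theta_camera - theta_grid).

    camera_res      : spatial resolution, e.g., in mm per pixel.
    tilt_margin_deg : worst-case difference between the camera and grid
                      angles, in degrees.
    """
    return camera_res / math.sin(math.radians(tilt_margin_deg))
```

For the extreme case above (0.048 mm per pixel, 1 degree margin), this yields a spacing of roughly 2.75 mm.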
Another consideration is the calibration target dimensions. In some examples, a calibration target that is approximately 50-75% of the camera FoV is selected. In some instances, smaller grids present challenges in terms of adequately calibrating the entire FoV after a reasonable number of iterations, and larger grids become difficult to keep within the camera FoV while also achieving a sufficient amount of target variation. In other examples, a calibration target having smaller or larger dimensions is selected.
In some examples, the position determiner 148 of the calibrator 134 determines and/or otherwise selects the positions and/or sequences of positions for the calibration target 136 based on various factors. For example, the position determiner 148 may consider that the goal of the calibration process is to determine the relationship between the 3D world coordinates and the corresponding 2D image points. When the grid dots are distributed largely out-of-plane, the parametric fitting process can more confidently decipher how the volume of space in the real world is being projected onto the image plane. As such, subjecting the calibration target to extreme out-of-plane tilts may improve the calibration process. Therefore, the position determiner 148 may select more out-of-plane tilt positions.
As another example, the position determiner 148 may consider that there is benefit to applying intermediate tilts to the calibration target 136. While extreme out-of-plane tilts may provide information about what is happening across a large space of the imaging space, intermediate tilt positions (between vertical and horizontal) improve upon the quantity of information provided at each particular depth of the image volume.
As another example, the position determiner 148 may consider that the camera parameters are directionally dependent. Therefore, in some examples, the calibration target is subjected to changes in tilt and orientation equally in both planar coordinate directions, which ensures that there is minimal directional discrepancy in the calibration quality. In some examples, the position determiner 148 may consider that the calibration is spatially dependent. In some examples, positions are selected such that the grid pattern is present at every location within the camera FoV in the aggregate of the calibration image data. This may improve the calibration results by considering an estimation of the system parameters in each region of the image space.
As disclosed above, in some examples, extreme angles of out-of-plane tilt may benefit the calibration process by supplying a greater range of 3D world coordinate information to solve the correspondence problem. However, at some angle, the grid pattern 220 becomes obstructed or even defocused in one or both of the cameras 122, 124. In either case, the particular calibration image set may be rendered unusable. As such, in some examples, the position determiner 148 may use constraints when determining limitations on tilt angle for a calibration sequence.
One constraint, for example, considers that after applying out-of-plane tilt to the calibration target 136, the grid pattern 220 should remain visible in both of the cameras 122, 124. For example, referring to
Another example constraint considers that after applying out-of-plane tilt to the calibration target 136, the grid pattern 220 should remain in focus in both of the cameras 122, 124 (
Rearranging Equation 11 to isolate the out-of-plane tilt angle and applying the mathematical focus constraint yields Equation 12 below.
The expression in Equation 12 is valid when Wgrid>DoF. Otherwise, the maximum out-of-plane tilt angle should adhere to the previous constraint of θgrid,max<θcamera.
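One plausible reading of the focus constraint, consistent with the Wgrid > DoF validity condition stated above, is that a grid of width Wgrid tilted by an angle θ spans a depth of Wgrid·sin(θ), which must not exceed the depth of field. A sketch under that assumption (the exact form of Equations 11 and 12 is not reproduced here):

```python
import math

def max_tilt_deg(dof, w_grid, theta_camera_deg):
    """Maximum out-of-plane tilt angle keeping the grid in focus.

    Assumes the tilted grid spans a depth of w_grid * sin(theta), so
    focus requires sin(theta) <= DoF / w_grid, i.e.,
        theta_max = arcsin(DoF / w_grid)   when w_grid > DoF.
    Otherwise focus is never limiting, and the visibility constraint
    theta_grid < theta_camera governs instead.
    """
    if w_grid <= dof:
        return theta_camera_deg
    return math.degrees(math.asin(dof / w_grid))
```

For instance, a 100 mm grid in a 50 mm depth of field would be limited to about 30 degrees of out-of-plane tilt under this assumption.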
The position determiner 148 may select any number of positions to obtain calibration images. In some examples, the position determiner 148 selects a sequence of at least 27 positions. For example, this may correspond to three sets of nine out-of-plane tilts with orientations of 0°, 45°, and 90° of the calibration target 136 within the FoV, as illustrated in
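An illustrative way to enumerate such a 27-position sequence is shown below; the specific tilt angle values are placeholders, not values prescribed by the position determiner 148:

```python
from itertools import product

def default_sequence(tilts_deg=(-40, -30, -20, -10, 0, 10, 20, 30, 40),
                     orientations_deg=(0, 45, 90)):
    """Enumerate a calibration sequence: three in-plane orientations of
    the target, each paired with nine out-of-plane tilts, giving
    3 x 9 = 27 positions. The angle values here are illustrative only.
    """
    return [{"orientation": o, "tilt": t}
            for o, t in product(orientations_deg, tilts_deg)]
```

Each dictionary in the returned list corresponds to one position at which a stereo image set would be captured.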
In some examples, the stereo vision system 120 of the printer 100 may only need to be calibrated once. As long as the relative positions of the cameras 122, 124, the focal length, etc. remain relatively the same, the calibration parameters should remain valid. However, if any of the parameters of the stereo vision system 120 are changed, the stereo vision system 120 should be recalibrated. The example stand 132 disclosed herein is portable and can be easily used to calibrate the stereo vision system 120 at any time. Additionally, the example stand 132 can be similarly used in other printers. In some examples, once an AM machine is set up (e.g., in a laboratory or workshop), the example calibration system 130 is used to calibrate the AM machine. Additionally or alternatively, when manufacturing the AM machine, for example, the stereo vision system may be calibrated after the stereo vision system is installed in the printer 100. Thus, the example calibration system 130 disclosed herein provides an easy, simple calibration.
While the examples disclosed herein are described in connection with a stereo vision system having two cameras, in other examples, the stereo vision system 120 may have more than two cameras. In some instances, the use of additional cameras may assist with feature recognition by reducing the stereo angle between cameras. For example, a third camera may be disposed between the first and second cameras 122, 124. In such an example, an incremental correlation may be performed between the first camera 122 and the third camera, and then between the third camera and the second camera 124. As such, disparity measurements may be made with more certainty and yield depth measurements with higher reliability. In another example, an array of microelectromechanical systems (MEMS) cameras may be employed to obtain equivalent (or higher) spatial resolution at a reduced cost. In such an example, the aggregate of image data may be stitched together and subsequently used for calibration, or each individual MEMS camera pair may be calibrated individually. Further, in other examples, instead of a two-camera system, a single camera may be used that is moved to different vantage points to create the stereoscopic effect, or a camera system having dual lenses with a single camera sensor may be used.
While an example manner of implementing the calibrator 134 is illustrated in
Flowcharts representative of example machine readable instructions for implementing the calibrator 134 of
As mentioned above, the example processes of
At block 802, the position determiner 148 accesses a sequence of positions or orientations to be used for obtaining calibration images. At block 804, the calibration executor 142 controls the stand 132, via a command signal, to move the platform 200 (and, thus, the calibration target 136) to a first position in the sequence of positions. The movement may include rotating the platform 200 about the X (horizontal) axis via the second motor 214 and/or the Z (vertical) axis via the first motor 210. In some examples, the stand 132 may have another degree of freedom to rotate the platform 200 about the Y axis. In such an example, moving the platform 200 may include rotating (e.g., via motor) the platform 200 about the Y axis. At block 806, the calibration executor 142 determines whether the first position includes a change in Z height of the calibration target 136. In some such examples, to control the Z height, the calibration executor 142 moves the build platform 102 of the printer 100 up or down (e.g., via the platform controller 114) to position the platform 200 (and, thus, the calibration target 136) at the desired Z height, at block 808. In other examples, the stand 132 may include a motor to move the platform 200 (and, thus, the calibration target 136) linearly along the Z (vertical) axis. Otherwise, control proceeds to block 810.
At block 810, the calibration executor 142 determines whether the platform 200 and/or the calibration target 136 is in the first position. In some examples, the calibration executor 142 determines whether the platform 200 is in the desired position based on a return signal from the stand 132. For example, once in the desired position or orientation, the motor controller 216 of the stand 132 may send a signal (e.g., based on feedback from the servo motors) to the calibrator 134 that the platform 200 is in the desired position. Likewise, if the build platform 102 is being used to change the Z height of the stand 132, the platform controller 114 may send a signal to the calibrator 134 once the build platform 102 is at the desired Z height. If the calibration target 136 is not in the desired position, the calibration executor 142 waits for the stand 132 and/or the build platform 102 to complete their movement.
Once the platform 200 (and, thus, the calibration target 136) is in the first position, the calibration executor 142, at block 812, controls the stereo vision system 120 (e.g., via the camera controller 146) to obtain images (i.e., an image set) of the calibration target 136 with the first and second cameras 122, 124. At block 814, the calibration executor 142 determines whether there are other positions in the sequence. If there is another position in the sequence, control returns to block 804 and the calibration executor 142 moves the stand 132 and/or the build platform 102 to position the calibration target 136 in the next position in the sequence of positions. This process may be repeated for each position until images of the calibration target 136 are obtained for each position in the sequence.
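The capture loop at blocks 802-814 can be sketched as follows. This is a minimal illustration rather than the disclosed implementation: the `Stand` stub, the `capture` callable, and the `Position` fields are hypothetical stand-ins for the motorized stand 132, the camera pair 122, 124, and the sequence of positions accessed by the position determiner 148.

```python
# Minimal sketch of the capture loop (blocks 802-814). The Stand stub and
# the capture callable are hypothetical stand-ins for the motorized stand
# 132 and the stereo camera pair 122, 124.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass(frozen=True)
class Position:
    x_deg: float                      # rotation about the X (horizontal) axis
    z_deg: float                      # rotation about the Z (vertical) axis
    z_height: Optional[float] = None  # optional Z-height change (block 806)

class Stand:
    """Stub for the stand: records the commanded position and reports
    completion, as the motor controller 216 would via a return signal."""
    def __init__(self) -> None:
        self.current: Optional[Position] = None

    def move_to(self, pos: Position) -> None:
        self.current = pos            # real hardware drives servo motors here

    def at_position(self, pos: Position) -> bool:
        return self.current == pos    # real hardware: feedback/return signal

def run_sequence(stand: Stand,
                 capture: Callable[[Position], Tuple[str, str]],
                 positions: List[Position]):
    """Move through each position and capture a stereo image set."""
    image_sets = []
    for pos in positions:             # block 804: next position in sequence
        stand.move_to(pos)
        while not stand.at_position(pos):
            pass                      # block 810: wait for movement to finish
        image_sets.append((pos, capture(pos)))  # block 812: both cameras
    return image_sets
```

In a real system, `capture` would trigger both cameras and return the image pair; here any two-element placeholder suffices.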
At block 816, the parameter determiner 150 determines at least one of an intrinsic parameter, an extrinsic parameter, and/or a lens distortion coefficient of the stereo vision system 120, using a camera model based on the image set(s) from the calibration process. In some examples, the parameter determiner 150 uses the pinhole camera model to determine the parameter(s). Additionally or alternatively, the parameter determiner 150 may use another camera model and/or calibration technique, such as a Direct Linear Transform (DLT), Tsai's method, and/or Zhang's method. These parameter(s) may then be used by the stereo vision system 120 when extracting depth information during the printing process.
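For reference, the pinhole camera model maps a 3D world point through the extrinsic parameters (rotation R, translation t) and the intrinsic matrix K to pixel coordinates. The sketch below uses illustrative values only, not parameters from the disclosure.

```python
# Pinhole camera model sketch: a world point X projects to pixel (u, v)
# via p ~ K [R | t] X. Values used here are illustrative only.
from typing import List, Tuple

def project_point(K: List[List[float]],
                  R: List[List[float]],
                  t: List[float],
                  X: List[float]) -> Tuple[float, float]:
    """Project a 3D world point to (u, v) pixel coordinates."""
    # Transform into the camera frame: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division onto the normalized image plane
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]
    # Apply intrinsics: focal lengths (fx, fy) and principal point (cx, cy)
    u = K[0][0] * x + K[0][2]
    v = K[1][1] * y + K[1][2]
    return u, v
```

Calibration solves the inverse problem: given many image sets of the known target, it estimates K, R, t (and distortion coefficients) so that projections like this match the detected target features.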
In some examples, the calibration sequence may not result in images of the calibration target 136 in all areas of the FoV of the stereo vision system 120. Therefore, in some examples, after a first calibration sequence is performed with the stand 132 in a first XY location in the printer 100, the stand 132 may be moved to a second XY location on the build platform 102 and a second, subsequent calibration sequence may be performed. In some examples, the stand 132 includes additional degrees of freedom to move the platform 200 (and, thus, the calibration target 136) horizontally. For example, the stand 132 may include additional motors to move the post 208 along the X and/or Y axes, thereby changing the XY location of the calibration target 136 in the FoV. Therefore, in some examples, moving the platform 200 to the desired position (e.g., at blocks 804-808) may include translating the platform 200 horizontally to the desired XY location. In some examples, the stand 132 is moved to multiple different XY locations on the build platform 102 during a calibration sequence. In other examples, one XY location may be sufficient.
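Such FoV coverage can be planned ahead of time. As a purely hypothetical helper (not part of the disclosure), a set of XY locations spanning the build area could be generated as a grid:

```python
# Hypothetical helper (not from the disclosure): evenly spaced XY
# placements so the calibration target appears in different regions
# of the cameras' shared field of view.
from typing import List, Tuple

def xy_grid(width: float, depth: float, nx: int, ny: int) -> List[Tuple[float, float]]:
    """Centers of an nx-by-ny grid of cells over a width x depth build area."""
    xs = [width * (i + 0.5) / nx for i in range(nx)]
    ys = [depth * (j + 0.5) / ny for j in range(ny)]
    return [(x, y) for y in ys for x in xs]
```

The full calibration sequence could then be repeated at each generated XY location, whether the stand is repositioned manually or driven by additional motors.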
In some examples, prior to determining the system parameters at block 816, the calibration error calculator 152 calculates an overall error value (e.g., an RMS calibration error value) for the calibration images and, if the error value does not meet a desired error threshold, certain ones of the image sets may be retaken and/or the corresponding position(s) may be changed. An example process to update an image set and/or position is disclosed in further detail in connection with
If the RMS calibration error value does not meet (e.g., is greater than) the RMS calibration error threshold (determined at block 902), the calibration error calculator 152 identifies, at block 906, an image set for a position of the calibration sequence that has a relatively high reprojection error value. As mentioned above, a reprojection error value may be calculated using Equation 1 for each of the image sets. In some examples, the image set having the highest reprojection error is identified. Additionally or alternatively, the calibration error calculator 152 may compare the reprojection error values to a reprojection error threshold, and if a reprojection error value does not meet (e.g., is above) the reprojection error threshold, the image set is identified at block 906 as having a high reprojection error value.
At block 908, the calibration executor 142 controls the stand 132 to move the calibration target 136 to the position and controls the cameras 122, 124 to obtain another image set of the calibration target 136. At block 910, the calibration error calculator 152 recalculates the reprojection error value (e.g., using Equation 1) for the updated image set. At block 912, the calibration error calculator 152 determines whether the recalculated reprojection error value meets the reprojection error threshold. If the recalculated reprojection error value meets (e.g., is equal to or below) the reprojection error threshold, at block 914, the calibration error calculator 152 recalculates the RMS calibration error value for the calibration sequence and control returns to block 902.
If the recalculated reprojection error value does not meet (e.g., is above) the reprojection error threshold, at block 916, the position optimizer 154 determines a new position for the calibration target 136, which may be a small change (e.g., ±2° rotation) in the position with respect to the current position. The calibration executor 142 controls the stand 132 to move the calibration target 136 to the new position and controls the cameras 122, 124 to obtain a new image set of the calibration target 136 in the new position. At block 918, the calibration error calculator 152 recalculates the reprojection error value for the new image set. At block 920, the calibration error calculator 152 determines whether the recalculated reprojection error value meets the reprojection error threshold. If the reprojection error value of the new image set does not meet (e.g., is above) the reprojection error threshold, control returns to block 916 and the position optimizer 154 may determine another change to the position. This process may continue numerous times until the position optimizer 154 identifies a position that results in a desired reprojection error value. If the reprojection error value of the new image set does meet (e.g., is equal to or below) the reprojection error threshold, the calibration error calculator 152, at block 914, recalculates the RMS calibration error value for the calibration sequence and control returns to block 902. This process may continue numerous times until the RMS calibration error value satisfies the RMS calibration error threshold. This process also helps identify optimal positions and/or combinations of positions that produce low calibration errors and, thus, accurate calibration results. These positions and/or combinations of positions may be used in subsequent calibrations of the same AM machine and/or another AM machine.
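The error-driven refinement at blocks 906-920 can be sketched as follows. Equation 1 is not reproduced here; a common reprojection error formulation (RMS pixel distance between detected and reprojected target points) is assumed for illustration, and `capture_error` is a hypothetical callable that retakes an image set at a given position and returns its reprojection error. The position is simplified to a single rotation angle.

```python
# Sketch of the refinement loop (blocks 906-920). The Equation 1 form and
# the capture_error callable are assumptions for illustration.
import math
from typing import Callable, List, Tuple

def reprojection_error(detected: List[Tuple[float, float]],
                       reprojected: List[Tuple[float, float]]) -> float:
    """Assumed form of Equation 1: RMS pixel distance between detected
    and reprojected target points for one image set."""
    sq = sum((dx - rx) ** 2 + (dy - ry) ** 2
             for (dx, dy), (rx, ry) in zip(detected, reprojected))
    return math.sqrt(sq / len(detected))

def refine_position(position: float,
                    capture_error: Callable[[float], float],
                    threshold: float,
                    step: float = 2.0,
                    max_iters: int = 20) -> Tuple[float, float]:
    """Retake the worst image set; while its error stays above the
    threshold, nudge the position (e.g., a small rotation) and retake,
    up to max_iters attempts."""
    err = capture_error(position)        # blocks 908-910: retake and rescore
    for _ in range(max_iters):
        if err <= threshold:             # blocks 912/920: error acceptable
            break
        position += step                 # block 916: small change in position
        err = capture_error(position)    # block 918: recalculate the error
    return position, err
```

In practice the position change at block 916 could be searched in several rotational degrees of freedom rather than stepped along one.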
The processor platform 1000 of the illustrated example includes a processor 1012. The processor 1012 of the illustrated example is hardware. For example, the processor 1012 can be implemented by an integrated circuit, a logic circuit, a microprocessor or a controller from any desired family or manufacturer. The hardware processor may be a semiconductor-based (e.g., silicon-based) device. In this example, the processor 1012 may implement the example calibration executor 142, the example position determiner 148, the example parameter determiner 150, the example calibration error calculator 152, the example position optimizer 154, the example target selector 156, and/or, more generally, the example calibrator 134.
The processor 1012 of the illustrated example includes a local memory 1013 (e.g., a cache). The processor 1012 of the illustrated example is in communication with a main memory including a volatile memory 1014 and a non-volatile memory 1016 via a bus 1018. The volatile memory 1014 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 1016 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1014, 1016 is controlled by a memory controller.
The processor platform 1000 of the illustrated example also includes an interface circuit 1020. The interface circuit 1020 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.
In the illustrated example, input devices 1022 are connected to the interface circuit 1020. The input device(s) 1022 permit(s) a user to enter data and/or commands into the processor 1012. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. In this example, the input device(s) 1022 may include the first and/or second cameras 122, 124.
Output devices 1024 are also connected to the interface circuit 1020 of the illustrated example. The output device(s) 1024 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 1020 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor. In this example, the output device(s) 1024 may include the platform controller 114, the camera controller 146, and/or the motor controller 216.
The interface circuit 1020 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1026 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processor platform 1000 of the illustrated example also includes mass storage devices 1028 for storing software and/or data. Examples of such mass storage devices 1028 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives. The mass storage device 1028 may include, for example, the database 144.
Coded instructions 1032 of
From the foregoing, it will be appreciated that example methods, apparatus, systems, and articles of manufacture have been disclosed that improve the accuracy of stereo vision calibration, including stereo vision systems in AM machines. The example automated calibration target stands disclosed herein enable high accuracy, reliability, and repeatability in moving a calibration target through a sequence of positions. The examples disclosed herein enable an unskilled or untrained user to calibrate a stereo vision system. Examples disclosed herein can also be used to iteratively refine and update positions that result in low calibration error, thereby further improving the calibration process. While the examples disclosed herein are described in connection with 3D printing or AM machines, the example methods, apparatus, systems, and articles of manufacture disclosed herein can similarly be used with stereo vision systems in other applications not relating to 3D printing.
Although certain example methods, apparatus, systems, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, systems, and articles of manufacture fairly falling within the scope of the claims of this patent.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2017/051580 | 9/14/2017 | WO | 00 |