This disclosure relates generally to systems and methods for evaluating the calibration of multiple cameras. More particularly, this disclosure relates to systems and methods for automatically assessing the quality of calibration for multiple camera systems.
In some environments, such as aircraft manufacturing, it is often desirable to use optical imaging systems (e.g., cameras) to evaluate components or assemblies during manufacture. It is often necessary to calibrate the cameras to ensure, among other things, that they are functioning as desired. Additionally, it is often desirable to have an assessment or quality check of the calibration. Current approaches to checking calibration quality require calibration boards and a full run of the calibration procedure. The conventional procedure is laborious and time consuming because it typically requires positioning a calibration board at numerous locations and orientations in the field of view of the cameras. Processing the calibration board data typically takes several hours of computing time due to, among other things, the number of poses and the high image resolution. Other drawbacks, inconveniences, and issues with current calibration quality checks also exist.
Accordingly, disclosed systems and methods address the above-noted and other drawbacks, inconveniences, and issues of current systems and methods. Disclosed embodiments include a method for assessing the calibration of an array of cameras, the method including inputting captured images from at least two cameras of the array of cameras, the captured images having features from an image. The method includes extracting one or more extracted features from the captured images, matching one or more extracted features between pairs of the at least two cameras to create a set of matched features, selecting matching points from the set of matched features, generating a three-dimensional reconstruction of objects in a field of view of the at least two cameras, and outputting the three-dimensional reconstruction wherein the three-dimensional reconstruction comprises indicators of calibration errors.
The method may also include extracting the one or more extracted features using ORB (Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features)) feature detection, using SURF (Speeded Up Robust Features) feature detection, selecting a subset of candidate features to extract and extracting using ANMS (Adaptive Non-Maximal Suppression) feature detection, or extracting using ANMS feature detection with SSC (Suppression via Square Covering) feature detection.
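The suppression step named above can be illustrated with a simplified, library-free sketch: a greedy radius-based non-maximal suppression that keeps strong, well-spaced keypoints. This is a minimal stand-in for the ANMS/SSC family, not the exact published algorithms, and the function name and toy data are illustrative:

```python
import numpy as np

def radius_nms(points, responses, radius, max_points):
    """Greedy non-maximal suppression: keep strong keypoints that are
    at least `radius` pixels from every keypoint kept so far."""
    order = np.argsort(-responses)          # strongest first
    kept = []
    for i in order:
        p = points[i]
        if all(np.hypot(*(p - points[j])) >= radius for j in kept):
            kept.append(i)
            if len(kept) == max_points:
                break
    return kept

# Toy example: four detections, the first two nearly coincident.
pts = np.array([[0.0, 0.0], [1.0, 1.0], [50.0, 50.0], [100.0, 0.0]])
resp = np.array([0.9, 0.8, 0.7, 0.6])
print(radius_nms(pts, resp, radius=5.0, max_points=3))  # -> [0, 2, 3]
```

The weaker of the two coincident detections is suppressed, which is the spatial-spread behavior the ANMS-style detectors are selected for.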
The method may also include matching one or more extracted features by matching through brute force matching, or by matching through Fast Library for Approximate Nearest Neighbors (FLANN) approximate matching.
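Brute force matching of binary (e.g., ORB) descriptors amounts to finding, for each descriptor, its nearest neighbor under Hamming distance; FLANN replaces the exhaustive search with approximate nearest-neighbor indexing. A minimal NumPy sketch of the exhaustive variant (function name and toy data are illustrative):

```python
import numpy as np

def brute_force_match(desc_a, desc_b):
    """For each binary descriptor in desc_a (uint8 rows), return the index
    of its nearest neighbour in desc_b under Hamming distance."""
    # XOR every pair of descriptors, then count set bits via a popcount table.
    popcount = np.array([bin(v).count("1") for v in range(256)], dtype=np.uint8)
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]      # (Na, Nb, n_bytes)
    dists = popcount[xor].sum(axis=2)                  # Hamming distances
    return dists.argmin(axis=1), dists.min(axis=1)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(4, 32), dtype=np.uint8)  # 256-bit descriptors
b = np.roll(a, 1, axis=0)                               # shuffled copy of a
idx, dist = brute_force_match(a, b)
print(idx, dist)  # each row of a finds its exact copy in b at distance 0
```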
The method may also include selecting matching points from the set of matched features by selecting matching points based on a distance ratio and applying a homography transform with RANSAC (RANdom SAmple Consensus).
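The distance-ratio selection can be sketched as a Lowe-style ratio test over the two best candidate matches per feature; the subsequent homography-with-RANSAC step would typically be handled by a library routine. The names and toy data below are illustrative:

```python
import numpy as np

def ratio_test(dists, match_ratio=0.7):
    """Lowe-style ratio test: accept a match only when the best distance is
    clearly smaller than the second best (best < match_ratio * second)."""
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(dists.shape[0])
    keep = dists[rows, best] < match_ratio * dists[rows, second]
    return best, keep

# Two query features, each compared against three candidates.
d = np.array([[10.0, 40.0, 50.0],    # unambiguous: 10 << 40
              [30.0, 31.0, 90.0]])   # ambiguous: 30 ~ 31
best, keep = ratio_test(d, match_ratio=0.7)
print(best, keep)  # [0 0] [ True False]
```

The ambiguous second feature is dropped, which is the low-quality-match rejection the MatchRatio parameter controls.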
The method may also include generating a three-dimensional reconstruction by producing a three-dimensional point cloud and producing a reconstruction error rate. The method may further include outputting the three-dimensional point cloud or outputting the reconstruction error rate. Embodiments of the method may include outputting the three-dimensional reconstruction by outputting a visualization of the matched features and wherein the indicators of calibration errors are indicated by lines.
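As a sketch of the reconstruction step, the following shows linear (DLT) two-view triangulation with a reprojection-error measure, assuming known 3x4 projection matrices for two calibrated cameras; all names and the toy geometry are illustrative, not the disclosed implementation:

```python
import numpy as np

def project(P, X):
    """Project 3D point X through a 3x4 projection matrix P to pixels."""
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a point seen at pixel x1 in camera P1
    and pixel x2 in camera P2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def reprojection_error(P, X, x):
    """Pixel distance between an observation x and the reprojection of X."""
    return np.linalg.norm(project(P, X) - x)

# Toy rig: two identity-intrinsics cameras separated along x, looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_hat = triangulate(P1, P2, x1, x2)
print(np.allclose(X_hat, X_true))          # True
print(reprojection_error(P1, X_hat, x1))   # ~0 with a consistent calibration
```

With a miscalibrated rig the triangulated points and their reprojection errors degrade, which is what makes the reconstruction usable as a calibration indicator.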
In some embodiments, the method may also include projecting the image onto the objects in the field of view of the at least two cameras.
Also disclosed is a system for assessing the calibration of an array of cameras, the system including at least one processor and a memory storing a set of instructions that when executed by the processor cause the processor to initiate operations including inputting captured images from at least two cameras of the array of cameras, the captured images having features from an image, extracting one or more extracted features from the captured images, matching one or more extracted features between pairs of the at least two cameras to create a set of matched features, selecting matching points from the set of matched features, generating a three-dimensional reconstruction of objects in a field of view of the at least two cameras, and outputting the three-dimensional reconstruction wherein the three-dimensional reconstruction comprises indicators of calibration errors.
In some embodiments, the system may also include a projector for projecting the image onto the objects in the field of view of the at least two cameras.
Other embodiments are also disclosed.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, it should be understood that the disclosure is not intended to be limited to the particular forms disclosed. Rather, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
Cameras 12 as used herein refer to imaging devices and may comprise visible imaging devices (e.g., Single Lens Reflex (SLR) cameras, digital cameras, and the like), non-visible imaging devices (e.g., Infrared (IR) or heat-imaging cameras), structured or laser-light imaging devices (e.g., Time-of-Flight (ToF) imaging devices), Spatial Phase Imaging (SPI) devices (e.g., 3D sensing devices), or the like. For example, system 10 may include one or more ToF cameras 12A and one or more SPI cameras 12B. Other cameras 12 may also be used.
Embodiments of pylon wall 14 may include a generally open, multiple post or column structure, a solid panel, or the like to support the cameras 12 and, optionally, projectors 16. Pylon wall 14 may be movable, tilt, rotate, extend, or the like as desired. In some embodiments, pylon wall 14 may be positionable under the control of processor 20 or a separate positioning system (not shown).
Pylon wall 14 is positioned to capture images of whatever object is being inspected or evaluated. For example,
Processor 20 is a suitable processor-based computing device, such as a personal computer (PC), laptop computer, tablet, or the like. As indicated in
Each image includes features of the view captured by the cameras 12. For example, if the observed object in the view of the cameras 12 contains naturally visible features, such as shapes, colors, contours, or the like, those features are captured in the images. Alternatively, if the observed object in the view of cameras 12 is relatively lacking in naturally visible features (e.g., horizontal stabilizer 24 is relatively monochromatic and featureless), then, as indicated at 401, patterns containing visible features may be projected onto the observed object.
As indicated at 404, features are extracted from the images by detection component 202 of processor 20. Extraction step 404 may be accomplished by processor 20 in a number of ways. For example, as schematically indicated in
As indicated in
As indicated in
As indicated in
As indicated in
Output step 412 may also output a 3D point cloud obtained from the 3D reconstruction routine in step 410. Additionally, output step 412 may output the 3D reprojection error generated by the 3D reconstruction generation step 410. The reprojection error may be output as a Pose_error.csv file and can provide a more quantitative indication of the calibration error. However, a small reprojection error is a necessary, but not sufficient, condition for a good calibration, and there are cases where a bad calibration may still have a small reprojection error (e.g., if the matching features are sparse).
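Writing the per-pose errors out could look like the following minimal sketch; the document names only the Pose_error.csv file, so the column layout and values here are assumptions:

```python
import csv
import io

# Hypothetical per-camera-pair reprojection errors in pixels; the column
# layout is an assumption -- only the Pose_error.csv filename is disclosed.
pose_errors = [("cam0-cam1", 0.42), ("cam0-cam2", 0.57), ("cam1-cam2", 1.88)]

buf = io.StringIO()                      # swap for open("Pose_error.csv", "w")
writer = csv.writer(buf)
writer.writerow(["camera_pair", "reprojection_error_px"])
writer.writerows(pose_errors)
print(buf.getvalue())
```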
The Feature Extraction function 1000 has several parameters that can be tuned for a specific image set. Default parameters are provided, but the user can change them directly in Workspace. The following paragraphs describe some of the tunable parameters.
Nfeatures: Maximum number of features to be detected. Recommendation: 150000 to 300000. Default: 150000.
ScaleFactor: ORB detector's pyramid decimation ratio. Recommendation: 1 to 2. Default: 1.2.
Nlevels: ORB detector's number of pyramid levels. Recommendation: 8 to 16. Default: 8.
FastThreshold: ORB detector's FAST threshold. Recommendation: 0 to 20. Default: 8.
ReturnPtsRatio: Fraction of the maximum number of features to retain in the non-maximal suppression algorithm. Recommendation: 0.1 to 1 (a value of 1 keeps all features). Default: 0.4.
MatchRatio: Maximum ratio between the distance to the best match and the distance to the second-best match; matches exceeding this ratio are rejected as low quality. Recommended: 0.5 to 0.8. Default: 0.7.
RansacReprojThreshold: Maximum reprojection error allowed before a point is rejected as an outlier. Recommended: 1 to 10. Default: 4.
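The defaults above can be gathered into a single configuration object; the parameter names and default values come from the description above, while the dataclass wrapper itself is illustrative:

```python
from dataclasses import dataclass

@dataclass
class FeatureExtractionParams:
    """Tunable parameters with the documented defaults and recommended ranges."""
    Nfeatures: int = 150000             # max features to detect (150000-300000)
    ScaleFactor: float = 1.2            # ORB pyramid decimation ratio (1-2)
    Nlevels: int = 8                    # ORB pyramid levels (8-16)
    FastThreshold: int = 8              # ORB FAST threshold (0-20)
    ReturnPtsRatio: float = 0.4         # fraction kept by non-maximal suppression
    MatchRatio: float = 0.7             # ratio-test threshold (0.5-0.8)
    RansacReprojThreshold: float = 4.0  # RANSAC outlier threshold in px (1-10)

params = FeatureExtractionParams()
print(params.MatchRatio)  # 0.7
```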
The procedure for calibration assessment makes use of the output of the algorithm. Qualitatively, each matching feature should have a corresponding feature in the same location in the other images.
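That qualitative check can be quantified as the pixel displacement between each matched pair; under a good calibration (and aligned views, an assumption in this sketch) displacements stay near zero, and a large displacement flags the kind of error the line indicators visualize. Names, data, and the threshold are illustrative:

```python
import numpy as np

def match_displacements(pts_a, pts_b):
    """Pixel displacement of each matched feature pair between two views."""
    return np.linalg.norm(pts_a - pts_b, axis=1)

a = np.array([[10.0, 10.0], [200.0, 50.0], [300.0, 300.0]])
b = np.array([[10.5, 10.2], [201.0, 50.0], [340.0, 305.0]])
d = match_displacements(a, b)
print(d > 5.0)  # only the third match signals a calibration problem
```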
Although various embodiments have been shown and described, the present disclosure is not so limited and will be understood to include all such modifications and variations as would be apparent to one skilled in the art.
Number | Name | Date | Kind |
---|---|---|---|
9305378 | Holmes et al. | Apr 2016 | B1 |
10102773 | Towers et al. | Oct 2018 | B2 |
10127658 | Lee | Nov 2018 | B2 |
10127691 | Lee | Nov 2018 | B2 |
10298910 | Kroeger | May 2019 | B1 |
10880667 | Cho | Dec 2020 | B1 |
20090086081 | Tan | Apr 2009 | A1 |
20140347486 | Okouneva | Nov 2014 | A1 |
20150324658 | Zhang et al. | Nov 2015 | A1 |
20160266256 | Allen et al. | Sep 2016 | A1 |
20160353068 | Ishikawa | Dec 2016 | A1 |
20170052070 | Marsh et al. | Feb 2017 | A1 |
20170118409 | Im | Apr 2017 | A1 |
20170210011 | Hull | Jul 2017 | A1 |
20170219336 | Kurtz et al. | Aug 2017 | A1 |
20170353670 | Zimmer | Dec 2017 | A1 |
20180249142 | Hicks | Aug 2018 | A1 |
20180330149 | Uhlenbrock et al. | Nov 2018 | A1 |
20200005489 | Kroeger | Jan 2020 | A1 |
20200282929 | Kroeger | Sep 2020 | A1 |
Entry |
---|
Zhengyou Zhang; A Flexible New Technique for Camera Calibration; IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, Nov. 2000. |
Ethan Rublee et al; ORB: an efficient alternative to SIFT or SURF; IEEE International Conference on Computer Vision 2011. |
Marius Muja et al; Fast Approximate Nearest Neighbors With Automatic Algorithm Configuration (2009). |
Heiko Hirschmüller; Stereo Processing by Semiglobal Matching and Mutual Information; IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, No. 2, Feb. 2008. |
Ryo Furukawa et al; Multiview Projectors/Cameras System for 3D Reconstruction of Dynamic Scenes (2011). |
JiaWang Bian et al; GMS: Grid-based Motion Statistics for Fast, Ultra-robust Feature Correspondence; 2017 IEEE Conference on Computer Vision and Pattern Recognition (2017). |
Oleksandr Bailo et al; Efficient adaptive non-maximal suppression algorithms for homogeneous spatial keypoint distribution; School of Electrical Engineering, KAIST, Daejeon 34141, Republic of Korea; Pattern Recognition Letters 106 53-60 2018 (2018). |
Number | Date | Country | |
---|---|---|---|
20210158568 A1 | May 2021 | US |