The present invention relates to calibration systems and methods, and to calibration objects (targets) used in machine vision system applications.
In machine vision systems (also termed herein “vision systems”), one or more cameras are used to perform vision system processes on an object or surface within an imaged scene. These processes can include inspection, decoding of symbology, alignment and a variety of other automated tasks. More particularly, a vision system can be used to inspect a workpiece residing in an imaged scene. The scene is typically imaged by one or more vision system cameras that can include internal or external vision system processors that operate associated vision system processes to generate results. It is generally desirable to calibrate the camera(s) to enable the vision task(s) to be performed with sufficient accuracy and reliability. A calibration object or target can be employed to calibrate the cameras with respect to an appropriate coordinate space and physical units. By way of example, the image(s) of the workpiece can be characterized by two-dimensional (2D) image pixel data (e.g. x and y coordinates), three-dimensional (3D) image data (x, y and z coordinates), or hybrid 2.5D image data, in which a plurality of x-y coordinate planes are essentially parallel and characterized by a variable z-height.
The calibration object or target (often in the form of a “plate”) is often provided as a flat structure with distinctive patterns (artwork) made visible on its surface. The distinctive pattern is generally designed with care and precision, so that the user can easily identify each visible feature in an image of the target acquired by a camera. Some exemplary patterns include, but are not limited to, a tessellating checkerboard of squares, a checkerboard with additional inlaid codes at periodic intervals within the overall pattern, which specify feature positions, dot grids, line grids, a honeycomb pattern, tessellated triangles, other polygons, etc. Characteristics of each visible feature are known from the target's design, such as the position and/or rotation relative to a reference position and/or coordinate system implicitly defined within the design.
The design of a typical checkerboard pattern, which is characterized by a tessellated array of crossing lines, provides certain advantages in terms of accuracy and robustness in performing calibration. More particularly, in the two-dimensional (2D) calibration of a stationary object, determining the relative positions of individual checkerboard tile corners from the edges of the calibration checkerboard is typically sufficient to determine the accuracy of the vision system and, as appropriate, to provide correction factors to the camera's processor so that runtime objects are measured in view of such correction factors.
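By way of a non-limiting sketch, checkerboard tile corners can be located and refined to sub-pixel accuracy using the open-source OpenCV library; the image file name and the 9x6 inner-corner grid size below are assumptions for illustration only and are not specified by the foregoing description.

```python
import cv2

# A minimal sketch of checkerboard corner finding; the file name and the
# 9x6 inner-corner grid size are illustrative assumptions.
img = cv2.imread("calibration_target.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, (9, 6))
if found:
    # Refine each crossing-line intersection to sub-pixel accuracy; this
    # refinement drives the calibration accuracy discussed above.
    corners = cv2.cornerSubPix(
        img, corners, winSize=(11, 11), zeroZone=(-1, -1),
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER,
                  30, 0.001))
```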
By way of further background, calibration of a vision system camera involves mapping the pixels of the camera sensor to a predetermined coordinate system. The target can provide features that define the coordinate system (e.g. the X-Y-axis arrangement of a series of checkerboards), such as 2D codes (also termed “barcodes”) inlaid in the feature pattern, or distinctive fiducials that otherwise define the pattern coordinate system. By mapping the features to camera pixels, the system is calibrated to the target. Where multiple cameras are used to acquire images of all or portions of a calibration target, all cameras are mapped to a common coordinate system that can be specified by the target's features (e.g. X and Y along the plane of the target, Z (height), and rotation θ about the Z axis in the X-Y plane), or another (e.g. global) coordinate system. In general, a calibration target can be used in a number of different types of calibration operations. By way of example, a typical intrinsic and extrinsic camera calibration operation entails acquiring images of the target by each of the cameras and calibrating relative to the coordinate system of the calibration target itself, using a single acquired image of the target placed so that at least part of it appears within the field of view of each camera. The calibration application within the vision processor deduces the relative position of each camera from the image of the target acquired by that camera. Fiducials on the target can be used to orient each camera with respect to the portion of the target within its respective field of view. This calibration is said to “calibrate cameras to the plate”.
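By way of a non-limiting sketch of such a mapping, a conventional single-camera calibration using OpenCV is shown below. The 25-millimeter tile pitch, the grid size, the file name, and the use of a single acquired view are illustrative assumptions; a practical calibration would typically employ multiple views, or the stored 3D feature data described further below.

```python
import cv2
import numpy as np

# Physical (X, Y, Z=0) corner positions implied by the target design;
# a 9x6 inner-corner grid with a 25 mm tile pitch is assumed here.
obj_pts = np.zeros((9 * 6, 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 25.0

img = cv2.imread("calibration_view.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, (9, 6))
if found:
    # Map the target's features (in mm) to sensor pixels, recovering the
    # intrinsics K, lens distortion coefficients, and the camera pose
    # relative to the target's coordinate system.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [obj_pts], [corners], img.shape[::-1], None, None)
```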
Users may encounter certain inconveniences when attempting to calibrate a 2D, 2.5D or 3D vision system using a typical, planar calibration target. Such inconveniences can derive from two sources. Firstly, an accurate calibration target with 3D information requires manufacture of the calibration target to micron-level tolerances, which is not only time-consuming but also costly. Secondly, the calibration of perspective or stereo vision systems requires the calibration target to be imaged in multiple poses that are visible to all cameras. This process is lengthy and error-prone for users, especially when the stereo vision system is complicated (e.g. involving multiple cameras). For example, certain commercially available vision systems composed of four cameras may require twenty or more views of the calibration target to achieve sufficient calibration.
This invention overcomes disadvantages of the prior art by providing a calibration target that defines a calibration pattern on at least one (one or more) surface(s). The relationships between the locations of calibration features (e.g. checkerboard intersections) on the calibration pattern(s) are determined for the calibration target (e.g. at the time of manufacture of the target) and stored for use during a calibration procedure by a calibrating vision system. Knowledge of the calibration target's feature relationships allows the calibrating vision system to image the calibration target in a single pose and rediscover each of the calibration features in a predetermined coordinate space. The calibrating vision system can then transform the relationships between features from the stored data into the calibrating vision system's local coordinate space, as sketched below. The locations can be encoded in a barcode that is applied to the target (and imaged/decoded during calibration), provided in a separate encoded element (e.g. a card that is shipped with the target), or obtained from an electronic data source (e.g. a disk, thumb drive or website associated with the particular target). The target can include encoded information within the pattern that defines the particular location of adjacent calibration features with respect to the overall geometry of the target. In an embodiment, the target consists of at least two surfaces that are separated by a distance, including a larger plate with a first calibration pattern on a first surface and a smaller plate, applied to the first surface of the larger plate, with a second calibration pattern that is located at a spacing (e.g. defined by a z-axis height) from the first calibration pattern. The target can be two-sided, so that a first surface and a smaller second surface with corresponding patterns are presented on each of opposing sides, thereby allowing for 360-degree viewing, and concurrent calibration, of the target by an associated multi-camera vision system. In other embodiments, the target can be a 3D shape, such as a cube, in which one or more surfaces include a pattern, and the relationships between the features on each surface are determined and stored for use by the calibrating vision system.
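By way of a non-limiting sketch, the transform of stored feature relationships into the calibrating vision system's local coordinate space can be computed as a least-squares rigid fit between corresponding feature positions (e.g. the standard Kabsch algorithm); the function below is illustrative only and is not a required implementation of the arrangements described herein.

```python
import numpy as np

def rigid_transform(stored, measured):
    """Least-squares rotation R and translation t such that
    measured ~ R @ stored + t (Kabsch algorithm). Both arguments are
    Nx3 arrays of corresponding feature positions."""
    cs, cm = stored.mean(axis=0), measured.mean(axis=0)
    H = (stored - cs).T @ (measured - cm)   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cm - R @ cs
```

Once R and t are known, every stored feature position, however imprecise the target's physical manufacture, can be expressed in the calibrating vision system's own coordinate space.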
In an illustrative embodiment, a calibration target is provided, and includes a first surface with a first calibration pattern. A data source defines relative positions of calibration features on the first calibration pattern. The data source is identifiable by a calibrating vision system, which acquires an image of the calibration target, so as to transform the relative positions into a local coordinate space of the vision system. A second surface with a second calibration pattern can also be provided, in which the second surface is located remote from the first surface. The data source, thereby, also defines relative positions of calibration features on the second calibration pattern.
Illustratively, the second surface is provided on a plate adhered to the first surface, or it is provided on a separate face of a three-dimensional object oriented at a non-parallel orientation to the first surface. In an exemplary embodiment, the first calibration pattern and the second calibration pattern are checkerboards. The data source can comprise at least one of (a) a code on the calibration target, (b) a separate printed code and (c) an electronic data source accessible by a processor of the calibrating vision system. The relative positions can be defined by an accurate vision system during or after manufacture of the calibration target, so as to be available for use by the calibrating vision system. The accurate vision system can comprise at least one of (a) a stereoscopic vision system, (b) a three-or-more-camera vision system, (c) a laser displacement sensor, and (d) a time-of-flight camera assembly, among other types of 3D imaging devices. Illustratively, the calibration target can include a third surface, opposite the first surface, with a third calibration pattern, and a fourth surface with a fourth calibration pattern, in which the fourth surface can be located at a spacing above the third surface. The data source can, thereby, define relative positions of calibration features on the first calibration pattern, the second calibration pattern, the third calibration pattern and the fourth calibration pattern. Illustratively, the accurate vision system and the calibrating vision system are each arranged to image the calibration target on each of opposing sides thereof. In embodiments, the calibrating vision system is one of a 2D, 2.5D and 3D vision system. Illustratively, at least one of the first calibration pattern and the second calibration pattern includes codes that define relative locations of adjacent calibration features with respect to an overall surface area.
In an illustrative method for calibrating a vision system, a calibration target having a first surface with a first calibration pattern is provided. A data source that defines relative positions of calibration features on the first calibration pattern is accessed. The data source is generated by acquiring at least one image of the calibration target by an accurate vision system. An image of the calibration target is subsequently acquired by the calibrating vision system during a calibration operation by a user. The relative positions defined by the accurate vision system are transformed into a local coordinate space of the calibrating vision system. Illustratively, a second surface with a second calibration pattern is provided. The second surface is located remote from the first surface, and the data source defines relative positions of calibration features on the second calibration pattern.
In an illustrative method for manufacturing a calibration target, at least a first surface with a predetermined first calibration pattern is provided. An image of the first surface is acquired, and calibration pattern features are located thereon. Using the located calibration features, a data source is generated, which defines relative positions of calibration features on the first calibration pattern. The data source is identifiable by a calibrating vision system acquiring an image of the calibration target, so as to transform the relative positions into a local coordinate space of the vision system. Illustratively, a second surface is provided, with a second calibration pattern positioned with respect to the first surface. The second surface is located remote from the first surface, and the data source defines relative positions of calibration features on the second calibration pattern. The second surface can be provided on a plate adhered to the first surface, or the second surface can be provided on a separate face of a three-dimensional object oriented at a non-parallel orientation to the first surface. Illustratively, the first calibration pattern and the second calibration pattern can be checkerboards. In an exemplary embodiment, a third surface is provided, opposite the first surface, with a third calibration pattern. A fourth surface with a fourth calibration pattern is applied to the third surface. The fourth surface is located at a spacing above the third surface, and the data source, thereby, defines relative positions of calibration features on the first calibration pattern, the second calibration pattern, the third calibration pattern and the fourth calibration pattern. The data source can be provided in at least one of (a) a code on the calibration target, (b) a separate printed code, and (c) an electronic data source accessible by a processor of the calibrating vision system.
The invention description below refers to the accompanying drawings, of which:
The camera(s) 110-116 each include an image sensor S that transmits image data to one or more internal or external vision system processor(s) 130 that carry out appropriate vision system processes using functional modules, processes and/or processors. By way of non-limiting example, the modules/processes can include a set of vision system tools 132 that find and analyze features in the image—such as edge finders and contrast tools, blob analyzers, calipers, etc. The vision system tools 132 interoperate with a calibration module/process 134 that handles calibration of the one or more cameras to at least one common (i.e. global) coordinate system 140. This system can be defined in terms of Cartesian coordinates along associated, orthogonal x, y and z axes. Rotations about the axes x, y and z can also be defined as θx, θy and θz, respectively. Other coordinate systems, such as polar coordinates, can be employed in alternate embodiments. The vision system process(or) 130 can also include an ID/code finding and decoding module 136 that locates and decodes barcodes and/or other IDs of various types and standards using conventional or custom techniques.
The processor 130 can be instantiated in a custom circuit or can be provided as hardware and software in a general purpose computing device 150 as shown. This computing device 150 can be a PC, laptop, tablet, smartphone or any other acceptable arrangement. The computing device can include a user interface—for example a keyboard 152, mouse 154, and/or display/touchscreen 156. The computing device 150 can reside on an appropriate communication network (e.g. a WAN, LAN) using a wired and/or wireless link. This network can connect to one or more data handling device(s) 160 that employ the vision system data generated by the processor 130 for various tasks, such as quality control, robot control, alignment, part accept/reject, logistics, surface inspection, etc.
The calibration target 120 of the exemplary arrangement is one of a variety of implementations contemplated herein. In an alternate embodiment, the target can consist of a plate with a single exposed and imaged surface and an associated artwork/calibration pattern (for example, a checkerboard of tessellating light and dark squares). However, in the depicted example, the calibration target consists of a plurality of stacked plates 170 and 172, each with a calibration pattern applied thereto. The method of application of the pattern is highly variable—for example, screen-printing or photolithography can be employed. In general, the lines defining the boundaries of features, and their intersections, are crisp enough to generate an acceptable level of resolution—which, depending upon the size of the overall scene, can be measured in microns, millimeters, etc. In an embodiment, and as depicted further in
The plates 170, 172 and 210 can be assembled together in a variety of manners. In a basic example, the smaller-area plates 172, 210 are adhered, using an appropriate adhesive (cyanoacrylate, epoxy, etc.), to the adjacent surface 220, 222 of the central plate in an approximately centered location. Parallelism between surfaces 230, 220, 222 and 240 need not be carefully controlled, nor need the placement of the smaller plates on the larger plate be precisely centered. In fact, the introduction of asymmetry and skew can benefit calibration of the calibrating vision system (100), as described generally below.
Notably, the relationship between features in three dimensions is contained in a set of data 180, which can be stored with respect to the processor in association with the particular calibration target 120. The data can consist of a variety of formats. For example, the data 180 can consist of the locations of all (or a subset of all) calibration features in the calibration target 120, or groups of features. The data can be obtained or accessed in a variety of manners. As shown, a 2D barcode (e.g. a DataMatrix ID code) 182 can be applied at a location (e.g. an edge) of the calibration target 120 so that it is acquired by one or more camera(s) of the vision system and decoded by the processor 130 and module 136. Other mechanisms for providing and accessing the data 180 can include supplying, with the shipped target 120, a separate label or card bearing a code that is scanned, downloading the data from a website in association with a serial number (or other identifier) for the target, providing the data in a disk, flash memory (thumb drive), or other electronic data storage device, etc.
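By way of a non-limiting sketch, the data 180 could be serialized as a compact, self-describing payload such as the hypothetical JSON schema below, whether decoded from the barcode 182, scanned from a shipped card, or downloaded against the target's serial number. The field names, serial number and values are assumptions for illustration only.

```python
import json

# Hypothetical payload mapping feature IDs to measured (x, y, z) positions;
# the schema, serial number and coordinate values are illustrative assumptions.
payload = ('{"serial": "CT-0001", "units": "mm", "features": '
           '{"0_0": [0.0, 0.0, 0.0], "0_1": [25.01, -0.02, 0.004]}}')
data = json.loads(payload)
positions = {fid: tuple(xyz) for fid, xyz in data["features"].items()}
```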
The data that describes the relationship of calibration pattern features for an exemplary calibration target is generated in accordance with the procedure 300 of
In step 310 of the procedure 300, the manufactured calibration target (according to any of the physical arrangements described herein) is positioned within the field of view of a highly accurate vision system. A stereoscopic vision system with one or more stereo camera assemblies is one form of implementation. However, highly accurate vision systems can be implemented using (e.g.) one or more laser displacement sensors (profilers), time-of-flight cameras, etc. In an embodiment, shown in
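By way of a non-limiting sketch of such a measurement, a calibrated stereo pair can triangulate the true 3D position of each located feature; the camera parameters, baseline and corner detections below are synthetic, illustrative values only.

```python
import cv2
import numpy as np

# Synthetic stereo pair: one camera at the origin and a second translated
# 100 mm along X (an assumed baseline for illustration).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

# Matching sub-pixel corner detections from each camera, shaped 2xN.
pts1 = np.array([[320.0, 345.0], [240.0, 240.0]])
pts2 = np.array([[160.0, 185.0], [240.0, 240.0]])

# Homogeneous triangulation recovers the true 3D feature positions that
# are then stored as the target's feature-relationship data 180.
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts3d = (pts4d[:3] / pts4d[3]).T   # Nx3, in the rig's units (mm here)
```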
In the procedure 500, information related to the relationship of calibration features (e.g., true relative positions) on the specific calibration target is accessed—either from storage or by reading an ID code on the target (among other mechanisms), in step 540. Referring now to
In step 630, the retrieved feature relationship data in the exemplary procedure 600 is associated with the actual located features (e.g., measured relative positions) in the image of the calibration target (see also, step 530 in
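By way of a non-limiting sketch, the association and its quality can be checked by transforming each stored (true) position into the calibrating system's space, keyed by feature ID (e.g. as recovered from inlaid codes), and measuring the residual against the actually located position; all identifiers, coordinates and the R, t values below are illustrative assumptions.

```python
import numpy as np

# Stored (true) and locally measured feature positions, keyed by feature ID.
stored = {"0_0": np.array([0.0, 0.0, 0.0]), "0_1": np.array([25.0, 0.0, 0.0])}
measured = {"0_0": np.array([10.1, 5.0, 0.2]), "0_1": np.array([35.0, 5.1, 0.1])}

R, t = np.eye(3), np.array([10.0, 5.0, 0.0])   # e.g. from a Kabsch-style fit
ids = sorted(stored.keys() & measured.keys())  # features seen in both sets
residuals = np.array([measured[i] - (R @ stored[i] + t) for i in ids])
rms = np.sqrt((residuals ** 2).sum(axis=1).mean())  # association quality metric
```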
In step 560 of the calibration procedure 500 of
The above-described calibration target is depicted as a one-sided or two-sided plate structure with two sets of 2D features stacked one atop the other, with the top plate having a smaller area/dimensions than the underlying, bottom plate so that features from both plates can be viewed and imaged. In alternate embodiments, a single layer of features, with associated stored representations, can be employed. This is a desirable implementation for 2D (or 3D) calibration, particularly in arrangements where it is challenging for the vision system to image all features on the plate accurately during calibration. Roughly identified features on the imaged target can be transformed into an accurate representation of the features using the stored/accessed feature relationships.
Other calibration target embodiments can employ more than two stacked sets of 2D features.
In another alternate arrangement, the calibration target can comprise a polyhedron—such as a cube 810 as shown in
It should be clear that the above-described calibration target, and the associated methods for making and using it, provide a highly reliable and versatile mechanism for calibrating 2D and 3D vision systems. The calibration target is straightforward to manufacture and use, and tolerates inaccuracies in the manufacturing and printing process. Likewise, the target allows for a wide range of possible mechanisms for providing feature relationships to the user and calibrating vision system. The target also effectively enables full 360-degree calibration in a single image acquisition step.
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, as used herein, various directional and orientational terms (and grammatical variations thereof) such as “vertical”, “horizontal”, “up”, “down”, “bottom”, “top”, “side”, “front”, “rear”, “left”, “right”, “forward”, “rearward”, and the like, are used only as relative conventions and not as absolute orientations with respect to a fixed coordinate system, such as the acting direction of gravity. Additionally, where the term “substantially” or “approximately” is employed with respect to a given measurement, value or characteristic, it refers to a quantity that is within a normal operating range to achieve desired results, but that includes some variability due to inherent inaccuracy and error within the allowed tolerances (e.g. 1-2%) of the system. Note also, as used herein the terms “process” and/or “processor” should be taken broadly to include a variety of electronic hardware and/or software based functions and components. Moreover, a depicted process or processor can be combined with other processes and/or processors or divided into various sub-processes or processors. Such sub-processes and/or sub-processors can be variously combined according to embodiments herein. Likewise, it is expressly contemplated that any function, process and/or processor herein can be implemented using electronic hardware, software consisting of a non-transitory computer-readable medium of program instructions, or a combination of hardware and software. Also, while various embodiments show stacked plates, surfaces can be assembled together using spacers or other distance-generating members in which some portion of the plate is remote from contact with the underlying surface. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.
This application claims the benefit of co-pending U.S. Provisional Application Ser. No. 62/486,411, entitled HIGH-ACCURACY 3D CALIBRATION TARGET AND METHOD FOR MAKING AND USING THE SAME, filed Apr. 17, 2017, the teachings of which are expressly incorporated herein by reference.