This invention relates to machine vision systems and associated methods for alignment and inspection of objects in an imaged scene.
Machine vision systems, also termed “vision systems” herein, are used to perform a variety of tasks in a manufacturing environment. In general, a vision system consists of one or more cameras with an image sensor (or “imager”) that acquires grayscale or color images of a scene containing an object under manufacture. Images of the object can be analyzed to provide data/information to users and associated manufacturing processes. The data derived from the image is typically analyzed and processed by the vision system in one or more vision system processors that can be purpose-built, or part of one or more software application(s) instantiated within a general-purpose computer (e.g. a PC, laptop, tablet or smartphone).
Common vision system tasks include alignment and inspection. In an alignment task, vision system tools, such as the well-known PatMax® system commercially available from Cognex Corporation of Natick, Mass., compare features in an image of a scene to a trained pattern (trained using an actual or synthetic model), and determine the presence/absence and pose of the pattern in the imaged scene. This information can be used in subsequent inspection (or other) operations to search for defects and/or perform other operations, such as part rejection.
A particular challenge in determining alignment and quality for (e.g.) a printed surface arises where “clutter” is present. In this context, clutter can be defined as a proliferation of gradient features that surround a feature of interest. For example, a smudge surrounding a printed letter can be considered clutter.
This invention overcomes disadvantages of the prior art by providing a system and method for determining the level of clutter in an image in a manner that is rapid, and that allows a scoring process to quickly determine whether an image is above or below an acceptable level of clutter, for example to determine whether the underlying imaged runtime object surface is defective without the need to perform a more in-depth analysis of the features of the image. The system and method employs clutter test points that are associated with featureless regions of the image (for example, regions exhibiting a low gradient magnitude) that should be empty according to a training pattern. The training pattern maps clutter test points at locations in the coordinate space where emptiness should exist, and the presence of features/high gradient magnitude at those locations indicates that clutter may be present. This enables the runtime image to be analyzed quickly, rapidly revealing differences and/or defects that allow the subject of the image to be accepted or rejected without further image analysis. In further embodiments, the clutter score determined according to the system and method can be combined with a traditional (e.g.) “coverage” score (for example, using regular image feature probes) to achieve analysis of a runtime image.
In an illustrative embodiment, a system and method for determining clutter in an imaged scene with a vision system is provided. The system and method includes providing a runtime image of a scene and a trained pattern with a set of clutter test points that represent emptiness in the trained pattern. A runtime pose is established, including a coordinate space for the image with respect to the trained pattern. Illustratively, the clutter test points are mapped on the coordinate space for the image and a level of emptiness is determined, respectively, at the mapped clutter test points. Based upon the determined level of emptiness, an associated level of clutter is determined in at least a portion of the image. Illustratively, the determination of the level of clutter comprises generating respective clutter score values for the mapped clutter test points. The clutter score values can be summed and the system and method can generate an overall clutter score value for at least the portion of the image. The determination of the level of emptiness can comprise determining a gradient magnitude respectively at the mapped clutter test points and comparing the magnitude to a predetermined threshold, and the establishment of the pose can comprise at least one of (a) employing alignment tools in a vision system to automatically align the runtime image to the trained pattern and (b) obtaining information indicating the pose from a user input.
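By way of a non-limiting illustration of the runtime emptiness check described above, the following sketch maps clutter test points from the trained pattern's coordinate space into a runtime image and tests the gradient magnitude at each mapped location against a threshold. The function name, the 2×3 affine pose representation, and the default threshold value are hypothetical assumptions for illustration, not the patented implementation.

```python
import numpy as np

def emptiness_at_test_points(grad_mag, test_points, pose, emptiness_threshold=0.05):
    """Return a boolean array: True where the mapped clutter test point lands on an
    "empty" runtime location (gradient magnitude below the threshold)."""
    h, w = grad_mag.shape
    # Express the trained-pattern (x, y) test points in homogeneous coordinates.
    pts = np.hstack([test_points, np.ones((len(test_points), 1))])
    mapped = pts @ pose.T                                # 2x3 affine: trained -> runtime
    xs = np.clip(np.round(mapped[:, 0]).astype(int), 0, w - 1)
    ys = np.clip(np.round(mapped[:, 1]).astype(int), 0, h - 1)
    return grad_mag[ys, xs] < emptiness_threshold
```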
In another illustrative embodiment, a system and method for determining a level of clutter in at least a portion of an imaged scene with a vision system is provided. This system and method includes providing a training image with a feature of interest and a predetermined clutter threshold value relative to a level of emptiness that indicates clutter-free regions in the image. Clutter test points are established with respect to a coordinate space of the training image based upon locations in the coordinate space that represent the level of emptiness that is free of clutter. Illustratively, the training image can include a mask that indicates which areas of the training image should be evaluated for emptiness and/or features specified by a description free of reliance on pixel values. The clutter test points can be determined by the presence of a gradient magnitude in regions of the training image less than a predetermined threshold value. The clutter test points can be stored with respect to the coordinate space of the training image for use in clutter determination in a runtime image, and the clutter threshold value can be computed based upon a histogram of gradient values within the training image. The clutter threshold can alternatively be provided as an input parameter.
In another illustrative embodiment, a system and method for determining a level of clutter in a runtime candidate image based on a trained pattern using a vision system is provided. A training-time clutter point generator generates an array of clutter test points relative to a coordinate space in a training image having information based upon locations in the training image that have a level of emptiness below a clutter threshold. The clutter threshold is established based upon predetermined parameters and the information in the training image. A runtime clutter determiner maps the locations of the clutter test points to the runtime candidate image based upon a runtime pose, and computes a level of emptiness at each of the locations to determine the level of clutter in at least a portion of the acquired image. Illustratively, the level of emptiness in the runtime candidate image can be based upon a determined gradient magnitude at each of the locations, respectively. A clutter utilizer employs the determined clutter level to perform a predetermined action with respect to a surface associated with the runtime candidate image. By way of example, the predetermined action can include at least one of rejecting a part, issuing an alert, transmitting quality data and stopping a moving line. At least one of the predetermined parameters can be input by a user. The clutter determiner can create a clutter score at each of the locations, and can sum the clutter scores for the locations to generate an overall clutter score for the runtime candidate image. The information can include a feature of interest, and the system can further comprise an alignment tool that determines the runtime pose based upon a location of at least a portion of the feature of interest in each of the training image and the runtime candidate image.
The invention description below refers to the accompanying drawings, of which:
In the illustrative embodiment, the vision process and processor includes a clutter determination process/processor 160 that operates to determine a level of clutter in an acquired image according to embodiments herein. Clutter is generally considered information in an image where, according to the trained pattern in the image, none should exist. One example of clutter is a smudge (e.g. clouded area 170) in association with a desired printed pattern 172. In general, most conventional search tools look for the presence of a trained pattern, and if found, return a result that indicates success. However, in the case of clutter or other undesired “noise” in an image of an object surface, the desired/trained pattern may exist, but the overall image still indicates a defect (due to the additional noise or clutter) that would cause a user to reject the object as defective. By way of a further example, when searching for a pattern, for example a printed “P”, the search tool may properly find the elements of the P, but ignore that a further element (the bottom-right bar indicative of a printed “R”, for example) exists in the image. The bottom-right bar can be considered a form of “clutter”. Thus, it is both desirable and challenging to determine a level of such “clutter” in the image and form a basis for whether the imaged object surface is acceptable or unacceptable (or even properly registered by the vision system), based upon the level of such clutter.
In determining a level of clutter in the image, the system first provides training image data 180, which typically includes features of interest (e.g. printing), and can be based upon acquired images of an actual training object surface and/or synthetic image data. That is, the training image and associated training pattern can be specified by a description provided in (e.g.) a CAD model, a synthetic square, etc. The terms “training image” and “training pattern” should thus be taken broadly to include data sets that are specified generally free of reliance on pixel values. As also shown in
Reference is now made to
In step 220, the procedure 200 provides a clutter threshold value that is based on the maximum gradient magnitude in the training image. This threshold can be provided as a user-input or system-provided parameter; in the alternative, the threshold can be computed.
In step 230, the procedure 200 generates clutter test points in association with each specified location in the training image. These locations can be based upon pixel locations, can be associated with sub-pixel locations, or can generally be established with respect to any acceptable coordinate space relative to the training image. Each of the clutter test points is established at a respective location that has a gradient magnitude less than the clutter threshold. In other words, clutter test points (also termed “clutter probes”) are applied to weak edges in an image where the measured/analyzed value of gradient magnitude (or another indicator of image features/characteristics) is below a given threshold.
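The following sketch illustrates one possible reading of this training-time step, assuming a gradient-magnitude image of the training pattern and an optional binary “care” mask; the function and parameter names are hypothetical and the sketch is not the patented implementation.

```python
import numpy as np

def generate_clutter_test_points(train_grad_mag, clutter_threshold, care_mask=None):
    """Return an (N, 2) array of (x, y) locations whose training gradient magnitude
    falls below the clutter threshold, i.e. locations expected to remain empty."""
    weak = train_grad_mag < clutter_threshold
    if care_mask is not None:
        # Only keep locations marked "present" (to be evaluated) in the mask.
        weak &= care_mask.astype(bool)
    ys, xs = np.nonzero(weak)
    return np.column_stack([xs, ys]).astype(float)
```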
By way of example, the procedure includes establishment of a sigmoid response function that is particularly adapted to clutter determination in accordance with illustrative embodiments. By way of non-limiting example, the form of the sigmoid response function can be represented by:
1/(1+(t/x)^(1/σ))
where an input x is provided to the function and t is a “soft” threshold. Values for x and t are in the same units, e.g. gradient magnitude. The soft threshold specifies the value at which the sigmoid's output is ½. In addition, a sigmoid's rate of transition from 0 to 1 can be controlled by a parameter typically called σ. The soft threshold t defines the center-point of this clutter response function sigmoid. The value for the soft threshold t can be defined by a default value, such as 0.01, or t can be specifically input as a different parameter value by the user. The soft threshold t is configured similarly to a noise threshold value in other vision system applications/processes (e.g. PatMax®, or RedLine™, also available from Cognex Corporation). The mechanisms and techniques used for determining and/or inputting such noise thresholds are thus applicable, using skill in the art, to the specification/input of the value t to the system. Additionally, in an illustrative embodiment, the value for σ in the above sigmoid response function can be provided by the exemplary equation:
σ=(log t)/(−3)
where log is a base-10 logarithm, and t expresses the threshold as a gradient magnitude between 0 and 1. This equation is highly variable in alternate implementations. The particular relationship provided, by way of non-limiting example above, provides a desirable response for the sigmoid function over the entire range of meaningful thresholds. That is, the exemplary equation for σ allows the response function's “ramp-up” to be more gradual when the threshold is higher, but never makes the response slow enough to overly resemble a simple linear response. Notably, using the scoring sigmoid response function described above, the procedure can determine the input gradient magnitude that would produce an output of a value denoted by the parameter/variable ClutterScoreCutoff.
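As a non-authoritative sketch of this response function and its inverse, the following code evaluates the sigmoid with σ=(log t)/(−3) and solves for the gradient magnitude that would produce a given ClutterScoreCutoff; the function names and the numerical guard against division by zero are assumptions for illustration.

```python
import numpy as np

def clutter_sigmoid(x, t=0.01):
    """Sigmoid response 1 / (1 + (t/x)^(1/sigma)); outputs 1/2 when x equals t."""
    sigma = np.log10(t) / -3.0
    x = np.maximum(x, 1e-12)                 # guard against division by zero
    return 1.0 / (1.0 + (t / x) ** (1.0 / sigma))

def magnitude_for_cutoff(cutoff, t=0.01):
    """Invert the sigmoid: the gradient magnitude whose score equals `cutoff`."""
    sigma = np.log10(t) / -3.0
    return t / ((1.0 / cutoff - 1.0) ** sigma)
```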
In general, the above-described value for ClutterThreshold can be computed similarly to a noise threshold in vision system processes using, e.g., a histogram. Then, for each pixel in the fine gradient image that has a magnitude less than ClutterThreshold, and is marked “present” (or another similar flag) in the training mask (if given), the procedure generates a clutter test point. As such, the system considers (for later runtime analysis) locations on the training image/pattern that should have a low gradient magnitude at runtime, and that are thus potential locations at which to provide a clutter test point.
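One hedged way such a histogram-based threshold could be derived, by analogy to a noise threshold, is to pick the gradient magnitude below which a chosen fraction of training pixels fall; the fraction and bin count in the sketch below are assumed parameters not taken from this description.

```python
import numpy as np

def clutter_threshold_from_histogram(train_grad_mag, empty_fraction=0.90, bins=256):
    """Pick the gradient magnitude below which `empty_fraction` of pixels fall."""
    hist, edges = np.histogram(train_grad_mag.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / float(hist.sum())
    idx = int(np.searchsorted(cdf, empty_fraction))
    return float(edges[min(idx + 1, bins)])
```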
Note that regions of the image can be masked out from the placement of clutter test points if they are considered unimportant to the analysis of the image.
With reference to step 230 in
With reference also to the sub-procedure 400 of
In step 540, the found “pose” of the runtime image is used to map the clutter test points to the coordinate space of the runtime image, using (e.g.) nearest-neighbor mapping. For each clutter test point, the procedure 500 generates a score at step 550 by passing the runtime gradient magnitude through the above-described sigmoid function, and then subtracting the value for ClutterScoreCutoff from the result, clamped at 0. This result is then (by way of non-limiting example) multiplied by (1.0/(1.0−ClutterScoreCutoff)), which rescales the score space to be normalized (i.e. between 0 and 1). Note that other normalization techniques (or no normalization) can be employed in alternate embodiments. Note also that in various embodiments, the sigmoid normalization table can be modified to include the subtraction, clamping, and rescaling directly, making those operations free at runtime.
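A minimal sketch of this runtime mapping and scoring, assuming a 2×3 affine pose, nearest-neighbor sampling of the runtime gradient-magnitude image, and the sigmoid form introduced above, might look as follows; the function names and default parameter values are illustrative assumptions rather than the commercial implementation.

```python
import numpy as np

def _sigmoid(x, t):
    # Clutter response sigmoid with sigma = log10(t) / -3, as described above.
    sigma = np.log10(t) / -3.0
    return 1.0 / (1.0 + (t / np.maximum(x, 1e-12)) ** (1.0 / sigma))

def score_clutter_points(runtime_grad_mag, test_points, pose,
                         t=0.01, clutter_score_cutoff=0.5):
    """Map clutter test points into the runtime image and score each one."""
    h, w = runtime_grad_mag.shape
    pts = np.hstack([test_points, np.ones((len(test_points), 1))])
    mapped = pts @ pose.T                                   # trained -> runtime coords
    xs = np.clip(np.round(mapped[:, 0]).astype(int), 0, w - 1)   # nearest-neighbor sample
    ys = np.clip(np.round(mapped[:, 1]).astype(int), 0, h - 1)
    raw = _sigmoid(runtime_grad_mag[ys, xs], t)             # sigmoid response per point
    shifted = np.maximum(raw - clutter_score_cutoff, 0.0)   # subtract cutoff, clamp at 0
    per_point = shifted * (1.0 / (1.0 - clutter_score_cutoff))  # rescale to [0, 1]
    return per_point, float(per_point.sum())                # per-point and summed score
```

The per-point scores can then be summed to generate an overall clutter score for at least a portion of the runtime candidate image, as described with reference to step 560.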
This computed score information enables the procedure 500 to provide a level of clutter in accordance with step 560. In an embodiment, and with further reference to
In step 570, the clutter score or other information on the level of clutter in the runtime candidate image can be utilized by downstream (optional) processes and tasks to perform various actions, such as (but not limited to) stopping a production line, sounding alerts, storing quality control data, and/or rejecting parts.
In the example of the above-described surface 112 (
Reference is now made to
Note that the use of gradient magnitude to locate features and/or emptiness in an image is one of a variety of techniques that should be clear to those of skill in the art. For example, an alternative technique entails performing a Sobel operator analysis on the image and then performing an edge-chaining process, searching for areas with no chains to determine the presence/absence of features. Thus, determination of emptiness/empty regions in the image should be taken broadly to include a variety of techniques.
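Purely as a hedged illustration of that alternative, the sketch below computes a Sobel gradient magnitude for a region and treats the region as empty when no connected edge components (a crude stand-in for edge chains) are found; the library calls, threshold, and labeling-based chain test are assumptions, not a method prescribed by this description.

```python
import numpy as np
from scipy import ndimage

def region_is_empty(image, region_slice, edge_threshold=0.1):
    """Return True if a region contains no connected edge components ("chains")."""
    patch = image[region_slice].astype(float)
    gx = ndimage.sobel(patch, axis=1)                # horizontal Sobel response
    gy = ndimage.sobel(patch, axis=0)                # vertical Sobel response
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12             # normalize to [0, 1]
    edges = magnitude > edge_threshold
    _labels, num_chains = ndimage.label(edges)       # connected edge components
    return num_chains == 0

# Example usage: check a 50x60-pixel window of a grayscale image.
# empty = region_is_empty(gray_image, (slice(10, 60), slice(20, 80)))
```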
With reference now to
It should be clear that the generation of a clutter test point training pattern allows for rapid and reliable detection of unwanted features on a runtime candidate image. This approach allows for acceptance or rejection of candidates before more in-depth and processor/time-intensive analysis is undertaken, thereby increasing operational efficiency and throughput speed.
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, as used herein various directional and orientational terms (and grammatical variations thereof) such as “vertical”, “horizontal”, “up”, “down”, “bottom”, “top”, “side”, “front”, “rear”, “left”, “right”, “forward”, “rearward”, and the like, are used only as relative conventions and not as absolute orientations with respect to a fixed coordinate system, such as the acting direction of gravity. Moreover, a depicted process or processor can be combined with other processes and/or processors or divided into various sub-processes or processors. Such sub-processes and/or sub-processors can be variously combined according to embodiments herein. Likewise, it is expressly contemplated that any function, process and/or processor herein can be implemented using electronic hardware, software consisting of a non-transitory computer-readable medium of program instructions, or a combination of hardware and software. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.