Embodiments of the present invention relate, in general, to defect detection. More specifically, embodiments of the present invention relate to image-based automatic detection of a defective area in an industrial product.
One conventional approach to defect detection is by human inspection. In such approaches, a human operator may need to examine each image of an industrial product to identify a defective area, or areas, and to manually label the defects. This human process may depend heavily on the skills and expertise of the operator. Additionally, the time required to process different images may be significantly different, which may cause a problem for a mass-production pipeline. Furthermore, the working performance may vary considerably between human operators and may drop quickly over time due to operator fatigue.
Other conventional approaches to defect detection may comprise image template matching, for example, phase correlation in the image frequency domain and normalized cross correlation in the spatial image domain. However, these methods may be sensitive to image noise, contrast change and other common imaging degradations and inconsistencies. Perhaps more importantly, these methods cannot handle cases in which a model image is geometrically transformed relative to the input image due to camera motion or different operating settings.
Robust and automatic methods, systems and apparatus that can perform defect detection on different images in substantially the same amount of time and at substantially the same level of accuracy may be desirable. Additionally, a method, system and apparatus that can learn from previous input and improve its performance automatically may also be desirable.
According to a first aspect of some embodiments of the present invention, an offline training stage may be executed prior to an online classification stage.
According to a second aspect of some embodiments of the present invention, both the online stage and the offline stage comprise robust image matching in which a geometric transform between an input image and a corresponding model image may be estimated. The estimated geometric transform may be used to transform the input image and the corresponding model image to a common coordinate system, thereby registering the input image and the corresponding model image. Image difference measures may be computed from the registered input image and the registered corresponding model image.
According to a third aspect of some embodiments of the present invention, the robust image matching may comprise an iterative process in which feature-point matches between feature points in the input image and feature points in the corresponding model image may be updated guided by a current estimate of the geometric transform. The geometric transform estimate may, in turn, be updated based on the updated feature-point matches.
According to a fourth aspect of some embodiments of the present invention, pixel-based difference measures may be computed from the registered input image and the registered corresponding model image.
According to a fifth aspect of some embodiments of the present invention, window-based difference measures may be computed from the registered input image and the registered corresponding model image.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
Embodiments of the present invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The figures listed above are expressly incorporated as part of this detailed description.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the methods and systems of the present invention is not intended to limit the scope of the invention but it is merely representative of the presently preferred embodiments of the invention.
Elements of embodiments of the present invention may be embodied in hardware, firmware and/or software. While exemplary embodiments revealed herein may only describe one of these forms, it is to be understood that one skilled in the art would be able to effectuate these elements in any of these forms while resting within the scope of the present invention.
Embodiments of the present invention relate, in general, to defect detection. More specifically, embodiments of the present invention relate to image-based automatic detection of a defective area in an industrial product, for example, an electronic circuit, a Liquid Crystal Display (LCD) panel and other industrial products.
According to some embodiments of the present invention, a digital image of an industrial product may be acquired from one, or more, digital cameras in order to assess whether or not the industrial product comprises a defective area, and, if so, to identify the defective area and classify the defects.
Some embodiments of the present invention may comprise an offline training stage described in relation to
Performing 106 robust matching of the input image and the corresponding model image may comprise estimation of a 2-dimensional (2D) geometric transform, also referred to as an inter-image transform or a transform, between the input image and the corresponding model image. The 2D geometric transform may be a one-to-one transform that may warp a first pixel in one of the images to a new position in the other image. After the 2D geometric transform is estimated, the input image and the corresponding model image may be warped to a common 2D coordinate system. In some embodiments, the corresponding model image may be warped to the coordinate system of the input image. In alternative embodiments, the input image may be warped to the coordinate system of the corresponding model image. In still alternative embodiments, the input image and the corresponding model image may both be warped to a new coordinate system.
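By way of a non-limiting illustration, the following sketch warps the corresponding model image into the coordinate system of the input image once a 3×3 transform has been estimated. The use of OpenCV's warpPerspective and the function name register_to_input are assumptions for illustration only and are not mandated by the embodiments described herein.

```python
import cv2

def register_to_input(input_image, model_image, transform_3x3):
    # Warp the corresponding model image into the coordinate system of the
    # input image using the estimated 3x3 geometric transform (homography,
    # affine transform embedded in a 3x3 matrix, etc.).
    h, w = input_image.shape[:2]
    registered_model = cv2.warpPerspective(model_image, transform_3x3, (w, h))
    # The input image is unchanged; both images now share one coordinate system.
    return input_image, registered_model
```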
Estimation of the inter-image transform, also referred to as image registration, according to embodiments of the present invention may be understood in relation to
where T12 may be represented by a 3×3 matrix associated with a 2D translation, a rotation, a scaling, an affine transformation, a homography or another inter-image transform. According to embodiments of the present invention, a number of feature points may be detected 200 in the input image and the corresponding model image. A shape descriptor may be extracted 202 for each detected feature point. A person having ordinary skill in the art will recognize that many methods and systems are known in the art for the detection 200 of the feature points and the extraction 202 of a shape descriptor for each feature point. Exemplary methods may be found in “Evaluation of Interest Point Detectors,” International Journal of Computer Vision, 37(2):151-172, June 2000, by C. Schmid, R. Mohr and C. Bauckhage and in “Evaluation of Features Detectors and Descriptors based on 3D Objects,” International Journal of Computer Vision, 73(3):263-284, July 2007, by P. Moreels and P. Perona, both of which are hereby incorporated by reference herein in their entirety.
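As one hedged example of feature detection 200 and descriptor extraction 202, the sketch below uses the ORB detector/descriptor available in OpenCV. ORB is only one of the many choices surveyed in the literature cited above; nothing in the embodiments requires this particular detector or descriptor.

```python
import cv2

def detect_and_describe(image_gray, n_features=1000):
    # Detect feature points and compute a shape descriptor for each one.
    # ORB is an illustrative choice; any detector/descriptor pair surveyed in
    # the cited literature could be substituted.
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(image_gray, None)
    return keypoints, descriptors
```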
In some embodiments of the present invention described in relation to
In some embodiments of the present invention described in relation to
After an initial geometric transform estimate has been determined 206, 210, then the feature-point matches may be updated 212 as guided by the current geometric transform estimate, initially the initial geometric transform. The current-geometric-transform-estimate guided feature-point matches update 212 may be understood in relation to
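A minimal sketch of one guided-matching iteration follows, assuming descriptor matching with a brute-force Hamming matcher, a fixed search radius, and RANSAC-based re-estimation of a homography. The matcher, radius, and re-estimation method are illustrative assumptions; the embodiments only require that the feature-point matches be updated consistently with the current transform estimate and that the transform estimate then be updated from the retained matches.

```python
import numpy as np
import cv2

def update_matches(model_pts, input_pts, model_desc, input_desc, T_current,
                   radius=10.0):
    # model_pts / input_pts: (N, 2) and (M, 2) arrays of feature coordinates.
    # Project the model feature points into the input image with the current
    # transform estimate.
    projected = cv2.perspectiveTransform(
        model_pts.reshape(-1, 1, 2).astype(np.float32), T_current).reshape(-1, 2)
    # Candidate descriptor matches (brute-force Hamming is an assumed choice).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.match(model_desc, input_desc)
    # Keep only matches whose projected location agrees with the current estimate.
    kept = [m for m in matches
            if np.linalg.norm(projected[m.queryIdx] - input_pts[m.trainIdx]) < radius]
    if len(kept) < 4:
        return kept, T_current  # too few correspondences to re-estimate
    # Re-estimate the geometric transform from the retained correspondences.
    src = np.float32([model_pts[m.queryIdx] for m in kept])
    dst = np.float32([input_pts[m.trainIdx] for m in kept])
    T_updated, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return kept, T_updated
```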
Referring again to
The next unprocessed non-zero pixel in the comparison mask image may be selected 502, and corresponding pixel color values from the registered input image and the registered model image may be extracted 504. Pixel-based difference measures may be computed 506 from the pixel color values. In some embodiments of the present invention, a direct absolute difference value may be calculated according to:
diff_abs(x, y) = |I(x, y) − M(x, y)|,
where I(x, y) and M(x, y) may denote the color values at pixel location (x, y) in the registered input image and the registered model image, respectively. In some embodiments of the present invention, a direct relative difference value may be calculated according to:
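The relative-difference expression referenced above is not reproduced here; the sketch below therefore assumes one common form, normalization of the absolute difference by the model value, purely for illustration.

```python
import numpy as np

def pixel_differences(I, M, eps=1e-6):
    # Per-pixel difference measures between the registered input image I and
    # the registered model image M (assumed to be equal-sized arrays).
    I = I.astype(np.float64)
    M = M.astype(np.float64)
    diff_abs = np.abs(I - M)                     # direct absolute difference
    diff_rel = diff_abs / (np.abs(M) + eps)      # assumed relative-difference form
    return diff_abs, diff_rel
```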
In some embodiments, a linear 1-dimensional (1D) transform between the registered input image intensity values and the registered model image intensity values may be determined according to:
where a and b are the transform parameters. In some embodiments of the present invention, a compensated direct absolute difference value and a compensated direct relative difference value may be calculated according to:
respectively.
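Since the expressions for the linear 1D transform and the compensated differences are not reproduced above, the following sketch assumes a least-squares fit of I ≈ a·M + b and compensated differences computed against the mapped model intensities; both assumptions are for illustration only.

```python
import numpy as np

def compensated_differences(I, M, eps=1e-6):
    # Assumes single-channel registered images of equal size.
    I = I.astype(np.float64)
    M = M.astype(np.float64)
    # Least-squares fit of the linear 1D intensity transform I ~ a*M + b.
    a, b = np.polyfit(M.ravel(), I.ravel(), 1)
    predicted = a * M + b                              # model intensities mapped to input
    comp_abs = np.abs(I - predicted)                   # compensated absolute difference
    comp_rel = comp_abs / (np.abs(predicted) + eps)    # assumed compensated relative form
    return comp_abs, comp_rel, a, b
```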
Pixel color values may be extracted 508 from a local region around the selected pixel location, and windowed difference measures may be computed 510. In some embodiments, a squared difference value may be calculated according to:
where W denotes the local region. In some embodiments, a normalized cross correlation value may be calculated according to:
where
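The windowed expressions referenced above are likewise not reproduced here; the sketch below assumes the standard sum-of-squared-differences and normalized cross correlation over a square local region W centered on the selected pixel, and ignores image-boundary handling for brevity.

```python
import numpy as np

def windowed_measures(I, M, x, y, half=3, eps=1e-6):
    # Window-based difference measures over a (2*half+1)^2 local region W
    # around pixel (x, y); assumes the window lies fully inside both images.
    I = I.astype(np.float64)
    M = M.astype(np.float64)
    wi = I[y - half:y + half + 1, x - half:x + half + 1]
    wm = M[y - half:y + half + 1, x - half:x + half + 1]
    ssd = np.sum((wi - wm) ** 2)                 # squared difference over W
    wi_z = wi - wi.mean()
    wm_z = wm - wm.mean()
    ncc = np.sum(wi_z * wm_z) / (np.sqrt(np.sum(wi_z ** 2) * np.sum(wm_z ** 2)) + eps)
    return ssd, ncc
```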
A determination 512 may be made as to whether or not all non-zero mask pixels have been processed. If not 513, then the next non-zero mask pixel may be selected 502 and processed. If so 515, then an image edge-weight map may be computed 516 from the model image. The image edge-weight map may determine pixel weighting values based on the distance of a pixel from an edge in the model image. Pixel locations closer to an edge may have a smaller weighting value than pixel locations further from an edge. In some embodiments, the weighting values may be in the range of zero to one with pixel locations coincident with an edge location being given a weighting value of zero and those furthest from an edge being given a weighting value of one. The difference measures may be adjusted 518 according to the image edge-weight map, and the adjusted difference measures may be made available 520 to subsequent processes and for subsequent processing. In some embodiments, the difference measures may be adjusted by multiplying the unadjusted difference-measure values by the corresponding weight value.
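A minimal sketch of the edge-weight map and the multiplicative adjustment follows; the Canny edge detector, its thresholds, and the distance cap used to normalize the weights into [0, 1] are assumptions chosen only to make the example concrete.

```python
import numpy as np
import cv2

def edge_weight_map(model_gray, max_dist=20.0):
    # model_gray: 8-bit single-channel model image.
    edges = cv2.Canny(model_gray, 50, 150)
    # distanceTransform measures distance to the nearest zero pixel, so invert
    # the edge mask: edge pixels become zero, everything else non-zero.
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)
    # Zero weight on edges, growing toward one with distance from the nearest edge.
    return np.clip(dist / max_dist, 0.0, 1.0)

def adjust_differences(diff_map, weight_map):
    # Down-weight difference values near model-image edges, where small
    # registration errors produce the largest spurious differences.
    return diff_map * weight_map
```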
An image difference vector may be formed at each pixel location as described in relation to
Referring to
The statistical classification model takes, as input, the image difference vectors associated with an input image and generates an output label for each pixel location in the input image. The labeled output may be referred to as a labeled classification map indicating the classification of each pixel, for example, “defect” or “non-defect.” Exemplary embodiments of the statistical classification model may include a support vector machine, a boosted classifier and other classification models known in the art.
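The sketch below illustrates how a trained model might be applied per pixel to produce a labeled classification map; it assumes a scikit-learn-style classifier with a predict() method and a difference-vector array laid out as (height, width, dimensions), neither of which is required by the embodiments.

```python
import numpy as np

def classify_pixels(difference_vectors, classifier):
    # difference_vectors: (H, W, D) array, one D-dimensional image difference
    # vector per pixel location; classifier: any trained model exposing predict().
    h, w, d = difference_vectors.shape
    labels = classifier.predict(difference_vectors.reshape(-1, d))
    # Reshape the per-pixel labels (e.g., 1 = "defect", 0 = "non-defect")
    # back into a labeled classification map.
    return labels.reshape(h, w)
```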
The training data may be divided 700 into two subsets: a training subset and a testing subset. An untested set of training parameters may be selected 702 from a set of possible training parameters. The statistical classification model may be trained 704 on the training subset using the selected training parameters. The trained statistical classification model may be tested 706 using the testing subset. The testing may comprise input of the image difference vectors associated with an input image in the testing subset to the statistical classification model, which may generate a labeled output wherein each pixel location in the input image is labeled as a “defect” or a “non-defect.” After all input images in the testing subset are processed, a performance score may be computed 708 for the currently trained classification model using the ground-truth defect masks. In some embodiments of the present invention, the performance score associated with a classification model may be defined based on the true positive rate and the false positive rate according to:
Score = TP + α(1 − FP),
where TP and FP are the true positive rate and the false positive rate, respectively, and α is a design parameter associated with the performance requirements of the defect detection system. In these embodiments, a higher score may be indicative of better classification model performance.
The performance score of the currently tested classification model may be compared 710 to the performance score of the current best classification model. If the performance score of the currently tested classification model is better 711 than the performance score of the current best classification model, the current best classification model may be updated 712 to the currently tested classification model, and a determination 714 may be made as to whether or not there remain parameter sets to be tested. If the performance score of the currently tested classification model is not better 713 than the performance score of the current best classification model, then a determination 714 may be made as to whether or not there remain parameter sets to be tested. If there are 715 parameter sets remaining to be tested, the next untested parameter set may be selected 702. If there are no 717 parameter sets remaining to be tested, the training may be completed 718, and the current best classification model may be used in an online defect detection process.
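The following sketch ties the training loop together: each candidate parameter set is trained, tested, scored with Score = TP + α(1 − FP), and compared against the best model so far. The support vector machine (scikit-learn SVC) and the small parameter grid are illustrative assumptions; the embodiments only require that every candidate parameter set be trained, tested and scored, and that the best-scoring model be retained for online defect detection.

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

def score(tp_rate, fp_rate, alpha=1.0):
    # Score = TP + alpha * (1 - FP); alpha is a design parameter.
    return tp_rate + alpha * (1.0 - fp_rate)

def select_best_model(train_vectors, train_labels, test_vectors, test_labels,
                      alpha=1.0):
    # Exhaustive search over an assumed parameter grid for an assumed SVM model.
    best_model, best_score = None, -np.inf
    for C, gamma in product([0.1, 1.0, 10.0], [0.01, 0.1, 1.0]):
        model = SVC(C=C, gamma=gamma).fit(train_vectors, train_labels)
        predicted = model.predict(test_vectors)
        tp = np.mean(predicted[test_labels == 1] == 1)  # true positive rate
        fp = np.mean(predicted[test_labels == 0] == 1)  # false positive rate
        s = score(tp, fp, alpha)
        if s > best_score:
            best_model, best_score = model, s
    return best_model, best_score
```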
Some online defect detection embodiments of the present invention may be understood in relation to
Although the charts and diagrams in the figures may show a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of the blocks may be changed relative to the shown order. Also, as a further example, two or more blocks shown in succession in a figure may be executed concurrently, or with partial concurrence. It is understood that software, hardware and/or firmware may be created by one of ordinary skill in the art to carry out the various logical functions described herein.
Some embodiments of the present invention may comprise a computer program product comprising a computer-readable storage medium having instructions stored thereon/in which may be used to program a computing system to perform any of the features and methods described herein. Exemplary computer-readable storage media may include, but are not limited to, flash memory devices, disk storage media, for example, floppy disks, optical disks, magneto-optical disks, Digital Versatile Discs (DVDs), Compact Discs (CDs), micro-drives and other disk storage media, Read-Only Memory (ROMs), Programmable Read-Only Memory (PROMs), Erasable Programmable Read-Only Memory (EPROMs), Electrically Erasable Programmable Read-Only Memory (EEPROMs), Random-Access Memory (RAMs), Video Random-Access Memory (VRAMs), Dynamic Random-Access Memory (DRAMs) and any type of media or device suitable for storing instructions and/or data.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalence of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.