This invention relates to a method for structure-guided image inspection.
Template matching is a general and powerful methodology in machine vision applications. It requires little user technical knowledge or interaction to achieve reasonable performance. Users need only supply template images for matching in the teaching phase. In the application phase, the template is compared to like-sized subsets of the image over a range of search positions. Template matching is robust to linear and some monotonic nonlinear variations in shading and does not require separation of objects from their background (i.e. segmentation), which often introduces errors. Template matching methods have been extended to search an input image for instances of templates that are altered by rotation, scale, and contrast changes.
Template matching methods are primarily developed for location and identification applications. Due to their simplicity and usability, it is tempting to apply template matching to inspection applications and use the resulting matching score to check against a tolerance value for acceptance or rejection criteria. However, small uncertainties at image edges can lead to large uncertainties in image gray values. Large uncertainties in gray values due to normal edge variations in turn limit the ability of template matching methods to discriminate between defects and acceptable variations. A template matching method is least able to detect defects reliably at and around image edges, but unfortunately such edges are often the most critical for inspection because they generally correspond to physical features of the object. The near blindness of template matching methods along edges is perhaps their most serious limitation for inspection applications.
Alternatively, edge detection and thresholding methods are used to detect object boundaries of interest, and dimensional measurements or defect detection are performed using the detected edges (Hanks, J, "Basic Functions Ease Entry Into Machine Vision", Test & Measurement World, Mar. 1, 2000; Titus, J, "Software makes machine vision easier", Test & Measurement World, Oct. 15, 2001). Edges represent gray value changes, which are high-frequency components of an image. Therefore, edge detection methods are inherently sensitive to noise. Furthermore, because it detects changes, edge detection is also sensitive to contrast variations.
A structure-guided processing method uses application domain structure information to automatically enhance and detect image features of interest (Lee, S "Structure-guided image processing and image feature enhancement", U.S. patent application Ser. No. 09/738,846, filed Dec. 15, 2000; Lee, S, Oh, S, Huang, C "Structure-guided Automatic Learning for Image Feature Enhancement", U.S. patent application Ser. No. 09/815,466, filed May 23, 2001). The structure information compensates for severe variations such as low image contrast and noise. It retains the ability to detect true defects in the presence of severe process or sensing variations. However, the methods described only enhance regularly shaped structures such as straight lines, circles, circular arcs, etc. They cannot effectively detect a mismatch between the expected structures and the image features and could generate misleading results when a mismatch exists.
To perform an inspection using the prior art approach, carefully controlled inspection conditions are required. Inspection often fails in applications with significant variations.
The structure-guided inspection method of this invention overcomes the prior art limitations. It is flexible, adaptable (easy to use for a wide range of applications), can deal with noise (difficult cases) and can maximally utilize domain specific knowledge.
An objective of the invention is to provide a method that uses structure information to enhance and detect image features of interest even when the shape of the image structure of interest is not regular. A further objective of the invention is to check both global and local structures of objects to be inspected. The global structure inspection detects gross errors in image structure; therefore, side effects caused by mismatched structure-guided processing are avoided. Furthermore, subtle defects along the edge of a structure can be detected by local structure inspection. Another objective of the invention is to allow an edge detection based inspection system to tolerate significant noise and contrast variations. An additional objective of the invention is to provide a structure-guided transformation that transforms a region of an image into a region in the structure-transformed image according to the desired structure. A further objective of the invention is to achieve efficient and accurate structure-guided processing such as filtering, detection, and comparison in the transformed domain and thereby use simple operations to enhance or detect straight lines or edges.
A structure-guided inspection method accepts an input image and performs a structure-guided transformation of that image to create a structure-transformed image. Structure detection is performed using the transformed image. The structure transformation recipe can be generated using a contour based method or a radial based method. A structure decomposition step can be used for non-convex structures.
The preferred embodiments and other aspects of the invention will become apparent from the following detailed description of the invention when read in conjunction with the accompanying drawings which are provided for the purpose of describing embodiments of the invention and not for limiting same, in which:
1. Concept
I. Introduction
Template matching is a general and powerful methodology in machine vision applications. It requires little user technical knowledge or interaction to achieve reasonable performance. Users need only supply template images for matching in the teaching phase. In the application phase, the template is compared to like-sized subsets of the image over a range of search positions. Template matching is robust to linear and some monotonic nonlinear variations in shading and does not require separation of objects from their background (i.e. segmentation), which often introduces errors. Template matching methods have been extended to search an input image for instances of template-like regions that have been subjected to rotation, scale, and contrast changes (Silver, B, "Geometric Pattern Matching for General-Purpose Inspection in Industrial Machine Vision", Intelligent Vision '99 Conference—Jun. 28–29, 1999; Lee, S, Oh, S, Seghers, R "A Rotation and Scale Invariant Pattern Matching Method", U.S. patent application Ser. No. 09/895,150, filed Jun. 29, 2001).
Template matching methods are primarily developed for location and identification applications. Due to their simplicity and usability, it is tempting to apply template matching to inspection applications and use the resulting matching score to check against a tolerance value for acceptance or rejection criteria (Hanks, J, "Basic Functions Ease Entry Into Machine Vision", Test & Measurement World, Mar. 1, 2000 http://www.e-insite.net/tmworld/index.asp?layout=article&articleid=CA187377&pubdate=3/1/2000; Titus, J, "Software makes machine vision easier", Test & Measurement World, Oct. 15, 2001 http://www.e-insite.net/tmworld/index.asp?layout=article&articleid=CA177596&pubdate=10/15/2001). However, small uncertainties at image edges can lead to large uncertainties in image gray values. Large uncertainties in gray values due to normal edge variations, in turn, limit the ability of template matching methods to discriminate between defects and acceptable variations. A template matching method is least able to detect defects reliably at and around image edges, but unfortunately such edges are often the most critical for inspection because they generally correspond to physical features of the object. The near blindness of template matching methods along edges is perhaps their most serious limitation for inspection applications.
Alternatively, an edge detection and thresholding method is used to detect object boundaries of interest, and dimensional measurements or defect detection are performed on the detected edges. Edges represent gray value changes, which are high-frequency components of an image. Therefore, an edge detection method is inherently sensitive to noise. Furthermore, because it detects changes, it is also sensitive to contrast variations. To perform inspection using an edge detection method, carefully controlled inspection conditions are required. Edge detection often fails in applications with significant variations.
A structure-guided processing method uses application domain structure information to automatically enhance and detect image features of interest (Lee, S "Structure-guided image processing and image feature enhancement", U.S. patent application Ser. No. 09/738,846, filed Dec. 15, 2000; Lee, S, Oh, S, Huang, C "Structure-guided Automatic Learning for Image Feature Enhancement", U.S. patent application Ser. No. 09/815,466, filed May 23, 2001). The structure information compensates for severe variations such as low image contrast and noise. It retains the ability to detect true defects in the presence of severe process or sensing variations. However, the methods only enhance regularly shaped structures such as straight lines, circles, circular arcs, etc. They cannot effectively detect a mismatch between the expected structures and the image features and could generate misleading results when a mismatch exists.
The structure-guided inspection method of this invention overcomes the prior art limitations. It is flexible, adaptable (easy to use for a wide variety of applications), can deal with noise (difficult cases), and can maximally utilize domain specific knowledge. The invention provides a method that uses structure information to enhance and detect image features of interest even when the shape of the image structure is not regular. The invention also checks both global and local structures of objects to be inspected. The global structure inspection detects gross errors in image structure; therefore, side effects caused by mismatched structure-guided processing are avoided. Furthermore, subtle defects along the edge of a structure are detected by local structure inspection. The invention also allows an edge detection based inspection system to tolerate significant noise and contrast variation. The invention provides a structure-guided transformation that transforms a region of an image into a region in the structure-transformed image according to the desired structure. In essence, the invention enhances the image on the basis of the structures being inspected, making the inspection for them much easier. In one embodiment of the invention, the contour of the desired structure is lined up to form a straight line in the structure-transformed image. This facilitates efficient and accurate structure-guided processing such as filtering, detection, and comparison in the transformed domain using simple operations to enhance or detect straight lines or edges.
II. Structure-Guided Inspection Approach
The structure-guided inspection method of this invention uses structure information to guide image enhancement for reliable feature and defect detection. It checks both global and local structures of objects to be inspected. The invention has a learning phase and an application phase. In the learning phase, the structure transformation recipe, the structure-guided filter recipe, and the expected structure in the transformation domain are generated.
In the application phase, there are two steps in the structure-guided inspection of this invention: a global structure inspection step and a local defect inspection step. The structure-guided inspection is first performed at the global scale, which strongly enforces the structure information. It can tolerate high noise, low contrast, and small defects, yet local details may be replaced by the structure information and lose their fidelity. Therefore, a local feature detection and inspection step is performed. The globally detected features serve as a reference for local feature detection and defect inspection.
The global structure inspection step determines the estimated global structure of the input image and compares the estimated global structure with the expected structure generated in the learning phase. If the difference between the estimated global structure and the expected structure exceeds the acceptable tolerance, an error will be reported. After passing the global structure inspection step, the estimated global structure will be used as the basis to check the difference between the structure and the image features at different local locations of the image. If significant differences are detected, a local defect will be reported. Otherwise, a refined image structure output that combines the global and local structures is generated. The processing flow for one embodiment of the global and local defect detection is shown in the accompanying drawings.
In the learning phase, structure information 104 and learning images 100 are used by a structure-guided transformation learning process 102 to create a structure transformation recipe 106. Note that in some applications, only the learning image 100 is required for this learning and the structure information 104 is derived from the learning image 100. In other applications, only structure information 104 is required for this learning. At least one structure-transformed image 108 is created in the learning process. In one embodiment of the invention, a structure-guided filter learning and feature detection process 110 learns a structure-guided filter recipe 112 and generates expected transformed structure 114. The structure-guided filter learning process is performed only when the application requires a structure-guided filter recipe. The learning steps can be performed in the field in a learnable automatic inspection system or performed by the inspection system supplier in the factory. The learning can be conducted manually or performed by a learning system automatically or semi-automatically.
In the application phase, an input image 116 is processed by the structure-guided transformation step 118 using the structure transformation recipe 106 to generate a structure-transformed image output 120. In one embodiment of the invention, the structure-transformed image 120 has a flat (or linear) boundary for objects having the desired structure. The structure-guided transformation 118 may involve multiple separate transformations for different portions of the structure. In some applications, structure-guided filtering 122 is applied to create a structure filtered image output 126. The filtering operation 122 removes noise and enhances contrast using the structure information. From the filtered image 126, the global structure is detected 124, producing detected global structure outputs 128 that are compared with the expected structure 114 in a global structure inspection step 130 to generate a global structure inspection result output 132. If the difference is significant, the input image 116 is corrupted in its global structure. When the difference is small 134, the filtered image 126, detected global structure 128, and input image 116 could be further processed by the local structure defect inspection stage 136 to detect local defects, producing a local defect result output 140. In addition, a refined image structure output 138 that combines the global and local structures could be generated that removes noise yet preserves local details.
III. Structural-Guided Transformation
Structure-guided transformation transforms a region of an image into a region in the transformed image according to the desired structure. In one embodiment of the invention, the contour of the desired structure is lined up to form a straight line in the structure-transformed image. This facilitates efficient and accurate structure-guided processing such as filtering, detection and comparison in the transformed domain using simple operations that are designed to enhance or detect straight lines or edges. In this way, all operations are simple and unified regardless of the differences in structures.
The structure-guided transformation learning process 102 creates a structure transformation recipe 106 for a given structure 104. The recipe is used in the application phase to efficiently perform structure-guided transformation 118. In the learning phase, the structure transformation recipe 106 is generated from the learning image 100 and/or reference structure input 104. In one embodiment of the invention, the reference structure is read from a Computer Aided Design (CAD) data file. In another embodiment of the invention, the reference structure is specified by users through a graphical interface. In a third embodiment of the invention, the reference structure is derived from the learning image by image processing either semi-automatically or automatically. The reference structure includes the contour of the structure and reference points or coordinates for alignment.
In one embodiment of this invention there are two types of structure-guided transformation: a contour based transformation and a radial based transformation. The contour based transformation can be used for reference structures with simple closed or open contours. The radial based transformation can be used for more complicated reference structures. The radial based transformation includes methods to decompose a complicated shape into multiple components, perform a structure-guided transformation of each component, and integrate the results into one transformed image.
III.1 Contour Based Structure-Guided Transformation
Contour based structure-guided transformation can be used for simple open or closed contours. In one embodiment of the invention, the processing flow of contour based transformation learning is shown in the accompanying drawings.
Those skilled in the art should recognize that other mapping methods such as linear or nonlinear interpolation could be used.
In the application phase, the contour based structure-guided transformation is as simple as reading the mapping look-up table to determine the corresponding (x,y) pixel location for each (u,v) coordinate, then reading the input image value from the (x,y) pixel location and assigning it to the (u,v) location of the transformed image. Depending on the application, a pre-alignment step may be necessary to align the input image with the reference structure mask (Lee, S, Oh, S, Seghers, R "Structure-guided Automatic Alignment for Image Processing", U.S. patent application Ser. No. 09/882,734, filed Jun. 13, 2001).
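For illustration, the table lookup just described might be coded as in the following minimal sketch. It assumes the recipe is stored as two integer index arrays, map_x and map_y, shaped like the transformed image; the function and array names are illustrative, not part of the invention.

```python
import numpy as np

def apply_structure_transform(image, map_x, map_y):
    """Application-phase remapping: for every location of the transformed
    image, the recipe tables give the (x, y) input pixel to copy from."""
    xs = np.clip(map_x, 0, image.shape[1] - 1)  # guard against recipe entries
    ys = np.clip(map_y, 0, image.shape[0] - 1)  # that fall outside the image
    return image[ys, xs]                        # table lookup via fancy indexing

# Example: an identity recipe reproduces the input image.
img = np.arange(16, dtype=float).reshape(4, 4)
ys_grid, xs_grid = np.mgrid[0:4, 0:4]
assert np.array_equal(apply_structure_transform(img, xs_grid, ys_grid), img)
```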
III.2 Radial Based Structure-Guided Transformation
The radial based structure-guided transformation can be used for reference structures with complex contours. The method normalizes the distance from a given center point to the boundary. It includes two steps: determining the center for transformation of a reference structure and performing the radial based transformation.
III.2.1 Determining the Center for Transformation
If the difference between the maximum and minimum distances to the boundary is large in a radial based structure-guided transformation, the dynamic range of the transform is also large. Since the input image has fixed resolution, the precision of the transformed domain is degraded when a large dynamic range is required. The center location plays a significant role in the required dynamic range for the transformation. A good center location is one that has the smallest difference between the maximum and minimum distance to the boundary. In one embodiment of the invention, this center point is determined by computing, for each candidate point within the reference structure mask, the maximum and minimum distances to the structure boundary, and selecting the point for which their difference is smallest.
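As a hedged illustration, the sketch below performs this search by brute force over all interior points of a boolean reference structure mask, using erosion residue (as in section III.3.1) to obtain the boundary; a practical system would prune the candidate set, but the criterion is the one stated above.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def find_transform_center(mask):
    """Return the interior point whose maximum and minimum distances to
    the structure boundary differ the least (smallest dynamic range)."""
    mask = mask.astype(bool)
    interior = binary_erosion(mask, structure=np.ones((3, 3)))
    by, bx = np.nonzero(mask & ~interior)       # boundary via erosion residue
    best, best_spread = None, np.inf
    for y, x in zip(*np.nonzero(interior)):
        d = np.hypot(by - y, bx - x)            # distances to all boundary pixels
        spread = d.max() - d.min()              # dynamic range this center implies
        if spread < best_spread:
            best, best_spread = (y, x), spread
    return best

# For a disc, the geometric center wins (its spread approaches zero).
yy, xx = np.mgrid[0:41, 0:41]
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 15 ** 2
print(find_transform_center(disc))              # (20, 20) up to discretization
```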
In one embodiment of the invention, the u-axis of the transformed domain is the radial angle and the v-axis of the transformed domain is the normalized distance.
A special case of the radial based structure-guided transformation is the polar coordinate transform. In this case, the relationship between (x,y) in the original coordinate system and (u,v) in the transformed coordinate system is
x=v*cos(u)+xc
y=v*sin(u)+yc.
where (xc, yc) is the center of the transformation.
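For illustration, the polar coordinate transform can be coded directly from the two equations above. The sketch below uses nearest-neighbor sampling; the grid sizes n_angles and n_radii are illustrative choices, not prescribed by the invention.

```python
import numpy as np

def polar_transform(image, xc, yc, n_angles=360, n_radii=100):
    """Resample so that columns index the radial angle u and rows the
    distance v from (xc, yc), per x = v*cos(u) + xc, y = v*sin(u) + yc."""
    u = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    v = np.arange(n_radii, dtype=float)
    uu, vv = np.meshgrid(u, v)                  # rows = distance v, columns = angle u
    x = np.rint(vv * np.cos(uu) + xc).astype(int)
    y = np.rint(vv * np.sin(uu) + yc).astype(int)
    x = np.clip(x, 0, image.shape[1] - 1)       # nearest-neighbor lookup,
    y = np.clip(y, 0, image.shape[0] - 1)       # clipped at the image border
    return image[y, x]
```

A circle of radius r centered at (xc, yc) maps to the horizontal row v = r of the output, which is the straightening that makes the later filtering, detection, and comparison steps simple.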
In the learning phase, the radial based structure-guided transformation learning determines the (u,v) locations of the transformed domain for each pixel in the (x,y) domain. The (u,v) locations are determined using the u-axis of the transformed domain as the radial angle and the v-axis of the transformed domain as the normalized distance from a given center point. The structure transformation learning determines the mapping for each point in the (u,v) domain using points in the (x,y) domain. In one embodiment of the invention, a nearest neighbor approach is used for the mapping. This results in a mapping look-up table that stores corresponding (x,y) values indexed by (u,v). The mapping look-up table is the structure transformation recipe.
Those skilled in the art should recognize that other mapping methods such as linear or nonlinear interpolations could be used.
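As a hedged sketch of this learning step, the function below builds such a look-up table for a structure whose boundary radius is known per quantized angle (the boundary_radius input and the sampling density n_norm are assumptions for illustration). Normalizing each ray by its boundary radius places the reference boundary on a single row of the transformed domain.

```python
import numpy as np

def radial_recipe(boundary_radius, xc, yc, n_norm=100):
    """Build the (u,v) -> (x,y) mapping look-up table: u indexes the
    radial angle, v the normalized distance, so the reference boundary
    lands on the single horizontal row v = n_norm."""
    n_angles = len(boundary_radius)
    u = np.arange(n_angles) * 2.0 * np.pi / n_angles
    frac = np.arange(n_norm + 1, dtype=float) / n_norm   # 0..1 of boundary radius
    rr = np.outer(frac, np.asarray(boundary_radius, dtype=float))
    map_x = np.rint(rr * np.cos(u) + xc).astype(int)     # nearest-neighbor mapping
    map_y = np.rint(rr * np.sin(u) + yc).astype(int)
    return map_x, map_y                                  # the structure transformation recipe
```

The resulting table can then be applied with the same lookup step sketched in section III.1.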
III.3 Structure Decomposition in a Radial Based Structure-Guided Transformation
If the reference structure is convex, then the radial based structure-guided transformation requires only one transformation. However, in complicated cases, multiple separate transformations can be performed for different portions of the structure. The partial results from multiple transformation domains are then integrated to form the final result. When multiple transformations are required, the region of the reference structure mask is decomposed into different components. Each component has its own structure transformation.
In the case of complex structures such as 800 and 810, users may be asked to perform the decomposition in one embodiment of the invention. In another embodiment of the invention, the structure-guided inspection system could provide assistance to users for the decomposition. In a third embodiment, the system automatically performs the decomposition.
III.3.1 Automatic Decomposition Method
When the reference structure mask is given, the automatic decomposition method can be applied. In one embodiment of the invention, the automatic structure decomposition method includes sequential application of operations for decomposition region selection and selected decomposition region exclusion.
If more than one point has the maximum distance value within the mask, then any one of the points can be chosen as the center. The boundary of a desired region must be continuous. Therefore, when a line segment 1210, 1212, 1214 starting from center 1206 toward boundary 1200 intersects multiple points on the boundary 1208, 1216, further decomposition that removes that region from the desired Nth region is required. The boundary point that has the shortest distance from the center 1208 is the boundary point to include in the Nth region. In one embodiment of the invention, the boundary detection is accomplished by erosion residue using a 3 by 3 structuring element. When the line segment from the center 1206 for a given angle is the tangential line 1212 of the boundary 1200 (shown tangential at boundary location 1218), a limiting line 1202 is generated using the distance limitation condition. The limiting line 1202 is then used to separate the Nth region and the (N+1)th region 1204.
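The erosion residue boundary detection mentioned above can be rendered directly; a minimal sketch, assuming a boolean region mask:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erosion_residue_boundary(mask):
    """Boundary of a binary region as the erosion residue with a 3 by 3
    structuring element: mask pixels whose 3x3 neighborhood is not
    entirely contained in the mask."""
    eroded = binary_erosion(mask, structure=np.ones((3, 3), dtype=bool))
    return mask.astype(bool) & ~eroded
```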
In one embodiment of the invention, the method to create the limiting line calculates the maximum allowable distance at the next angle from the center. This maximum distance is derived from the triangle formed by the center, the tangential boundary point, and the limiting line.
Those skilled in the art should recognize that other methods of desired region selection could be used; for example, an alternative embodiment of desired region selection is illustrated in the accompanying drawings.
An example of multiple radial based transformations is given in the accompanying drawings.
IV. Structure-Guided Filtering
The transformed image could have a rough or unclean boundary because of contamination by noise, variations, distortion, defects, and artifacts of the transformation. To reduce this effect, structure-guided filtering is applied in one embodiment of the invention. Note that after the transformation, the reference structure becomes a rectangular region. The automatic filtering learning and application method disclosed in a prior invention (Lee, S, Oh, S, Huang, C "Structure-guided Automatic Learning for Image Feature Enhancement", U.S. patent application Ser. No. 09/815,466, filed May 23, 2001) is directly applicable. The structure information is represented by the directional box caliper described in the prior invention.
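The directional box caliper of the cited application is not reproduced here. As a simple stand-in that exploits the same structure information, the sketch below filters only along the u (structure) direction of the transformed image, where the expected boundary is horizontal; the filter choice and window length are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def directional_filter(transformed, length=15):
    """Morphological open-close with a 1 x `length` window: smooths along
    the straightened boundary direction, suppressing small dark and
    bright noise while leaving the horizontal edge in place."""
    opened = grey_opening(transformed, size=(1, length))
    return grey_closing(opened, size=(1, length))
```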
V. Structure-Guided Feature Detection
The boundary of the transformed image is ideally a horizontal straight line. Therefore, in one embodiment of the invention, the feature detection is performed on the transformed image. This can be done on the filtered or un-filtered transformed image; the filtered image could provide enhanced features. The image features such as edges, lines, texture contrasts, etc. can be detected by the structure-guided image processing and feature extraction method as disclosed in a prior invention (Lee, S "Structure-guided image processing and image feature enhancement", U.S. patent application Ser. No. 09/738,846, filed Dec. 15, 2000). Parametric estimation of the features can be performed. The method as disclosed in a prior invention (Lee, S, Oh, S "Structure-guided Image Measurement Method", U.S. patent application Ser. No. 09/739,084, filed Dec. 15, 2000) can be used for feature estimation. From the extracted features, global inspection can be performed.
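As a deliberately simple stand-in for the cited detection and estimation methods, the sketch below exploits the straightened geometry: it locates the strongest vertical gray-value step in each column of the transformed image and summarizes the resulting edge profile with elementary parameters.

```python
import numpy as np

def detect_horizontal_edge(transformed):
    """Estimate the boundary row in every column of the transformed image
    and return the edge profile plus simple line-fit statistics."""
    grad = np.abs(np.diff(transformed.astype(float), axis=0))  # vertical gradient
    edge_rows = grad.argmax(axis=0)                            # edge position per column
    return edge_rows, edge_rows.mean(), edge_rows.std()        # profile, position, deviation
```

For a radial transform, edge_rows.mean() corresponds to the estimated radius and edge_rows.std() to its deviation, the kind of statistics compared in the global structure inspection below.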
VI. Global Structure Inspection
The global structure inspection method 130 compares the detected global structure information 128 and its parameters to the expected global structure 114. For example, when the expected structure is a circle, the center location and radius can be compared. Also, statistics of the radius, such as the maximum radius and the deviation of the radius, can be compared. If the expected structure is a rectangular shape, the angles between the lines at the intersection points can be compared. For a structure that is composed of many different geometrical entities, the partial structure information, the parameters, and the relative locations of each geometrical entity can be compared with the expected ones. If the difference is larger than an allowable threshold, a global defect is reported 132. Otherwise, the structural information is sent 134 to the local structure inspection step 136. In one embodiment of the invention, the comparison is performed in the transformed domain, which simplifies the structure representation and comparison. In an alternative embodiment of the invention, the comparison can be performed by image subtraction (or an exclusive OR operation for binary masks) of the detected features and expected feature masks. The difference between the detected and expected images is the non-zero portion of the image. Those skilled in the art should recognize that the image comparison could be performed in either the original image domain or the transformed image domain.
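One hedged rendering of the exclusive OR comparison for binary feature masks (the normalization and threshold convention are illustrative):

```python
import numpy as np

def mask_mismatch(detected_mask, expected_mask):
    """Exclusive OR of binary masks: non-zero pixels mark where detected
    and expected structures disagree; return the normalized mismatch."""
    diff = np.logical_xor(detected_mask, expected_mask)
    return diff.sum() / max(int(expected_mask.sum()), 1)

# A global defect is reported when the mismatch exceeds the allowable threshold:
# global_defect = mask_mismatch(detected, expected) > tolerance
```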
VII. Local Structure Inspection
In one embodiment of the invention, if a global structure defect is not detected 134, the local defect detection 136 is performed using the original input image 116 and the filtered output 126, as well as the global detection results 128 that are inverse transformed back to the original image domain. Because the automatic filtered output 126 enhances the expected structure, the filtered output can be used as a reference for local structure inspection. The local inspection step 136 first extracts local features from the original image, or from the original image with local enhancement. The local feature extraction can be performed using the structure-guided processing method as disclosed in a prior invention (Lee, S "Structure-guided image processing and image feature enhancement", U.S. patent application Ser. No. 09/738,846, filed Dec. 15, 2000). The locally detected features are refined using the structure filtered output 126 from the global inspection. The local inspection step compares the refined feature detection results with the globally estimated ones. In one embodiment of the invention, the comparison can be performed by image subtraction (or an exclusive OR operation for binary masks) of the detected features and expected feature masks. The difference between the detected and expected images is the non-zero portion of the image. If the difference is greater than an allowable tolerance threshold, a local defect is detected 140.
VII.1 Local Feature Refinement
From the global filtered image and the locally detected features, the refined features are generated to represent image structure. In one embodiment of the invention, the method to refine the features from the global filtered image and the locally detected features maximizes the feature intensity with constraints such as
Gain = Σ{FI[x][y]}² − α*Σ{δx*FIx[x][y] + δy*FIy[x][y]}²
where FI[x][y] is the magnitude of the feature intensity, such as the gradient value of the boundary, and FIx[x][y] and FIy[x][y] are the x and y components of the feature intensity. δx and δy are the displacements in the x and y directions at location (x,y), and α is a refinement parameter. The location that maximizes the gain is the refined location of the features. The initial feature location is derived from the local feature detection result, and the feature intensity and adjustment are determined on the global filtered image. The refined image structure output 138 that combines the global and local structures could remove noise yet preserve local details.
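One possible discrete realization of this maximization is sketched below; searching a small displacement window independently per feature point is an assumption for illustration, not prescribed above.

```python
import numpy as np

def refine_feature_points(fi, fi_x, fi_y, points, alpha=0.1, radius=2):
    """Move each feature point to the nearby location maximizing
    FI^2 - alpha*(dx*FIx + dy*FIy)^2; fi, fi_x, fi_y are the feature
    intensity magnitude and components from the global filtered image."""
    h, w = fi.shape
    refined = []
    for y, x in points:
        best, best_gain = (y, x), -np.inf
        for dy in range(-radius, radius + 1):          # candidate displacements
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    gain = fi[yy, xx] ** 2 - alpha * (dx * fi_x[yy, xx] + dy * fi_y[yy, xx]) ** 2
                    if gain > best_gain:
                        best_gain, best = gain, (yy, xx)
        refined.append(best)
    return refined
```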
The invention has been described herein in considerable detail in order to comply with the Patent Statutes and to provide those skilled in the art with the information needed to apply the novel principles and to construct and use such specialized components as are required. However, it is to be understood that the invention can be carried out by specifically different equipment and devices, and that various modifications, both as to the equipment details and operating procedures, can be accomplished without departing from the scope of the invention itself.