The present invention relates to machine vision systems and, more specifically, to aligning components having a plurality of non-uniformly spaced features using a machine vision system.
Advanced machine vision systems and their underlying software are increasingly employed in a variety of manufacturing and quality control processes. Machine vision enables quicker, more accurate and repeatable results to be obtained in the production of both mass-produced and custom products. Basic machine vision systems include one or more cameras (typically having solid-state charge-coupled device (CCD) imaging elements) directed at an area of interest, frame grabber/image processing elements that capture and transmit CCD images, a computer and display for running the machine vision software application and manipulating the captured images, and appropriate illumination of the area of interest.
Many applications of machine vision involve the inspection of components and surfaces for defects that affect quality. Where sufficiently serious defects are noted, the part or surface is marked as unacceptable/defective. Machine vision has also been employed in varying degrees to assist in manipulating manufacturing engines in the performance of specific tasks. Specifically, machine vision systems may be utilized for inspection of components along an assembly line to ensure that the components meet predefined criteria before insertion and/or assembly of the components into a finished product.
Machine vision systems are typically utilized in the alignment and inspection of components having a ball grid array (BGA) and/or flip chip form factor. BGA/flip chip components typically include a plurality of small solder balls on a mounting side of the component. The solder balls may then be soldered using ultrasound technology once the component is appropriately placed on a circuit board. Over the past few years, the number of balls on a flip chip has dramatically increased, so that current flip chip components may have on the order of 12,000 balls. Furthermore, modern flip chip components typically have solder balls that are less aligned to a grid pattern, i.e., the solder balls are non-uniformly spaced on the component.
Both of these trends complicate current machine vision systems that are utilized for alignment of flip chip designs. As the number of balls grows very large, current methods that rely on extracting balls or otherwise measuring ball features typically execute at a speed that is insufficient for run time. Furthermore, as the patterns of balls become more complex, search-based approaches to alignment may enter worst-case scenarios. This may occur because a small misalignment in translation or angle may still leave a majority of individual features matching, thereby increasing the probability of an incorrect match. Furthermore, flip chips often have strong body features present in an image obtained of the component. The body features are typically not precisely aligned with the solder ball pattern, which means that these features must not be used for alignment. However, existing machine vision tools are likely to use the strong body features for alignment rather than the ball features, thereby resulting in unsatisfactory accuracy of alignment.
Additionally, conventional machine vision systems utilized for flip chips typically require geometric descriptions. However, a noted disadvantage of such geometric descriptions is that training them is extremely slow when a flip chip has a non-repetitive pattern and/or a very large number of balls. As noted above, current trends in flip chip designs are increasing the number of balls and moving to non-repetitive, that is, non-grid-like, patterns. As such, conventional machine vision systems for alignment of flip chips are becoming progressively slower as the current trends in the design of flip chips continue.
The present invention overcomes the disadvantages of the prior art by providing a system and method for high-speed alignment of components having a plurality of non-uniformly spaced features. In accordance with an illustrative embodiment of the present invention, during training time of a machine vision system, a small subset of alignment significant blobs is determined, along with a quantum of geometric analysis for picking granularity. By utilizing only the alignment significant blobs and the geometric analysis, conventional alignment techniques may achieve significantly better speed and robustness for component alignment. In operation, during training time, grayscale blobs are extracted using a scale space search. Alignment significant blobs are then determined from the grayscale blobs. Once the alignment significant blobs are determined, appropriate run-time smoothing and down sampling values are determined for the alignment significant blobs. The machine vision system is then trained to operate with the alignment significant regions.
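The specification does not tie the scale space search to any particular implementation. Purely as illustration, the following minimal Python sketch shows one plausible rendering of the grayscale blob extraction step, assuming a scale-normalized Laplacian-of-Gaussian detector; the function name, the scale list and the threshold are hypothetical choices, not values taken from the specification.

```python
import numpy as np
from scipy import ndimage

def extract_grayscale_blobs(image, sigmas=(1.0, 2.0, 4.0, 8.0), threshold=0.05):
    """Sketch of grayscale blob extraction via a scale space search.

    Builds a scale-normalized Laplacian-of-Gaussian stack and keeps local
    extrema across space and scale. Returns (x, y, sigma) tuples.
    """
    image = image.astype(np.float64)
    # Absolute response detects both bright and dark blobs.
    stack = np.stack([
        sigma ** 2 * np.abs(ndimage.gaussian_laplace(image, sigma))
        for sigma in sigmas
    ])
    # Keep voxels that are the maximum of their 3x3x3 neighborhood in
    # (scale, row, column) and whose response clears the threshold.
    peaks = (stack == ndimage.maximum_filter(stack, size=3)) & (stack > threshold)
    return [(x, y, sigmas[s]) for s, y, x in zip(*np.nonzero(peaks))]
```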
In operation during run time, an image of the component is captured and then smoothed and down sampled using the previously determined values. A coarse alignment is performed, followed by a fine alignment. The fine alignment information is then output from the vision software. The coarse and fine alignments utilize only the previously identified alignment significant blobs, thereby providing higher speed and accuracy of alignment of the component.
The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements:
A. Machine Vision System
The image analysis system 125, illustratively programmed in accordance with the teachings of the present invention, provides for high-speed alignment of components by utilizing detection of alignment significant blobs in accordance with an illustrative embodiment of the present invention. The image analysis system 125 may have one or more central processing units (processors) 130, main memory 135, input/output systems 145 and one or more disk drives or other form of mass storage 140. Illustratively, the input/output system 145 interconnects with the communications path 120 between the capturing device 105 and the image analysis system 125. The system 125 may be configured by programming instructions in accordance with the teachings of the present invention to perform the novel high-speed component alignment of the present invention. As will be appreciated by those skilled in the art, alternative hardware and/or software configurations may be utilized to implement the principles of the present invention. Specifically, the teachings of the present invention may be implemented in software, hardware, firmware and/or any combination thereof. Furthermore, during run time, as opposed to training time, additional components may be included in the machine vision system 100. For example, objects 115 may be transported by a conveyor belt (not shown) or other assembly line apparatus, etc.
In accordance with an illustrative embodiment of the present invention, the machine vision system 100 may be utilized to generate the training model for a run-time machine vision system. Thus, the machine vision system 100 may be utilized to generate a training model that may then be deployed in a plurality of machine vision systems processing similar components.
It should be noted that while the present invention is described in terms of a machine vision system 100, the principles of the present invention may be utilized in a variety of differing embodiments. As such, the term machine vision system should be taken to include alternative systems. More generally, the principles of the present invention may be implemented on any system that aligns components. For example, one embodiment may involve a conventional machine vision system comprising a standalone camera operatively interconnected with a standalone computer programmed to process images, etc.
However, the principles of the present invention may be utilized in other devices and/or systems that align components based on images acquired of the component, for example, a vision sensor, such as the Checker product available from Cognex Corporation, or another device that comprises illumination sources, image acquisition capabilities and/or processing capabilities. Such vision sensors may be trained and/or configured via separate modules, such as a Cognex Vision View. In such embodiments, the user may train the vision sensor using a plurality of objects instead of a single object. The user may select a first object, place it in front of the sensor and indicate to the system that the training object is positioned. A second (third, etc.) object may be similarly trained. The user may control the training step using, e.g., a graphical user interface (GUI) and/or buttons or other control surfaces located on the training module and/or the vision sensor itself. Furthermore, the functionality of the present invention may be incorporated into handheld devices, wireless compatible devices, etc. As such, the term machine vision system should be interpreted broadly to encompass all such systems and devices that may utilize one or more of the teachings of the present invention.
B. Components Having Non-Uniformly Spaced Features
In illustrative embodiments of the present invention, the size, shape and orientation of regions lacking balls on a component may vary dramatically. Component 200 should be taken as an exemplary component for purposes of illustrating the principles of the present invention. However, as will be appreciated by one skilled in the art, the principles of the present invention may be utilized with components having any number of regions of missing and/or varying balls, including those components that utilize non-uniform ball densities, i.e., non-grid aligned solder ball patterns.
Once grayscale blobs have been extracted, the machine vision system then identifies alignment significant blobs in step 315. Alignment significant blobs are those blobs that cannot be easily mistaken for their neighbors when a pattern is shifted in various directions. More generally, blobs that are on boundaries of dense regions are typically alignment significant. One technique for determining alignment significance is described below. However, it should be noted that alternative techniques for determining alignment significance may be utilized in alternative embodiments of the present invention. As such, the alignment significance determination technique described herein should be taken as exemplary only.
For the purposes of an illustrative embodiment of an alignment significance determination, the immediate neighbors of a blob are defined to be those neighbors within a predefined constant factor of the given blob's nearest neighbor distance. In an illustrative embodiment, the predefined constant factor is 1.5. Thus, those blobs within 1.5 times a given blob's nearest neighbor distance are defined to be the blob's immediate neighbors. A blob is then defined as alignment significant if a sufficiently large angle, illustratively 135°, exists in which there are no immediate neighbors.
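Purely as illustration, the neighbor-and-angle test just described may be sketched as follows in Python, treating blob centers as 2-D points; the helper name is hypothetical, while the 1.5 factor and the 135° threshold are the illustrative values given above.

```python
import numpy as np

def is_alignment_significant(index, centers, neighbor_factor=1.5,
                             angle_threshold_deg=135.0):
    """Sketch of the alignment significance test described above.

    A blob is alignment significant if there is an angular sector at least
    `angle_threshold_deg` wide that contains no immediate neighbors, where
    immediate neighbors lie within `neighbor_factor` times the blob's
    nearest neighbor distance.
    """
    deltas = np.delete(centers, index, axis=0) - centers[index]
    dists = np.hypot(deltas[:, 0], deltas[:, 1])
    immediate = deltas[dists <= neighbor_factor * dists.min()]
    # Sort the bearings of the immediate neighbors and find the widest gap
    # between consecutive bearings, including the wrap-around gap.
    angles = np.sort(np.arctan2(immediate[:, 1], immediate[:, 0]))
    gaps = np.diff(angles, append=angles[0] + 2 * np.pi)
    return np.degrees(gaps.max()) >= angle_threshold_deg
```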
Thus, in accordance with step 315 of procedure 300, each blob in illustrative component 400 is examined for alignment significance. Exemplary blob 405 is examined first; however, as blob 405 is on the interior of a conventional checkerboard pattern, the largest angle range that is empty is approximately 45°. Therefore, blob 405 is deemed not to be alignment significant. Next, blob 410 is examined, and it is determined that the angle in which there are no neighbors is approximately 180°. As this is greater than the exemplary 135° threshold, blob 410 is deemed to be alignment significant. Finally, blob 415 is examined. As the largest angle without neighbors is approximately 90°, which is less than 135°, blob 415 is also deemed not to be alignment significant.
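Applying the hypothetical helper above to a uniform grid of blob centers reproduces this reasoning: an interior blob (analogous to blob 405) has eight immediate neighbors at 45° spacing, while a blob on the boundary (analogous to blob 410) faces an empty half plane of approximately 180°.

```python
import numpy as np

# Unit-spaced 10x10 grid of blob centers; diagonal neighbors (distance
# sqrt(2) < 1.5) count as immediate neighbors under the 1.5 factor.
centers = np.array([(x, y) for y in range(10) for x in range(10)], dtype=float)

interior = 5 * 10 + 5  # interior blob: largest empty angle is about 45 deg
edge = 5               # blob on the bottom edge: about 180 deg of empty angle

print(is_alignment_significant(interior, centers))  # False (45 deg < 135 deg)
print(is_alignment_significant(edge, centers))      # True (180 deg >= 135 deg)
```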
Returning to procedure 300, in step 320, the machine vision system determines appropriate run-time smoothing and down sampling for the image. Smoothing is performed to enable search methods to be robust by ensuring that a reasonable match score is attained. In step 325, an alignment mask is generated that includes only the alignment significant blobs. Illustratively, a plurality of alignment masks may be generated, with differing masks being specific to certain alignment tools and/or techniques. Once the alignment mask is generated, it is utilized to train one or more alignment tools in step 330. Such alignment tools may include, e.g., normalized correlation tools, edgelet-based tools, etc.
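The specification does not prescribe how the alignment mask is represented. As one loose sketch, one might assume a binary image in which each alignment significant blob contributes a disc whose radius scales with the blob's detected scale; the function name and the radius_scale parameter are hypothetical choices.

```python
import numpy as np

def build_alignment_mask(shape, significant_blobs, radius_scale=2.0):
    """Sketch of step 325: rasterize a binary mask covering only the
    alignment significant blobs; masked-out pixels are ignored when
    training the alignment tools in step 330.

    `significant_blobs` holds (x, y, sigma) tuples such as those produced
    by the extraction sketch above.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for x, y, sigma in significant_blobs:
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= (radius_scale * sigma) ** 2
    return mask
```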
The procedure 300 then completes in step 335. Illustratively, procedure 300 works to train a machine vision system by focusing only on alignment significant regions of a component. This reduces the complexity and increases the speed at which components can be aligned during run time.
A coarse alignment of the image is performed in step 620. A fine alignment of the image is then performed in step 625. The coarse and fine alignment steps 620, 625 may be combined into a single alignment step in alternative embodiments of the present invention. The coarse and fine alignment steps 620, 625 are illustratively implemented using conventional machine vision techniques with only the alignment significant regions masked in. This enables the alignment techniques to operate at sufficient speed to meet production requirements and to provide the necessary degree of accuracy. The fine alignment information is then output in step 630. The fine alignment information that is output may be used by, e.g., a robotic actuator (not shown) to properly align the flip chip prior to it being soldered in place. It should be noted that in an illustrative embodiment of the present invention, the coarse alignment step 620 is performed using a normalized correlation technique, while the fine alignment step 625 utilizes an edgelet-based matching technique. As such, any description of the coarse and fine alignment steps as using the same technique should be taken as exemplary only. The procedure 600 then completes in step 635.
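For concreteness, the run-time path may be sketched as below, with OpenCV's masked normalized correlation standing in for the coarse alignment tool of step 620. This is a translation-only sketch assuming 8-bit grayscale images; the edgelet-based fine refinement of step 625 is left as a placeholder, since the specification does not detail it.

```python
import cv2
import numpy as np

def run_time_align(image, template, mask, sigma=2.0, downsample=4):
    """Sketch of the run-time path: smooth and down sample using the values
    chosen at training time, then coarsely align with only the alignment
    significant regions masked in.
    """
    def prepare(img):
        img = cv2.GaussianBlur(img, (0, 0), sigma)
        return cv2.resize(img, None, fx=1.0 / downsample, fy=1.0 / downsample,
                          interpolation=cv2.INTER_AREA)

    small_image = prepare(image)
    small_template = prepare(template)
    small_mask = cv2.resize(mask.astype(np.uint8) * 255, None,
                            fx=1.0 / downsample, fy=1.0 / downsample,
                            interpolation=cv2.INTER_NEAREST)

    # Masked normalized correlation: pixels where small_mask is zero do not
    # contribute to the match score (coarse alignment, step 620).
    scores = cv2.matchTemplate(small_image, small_template,
                               cv2.TM_CCORR_NORMED, mask=small_mask)
    _, _, _, best = cv2.minMaxLoc(scores)

    # Coarse pose in full-resolution pixels; an edgelet-based fine alignment
    # (step 625, not shown) would refine this estimate before output.
    return best[0] * downsample, best[1] * downsample
```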
The foregoing description has been directed to particular embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. Additionally, the procedures, processes and/or modules described herein may be implemented in hardware, software, embodied as a computer-readable medium having program instructions, firmware, or a combination thereof. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.