Machine vision methods for image segmentation using multiple images

Information

  • Patent Grant
  • Patent Number
    6,396,949
  • Date Filed
    Thursday, June 15, 2000
  • Date Issued
    Tuesday, May 28, 2002
Abstract
Machine vision methods for segmenting an image include the steps of generating a first image of the background of an object, generating a second image of the object and background, and subtracting the second image from the first image. The methods are characterized in that the second image is generated such that subtraction of it from the first image emphasizes the object with respect to the background.
Description




RESERVATION OF COPYRIGHT




The disclosure of this patent document contains material which is subject to copyright protection. The owner thereof has no objection to facsimile reproduction by anyone of the patent document or of the patent disclosure, as it appears in the United States Patent and Trademark Office patent file or records, but otherwise reserves all rights under copyright law.




BACKGROUND OF THE INVENTION




The invention pertains to machine vision and, more particularly, to methods for image segmentation, object identification, and defect detection.




In automated manufacturing, it is often important to determine the location, shape, size and/or angular orientation of an object being processed or assembled. For example, in automated wire bonding of integrated circuits, the precise location of leads in the “lead frame” and pads on the semiconductor die must be determined before wire bonds can be soldered to them.




Although the human eye can readily distinguish between objects in an image, this has not historically been the case for computerized machine vision systems. In the field of machine vision, the task of analyzing an image to isolate and identify its features is referred to as image segmentation. In an image of a lead frame, image segmentation can be employed to identify pixels in the image representing the leads, as well as those representing all other features, i.e., “background.” By assigning values of “1” to the pixels representing leads, and by assigning values of “0” to the background pixels, image segmentation facilitates analysis of the image by other machine vision tools, such as “connectivity” analysis.




The prior art suggests a number of techniques for segmenting an image. Thresholding, for example, involves identifying image intensities that distinguish an object (i.e., any feature of interest) from its background (i.e., any feature not of interest). For example, in an image of a lead frame, thresholding can be used to find an appropriate shade of gray that distinguishes each pixel in the image as object (i.e., lead) or background, thereby completing segmentation. More complex thresholding techniques generate multiple threshold values that additionally permit the object to be identified.
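By way of illustration only (not part of the original disclosure), single-value thresholding of a gray-scale image can be sketched in Python with NumPy as follows; the threshold value and the assumption that the object is brighter than the background are arbitrary:

    import numpy as np

    def threshold_segment(gray_image, threshold=128):
        """Assign 1 to pixels at or above the threshold (object) and 0 to the
        rest (background); invert the comparison if the object is darker."""
        return (np.asarray(gray_image) >= threshold).astype(np.uint8)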




Connectivity analysis is employed to isolate the features in a thresholded image. This technique segregates individual features by identifying their component pixels, particularly, those that are connected to each other by virtue of horizontal, vertical or diagonal adjacency.




Though the segmentation techniques described above are useful in isolating features of simple objects, they are often of only limited value in identifying objects with complex backgrounds. This typically arises in defect detection, that is, in segmenting images to identify defects on visually complicated surfaces, such as the surface of a semiconductor die, a printed circuit board, and printed materials. In these instances, segmentation is used to isolate a defect (if any) on these complex surfaces. If the surface has no defects, segmentation should reveal no object and only background. Otherwise, it should reveal the defect in the image as clusters of 1's against a background of 0's.




To aid in segmenting complicated images, the prior art developed golden template comparison (GTC). This is a technique for locating defects by comparing a feature under scrutiny (to wit, a semiconductor die surface) to a good image—or golden template—that is stored in memory. The technique subtracts the good image from the test image and analyzes the difference to determine if the expected object (e.g., a defect) is present. For example, upon subtracting the image of a good pharmaceutical label from a defective one, the resulting “difference” image would reveal missing words and portions of characters.




Before GTC inspections can be performed, GTC must be “trained” so that the golden template can be stored in memory. To this end, the GTC training functions are employed to analyze several good samples of a scene to create a “mean” image and a “standard deviation” image. The mean image is a statistical average of all the samples analyzed by the training functions. It defines what a typical good scene looks like. The standard deviation image defines those areas on the object where there is little variation from part to part, as well as those areas in which there is great variation from part to part. This latter image permits GTC's runtime inspection functions to use less sensitivity in areas of greater expected variation, and more sensitivity in areas of less expected variation. In all cases, the edges present in the parts give rise to a large standard deviation as a result of discrete pixel registration requirements, thus decreasing sensitivity in those regions.




At runtime, a system employing GTC captures an image of a scene of interest. Where the position of that scene is different from the training position, the captured image is aligned, or registered, with the mean image. The intensities of the captured image are also normalized with those of the mean image to ensure that variations in illumination do not adversely affect the comparison.




The GTC inspection functions then subtract the registered, normalized, captured image from the mean image to produce a difference image that contains all the variations between the two. That difference image is then compared with a “threshold” image derived from the standard deviation image. This determines which pixels of the difference image are to be ignored and which should be analyzed as possible defects. The latter are subjected to morphology, to eliminate or accentuate pixel data patterns and to eliminate noise. An object recognition technique, such as connectivity analysis, can then be employed to classify the apparent defects.
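The runtime comparison described above can be sketched roughly as follows (a simplified illustration, not the actual GTC implementation; registration and normalization are assumed to have been performed already, and all names are hypothetical):

    import numpy as np

    def gtc_compare(captured, mean_image, threshold_image):
        """Subtract the golden-template (mean) image from the registered,
        normalized captured image and keep only differences exceeding the
        per-pixel threshold derived from the standard-deviation image."""
        diff = np.abs(captured.astype(np.int16) - mean_image.astype(np.int16))
        # Pixels under the local threshold are ignored; the rest are candidate
        # defects to be passed on to morphology and connectivity analysis.
        return (diff > threshold_image).astype(np.uint8)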




Although GTC inspection tools have proven quite successful, they suffer some limitations. For example, except in unusual circumstances, GTC requires registration—i.e., that the image under inspection be registered with the template image. GTC also uses a standard deviation image for thresholding, which can result in a loss of resolution near edges due to high resulting threshold values. GTC is, additionally, limited to applications where the images are repeatable: it cannot be used where image-to-image variation results from changes in size, shape, orientation and warping.




An object of this invention, therefore, is to provide improved methods for machine vision and, more particularly, improved methods for image segmentation.




A further object is to provide such methods that can be used for defect identification.




Yet another object is to provide such methods that can be used in segmenting and inspecting repeatable, as well as non-repeatable, images.




Yet still another object is to provide such methods that do not routinely necessitate alignment or registration of an image under inspection with a template image.




Still yet a further object of the invention is to provide such methods that do not require training.




Still other objects of the invention include providing such machine vision methods as can be readily implemented on existing machine vision processing equipment, and which can be implemented for rapid execution without excessive consumption of computational power.




SUMMARY OF THE INVENTION




The foregoing objects are among those achieved by the invention which provides, in one aspect, a machine vision method for segmenting an image. The method includes the steps of generating a first image of at least the “background” of an object, generating a second image of the object and background, and subtracting the second image from the first image. The method is characterized in that the second image is generated such that subtraction of it from the first image emphasizes the object with respect to the background. As used here and throughout, unless otherwise evident from context, the term “object” refers to features of interest in an image (e.g., a defect), while the term “background” refers to features in an image that are not of interest (e.g., surface features on the semiconductor die on which the defect appears).




In related aspects of the invention, the second step is characterized as generating the second image such that its subtraction from the first image increases a contrast between the object and the background. That step is characterized, in still further aspects of the invention, as being one that results in object-to-background contrast differences in the second image that are of opposite polarity from the object-to-background contrast differences in the first image.




In further aspects, the invention calls for generating a third image with the results of the subtraction, and for isolating the object on that third image. Isolation can be performed, according to other aspects of the invention, by connectivity analysis, edge detection and/or tracking, and by thresholding. In the latter regard, a threshold image—as opposed to one or two threshold values—can be generated by mapping image intensity values of the first or second image. That threshold image can, then, be subtracted from the third image (i.e., the difference image) to further isolate the object.




Still further aspects of the invention provide for normalizing the first and second images before subtracting them to generate the third image. In this aspect, the invention determines distributions of intensity values of each of the first and second images, applying mapping functions to one or both of them in order to match the tails of those distributions. The first and second images can also be registered prior to subtraction.




According to further aspects of the invention, the first and second images are generated by illuminating the object and/or its background with different respective light or emission sources. This includes, for example, illuminating the object from the front in order to generate the first image, and illuminating it from behind in order to generate the second image. This includes, by way of further example, illuminating the object and its background with direct, on-axis lighting to generate the first image, and illuminating it with diffuse, off-axis or grazing light to generate the second image. This includes, by way of still further example, illuminating the object with different wavelengths of light (e.g., red and blue) for each of the respective images, or capturing reflections of different orientations (e.g., polarized and unpolarized) from the object.




Additional aspects of the invention provide methods incorporating various combinations of the foregoing aspects.




These and other aspects of the invention are evident in the drawings and in the descriptions that follow.











BRIEF DESCRIPTION OF THE DRAWINGS




A better understanding of the invention may be attained by reference to the drawings in which:





FIG. 1 depicts a machine vision system for practice of the invention;

FIGS. 2A-2C depict illumination arrangements for generating images analyzed in accord with the invention;

FIGS. 3A-3F depict sample images (and their difference images) generated by the lighting arrangements shown in FIGS. 2A-2B; and

FIG. 4 depicts a methodology for machine image segmentation according to the invention.











DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENT





FIG. 1 illustrates a system 5 for determining machine vision image segmentation according to the invention. The system 5 includes a capturing device 10, such as a conventional video camera (such as the Sony XC75 camera with COSMICAR lens) or scanner, that generates an image of a scene including an object 1. Image data (or pixels) generated by the capturing device 10 represent, in the conventional manner, the image intensity (e.g., color or brightness) of each point in the scene at the resolution of the capturing device. The illustrated object is illuminated by on-axis light 7 and ring light 8 for generation of multiple images for segmentation in accord with methods discussed herein.




The digital image data is transmitted from capturing device 10 via a communications path 11 to an image analysis system 12. This can be a conventional digital data processor, or a vision processing system (such as the Cognex 5400) of the type commercially available from the assignee hereof, Cognex Corporation, programmed in accord with the teachings hereof to perform image segmentation. The image analysis system 12 may have one or more central processing units 13, main memory 14, input-output system 15, and disc drive (or other mass storage device) 16, all of the conventional type.




The system 12 and, more particularly, central processing unit 13, is configured by programming instructions according to the teachings hereof for image segmentation, as described in further detail below. Those skilled in the art will appreciate that, in addition to implementation on a programmable digital data processor, the methods and apparatus taught herein can be implemented in special purpose hardware.





FIG. 2A illustrates an arrangement of emission sources according to the invention for on-axis and diffuse (or grazing) light illumination of a circuit element 20, e.g., a semiconductor die. The arrangement includes lighting sources 22 and 24 for illuminating the surface of element 20. Lighting source 22 provides direct, on-axis lighting, via reflection off a half-silvered, partially transparent, angled, one-way mirror 28. Lighting source 24 provides diffuse, off-axis lighting, or grazing light, for illuminating the object. Images of the illuminated element 20 are captured by camera 26.




Lighting source 22 is of the conventional type known in the art for on-axis illumination of objects under inspection in a machine vision application. A preferred such light is a diffused on-axis light (DOAL) commercially available from Dolan Jenner. The source 22 is positioned to cause objects (i.e., potential defects) on the surface of element 20 to appear as dark features against a light background.




Lighting source 24 is also of a conventional type known in the art for use in providing diffuse, off-axis light or grazing light in machine vision applications. One preferred source 24 is an arrangement of several point light sources, e.g., fiber optic bundles, or line lights, disposed about element 20. Another preferred such lighting source 24 is a ring light and, still more preferably, a ring light of the type disclosed in commonly assigned U.S. Pat. No. 5,367,439. The lighting source 24 is positioned to illuminate the surface of element 20 in such a way as to cause objects (i.e., potential defects) thereon to appear as light features against a dark background.




Other lighting sources known in the art can be used in place of on-axis source 22 and ring light source 24 to illuminate a surface under inspection. Considerations for selection and positioning of the sources 22, 24 are that objects thereon, e.g., expected defects, appear differently (if at all) with respect to the background when illuminated by each respective source 22, 24.




More particularly, the lighting sources 22, 24 are selected and positioned such that the subtraction of an image captured by camera 26 when the surface is illuminated by one of the sources (e.g., 22) from an image captured by camera 26 when the surface is illuminated by the other source (e.g., 24) emphasizes objects on that surface—e.g., by increasing the contrast between the object and the background (i.e., the remainder of the surface).




Put another way, the lighting sources 22, 24 are selected and positioned in such a way that an image generated by camera 26 when the surface is illuminated by one source has an object-to-background contrast of opposite polarity from the object-to-background contrast of an image generated by camera 26 when the surface is illuminated by the other source.




Thus, for example, in a preferred arrangement to detect defects on the surface of a semiconductor die or leads of its package (or lead frame)—and, particularly, unwanted adhesive patches on those dies or leads—the on-axis lighting source 22 is selected and positioned (in conjunction with mirror 28) to cause the defect to be dark on a light background (e.g., “positive” object-to-background contrast polarity), while the diffuse ring light 24 is selected and positioned to make the same defect appear light on a dark background (e.g., “negative” object-to-background contrast polarity).





FIG. 3A similarly depicts an image generated by camera 26 when a defective semiconductor die (e.g., a die with adhesive on its surface) is illuminated by ring light or grazing light source 24. As shown in the illustration, the ring/grazing light reveals the adhesive as light patches 60, 62 on a dark background.





FIG. 3B depicts an image of the type generated by camera 26 when that same semiconductor die 20 is illuminated by on-axis lighting source 22. As shown in the drawing, the on-axis lighting reveals adhesive patches 60, 62 on the die surface as dark patches on a light background.





FIG. 3C reveals a result according to the invention of subtracting the images generated by camera 26 under these two separate lighting conditions. Put another way, FIG. 3C represents the result of subtracting the image of FIG. 3B from the image of FIG. 3A. In FIG. 3C, the defects on the semiconductor die surface 20 are revealed as very light patches against a very dark background, as indicated by dashed lines. (Note that this figure shows the output of the subtraction after remapping step 114, described below.)




As a consequence of the manner in which the defective semiconductor die 20 is illuminated by the illustrated embodiment for purposes of generating the images of FIGS. 3A and 3B, the difference image of FIG. 3C emphasizes the contrast between the defects 60, 62 and the background (i.e., die 20).





FIG. 4 illustrates a method for image segmentation according to the invention. That method is described below with respect to an embodiment that uses on-axis/grazing illumination and that segments semiconductor die images to identify adhesive patches (i.e., defects) on the die surfaces (as shown, by way of example, in FIGS. 3A-3C). This same methodology can be employed to detect adhesive patches on the leads of the die package (or lead frame), as well as in a host of other applications.




In step 100, the method acquires an image of the semiconductor die with lighting source 24 or other grazing light. Likewise, in step 102, the method acquires an image of the semiconductor die with on-axis light source 22. Though these images can be acquired at any time—though not concurrently—they are typically acquired at about the same time. This reduces the risk that the object will be moved between acquisitions and, thereby, removes the need to register the images.




In the discussion that follows, the image acquired in step 100 is referred to as “Image 1,” while the image acquired in step 102 is referred to as “Image 2.” Although the discussion herein is directed toward subtraction of Image 2 from Image 1, those skilled in the art will likewise appreciate that Image 1 can be subtracted from Image 2. Preferably, Image 2 is subtracted from Image 1 in instances where the object is lighter than the background in Image 1, and where the object is darker than the background in Image 2. Conversely, Image 1 is preferably subtracted from Image 2 in instances where the object is lighter than the background in Image 2, and where the object is darker than the background in Image 1.




In optional step 104, the method registers the images to ensure alignment of the features therein. Though not necessary in many instances, this step is utilized if the semiconductor die or camera is moved between image acquisitions. Image registration can be performed, for example, by a two-dimensional cross-correlation of images, in the manner disclosed in Jain, Fundamentals of Digital Image Processing (Prentice Hall 1989) at Chapter 2, the teachings of which are incorporated herein by reference.
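One common way to carry out such a two-dimensional cross-correlation is via the FFT. The sketch below is an assumption about implementation detail (it is not taken from the patent or from Jain) and recovers only an integer-pixel translation:

    import numpy as np

    def registration_shift(image1, image2):
        """Return the (row, col) shift that, applied to image2 (e.g., with
        np.roll), best aligns it with image1, found at the peak of their
        2-D cross-correlation."""
        corr = np.fft.ifft2(np.fft.fft2(image1) * np.conj(np.fft.fft2(image2))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint wrap around and correspond to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    def register(image1, image2):
        """Shift image2 so that its features line up with image1."""
        dr, dc = registration_shift(image1, image2)
        return np.roll(np.roll(image2, dr, axis=0), dc, axis=1)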




In steps 104 and 106, the method windows Images 1 and 2. These steps, which are optional, reduce the area (or pixels) of the respective images under consideration and, thereby, reduce processing time and/or computational resources. These steps can be performed by selecting the relevant subset of the pixel array of each image.




In steps 108 and 110, the method normalizes the (windowed) images. These optional steps, which compensate for overall differences in image intensity, can be performed by any technique known in the art. Preferably, however, normalization is global, using a map derived from the global statistics of the (windowed) images. The map is defined to match the extrema (or tails) of the statistical distributions of both images.
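A minimal sketch of such a tail-matching normalization, assuming a simple linear map between low and high percentiles of the two distributions (the percentile choice and the 8-bit range are assumptions, not specified by the patent):

    import numpy as np

    def normalize_to(reference, image, low=1.0, high=99.0):
        """Linearly remap `image` so that the tails (low/high percentiles) of
        its intensity distribution match those of `reference`."""
        r_lo, r_hi = np.percentile(reference, [low, high])
        i_lo, i_hi = np.percentile(image, [low, high])
        scale = (r_hi - r_lo) / max(i_hi - i_lo, 1e-6)
        mapped = (image.astype(np.float64) - i_lo) * scale + r_lo
        return np.clip(mapped, 0, 255).astype(np.uint8)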




In step 112, the method generates a difference image, Image 3, by subtracting Image 2 from Image 1. This subtraction is performed in the conventional manner known in the art. Objects in Image 3, i.e., the “difference” image, can be isolated by standard techniques such as connectivity analysis, edge detection and/or tracking, and by thresholding. The latter technique is preferred, as discussed below.
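In code, the subtraction is conveniently done in signed arithmetic so that negative differences survive for the remapping of step 114 (a sketch; the 16-bit intermediate is our assumption for 8-bit inputs):

    import numpy as np

    def difference_image(image1, image2):
        """Form Image 3 = Image 1 - Image 2 as a signed array so that negative
        differences are neither wrapped around nor silently clipped."""
        return image1.astype(np.int16) - image2.astype(np.int16)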




In step 114, the method maps Image 3 to remove any negative difference values (i.e., negative pixel values) resulting from the subtraction. It also can be used to normalize (or rescale) the difference image to facilitate later stages of processing. This step, which can be performed in a conventional manner known in the art, is optional.
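One plausible remapping, offered only as an example (the patent leaves the exact map open), clips negative values to zero and rescales the remainder to the 8-bit range:

    import numpy as np

    def remap_difference(diff):
        """Remove negative difference values and rescale the result to 0..255."""
        positive = np.clip(diff, 0, None).astype(np.float64)
        peak = positive.max()
        if peak > 0:
            positive *= 255.0 / peak
        return positive.astype(np.uint8)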




In step 116, the method performs morphology on the difference image. Morphology, which is well known in the art, is a technique for eliminating or accentuating data in the difference image, e.g., by filtering out variations due to video noise or small defects. This can be performed, for example, in a manner disclosed by Jain, supra, at Chapter 9.9, the teachings of which are incorporated herein by reference.
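As one hedged example of such filtering (not the particular morphology prescribed by the patent), a small gray-scale opening with SciPy's ndimage module suppresses isolated noise pixels while preserving larger patches; the structuring-element size is an arbitrary choice:

    from scipy import ndimage

    def clean_difference(diff_image, size=3):
        """Suppress small, noise-like responses in the difference image while
        preserving larger connected regions (candidate defects)."""
        return ndimage.grey_opening(diff_image, size=(size, size))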




In step 118, the method thresholds, or binarizes, the image to distinguish or isolate objects of interest, such as adhesive patches on the die surface or package leads. Thresholding can be performed in the conventional manner known in the art. Thus, for example, a single threshold intensity value can be determined from a histogram of Image 3. Preferably, however, the threshold intensity value is predetermined, i.e., based on empirical analysis of prior images.
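A rough sketch of deriving a single threshold from the image's intensity statistics, here using the standard iterative (isodata-style) split rather than any method prescribed by the patent:

    import numpy as np

    def histogram_threshold(image, iterations=20):
        """Pick a global threshold by repeatedly splitting the intensities
        into two groups and averaging the group means."""
        img = image.astype(np.float64)
        t = img.mean()
        for _ in range(iterations):
            lower, upper = img[img <= t], img[img > t]
            if lower.size == 0 or upper.size == 0:
                break
            t = 0.5 * (lower.mean() + upper.mean())
        return t

    def binarize(image, threshold):
        """Map pixels above the threshold to 1 (object) and the rest to 0."""
        return (image > threshold).astype(np.uint8)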




In certain applications, use of a high global threshold intensity value will result in portions of the object of interest being interpreted as background and, therefore, will result in poor segmentation. Likewise, use of a low global threshold intensity value will result in background being interpreted as objects of interest. To overcome this, the method includes an optional step of thresholding using a threshold image generated by mapping Image 2; see step 120. That threshold image is made up of pixels representing local threshold values.
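The patent leaves the form of that mapping open; one simple possibility, purely for illustration, is an affine map of Image 2's intensities so that brighter regions receive proportionally higher local thresholds (the gain and offset are hypothetical, application-tuned parameters):

    import numpy as np

    def threshold_image_from(image2, gain=0.5, offset=10.0):
        """Map Image 2, pixel by pixel, into an image of local threshold values."""
        mapped = image2.astype(np.float64) * gain + offset
        return np.clip(mapped, 0, 255).astype(np.uint8)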




In instances where a threshold image is used (e.g., in the bottle inner side wall inspection and the photofilm envelope inspection described below), binarization step 118 involves subtracting the threshold image from Image 3, then mapping positive differences to 1 (indicating object) and negative differences to zero (indicating background).
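That comparison amounts to a per-pixel subtraction followed by a sign test, as in the following sketch consistent with the description above:

    import numpy as np

    def binarize_with_threshold_image(image3, threshold_image):
        """Subtract the threshold image from Image 3 and map positive
        differences to 1 (object) and the rest to 0 (background)."""
        diff = image3.astype(np.int16) - threshold_image.astype(np.int16)
        return (diff > 0).astype(np.uint8)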




Following binarization, the method of step 122 conducts connectivity analysis to determine the properties of any objects in the binarized image. Those properties, which include size, position, orientation, and principal moments, can be used to determine whether the object is indeed a defect requiring rejection of the semiconductor die.
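A compact way to perform such connectivity analysis is with SciPy's labelling routines. The sketch below reports only a subset of the properties mentioned (area, centroid, bounding box), uses 8-connectivity to match the horizontal/vertical/diagonal adjacency described earlier, and applies an arbitrary minimum-area cut-off as the defect test:

    import numpy as np
    from scipy import ndimage

    def analyze_blobs(binary_image, min_area=25):
        """Label connected regions of 1's and report basic blob properties,
        flagging regions large enough to count as defects."""
        labels, _ = ndimage.label(binary_image, structure=np.ones((3, 3)))
        blobs = []
        for index, box in enumerate(ndimage.find_objects(labels), start=1):
            area = int(np.count_nonzero(labels[box] == index))
            blobs.append({
                "area": area,
                "centroid": ndimage.center_of_mass(binary_image, labels, index),
                "bounding_box": box,
                "is_defect": area >= min_area,
            })
        return blobs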




Described above are embodiments of the invention employing direct and grazing light sources to segment images of a semiconductor die to identify defects thereon. Those skilled in the art will, of course, appreciate that such lighting arrangements and methodologies can be applied in segmenting and identifying a wide range of objects of interest.




The use of additional lighting arrangements permits segmentation and object identification in still further applications. For example, FIG. 2B illustrates an arrangement employing front and back lighting to inspect the inner side wall of a bottle 30. In this regard, the prior art employs a camera “looking downward” from the top of the bottle to inspect the inner side wall. This represents an attempt to inspect behind the bottle label 31, which typically fully circumscribes the bottle. However, use of the downward-looking camera—and, significantly, its wide angle lens—results in a large amount of geometric distortion.




As shown in FIG. 2B, the illustrated arrangement uses a side-viewing camera with intense back lighting to “see through” the label and, thereby, permit detection of unwanted objects (which tend to be opaque) on the inner side wall. In this arrangement, a front lit image of the bottle shows its front label and represents the effective “background.” By subtracting that front lit image from the back-lit image, any objects on the side wall can be readily discerned. Those skilled in the art will, of course, appreciate that it is generally not necessary to subtract an image of the back label itself, since the glass and bottle hollow tend to diffuse (and thereby remove) any features it might present in the back-lit image.




In the drawing, there are shown back lighting illumination source 32 and front lighting illumination source 34. The lighting source 32 is of the type described above in connection with source 22. The lighting source 34 is of the type described above in connection with source 24. The light 32 is selected to provide sufficient intensity to permit back-lighting of the entire inner side wall of the bottle, including that portion beneath the label 31. The front light 34 is selected and positioned to provide sufficient intensity to illuminate label 31. Camera 38 is of the conventional type known in the art.




The camera 38 and lighting sources 32, 34 are beneficially employed in accord with the invention to generate two images of the bottle that can be subtracted from one another to reveal any defects (e.g., cigarette butts, spiders, bugs, etc.) behind the label 31. To this end, a method as illustrated in FIG. 4 is used to acquire a first image (Image 1) of the bottle as illuminated by the front light 34; c.f., step 100. Image 1 shows the “background” alone, i.e., the print on the front label. The method is likewise used to acquire a second image (Image 2) of the bottle as illuminated by back light 32; c.f., step 102. Image 2 shows the background and object, i.e., the print on the front label as well as any defect on the inner side wall. Because of the dispersive effect of the glass and bottle hollow, print on the back label does not appear in Image 2.





FIG. 3D depicts an image of the type resulting from back lighting bottle 30 with source 32. FIG. 3E depicts the image resulting from front lighting the bottle 30 with source 34. FIG. 3F depicts a difference image of the type produced by subtracting the image of FIG. 3E from the image of FIG. 3D.




As noted, the methodology of FIG. 4 can be applied to segment and identify defects in images of the types depicted in FIGS. 3D and 3E. Depending on the nature of the label 31, it is typically necessary to utilize an image map of the type generated in step 120, as opposed to a single threshold value. This prevents defects from being obscured (or falsely indicated) as a result of labelling.




The front/back lighting arrangement of FIG. 2B can be used in applications other than bottle inner side wall inspection. For example, that lighting arrangement and the foregoing methodology can be used to identify film cartridges in sealed envelopes. The backlighting reveals any film cartridge in the envelope and the printing on the front of the envelope, while the front lighting reveals only the print on the front of the envelope. As above, the printing on the back of the envelope is diffused and, hence, does not appear in the backlit image. A further appreciation of this application of the methodology may be attained by reference to the Attachment filed herewith.




In further embodiments, the invention contemplates an image capture arrangement as shown in FIG. 2C. Here, rather than employing two lighting sources, a system according to the invention captures light reflected from the element 40 under inspection in two different wavelengths. For this purpose, the object is illuminated by a single light source 42, which can be, for example, a white light. Reflections from the object captured by camera 26 can be filtered to capture the differing wavelengths. Such filtering can be provided, e.g., by filters 48, 50, which are selected such that objects on the surface of element 40 appear differently (if at all) with respect to the background when the filtered light is captured by the camera 46.




In addition to capturing light of differing wavelengths, filters 48 and 50 can capture light of differing orientations. To this end, they can be polarizing lenses of differing orientation for capturing light from source 42 (which may also be polarized) that is reflected off element 40.




Described above are machine vision methods meeting the objects set forth. These methods provide improved machine vision image segmentation and object identification, overcoming the deficiencies of prior art segmentation techniques, such as GTC. For example, apart from instances where an illuminated object is moved between image captures, the method does not require registration of images prior to subtraction. Nor does the method require training. Still further, the method is applicable to a wide range of repeatable and nonrepeatable images.




It will be appreciated that the embodiments described above are illustrative only and that additional embodiments within the ken of those of ordinary skill in the art fall within the scope of the invention. Thus, for example, it will be appreciated that the lighting arrangements illustrated in FIGS. 2A-2C are merely by way of example, and that other lighting arrangements which result in difference images with greater object/background contrast may also be employed. Moreover, as noted above, although the discussion herein primarily refers to subtraction of Image 2 from Image 1, those skilled in the art will likewise appreciate that Image 1 can, alternatively, be subtracted from Image 2 with like success (albeit with a reversal of “polarity” in the resulting image).



Claims
  • 1. A machine vision method for inspecting an object, comprising the steps of:illuminating the object with an illumination source selected from a group of illumination sources including (i) a first source that illuminates the object along a direction of a first axis, and (ii) a second source that illuminates the object from an angle other than along the direction of the first axis, and generating a first image of the object with an image capture device while the object is so illuminated, the image capture device being oriented for capturing the first image in the direction of the first axis; illuminating the object with another illumination source selected from the aforesaid group, and generating a second image of the object with an image capture device while it is so illuminated, the image capture device being oriented for capturing the second image in the direction of the first axis; and subtracting the second image from the first image to form a third image that increases a contrast between the object and a background thereof.
  • 2. A method according to claim 1, wherein the step of generating the second image includes the step of generating that image such that subtraction of the second image from the first image increases a contrast between the object and the background.
  • 3. A method according to claim 1, comprising the step of isolating the object within the third image.
  • 4. A method according to claim 3, where the isolating step comprises the step of performing connectivity analysis on the third image to distinguish the object from the background.
  • 5. A method according to claim 3, wherein the isolating step comprises the step of detecting and tracking edges in the third image to isolate the object.
  • 6. A method according to claim 3, wherein the isolating step comprises the step of thresholding the third image to distinguish at least one of the object and its edges from the background.
  • 7. A method according to claim 6, wherein the thresholding step comprises the step of determining an intensity threshold value that distinguishes at least one of the object and its edges from the background.
  • 8. A method according to claim 6, comprising the steps of generating a threshold image from at least one of the first and second images, the threshold image having pixels representing local threshold intensity values; and using the threshold image to distinguish, in the third image, at least one of the object and its edges from the background.
  • 9. A method according to claim 8, wherein the step of generating the threshold image includes the step of mapping image intensity values in the second image to generate the threshold image.
  • 10. A method according to claim 8, wherein the step of using the threshold image includes the step of subtracting the threshold image from the third image.
  • 11. A method according to claim 1, comprising the step of normalizing at least one of the first and second images before the subtracting step.
  • 12. A method according to claim 11, wherein the normalizing step includes the steps of determining distributions of intensity values of each of the first and second images; generating a mapping function for matching extrema of those distributions; and transforming the intensity values of at least one of the first and second images with that mapping function.
  • 13. A method according to claim 1, including the step of generating the first and second images with light of different respective polarizations.
  • 14. A method according to claim 1, including the step of generating the first and second images by illuminating the semiconductor device with emissions in different respective wavelengths.
  • 15. A method according to claim 1, including the further step of registering the first and second images with one another before the subtracting step.
  • 16. A machine vision method for inspecting an object, comprising the steps of:illuminating the object with an illumination source selected from a group of illumination sources including (i) a first source that illuminates the object along a direction of a first axis, and (ii) a second source that illuminates the object from an angle other than along the direction of the first axis; and generating a first image of the object with an image capture device while the object is so illuminated, the image capture device being oriented for capturing the first image in the direction of the first axis; illuminating the object with another illumination source selected from the aforesaid group, and generating a second image of the object with an image capture device while it is so illuminated, the image capture device being oriented for capturing the second image in the direction of the first axis; isolating the object from the background in the third image by any of segmentation, edge detection and tracking, connectivity analysis, and thresholding.
  • 17. A machine vision method for inspecting an object, comprising the steps of:lighting the object with a source selected from a group of sources including (i) a first source that lights the object along a direction of a first axis, and (ii) a second source that lights the object from an angle other than along the direction of the first axis, and generating a first image of the object with an image capture device while the object is so lighted, the image capture device being oriented for capturing the first image in the direction of the first axis; lighting the object with another source selected from the aforesaid group, and generating a second image of the object with an image capture device while it is so lighted, the image capture device being oriented for capturing the second image in the direction of the first axis; and subtracting the second image from the first image to form a third image that increases a contrast between the object and a background thereof.
  • 18. A method according to claim 17, wherein the step of generating the second image includes the step of generating that image such that subtraction of the second image from the first image increases a contrast between the object and the background.
  • 19. A method according to claim 17, comprising the step of isolating the object within the third image.
  • 20. A method according to claim 19, where the isolating step comprises the step of performing connectivity analysis on the third image to distinguish the object from the background.
  • 21. A method according to claim 19, wherein the isolating step comprises the step of detecting and tracking edges in the third image to isolate the object.
  • 22. A method according to claim 19, wherein the isolating step comprises the step of thresholding the third image to distinguish at least one of the object and its edges from the background.
  • 23. A method according to claim 22, wherein the thresholding step comprises the step of determining an intensity threshold value that distinguishes at least one of the object and its edges from the background.
  • 24. A method according to claim 22, comprising the steps of generating a threshold image from at least one of the first and second images, the threshold image having pixels representing local threshold intensity values; and using the threshold image to distinguish, in the third image, at least one of the object and its edges from the background.
  • 25. A method according to claim 24, wherein the step of generating the threshold image includes the step of mapping image intensity values in the second image to generate the threshold image.
  • 26. A method according to claim 24, wherein the step of using the threshold image includes the step of subtracting the threshold image from the third image.
  • 27. A method according to claim 17, comprising the step of normalizing at least one of the first and second images before the subtracting step.
  • 28. A method according to claim 27, wherein the normalizing step includes the steps of determining distributions of intensity values of each of the first and second images; generating a mapping function for matching extrema of those distributions; and transforming the intensity values of at least one of the first and second images with that mapping function.
  • 29. A method according to claim 17, including the step of generating the first and second images with light of different respective polarizations.
  • 30. A method according to claim 17, including the step of generating the first and second images by lighting the semiconductor device with emissions in different respective wavelengths.
  • 31. A method according to claim 17, including the further step of registering the first and second images with one another before the subtracting step.
  • 32. A machine vision method for inspecting an object, comprising the steps of: lighting the object with a source selected from a group of sources including (i) a first source that lights the object along a direction of a first axis, and (ii) a second source that lights the object from an angle other than along the direction of the first axis, and generating a first image of the object with an image capture device while the object is so lighted, the image capture device being oriented for capturing the first image in the direction of the first axis; lighting the object with another source selected from the aforesaid group, and generating a second image of the object with an image capture device while it is so lighted, the image capture device being oriented for capturing the second image in the direction of the first axis; subtracting the second image from the first image to form a third image that enhances a contrast between the object and a background thereof; and isolating the object from the background in the third image by any of segmentation, edge detection and tracking, connectivity analysis, and thresholding.
REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. Ser. No. 08/621,137, filed Mar. 21, 1996, now U.S. Pat. No. 6,259,827 entitled “MACHINE VISION METHODS FOR IMAGE SEGMENTATION USING MULTIPLE IMAGES.” This application is related to copending, commonly assigned U.S. patent application Ser. No. 08/621,189, for MACHINE VISION METHODS FOR INSPECTION OF LEADS ON SEMICONDUCTOR DIE PACKAGES, filed this same day herewith, the teachings of which are incorporated herein by reference. This application is related to copending, commonly assigned U.S. patent application Ser. No. 08/521,190, for MACHINE VISION METHODS FOR INSPECTION OF SEMICONDUCTOR DIE SURFACES, filed this same day herewith, the teachings of which are incorporated herein by reference.

US Referenced Citations (226)
Number Name Date Kind
3816722 Sakoe et al. Jun 1974 A
3936800 Ejiri et al. Feb 1976 A
3967100 Shimomura Jun 1976 A
3968475 McMahon Jul 1976 A
3978326 Shimomura Aug 1976 A
4011403 Epstein et al. Mar 1977 A
4115702 Nopper Sep 1978 A
4115762 Akiyama et al. Sep 1978 A
4183013 Agrawala et al. Jan 1980 A
4200861 Hubach et al. Apr 1980 A
4254400 Yoda et al. Mar 1981 A
4286293 Jablonowski Aug 1981 A
4300164 Sacks Nov 1981 A
4385322 Hubach et al. May 1983 A
4435837 Abernathy Mar 1984 A
4441124 Heebner et al. Apr 1984 A
4441206 Kuniyoshi et al. Apr 1984 A
4519041 Fant et al. May 1985 A
4534813 Williamson et al. Aug 1985 A
4541116 Lougheed Sep 1985 A
4570180 Baier et al. Feb 1986 A
4577344 Warren et al. Mar 1986 A
4581762 Lapidus et al. Apr 1986 A
4606065 Beg et al. Aug 1986 A
4617619 Gehly Oct 1986 A
4630306 West et al. Dec 1986 A
4631750 Gabriel et al. Dec 1986 A
4641349 Flom et al. Feb 1987 A
4688088 Hamazaki et al. Aug 1987 A
4706168 Weisner Nov 1987 A
4707647 Coldren et al. Nov 1987 A
4728195 Silver Mar 1988 A
4730260 Mori et al. Mar 1988 A
4731858 Grasmueller et al. Mar 1988 A
4736437 Sacks et al. Apr 1988 A
4742551 Deering May 1988 A
4752898 Koenig Jun 1988 A
4758782 Kobayashi Jul 1988 A
4764870 Haskin Aug 1988 A
4771469 Wittenburg Sep 1988 A
4776027 Hisano et al. Oct 1988 A
4782238 Radl et al. Nov 1988 A
4783826 Koso Nov 1988 A
4783828 Sadjadi Nov 1988 A
4783829 Miyakawa et al. Nov 1988 A
4809077 Norita et al. Feb 1989 A
4821333 Gillies Apr 1989 A
4831580 Yamada May 1989 A
4860374 Murakami et al. Aug 1989 A
4860375 McCubbrey et al. Aug 1989 A
4876457 Bose Oct 1989 A
4876728 Roth Oct 1989 A
4891767 Rzasa et al. Jan 1990 A
4903218 Longo et al. Feb 1990 A
4907169 Lovoi Mar 1990 A
4908874 Gabriel Mar 1990 A
4912559 Ariyoshi et al. Mar 1990 A
4912659 Liang Mar 1990 A
4914553 Hamada et al. Apr 1990 A
4922543 Ahlbom et al. May 1990 A
4926492 Tanaka et al. May 1990 A
4932065 Feldgajer Jun 1990 A
4953224 Ichinose et al. Aug 1990 A
4955062 Terui Sep 1990 A
4959898 Landman et al. Oct 1990 A
4962423 Yamada et al. Oct 1990 A
4972359 Silver et al. Nov 1990 A
4982438 Usami et al. Jan 1991 A
5012402 Akiyama Apr 1991 A
5012524 LeBeau Apr 1991 A
5027419 Davis Jun 1991 A
5046190 Daniel et al. Sep 1991 A
5054096 Beizer Oct 1991 A
5060276 Morris et al. Oct 1991 A
5063608 Siegel Nov 1991 A
5073958 Imme Dec 1991 A
5081656 Baker et al. Jan 1992 A
5081689 Meyer et al. Jan 1992 A
5086478 Kelly-Mahaffey et al. Feb 1992 A
5090576 Menten Feb 1992 A
5091861 Geller et al. Feb 1992 A
5091968 Higgins et al. Feb 1992 A
5093867 Hori et al. Mar 1992 A
5113565 Cipolla et al. May 1992 A
5115309 Hang May 1992 A
5119435 Berkin Jun 1992 A
5124622 Kawamura et al. Jun 1992 A
5133022 Weideman Jul 1992 A
5134575 Takagi Jul 1992 A
5143436 Baylor et al. Sep 1992 A
5145432 Midland et al. Sep 1992 A
5151951 Ueda et al. Sep 1992 A
5153925 Tanioka et al. Oct 1992 A
5155775 Brown Oct 1992 A
5159281 Hedstrom et al. Oct 1992 A
5159645 Kumagai Oct 1992 A
5164994 Bushroe Nov 1992 A
5168269 Harlan Dec 1992 A
5175808 Sayre Dec 1992 A
5179419 Palmquist et al. Jan 1993 A
5185810 Freischlad Feb 1993 A
5185855 Kato et al. Feb 1993 A
5189712 Kajiwara et al. Feb 1993 A
5206820 Ammann et al. Apr 1993 A
5216503 Paik Jun 1993 A
5225940 Ishii et al. Jul 1993 A
5230027 Kikuchi Jul 1993 A
5243607 Masson et al. Sep 1993 A
5253306 Nishio Oct 1993 A
5253308 Johnson Oct 1993 A
5265173 Griffin et al. Nov 1993 A
5271068 Ueda et al. Dec 1993 A
5287449 Kojima Feb 1994 A
5297238 Xuguang Wang et al. Mar 1994 A
5297256 Wolstenholme et al. Mar 1994 A
5299269 Gaborski et al. Mar 1994 A
5307419 Tsujino et al. Apr 1994 A
5311598 Bose et al. May 1994 A
5315388 Shen et al. May 1994 A
5319457 Nakahashi et al. Jun 1994 A
5327156 Masukane et al. Jul 1994 A
5329469 Watanabe Jul 1994 A
5337262 Luthi et al. Aug 1994 A
5337267 Colavin Aug 1994 A
5363507 Nakayama et al. Nov 1994 A
5367439 Mayer et al. Nov 1994 A
5367667 Wahlquist et al. Nov 1994 A
5371690 Engel et al. Dec 1994 A
5388197 Rayner Feb 1995 A
5388252 Dreste et al. Feb 1995 A
5398292 Aoyama Mar 1995 A
5432525 Maruo et al. Jul 1995 A
5440699 Farrand et al. Aug 1995 A
5455870 Sepai Oct 1995 A
5455933 Schieve et al. Oct 1995 A
5471312 Watanabe et al. Nov 1995 A
5475766 Tsuchiya et al. Dec 1995 A
5475803 Stearns et al. Dec 1995 A
5477138 Efjavic et al. Dec 1995 A
5481712 Silver et al. Jan 1996 A
5485570 Bushboom et al. Jan 1996 A
5491780 Fyles et al. Feb 1996 A
5495424 Tokura Feb 1996 A
5495537 Bedrosian et al. Feb 1996 A
5496106 Anderson Mar 1996 A
5500906 Picard et al. Mar 1996 A
5506617 Parulski et al. Apr 1996 A
5506682 Pryor Apr 1996 A
5511015 Flockencier Apr 1996 A
5519840 Matias et al. May 1996 A
5526050 King et al. Jun 1996 A
5528703 Lee Jun 1996 A
5532739 Garakani et al. Jul 1996 A
5539409 Mathews et al. Jul 1996 A
5544256 Brecher et al. Aug 1996 A
5548326 Michael Aug 1996 A
5550763 Michael Aug 1996 A
5550888 Neitzel et al. Aug 1996 A
5553859 Kelly et al. Sep 1996 A
5557410 Huber et al. Sep 1996 A
5557690 O'Gorman et al. Sep 1996 A
5566877 McCormack Oct 1996 A
5568563 Tanaka et al. Oct 1996 A
5574668 Beaty Nov 1996 A
5574801 Collet-Beillon Nov 1996 A
5581632 Koljonen et al. Dec 1996 A
5583949 Smith et al. Dec 1996 A
5583954 Garakani Dec 1996 A
5586058 Aloni et al. Dec 1996 A
5592562 Rooks Jan 1997 A
5594859 Palmer et al. Jan 1997 A
5602937 Bedrosian et al. Feb 1997 A
5608490 Ogawa Mar 1997 A
5608872 Schwartz et al. Mar 1997 A
5640199 Garakani et al. Jun 1997 A
5640200 Michael Jun 1997 A
5642158 Petry, III et al. Jun 1997 A
5647009 Aoki et al. Jul 1997 A
5649032 Burt et al. Jul 1997 A
5657403 Wolff et al. Aug 1997 A
5673334 Nichani et al. Sep 1997 A
5676302 Petry Oct 1997 A
5696848 Patti et al. Dec 1997 A
5715369 Spoltman et al. Feb 1998 A
5715385 Stearns et al. Feb 1998 A
5717785 Silver Feb 1998 A
5724439 Mizuoka et al. Mar 1998 A
5740285 Bloomberg et al. Apr 1998 A
5742037 Scola et al. Apr 1998 A
5751853 Michael May 1998 A
5754679 Koljonen et al. May 1998 A
5757956 Koljonen et al. May 1998 A
5761326 Brady et al. Jun 1998 A
5761337 Nishimura et al. Jun 1998 A
5768443 Michael et al. Jun 1998 A
5793899 Wolff et al. Aug 1998 A
5796386 Lipscomb et al. Aug 1998 A
5796868 Dutta-Choudhury Aug 1998 A
5801966 Ohashi Sep 1998 A
5805722 Cullen et al. Sep 1998 A
5809658 Jackson et al. Sep 1998 A
5818443 Schott Oct 1998 A
5822055 Tsai et al. Oct 1998 A
5825483 Michael et al. Oct 1998 A
5825913 Rostami et al. Oct 1998 A
5835099 Marimont Nov 1998 A
5835622 Koljonen et al. Nov 1998 A
5845007 Ohashi et al. Dec 1998 A
5848189 Pearson et al. Dec 1998 A
5850466 Schott Dec 1998 A
5859923 Petry, III et al. Jan 1999 A
5861909 Garakani et al. Jan 1999 A
5872870 Michael Feb 1999 A
5878152 Sussman Mar 1999 A
5900975 Sussman May 1999 A
5901241 Koljonen et al. May 1999 A
5909504 Whitman Jun 1999 A
5912768 Sissom et al. Jun 1999 A
5912984 Michael et al. Jun 1999 A
5918196 Jacobson Jun 1999 A
5933523 Drisko et al. Aug 1999 A
5943441 Michael Aug 1999 A
5974169 Bachelder Oct 1999 A
5983227 Nazem et al. Nov 1999 A
6002738 Cabral et al. Dec 1999 A
6016152 Dickie Jan 2000 A
Foreign Referenced Citations (11)
Number Date Country
0 527 632 Feb 1993 EP
0 777 381 Nov 1996 EP
WO 9521376 Aug 1995 WO
WO 9522137 Aug 1995 WO
WO 9721189 Jun 1997 WO
WO 9722858 Jun 1997 WO
WO 9724692 Jul 1997 WO
WO 9724693 Jul 1997 WO
WO 9852349 Nov 1998 WO
WO 9859490 Dec 1998 WO
WO 9915864 Apr 1999 WO
Non-Patent Literature Citations (29)
Entry
Bursky, Dave, “CMOS Four-Chip Set Process Images at 20-MHz Data Rates,” Electronic Design, May 28, 1987, pp. 39-44.
Chapter 3: “Guidelines for Developing MMX Code,” Intel.
Chapter 4: “MMX Code Development Strategy,” Intel.
Chapter 5: “MMX Coding Techniques,” Intel.
Chapter 3: “Optimization Techniques for Integer Blended Code,” Intel.
“Geometrical Image Modification,” pp. 421-442.
Gevorkian David Z., Astola Jaakko T., and Atourian Samvel M. “Improving Gil-Werman Algorithm for Running Min and Max Filters” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 5, May 1997, pp. 526-529.
Gil, Joseph and Werman Michael. “Computing 2-D Min, Median, and Max Filters” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 5, May 1993, pp. 504-507.
Grimson, W. Eric L. and Huttenlocher, Daniel P., “On the Sensitivity of the Hough Transform for Object Recognition”, May 1990, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, No. 3.
Horn, Berthold Klaus Paul. “Robot Vision”, The Massachusetts Institute for Technology, 1986.
Medina-Mora et al. (1981) An Incremental Programming Environment, IEEE Transactions on Software Eng. SE-7:472-482.
NEC Electronics Inc., PD7281 Image Pipelined Processor, Product Information Brochure, pp. 2-169-2-211.
Newsletter from Acquity Imaging, Inc., “Remote Vision Support Package—The Phones Are Ringing!,” 1 page.
PictureTel Corporation Product Brochure “PictureTel Live PCS 100(tm) Personal Visual Communications System,” 3 pp. (1993).
PictureTel Corporation Product Brochure “PictureTel System 1000: Complete VideoConferencing for Cost Sensitive Applications,” 4 pp. (1993).
PictureTel Corporation Product Brochure, “PictureTel System 4000(tm) A Family of Models to Fit Your Application From Offices to Boardrooms, Classrooms, and Auditoriums,” 4 pp. (1993).
Plessey Semiconductors, Preliminary Information, May 1986, Publication No. PS2067, May 1986, pp. 1-5.
Pratt, William K. Digital Image Processing (2nd Ed.), 1991, pp. 421-445.
Racca Roberto G., Stephenson Owen, and Clements Reginald M. High-speed video analysis system using multiple shuttered charge-coupled device imagers and digital storage. Optical Engineering (Jun. 1992) 31;6.
Ray, R. “Automated inspection of solder bumps using visual signatures of specular image-highlights,” Computer Vision and Pattern Recognition, 1989. Proceedings CVPR '89. pp. 588-596.
Rosenfeld, Azriel. “Computer Vision: Basic Principles,” Proceedings of the IEEE. vol. 76, No. 8, Aug. 1988, pp. 863-868.
Symantec Corporation, “The Norton pcAnywhere User's Guide,” Table of Contents 8 pp; Introduction of pcAnywhere Technology pp i-vii; Chapter 7—Sessions; pp. 191-240 (1991).
Teitelbaum et al. (1981) The Cornell Program Synthesizer: A Syntax-Directed Programming Environment, Communications of the ACM 24:563-573.
Tsai, Roger Y. “A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses,” The Journal of Robotics and Automation, vol. RA-3, No. 4, Aug. 1987, pp. 323-344.
Tsai, Roger Y. “An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision,” Proceedings IEEE Conference on Computer Vision and Pattern Recognition Jun. 22-26, 1986, pp. 364-374.
Turney, Jerry L. “Recognizing Partially Occluded Parts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-7 (1985) Jul., No. 4, pp. 410-421.
Unser, Michael. “Convolution-Based Interpolation for Fast, High-Quality Rotation of Images,” IEEE Transactions on Image Processing vol. 4 No. 10 (Oct. 1995) pp. 1371-1381.
Viitanen, Jouko, et al. “Hierarchical pattern matching with an efficient method for estimating rotations,” Proceedings IECON '87 International Conference on Industrial Electronics, Control, and Instrumentation, Nov. 3-6, 1987, 6 pp.
Wu, Yifeng and Maitre, Henri. “Registration of a SPOT Image and SAR Image Using Multiresolution Representation of a Coastline,” 10th International Conference on Pattern Recognition Jun. 16-21, 1990, pp. 913-917.
Continuations (1)
Number Date Country
Parent 08/621137 Mar 1996 US
Child 09/594474 US