Semiconductor device image inspection with contrast enhancement

Information

  • Patent Grant
  • Patent Number
    6,298,149
  • Date Filed
    Tuesday, August 25, 1998
  • Date Issued
    Tuesday, October 2, 2001
Abstract
Machine vision methods for inspection of semiconductor die lead frames include the steps of generating a first image of the lead frame, generating a second image of the lead frame and any defect thereon, and subtracting the second image from the first image. The methods are characterized in that the second image is generated such that subtraction of it from the first image emphasizes the defect with respect to the lead frame.
Description




RESERVATION OF COPYRIGHT




The disclosure of this patent document contains material which is subject to copyright protection. The owner thereof has no objection to facsimile reproduction by anyone of the patent document or of the patent disclosure, as it appears in the United States Patent and Trademark Office patent file or records, but otherwise reserves all rights under copyright law.




BACKGROUND OF THE INVENTION




The invention pertains to machine vision and, more particularly, to methods for inspection of leads on semiconductor die packages (or lead frames).




At the heart of an integrated circuit is a semiconductor die. This is a wafer of semiconducting material (e.g., silicon) with hundreds of thousands or millions of electronic circuit components etched into its layers. To enhance processing speed and reduce power consumption, the dies are made as small as possible, e.g., less than a square inch in area and several mils thick. To facilitate handling, the dies are glued into supporting frames, i.e., lead frames. In addition to providing stability, these frames have large conductive leads that can be soldered to other circuit components, e.g., on a printed circuit board. The leads are typically connected to corresponding pads on the die via a process called wire bonding, wherein a small conductive thread is bonded to each lead and its corresponding pad. Once a semiconductor die and its frame are assembled, they are typically packaged in ceramic or plastic, forming an integrated circuit.




Inspection of the lead area of semiconductor die packages is important in the semiconductor industry. Such inspection typically involves checking the leads on the package, both before and after the die is bonded to the package.




The most common defect in assembly is the deposit of unwanted adhesive on the leads, sometimes referred to as an adhesive-on-leads (AOL) defect. Since the adhesive is conductive, it can effectively “short circuit” the semiconductor die's electronic functions.




The inspection of semiconductor packages for adhesive on leads has proven to be a vexing machine vision problem. This is a result of the complexity of the “background,” i.e., the lead pattern which must be inspected in order to find the defect. This is further complicated by the decreasing size, and increasing number, of leads, as well as by the limited resolution of the cameras typically used for inspection. In this regard, it will be appreciated that while there are a variety of lead configurations, there are two basic types: etched leads and flying/free leads. The former are rigid and are etched onto a substrate, while the latter are mechanically pressed but non-rigid.




The prior art suggests the use of a technique referred to as golden template comparison (GTC) to inspect the package leads. GTC is a technique for locating objects by comparing a feature under scrutiny (to wit, a lead frame) to a good image, or golden template, that is stored in memory. The technique subtracts the good image from the test image and analyzes the difference to determine if the expected object (e.g., a defect) is present. For example, upon subtracting the image of a good lead frame from that of a defective one, the resulting “difference” image would reveal an adhesive blotch that could be flagged as a defect.




Before GTC inspections can be performed, the system must be “trained” so that the golden template can be stored in memory. To this end, the GTC training functions are employed to analyze several good samples of a scene to create a “mean” image and “standard deviation” image. The mean image is a statistical average of all the samples analyzed by the training functions. It defines what a typical good scene looks like. The standard deviation image defines those areas on the object where there is little variation from part to part, as well as those areas in which there is great variation from part to part. This latter image permits GTC's runtime inspection functions to use less sensitivity in areas of greater expected variation, and more sensitivity in areas of less expected variation.
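
The training computation described above reduces to per-pixel statistics over a stack of good samples. The following is a minimal sketch of that idea, assuming a list of aligned, same-size grayscale images (the variable name `good_samples` is hypothetical, not from the patent):

```python
# Minimal sketch of GTC-style training: per-pixel mean and standard
# deviation over several aligned images of known-good parts.
import numpy as np

def train_golden_template(good_samples):
    stack = np.stack([s.astype(np.float64) for s in good_samples])
    mean_image = stack.mean(axis=0)  # what a typical good scene looks like
    std_image = stack.std(axis=0)    # part-to-part variation at each pixel
    return mean_image, std_image
```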




At runtime, a system employing GTC captures an image of a scene of interest. Where the position of that scene is different from the training position, the captured image is aligned, or registered, with the mean image. The intensities of the captured image are also normalized with those of the mean image to ensure that variations in illumination do not adversely affect the comparison.




The GTC inspection functions then subtract the registered, normalized, captured image from the mean image to produce a difference image that contains all the variations between the two. That difference image is then compared with a “threshold” image derived from the standard deviation image. This determines which pixels of the difference image are to be ignored and which should be analyzed as possible defects. The latter are subjected to morphology, to eliminate or accentuate pixel data patterns and to eliminate noise. An object recognition technique, such as connectivity analysis, can then be employed to classify the apparent defects.
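
A compact sketch of that runtime flow follows. The scale factor `k` that turns the standard-deviation image into a per-pixel threshold, and the minimum blob area, are assumptions for illustration, not values from the text; the capture is assumed already registered and normalized:

```python
# Sketch of GTC runtime inspection: subtract the capture from the mean
# image, gate the difference with a threshold image derived from the
# standard-deviation image, clean up with morphology, then classify
# surviving regions by connectivity analysis.
import numpy as np
from scipy import ndimage

def gtc_inspect(captured, mean_image, std_image, k=3.0, min_area=20):
    diff = np.abs(captured.astype(np.float64) - mean_image)
    candidates = diff > k * std_image                 # per-pixel threshold image
    cleaned = ndimage.binary_opening(candidates, structure=np.ones((3, 3)))
    labels, n = ndimage.label(cleaned)                # connectivity analysis
    areas = ndimage.sum(cleaned, labels, index=np.arange(1, n + 1))
    return [lab for lab, a in zip(range(1, n + 1), areas) if a >= min_area]
```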




Although GTC inspection tools have proven quite successful, they suffer some limitations. For example, except in unusual circumstances, GTC requires registration, i.e., that the image under inspection be registered with the template image. GTC also uses a standard deviation image for thresholding, which can result in a loss of resolution near edges due to high resulting threshold values. GTC is, additionally, limited to applications where the images are repeatable: it cannot be used where image-to-image variation results from changes in size, shape, orientation, and warping.




GTC is typically used to inspect only etched lead configurations, where it can be effectively used if the lead count is not high. Where that count is high, the frequency of etches results in a large area being effectively masked by the high standard deviation at the lead edges. GTC has not proven effective in inspections of flying/free configurations. Moreover, it is limited in that it requires excessive memory or processing time in instances where the package under inspection is rotated.




Blob analysis is also used to inspect etched lead configurations, as well as flying/free lead configurations. However, this analysis technique is likewise only effective if the lead count is not high.




An object of this invention, therefore, is to provide improved methods for machine vision and, more particularly, improved methods for inspecting leads on semiconductor die packages or lead frames.




A further object is to provide such methods that can be used to identify defects such as adhesive blotches on those leads.




Yet another object is to provide such methods that can be used in inspecting the full range of die packages, including both etched lead packages and flying/free lead packages.




Yet still another object is to provide such methods that do not routinely necessitate alignment or registration of an image under inspection with a template image.




Still yet a further object of the invention is to provide such methods that do not require training.




Still other objects of the invention include providing such machine vision methods as can be readily implemented on existing machine vision processing equipment, and which can be implemented for rapid execution without excessive consumption of computational power.




SUMMARY OF THE INVENTION




The foregoing objects are among those achieved by the invention which provides, in one aspect, a machine vision method for inspecting leads on a semiconductor die package, or lead frame. The method includes the steps of generating a first image of the lead frame (including its leads and other structures, together referred to as the “lead frame” or “background”), generating a second image of the lead frame and any defects thereon (e.g., excessive adhesive), and subtracting the second image from the first image. The method is characterized in that the second image is generated such that subtraction of it from the first image emphasizes the defect with respect to the background.




In related aspects of the invention, the second step is characterized as generating the second image such that its subtraction from the first image increases a contrast between the defect and the background. That step is characterized, in still further aspects of the invention, as being one that results in defect-to-background contrast differences in the second image that are of opposite polarity from the defect-to-background contrast differences in the first image.




In further aspects, the invention calls for generating a third image with the results of the subtraction, and for isolating the expected defects in that third image. Isolation can be performed, according to other aspects of the invention, by conventional machine vision segmentation techniques such as connectivity analysis, edge detection and/or tracking, and by thresholding. In the latter regard, a threshold image (as opposed to one or two threshold values) can be generated by mapping image intensity values of the first or second image. That threshold image can then be subtracted from the third image (i.e., the difference image) to isolate further the expected defects.




Still further aspects of the invention provide for normalizing the first and second images before subtracting them to generate the third image. In this aspect, the invention determines distributions of intensity values of each of the first and second images, applying mapping functions to one or both of them in order to match the tails of those distributions. The first and second images can also be registered prior to subtraction.




According to further aspects of the invention, the first and second images are generated by illuminating the lead frame with different respective light or emission sources. This includes, for example, illuminating it with direct, on-axis lighting to generate the first image, and illuminating it with diffuse, off-axis or grazing light to generate the second image.




Additional aspects of the invention provide methods incorporating various combinations of the foregoing aspects.




These and other aspects of the invention are evident in the drawings and in the descriptions that follow.











BRIEF DESCRIPTION OF THE DRAWINGS




A better understanding of the invention may be attained by reference to the drawings in which:





FIG. 1 depicts a machine vision system for practice of the invention;

FIGS. 2A-2B depict illumination arrangements for generating images analyzed in accord with the invention;

FIGS. 3A-3C depict sample images (and their difference images) generated by the lighting arrangement shown in FIG. 2A;

FIG. 4 depicts a methodology for semiconductor die lead frame inspection according to the invention.











DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENT





FIG. 1 illustrates a system 5 for inspecting semiconductor die lead frames in accord with the invention. The system 5 includes a capturing device 10, such as a conventional video camera (e.g., the Sony XC75 camera with a 50 mm COSMICAR lens) or scanner, that generates an image of a lead frame 1. Image data (or pixels) generated by the capturing device 10 represent, in the conventional manner, the image intensity (e.g., color or brightness) of each point in the scene at the resolution of the capturing device. The lead frame is illuminated by on-axis light 7 and ring light 8 for generation of multiple images that facilitate identification of defects in accord with methods discussed herein.




The digital image data is transmitted from capturing device 10 via a communications path 11 to an image analysis system 12. This can be a conventional digital data processor, or a vision processing system (such as the Cognex 5400) of the type commercially available from the assignee hereof, Cognex Corporation, programmed in accord with the teachings hereof to perform image segmentation. The image analysis system 12 may have one or more central processing units 13, main memory 14, input-output system 15, and disc drive (or other mass storage device) 16, all of the conventional type.




The system 12 and, more particularly, central processing unit 13, is configured by programming instructions according to the teachings hereof for image segmentation, as described in further detail below. Those skilled in the art will appreciate that, in addition to implementation on a programmable digital data processor, the methods and apparatus taught herein can be implemented in special purpose hardware.





FIG. 2A illustrates an arrangement of light sources according to the invention for on-axis and diffuse (or grazing) light illumination of lead frame 20. The arrangement includes lighting sources 22 and 24 positioned for illuminating lead frame 20. Lighting source 22 provides direct, on-axis lighting via reflection off a half-silvered, partially transparent, angled one-way mirror 28, while lighting source 24 provides diffuse, off-axis lighting, or grazing light. Images of the illuminated frame 20 are captured by camera 26 through mirror 28.




Lighting source 22 is of the conventional type known in the art for on-axis illumination of objects under inspection in a machine vision application. A preferred such light is a diffused on-axis light (DOAL) commercially available from Dolan Jenner. The source 22 and mirror 28 are positioned to cause potential defects on the lead area of the frame 20 (e.g., adhesive patches) to appear as dark features against a light background.




Lighting source 24 is also of a conventional type known in the art for use in providing diffuse, off-axis light or grazing light in machine vision applications. One preferred source 24 is an arrangement of several point light sources, e.g., fiber optic bundles, or line lights, disposed about element 20. Another preferred such lighting source 24 is a ring light and, still more preferably, a ring light of the type disclosed in commonly assigned U.S. Pat. No. 5,367,439. The lighting source 24 is positioned to illuminate the lead area of the frame 20 in such a way as to cause potential defects thereon (e.g., adhesive patches) to appear as light features against a dark background.




Other lighting sources known in the art can be used in place of on-axis source 22 and ring light source 24 to illuminate a surface under inspection. Considerations for selection and positioning of the sources 22, 24 are that expected defects on the lead frame appear differently (if at all) with respect to the background when illuminated by each respective source 22, 24.




More particularly, the lighting sources 22, 24 are selected and positioned such that the subtraction of an image captured by camera 26 when the lead frame is illuminated by one of the sources (e.g., 22) from an image captured by camera 26 when the lead frame is illuminated by the other source (e.g., 24) emphasizes defects on the leads, e.g., by increasing the contrast between the defects and the background (i.e., the remainder of the surface).




Put another way, the lighting sources 22, 24 are selected and positioned in such a way that an image generated by camera 26 when the lead frame is illuminated by one source has a defect-to-background contrast of opposite polarity to that of an image generated by camera 26 when the lead frame is illuminated by the other source.




Thus, for example, in a preferred arrangement to detect adhesive on lead (AOL) defects on the lead frame 20, the on-axis lighting source 22 is selected and positioned (in relation to frame 20 and mirror 28) to cause a defect to be dark on a light background (e.g., “positive” defect-to-background contrast polarity), while the diffuse ring light 24 is selected and positioned to make the same defect appear light on a dark background (e.g., “negative” defect-to-background contrast polarity).
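
To make the polarity arithmetic concrete, the toy example below simulates the two captures with made-up intensity values (all numbers are invented for illustration) and shows that the subtraction roughly doubles the defect-to-background contrast:

```python
# Toy demonstration of opposite-polarity imaging: the same defect is
# light-on-dark under grazing light (Image 1) and dark-on-light under
# on-axis light (Image 2), so Image 1 - Image 2 drives the defect far
# from the background.
import numpy as np

image1 = np.full((100, 100), 40, np.int16)   # grazing: dark background
image1[40:60, 40:60] = 160                   # ...light defect
image2 = np.full((100, 100), 200, np.int16)  # on-axis: light background
image2[40:60, 40:60] = 80                    # ...dark defect

diff = image1 - image2
print(diff[50, 50])   # defect pixel: 160 - 80 = 80
print(diff[0, 0])     # background pixel: 40 - 200 = -160
# After negative values are clipped to zero (remapping step 114, below),
# the defect stands out as a light patch on a near-black background.
```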





FIG. 3A depicts an image generated by camera 26 when a lead frame 20 with AOL is illuminated by ring light or grazing light source 24. As shown in the illustration, the ring/grazing light reveals the adhesive as light patches 60, 62 on a dark background.





FIG. 3B depicts an image of the type generated by camera 26 when the same lead frame is illuminated by on-axis lighting source 22. As shown in FIG. 3B, the on-axis lighting reveals adhesive 60, 62 on the leads as dark patches on a light background.





FIG. 3C reveals a result according to the invention of subtracting the images generated by camera 26 under these two separate lighting conditions. Put another way, FIG. 3C represents the result of subtracting the image of FIG. 3B from the image of FIG. 3A. In FIG. 3C, the adhesive on the lead frame 20 is revealed as very light patches against a very dark background. (Note that this figure shows the output of the subtraction after remapping step 114, described below.)





FIG. 4 illustrates a method for inspecting semiconductor lead frames according to the invention. In step 100, the method acquires an image of the lead frame with lighting source 24 or other grazing light. Likewise, in step 102, the method acquires an image of the lead frame with on-axis light source 22. Though these images can be acquired at any time (though not concurrently), they are typically acquired at about the same time. This reduces the risk that the lead frame will be moved between acquisitions and, thereby, removes the need to register the images.




In the discussion that follows, the image acquired in step 100 is referred to as “Image 1,” while the image acquired in step 102 is referred to as “Image 2.” Although the discussion herein is directed toward subtraction of Image 2 from Image 1, those skilled in the art will likewise appreciate that Image 1 can be subtracted from Image 2. Preferably, Image 2 is subtracted from Image 1 in instances where the object is lighter than the background in Image 1 and darker than the background in Image 2. Conversely, Image 1 is preferably subtracted from Image 2 in instances where the object is lighter than the background in Image 2 and darker than the background in Image 1.
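
A small sketch of that ordering rule follows, assuming 8-bit grayscale inputs; the helper name and boolean flag are illustrative only:

```python
# Choose the subtraction order so the defect comes out positive (bright)
# in the difference image. Signed arithmetic avoids uint8 wraparound.
import numpy as np

def difference_image(image1, image2, defect_light_in_image1=True):
    a = image1.astype(np.int16)
    b = image2.astype(np.int16)
    return a - b if defect_light_in_image1 else b - a
```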




In optional step 104, the method registers the images to ensure alignment of the features therein. Though not necessary in many instances, this step is utilized if the lead frame or camera is moved between image acquisitions. Image registration can be performed, for example, by a two-dimensional cross-correlation of images, in the manner disclosed in Jain, Fundamentals of Digital Image Processing (Prentice Hall 1989) at Chapter 2, the teachings of which are incorporated herein by reference.
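
As one illustration of such a two-dimensional cross-correlation (a sketch under the assumption of pure, wrap-around translation, not the reference's exact procedure), FFT-based correlation locates the peak offset:

```python
# Translational registration by 2-D cross-correlation, computed via FFT.
# The correlation peak gives the (dy, dx) shift of image2 relative to
# image1; image2 is then rolled back into alignment.
import numpy as np

def register_by_cross_correlation(image1, image2):
    f1 = np.fft.fft2(image1.astype(np.float64))
    f2 = np.fft.fft2(image2.astype(np.float64))
    corr = np.fft.ifft2(np.conj(f1) * f2).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > corr.shape[0] // 2:
        dy -= corr.shape[0]          # interpret large shifts as negative
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    aligned = np.roll(image2, (-dy, -dx), axis=(0, 1))
    return aligned, (dy, dx)
```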




In steps 104 and 106, the method windows Images 1 and 2. These steps, which are optional, reduce the area (or pixels) of the respective images under consideration and thereby reduce processing time and/or computational resources. These steps can be performed by selecting the relevant subset of the pixel array of each image.
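
Windowing is simply sub-array selection; a one-function sketch, with placeholder coordinates (not values from the patent):

```python
# Window an image to the lead area only; NumPy slicing returns a view,
# so no pixel data is copied. The default coordinates are hypothetical.
def window(image, y0=100, y1=400, x0=50, x1=600):
    return image[y0:y1, x0:x1]
```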




In steps 108 and 110, the method normalizes the (windowed) images. These optional steps, which compensate for overall differences in image intensity, can be performed by any technique known in the art. Preferably, however, normalization is global, using a map derived from the global statistics of the (windowed) images. The map is defined to match the extrema (or tails) of the statistical distributions of both images.
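
A sketch of one such global map, using percentile estimates of the distribution tails and a linear transform; both choices are assumptions, since the text fixes neither:

```python
# Normalize `src` toward `ref` by matching the tails (extrema) of their
# intensity distributions with a linear map.
import numpy as np

def match_tails(src, ref, lo=1.0, hi=99.0):
    s_lo, s_hi = np.percentile(src, [lo, hi])
    r_lo, r_hi = np.percentile(ref, [lo, hi])
    gain = (r_hi - r_lo) / max(s_hi - s_lo, 1e-9)
    out = (src.astype(np.float64) - s_lo) * gain + r_lo
    return np.clip(out, 0, 255).astype(np.uint8)
```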




In step 112, the method generates a difference image, Image 3, by subtracting Image 2 from Image 1. This subtraction is performed in the conventional manner known in the art. Objects in Image 3, i.e., the “difference” image, can be isolated by standard machine vision segmentation techniques such as connectivity analysis, edge detection and/or tracking, and by thresholding.




In step 114, the method maps Image 3 to remove any negative difference values (i.e., negative pixel values) resulting from the subtraction. It also can be used to normalize (or rescale) the difference image to facilitate later stages of processing. This step, which can be performed in a conventional manner known in the art, is optional.
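
Steps 112 and 114 together might look like the following sketch; the rescale to the full 8-bit range is an assumed implementation choice:

```python
# Step 112: signed subtraction of Image 2 from Image 1.
# Step 114: clip negative differences to zero, then rescale to 8 bits.
import numpy as np

def subtract_and_remap(image1, image2):
    diff = image1.astype(np.int16) - image2.astype(np.int16)  # step 112
    diff = np.clip(diff, 0, None).astype(np.float64)          # drop negatives
    peak = diff.max()
    if peak > 0:
        diff *= 255.0 / peak                                  # optional rescale
    return diff.astype(np.uint8)
```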




In step 116, the method performs morphology on the difference image. Morphology, which is well known in the art, is a technique for eliminating or accentuating data in the difference image, e.g., by filtering out variations due to video noise or small defects. This can be performed, for example, in a manner disclosed by Jain, supra, at Chapter 9.9, the teachings of which are incorporated herein by reference.
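
For instance, a small morphological opening suppresses isolated noise pixels while keeping larger adhesive patches; the 3x3 elliptical kernel is an assumed choice:

```python
# Morphological opening of the difference image to filter out video
# noise and specks smaller than the kernel.
import cv2

def clean(image3):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(image3, cv2.MORPH_OPEN, kernel)
```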




In step 118, the method thresholds, or binarizes, the image to distinguish or isolate defects of interest (e.g., images of adhesive patches) from the background. Thresholding can be performed in the conventional manner known in the art. Thus, for example, a single intensity value can be determined from a histogram of Image 3. Preferably, however, the threshold intensity value is predetermined, i.e., based on empirical analysis of prior images.
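
A sketch of global binarization follows; the fixed value 40 is a placeholder for an empirically predetermined threshold, and Otsu's histogram method is shown as one way to derive a value from Image 3 instead:

```python
# Step 118: global thresholding of Image 3 (8-bit, single channel).
import cv2

def binarize(image3, threshold=40, use_otsu=False):
    if use_otsu:
        _, binary = cv2.threshold(image3, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        _, binary = cv2.threshold(image3, threshold, 255, cv2.THRESH_BINARY)
    return binary
```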




In certain applications, use of a high global threshold intensity value will result in portions of the object of interest being interpreted as background and, therefore, will result in poor segmentation. Likewise, use of a low global threshold intensity value will result in background being interpreted as objects of interest. To overcome this, the method includes an optional step of thresholding using a threshold image generated by mapping Image 2; see step 120. That threshold image is made up of pixels representing local threshold values.




In instances where a threshold image is used, binarization step 118 involves subtracting the threshold image from Image 3 and then mapping positive differences to one (indicating object) and negative differences to zero (indicating background).
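
A sketch of that variant follows; the linear map from Image 2 to local thresholds (gain and offset) is an assumption, since the text does not specify the mapping:

```python
# Steps 120 and 118 with a threshold image: map Image 2 into per-pixel
# local thresholds, subtract from Image 3, and binarize on the sign.
import numpy as np

def binarize_with_threshold_image(image3, image2, gain=0.5, offset=10.0):
    threshold_image = image2.astype(np.float64) * gain + offset  # step 120
    diff = image3.astype(np.float64) - threshold_image
    return (diff > 0).astype(np.uint8)   # 1 = object, 0 = background
```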




Following binarization, the method in step 122 conducts connectivity analysis to determine the properties of any defects in the binarized image. Those properties, which include size, position, orientation, and principal moments, can be used to determine whether the defect necessitates rejection of the lead frame.
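
A closing sketch of step 122 using OpenCV's connected-components statistics; the area-based accept/reject rule is illustrative only, as the text leaves the rejection criterion to the application:

```python
# Step 122: connectivity analysis of the binarized image. Reports each
# defect's area and centroid, then applies an assumed area-based rule.
import cv2
import numpy as np

def analyze_defects(binary, max_area=50):
    img = (binary > 0).astype(np.uint8) * 255
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        img, connectivity=8)
    defects = []
    for i in range(1, n):                        # label 0 is the background
        area = int(stats[i, cv2.CC_STAT_AREA])
        defects.append((area, tuple(centroids[i])))
    reject = any(area > max_area for area, _ in defects)
    return defects, reject
```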




In further embodiments, the invention contemplates an image capture arrangement as shown in FIG. 2B. Here, rather than employing two lighting sources, a system according to the invention captures light reflected from the element 40 under inspection in two different wavelengths. For this purpose, the lead frame is illuminated by a single light source 42, which can be, for example, a white light. Reflections from the lead frame captured by camera 46 can be filtered to capture the differing wavelengths. Such filtering can be provided, e.g., by filters 48, 50, which are selected such that objects on the surface of element 40 appear differently (if at all) with respect to the background when the filtered light is captured by the camera 46.




In addition to capturing light of differing wavelengths, filters 48 and 50 can capture light of differing orientations. To this end, they can be polarizing lenses of differing orientation for capturing light from source 42 (which may also be polarized) that is reflected off the lead frame.




The embodiments discussed above are drawn to the inspection of semiconductor die lead frames. It would be within the capability of one of ordinary skill in the art to apply these teachings to the inspection of semiconductor die surfaces without departing from the spirit and scope of the invention. Indeed, copending, commonly assigned application U.S. Ser. No. 08/621,190, now U.S. Pat. No. 5,949,901, the teachings of which are incorporated herein by reference, discloses the inspection of semiconductor die surfaces using the same techniques as those described herein.




Described above are machine vision methods meeting the objects set forth. These methods provide improved machine vision inspection of semiconductor lead frames, overcoming the deficiencies of prior art segmentation techniques such as GTC. For example, apart from instances where an illuminated object is moved between image captures, the method does not require registration of images prior to subtraction. Nor does the method require training. Still further, the method is applicable to a wide range of repeatable and nonrepeatable images. Yet still further, the methods hereof permit inspection of the full range of lead frame types, including flying/free lead configurations, even where the lead count is high. Moreover, these methods do not consume excessive memory or processing time, even in instances where the package under inspection is rotated.




It will be appreciated that the embodiments described above are illustrative only and that additional embodiments within the ken of those of ordinary skill in the art fall within the scope of the invention. By way of example, although the discussion herein primarily refers to subtraction of Image 2 from Image 1, those skilled in the art will likewise appreciate that Image 1 can, alternatively, be subtracted from Image 2 with like success (albeit with a reversal of “polarity” in the resulting image).



Claims
  • 1. A machine vision method for inspecting a semiconductor device, including either a semiconductor die lead frame or a semiconductor surface, comprising the steps of: illuminating the semiconductor device with an illumination source selected from a group of illumination sources including a direct on-axis light source and a diffuse off-axis light source, wherein the on-axis and off-axis sources are each selected and positioned to provide an image of the semiconductor device having a defect-to-background contrast polarity which is opposite to that of the other source; generating a first on-axis image of the semiconductor device with an image capture device while it is so illuminated; illuminating the semiconductor device with another illumination source selected from the aforesaid group; generating a second on-axis image of the semiconductor device with the image capture device while it is so illuminated; and subtracting the second image from the first image to form a third image that emphasizes any defect on the semiconductor device.
  • 2. A method according to claim 1, wherein the step of generating the second image includes the step of generating that image such that subtraction of the second image from the first image increases a contrast between the defect and the semiconductor device.
  • 3. A method according to claim 1, comprising the step of isolating the defect from the third image.
  • 4. A method according to claim 3, where the isolating step comprises the step of performing connectivity analysis on the third image to distinguish the defect from the semiconductor device.
  • 5. A method according to claim 3, wherein the isolating step comprises the step of detecting and tracking edges in the third image to isolate the defect.
  • 6. A method according to claim 3, wherein the isolating step comprises the step of thresholding the third image to distinguish at least one of a defect and its edges from the semiconductor device.
  • 7. A method according to claim 6, wherein the thresholding step comprises the step of determining an intensity threshold value that distinguishes at least one of the defect and its edges from the semiconductor device.
  • 8. A method according to claim 1, comprising the step of normalizing at least one of the first and second images before the subtracting step.
  • 9. A method according to claim 8, wherein the normalizing step includes the steps of determining distributions of intensity values of each of the first and second images; generating a mapping function for matching extrema of those distributions; and transforming the intensity values of at least one of the first and second images with that mapping function.
  • 10. A method according to claim 1, including the step of generating the first and second images by illuminating the semiconductor device with different respective emission sources.
  • 11. A method according to claim 1, including the step of generating the first and second images with light of different respective polarizations.
  • 12. A method according to claim 1, including the step of generating the first and second images by illuminating the semiconductor device with emissions in different respective wavelengths.
  • 13. A method according to claim 1 including the further step of registering the first and second images with one another before the subtracting step.
  • 14. A machine vision method for inspecting a semiconductor device, including either a semiconductor die lead frame or a semiconductor surface, comprising the steps of: illuminating the semiconductor device with an illumination source selected from a group of illumination sources including a direct on-axis light source and a diffuse off-axis light source, wherein the on-axis and off-axis sources are each selected and positioned to provide an image of the semiconductor device having a defect-to-background contrast polarity which is opposite to that of the other source; generating a first on-axis image of the semiconductor device with an image capture device while it is so illuminated; illuminating the semiconductor device with another illumination source selected from the aforesaid group; generating a second on-axis image of the semiconductor device with the image capture device while it is so illuminated; subtracting the second image from the first image to form a third image that emphasizes any defect on the semiconductor device; and isolating the defect in the third image by any of segmentation, edge detection and tracking, connectivity analysis, and thresholding.
  • 15. A machine vision method for inspecting a semiconductor device, including either a semiconductor die lead frame or a semiconductor surface, comprising the steps of: illuminating the semiconductor device with an illumination source selected from a group of illumination sources including a direct on-axis light source and a diffuse off-axis light source; generating a first on-axis image of the semiconductor device with an image capture device while it is so illuminated; illuminating the semiconductor device with another illumination source selected from the aforesaid group; generating a second on-axis image of the semiconductor device with the image capture device while it is so illuminated; subtracting the second image from the first image to form a third image that emphasizes any defect on the semiconductor device; isolating the defect from the third image, said isolating step comprising the step of thresholding the third image to distinguish at least one of a defect and its edges from the semiconductor device; generating a threshold image from at least one of the first and second images, the threshold image having pixels representing local threshold intensity values; and using the threshold image to distinguish, in the third image, at least one of the defect and its edges from the semiconductor device.
  • 16. A method according to claim 15, wherein the step of generating the threshold image includes the step of mapping image intensity values in the second image to generate the threshold image.
  • 17. A method according to claim 15, wherein the step of using the threshold image includes the step of subtracting the threshold image from the third image.
REFERENCE TO RELATED APPLICATIONS

This is a continuation of U.S. patent application Ser. No. 08/621,189 filed Mar. 21, 1996, now abandoned. This application is related to copending, commonly assigned U.S. patent application Ser. No. 08/621,137, for MACHINE VISION METHODS FOR IMAGE SEGMENTATION USING MULTIPLE IMAGES, filed this same day herewith, the teachings of which are incorporated herein by reference. This application is related to copending, commonly assigned U.S. patent application Ser. No. 08/621,190, now U.S. Pat. No. 5,949,901, for MACHINE VISION METHODS FOR INSPECTION OF SEMICONDUCTOR DIE SURFACES, filed Mar. 21, 1996, the teachings of which are incorporated herein by reference.

US Referenced Citations (141)
Number Name Date Kind
3816722 Sakoe et al. Jun 1974
3936800 Ejiri et al. Feb 1976
3967100 Shimomura Jun 1976
3968475 McMahon Jul 1976
3978326 Shimomura Aug 1976
4011403 Epstein et al. Mar 1977
4115702 Nopper Sep 1978
4115762 Akiyama et al. Sep 1978
4183013 Agrawala et al. Jan 1980
4200861 Hubach et al. Apr 1980
4254400 Yoda et al. Mar 1981
4286293 Jablonowski Aug 1981
4300164 Sacks Nov 1981
4385322 Hubach et al. May 1983
4441124 Heebner et al. Apr 1984
4441206 Kuniyoshi et al. Apr 1984
4519041 Fant et al. May 1985
4534813 Williamson et al. Aug 1985
4541116 Lougheed Sep 1985
4570180 Baier et al. Feb 1986
4577344 Warren et al. Mar 1986
4581762 Lapidus et al. Apr 1986
4606065 Beg et al. Aug 1986
4617619 Gehly Oct 1986
4630306 West et al. Dec 1986
4688088 Hamazaki et al. Aug 1987
4706168 Weisner Nov 1987
4728195 Silver Mar 1988
4730260 Mori et al. Mar 1988
4731858 Grasmueller et al. Mar 1988
4736437 Sacks et al. Apr 1988
4742551 Deering May 1988
4758782 Kobayashi Jul 1988
4764870 Haskin Aug 1988
4771469 Wittenburg Sep 1988
4783826 Koso Nov 1988
4783828 Sadjadi Nov 1988
4783829 Miyakawa et al. Nov 1988
4831580 Yamada May 1989
4860374 Murakami et al. Aug 1989
4860375 McCubbrey et al. Aug 1989
4876457 Bose Oct 1989
4876728 Roth Oct 1989
4903218 Longo et al. Feb 1990
4907169 Lovoi Mar 1990
4914553 Hamada et al. Apr 1990
4922543 Ahlbom et al. May 1990
4926492 Tanaka et al. May 1990
4932065 Feldgajer Jun 1990
4953224 Ichinose et al. Aug 1990
4955062 Terui Sep 1990
4959898 Landman et al. Oct 1990
4962423 Yamada et al. Oct 1990
4972359 Silver et al. Nov 1990
4982438 Usami et al. Jan 1991
5012402 Akiyama Apr 1991
5012524 Le Beau Apr 1991
5046190 Daniel et al. Sep 1991
5054096 Beizer Oct 1991
5060276 Morris et al. Oct 1991
5063608 Siegel Nov 1991
5073958 Imme Dec 1991
5081656 Baker et al. Jan 1992
5081689 Meyer et al. Jan 1992
5086478 Kelly-Mahaffey et al. Feb 1992
5090576 Menten Feb 1992
5091861 Geller et al. Feb 1992
5091968 Higgins et al. Feb 1992
5093867 Hori et al. Mar 1992
5113565 Cipolla et al. May 1992
5115309 Hang May 1992
5119435 Berkin Jun 1992
5124622 Kawamura et al. Jun 1992
5133022 Weideman Jul 1992
5134575 Takagi Jul 1992
5143436 Baylor et al. Sep 1992
5145432 Midland et al. Sep 1992
5151951 Ueda et al. Sep 1992
5153925 Tanioka et al. Oct 1992
5159281 Hedstrom et al. Oct 1992
5159645 Kumagai Oct 1992
5164994 Bushroe Nov 1992
5166985 Takagi et al. Nov 1992
5168269 Harlan Dec 1992
5185855 Kato et al. Feb 1993
5189712 Kajiwara et al. Feb 1993
5206820 Ammann et al. Apr 1993
5216503 Paik Jun 1993
5225940 Ishii et al. Jul 1993
5230027 Kikuchi Jul 1993
5243607 Masson et al. Sep 1993
5253306 Nishio Oct 1993
5253308 Johnson Oct 1993
5265173 Griffin et al. Nov 1993
5271068 Ueda et al. Dec 1993
5287449 Kojima Feb 1994
5297256 Wolstenholme et al. Mar 1994
5299269 Gaborski et al. Mar 1994
5311598 Bose et al. May 1994
5315388 Shen et al. May 1994
5319457 Nakahashi et al. Jun 1994
5327156 Masukane et al. Jul 1994
5337267 Colavin Aug 1994
5363507 Nakayama et al. Nov 1994
5367439 Mayer et al. Nov 1994
5367667 Wahlquist et al. Nov 1994
5371690 Engel et al. Dec 1994
5388197 Rayner Feb 1995
5388252 Dreste et al. Feb 1995
5398292 Aoyama Mar 1995
5432525 Maruo et al. Jul 1995
5440699 Farrand et al. Aug 1995
5455870 Sepai et al. Oct 1995
5455933 Schieve et al. Oct 1995
5475766 Tsuchiya et al. Dec 1995
5477138 Erjavic et al. Dec 1995
5481712 Silver et al. Jan 1996
5485570 Bushboom et al. Jan 1996
5491780 Fyles et al. Feb 1996
5495424 Tokura Feb 1996
5495537 Bedrosian et al. Feb 1996
5519840 Matias et al. May 1996
5526050 King et al. Jun 1996
5532739 Garakani et al. Jul 1996
5550763 Michael Aug 1996
5566877 McCormack Oct 1996
5568563 Tanaka et al. Oct 1996
5574668 Beaty Nov 1996
5574801 Collet-Beillon Nov 1996
5583949 Smith et al. Dec 1996
5583954 Garakani Dec 1996
5592562 Rooks Jan 1997
5594859 Palmer et al. Jan 1997
5602937 Bedrosian et al. Feb 1997
5608872 Schwartz et al. Mar 1997
5640199 Garakani et al. Jun 1997
5640200 Michael Jun 1997
5684530 White Nov 1997
5724439 Mizuoka et al. Mar 1998
5761337 Nishimura et al. Jun 1998
5822055 Tsai et al. Oct 1998
Foreign Referenced Citations (7)
Number Date Country
0 527 632 A2 Feb 1993 EP
WO 9522137 Aug 1995 WO
WO 9521376 Aug 1995 WO
WO 9721189 Jun 1997 WO
WO 9722858 Jun 1997 WO
WO 9724693 Jul 1997 WO
WO 9724692 Jul 1997 WO
Non-Patent Literature Citations (15)
Entry
Grimson, W. Eric L. and Huttenlocher, Daniel P., “On the Sensitivity of the Hough Transform for Object Recognition”, May 1990, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, No. 3.
Medina-Mora et al. (1981) An Incremental Programming Environment, IEEE Transactions on Software Eng. SE-7:472-482.
Teitelbaum et al. (1981) The Cornell Program Synthesizer: A Syntax-Directed Programming Environment, Communications of the ACM 24:563-573.
Newsletter from Acquity Imaging, Inc., “Remote Vision Support Package—The Phones Are Ringing!,” 1 page.
PictureTel Corporation Product Brochure, "PictureTel Live PCS 100(tm) Personal Visual Communications System," 3 pp. (1993).
PictureTel Corporation Product Brochure, “PictureTel System 1000: Complete Videoconferencing for Cost Sensitive Applications,” 4 pp. (1993).
PictureTel Corporation Product Brochure, “PictureTel System 4000(tm) A Family of Models to fit Your Application From Offices to Boardrooms, Classrooms, and Auditoriums,” 4 pp. (1993).
Symantec Corporation, “The Norton pcAnywhere User's Guide,” Table of Contents 8 pp; Introduction of pcAnywhere Technology pp i-vii; Chapter 7—Sessions; pp. 191-240 (1991).
Bursky, Dave, “CMOS Four-Chip Set Process Images at 20-MHz Data Rates,” Electronic Design, May 28, 1987, pp. 39-44.
Plessey Semiconductors, Preliminary Information, May 1986, Publication No. PS2067, May 1986, pp. 1-5.
NEC Electronics Inc., PD7281 Image Pipelined Processor, Product Information Brochure, pp. 2-169-2-211.
Horn, Berthold Klaus Paul. “Robot Vision”, The Massachusetts Institute for Technology, 1986.
Rosenfeld, Azriel. “Computer Vision: Basic Principles,” Proceedings of the IEEE. vol. 76, No. 8, Aug. 1988. pp. 863-868.
Rajarshi Ray, Automated Inspection of Solder Bumps Using Visual Signatures of Specular Image-Highlights, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1989, pp. 588-596, Jun. 1989.
Continuations (1)
Number Date Country
Parent 08/621189 Mar 1996 US
Child 09/184407 US