The present invention relates generally to automated inspection, and specifically to methods and systems for detection and analysis of manufacturing defects.
Automatic Defect Classification (ADC) techniques are widely used in inspection and measurement of defects on patterned wafers in the semiconductor industry. The object of these techniques is not only to detect the existence of defects, but to classify them automatically by type, in order to provide more detailed feedback on the production process and reduce the load on human inspectors. ADC is used, for example, to distinguish among types of defects arising from particulate contaminants on the wafer surface and defects associated with irregularities in the microcircuit pattern itself, and may also identify specific types of particles and irregularities.
Various methods for ADC have been described in the patent literature. For example, U.S. Pat. No. 6,256,093 describes a system for on-the-fly ADC in a scanned wafer. A light source illuminates the scanned wafer so as to generate an illuminating spot on the wafer. Light scattered from the spot is sensed by at least two spaced-apart detectors, and is analyzed so as to detect defects in the wafer and classify the defects into distinct defect types.
As another example, U.S. Pat. No. 6,922,482 describes a method and apparatus for automatically classifying a defect on the surface of a semiconductor wafer into one of a number of core classes, using a core classifier employing boundary and topographical information. The defect is then further classified into a subclass using a specific adaptive classifier that is associated with the core class and trained to classify defects from only a limited number of related core classes. Defects that cannot be classified by the core classifier or the specific adaptive classifiers are classified by a full classifier.
Embodiments of the present invention that are described hereinbelow provide improved methods, systems and software for automated inspection and classification of defects.
There is therefore provided, in accordance with an embodiment of the present invention, inspection apparatus, including an imaging module, which is configured to capture images of defects at different, respective locations on a sample. A processor is coupled to process the images so as to automatically assign respective classifications to the defects, and to autonomously control the imaging module to continue capturing the images responsively to the assigned classifications.
In some embodiments, the processor is configured to instruct the imaging module, after assigning the classifications to a first set of the defects appearing in the images captured by the imaging module, to capture further images of a second set of the defects responsively to a distribution of the classifications of the defects in the first set. The processor may be configured to count respective numbers of the defects belonging to one or more of the classifications, and to instruct the imaging module to continue capturing the further images until at least one of the numbers satisfies a predefined criterion. Typically, the processor is configured to cause the imaging module to continue capturing the further images until a number of the defects belonging to a given classification reaches a predefined threshold, and then to terminate inspection of the sample.
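The count-until-threshold behavior described above can be sketched as a simple predicate over per-class defect counts. This is a hypothetical illustration; the function and class names are assumptions, not taken from the embodiments:

```python
from collections import Counter

def quota_met(class_counts, targets):
    """True once every targeted classification has reached its threshold."""
    return all(class_counts[c] >= n for c, n in targets.items())

# Count classifications as defects are imaged, terminating inspection of the
# sample as soon as the quota for a given classification is reached.
counts = Counter()
for cls in ["particle", "pattern", "particle", "particle"]:
    counts[cls] += 1
    if quota_met(counts, {"particle": 3}):
        break  # stop capturing further images
```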
In one embodiment, the apparatus includes a user interface, wherein the processor is coupled to process the images and control the imaging module in response to instructions received from a user via the user interface. Additionally or alternatively, the processor is coupled to process the images and control the imaging module in response to instructions received from a server via a network.
In some embodiments, the processor is configured to identify one or more of the defects for further analysis using a different inspection modality. The imaging module may include multiple detectors, including at least first and second detectors configured to capture images in accordance with different, respective modalities, and the processor may be configured to identify the one or more of the defects by processing first images captured by the first detector and to instruct the imaging module to capture second images of the one or more of the defects using the second detector. In a disclosed embodiment, the processor is configured to identify the one or more of the defects, based on the first images, as belonging to a specified class, and to choose the second detector for capturing the second images depending on the specified class. The multiple detectors may be selected from a group of detectors consisting of electron detectors, X-ray detectors, and optical detectors.
In one embodiment, the apparatus includes a memory, which is configured to store definitions of a plurality of defect classes in terms of respective classification rules in a multi-dimensional feature space, and the processor is configured to extract features of the defects from the images, and to assign the respective classifications by applying the classification rules to the extracted features.
In a disclosed embodiment, the imaging module includes a scanning electron microscope (SEM), and the sample includes a semiconductor wafer.
There is also provided, in accordance with an embodiment of the present invention, a method for inspection, which includes capturing images of defects at different, respective locations on a sample using an imaging module. The images are automatically processed so as to assign respective classifications to the defects, and the imaging module is autonomously controlled to continue capturing the images responsively to the assigned classifications.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
In common models of ADC, the image capture and image analysis functions are separate: An imaging system, such as a scanning electron microscope (SEM), will first capture a certain number of images of different locations on a sample, such as a semiconductor wafer, and these images will then be passed to an ADC system for post hoc analysis. Because it is difficult to predict a priori how many defects of any given type will be found on a given wafer, the post hoc approach will sometimes yield far more images than are needed for effective process analysis. At other times, the SEM will yield insufficient information, so that the wafer will have to be rescanned. Such additional scans may be aimed at providing additional images or information that is specific for the class of a particular defect or group of defects. For example, particle-type defects may be analyzed by energy-dispersive X-ray (EDX) analysis, and images of electrical short circuits may be acquired in tilt mode, at a different angle of imaging. The above sorts of situations result in wasted time and testing resources.
Embodiments of the present invention address these problems by integrating ADC capability with an imaging module, such as a SEM, an optical inspection unit, or any other suitable type of inspection device. An ADC processor analyzes each image that the imaging module produces in order to classify the defects and determine whether they are “interesting,” according to user-defined criteria. For example, a given user may decide that he is interested only in certain defect classes, such as particle-type defects, and wishes to inspect fifty such defects per wafer. In this case, the imaging module may scan different locations, randomly selected, on each wafer until it has captured images of fifty particle-type defects. Because ADC capability is integrated with the imaging module, the type of defect captured at each location is classified online. In the present example, the imaging module will output fifty images of particle-type defects from each wafer and will then move on to process the next wafer.
In other embodiments, when the imaging module supports multiple inspection modalities, the integrated ADC processor may classify defects according to defect type, and may then instruct the imaging module to apply different additional inspection operations to the different types. This sort of iterative inspection loop can be used multiple times at any given defect location, depending on classification results.
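One way to express such per-class follow-up is a routing table consulted after each classification decision. This is a sketch under assumed class and mode names, not the patent's own data structures:

```python
# Map each defect class to the additional acquisition modes it calls for.
# The class and mode names here are illustrative assumptions.
EXTRA_MODES = {
    "particle": ["EDX"],        # material analysis for particle-type defects
    "short_circuit": ["tilt"],  # tilted imaging for electrical shorts
}

def next_modes(defect_class):
    """Return the follow-up modes for a class; none for ordinary defects."""
    return EXTRA_MODES.get(defect_class, [])
```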
These procedures reduce the time needed for analyzing some of the defect classes by performing the necessary analysis and re-imaging on the basis of closed-loop classification. All the necessary information may be gathered in one scan over the defect locations, rather than having to scan the wafer multiple times in multiple different modalities.
Thus, embodiments of this invention enable an imaging module, such as a SEM, to use its resources more efficiently: Integration of ADC capability with the imaging module allows it to capture prescribed numbers of images of particular types of defects and to stop capturing images once it has met its quota, for example, rather than wasting time on superfluous images. Additionally or alternatively, as explained above, classification results of defects captured initially on a given wafer can be used to automatically guide the imaging module to capture subsequent images in particular locations using particular modalities and settings. Based on initial ADC results, the imaging module can be guided to perform further analysis at specific wafer locations using another inspection modality, such as energy-dispersive X-ray (EDX) analysis or other forms of material analysis and different image acquisition modes, including optical modalities, as well as inspection by a human operator.
A processor 26 receives and processes the images that are output by imaging module 24. Processor 26 comprises a classifier module 30, which processes the images to extract relevant inspection feature values from the images of wafer 22. Typically, module 30 assigns the defects to respective classes by applying classification rules, which are stored in a memory 28, to the feature values. Classifier module 30 passes the defect classifications to a control module 32, which controls the ongoing image capture by imaging module 24 accordingly, as described in greater detail hereinbelow. These operations of processor 26 are typically carried out autonomously, i.e., without operator intervention during the inspection process, based on predefined criteria and instructions.
Processor 26 typically comprises a general-purpose computer processor or a group of such processors, which may be integrated inside the enclosure of imaging module 24 or coupled to it by a suitable communication link. The processor uses memory 28 to hold defect information and classification rules. Processor 26 is coupled to a user interface 34, through which a user, such as an operator of system 20, is able to define operating criteria to be applied in processing images and controlling imaging module 24. Processor 26 is programmed in software to carry out the functions that are described herein, including the functions of classifier module 30 and control module 32. The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be stored in tangible, non-transitory storage media, such as optical, magnetic, or electronic memory media (which may be comprised in memory 28, as well). Alternatively or additionally, at least some of the functions of processor 26 may be implemented in dedicated or programmable hardware logic.
Classifier module 30 may apply any suitable sort of ADC algorithm that is known in the art to the defect image data. In one embodiment, for example, module 30 runs multiple classifiers, including both single-class and multi-class classifiers. These classifiers use classification rules specified in a multi-dimensional feature space, which define the defect classes in terms of respective regions in the feature space. Module 30 extracts features of the defects from the images provided by module 24, and assigns the defect classifications by applying the classification rules to the extracted features. The multi-class classifier sorts the defects on this basis among a set of predefined defect classes (such as particle defects, pattern defects, etc.); while the single-class classifiers are defined respectively for each class and classify defects as being within or outside the class boundaries. Classifiers of this sort are described in detail, for example, in U.S. patent application Ser. No. 12/844,724, filed Jul. 27, 2010, whose disclosure is incorporated herein by reference.
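The combination of a multi-class classifier with per-class single-class boundaries can be sketched as follows. The two-dimensional feature space, centroids, and radii are invented for illustration only; the referenced application describes the actual classifiers:

```python
import math

# Assumed feature space: one centroid and one single-class boundary per class.
CENTROIDS = {"particle": (0.8, 0.2), "pattern": (0.2, 0.9)}
RADII = {"particle": 0.5, "pattern": 0.4}

def classify(features):
    """The multi-class step picks the nearest class centroid; the single-class
    step then accepts the defect only if it lies within that class's region."""
    dist = lambda c: math.dist(features, CENTROIDS[c])
    best = min(CENTROIDS, key=dist)
    return best if dist(best) <= RADII[best] else "unknown"
```

A defect rejected by every single-class boundary (here returned as "unknown") would be a candidate for tagging and further analysis.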
Processor 26 typically communicates, via a network, for example, with an ADC server 38. The server provides processor 26 with “recipes” for defect analysis and classification and may update these recipes from time to time. Processor 26 reports defect inspection and classification results to the server. Because classification is performed locally at imaging module 24, the volume of data that must be communicated to server 38 is greatly reduced, relative to systems in which raw images are transmitted to the ADC server. Additionally or alternatively, processor 26 may convey some image data and/or intermediate classification results to server 38 for further processing.
Control module 32 receives defect capture instructions, at an instruction input step 40. These instructions may be programmed by a user, such as an operator of system 20, via user interface 34, for example, or they may alternatively be downloaded to the system via a network or conveyed to the system by any other suitable means. Typically, the instructions define one or more criteria applying to the distribution of defects that system 20 is to seek. For example, the instructions may specify one or more classes of defects and the number of defects in each such class that the system should attempt to find on each wafer. The instructions may also specify a timeout condition indicating, for example, that inspection of a wafer should terminate after capturing images of some maximal number of possible defect sites without reaching the target defect distribution. Additionally or alternatively, the instructions may specify additional image acquisition modes to be applied to one or more classes of defects, based on classification decisions made at each iteration of a closed-loop inspection process. Such image acquisition modes may include EDX, tilt imaging, optical imaging, and any other information that can be collected by detectors 36 during the scan.
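The instructions received at step 40 might be encoded, for example, as follows. The field names and values are an assumption for illustration, not a prescribed recipe format:

```python
# Hypothetical capture instructions for one wafer: target defect counts,
# a timeout on the number of sites imaged, and per-class follow-up modes.
recipe = {
    "targets": {"particle": 50},           # find 50 particle-type defects
    "max_sites": 500,                      # timeout: stop after 500 sites
    "extra_modes": {"particle": ["EDX"]},  # closed-loop re-imaging by class
}
```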
Control module 32 instructs imaging module 24 to capture an image of a defect on wafer 22, at an image capture step 42. Module 24 passes the image (and/or extracted features of the image) to classifier module 30, at a defect classification step 44. Module 30 applies the appropriate rules to the defect features in order to assign the defect to a particular class. Module 30 may optionally tag the defect for further analysis, such as when the defect cannot be classified with confidence using the rules in memory 28 or when the instructions provided at step 40 instruct processor 26 that certain types of defects should be so identified. Depending on these instructions, these tagged defects may be processed further using another imaging modality in imaging module 24. Alternatively or additionally, certain tagged defects may be passed to a human inspector and/or to another inspection machine, such as an X-ray analysis tool.
Control module 32 receives and records the classification of each defect from classifier module 30, at a distribution checking step 46. At this step, if the defect is of a type that has been tagged for further imaging and classification, module 32 may return to step 42 and instruct imaging module 24 to capture another image of the same defect using another specified modality (such as EDX, tilt, or optical imaging, as explained above). Additionally or alternatively, module 32 may, at step 46, compare the distribution of defects classified so far to the instructions that were received at step 40. If the instructions have not yet been fulfilled (or timed out), the control module returns to step 42 and instructs imaging module 24 to capture a defect image at another location.
If at step 46 the instructions have been fulfilled (by having performed all specified imaging and classification steps and having collected the required number of images of defects belonging to a specified class or classes, for example) or timed out, module 32 terminates the inspection of wafer 22 and issues a report, at an inspection completion step 48. The report may include images of the defects in the class or classes specified by the instructions, or it may simply contain tabulated defect data with respect to wafer 22. System 20 may then proceed to inspection of the next wafer.
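Steps 42-48 together form a closed loop, which can be sketched as follows. The capture and classification functions are stand-ins, and all names are illustrative assumptions rather than the embodiments' own interfaces:

```python
from collections import Counter

def inspect_wafer(sites, classify, targets, max_sites):
    """Image defect sites one at a time, classify each, and stop as soon as
    the target distribution is reached or the site budget is exhausted."""
    counts, report = Counter(), []
    for n, site in enumerate(sites, start=1):
        cls = classify(site)                 # steps 42-44: capture, classify
        counts[cls] += 1
        report.append((site, cls))
        fulfilled = all(counts[c] >= t for c, t in targets.items())
        if fulfilled or n >= max_sites:      # step 46: distribution / timeout
            break
    return counts, report                    # step 48: report, next wafer

# Usage with a stand-in classifier that calls even-numbered sites particles:
counts, report = inspect_wafer(
    range(100),
    classify=lambda s: "particle" if s % 2 == 0 else "pattern",
    targets={"particle": 3},
    max_sites=50,
)
```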
Although the above method is described, for the sake of clarity, in the specific context of defect classification in system 20 and imaging module 24, these methods may similarly be applied in other systems and applications of automated inspection, and are in no way limited to semiconductor wafer defects or to SEM images. It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application is a continuation of U.S. patent application Ser. No. 13/948,118 filed on Jul. 22, 2013, the contents of which are hereby incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
5963662 | Vachtsevanos | Oct 1999 | A |
5991699 | Kulkarni et al. | Nov 1999 | A |
6148099 | Lee et al. | Nov 2000 | A |
6195458 | Warnick et al. | Feb 2001 | B1 |
6256093 | Ravid et al. | Jul 2001 | B1 |
6288782 | Worster et al. | Sep 2001 | B1 |
6292582 | Lin et al. | Sep 2001 | B1 |
6650779 | Vachtsevanos et al. | Nov 2003 | B2 |
6763130 | Somekh et al. | Jul 2004 | B1 |
6922482 | Ben-Porath | Jul 2005 | B1 |
6999614 | Bakker et al. | Feb 2006 | B1 |
7106434 | Mapoles et al. | Sep 2006 | B1 |
7113628 | Obara et al. | Sep 2006 | B1 |
7318051 | Weston et al. | Jan 2008 | B2 |
7379175 | Stokowski | May 2008 | B1 |
7570796 | Zafar et al. | Aug 2009 | B2 |
7570800 | Lin et al. | Aug 2009 | B2 |
7684609 | Toth et al. | Mar 2010 | B1 |
7756320 | Honda et al. | Jul 2010 | B2 |
7756658 | Kulkarni et al. | Jul 2010 | B2 |
8112241 | Xiong | Feb 2012 | B2 |
8175373 | Abbott et al. | May 2012 | B2 |
8194968 | Park et al. | Jun 2012 | B2 |
8315453 | Shlain et al. | Nov 2012 | B2 |
8502146 | Chen et al. | Aug 2013 | B2 |
8983179 | Yu et al. | Mar 2015 | B1 |
20020159641 | Whitney et al. | Oct 2002 | A1 |
20020159643 | DeYong et al. | Oct 2002 | A1 |
20020165837 | Zhang et al. | Nov 2002 | A1 |
20020168099 | Noy | Nov 2002 | A1 |
20020174344 | Ting | Nov 2002 | A1 |
20030167267 | Kawatani | Sep 2003 | A1 |
20040013304 | Viola et al. | Jan 2004 | A1 |
20040034612 | Mathewson et al. | Feb 2004 | A1 |
20040126909 | Obara et al. | Jul 2004 | A1 |
20040156540 | Gao et al. | Aug 2004 | A1 |
20040218806 | Miyamoto et al. | Nov 2004 | A1 |
20040263911 | Rodriguez et al. | Dec 2004 | A1 |
20050004774 | Volk et al. | Jan 2005 | A1 |
20050049990 | Milenova et al. | Mar 2005 | A1 |
20050147287 | Sakai et al. | Jul 2005 | A1 |
20050169517 | Kasai | Aug 2005 | A1 |
20050175243 | Luo et al. | Aug 2005 | A1 |
20050196035 | Luo et al. | Sep 2005 | A1 |
20060009011 | Barrett et al. | Jan 2006 | A1 |
20060048007 | Yuan et al. | Mar 2006 | A1 |
20060112038 | Luo | May 2006 | A1 |
20060289752 | Fukunishi | Dec 2006 | A1 |
20070047800 | Hiroi et al. | Mar 2007 | A1 |
20070053580 | Ishikawa | Mar 2007 | A1 |
20070063548 | Eipper | Mar 2007 | A1 |
20080013784 | Takeshima et al. | Jan 2008 | A1 |
20080075352 | Shibuya et al. | Mar 2008 | A1 |
20090136121 | Nakagaki et al. | May 2009 | A1 |
20090157578 | Sellamanickam et al. | Jun 2009 | A1 |
20090171662 | Huang et al. | Jul 2009 | A1 |
20090305423 | Subramanian et al. | Dec 2009 | A1 |
20110026804 | Jahanbin et al. | Feb 2011 | A1 |
20120027285 | Shlain et al. | Feb 2012 | A1 |
20120054184 | Masud et al. | Mar 2012 | A1 |
20130165134 | Touag et al. | Jun 2013 | A1 |
20130279794 | Greenberg et al. | Oct 2013 | A1 |
20130304399 | Chen et al. | Nov 2013 | A1 |
20130318485 | Park et al. | Nov 2013 | A1 |
20140050389 | Mahadevan et al. | Feb 2014 | A1 |
Number | Date | Country |
---|---|---|
1917416 | Feb 2007 | CN |
102648646 | Aug 2012 | CN |
2001135692 | Jun 2000 | JP |
2003515942 | Nov 2000 | JP |
2004191187 | Jul 2004 | JP |
2004295879 | Oct 2004 | JP |
200447939 | Dec 2004 | JP |
2007225531 | Sep 2007 | JP |
200876167 | Apr 2008 | JP |
2008529067 | Jul 2008 | JP |
2010249547 | Apr 2009 | JP |
2009103508 | May 2009 | JP |
2010514226 | Apr 2010 | JP |
2010164487 | Jul 2010 | JP |
2011158373 | Aug 2011 | JP |
201203927 | Jan 2012 | TW |
201233095 | Aug 2012 | TW |
2011155123 | Dec 2011 | WO |
2013140302 | Sep 2013 | WO |
2013169770 | Nov 2013 | WO |
Entry |
---|
Ou, Guobin, and Yi Lu Murphey. “Multi-class pattern classification using neural networks.” Pattern Recognition 40, No. 1 (2007): 4-18. |
Chernova, Sonia, and Manuela Veloso. “Multi-thresholded approach to demonstration selection for interactive robot learning.” In Human-Robot Interaction (HRI), 2008 3rd ACM/IEEE International Conference on, pp. 225-232. IEEE, 2008. |
Ban, Tao, and Shigeo Abe. “Implementing multi-class classifiers by one-class classification methods.” In Neural Networks, 2006. IJCNN'06. International Joint Conference on, pp. 327-332. IEEE, 2006. |
Tax, David MJ, and Piotr Juszczak. “Kernel whitening for one-class classification.” In Pattern Recognition with Support Vector Machines, pp. 40-52. Springer Berlin Heidelberg, 2002. |
Li, Te-Sheng, and Cheng-Lung Huang. “Defect spatial pattern recognition using a hybrid SOM-SVM approach in semiconductor manufacturing.” Expert Systems with Applications 36.1 (Jan. 2009): 374-385. |
Akay, Mehmet Fatih. “Support vector machines combined with feature selection for breast cancer diagnosis.” Expert Systems with Applications 36, No. 2 (2009): 3240-3247. |
Varewyck, Matthias, and J-P. Martens. “A practical approach to model selection for support vector machines with a Gaussian Kernel.” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 41.2 (2011): 330-340. |
Wang, Wenjian, et al. “Determination of the spread parameter in the Gaussian kernel for classification and regression.” Neurocomputing 55.3 (2003): 643-663. |
Xu, Zongben, Mingwei Dai, and Deyu Meng. “Fast and efficient strategies for model selection of Gaussian support vector machine.” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 39.5 (2009): 1292-1307. |
David M.J. Tax; Robert P. W. Duin, “Support Vector Data Description”. Jan. 2004, Machine learning, vol. 54, pp. 45-66. |
Vapnik, Vladimir N., Section 5.4 The Optimal Separating Hyperplane, The Nature of Statistical Learning Theory, Statistics for Engineering and Information Science, Second Edition, 2000, 1995 Springer-Verlag, New York, Inc. pp. 131-163. |
Chih-Chung Chang and Chih-Jen Lin, “LIBSVM: A Library for Support Vector Machines,” National Taiwan University (2001), updated Mar. 6, 2010, pp. 1-30. |
Assaf Glazer and Moshe Sipper, “Evolving an Automatic Defect classification Tool,” EvoWorkshops 2008, LNCS 4974 (Springer-Verlag, 2008), pp. 194-203. |
LIBSVM—a Library for Support Vector Machines, as downloaded from www.csie.ntu.edu.tw/˜cjlin/libsvm on Jul. 27, 2010. |
Scholkopf, Bernhard et al., “New Support Vector Algorithms,” Neural Computation 12 (2000), Massachusetts Institute of Technology, pp. 1207-1245. |
Wang, Peng, Christian Kohler, and Ragini Verma. “Estimating cluster overlap on Manifolds and its Application to Neuropsychiatric Disorders.” Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on. IEEE, 2007. 6 pages. |
Number | Date | Country | |
---|---|---|---|
20190121331 A1 | Apr 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13948118 | Jul 2013 | US |
Child | 16174070 | US |