This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2023-042309, filed on Mar. 16, 2023, the entire contents of which are incorporated herein by reference.
An embodiment of the present invention relates to a semiconductor image processing apparatus and a semiconductor image processing method.
Recent semiconductor devices have been miniaturized, and it is not easy to accurately extract defects and the like of individual semiconductor devices formed on a wafer. In particular, a diffraction pattern caused by the periodic structure of the wafer may appear in an image obtained by capturing the surface of the wafer, and it is not easy to distinguish such a diffraction pattern from a linear defect.
Recent semiconductor devices are formed through multiple fabrication processes. Consequently, defects of various types occur, and the shape, size, number, color, luminance, and the like in an image obtained by capturing the surface of the wafer differ for each type of defect. Accurately extracting such defects from the image is essential for improving the yield of the semiconductor device.
According to one embodiment, a semiconductor image processing apparatus comprising a processing circuitry, the processing circuitry configured to:
Hereinafter, embodiments of a semiconductor image processing apparatus and a semiconductor image processing method will be described with reference to the drawings. Although main components of the semiconductor image processing apparatus will be mainly described below, the semiconductor image processing apparatus and the semiconductor image processing method may include components and functions that are not illustrated or described. The following description does not exclude components and functions that are not illustrated or described.
The identifier generation unit 2 generates an identifier 10. The identifier 10 identifies a label corresponding to a feature amount included in an input image. The input image includes, for example, a simulated image generated by the simulated image generation unit 5 and a real image obtained by actual image capturing. The input image is a simulated image or a real image of any object, and the object is not limited to a specific object. An example in which the object is a semiconductor wafer and the input image is a simulated defect image or a real image of the surface of a semiconductor wafer will be mainly described below.
The feature amount refers to a characteristic form included in the input image. As a specific example, the feature amount is a defect included in the simulated defect image or the real image which is the input image. As will be described later, defects include a plurality of types that differ in at least one of shape, size, number, color, luminance, and the like.
The label is a partial image including the feature amount in the simulated defect image or the real image. In this specification, a separate label is allocated for each type of feature amount. In this specification, a label that is allocated in advance to a specific feature amount of an input image is referred to as a true answer label.
The identifier 10 includes, for example, a neural network capable of machine learning. When an input image is input to the neural network, a label corresponding to the input image is output from the neural network. As described above, the identifier 10 identifies and outputs the label included in the input image.
The self-learning unit 3 learns the model 9 for inferring the feature amount included in the input image and learns the identifier 10. More specifically, the self-learning unit 3 learns the model 9 by inputting the simulated defect image having a known true answer label to the model 9 so that the model 9 outputs an inference image of the true answer label. In addition, the self-learning unit 3 learns the identifier 10 by inputting the inference image output from the model 9 to the identifier 10 so that the identifier 10 outputs the true answer label.
The model 9 performs segmentation to classify the input image according to the feature amount included in the input image, and outputs a different inference image for each type of the feature amount. In this specification, the model 9 may be referred to as an image segmentation model 9. The image segmentation model 9 includes, for example, a neural network capable of machine learning. By inputting an input image to the neural network, an inference image corresponding to a feature amount included in the input image is output. In this specification, the self-learning unit 3 may be referred to as a first learning unit. As described above, when the self-learning unit 3 learns the image segmentation model 9, the simulated defect image having the known true answer label is input to the image segmentation model 9.
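The embodiment does not fix a particular network architecture for the image segmentation model 9 or the identifier 10. The following is a minimal, hypothetical PyTorch sketch of the two components (the class names, layer sizes, and the num_classes parameter are assumptions introduced here for illustration only): the segmentation model maps an input image to per-pixel class scores serving as the inference image, and the identifier maps an inference image to an image-level label.

```python
import torch.nn as nn

class SegmentationModel(nn.Module):
    """Minimal encoder-decoder stand-in for the image segmentation model 9."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),  # per-pixel class scores (inference image)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Identifier(nn.Module):
    """Minimal image-level classifier stand-in for the identifier 10."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),  # label scores for the whole inference image
        )

    def forward(self, inference_image):
        return self.net(inference_image)
```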
The self-feedback unit 4 additionally learns the image segmentation model 9 based on the input image and the learned identifier 10. The self-feedback unit 4 uses both the simulated defect image and the real image as input images to the image segmentation model 9. In this specification, the self-feedback unit 4 may be referred to as a second learning unit.
The self-feedback unit 4 includes a first loss function calculation unit 4a, a second loss function calculation unit 4b, and an update unit 4c. The first loss function calculation unit 4a calculates a first loss function value based on a label corresponding to a first inference image inferred by inputting the simulated image to the model 9 and the true answer label of the simulated image. The second loss function calculation unit 4b calculates a second loss function value based on a label predicted by inputting, to the identifier 10, a second inference image inferred by inputting a real image to the model 9. The update unit 4c updates the parameter of the model 9 based on a third loss function value obtained by adding the first loss function value and the second loss function value. Details of the processing of the self-feedback unit 4 will be described later.
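The description above leaves open how the label predicted for the real image is turned into the second loss function value; the sketch below assumes a pseudo-labeling scheme in which the identifier's most confident prediction is used as the target, which is only one possible reading. The function and argument names (self_feedback_step, sim_label_map, and so on) are hypothetical, and the optimizer is assumed to hold only the parameters of the model 9, consistent with the update unit 4c updating only that model.

```python
import torch.nn.functional as F

def self_feedback_step(model, identifier, optimizer,
                       sim_image, sim_label_map, real_image):
    """One hypothetical additional-learning step of the self-feedback unit 4."""
    # First loss function value: simulated image vs. its known true answer label.
    first_inference = model(sim_image)                      # first inference image
    loss1 = F.cross_entropy(first_inference, sim_label_map)

    # Second loss function value: the learned identifier 10 predicts a label for
    # the inference image of the real image, whose true answer label is unknown.
    second_inference = model(real_image)                    # second inference image
    predicted = identifier(second_inference)
    pseudo_label = predicted.detach().argmax(dim=1)         # assumed pseudo-label target
    loss2 = F.cross_entropy(predicted, pseudo_label)

    # Third loss function value: sum of the first and second loss function values.
    loss3 = loss1 + loss2
    optimizer.zero_grad()
    loss3.backward()
    optimizer.step()                                        # update unit 4c updates model 9
    return loss3.item()
```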
The simulated image generation unit 5 generates a simulated image. In this specification, an example in which the simulated image generation unit 5 generates a simulated defect image including a simulated defect will be mainly described. As will be described later, the simulated image generation unit 5 generates a simulated defect image by combining a background pattern image and a defect pattern image.
The model storage unit 6 stores the image segmentation model 9, both while it is being learned and after learning has been completed.
The feature emphasis processing unit 7 generates a background difference image obtained by removing a background pattern from a real image. The region emphasis processing unit 8 extracts a region including a feature amount included in the real image, and performs image processing on the background difference image according to each region to generate a feature emphasis-processed image. For example, the feature emphasis-processed image is an image in which at least one of the color or the luminance of the corresponding pixel region is emphasized for each type of defect.
The CPU 11 performs overall control of the semiconductor image processing apparatus 1 by reading and executing a program from the ROM 19. At this time, the CPU 11 uses the RAM 18 as a work memory. The learning processor 13 mainly performs processing of the identifier generation unit 2, the self-learning unit 3, and the self-feedback unit 4 under the instruction of the CPU 11. The learning processor 13 may be omitted, and the CPU 11 may perform the processing of the identifier generation unit 2, the self-learning unit 3, and the self-feedback unit 4.
The model storage unit 6 stores the parameters and the like of the image segmentation model 9 that is learned by the self-learning unit 3 and is additionally learned by the self-feedback unit 4. The parameters are a layer configuration of a neural network constituting the image segmentation model 9, weight information between nodes of each layer, and the like. The model storage unit 6 may be a partial storage region of the RAM 18.
The simulated defect image storage unit 14 stores the simulated defect image generated by the simulated image generation unit 5 in
The simulated image generation unit 5 generates a template of a background pattern image (S1) and generates a template of a defect pattern image (S2). The templates may be generated in advance and stored in the simulated defect image storage unit 14. The simulated image generation unit 5 may acquire any background pattern image and any defect pattern image from the simulated defect image storage unit 14.
Then, the simulated image generation unit 5 generates training data in which a simulated defect image obtained by freely combining the background pattern image and the defect pattern image and a corresponding true answer label are combined (S3).
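As one hypothetical illustration of S1 to S3, the sketch below pastes a defect pattern onto a background pattern at a random position and builds the matching true answer label. The additive compositing, the random placement, and the names (make_training_pair, defect_class) are assumptions introduced here; the embodiment only requires that the two templates be freely combined.

```python
import numpy as np

def make_training_pair(background: np.ndarray, defect: np.ndarray,
                       defect_class: int, rng: np.random.Generator):
    """Hypothetical sketch: combine a background pattern image and a defect
    pattern image, and build the corresponding true answer label."""
    h, w = background.shape
    dh, dw = defect.shape
    y = rng.integers(0, h - dh)          # random paste position (assumption)
    x = rng.integers(0, w - dw)

    sim_image = background.astype(np.float32)
    sim_image[y:y+dh, x:x+dw] += defect  # additive compositing (assumption)
    sim_image = np.clip(sim_image, 0, 255).astype(np.uint8)

    # True answer label: 0 = background, defect_class where the defect was pasted.
    label = np.zeros((h, w), dtype=np.uint8)
    label[y:y+dh, x:x+dw][defect > 0] = defect_class
    return sim_image, label
```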
Then, the self-learning unit 3 learns the image segmentation model 9 (S4) and learns the identifier 10 (S5). As described above, the self-learning unit 3 first learns the image segmentation model 9, and learns the identifier 10 after learning of the image segmentation model 9 is completed. Alternatively, the self-learning unit 3 may perform learning of the image segmentation model 9 and learning of the identifier 10 in parallel.
Then, the self-feedback unit 4 performs additional learning of the image segmentation model 9 by using the image segmentation model 9, the learned identifier 10, the simulated defect image, and the real image (S6).
Then, the real image is input to the learned image segmentation model 9, and the region emphasis processing unit 8 extracts a region for each class according to the feature amount included in the real image, based on the label output from the image segmentation model 9 (S7), performs image processing according to each region (S8), and generates a feature emphasis-processed image (S9). The class is identification information for classifying individual feature amounts.
Next, the processing operation of each unit will be described in more detail.
The simulated image generation unit 5 classifies the defects included in the original image and generates a true answer label as a monochrome image.
The self-learning unit 3 compares the inference image output from the image segmentation model 9 with the true answer label pixel by pixel, and feeds back the pixel-by-pixel comparison result to the image segmentation model 9 as a loss function value. The self-learning unit 3 updates the parameters of the image segmentation model 9 based on the loss function value.
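A minimal sketch of this self-learning step, assuming a per-pixel cross-entropy as the loss function (the embodiment does not name a specific loss function), might look as follows; the function name and tensor shapes are assumptions.

```python
import torch.nn.functional as F

def self_learning_step(model, optimizer, sim_image, true_label_map):
    """Hypothetical learning step of the self-learning unit 3.
    sim_image: (N, 1, H, W) simulated defect image.
    true_label_map: (N, H, W) per-pixel true answer label indices."""
    inference = model(sim_image)                       # (N, C, H, W) inference image
    loss = F.cross_entropy(inference, true_label_map)  # pixel-by-pixel comparison
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                   # update the parameters of model 9
    return loss.item()
```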
The image segmentation model 9 and the identifier 10 each perform learning by using the neural network.
The neural network N1 of the image segmentation model 9 illustrated in
The neural network N2 of the identifier 10 illustrated in
As illustrated in
In addition, as illustrated in
As illustrated in
The feature emphasis processing unit 7 generates a background difference image. The background difference image is an image obtained by removing the background pattern from the real image.
The feature emphasis processing unit 7 includes an average pixel calculation unit 31, a first generation unit 32, a second generation unit 33, and a third generation unit 34.
The average pixel calculation unit 31 calculates an average pixel value for each first pixel region in a first direction of the input image.
The first generation unit 32 generates a first simulated background image based on the average pixel value for each first pixel region in the first direction of the input image. The input image is the real image and is the same as a real image input to the self-feedback unit 4.
The second generation unit 33 generates a second simulated background image based on a pixel value obtained by subtracting, for each pixel, the average pixel value of the input image from the pixel value obtained by adding, for each pixel, the average pixel value for each first pixel region in the first direction of the input image and the average pixel value for each second pixel region in a second direction intersecting the first direction of the input image.
The third generation unit 34 generates a background difference image of the input image by a difference between the input image and the first simulated background image or a difference between the input image and the second simulated background image. The third generation unit 34 can generate a background difference image from one input image.
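Written out with hypothetical notation (the symbols I, H, W, r-bar, c-bar, m, B1, B2, and D are introduced here for illustration and assume the first direction is the row direction and the second direction is the column direction), the two simulated background images and the background difference image can be expressed as:

```latex
% Hypothetical formulas; I(i,j) is the input (real) image of size H x W.
\begin{aligned}
\bar{r}(i) &= \tfrac{1}{W}\textstyle\sum_{j=1}^{W} I(i,j), \quad
\bar{c}(j) = \tfrac{1}{H}\textstyle\sum_{i=1}^{H} I(i,j), \quad
m = \tfrac{1}{HW}\textstyle\sum_{i,j} I(i,j) \\
B_1(i,j) &= \bar{r}(i) && \text{(first simulated background image)} \\
B_2(i,j) &= \bar{r}(i) + \bar{c}(j) - m && \text{(second simulated background image)} \\
D(i,j) &= I(i,j) - B_k(i,j) \ \text{or}\ B_k(i,j) - I(i,j), \quad k \in \{1,2\} && \text{(background difference image)}
\end{aligned}
```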
The region emphasis processing unit 8 extracts a region including a feature amount included in the real image, and performs image processing according to the extracted region to generate a feature emphasis-processed image.
The region emphasis processing unit 8 includes a region extraction unit 35 and a region-specific image processing unit 36.
The region extraction unit 35 inputs the real image to the image segmentation model 9, and extracts a region for each class based on a result of classification for each type of defect included in the real image.
The region-specific image processing unit 36 generates the feature emphasis-processed image by performing image processing suited to the background difference image generated by the feature emphasis processing unit 7 or image processing suited to each region extracted by the region extraction unit 35.
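As a hypothetical illustration of such region-specific processing, the sketch below tints and brightens the background difference image per predicted class; the color palette, blending weight, and function name are assumptions rather than a prescribed implementation.

```python
import numpy as np

# Assumed class-to-color palette; class 0 (background) is left unchanged.
CLASS_COLORS = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255)}

def emphasize_regions(background_diff: np.ndarray, class_map: np.ndarray) -> np.ndarray:
    """background_diff: (H, W) non-negative residual image.
    class_map: (H, W) per-pixel class indices from the image segmentation model 9."""
    out = np.stack([background_diff] * 3, axis=-1).astype(np.float32)
    for cls, color in CLASS_COLORS.items():
        mask = class_map == cls
        # Blend each defect region toward its class color to emphasize it.
        out[mask] = 0.5 * out[mask] + 0.5 * np.array(color, dtype=np.float32)
    return out.clip(0, 255).astype(np.uint8)
```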
The average pixel calculation unit 31 of the feature emphasis processing unit 7 calculates an average pixel value for each first pixel region (for example, pixel row) in the first direction (for example, row direction) of the real image (S11).
Then, the first generation unit 32 generates a first simulated background image for all the first pixel regions (for example, pixel rows) of the real image based on the average pixel value for each of the first pixel regions (for example, pixel rows) (S12). As illustrated in
Then, before or after the first generation unit 32 executes the process of S12, the second generation unit 33 calculates an average pixel value for each second pixel region (for example, pixel column) in the second direction (for example, column direction) of the real image (S13). Subsequently, the second generation unit 33 calculates a difference pixel value between the average pixel value for all the second pixel regions (for example, pixel columns) of the input image and the average pixel value of the input image (S14). In S14, for example, as illustrated in
Subsequently, the second generation unit 33 generates a second simulated background image (S15) by adding the average pixel value for each first pixel region in the first direction (for example, the row direction) of the input image and the difference pixel value calculated in S14. As illustrated in
Then, the third generation unit 34 generates a background difference image by a difference between the real image and the first simulated background image or a difference between the real image and the second simulated background image (S16).
When the background difference image is generated, it is possible to select whether to generate a white residual image that makes a bright-series (white-series) defect conspicuous or a black residual image that makes a dark-series (black-series) defect conspicuous, depending on whether the first or second simulated background image is subtracted from the real image or the real image is subtracted from the first or second simulated background image. The third generation unit 34 can generate at least one of the white residual image and the black residual image. As a result, it is possible to extract both the defect of the bright series and the defect of the dark series.
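A compact numerical sketch of S11 to S16 is shown below, assuming the first direction is the row direction, the second direction is the column direction, and the real image is a single-channel NumPy array; the function name and flags are hypothetical. The order of subtraction is switched to obtain either the white residual image or the black residual image.

```python
import numpy as np

def background_difference(real: np.ndarray, use_second: bool = True,
                          white_residual: bool = True) -> np.ndarray:
    """Hypothetical sketch of S11 to S16 for a single (H, W) real image."""
    real = real.astype(np.float32)
    row_mean = real.mean(axis=1, keepdims=True)    # S11: average per pixel row
    b1 = np.broadcast_to(row_mean, real.shape)     # S12: first simulated background image
    col_mean = real.mean(axis=0, keepdims=True)    # S13: average per pixel column
    diff = col_mean - real.mean()                  # S14: column average minus image average
    b2 = row_mean + diff                           # S15: second simulated background image
    background = b2 if use_second else b1
    # S16: difference image; the subtraction order selects white or black residual.
    return real - background if white_residual else background - real
```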
As described above, in the first embodiment, the image segmentation model 9 and the identifier 10 are learned by inputting the simulated defect image having the known true answer label to the image segmentation model 9 and the identifier 10. Thereafter, the image segmentation model 9 is additionally learned by using the real image having an unknown true answer label and the simulated defect image having the known true answer label. As a result, it is possible to accurately distinguish and extract various defects included in the real image, and to generate a feature emphasis-processed image in which the display form is changed for each type of defect.
In a second embodiment, a real image is classified by a feature amount included in the real image.
The self-learning unit 3 and the self-feedback unit 4 in the second embodiment perform processing similar to that of the self-learning unit 3 and the self-feedback unit 4 in the first embodiment. The example of extracting the defect included in the real image has been described in the first embodiment, but the feature amount in the second embodiment is not necessarily limited to the defect.
Similarly to the feature emphasis processing unit 7 according to the first embodiment, the feature emphasis processing unit 7 according to the second embodiment includes an average pixel calculation unit 31, a first generation unit 32, a second generation unit 33, and a third generation unit 34. The feature emphasis processing unit 7 generates a background difference image from one input image. The feature emphasis processing unit 7 can generate a background difference image including a first background difference image for emphasizing a feature amount of a bright series (a white color series) and a second background difference image for emphasizing a feature amount of a dark series (a black color series).
The feature amount extraction unit 37 extracts the feature amount from a feature emphasis-processed image generated by performing image processing according to a region including the feature amount extracted by the additionally learned model 9 and a region including the feature amount included in the background difference image.
The feature amount extraction unit 37 includes a feature amount combining unit 39 that combines the feature amount included in an input image and the feature amount extracted by the feature amount extraction unit 37.
The image classification unit 38 performs clustering of the real images based on the feature amount combined by the feature amount combining unit 39.
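The embodiment does not name a specific clustering algorithm. As one hypothetical sketch, per-image feature vectors extracted from the real images and from the feature emphasis-processed images could be concatenated and clustered with k-means; the function name, the use of scikit-learn, and the n_clusters value are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_real_images(raw_features: np.ndarray,
                        emphasized_features: np.ndarray,
                        n_clusters: int = 5) -> np.ndarray:
    """Hypothetical sketch: combine the feature amounts per real image and cluster.
    raw_features, emphasized_features: (num_images, feature_dim) arrays."""
    combined = np.concatenate([raw_features, emphasized_features], axis=1)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(combined)
```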
As described above, in the second embodiment, by using the image segmentation model 9 and the identifier 10 similar to those in the first embodiment, a feature emphasis-processed image in which the feature amount included in the real image is emphasized for each type of feature amount can be generated. The feature amount extracted from the feature emphasis-processed image and the feature amount extracted from the real image can then be combined, and clustering of the real images can be performed based on the combined feature amount. As a result, the real images can be classified in consideration of various feature amounts.
At least a part of the semiconductor image processing apparatuses 1 and 1a described in the above-described embodiments may be configured by hardware or software. In a case of being configured by software, a program for realizing the function of at least a part of the semiconductor image processing apparatuses 1 and 1a may be stored in a recording medium such as a flexible disk or a CD-ROM, and may be read and executed by a computer. The recording medium is not limited to a removable recording medium such as a magnetic disk or an optical disk, and may be a fixed recording medium such as a hard disk device or a memory.
In addition, the program for realizing the function of at least a part of the semiconductor image processing apparatuses 1 and 1a may be distributed via a communication line (including wireless communication) such as the Internet. Further, the program may be distributed via a wired line or a wireless line such as the Internet or by being stored in a recording medium, in an encrypted, modulated, or compressed state.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the disclosures. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosures. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosures.
Number | Date | Country | Kind
---|---|---|---
2023-042309 | Mar. 16, 2023 | JP | national