The present invention relates to an inspection for detecting a minute pattern defect, a foreign particle or the like from an image (detected image) acquired using light, a laser, an electron beam or the like and representing an object to be inspected. The invention more particularly relates to a defect inspection device and a defect inspection method which are suitable for inspecting a defect on a semiconductor wafer, a defect on a TFT, a defect on a photomask or the like.
Japanese Patent No. 2976550 (Patent Document 1) discloses a conventional technique for detecting a defect by comparing a detected image with a reference image. In this technique, many images of chips regularly formed on a semiconductor wafer are acquired. On the basis of the acquired images, a cell comparison inspection is performed on a memory mat formed in a periodic pattern in each of the chips: repeating patterns located adjacent to each other are compared with each other, and a mismatched part is detected as a defect. Further, separately from the cell comparison inspection, a chip comparison inspection is performed on a peripheral circuit formed in a non-periodic pattern: corresponding patterns included in chips located near each other are compared with each other, and a mismatched part is detected as a defect.
In addition, there is a method described in Japanese Patent No. 3808320 (Patent Document 2). In this method, a cell comparison inspection and a chip comparison inspection are performed on a memory mat that is included in a chip and is set in advance, and results of the comparisons are integrated to detect a defect. In these conventional techniques, information on the arrangements of the memory mats and the peripheral circuit is defined or obtained in advance, and the comparison inspections are switched in accordance with the arrangement information.
In a semiconductor wafer that is an object to be inspected, a minute difference in the thicknesses of patterns in chips may occur, even when the chips are located adjacent to each other, due to a planarization process such as CMP. In addition, a difference in brightness of images between the chips may occur locally. Further, a difference in brightness between the chips may be derived from a variation in the widths of patterns. The cell comparison inspection, which compares patterns separated from each other by a small distance, is performed with a higher sensitivity than the chip comparison inspection. As indicated by reference numeral 174 of
An object of the present invention is to provide a defect inspection device and method that enable a defect to be detected with the highest sensitivity, even in a region other than a memory mat, without requiring a user to set or enter, in advance, arrangement information of patterns within a complex chip.
In order to accomplish the aforementioned object, according to the present invention, a defect inspection device that inspects a pattern formed on a sample includes: table means that holds the sample thereon and is capable of continuously moving in at least one direction; image acquiring means that images the sample held on the table means and acquires an image of the pattern formed on the sample; pattern arrangement information extracting means that extracts arrangement information of the pattern from the image of the pattern that has been acquired by the image acquiring means; reference image generating means that generates a reference image from the arrangement information of the pattern and the image of the pattern, the arrangement information being extracted by the pattern arrangement information extracting means, the image of the pattern being acquired by the image acquiring means; and defect candidate extracting means that compares the reference image generated by the reference image generating means with the image of the pattern that has been acquired by the image acquiring means thereby extracting a defect candidate of the pattern.
In order to accomplish the aforementioned object, according to the present invention, a defect inspection device that inspects patterns that have been repetitively formed on a sample and originally need to have the same shape includes: table means that holds the sample thereon and is capable of continuously moving in at least one direction; image acquiring means that images the sample held on the table means and sequentially acquires images of the patterns that have been repetitively formed on the sample and originally need to have the same shape; standard image generating means that generates a standard image from the images, sequentially acquired by the image acquiring means, of the patterns that have been repetitively formed and originally need to have the same shape; pattern arrangement information extracting means that extracts, from the standard image generated by the standard image generating means, arrangement information of the patterns that originally need to have the same shape; reference image generating means that generates a reference image using the arrangement information of the patterns extracted by the pattern arrangement information extracting means, and either an image of a pattern to be inspected among the images, sequentially acquired by the image acquiring means, of the patterns that originally need to have the same shape, or the standard image generated by the standard image generating means; and defect candidate extracting means that compares the reference image generated by the reference image generating means with the image of the pattern to be inspected among the images, sequentially acquired by the image acquiring means, of the patterns that originally need to have the same shape, thereby extracting a defect candidate of the pattern to be inspected.
In order to accomplish the aforementioned object, according to the present invention, a defect inspection method for inspecting a pattern formed on a sample includes the steps of: imaging the sample while continuously moving the sample in a direction, and acquiring images of the patterns formed on the sample; extracting arrangement information of the pattern from the acquired images of the patterns; generating a reference image from an image to be inspected among the acquired images of the patterns using the extracted arrangement information of the pattern; and comparing the generated reference image with the image to be inspected thereby extracting a defect candidate of the pattern.
In order to accomplish the aforementioned object, according to the present invention, a defect inspection method for inspecting patterns that have been repetitively formed on a sample and originally need to have the same shape includes the steps of: imaging the sample while continuously moving the sample in a direction, and sequentially acquiring images of the patterns that have been repetitively formed on the sample and originally need to have the same shape; generating a standard image from a plurality of the images of the patterns that have been sequentially acquired in the step of imaging, said patterns being repetitively formed on the sample and originally needing to have the same shape; extracting, from the generated standard image, arrangement information of the patterns that originally need to have the same shape; generating a reference image using the extracted arrangement information of the patterns, and either an image of a pattern to be inspected among the sequentially acquired images of the patterns that originally need to have the same shape, or the generated standard image; and comparing the generated reference image with the image of the pattern to be inspected, thereby extracting a defect candidate of the pattern to be inspected.
According to the present invention, the device includes the means for obtaining arrangement information of a pattern, and the means for generating a self-reference image from the arrangement information of the pattern, performing a comparison and detecting a defect. Thus, a comparison inspection within the same chip is achieved, and a defect is detected with a high sensitivity, without setting arrangement information of a pattern within the complex chip in advance. In addition, when a pattern that is similar to a certain pattern included in a certain chip is not found within that chip, the self-reference image is interpolated, only for the certain pattern, using the corresponding pattern of a chip located near the certain chip. Even for a non-memory mat region, it is therefore possible to minimize the region subjected to a defect determination through a chip comparison, suppress the difference in brightness between chips, and detect a defect over a wide range with a high sensitivity.
Embodiments of a defect inspection device and method according to the present invention are described with reference to the accompanying drawings. First, an embodiment of the defect inspection device, which performs dark-field illumination on a semiconductor wafer that is an object to be inspected, is described below.
The image processing unit 3 includes a preprocessing unit 8-1, a defect candidate detector 8-2 and a post-inspection processing unit 8-3. The preprocessing unit 8-1 performs a signal correction, an image division and the like (described later) on the scattered light intensity signals input to the image processing unit 3. The defect candidate detector 8-2 includes a learning unit 8-21, a self-reference image generator 8-22 and a defect determining unit 8-23. The defect candidate detector 8-2 performs a process (described later) on an image generated by the preprocessing unit 8-1 and detects a defect candidate. The post-inspection processing unit 8-3 excludes noise and a nuisance defect (defect of a type unnecessary for a user or non-fatal defect) from the defect candidate detected by the defect candidate detector 8-2, classifies a remaining defect on the basis of the type of the remaining defect, estimates a dimension of the remaining defect, and outputs information including the classification and the estimated dimension to a whole controller 9.
The scattered light 6a and 6b exhibit scattered light distributions corresponding to the illuminating units 4a and 4b, respectively. When optical conditions for the light emitted by the illuminating unit 4a are different from optical conditions for the light emitted by the illuminating unit 4b, the scattered light 6a is different from the scattered light 6b. In the present embodiment, optical characteristics and features of the light scattered due to the emitted light are called scattered light distributions of the scattered light. Specifically, the scattered light distributions indicate distributions of optical parameter values, such as an intensity, an amplitude, a phase, a polarization, a wavelength and a coherency of the scattered light, with respect to the location at which the light is scattered, the direction in which the light is scattered, and the angle at which the light is scattered.
The object to be inspected 5 (the semiconductor wafer 5) is placed on an X-Y-Z-θ stage 33 that is capable of moving and rotating in an XY plane and moving in a Z direction perpendicular to the XY plane. The X-Y-Z-θ stage 33 is driven by a mechanical controller 34. Light scattered from a foreign material existing on the object to be inspected 5 (the semiconductor wafer 5) is detected while the X-Y-Z-θ stage 33 is moving in a horizontal direction, and results of the detection are acquired as two-dimensional images.
Light sources of the illuminating units 4a and 4b may be lasers or lamps. Wavelengths of the light to be emitted by the light sources may be short wavelengths or wavelengths of broadband light (white light). When light with a short wavelength is used, ultraviolet light with a wavelength (160 to 400 nm) may be used in order to increase a resolution of an image to be detected (or in order to detect a minute defect). When short wavelength lasers are used as the light sources, means 4c and 4d for reducing coherency may be included in the illuminating units 4a and 4b, respectively. The means 4c and 4d for reducing the coherency may be made up of rotary diffusers. In addition, the means 4c and 4d for reducing the coherency may be configured by using a plurality of optical fibers (with optical paths whose lengths are different), quartz plates or glass plates, and generating and overlapping a plurality of light fluxes that propagate in the optical paths whose lengths are different. The illumination conditions (the irradiation angles, the illumination directions, the wavelengths of the light, and the polarization state and the like) are selected by the user or automatically selected. An illumination driver 15 performs setting and control on the basis of the selected conditions.
Of the light scattered from the semiconductor wafer 5, the light scattered in the direction perpendicular to the semiconductor wafer 5 is converted into an image signal by the sensor 31 through the detection optical system 7a, and the light scattered in a direction oblique to the semiconductor wafer 5 is converted into an image signal by the sensor 32 through the detection optical system 7b. The detection optical systems 7a and 7b include objective lenses 71a and 71b and imaging lenses 72a and 72b, respectively. The scattered light is focused on and imaged by the sensors 31 and 32, respectively. Each of the detection optical systems 7a and 7b forms a Fourier transform optical system and can perform an optical process (such as changing and adjusting optical characteristics by means of spatial filtering) on the light scattered from the semiconductor wafer 5. When the spatial filtering is performed as the optical process, using parallel light as the illumination light improves the performance of detecting a foreign material. Thus, split beams that are parallel in the longitudinal direction are used for the spatial filtering.
Time delay integration (TDI) image sensors, each formed by two-dimensionally arraying a plurality of one-dimensional image sensors, are used as the sensors 31 and 32. A signal detected by each of the one-dimensional image sensors is transmitted to the one-dimensional image sensor located at the next stage in synchronization with the movement of the X-Y-Z-θ stage 33, and the one-dimensional image sensor at the next stage adds the received signal to the signal that it detects. Thus, a two-dimensional image can be acquired at a relatively high speed and with a high sensitivity. When sensors of a parallel output type, which each include a plurality of output taps, are used as the TDI image sensors, the outputs 311 and 321 from the sensors 31 and 32 can be processed in parallel so that detection is performed at a higher speed. Spatial filters 73a and 73b block specific Fourier components and suppress light diffracted and scattered from a pattern. Reference numerals 74a and 74b indicate optical filter means. The optical filter means 74a and 74b are each made up of an optical element (such as an ND filter or an attenuator) capable of adjusting the intensity of light, a polarization optical element (such as a polarization plate, a polarization beam splitter or a wavelength plate), a wavelength filter (such as a band pass filter or a dichroic mirror), or a combination thereof. The optical filter means 74a and 74b each control the intensity of detected light, a polarization characteristic of the detected light, a wavelength characteristic of the detected light, or a combination thereof.
The image processing unit 3 extracts information of a defect existing on the semiconductor wafer 5 that is the object to be inspected. The image processing unit 3 includes the preprocessing unit 8-1, the defect candidate detector 8-2, the post-inspection processing unit 8-3, a parameter setting unit 8-4 and a storage unit 8-5. The preprocessing unit 8-1 performs a shading correction, a dark level correction and the like on image signals received from the sensors 31 and 32 and divides the image signals into images of a certain size. The defect candidate detector 8-2 detects a defect candidate from the corrected and divided images. The post-inspection processing unit 8-3 excludes a nuisance defect and noise from the detected defect candidate, classifies a remaining defect on the basis of the type of the remaining defect, and estimates a dimension of the remaining defect. The parameter setting unit 8-4 receives parameters and the like from an external device and sets the parameters and the like in the defect candidate detector 8-2 and the post-inspection processing unit 8-3. The storage unit 8-5 stores data that is being processed and has been processed by the preprocessing unit 8-1, the defect candidate detector 8-2 and the post-inspection processing unit 8-3. The parameter setting unit 8-4 of the image processing unit 3 is connected to a database 35, for example.
The defect candidate detector 8-2 includes the learning unit 8-21, the self-reference image generator 8-22 and the defect determining unit 8-23, as illustrated in
The whole controller 9 includes a CPU (included in the whole controller 9) that performs various types of control. The whole controller 9 is connected to a user interface unit (GUI unit) 36 and a storage device 37. The user interface unit (GUI unit) 36 receives parameters and the like entered by the user and includes input means and display means for displaying an image of the detected defect candidate, an image of a finally extracted defect and the like. The storage device 37 stores a characteristic amount or an image of the defect candidate detected by the image processing unit 3. The mechanical controller 34 drives the X-Y-Z-θ stage 33 on the basis of a control command issued from the whole controller 9. The image processing unit 3, the detection optical systems 7a and 7b and the like are driven in accordance with commands issued from the whole controller 9.
The semiconductor wafer 5 that is the object to be inspected has many chips regularly arranged thereon. Each of the chips has a memory mat part and a peripheral circuit part that are identical in shape among the chips. The whole controller 9 moves the X-Y-Z-θ stage 33 and thereby continuously moves the semiconductor wafer 5. The sensors 31 and 32 sequentially acquire images of the chips in synchronization with the movement of the X-Y-Z-θ stage 33. A standard image that does not include a defect is automatically generated for each of the acquired images of the two types of scattered light (6a and 6b). The generated standard image is compared with the sequentially acquired images of the chips, whereby a defect is extracted.
The flow of the data is illustrated in
In the present embodiment, the images that are acquired by the two different detection systems (7a and 7b illustrated in
Accordingly, when images of the same region that are acquired under different combinations of optical conditions and detection conditions are simultaneously input from the two sensors, a plurality of processors detect defect candidates in parallel (for example, processors A and C illustrated in
The defect candidates may instead be detected in chronological order from the images acquired under the different combinations of the optical conditions and the detection conditions. For example, after the processor A detects a defect candidate from the divided images 41a and 41a′, the processor A detects a defect candidate from the divided images 41b and 41b′. Alternatively, the processor A integrates the divided images 41a, 41a′, 41b and 41b′ acquired under the different combinations of the optical conditions and the detection conditions and detects a defect candidate. It is possible to freely assign any of the divided images to each of the processors, and to freely select which of the divided images is used to detect a defect.
The acquired images of the chips can be divided in a different direction, and a defect can be determined using the divided images. The flow of the data is illustrated in
Reference symbols 41c to 44c illustrated in
Next, the flow of a process to be performed by the defect candidate detector 8-2 of the image processing unit 3 is described. The process is performed by each of the processors.
As illustrated in
Details of step S503 of extracting the arrangement information of the patterns from the image 51 (of the first chip) input in step S501 are described with reference to
Small regions that each have N×N pixels and each include a pattern are extracted from the image 51 (of the first chip) input in step S501 (S601). Hereinafter, the small regions that each have N×N pixels are called patches. Next, one or more characteristic amounts of each of all the patches are calculated (S602). It is sufficient if the one or more characteristic amounts of each patch represent a characteristic of the patch. Examples of the characteristic amounts are (a) a distribution of luminance values (Formula 1); (b) a distribution of contrast (Formula 2); (c) a luminance dispersion value (Formula 3); and (d) a distribution that represents an increase and reduction in luminance, compared with a neighborhood pixel (Formula 4).
When the brightness of each pixel (x, y) located in a patch is represented by f(x, y), the aforementioned characteristic amounts are represented by the following formulas.
[Formula 1]
The distribution of the luminance values: f(x+i, y+j) (Formula 1)

[Formula 2]
The contrast: c(x+i, y+j)
= max{f(x+i, y+j), f(x+i+1, y+j), f(x+i, y+j+1), f(x+i+1, y+j+1)}
− min{f(x+i, y+j), f(x+i+1, y+j), f(x+i, y+j+1), f(x+i+1, y+j+1)} (Formula 2)

[Formula 3]
The luminance dispersion: g(x+i, y+j)
= [Σ{f(x+i, y+j)²} − {Σf(x+i, y+j)}²/(N×N)]/(N×N−1) (Formula 3)

[Formula 4]
The distribution representing the increase and reduction in the luminance (x direction): g(x+i, y+j)
if {f(x+i, y+j) − f(x+i+1, y+j) > 0} then g(x+i, y+j) = 1, else g(x+i, y+j) = 0 (Formula 4)

In Formulas 1 to 4, i, j = 0, 1, . . . , N−1.
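As a concrete illustration of Formulas 1 to 4, a minimal Python/NumPy sketch of the patch characteristic amounts is given below; the function name, the use of the patch's top-left corner as its coordinates, and the use of NumPy are assumptions made for illustration and are not part of the disclosure.

```python
import numpy as np

def patch_features(f, x, y, N):
    """Characteristic amounts of the N x N patch whose top-left corner is
    at (x, y) in the luminance image f (cf. Formulas 1 to 4)."""
    p = f[y:y + N, x:x + N].astype(np.float64)

    # (a) Distribution of luminance values (Formula 1): the raw patch values.
    luminance = p.ravel()

    # (b) Contrast (Formula 2): max - min over each 2x2 neighbourhood,
    #     computed on an (N+1) x (N+1) window so that i, j run over 0..N-1.
    q = f[y:y + N + 1, x:x + N + 1].astype(np.float64)
    stacked = np.stack([q[:-1, :-1], q[:-1, 1:], q[1:, :-1], q[1:, 1:]])
    contrast = stacked.max(axis=0) - stacked.min(axis=0)

    # (c) Luminance dispersion (Formula 3): sample variance over the patch.
    n = N * N
    dispersion = (np.sum(p ** 2) - np.sum(p) ** 2 / n) / (n - 1)

    # (d) Increase/reduction of luminance versus the right-hand neighbour
    #     (Formula 4): 1 where the luminance decreases to the right, else 0.
    updown = (q[:-1, :-1] - q[:-1, 1:] > 0).astype(np.uint8)

    return luminance, contrast.ravel(), dispersion, updown.ravel()
```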
Then, all or some of the characteristic amounts of each of the patches of the image 51 are selected, and similarities between the patches are calculated (S603). An example of the similarities is a distance between the patches in a characteristic space that has the characteristics of N×N dimensions (indicated by Formulas 1 to 4) as axes. For example, when the distribution (a) of the luminance values is used as a characteristic amount, a similarity between a patch P1 (central coordinates (x, y)) and a patch P2 (central coordinates (x′, y′)) is represented by the following.
A patch that has the highest similarity with each of the patches is searched for (S604), and the coordinates of the found patch are stored as similar pattern coordinate information in the storage unit 8-5 (S605).
For example, when the pattern that is most similar to the patch P1 is the patch P2, the similar pattern coordinate information of the patch P1 indicates the coordinates (x′, y′) of the patch P2. The similar pattern coordinate information is arrangement information of patterns: it indicates, for each pattern included in the image, the position of the similar pattern to be referenced, and the absence of similar pattern coordinate information for coordinates (x, y) indicates that no similar pattern exists for the pattern at those coordinates. For example, as illustrated in
In the example illustrated in
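A minimal sketch of steps S603 to S605 is shown below, assuming the distance between the luminance-value vectors of Formula 1 as the similarity measure (a smaller distance meaning a higher similarity). The dictionary standing in for the similar pattern coordinate information of the storage unit 8-5, the regular patch grid and the max_distance cutoff that decides when no similar pattern exists are assumptions made for illustration.

```python
import numpy as np

def build_similar_pattern_map(image, N, max_distance):
    """For every N x N patch, search for the patch with the highest
    similarity (smallest feature-space distance) and record its coordinates,
    mimicking steps S603 to S605.  Patches whose best match is farther than
    max_distance are treated as having no similar pattern."""
    h, w = image.shape
    coords = [(x, y) for y in range(0, h - N + 1, N)
                     for x in range(0, w - N + 1, N)]
    # Feature vector of each patch: its luminance values (Formula 1).
    feats = np.array([image[y:y + N, x:x + N].astype(np.float64).ravel()
                      for x, y in coords])

    similar_pattern = {}
    for i, (x, y) in enumerate(coords):
        d = np.linalg.norm(feats - feats[i], axis=1)
        d[i] = np.inf                       # exclude the patch itself
        j = int(np.argmin(d))
        if d[j] <= max_distance:
            similar_pattern[(x, y)] = coords[j]   # coordinates of the most similar patch
        # otherwise no entry: no similar pattern exists within the image
    return similar_pattern
```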
Details of step S504 of generating a self-reference image by means of the self-reference image generator 8-22 are described with reference to
The generated self-reference image 100 is transmitted to the defect determining unit 8-23, and step S505 of determining a defect is performed. The arrangement information 510 includes, for each of the patches, information that indicates whether or not a pattern similar to the pattern of the patch is included in the image of interest. The size N of each of the patches may be one or more pixels.
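One possible sketch of step S504 is given below, assuming that the arrangement information is a dictionary such as the one sketched above and that a patch without a similar pattern is interpolated from the image 52 of the adjacent chip; the function name, argument names and patch grid are assumptions made for illustration.

```python
import numpy as np

def generate_self_reference(image, adjacent_image, similar_pattern, N):
    """Build a self-reference image (cf. step S504).  The corresponding
    image of the adjacent chip serves as the default; every patch for which
    a similar pattern exists within the image itself is then overwritten
    with that similar pattern."""
    ref = adjacent_image.copy()
    h, w = image.shape
    for y in range(0, h - N + 1, N):
        for x in range(0, w - N + 1, N):
            if (x, y) in similar_pattern:
                sx, sy = similar_pattern[(x, y)]
                ref[y:y + N, x:x + N] = image[sy:sy + N, sx:sx + N]
    return ref
```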
Thus, the defect determining unit 8-23 first corrects the brightness and the positions. The defect determining unit 8-23 detects a difference between the brightness of the image 51 input in step S501 and the brightness of the self-reference image 100 generated in step S504 and corrects the brightness (S801). The brightness correction may be performed on an arbitrary unit, such as the whole images, each patch, or only the patches that have been extracted from the image 52 of the adjacent chip and arranged. An example of detecting a difference in brightness between the input image and the generated self-reference image and correcting the detected difference using a least squares approximation is described below.
It is assumed that there is a linear relationship (indicated by Formula 6) between pixels f(x, y) and g(x, y) that are included in the two images and correspond to each other. Coefficients a and b are calculated so that the value of Formula 7 is minimized and are treated as the correction coefficients (b as the gain and a as the offset). Then, all pixel values f(x, y) of the image 51 input in step S501, which are the targets of the brightness correction, are corrected according to Formula 8.
[Formula 6]
g(x, y) = a + b·f(x, y) (Formula 6)

[Formula 7]
Σ{g(x, y) − (a + b·f(x, y))}² (Formula 7)

[Formula 8]
L(f(x, y)) = gain·f(x, y) + offset (Formula 8)
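A minimal sketch of this least squares brightness correction (Formulas 6 to 8) follows, assuming f is the image to be corrected and g is the self-reference image; the use of NumPy's polyfit to solve the least squares fit is an implementation assumption.

```python
import numpy as np

def correct_brightness(f, g):
    """Estimate gain and offset of Formulas 6 and 7 by least squares and
    return the image f corrected according to Formula 8 so that its
    brightness matches the self-reference image g."""
    x = f.astype(np.float64).ravel()
    y = g.astype(np.float64).ravel()
    # Fit g = a + b * f in the least-squares sense (Formula 7).
    b, a = np.polyfit(x, y, 1)          # slope b = gain, intercept a = offset
    gain, offset = b, a
    return gain * f.astype(np.float64) + offset   # Formula 8
```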
Next, a shift amount between the positions of patches within the images is detected and corrected (S802). In this case, the detection and the correction may be performed on all the patches or only on the patches that have been extracted from the image 52 of the adjacent chip and arranged. The following methods are generally used to detect and correct the positional shift amount. In one method, one of the images is shifted, and the shift amount that minimizes the sum of squares of the differences between the luminance values of the images is calculated. In another method, the shift amount that maximizes a normalized correlation coefficient is calculated.
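An illustrative sketch of the first method (minimizing the sum of squared differences over integer shifts) is given below; the size of the search window and the wrap-around behaviour of np.roll at the image borders are simplifying assumptions, and sub-pixel refinement and the normalized-correlation variant are omitted.

```python
import numpy as np

def detect_shift(f, g, search=3):
    """Find the integer shift (dx, dy) of g that minimizes the sum of
    squared differences between f and the shifted g within a +/- search
    window (cf. step S802)."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
            ssd = np.sum((f - shifted) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dx, dy)
    return best_shift
```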
Next, characteristic amounts of target pixels of the image 51 subjected to the brightness correction and the position correction are calculated on the basis of pixels that are included in the self-reference image 100 and correspond to the target pixels (S803). All or some of the characteristic amounts of the target pixels are selected so that a characteristic space is formed (S804). It is sufficient if the characteristic amounts represent characteristics of the pixels. Examples of the characteristic amounts are (a) the contrast (Formula 9), (b) a difference between gray values (Formula 10), (c) a brightness dispersion value of a neighborhood pixel (Formula 11), (d) a correlation coefficient, (e) an increase or decrease in the brightness compared with a neighborhood pixel, and (f) a quadratic differential value.
When the brightness of each point of the detected image is represented by f(x, y) and the brightness of each point of the self-reference image corresponding to the detected image is represented by g(x, y), the examples of the characteristic amounts are calculated from the images (51 and 100) according to the following formulas.
[Formula 9]
The contrast: max{f(x, y), f(x+1, y), f(x, y+1), f(x+1, y+1)} − min{f(x, y), f(x+1, y), f(x, y+1), f(x+1, y+1)} (Formula 9)

[Formula 10]
The difference between gray values: f(x, y) − g(x, y) (Formula 10)

[Formula 11]
The dispersion: [Σ{f(x+i, y+j)²} − {Σf(x+i, y+j)}²/M]/(M−1) (Formula 11)
where i, j = −1, 0, 1 and M = 9
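A minimal sketch of the per-pixel characteristic amounts of Formulas 9 to 11 is shown below, assuming SciPy is available for the 3×3 neighbourhood sums; the function name and return layout are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_features(f, g):
    """Per-pixel characteristic amounts used in step S803, computed from
    the detected image f and the self-reference image g (Formulas 9 to 11)."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)

    # (a) Contrast (Formula 9): max - min over each 2x2 neighbourhood of f.
    stacked = np.stack([f[:-1, :-1], f[:-1, 1:], f[1:, :-1], f[1:, 1:]])
    contrast = stacked.max(axis=0) - stacked.min(axis=0)

    # (b) Difference between gray values (Formula 10).
    diff = f - g

    # (c) Brightness dispersion in a 3x3 neighbourhood (Formula 11), M = 9.
    m = 9
    s1 = uniform_filter(f, size=3) * m        # sum of f over the neighbourhood
    s2 = uniform_filter(f ** 2, size=3) * m   # sum of f^2 over the neighbourhood
    dispersion = (s2 - s1 ** 2 / m) / (m - 1)

    return contrast, diff, dispersion
```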
In addition, the brightness of each of the images may be included in the characteristic amounts. One or more of the characteristic amounts are selected, and each pixel of each of the images is plotted in a characteristic space whose axes correspond to the selected characteristic amounts. Then, a threshold plane that surrounds a distribution estimated as normal is set (S805). A pixel that is located outside the threshold plane, that is, a pixel whose characteristic value is out of the normal range, is detected (S806) and output as a defect candidate (S506). In order to estimate the normal range, a threshold may be set for each of the characteristic amounts selected by the user. Alternatively, assuming that the characteristic distribution of normal pixels follows a normal distribution, the probability that a target pixel is not a defect pixel may be calculated in order to identify the normal range.
In the latter method, when a number d of characteristic amounts of a number n of normal pixels are represented by x1, x2, . . . , xn, a discriminant function φ that is used to detect a pixel with a characteristic amount x as a defect candidate is given by Formulas 12 and 13, where μ is the average of all the pixels and Σ is the covariance.

[Formula 12]
Σ = Σi=1, . . . , n (xi − μ)(xi − μ)′ (Formula 12)

[Formula 13]
The discriminant function: φ(x) = 1 (if p(x) ≥ th, then the pixel is a non-defect), φ(x) = 0 (if p(x) < th, then the pixel is a defect), where p(x) is the probability calculated from the normal distribution estimated using μ and Σ (Formula 13)
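As a concrete illustration of this normal-distribution model, the following sketch estimates μ and Σ from pixels assumed to be normal and applies the threshold th of Formula 13; the normalized sample covariance and the explicit multivariate Gaussian density used for p(x) are assumptions made for illustration, not the verbatim formulas of the disclosure.

```python
import numpy as np

def discriminant(normal_features, x, th):
    """Estimate a normal distribution from the characteristic amounts of
    pixels assumed normal (one d-dimensional row per pixel), evaluate the
    probability density p(x) for a target pixel, and apply Formula 13:
    return 1 (non-defect) if p(x) >= th, otherwise 0 (defect candidate)."""
    mu = normal_features.mean(axis=0)
    centered = normal_features - mu
    sigma = centered.T @ centered / len(normal_features)   # covariance matrix
    d = normal_features.shape[1]
    diff = x - mu
    mahalanobis = diff @ np.linalg.inv(sigma) @ diff
    # Density of the d-dimensional normal distribution (assumed form of p(x)).
    p = np.exp(-0.5 * mahalanobis) / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
    return 1 if p >= th else 0
```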
In this case, the characteristic space may be formed using all the pixels of the image 51 and self-reference image 100. In addition, a characteristic space may be formed for each of the patches. Furthermore, a characteristic space may be formed for each of all patches arranged on the basis of similar patterns within the image 51 and for each of all the patches extracted from the image 52 of the adjacent chip and arranged. The example of the process of the defect candidate detector 8-2 has been described.
The post-inspection processing unit 8-3 excludes noise and a nuisance defect from the defect candidate detected by the defect candidate detector 8-2, classifies a remaining defect on the basis of the type of the defect, and estimates the dimensions of the defect.
Next, the partial image 52 that is acquired by imaging the adjacent chip 2 is input (S502). A self-reference image is generated from the partial image 52 using the pattern arrangement information acquired from the image 51 of the first chip (S504). The generated self-reference image and the partial image 52 are compared with each other to perform a defect determination (S505), and a defect candidate is extracted (S506). After that, the processes of steps S504 to S506 are sequentially and repetitively performed, using the pattern arrangement information acquired from the image 51 of the first chip, on the partial images acquired by the optical system 1, whereby a defect inspection can be performed on each of the chips formed on the semiconductor wafer 5.
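A condensed sketch of this flow (steps S501 to S506) for one sequence of chip images is given below; the three callables are stand-ins for the learning unit 8-21, the self-reference image generator 8-22 and the defect determining unit 8-23 and are assumptions made for illustration.

```python
def inspect_chips(chip_images, learn, generate_self_reference, determine_defect):
    """Inspect a sequence of chip images: arrangement information is learned
    once from the first chip image (S503) and reused to generate a
    self-reference image (S504) and perform the defect determination
    (S505, S506) for every chip image, including the first."""
    arrangement = learn(chip_images[0])                      # S503
    defect_candidates = []
    for image in chip_images:                                # S501/S502
        reference = generate_self_reference(image, arrangement)   # S504
        defect_candidates.append(determine_defect(image, reference))  # S505, S506
    return defect_candidates
```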
As described above, in the present embodiment, the pattern arrangement information is obtained from the image to be inspected, the self-reference image is generated from the image to be inspected and compared with the image to be inspected, and a defect is detected.
In a general method, the image 140 to be inspected is compared with the standard image 142, and a part that is included in the image 140 and is largely different from the corresponding part of the image 142 is detected as a defect. Reference numeral 143 indicates a self-reference image that is generated from the image 140, in the present embodiment, using the pattern arrangement information extracted from the standard image 142. The images 140, 142 and 143 are displayed side by side.
Patches 143a to 143f that are included in the self-reference image 143 are located at corners of pattern regions, and there are no similar patches within the image 140. The patches 143a to 143f are therefore extracted from the standard image 142, from positions corresponding to the positions of the patches 143a to 143f. Reference numeral 144 indicates the result of the general comparison of the image to be inspected 140 with the standard image 142. In the image 144, the larger the difference between corresponding parts of the images 140 and 142, the higher the brightness of the corresponding part. Reference numeral 145 indicates the result of the comparison of the image to be inspected 140 with the self-reference image 143.
Compared with the standard image 142, irregular brightness occurs in the background pattern region around a defect 141 in the image to be inspected 140 due to a difference between the thicknesses of layers included in the semiconductor wafer. The irregular brightness appears noticeably in the image 144, and the defect does not become obvious in the image 144. On the other hand, the irregular brightness of the background pattern region is suppressed by the comparison with the self-reference image, and the defect becomes obvious in the image 145. In a similar manner to the image 144, differences remain in the image 145 at the positions that correspond to the patches 143a to 143f extracted from the standard image and arranged in the self-reference image 143.
An image 146 represents patches that are extracted from the standard image 142 and arranged for the generation of the self-reference image 143. An image 147 represents a threshold that is calculated for each of the patches of the self-reference image 143 on the basis of whether the patch is extracted from the image 140 (to be inspected) or the standard image 142. In the image 147, the larger the threshold, the brighter a part that corresponds to the threshold.
In the present embodiment, all or some of the images are displayed side by side. The user can confirm whether a defect has been detected by a comparison of similar patterns within the image to be inspected or has been detected by a comparison of a pattern within the image to be inspected with a pattern that is included in a neighborhood chip and whose position corresponds to the pattern within the image to be inspected. In addition, the user can confirm a threshold value used for the detection.
Reference numeral 1500 illustrated in
Reference numeral 1503 indicates a condition setting button. When the user wants to change conditions (optical conditions, image processing conditions and the like) and inspect the wafer, the condition setting button is used to change the conditions. When the condition setting button 1503 is pressed, an input button for inputting image processing parameters is displayed so that the user can change the parameters and the conditions. In addition, when the user wants to analyze the type of each of the defects, the images, and details such as information indicating how the defect has been detected, a black point of the defect on the defect map 1501 is selected or the defect is selected from the defect list 1502 (or a defect indicated by No. 2 of the defect list is specified using a pointer (1504) through an operation using a mouse in the case illustrated in
Reference numeral 1510 illustrated in
Traditionally, in order to achieve this inspection, it has been necessary for a user to enter definitions of the regions of the memory mats (such as start coordinates and end coordinates of each of the memory mats included in the chips, the sizes of the memory mats, the intervals between the memory mats, and the intervals between minute patterns included in the memory mats) or information that indicates the configurations of the chips.
Reference numeral 174 illustrated in
Even when there is a difference in brightness between chips to be compared, due to a slight difference in the thickness of a thin film formed on the patterns after a planarization process such as chemical mechanical polishing (CMP) or due to a short wavelength of the illumination light, it is not necessary to enter layout information on the complex chips. Thus, the comparison inspection of the chips is simplified, and a minute defect (for example, a defect of 100 nm or less) located in a region in which there is a large difference between the thicknesses of patterns can be detected with a high sensitivity.
In an inspection of a low-k film such as an inorganic insulating film (SiO2, SiOF, BSG, SiOB, a porous silica film or the like) or an organic insulating film (an SiO2 film containing a methyl group, MSQ, a polyimide-based film, a parylene film, a Teflon (registered trademark) film, an amorphous carbon film or the like), even when a local difference in brightness exists due to a variation in the refractive index distribution in the film, a minute defect can be detected according to the present invention.
A second embodiment of the present invention is described with reference to
First, images that are acquired from the optical system 1 by imaging the semiconductor wafer 5 are preprocessed by the preprocessing unit 8-1. After that, the images are input to the same processor included in the defect candidate detector 8-2′ (S901), and a standard image is generated from a plurality of images among the divided images 51, 52, . . . , 5z of parts that are included in the plurality of chips and whose positions correspond to each other (S902).
As an example of a method for generating the standard image, as illustrated in
[Formula 14]
S(x, y) = Median{f1(x, y), f2(x, y), f3(x, y), f4(x, y), f5(x, y), . . . } (Formula 14)
Median: a function that outputs the median of the collected luminance values
S(x, y): a luminance value of the standard image
fn(x, y): a luminance value of a divided image 5n after the correction of the positions of the aligned images
As the statistical process, the average value of the collected pixel values may instead be used as the luminance value of the standard image (Formula 15).

[Formula 15]
S(x, y) = Σ{fn(x, y)}/N, N: the number of the divided images used for the statistical process (Formula 15)

The images that are used to generate the standard image may include a divided image (up to all the chips formed on the semiconductor wafer 5) that represents a part that is included in a chip arranged in another row and located at a corresponding position.
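A minimal sketch of the statistical processes of Formulas 14 and 15 follows, assuming the divided images have already been aligned and are supplied as NumPy arrays; the function name and the use_median switch are assumptions for illustration.

```python
import numpy as np

def generate_standard_image(divided_images, use_median=True):
    """Generate the standard image of Formulas 14 and 15: for each pixel,
    the luminance values collected from the aligned divided images of
    corresponding parts of a plurality of chips are reduced to their median
    (Formula 14) or to their average (Formula 15)."""
    stack = np.stack([img.astype(np.float64) for img in divided_images])
    if use_median:
        return np.median(stack, axis=0)   # Formula 14
    return stack.mean(axis=0)             # Formula 15
```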
Then, arrangement information 910 of patterns is extracted from the standard image (from which the influence of the defect has been excluded) by the learning unit 8-21′ in the same manner as step S503 described with reference to
As described above, in the present embodiment, the arrangement information 910 of the patterns is extracted from the standard image generated using the images that have been acquired under one optical condition and represent the plurality of regions (S903), the self-reference image is generated (S904), the comparison is performed, the defect is determined in step S905, and the defect candidate is detected in step S906. The arrangement information of the patterns may be extracted from images acquired under different combinations of optical conditions and detection conditions.
As an example of a process of determining the similar patch after the integration, a similarity between patches, which is calculated from the image 101A, is plotted along an abscissa, and a similarity between patches, which is calculated from the image 101B, is plotted along an ordinate, as illustrated in
The process of comparing the image 51 to be inspected with the generated self-reference image and extracting a defect candidate is the same as the process explained with reference to
A third embodiment of the present invention is described with reference to
A method for generating the standard image 1110 is the same as the method described in the first and second embodiments. Arrangement information of patterns is extracted from the standard image 1110 by the learning unit 8-21′ (S1103). In this case, not only the one patch that has the highest similarity is extracted; information of the patch with the highest similarity, the patch with the second highest similarity, the patch with the third highest similarity, and so on is extracted, and the coordinates of the patches are held as arrangement information (1102a, 1102b, 1102c, . . . ). Then, a self-reference image is generated for each of the images 51, 52, . . . , 5z (to be inspected) from the image on the basis of each set of the arrangement information (1102a, 1102b, 1102c, . . . ) (S1104). Then, the process (illustrated in
As an example of the integration, an evaluation value (for example, a distance from a normal distribution estimated in a characteristic space) that is used to evaluate whether or not a pixel is an out-of-range pixel is calculated for each pixel from each of the self-reference images. Then the integration is performed by calculating a pixel-wise logical product of the evaluation values (the minimum evaluation value among the comparison results) or a pixel-wise logical sum of the evaluation values (the maximum evaluation value among the comparison results). Examples of a specific effect of the integration are illustrated in
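A minimal sketch of this integration is shown below, assuming each comparison result is supplied as a per-pixel array of evaluation values; the function name and the mode argument are assumptions made for illustration.

```python
import numpy as np

def integrate_evaluation_values(eval_images, mode="and"):
    """Integrate per-pixel evaluation values obtained from comparisons with
    a plurality of self-reference images: the pixel-wise logical product
    (minimum evaluation value) suppresses erroneous detections, while the
    pixel-wise logical sum (maximum evaluation value) prevents large
    defects from being overlooked."""
    stack = np.stack(eval_images)
    return stack.min(axis=0) if mode == "and" else stack.max(axis=0)
```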
Reference numeral 1200 illustrated in
On the other hand, when defects occur in patches 1204 and 1205 among patches 1204 to 1206 included in an image 1220 (illustrated in
In this case, the self-reference image is generated from the second pattern arrangement information obtained in the case in which the patch that is the second most similar to the patch 1301a is the patch 1302a and the patch that is the second most similar to the patch 1302a is the patch 1303a. Then, in step S1105 of performing the defect determination, the defect determining unit 8-23′ compares the image 1300 with the two self-reference images 1310 and 1320. A difference image 1331a and a difference image 1331b that are the results of the comparisons are extracted as defect candidates (S1106).
Then, the defect determining unit 8-23′ integrates the two comparison results (in this example, calculates a logical sum of the two comparison results), whereby an image 1332 that represents the defects occurring in the patches 1301 and 1302 of the image 1300 to be inspected is extracted. In this example, the logical sum of the results of the comparisons with the two self-reference images is calculated in order to prevent the large defects from being overlooked. Defects can be detected with higher reliability, while preventing an erroneous detection, by calculating a logical product of the results of comparisons with two or more self-reference images, although such a process is a little more complex.
A process of extracting defect candidates through the comparisons of the image 51 to be inspected with the generated self-reference images is the same as the process explained with reference to
The embodiments of the present invention describe the case in which the images to be compared and inspected represent the semiconductor wafer and are acquired by the dark-field inspection device. The present invention may also be applied to images to be compared in a pattern inspection using an electron beam, or to a pattern inspection device that performs bright-field illumination.
The object to be inspected is not limited to the semiconductor wafer. For example, a TFT substrate, a photomask, a printed board and the like may be inspected as long as a defect is detected through a comparison of images.
The present invention can be applied to a defect inspection device and method, which enable a minute pattern defect, a foreign material and the like to be detected from an image (detected image) of an object (to be inspected) such as a semiconductor wafer, a TFT or a photomask.
1 . . . Optical system 2 . . . Memory 3 . . . Image processing unit 4a, 4b . . . Illuminating unit 5 . . . Semiconductor wafer 7a, 7b . . . Detector 8-2 . . . Defect candidate detector 8-3 . . . Post-inspection processing unit 31, 32 . . . Sensor 9 . . . Whole controller 36 . . . User interface unit
Number | Date | Country | Kind
2010-025368 | Feb 2010 | JP | national

Filing Document | Filing Date | Country | Kind | 371(c) Date
PCT/JP2011/052430 | 2/4/2011 | WO | 00 | 7/18/2012