This document claims priority to Japanese Patent Application No. 2017-127407 filed Jun. 29, 2017, the entire contents of which are hereby incorporated by reference.
An optical pattern inspection apparatus using a die-to-die comparison method is used for wafer pattern inspection in a semiconductor integrated circuit manufacturing process, or for pattern inspection of a photomask that forms the wafer patterns. The die-to-die comparison method is a technique of detecting a defect by comparing an image of a die to be inspected (a die is an individual semiconductor device) with an image obtained at the same position in an adjacent die.
On the other hand, a die-to-database comparison method has been used for the inspection of a photomask (reticle), which has no adjacent die. In this method, mask data are converted into an image, the converted image is substituted for the image of the adjacent die used in the die-to-die comparison method, and the inspection is performed in the same manner as described above. The mask data are data obtained by applying photomask correction to design data (for example, see U.S. Pat. No. 5,563,702).
However, when the die-to-database comparison method is used for wafer inspection, the corner roundness of a pattern formed on the wafer is likely to be detected as a defect. In the inspection of a photomask, a smoothing filter is applied to the image converted from the mask data so as to form corner roundness, thereby preventing the corner roundness of the pattern from being detected as a defect. However, the corner roundness formed by the smoothing filter differs from the corner roundness of each pattern actually formed on the wafer, so the actual corner roundness can still be detected as a defect. An allowable pattern deformation quantity must therefore be set in order to ignore such differences in corner roundness, but this in turn causes a problem that a fine defect existing in a place other than a corner cannot be detected.
From the viewpoint of problems in semiconductor integrated circuit fabrication, repeated defects (systematic defects) are a more important issue than random defects caused by particles or the like. Repeated defects are defects that occur repeatedly in every die on a wafer, caused by a photomask failure or the like. Because repeated defects occur both in the die to be inspected and in the adjacent dies compared with it, die-to-die comparison wafer inspection cannot detect them. Accordingly, die-to-database comparison wafer inspection has been demanded.
The die-to-database comparison method is also effective in the inspection of a multilayer structure of patterns. In the processing of fine structures, it is essential to improve the positional accuracy with which the fine and complicated patterns formed on one layer are superimposed on the patterns formed on the underlying layer. If the positional accuracy is low relative to the pattern size, the performance of the device is impaired. For this reason, in the manufacturing of semiconductor devices, management of misalignment between layers, condition monitoring of manufacturing equipment, and feedback are carried out.
In many cases, a semiconductor inspection apparatus performs a misalignment inspection using a specific alignment pattern. However, the amount of misalignment may differ between the alignment pattern and a pattern that actually functions as a device. The die-to-database comparison method, on the other hand, can inspect the misalignment using a pattern that actually functions as a device (for example, see Gyoyeon Jo, et al., "Enhancement of Intrafield Overlay Using a Design based Metrology system", Proc. SPIE 9778, Metrology, Inspection, and Process Control for Microlithography XXX, 97781J (Mar. 24, 2016); doi:10.1117/12.2218937).
In an overlay inspection according to the die-to-database comparison method, edge detection of the patterns on the upper layer and the lower layer may cause a problem. For example, when an upper pattern and a lower pattern overlap or come close to each other in a complicated manner, the design data must be processed properly so that an edge of the lower pattern covered by the upper pattern is not detected. U.S. Pat. No. 8,577,124 provides a method of detecting edges while excluding a region where the patterns of the upper layer and the lower layer overlap.
However, such processing of the design data does not always allow the edges of both layers to be detected accurately. Therefore, according to an embodiment, there is provided a method capable of accurately detecting an edge of a pattern on an upper layer and an edge of a pattern on a lower layer.
Embodiments, which will be described below, relate to a pattern edge detection method applicable to a semiconductor inspection apparatus that conducts a pattern inspection based on a comparison between pattern design data and a pattern image.
In an embodiment, there is provided a pattern edge detection method comprising: generating a sample image of an upper-layer pattern and a lower-layer pattern; applying a first image processing, which is for emphasizing an edge of the upper-layer pattern, to the sample image, thereby generating a first processed image; detecting the edge of the upper-layer pattern based on a brightness profile of the first processed image; applying a second image processing, which is for emphasizing an edge of the lower-layer pattern, to the sample image, thereby generating a second processed image; and detecting the edge of the lower-layer pattern based on a brightness profile of the second processed image.
In an embodiment, the first image processing is a tone-curve processing that emphasizes the edge of the upper-layer pattern, and the second image processing is a tone-curve processing that emphasizes the edge of the lower-layer pattern.
In an embodiment, the tone-curve processing applied to the first image processing is a process of lowering a brightness value at an intermediate level between a brightness value of the upper-layer pattern and a brightness value of the lower-layer pattern, and the tone-curve processing applied to the second image processing is a process of increasing the brightness value at the intermediate level between the brightness value of the upper-layer pattern and the brightness value of the lower-layer pattern.
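Merely as an illustrative sketch (the embodiments do not prescribe a particular implementation), such tone-curve processing can be expressed in Python as a piecewise-linear brightness mapping. The control-point values below, and the assumption that the lower-layer pattern is dark, the upper-layer pattern is bright, and the intermediate level is near 128 in an 8-bit image, are hypothetical:

    import numpy as np

    def apply_tone_curve(image, in_points, out_points):
        # Map every pixel's brightness through a piecewise-linear tone
        # curve defined by control points (in_points -> out_points).
        return np.interp(image.astype(np.float32),
                         in_points, out_points).astype(np.uint8)

    sample = np.full((64, 64), 128, dtype=np.uint8)  # placeholder image

    # First image processing: lower the intermediate brightness level,
    # concentrating contrast at the edge of the bright upper-layer pattern.
    first_processed = apply_tone_curve(sample, [0, 128, 255], [0, 32, 255])

    # Second image processing: raise the intermediate brightness level,
    # concentrating contrast at the edge of the dark lower-layer pattern.
    second_processed = apply_tone_curve(sample, [0, 128, 255], [0, 224, 255])

In this sketch, the same input image yields two differently emphasized images, corresponding to the first processed image and the second processed image described above.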
In an embodiment, the pattern edge detection method further comprises: generating a template image from design data of the upper-layer pattern and the lower-layer pattern, the template image containing a first reference pattern corresponding to the upper-layer pattern and a second reference pattern corresponding to the lower-layer pattern; aligning the template image and the sample image with each other; drawing a first perpendicular line on an edge of the first reference pattern; and drawing a second perpendicular line on an edge of the second reference pattern, wherein the brightness profile of the first processed image is a distribution of brightness values of the first processed image on the first perpendicular line, and the brightness profile of the second processed image is a distribution of brightness values of the second processed image on the second perpendicular line.
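As a sketch of how a brightness profile on a perpendicular line could be obtained, assuming pixel coordinates, bilinear interpolation, and hypothetical values for the line length and the number of sampling points:

    import numpy as np

    def brightness_profile(image, x0, y0, nx, ny, half_len=8.0, num=33):
        # Sample brightness along the perpendicular line through (x0, y0)
        # with unit direction (nx, ny), using bilinear interpolation.
        # All sampling points are assumed to lie inside the image.
        t = np.linspace(-half_len, half_len, num)
        xs, ys = x0 + t * nx, y0 + t * ny
        ix, iy = np.floor(xs).astype(int), np.floor(ys).astype(int)
        fx, fy = xs - ix, ys - iy
        img = image.astype(np.float32)
        vals = (img[iy, ix] * (1 - fx) * (1 - fy)
                + img[iy, ix + 1] * fx * (1 - fy)
                + img[iy + 1, ix] * (1 - fx) * fy
                + img[iy + 1, ix + 1] * fx * fy)
        return t, vals  # positions along the line and brightness values

The profile of the first processed image would be sampled on the first perpendicular line, and the profile of the second processed image on the second perpendicular line.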
In an embodiment, the pattern edge detection method further comprises applying a corner-rounding process to the first reference pattern and the second reference pattern.
In an embodiment, the pattern edge detection method further comprises: calculating a pattern shift representing a difference between a center of gravity of the upper-layer pattern on the sample image and a center of gravity of the first reference pattern; and calculating a pattern shift representing a difference between a center of gravity of the lower-layer pattern on the sample image and a center of gravity of the second reference pattern.
According to the above-described embodiments, two different image processes are applied to the image, sharpening the edge of the upper-layer pattern in one processed image and the edge of the lower-layer pattern in the other. Therefore, the respective edges of the upper-layer pattern and the lower-layer pattern can be accurately detected.
Hereinafter, embodiments will be described in detail with reference to the drawings.
The main control unit 1 comprises a CPU (Central Processing Unit), and manages and controls the whole apparatus. The main control unit 1 is coupled to the storage device 2. The storage device 2 may be in the form of a hard disk, a flexible disk, an optical disk, or the like. The input device 4 such as a keyboard and a mouse, the display device 5 such as a display for displaying input data, calculation results, and the like, and the printer 6 for printing the calculation results and the like are coupled to the main control unit 1 through the input/output control unit 3.
The main control unit 1 has an internal memory (internal storage device) for storing a control program such as an OS (Operating System), a program for the contact-hole inspection, necessary data, and the like. The main control unit 1 is configured to realize the contact-hole inspection and sampling point extraction with these programs. These programs can be initially stored in a flexible disk, an optical disk, or the like, read and stored in a memory, a hard disk, and the like before execution, and then executed.
The irradiation system 10 includes an electron gun 11, a focusing lens 12 for focusing primary electrons emitted from the electron gun 11, an X deflector 13 and a Y deflector 14 for deflecting an electron beam (charged-particle beam) in the X direction and the Y direction, respectively, and an objective lens 15. The specimen chamber 20 has an XY stage 21 which is movable in the X direction and the Y direction. A wafer W, which is a specimen, can be loaded into and unloaded from the specimen chamber 20 by a wafer-loading device 40.
In the irradiation system 10, primary electrons emitted from the electron gun 11 are focused by the focusing lens 12, deflected by the X deflector 13 and the Y deflector 14, and focused and applied by the objective lens 15 onto the surface of the wafer W which is a specimen.
When the primary electrons strike the wafer W, the wafer W emits secondary electrons. These secondary electrons are detected by the secondary electron detector 30. The focusing lens 12 and the objective lens 15 are coupled to a lens controller 16, which is coupled to a control computer 50. The secondary electron detector 30 is coupled to an image acquisition device 17, which is also coupled to the control computer 50. Intensities of the secondary electrons detected by the secondary electron detector 30 are converted into a voltage contrast image by the image acquisition device 17. A field of view is defined as the largest region where the primary electrons are applied and a voltage contrast image without distortion can be acquired.
The X deflector 13 and the Y deflector 14 are coupled to a deflection controller 18, which is also coupled to the control computer 50. The XY stage 21 is coupled to an XY stage controller 22. This XY stage controller 22 is also coupled to the control computer 50. The wafer-loading device 40 is also coupled to the control computer 50. The control computer 50 is coupled to a console computer 60.
The main control unit 1 produces a template image containing a first reference pattern corresponding to the upper-layer pattern and a second reference pattern corresponding to the lower-layer pattern from the design data of the upper-layer pattern and the lower-layer pattern described above (step 2). The design data are CAD data including the information necessary for specifying the shape of each pattern, such as its size and vertices, the layer to which it belongs, and the like. The design data are stored in advance in the storage device 2.
The main control unit 1 produces the template image by coloring the background 103 of the design data gray, the upper-layer pattern 101 white, and the lower-layer pattern 102 black.
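A minimal sketch of this coloring step, assuming the design data have already been rasterized into boolean masks for the two layers (the gray-level values 0, 128, and 255 are hypothetical):

    import numpy as np

    def make_template(upper_mask, lower_mask):
        # upper_mask / lower_mask: boolean arrays of identical shape,
        # True where the corresponding reference pattern lies.
        template = np.full(upper_mask.shape, 128, dtype=np.uint8)  # gray background
        template[lower_mask] = 0    # black lower-layer reference pattern
        template[upper_mask] = 255  # white upper-layer reference pattern
        return template

Drawing the upper-layer pattern last ensures that it covers the lower-layer pattern wherever the two overlap, consistent with the upper layer hiding the lower layer in the sample image.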
The main control unit 1 performs alignment between the template image and the entirety of the sample image generated in step 1 (step 3).
When accessing the image based on the design data information, the main control unit 1 uses the offset obtained as a result of the alignment, i.e., the amount of misalignment between the template image and the sample image, in order to access the information at the corresponding position.
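The embodiments do not fix a particular alignment algorithm; one common choice is FFT-based cross-correlation, sketched below under the assumption that the two images have the same shape:

    import numpy as np

    def alignment_offset(template, sample):
        # Estimate the (dy, dx) misalignment between two same-sized images
        # from the peak of their circular cross-correlation.
        t = template.astype(np.float32) - template.mean()
        s = sample.astype(np.float32) - sample.mean()
        corr = np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(t)))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        h, w = corr.shape
        if dy > h // 2:   # convert wrapped indices to signed offsets
            dy -= h
        if dx > w // 2:
            dx -= w
        return dy, dx

The returned offset is then added to a position taken from the design data to find the corresponding position in the sample image.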
Next, the main control unit 1 performs a corner-rounding process on the reference patterns 111, 112 of the template image generated from the design data (step 4).
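As one possible sketch (not prescribed by the embodiments), corner rounding of a rasterized reference pattern can be approximated by Gaussian smoothing followed by re-thresholding; the sigma value, which controls the corner radius, is a hypothetical tuning parameter:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def round_corners(mask, sigma=2.0):
        # mask: boolean array of a rasterized reference pattern.
        # Smoothing blunts the sharp design-data corners so that they
        # resemble the rounded corners of the actual wafer pattern.
        blurred = gaussian_filter(mask.astype(np.float32), sigma)
        return blurred > 0.5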
The main control unit 1 applies two different image processes, i.e., the first image processing and the second image processing, to the sample image to generate a first processed image and a second processed image (step 5).
The steps described below are processes for detecting the edge of the upper-layer pattern 121; the edge of the lower-layer pattern 122 is detected in the same manner, and duplicate explanations are therefore omitted.
The threshold method, which is one method of detecting an edge from the brightness profile, will now be described. In the threshold method, an edge brightness value is determined from the brightness profile, for example at a predetermined ratio between the minimum and maximum brightness values of the profile, and a position on the profile having this edge brightness value is detected as an edge position. If no sampling point on the brightness profile has exactly the determined edge brightness value, the edge position is determined by interpolating between the adjacent sampling points.
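A minimal sketch of the threshold method on a sampled brightness profile, assuming a hypothetical threshold ratio of 0.5 and linear interpolation between sampling points:

    import numpy as np

    def edge_position(t, profile, ratio=0.5):
        # t: positions of the sampling points along the perpendicular line;
        # profile: brightness values at those points.
        lo, hi = profile.min(), profile.max()
        edge_value = lo + ratio * (hi - lo)
        for i in range(len(profile) - 1):
            a, b = profile[i], profile[i + 1]
            if (a - edge_value) * (b - edge_value) <= 0 and a != b:
                # the profile crosses the edge brightness value between
                # sampling points i and i + 1: interpolate linearly
                frac = (edge_value - a) / (b - a)
                return t[i] + frac * (t[i + 1] - t[i])
        return None  # no crossing found on this profile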
Bias lines 160 are then drawn between points on the edge of the reference pattern 111 and the corresponding edge-detected positions 150 of the upper-layer pattern 121, and the lengths of the bias lines 160 are converted into bias inspection values.
In this way, the main control unit 1 can distinguish between "thick deformation" and "thin deformation" of the upper-layer pattern 121 based on the bias inspection value. For example, a positive bias inspection value means that the pattern 121 is thickly deformed, and a negative bias inspection value means that the pattern 121 is thinly deformed. An upper limit and a lower limit may be predetermined for the bias inspection value. In this case, the main control unit 1 can detect a fat defect where the bias inspection value exceeds the upper limit, and a thin defect where the bias inspection value falls below the lower limit.
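A sketch of this classification; the limit values are hypothetical and would in practice be chosen per process and pattern:

    def classify_bias(bias, lower_limit=-3.0, upper_limit=3.0):
        # bias: a bias inspection value (e.g. in pixels or nanometers).
        if bias > upper_limit:
            return "fat defect"         # thick deformation beyond tolerance
        if bias < lower_limit:
            return "thin defect"        # thin deformation beyond tolerance
        if bias > 0:
            return "thick deformation"  # within tolerance
        if bias < 0:
            return "thin deformation"   # within tolerance
        return "no deformation"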
In a case where the reference pattern 111 generated from the design data is an isolated pattern, such as a hole or an island pattern, the edge 200 of the upper-layer pattern 121 formed from the plurality of edge-detected positions 150 constitutes a closed polygon. The main control unit 1 can therefore calculate the center of gravity C2 of the upper-layer pattern 121. Further, the main control unit 1 calculates a pattern shift, which is the difference between the center of gravity C1 of the reference pattern 111 and the center of gravity C2 of the upper-layer pattern 121 (step 11). The pattern shift is represented by a vector specifying the distance and direction from the center of gravity C1 of the reference pattern 111 to the center of gravity C2 of the upper-layer pattern 121.
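Since the detected edge forms a closed polygon, its center of gravity follows from the standard polygon-centroid (shoelace) formula; a sketch, assuming a non-degenerate polygon given as (x, y) vertices:

    def polygon_centroid(pts):
        # Center of gravity of a closed polygon via the shoelace formula.
        a = cx = cy = 0.0
        n = len(pts)
        for i in range(n):
            x0, y0 = pts[i]
            x1, y1 = pts[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            a += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        a *= 0.5
        return cx / (6.0 * a), cy / (6.0 * a)

    def pattern_shift(ref_pts, detected_pts):
        # Vector from the reference-pattern centroid C1 to the
        # detected-pattern centroid C2.
        x1, y1 = polygon_centroid(ref_pts)
        x2, y2 = polygon_centroid(detected_pts)
        return x2 - x1, y2 - y1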
Similarly, the main control unit 1 detects the edge of the lower-layer pattern 122.
The main control unit 1 aggregates the pattern shifts of the individual patterns and evaluates the superposition of the upper layer and the lower layer (step 12). Specifically, the main control unit 1 calculates the average of the pattern shifts of the upper-layer patterns in an appropriate aggregation unit and the average of the pattern shifts of the lower-layer patterns in the same aggregation unit, and then calculates the difference between these two averages. The appropriate aggregation unit may be all continuous patterns in one image, or may be adjacent patterns.
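A sketch of this aggregation step, assuming lists of per-pattern shift vectors (dx, dy) for each layer within one aggregation unit:

    import numpy as np

    def overlay_misalignment(upper_shifts, lower_shifts):
        # Average the pattern shifts of each layer over the aggregation
        # unit, then take the difference of the two averages as the
        # overlay misalignment between the layers.
        upper_avg = np.mean(np.asarray(upper_shifts, dtype=float), axis=0)
        lower_avg = np.mean(np.asarray(lower_shifts, dtype=float), axis=0)
        return upper_avg - lower_avg  # overlay misalignment vector (dx, dy)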
The bias inspection values described above can represent the deformation amounts of the upper-layer pattern and the lower-layer pattern on the sample image with respect to the reference patterns 111, 112. For example, if the calculated bias inspection value exceeds a predetermined range at a certain portion, the main control unit 1 can detect that portion as a defect.
The previous description of embodiments is provided to enable a person skilled in the art to make and use the present invention. Moreover, various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles and specific examples defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the embodiments described herein but is to be accorded the widest scope as defined by the limitations of the claims.