The present disclosure relates to a correction method and a correction device for correcting a target image obtained by imaging a semiconductor pattern.
Charged particle beam devices such as scanning electron microscopes (SEMs) are suitable for measuring and observing the patterns formed on increasingly miniaturized semiconductor wafers. Electron beam observation devices such as scanning electron microscopes accelerate electrons emitted from an electron source and converge them onto a sample surface with an electrostatic lens or an electronic lens. The irradiated electrons are called primary electrons. When the primary electrons are incident, electrons are emitted from the sample (in some cases, electrons with low energy are referred to as secondary electrons and electrons with high energy are referred to as backscattered electrons). By detecting these secondary electrons while deflecting and scanning the electron beam, it is possible to obtain scanned images of minute patterns or composition distributions on the sample. To obtain scanned images with high resolution, focusing is performed by controlling the electrostatic lens or the electronic lens according to the height of the sample or the charging state of the sample surface so that the diameter of the electron beam irradiated to the pattern is minimized. In general, a method of capturing a plurality of images while changing the focus position and selecting the focus position at which the sharpness of the image is maximized is known. However, when the electron beam is irradiated to the observation target position for focusing, wear of the target area due to the electron beam becomes a problem. Moreover, when focusing is performed for each observation target every time, throughput deteriorates.
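To make the focus-sweep method described above concrete, the following is a minimal Python sketch. It assumes a hypothetical device call capture_image(focus) that returns a 2-D grayscale array, and it uses the variance of a discrete Laplacian as the sharpness metric, which is one common choice rather than the metric of any particular device.

    import numpy as np

    def sharpness(image: np.ndarray) -> float:
        # Variance of a discrete Laplacian response; an in-focus image has
        # stronger high-frequency content and therefore a larger variance.
        lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0)
               + np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4.0 * image)
        return float(lap.var())

    def autofocus(capture_image, focus_positions):
        # capture_image(focus) is a hypothetical device call returning a 2-D
        # grayscale array imaged at the given focus setting.
        scores = [sharpness(capture_image(f)) for f in focus_positions]
        return focus_positions[int(np.argmax(scores))]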
PTL 1 proposes a method of setting an adjustment area around a target area and determining an optical condition for the target area based on an optical condition of the optical system in the adjustment area. PTL 2 discloses a method of generating a height map by measuring and storing the height of a sample in advance with a height sensor, and shortening the focusing time by referring to the height map during imaging.
PTL 1: JP2019-160464A
PTL 2: JP2009-259878A
PTL 1 and PTL 2 do not clearly disclose a focusing method that targets a semiconductor pattern whose height varies stepwise. When such a semiconductor pattern is imaged with a certain height in focus, patterns at the other heights are out of focus, and thus defocus occurs in the captured image.
To resolve the defocus, a method of capturing a plurality of images, each focused on one height of the semiconductor pattern, and combining the images afterwards can be considered. However, since the irradiation areas overlap for the combination, there is a concern about damage to or charging of the imaging target. Throughput may also deteriorate due to the capturing and combination of the plurality of images.
The present disclosure provides a technique capable of reducing defocus in an image caused by variations in height of a semiconductor pattern by image processing performed after imaging.
To solve the foregoing problem, according to an aspect of the present invention, a correction method includes: acquiring a target image in which a semiconductor pattern having a plurality of areas whose heights vary stepwise is imaged; storing a plurality of image correction values for correcting the respective areas of the target image; and correcting the respective areas of the target image using the stored plurality of image correction values.
According to the present disclosure, it is possible to reduce defocus in an image caused by variations in height of a semiconductor pattern by image processing performed after imaging.
Other problems, configurations, and advantageous effects will be apparent from the description of the following embodiments.
Embodiments of the present disclosure will be described in detail with reference to the drawings. In the following embodiments, the components (including elements and steps) are not necessarily essential unless otherwise specified or clearly considered essential in principle.
Hereinafter, examples suitable for the present disclosure will be described with reference to the drawings. In the examples, a scanning electron microscope will be described as an example, but the present disclosure can also be applied to electron beam observation devices other than scanning electron microscopes.
The scanning electron microscope 1 includes an electron source 101, a deformation illumination diaphragm 103, a detector 104, a scanning deflection deflector 105, an objective lens 106, a stage 107, a control device 109, a system control unit 110, and an input/output unit 115.
In a downstream direction in which an electron beam 102 is output from the electron source 101, the deformation illumination diaphragm 103, the detector 104, the scanning deflection deflector 105, and the objective lens 106 are disposed. An electronic optical system includes an aligner (not illustrated) and an aberration corrector (not illustrated) that adjust a central axis (optic axis) 117 of a primary beam. The objective lens 106 according to Example 1 is an electronic lens that controls focusing by an excitation current, but may be an electrostatic lens or a composite lens of an electronic lens and an electrostatic lens. The stage 107 is configured to move while a sample 108 (for example, a semiconductor wafer) is placed thereon. The control device 109 is connected to each unit of the electron source 101, the detector 104, the scanning deflection deflector 105, the objective lens 106, and the stage 107 to be communicable. The system control unit 110 is connected to the control device 109 to be communicable.
In the present example, the detector 104 is disposed upstream of the objective lens 106 or the scanning deflection deflector 105, but the disposition order is not limited to this arrangement.
The electron beam 102 output from the electron source 101 is converged by adjusting the focus with the objective lens 106 so that its beam diameter is minimized on the sample 108. The scanning deflection deflector 105 is controlled by the control device 109 so that the electron beam 102 scans a fixed area of the sample 108. The electron beam 102 arriving at the surface of the sample 108 interacts with the material near the surface. Accordingly, electrons such as secondary electrons, backscattered electrons, and Auger electrons are generated from the sample 108. In the present example, an electron microscope image is displayed using the signal of secondary electrons 116. The secondary electrons 116 generated from the position at which the electron beam 102 arrives at the sample 108 are detected by the detector 104. The signal of the secondary electrons 116 detected by the detector 104 is processed in synchronization with a scanning signal sent from the control device 109 to the scanning deflection deflector 105 to form an SEM image. Accordingly, the sample 108 can be observed.
It is needless to say that the components other than the control system and the circuit system are disposed within a vacuum container and operate in an evacuated state. The scanning electron microscope 1 also includes a wafer transport system that places the sample 108, such as a semiconductor wafer, onto the stage 107 from outside the vacuum.
The system control unit 110 is a correction device that corrects a target image obtained by imaging a semiconductor pattern that has a plurality of areas whose heights vary stepwise. The correction device may be on-premises or in a cloud. The system control unit 110 includes a storage device 111, a processor 112, an input/output interface unit (hereinafter abbreviated to an I/F unit) 113, and a memory 114. An input/output unit 115 including an output device such as a display device and an input device such as a keyboard or a mouse is connected to the I/F unit 113 to be communicable. The input/output unit 115 may be a touch panel in which an input device and an output device are integrated.
The processor 112 is, for example, a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or the like. The processor 112 loads a program stored in the storage device 111 in a working area of the memory 114 so that the program can be executed. The memory 114 stores a program that is executed by the processor 112, data that is processed by the processor, and the like. The memory 114 is a flash memory, a random access memory (RAM), a read only memory (ROM), or the like. The storage device 111 stores various programs and various data. The storage device 111 stores, for example, an operating system (OS), various programs, various tables, and the like. The storage device 111 is a silicon disc including a nonvolatile semiconductor memory (a flash memory or an erasable programmable ROM (EPROM)), a solid-state drive device, a hard disk drive (HDD) device, or the like.
The processor 112 loads the control program 120, the image processing program 121, and the like stored in the storage device 111 into the memory 114 so that the programs can be executed. The processor 112 executes the control program 120 or the image processing program 121 to perform image processing related to defect inspection or numerical measurement of a semiconductor wafer, control of the control device 109, and the like. The storage device 111 stores a plurality of image correction values for correcting each area of the target image whose height is different. The image correction values are, for example, the correction coefficients described below. An image correction value may also be a table, a function, a mathematical formula, a mathematical model, a trained learning model, or a DB. The image processing program 121 is a program that processes SEM images.
The control device 109 includes a storage device, a processor, an I/F unit, and a memory, as does the system control unit 110. The storage device (not illustrated) of the control device 109 stores a program that moves the stage 107, a program that controls the focus of the objective lens 106, and the like. The processor (not illustrated) of the control device 109 loads a program stored in the storage device into the memory (not illustrated) and executes it. The I/F unit (not illustrated) of the control device 109 is connected to the system control unit 110, the electron source 101, the deformation illumination diaphragm 103, the detector 104, the scanning deflection deflector 105, the objective lens 106, and the stage 107 to be communicable.
Hereinafter, a method of correcting a microscope image (image) obtained by imaging a semiconductor pattern that has a plurality of areas whose heights vary stepwise will be described. Since the height of the semiconductor pattern differs depending on the position, the semiconductor pattern is not in focus at all positions in an image. Accordingly, a correction method and a correction device that correct an image using an image correction value determined in advance for each area of different height will be described.
Accordingly, in the image 202 of Example 1, each area of different height is corrected through image processing so that every area has the same frequency characteristic as the case of being in focus. The correction method according to Example 1 is broadly divided into a procedure for calculating correction coefficients and a procedure for correcting an image by applying the calculated correction coefficients.
First, the procedure for calculating a correction coefficient will be described with reference to a flowchart.
A procedure for calculating a correction coefficient used to correct the N-th stage (N is an arbitrary integer) of an image of a semiconductor pattern that has areas of different heights will be described. First, the system control unit 110 images a reference semiconductor pattern at a predetermined focus position and acquires one or more images (S301).
Subsequently, the system control unit 110 detects the position of the white band in the image acquired in S301 (S302). To detect the position of the white band, a method of acquiring a luminance profile of the image, detecting the peak positions of the profile, and setting the N-th peak position as the position of the white band of the N-th stage can be considered. The method of detecting the position of the white band is not limited thereto. In the present example, an example in which the position of the white band is detected as the position of the semiconductor pattern will be described. However, the position of the semiconductor pattern is not limited to the position of the white band, as long as a position of the semiconductor pattern such as an edge or a contour line of the semiconductor pattern can be detected.
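As an illustration of the peak-based detection described above, the following is a minimal Python sketch. It assumes the white bands run roughly vertically in the image, so that averaging the rows yields a 1-D luminance profile; the function name and the peak-spacing value are illustrative, not part of the disclosure.

    import numpy as np
    from scipy.signal import find_peaks

    def detect_white_bands(image: np.ndarray, n_stages: int) -> np.ndarray:
        # Average along the rows to obtain a 1-D luminance profile; white
        # bands appear as peaks in this profile.
        profile = image.mean(axis=0)
        # Require a minimum spacing so that one band yields one peak.
        peaks, _ = find_peaks(profile, distance=5)
        # Keep the n_stages strongest peaks and return them sorted by
        # position, so the N-th entry is the white-band position of the
        # N-th stage.
        strongest = peaks[np.argsort(profile[peaks])[::-1][:n_stages]]
        return np.sort(strongest)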
Subsequently, the system control unit 110 applies a window function Wn centering on the position of the white band of the N-th stage (S303). The window function Wn is, for example, a Tukey window that extracts the area centering on the position of the white band of the N-th stage.
Subsequently, the system control unit 110 performs Fourier transform or the like on the image extracted by the window function Wn to transform the image into an image of a frequency space and acquires a frequency characteristic (reference frequency characteristic) An from the image (S304).
The system control unit 110 changes a focus position and performs processes of S305 to S308 as in S301 to S304. Specifically, the system control unit 110 images the reference semiconductor pattern focusing on the N-th stage and acquires one or more images (focusing images) (S305). Subsequently, the system control unit 110 detects a position of the white band of the image acquired in S305 (S306). Subsequently, the system control unit 110 applies the window function Wn centering on the position of the white band of the N-th stage to the image captured in focus on the N-th stage (S307). Subsequently, the system control unit 110 performs Fourier transform or the like on the image extracted by the window function Wn to transform the image to an image of a frequency space and acquires a frequency characteristic Bn from the image (S308).
A correction coefficient Cn for correcting the image centering on the position of the white band of the N-th stage is calculated by the following formula (S309).
Correction coefficient Cn = Frequency characteristic Bn / Frequency characteristic An (Formula 1)
The correction coefficient Cn is calculated for each pixel of the image obtained by transforming an image to a frequency space. The correction coefficient Cn is calculated for each of the plurality of areas of which the heights are different.
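Putting S301 to S309 together, a minimal sketch of the coefficient calculation could look as follows. The 1-D Tukey window broadcast over the rows, the 2-D fast Fourier transform, and the small epsilon guarding against division by zero are assumptions of this sketch rather than details fixed by the disclosure.

    import numpy as np
    from scipy.signal.windows import tukey

    def tukey_window_2d(shape, center_x, width, alpha=0.5):
        # Build a 1-D Tukey window of the given width, place it so that it
        # is centered on the white-band position, and broadcast it over all
        # rows of the image.
        w = np.zeros(shape[1])
        start = max(0, center_x - width // 2)
        t = tukey(width, alpha)
        n = len(w[start:start + width])
        w[start:start + n] = t[:n]
        return np.tile(w, (shape[0], 1))

    def correction_coefficient(reference_img, focused_img, center_x, width,
                               eps=1e-8):
        # S303/S307: extract the area around the N-th white band.
        win = tukey_window_2d(reference_img.shape, center_x, width)
        # S304/S308: transform the extracted images to frequency space.
        An = np.fft.fft2(reference_img * win)  # reference characteristic An
        Bn = np.fft.fft2(focused_img * win)    # in-focus characteristic Bn
        # S309, Formula 1: Cn = Bn / An for each frequency-space pixel.
        return Bn / (An + eps)

For the four-stage example described next, C1 to C4 are obtained by one call per white-band position.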
Next, a method of calculating a plurality of correction coefficients, one for each of the plurality of areas of different heights, will be described. The system control unit 110 images the reference semiconductor pattern at a predetermined focus position to acquire a reference image 501, and applies the window functions W1 to W4 centering on the positions of the respective white bands to extract images 503a to 503d.
The system control unit 110 performs Fourier transform or the like on the images 503a to 503d to transform the images to images of a frequency space and acquires frequency characteristics (reference frequency characteristics) A1 to A4 from the images.
Subsequently, the system control unit 110 images the reference semiconductor pattern while changing the focus position so as to focus on each stage, and acquires images (focusing images) 601a to 601d. The system control unit 110 then applies the window functions (Tukey windows) W1 to W4 centering on the positions of the respective white bands to extract images 602a to 602d.
Subsequently, the system control unit 110 performs Fourier transform or the like on the images 602a to 602d extracted by the window functions (Tukey windows) W1 to W4 to transform the images into images of a frequency space and acquires frequency characteristics B1 to B4 from the images.
Then, the system control unit 110 calculates correction coefficients C1=B1/A1, C2=B2/A2, C3=B3/A3, and C4=B4/A4 based on the foregoing (Formula 1).
<Procedure for Correcting Image by Applying Calculated Correction Coefficient>
Next, the procedure for correcting an image by applying the calculated correction coefficients will be described with reference to a flowchart.
The target (semiconductor pattern) is imaged at a predetermined focus position, and the system control unit 110 acquires one or more images (target images) (S701). The predetermined focus position is the same focus position as the one at which the image was acquired in S301 of the procedure for calculating the correction coefficient. Subsequently, the system control unit 110 detects the position of the white band of the N-th stage in the acquired image (S702) and applies the window function Wn centering on the detected position (S703).
Subsequently, the system control unit 110 performs Fourier transform or the like on the image extracted by the window function Wn to transform the image into an image of a frequency space (S704). Then, the system control unit 110 multiplies each pixel of the image of the frequency space by the correction coefficient Cn (=frequency characteristic Bn/frequency characteristic An) calculated in the procedure for calculating the correction coefficient (S705). Subsequently, the system control unit 110 transforms the image of the frequency space multiplied by the correction coefficient Cn into an image of a real space according to a scheme such as a two-dimensional inverse Fourier transform (S706).
The system control unit 110 applies a window function Xn to the image acquired in S701 (S707). The window function Xn is calculated by the following formula.
Window function Xn = 1.0 − Window function Wn (Formula 2)
Here, the window function Xn is a window function that extracts the area of the image other than the area extracted by the window function Wn.
The system control unit 110 combines the image acquired in S706 with the image acquired in S707 (S708). Combination means summing the two images pixel by pixel. The corrected image of the N-th stage is output through the combination of the two images (S709). The corrected image is an image in which only the area centering on the position of the white band of the N-th stage is corrected. Therefore, when correction of a plurality of stages is performed, each process of the flowchart is repeated for each stage.
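A minimal Python sketch of S701 to S709, reusing the windows Wn and coefficients Cn from the calculation procedure, might look as follows. Taking the real part of the inverse transform and chaining the per-stage corrections on a single image are assumptions of this sketch.

    import numpy as np

    def correct_stage(target_img, Cn, Wn):
        # S703: extract the area around the N-th white band with window Wn.
        extracted = target_img * Wn
        # S704 and S705: transform to frequency space and multiply each
        # pixel by the correction coefficient Cn.
        corrected_freq = np.fft.fft2(extracted) * Cn
        # S706: return to real space by a two-dimensional inverse Fourier
        # transform.
        corrected_real = np.real(np.fft.ifft2(corrected_freq))
        # S707, Formula 2: the complementary window Xn = 1.0 - Wn extracts
        # the rest of the image, which is left unchanged.
        remainder = target_img * (1.0 - Wn)
        # S708: combine by summing the two images pixel by pixel.
        return corrected_real + remainder

    def correct_all_stages(target_img, coefficients, windows):
        # Repeat the flowchart once per stage, feeding each corrected image
        # into the correction of the next stage.
        img = target_img
        for Cn, Wn in zip(coefficients, windows):
            img = correct_stage(img, Cn, Wn)
        return img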
Next, a method of outputting a combined image will be described. The system control unit 110 detects the positions of the white bands in a target image 801 and applies the window function (Tukey window) W1 to extract an image 803a.
The system control unit 110 performs Fourier transform or the like on the image 803a to transform the image into an image 804a of a frequency space. Then, the system control unit 110 multiplies each pixel of the image 804a of the frequency space by the correction coefficient C1 (=frequency characteristic B1/frequency characteristic A1). Subsequently, the system control unit 110 transforms the image 804a of the frequency space multiplied by the correction coefficient C1 into an image 805a of the real space according to a scheme such as a two-dimensional inverse Fourier transform.
The system control unit 110 acquires an image 806a by applying a window function (Tukey window) X1 to the image 801. Then, the image 805a and the image 806a of the real space are combined to acquire a corrected image 807a.
The environment setting screen 900 includes a text box 901 for inputting the number of areas of different heights, a button 902 for capturing an image at an arbitrary focus position, and a button 903 for imaging the reference semiconductor pattern while changing the focus position by the number input in the text box 901. The environment setting screen 900 also includes a file storage portion 904 in which the calculated correction coefficients are stored as a file with an arbitrary name.
In Example 1, the plurality of correction coefficients C1 to Cn for correcting each of the plurality of areas of the image (target image) 801 are stored. Accordingly, defocus of the image (target image) 801 caused by variations in height of the semiconductor pattern can be reduced through image processing after imaging.
In Example 1, each area of the image (target image) 801 can be corrected with the corresponding one of the plurality of correction coefficients. Therefore, the image (target image) 801 needs to be captured only once. Accordingly, since it is not necessary to irradiate the semiconductor pattern with the electron beam repeatedly, the influence of damage or charging on the semiconductor pattern can be reduced.
As described above, the semiconductor pattern is imaged only once. Therefore, throughput is improved compared to a case in which imaging is repeated according to the height of the semiconductor pattern.
In Example 1, since the correction of each area of the image (the target image) 801 is correction related to focus adjustment of the scanning electron microscope 1, defocus can be reduced through image processing after imaging.
In Example 1, the plurality of correction coefficients C1 to Cn can be calculated, one for each focus position, based on the image (reference image) 501 and the plurality of images (focusing images) 601a to 601n captured while changing the focus position. Accordingly, since each area of the image (target image) 801 can be corrected with the correction coefficient appropriate for that area, an image in which defocus is reduced can be obtained.
In Example 1, by calculating frequency characteristics of the image (reference image) 501 or the images (focusing images) 601a to 601n through Fourier transform, it is possible to easily obtain the plurality of correction coefficients for correcting each area of the image (target image) 801.
In Example 1, since the image 805a of the real space can be obtained through inverse Fourier transform, an observer can observe the image 805a of the real space of the semiconductor pattern.
In Example 1, it is possible to calculate the correction coefficients C1 to Cn for reducing the defocus of each of the areas centering on the white bands by using the white bands 502a to 502e of the image (reference image) 501, the white bands of the images (focusing images) 601a to 601n, and the window functions W1 to Wn that extract the areas centering on the white bands.
In Example 1, each of the plurality of areas of the image (target image) 801 can be individually corrected using the correction coefficients C1 to Cn.
In Example 1, the number of areas of different heights can be input on the environment setting screen 900. Accordingly, the areas of the image (target image) 801 can be corrected for the number of stages designated by the user.
The frequency characteristics for calculating the correction coefficients can also be acquired from one image. However, to reduce the influence of variations in value due to noise or the like, the frequency characteristics may be calculated from a plurality of images captured under the same condition. For example, the frequency characteristics of a plurality of images captured under the same condition may be averaged and the correction coefficients may be calculated using the average values. In Example 2, an example in which the correction coefficients are calculated from an average of the frequency characteristics of a plurality of images captured under the same condition will be described.
The system control unit 110 repeats the processes of S1101 to S1104 M times. The processes of S1101 to S1104 are the same as the processes of S301 to S304 of Example 1, and an average frequency characteristic AAn is calculated from the frequency characteristics of the M acquired images.
The system control unit 110 repeats the processes of S1105 to S1108 L times. The processes of S1105 to S1108 are the same as the processes of S305 to S308 of Example 1, and an average frequency characteristic ABn is calculated from the frequency characteristics of the L acquired images.
A correction coefficient ACn for correcting an image centering on a position of a white band of an N-th stage is calculated by the following formula (S1111).
Correction coefficient ACn = Frequency characteristic ABn / Frequency characteristic AAn (Formula 3)
The correction coefficient is calculated for each pixel of an image obtained by transforming an image to a frequency space.
Each of the foregoing M and L is an integer of 1 or more. M and L may be different values or may be the same value. An average of the frequency characteristics means an average of amplitude characteristics at each frequency.
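Following this definition, a minimal sketch of Formula 3 might look like the following; the helper names and the epsilon term are illustrative assumptions.

    import numpy as np

    def averaged_amplitude(images, window):
        # Average of the amplitude characteristics at each frequency over
        # the images captured under the same condition.
        spectra = [np.abs(np.fft.fft2(img * window)) for img in images]
        return np.mean(spectra, axis=0)

    def averaged_correction_coefficient(ref_images, focused_images, window,
                                        eps=1e-8):
        AAn = averaged_amplitude(ref_images, window)      # M reference images
        ABn = averaged_amplitude(focused_images, window)  # L in-focus images
        # Formula 3: ACn = ABn / AAn for each frequency-space pixel.
        return ABn / (AAn + eps)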
In Example 2, even when noise or variation occurs during capturing of the image (reference image) 501 or during capturing of the images (focusing images) 601a to 601d, an influence of noise or variation can be reduced by using the average of the frequency characteristics. Since other advantageous effects are similar to those of Example 1, description thereof will be omitted.
In Example 1, the example in which the procedure for calculating the correction coefficients and the procedure for correcting the image by applying the calculated correction coefficients are performed by one device has been described. In Example 3, an example in which a plurality of devices are operated and the correction coefficients acquired by a certain device are applied to an image captured by another device will be described.
The procedure for calculating the correction coefficients is similar to that of Example 1 or Example 2. A correction coefficient CA calculated in advance by a device A is applied to an image captured by another device B to obtain a correction result image CIB.
By using the correction coefficient CA calculated by the device A for the image captured by the device B, the correction result image CIB becomes closer to a frequency characteristic of an image captured by the device A. That is, the correction result image CIB is close to the image captured by the device A and instrumental error between the devices A and B can be reduced. Since other advantageous effects are similar to those of Examples 1 and 2, description thereof will be omitted.
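In code terms, this cross-device use is simply the application procedure of Example 1 with coefficients computed on another device; a short sketch reusing the hypothetical correct_all_stages helper from the earlier sketch:

    # Hypothetical cross-device use: the coefficients come from device A,
    # the target image from device B (all names are illustrative).
    corrected_CIB = correct_all_stages(image_from_device_B,
                                       coefficients_from_device_A,
                                       windows)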
The present disclosure is not limited to the foregoing examples and includes various modified examples. The foregoing examples have been described in detail in order to make the present disclosure easy to understand, and the present disclosure is not necessarily limited to embodiments including all of the described configurations. Some of the configurations of a certain example can be replaced with configurations of another example, and configurations of another example can also be added to the configurations of a certain example. Other configurations can be added to, deleted from, or substituted for some of the configurations of each example.
In Examples 1 to 3, the examples in which the system control unit 110 performs each step of the flowcharts have been described. However, the present disclosure is not limited thereto, and some or all of the steps may be performed by another device such as a computer on a cloud.