This application claims benefit of priority to Korean Patent Application No. 10-2023-0072170 filed on Jun. 5, 2023 in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
Example embodiments of the present disclosure relate to a semiconductor measurement apparatus.
A semiconductor measurement apparatus may measure a dimension of a structure formed in a sample by a semiconductor process, and generally, the semiconductor measurement apparatus may measure the dimension using ellipsometry. In ellipsometry, light is generally irradiated to a sample at fixed azimuth and incidence angles, and a dimension of a structure included in the light-irradiated region of the sample is determined using the spectrum distribution of the light reflected from the sample. As the dimensions of structures formed by semiconductor processes gradually decrease, the effect that changes in other dimensions have on the spectrum distribution may increase. Accordingly, a dimension to be measured may not be accurately determined from the spectrum distribution obtained by ellipsometry.
Example embodiments provide a semiconductor measurement apparatus which may accurately determine a selected dimension, despite interactions between different dimensions, by obtaining data for dimension determination over all azimuth angles and a wide range of incident angles in a single imaging operation, and by obtaining pieces of polarization information from the original image without conversion to the frequency domain.
According to an example embodiment of the present disclosure, a semiconductor measurement apparatus includes a lighting unit including a light source and at least one lighting polarization element on a first travel path of light emitted by the light source; a light receiving unit including at least one light receiving polarization element on a second travel path of light that passes through the at least one lighting polarization element and that is reflected from a sample, where the light receiving unit includes an image sensor configured to receive the light passing through the at least one light receiving polarization element and to output an original image; and a control unit configured to: generate a prediction equation representing the original image, where the prediction equation is based on a plurality of elements of a Mueller matrix, approximate each of the plurality of elements of the Mueller matrix to a polynomial including bases of a Zernike polynomial and coefficients, generate optimization coefficients based on a sum of the coefficients and a difference between the prediction equation and the original image, determine whether an optimization condition is satisfied based on the optimization coefficients and a minimum value, and determine a dimension based on the optimization coefficients and the bases when the optimization condition is satisfied.
According to an example embodiment of the present disclosure, a semiconductor measurement apparatus includes an image sensor configured to output a multiple-interference image representing interference patterns of polarization components of light reflected from a sample; an optical unit on a path in which the image sensor receives the light, the optical unit including an objective lens proximate to the sample; and a control unit configured to: generate a restored image approximating the multiple-interference image based on a polynomial that includes bases of a Zernike polynomial and a plurality of coefficients, determine optimization coefficients by optimizing the polynomial such that a difference between the restored image and the multiple-interference image is less than or equal to a predetermined reference difference and such that a number of coefficients having a 0 value among the plurality of coefficients is maximized, generate pieces of polarization information of light based on the optimization coefficients and the bases of the Zernike polynomial, and determine a dimension to be measured from a structure of the sample based on the pieces of polarization information.
According to an example embodiment of the present disclosure, a semiconductor measurement apparatus includes a lighting unit configured to irradiate beams of light in different wavelength bands to a sample including a structure thereon; an optical unit on a path in which the beams of light are reflected from the sample; an image sensor configured to generate a multiple-interference image representing interference patterns of a plurality of polarization components based on the beams of light; and a control unit configured to: generate elements of a Mueller matrix representing the plurality of polarization components by performing a compressive sensing routine on the multiple-interference image, determine a measurement parameter based on the elements of the Mueller matrix, the measurement parameter including at least one of an intensity difference, a phase difference, and a degree of polarization of the plurality of polarization components, and determine a dimension to be measured from the structure based on the measurement parameter.
The above and other aspects, features, and advantages in the example embodiment will be more clearly understood from the following detailed description, taken in combination with the accompanying drawings, in which:
To clarify the present disclosure, parts not related to the description will be omitted, and the same or equivalent elements are denoted by the same reference numerals throughout the specification. Further, since the sizes and thicknesses of constituent members shown in the accompanying drawings are arbitrarily given for better understanding and ease of description, the present disclosure is not limited to the illustrated sizes and thicknesses. In the drawings, the thicknesses of layers, films, panels, regions, etc., are exaggerated for clarity, and for better understanding and ease of description, the thicknesses of some layers and regions are exaggerated.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being “on” another element, it can be directly on the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element's or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein may be interpreted accordingly.
In addition, unless explicitly described to the contrary, the word “comprise”, and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of stated elements but not the exclusion of any other elements. As used herein, the phrase “at least one of A, B, and C” refers to a logical (A OR B OR C) using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B and at least one of C.” As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components and/or groups thereof. The term “and/or” includes any and all combinations of one or more of the associated listed items. The term “connected” may be used herein to refer to a physical and/or electrical connection and may refer to a direct or indirect physical and/or electrical connection.
The present disclosure has been described herein with reference to flowchart and/or block diagram illustrations of methods, systems, and devices in accordance with exemplary embodiments of the invention. It will be understood that each block of the flowchart and/or block diagram illustrations, and combinations of blocks in the flowchart and/or block diagram illustrations, may be implemented by computer program instructions and/or hardware operations. These computer program instructions may be provided to a processor of a general purpose computer, a special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a non-transitory computer usable or computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer usable or computer-readable memory produce an article of manufacture including instructions that implement the function specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart and/or block diagram block or blocks.
Hereinafter, example embodiments will be described with reference to the accompanying drawings.
Referring to
The lighting unit 100 may include a light source 110, a monochromator 120, a fiber 130, lighting lenses 140 and 160, and a lighting polarization unit 150. The light source 110 may output light to be incident to the sample 20, and the light may include wavelengths ranging from an ultraviolet wavelength band to an infrared wavelength band or, in example embodiments, may be monochromatic light having a specific wavelength. The monochromator 120 may select and emit a predetermined wavelength band from the light emitted by the light source 110. In the example embodiment, the monochromator 120 may radiate light to the sample 20 while changing the wavelength band of the light emitted from the light source 110, and accordingly, light of a wide wavelength band may be radiated to the sample 20.
The fiber 130 may be a light guide member having a cable shape, and light incident to the fiber 130 may be irradiated to the first lighting lens 140. The first lighting lens 140 may be configured as a convex lens and may adjust an angular distribution of the light emitted by the fiber 130 so that the light is incident to the lighting polarization unit 150. For example, the first lighting lens 140 may convert light emitted by the fiber 130 into collimated light.
The lighting polarization unit 150 may polarize light passing through the lighting lens 140 in a predetermined polarization direction and may allow the light to be incident to the sample 20. In an example embodiment, the lighting polarization unit 150 may include at least one lighting polarization element and wave plates 154 and 155. For example, the lighting polarization unit 150 may include a first lighting polarization element 151, a second lighting polarization element 152, and a third lighting polarization element 153. Each of the first lighting polarization element 151 and the second lighting polarization element 152 may include a pair of beam displacers, and the third lighting polarization element 153 may be configured as a polarizer.
The wave plates 154 and 155 may be configured as half-wave plates or quarter-wave plates, and in example embodiments, the number of lighting polarization elements 151-153 and the number of wave plates 154 and 155 may vary. For example, each of the first lighting polarization element 151 and the second lighting polarization element 152 may be implemented as at least one of a Nomarski prism, a Wollaston prism, and a Rochon prism. The third lighting polarization element 153 may polarize light with a polarization direction inclined by 45 degrees relative to the ground. Light passing through the lighting polarization unit 150 may be incident to the beam splitter 210 of the optical unit 200 through the second lighting lens 160, which may be implemented as a convex lens.
The beam splitter 210 of the optical unit 200 may reflect a portion of light received from the lighting unit 100 and may transmit a portion of the light. The light reflected from the beam splitter 210 may be incident to the objective lens 220, and the light passing through the objective lens 220 may be incident to the sample 20. As an example, light passing through the objective lens 220 may focus on a target region of the sample 20.
When light passing through the objective lens 220 is reflected on the target region of the sample 20, the objective lens 220 may receive reflected light again. In the example embodiment illustrated in
Light irradiated to the sample 20 may include linearly polarized light in a specific direction. The linearly polarized light may be collected and incident to the target region of the sample 20, and the light may include a P polarization component and an S polarization component, which are defined based on the angle of incidence with respect to the surface of the sample 20. For example, in the light incident to the sample 20, the P polarization component may be reflected back as a P polarization component, and the S polarization component may be reflected back as an S polarization component.
The light reflected from the sample 20 may sequentially pass through the objective lens 220, the beam splitter 210, the first relay lens 230, the light receiving polarization unit 250, and the second relay lens 240 and may be incident to the image sensor 300. The first relay lens 230 may collect light passing through the beam splitter 210 and may form an image, and may allow the light to be incident to the light receiving polarization unit 250.
The light receiving polarization unit 250 may include at least one light receiving polarization element, such as light receiving polarization elements 251 and 252, a wave plate 253, and an analyzer 254. The first light receiving polarization element 251 and the second light receiving polarization element 252 may polarize light passing through the first relay lens 230, and each may include a pair of beam displacers. The wave plate 253 may be a half-wave plate similar to the wave plates 154 and 155 included in the lighting unit 100. Light passing through the light receiving polarization unit 250 may be incident to the image sensor 300 through the second relay lens 240.
Each of the first and second lighting polarization elements 151 and 152 and the first and second light receiving polarization elements 251 and 252 may separate incident light into a first polarization component and a second polarization component. For example, the first lighting polarization element 151 may separate incident light into a first polarization component and a second polarization component, and may transmit the light by shifting an optical axis of each of the first polarization component and the second polarization component. The second lighting polarization element 152 may separate light, which has passed through the first lighting polarization element 151 and has been polarized by 45 degrees by the half-wave plate, into a first polarization component and a second polarization component.
Accordingly, a plurality of polarization components generated by the lighting polarization elements 151 and 152 and the light receiving polarization elements 251 and 252 may be incident on the image sensor 300 while interfering with each other, and accordingly, the image sensor 300 may generate a multiple-interference image as an original image. The image sensor 300 may output the original image to the control unit 350, and the control unit 350 may process the original image and may determine the dimension of the structure included in the light-irradiated region of the sample 20.
In the example embodiment, the control unit 350 may determine a selected dimension among a plurality of dimensions of the structure included in sample 20 by processing the original image without an operation of converting the original image into a frequency domain. For example, the control unit 350 may configure a prediction equation representing the original image with a plurality of pieces of polarization information. Each of the plurality of pieces of polarization information included in the prediction equation may be defined as a polynomial comprising a plurality of bases and a plurality of coefficients, and the plurality of bases may be bases defined by a Zernike polynomial.
The control unit 350 may determine an optimization condition in which a calculation result, obtained by adding a sum of the plurality of coefficients included in the polynomial to a difference between the original image and the prediction equation, is minimized. When the optimization condition is derived, the control unit 350 may generate the prediction equation using the coefficients according to the optimization condition, and may determine polarization characteristics of light reflected from the sample 20 and incident to the image sensor 300 using the plurality of pieces of polarization information of the prediction equation. For example, the control unit 350 may determine an intensity difference, a phase difference, and a degree of polarization of polarization components of the light at various incident angles and azimuth angles based on the plurality of pieces of polarization information. The control unit 350 may determine the selected dimension to be measured among the dimensions of the structure included in the sample 20 based on at least one of the intensity difference, the phase difference, and the degree of polarization of the polarization components of the light.
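For illustration only, and under the assumption that the prediction equation is linear in the Zernike coefficients, the optimization described above may be summarized in the following compact form, with the difference measured by, for example, a squared norm and λ an optional weight; this is a restatement of the prose, not the exact equation presented later in the description:

$$\hat{c} \;=\; \underset{c}{\arg\min}\;\bigl\lVert I - \hat{I}(c)\bigr\rVert^{2} \;+\; \lambda\sum_{d}\lvert c_{d}\rvert,$$

where $I$ is the original image, $\hat{I}(c)$ is the restored image generated by the prediction equation, and $c_{d}$ are the coefficients of the polynomial.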
Using the above method, the semiconductor measurement apparatus 10 may accurately measure the selected dimension to be measured among the dimensions of the structure of the sample 20. Generally, the dimension of the structure formed in sample 20 may be determined using the spectrum distribution according to a wavelength of light reflected from the sample 20. However, in this case, it may be difficult to accurately measure the selected dimension due to interactions in which other dimensions affect the spectrum distribution in addition to the selected dimension to be determined.
In the example embodiment, a selected dimension may be determined by optimizing a prediction equation that comprises pieces of polarization information and represents the original image, without a process of converting the original image into a frequency domain. Accordingly, even when dimensions of the miniaturized structure included in the sample 20 interact with and affect each other, only the selected dimension may be accurately determined, and as such, the performance of the semiconductor measurement apparatus 10 and the yield of the semiconductor process may improve.
Thereafter, referring to
However, in the lighting unit 100A, the lighting polarization unit 150A may have a structure different from the example embodiment described above with reference to
Thereafter, referring to
However, the light receiving polarization unit 250B may have a structure different from the example embodiment described with reference to
The wave plate 253B may be a quarter-wave plate, and the control unit 350 may adjust a polarization state of light incident to the image sensor 300 by rotating a compensator. For example, similar to the description with reference to
Referring to
However, in the example embodiment illustrated in
In other words, in the example embodiment illustrated in
First, referring to
The control unit may construct a prediction equation representing the original image (S11). The prediction equation may include a spatial frequency carrier and a plurality of pieces of polarization information, and the plurality of pieces of polarization information may be represented by a plurality of elements included in the Mueller matrix. A restored image similar to the original image may be constructed using the prediction equation. When the prediction equation is constructed, the control unit may define each of the plurality of pieces of polarization information included in the prediction equation as a polynomial (S12). A polynomial representing each of a plurality of pieces of polarization information may include a plurality of bases and a plurality of coefficients, and in the example embodiment, the plurality of bases may be bases according to a Zernike polynomial.
The control unit may derive an optimization condition in which the difference between the original image and the prediction equation is minimized and the sum of coefficients included in the polynomial is minimized (S13). When the difference between the original image and the prediction equation is minimized or relatively small, it may indicate that the restored image constructed using the prediction equation may be generated as an image similar to the original image. For example, the control unit may determine a condition in which a calculation result obtained by adding the sum of coefficients included in the polynomial to the difference between the original image and the prediction equation has a minimum value as the optimization condition. When the optimization condition is determined, the control unit may generate optimization coefficients by determining each value of the coefficients under the optimization condition (S14). The control unit may generate each of the plurality of pieces of polarization information by constructing a polynomial with the optimization coefficients and the plurality of bases, and may determine a selected dimension based on the information (S15).
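A minimal Python sketch of operations S11-S15 is shown below, assuming the spatial frequency carriers and the Zernike bases have already been evaluated on the image grid; the array names, the use of scikit-learn's Lasso solver, and the carrier contents are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np
from sklearn.linear_model import Lasso  # L1-regularized least squares

def measure_dimension(original_image, carriers, zernike_bases, alpha=1e-3):
    """original_image: (H, W) multiple-interference image.
    carriers: dict {(i, j): (H, W) spatial frequency carrier} (S11).
    zernike_bases: (D, H, W) Zernike bases on the unit pupil (S12)."""
    D = zernike_bases.shape[0]
    keys = sorted(carriers)
    # Each Mueller element M_ij ~ sum_d c_{ij,d} * Z_d, so every image pixel is a
    # linear combination of carrier_ij * Z_d terms: build that design matrix.
    columns = []
    for key in keys:
        for d in range(D):
            columns.append((carriers[key] * zernike_bases[d]).ravel())
    A = np.stack(columns, axis=1)                     # (H*W, n_ij * D)
    y = original_image.ravel()
    # S13-S14: optimization coefficients from a sparsity-promoting fit.
    fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y)
    coeffs = fit.coef_.reshape(len(keys), D)
    # S15: restore each piece of polarization information (Mueller element map).
    mueller = {key: np.tensordot(coeffs[k], zernike_bases, axes=1)
               for k, key in enumerate(keys)}
    return mueller  # dimension determination would then use these maps
```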
In the example embodiment, polarization characteristics of light reflected from a sample and incident to an image sensor may be analyzed by a compression sensing method using a prediction equation approximating the original image, without a process of converting the original image into a frequency domain. Accordingly, the polarization characteristics of the light represented by the original image may be analyzed without the band pass filtering applied in the process of converting the original image into the frequency domain and inversely transforming the image. By using such a compression sensing method, the polarization characteristics of light may be restored even in a high frequency band, and accordingly, the selected dimension may be accurately determined.
When the original image, which is a multiple-interference image, is converted into the frequency domain and inversely transformed to analyze the polarization characteristics of light, it may be necessary to obtain a reference image by irradiating light to a reference sample with the same semiconductor measurement apparatus. The reference sample may be a bare wafer. A reference value in the frequency domain may be obtained by converting the reference image, obtained by irradiating the bare wafer with light, into the frequency domain. When a multiple-interference image is obtained from a sample to be measured, a normalization operation of converting the image into the frequency domain and applying the reference value may need to be performed. In the example embodiment, by contrast, the multiple-interference image may be analyzed using the compression sensing method to determine the polarization characteristics of light without conversion to the frequency domain, such that the measurement process for the reference sample may be omitted.
Thereafter, referring to
The control unit of the semiconductor measurement apparatus may obtain a multiple-interference image corresponding to each wavelength band as an original image (S21). The control unit may construct a polynomial representing a multiple-interference image with bases of the Zernike polynomial and coefficients (S22). As an example, the multiple-interference image may be represented as a prediction equation comprising a spatial frequency carrier and a plurality of pieces of polarization information, and each of the plurality of pieces of polarization information may be defined as a polynomial comprising bases of a Zernike polynomial and coefficients. By the prediction equation, a restored image similar to a multiple-interference image may be created.
The control unit may optimize the polynomial such that the number of coefficients having a value of 0 among the plurality of coefficients is maximized. A condition in which the number of coefficients having a value of 0 is maximized may be defined as an optimization condition, and the control unit may determine optimization coefficients, which are the values of the respective coefficients under the optimization condition (S23). The control unit may represent each of the plurality of pieces of polarization information with the optimization coefficients determined in operation S23 and the bases of the Zernike polynomial (S24), and may determine the selected dimension using the plurality of pieces of polarization information (S25).
For example, the control unit may select, as the optimization condition, a condition in which the number of coefficients having a value of 0 among the plurality of coefficients is maximized while the restored image generated by the prediction equation properly represents the multiple-interference image. The control unit may select coefficients that generate a restored image properly representing the multiple-interference image, and in this case, two or more prediction equations may be generated by the selected coefficients. The control unit may determine the optimization coefficients by selecting, from among the two or more prediction equations, the prediction equation having the maximum number of coefficients with a value of 0.
For example, the control unit may tune the prediction equation such that the difference between the restored image and the multiple-interference image is less than or equal to a reference difference. When multiple prediction equations generate restored images whose difference from the multiple-interference image is less than or equal to the reference difference, the control unit may determine that, among the plurality of prediction equations, the prediction equation with the maximum number of coefficients having a value of 0 satisfies the optimization condition.
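A minimal sketch of this selection rule follows, assuming a family of candidate fits is produced by sweeping the sparsity weight of an L1-regularized solver; the solver choice, the sweep values, and the reference_diff threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def pick_optimization_coefficients(A, y, reference_diff,
                                   alphas=(1e-4, 1e-3, 1e-2, 1e-1)):
    """A: design matrix (pixels x basis terms), y: flattened multiple-interference image.
    Keep candidates whose restored image is within the reference difference,
    then choose the candidate with the most zero-valued coefficients."""
    best = None
    for alpha in alphas:
        coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y).coef_
        diff = np.linalg.norm(A @ coef - y)      # restored vs. original image
        if diff > reference_diff:
            continue                              # does not represent the image properly
        zeros = int(np.sum(coef == 0.0))
        if best is None or zeros > best[0]:
            best = (zeros, coef)
    return None if best is None else best[1]      # optimization coefficients
```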
Referring to
The back focal plane BFP may be a plane defined by the first direction D1 and the second direction D2. For example, the first direction D1 may be the same as the X direction of the surface of the sample SP, and the second direction D2 may be the same as the Y direction. Light passing through the objective lens OL may be collected in the form of a point on the target region of the sample SP, may be reflected from the target region, may pass through the objective lens OL, and may travel to the back focal plane BFP. As described above, in the semiconductor measurement apparatus according to the example embodiment, light may be incident to the sample SP at all azimuth angles from 0 to 360 degrees, and the range of the incident angle of the light incident to the sample SP may be determined according to a numerical aperture of the objective lens OL.
In an example embodiment, an objective lens OL having a numerical aperture of 0.8 or more and less than 1.0 may be employed in the semiconductor measurement apparatus such that data for a wide range of incident angles may be obtained by one-time imaging performed by the image sensor. As an example, the maximum incident angle of light passing through the objective lens OL may be greater than or equal to 72 degrees and less than 90 degrees. For example, the image sensor may be disposed such that a light receiving surface thereof is at a position conjugate to the back focal plane of the objective lens.
When each coordinate included in the back focal plane BFP defined by the first direction D1 and the second direction D2 is represented as polar coordinates (r, θ), as illustrated in
Accordingly, in the semiconductor measurement apparatus according to the example embodiment, data including interference patterns over an azimuth angle range of 0 to 360 degrees and over the incident angle range determined by the numerical aperture of the objective lens OL may be obtained in the form of an image by one-time imaging performed while light is reflected from the target region of the sample SP. Accordingly, differently from a general method that requires imaging to be performed multiple times while changing the position and angle of the lighting unit irradiating light to the sample SP or of the sample itself, the data required to analyze and measure the target region of the sample SP may be obtained by one-time imaging, and the efficiency of the measurement process using the semiconductor measurement apparatus may be improved.
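A minimal sketch of this coordinate mapping is shown below, under the commonly assumed sine condition in which the radial pupil coordinate is proportional to the sine of the incident angle; the pixel-grid layout and the numerical aperture value are illustrative:

```python
import numpy as np

def pupil_angles(height, width, numerical_aperture=0.9):
    """Map each back-focal-plane pixel to (incident angle, azimuth angle) in degrees.
    Assumes the pupil edge (r = r_max) corresponds to sin(theta) = NA."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    y, x = np.mgrid[0:height, 0:width]
    r = np.hypot(x - cx, y - cy)
    r_max = min(cx, cy)                                        # pupil radius in pixels
    azimuth = np.degrees(np.arctan2(y - cy, x - cx)) % 360.0   # 0..360 degrees
    sin_theta = np.clip(numerical_aperture * r / r_max, 0.0, 1.0)
    incident = np.degrees(np.arcsin(sin_theta))                # 0..arcsin(NA)
    valid = r <= r_max                                         # pixels inside the pupil
    return incident, azimuth, valid
```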
First, referring to
Light incident to the X-Y plane 400 may be decomposed into polarization components traveling in directions orthogonal to each other. In other words, light may be decomposed into a polarization component traveling in the X-axis direction and a polarization component traveling in the Y-axis direction. Referring to
Light passing through the wave plate may be incident to a second lighting polarization element.
Thereafter,
However, as described with reference to
The image sensor of the semiconductor measurement apparatus may output a self-interference image representing the interference of polarization components of light as an original image, and the control unit may determine the polarization characteristics of light by applying the compression sensing method to the obtained original image. For example, the control unit may determine an intensity difference, a phase difference, and a degree of polarization of the polarization components of the light incident to the image sensor. The control unit may measure the selected dimension and overlay of structures formed in the region in which light is reflected from the sample using at least one of an intensity difference, a phase difference, and a degree of polarization of polarization components of light.
First,
As described with reference to
As described above, the original image 500 generated by the image sensor may be a multiple-interference image. Each of a lighting unit irradiating light on the sample and an optical unit transmitting light reflected from the sample to the image sensor may include polarization elements, and each polarization element may be implemented as a beam displacer or compensator.
Accordingly, as described above with reference to
The original image 500 according to the example embodiment illustrated in
The original image 500 generated by the image sensor may be transmitted to a control unit of a semiconductor measurement apparatus, and the control unit may process the original image 500 and may obtain the plurality of pieces of polarization information. For example, the control unit may obtain a plurality of pieces of polarization information by processing the original image 500 without applying a Fourier transform to the original image 500 and transforming the original image 500 into a frequency domain. A plurality of pieces of polarization information obtained by the control unit may include an intensity difference, a phase difference, and a degree of polarization of polarization components included in light incident to the image sensor.
The second image 520 illustrated in
For example, in each of the first to third images 510 to 530, a position of each of a plurality of pixels disposed on the X-Y plane may be defined as polar coordinates as described above with reference to
In the example embodiment, since light passing through an objective lens having a large numerical aperture is incident to the sample and reflected, the first to third images 510-530, including information such as an intensity difference, a phase difference, and a degree of polarization of polarization components of light over a wide range of incident angles and the full azimuth angle range (0 to 360 degrees), may be extracted from the original image 500 obtained by imaging one time. Accordingly, the time required for the measurement operation may be shortened.
Also, in the example embodiment, a plurality of pieces of polarization information may be obtained from the original image 500 using a compression sensing method without an operation of converting the original image 500 into a frequency domain. In an example embodiment, the plurality of pieces of polarization information may be elements of a Mueller matrix. In the method of converting the original image 500 into the frequency domain and interpreting the image, data loss in a high frequency band may occur in the process of filtering the signal converted into the frequency domain and then inversely transforming it. Differently from such a configuration, in the example embodiment, the plurality of pieces of polarization information may be extracted from the original image 500 without conversion to the frequency domain, and the selected dimension to be measured may be more accurately determined from the structure of the sample.
As described above, in the example embodiment, pieces of polarization information may be extracted from an original image using a compression sensing method without conversion to a frequency domain. A control unit of a semiconductor measurement apparatus according to an example embodiment may receive an original image, which is a self-interference image, from an image sensor, and may construct a prediction equation representing the original image. For example, the prediction equation may include a plurality of pieces of polarization information and spatial frequency carriers. In an example embodiment, the prediction equation may be based on Equation 1 below.
In Equation 1, I is the original image, and the plurality of pieces of polarization information Mij included in the prediction equation may be elements of a Mueller matrix. A Mueller matrix is a matrix that operates on the Stokes vector representing the polarization state of light, and may include 16 elements. As an example, the relationship between the plurality of pieces of polarization information and the spatial frequency carriers may be matched as illustrated in Table 1 below.
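Since Equation 1 and Table 1 themselves are not reproduced here, the sketch below only illustrates the general structure the prose describes: the predicted image is a sum of Mueller-matrix elements, each weighted by its matched spatial frequency carrier. The carrier contents and element list are taken as given inputs rather than assumed:

```python
import numpy as np

def predicted_image(mueller_elements, carriers):
    """mueller_elements: dict {(i, j): (H, W) array M_ij}.
    carriers: dict {(i, j): (H, W) spatial frequency carrier matched to M_ij}.
    Returns the restored (predicted) multiple-interference image."""
    keys = sorted(mueller_elements)
    return np.sum([carriers[k] * mueller_elements[k] for k in keys], axis=0)
```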
Accordingly, when the elements of the Mueller matrix are determined using the prediction equation, polarization characteristics of light incident to the image sensor may be analyzed based on the elements. However, the elements of the Mueller matrix, that is, the plurality of pieces of polarization information, may not be uniquely determined from the prediction equation of Equation 1 alone. Accordingly, in the example embodiment, a Zernike polynomial may be used to determine the plurality of pieces of polarization information.
The control unit of the semiconductor measurement apparatus may represent each element of the Mueller matrix by approximating the elements as a polynomial comprising bases of the Zernike polynomial and coefficients. The representation of each element of the Mueller matrix as a polynomial comprising bases of the Zernike polynomial and coefficients may be understood as a process of orthogonal decomposition of the elements of the Mueller matrix. As an example, the bases of the Zernike polynomial may be illustrated in
For example, the plurality of bases 600 may be divided into first-order to eighth-order bases 610-680 according to order. In the example embodiment, as illustrated in Equation 2 below, a polynomial representing each element of the Mueller matrix may be defined using the plurality of bases 600 according to the Zernike polynomial and coefficients (sd1, sd2, sd3, . . . ). An element defined as a polynomial according to the example of Equation 2 may be the first element M11 of the Mueller matrix. Also, in Equation 2, the basis multiplied by the first coefficient sd1 may be the second basis among the second-order bases of the Zernike polynomial, and the basis multiplied by the second coefficient sd2 may be the first basis among the second-order bases of the Zernike polynomial. The basis multiplied by the third coefficient sd3 may be the second basis among the third-order bases of the Zernike polynomial.
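A minimal sketch of evaluating Zernike bases on the unit pupil using the standard radial-polynomial definition is shown below; the grid size and the grouping of bases into order groups follow a generic convention and may differ from the numbering used above:

```python
import numpy as np
from math import factorial

def zernike(n, m, rho, phi):
    """Zernike basis Z_n^m evaluated on the unit disk (rho <= 1)."""
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        R += ((-1) ** k * factorial(n - k)
              / (factorial(k)
                 * factorial((n + abs(m)) // 2 - k)
                 * factorial((n - abs(m)) // 2 - k))) * rho ** (n - 2 * k)
    return R * (np.cos(m * phi) if m >= 0 else np.sin(-m * phi))

# Example: the three second-order bases (n = 2) evaluated on a pupil grid.
size = 256
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
rho, phi = np.hypot(x, y), np.arctan2(y, x)
bases = [zernike(2, m, rho, phi) * (rho <= 1) for m in (-2, 0, 2)]
```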
When each of the elements included in the Mueller matrix is approximated as a polynomial as in Equation 2, the control unit of the semiconductor measurement apparatus may determine an optimization condition to restore the plurality of pieces of polarization information. The optimization condition may be determined according to Equation 3 below.
Referring to Equation 3, the optimization condition may be a condition in which the result of adding the sum of the coefficients (sd), included in the polynomials approximating the elements of the Mueller matrix, to the difference between the prediction equation and the original image I has a minimum value. According to the optimization condition defined in Equation 3, the control unit of the semiconductor measurement apparatus may determine the value of each coefficient sd such that the difference between the original image I and the prediction equation remains small and the number of coefficients sd having a non-zero value is minimized. The coefficients sd determined according to the optimization condition may be defined as optimization coefficients.
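A minimal sketch of solving an objective of the form described for Equation 3 (a data-fit term plus the sum of coefficient magnitudes) with plain iterative soft-thresholding is given below; the squared-norm data term, the weight lam, and the iteration count are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def solve_equation3(A, y, lam=1e-3, iters=500):
    """Minimize ||A @ sd - y||^2 + lam * sum(|sd|) by ISTA.
    A: (pixels x terms) carrier-times-Zernike design matrix, y: flattened original image I.
    Returns the optimization coefficients sd (many entries driven to exactly 0)."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    sd = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ sd - y)              # gradient of the data-fit term (up to a factor of 2)
        sd = soft_threshold(sd - step * grad, step * lam / 2.0)
    return sd
```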
The control unit may restore each of the plurality of pieces of polarization information by reflecting optimization coefficients in Equation 2, and as described above, the plurality of pieces of polarization information may be a plurality of elements included in a Mueller matrix. As an example, the control unit may determine 16 elements M11-M44 of the Mueller matrix as illustrated in
As illustrated in
As described above, each pixel of the original image may correspond to a specific azimuth angle and a specific incident angle. Accordingly, the control unit may calculate measurement parameters representing the polarization characteristics of light at a specific azimuth angle and a specific incident angle using the pieces of polarization information obtained by applying the compression sensing method to the original image. As an example, the measurement parameters may include an intensity difference between polarization components included in light incident to the image sensor, a phase difference between the polarization components, and a degree of polarization. The control unit may determine the selected dimension by selecting, from among the various measurement parameters, a measurement parameter having the highest sensitivity for the selected dimension to be measured.
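A minimal sketch of deriving such measurement parameters at one pupil position from a 4x4 Mueller matrix follows; the degree of polarization uses a chosen input Stokes vector, and the amplitude-ratio and phase-difference expressions assume the isotropic-sample block structure commonly used in ellipsometry, which may differ from the exact definitions intended here:

```python
import numpy as np

def measurement_parameters(M, s_in=(1.0, 1.0, 0.0, 0.0)):
    """M: 4x4 Mueller matrix at one (incident angle, azimuth) pixel.
    Returns (psi, delta, degree_of_polarization)."""
    m = M / M[0, 0]                                  # normalize by M11
    # Isotropic-sample convention: N = -m12 = cos(2*psi),
    # C = m33 = sin(2*psi)cos(delta), S = m34 = sin(2*psi)sin(delta).
    N, C, S = -m[0, 1], m[2, 2], m[2, 3]
    psi = 0.5 * np.arccos(np.clip(N, -1.0, 1.0))     # amplitude (intensity) ratio angle
    delta = np.arctan2(S, C)                         # phase difference of p/s components
    s_out = M @ np.asarray(s_in, dtype=float)        # output Stokes vector
    dop = np.sqrt(np.sum(s_out[1:] ** 2)) / s_out[0] # degree of polarization
    return psi, delta, dop
```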
The control unit may accurately determine the selected dimension to be measured by comparing the above measurement parameters with reference data pre-stored in a library or by inputting the parameters to a pre-trained machine learning model. For example, the machine learning model may be trained in advance to receive measurement parameters, calculated by applying the compression sensing method to an original image, and to output a value of a specific dimension.
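A minimal sketch of the machine-learning route is shown below, assuming a pre-built training set of measurement parameters labeled with the dimension values they correspond to; the regressor choice and the feature layout are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_and_predict_dimension(train_parameters, train_dimensions, measured_parameters):
    """train_parameters: (n_samples, n_features) measurement parameters (e.g. psi, delta,
    DOP sampled at several incident angles and azimuths); train_dimensions: (n_samples,)
    known dimension values; measured_parameters: (n_features,) from a new original image."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train_parameters, train_dimensions)
    return float(model.predict(np.asarray(measured_parameters).reshape(1, -1))[0])
```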
Also, in the example embodiment, the control unit may determine the selected dimension by comparing at least one element among the elements M11 to M44 of the Mueller matrix derived as illustrated in
In the example embodiment, at various incident angles and azimuth angles, the selected dimension may be determined by using the intensity difference, phase difference, and degree of polarization of polarization components included in light incident to the image sensor as measurement parameters. Accordingly, even in fine structures in which different dimensions, such as width and height, interact with and affect each other, only the selected dimension to be measured may be accurately determined.
To accurately measure the selected dimension, in the example embodiment, original images may be obtained while irradiating light of various wavelength bands to the sample. For example, as described above with reference to
Referring to
The gate structure 710 may include a gate electrode layer 711, a capping layer 712 and a gate insulating layer 713. In example embodiments, the gate electrode layer 711 may have a multilayer structure formed of a plurality of different conductive materials, for example, metal materials, and may provide a wordline. The capping layer 712 may be formed of polysilicon, silicon nitride, silicon oxide, silicon oxynitride, silicon carbonitride, silicon oxycarbonitride, or the like. The gate insulating layer 713 may be formed of a high-k material having a higher dielectric constant than silicon oxide, silicon nitride, or the like. A channel region may be formed in a region adjacent to the gate electrode layer 711 in the substrate 701.
An interlayer insulating layer 730 may be disposed on the substrate 701. The interlayer insulating layer 730 may include a lower interlayer insulating layer 731 and an upper interlayer insulating layer 732. An active region adjacent to the gate structure 710 may be connected to a buried contact BC extending into the lower interlayer insulating layer 731, and the buried contact BC may be connected to a landing pad LP extending into the upper interlayer insulating layer 732. The buried contact BC connected to the active region between adjacent gate structures 710 may be connected to the bitline structure 720. The bitline structure 720 may include a conductive layer 721, a capping layer 722 and a spacer 723.
A cell capacitor 740 extending in the first direction (Z-axis direction) may be connected to an upper portion of the landing pad LP. The cell capacitor 740 may include a lower electrode 741, a capacitor dielectric layer 742 and an upper electrode 743, and the shape of the lower electrode 741 may have a shape other than the pillar shape illustrated in
In the manufacturing process of the semiconductor device 700, for example, the buried contact BC may be formed by removing a portion of the lower interlayer insulating layer 731 and filling the removed region with a conductive material. As the gap between the gate structures 710 decreases and the integration density of the semiconductor device 700 increases, it may be desirable to precisely specify the position of the buried contact BC and to precisely control a dimension thereof in order to prevent defects.
For example, a first graph 800 may represent measurement parameters calculated from an original image of the semiconductor device 700 having the buried contact BC of a first height, and the second graph 810 may represent measurement parameters calculated from an original image of the semiconductor device 700 having the buried contact BC of a second height greater than the first height. A third graph 820 may represent measurement parameters calculated from an original image of the semiconductor device 700 having the buried contact BC of a third height greater than the second height, and a fourth graph 830 may represent measurement parameters calculated from an original image of the semiconductor device 700 having the buried contact BC of a fourth height greater than the third height.
A difference between the first height and the second height, a difference between the second height and the third height, and a difference between the third height and the fourth height may each be 1 nm or less. Referring to
The first to fourth graphs 800 to 830 as illustrated in
When the wavelength band with high sensitivity for the selected dimension to be measured in the structure is already recognized, the measurement operation may be swiftly completed by applying the compression sensing method to the original image obtained while irradiating the sample with light in the corresponding wavelength band. For example, when measuring the height of the buried contact BC in the semiconductor device 700 as illustrated in
Also, in the example embodiment, the height of the buried contact BC may be measured using spectral data representing the distribution of measurement parameters obtained while irradiating light of a wide wavelength band. For example, as illustrated in
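A minimal sketch of comparing a measured parameter spectrum against pre-stored reference spectra and reading off the dimension of the closest match follows; the array names and the use of a root-mean-square distance are illustrative assumptions:

```python
import numpy as np

def match_spectrum(measured, library_spectra, library_heights):
    """measured: (n_wavelengths,) measurement-parameter spectrum.
    library_spectra: (n_entries, n_wavelengths) reference spectra,
    library_heights: (n_entries,) buried-contact heights for each reference entry."""
    rmse = np.sqrt(np.mean((library_spectra - measured) ** 2, axis=1))
    best = int(np.argmin(rmse))
    return library_heights[best], rmse[best]
```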
According to the aforementioned example embodiments, an original image corresponding to azimuth angles of 0 degrees to 360 degrees may be obtained by imaging one time, and a prediction equation representing the original image may be generated with a plurality of pieces of polarization information. In the prediction equation, the plurality of pieces of polarization information may each be represented by a plurality of bases and a plurality of coefficients, and optimization coefficients for the plurality of coefficients may be determined under an optimization condition in which the sum of the plurality of coefficients and the difference between the original image and the prediction equation is minimized. The plurality of pieces of polarization information may be generated using the optimization coefficients, and the dimension to be measured may be accurately determined based on the information, regardless of interactions in which dimensions affect each other.
While the example embodiments have been illustrated and described above, it will be apparent to those skilled in the art that modifications and variations may be made without departing from the scope of the example embodiments as defined by the appended claims.