This application claims benefit of priority to Korean Patent Application No. 10-2023-0073966, filed on Jun. 9, 2023, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates to a semiconductor manufacturing device and control of the same.
In a device using an electron beam, such as a scanning electron microscope or the like, it is necessary to accurately irradiate the electron beam to a subject to be analyzed or a subject to be processed. An electron beam emitted from an electron beam source passes through an aperture, a condenser lens formed by an electromagnetic field, or the like, to be incident on a subject. In order to accurately irradiate the electron beam to the subject, a position of the aperture should be controlled such that the electron beam and the aperture are accurately aligned. However, it may take a long time to manually adjust the position of the aperture, thereby reducing efficiency of a process.
In general, in some aspects, the present disclosure is directed to a semiconductor manufacturing device and method that can improve process efficiency by automatically controlling a position of an aperture so that an electron beam and the aperture are accurately aligned.
According to some aspects of the present disclosure, a semiconductor manufacturing device includes an electron beam source emitting an electron beam; a plurality of condenser lenses disposed between the electron beam source and a stage on which an object including structures is seated; an objective lens disposed between the plurality of condenser lenses and the stage; an aperture disposed between the plurality of condenser lenses; and a controller configured to acquire a plurality of original images while changing a working distance between the objective lens and the object, acquire, from the plurality of original images, a pattern image indicating the structures, a plurality of kernel images indicating distribution of the electron beam on the object, and a plurality of position vectors indicating a relative position of the structures in the plurality of original images, and adjust a position of the aperture based on a motion vector indicating movement of the plurality of position vectors according to the working distance.
According to some aspects of the present disclosure, a semiconductor manufacturing device includes an electron beam source emitting an electron beam; a lens unit transferring the electron beam generated by the electron beam source to an object; an aperture limiting an electron beam path of the lens unit; and a controller configured to acquire a plurality of original images while changing a working distance between the lens unit and the object, acquire, using the plurality of original images, a plurality of position vectors indicating a relative position, in each of the plurality of original images, of structures appearing in the plurality of original images, calculate a motion vector indicating movement of the plurality of position vectors according to the working distance, and adjust a position of the aperture according to a compensation vector determined based on the motion vector.
According to some aspects of the present disclosure, a semiconductor manufacturing device includes an electron beam source configured to emit an electron beam; a plurality of condenser lenses disposed between a stage and the electron beam source, wherein the stage is configured to support an object; an objective lens disposed between the plurality of condenser lenses and the stage; an aperture disposed between the plurality of condenser lenses; and a controller configured to acquire a plurality of original images while changing a working distance between the object and the objective lens transferring the electron beam to the object; acquire a single pattern image indicating structures included in the object; acquire a plurality of kernel images indicating distribution of the electron beam on the object; acquire, from the plurality of original images, a plurality of position vectors indicating a relative position of the structures in the plurality of original images; acquire a motion vector indicating a movement direction according to the working distance, based on the plurality of position vectors; determine a compensation vector based on the motion vector; and move a position of the aperture based on the compensation vector.
According to some aspects of the present disclosure, a method of controlling a semiconductor manufacturing device includes acquiring a plurality of original images according to a working distance between an object and a lens unit transferring an electron beam generated by an electron beam source to the object; acquiring a single pattern image indicating structures included in the object, a plurality of kernel images indicating distribution of the electron beam on the object, and a plurality of position vectors indicating a relative position of the structures in the plurality of original images, from the plurality of original images; acquiring a motion vector indicating a movement direction according to the working distance, based on the plurality of position vectors; determining a compensation vector based on the motion vector; and moving a position of an aperture limiting an electron beam path of the lens unit based on the compensation vector.
Exemplary implementations will be more clearly understood from the following detailed description, taken in conjunction with the accompanying drawings.
Hereinafter, exemplary implementations will be described with reference to the accompanying drawings.
Referring to
The semiconductor manufacturing device 10 may irradiate an electron beam EB to an object 90 positioned on the stage 70, and may collect signals emitted from the object 90, to acquire an image of the object 90. For example, the object 90 may be a semiconductor device including structures. In addition, the signals emitted from the object 90 by the electron beam EB irradiated to the object 90 may include signals associated with a secondary electron (SE), a back scattered electron (BSE), an X-ray, visible light, cathodic fluorescence, or the like.
The electron beam source 20 may generate and emit the electron beam EB, in which the electron beam emitted from the electron beam source 20 may be accelerated, condensed by the lens unit 30, and irradiated to the object 90. For example, the electron beam source 20 may include an electron gun, in which the electron gun may heat a filament formed of tungsten or the like to generate an electron, and may accelerate the electron by applying a voltage, to generate the electron beam EB.
The lens unit 30 may include a plurality of condenser lenses 31 and 32, an objective lens 33, and the like. First and second condenser lenses 31 and 32 may condense the electron beams EB emitted from the electron beam source 20 such that the electron beams effectively converge on one point of the object 90. For example, as diameters of the electron beams EB irradiated to the object 90 become smaller, resolution of an image acquired by the controller 60 may be improved. To increase the resolution of the image, the lens unit 30 may include two or more condenser lenses 31 and 32. The diameters of the electron beams EB emitted from the electron beam source 20 may gradually decrease while passing through the first and second condenser lenses 31 and 32.
The objective lens 33 may focus the electron beams EB condensed by the first and second condenser lenses 31 and 32 onto the object 90. For example, the objective lens 33 may determine the diameters of the electron beams EB irradiated onto the object 90. A distance between the objective lens 33 and the object 90 may be defined as a working distance, and the diameters of the electron beams EB irradiated to the object 90 may vary according to the working distance. Therefore, the resolution of the image corresponding to the object 90 may be adjusted by adjusting a position of the objective lens 33.
The objective lens 33 may include a plurality of scan coils 81, an astigmatism adjustor 82, and the like. The plurality of scan coils 81 may deflect the electron beam to one point of the objective lens 33. The astigmatism adjustor 82 may include a plurality of stigmators, and may adjust current flowing through the stigmators, to adjust astigmatism of the electron beam EB passing through the objective lens 33. The objective lens 33 may adjust astigmatism of the lens unit 30 by adjusting the current flowing through the plurality of scan coils 81, or the like, such that the electron beam EB may irradiate the object 90 in a desired shape.
The aperture 40 may be disposed in a path of the electron beam EB, may restrict the path of the electron beam EB to make the electron beam EB incident onto the object 90 uniformly, and may reduce spherical aberration. In the example of
The detector 50 may acquire signals emitted from the object 90 due to the electron beam EB irradiating the object 90, and may generate an image using the signals. The image generated by the detector 50 may be transferred to the controller 60.
When the aperture 40 is misaligned with an axis of the electron beam EB, and the lens unit 30 adjusts the working distance between the objective lens 33 and the object 90 to have a desired focus, a phenomenon in which an imaging region of the object 90 moves according to the working distance may occur. When the imaging region moves while the lens unit 30 adjusts focus, it may be difficult to fix a target imaging region, thereby making it difficult to iteratively capture a specific imaging region during automatic imaging.
For example, when the imaging region according to the working distance is not fixed in a wide-area top-down delayering system (W-TDS) device that iteratively images a predetermined imaging region of the object 90 while sequentially removing layers of the object 90 from an upper layer to a lower layer, the imaging region may have to be corrected every time a layer is imaged. Correcting the imaging region takes time, and as a result, the imaging time may increase.
The aperture 40 may be movable at least on an X-Y plane such that the aperture 40 may be aligned with the axis of the electron beam EB. When an operator manually adjusts the aperture 40, it may take a long time, and a degree of accuracy of alignment of the aperture 40 may depend on a skill level of the operator. In addition, when there is astigmatism in the lens unit 30, shapes of imaged structures may change according to the working distance. Therefore, it may be difficult to manually align the aperture 40 accurately by visually checking the imaged structures.
According to some implementations, an operation of optimizing alignment of the aperture 40 in the semiconductor manufacturing device 10 may be automated. Specifically, the controller 60 may acquire a plurality of original images of the object 90 while adjusting the working distance between the objective lens 33 and the object 90. The controller 60 may acquire a motion vector of an image pattern according to the working distance using the plurality of original images, and may acquire a compensation vector for compensating for the position of the aperture 40 based on the motion vector. The controller 60 may automatically optimize the alignment of the aperture 40 based on the compensation vector.
The subject matter of the present disclosure is not limited to a case in which the semiconductor manufacturing device 10 is a scanning electron microscope device, and the semiconductor manufacturing device 10 may be applied to various devices having a lens unit and an aperture and transferring electron beams to an object, such as a transmission electron microscope (TEM) device.
The electron beam EB output from the electron beam source 20 may be condensed by a first condenser lens 31. The aperture 40 may transfer a portion of the electron beam EB focused by the first condenser lens 31 to a second condenser lens 32. The electron beam EB passing through the aperture 40 may be condensed by the second condenser lens 32, and may be transferred to an objective lens 33. The objective lens 33 may focus the electron beam EB to a point.
The lens unit 30 may adjust a working distance of the objective lens 33 such that a focal point of the objective lens 33 may be located within an imaging region of the object 90. When the working distance and a focal distance of the objective lens 33 are equal to each other, distribution of electron beams within the imaging region may have a circular shape. Also, an image acquired within the imaging region may have patterns in which structures included in the imaging region may be accurately expressed without distortion.
Under an underfocus condition in which the working distance of the objective lens 33 is located closer than the focal distance, and an overfocus condition in which the working distance of the objective lens 33 is located farther than the focal distance, a shape of the electron beam formed in the imaging region may be distorted.
The aperture 40 may limit the path of the electron beam EB to transfer the electron beam EB of a component, perpendicular to a surface of the object 90, to the object 90. Referring to
When the objective lens 33 radiates an electron beam having an axis oblique to the surface of the object, an imaging region of the object may move in accordance with a working distance of the objective lens 33.
In
When the position of the aperture 40 is misaligned with the vertical axis, the distribution of the electron beams may move according to the working distance of the objective lens 33, and an imaging position of the object may move according to the working distance of the objective lens 33. The movement of the imaging position of the object according to the working distance of the objective lens 33 may be represented by movement of relative positions of structures appearing in original images captured at different working distances.
According to some implementations, the semiconductor manufacturing device 10 detects relative positions of structures in a plurality of original images acquired by changing the working distance of the objective lens 33, and may acquire a motion vector of the structures to detect misalignment of the aperture 40. In addition, the semiconductor manufacturing device 10 may adjust a position of the aperture 40, based on the motion vector, to control the position of the aperture 40 to be aligned with the vertical axis.
Referring to
According to some implementations, position adjustment of an aperture may be automated by detecting, in a plurality of original images, relative positions of structures included in an object being irradiated by an electron beam, regardless of directions and shapes of the structures in the object.
In operation S11, a controller of a semiconductor manufacturing device may be configured to change a working distance of an objective lens, and may acquire a plurality of original images according to the working distance. The working distance may be a distance between the objective lens and an object. The plurality of original images may include at least one overfocused image acquired under an overfocus condition in which the working distance is farther than a focal distance, and at least one underfocused image acquired under an underfocus condition in which the working distance is shorter than the focal distance.
In operation S12, the controller may be configured to acquire a pattern image, a plurality of kernel images, and a plurality of position vectors from the plurality of original images. The pattern image may be an image indicating structures included in a target position to which an electron beam is irradiated on the object. The plurality of kernel images may be images illustrating distribution of electron beams irradiated to the object according to the working distance of the objective lens. Also, the plurality of position vectors may represent relative positions of structures appearing in each of the plurality of original images according to the working distance of the objective lens.
In operation S13, the controller may be configured to acquire a motion vector based on the plurality of position vectors. The motion vector may represent a movement tendency of the plurality of position vectors according to the working distance of the objective lens.
In operation S14, the controller may be configured to determine whether a position of an aperture is aligned with a vertical axis. For example, when the position of the aperture and the vertical axis are aligned, there will be little change in the plurality of position vectors according to the working distance of the objective lens, and the motion vector may be close to a zero vector.
When a magnitude of the motion vector is less than a threshold value, the controller may be configured to determine that the position of the aperture is aligned with the vertical axis (“yes” in operation S14), and may end the operation. When a magnitude of the motion vector is equal to or greater than the threshold value, the controller may determine that the position of the aperture is misaligned with the vertical axis (“no” in operation S14), and may perform operation S15.
In operation S15, the controller may be configured to determine a compensation vector based on the motion vector. The compensation vector may indicate a movement direction and a movement distance of the aperture for aligning the position of the aperture with the vertical axis.
In operation S16, the controller may be configured to move the aperture based on the compensation vector. In addition, the controller may be configured to determine whether the position of the aperture is aligned with the axis as a result of the movement of the aperture by iteratively performing operations S11 to S14.
Referring to
The object 103 may be positioned upon the stage 102, and a controller of the semiconductor manufacturing device 100 may be configured to change a working distance of the objective lens 101 to first to third working distances WD1 to WD3, to acquire a plurality of original images 111, 112, and 113. An electron beam collector 104 may be provided on one side of the objective lens 101 close to the object 103. The working distance may be defined as a distance between the one side of the objective lens 101 and the object 103.
Referring to
Next, referring to
Referring to
As described with reference to
For example, referring to the plurality of original images 111, 112, and 113 together, as the working distance of the objective lens increases from the first working distance WD1 to the third working distance WD3, a checkered pattern may move in a left downward direction.
According to some implementations, a position vector indicating a degree of movement of a relative position of a pattern in the plurality of original images may be acquired.
Referring to
First and second original images 201 and 202 may be images captured under underfocused conditions. As compared to the patterns displayed in the third original image 203, patterns of the first and second original images 201 and 202 may appear blurry, and positions thereof may move.
Fourth and fifth original images 204 and 205 may be images captured under overfocused conditions, and, as compared to the patterns illustrated in the third original image 203, patterns thereof may be displayed blurry, and positions thereof may move in a direction, opposite to that of the first and second original images 201 and 202.
The plurality of original images 201 to 205 may have different relative positions and sharpness of patterns, but the plurality of original images 201 to 205 may include patterns indicating the same structures. For example, each of the plurality of original images 201 to 205 may be acquired by applying a plurality of different position vectors 221 to 225 and a plurality of different kernel images 231 to 235 to a pattern image 210.
The pattern image may be an image displaying structures formed in a target region to be imaged in the object, the plurality of position vectors may be vectors indicating a direction and a magnitude of movement of structures appearing in a pattern image in the original images, and the plurality of kernel images may be images indicating distribution of electron beams irradiated to the imaging region of the object.
A shifting operation using each of the plurality of position vectors 221 to 225 may be performed on the pattern image 210, and a convolutional operation using the plurality of kernel images 231 to 235 may be performed on shifted images, respectively, to acquire the plurality of original images 201 to 205.
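The shift-then-convolve forward model described above can be sketched with SciPy. This is an illustrative sketch: the anisotropic kernel images 231 to 235 are approximated here by an isotropic Gaussian blur, so `sigma` is a stand-in for the Gaussian distribution Σi.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, gaussian_filter

def render_original(pattern, position_vector, sigma):
    """Forward model: shift the pattern image by the position
    vector, then blur it with a Gaussian kernel modeling the
    electron-beam distribution on the object."""
    shifted = nd_shift(pattern, position_vector, order=1, mode="nearest")
    return gaussian_filter(shifted, sigma=sigma)
```

Applying this to a single bright pixel moves the pixel by the position vector and spreads it according to the kernel, mirroring how one structure appears displaced and blurred in each original image.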
Conversely, a deconvolutional operation may be applied to each of the plurality of original images 201 to 205 to acquire the pattern image 210, the plurality of position vectors 221 to 225, and the plurality of kernel images 231 to 235.
Since the pattern image 210, the plurality of position vectors 221 to 225, and the plurality of kernel images 231 to 235 may be unknown values, estimation of the pattern image 210, estimation of each of the plurality of position vectors 221 to 225, and estimation of each of the plurality of kernel images 231 to 235 may be required.
According to some implementations, an objective function F for optimizing deconvolution for estimating the pattern image 210, the plurality of position vectors 221 to 225, and the plurality of kernel images 231 to 235 may be expressed as Equation 1, as follows:
In Equation 1, fi may correspond to an ith original image among the plurality of original images 201 to 205, u may correspond to the pattern image 210, vi may correspond to an ith position vector among the plurality of position vectors 221 to 225, and kΣi may correspond to an ith kernel image among the plurality of kernel images 231 to 235. Additionally, k may be a balancing parameter.
The following assumptions may be applied to Equation 1. First, it may be assumed that the position vector vi is constant in all pixels of the original image fi. For example, it is assumed that the structures illustrated in the pattern image u have moved to the same position and direction in the original image fi. Second, it may be assumed that the kernel image kΣi is defined as a point spread function having a Gaussian distribution Σi. Third, it may be assumed that structures displayed on the pattern image u have sharp characteristics.
Equation 1 may be explained as follows. The objective function F may be a function for finding the pattern image 210, the plurality of position vectors 221 to 225, and the plurality of kernel images 231 to 235 that minimize the sum of an average of differences between the plurality of original images 201 to 205 and calculated images, computed respectively using the pattern image 210, the plurality of position vectors 221 to 225, and the plurality of kernel images 231 to 235, and a gradient ∇u of the pattern image.
In this case, the calculated images may refer to the results kΣ
In short, the pattern image 210, the plurality of position vectors 221 to 225, and the plurality of kernel images 231 to 235 may be generated while the pattern image 210 may be sharpened to some extent by using the objective function F. The pattern image 210, the plurality of position vectors 221 to 225, and the plurality of kernel images 231 to 235 may be optimized such that the calculated images may be similar to the plurality of original images 201 to 205.
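The objective described above may be sketched as a data term plus a sharpness regularizer. This is one plausible reading of Equation 1 under the stated assumptions, not a verbatim transcription: the kernel is approximated by an isotropic Gaussian, the gradient term is implemented as a total-variation penalty, and `lam` is a hypothetical balancing parameter.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, gaussian_filter

def objective(u, vs, sigmas, originals, lam=0.1):
    """Sketch of F: average squared difference between the
    rendered images and the originals, plus the total variation
    of the pattern image u (encouraging sharp structures)."""
    data = 0.0
    for v, sigma, f in zip(vs, sigmas, originals):
        calc = gaussian_filter(
            nd_shift(u, v, order=1, mode="nearest"), sigma=sigma)
        data += np.mean((calc - f) ** 2)
    data /= len(originals)
    gy, gx = np.gradient(u)                 # gradient of u
    tv = np.mean(np.sqrt(gx ** 2 + gy ** 2))
    return data + lam * tv
```

The optimization of operations S21 to S25 then amounts to driving this value down by alternately updating u, the Gaussian distributions, and the position vectors.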
From Equation 1, Equation 2 for extracting the pattern image u, Equation 3 for extracting the Gaussian distribution Σi, and Equation 4 for extracting the position vector vi may be derived as follows:
Equation 2 represents an equation for updating the pattern image u by applying a gradient descent to a previous pattern image u. In Equation 2, ⊙ may represent a valid convolution, ⊗ may represent a full padding convolution, and
Equation 3 represents an equation for updating the Gaussian distribution Σi by applying the gradient descent to a previous Gaussian distribution Σi. In Equation 3, K may represent a size of the kernel image:
Equation 4 represents an equation for updating an x-axis component vi1 and a y-axis component vi2 of the position vector vi, respectively. Specifically, Equation 4 represents a mathematical expression for acquiring a vi1 value for which a value acquired by differentiating the objective function F by vi1 is ‘0,’ and a vi2 value for which a value acquired by differentiating the objective function F by vi2 is ‘0,’ based on an updated kernel image kΣi and the pattern image u.
A method of optimizing the pattern image u, the Gaussian distribution Σi, and the position vector vi by applying Equation 2, Equation 3, and Equation 4 will be described with reference to
Referring to
According to some implementations, the initial value of the pattern image 210 may be set to a random noise image, the initial value of each of the plurality of position vectors 221 to 225 may be set to a zero vector, and the initial value of each of the plurality of kernel images 231 to 235 may be set to a non-directional Gaussian distribution function.
In operation S22, the controller may update the pattern image 210 by applying Equation 2. In operation S23, the controller may update the plurality of kernel images 231 to 235 by updating the Gaussian distribution Σi by applying Equation 3. In operation S24, the controller may update the plurality of position vectors 221 to 225 by applying Equation 4.
In operation S25, the controller may determine whether the pattern image 210, the plurality of kernel images 231 to 235, and the plurality of position vectors 221 to 225 are optimized. For example, as a result of performing operations S22 to S24, the controller determines whether difference values between previous values and updated values of the pattern image 210, the plurality of kernel images 231 to 235, and the plurality of position vectors 221 to 225 are smaller than a threshold value, respectively.
When each of the difference values is smaller than the threshold value, it may be determined that the values of the pattern image 210, the plurality of kernel images 231 to 235, and the plurality of position vectors 221 to 225 have converged to optimized values according to a gradient descent. Therefore, the controller may determine that the pattern image 210, the plurality of kernel images 231 to 235, and the plurality of position vectors 221 to 225 are optimized (“Yes” in operation S25), and may end the operation.
When each of the difference values is greater than or equal to the threshold value, the controller may determine that the pattern image 210, the plurality of kernel images 231 to 235, and the plurality of position vectors 221 to 225 have not been optimized (“No” in operation S25), and operations S22 to S24 may be iteratively performed until the pattern image 210, the plurality of kernel images 231 to 235, and the plurality of position vectors 221 to 225 are optimized.
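The alternating update loop of operations S21 to S25 can be sketched as follows. The three update callables are placeholders for Equations 2, 3, and 4; the convergence test compares previous and updated values against a threshold, as described above.

```python
import numpy as np

def optimize(update_u, update_sigmas, update_vs, init_state,
             tol=1e-4, max_iters=100):
    """Alternately update the pattern image, the Gaussian
    distributions, and the position vectors until every variable
    changes by less than tol (operations S22-S25)."""
    u, sigmas, vs = init_state
    for _ in range(max_iters):
        u_new = update_u(u, sigmas, vs)                 # Equation 2
        sigmas_new = update_sigmas(u_new, sigmas, vs)   # Equation 3
        vs_new = update_vs(u_new, sigmas_new, vs)       # Equation 4
        deltas = [np.max(np.abs(u_new - u)),
                  np.max(np.abs(np.asarray(sigmas_new)
                                - np.asarray(sigmas))),
                  np.max(np.abs(np.asarray(vs_new)
                                - np.asarray(vs)))]
        u, sigmas, vs = u_new, sigmas_new, vs_new
        if max(deltas) < tol:                           # S25: converged
            break
    return u, sigmas, vs
```

With contractive toy updaters, the loop terminates once all three variables stop changing.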
Referring to
The pattern image 310 may be updated by iteratively performing optimization based on original images 201 to 205, as illustrated in
According to some implementations, a pattern image, a plurality of kernel images, and a plurality of position vectors may be accurately separated from a plurality of original images. For example, in an objective lens of a semiconductor manufacturing device having astigmatism, when a working distance and a focal distance coincide exactly, distribution of electron beams may have a circular shape, but under an underfocused condition, the distribution of the electron beams may have a shape extending in a first direction, and under an overfocused condition, the distribution of the electron beams may have a shape extending in a second direction, perpendicular to the first direction. Therefore, structures in the plurality of original images may have different shapes, and it may be difficult to accurately determine a movement direction and a size thereof with a naked eye.
According to some implementations, a pattern image extracted from a plurality of original images may have sharp-shaped patterns. In addition, distribution of electron beams may extend perpendicularly to each other in images under an underfocused condition and images under an overfocused condition, among kernel images extracted from the plurality of original images. For example, a pattern image and a plurality of kernel images may be accurately extracted from a plurality of original images. In addition, a plurality of position vectors may be extracted from the plurality of original images by removing influence of a shape of the pattern and shapes of the electron beams due to astigmatism.
Therefore, position vectors of patterns for each working distance may be acquired using a plurality of original images, regardless of a pattern of structures included in a target region of an object or astigmatism of an objective lens of a semiconductor manufacturing device.
A controller of a semiconductor manufacturing device may acquire motion vectors of patterns according to the working distance based on the acquired position vectors, and may acquire a compensation vector for determining a compensation direction of an aperture position based on the acquired motion vectors. In the following, a method of acquiring a motion vector and a compensation vector will be described in more detail.
In a graph of
The plurality of position vectors may have a certain tendency according to the working distance of the objective lens. In an example of
A controller of a semiconductor manufacturing device may acquire a motion vector indicating a motion direction of the plurality of position vectors by using a principal component analysis (PCA) technique. Specifically, the controller may generate one-dimensional data indicating movement of the plurality of position vectors based on the plurality of position vectors distributed in the two-dimensional graph, and may determine a direction of the one-dimensional data as the direction of the motion vector.
The controller may determine a size of the motion vector based on a standard deviation of the plurality of position vectors. For example, when positions of the end points of the position vectors vary greatly according to the working distance of the objective lens, the standard deviation of the plurality of position vectors may increase. Therefore, the controller may determine that the size of the motion vector is proportional to the standard deviation of the plurality of position vectors.
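The PCA-based estimate described above can be sketched with NumPy: the direction is the first principal component of the position-vector end points (the dominant eigenvector of their covariance), and the magnitude is the standard deviation along that component. Note that PCA leaves the sign of the direction ambiguous; in practice the sign could be resolved by ordering the position vectors by working distance, which this sketch omits.

```python
import numpy as np

def motion_vector(position_vectors):
    """Estimate the motion vector from 2-D position vectors:
    direction = first principal component of the end points,
    magnitude = standard deviation along that component."""
    pts = np.asarray(position_vectors, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered, rowvar=False)         # 2x2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    direction = eigvecs[:, np.argmax(eigvals)]   # principal axis
    magnitude = np.sqrt(np.max(eigvals))         # std dev along it
    return magnitude * direction
```

For position vectors spread along a line, the result points along that line with a magnitude proportional to their spread, matching the tendency described for the graph.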
Hereinafter, a method of acquiring a compensation vector for compensating for a position of an aperture based on the motion vector will be described.
In
Among the positions of the aperture, the aperture may be substantially aligned with an axis at a position in which a motion vector is close to a zero vector.
Referring to directions of motion vectors illustrated in the plurality of graphs together, the motion vectors may have a tendency to converge to the alignment position 511 along a spiral path. For example, in the aperture at a first position 501 having coordinates of (6, −6), when the position of the aperture moves by a magnitude and a direction of a motion vector at the first position 501, the aperture may be placed at a second position 502. Also, in the aperture at the second position 502, when the position of the aperture moves by a magnitude and a direction of a motion vector at the second position 502, the aperture may be placed at a third position 503. When an operation of moving the aperture by a magnitude and a direction of a motion vector at the current position is iteratively performed until the magnitude of the motion vector is smaller than a threshold magnitude, the position of the aperture may converge to the alignment position 511 through fourth to tenth positions 504 to 510.
Even when the aperture is initially located at a position other than the first position 501, if the operation of moving the aperture by the magnitude and direction of the motion vector at the corresponding position is iteratively performed, the aperture may reach the alignment position 511.
According to some implementations, the controller may capture a plurality of original images according to the working distance of the objective lens at the current position of the aperture to acquire a motion vector, and may determine a movement direction and a movement amount of the aperture based on the motion vector. The aperture may be controlled to reach the alignment position by acquiring a compensation vector at the current position of the aperture and iteratively performing an operation of moving the aperture based on the compensation vector, until the magnitude of the motion vector is smaller than a threshold magnitude.
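The iterative alignment described above may be sketched as a fixed-point loop, purely as an illustration. Here, `measure_motion_vector` is a hypothetical callback standing in for the capture-and-analyze step performed at each aperture position; it is not a disclosed interface of the device.

```python
import numpy as np
from typing import Callable

def align_aperture(position: np.ndarray,
                   measure_motion_vector: Callable[[np.ndarray], np.ndarray],
                   threshold: float = 0.1,
                   max_iterations: int = 50) -> np.ndarray:
    """Iteratively move the aperture by the motion vector measured at
    its current position until the motion vector's magnitude falls
    below the threshold, i.e., until the aperture is near alignment."""
    position = np.asarray(position, dtype=float)
    for _ in range(max_iterations):
        motion = measure_motion_vector(position)
        if np.linalg.norm(motion) < threshold:
            break  # motion vector close to a zero vector: substantially aligned
        position = position + motion
    return position
```

With a toy model in which the measured motion vector always points halfway toward the alignment position, a start at (6, −6) converges toward the origin within a handful of iterations.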
In the example of
In
According to some implementations, a compensation vector according to a position of the aperture may be acquired by rotating a motion vector according to the position of the aperture by a predetermined angle. Referring to
When the aperture is at a first position 601 and the position of the aperture moves by the magnitude and direction of the compensation vector at the first position 601, the aperture may be placed at a second position 602. When the operation of moving the aperture by the magnitude and direction of the compensation vector at the current position is iteratively performed until the magnitude of the motion vector is smaller than a threshold magnitude, the position of the aperture may converge to an alignment position 604 through a third position 603. Therefore, in
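Obtaining a compensation vector by rotating the motion vector by a predetermined angle may be sketched with a standard two-dimensional rotation matrix. This is an illustrative sketch only; the angle used in practice would be determined for the particular device, and the value passed below is an arbitrary placeholder.

```python
import numpy as np

def compensation_vector(motion: np.ndarray, angle_rad: float) -> np.ndarray:
    """Rotate a 2-D motion vector by a predetermined angle to obtain
    the compensation vector used to move the aperture."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])  # counterclockwise 2-D rotation matrix
    return rotation @ np.asarray(motion, dtype=float)
```

The rotation preserves the magnitude of the motion vector, so only the movement direction of the aperture is changed; rotating (1, 0) by 90 degrees yields (0, 1).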
The motion vectors in
According to some implementations, a controller may control the position of the aperture to reach the alignment position even when a semiconductor manufacturing device has astigmatism.
An original image 201 of
When a semiconductor manufacturing device has astigmatism, the patterns in the plurality of original images may appear to rotate and move according to the working distance. For example, comparing the original image 201 and the original image 205, the same pattern may appear to move in a right and downward direction while rotating by 90 degrees. Position vectors acquired from the original images may accordingly have a curved distribution.
According to some implementations, a position of an aperture may be corrected to an alignment position, regardless of a shape of distribution of the position vectors due to astigmatism.
In
Referring to
According to some implementations, a controller may acquire a motion vector from the position vectors using a PCA technique, and may thus acquire a linear motion vector regardless of the distribution of the position vectors. In addition, the controller may acquire a linear compensation vector by rotating the motion vector by a predetermined rotation angle.
Referring to
For example, when a compensation vector is acquired based on the position vectors for the aperture at a first position 701 and the position of the aperture moves by the magnitude and direction of the compensation vector, the aperture may be placed at a second position 702. When the position of the aperture at the second position 702 then moves by the magnitude and direction of the compensation vector acquired at the second position 702, the aperture may converge to a third position 703 close to the alignment position 704.
According to some implementations, a controller of a semiconductor manufacturing device may align an aperture on a vertical axis, based on a motion vector acquired using a plurality of original images according to a working distance of an objective lens, regardless of whether the semiconductor manufacturing device has astigmatism. By aligning the aperture, the controller may keep an imaging region fixed while adjusting the working distance of the objective lens. Therefore, the semiconductor manufacturing device may quickly acquire an image of a target region to be imaged.
The controller may acquire a plurality of kernel images along with a plurality of position vectors based on the plurality of original images. The controller may calculate a motion vector based on the plurality of position vectors to align the aperture, and may adjust a stigmator based on the plurality of kernel images to compensate for astigmatism.
In addition, according to some implementations, since an operation of aligning an aperture may be automated, the aperture may be aligned more quickly and accurately than when a position of the aperture is adjusted manually, and variance in alignment of the aperture between semiconductor manufacturing devices may also be reduced.
According to some implementations, a controller may include at least one of a central processing unit (CPU) and a graphics processing unit (GPU), and the GPU may execute an operation of acquiring a pattern image, a plurality of kernel images, and a plurality of position vectors from a plurality of original images. Since the GPU is specialized for parallel operation compared to the CPU, operations on the images may be parallelized and processed quickly. Therefore, the operation of aligning the aperture may be performed more quickly.
According to some implementations, a device using an electron beam may change a working distance between a lens and an object, may detect misalignment between the electron beam and an aperture using a plurality of original images acquired, and may adjust a position of the aperture for compensating for the misalignment. Since the position control of the aperture may be automated, efficiency of a semiconductor process using the electron beam may be improved.
Problems to be solved by the subject matter of the present disclosure are not limited to the problems mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the description below.
While example implementations have been illustrated and described above, it will be apparent to those skilled in the art that modifications and variations could be made without departing from the scope of the present disclosure as defined by the appended claims.
While this disclosure contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed. Certain features that are described in this disclosure in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a combination can in some cases be excised from the combination, and the combination may be directed to a subcombination or variation of a subcombination.
Number | Date | Country | Kind |
---|---|---|---|
10-2023-0073966 | Jun 2023 | KR | national |