The present patent application claims the priority of German patent application DE 10 2023 205 136.2, filed on Jun. 1, 2023, the content of which is incorporated herein by reference.
The invention relates to a method for simulating illumination and imaging properties of an optical production system when an object is illuminated and imaged, wherein the simulation is implemented by use of an optical measurement system of a metrology system. Further, the invention relates to a metrology system for carrying out such a method.
Such a method and a metrology system to this end are known from DE 10 2019 208 552 A1, DE 10 2019 206 651 B4 and DE 10 2019 215 800 A1. A metrology system for measuring an aerial image of a lithography mask in three dimensions is known from WO 2016/012426 A1. DE 10 2013 219 524 A1 describes a device and a method for determining an imaging quality of an optical system, and an optical system; the same document also describes a phase retrieval method for determining a wavefront on the basis of the imaging of a pinhole. The specialist article "A new system for a wafer level CD metrology on photomasks," Proceedings of SPIE—The International Society for Optical Engineering, 2009, 7272, by Martin et al. has disclosed a metrology system for determining a wafer level critical dimension (CD).
It is an aspect of the present invention to improve a method for simulating illumination and imaging properties of an optical production system when illuminating and imaging an object by use of an optical measurement system.
This aspect is achieved according to the invention by a simulation method having the features specified in Claim 1.
According to the invention, it was recognized that recording measurement aerial images by use of the plurality of pupil stops, in particular recording measurement aerial images at a plurality of measurement positions of a pupil stop that was selected in advance for the best possible simulation of the illumination setting of the optical production system, makes it possible to improve the accuracy of the simulation method overall and, in particular, to reduce illumination angle-dependent artefacts in the reconstructed complex mask transfer function, that is to say in the transfer function of the imaged object. 3-D mask effects can then be taken into account correctly. This is relevant when examining lithography masks, especially masks used for EUV lithography.
The accuracy of the optical production system 3-D aerial image determined by simulation is increased by giving consideration to a dependence of an effective stop edge profile on the respective displacement position of the pupil stop used during the measurement, with this consideration going beyond the pure displacement-related shift of the stop edge contour or stop edge profile. If the pupil stop displacement during the recording of the measurement aerial images is such that the pose of an outer pupil edge, which in particular specifies a numerical illumination aperture, remains constant, then this can be taken into account within the simulation method and can simplify modelling of the respective pupil stop edge contour profile at the current measurement position.
The provided pupil stop is an illumination optical unit stop, with the pupil stop being arrangeable in an illumination pupil region regularly present in an illumination optical unit pupil plane.
In addition to that, the optical measurement system may comprise a further stop in a measurement system imaging optical unit for imaging the object. This image-side, further stop can be an aperture stop. A shadowing effect on such an aperture stop can also be taken into account in the simulation method.
In this case, the simulation method may comprise, in particular, a provision of information regarding the respective image-side, further stop utilized, in particular a provision of contour information for this image-side, further stop and also, for instance, information about a thickness of a stop body of the stop.
Taking account of shadowing effects in accordance with Claim 2 is well adapted to pupil stop design practice. It was recognized that such shadowing effects cannot be neglected if the greatest possible simulation accuracy is needed. A thickness of the main body of the pupil stop may be larger than 0.2 mm and may be in the range between 0.2 mm and 5 mm.
Taking account of shadowing effects on account of a finite chief ray angle of the object illumination in the optical production system in accordance with Claim 3 is well adapted to the illumination and imaging requirements of the optical production system. It was also recognized in this respect that such shadowing effects cannot be neglected if the greatest possible accuracy is required during the simulation.
A field-dependent determination in accordance with Claim 4 or 5 leads to an additional improvement in the accuracy of the simulation method since the optical production system to be simulated regularly has field-dependent illumination and/or imaging properties, i.e. illumination and/or imaging properties that depend on the location in the object field or image field.
Within this field-dependent determination, a chief ray angle can be defined by way of two angles on the object. In this case, one of the two angles is an angle of incidence which can be measured with respect to the normal of an object field in particular. The second of the two angles can be an azimuth angle which is measured relative to an initial azimuth angle of 0°, the latter being present when a projection of the incident chief ray on the object field is perpendicular to a field coordinate, wherein this field coordinate can be the field coordinate of the longer field extent in the case of a rectangular object field. In this case, the azimuth angle can be a deviation of a chief ray trajectory of the object illumination from a trajectory in a meridional plane of the optical measurement system.
If both angles are taken into account, then it is possible to consider chief ray angles whose angle of incidence is greater than 4° and whose absolute azimuth angle is greater than 10°.
A correction in accordance with Claim 6, which may include the respective reconstructed spectra of an illumination setting, takes account of an influence on the illumination setting by the optical production system imaging optical unit on the one hand and a corresponding influence of the metrology system measurement optical unit on the other hand. The same reconstructed spectra may be included in both correction terms. This can be used to eliminate errors that occur during the reconstruction of the spectra. The use of corresponding correction terms in the mask transfer function reconstruction is known from DE 10 2019 208 552 A1 and DE 10 2019 206 651 B4.
The use of a pupil stop with a stop shape optimized in accordance with Claim 7 already yields a good simulation of the optical production system illumination properties at the starting point. In that case, the simulation method can be used to obtain the highest possible simulation accuracy.
A separate optimization for different field regions in accordance with Claim 8 gives rise to a further option for improving the accuracy of the simulation method. The use of a compensation rule, in particular a weighted compensation rule, allows the simulation accuracy to be increased when simulating in the region of field heights for which no measurement pupil stop was provided.
The advantages of a metrology system according to Claims 9 to 12 correspond to those that have already been explained above with reference to the method claims. The pupil stop of the metrology system may have a thickness which is at least 0.2 mm. The thickness of the pupil stop may be, depending on the respective application, in the range between 0.2 mm and 5 mm. A stop shape and/or a stop thickness may be optimized according to the above mentioned method.
A displacement drive for displacing the pupil stop according to Claim 10 has proven its worth for the reproducible specification of pupil stop measurement positions. This applies correspondingly to an object holder, which is displaceable perpendicular to the object plane, and to a displacement drive for displacing an imaging pupil stop according to Claim 11.
A selection apparatus having a stop storage unit according to Claim 12 advantageously enables the pupil stop selection step of the simulation method. In particular, the selection can be made with the aid of a robotic actuation system which takes the respective selected pupil stop from the stop storage unit and moves it to its use location in the pupil plane. The selection apparatus moreover enables a last-used pupil stop to be replaced with a newly selected pupil stop. In particular, the last-used pupil stop can be transferred from the use location back to the stop storage unit by use of the robotic actuation system in that case.
The selection apparatus may comprise a stop storage unit with a plurality of pupil stops, each with different stop edge shapes and/or stop edge orientations for specifying correspondingly different measurement illumination settings.
An aperture of the stop, i.e. of the illumination pupil stop and/or the imaging pupil stop, may also be variably specifiable, for example in the style of an iris diaphragm.
The metrology system may comprise a light source for the illumination light. A light source of this type may be configured as an EUV light source.
An EUV wavelength of the light source may range between 5 nm and 30 nm. A light source in the DUV wavelength range, for example of the order of 193 nm, is also possible.
Within the scope of the simulation method, it is possible to select precisely one pupil stop from the plurality of pupil stops provided, which may differ in terms of their stop edge shape and/or in terms of their stop edge orientation. Alternatively, it is possible to select and use a plurality of different pupil stops for the purpose of specifying different measurement positions. The pupil stops provided may specify at least one of the following illumination settings in particular: Quadrupole, C-quad, dipole, annular, conventional. A person skilled in the art finds examples for such settings in, inter alia, WO 2012/028 303 A1. First of all, there may be an initial determination of a best focal plane (defocus value zm=0) within the preparation of the imaging method. z-increments when determining the 3-D aerial image in the last step of the simulation method, i.e. when determining the aerial image from the reconstructed mask transfer function and the optical production system illumination setting, may differ from defocus values that may at first be specified in the simulation method. Pixel sizes of the recorded measurement aerial images may be re-sampled for the purpose of an adaptation to the desired pixel resolution.
The target pupil stop, which can be specified, and the target stop edge shape thereof may relate to a plurality of, or else a multiplicity of, individual illumination or pupil spots, i.e. a plurality of stop apertures for example arranged in a grid-like manner. Such illumination or pupil spots may yield an illumination setting used within the scope of the production illumination, the said illumination setting for example being able to be set by way of an illumination optical unit having a field facet mirror and a pupil facet mirror.
Exemplary embodiments of the invention are explained in greater detail below with reference to the drawing, in which:
In order to facilitate the representation of positional relationships, a Cartesian xyz-coordinate system will be used hereinafter. In
In a view that corresponds to a meridional section,
An example of the test structure 5 is depicted in a plan view in
The metrology system 2 is used to analyze a three-dimensional (3-D) aerial image (aerial image metrology system). One application is found in the simulation of an aerial image of a lithography mask, in the way the aerial image would also appear in the optical production system of a projection exposure apparatus used for production, for example in a scanner. To this end, an imaging quality of the metrology system 2 itself, in particular, can be measured and optionally adjusted. Consequently, the analysis of the aerial image can serve to determine the imaging quality of a projection optical unit of the metrology system 2, or else to determine the imaging quality of, in particular, projection optical units within a projection exposure apparatus. Metrology systems are known from DE 10 2019 208 552 A1, from WO 2016/012 426 A1, from US 2013/0063716 A1 (cf. FIG. 3 therein), from DE 102 20 815 A1 (cf. FIG. 9 therein), from DE 102 20 816 A1 (cf. FIG. 2 therein) and from US 2013/0083321 A1.
The illumination light 1 is reflected and diffracted at the test structure 5. A plane of incidence of the illumination light 1 is parallel to the yz-plane in the case of the central, initial illumination.
The EUV illumination light 1 is produced by an EUV light source 8. The light source 8 can be a laser plasma source (LPP; laser produced plasma) or a discharge source (DPP; discharge produced plasma). In principle, a synchrotron-based light source can also be used, e.g. a free electron laser (FEL). A used wavelength of the EUV light source can range between 5 nm and 30 nm. In principle, in one variant of the metrology system 2, a light source for another used light wavelength can also be used instead of the light source 8, for example a light source for a used wavelength of 193 nm.
An illumination optical unit 9 of the metrology system 2 is arranged between the light source 8 and the test structure 5. The illumination optical unit 9 serves for the illumination of the test structure 5 to be examined, with a defined illumination intensity distribution over the object field 3 and at the same time with a defined illumination angle distribution with which the field points of the object field 3 are illuminated. Such an illumination angle distribution is also referred to as illumination setting.
The respective illumination angle distribution of the illumination light 1 is specified by way of a pupil stop 10, which is arranged in an illumination optical unit pupil plane 11. The pupil stop 10 is also referred to as a sigma stop.
Further variants of pupil stops 10 with a central passage pole I of increasingly larger radius are shown in
Corresponding annular illumination settings can be realized using the embodiments of the pupil stops 10 according to
Measured from the x-coordinate of the pupil stop 10 of
The pupil stop 10 of the illumination optical unit 9 is embodied as a stop which is displaceable in driven fashion and which is arranged in front of the object plane 4 in an illumination light beam path 15 of the illumination light 1. A drive unit used for the driven displacement of the pupil stop 10 is depicted at 16 in
With the aid of the displacement drive 16, it is possible to displace the selected pupil stop 10 along the pupil coordinates kx and ky in the pupil plane 11.
The displacement drive 16 may also include a stop interchange unit, by use of which a specific pupil stop 10 is replaced with another, specific pupil stop 10. To this end, the stop interchange unit may take the respective selected pupil stop from a stop storage unit and return the replaced stop to this stop storage unit.
The test structure 5 is held by an object holder 17 of the metrology system 2. The object holder 17 cooperates with an object displacement drive 18 for displacing the test structure 5, in particular along the z-coordinate.
Following reflection at the test structure 5, the electromagnetic field of the illumination light 1 has a distribution 19 which is depicted in
The illumination light 1 reflected by the test structure 5 enters an imaging optical unit or projection optical unit 20 of the metrology system 2.
A diffraction spectrum 21 arises in a pupil plane of the projection optical unit 20 on account of the periodicity of the test structure 5 (cf.
The 0th order of diffraction of the test structure 5 is present centrally in the diffraction spectrum 21. Moreover,
The orders of diffraction of the diffraction spectrum 21 depicted in
The imaging pupil stop 23 is operatively connected to a displacement drive 25, the function of which corresponds to that of the displacement drive 16 for the sigma stop 10.
The pupils 24 (cf.
The intensity distribution in the exit pupil 26 contains contributions firstly from the images of the −1st, 0th and +1st orders of diffraction and secondly from an imaging contribution of the optical system, specifically of the projection optical unit 20. This imaging contribution which is elucidated in
The projection optical unit 20 images the test structure 5 towards a spatially resolving detection device 27 of the metrology system 2. The detection device 27 is in the form of a camera, in particular a charge-coupled device (CCD) camera or complementary metal-oxide-semiconductor (CMOS) camera.
The projection optical unit 20 is embodied as a magnifying optical unit. A magnification factor of the projection optical unit 20 may be greater than 10, may be greater than 50, may be greater than 100 and may even be greater still. As a rule, this magnification factor is less than 1000.
In a manner corresponding to
The following procedure is carried out to simulate the illumination and imaging properties of the optical production system when illuminating and imaging the object, using the example of the test structure 5, by use of the optical measurement system 1 of the metrology system 2:
Firstly, at least one pupil stop 10 and, for instance, a plurality of pupil stops 10 each with different stop edge shapes are provided for the purpose of specifying correspondingly different measurement illumination settings. This is implemented by providing pupil stops 10, for example in the style of the pupil stops 10 of
Then, a target pupil stop with a target stop edge shape is specified proceeding from an illumination setting of the optical production system to be simulated. The target pupil stop can be an arrangement of a plurality or multiplicity of individual pupil spots or stop spots. In this case, the intensity of individual illumination spots or pupil spots generally differs between the individual spots.
The target pupil stop 36 can be specified by way of a definition of appropriate stop aperture contours, especially continuous stop aperture contours. Such stop aperture contours can be described by polygonal chains, for example.
These continuous openings are then approximated by a finite number of pupil spots 37 within the openings. These spots are depicted in
For the specific example in
Proceeding from this target pupil stop 36, at least one pupil stop 10 is then selected from the provided plurality of pupil stops 10 by use of an algorithm which qualifies deviations between the respective stop edge shape of the provided pupil stops 10 and the target stop edge shape of the target pupil stop 36. To this end, the pupil stop 10 currently under examination during the selection (also referred to as pupil stop to be qualified below) can in turn be decomposed within its stop edge into a plurality of pupil spots 38 arranged in grid-like fashion and represented by circles in
The scope of qualification comprises determining the similarity between the target illumination pupil (also denoted “T” below) and the possible measurement stops 10 (also denoted “M” below). For instance, this can be implemented by calculating an overlap function Q.
Here, A is a function for (approximately) calculating the area. The first term corresponds to the normalized area of the overlap between measurement stop and target illumination pupil. The second and third terms correspond to the normalized difference area between the measurement stop and the target illumination pupil, and vice versa. The difference area is intended to refer to the area contained only in the first pupil and not in the second.
The operators “∩”, “∪” and “\” correspond to the intersection (∩), union (∪) and relative complement (\) operators from set theory. In this case, the intersection M1∩M2 of the sets/areas M1 and M2 is intended to mean the set/area which is contained both in M1 and in M2, i.e. corresponds to the overlap area of M1 and M2. The union M1∪M2 of the sets/areas M1 and M2 describes the set/area which is contained in M1 or M2, i.e. corresponds to the overall area covered by M1 or M2. The relative complement M1\M2 of the sets/areas M1 and M2 describes the set/area which is covered by M1 but not contained in M2.
For instance, the area function A can be implemented as counting illumination spots in the pupil. To this end, target illumination pupil and measurement pupil are scanned using the same grid. Typically, the grid corresponds to the pupil facet grid in the scanner on which the target illumination pupil is sampled (cf.
Thus, the selection of the pupil stop 10 encompasses a comparison between the poses of pupil spots 37 of the target stop edge shape and the poses of pupil spots 38 of the provided pupil stops 10.
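The selection can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical implementation of the spot-counting variant of the overlap function Q described above; the boolean masks, the normalization by the number of target spots and all names are assumptions made purely for illustration.

```python
import numpy as np

def overlap_score(target: np.ndarray, stop: np.ndarray) -> float:
    """Similarity Q between a target illumination pupil T and a candidate
    measurement stop M, both given as boolean masks sampled on the same
    pupil-spot grid (True = spot transmitted). The normalization used here
    (number of target spots) is an assumption; the description above only
    states that the overlap and the two difference areas enter as
    normalized terms."""
    t = target.astype(bool)
    m = stop.astype(bool)
    area = lambda a: float(np.count_nonzero(a))   # A(.) as spot counting
    norm = max(area(t), 1.0)
    q = area(m & t) / norm       # overlap area M ∩ T
    q -= area(m & ~t) / norm     # difference area M \ T
    q -= area(t & ~m) / norm     # difference area T \ M
    return q

# hypothetical usage: select the candidate stop with the largest Q
# best = max(candidate_masks, key=lambda m: overlap_score(target_mask, m))
```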
Moreover, a plurality of defocus values zm (cf.
Moreover, a plurality of measurement positions (kx, ky) of the selected pupil stop 10 are specified within the scope of the simulation method.
Now, measurement aerial images Imeas({right arrow over (r)}, zn, {right arrow over (q)}m) are recorded in the image plane 29 at the spatial coordinates {right arrow over (r)}=x, y, in the style of intensity distributions 31 according to
The sequence in
In comparison with the imaging pupil stop 23,
In comparison with the centered position according to
An alternative sequence of measurement positions (kx, ky) of the pupil stop 10 is depicted in
Relative to the imaging pupil stop 23,
Relative to the imaging pupil stop 23,
Relative to the imaging pupil stop 23,
The completed sequence of measurement positions (kx, ky) is shown in
The selection of the respective measurement position sequence, or optionally a subset therefrom, is implemented on the basis of the arrangement of individual structures of the test structure 5 and/or on the basis of the illumination setting of the optical production system to be simulated. For instance, the measurement position sequence can be selected in a manner analogous to the stop selection algorithm (see above), with all stop positions of a sequence being taken into account and the sequence being selected for which the overlap of the measurement sequence with the target illumination pupil is maximal.
The poses of the pupil stop 10 which differ from the center position in terms of the relative pose with respect to the imaging pupil stop 23 are also referred to as offset measurement positions. Within the scope of a measurement position sequence, two to ten such offset measurement positions can be homed in on, this typically being two to five offset measurement positions, for example three or four offset measurement positions. The offset measurement positions can be arranged uniformly distributed in the circumferential direction. To reduce the measurement time, it is also possible to use only a subset, e.g. every second measurement position, from the measurement schemes (
The specified defocus values zm are all measured with the aid of the respective measurement position sequence. In an alternative, it is possible that the entire respective measurement position sequence is used only for one defocus value or for individual defocus values zm, with the measurement aerial images being recorded for fewer measurement positions of the pupil stop relative to the imaging pupil stop 23 in the case of other defocus values zm. In extreme cases, it is possible for instance to home in on the entire measurement position sequence and record a respective measurement aerial image there for only one defocus value zm, whereas the measurement aerial image Imeas is only recorded at one respective measurement position, in particular for the centered pupil stop 10, in the case of the other specified defocus values zm.
For instance, the following defocus value/measurement position combinations can be recorded: A central defocus value zm and a plurality of measurement positions (kx, ky) of the pupil stop 10, i.e., in particular, a centered measurement position and a plurality of offset measurement positions, and defocus values zmin, zmax maximally offset from the central defocus value on both sides, with exactly one central measurement position (kx, ky) of the pupil stop 10 being adopted at these positions zmin, zmax.
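Purely for illustration, the defocus value/measurement position combinations just described can be collected in a simple measurement plan. The following Python sketch uses hypothetical numerical values and names that are not taken from the description above.

```python
# Hypothetical measurement plan: the central defocus value is combined with the
# centered measurement position and all offset positions, whereas the maximally
# offset defocus values z_min and z_max are recorded only at the centered position.
z_center, z_min, z_max = 0.0, -2.0, +2.0            # defocus values z_m (arbitrary units)
centered = (0.0, 0.0)                               # centered (kx, ky) position of the pupil stop
offsets = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]   # offset measurement positions

plan = [(z_center, pos) for pos in [centered] + offsets]
plan += [(z, centered) for z in (z_min, z_max)]
for z, (kx, ky) in plan:
    print(f"record I_meas at z = {z:+.1f}, stop position (kx, ky) = ({kx:+.2f}, {ky:+.2f})")
```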
Then, a complex mask transfer function is reconstructed from the totality of measurement aerial images recorded with the selected pupil stop 10. A similar reconstruction step is also described in DE 10 2019 215 800 A1.
The reconstruction is implemented within the scope of a modelled description, in which a function σ({right arrow over (p)}, {right arrow over (q)}) reproducing the illumination directions {right arrow over (p)} passed through the pupil stop 10 is used to describe the projection optical unit 20 of the metrology system 2 with the illumination setting specified by the pupil stop 10. In contrast to the reconstruction according to DE 10 2019 215 800 A1, for example, a change in the illumination light distribution σ on account of a change in the measurement position of the pupil stop 10 is not limited to a description by way of a pure displacement vector; instead, the description of the illumination light distribution σ includes a chief ray-dependent change in effective edge contours of the pupil stop 10, dependent on the displacement position thereof. Thus, the illumination light distribution depends, firstly, on the pupil coordinate {right arrow over (p)}, which describes a basic shape of pupil stop edge contours, and, secondly, on a chief ray illumination direction {right arrow over (q)}. The illumination light distribution field dependence considered thus is illustrated in more detail with the aid of the figures described below:
By way of x1 and x2,
The following variables are depicted schematically in
In particular, it should be observed that the shapes of the intensity spots 34 of the respective EUV illumination pupil BPx1 also vary depending on the direction of the illumination chief ray CRAi. This shape dependence of the intensity spots 34 on the chief ray angle CRAi can be traced back to optical system aberrations.
The pupil coordinates ρx, ρy used in the description of
φ denotes an azimuth angle between a perpendicular to the object field coordinate x, once again through the object field point of incidence of the chief ray CRA, and a projection line of the chief ray CRA in the xy-plane.
The orientation of the chief ray CRA with respect to the object field 3 can be described exactly by way of the two angles θ and φ.
In general, stop shadowing effects predominantly come to bear at angles θ≥4° and φ≥10°.
The angle φ specifies a deviation of the chief ray from a meridional trajectory (parallel to the yz-plane).
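For illustration, a chief ray direction can be parameterized by the two angles θ and φ described above. The following Python sketch assumes the z-axis as the object-field normal and the yz-plane as the meridional plane; the sign conventions are chosen freely and are not taken from the description.

```python
import numpy as np

def chief_ray_direction(theta_deg: float, phi_deg: float) -> np.ndarray:
    """Unit direction of a chief ray CRA from the angle of incidence theta
    (measured against the object-field normal, here the z-axis) and the
    azimuth angle phi (deviation of the xy-projection of the ray from the
    meridional yz-plane). Signs are an assumption for illustration."""
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    return np.array([np.sin(theta) * np.sin(phi),   # deviation from the meridional plane
                     np.sin(theta) * np.cos(phi),   # in-plane (meridional) component
                     -np.cos(theta)])               # component towards the object plane

# e.g. chief_ray_direction(6.0, 15.0) yields a ray with theta > 4 deg and phi > 10 deg,
# a regime in which stop shadowing effects typically have to be modelled.
```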
The metrology system 2 measures a portion of the reticle, i.e. of the test structure 5 in the form of an aerial image. A chief ray CRA of the production system is simulated with the aid of the optical measurement system of the metrology system 2, and so a variation of a contour of the pupil stop 10 in the respective measurement position arises as a function of a simulated chief ray angle CRAi; this is a consequence of the inclination of the respective production system chief ray angle CRAi, i.e. of the respective chief ray illumination direction.
In addition,
The pupil stop 10 according to
Further,
The finite thickness of the stop main body 41, inter alia, causes a slight angular space offset of the contours of the corresponding illumination pupil when the chief ray CRAi is varied, as depicted schematically in
In general, the fact that the intensities, contours and polarization properties in the illumination pupil vary when the illumination directions of the chief rays CRAi are varied also applies to the illumination pupil of the metrology system 2. In principle, the respective imaging optical units of the metrology system 2 and optical production system to be simulated also have a field variation. The upshot is that, in the optical production system, an exit pupil of the imaging optical unit, which images the object field or illumination field in the image field, regularly varies as a function of the field coordinate. This exit pupil variation is independent of the respective optical production system illumination pupil. In a manner similar to the situation presented above, contour, polarization and intensity variations arise in the metrology system 2 for different illumination directions of the chief rays CRAi.
An effective edge of an aperture stop 42 for a first chief ray angle CRA1 is depicted in
Accordingly,
A variation in the field point leads to a change in the aperture edge, intensity distribution (apodization), and phase and polarization effect; this can be described by way of a Jones pupil J_production_system_1/2 or J_metrology_system_1/2 for two field & chief ray variants CRA1 and CRA2 in the optical systems.
A person skilled in the art finds details in this respect and, in particular, in respect of the Jones formalism in M. Totzeck, P. Gräupner, T. Heil, A. Göhnermeier, O. Dittmann, D. Krähmer, V. Kamenov, J. Ruoff, and D. Flagello, “How to describe polarization influence on imaging,” Proc. SPIE 5754, 23-37 (2005), and in the textbook “Field Guide to Polarization” by E. Collett, SPIE Press Book, 2005.
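As a purely illustrative sketch of the Jones-pupil description referred to above, a Jones pupil can be represented numerically as a 2×2 complex matrix per pupil sample; the matrices and values below are placeholders and are not data of the optical production system or of the metrology system 2.

```python
import numpy as np

# Minimal sketch: at every pupil sample a 2x2 complex Jones matrix maps the
# incoming transverse field (Ex, Ey) to the outgoing field, so that field- and
# chief-ray-dependent phase, apodization and polarization effects can be
# represented per field/CRA variant.
n = 64
jones_pupil = np.tile(np.eye(2, dtype=complex), (n, n, 1, 1))   # J(kx, ky), here the identity
jones_pupil[..., 0, 0] *= np.exp(1j * 0.05)                     # illustrative small x-phase

e_in = np.array([1.0, 0.0], dtype=complex)                      # x-polarized input state
e_out = np.einsum('xyij,j->xyi', jones_pupil, e_in)             # field behind the pupil
intensity = np.sum(np.abs(e_out) ** 2, axis=-1)                 # resulting apodization map
```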
Thus, as a rule, the following dependencies have to be considered when simulating aerial image properties of the optical production system using the metrology system 2, especially in the sub-nanometer range, i.e. on a picometer scale:
These effects can be considered in the simulation method described herein, to be precise both during the reconstruction, as described below, and during the forward propagation, as described below.
In a manner corresponding to the explanations given above in relation to
The finite thickness of the stop main body 41 leads to differences in the effective stop contours of the respective pupil 10 in the measurement positions according to
In
For typical measurement sequences by the metrology system 2, a plurality of focus stacks (variation of zm, cf.
The (known) effective aperture contours 39 can be taken into account when reconstructing the complex mask spectrum. To this end, the effective aperture contours 39 can for instance be ascertained by optical simulations (e.g. by way of ray tracing) of the illumination and projection optical unit. Further, it is also possible to measure these directly by way of a pupil image. For instance, Bertrand optics can be used to this end.
Each illumination direction produces a complex-valued field distribution m({right arrow over (r)}, {right arrow over (p)}) (cf. the field distribution 19 in
Here,
is the curtailment by the numerical aperture of the imaging optical unit 20, i.e. by the imaging pupil stop 23, and
is the wavefront error caused by a defocus z (displacement by the object holder 17). Now, for this curtailment by the aperture stop 23, the effective stop edge 39 is set in accordance with the explanations relating to
The propagated spectrum (cf.
In this case, {right arrow over (r)} is the xy-position of the intensity measurement, i.e. the respective pixel of the camera 27.
{right arrow over (q)} is the illumination direction and {right arrow over (p)} is the pupil coordinate. An illumination direction {right arrow over (q)} corresponds to a center of a stop aperture of the respective pupil stop 10 in the respective measurement position.
For σ({right arrow over (p)}, {right arrow over (q)}), the effective stop edges of the pupil stops 10 at the different measurement positions are used in accordance with the explanations given above, especially in the context of
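A hedged Python sketch of the forward model outlined above (corresponding in spirit to the aerial image Isim of Equation (2), without claiming to reproduce it exactly): for each illumination direction the mask spectrum is curtailed by the effective imaging aperture, provided with the defocus wavefront term, transformed back to the image plane and summed incoherently with the weights σ({right arrow over (p)}, {right arrow over (q)}) of the effective stop edges. All names and data structures are assumptions for illustration.

```python
import numpy as np

def aerial_image(mask_spectra, na_mask, defocus_phase, sigma, q):
    """Abbe-type incoherent sum over illumination directions p.

    mask_spectra  : dict mapping p -> 2-D complex array M(k, p)
    na_mask       : 2-D array, effective imaging-aperture edge (0/1 or apodized)
    defocus_phase : 2-D complex array exp(i * W_defocus(k, z))
    sigma         : callable (p, q) -> weight of direction p for the effective
                    illumination-stop edge at measurement position q
    """
    img = None
    for p, spectrum in mask_spectra.items():
        field = np.fft.ifft2(spectrum * na_mask * defocus_phase)   # back to the image plane
        contrib = sigma(p, q) * np.abs(field) ** 2                 # incoherent contribution
        img = contrib if img is None else img + contrib
    return img
```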
The object now is to determine the mask spectrum M({right arrow over (k)}, {right arrow over (p)}). In this case, {right arrow over (k)} is the pupil coordinates in the entrance pupil 24 of the projection optical unit 20 and {right arrow over (p)} is the illumination direction.
The Fourier transform of the respective mask spectrum yields the associated mask transfer function.
The reconstructed spectra can then be used to calculate the aerial image for any other illumination setting σtarget({right arrow over (p)}) and any defocus ztarget. This is also referred to as forward propagation.
The determination of M({right arrow over (k)}, {right arrow over (p)}) can be formulated as an optimization problem: Sought are the spectra M({right arrow over (k)}, {right arrow over (p)}) for which there is a minimum deviation F between the simulated aerial images and the aerial images Imeas measured at the defocus positions z1, z2 . . . zN and the illumination directions {right arrow over (q)}1, {right arrow over (q)}2 . . . {right arrow over (q)}M. The following optimization problem should be solved:
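One plausible least-squares form of this optimization problem, given here merely as an illustration and not as a reproduction of the Equation (3) referred to in the text, is:

```latex
\min_{M(\vec{k},\vec{p})} \; F, \qquad
F \;=\; \sum_{n=1}^{N}\sum_{m=1}^{M}\sum_{\vec{r}}
\bigl|\, I_{\mathrm{sim}}\bigl(\vec{r},\, z_n,\, \vec{q}_m;\, M\bigr)
      \;-\; I_{\mathrm{meas}}\bigl(\vec{r},\, z_n,\, \vec{q}_m\bigr) \bigr|^{2}
```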
A separate spectrum is reconstructed for each illumination direction {right arrow over (p)}.
A simulated aerial image Isim for the target illumination setting σtarget and the target defocus ztarget can be calculated using the reconstructed directionally dependent spectrum:
The target illumination setting σtarget now also depends on the field position xm of an intensity measurement position in the object field 3. This dependence corresponds to the variation depicted in
Thus, an actual intensity distribution BPxm in the optical production system illumination pupil BP is used for each intensity measurement position. This intensity distribution can be ascertained from an optical simulation of the optical production system or from a measurement of an optical production system illumination unit. This actual intensity distribution in the optical production system illumination pupil is assumed to be known.
A dependence of an exit pupil contour of the optical production system imaging optical unit on the field position and/or on the chief ray angle can be taken into account by way of a field-dependent transfer function of the optical production system imaging optical unit:
Here, NAScanner({right arrow over (k)}, xm) is the description of the stop edge 39m of the optical production system aperture stop 42, dependent on the field pose xm (cf. also the above description in relation to
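Merely as an illustrative sketch, and not as a reproduction of the Equation (4) referred to in the text, the field-dependent forward propagation can be thought of as combining the field-dependent target setting and the field-dependent aperture description:

```latex
I_{\mathrm{sim}}(\vec{r},\, z_{\mathrm{target}},\, x_m)
\;\approx\; \sum_{\vec{p}} \sigma_{\mathrm{target}}(\vec{p},\, x_m)\,
\Bigl|\, \mathcal{F}^{-1}\!\bigl[\, M(\vec{k},\vec{p})\;
\mathrm{NA}_{\mathrm{Scanner}}(\vec{k},\, x_m)\;
e^{\,i\, W_{\mathrm{defocus}}(\vec{k},\, z_{\mathrm{target}})} \bigr] \Bigr|^{2}
```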
The reconstruction thus takes into account that profiles of edge contours of the pupil stops 10 at the respective measurement position, i.e. of measurement illumination settings specified by stop contours 39 of the pupil stop 10, change in a manner which is dependent on a respective displacement position of the pupil stop 10 and which goes beyond a pure displacement of the edge contours.
Equation (4) then allows comparison between the simulated aerial image Isim and the respectively measured aerial image Imeas, and this can be used to reconstruct the mask spectrum M and, accordingly, the complex mask transfer function.
From Equation (4), the 3-D aerial image can be calculated with the aid of the reconstructed mask transfer function M and the illumination setting σtarget of the optical production system. In this way, it is possible to ascertain what the aerial image of the test structure 5 would look like if it were imaged by the optical production system.
As an alternative to the method described in the previous section, a correction approach analogous to DE 10 2019 206 651 A1 is also possible in place of a completely synthetic calculation of the images by propagating the reconstructed mask spectrum. To this end, a correction term Δ is calculated:
The two terms Isim({right arrow over (r)}, z, xm) and Isim ({right arrow over (r)}, z, {right arrow over (q)}) correspond to those in Equations (4) and (2) above. A precondition for this is that measurements are performed for the same focus poses and illumination stop positions as for the target settings.
The correction term Δ corresponds to the difference in the aerial images produced by the CRA/field dependence. Thus, the CRA/field dependence can be corrected as follows:
In this correction approach, systematic/constant errors compensate one another in the mask spectrum reconstruction, i.e. the same deviations in Isim({right arrow over (r)}, z, xm) and Isim({right arrow over (r)}, z, {right arrow over (q)}), and do not make an unwanted contribution to the final image. As a result, it is possible to preserve effects/properties contained in the measurement data even if these are not taken into account in the object reconstruction imaging model. An example is 3-D mask effects in a reconstruction with a simple imaging model without explicit consideration of 3-D mask effects. In the limit case of a reconstruction with a negligible residue, i.e. F→0 in Equation (3), both methods (propagation and correction approach) are equivalent.
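A minimal sketch of the correction approach described above, under the assumption that the correction term is simply the difference of the two simulated aerial images and is added to the measured aerial image (the exact equations are not reproduced here):

```latex
\Delta(\vec{r}, z) \;=\; I_{\mathrm{sim}}(\vec{r},\, z,\, x_m) \;-\; I_{\mathrm{sim}}(\vec{r},\, z,\, \vec{q}),
\qquad
I_{\mathrm{corr}}(\vec{r},\, z,\, x_m) \;\approx\; I_{\mathrm{meas}}(\vec{r},\, z,\, \vec{q}) \;+\; \Delta(\vec{r}, z)
```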
For the pupil stop 10 of the metrology system 2, the simulation method can make use either of stop basic shapes corresponding to those already explained above, especially in the context of
In a predefining step 45, firstly a starting stop shape of the sigma stop 10, 10dc, is selected as an initial design candidate for the simulation.
In the context of the optimization, this starting stop shape 10dc is modified in a modifying step 46, such that a modification stop shape 10dcnew that is slightly changed in regard to its boundary shape arises in a producing step 47.
In a checking step 48, a check is then made to establish whether this modification stop shape 10dcnew satisfies at least one fabrication boundary condition with regard to the fabrication of this modification stop shape 10dcnew. If the checking step 48 reveals that at least one marginal check portion of the modification stop shape 10dcnew does not satisfy the fabrication boundary conditions (decision “N” of the checking step 48), the modification step 46 and the producing step 47 are repeated on the basis of the last valid design candidate. This is done until the checking step 48 for a modification stop shape 10dcnew then given reveals compliance with the predefined fabrication boundary conditions (decision “Y” of the checking step 48).
An ascertaining step 49 then involves ascertaining the match quality between the illumination and imaging properties of the optical production system and the illumination and imaging properties of the optical measurement system.
A value of at least one merit function is calculated in the context of this match quality ascertainment. Said merit function is influenced by a comparison of optical illumination and imaging parameters between a pupil overlap region of an illumination pupil and an imaging pupil of the optical production system, on the one hand, and a corresponding pupil overlap region of an illumination pupil with a used stop shape of the sigma stop 10 and an imaging pupil with a used NA aperture stop 23 of the optical measurement system, on the other hand.
A center ZAr,ϕ of the exit pupil AP lies at Cartesian coordinates σxi, σyi. Instead of Cartesian coordinates σx, σy, it is also possible to choose polar coordinates, likewise illustrated in
In the context of ascertaining the match quality with the aid of such a pupil overlap region Ar, ϕ, the overlap at various support points σxi, σyi that are scanned is assessed. The following assessment terms are used in this case:
D here is a term describing a simple summation of the intensities I(σx,σy) over the respective pupil overlap region Ar,ϕ. This D term (according to Equation (8)) correlates with an image dimension CD (critical dimension), i.e. a width of a structure along a predefined direction.
Reference is made to U.S. Pat. No. 9,176,390 B in the context of the definition of the parameter CD.
The T term (according to Equation (9)) represents an integral over the overlap region A, said integral again being weighted with the distance value σϕ. For this formulation of the T term, it is assumed for simplification that the exit pupil has no apodization. This T term correlates with the imaging telecentricity imaging parameter. This can include a sensitivity of an object structure offset as a function of a defocus position of a substrate onto which the object is imaged.
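As an illustrative sketch only, the D and T terms can be evaluated on a sampled pupil grid roughly as follows; the array layout and the concrete distance measure σϕ are assumptions, since Equations (8) and (9) are not reproduced here.

```python
import numpy as np

def d_term(intensity: np.ndarray, region: np.ndarray) -> float:
    """Simple summation of the pupil intensities I(sigma_x, sigma_y) over the
    overlap region A (boolean mask) - the CD-correlated D term."""
    return float(np.sum(intensity[region]))

def t_term(intensity: np.ndarray, region: np.ndarray, sigma_phi: np.ndarray) -> float:
    """Distance-weighted sum over the overlap region - the telecentricity-
    correlated T term. The exact distance measure sigma_phi per pupil sample
    is an assumption made for illustration."""
    return float(np.sum(intensity[region] * sigma_phi[region]))
```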
For a given pupil stop shape of the pupil stop 10, the following optimization rules are applied when ascertaining the match quality for all possible overlap regions Ar,ϕ:
In this case, dc stands for the respective design candidates, i.e. for the currently considered stop shape of the pupil stop 10. t stands for the target illumination pupil of the optical production system, i.e. in particular of a projection exposure apparatus in the form of a scanner.
The optimization rules in accordance with Equations (10) and (11) are not attained as a rule. When ascertaining the match quality, the stop shape of the design candidate dc is varied until the optimization rules (10), (11) yield minimum values.
Besides the optimization variables D and T, further variables correlated with further illumination and/or imaging parameters can also be used when ascertaining the match quality. One example of such a variable is:
This HV term correlates with an imaging variable “HV asymmetry”, which quantifies a difference in the critical dimensions (CDs) along a vertical and along a horizontal dimension. The HV term may be of interest depending on the structures to be imaged on the object 5; for example in the case of horizontal or vertical lines to be imaged, in particular having the same periodicity and the same target CD, or else in the case of so-called contact holes, i.e. structures having an xy-aspect ratio in the region of 1. An HV asymmetry can then be understood as the difference between the two CDs, i.e. CDh−CDv in the case of horizontal (h) and vertical (v) lines or CDx−CDy in the case of contact holes having extents in the x- and y-directions.
Ascertaining the HV term according to Equation (12) above involves calculating the difference between two D terms according to Equation (8) at the location of two defined overlap regions Ar,ϕ and Br,ϕ which are rotated with respect to one another by 90° about the coordinate origin ZB (cf.
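Building on the hypothetical d_term helper sketched above, the HV term can then be evaluated, again purely as an illustration, as the difference of two D terms over the mutually rotated overlap regions:

```python
def hv_term(intensity, region_a, region_b):
    """HV term as the difference of two D terms, evaluated over overlap regions
    A and B rotated by 90 degrees with respect to one another about the
    coordinate origin (cf. Equation (12)); a sketch, not the exact formula."""
    return d_term(intensity, region_a) - d_term(intensity, region_b)
```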
For the HV term, too, there is then a corresponding optimization rule:
Once the comparison calculation has been carried out, the overlap regions Ar,ϕ used cover the entire illumination pupil of the optical production system, on the one hand, and of the illumination optical unit 9 of the metrology system 2, on the other hand.
The illumination setting to be simulated (cf.
A merit function E can be used during the match quality ascertainment since, in general, the match rules according to Equations (10), (11) and (13) do not all become 0 at the same time. This merit function can be written as weighted error minimization in the usual way as:
In the merit function, the index dc denotes the stop shape of the pupil stop 10dcnew which is intended to be assessed by use of the merit function, and the index t denotes the target illumination pupil of the optical production system, this being the intended target of the optimization. D and T denote the assessment terms discussed above in the context of Equations (10) and (11). In addition, the merit function E can for example also be extended by the assessment term HV (cf. Equations (12) and (13)).
The merit function E can additionally be extended by the requirement for a minimum transmission of the pupil stop 10dcnew.
Besides the target illumination pupil of the optical production system, the ascertaining step 49 can also be influenced by a pupil transfer function of the optical production system and a pupil transfer function of the optical measurement system of the metrology system 2.
For this purpose, the D term defined above in the context of Equation (8) can be written as follows:
P here is an apodization function, i.e. an energetic proportion of the pupil transfer function.
An apodization of the exit pupil can then be taken into account by this means.
In the course of the ascertaining step 49, compliance with an optimization criterion is queried in an optimization query step 50. One example of such an optimization criterion is the Boltzmann criterion of simulated annealing:
In this case, r is a uniformly distributed random number from the interval [0,1[ (the exact numerical value “1” is thus excluded in this interval) and β is a control parameter that increases further and further in the course of the simulated annealing optimization. E(dcnew) and E(dc) are the merit functions that arose for the stop shapes of the pupil stop 10 during the last and during the preceding optimization step.
Insofar as the Boltzmann criterion is satisfied, i.e. the optimization has not yet concluded (decision Y in the query step 50), the current stop shape 10dcnew is set as initial stop shape 10dc for the next modification, which is effected in a predefining step 51. The control parameter β is also increased in the predefining step 51. The optimization criterion is thus intensified in the context of the predefining step 51. Afterwards, the method continues with the modifying step 46 and steps 47 to 50 are repeated until the optimization query step 50 reveals that either the Boltzmann criterion is no longer satisfied or the control parameter β is greater than a predefined value (query result N in the query step 50).
If, therefore, the optimization criterion has then been attained in the optimization query step 50 (query result N), the pupil stop 10 with the target stop shape that occurred with the smallest merit function value E in the optimization is fabricated in a fabricating step 52.
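The optimization loop of steps 45 to 52 can be summarized in a hedged Python sketch. The acceptance test follows the standard Metropolis/Boltzmann rule, which is an assumption about the exact form of the criterion, and modify, is_fabricable and merit are hypothetical callables standing in for the modifying step, the fabrication check and the merit function E.

```python
import math
import random

def boltzmann_accept(e_new: float, e_old: float, beta: float) -> bool:
    """Boltzmann criterion of the simulated annealing described above: a random
    number r, uniformly distributed in [0, 1), is compared against
    exp(-beta * (E(dc_new) - E(dc))). The exact inequality is an assumption
    based on the standard Metropolis acceptance rule."""
    r = random.random()
    return r < math.exp(-beta * (e_new - e_old))

def optimize_stop_shape(dc, modify, is_fabricable, merit,
                        beta0=1.0, beta_max=100.0, growth=1.1):
    """Hedged sketch of steps 45 to 52: modify the design candidate (steps 46/47),
    enforce the fabrication boundary condition (step 48), evaluate the merit
    function E (step 49) and iterate while the Boltzmann criterion holds and
    beta stays below a predefined limit (steps 50/51)."""
    beta = beta0
    e_old = merit(dc)
    best, e_best = dc, e_old
    while beta <= beta_max:
        dc_new = modify(dc)
        while not is_fabricable(dc_new):       # decision "N" of the checking step 48
            dc_new = modify(dc)
        e_new = merit(dc_new)
        if e_new < e_best:                     # remember the smallest merit value E
            best, e_best = dc_new, e_new
        if not boltzmann_accept(e_new, e_old, beta):
            break                              # query result "N" of step 50
        dc, e_old = dc_new, e_new              # predefining step 51
        beta *= growth                         # intensify the optimization criterion
    return best                                # shape to be fabricated in step 52
```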
Exactly one pupil stop 10, which was optimized as explained above in the context of
Individually optimized stops can be designed for a given selection of field points xi, e.g. three field points (left x-field edge, field center, right x-field edge), and so the contributions of, firstly, the optical production system and, secondly, the metrology system 2 which are valid for this field point xi or for this illumination direction of the respective chief ray CRAi are taken into account more completely, and there is an accordingly improved simulation of the optical production system aerial image. This specification of individual optimized stop edges can be implemented accordingly for the pupil stop 10, for the aperture stops 23 and also for both stops 10, 23.
For instance, the three pupil stops 10 and optionally three aperture stops 23 optimized thus are then available for the metrology system 2. In a manner fitting to the field point xm to be measured in the object field 3, it is then possible to use the corresponding stop or the corresponding stop pair of pupil stop and aperture stop.
If there is no coincidence between a field point xm to be measured and a specified field point for which the respective stop was optimized in respect of its edge, then it is possible to use a compensation rule. An example of such a compensation rule is:
Here, A(x, y) denotes the aerial image measured by the metrology system 2.
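One plausible form of such a weighted compensation rule, given here only as an assumption for the case of a field point xm lying between two field points xi and xi+1 for which optimized stops exist, is a linear interpolation of the associated aerial images, where Ai and Ai+1 denote the aerial images measured with the stops optimized for xi and xi+1 and A the compensated aerial image assigned to xm:

```latex
A(x, y) \;\approx\; w\, A_{i}(x, y) \;+\; (1 - w)\, A_{i+1}(x, y),
\qquad
w \;=\; \frac{x_{i+1} - x_m}{x_{i+1} - x_i}
```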
In a variant of the simulation method, it is also possible to use a plurality of different pupil stops 10 to specify the various measurement positions (kx, ky).
To prepare the simulation method, it is possible to record an aerial image stack in order to ascertain which z-pose of the object plane 4 supplies an optimally sharp image in the image plane 29 (zero of the z-pose).
z-increments which are used in Equation (2) when determining the aerial image Isim may differ from the defocus values zm that are specified within the scope of the simulation method.
Pixel sizes of the recorded measurement aerial images Imeas may be re-sampled for the purpose of matching to a desired pixel resolution.
A plurality of kx, ky positions of the imaging pupil stop 23 can also be set by way of the displacement drive 25 in a simulation method.
When reconstructing the mask transfer function, it is accordingly possible to take account of imaging aberrations of the optical measurement system, in particular imaging aberrations of the imaging optical unit 20 of the metrology system 2.
The determination of the 3-D aerial image Imeas and/or the calculation of the simulated aerial image Isim may be carried out using a different illumination chief ray angle to that of the reconstruction of the mask transfer function.
For selecting the respective pupil stop 10 from the provided plurality of pupil stops 10 with in each case different stop edge shapes and/or stop edge orientations, the metrology system 2 has a selection apparatus not depicted in detail in the drawing. This selection apparatus has a stop storage unit, in which the plurality of pupil stops 10 with different stop edge shapes and/or stop edge orientations are stored in each case for the purpose of specifying correspondingly different measurement illumination settings.
In the selection step of the simulation method, the last pupil stop inserted is firstly removed from its use location in the pupil plane 11 and supplied to the stop storage unit in the selection apparatus with the aid of an actuator system of the selection apparatus, in particular with the aid of a robotic actuator system. Subsequently, the pupil stop 10 selected according to the simulation method is selected from the stop storage unit and inserted in the use position in the pupil plane 11 with the aid of the robotic actuator system.
In principle, the problem presented above and also the solution can be applied analogously to take account of machine-individual properties in the aerial image emulation. For instance, the EUV illumination pupils differ from machine to machine, depending on the light source used, in particular the EUV light source used. In particular, the combination of a stop emulating the ideal system and the simulation-type method incorporating the machine-individual component is attractive.
For instance, this requires that the machine-individual properties are known, for example by way of a qualification or, in the case of the EUV source types, by way of the appropriate numerical models. Machine-individual portions of the metrology system 2 can also be considered in similar fashion.
In general, the above-described approaches allow specific properties of the optical production system to be simulated, for example crosstalk effects between different illumination channels of a fly's eye integrator system in the illumination optical unit of the optical production system and/or crosstalk effects between various x-coordinate-dependent intensity correction stops in the illumination optical unit of the optical production system. For instance, it is possible to simulate insertion depths of appropriate stop fingers which correct the illumination intensity in x-coordinate-dependent fashion, and the influence thereof on the aerial image.
In some implementations, the calculations and processing of data (e.g., performing simulation) described in this document can be performed by one or more computers that include one or more data processors configured to execute one or more programs that include a plurality of instructions according to the principles described above. Each data processor can include one or more processor cores, and each processor core can include logic circuitry for processing data. For example, a data processor can include an arithmetic and logic unit (ALU), a control unit, and various registers. Each data processor can include cache memory. Each data processor can include a system-on-chip (SoC) that includes multiple processor cores, random access memory, graphics processing units, one or more controllers, and one or more communication modules. Each data processor can include millions or billions of transistors.
The methods described in this document can be carried out using one or more computers, which can include one or more data processors for processing data, one or more storage devices for storing data, and/or one or more computer programs including instructions that when executed by the one or more computers cause the one or more computers to carry out the processes. The one or more computers can include one or more input devices, such as a keyboard, a mouse, a touchpad, and/or a voice command input module, and one or more output devices, such as a display, and/or an audio speaker.
In some implementations, the one or more computing devices can include digital electronic circuitry, computer hardware, firmware, software, or any combination of the above. The features related to processing of data can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations. Alternatively or in addition, the program instructions can be encoded on a propagated signal that is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a programmable processor.
A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
For example, the one or more computers can be configured to be suitable for the execution of a computer program and can include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer system include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer system will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as hard drives, magnetic disks, solid state drives, magneto-optical disks, or optical disks. Machine-readable storage media suitable for embodying computer program instructions and data include various forms of non-volatile storage area, including by way of example, semiconductor storage devices, e.g., EPROM, EEPROM, flash storage devices, and solid state drives; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM, DVD-ROM, and/or Blu-ray discs.
In some implementations, the processes described above can be implemented using software for execution on one or more mobile computing devices, one or more local computing devices, and/or one or more remote computing devices (which can be, e.g., cloud computing devices). For instance, the software forms procedures in one or more computer programs that execute on one or more programmed or programmable computer systems, either in the mobile computing devices, local computing devices, or remote computing systems (which may be of various architectures such as distributed, client/server, grid, or cloud), each including at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one wired or wireless input device or port, and at least one wired or wireless output device or port.
In some implementations, the software may be provided on a medium, such as CD-ROM, DVD-ROM, Blu-ray disc, a solid state drive, or a hard drive, readable by a general or special purpose programmable computer or delivered (encoded in a propagated signal) over a network to the computer where it is executed. The functions can be performed on a special purpose computer, or using special-purpose hardware, such as coprocessors. The software can be implemented in a distributed manner in which different parts of the computation specified by the software are performed by different computers. Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system can also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
The embodiments of the present invention that are described in this specification and the optional features and properties respectively mentioned in this regard should also be understood to be disclosed in all combinations with one another. In particular, in the present case, the description of a feature comprised by an embodiment—unless explicitly explained to the contrary—should also not be understood such that the feature is essential or indispensable for the function of the embodiment.