The present invention relates to photolithography, and in particular to methods of creating a semiconductor mask or reticle to print a desired layout pattern.
With conventional photolithographic processing techniques, integrated circuits are created on a semiconductor wafer by exposing photosensitive materials on the wafer through a mask or reticle. The wafer is then chemically and mechanically processed to build up the integrated circuit or other device on a layer-by-layer basis.
As the components of the integrated circuit or other device to be created become ever smaller, optical distortions occur whereby the pattern of features defined on a mask or reticle does not match the pattern that is printed on the wafer. As a result, numerous resolution enhancement techniques (RETs) have been developed that seek to compensate for the expected optical distortions so that the pattern printed on a wafer will more closely match the desired layout pattern. Typically, the resolution enhancement techniques include the addition of one or more subresolution features to the mask pattern or the creation of features with different types of mask features such as phase shifters. Another resolution enhancement technique is optical and process correction (OPC), which analyzes a mask pattern and moves the edges of the mask features inwardly or outwardly, or adds features such as serifs, hammerheads, etc., to the mask pattern, to compensate for expected optical distortions.
While RETs improve the fidelity of a pattern created on a wafer, further improvements can be made.
To improve the fidelity with which a desired layout pattern can be printed on a wafer with a photolithographic imaging system, the present invention is a method and apparatus for calculating a mask or reticle layout pattern from a desired layout pattern. A computer system executes a sequence of instructions that cause the computer system to read all or a portion of a desired layout pattern and define a mask layout pattern as a number of pixel transmission characteristics. The computer system analyzes an objective function equation that relates the transmission characteristic of each pixel in the mask pattern to an image intensity on a wafer. In one embodiment, a maximum image intensity for points on a wafer is determined from a simulation of the image that would be formed using a test pattern of mask features. In one embodiment, the objective function also includes one or more penalty functions that enhance solutions meeting desired manufacturing constraints. Once the pixel transmission characteristics for the mask layout pattern are determined, the data are provided to a mask writer to fashion one or more corresponding masks for use in printing the desired layout pattern.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
As will be explained in further detail below, the present invention is a method and apparatus for calculating a mask pattern that will print a desired layout or portion thereof on a wafer.
Beginning at 100, a computer system obtains all or a portion of a layout file that defines a desired pattern of features to be created on a semiconductor wafer. At 102, the computer system divides the desired layout into a number of frames. In one embodiment, the frames form adjacent, partially overlapping areas in the layout. Each frame, for example, may occupy an area of 5×5 microns. The size of each frame may depend on the amount of memory available and the processing speed of the computer system being used.
At 104, the computer system begins processing each of the frames. At 106, the computer defines a blank area of mask data that is pixelated. At 108, the computer defines a corresponding set of optimal mask data. In one embodiment, the optimal mask data defines a corresponding set of pixels whose transmission characteristics are defined by the desired layout data. For example, each optimal mask data pixel in an area that corresponds to a wafer feature may have a transmission characteristic of 0 (e.g. opaque) and each optimal mask data pixel that corresponds to an area of no wafer feature may have a transmission characteristic of 1 (e.g. clear). In some embodiments, it may be desirable to change the data for the optimal mask data from that defined strictly by the desired layout pattern. For example, the corners of features may be rounded or otherwise modified to reflect what is practical to print on a wafer. In addition or alternatively, pixel transmission characteristics may be changed from a binary 0/1 value to a grayscale value, to positive and negative values (representing phase shifters), or to complex values (representing partial phase shifters).
In some embodiments the optimal mask data may also include or be modified by a weighting function. The weighting function allows a user to determine how close the solution for a given pixel should be to the transmission characteristic defined by the corresponding pixel in the optimal mask data. The weighting function may be a number selected between 0 and 1 that is defined for each pixel in the optimal mask data.
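For illustration only, the following Python sketch shows one plausible way to derive pixelated optimal mask data and a per-pixel weight map from a rasterized target layout. The function and parameter names are assumptions for this example, not taken from the patent, and the edge-down-weighting rule is just one possible weighting choice:

```python
import numpy as np

def make_optimal_mask(target_features, edge_weight=0.5):
    """Sketch: derive pixelated optimal mask data from a 0/1 target layout.

    target_features[i, j] == 1 where a wafer feature is desired.  Following
    the text, feature pixels get transmission 0 (opaque) and non-feature
    pixels get transmission 1 (clear).  Names are illustrative only.
    """
    optimal = 1.0 - target_features.astype(float)
    # Per-pixel weights in [0, 1] say how closely the solution should match
    # the optimal mask data; here we down-weight pixels at feature edges,
    # where corner rounding is acceptable (an assumed, not patented, rule).
    edges = (np.abs(np.diff(optimal, axis=0, prepend=optimal[:1])) +
             np.abs(np.diff(optimal, axis=1, prepend=optimal[:, :1])))
    weights = np.where(edges > 0, edge_weight, 1.0)
    return optimal, weights
```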
At 110, an objective function is defined that relates a simulation of the image intensity on the wafer to the pixel transmission characteristics of the mask data and the optics of the photolithographic printing system. The objective function may be defined for each frame of mask data, or the same objective function may be used for more than one frame of mask data. Typically, the objective function is defined so that its value is minimized by the best possible mask; however, other formulations could be used.
In one embodiment of the present invention, one or more penalty functions are also defined for the objective function. The penalty functions operate to steer the optimization routine described below to a solution that can be or is desired to be manufactured on a mask. For example, it may be that the objective function has a number of possible solutions of which only one can actually be made as a physical mask. Therefore, the penalty functions operate to steer the solution in a direction selected by the programmer to achieve a mask that is manufacturable. Penalty functions can be defined that promote various styles of resolution enhancement techniques such as: assist features, phase shifters, partial phase shifters, masks having features with grayscale transmission values or multiple transmission values, attenuated phase shifters, combinations of these techniques or the like.
For example, a particular mask to be made may allow each pixel to have one of three possible transmission characteristics: 0 (opaque), +1 (clear) or −1 (clear with phase shift). By including a penalty function in the objective function prior to optimization, the solution is steered to one that can be manufactured as this type of mask.
An example of a penalty function is α4∥(m+e)m(m−e)∥22, where e is a one vector, as set forth in Equation 57 described in the “Fast Pixel-Based Mask Optimization for Inverse Lithography” paper below. In one embodiment, the penalty functions are defined as polynomials having zeroes at desired pixel transmission characteristics. In another embodiment, the penalty functions can represent logical operations. For example, if the area of a wafer is too dark, the corresponding pixels in the mask data can be made all bright or clear. This in combination with other mask constraints has the effect of adding subresolution assist features to the mask data.
At 112, the objective function, including any penalty functions, for the frame is optimized. In one embodiment, the optimized solution is found using gradient descent. If the objective function is selected to have the form described by Equation 57, its gradient can be computed analytically using convolution or cross-correlation, which is efficient to implement on a computer. The result of the optimization is a calculated transmission characteristic for each pixel in the mask data for the frame.
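As a concrete illustration of steps 110-112, the sketch below minimizes a simple coherent image-fidelity term plus the tri-tone penalty whose zeros sit at the allowed transmissions −1, 0, +1 (the polynomial form discussed above) by plain gradient descent. The coherent model, step size, and names are assumptions of this example, not the patent's prescribed objective:

```python
import numpy as np
from scipy.signal import fftconvolve

def optimize_frame(h, a_ideal, m0, steps=200, lr=0.2, alpha=0.1):
    """Sketch: gradient descent on pixel transmissions for one frame.

    Minimizes ||h * m - a_ideal||^2 + alpha * sum((m^3 - m)^2): a coherent
    image-fidelity term plus a tri-tone penalty with zeros at the allowed
    transmissions -1, 0, +1.  Assumes a real kernel h; all names and
    constants are illustrative.
    """
    m = m0.copy()
    h_flip = h[::-1, ::-1]  # correlation = convolution with the flipped kernel
    for _ in range(steps):
        residual = fftconvolve(m, h, mode='same') - a_ideal
        grad = 2.0 * fftconvolve(residual, h_flip, mode='same')  # fidelity term
        grad += alpha * 2.0 * (m**3 - m) * (3.0 * m**2 - 1.0)    # penalty term
        m = np.clip(m - lr * grad, -1.0, 1.0)  # keep the mask passive, |m| <= 1
    return m
```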
At 114, it is determined if each frame has been analyzed. If not, processing returns to 104 and the next frame is processed. If all frames are processed, the mask pixel data for each of the frames is combined at 116 to define the pixel data for one or more masks. The mask data is then ready to be delivered to a mask writer in order to manufacture the corresponding masks.
More mathematical detail of the method for computing the mask pixel transmission characteristics is described in U.S. Patent Application No. 60/722,840, filed Sep. 30, 2005 and incorporated by reference herein, as well as in the paper "Fast Pixel-Based Mask Optimization for Inverse Lithography" by Yuri Granik of Mentor Graphics Corporation, 1001 Ridder Park Drive, San Jose, Calif. 95131, reproduced below (with slight edits).
The direct problem of optical microlithography is to print features on a wafer under given mask, imaging system, and process characteristics. The goal of inverse problems is to find the best mask and/or imaging system and/or process to print the given wafer features. In this study we proposed strict formalization and fast solution methods for inverse mask problems. We stated inverse mask problems (or "layout inversion" problems) as non-linear, constrained minimization problems over the domain of mask pixels. We considered linear, quadratic, and non-linear formulations of the objective function. The linear problem is solved by an enhanced version of the Nashold projections. The quadratic problem is addressed by eigenvalue decompositions and quadratic programming methods. The general non-linear formulation is solved by the local variations and gradient descent methods. We showed that the gradient of the objective function can be calculated analytically through convolutions. This is the main practical result, because it enables layout inversion on a large scale, on the order of M log M operations for M pixels.
The layout inversion goal appears to be similar or even the same as found in Optical Proximity Correction (OPC) or Resolution Enhancement Techniques (RET). However, we would like to establish the inverse mask problem as a mathematical problem being narrowly formulated, thoroughly formalized, and strictly solvable, thus differentiating it from the engineering techniques to correct (“C” in OPC) or to enhance (“E” in RET) the mask. Narrow formulation helps to focus on the fundamental properties of the problem. Thorough formalization gives opportunity to compare and advance solution techniques. Discussion of solvability establishes existence and uniqueness of solutions, and guides formulation of stopping criteria and accuracy of the numerical algorithms.
The results of pixel-based inversions can be realized by the optical maskless lithography (OML) [31]. It controls pixels of 30×30 nm (in wafer scale) with 64 gray levels. The mask pixels can be negative to achieve phase-shifting.
Strict formulations of the inverse problems relevant to microlithography applications first appear in the pioneering studies of B. E. A. Saleh and his students S. Sayegh and K. Nashold. In [32], Sayegh differentiates image restoration from image design (a.k.a. image synthesis). In both, the image is given and the object (mask) has to be found. However, in image restoration, it is guaranteed that the image is achieved by some object. In image design the image may not be achievable by any object, so that we have to target the image as close as possible to the desired ideal image. The difference is analogous to solving for a function zero (image restoration) versus minimizing a function (image design). Sayegh states the image design problem as an optimization problem of minimizing the threshold fidelity error Fθ in trying to achieve the given threshold θ at the boundary C of the target image ([32], p. 86):
where n=2 and n=4 options are explored; I(x, y) is the image from the mask m(x, y); x, y are image and mask planar coordinates. Optical distortions are modeled by the linear system of convolution with the point-spread function h(x, y), so that
I(x, y)=h(x, y)*m(x, y), (2)
and for the binary mask (m(x,y)=0 or m(x,y)=1). Sayegh proposes an algorithm of one-at-a-time "pixel flipping": the mask is discretized, and then pixel values 0 and 1 are tried. If the error (1) decreases, the new pixel value is accepted; otherwise it is rejected, and we try the next pixel.
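A minimal sketch of this one-at-a-time pixel flipping under the linear model (2); the error functional err is supplied by the caller, and recomputing the full image per trial flip is the simple, slow variant:

```python
import numpy as np
from scipy.signal import fftconvolve

def pixel_flip(mask, h, err):
    """Sketch of Sayegh-style one-at-a-time pixel flipping on a 0/1 mask.

    err(image) evaluates the fidelity error (1); a flip is kept only when
    it lowers the error.  Recomputing the whole image per flip is the
    simple, slow variant; field caching (later section) makes trials cheap.
    """
    best = err(fftconvolve(mask, h, mode='same'))
    for idx in np.ndindex(mask.shape):
        mask[idx] = 1 - mask[idx]                       # try flipping one pixel
        trial = err(fftconvolve(mask, h, mode='same'))
        if trial < best:
            best = trial                                # keep the improvement
        else:
            mask[idx] = 1 - mask[idx]                   # revert the flip
    return mask
```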
Nashold [22] considers a bandlimiting operator in place of the point-spread function (2). Such a formulation facilitates application of the alternate projection techniques widely used in image processing for reconstruction, usually referenced as the Gerchberg-Saxton phase retrieval algorithm [7]. In the Nashold formulation, one searches for a complex-valued mask that is bandlimited to the support of the point-spread function and also delivers images that are above the threshold in bright areas B and below the threshold in dark areas D of the target:
x,y∈B:I(x,y)>θ, x,y∈D:I(x,y)<θ (4)
Both studies [32] and [22] advanced the solution of inverse problems for linear optics. However, the partially coherent optics of microlithography is not a linear but a bilinear system [29], so that instead of (2) the following holds:
I(x,y)=∫∫∫∫q(x−x1, x−x2, y−y1, y−y2)m(x1, y1)m*(x2, y2)dx1dx2dy1dy2, (5)
where q is a 4D kernel of the system. While pixel flipping [32] is also applicable to bilinear systems, the Nashold technique relies on the linearity. To get around this limitation, Pati and Kailath [25] propose to approximate the bilinear operator by one coherent kernel h, a possibility that follows from Gamo's results [6]:
I(x, y)≈λ|h(x, y)*m(x, y)|2, (6)
where the constant λ is the largest eigenvalue of q, and h is the corresponding eigenfunction. With this the system becomes linear in the complex amplitude A of the electrical field
A(x, y)=√λ h(x, y)*m(x, y). (7)
Because of this and because h is bandlimited, the Nashold technique is applicable.
Liu and Zakhor [19, 18] advanced along the lines started by the direct algorithm [32]. In [19] they introduced an optimization objective as the Euclidean distance ∥·∥2 between the target Iideal and the actual wafer image:
F1[m(x, y)]=∥I(x, y)−Iideal(x, y)∥2→min. (8)
This was later used in (1) as image fidelity error in source optimization. In addition to the image fidelity, the study [18] optimized image slopes in the vicinity of the target contour C:
where C+ε is a sized-up and C−ε is a sized-down contour C; ε is a small bias. This objective has to be combined with the requirement that the mask be a passive optical element, m(x, y)m*(x, y)≦1, or, using the infinity norm ∥·∥∞=maxi|·|, we can express this as
∥m(x, y)∥∞≦1 (10)
In case of the incoherent illumination
I(x, y)=h(x, y)2*(m(x, y)m*(x, y)) (12)
the discrete version of (9,10) is a linear programming (LP) problem for the square amplitudes pi=mim*i of the mask pixels, and was addressed by the "branch and bound" algorithm. When partially coherent optics (5) is considered, the problem is complicated by the interactions mim*j between pixels and becomes a quadratic programming (QP) problem. Liu [18] applied simulated annealing to solve it. Consequently, Liu and Zakhor made important contributions to the understanding of the problem. They showed that it belongs to the class of the constrained optimization problems and should be addressed as such. Reduction to LP is possible; however, the leanest rigorous formulation relevant to microlithography must account for the partial coherence, so the problem is intrinsically not simpler than QP. New solution methods, more sophisticated than the "pixel flipping," have also been introduced.
The first pixel-based pattern optimization software package was developed by Y.-H. Oh, J-C Lee, and S. Lim [24], and called OPERA, which stands for "Optical Proximity Effect Reducing Algorithm". The optimization objective is loosely defined as "the difference between the aerial image and the goal image," so we assume that some variant of (8) is optimized. The solution method is a random "pixel flipping," which was first tried in [32]. Despite the simplicity of this algorithm, it can be made adequately efficient for small areas if the image intensity can be quickly calculated when one pixel is flipped. The drawback is that pixel flipping can easily get stuck in local minima, especially for PSM optimizations. In addition, the resulting patterns often have numerous disjoined pixels, so they have to be smoothed, or otherwise post-processed, to be manufacturable [23]. Despite these drawbacks, it has been experimentally proven in [17] that the resulting masks can be manufactured and indeed improve image resolution.
The study [28] of Rosenbluth, A., et al., considered mask optimization as a part of the combined source/mask inverse problem. Rosenbluth indicates important fundamental properties of inverse mask problems, such as non-convexity, which causes multiple local minima. The solution algorithm is designed to avoid local minima and is presented as an elaborate plan of sequentially solving several intermediate problems.
Inspired by the Rosenbluth paper and based on his dissertation and the SOCS decomposition [2], Socha delineated the interference mapping technique [34] to optimize contact hole patterns. The objective is to maximize the sum of the electrical fields A at the centers (xk, yk) of the contacts k=1 . . . N:
Here we have to guess the correct sign for each A(xk, yk), because the beneficial amplitude is either a large positive or a large negative number ([34] uses all positive numbers, so that the larger A the better). When kernel h of (7) is real (which is true for the unaberrated clear pupil), A and FB are also real-valued under approximation (7) and for the real mask m. By substituting (7) into (13), we get
where the dot denotes the inner product f·g=∫∫fgdxdy. Using the following relationship between the inner product, convolution (*), and cross-correlation (∘) of real functions
(f*g)·p=f·(g∘p), (15)
we can simplify (14) to
where function G1 is the interference map [34]. With (16) the problem (13) can be treated as LP with simple bounds (as defined in [8]) for the mask pixel vector m={mi}
−Gb·m→min −1≦mi≦1 (17)
In an innovative approach to the joined mask/source optimization by Erdmann, A., et al. [4], the authors apply genetic algorithms (GA) to optimize rectangular mask features and parametric source representation. GA can cope with complex non-linear objectives and multiple local minima.
The inverse mask problem can be reduced to a linear problem, as shown above for IML or in [11]. This, however, requires substantial simplifications. Perhaps richer and more interesting is modeling with a linear system and thresholding.
The linearization (7) can be augmented by the threshold operator to model the resist response. Inverse problems for such systems can be solved by Nashold projections [22]. Nashold projections belong to the class of image restoration techniques, rather than image optimization techniques, meaning that the method might not find a solution (because it does not exist at all), or, in the case when it does converge, we cannot state that this solution is the best possible. It has been noted in [30] that the solutions strongly depend on initial guesses and do not deliver the best phase assignment unless the algorithm is steered to it by a good initial guess. Moreover, if the initial guess has all phases set to 0, then so has the solution.
Nashold projections are based on the Gerchberg-Saxton [7] phase retrieval algorithm, which updates the current mask iterate mk via
mk+1=(PmPs)mk, (31)
where Ps is a projection operator onto the frequency support of the kernel h, and Pm is a projection operator that forces the thresholding (4). Gerchberg-Saxton iterations tend to stagnate. Fienup [5] proposed basic input-output (BIO) and hybrid input-output (HIO) variations that are less likely to be stuck in local minima. These variations can be generalized in the expression
mk+1=(PmPs+α(γ(PmPs−Ps)−Pm+I))mk, (32)
where I is an identity operator; α=1, γ=0 for BIO, α=1, γ=1 for HIO, and α=0, γ=0 for the Gerchberg-Saxton algorithm.
We implemented operator Pm as a projection onto the ideal image, and Ps as a projection onto the domain of the kernel h, i.e. Ps zeros out all frequencies of m̂ which are higher than the frequencies of the kernel h. The iterates (32) are very sensitive to the values of their parameters and the shape of the ideal image. We have found meaningful solutions only when the ideal image is smoothed. Otherwise the phases come out "entangled," i.e. the phase alternates along the lines as in FIGURE 1E, right, instead of alternating between lines. We used a Gaussian kernel with a diffusion length of 28 nm, which is slightly larger than the pixel size of 20 nm in our examples. The behavior of the iterates (32) is not yet sufficiently understood [36], which complicates the choice of α, γ. In our examples the convergence is achieved for α=0.9, γ=1 after T=5000 iterations. When α=0, γ=0, which corresponds to the original Nashold projections (31), the iterations quickly stagnate, converging to a non-printable mask. The runtime is proportional to T·M·log M, where M is the number of pixels. The convergence is slow because T is large, so that application to large layout areas is problematic.
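A compact sketch of the generalized input-output update (32); Pm and Ps are caller-supplied projection operators, and the frequency-support mask passed to make_Ps is an assumption of this example:

```python
import numpy as np

def fienup_step(m, Pm, Ps, alpha=0.9, gamma=1.0):
    """One iterate of (32): m_{k+1} = (PmPs + a*(g*(PmPs - Ps) - Pm + I))m_k.
    alpha=1, gamma=0 gives BIO; alpha=1, gamma=1 gives HIO; alpha=0 recovers
    the original Nashold/Gerchberg-Saxton projections (31).  Sketch only."""
    pm_ps = Pm(Ps(m))
    return pm_ps + alpha * (gamma * (pm_ps - Ps(m)) - Pm(m) + m)

def make_Ps(freq_support):
    """Ps: zero out all frequencies of m outside the kernel's support.
    freq_support is a boolean array in FFT layout (assumed here)."""
    return lambda m: np.real(np.fft.ifft2(np.fft.fft2(m) * freq_support))
```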
As shown in
In the quadratic formulations of the inverse problems, the coherent linearization (6) is not necessary. We can directly use the bilinear integral (5). Our goal here is to construct an objective function that is a quadratic form of the mask pixels. We start with (8) and replace the Euclidean norm (norm 2) with the Manhattan norm (norm 1):
F1[m(x, y)]=∥I(x, y)−Iideal(x, y)∥1→min. (34)
The next step is to assume that the ideal image is sharp: 0 in dark regions and 1 in bright regions, so that I(x, y)≧Iideal(x, y) in the dark regions and I(x, y)≦Iideal(x, y) in the bright regions. This lets us remove the modulus operation from the integral (34):
∥I(x, y)−Iideal(x, y)∥1=∫∫|I−Iideal|dxdy=∫∫w(x, y)(I(x, y)−Iideal(x, y))dxdy, (35)
where w(x, y) is 1 in dark regions and −1 in bright regions. Finally we can ignore the constant term in (35), which leads to the objective
Fw[m(x, y)]=∫∫wI(x, y)dxdy→min. (36)
The weighting function w can be generalized to have any positive value in dark regions, any negative value in bright regions, and 0 in the regions which we choose to ignore. Proper choice of this function covers the image slope objective (9), but not the threshold objective (1). Informally speaking, we seek to make bright regions as bright as possible, and dark regions as dark as possible. Substituting (5) into (36), we get
∫∫wI(x, y)dxdy=∫∫∫∫Q(x1, y1, x2, y2)m(x1, y1)m*(x2, y2)dx1dx2dy1dy2, (37)
where
Q(x1, y1, x2, y2)=∫∫w(x, y)q(x−x1, x−x2, y−y1, y−y2)dxdy. (38)
Discretization of (37) results in the following constrained QP:
Fw[m]=m*Qm→min, ∥m∥∞≦1 (39)
The complexity of this problem depends on the eigenvalues of the matrix Q. When all eigenvalues are non-negative, it is a convex QP and any local minimizer is global. This is a very advantageous property, because we can use any of the numerous QP algorithms to find the global solution and do not have to worry about local minima. Moreover, it is well known that a convex QP can be solved in polynomial time. The next case is when all eigenvalues are non-positive, a concave QP. If we remove the constraints, the problem becomes unbounded, with no minimum and no solutions. This means that the constraints play a decisive role: all solutions, either local or global, end up at some vertex of the box ∥m∥∞≦1. In the worst case scenario, the solver has to visit all vertices to find the global solution, which means that the problem is NP-complete, i.e. it may take an exponential amount of time to arrive at the global minimum. The last case is an indefinite QP, when both positive and negative eigenvalues are present. This is the most complex and intractable case: an indefinite QP can have multiple minima, all lying on the boundary.
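This classification can be checked numerically; a small sketch, assuming Q is given as a dense Hermitian matrix:

```python
import numpy as np

def classify_qp(Q, tol=1e-12):
    """Sketch: classify the QP (39) by the spectrum of Q.  All eigenvalues
    >= 0: convex (any local minimizer is global); all <= 0: concave
    (solutions at box vertices, NP-complete); mixed signs: indefinite."""
    mu = np.linalg.eigvalsh((Q + Q.conj().T) / 2)  # symmetrize for safety
    if mu.min() >= -tol:
        return "convex"
    if mu.max() <= tol:
        return "concave"
    return "indefinite"
```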
We conjecture that the problem (39) belongs to the class of indefinite QP. Consider the case of ideal coherent imaging, when Q is a diagonal matrix with the vector w along its diagonal. This means that the eigenvalues μ1, μ2 . . . of Q are the same as the components of the vector w, which are positive for dark pixels and negative for bright pixels. If there is at least one dark and one bright pixel, the problem is indefinite. Another consideration is that if we assume that (39) is convex, then the stationary internal point m=0, where the gradient is zero,
is the only solution, which is the trivial case of a completely dark mask. This means that (39) either has only the trivial solution, or it is non-convex.
A QP related to (39) was considered by Rosenbluth [28]:
m*Qdm→min, m*Qbm≧b (41)
where Qd and Qb deliver the average intensities in the dark and bright regions, respectively. The objective is to keep dark regions as dark as possible while maintaining an average intensity no worse than some value b in the bright areas. Using Lagrange multipliers, we can convert (41) to
m*(Qd−λQb)m→min, ∥m∥∞≦1, λ≧0 (42)
which is similar to (39).
Another metric of the complexity of (39) is the number of variables, i.e. the pixels in the area of interest. According to Gould [10], problems with on the order of 100 variables are small, those with more than 10³ are large, and those with more than 10⁵ are huge. Considering that maskless lithography can control the transmission of a 30 nm by 30 nm pixel [31], the QP (39) is large for areas larger than 1 um by 1 um, and huge for areas larger than 10 um by 10 um. This has important implications for the type of applicable numerical methods: in large problems we can use factorizations of the matrix Q; in huge problems factorizations are unrealistic.
For large problems, when factorization is still feasible, a dramatic simplification is possible by replacing the infinity norm with the Euclidean norm in the constraint of (39), which results in
Fw[m]=m*Qm→min, ∥m∥2≦1 (43)
Here we search for the minimum inside a hyper-sphere rather than the hyper-cube of (39). This seemingly minor change carries the problem out of the class NP-complete into P (the class of problems that can be solved in polynomial time). It has been shown in [35] that we can find the global minima of (43) using linear algebra. This result served as the basis for the "trust region" computational algorithm [13], which specifically addresses indefinite QP.
The problem (43) has the following physical meaning: we optimize the balance of making bright regions as bright as possible and dark regions as dark as possible while limiting the light energy ∥m∥22 coming through the mask. To solve this problem, we use the procedures outlined in [35, 13]. First we form the Lagrangian function of (43):
L(m, λ)=m*Qm+λ(∥m∥2−1). (44)
From here we deduce the first order necessary optimality conditions of Karush-Kuhn-Tucker (or KKT conditions, [12]):
2(Q+λI)m=0, λ(∥m∥−1)=0, λ≧0, ∥m∥≦1 (45)
Using Sorensen [35], we can state that (43) has a global solution if and only if we can find λ and m such that (45) is satisfied and the matrix Q+λI is positive semidefinite or positive definite. Let us find this solution.
First we notice that we have to choose λ large enough to compensate the smallest (negative) eigenvalue μ1 of Q, i.e.
λ≧|μ1|≧0. (46)
From the second condition in (45) we conclude that ∥m∥=1, that is, the solution lies on the surface of the hyper-sphere and not inside it. The last equation to be satisfied is the first one from (45). It has a non-trivial ∥m∥>0 solution only when the Lagrange multiplier λ equals the negative of one of the eigenvalues, λ=−μi. Together with (46) this has the unique solution λ=−μ1, because the other eigenvalues μ2, μ3, . . . are either positive, so that λ≧0 does not hold, or negative but with absolute values smaller than |μ1|, so that λ≧|μ1| does not hold.
Having determined that λ=−μ1, we can find m from 2(Q−μ1I)m=0 as the corresponding eigenvector m=v1. This automatically satisfies ∥m∥=1, because all eigenvectors are normalized to unit length. We conclude that (43) has a global solution which corresponds to the smallest negative eigenvalue of Q.
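A sketch of this analytical solution of (43), assuming Q is given as a dense symmetric matrix with at least one negative eigenvalue:

```python
import numpy as np

def solve_spherical_qp(Q):
    """Sketch of the analytical solution of (43): the global minimizer of
    m*Qm over ||m||_2 <= 1 is the unit eigenvector v1 of the smallest
    (negative) eigenvalue mu_1, with Lagrange multiplier lambda = -mu_1."""
    mu, v = np.linalg.eigh((Q + Q.T) / 2)   # eigenvalues in ascending order
    if mu[0] >= 0:
        raise ValueError("Q has no negative eigenvalue; only m = 0 solves (43)")
    return v[:, 0], -mu[0]                  # m = v1 (unit norm), lambda = -mu_1
```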
As we have shown, the minimum eigenvalue of Q and its eigenvector play a special role in the problem by defining the global minimum. However, the other negative eigenvalues are also important, because it is easy to see that any pair
λ=−μi>0 m=vi (47)
is a KKT point and as such defines a local minimum. The problem has as many local minima as negative eigenvalues. We may also consider starting our numerical minimization from one of these “good” minima, because it is possible that a local minimum leads to a better solution in the hyper-cube than a global minimum of the spherical problem.
Results of a similar analysis for the case of contact holes are displayed in
For positive masks, and in particular for binary masks, the constraint has to be tightened to ∥m−0.5∥∞≦0.5. Then the problem corresponding to (39) is
Fw[m]=m*Qm→min, ∥m−0.5∥∞≦0.5 (48)
This is also an indefinite QP and is NP-complete. Replacing the infinity norm here with the Euclidean norm, we get a simpler problem
m*Qm→min, ∥Δm∥2≦0.5, Δm=m−m0, m0={0.5, 0.5, . . . , 0.5} (49)
The Lagrangian can be written as
L(m, λ)=m*Qm+λ(∥m−m0∥2−0.25). (50)
The KKT point must be found from the following conditions
(Q+λI)Δm=−Qm0, λ(∥Δm∥2−0.25)=0, λ≧0, ∥Δm∥≦0.5 (51)
This is a more complex problem than (45) because the first equation is not homogeneous, and the pairs λ=−μi, Δm=vi are clearly not solutions. We can still apply the condition of the global minimum, λ≧−μ1>0 (Sorensen [35]). From the second condition we conclude that ∥Δm∥2=0.25, meaning that all solutions lie on the hyper-sphere with the center at m0. The case λ=−μ1 is eliminated because the first equation is not homogeneous, so we have to consider only λ>−μ1. Then Q+λI is non-singular, we can invert it, and find the solution
Δm=−(Q+λI)−1Qm0 (52)
The last step is to find the Lagrange multiplier λ that satisfies the constraint ∥Δm∥2=0.25, that is, we have to solve
∥(Q+λI)−1Qm0∥=0.5. (53)
The norm on the left of (53) monotonically increases from 0 to infinity as λ decreases toward −μ1, so (53) has exactly one solution in the interval −μ1<λ<∞. The pair λ, Δm that solves (52-53) is a global solution of (49). We conjecture that there are fewer KKT points of local minima in (49) than in (45) (maybe there are none), but this remains to be proven by analyzing the behavior of the norm in (53) when the Lagrange multiplier lies between negative eigenvalues. The solutions of (49) are supposed to show how to insert assist features when all contacts have the same phase.
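A sketch of solving (52)-(53) numerically: bisection on the Lagrange multiplier λ>−μ1, assuming the norm actually crosses 0.5 on this interval:

```python
import numpy as np

def solve_positive_sphere(Q, m0, iters=200):
    """Sketch: solve (52)-(53) for the positive-mask problem (49) by
    bisection on lambda > -mu_1 until ||dm|| = ||(Q+lambda I)^-1 Q m0|| = 0.5,
    then return m = m0 + dm.  Names and tolerances are illustrative."""
    n = Q.shape[0]
    mu1 = np.linalg.eigvalsh((Q + Q.T) / 2)[0]        # smallest eigenvalue
    b = Q @ m0
    norm = lambda lam: np.linalg.norm(np.linalg.solve(Q + lam * np.eye(n), b))
    lo, hi = -mu1 + 1e-9, -mu1 + 1.0
    while norm(hi) > 0.5:                             # bracket the root
        hi *= 2.0
    for _ in range(iters):                            # norm decreases in lambda
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm(mid) > 0.5 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return m0 - np.linalg.solve(Q + lam * np.eye(n), b), lam  # m = m0 + dm
```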
Consider the image fidelity error objective (8):
F1[m(x, y)]=∥I(x, y)−Iideal(x, y)∥→min. (54)
We can state this in different norms: Manhattan, infinity, Euclidean, etc. The simplest case is the Euclidean norm, because (54) then becomes a polynomial of the fourth degree (a quartic polynomial) in the mask pixels. The objective function is very smooth in this case, which eases the application of gradient-descent methods. While quadratic optimization theory is well developed, polynomial optimization is an area of growing research interest, in particular for quartic polynomials [27].
We can generalize (54) by introducing weighting w=w(x, y) to emphasize important layout targets and consider smoothing in Sobolev norms as in [20]:
Fw[m(x, y)]2=∥√w·(I−Iideal)∥22+α1∥L1m∥22+α2∥L2m∥22+α3∥m−m0∥22→min, (55)
where L1, L2 are the operators of first and second derivatives, m0=m0(x, y) is some preferred mask configuration that we want to stay close to (for example, the target), and α1, α2, α3 are smoothing weights. The solutions of (55) increase image fidelity; however, numerical experiments show that the contour fidelity of the images is not adequate. To address this, we explicitly add (1) into (55):
If the desired output is a two-, tri-, or any other multi-level tone mask, we can add a penalty for masks with wrong transmissions. The simplest form of the penalty is a polynomial expression; for example, for the tri-tone Levenson-type masks with transmissions −1, 0, and 1, we construct the objective as
where e is a one-vector. Despite all these complications, the objective function is still a polynomial of the mask pixels. To optimize for the depth of focus, the optimization of (57) can be conducted off-focus, as was suggested in [16, 20]. After discretization, (55) becomes a non-linear programming problem with simple bounds.
We expect this problem to inherit the property of having multiple minima from the corresponding simpler QP, though the smoothing operators of (57) should increase the convexity of the objective. In the presence of multiple local minima the solution method and starting point are highly consequential: some solvers tend to converge to "bad" local solutions with disjoined mask pixels and entangled phases; others navigate the solution space better and choose smoother local minima. Newton-type algorithms, which rely on information about second derivatives, should be used with caution, because in the presence of concavity in (57) the Newtonian direction may not be a descent direction. The branch-and-bound global search techniques [18] are not the right choice because they are not well-suited for large multi-dimensional optimization problems. It is also tempting to perform a non-linear transformation of the variables to remove the constraints and convert the problem to the unconstrained case, for example by using the transformation xi=tanh(mi) or mi=sin(xi) as in [26].
Reasonable choices for solving (57) are descent algorithms with starting points found from the analytical solutions of the related QP. We apply the algorithm of local variations ("one variable at a time"), which is similar in spirit to pixel flipping [32, 17], and also use a variation of the steepest descent by Frank and Wolfe [21] to solve the constrained optimization problems.
In the method of local variations, we choose a step Δ1 and compare three exploratory transmissions for the pixel i: mi, mi+Δ1, and mi−Δ1. If one of these values violates the constraints, it is pulled back to the boundary. The best of the three values is accepted. We try all pixels, in random, exhaustive, or circular order, until no further improvement is possible. Then we reduce the step to Δ2<Δ1 and repeat the process until the step is deemed sufficiently small. This algorithm is simple to implement. It naturally takes care of the simple (box) constraints and avoids a general problem of other, more sophisticated techniques, which may converge prematurely to a non-stationary point. This algorithm calculates the objective function numerous times; however, the runtime cost of its exploratory calls is relatively low with electrical field caching (see the next section). Other algorithms may require fewer but more costly non-exploratory calls. This makes the method of local variations a legitimate tool for solving the problem, though descent methods that use convolution for the gradient calculations are faster.
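A sketch of the method of local variations with box constraints; F is a caller-supplied objective, and the step schedule is an assumed example:

```python
import numpy as np

def local_variations(m, F, steps=(0.5, 0.25, 0.1), lo=-1.0, hi=1.0):
    """Sketch of the method of local variations: for pixel i try m_i and
    m_i +/- d, pull violations back into the box [lo, hi], keep the best,
    sweep until no improvement, then shrink the step d.  F(m) evaluates
    the objective; field caching makes these exploratory calls cheap."""
    best = F(m)
    for d in steps:
        improved = True
        while improved:
            improved = False
            for idx in np.ndindex(m.shape):
                keep = m[idx]
                for trial in (keep + d, keep - d):
                    m[idx] = min(max(trial, lo), hi)   # project into the box
                    f = F(m)
                    if f < best:
                        best, keep, improved = f, m[idx], True
                m[idx] = keep                          # best of the three values
    return m
```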
The Frank and Wolfe method is an iterative gradient descent algorithm for solving constrained problems. At each step k we calculate the gradient ∇Fk of the objective and then replace the non-linear objective with its linear approximation. This reduces the problem to an LP with simple bounds:
∇Fk·m→min, ∥m∥∞≦1 (59)
The solution m=lk of this LP is used to determine the descent direction
pk=lk−mk−1. (60)
Then a line search is performed in the direction pk to minimize the objective as a function of one variable γ∈[0,1]:
F[mk−1+γpk]→min. (61)
The mask mk=mk−1+γpk is accepted as the next iterate. The iterations continue until the convergence criteria are met. Electrical field caching helps to speed up the line search and, if numerical differentiation is used, the gradient calculations.
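A sketch of the Frank-Wolfe iteration (59)-(61); the box-constrained LP (59) is solved in closed form elementwise, and the grid-scan line search is an assumption of this example, not the paper's implementation:

```python
import numpy as np

def frank_wolfe(m, F, gradF, iters=50, line_pts=21):
    """Sketch of the Frank-Wolfe descent (59)-(61) for min F(m), ||m||_inf <= 1.
    The box-constrained LP (59) has the closed-form solution l_k = -sign(grad);
    the line search over gamma in [0, 1] is a plain grid scan here."""
    for _ in range(iters):
        l = -np.sign(gradF(m))                 # minimizer of grad.m over the box
        p = l - m                              # descent direction (60)
        gammas = np.linspace(0.0, 1.0, line_pts)
        vals = [F(m + g * p) for g in gammas]  # line search (61)
        m = m + gammas[int(np.argmin(vals))] * p
    return m
```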
The gradient descent algorithms require recalculation of the objective and its gradient at each iteration. The gradient of the objective function can be calculated numerically or analytically. When the objective is expressed in norm 2 as in (55), the derivatives can be calculated analytically, yielding efficient representation through convolutions.
Consider the objective in the form of the weighted inner product (f, g)=∫∫wfgdxdy:
Fw2[m]=∥√w·(I−Iideal)∥2=(I−Iideal, I−Iideal). (63)
Small variations δm of the mask m cause the following changes in the objective:
Let us find δI=I(m+δm)−I(m). Using the SOCS formulation (60A) and neglecting O(δm2) terms, we get
where Ai is defined in (60A). To use this in (64), we have to find the scalar product of δI with ΔI=I−Iideal:
Using the following property of the weighted inner product
(f*g, h)=f·(g*∘wh) (67)
we can convert (66) to the form
Substituting this into (64) gives us an analytical expression for the gradient of the objective
This formula lets us calculate the gradient of the objective through cross-correlation or convolution as an O(N·M·log M) FFT operation, which is significantly faster than numerical differentiation with its O(N·M2) runtime.
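A sketch of this FFT-based gradient under the SOCS model I=Σi|hi*m|2 for a real mask m; the constant factor follows the derivation above but should be checked against the full formula, and all names are illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

def grad_fidelity(m, kernels, w, I_ideal):
    """Sketch of the analytical gradient of (63) under the SOCS model
    I = sum_i |h_i * m|^2.  Each term is a cross-correlation, computed as
    convolution with the flipped conjugate kernel, so the total cost is
    O(N M log M) with FFTs instead of O(N M^2) numerical differentiation."""
    fields = [fftconvolve(m, h, mode='same') for h in kernels]   # A_i = h_i * m
    dI = sum(np.abs(A) ** 2 for A in fields) - I_ideal           # I - I_ideal
    grad = np.zeros_like(m, dtype=float)
    for h, A in zip(kernels, fields):
        corr = np.conj(h)[::-1, ::-1]            # cross-correlation kernel
        grad += 4.0 * np.real(fftconvolve(w * dI * np.conj(A), corr,
                                          mode='same'))
    return grad
```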
The speed of the local variations algorithm critically depends on the ability to quickly recalculate the image intensity when one or a few pixels change. We use an electrical field caching procedure to speed up this process.
According to SOCS approximation [3], the image intensity is the following sum of convolutions of kernels hi (x, y) with the mask m(x, y):
Suppose that we know the electrical fields Ai0 for the mask m0 and want to calculate intensity for the slightly different mask m′. Then
A′i=Ai0+hi*(m′−m0). (61A)
These convolutions can be quickly calculated by direct multiplication, which is an O(d·M·N) operation, where d is the number of pixels that differ between m0 and m′, M is the pixel count of the kernels, and N is the number of kernels. This can be faster than convolution by FFT. By constantly updating the cache A′i, we can quickly re-calculate intensities for small evolutionary mask changes.
The additivity of the electrical fields can also be exploited to speed up intensity calculations in the line search (61). If the mask mk−1 delivers electrical fields Aik−1, and the mask pk delivers Bik, then the intensity from the mask m=mk−1+γpk can be quickly calculated through its electrical fields Ai:
Ai=Aik−1+γBik (62A)
This avoids the convolutions of (60A) and reduces the intensity calculation to multiplications of the partial electrical fields Ai.
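A sketch of electrical field caching along the lines of (61A)-(62A); the class and method names are assumptions of this example:

```python
import numpy as np
from scipy.signal import fftconvolve

class FieldCache:
    """Sketch: cache the partial electrical fields A_i = h_i * m and update
    them incrementally, rather than redoing N full convolutions."""
    def __init__(self, m, kernels):
        self.kernels = kernels
        self.shape = m.shape
        self.fields = [fftconvolve(m, h, mode='same') for h in kernels]

    def update_pixel(self, idx, delta):
        # (61A): A'_i = A_i + h_i * (m' - m0).  A production version would
        # add a shifted copy of h_i directly (the O(d*M*N) path); the sparse
        # convolution below keeps the sketch short.
        dm = np.zeros(self.shape)
        dm[idx] = delta
        for i, h in enumerate(self.kernels):
            self.fields[i] = self.fields[i] + fftconvolve(dm, h, mode='same')

    def intensity(self):
        # SOCS intensity I = sum_i |A_i|^2 from the cached fields.
        return sum(np.abs(A) ** 2 for A in self.fields)

    def line_search_intensity(self, direction_cache, gamma):
        # (62A): A_i = A_i^{k-1} + gamma*B_i^k, so intensities along the
        # search direction need no new convolutions.
        return sum(np.abs(A + gamma * B) ** 2
                   for A, B in zip(self.fields, direction_cache.fields))
```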
In
The next example demonstrates solutions where the main features have the same phase and the assist features can be phase-shifted,
We classified methods for solving inverse mask problems as linear, quadratic, and non-linear. We showed how to solve the quadratic problem for the case of a spherical constraint. Such analytical solutions can be used as a first step in solving non-linear problems. In the case of contacts, these solutions are immediately applicable to assigning contact phases and finding positions of assist features. A composite objective function is proposed for the non-linear optimizations that combines objectives of image fidelity and contour fidelity and penalizes non-smooth and out-of-tone solutions. We applied the method of local variations and a gradient descent to the non-linear problem. We proposed an electrical field caching technique. Significant speedup is achieved in the descent algorithms by using the analytical gradient of the objective function. This enables layout inversion on a large scale as an M log M operation for M pixels.
Still further mathematical detail of a method of calculating mask pixel transmission characteristics in accordance with an embodiment of the present invention is set forth in U.S. Provisional Patent Application No. 60/657,260, which is incorporated by reference herein, as well as in the paper "Solving Inverse Problems of Optical Microlithography" by Yuri Granik of Mentor Graphics Corporation, reproduced below (with slight edits).
The direct problem of microlithography is to simulate printing features on the wafer under given mask, imaging system, and process characteristics. The goal of inverse problems is to find the best mask and/or imaging system and/or process to print the given wafer features. In this study we will describe and compare solutions of inverse mask problems.
The pixel-based inverse problem of mask optimization (or "layout inversion") is harder than the inverse source problem, especially for partially-coherent systems. It can be stated as a non-linear constrained minimization problem over a complex domain, with a large number of variables. We compare the method of Nashold projections, variations of Fienup phase-retrieval algorithms, coherent approximation with deconvolution, local variations, and descent searches. We propose an electrical field caching technique to substantially speed up the searching algorithms. We demonstrate applications to phase-shifted masks, assist features, and maskless printing.
We confine our study to the inverse problem of finding the best mask. Other inverse problems, like non-dense mask optimization or combined source/mask optimization, however important, are not in scope. We also concentrate on dense formulations of problems, where the mask is discretized into pixels, and mostly skip the traditional edge-based OPC [25] and source optimization approaches [1].
The layout inversion goal appears to be similar or even the same as found in Optical Proximity Correction (OPC) or Resolution Enhancement Techniques (RET). However, we would like to establish the inverse mask problem as a mathematical problem being narrowly formulated, thoroughly formalized, and strictly solvable, thus differentiating it from the engineering techniques to correct (“C” in OPC) or to enhance (“E” in RET) the mask. Narrow formulation helps to focus on the fundamental properties of the problem. Thorough formalization gives opportunity to compare and advance solution techniques. Discussion of solvability establishes existence and uniqueness of solutions, and guides formulation of stopping criteria and accuracy of the numerical algorithms.
The results of pixel-based inversions can be realized by the optical maskless lithography (OML) [31]. It controls pixels of 30×30 nm (in wafer scale) with 64 gray levels. The mask pixels can also have negative real values, which enables phase-shifting.
Strict formulations of the inverse problems relevant to microlithography applications first appear in the pioneering studies of B. E. A. Saleh and his students S. Sayegh and K. Nashold. In [32], Sayegh differentiates image restoration from image design (a.k.a. image synthesis). In both, the image is given and the object (mask) has to be found. However, in image restoration, it is guaranteed that the image is achieved by some object. In image design the image may not be achievable by any object, so that we have to target the image as close as possible to the desired ideal image. The difference is analogous to solving for a function zero (image restoration) versus minimizing a function (image design). Sayegh proceeds to state the image design problem as an optimization problem of minimizing the threshold fidelity error Fθ in trying to achieve the given threshold θ at the boundary C of the target image ([32], p. 86):
where n=2 and n=4 options were explored; I(x, y) is the image from the mask m(x,y); x, y are image and mask coordinates. Optical distortions were modeled by the linear system of convolution with a point-spread function h(x,y), so that
I(x, y)=h(x, y)*m(x, y), (2A)
and for the binary mask
m(x, y)={0, 1} (3A)
Sayegh proposes an algorithm of one-at-a-time "pixel flipping": the mask is discretized, and then pixel values 0 and 1 are tried. If the error (1A) decreases, the new pixel value is accepted; otherwise it is rejected, and we try the next pixel.
Nashold [22] considered a bandlimiting operator in place of the point-spread function (2A). Such a formulation facilitates application of the alternate projection techniques widely used in image processing for reconstruction, usually referenced as the Gerchberg-Saxton phase retrieval algorithm [7]. In the Nashold formulation, one searches for a complex-valued mask that is bandlimited to the support of the point-spread function and also delivers images that are above the threshold in the bright areas B and below the threshold in the dark areas D of the target:
x,y∈B: I(x, y)>θ; x,y∈D: I(x,y)<θ (4A)
Both studies [32] and [22] advanced the solution of inverse problems for linear optics. However, the partially coherent optics of microlithography is not a linear but a bilinear system [29], so that instead of (2A) the following holds:
I(x, y)=∫∫∫∫q(x−x1, x−x2, y−y1, y−y2)m(x1, y1)m*(x2, y2)dx1dx2dy1dy2, (5A)
where q is a 4D kernel of the system. While pixel flipping [32] is also applicable to bilinear systems, the Nashold technique relies on the linearity. To get around this limitation, Pati and Kailath [25] proposed to approximate the bilinear operator by one coherent kernel h, a possibility that follows from Gamo's results [6]:
I(x, y)≈λ|h(x, y)*m(x, y)|2, (6A)
where the constant λ is the largest eigenvalue of q, and h is the corresponding eigenfunction. With this the system becomes linear in the complex amplitude A of the electrical field
A(x, y)=√λ h(x, y)*m(x, y). (7A)
Because of this and because h is bandlimited, the Nashold technique is applicable.
Y. Liu and A. Zakhor [19, 18] advanced along the lines started by the direct algorithm [32]. In [19] they introduced an optimization objective as the Euclidean distance ∥·∥2 between the target Iideal and the actual wafer image:
F1[m(x, y)]=∥I(x, y)−Iideal(x, y)∥2→min. (8A)
This was later used in (1A) as image fidelity error in source optimization. In addition to the image fidelity, the study [18] optimized image slopes in the vicinity of the target contour C:
where C+ε is a sized-up and C−ε is a sized-down contour C; ε is a small bias. This objective has to be combined with the requirement that the mask be a passive optical element, m(x, y)m*(x, y)≦1, or, using the infinity norm ∥.∥∞=max|.|, we can express this as
∥m(x, y)∥∞≦1. (10A)
In case of the incoherent illumination
I(x, y)=h(x, y)2*(m(x, y)m*(x, y)) (12A)
the discrete version of (9A, 10A) is a linear programming (LP) problem for the square amplitudes pi=mim*i of the mask pixels, and was addressed by the "branch and bound" algorithm. When partially coherent optics (5A) is considered, the problem is complicated by the interactions mim*j between pixels and becomes a quadratic programming (QP) problem. Liu [18] applied simulated annealing to solve it. Consequently, Liu and Zakhor made important contributions to the understanding of the problem. They showed that it belongs to the class of the constrained optimization problems and should be addressed as such. Reduction to LP is possible; however, the leanest rigorous formulation relevant to microlithography must account for the partial coherence, so the problem is intrinsically not simpler than QP. New solution methods, more sophisticated than the "pixel flipping", have also been introduced.
The first pixel-based pattern optimization software package was developed by Y.-H. Oh, J-C Lee, and S. Lim [24], and called OPERA, which stands for "Optical Proximity Effect Reducing Algorithm." The optimization objective is loosely defined as "the difference between the aerial image and the goal image," so we assume that some variant of (8A) is optimized. The solution method is a random "pixel flipping", which was first tried in [32]. Despite the simplicity of this algorithm, it can be made adequately efficient if the image intensity can be quickly calculated when one pixel is flipped. The drawback is that pixel flipping can easily get stuck in local minima, especially for PSM optimizations. In addition, the resulting patterns often have numerous disjoined pixels, so they have to be smoothed, or otherwise post-processed, to be manufacturable [23]. Despite these drawbacks, it has been experimentally proven in [17] that the resulting masks can be manufactured and indeed improve image resolution.
The study [28] of Rosenbluth, A., et al., considered mask optimization as a part of the combined source/mask inverse problem. Rosenbluth indicates important fundamental properties of inverse mask problems, such as non-convexity, which causes multiple local minima. The solution algorithm is designed to avoid local minima and is presented as an elaborate plan of sequentially solving several intermediate problems.
Inspired by the Rosenbluth paper and based on his dissertation and the SOCS decomposition [3], Socha delineated the interference mapping technique [34] to optimize contact hole patterns. The objective is to maximize the sum of the electrical fields A at the centers (xk, yk) of the contacts k=1 . . . N:
Here we have to guess the correct sign for each A(xk, yk) because the beneficial amplitude is either a large positive or a large negative number ([34] uses all positive numbers, so that the larger A the better). When kernel h of (7A) is real (which is true for the unaberrated clear pupil), A and FB are also real-valued under approximation (7A) and for the real mask m. By substituting (7A) into (13A), we get
where the dot denotes the inner product f·g=∫∫fgdxdy. Using the following relationship between the inner product, convolution (*), and cross-correlation (∘) of real functions
(f*g)·p=f·(g∘p), (15A)
we can simplify (14A) to
where function G1 is the interference map [34]. With (16A) the problem (13A) can be treated as LP with simple bounds (as defined in [8]) for the mask pixel vector m={mi}
−Gb·m→min, −1≦mi≦1 (17A)
In an innovative approach to the joined mask/source optimization by Erdmann, A., et al. [4], the authors apply genetic algorithms (GA) to optimize rectangular mask features and a parametric source representation. GA can cope with complex non-linear objectives and multiple local minima. It has to be proven, though, as for any stochastically-based technique, that the runtime is acceptable and the quality of the solutions is adequate. Here we limit ourselves to the dense formulations and more traditional mathematical methods, so the research direction of [4] and [15], however intriguing, is not pertinent to this study.
The first systematic treatment of source optimization appeared in [16]. This was limited to radially-dependent sources and periodic mask structures, with the Michelson contrast as an optimization objective. Simulated annealing was applied to solve the problem. After this study, parametric [37], contour-based [1], and dense formulations [28], [12], [15] were introduced. In [12], the optimization is reduced to solving a non-negative least square (NNLS) problem, which belongs to the class of constrained QP problems. The GA optimization was implemented in [15] for the pixelized source, with the objective of maximizing image slopes at the important layout cutlines.
The inverse mask problem can be reduced to a linear problem, including traditional LP, using several simplification steps. The first step is to accept coherent approximation (6A,7A). Second, we have to guess correctly the best complex amplitude Aideal of the electrical field from
Iideal=AidealA*ideal, (18A)
where Iideal is the target image. If we consider only the real masks m=Re[m] and real kernels h=Re[h], then from (7A) we conclude that A is real and thus we can set Aideal to be real-valued. From (18A) we get
Aideal=±√Iideal, (19A)
which means that Aideal is either +1 or −1 in bright areas of the target, and 0 in dark areas. If the ideal image has M bright pixels, the number of possible "pixel phase assignments" is exponentially large, 2^M. This can lead to phase-edges in wrong places, but of course can be avoided by assigning the same value to all pixels within a bright feature: for N bright features we get 2^N different guesses. After we choose one of these combinations and substitute it as Aideal into (7A), we have to solve
Aideal(x, y)=√λ h(x, y)*m(x, y) (20A)
for m. This is a deconvolution problem. From the zoo of deconvolution algorithms, we demonstrate Wiener filtering, which solves (20A) in a least square sense. After applying the Fourier transformation F[ . . . ] to (20A) and using the convolution theorem F[h*m]=F[h]F[m], we get
where the circumflex denotes Fourier transforms: m̂=F[m], ĥ=F[h]. The Wiener filter is a modification of (21A) where a relative noise power P is added to the denominator, which helps to avoid division by 0 and suppresses high harmonics:
The final mask is found by the inverse Fourier transformation:
As the simplest choice, we set P=const>0 to be large enough to satisfy the mask constraint (11A). The results are presented in
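A sketch of this Wiener-filtered deconvolution (21A)-(23A), assuming h is sampled on the layout grid with its peak at index (0, 0) in FFT convention; names are illustrative:

```python
import numpy as np

def wiener_mask(A_ideal, h, P=0.1):
    """Sketch of (21A)-(23A): recover a mask m from A_ideal ~ h * m by a
    Wiener filter.  The noise power P > 0 suppresses high harmonics and
    avoids division by near-zero denominators."""
    H = np.fft.fft2(h, s=A_ideal.shape)
    M_hat = np.conj(H) * np.fft.fft2(A_ideal) / (np.abs(H) ** 2 + P)
    return np.real(np.fft.ifft2(M_hat))    # final mask via inverse FFT
```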
We can also directly solve (20A) in the least square sense
∥√λ h(x, y)*m(x, y)−Aideal(x, y)∥→min. (24A)
In the matrix form
∥Hm−Aideal∥→min, Hij=√λ hi−j (25A)
Matrix H has multiple small eigenvalues, so the problem is ill-posed. The standard technique for dealing with this is to regularize it by adding the norm of the solution to the minimization objective [14]:
∥Hm−Aideal∥2+α∥m∥2→min, (26A)
where the regularization parameter α is chosen from secondary considerations. In our case we choose α large enough to achieve ∥m∥∞=1. The problem (26A) belongs to the class of unconstrained convex quadratic optimization problems, with a guaranteed unique solution in non-degenerate cases. It can be solved by the methods of linear algebra, because (26A) is equivalent to solving
(H+αI)m=Aideal (27A)
by the generalized inversion [12] of the matrix H+αI. The results are presented in
This method delivers pagoda-like corrections to image corners. Some hints of hammerheads and serifs can be seen in the mask contours. Line ends are not corrected. Comparison of contrasts with the case when the mask is the same as the target shows improved contrast, especially between the comb and the semi-isolated line.
Further detailing of the problem (26A) is possible by explicitly adding mask constraints, that is, we solve
∥Hm−Aideal∥2+α∥m∥2→min ∥m∥∞≦1 (28A)
This is a constrained quadratic optimization problem. It is convex, as is any linear least square problem with simple bounds. Convexity guarantees that any local minimizer is global, so the choice of solution method is not consequential: all proper solvers converge to the same (global) solution. We used the MATLAB routine lsqlin to solve (28A). The results are presented in
Any linear functional of A is also linear in m; in particular, we can form a linear objective by integrating A over some parts of the layout, as in (13A). One of the reasonable objectives formed by such a procedure is the sum of electrical amplitudes over the region B, which consists of all or some parts of the bright areas:
that is, we try to make the bright areas as bright as possible. Using the same mathematical trick as in (14A), this is reduced to the linear objective
−Gb·m→min, (29A)
where Gb=h∘b, and b is a characteristic function of the bright areas. This seems to work well as the basis for contact optimizations. It is harder to form the region B for other layers. If we follow the suggestion [4] to use the centers of lines, then light through the corners becomes dominant, spills over into the dark areas, and damages image fidelity. This suggests that we have to keep the dark areas under control as well. Using constraints similar to (4A), we can require each dark pixel to be of limited brightness θ:
−x, y∈D:−θ≦A(x, y)≦θ, (30A)
or in the discrete form
−θHdm≦θ. (30B)
where Hd is matrix H without rows correspondent to the bright regions. Though equations (28A) and (30B) form a typical constrained LP problem, MATLAB simplex and interior point algorithms failed to converge, perhaps because the matrix of constraints has large null-space.
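For reference, a minimal sketch of how this LP could be assembled follows, assuming precomputed Gb, Hd, and θ; as noted above, standard solvers may fail to converge on this problem, so the sketch is illustrative only, and the name solve_lp is an assumption.

import numpy as np
from scipy.optimize import linprog

def solve_lp(Gb, Hd, theta):
    c = -Gb                                  # objective (29A): maximize light through B
    A_ub = np.vstack([Hd, -Hd])              # encode -theta <= Hd m <= theta (30B)
    b_ub = np.full(2 * Hd.shape[0], theta)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(-1.0, 1.0))
    return res.x if res.success else None    # solver may fail to converge here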
The linearization (7A) can be augmented by the threshold operator to model the resist response. This leads to Nashold projections [22]. Nashold projections belong to the class of image restoration techniques rather than image optimizations, meaning that the method might not find a solution (because one may not exist at all), or, in the case when it does converge, we cannot state that this solution is the best possible. It has been noted in [20] that the solutions depend on the initial guess and do not deliver the best phase assignment unless the algorithm is steered to it by a good initial guess. Moreover, if the initial guess has all phases set to 0, then so has the solution.
Nashold projections are based on the Gerchberg-Saxton [7] phase retrieval algorithm, which updates the current mask iterate mk via
mk+1=(PmPs)mk, (31A)
where Ps is a projection operator onto the frequency support of the kernel h, and Pm is a projection operator that forces the thresholding (4A). Gerchberg-Saxton iterations tend to stagnate. Fienup [5] proposed the basic input-output (BIO) and hybrid input-output (HIO) variations, which are less likely to get stuck in local minima. These variations can be generalized in the expression
mk+1 = (PmPs + α(γ(PmPs − Ps) − Pm + I))mk, (32A)
where I is an identity operator; α=1, γ=0 for BIO, α=1, γ=1 for HIO, and α=0, γ=0 for the Gerchberg-Saxton algorithm.
We implemented operator Pm as a projection onto the ideal image
and Ps as a projection onto the domain of the kernel h, i.e., Ps zeros out all frequencies of m̂ that are higher than the frequencies of the kernel h. The iterates (32A) are very sensitive to the values of their parameters and the shape of the ideal image. We were able to find solutions only when the ideal image is smoothed. We used a Gaussian kernel with a diffusion length of 28 nm, which is slightly larger than the 20 nm pixel size in our examples. The behavior of the iterates (32A) is not yet sufficiently understood [36], which complicates the choice of α, γ. We found that in our examples convergence is achieved for α=0.9, γ=1 after 5000 iterations. When α=0, γ=0, which corresponds to (31A), the iterations quickly stagnate, converging to a non-printable mask.
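A minimal sketch of the generalized input-output iteration (32A) follows; the projection operators Pm and Ps are passed in as callables, since their concrete forms depend on the imaging model, and all function names are illustrative. A frequency-support projection of the kind described above is sketched as ps_lowpass.

import numpy as np

def ps_lowpass(m, support):
    """Ps: zero out all frequencies of m outside the kernel's support
    (support is a 0/1 mask in the frequency domain)."""
    return np.real(np.fft.ifft2(np.fft.fft2(m) * support))

def gio_iterate(m0, Pm, Ps, alpha, gamma, n_iter):
    """Generalized input-output iteration (32A):
    m_{k+1} = (Pm Ps + alpha*(gamma*(Pm Ps - Ps) - Pm + I)) m_k.
    alpha=1, gamma=0 gives BIO; alpha=1, gamma=1 gives HIO;
    alpha=0 recovers the Gerchberg-Saxton iteration (31A)."""
    m = m0
    for _ in range(n_iter):
        ps_m = Ps(m)
        pm_ps = Pm(ps_m)
        m = pm_ps + alpha * (gamma * (pm_ps - ps_m) - Pm(m) + m)
    return m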
As shown in
In the quadratic formulations of the inverse problems, the coherent linearization (6A) is not necessary. We can directly use the bilinear integral (5A). Our goal here is to construct an objective function that is a quadratic form of the mask pixels. We start with (8A) and replace the Euclidean norm (norm 2) with the Manhattan norm (norm 1):
F1[m(x, y)] = ∥I(x, y) − Iideal(x, y)∥1 → min. (34A)
The next step is to assume that the ideal image is sharp, 0 in dark regions and 1 in bright regions, so that I(x, y) ≧ Iideal(x, y) in the dark regions and I(x, y) ≦ Iideal(x, y) in the bright regions. This lets us remove the modulus operation from the integral (34A):
∥I(x, y)−Iideal(x, y)∥1=∫∫|I−Iideal|dxdy=∫∫w(x, y)(I(x, y)−Iideal(x, y))dxdy, (35A)
where w(x, y) is 1 in dark regions and −1 in bright regions. Finally, we can ignore the constant term in (35A), which leads to the objective
Fw[m(x, y)] = ∫∫wI(x, y)dxdy → min. (36A)
The weighting function w can be generalized to have any positive value in dark regions, any negative value in bright regions, and 0 in the regions which we choose to ignore. Proper choice of this function covers the image slope objective (9A), but not the threshold objective (1A). Informally speaking, we seek to make bright regions as bright as possible, and dark regions as dark as possible. Substituting (5A) into (36A), we get
∫∫wI(x, y)dxdy=∫∫∫∫Q(x1, y1, x2, y2)m(x1, y1)m*(x2, y2)dx1dx2dy1dy2, (37A)
where
Q(x1, y1, x2, y2)=∫∫w(x, y)q(x−x1, x−x2, y−y1, y−y2)dxdy. (38A)
Discretization of (37A) results in the following constrained QP:
Fw[m] = m*Qm → min, ∥m∥∞ ≦ 1. (39A)
The complexity of this problem depends on the eigenvalues of matrix Q. When all eigenvalues are non-negative, it is a convex QP and any local minimizer is global. This is a very useful property, because we can use any of the numerous QP algorithms to find the global solution and do not have to worry about local minima. Moreover, it is well known that a convex QP can be solved in polynomial time. The next case is when all eigenvalues are non-positive, a concave QP. If we remove the constraints, the problem becomes unbounded (no solutions). This means that the constraints play a decisive role: all solutions, either local or global, end up at some vertex of the box ∥m∥∞ ≦ 1. In the worst-case scenario, the solver has to visit all vertices to find the global solution, which means that the problem is NP-complete, i.e., it may take an exponential amount of time to arrive at the global minimum. The last case is an indefinite QP, when both positive and negative eigenvalues are present. This is the most complex and the most intractable case. An indefinite QP can have multiple minima, all lying on the boundary.
We conjecture that the problem (39A) belongs to the class of indefinite QP. Consider the case of the ideal coherent imaging, when Q is a diagonal matrix. Vector w lies along its diagonal. This means that eigenvalues μ1, μ2 . . . of Q are the same as components of the vector w, which are positive for dark pixels and negative for bright pixels. If there is at least one dark and one bright pixel, the problem is indefinite. Another consideration is that if we assume that (39A) is convex, then the stationary internal point m=0 where the gradient is zero
is the only solution, which is the trivial case of a completely dark mask. This means that (39A) either has a trivial (global) solution or is non-convex.
A QP related to (39A) was considered by Rosenbluth [28]:
m*Qdm → min, m*Qbm ≧ b, (41A)
where Qd and Qb deliver the average intensities in the dark and bright regions, respectively. The objective is to keep dark regions as dark as possible while maintaining average intensity no worse than some value b in bright areas. Though the problem was stated for the special case of an off-centered point source, the structure of (41A) is very similar to (39A). Using Lagrange multipliers, we can convert (41A) to
m*(Qd−λQb)m→min ∥m∥∞≦1 λ≧0 (42A)
which is similar to (39A).
Another metric of the complexity of (39A) is the number of variables, i.e., the pixels in the area of interest. According to Gould [10], problems with on the order of 100 variables are small, those with more than 10^3 variables are large, and those with more than 10^5 are huge. Considering that maskless lithography can control the transmission of a 30 nm by 30 nm pixel [31], the QP (39A) is large for areas larger than 1 um by 1 um, and huge for areas larger than 10 um by 10 um. This has important implications for the type of applicable numerical methods: in large problems we can use factorizations of the matrix Q; in huge problems factorizations are unrealistic.
For large problems, when factorization is still feasible, a dramatic simplification is possible by replacing the infinity norm with the Euclidean norm in the constraint of (39A), which results in
Fw[m] = m*Qm → min, ∥m∥2 ≦ 1. (43A)
Here we search for the minimum inside a hyper-sphere rather than the hyper-cube of (39A). This seemingly minor change carries the problem out of the class NP-complete and into P (the class of problems that can be solved in polynomial time). It has been shown in [35] that we can find the global minimum of (43A) using linear algebra. This result served as the basis for a computational algorithm [1] which specifically addresses indefinite QP.
The problem (43A) has the following physical meaning: we optimize the balance of making bright regions as bright as possible and dark regions as dark as possible while limiting the light energy ∥m∥2² coming through the mask. To solve this problem, we use procedures outlined in [31,32]. First we form the Lagrangian function of (43A):
L(m, λ) = m*Qm + λ(∥m∥² − 1). (44A)
From here we deduce the first order necessary optimality conditions of Karush-Kuhn-Tucker (or KKT conditions, [20]):
2(Q + λI)m = 0, λ(∥m∥ − 1) = 0, λ ≧ 0, ∥m∥ ≦ 1. (45A)
Following Sorensen [35], we can state that (43A) has a global solution if we can find λ and m such that (45A) is satisfied and the matrix Q + λI is positive semidefinite or positive definite. Let us find this solution. First we notice that we have to choose λ large enough to compensate the smallest (negative) eigenvalue of Q, i.e.,
λ ≧ |μ1| ≧ 0. (46A)
From the second condition in (45A) we conclude that ∥m∥=1, that is, the solution lies on the surface of the hyper-sphere and not inside it. The last equation to be satisfied is the first one from (45A). It has a non-trivial solution ∥m∥>0 only when the Lagrange multiplier λ equals the negative of one of the eigenvalues, λ=−μi. This condition together with (46A) has the unique solution λ=−μ1, because the other eigenvalues μ2, μ3, . . . are either positive, so that λ ≧ 0 does not hold, or negative with absolute values smaller than |μ1|, so that λ ≧ |μ1| does not hold.
After we have determined that λ=−μ1, we can find m from 2(Q−μ1I)m=0 as the corresponding eigenvector m=v1. This automatically satisfies ∥m∥=1, because all eigenvectors are normalized to unit length. We conclude that (43A) has a global solution which corresponds to the smallest negative eigenvalue of Q. This solution is a good candidate for a starting point in solving (39A): we start from the surface of the hyper-sphere and proceed with some local minimization technique to the surface of the hyper-cube.
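A minimal sketch of this construction follows, assuming Q is available as a real symmetric matrix; the eigen-decomposition directly yields the Lagrange multiplier λ = −μ1 and the global minimizer m = v1 of the spherical problem (43A), which can then seed a local search on the hyper-cube. The function name is illustrative.

import numpy as np

def spherical_qp_solution(Q):
    mu, V = np.linalg.eigh(Q)      # eigenvalues in ascending order
    mu1, v1 = mu[0], V[:, 0]
    assert mu1 < 0, "Q needs a negative eigenvalue for a non-trivial minimum"
    return -mu1, v1                # Lagrange multiplier and unit-norm minimizer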
As we have shown, the minimum eigenvalue of Q and its eigenvector play a special role in the problem by defining the global minimum. However, the other negative eigenvalues are also important, because it is easy to see that any pair
λ = −μi ≧ 0, m = vi (47A)
is a KKT point and as such defines a local minimum. The problem has as many local minima as negative eigenvalues. We may also consider starting our numerical minimization from one of these “good” minima, because it is possible that a local minimum leads to a better solution in the hyper-cube than a global minimum of the spherical problem.
Results of a similar analysis for the case of contact holes are displayed in
For positive masks, and in particular for binary masks, the constraint can be tightened to ∥m − 0.5∥∞ ≦ 0.5. Then the problem corresponding to (39A) is
Fw[m] = m*Qm → min, ∥m − 0.5∥∞ ≦ 0.5. (48A)
This is also an indefinite QP and is NP-complete. Replacing the infinity norm with the Euclidean norm, we get the simpler problem
m*Qm → min, ∥Δm∥2 ≦ 0.5, Δm = m − m0, m0 = {0.5, 0.5, . . . , 0.5}. (49A)
The Lagrangian can be written as
L(m, λ) = m*Qm + λ(∥m − m0∥² − 0.25). (50A)
The KKT point must be found from the following conditions
(Q + λI)Δm = −Qm0, λ(∥Δm∥² − 0.25) = 0, λ ≧ 0, ∥Δm∥ ≦ 0.5. (51A)
This is a more complex problem than (45A) because the first equation is not homogeneous, and the pairs λ=−μi, Δm=vi are clearly not solutions. We can still apply the condition of the global minimum λ ≧ −μ1 > 0 (Sorensen [35]). From the second condition we conclude that ∥Δm∥² = 0.25, meaning that all solutions lie on the hyper-sphere with center at m0. The case λ=−μ1 is eliminated because the first equation is not homogeneous, so we have to consider only λ>−μ1. Then Q+λI is non-singular, we can invert it, and find the solution
Δm = −(Q + λI)⁻¹Qm0. (52A)
The last step is to find the Lagrange multiplier λ that satisfies the constraint ∥Δm∥² = 0.25, that is, we have to solve
∥(Q + λI)⁻¹Qm0∥ = 0.5. (53A)
This norm decreases monotonically from infinity to 0 in the interval −μ1 < λ < ∞, thus (53A) has exactly one solution in this interval. The pair λ, Δm that solves (52A-53A) is a global solution of (49A). We conjecture that there are fewer KKT points of local minima of (49A) than of (45A) (maybe there are none), but this remains to be proven by analyzing the behavior of the norm (53A) when the Lagrange multiplier lies between negative eigenvalues. The solutions of (49A) show how to insert assist features when all contacts have the same phases.
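A minimal sketch of solving the secular equation (53A) numerically follows, assuming a real symmetric Q and vector m0; the monotonic behavior on λ > −μ1 makes a bracketed root search sufficient. The bracketing values and the name solve_49a are illustrative.

import numpy as np
from scipy.optimize import brentq

def solve_49a(Q, m0):
    mu1 = np.linalg.eigvalsh(Q)[0]              # smallest eigenvalue of Q
    n = Q.shape[0]

    def norm_dm(lam):
        dm = -np.linalg.solve(Q + lam * np.eye(n), Q @ m0)
        return np.linalg.norm(dm)

    # Norm falls from ~infinity (near -mu1) toward 0 (large lambda),
    # so the root of norm - 0.5 is bracketed.
    lam = brentq(lambda t: norm_dm(t) - 0.5, -mu1 + 1e-8, -mu1 + 1e6)
    dm = -np.linalg.solve(Q + lam * np.eye(n), Q @ m0)  # (52A)
    return m0 + dm                                       # mask m = m0 + dm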
Consider the objective (8A) of the image fidelity error
F1[m(x, y)] = ∥I(x, y) − Iideal(x, y)∥ → min. (54A)
We can state this in different norms: Manhattan, infinity, Euclidean, etc. The simplest case is the Euclidean norm, because then (54A) becomes a polynomial of the fourth degree (a quartic polynomial) in the mask pixels. The objective function is very smooth in this case, which eases the application of gradient-descent methods. While the theory of QP is well
We expect that this problem inherits the property of having multiple minima from the corresponding simpler QP, though the smoothing operators of (57A) should increase the convexity of the objective. In the presence of multiple local minima, the solution method and starting point are highly consequential: some solvers tend to converge to “bad” local solutions with disjoint mask pixels and entangled phases, while others navigate the solution space better and choose smoother local minima. Newton-type algorithms, which rely on information about second derivatives, should be used with caution, because in the presence of concavity in (57A) the Newtonian direction may not be a descent direction. Branch-and-bound global search techniques [18] are not the right choice because they are not well suited for large multi-dimensional optimization problems. Application of the stochastic techniques of simulated annealing [24] or GA [4] seems to be overkill, because the objective is smooth. It is also tempting to perform a non-linear transformation of the variables to get rid of the constraints and convert the problem to the unconstrained case, for example by using the transformation xi=tanh(mi) or mi=sin(xi); however, this is generally not recommended by experts [8, p. 267].
Reasonable choices for solving (57A) are descent algorithms with starting points found from the analytical solutions of the related QP. We apply the algorithm of local variations (“one variable at a time”), which is similar in spirit to pixel flipping [32, 24], and also use a variation of the steepest descent method of Frank and Wolfe [21] to solve constrained optimization problems.
In the method of local variations, we choose a step Δ1 and compare three exploratory transmissions for the pixel i: mi1, mi1+Δ1, and mi1−Δ1. If one of these values violates the constraints, it is pulled back to the boundary. The best of the three values is accepted. We try all pixels, in random exhaustive or circular order, until no further improvement is possible. Then we reduce the step to Δ2<Δ1 and repeat the process until the step is deemed sufficiently small. This algorithm is simple to implement. It naturally takes care of the simple (box) constraints and avoids a general problem of other, more sophisticated techniques (like Newton), which may converge prematurely to a non-stationary point. This algorithm calls the objective function numerous times; however, the runtime cost of its exploratory calls is very low with electrical field caching (see the next section). Other algorithms require fewer but more costly non-exploratory calls. This makes the method of local variations a legitimate tool for solving the problem.
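A minimal sketch of this local-variation search follows, assuming a callable objective F[m] and the box constraint |mi| ≦ 1; the step schedule and function names are illustrative. In practice each objective call would go through the electrical field caching described below, which makes the many exploratory evaluations cheap.

import numpy as np

def local_variations(m, objective, steps=(0.5, 0.25, 0.1, 0.05)):
    m = m.copy()
    for step in steps:                         # shrink the step between passes
        improved = True
        while improved:
            improved = False
            for i in range(len(m)):
                best, best_f = m[i], objective(m)
                for cand in (m[i] + step, m[i] - step):
                    cand = np.clip(cand, -1.0, 1.0)   # pull back to the boundary
                    m[i] = cand
                    f = objective(m)
                    if f < best_f:
                        best, best_f, improved = cand, f, True
                m[i] = best
    return m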
The Frank-Wolfe method is an iterative algorithm for solving constrained problems. At each step k we calculate the gradient ∇Fk of the objective and then replace the non-linear objective with its linear approximation. This reduces the problem to an LP with simple bounds:
∇Fk·m → min, ∥m∥∞ ≦ 1. (59A)
The solution m = lk of this LP is used to determine the descent direction
pk = lk − mk−1. (60B)
Then a line search is performed in the direction pk to minimize the objective as a function of one variable γ∈[0,1]:
F[mk−1 + γpk] → min. (61B)
The solution mk = mk−1 + γpk is accepted as the next iterate. The iterations continue until convergence criteria are met. Electrical field caching helps to speed up the gradient calculations and the line search of this procedure.
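A minimal sketch of this iteration follows; for the box constraint the LP (59A) has the closed-form solution l = −sign(∇F), so no LP solver is needed. The callables objective and gradient, and the iteration budget, are assumptions for illustration.

import numpy as np
from scipy.optimize import minimize_scalar

def frank_wolfe(m, objective, gradient, n_iter=50):
    for _ in range(n_iter):
        g = gradient(m)
        l = -np.sign(g)                  # solves (59A) on the hyper-cube
        p = l - m                        # descent direction (60B)
        res = minimize_scalar(lambda t: objective(m + t * p),
                              bounds=(0.0, 1.0), method="bounded")  # (61B)
        m = m + res.x * p
    return m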
In
Results of the local variation algorithm for the PSM mask are shown in
The next example demonstrates solutions when the main features have the same phase and the assist features can have a phase shift,
The speed of the descent and local variation algorithms critically depends on the ability to quickly re-calculate the image intensity when one or a few pixels change. We use an electrical field caching procedure to speed up this process.
According to the SOCS approximation [3], the image intensity is the following sum of convolutions of kernels hi(x, y) with the mask m(x, y):
I(x, y) = Σi |Ai(x, y)|², Ai(x, y) = hi(x, y)*m(x, y),
with any kernel weights absorbed into the kernels hi.
Suppose that we know the electrical fields Ai0 for the mask m0 and want to calculate intensity for the slightly different mask m′. Then
A′i = Ai0 + hi*(m′ − m0). (61C)
These convolutions can be quickly calculated by direct multiplication, which is an O(d·M·N) operation, where d is the number of pixels that differ between m0 and m′, M is the pixel count of the kernels, and N is the number of kernels. This is faster than convolution by FFT when O(d) is smaller than O(log(M)). By constantly updating the cache Ai0, we can quickly re-calculate intensities for small evolutionary mask changes. Formula (61C) is helpful in gradient calculations, because they alter one pixel at a time.
The additivity of the electrical fields can also be exploited to speed up intensity calculations in the line search (61B). If the mask mk−1 delivers electrical fields Aik−1, and the mask pk delivers Bik, then the intensity from the mask m = mk−1+γpk can be quickly calculated through its electrical fields Ai:
Ai = Aik−1 + γBik. (62A)
This avoids the expensive convolutions of (61C).
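A minimal sketch of such an electrical field cache follows, assuming 2D numpy arrays for the mask and kernels; the single-pixel update implements (61C) by adding a shifted copy of each kernel, and blend implements (62A). Class and function names are illustrative, and border handling is simplified.

import numpy as np
from scipy.signal import fftconvolve

def add_shifted(A, h, r, c, delta):
    """A += delta * h, with h's center aligned to pixel (r, c); clipped at borders."""
    kh, kw = h.shape
    r0, c0 = r - (kh - 1) // 2, c - (kw - 1) // 2
    ra, rb = max(r0, 0), min(r0 + kh, A.shape[0])
    ca, cb = max(c0, 0), min(c0 + kw, A.shape[1])
    A[ra:rb, ca:cb] += delta * h[ra - r0:rb - r0, ca - c0:cb - c0]

class FieldCache:
    """Cache of the fields Ai = hi*m for fast incremental intensity updates."""
    def __init__(self, kernels, m0):
        self.kernels = kernels
        self.m = m0.astype(complex)
        self.fields = [fftconvolve(self.m, h, mode="same") for h in kernels]

    def poke(self, r, c, new_value):
        """Change one mask pixel; update every field per (61C)."""
        delta = new_value - self.m[r, c]
        self.m[r, c] = new_value
        for h, A in zip(self.kernels, self.fields):
            add_shifted(A, h, r, c, delta)

    def intensity(self):
        """Intensity as the sum of |Ai|^2 over the cached fields."""
        return sum(np.abs(A) ** 2 for A in self.fields)

    @staticmethod
    def blend(A_fields, B_fields, gamma):
        """(62A): fields of m = m_{k-1} + gamma*p_k from two cached field sets."""
        return [a + gamma * b for a, b in zip(A_fields, B_fields)]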
In one embodiment of the invention, the optimization function to be minimized in order to define the optimal mask data has the form
F = Σi wi(Ii − Ii,ideal)², (64A)
where Ii = the image intensity evaluated at a location on the wafer corresponding to a particular pixel in the mask data (typically Ii has a value ranging between 1, full illumination, and 0, no illumination); Ii,ideal = the desired image intensity on the wafer at the point corresponding to the pixel; and wi = a weighting factor for each pixel.
As indicated above, the ideal image intensity level for any point on the wafer is typically a binary value having an intensity level of 0 for areas where no light is desired on a wafer and 1 where a maximum amount of light is desired on the wafer. However, in some embodiments, a better result can be achieved if the maximum ideal image intensity is set to a value that is determined from experimental results, such as a test pattern for various pitches produced on a wafer with the same photolithographic processing system that will be used to produce the desired layout pattern in question. A better result can also be achieved if the maximum ideal image intensity is set to a value determined by running an image simulation of a test pattern for various pitches, which predicts intensity values that will be produced on a wafer using the photolithographic processing system that will be used in production for the layout pattern in question.
With the maximum ideal image intensity determined using the test pattern 150, the transmission values of the pixels in the mask data that will result in the objective function (64A) being minimized are then determined. Once the transmission values of the pixels have been determined, the mask pixel data is converted to a suitable mask writing format and provided to a mask writer that produces one or more masks. In some embodiments, a desired layout is broken into one or more frames and mask pixel data is determined for each of the frames.
As indicated above, each pixel in the objective function may be weighted by the weight function w. In some embodiments, the weight function is set to 1 for all pixels and no weighting is performed. In other embodiments, the weights of the pixels can be varied by, for example, allowing all pixels within a predetermined distance from the edge of a feature in the ideal image to be weighted more than pixels that are far from the edges of features. In another embodiment, pixels within a predetermined distance from designated tagged features (e.g., features tagged to be recognized as gates, or as contact holes, or as line ends, etc.) are given a different weight. Weighting functions can be set to various levels, depending on the results intended. Typically, pixels near the edge of the ideal image would have a weight w=10, while those further away would have a weight w=1. Likewise, line ends (whose images are known to be difficult to form accurately) may be given a smaller weight w=0.1, while other pixels in the image may be given a weight w=1. Both functions may also be applied (i.e., regions near line ends have a weight w=0.1, and the rest of the image has a weight w=1, except near the edges of the ideal image away from the line ends, where the weight would be w=10). Alternatively, if solutions using SRAFs are desired, and these SRAFs occur at a predetermined spacing relative to the main features, weighting functions which have larger values at locations corresponding to these SRAF positions can be constructed.
It should be noted that the absolute values of weighting functions can be set to any value; it is the relative values of the weighting functions across pixels that make them effective. Typically, distinct regions have relative values that differ by a factor of 10 or more to emphasize the different weights.
In some instances it has been found that by setting the maximum ideal image intensity for use in an objective function to the maximum image intensity for tightly spaced features of the test pattern 150, the process window of the photolithographic processing is increased.
Some mask writers are capable of producing patterns having angles other than 90°. In this case, optimized mask data can be approximated using the techniques shown in
With various implementations of the invention, the pixel inversion is solved iteratively. More particularly, a first step identifies a search direction, followed by a second “line search” step that identifies a point where the objective function comes close to the lowest possible value in the local scope of the search direction.
There are various ways of determining the search direction, such as, for example, steepest descent or quasi-Newton. Conventional line search strategies may be accomplished by first scaling the search direction (or search vector) by multiplying a predetermined constant scaling factor (called γ) with the search vector. Subsequently, the search range may be divided into a preselected number of even sub-steps. Then the objective function may be evaluated starting at the sub-step closest to the scaled search vector and ending at the sub-step farthest from the scaled search vector. The first sub-step where the next sub-step gives a larger objective function value may be selected as the ideal candidate.
There are a number of problems with the conventional approaches. In particular, there is no way of knowing whether a search range based on a fixed scaling factor will be appropriate for all iteration steps with different search directions; as a result, it can lead to unstable convergence behavior. Additionally, it is often arbitrary to use a fixed integer to divide the search range into evenly distributed sub-steps. A small sub-step size may find local minima, but it also takes more objective function evaluations due to the increased number of sub-steps, thereby making the process slower, whereas a large sub-step may be faster, but it can cause the line search process to miss local minima by a large margin.
As discussed in [38], the descent property, which is the inner product between the search vector and the gradient, may be defined as follows:
sᵀg(x), where g(x) = ∂f/∂x,
which is the first order derivative (or gradient) of the objective function at the starting point of the search vector.
Note: with algorithms such as steepest descent and quasi-Newton, the first-order derivative at the starting point of the search vector is computed as part of the overall optimization process. Therefore it can be assumed that this derivative is available without additional calculation during the line search process.
The descent property is the slope of the objective function along the search direction at the starting point of the search vector, which is illustrated in
f(x + αs) ≈ f(x) + αsᵀg(x).
If the expected maximum objective function improvement Δf = f(x + γs) − f(x) for the current iteration is known, an estimate of γ can be made as follows:
γ = |Δf/(sᵀg(x)ρ)|,
where ρ is a slope-relaxation factor, which we will discuss later. To use this formula to calculate γ, reasonable estimates of Δf and ρ should be calculated.
One way to estimate Δf is to keep track of the objective function value improvements in the previous iterations of the optimization process. The objective function value keeps falling during the optimization iterations, as illustrated in the “objective function history curve” shown in
There is no guarantee that an objective function history curve falls smoothly as the iterations proceed. It is quite possible that the curve falls rapidly at some points and slowly at others, and there is no easy way to tell what is going to happen in this respect. As a result, an estimate of Δf could be quite far off if it is based on one particular iteration result. However, it is generally true that the curve initially falls rapidly and that the fall rate significantly slows down as it gets closer to convergence. It is also generally true that multiple consecutive points, taken as a group, tend to show a more smoothly falling tendency. Based on this observation, it should be a good strategy to make an estimate of Δf based on multiple values of Δf from previous iterations. One simple way of doing this is to compute the moving average of Δf, say for the previous three iterations.
To make an estimate of ρ, one can consider what the value of ρ should have been for the previous iteration. Once a line search process is completed for the current iteration of the optimization process, the real value of Δf can be obtained. The descent property for the starting point of the current search vector, sᵀg(x), is also known.
To see the meaning of ρ in [11], refer back to
Again, what ρ should have been for the previous iterations can be computed. To do that, we introduce a value called the best α. The best α is the value of α at the point where the objective function value gets lowest (or close to lowest) in a line search process. The best α can be denoted as A.
If the adjusted slope in
ρ = χ|Δf/(sᵀg(x)A)|,
where χ is the aforementioned adjustment factor, which, in the example mentioned above, is equal to 1/10. For the reasons mentioned before with regard to the use of moving average values, we extend the use of moving average values to all related parameters. As a result, we can rewrite γ = |Δf/(sᵀg(x)ρ)| as follows:
γ = |<Δf>/(<sᵀg(x)><ρ>)|,
where <Δf> is the moving average of Δf from the previous iterations, say the last three; <ρ> is similarly the moving average of ρ; and <sᵀg(x)> is the moving average of the descent property from the current iteration plus the previous iterations, say the two previous ones.
The actual flow that incorporates this line search control method using three-point moving averages is as follows:
First, compute the initial objective function value, f0, and set the initial expected objective function improvement, Δf0, equal to f0.
Second, start the overall optimization iterations (e.g., quasi-Newton) with the iteration counter k (k=0, 1, 2, . . . ).
Third, compute the gradient gk for the current iteration.
Fourth, compute the search vector sk for the current iteration.
Fifth, compute the current descent property, dk.
Sixth, if k is smaller than 3, set γk equal to the following:
γk = |Δfk−1/dk| if k > 0 and k < 3; γk = |Δf0/dk| if k = 0;
otherwise set γk equal to the moving-average estimate γk = |<Δf>/(<sᵀg(x)><ρ>)| given above.
Seventh, run the sub-step evaluation process (described separately) and keep the resulting objective function value fk and the best α value, Ak.
Eighth, compute Δfk and ρk as follows:
Δfk = fk−1 − fk
ρk = χ|Δfk/(sᵀg(x)Ak)|
Ninth, if convergence is achieved (e.g., Δfk has become zero or very close to zero) or the maximum number of iterations is reached, exit the loop; otherwise increment k and go back to the second step.
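A minimal sketch of this flow follows, assuming callables for the objective, its gradient, and the search-direction rule (e.g., quasi-Newton); the helper substep_search is sketched after the next paragraph, and all names and tolerances are illustrative.

import numpy as np
from collections import deque

def optimize(x, objective, gradient, search_direction, chi=0.1, max_iter=100):
    f_prev = objective(x)
    df_hist = deque([f_prev], maxlen=3)      # first step: Delta f_0 = f_0
    rho_hist, d_hist = deque(maxlen=3), deque(maxlen=3)
    for k in range(max_iter):
        g = gradient(x)                      # third step
        s = search_direction(x, g)           # fourth step
        d = float(s @ g)                     # fifth step: descent property
        if d == 0.0:
            break                            # stationary point
        d_hist.append(d)
        if k < 3:                            # sixth step: startup estimates
            gamma = abs(df_hist[-1] / d)
        else:                                # three-point moving averages
            gamma = abs(np.mean(df_hist) / (np.mean(d_hist) * np.mean(rho_hist)))
        x, f_new, best_alpha = substep_search(x, s, gamma, objective)  # seventh
        df = f_prev - f_new                  # eighth step
        rho_hist.append(chi * abs(df / (d * best_alpha)))
        df_hist.append(df)
        if abs(df) < 1e-12:                  # ninth step: convergence
            break
        f_prev = f_new
    return x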
As to the sub-step evaluation step (seventh), as mentioned earlier, a fixed sub-step definition and an incremental search from the starting point toward the end point may work, but it is not efficient if a small sub-step size is used, and not accurate if too coarse a sub-step size is used. One relatively easy improvement to this approach is to start with a coarse set of sub-steps and identify the sub-step that contains the lowest objective function value, as shown in
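A minimal sketch of such a coarse-to-fine sub-step evaluation follows; the grid sizes and the function name substep_search (used by the flow sketched above) are illustrative.

import numpy as np

def substep_search(x, s, gamma, objective, coarse=8, fine=8):
    alphas = np.linspace(0.0, gamma, coarse + 1)[1:]      # coarse grid of alphas
    vals = [objective(x + a * s) for a in alphas]
    i = int(np.argmin(vals))
    lo = alphas[i - 1] if i > 0 else 0.0                  # refine around best cell
    hi = alphas[i + 1] if i + 1 < len(alphas) else alphas[i]
    fine_alphas = np.linspace(lo, hi, fine + 2)[1:-1]
    fine_vals = [objective(x + a * s) for a in fine_alphas]
    j = int(np.argmin(fine_vals))
    if vals[i] < fine_vals[j]:                            # keep the better of the two
        return x + alphas[i] * s, vals[i], alphas[i]
    return x + fine_alphas[j] * s, fine_vals[j], fine_alphas[j]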
Once the optimization process is completed, the system has to apply a threshold operation to those mask transmission values to make them attain the discrete mask transmission values that are allowed on a real physical mask. For example, for the mask whose physical limits are −0.245 (the lowest) and 1.0 (the highest), the system would examine the optimized mask transmission values on the pixels, and if they are above the threshold value of (1−0.245)/2=0.3775 (for example), the final transmission values become 1.0 for them, and otherwise they get the value of −0.245. The threshold value does not necessarily have to be the middle point of the upper/lower bounds, and it is indeed possible to use some other values.
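A minimal sketch of this threshold operation follows, using the example bounds from the text (−0.245 and 1.0) and the midpoint threshold (1 + (−0.245))/2 = 0.3775; the function name is illustrative.

import numpy as np

def threshold_mask(m, lo=-0.245, hi=1.0, threshold=None):
    if threshold is None:
        threshold = (hi + lo) / 2.0    # midpoint of the physical limits
    return np.where(m > threshold, hi, lo)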
One potential issue in this flow is the possibility that the optimization process may converge to a state which uses mask transmission values that are not close to the discrete values that are allowed in the final real mask. Such in-between mask transmission values are referred to as grayscale values in this document. For example, the optimization process may decide the mask transmission value of 0.3 is optimum for some region, and from mathematical point of view, that may well be a perfectly optimum solution. However, after the threshold operation, the pixels in that region will be all brought back to the minimum mask transmission value allowed in the real mask. This could be particularly problematic for SRAF formation purposes, as there is no such thing as grayscale SRAF's in the real mask.
One way of addressing this issue is to add an additional penalty term to the objective function such that grayscale values incur a higher cost. However, our experience shows that a) whenever you add terms that are quite different in nature from the original optical imaging objective function, the end results tend to become unreliable and unpredictable from the optical optimization point of view, and b) such a penalty term that punishes grayscale values also tends to punish the formation of SRAFs, as SRAFs tend to be formed gradually, taking grayscale values during the optimization process.
The adaptive weight adjustment method addresses this issue quite differently. Instead of adding additional terms to the objective function, this method makes adjustments to the original imaging term of the objective function in an adaptive way during the optimization. We'll discuss the details in the following section.
As mentioned earlier, the original objective function of the pixel inversion process is defined as follows:
F = Σi,j wi,j(Ii,j − Ii,j,ideal)².
When the optimization process converges to a state with many gray values, that may well be a mathematically valid solution. In other words, the objective function as it stands simply has not been set up to generate a desired solution with distinctive SRAFs. This means that, unless you modify the objective function in such a way that it leads to a desired solution, there is fundamentally no way the optimization process can achieve the desired goal. The problem is that setting up the objective function that way is not trivial. We are dealing with many complex geometry shapes in various configurations, not to mention the fact that the whole image is computed based on advanced optics/illumination settings. All of those complexities add up to a situation where it is very difficult to know beforehand how exactly to set up the objective function to achieve a desired result.
There may be a way of solving this problem by adding separate terms to the objective function. One such additional cost term would be an off-tone penalty, which is defined as follows
Here, mmin is the minimum mask transmission value. The problems with this approach are a) adding a term that has nothing to do with the optical behavior could distort the image-based pixel inversion results, b) the off-tone penalty tends to keep the transmission values pinned to the two extreme values, thereby obstructing SRAF formation, and c) as a result, the end results can be unpredictable and unreliable.
Various implementations of the invention provide a solution to this problem which we call the adaptive weight adjustment. This approach is based on the original formulation of the objective function
It does not add separate terms to the objective function. However, it does change the weight values (wi,j) during the optimization process in an adaptive way.
The basic procedure of the adaptive weight adjustment is as follows:
Check the current intensity for each of the pixels, and increase the weight in the following cases:
1. If the intensity for a bright-region pixel is lower than Imax by a certain amount, increase the weight for the pixel.
2. If the intensity for a dark-region pixel is higher than Imin by a certain amount, increase the weight for the pixel.
where Imax is the target intensity for the bright region and Imin is the target intensity for the dark region. As to the selection of the values of Imax and Imin, see reference [39].
Since we are changing the values of wi,j, we are changing the problem definition itself. However, based on the observation that an ill-defined objective function leads to ill-defined results, it makes sense to improve the definition of the objective function. We change the values of wi,j in such a way that pixels with undesirable intensity values get penalized more, by increasing the values of wi,j for such pixels. This maneuver raises the objective function value and creates more room for the optimization process to find a search direction that lowers the objective function value for the next iteration, whereas an optimization process based only on the original objective function definition could get stuck in one of the minima.
Adaptive weight adjustment does not necessarily deal directly with grayscale issues, but a) it should achieve better global convergence with a continually revised objective function definition, b) it should prevent the formation of printing SRAFs, and c) it is still based only on optics, thereby avoiding the unreliability of introducing auxiliary objective function terms.
The best mode formula we use for the weight adjustment is as follows:
If the target intensity for the pixel is Imax and the image intensity of the pixel Ii,j is smaller than (Imax − S), then the new weight is computed as follows:
wi,j = wi,j + F·(Imax − S − Ii,j), where S (slack) is typically a small positive value and F is a constant factor with a positive value.
Similarly, if the target intensity is Imin and the current intensity Ii,j is greater than S, then the new weight for the pixel is as follows
wi,j = wi,j + F·(Ii,j − S)
Otherwise, no weight change is made.
There are a couple of parameters used here, namely S and F. Our experience shows that the following are reasonable settings for the two parameters:
S=0.01·Imax
F may be determined so that the maximum possible improvement in the objective function due to the adaptive weight change would be about 100 in total over all pixels.
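A minimal sketch of this best-mode weight update follows, assuming per-pixel arrays for the weights and intensities and a boolean map of bright-region pixels; the name adjust_weights and the array layout are illustrative.

import numpy as np

def adjust_weights(w, I, bright, Imax, F, S=None):
    """Increase weights of under-exposed bright pixels and over-exposed dark
    pixels, per the best-mode formulas above."""
    if S is None:
        S = 0.01 * Imax                      # suggested slack setting
    w = w.copy()
    under = bright & (I < Imax - S)          # bright pixels below Imax - S
    over = (~bright) & (I > S)               # dark pixels above the slack
    w[under] += F * (Imax - S - I[under])
    w[over] += F * (I[over] - S)
    return w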
An example of an optimization flow that incorporates the adaptive weight adjustment is as follows:
First, run initial iterations based on the original definition of the objective function.
Second, switch to the adaptive weight adjustment mode, and keep adjusting the weights the rest of the way. It is important to note that the timing of the switch to the adaptive weight adjustment mode is determined by the status around the target edge region: if the point is reached where there is virtually no improvement in the target edge region, then make the switch. Additionally, the objective function value is recomputed with the adjusted weights, and that value is used as if it were the previous iteration's objective function value.
Raw results of the pixel inversion generally contain polygons with many small features and arbitrarily angled edges that break all sorts of mask manufacturing rules. It is particularly problematic to have many small figures, which cause not only mask-writing inaccuracy but also a significant increase in the EB shot count. It is known that about one third of the mask cost comes from the EB writing time, which is roughly proportional to the EB shot count for the mask. Given that, a shot count increase of 100×, for example, seems quite prohibitive.
Geometrical mask simplification has been employed to address this issue [39]. Various implementations of the present invention provide a post pixel inversion MRC process to address this issue.
While geometrical mask simplification as a post-process to the pixel inversion helps improve manufacturability to some extent, it does not necessarily ensure that the result is clean in terms of MRC. The post pixel inversion MRC process takes the final pixel inversion result and tries to turn it into an MRC-clean result while ensuring that the resultant mask shapes do not cause undesirable imaging effects such as printing s-bars.
The purpose of the post pixel inversion MRC process is to transform the final result of the pixel inversion into the one with MRC-clean geometrical shapes while ensuring those modified features do not turn out to cause undesirable effects such as printing SRAF's.
As described in [11], the pixel inversion process involves image analysis for the pixels in the work region. We use two schemes for the image analysis: 1) an FFT-based method, and 2) the electrical field caching method [11]. The electrical field caching method is suitable for the line-search process of the optimization, and it enables fast evaluation of the objective function. To perform the electrical field caching method, it is necessary to calculate the image at the starting point of the line search as well as at its endpoint using the FFT-based method. Once the results at these two points are known, it is possible to exploit the linearity of the kernel convolution to conduct linear interpolation for the calculation of intermediate steps between the two points. Since there is no need to redo the FFT during the linear interpolation process, this achieves faster evaluation of the objective function.
The electrical field caching method can be a useful tool for examining the image after the geometrical modification due to MRC. The MRC process itself is a geometrical shape transformation which is not aware of the impact the transformation causes in terms of imaging. The basic idea of the MRC process is to take the final result of the pixel inversion and the geometrical MRC cleanup result as the starting point and the endpoint of the line-search process, respectively. In other words, compute the difference between the MRC result and the pixel inversion result and regard that as the search vector for the ensuing line search process. Then perform the line search process by using the electrical field caching method to quickly determine the intermediate step where the objective function gets larger than that of the pixel inversion result. This is a systematic process that determines the point during the line search where the MRC cleanup result starts adversely deviating from the image of the final pixel inversion. We call this process quasi line search.
Based on the idea outlined above, we will describe the baseline algorithm of this scheme below. In the algorithm description, when the word pixel is used, it means the pixel representation of the mask is used, where mask transmission values can be grayscale and/or discrete values. When the word geometrical is used, it means the geometrical representation of the mask is used, with physically realizable discrete mask transmission values.
1. Start with the final state of the pixel inversion (note: the pixel data here still has grayscale values).
2. Convert the pixel data to the geometrical data based on the specified threshold (typically (1+mmin)/2, where mmin is the minimum mask transmission value).
3. Convert back the geometrical data to the pixels and compute the objective function value (note: the pixel data here has the discrete mask transmission values).
4. Run the orthogonal geometrical simplification for the result of step 2.
5. Perform geometrical MRC cleanup (more about this step is described later).
6. Convert back the current geometry to pixels.
7. Evaluate the objective function for the current state.
8. If it is smaller than or nearly the same as the objective function value of the pixel inversion result, then exit.
9. Otherwise, compute the difference between the MRC's pixel state and the pixel inversion's pixel state and treat it as a search vector.
10. Then run the quasi line-search process using the aforementioned search vector and electrical field caching.
11. Keep the result right before the objective function value gets larger than the objective function value of the pixel inversion.
12. Convert the result to geometrical representation.
13. Run the orthogonal geometrical simplification to the result.
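A high-level skeleton of this baseline algorithm follows. All helper names (pixels_to_geometry, geometry_to_pixels, simplify_orthogonal, mrc_cleanup) are hypothetical placeholders for the geometric operations described in the text, so this is an outline of the control flow rather than a runnable implementation of those operations.

def post_inversion_mrc(m_inv, objective, threshold, n_steps=16):
    # Hypothetical helpers: pixels_to_geometry, geometry_to_pixels,
    # simplify_orthogonal, mrc_cleanup stand in for the geometric steps.
    geo = pixels_to_geometry(m_inv, threshold)           # steps 1-2
    f_ref = objective(geometry_to_pixels(geo))           # step 3
    geo = mrc_cleanup(simplify_orthogonal(geo))          # steps 4-5
    m_mrc = geometry_to_pixels(geo)                      # step 6
    if objective(m_mrc) <= f_ref:                        # steps 7-8
        return geo
    s = m_mrc - m_inv                                    # step 9: search vector
    best = m_inv
    for k in range(1, n_steps + 1):                      # step 10: quasi line search
        m = m_inv + (k / n_steps) * s                    # field caching applies here
        if objective(m) > f_ref:
            break                                        # step 11: keep last good state
        best = m
    return simplify_orthogonal(pixels_to_geometry(best, threshold))  # steps 12-13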
As to the geometrical MRC cleanup mentioned in the algorithm, it could be implemented in different ways. The baseline approach we employed for the geometrical MRC cleanup is as follows:
1. Put the polygons that interact with the original target shapes in the category of the main feature polygons, and all other polygons in the category of SRAF polygons.
2. Perform the over-sizing operation by the clearance distance on the main feature polygon to form the clearance area (the clearance distance is the minimum distance allowed between SRAF polygons and the main feature polygons).
3. Remove part of the SRAF polygons that are inside the clearance area.
4. Combine the remaining SRAF polygons and the main feature polygon as the current set of polygons.
5. Compute the space/width distances for all edges of the current set of polygons within the maximum distance of the space/width constraints; the result is a set of pairs of edges that are within the constraint distance of one another.
6. Examine each of those pairs one by one and compute the distance between the two edges.
7. If the two violating edges' distance is close enough (by a specified amount), then a) if it is a width violation, delete the part of the polygon between the two edges from the polygon that the two edges belong to, or b) if it is a space violation, fill in the gap between the two edges to form a continuous polygon between them.
8. If the two violating edges' distance is not close enough, then a) if it is a width violation, move the two edges farther apart to create sufficient width between them, or b) if it is a space violation, move them farther apart to create sufficient space between them.
This is just an example of a geometrical MRC algorithm. In reality, it can be much more complex than this example, with more MRC constraints such as an area constraint, etc.
While illustrative embodiments have been illustrated and described, it will be appreciated that various changes can be made therein without departing from the scope of the invention as defined by the following claims and equivalents thereof.
1. Barouch, E., et al., “Illuminator Optimization for Projection Printing,” SPIE 3679:697-703, 1999.
2. Cobb, N., and A. Zakhor, “Fast, Low-Complexity Mask Design,” SPIE 2440:313-327.
3. Cobb, N. B., “Fast Optical and Process Proximity Correction Algorithms for Integrated Circuit Manufacturing,” Dissertation, University of California at Berkeley, 1998.
4. Erdmann, A., et al., “Towards Automatic Mask and Source Optimization for Optical Lithography,” SPIE 5377:646-657.
5. Fienup, J. R., “Phase Retrieval Algorithms: A Comparison,” Appl. Opt. 21:2758-2769, 1982.
6. Gamo, H., “Matrix Treatment of Partial Coherence,” Progress in Optics 3:189-332, Amsterdam, 1963.
7. Gerchberg, R. W., and W. O. Saxton, “A Practical Algorithm for the Determination of Phase From Image and Diffraction Plane Pictures,” Optik 35:237-246, 1972.
8. Gill, P. E., et al., “Practical Optimization,” Academic Press, 2003.
9. Golub, G., and C. van Loan, “Matrix Computations,” J. Hopkins University Press, Baltimore and London, 1996.
10. Gould, N., “Quadratic Programming: Theory and Methods,” 3rd FNRC Cycle in Math. Programming, Belgium, 2000.
11. Granik, Y., “Solving Inverse Problems of Optical Microlithography,” SPIE, 2005.
12. Granik, Y., “Source Optimization for Image Fidelity and Throughput,” JM3:509-522, 2004.
13. Han, C.-G., et al., “On the Solution of Indefinite Quadratic Problems Using an Interior-Point Algorithm,” Informatica 3(4):474-496, 1992.
14. Hansen, P.C., “Rank Deficient and Discrete Ill-Posed Problems,” SIAM, Philadelphia, 1998.
15. Hwang, C., et al., “Layer-Specific Illumination for Low k1 Periodic and Semi-Periodic DRAM Cell Patterns: Design Procedure and Application,” SPIE 5377:947-952.
16. Inoue, S., et al., “Optimization of Partially Coherent Optical System for Optical Lithography,” J. Vac. Sci. Technol. B 10(6):3004-3007, 1992.
17. Jang, S.-H., et al., “Manufacturability Evaluation of Model-Based OPC Masks,” SPIE 4889:520-529, 2002.
18. Liu, Y., and A. Zakhor, “Binary and Phase-Shifting Image Design for Optical Lithography,” SPIE 1463:382-399, 1991.
19. Liu, Y., and A. Zakhor, “Optimal Binary Image Design for Optical Lithography,” SPIE 1264:401-412, 1990.
20. Luenberger, D., “Linear and Nonlinear Programming,” Kluwer Academic Publishers, 2003.
21. Minoux, M., “Mathematical Programming,” Theory and Algorithms, New York, Wiley, 1986.
22. Nashold, K., “Image Synthesis—a Means of Producing Superresolved Binary Images Through Bandlimited Systems,” Dissertation, University of Wisconsin, Madison, 1987.
23. Oh, Y.-H., et al., “Optical Proximity Correction of Critical Layers in DRAM Process of 0.12 um Minimum Feature Size,” SPIE 4346:1567-1574, 2001.
24. Oh, Y.-H., et al., “Resolution Enhancement Through Optical Proximity Correction and Stepper Parameter Optimization for 0.12 um Mask Pattern,” SPIE 3679:607-613, 1999.
25. Pati, Y. C., and T. Kailath, “Phase-Shifting Masks for Microlithography: Automated Design and Mask Requirements,” J. Opt. Soc. Am. A 11(9):2438-2452, September 1994.
26. Poonawala, A., and P. Milanfar, “Prewarping Techniques in Imaging: Applications in Nanotechnology and Biotechnology,” Proc. SPIE 5674:114-127, 2005.
27. Qi, L., et al., “Global Minimization of Normal Quartic Polynomials Based on Global Descent Directions,” SIAM J. Optim. 15(1):275-302.
28. Rosenbluth, A., et al., “Optimum Mask and Source Patterns to Print a Given Shape,” JM3 1:13-30, 2002.
29. Saleh, B. E. A., “Optical Bilinear Transformation: General Properties,” Optica Acta 26(6):777-799, 1979.
30. Saleh, B. E. A., and K. Nashold, “Image Construction: Optimum Amplitude and Phase Masks in Photolithography,” Applied Optics 24:1432-1437, 1985.
31. Sandstrom, T., et al., “OML: Optical Maskless Lithography for Economic Design Prototyping and Small-Volume Production,” SPIE 5377:777-797, 2004.
32. Sayegh, S. I., “Image Restoration and Image Design in Non-Linear Optical Systems,” Dissertation, University of Wisconsin, Madison, 1982.
33. Shang, S., et al., “Simulation-Based SRAF Insertion for 65 nm Contact Hole Layers,” BACUS, 2005, in print.
34. Socha, R., et al., “Contact Hole Reticle Optimization by Using Interference Mapping Lithography (IML),” SPIE 5446:516-534.
35. Sorensen, D.C., “Newton's Method With a Model Trust Region Modification,” SIAM J. Numer. Anal. 19:409-426, 1982.
36. Takajo, H., et al., “Further Study on the Convergence Property of the Hybrid Input-Output Algorithm Used for Phase Retrieval,” J. Opt. Soc. Am. A 16(9):2163-2168, 1999.
37. Vallishayee, R. R., et al., “Optimization of Stepper Parameters and Their Influence on OPC,” SPIE 2726:660-665, 1996.
38. Fletcher, R., “Practical Methods of Optimization,” John Wiley & Sons, pp. 33-44.
39. Huang, C. Y., et al., “Model Based Insertion of Assist Features Using Pixel Inversion Method: Implementation in 65 nm Node,” Proc. SPIE 6283, 62832Y, 2006.
This application claims the benefit of U.S. Provisional Patent Application No. ______ filed Mar. 31, 2008, and is a continuation in part of U.S. application Ser. No. ______ filed ______, which in turn claims the benefit of U.S. Provisional Patent Application No. 60/792,476 filed Apr. 14, 2006, and is a continuation-in-part of U.S. patent application Ser. No. 11/364,802 filed Feb. 28, 2006, which in turn claims the benefit of U.S. Provisional Patent Application No. 60/657,260 filed Feb. 28, 2005; U.S. Provisional Patent Application No. 60/658,278, filed Mar. 2, 2005; and U.S. Provisional Patent Application No. 60/722,840 filed Sep. 30, 2005, which applications are all incorporated entirely herein by reference.
Number | Date | Country | |
---|---|---|---|
60792476 | Apr 2006 | US | |
60722840 | Sep 2005 | US | |
60658278 | Mar 2005 | US | |
60657260 | Feb 2005 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11621082 | Jan 2007 | US |
Child | 12359174 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12359174 | Jan 2009 | US |
Child | 12416016 | US | |
Parent | 11364802 | Feb 2006 | US |
Child | 11621082 | US |