This invention relates generally to a system and method for managing a semiconductor process and in particular to a system and method for managing yield in a semiconductor process.
The semiconductor industry is continually pushing toward smaller and smaller geometries of the semiconductor devices being produced since smaller devices generate less heat and operate at a higher speed than larger devices. Currently, a single chip may contain over one billion patterns. The semiconductor manufacturing process is extremely complicated since it involves hundreds of processing steps. A mistake or small error at any process step or in any tool specification may lower the yield of the final semiconductor product, wherein yield may be defined as the number of functional devices produced by the process as compared to the theoretical number of devices that could be produced assuming no bad devices. Improving yield is a critical problem in the semiconductor industry and has a direct economic impact on the semiconductor industry. In particular, a higher yield translates into more devices that may be sold by the manufacturer.
Semiconductor manufacturing companies have long collected data about various process parameters in an attempt to improve the yield of the semiconductor process. Today, the explosive growth of database technology has contributed to the yield analysis that each company performs. In particular, database technology has far outpaced the ability of conventional statistical methods to interpret the data and relate yield to the major yield factors. This has created a need for a new generation of tools and techniques for automated and intelligent database analysis for yield management.
Current conventional yield management systems have a number of limitations and disadvantages which make them less desirable to the semiconductor industry. For example, the conventional systems may require some manual processing which slows the analysis and makes it susceptible to human error. In addition, these conventional systems may not handle both continuous and categorical yield management variables. Some conventional systems cannot handle missing data elements and do not permit rapid searching through hundreds of yield parameters to identify key yield factors. Some conventional systems output data that is difficult to understand or interpret even by knowledgeable semiconductor yield management people. In addition, the conventional systems typically process each yield parameter separately, which is time consuming and cumbersome and cannot identify more than one parameter at a time.
Thus, it is desirable to provide a yield management system and method which solves the above limitations and disadvantages of the conventional systems and it is to this end that the present invention is directed.
The yield management system and method in accordance with the invention may provide many advantages over conventional methods and systems which make the yield management system and method more useful to semiconductor device manufacturers. In particular, the system may be fully automated and easy to use so that no extra training is necessary to make use of the yield management system. In addition, the system handles both continuous (e.g., temperature) and categorical (e.g., Lot 1, Lot 2, etc.) variables. The system also automatically handles missing data during a pre-processing step. The system can rapidly search through hundreds of yield parameters and generate an output indicating the one or more key yield factors/parameters. The system generates an output (a decision tree) that is easy to interpret and understand. The system is also very flexible in that it permits prior yield parameter knowledge (from users) to be easily incorporated into the building of the model in accordance with the invention. Unlike conventional systems, if there is more than one yield factor/parameter affecting the yield of the process, the system can identify all of the parameters/factors simultaneously so that the multiple factors are identified during a single pass through the yield data.
In accordance with a preferred embodiment of the invention, the yield management method may receive a yield data set. When a data set comes in, it first goes through a data preprocessing step in which the validity of the data in the data set is checked and cases or parameters with missing data are eliminated. Using the cleaned-up data set, a Yield Mine model is built during a model building step. Once the model is generated automatically by the yield management system, the model may be modified by one or more users based on their experience or prior knowledge of the data set. Once the model has been modified, the data set may be processed using various statistical analysis tools to help the user better understand the relationship between the response and predictor variables.
illustrates an example of a yield parameter being selected by the user and a tree being automatically generated by the system based on the user-selected parameter in accordance with the invention;
The invention is particularly applicable to a computer-implemented software-based yield management system and it is in this context that the invention will be described. It will be appreciated, however, that the system and method in accordance with the invention has greater utility since it may be implemented in hardware or may incorporate other modules or functionality not described herein.
In accordance with the invention, the yield management system may also be implemented using hardware and may be implemented on different types of computer systems, such as client/server systems, web servers, mainframe computers, workstations and the like. Now, more details of the implementation of the yield management system in software will be described.
In more detail, the data may be input to a data preprocessor 32 that may validate the data and remove any missing data records. The output from the data preprocessor may be fed into a model builder 34 so that a model of the data set may be automatically generated by the system. Once the system has generated a model, the user may enter model modifications into the model builder to modify the model based on, for example, past experience with the particular data set. Once the user modifications have been incorporated into the model, a final model is output and made available to a statistical tool library 36. The library may contain one or more different statistical tools that may be used to analyze the final model. The output of the system may be, for example, a listing of one or more factors/parameters that contributed to the yield of the devices that generated the data set being analyzed. As described above, the system is able to simultaneously identify multiple yield factors. This data flow is sketched below.
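Purely as an illustration of this data flow, the following Python sketch composes the three components. The function names and the trivial stub behaviors are assumptions for exposition, not details taken from the system itself.

```python
# Hypothetical stand-ins for the three components above; the names and
# signatures are assumptions, not details from the system.

def preprocess(records):
    """Data preprocessor (32): drop records containing missing (None) values."""
    return [r for r in records if None not in r.values()]

def build_model(records):
    """Model builder (34) stub: record which variables were observed."""
    return {"variables": sorted({k for r in records for k in r})}

def analyze(model, records):
    """Statistical tool library (36) stub: count cases per variable."""
    return {v: sum(v in r for r in records) for v in model["variables"]}

records = [{"temp": 350.0, "lot": "Lot1"}, {"temp": None, "lot": "Lot2"}]
clean = preprocess(records)
print(analyze(build_model(clean), clean))  # {'lot': 1, 'temp': 1}
```

Now, a yield management method in accordance with the invention will be described.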
The data preprocessing step 42 helps to clean up the incoming data set so that the later analysis may be more fruitful. The yield management system in accordance with the invention can handle data sets with complicated data structures. A yield data set typically has hundreds of different variables. These variables may include both a response variable, Y, and predictor variables, X1, X2, . . . , Xm, that may be of a numerical type or a categorical type. A variable is a numerical type variable if its values are real numbers, such as different temperatures at different times during the process. A variable is a categorical type variable if its values are of a set of finite elements not necessarily having any natural ordering. For example, a categorical variable could take values in the set {MachineA, MachineB, MachineC} or the set {Lot1, Lot2, Lot3}.
It is very common for a yield data set to have missing values. The data preprocessing step removes the cases or variables having missing values. In particular, the preprocessing first may remove all predictor variables that are “bad”. By “bad”, it is understood that either the variable has too much missing data (≧MS missing values) or, for a categorical variable, that it has too many distinct classes (≧DC classes). In accordance with the invention, both MS and DC are user-defined thresholds so that the user may set these values and control the preprocessing of the data. In a preferred embodiment, the default values are MS=0.05×N and DC=32, where N is the total number of cases in the data set.
Once the “bad” predictor variables are removed, then, for the remaining data set, data preprocessing may remove all cases with missing data. If one imagines that the original data set is a matrix with each column representing a single variable, then data preprocessing first removes all “bad” columns (variables) and then removes “bad rows” (missing data) in the remaining data set with the “good” columns.
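A minimal sketch of this two-pass preprocessing, assuming the data matrix is held in a pandas DataFrame (the text does not prescribe any particular data structure); the MS and DC defaults follow the values given above.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, ms_frac: float = 0.05, dc: int = 32) -> pd.DataFrame:
    """Drop 'bad' columns first, then rows with missing data, per the text above.

    A column is 'bad' if it has >= MS missing entries (MS = ms_frac * N by
    default) or, for a categorical column, >= dc distinct classes.
    """
    n = len(df)
    ms = ms_frac * n  # default threshold MS = 0.05 x N
    keep = []
    for col in df.columns:
        too_much_missing = df[col].isna().sum() >= ms
        categorical = df[col].dtype == object
        too_many_classes = categorical and df[col].nunique() >= dc
        if not (too_much_missing or too_many_classes):
            keep.append(col)
    # Remove "bad" columns (variables), then "bad" rows (cases with missing data).
    return df[keep].dropna()
```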
The yield management system uses a decision tree-based method. In particular, the method partitions the data set, D, into sub-regions. The tree structure may be a hierarchical way to describe a partition of D. It is constructed by successively splitting nodes (as described below), starting with the root node (D), until some stopping criteria are met and the node is declared a terminal node. For each terminal node, a value or a class is assigned to all the cases within the node. Now, the node splitting method and example of the decision tree will be described in more detail.
In this example, out of all 774 predictor variables, the Yield Mine system, using the decision tree prediction, identifies one or more variables as key yield factors. In this example, the key yield factor variables are PWELLASH, FINISFI, TI_TIN_RTP_, and VTPSP_. In this example, PWELLASH and FINISFI are time variables associated with the process variables PWELLASH_ and FINISFI_, and TI_TIN_RTP_ and VTPSP_ are process variables. Note that, for each terminal node 102 in the decision tree, the value of the response variable at that terminal node is shown so that the user can view the tree and easily determine which terminal node (and thus which predictor variables) result in the best value of the response variable.
In the tree structure model in accordance with the invention, if a tree node is not terminal, it has a splitting criterion for the construction of its sub-nodes as will be described in more detail below with reference to
To find the proper stopping criteria for tree construction is a difficult problem. To deal with the problem, we first over-grow the tree and then apply cross validation techniques to prune the tree. Pruning the tree is described in detail in the following sections. To grow an oversized tree, the method may keep splitting nodes in the tree until all cases in the node have the same response value, or the number of cases in the node is less than a user-defined threshold, n0. The default in our algorithm is n0=max{5, floor(0.02×N)}, where N is the total number of cases in D, and the function floor(x) gives the biggest integer that is less than or equal to x. Now, the construction of the decision tree and the method for splitting tree nodes in accordance with the invention will be described.
If Φj>V, then in step 126, the node, T, is split into one or more sub-nodes, T1, T2, . . . , Tm, based on the variable j. In step 128, for each sub-node, Tk where k=1, . . . , m, the same node splitting method is applied. In this manner, each node is processed to determine if splitting is appropriate and then each sub-node created during a split is also checked for susceptibility to splitting as well. Thus, the nodes of the decision tree are split in accordance with the invention, and this recursive procedure is sketched below.
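A minimal sketch of this recursive splitting with the stopping criteria described above; the shape of the injected find_best_split function (returning a goodness score Φ and a partition of the node's cases) is an assumption.

```python
import math

def grow_tree(cases, responses, find_best_split, v, n_total):
    """Grow an oversized tree: keep splitting until all responses in a node
    agree, the node has fewer than n0 cases, or no split's score exceeds V."""
    n0 = max(5, math.floor(0.02 * n_total))  # default threshold from the text
    if len(set(responses)) <= 1 or len(cases) < n0:
        return {"terminal": True, "responses": responses}
    phi, partition = find_best_split(cases, responses)
    if phi <= v:  # the best available split is not good enough
        return {"terminal": True, "responses": responses}
    # Split into sub-nodes T1..Tm and apply the same procedure to each.
    return {"terminal": False,
            "children": [grow_tree(c, r, find_best_split, v, n_total)
                         for c, r in partition]}
```

Now, more details of the decision tree construction and node splitting method will be described.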
A decision tree is built to find relations between the response variable and the predictor variables. Each split, S, of a node, T, partitions the node into m sub-nodes, T1, T2, . . . , Tm, in hopes that the sub-nodes are less “noisy” than T, as defined below. To quantify this idea, a real-valued function, g(T), that measures the noisiness of a node T may be defined, wherein NT denotes the number of cases in T and NTk denotes the number of cases in sub-node Tk. The goodness of the split is then Φ(S)=g(T)−Σk(NTk/NT)g(Tk), where the sum runs over the m sub-nodes.
We say that the sub-nodes are less noisy than their ancestor if Φ(S)>0. In Yield Mine, a node split depends only on one predictor variable. The method may search through all predictor variables, X1, X2, . . . , Xm, one by one to find the best split based on each predictor variable; the best of these splits is then used to split the node. Therefore, it is sufficient to explain the method by describing how to find the best split for a single predictor variable. Depending on whether the response variable, Y, and the predictor variable, X, are each categorical or numerical, there are four possible scenarios. Below, details for each scenario on how the split is constructed and how to assign a proper value or a class to a terminal node are described. Now, the case when Y and X are both categorical is described.
Y is Categorical and X is Categorical
Suppose that Y takes values in the set A={A1, A2, . . . , Ak}, and X takes values in the set B={B1, B2, . . . , Bl}. In this case, only binary splits are allowed. That is, if a node is split, it produces two sub-nodes, a left sub-node, TL, and a right sub-node, TR. A split rule has the form of a question: Is x∈BS, where BS is a subset of B. If the answer to the question is yes, then the case is put in the left sub-node TL. Otherwise, it is put in the right sub-node TR. There are 2^l different subsets of B. Therefore, there are 2^l different splits.
Let NiT denote the number of class i cases in node T. The function which measures the noisiness of the node, g(T), is defined as:
Since there are only two sub-nodes, the goodness of split function, Φ(S), is:
The method thus searches through all 2^l possible splits to find the one that minimizes Φ(S), as sketched below.
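A sketch of this subset search, assuming a Gini-style impurity as a stand-in for g(T) (whose exact form is not reproduced above) and using the weighted sub-node impurity in place of Φ(S):

```python
from collections import Counter
from itertools import combinations

def gini(labels):
    """Stand-in noisiness measure g(T); an assumption for this sketch."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_subset_split(x_values, y_labels, impurity=gini):
    """Search all subsets B_S of B for the binary split 'is x in B_S?' that
    minimizes the weighted sub-node noisiness."""
    b = sorted(set(x_values))
    n = len(y_labels)
    best_score, best_subset = float("inf"), None
    for size in range(1, len(b)):  # every proper, non-empty subset of B
        for subset in combinations(b, size):
            bs = set(subset)
            left = [y for x, y in zip(x_values, y_labels) if x in bs]
            right = [y for x, y in zip(x_values, y_labels) if x not in bs]
            score = (len(left) * impurity(left) + len(right) * impurity(right)) / n
            if score < best_score:
                best_score, best_subset = score, bs
    return best_score, best_subset

# Example: which machines best separate passing from failing cases?
x = ["MachineA", "MachineB", "MachineA", "MachineC", "MachineB"]
y = ["pass", "fail", "pass", "fail", "fail"]
print(best_subset_split(x, y))  # -> (0.0, {'MachineA'})
```

Now, the case where Y is categorical and X is numerical will be described.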
Y is Categorical and X is Numerical
Suppose that Y takes values in the set A={A1, A2, . . . , Ak} and that x1, x2, . . . , xNT denote the values of X for the NT cases in the node.
Now, we define NT and g(T) in the same way as in the previous scenario. Since a split in this case has one more parameter than a split in the first case above, the method may define
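A sketch of a threshold search for this scenario, assuming the standard split form "is x≤c?" (an assumption, since the exact split form is given by the equations referenced above) and reusing an impurity function such as gini from the earlier sketch:

```python
def best_threshold_split(x_values, y_labels, impurity):
    """Scan candidate thresholds c between consecutive sorted x values and
    return the split 'is x <= c?' minimizing weighted sub-node noisiness."""
    pairs = sorted(zip(x_values, y_labels), key=lambda p: p[0])
    n = len(pairs)
    best_score, best_c = float("inf"), None
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # equal x values cannot be separated by a threshold
        c = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        score = (i * impurity(left) + (n - i) * impurity(right)) / n
        if score < best_score:
            best_score, best_c = score, c
    return best_score, best_c
```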
Y is Numerical and X is Categorical
In this case, the split rule is the same as in the first case. The only difference is the way in which the noisiness function, g(T), is defined. In particular, since Y is numerical, let y1, y2, . . . , yNT denote the values of Y for the cases in the node, and let g(T) measure the spread of these values, such as the L2 norm of the node (the sum of the squared deviations of the yi from the node mean) referenced below.
Then, Φ(S) may be defined as:
As in the first case, there are only a finite number of possible splits and the method searches through all possible splits to find the one that minimizes Φ(S); a numerical noisiness measure suitable for this search is sketched below.
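A sketch of a numerical noisiness measure for this case, assuming the L2 form (sum of squared deviations) referenced later in the text; the subset search from the first case can then be reused unchanged:

```python
def sse(values):
    """Numerical noisiness: within-node sum of squared deviations of the
    response values from the node mean (the L2 measure)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

# Reuse the subset search from the first case, swapping in the numerical
# noisiness measure for the categorical one:
x = ["Lot1", "Lot2", "Lot1", "Lot3"]
yields = [0.91, 0.62, 0.88, 0.60]
print(best_subset_split(x, yields, impurity=sse))  # -> (..., {'Lot1'})
```

Now, a fourth case where Y and X are both numerical will be described.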
Y is Numerical and X is Numerical
In this case, the split rule is defined the same way as in the second case above, and g(T) is defined the same way as in the third case. Thus, the method may search through all possible splits to come up with the split, S*, which minimizes Φ(S), where:
Then, a linear regression model, as set forth below, is fit
y=a0+a1x+ε, (5)
If Φ(S*)<c×r, then S* is the best split. Otherwise, the linear model fits better than split forms 1 and 2. In this case, the node T is split into d sub-nodes, T1, T2, . . . , Td. Let x̂1, x̂2, . . . , x̂NT denote the values of x sorted into ascending order. The sorted cases are then partitioned, in order, into the d sub-nodes, with the boundary indices h1, h2, . . . , hd chosen (using NT mod d) so that the sub-nodes are as nearly equal in size as possible.
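A sketch of the two ingredients of this scenario: the residual sum of squares of the linear model (5), and the ordered partition of the sorted x values into d nearly equal sub-nodes. The exact boundary formula and the constant c are not fully specified above, so the equal-size rule here is an assumption.

```python
def linear_fit_rss(xs, ys):
    """Residual sum of squares of the least-squares line y = a0 + a1*x (5)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    a0 = my - a1 * mx
    return sum((y - (a0 + a1 * x)) ** 2 for x, y in zip(xs, ys))

def ordered_d_way_split(xs, ys, d):
    """Sort cases by x and cut them into d nearly equal sub-nodes; the first
    (len(xs) mod d) sub-nodes receive one extra case (assumed rule)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    size, extra = divmod(len(xs), d)
    groups, start = [], 0
    for k in range(d):
        end = start + size + (1 if k < extra else 0)
        groups.append([(xs[i], ys[i]) for i in order[start:end]])
        start = end
    return groups
```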
When a terminal node is reached, a value or a class, ƒ(T), is assigned to all cases in the node depending on the type of the response variable. If the type of the response variable is numerical, ƒ(T) is a real value number. Otherwise, ƒ(T) is set to be a class member of the set A={A1, A2 , . . . , Ak}. Now, the cost function may be determined if Y is categorical or numerical.
Y is Categorical
Assume Y takes values in set A={A1, A2, . . . , Ak}. T is a terminal node with NT cases. Let NiT be the number of cases in T for which Y equals Ai, i∈{1, 2, . . . , k}. If the node is pure (i.e., all the cases in the node have the same response Aj), then ƒ(T)=Aj. Otherwise, the node is not pure, and no matter which class ƒ(T) is assigned, there is at least one case misclassified in the node. Let u(i|j) be the cost of assigning a class j case to class i. Then the total cost of assigning class Ai to node T is Σj u(i|j)NjT, and ƒ(T) is the class that minimizes this cost.
If u(i|j) is constant for all i and j, then ƒ(T) is assigned to the biggest class in the node. When there is a tie for the best choice of ƒ(T) among several classes, ƒ(T) is picked arbitrarily among those classes. A sketch of this cost-based assignment is set forth below.
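A minimal sketch of the assignment, summarizing the node by its per-class counts NjT and passing u as a function (an assumed interface):

```python
def assign_class(class_counts, u):
    """Choose f(T) = argmin_i sum_{j != i} u(i|j) * N_jT for an impure node."""
    classes = sorted(class_counts)
    def total_cost(i):
        return sum(u(i, j) * class_counts[j] for j in classes if j != i)
    return min(classes, key=total_cost)

# With a constant cost u(i|j), this reduces to picking the biggest class:
counts = {"A1": 7, "A2": 2, "A3": 1}
print(assign_class(counts, lambda i, j: 1))  # -> "A1"
```

Now, the case where Y is numerical is described.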
Y is Numerical
In this case, the cost function is the same function g(T) which measures the “noisiness” of the node as described above. ƒ(T) is assigned the value which minimizes the cost. It can be easily shown that, when g(T) is the L2 norm of the node, ƒ(T) equals the mean value of the node. Now, the pruning of the decision tree will be described.
By growing an oversized tree as described above, one encounters the problem of overfitting. To deal with this problem, cross validation is used to find the right size of the model. Then, the tree can be pruned to the proper size. Ideally, one would like to split the data into two sets, one for constructing the model and one for testing it. But, unless the data set is sufficiently large, using only part of the data set to build the model reduces its accuracy. Therefore, cross validation is the preferred procedure.
An n-fold cross validation procedure starts with dividing the data set into n subsets, D1, D2, . . . , Dn. The division is random, and each subset contains, as nearly as possible, the same number of cases. Let Dic denote the complement set of Di. Then, n tree structure models, TR1, TR2, . . . , TRn, are built using D1c, D2c, . . . , Dnc, respectively. The held-out cases in Di may then be used to test the validity of TRi and to find the right size of the tree model.
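A minimal sketch of the random n-fold division; building and testing the trees TRi is left to the tree-construction code sketched earlier.

```python
import random

def n_fold_partition(cases, n, seed=0):
    """Randomly divide the data set into n nearly equal subsets D1..Dn and
    yield (complement of Di, Di) pairs: TRi is built on the complement and
    the held-out cases in Di test its validity."""
    shuffled = list(cases)
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::n] for i in range(n)]
    for i in range(n):
        train = [c for j, fold in enumerate(folds) if j != i for c in fold]
        yield train, folds[i]

for train, test in n_fold_partition(range(10), 5):
    print(len(train), len(test))  # 8 2, five times
```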
A measure of the size of a tree structure model, g(TR), the complexity of TR, is defined as follows. Let TT denote the set of terminal nodes of a tree node T. Let C(t) be the cost function of node t if all nodes under t are pruned. Thus,

g(T)=(C(T)−C(TT))/(|TT|−1),

where |TT| is the cardinality of TT, and

C(TT)=Σt∈TT P(t)C(t),

where P(t) is the probability function.
Next, one can define
g(TR)=max{g(T)|T is a node of TR}
Theorem: Let T0 be a node such that g(T0)=g(TR). Then, pruning off all sub-nodes of T0 will not increase the complexity of the tree.
Proof
Let TRN be the tree obtained by pruning off T0 from TR. Every node TN in tree TRN comes from a corresponding node T in TR. If we can show that, for every TN, g(TN)≦g(T), then, by definition, g(TRN)≦g(TR).
There are two scenarios: 1) for node TN, its counterpart T contains T0 as one of its sub-nodes; 2) for node TN, its counterpart T does not contain T0 as a sub-node. In the second scenario, TN and T have the same structure. Therefore, g(TN)=g(T). Now, let us consider the first scenario. If TN has no sub-node, then g(TN)=0≦g(T). Otherwise, by definition,
Since C(TN)=C(T), we have C(T)−C(TT)−(C(TN)−C(TTN))=C(T0)−C(T0T) and |TT|−1−(|TTN|−1)=|T0T|−1. Because g(T0)=g(TR), g(T)≦g(T0), and therefore g(TN)≦g(T).
This theorem establishes a relationship between the size of a tree structure model and its complexity, g(TR). In general, the larger the complexity, the larger the number of nodes in the tree.
Cross validation can point out which complexity value v is likely to produce the most accurate tree structure. Using this v, we can prune the tree generated from the whole data set until its complexity is just below v, as sketched below. This pruned tree is used as the final tree structure model.
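A sketch of this pruning loop, using g(T)=(C(T)−C(TT))/(|TT|−1) from above; for brevity the probability weighting P(t) is omitted (an assumption), and the node layout is a simple dict.

```python
def terminal_stats(node):
    """Return (sum of C(t) over terminals under node, number of terminals).
    Node shape is assumed: {'cost': C(t) if pruned here, 'children': [...]}."""
    if not node["children"]:
        return node["cost"], 1
    parts = [terminal_stats(c) for c in node["children"]]
    return sum(p[0] for p in parts), sum(p[1] for p in parts)

def prune_to_complexity(tree, v):
    """Repeatedly prune off all sub-nodes of the node T0 attaining
    g(T0) = g(TR), until the tree's complexity falls below v."""
    while True:
        worst_g, worst_node = -1.0, None
        stack = [tree]
        while stack:
            node = stack.pop()
            if node["children"]:
                cost_sum, n_term = terminal_stats(node)
                g = (node["cost"] - cost_sum) / (n_term - 1)
                if g > worst_g:
                    worst_g, worst_node = g, node
                stack.extend(node["children"])
        if worst_node is None or worst_g < v:
            return tree  # complexity g(TR) is now below v
        worst_node["children"] = []  # prune: T0 becomes a terminal node

tree = {"cost": 10.0, "children": [
    {"cost": 4.0, "children": []},
    {"cost": 5.0, "children": [{"cost": 2.0, "children": []},
                               {"cost": 2.5, "children": []}]}]}
prune_to_complexity(tree, v=1.0)  # root g = 0.75 < 1.0, so nothing is pruned
```

Now, the model modification step will be described.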
In some cases, the predictor variables can be correlated with each other, and splits of a node based on different parameters can produce similar results. In such cases, it is up to the process engineer who uses the software to identify which parameter is the real cause of the yield problem. To help the engineer identify the possible candidate parameters at any node split, all predictor variables are ranked according to their relative significance if the split were based on them. To be more precise, let Xi be the variable picked by the method on which the split, S*, is based.
For any j≠i, let Sj denote the best split based on Xj. Then, define q(j)=Φ(Sj)/Φ(S*).
Since S* is the best split, 0≦q(j)≦1. Then, when double clicking on a node, a list of all predictor variables ranked by their q values is shown, as illustrated in
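A minimal sketch of this ranking, assuming q(j)=Φ(Sj)/Φ(S*) as set forth above; the Φ values in the example are hypothetical, though the variable names appear earlier in the text.

```python
def rank_predictors(phi_by_variable, best_variable):
    """Rank all predictor variables by q(j) = Phi(Sj) / Phi(S*), their
    relative significance had the node split been based on them."""
    phi_star = phi_by_variable[best_variable]
    ranked = sorted(phi_by_variable, key=lambda v: phi_by_variable[v], reverse=True)
    return [(v, phi_by_variable[v] / phi_star) for v in ranked]

# Hypothetical Phi values for three of the variables named earlier:
print(rank_predictors({"PWELLASH": 0.90, "FINISFI": 0.72, "VTPSP_": 0.30},
                      "PWELLASH"))
```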
All the basic statistical analysis tools are available to help the user to validate the model and identify the yield problem. At each node, a right click of the mouse produces a list of tools available as shown in
After each model is built, the tree can be saved for future predictions. If a new set of parameter values becomes available, it can be fed into the model to generate a prediction of the response value for each case, as sketched below. This functionality can be very handy for the user.
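A minimal sketch of prediction with a saved tree; the node layout, the routing function, and the PWELLASH threshold in the example are all assumptions for illustration.

```python
def predict(tree, case):
    """Route a new case down a saved tree to a terminal node and return the
    value or class f(T) assigned to that node (node shape is assumed)."""
    node = tree
    while node["children"]:
        node = node["children"][node["route"](case)]
    return node["value"]

# A one-split tree on an assumed parameter name and hypothetical threshold:
tree = {"route": lambda c: 0 if c["PWELLASH"] <= 41.5 else 1,
        "children": [{"children": [], "value": 0.92},
                     {"children": [], "value": 0.61}],
        "value": None}
print(predict(tree, {"PWELLASH": 40.0}))  # -> 0.92
```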
While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims.