This disclosure relates generally to computer-based mathematical modeling techniques and, more particularly, to methods and systems for identifying desired distribution characteristics of input parameters of mathematical models and interpretation methods thereof.
Mathematical models, particularly process models, are often built to capture complex interrelationships between input parameters and outputs. Neural networks may be used in such models to establish correlations between input parameters and outputs. Because input parameters may be statistically distributed, these models may also need to be optimized, for example, to find appropriate input values to produce a desired output. Simulation may often be used to provide such optimization.
When used in optimization processes, conventional simulation techniques, such as Monte Carlo or Latin Hypercube simulations, may produce an expected output distribution from knowledge of the input distributions, distribution characteristics, and representative models. G. Galperin et al., “Parallel Monte-Carlo Simulation of Neural Network Controllers,” available at http://www-fp.mcs.anl.gov/ccst/research/reports_pre1998/neural_network/galperin.html, describes a reinforcement learning approach to optimize neural network based models. However, such conventional techniques may be unable to guide the optimization process using interrelationships among input parameters and between input parameters and the outputs. Further, these conventional techniques may be unable to identify opportunities to increase input variation that has little or no impact on output variations. Such conventional techniques may also fail to represent the optimization process and results effectively and efficiently to users of these models.
Methods and systems consistent with certain features of the disclosed systems are directed to solving one or more of the problems set forth above.
One aspect of the present disclosure includes a method for model optimization. The method may include obtaining respective distribution descriptions of a plurality of input parameters to a model indicative of interrelationships between the input parameters and one or more output parameters. The method may also include specifying respective search ranges for the plurality of input parameters and simulating the model to determine a desired set of input parameters based on a zeta statistic of the model. Further, the method may include determining respective desired distributions of the input parameters based on the desired set of input parameters; determining significance levels of the input parameters in interacting with the output parameter based on the simulation and the desired distributions of the input parameters; and presenting the significance levels.
Another aspect of the present disclosure includes a computer system. The computer system may include a console, at least one input device, and a processor. The processor is configured to obtain respective distribution descriptions of a plurality of input parameters to a model indicative of interrelationships between the input parameters and one or more output parameters and to specify respective search ranges for the plurality of input parameters. The processor is also configured to simulate the model to determine a desired set of input parameters based on a zeta statistic of the model and to determine respective desired distributions of the input parameters based on the desired set of input parameters. Further, the processor is configured to determine significance levels of the input parameters in interacting with the output parameter based on the simulation and the desired distributions of the input parameters and to present the significance levels.
Reference will now be made in detail to exemplary embodiments, which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
Neural network model 104 may be any appropriate type of neural network based mathematical model that may be trained to capture interrelationships between input parameters and outputs.
A zeta statistic optimization process 108 may be provided to identify desired value ranges (e.g., desired distributions) of input parameters to maximize the probability of obtaining a desired output or outputs. Zeta statistic may refer to a mathematical concept reflecting a relationship between input parameters, their value ranges, and desired outputs. The zeta statistic ζ may be represented as

ζ = Σi Σj |Sij| (σi/x̄i)(x̄j/σj)   (1)

where x̄i represents the mean or expected value of an ith input; x̄j represents the mean or expected value of a jth output; σi represents the standard deviation of the ith input; σj represents the standard deviation of the jth output; and |Sij| represents the partial derivative or sensitivity of the jth output to the ith input.
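Equation (1) can be sketched in a few lines. This is an illustrative implementation only; it assumes the sensitivity values Sij are already available (in practice they would be derived from the trained neural network model 104):

```python
# Illustrative sketch of the zeta statistic of equation (1).
# The sensitivity matrix S[i][j] is assumed given; deriving it from a
# trained model is outside the scope of this sketch.

def zeta(in_means, in_stds, out_means, out_stds, S):
    """zeta = sum_i sum_j |S_ij| * (sigma_i / xbar_i) * (xbar_j / sigma_j)."""
    total = 0.0
    for i, (xbar_i, sigma_i) in enumerate(zip(in_means, in_stds)):
        for j, (xbar_j, sigma_j) in enumerate(zip(out_means, out_stds)):
            total += abs(S[i][j]) * (sigma_i / xbar_i) * (xbar_j / sigma_j)
    return total
```

With one input (mean 2.0, deviation 0.5), one output (mean 4.0, deviation 1.0), and unit sensitivity, the statistic is (0.5/2.0)·(4.0/1.0) = 1.0.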
Processor 202 may execute sequences of computer program instructions to perform various processes, as explained above. The computer program instructions may be loaded into RAM 204 for execution by processor 202 from a read-only memory (ROM). Storage 216 may be any appropriate type of mass storage provided to store any type of information processor 202 may access to perform the processes. For example, storage 216 may include one or more hard disk devices, optical disk devices, floppy disk devices, or other storage devices to provide storage space.
Console 208 may provide a graphical user interface (GUI) to display information to users of computer system 200, such as outputs 106. Console 208 may include any appropriate type of computer display devices or computer monitors. Input devices 210 may be provided for users to input information into computer system 200. Input devices 210 may include a keyboard, a mouse, or other optical or wireless computer input devices. Further, network interfaces 212 may provide communication connections such that computer system 200 may be accessed remotely through computer networks.
Databases 214-1 and 214-2 may contain model data and any information related to data records under analysis, such as training and testing data. Databases 214-1 and 214-2 may also include analysis tools for analyzing the information in the databases. Processor 202 may also use databases 214-1 and 214-2 to determine correlation between parameters.
As explained above, computer system 200 may perform process 108 to determine desired distributions (e.g., means, standard deviations, etc.) of input parameters.
As shown in
The normal values and ranges of tolerance may be determined based on deviation from target values, discreteness of events, allowable discrepancies, and/or whether the data is in distribution tails or in a certain range of the distribution. In certain embodiments, the normal values and ranges of tolerance may also be determined based on experts' opinion or empirical data in a corresponding technical field. Alternatively, the normal value and range of tolerance of an individual input parameter may be determined by outputs 106. For example, an input parameter may be considered as normal if outputs 106 based on the input parameter are in a normal range.
After obtaining input parameter distribution descriptions (step 302), processor 202 may specify search ranges for the input parameters (step 304). Search ranges may be specified as the normal values and tolerance ranges of individual input parameters. In certain embodiments, search ranges may also include values outside the normal tolerance ranges if there is indication that such out-of-range values may still produce normal outputs when combined with appropriate values of other input parameters.
Processor 202 may set up and start a genetic algorithm as part of the zeta optimization process (step 306). The genetic algorithm may be any appropriate type of genetic algorithm that may be used to find possible optimized solutions based on principles adopted from evolutionary biology. When applying a genetic algorithm to search for a desired set of input parameters, the input parameters may be represented by a parameter list used to drive an evaluation procedure of the genetic algorithm. The parameter list may be called a chromosome or a genome. Chromosomes or genomes may be implemented as strings of data and/or instructions.
Initially, one or several such parameter lists or chromosomes may be generated to create a population. A population may be a collection of a certain number of chromosomes. The chromosomes in the population may be evaluated based on a fitness function or a goal function, and a value of suitability or fitness may be returned by the fitness function or the goal function. The population may then be sorted, with those having better suitability more highly ranked.
The genetic algorithm may generate a second population from the sorted population by using genetic operators, such as, for example, selection, crossover (or reproduction), and mutation. During selection, chromosomes in the population with fitness values below a predetermined threshold may be deleted. Selection methods, such as roulette wheel selection and/or tournament selection, may also be used. After selection, a reproduction operation may be performed upon the selected chromosomes. Two selected chromosomes may be crossed over along a randomly selected crossover point. Two new child chromosomes may then be created and added to the population. The reproduction operation may be continued until the population size is restored. Once the population size is restored, mutation may be selectively performed on the population. Mutation may be performed on a randomly selected chromosome by, for example, randomly altering bits in the chromosome data structure.
Selection, reproduction, and mutation may result in a second generation population having chromosomes that are different from the initial generation. The average degree of fitness may be increased by this procedure for the second generation, since better fitted chromosomes from the first generation may be selected. This entire process may be repeated for any desired number of generations until the genetic algorithm converges. Convergence may be determined if the rate of improvement between successive iterations of the genetic algorithm falls below a predetermined threshold.
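The selection, crossover, and mutation loop described above can be sketched as follows. This is a minimal illustration, assuming real-valued chromosomes and a simple elitist truncation-selection scheme; the disclosure does not prescribe a particular fitness function or selection method:

```python
import random

# Minimal genetic-algorithm sketch: rank the population by fitness,
# keep the better half (selection), refill by one-point crossover
# (reproduction), then randomly perturb genes (mutation).

def run_ga(fitness, ranges, pop_size=20, generations=50, mut_rate=0.1):
    rng = random.Random(0)  # fixed seed so runs are repeatable
    # Initial population: random chromosomes within the search ranges.
    pop = [[rng.uniform(lo, hi) for lo, hi in ranges] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by suitability
        survivors = pop[: pop_size // 2]           # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(ranges)) if len(ranges) > 1 else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            for k, (lo, hi) in enumerate(ranges):  # mutation
                if rng.random() < mut_rate:
                    child[k] = rng.uniform(lo, hi)
            children.append(child)
        pop = survivors + children                 # restored population size
    return max(pop, key=fitness)
```

Because the better-fitted survivors are carried into each new generation, the best chromosome found so far is never lost, which is what drives the average fitness upward across generations.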
When setting up the genetic algorithm (step 306), processor 202 may also set a goal function for the genetic algorithm. As explained above, the goal function may be used by the genetic algorithm to evaluate fitness of a particular set of input parameters. For example, the goal function may include maximizing the zeta statistic based on the particular set of input parameters. A larger zeta statistic may allow larger dispersions for these input parameters, and thus a higher fitness, while still maintaining normal outputs 106. A goal function to maximize the zeta statistic may cause the genetic algorithm to choose a set of input parameters that simultaneously have desired dispersions or distributions.
After setting up and starting the genetic algorithm, processor 202 may cause the genetic algorithm to generate a candidate set of input parameters as an initial population of the genetic algorithm (step 308). The candidate set may be generated based on the search ranges determined in step 304. The genetic algorithm may also choose the candidate set based on user inputs. Alternatively, the genetic algorithm may generate the candidate set based on correlations between input parameters. For example, in a particular application, the value of one input parameter may depend on one or more other input parameters (e.g., power consumption may depend on fuel efficiency, etc.). Further, the genetic algorithm may also randomly generate the candidate set of input parameters as the initial population of the genetic algorithm.
Once the candidate set of stochastic input parameters is generated (step 308), processor 202 may run a simulation operation to obtain output distributions (step 310). For example, processor 202 may provide the candidate set of input parameters to neural network model 104, which may generate a corresponding set of outputs 106. Processor 202 may then derive the output distribution based on the set of outputs. Further, processor 202 may calculate various zeta statistic parameters (step 312).
As shown in the accompanying drawings, processor 202 may calculate a process capability index Cpk for each output according to

Cpk = min{(x̄ − LCL)/3σ, (UCL − x̄)/3σ}

where LCL is a lower control limit, UCL is an upper control limit, x̄ is a mean value of the output, and σ is a standard deviation of the output.
Once the values of Cpk for all outputs are calculated, processor 202 may find the minimum value of Cpk as Cpk,worst (step 404). Concurrently, processor 202 may also calculate the zeta value ζ combined for all outputs (step 406). The zeta value ζ may be calculated according to equation (1).
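The per-output capability calculation and the worst-case reduction can be sketched as below, assuming the standard Cpk definition with the control limits named above (the tuple layout of `outputs` is an illustrative choice, not the disclosure's data format):

```python
# Sketch of the capability steps: Cpk for an individual output, then
# the minimum (worst) Cpk across all outputs.

def cpk(mean, std, lcl, ucl):
    """Process capability: distance of the mean to the nearer control
    limit, in units of three standard deviations."""
    return min((mean - lcl) / (3 * std), (ucl - mean) / (3 * std))

def worst_cpk(outputs):
    """outputs: iterable of (mean, std, lcl, ucl) tuples, one per output."""
    return min(cpk(*o) for o in outputs)
```

For a centered output (mean 5, std 1, limits 2 and 10), Cpk = min(3/3, 5/3) = 1.0; tightening the lower limit to 4 drops it to 1/3, which then becomes Cpk,worst.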
Returning to
If the genetic algorithm does not converge on a particular candidate set of input parameters (step 314; no), the genetic algorithm may proceed to create a next generation of chromosomes, as explained above. The zeta optimization process may return to step 308, where the genetic algorithm may create a new candidate set of input parameters for the next iteration. The genetic algorithm may then recalculate the zeta statistic parameters based on the newly created candidate set of input parameters or chromosomes (steps 310 and 312).
On the other hand, if the genetic algorithm converges on a particular candidate set of input parameters (step 314; yes), processor 202 may determine that an optimized input parameter set has been found. Processor 202 may further determine means and standard deviations of input parameters based on the optimized input parameter set (step 316). That is, processor 202 may determine desired distributions (e.g., means and standard deviations) of input parameters based on the desired or optimized input parameter set.
Once the desired distributions are determined, processor 202 may define a valid input space that may include any input parameter within the desired distributions. Additionally, processor 202 may create a database to store information generated during the zeta optimization process. For example, processor 202 may store data records of the input parameters and output parameters and/or impact relationships between input parameters and outputs.
In one embodiment, statistical distributions of certain input parameters may be impossible or impractical to control. For example, an input parameter may be associated with a physical attribute of a device, such as a dimensional attribute of an engine part, or the input parameter may be associated with a constant variable within virtual sensor process models, etc. Such input parameters may be included in the zeta statistic calculations at their constant values and/or fixed statistical distributions, so that desired distributions may be searched for or identified for the other input parameters.
Further, optionally, more than one neural network model may be established. Multiple established neural network models may be simulated using any appropriate type of simulation method, such as statistical simulation. Output parameters based on simulation of these multiple neural network models may be compared to select a best-fit neural network model based on predetermined criteria, such as smallest variance, etc. The selected best-fit neural network model 104 may be deployed in applications.
Further, processor 202 may process and present stochastic simulation results of the zeta optimization process (step 318). Processor 202 may process and interpret the results of the zeta optimization process by any appropriate algorithm. For example, processor 202 may use any appropriate tree-related method, such as Chi-square automatic interaction detection (CHAID), exhaustive CHAID, classification & regression trees (C&RT), etc., and/or rule-induction related methods based on data obtained during the optimization process.
If the data indicates that the value of a particular input parameter varies significantly within the search range with little change to the values of the output parameters, processor 202 may identify the particular input parameter as one having only a minor effect on the output. An impact level or significance level may be predetermined by processor 202 to determine whether the effect is minor (i.e., below the impact level). Processor 202 may also output such information to users or other application software programs. For instance, in a design process, such information may be used to increase design tolerance of a particular design parameter. In a manufacturing process, such information may also be used to reduce the cost of a particular part.
In certain embodiments, processor 202 may interpret the results based on a CHAID method. CHAID, or Chi-square automatic interaction detection, as used herein, may refer to a classification tree technique that evaluates interactions among a plurality of predictors and/or displays the results in a tree diagram.
As shown in
Trunk node 502 may represent the results database, i.e., data records collected during the zeta optimization process including values of both input parameters and output parameters. Processor 202 may create trunk node 502 based on the data records from the zeta optimization process as explained above. Processor 202 may then create first layer nodes 504, 506, and 508. A layer node, or branch node, may represent certain data records of the results database under limitations set by its parent nodes.
Processor 202 may create first layer nodes 504, 506, and 508 based on values of the strongest predictor of the output parameters. A predictor, as used herein, may refer to a variable, corresponding to an input parameter, that may represent certain interrelationships between the input parameter and output parameters and may also reflect a statistically significant discrimination among different values of the output parameters. A predictor may be categorical, i.e., with discrete values, or continuous, i.e., with continuous values. Processor 202 may create a series of predictors for creating layer nodes of tree diagram 500. For continuous predictors, processor 202 may optionally convert the continuous predictors into categorical predictors by dividing the respective continuous distributions of the input parameters into a number of categories with an approximately equal number of observations.
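The conversion of a continuous predictor into categories with approximately equal numbers of observations can be sketched as a quantile-binning helper. This helper is hypothetical, introduced only to illustrate the step; the disclosure does not specify a particular binning routine:

```python
# Sketch of equal-count (quantile) binning: assign each observation a
# category label 0..n_bins-1 so that categories hold approximately
# equal numbers of observations.

def equal_count_bins(values, n_bins):
    order = sorted(range(len(values)), key=lambda i: values[i])
    cats = [0] * len(values)
    for rank, idx in enumerate(order):
        # Rank-based assignment keeps the per-category counts balanced.
        cats[idx] = min(rank * n_bins // len(values), n_bins - 1)
    return cats
```

For six observations and three bins, each bin receives exactly two observations, in order of increasing value.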
After preparing a series of predictors, processor 202 may start a CHAID algorithm to automatically determine how to group or merge the values of the predictors into a manageable number of categories (e.g., 2, 3, 4, etc.). Processor 202 may determine possible interactions between the output parameters and the predictors. To infer from interactions in the data records possible dependencies between the input parameters and the output parameters, processor 202 may perform Chi-squared tests of independence within the CHAID algorithm. The test statistic of the Chi-squared test may accumulate the (standardized) squared deviations between observed and expected frequencies of output parameters falling within a category of the predictor, and large values of the test statistic may indicate more interaction between the analyzed parameters. Processor 202 may merge categories showing little or no interaction into a single group, or may merge such categories with the category having a large amount of interaction.
Further, processor 202 may calculate a p-value for each of the predictors, each corresponding to an input parameter, to determine a split point. The p-value, as used herein, may refer to an error probability, compared against a significance level, that may indicate a statistically significant interaction between a particular input parameter and the output parameters. Processor 202 may choose the predictor, or the corresponding input parameter, with the smallest p-value, i.e., the predictor that may yield the most significant split, as the split point.
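The Chi-squared test of independence and the smallest-p-value split choice can be sketched as below. Restricting the example to 2×2 contingency tables (one degree of freedom, so the p-value reduces to erfc(√(χ²/2))) is an assumption made to keep it self-contained with the standard library; CHAID itself handles larger tables:

```python
from math import erfc, sqrt

# Sketch of the split-point choice: Chi-squared test of independence on
# a 2x2 table (binary predictor category vs. binary output class), then
# pick the predictor with the smallest p-value.

def chi2_p_2x2(table):
    """table: [[a, b], [c, d]] observed counts; returns the p-value
    (df = 1, assuming all marginal totals are nonzero)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row[i] * col[j] / n
            stat += (obs - expected) ** 2 / expected
    return erfc(sqrt(stat / 2))  # chi-squared survival function, df = 1

def best_split(tables):
    """tables: {predictor_name: 2x2 table}; smallest p-value wins."""
    return min(tables, key=lambda name: chi2_p_2x2(tables[name]))
```

A predictor whose categories discriminate strongly between output classes (e.g., counts [[30, 10], [10, 30]]) yields a tiny p-value and is chosen over one whose table matches its expected frequencies exactly (p = 1).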
For example, if the predictor corresponding to x3 has the smallest p-value, processor 202 may select x3 as the split point for the first layer. Further, processor 202 may also determine first layer nodes 504, 506, and 508, each corresponding to a different range of the predictor of x3. The total number of nodes, i.e., 3, is used for exemplary purposes only and may be set to any appropriate number based on the CHAID algorithm, which may merge different ranges or categories into a single range or category, thus changing the number of layer nodes.
Processor 202 may repeat the above process to further determine the next significant split or splits and to create further layer nodes based on the Chi-square test and the p-values. For example, processor 202 may determine split points x1 and x6, and may create second layer nodes 510 and 512, corresponding to x1, and second layer nodes 514 and 516, corresponding to x6.
If processor 202 determines that the smallest p-value for any predictor is greater than a predetermined threshold value, then no further split will be performed, and a respective node may be referred to as a terminal node or leaf node. For example, after split point xi, processor 202 may determine that layer nodes 518 and 520 are terminal nodes. If processor 202 determines that no further split needs to be performed on any branch node, processor 202 may complete the above tree building process.
Processor 202 may also interpret tree diagram 500 based on split points and layer nodes. Processor 202 may interpret and identify input parameters that have significant impact on output parameters based on split points corresponding to the input parameters. For example, processor 202 may interpret that input parameter x3 (the first split point) may be the most significant factor to be considered during a design process related to the output parameters, because changes made to input parameter x3 would have the most significant effect on the output parameters. That is, x3 may have the highest significance level for interacting with the output parameters.
Processor 202 may also interpret significance of other input parameters in a sequence determined by tree diagram 500. For example, processor 202 may determine that input parameters x1 and x6 may have the second most significant effect on the output parameters, and that input parameter xi may have the least significant effect on the output parameters. Further, if any input parameter is not included in split points of tree diagram 500, processor 202 may determine that the omitted input parameter may have little or no effect on the output parameters.
Processor 202 may further use such information to guide the zeta optimization process in a particular direction based on the impact probability or significance level, such as when a new candidate set of input parameters is generated. For example, the optimization process may focus on the input parameters that have significant impact on output parameters.
Additionally or optionally, processor 202 may also determine significant ranges of input parameters based on the layer nodes. For example, processor 202 may determine that ranges represented by layer nodes 504 and 508 are more significant than the range represented by layer node 506 in that more subsequent layer nodes follow layer nodes 504 and 508.
Processor 202 may also present the interpretation results. For example, processor 202 may output the results to other application software programs or, alternatively, display the results as graphs on console 208. Further, processor 202 may display the results in any appropriate format, such as a tree diagram as shown in
The disclosed zeta statistic process methods and systems provide a desired solution for effectively identifying input target settings and allowed dispersions in one optimization routine. The disclosed methods and systems may also be used to efficiently determine areas where input dispersion can be increased without significant computational time. The disclosed methods and systems may also be used to guide outputs of mathematical or physical models to stability, where outputs are relatively insensitive to variations in the input domain. Performance of other statistical or artificial intelligence modeling tools may be significantly improved when incorporating the disclosed methods and systems.
Certain advantages may be illustrated by, for example, designing and manufacturing an engine component using the disclosed methods and systems. The engine component may be assembled from three parts. Under conventional practice, all three parts may be designed and manufactured with certain precision requirements (e.g., a tolerance range). If the final engine component assembled does not meet quality requirements, often the precision requirements for all three parts may be increased until these parts can produce a good quality component. On the other hand, the disclosed methods and systems may be able to simultaneously find desired distributions or tolerance ranges of the three parts to save time and cost. The disclosed methods and systems may also find, for example, that one of the three parts has only a minor effect on the component quality. The precision requirement for the part with the minor effect may be lowered to further save manufacturing cost.
The disclosed zeta statistic process methods and systems may also provide a more effective solution to process modeling containing competitive optimization requirements. Competitive optimization may involve finding the desired input parameters for each output parameter independently, then performing one final optimization to unify the input process settings while staying as close as possible to the best possible outcome found previously. The disclosed zeta statistic process methods and systems may overcome potential risks of the competitive optimization (e.g., reliance on sub-optimization to create a reference for future optimizations, difficult or impractical trade-offs between two equally balanced courses of action, and unstable target values with respect to input process variation) by simultaneously optimizing a probabilistic model of competing requirements on input parameters. Further, the disclosed methods and systems may simultaneously find desired distributions of input parameters without prior domain knowledge and may also find effects of variations between input parameters and output parameters.
Further, the disclosed methods and systems may provide desired interpretation and presentation of optimization results based on optimization data. By using the data records collected during the optimization process, more accurate and more representative data may be obtained than using conventional techniques, which often use original data records. Moreover, such presentation may provide a user with a visual view of dependencies and other interrelationships among input parameters and/or between input parameters and output parameters.
Other embodiments, features, aspects, and principles of the disclosed exemplary systems will be apparent to those skilled in the art and may be implemented in various environments and systems.
This application is a continuation-in-part (CIP) application of and claims the priority and benefit of U.S. patent application Ser. No. 11/101,554, filed Apr. 8, 2005 now abandoned.
| 10-332621 | Dec 1998 | JP |
| 11-351045 | Dec 1999 | JP |
| 2002-276344 | Sep 2002 | JP |
| WO9742581 | Nov 1997 | WO |
| WO02057856 | Jul 2002 | WO |
| WO2006017453 | Feb 2006 | WO |
| 2006110242 | Oct 2006 | WO |
| Entry |
|---|
| Diepen et al., “Evaluating chi-squared automatic interaction detection,” 2006, Information Systems, vol. 31, pp. 814-831. |
| “CHAID and Exhaustive CHAID Algorithms” Jul. 2004, 8 pages. |
| Liu et al. “Specification Tests in the Efficient Method of Moments Framework with Application to the Stochastic Volatility Models”, Sep. 1998, 31 pages. |
| SPSS, Inc. “AnswerTree 2.0 User's Guide” 1998, 203 pages. |
| Fowdar et al., “On the Use of Fuzzy Trees for Solving Classification Problems with Numeric Outcomes,” Fuzzy Systems, 2005, pp. 436-441, The 14th IEEE International Conference on Reno, Nevada, USA May 22-25, 2005, Piscataway, USA. |
| Wilkinson Leland, “Tree Structured Data Analysis: AID, CHAID and CART,” Jan. 1, 1992, pp. 1-10, XP 002265278, Chicago, USA. |
| Galperin, G. et al., “Parallel Monte-Carlo Simulation of Neural Network Controllers,” available at http://www-fp.mcs.anl.gov/ccst/research/reports_pre1998/neural_network/galperin.html (6 pages). |
| Allen et al., “Supersaturated Designs That Maximize the Probability of Identifying Active Factors,” 2003 American Statistical Association and the American Society for Quality, Technometrics, vol. 45, No. 1, Feb. 2003, pp. 1-8. |
| April, Jay et al., “Practical Introduction to Simulation Optimization,” Proceedings of the 2003 Winter Simulation Conference, pp. 71-78. |
| Bandte et al., “Viable Designs Through a Joint Probabilistic Estimation Technique,” SAE International, and the American Institute of Aeronautics and Astronautics, Inc., Paper No. 1999-01-5623, 1999, pp. 1-11. |
| Beisl et al., “Use of Genetic Algorithm to Identify the Source Point of Seepage Slick Clusters Interpreted from Radarsat-1 Images in the Gulf of Mexico,” Geoscience and Remote Sensing Symposium, 2004, Proceedings, 2004 IEEE International Anchorage, AK, Sep. 20-24, 2004, vol. 6, Sep. 20, 2004, pp. 4139-4142. |
| Berke et al., “Optimum Design of Aerospace Structural Components Using Neural Networks,” Computers and Structures, vol. 48, No. 6, Sep. 17, 1993, pp. 1001-1010. |
| Bezdek, “Genetic Algorithm Guided Clustering,” IEEE 0-7803-1899-4/94, 1994, pp. 34-39. |
| Brahma et al., “Optimization of Diesel Engine Operating Parameters Using Neural Networks,” SAE Technical Paper Series, 2003-01-3228, Oct. 27-30, 2003 (11 pages). |
| Chau et al., “Use of runs test to assess cardiovascular autonomic function in diabetic subjects,” Abstract, Diabetes Care, vol. 17, Issue 2, pp. 146-148, available at http://care.diabetesjournals.org/cgi/content/abstract/17/2/146. |
| Chung et al., “Process Optimal Design in Forging by Genetic Algorithm,” Journal of Manufacturing Science and Engineering, vol. 124, May 2002, pp. 397-408. |
| Cox et al., “Statistical Modeling for Efficient Parametric Yield Estimation of MOS VLSI Circuits,” IEEE, 1983, pp. 242-245. |
| De Maesschalck et al., “The Mahalanobis Distance,” Chemometrics and Intelligent Laboratory Systems, vol. 50, No. 1, Jan. 2000, pp. 1-18. |
| Dikmen et al., “Estimating Distributions in Genetic Algorithms,” ISCIS 2003, LNCS 2869, 2003, pp. 521-528. |
| Gletsos et al., “A Computer-Aided Diagnostic System to Characterize CT Focal Liver Lesions: Design and Optimization of a Neural Network Classifier,” IEEE Transactions on Information Technology in Biomedicine, vol. 7, No. 3, Sep. 2003, pp. 153-162. |
| Grichnik et al., “An Improved Metric for Robust Engineering,” Proceedings of the 2007 International Conference on Scientific Computing, Las Vegas, NV (4 pages). |
| Grichnik et al., Copending U.S. Appl. No. 11/529,267, filed Sep. 29, 2006, entitled Virtual Sensor Based Engine Control System and Method. |
| Grichnik et al., Copending U.S. Appl. No. 11/730,363, filed Mar. 30, 2007, entitled Prediction Based Engine Control System and Method. |
| Grichnik et al., Copending U.S. Appl. No. 11/812,164, filed Jun. 15, 2007, entitled Virtual Sensor System and Method. |
| Grichnik et al., Copending U.S. Appl. No. 11/979,408, filed Nov. 2, 2007, entitled Virtual Sensor Network (VSN) System and Method. |
| Holland, John H., “Genetic Algorithms,” Scientific American, Jul. 1992, pp. 66-72. |
| Hughes et al., “Linear Statistics for Zeros of Riemann's Zeta Function,” C.R. Acad. Sci. Paris, Ser. I 335 (2002), pp. 667-670. |
| Ko et al., “Application of Artificial Neural Network and Taguchi Method to Preform Design in Metal Forming Considering Workability,” International Journal of Machine Tools & Manufacture, vol. 39, No. 5, May 1999, pp. 771-785. |
| Kroha et al., “Object Server on a Parallel Computer,” 1997 IEEE 0-8186-8147-0/97, pp. 284-288. |
| Mavris et al., “A Probabilistic Approach to Multivariate Constrained Robust Design Simulation,” Society of Automotive Engineers, Inc., Paper No. 975508, 1997, pp. 1-11. |
| National Institute of Health, “10-year CVD Risk Calculator” available at http://hin.nhlbi.nih.gov/atpiii/calculator.asp?usertype=prof, printed Aug. 2, 2005, 2 pages. |
| Obayashi et al., “Multiobjective Evolutionary Computation for Supersonic Wing-Shape Optimization,” IEEE Transactions on Evolutionary Computation, vol. 4, No. 2, Jul. 2000, pp. 182-187. |
| Simpson et al., “Metamodels for Computer-Based Engineering Design: Survey & Recommendations,” Engineering with Computers, 2001, vol. 17, pp. 129-150. |
| Solar Turbines, “InSight System,” Oct. 19, 2006, http://mysolar.cat.com. |
| Solar Turbines, “InSight Systems, Machinery Management Solutions,” Oct. 19, 2006. |
| Song et al., “The Hyperellipsoidal Clustering Using Genetic Algorithm,” 1997 IEEE International Conference on Intelligent Processing Systems, Oct. 28-31, 1997, Beijing, China, pp. 592-596. |
| Sytsma, Sid, “Quality and Statistical Process Control,” available at http://www.sytsma.com/tqmtools/ctlchtprinciples.html, printed Apr. 7, 2005, 6 pages. |
| Taguchi et al., “The Mahalanobis-Taguchi Strategy,” A Pattern Technology System, John Wiley & Sons, Inc., 2002, 234 pages. |
| Taylor et al., “Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results,” NIST Technical Note 1297, 1994 Edition, United States Dept. of Commerce, National Institute of Standards and Technology (25 pages). |
| Thompson, G.J. et al., “Neural Network Modelling of the Emissions and Performance of a Heavy-Duty Diesel Engine,” Proc. Instn. Mech. Engrs., vol. 214, Part D (2000), pp. 111-126. |
| Traver, Michael L. et al., “Neural Network-Based Diesel Engine Emissions Prediction Using In-Cylinder Combustion Pressure,” International Spring Fuels & Lubricants Meeting & Exposition, SAE Technical Paper Series, May 3-6, 1999, 17 pages. |
| Woodall, Tsui et al., “A Review and Analysis of the Mahalanobis-Taguchi System,” Technometrics, Feb. 2003, vol. 45, No. 1 (15 pages). |
| Wu et al., “Cam-phasing Optimization Using Artificial Neural Networks as Surrogate Models—Fuel Consumption and Nox Emissions,” SAE Technical Paper Series, 2006-01-1512, Apr. 3-6, 2006 (19 pages). |
| Yang et al., “Similar Cases Retrieval from the Database of Laboratory Test Results,” Journal of Medical Systems, vol. 27, No. 3, Jun. 2003, pp. 271-282. |
| Yuan et al., “Evolutionary Fuzzy C-Means Clustering Algorithm,” 1995 IEEE 0-7803-2461-7/95, pp. 2221-2226. |
| Office Action in U.S. Appl. No. 11/101,554 dated Dec. 24, 2009 (2 pages). |
| Office Action in U.S. Appl. No. 11/101,554 dated Apr. 21, 2009 (18 pages). |
| Office Action in U.S. Appl. No. 11/101,554 dated Oct. 7, 2008 (12 pages). |
| Traver, Michael L. et al., “A Neural Network-Based Virtual NOx Sensor for Diesel Engines,” West Virginia University, Mechanical and Aerospace Engineering Dept., Morgantown, WV, 7 pages (Apr. 2000). |
| Number | Date | Country | |
|---|---|---|---|
| 20080021681 A1 | Jan 2008 | US |
| Number | Date | Country | |
|---|---|---|---|
| Parent | 11101554 | Apr 2005 | US |
| Child | 11882189 | US |