Robust distance measures for on-line monitoring

Information

  • Patent Grant
  • Patent Number
    8,311,774
  • Date Filed
    Friday, December 14, 2007
  • Date Issued
    Tuesday, November 13, 2012
Abstract
An apparatus and associated method are utilized for monitoring an operation of a system characterized by operational parameters. A non-parametric empirical model generates estimates of parameter values in response to receiving a query vector of monitored parameters for a model characterizing the system. A distance estimation engine (a) determines robust distances between the query vector and each of a set of predetermined historical vectors for the non-parametric empirical model based on an implementation of an elemental kernel function; (b) determines weights for the monitored parameters based on the robust distances; and (c) combines the weights with the predetermined historical vectors to make predictions for the system.
Description
BACKGROUND OF THE INVENTION

Traditionally, the calibration of safety critical nuclear instrumentation has been performed at each refueling cycle. However, many nuclear plants have moved toward condition-directed rather than time-directed calibration. This condition-directed calibration is accomplished through the use of on-line monitoring which commonly uses an autoassociative predictive modeling architecture to assess instrument channel performance. An autoassociative architecture predicts a group of correct sensor values when supplied with a group of sensor values that is corrupted with process and instrument noise, and could also contain faults such as sensor drift or complete failure.


In the U.S. nuclear power industry, millions of dollars are spent annually on the calibration of instrument chains that are performing within the required specifications. For the past twenty years, several nuclear utilities have investigated methods to monitor the calibration of safety critical process instruments. In 2000, the U.S. Nuclear Regulatory Commission (NRC) issued a safety evaluation report (SER) on an EPRI-submitted Topical Report (TR) 104965, “On-Line Monitoring of Instrument Channel Performance”. This SER concluded that the generic concept of on-line monitoring (OLM) for tracking instrument performance as discussed in the topical report is acceptable. However, additional requirements were identified that must be addressed by plant specific license amendments if the calibration frequency of safety-related instrumentation is to be relaxed. Since the applicability of an OLM system is directly related to the ability of an empirical model to correctly predict sensor values when supplied with faulty data, methods must be developed to ensure that robust empirical models can be built.


The autoassociative architecture for predicting correct sensor values has also been adapted for use in equipment fault detection and health monitoring. Accordingly, it is known to provide a nonparametric empirical model such as a kernel regression model or a similarity-based model that generates estimates of sensor values responsive to input of measurements of those sensor values in real-time. The estimates are subtracted from the measured values to provide residuals, which are used to detect deviations indicative of incipient equipment failure. Such approaches are known from, for example, U.S. Pat. No. 4,937,763 to Mott and U.S. Pat. No. 5,764,509 to Gross et al. In these approaches, a kernel function incorporating a distance function is used to compare the measured values of the sensors, arranged as an observation vector, to a set of reference observations. The kernel function, also called a similarity operator, returns a scalar value indicative of the similarity of the input observation vector to each of the reference observation vectors, and these scalar values are used in generating an estimate observation of the sensor values as an adaptive linear combination of at least some of the reference observations. Kernel regression and similarity-based modeling differ in the details of how the adaptive linear combination is formed; however, the kernel function is used in both instances. The scalar value or similarity value of the kernel function typically is designed to range between zero and one, where a value of one indicates the compared vectors are identical, and values approaching zero indicate increasing dissimilarity or distance between the vectors.


One of the drawbacks of the kernel functions in use is susceptibility to outlier inputs, especially when the kernel function is executed on the elements of the compared vectors. In such a case, the kernel function compares individual like elements of the vectors, generates a scalar comparison outcome for each element, and then combines those to form an observation-level scalar value. When a particular sensor reading is very different from the corresponding sensor reading in a reference observation, the observation-level kernel result can be dominated by the outlier sensor value, resulting in a lower similarity scalar value for the comparison of the input vector to the reference observation in question than the other sensor readings would otherwise imply.


SUMMARY OF THE INVENTION

The invention provides improved kernel-based model performance with more robust distance metrics, for sensor calibration and equipment health monitoring. Accordingly, robust distance measures for use in nonparametric, similarity-based models are disclosed. The alternative robust distance functions have performance advantages for the common task of sensor drift detection. In particular, a robust Euclidean distance function according to the invention produces significant robustness improvements in nonparametric, similarity-based models, such as kernel regression and the multivariate state estimation technique (MSET).


The invention can be used in software for monitoring the health of equipment and sensors, especially for nonparametric empirical model based systems. Accordingly, equipment is instrumented with sensors for determining a variety of physical or statistical measurements of equipment performance; the sensor data is provided to the software which generates estimates of the sensor data using the nonparametric empirical model; and the estimates are compared to the measured values to determine if an incipient deviation exists. Residuals can be processed through a variety of alerting, trending and pattern recognition techniques, to provide an autonomous software system for annunciation of probable and/or incipient equipment failures. People responsible for maintaining or operating the equipment can rely on the software to call out exceptional conditions in the equipment requiring intervention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 comprises charts of (a) data from turbine pressure sensors and (b) data from steam pressure sensors in a nuclear power plant, used to train nonparametric estimation models as may be used with embodiments of the present invention;



FIG. 2 is a bar graph showing the computed accuracy for five sensors in a nonparametric estimation model, for three alternatives of the model employing (a) an ordinary Euclidean distance metric, (b) an L1-norm distance metric, and (c) a robust Euclidean distance metric according to an embodiment of the invention;



FIG. 3 is a bar graph showing the computed robustness for five sensors in a nonparametric estimation model, for three alternatives of the model employing (a) an ordinary Euclidean distance metric, (b) an L1-norm distance metric, and (c) a robust Euclidean distance metric according to an embodiment of the invention;



FIG. 4 is a bar graph showing the computed spillover for five sensors in a nonparametric estimation model, for three alternatives of the model employing (a) an ordinary Euclidean distance metric, (b) an L1-norm distance metric, and (c) a robust Euclidean distance metric according to an embodiment of the invention; and



FIG. 5 illustrates a monitoring apparatus 500 for monitoring a monitored system 505 according to an embodiment of the invention.





DETAILED DESCRIPTION OF THE INVENTION

An empirical model's architecture may be either defined by a set of parameters and functional relationships (parametric) or a set of data and algorithmic estimation procedures (nonparametric). In a parametric model, training data is used to fit the model to the data according to a pre-defined mathematical structure. For example, consider the following polynomial model:

y = b0 + b1·x1 + b2·x2 + b3·x1·x2 + b4·x1² + b5·x2²  (1)


In order to completely define this model for a given set of training observations, the polynomial coefficients b0 through b5 are optimized to minimize some objective function, usually the sum of the squared error (SSE). Once the optimal polynomial coefficients have been estimated, the model is completely specified by Equation 1 and the estimated coefficients. Therefore, a parametric model may be roughly defined as a model that may be completely specified by a set of parameters and a functional relationship for applying these parameters to new data in order to estimate the response.
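For illustration, the coefficients of a model such as Equation 1 can be fit by ordinary least squares, which minimizes the SSE. The following is a minimal sketch using NumPy; the training data and coefficient values are invented for the example.

```python
import numpy as np

# Synthetic training data for the two inputs x1, x2 (invented for illustration).
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, 50)
x2 = rng.uniform(0.0, 1.0, 50)
# Noise-free response generated from known coefficients b0..b5.
y = 1.0 + 2.0 * x1 - 0.5 * x2 + 0.3 * x1 * x2 + 1.0 * x1**2 - 0.7 * x2**2

# Design matrix matching Equation 1: [1, x1, x2, x1*x2, x1^2, x2^2].
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Least squares minimizes the SSE, yielding the coefficients b0..b5.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noise-free data the fit recovers the generating coefficients exactly (to numerical precision); with real, noisy data the recovered coefficients would only approximate them.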


A non-parametric model, by contrast, stores historical data exemplars in memory and processes them when a new query is made. For instance, rather than modeling the whole input space with a parametric model such as a neural network or linear regression, local non-parametric techniques may be used to construct a local model in the immediate region of the query. These models are constructed “on the fly,” not beforehand: when a query is made, the algorithm locates historical exemplars in its vicinity and performs a weighted regression with the nearby observations. The observations are weighted with respect to their proximity to the query point. In order to construct a robust local model, one must define a distance function to measure what is considered to be local to the query, implement locally weighted regression, and in some cases consider additional regularization techniques.


As an example, the mathematical framework of a modeling technique such as autoassociative kernel regression (AAKR) is composed of three basic steps. First, the distance between a query vector (the observation comprised of the readings of the multiple sensors in the model) and each of the historical exemplar (memory) vectors is computed using the conventional Euclidean distance or L2-norm:










u_j = √( Σ_{i=1..n} ( x_{q,i} − m_{j,i} )² )  (2)








where u_j is the distance between the query vector and the jth memory vector, n is the number of variables in the data set, x_{q,i} is the ith variable of the query vector, and m_{j,i} is the ith variable of the jth memory vector.
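A minimal sketch of Equation 2, computing the L2-norm distance from a query vector to every memory vector at once (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def euclidean_distances(x_q, memory):
    """Equation 2: Euclidean (L2-norm) distance from the query vector x_q,
    shape (n,), to each of the M memory vectors, shape (M, n).
    Returns an array of M distances u_j."""
    return np.sqrt(np.sum((memory - x_q) ** 2, axis=1))
```

For example, with a query at the origin and memory vectors [3, 4] and [0, 0], the distances come out as 5.0 and 0.0.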


Second, these distances are used to determine weights by evaluating the standard Gaussian kernel, expressed by:









w = K(u, h) = ( 1 / √( 2π·h² ) ) · exp( −u² / h² )  (3)








where h is the kernel's bandwidth. Finally, these weights are combined with the memory vectors to make predictions according to:











x̂_q = ( Σ_{i=1..M} w_i · m_i ) / ( Σ_{i=1..M} w_i )  (4)







Here, w_i are the weights, m_i are the memory vectors, M is the number of memory vectors, and x̂_q is the prediction for the query vector. Since the monitoring system's objective is to detect and quantify sensor drift, the model should be made as immune as possible to sensor drift. In order to improve the robustness of the AAKR modeling routine, distance functions other than the standard Euclidean distance may be used. Before discussing the alternative distance functions, the parameters used to measure model performance must be discussed.
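The three steps of Equations 2 through 4 can be sketched together as a single AAKR prediction routine. This is a minimal illustration under the equations above, not the patented implementation; the function name is invented.

```python
import numpy as np

def aakr_predict(x_q, memory, h):
    """Autoassociative kernel regression (Equations 2-4).
    x_q: query vector, shape (n,); memory: M memory vectors, shape (M, n);
    h: Gaussian kernel bandwidth. Returns the prediction x_hat_q, shape (n,)."""
    # Eq. 2: Euclidean distance from the query to each memory vector.
    u = np.sqrt(np.sum((memory - x_q) ** 2, axis=1))
    # Eq. 3: Gaussian kernel weights (the 1/sqrt(2*pi*h^2) factor cancels in Eq. 4).
    w = np.exp(-u**2 / h**2) / np.sqrt(2.0 * np.pi * h**2)
    # Eq. 4: weighted average of the memory vectors.
    return (w @ memory) / np.sum(w)
```

If the query coincides with one memory vector and all others are far away relative to the bandwidth, the prediction is essentially that memory vector, as expected from the weighting.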


The performance of autoassociative OLM systems is measured in terms of their accuracy, robustness, and spillover. Accuracy measures the ability of the model to correctly and accurately predict sensor values and is normally presented as the mean squared error (MSE) between the prediction and the correct sensor value. Robustness measures the ability of the model to make correct sensor predictions when the respective sensor value is incorrect due to some sort of fault. Spillover measures the effect a faulty sensor input has on the other sensor predictions in the model. An ideal system would be accurate and would not have sensor predictions affected by degraded inputs.


The most basic form of the AAKR modeling technique makes use of the Euclidean distance or L2-norm described above in Equation 2. Since this distance function squares the individual differences, the effects of a faulty input may be amplified, resulting in parameter predictions which are more affected by input variations and therefore less robust. In order to improve robustness, it is desirable to have distance measures which are not affected by errant sensor readings.


A first robust distance function is the L1-norm, which is defined by the following equation.










u_j = Σ_{i=1..n} | x_{q,i} − m_{j,i} |  (5)








Notice that rather than square the individual differences, the L1-norm uses the absolute value. This alteration provides a modest improvement in robustness, but the distance will still be affected by faulty input. Therefore, an additional step can be taken in the robust distance function to remove faulty input from the distance calculation and improve model robustness. Accordingly, the largest elemental difference contributing to the distance metric is removed, as shown in the following equation:










u_j = √( Σ_{i=1..n} ( x_{q,i} − m_{j,i} )² − max_{i=1..n} [ ( x_{q,i} − m_{j,i} )² ] )  (6)







Here, max_{i=1..n} [ ( x_{q,i} − m_{j,i} )² ] is the maximum squared difference of the query vector from the jth memory vector. Simply speaking, one “bad performer” is assumed to exist and its influence is removed from the calculation. To more clearly illustrate Equation 6, consider the following example vectors.

x_q = [0.9501 0.2311 0.6068 0.4860]
m_j = [0.8913 1.7621 0.4565 0.0185]

The squared differences are found to be:

( x_{q,i} − m_{j,i} )² = [0.0035 2.3438 0.0226 0.2185]  (7)

Notice that the largest squared difference is 2.3438. Therefore, the robust Euclidean distance is defined to be the square root of the sum of the squared distances minus the largest squared difference.

u_j = √( 2.5884 − 2.3438 ) = 0.4946  (8)

According to the invention, the robust Euclidean distance is the Euclidean distance with the largest distance or worst performer removed.
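The worked example of Equations 7 and 8 can be reproduced directly in a few lines:

```python
import numpy as np

x_q = np.array([0.9501, 0.2311, 0.6068, 0.4860])
m_j = np.array([0.8913, 1.7621, 0.4565, 0.0185])

# Equation 7: elemental squared differences.
sq_diff = (x_q - m_j) ** 2

# Equations 6 and 8: robust Euclidean distance -- drop the largest
# squared difference (the "worst performer") before taking the root.
u_j = np.sqrt(np.sum(sq_diff) - np.max(sq_diff))

print(round(float(u_j), 4))  # 0.4946
```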


This improved robust kernel function can be extended to a variety of similarity operators. Thus, the L1-norm distance function can be improved by subtracting the largest city block distance element from the sum of the city block distances:










u_j = Σ_{i=1..n} | x_{q,i} − m_{j,i} | − max_{i=1..n} [ | x_{q,i} − m_{j,i} | ]  (9)







In fact, any elemental kernel function (one in which the kernel function scalar output is determined by averaging or otherwise combining the scalar comparison results for each element of the compared vectors) is amenable to the improvement of the present invention, by leaving out contribution of the maximally different element from the kernel function calculation.


It may not always be desirable to leave out the maximally different element with each calculation of similarity between two observation vectors. Therefore, in a preferred embodiment of the present invention, a threshold difference is assigned for each element, and the maximum elemental difference is left out of the distance function calculation only if that elemental difference is greater than the threshold specified for that element. The threshold can be determined in a number of ways, either for each element (sensor) individually, or across all the variables (sensors) uniformly. By way of example, a percentage of the range seen in the data for the sensor can be used as a threshold for maximal elemental difference. Another particularly effective method according to the invention comprises the steps of (a) first scaling data for each sensor to a zero-mean centered range where +/−1 is set to one standard deviation; (b) setting the threshold for excluding maximal elemental difference equal to a multiplier of the standard deviation, e.g., 0.5 times the standard deviation. Furthermore, more than one elemental difference can be excluded from the calculation of the distance metric if they exceed their thresholds. A maximum limit on the number of elements that can be excluded can be set, such that, for example, in a 9-variable model, if 4 elemental differences are larger than their exclusionary threshold, but the maximum cap on excluded elements is 3, then the 3 elements with the largest elemental differences are excluded, and the fourth is included in the distance metric, even though it exceeds its threshold.
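A sketch of this thresholded-exclusion embodiment, with a per-element threshold and a cap on the number of excluded elements. The function name, argument names, and default cap are illustrative, not from the patent.

```python
import numpy as np

def robust_distance(x_q, m_j, thresholds, max_excluded=3):
    """Robust Euclidean distance with thresholded exclusion.
    An elemental squared difference is dropped only if the absolute
    elemental difference exceeds its assigned threshold, and at most
    max_excluded elements are dropped (largest offenders first)."""
    diff = np.abs(x_q - m_j)
    sq = diff ** 2
    # Elements whose difference exceeds their exclusionary threshold.
    over = np.flatnonzero(diff > thresholds)
    # Keep only the max_excluded largest offenders for exclusion.
    drop = over[np.argsort(sq[over])[::-1][:max_excluded]]
    keep = np.ones_like(sq, dtype=bool)
    keep[drop] = False
    return np.sqrt(np.sum(sq[keep]))
```

For example, with thresholds of 1.0 on every element and a cap of one exclusion, a single grossly deviating element is removed from the sum while smaller deviations remain in the distance.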


The weights described in Equation 3 can also be derived in several alternative ways. Regardless of the exact manner in which the distance metric is used to determine weights, the important aspect is that the weights are greatest (the absolute value of the kernel function is maximum) when the two vectors being compared are identical, and the weights diminish as the two vectors being compared are increasingly different. For example, the weights can be determined according to:









w = K(u, R) = 1 / ( 1 + u^λ / R )  (10)








where R and the power λ are tuning factors. Another way to determine the weights according to the present invention is:









w = K(u, R) = 1 − u^λ / R  (11)








where again R and the power λ are tuning factors.
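The two alternative weighting kernels of Equations 10 and 11 can be sketched as follows (function names are illustrative; note that Equation 11 becomes negative for u^λ > R, so a practical implementation might clip it at zero):

```python
def inverse_kernel(u, R, lam):
    """Equation 10: w = 1 / (1 + u**lam / R). Equals 1 at u = 0
    and decays toward 0 as the distance u grows."""
    return 1.0 / (1.0 + u**lam / R)

def linear_kernel(u, R, lam):
    """Equation 11: w = 1 - u**lam / R. Equals 1 at u = 0 and
    decreases linearly in u**lam (negative once u**lam > R)."""
    return 1.0 - u**lam / R
```

Both satisfy the requirement stated above: the weight is maximal (one) when the compared vectors are identical (u = 0) and diminishes as the distance grows; R and λ tune how quickly.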


Furthermore, the mathematical framework of Equation 4 (AAKR) is just one framework in which the memory vectors can be combined according to the weights w. In the framework of similarity-based modeling, the memory vectors are also compared to each other using the kernel function to produce an M × M matrix G of scalar values for the comparison of the M memory vectors:

G=K(M,h)  (12)

where M is the matrix formed by all memory vectors as columns, and h is a vector of n bandwidths, one for each sensor. The weights can similarly be written in matrix notation as:

w=K(U,h)  (13)

where w is the vector of weights w_i, U is the vector of distance function results between the input observation x and the memory vectors of M, and h is the bandwidths vector. Then, the estimate vector x̂_q can be determined with similarity-based modeling as:

x̂_q = M · G⁻¹ · w  (14)
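A minimal sketch of the similarity-based modeling estimate of Equations 12 through 14, using a Gaussian similarity operator as an illustrative choice (the patent leaves the kernel general; names are invented):

```python
import numpy as np

def sbm_estimate(x_q, memory, h):
    """Similarity-based modeling estimate (Equations 12-14).
    x_q: query vector, shape (n,); memory: M memory vectors stored as
    rows, shape (M, n), so memory.T has the memory vectors as columns;
    h: bandwidth of the (illustrative) Gaussian similarity operator."""
    def kernel(a, b):
        # Euclidean distance followed by a Gaussian similarity.
        u = np.sqrt(np.sum((a - b) ** 2, axis=-1))
        return np.exp(-u**2 / h**2)

    M = memory.shape[0]
    # Eq. 12: M x M similarity matrix G between the memory vectors.
    G = np.array([[kernel(memory[i], memory[j]) for j in range(M)]
                  for i in range(M)])
    # Eq. 13: weights from comparing the query to each memory vector.
    w = kernel(memory, x_q)
    # Eq. 14: x_hat_q = (memory vectors as columns) . G^-1 . w.
    return memory.T @ np.linalg.solve(G, w)
```

When the query coincides with one of the memory vectors, the estimate reproduces that vector (for a well-conditioned G), which is the autoassociative behavior the model relies on.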


EXAMPLE

Data was collected from an operating nuclear power plant steam system and used to compare and evaluate the robust distance metrics. The model is used to monitor steam system sensor calibration at an operating plant and contains 5 plant sensors, primarily from one loop, which include 2 turbine pressure sensors and 3 steam pressure sensors. The quoted sensor units are as follows: 1) turbine pressure in pounds per square inch atmospheric (PSIA) and 2) steam pressure in pounds per square inch gauge (PSIG). The training data for each of the sensor types is shown in FIG. 1. The data presented in FIG. 1 was selected from data collected every two minutes over a two-month period. Overall, the training and test data span approximately 2 weeks, observing every 5th sample, i.e., one observation every 10 minutes.


The training data was chosen to be 1,600 observations from steady state plant operation. The test data were chosen to be a successive set of 400 observations sampled from steady state plant operation. The training data were used to develop the empirical models and the test data were used to evaluate the performance of the empirical models.


For completeness, the AAKR model was developed with 800 memory vectors and a bandwidth of 0.5, using the form of Equation 3. The resulting accuracy, robustness, and spillover performance metrics are listed in Table 1.









TABLE 1
Accuracy, robustness, and spillover performance for compared distance functions.

                               Turbine Pressure   Steam Pressure
                               #1      #2         #3      #4      #5      Average
Accuracy     Euclidean         0.23    0.60       0.44    0.21    0.29    0.35
             L1-norm           0.08    0.20       0.28    0.07    0.02    0.17
             Robust Euclidean  0.59    2.80       0.89    0.42    0.36    1.10
Robustness   Euclidean         0.56    0.63       0.29    0.33    0.37    0.44
             L1-norm           0.64    0.73       0.21    0.25    0.24    0.41
             Robust Euclidean  0.20    0.23       0.23    0.18    0.13    0.19
Spillover    Euclidean         0.11    0.11       0.18    0.18    0.16    0.15
             L1-norm           0.11    0.12       0.12    0.15    0.12    0.13
             Robust Euclidean  0.09    0.12       0.06    0.08    0.09    0.09









Turning to FIG. 2, the accuracy of the respective distance metrics is compared. FIG. 3 shows the respective robustness performance metrics. FIG. 4 shows the respective spillover. Generally, a lower value indicates better performance for the metric. These figures show a decrease in the robustness and spillover metrics for the robust distance functions. In other words, the models that use the robust distance functions are less affected by faulty input and are considered more robust. This increased robustness is not without consequence, though, as all of the per-variable accuracy metrics (MSE) for the robust Euclidean distance function are larger than those of the model with the L2-norm. Even though the accuracy metric (the predictive error of the model) increases relative to the normal L2-norm, the decreases in the robustness and spillover metrics obtained with the L1-norm and the robust Euclidean distance more than validate their effectiveness in detecting sensor drift. Ultimately, what is important in equipment fault detection and sensor drift detection is robustness; accuracy does not need to be exacting, because high accuracy can merely reflect overfitting, in which case the fault may not be identified in a timely fashion. Robustness, on the other hand, is critical to fault detection as the input becomes increasingly incorrect. A less accurate model may still outperform a more accurate model on fault detection, because the more robust model's estimates remain anchored to normal behavior, so drifts and deviations stand out more clearly against the baseline error between its estimates and the actual values.



FIG. 5 illustrates a monitoring apparatus 500 for monitoring a monitored system 505 according to an embodiment of the invention. The monitored system 505 may comprise, for example, a fossil fueled power plant environment. A set of sensors 510 monitor various parts, sections, or areas within the monitored system 505. For example, the sensors may monitor temperatures or flow rates at various locations in the monitored system 505.


The sensors 510 provide a query vector, based on the measurements of the set of sensors 510, to the monitoring apparatus 500. The monitoring apparatus 500 includes several devices or modules, such as a non-parametric empirical model 515, a distance estimation engine 520, and a memory 525. The non-parametric empirical model 515 generates estimates of parameter values in response to receiving the query vector of monitored parameters. The distance estimation engine 520 determines robust distances between the query vector and each of a set of predetermined historical vectors for the non-parametric empirical model based on an implementation of an elemental kernel function. The distance estimation engine 520 also determines weights for the monitored parameters based on the robust distances and combines the weights with the predetermined historical vectors to make predictions for the system.


It should be appreciated that a wide range of changes and modifications may be made to the embodiments of the invention as described herein. Thus, it is intended that the foregoing detailed description be regarded as illustrative rather than limiting and that the following claims, including all equivalents, are intended to define the scope of the invention.

Claims
  • 1. A method for making predictions based on a non-parametric empirical model used in monitoring a system, the method comprising: providing a processor; receiving at the processor a query vector of multiple query sensor values for different monitored parameters for the non-parametric empirical model characterizing the system; determining, with the processor, robust distances between the query vector and each of a set of predetermined historical vectors, each historical vector having multiple historical sensor values, for the non-parametric empirical model based on an implementation of an elemental kernel function including: performing an elemental calculation between each query sensor value of the query vector and a corresponding historical sensor value of each historical vector, wherein each elemental calculation results in a single elemental contributor for each pair of corresponding query and historical sensor values, eliminating at least one, but less than all, of the elemental contributors formed from at least one comparison between the query vector and the historical vector, and depending on the values of the elemental contributors, calculating the robust distance between the query vector and the historical vector using a calculation with the remaining elemental contributors; determining weights for the monitored parameters based on the robust distances calculated using the query vector; combining the weights with the predetermined historical vectors to make predictions for the system.
  • 2. The method of claim 1, wherein the non-parametric empirical model is based on an autoassociative kernel regression model.
  • 3. The method of claim 1, wherein the non-parametric empirical model is based on a similarity based model.
  • 4. The method of claim 1, wherein the elemental kernel function is Euclidean distance.
  • 5. The method of claim 1, wherein the elemental kernel function is a city block distance.
  • 6. The method of claim 1, wherein the determining robust distances comprises removing a largest elemental contributor to the elemental kernel function.
  • 7. The method of claim 1, wherein the determining robust distances comprises removing at least one of a set of largest elemental contributors to the elemental kernel function based on a threshold distance assigned for each element.
  • 8. The method of claim 1 wherein at least one elemental contributor is eliminated from the robust distance calculation for each query-historical vector comparison.
  • 9. An apparatus for monitoring an operation of a system characterized by operational parameters, comprising: a non-parametric empirical model for generating estimates of parameter values in response to receiving a query vector of multiple query sensor values for different monitored parameters for a model characterizing the system; a distance estimation engine for determining robust distances between the query vector and each of a set of predetermined historical vectors of multiple historical sensor values for the non-parametric empirical model based on an implementation of an elemental kernel function including: performing an elemental calculation between each query sensor value of the query vector and a corresponding historical sensor value of the historical vector, wherein each elemental calculation results in a single elemental contributor for each pair of corresponding query and historical sensor values, eliminating at least one, but less than all, of the elemental contributors formed from at least one comparison between the query vector and the historical vector, and calculating the robust distance between the query vector and the historical vector using a calculation with the remaining elemental contributors; determining weights for the monitored parameters based on the robust distances calculated using the query vector; and combining the weights with the predetermined historical vectors to make predictions for the system.
  • 10. The apparatus of claim 9, wherein the non-parametric empirical model is based on an auto-associative kernel regression model.
  • 11. The apparatus of claim 9, wherein the non-parametric empirical model is based on a similarity based model.
  • 12. The apparatus of claim 9, wherein the elemental kernel function is Euclidean distance.
  • 13. The apparatus of claim 9, wherein the elemental kernel function is a city block distance.
  • 14. The apparatus of claim 9, wherein the distance estimation engine determines the robust distances at least partially by removing a largest elemental contributor to the elemental kernel function.
  • 15. The apparatus of claim 9, wherein the distance estimation engine determines the robust distances at least partially by removing at least one of a set of largest elemental contributors to the elemental kernel function based on a threshold distance assigned for each element.
  • 16. The apparatus of claim 9 wherein at least one elemental contributor is eliminated from the robust distance calculation for each query-historical vector comparison.
RELATED APPLICATION DATA

This application claims priority to provisional application Ser. No. 60/870,268, filed Dec. 15, 2006, the disclosure of which is hereby incorporated by reference herein in its entirety.

US Referenced Citations (390)
Number Name Date Kind
3045221 Roop Jul 1962 A
3561237 Eggers Feb 1971 A
3651454 Venema et al. Mar 1972 A
3767900 Chao Oct 1973 A
3851157 Ellis et al. Nov 1974 A
3866166 Kerscher et al. Feb 1975 A
3906437 Brandwein et al. Sep 1975 A
3928022 Langange Dec 1975 A
3992884 Pacault Nov 1976 A
4057847 Lowell et al. Nov 1977 A
4060716 Pekrul et al. Nov 1977 A
4067061 Juhasz Jan 1978 A
4071898 Schorsch et al. Jan 1978 A
4080654 Walley, Jr. Mar 1978 A
4212064 Forsythe et al. Jul 1980 A
4215412 Bernier et al. Jul 1980 A
4267569 Baumann et al. May 1981 A
4271402 Kastura et al. Jun 1981 A
4295128 Hashemian et al. Oct 1981 A
4296409 Whitaker et al. Oct 1981 A
4330838 Yoneda et al. May 1982 A
4334136 Mahan et al. Jun 1982 A
4336595 Adams et al. Jun 1982 A
4368510 Anderson Jan 1983 A
4398258 Naitoh et al. Aug 1983 A
4402054 Osborne et al. Aug 1983 A
RE31582 Hosaka et al. May 1984 E
RE31750 Morrow Nov 1984 E
4480480 Scott et al. Nov 1984 A
4517468 Kemper et al. May 1985 A
4521885 Melocik et al. Jun 1985 A
4639882 Keats Jan 1987 A
4667176 Matsuda May 1987 A
4677429 Glotzbach Jun 1987 A
4707796 Calabro et al. Nov 1987 A
4761748 Le Rat et al. Aug 1988 A
4773021 Harris et al. Sep 1988 A
4796205 Ishii et al. Jan 1989 A
4823290 Fasack et al. Apr 1989 A
4841456 Hogan, Jr. et al. Jun 1989 A
4849894 Probst Jul 1989 A
4924418 Bachman et al. May 1990 A
4931977 Klemes Jun 1990 A
4937763 Mott Jun 1990 A
4965513 Haynes et al. Oct 1990 A
4965549 Koike Oct 1990 A
4975685 Rahhal Dec 1990 A
4975827 Yonezawa Dec 1990 A
4978291 Nakai Dec 1990 A
4978909 Hendrix et al. Dec 1990 A
4985857 Bajpai et al. Jan 1991 A
4990885 Irick et al. Feb 1991 A
5003478 Kobayashi et al. Mar 1991 A
5003479 Kobayashi et al. Mar 1991 A
5003950 Kato et al. Apr 1991 A
5005142 Lipchak et al. Apr 1991 A
5005147 Krishen et al. Apr 1991 A
5009833 Takeuchi et al. Apr 1991 A
5010487 Stonehocker Apr 1991 A
5012414 Ishii et al. Apr 1991 A
5012421 Ishii Apr 1991 A
5025499 Inoue et al. Jun 1991 A
5034889 Abe Jul 1991 A
5038545 Hiendl Aug 1991 A
5052630 Hinsey et al. Oct 1991 A
5056023 Abe Oct 1991 A
5063513 Shank et al. Nov 1991 A
5067099 McCown et al. Nov 1991 A
5072391 Abe Dec 1991 A
5088058 Salsburg Feb 1992 A
5091856 Hasegawa et al. Feb 1992 A
5093792 Taki et al. Mar 1992 A
5109700 Hicho May 1992 A
5113483 Keeler et al. May 1992 A
5119287 Nakamura et al. Jun 1992 A
5119468 Owens Jun 1992 A
5123017 Simpkins et al. Jun 1992 A
5164895 Lunz et al. Nov 1992 A
5166873 Takatsu et al. Nov 1992 A
5173856 Purnell et al. Dec 1992 A
5187735 Herrero Garcia et al. Feb 1993 A
5195046 Gerardi et al. Mar 1993 A
5210704 Husseiny May 1993 A
5213080 Lambert et al. May 1993 A
5214582 Gray May 1993 A
5222065 Krogmann Jun 1993 A
5223207 Gross et al. Jun 1993 A
5239462 Jones et al. Aug 1993 A
5251285 Inoue et al. Oct 1993 A
5255208 Thakore et al. Oct 1993 A
5262941 Saladin et al. Nov 1993 A
5285494 Sprecher et al. Feb 1994 A
5291420 Matsumoto et al. Mar 1994 A
5309139 Austin May 1994 A
5309351 McCain et al. May 1994 A
5309379 Rawlings et al. May 1994 A
5311562 Palusamy et al. May 1994 A
5325304 Aoki Jun 1994 A
5327349 Hoste Jul 1994 A
5333240 Matsumoto et al. Jul 1994 A
5361336 Atchison Nov 1994 A
5386373 Keeler et al. Jan 1995 A
5387783 Mihm et al. Feb 1995 A
5390776 Thompson Feb 1995 A
5402333 Cardner Mar 1995 A
5402521 Niida et al. Mar 1995 A
5410492 Gross et al. Apr 1995 A
5414619 Katayama et al. May 1995 A
5414632 Mochizuki et al. May 1995 A
5420571 Coleman et al. May 1995 A
5421204 Svaty, Jr. Jun 1995 A
5442553 Parrillo Aug 1995 A
5445347 Ng Aug 1995 A
5446671 Weaver et al. Aug 1995 A
5446672 Boldys Aug 1995 A
5450321 Crane Sep 1995 A
5450537 Hirai et al. Sep 1995 A
5455777 Fujiyama et al. Oct 1995 A
5459675 Gross et al. Oct 1995 A
5463768 Cuddihy et al. Oct 1995 A
5463769 Tate et al. Oct 1995 A
5465321 Smyth Nov 1995 A
5473532 Unno et al. Dec 1995 A
5479574 Glier et al. Dec 1995 A
5481647 Brody et al. Jan 1996 A
5481674 Mahavadi Jan 1996 A
5486997 Reismiller et al. Jan 1996 A
5495168 de Vries Feb 1996 A
5496450 Blumenthal et al. Mar 1996 A
5500940 Skeie Mar 1996 A
5502543 Aboujaoude Mar 1996 A
5526446 Adelson et al. Jun 1996 A
5539638 Keeler et al. Jul 1996 A
5544320 Konrad Aug 1996 A
5548528 Keeler et al. Aug 1996 A
5553239 Heath et al. Sep 1996 A
5559710 Shahraray et al. Sep 1996 A
5561431 Peele et al. Oct 1996 A
5566092 Wang et al. Oct 1996 A
5574387 Petsche et al. Nov 1996 A
5579232 Tong et al. Nov 1996 A
5586066 White et al. Dec 1996 A
5596507 Jones et al. Jan 1997 A
5600726 Morgan et al. Feb 1997 A
5602733 Rogers et al. Feb 1997 A
5608845 Ohtsuka et al. Mar 1997 A
5610339 Haseley et al. Mar 1997 A
5611052 Dykstra et al. Mar 1997 A
5612886 Weng Mar 1997 A
5617342 Elazouni Apr 1997 A
5623109 Uchida et al. Apr 1997 A
5629872 Grosse et al. May 1997 A
5629878 Kobrosly May 1997 A
5629879 Lelle May 1997 A
5638413 Uematsu et al. Jun 1997 A
5640103 Petsche et al. Jun 1997 A
5644463 El-Sharkawi et al. Jul 1997 A
5657245 Hecht et al. Aug 1997 A
5663894 Seth et al. Sep 1997 A
5668944 Berry Sep 1997 A
5671635 Nadeau et al. Sep 1997 A
5680409 Qin et al. Oct 1997 A
5680541 Kurosu et al. Oct 1997 A
5682317 Keeler et al. Oct 1997 A
5689416 Shimizu et al. Nov 1997 A
5689434 Tambini et al. Nov 1997 A
5696907 Tom Dec 1997 A
5699403 Ronnen Dec 1997 A
5704029 Wright, Jr. Dec 1997 A
5708780 Levergood et al. Jan 1998 A
5710723 Hoth et al. Jan 1998 A
5714683 Maloney Feb 1998 A
5727144 Brady et al. Mar 1998 A
5727163 Bezos Mar 1998 A
5737228 Ishizuka et al. Apr 1998 A
5745382 Vilim et al. Apr 1998 A
5745654 Titan Apr 1998 A
5748469 Pyotsia May 1998 A
5748496 Takahashi et al. May 1998 A
5751580 Chi May 1998 A
5753805 Maloney May 1998 A
5754451 Williams May 1998 A
5754965 Hagenbuch May 1998 A
5757309 Brooks et al. May 1998 A
5761090 Gross et al. Jun 1998 A
5761640 Kalyanswamy et al. Jun 1998 A
5764509 Gross et al. Jun 1998 A
5774379 Gross et al. Jun 1998 A
5774882 Keen et al. Jun 1998 A
5774883 Andersen et al. Jun 1998 A
5784285 Tamaki et al. Jul 1998 A
5787138 Ocieczek et al. Jul 1998 A
5790977 Ezekiel Aug 1998 A
5792072 Keefe Aug 1998 A
5796633 Burgess et al. Aug 1998 A
5797133 Jones et al. Aug 1998 A
5799043 Chang et al. Aug 1998 A
5802509 Maeda et al. Sep 1998 A
5805442 Crater et al. Sep 1998 A
5808903 Schiltz et al. Sep 1998 A
5809490 Guiver et al. Sep 1998 A
5817958 Uchida et al. Oct 1998 A
5818716 Chin et al. Oct 1998 A
5819029 Edwards et al. Oct 1998 A
5819236 Josephson Oct 1998 A
5819291 Haimowitz et al. Oct 1998 A
5822212 Tanaka et al. Oct 1998 A
5832465 Tom Nov 1998 A
5841677 Yang et al. Nov 1998 A
5842157 Wehhofer et al. Nov 1998 A
5845230 Lamberson Dec 1998 A
5845627 Olin et al. Dec 1998 A
5848396 Gerace Dec 1998 A
5864773 Barna et al. Jan 1999 A
5867118 McCoy et al. Feb 1999 A
5870721 Norris Feb 1999 A
5878403 DeFrancesco et al. Mar 1999 A
5886913 Marguinaud et al. Mar 1999 A
5895177 Iwai et al. Apr 1999 A
5905989 Biggs May 1999 A
5909368 Nixon et al. Jun 1999 A
5911135 Atkins Jun 1999 A
5913911 Beck et al. Jun 1999 A
5917428 Discenzo et al. Jun 1999 A
5921099 Lee Jul 1999 A
5930156 Kennedy Jul 1999 A
5930776 Dykstra et al. Jul 1999 A
5930779 Knoblock et al. Jul 1999 A
5933352 Salut Aug 1999 A
5933818 Kasravi et al. Aug 1999 A
5940298 Pan et al. Aug 1999 A
5940811 Norris Aug 1999 A
5940812 Tengel et al. Aug 1999 A
5943634 Piety et al. Aug 1999 A
5946661 Rothschild et al. Aug 1999 A
5946662 Ettl et al. Aug 1999 A
5949678 Wold et al. Sep 1999 A
5950147 Sarangapani et al. Sep 1999 A
5950179 Buchanan et al. Sep 1999 A
5956487 Venkatraman et al. Sep 1999 A
5956664 Bryan Sep 1999 A
5960411 Hartman et al. Sep 1999 A
5960435 Rathmann et al. Sep 1999 A
5961560 Kemner Oct 1999 A
5963884 Billington et al. Oct 1999 A
5966699 Zandi Oct 1999 A
5970430 Burns et al. Oct 1999 A
5970478 Walker et al. Oct 1999 A
5987399 Wegerich et al. Nov 1999 A
5987434 Libman Nov 1999 A
5991525 Shah et al. Nov 1999 A
5991735 Gerace Nov 1999 A
5993041 Toba Nov 1999 A
5995911 Hart Nov 1999 A
5995916 Nixon et al. Nov 1999 A
5995947 Fraser et al. Nov 1999 A
6000832 Franklin et al. Dec 1999 A
6002839 Keeler et al. Dec 1999 A
6006192 Cheng et al. Dec 1999 A
6006260 Barrick, Jr. et al. Dec 1999 A
6009381 Ono Dec 1999 A
6013108 Karolys et al. Jan 2000 A
6014598 Duyar et al. Jan 2000 A
6014645 Cunningham Jan 2000 A
6021396 Ramaswamy et al. Feb 2000 A
6023507 Wookey Feb 2000 A
6026348 Hala Feb 2000 A
6029097 Branicky et al. Feb 2000 A
6029149 Dykstra et al. Feb 2000 A
6029890 Austin Feb 2000 A
6041287 Dister et al. Mar 2000 A
6049738 Kayama et al. Apr 2000 A
6049741 Kawamura Apr 2000 A
6049827 Sugauchi et al. Apr 2000 A
6064916 Yoon May 2000 A
6076048 Gunther et al. Jun 2000 A
6076088 Paik et al. Jun 2000 A
6088626 Lilly et al. Jul 2000 A
6088686 Walker et al. Jul 2000 A
6100901 Mohda et al. Aug 2000 A
6104965 Lim et al. Aug 2000 A
6105007 Norris Aug 2000 A
6107919 Wilks et al. Aug 2000 A
6108616 Borchers et al. Aug 2000 A
6110214 Klimasauskas Aug 2000 A
6112190 Fletcher et al. Aug 2000 A
6115653 Bergstrom et al. Sep 2000 A
6119111 Gross et al. Sep 2000 A
6125351 Kauffman Sep 2000 A
6128540 Van Der Vegt et al. Oct 2000 A
6128543 Hitchner Oct 2000 A
6131076 Stephan et al. Oct 2000 A
6141647 Meijer et al. Oct 2000 A
6141674 Unkrich et al. Oct 2000 A
6144893 Van Der Vegt et al. Nov 2000 A
6181975 Gross et al. Jan 2001 B1
6182022 Mayle et al. Jan 2001 B1
6202038 Wegerich et al. Mar 2001 B1
6236908 Cheng et al. May 2001 B1
6240372 Gross et al. May 2001 B1
6245517 Chen et al. Jun 2001 B1
6246972 Klimasauskas Jun 2001 B1
6272449 Passera Aug 2001 B1
6278962 Klimasauskas et al. Aug 2001 B1
6289330 Jannarone Sep 2001 B1
6327574 Kramer et al. Dec 2001 B1
6331864 Coco et al. Dec 2001 B1
6331964 Barone Dec 2001 B1
6356857 Qin et al. Mar 2002 B1
6393373 Duyar et al. May 2002 B1
6418431 Mahajan et al. Jul 2002 B1
6424958 Pappalardo et al. Jul 2002 B1
6480810 Cardella et al. Nov 2002 B1
6502082 Toyama et al. Dec 2002 B1
6519552 Sampath et al. Feb 2003 B1
6522978 Chen et al. Feb 2003 B1
6526356 DiMaggio et al. Feb 2003 B1
6532426 Hooks et al. Mar 2003 B1
6539343 Zhao et al. Mar 2003 B2
6553334 Gross et al. Apr 2003 B2
6556939 Wegerich Apr 2003 B1
6567752 Cusumano et al. May 2003 B2
6567795 Alouani et al. May 2003 B2
6587737 Voser et al. Jul 2003 B2
6590362 Parlos et al. Jul 2003 B2
6591166 Millett et al. Jul 2003 B1
6591296 Ghanime Jul 2003 B1
6609036 Bickford Aug 2003 B1
6609212 Smith Aug 2003 B1
6625569 James et al. Sep 2003 B2
6678639 Little et al. Jan 2004 B2
6687654 Smith, Jr. et al. Feb 2004 B2
6731990 Carter et al. May 2004 B1
6751575 Lenz et al. Jun 2004 B2
6775641 Wegerich et al. Aug 2004 B2
6804628 Gross et al. Oct 2004 B2
6826552 Grosser et al. Nov 2004 B1
6839660 Eryurek et al. Jan 2005 B2
6853920 Hsiung et al. Feb 2005 B2
6859739 Wegerich et al. Feb 2005 B2
6876943 Wegerich Apr 2005 B2
6892163 Herzog et al. May 2005 B1
6898469 Bickford May 2005 B2
6898554 Jaw et al. May 2005 B2
6917839 Bickford Jul 2005 B2
6941287 Vaidyanathan et al. Sep 2005 B1
6952662 Wegerich et al. Oct 2005 B2
6957172 Wegerich Oct 2005 B2
6975962 Wegerich et al. Dec 2005 B2
6999899 Gross et al. Feb 2006 B2
7016816 Mott Mar 2006 B2
7027953 Klein Apr 2006 B2
7050875 Cribbs et al. May 2006 B2
7085675 Wegerich Aug 2006 B2
7089154 Rasmussen et al. Aug 2006 B2
7142990 Bouse et al. Nov 2006 B2
7233886 Wegerich et al. Jun 2007 B2
7308385 Wegerich et al. Dec 2007 B2
7373283 Herzog et al. May 2008 B2
7386426 Black et al. Jun 2008 B1
7403869 Wegerich et al. Jul 2008 B2
7409320 Wegerich Aug 2008 B2
7539597 Wegerich et al. May 2009 B2
7621141 McCormick et al. Nov 2009 B2
7640145 Wegerich et al. Dec 2009 B2
7739096 Wegerich et al. Jun 2010 B2
7941701 Wegerich et al. May 2011 B2
20020065698 Schick May 2002 A1
20020183971 Wegerich et al. Dec 2002 A1
20030028269 Spriggs Feb 2003 A1
20030040878 Rasmussen Feb 2003 A1
20030055666 Roddy Mar 2003 A1
20030060808 Wilk Mar 2003 A1
20030093521 Schlonski et al. May 2003 A1
20030109951 Hsiung Jun 2003 A1
20030125248 Hair Jul 2003 A1
20030126258 Conkright Jul 2003 A1
20030139908 Wegerich et al. Jul 2003 A1
20040019406 Wang Jan 2004 A1
20040088093 Yao May 2004 A1
20050021187 Wang Jan 2005 A1
20050021212 Gayme Jan 2005 A1
20050027400 Wang Feb 2005 A1
20050096757 Frerichs May 2005 A1
20050210337 Chester Sep 2005 A1
20050261837 Wegerich et al. Nov 2005 A1
20080071501 Herzog Mar 2008 A1
20080215291 Wegerich Sep 2008 A1
20090043405 Chester Feb 2009 A1
20090043467 Filev Feb 2009 A1
Foreign Referenced Citations (19)
Number Date Country
0840244 May 2008 EP
61160111 Jul 1986 JP
02004300 Sep 1990 JP
05126980 May 1993 JP
06274784 Sep 1994 JP
06278179 Oct 1994 JP
7243876 Sep 1995 JP
08220279 Aug 1996 JP
09166483 Jun 1997 JP
11311591 Sep 1999 JP
06161982 Jun 2006 JP
9504878 Feb 1995 WO
9722073 Jun 1997 WO
0067412 Nov 2000 WO
0167262 Sep 2001 WO
0235299 May 2002 WO
02057856 Jul 2002 WO
02086726 Oct 2002 WO
2005038545 Apr 2005 WO
Related Publications (1)
Number Date Country
20080183425 A1 Jul 2008 US
Provisional Applications (1)
Number Date Country
60870268 Dec 2006 US