Kernel-based method for detecting boiler tube leaks

Information

  • Patent Grant
  • 8275577
  • Patent Number
    8,275,577
  • Date Filed
    Tuesday, September 18, 2007
  • Date Issued
    Tuesday, September 25, 2012
Abstract
A method and apparatus are provided for diagnosing faults in a monitored system that is monitored by sensors. An empirical model is generated for a targeted component of the monitored system. The empirical model is trained with an historical data source that contains example observations of the sensors. Substantially real-time estimates are generated based on instrumented data corresponding to the targeted component. The substantially real-time estimates are compared and differenced with instrumented readings from the sensors to provide residual values. The residual values are analyzed to detect the faults and determine a location of the faults in the monitored system.
Description
BACKGROUND OF THE INVENTION

The large heat exchangers used by commercial coal-fired power plants are prone to tube leaks. Tube leaks represent a potential for serious physical damage due to escalation of the original leaks. For instance, the steam tubes located in the superheat/reheat section of a boiler are prone to cascading tube failures due to the close proximity of the steam tubes coupled with the high energy of the escaping steam. When undetected for an extended time, the ultimate damage from serious tube failures may range from $2 to $10 million/leak, forcing the system down for major repairs that can last up to a week.


If detected early, tube failures may be repaired before catastrophic damage, such repairs lasting only several days and costing a fraction of the cost associated with late detection and catastrophic damage. Repair times may be further reduced if the location of the leak is identified before repairs are initiated. In addition, accurate location allows the operator to delay shutdown and repair of leaks that occur in less critical regions of the boiler, such as the water wall, until economically advantageous.


Boiler tube leaks result in the diversion of water from its normal flow paths as the coolant in the boiler, directly into the combustion environment. The amount of water typically diverted by a leak is small relative to the normal variations in feed water flow rates and sources of water in the fuel/air mixture. Other sources of water in the fuel/air mixture are myriad and subtle, including: water added at the point of combustion as steam used to atomize fuel; water used by pollutant control processes; water used in soot blowing; water formed from the combustion of hydrocarbon fuels; free water borne by the fuel; and moisture carried by combustion air. These sources confound the discrimination of boiler tube leaks by the variety of prior art methods that have been employed in an attempt to detect them. In addition, the normal operation of the plant is subject to seasonal variation, variation in the quality of the combustion fuel, and manual operator choices, making it extremely difficult to detect boiler tube leaks in their incipient stages.


A system and method have been proposed in U.S. patent application publication No. 2005/0096757 for detecting faults in components of a continuous process, such as a boiler. A model of the process is developed using a modeling technique such as an advanced pattern recognition empirical model, which is used to generate predicted values for a predetermined number of the operating parameters of the process. The operating parameters in the model are drawn from the sensors that monitor the flow of steam/water through the balance-of-plant (“BOP”). The BOP encompasses the components of a power plant that extract thermal energy from the steam/water mixture and convert it to electrical energy. As such, the BOP excludes the boiler itself. The model monitors flow rates of steam/water entering into and exiting from the BOP, which correspond to the flow rate of superheated steam from the top of the boiler and the flow rate of condensed feed water into the bottom of the boiler, respectively. Under normal conditions, the flow entering the BOP is balanced by the flow exiting the BOP. One of the abnormal conditions that can upset this balance is a boiler tube leak. This approach, built around a mass and energy balance on the BOP, is capable of indirectly detecting a boiler tube leak. But since the model does not monitor any operating parameter internal to the boiler, including any parameter from the fuel/air side of the boiler, it is incapable of locating a tube leak.


What is needed is a way of monitoring a heat exchange environment in a fossil fuel power plant that is sensitive enough to detect boiler tube leaks in their initial stages from existing instrumentation present in the plant.


SUMMARY OF THE INVENTION

A method and system for monitoring the heat exchanger of a fossil fueled power plant environment is provided for detection of boiler tube leaks. According to the invention, a multivariate empirical model is generated from data from instrumentation on and related to the boiler, which then provides, in real time or near real time, estimates for these sensors responsive to receiving each current set of readings from the sensors. The estimates and the actual readings are then compared and differenced to provide residuals that are analyzed for indications of boiler tube leaks. The model is provided by a kernel-based method that learns from example observations of the sensors, and preferably has the ability to localize on relevant learned examples in a two-step estimation process. Finally, the model is preferably capable of lagging-window adaptation to learn new normal variation patterns in plant operation. When kernel-based localized modeling is used to construct a multivariate nonparametric model of the traditional monitoring sensors (pressures, temperatures, flow rates, etc.) present in the boiler, the effect of the normal variations in the water balance on sensor response that typically confounds other methods can be accurately accounted for.


The invention can be carried out as software with access to plant data in data historians, or even with sensor data obtained directly from the control system. The invention can be executed in real-time or near real-time, or can be executed in a batch mode with a batch delay no longer than the time in which the plant operator desires to receive an indication of a boiler tube leak.


The above summary of the present invention is not intended to represent each embodiment or every aspect of the present invention. The detailed description and Figures will describe many of the embodiments and aspects of the present invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a process flowchart for boiler tube leak monitoring using the approach of the invention;



FIG. 2 illustrates an intuitive visual interface according to an embodiment of the invention;



FIG. 3 shows a pair of related signal plots, as generated according to the invention, for a portion of a boiler which did not have a boiler tube leak;



FIG. 4 shows a pair of related signal plots, as generated according to the invention, for a portion of the same boiler, which did have a boiler tube leak; and



FIG. 5 illustrates a monitoring apparatus for diagnosing faults in a system according to an embodiment of the invention.





Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are typically not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention.


DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Some embodiments described herein are directed to a boiler in a fossil fuel power plant. However, those skilled in the art will recognize that the teachings are equally applicable to a steam generator of a nuclear power plant.


Turning to FIG. 1, the method of the present invention is shown to comprise the step 100 of receiving an input observation of the sensor values related to the boiler, and inputting that to a localization step 110. In the localization step, the empirical model is tuned to use data that is “local” or particularly relevant to the input observation. Upon localization tuning of the model, the input observation is then used by the model with its localized learned data, to generate in step 120 an estimate of the input observation. In step 130, the estimate is compared to the input observation to form a residual for each sensor of interest in the input observation. In step 140, the residual signals are tested against pattern matching rules to determine whether any of them indicate a tube leak disturbance, and if so where the disturbance is located within the boiler.
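
The cycle of FIG. 1 can be summarized in code. The sketch below is only an illustrative rendering of steps 100-140, assuming NumPy arrays, an exponential similarity function, and a simple kernel-weighted average standing in for the full SBM estimator described later; the function names, the neighborhood size k, and the toy thresholds are not from the patent.

```python
import numpy as np

def similarity(a, b, h=1.0):
    # Exponential similarity between two observation vectors (width h assumed).
    return np.exp(-np.linalg.norm(a - b) / h)

def localize(H, x_in, k=25):
    # Step 110: keep the k learned observations most similar to the input.
    scores = np.array([similarity(row, x_in) for row in H])
    return H[np.argsort(scores)[-k:]]

def estimate(D, x_in):
    # Step 120: kernel-weighted combination of the localized observations
    # (a simplified stand-in for the SBM estimate described later).
    w = np.array([similarity(d, x_in) for d in D])
    return (w @ D) / w.sum()

def monitor(x_in, H, thresholds):
    D = localize(H, x_in)
    x_est = estimate(D, x_in)
    residual = x_in - x_est                  # step 130
    alerts = np.abs(residual) > thresholds   # step 140, reduced to a threshold test
    return x_est, residual, alerts

# Toy usage with synthetic "learned" data.
H = np.random.default_rng(0).normal(size=(500, 8))
x_in = H[0] + 0.05
_, residual, alerts = monitor(x_in, H, thresholds=np.full(8, 0.5))
print(alerts)
```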


Training of the model or models on sufficient historic data to characterize normal operating conditions of the boiler enables the detection of abnormal conditions (i.e., tube leaks). Because typical amounts of historical data used for model training (one year of data) often do not contain all combinations of operating parameters observed through the normal lifetime of a boiler, the present invention uses a trailing adaptation algorithm, described below, to dynamically update the model when new combinations of operating parameters are encountered, and when the new data does not occur in the presence of a tube leak.


More than one model may be used to generate estimates as described with respect to FIG. 1. Models may be developed that focus on sections of the boiler in particular. In each case, the process of FIG. 1 is performed for each model.


Model Development


The first step in the invention is to construct suitable kernel-based models for a targeted boiler. Although the invention encompasses all forms of localized, kernel-based modeling techniques, the preferred embodiment of the invention utilizes the localized Similarity-Based Modeling (SBM) algorithm, which is detailed below.


The modeling process begins with the identification of all boiler sensors and collection of representative operating data from the sensors. Preferably, the set of sensors should encompass all process variables (pressures, temperatures, flow rates, etc.) used to monitor boiler operation. The set of sensors should include process variables from all major tube bundle regions (furnace, primary superheater, secondary superheater, reheater, economizer, boiler wall heat transfer region, etc.). If available, sensors that measure boiler make-up water or acoustically monitor boiler regions should be included in the set of sensors, since these are sensitive to tube leaks. Model development requires a sufficient amount of historic data to characterize normal operating conditions of the boiler. This condition is typically met by a year of operating data collected at a once-per-hour rate. Operation and maintenance records for the boiler are required to identify the location of any tube leaks that might have occurred during the selected operating period.


Following identification of boiler sensors and collection of operating data, the operating data are cleaned by data filtering algorithms. The data filtering algorithms test the suitability of data for model training, eliminating data for a variety of reasons including nonnumeric values, sensor drop-outs, spiking and flat-lining. Sensors that exhibit a preponderance of these effects can be easily eliminated from further modeling considerations and not included in any model. An important consideration in preparation of model training data is to eliminate data from time periods just prior to known past tube leak events so that the models recognize novel sensor behavior coincident with tube faults as being abnormal. The period of data eliminated prior to known tube leak events should preferably equal the maximum length of time that a boiler can operate with a tube leak. Experience has shown that eliminating one to two weeks of data prior to tube leak events is sufficient. Data that survive the filtering process are combined into a reference matrix of reference observations, each observation comprising a set of readings from each of a plurality of sensors in the model. The columns of the matrix correspond to sensor signals and the rows correspond to observation vectors (i.e., sensor measurements that are collected at the same point in time). Experience has shown that elimination of data by the filtering algorithms and due to concurrence with tube leak events results in typically half of the original data ending up in the reference matrix.
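
As a rough illustration of this filtering stage, the sketch below drops rows containing nonnumeric values, gross spikes, and long flat-lined runs before the reference matrix is assembled. The specific cutoffs (a z-score of 6, a run length of 24 samples) are assumptions, and the exclusion of data from the weeks before known leak events is noted only as a comment.

```python
import numpy as np

def filter_reference_data(X, spike_z=6.0, flatline_len=24):
    """Drop observation rows unsuitable for training (illustrative cutoffs)."""
    X = np.asarray(X, dtype=float)
    keep = np.ones(len(X), dtype=bool)
    # Nonnumeric values and sensor drop-outs appear as NaN after conversion.
    keep &= ~np.isnan(X).any(axis=1)
    # Spiking: drop rows where any sensor is far outside its own statistics.
    mu = np.nanmean(X, axis=0)
    sd = np.nanstd(X, axis=0) + 1e-12
    keep &= (np.abs((X - mu) / sd) < spike_z).all(axis=1)
    # Flat-lining: drop rows inside long runs where a sensor never changes.
    for j in range(X.shape[1]):
        run = np.zeros(len(X), dtype=int)
        for i in range(1, len(X)):
            run[i] = run[i - 1] + 1 if X[i, j] == X[i - 1, j] else 0
        keep &= run < flatline_len
    # Rows from one to two weeks before known tube-leak events would also be
    # removed here (not shown), so leak signatures are not learned as normal.
    return X[keep]
```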


Sensors that are retained following the data filtering step are then grouped into candidate models. Sensors are grouped into candidate models based on plant location and function. An example candidate model might contain all sensors in a major boiler region. There is no upper limit on the number of sensors that can be included in an SBM. But because it is difficult to interpret sensor trends when models contain many sensors, a practical upper limit of about 150 to 200 sensors has been established. Sensors can be assigned to any number of candidate models and subgroups of related sensors can be included in any number of candidate models.


For each candidate model, training algorithms are applied that can effectively downsample the available historic data (which may be tremendously large) to a manageable but nonetheless representative set of reference data, which is identified herein as the model memory or H matrix. One effective training algorithm comprises selecting all reference vectors that contain a global maximum or minimum value for a sensor in the model. A remaining subset of available reference observations is then added. This can be done either by random selection, or by using a distance metric of some kind to rate the remaining vectors and selecting one for inclusion at regular intervals. This can be done on an elemental basis or on a multidimensional basis. For example, after minimums and maximums have been covered, each reference vector available can be ranked according to the value of a given sensor, and then vectors included over intervals of the sensor value (e.g., each 5 degrees, or each 0.1 units of pressure). This can be done for one, some or all sensors in the model. This would constitute an elemental metric approach.
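
A sketch of this training step, following the min/max selection plus elemental interval ranking just described; the number of intervals per sensor is an arbitrary illustrative choice, not a value from the patent.

```python
import numpy as np

def train_memory(R, bins_per_sensor=20):
    """Downsample the reference matrix R (rows = observations) into the model
    memory H: keep every vector holding a global min or max for some sensor,
    then add one vector per interval of each sensor's value (elemental
    approach).  bins_per_sensor is an assumed tuning choice."""
    R = np.asarray(R, dtype=float)
    chosen = set()
    # 1) Vectors containing a global minimum or maximum for any sensor.
    chosen.update(int(i) for i in np.argmin(R, axis=0))
    chosen.update(int(i) for i in np.argmax(R, axis=0))
    # 2) For each sensor, include one vector per interval of its value range.
    for j in range(R.shape[1]):
        edges = np.linspace(R[:, j].min(), R[:, j].max(), bins_per_sensor + 1)
        order = np.argsort(R[:, j])
        bin_id = np.searchsorted(edges, R[order, j], side="right")
        for b in np.unique(bin_id):
            chosen.add(int(order[bin_id == b][0]))
    return R[sorted(chosen)]
```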


Once trained, the candidate models are tested against the remaining data in the reference matrix. The results (residual signals) from these tests are statistically analyzed to enable model grading. By directly comparing the statistics generated by the models, poorly performing candidate models can be eliminated and the best model for each boiler sensor can be identified. There are a number of statistics that are evaluated for each model, with the most important statistic being robustness. The robustness statistic is a measurement of the ability of the model to detect disturbances in each one of the modeled sensors. It is calculated by applying a small disturbance (step change) into the test signal for each modeled sensor. The tendency of the model estimate to follow the disturbance is evaluated by comparing the estimate calculated by the model over the disturbed region of the signal to the estimate calculated by the model when the disturbance is removed.
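
The robustness check can be pictured as follows. This is a hedged sketch: the patent describes the comparison only qualitatively, so the ratio returned here, the step injection, and the toy usage are assumed forms.

```python
import numpy as np

def robustness(model_estimate, X_test, sensor, step):
    """Inject a step disturbance into one sensor of the test data and measure
    how much the model estimate for that sensor follows it.  model_estimate(X)
    must return estimates with the same shape as X."""
    X_dist = X_test.copy()
    X_dist[:, sensor] += step                      # disturbed test signal
    est_clean = model_estimate(X_test)[:, sensor]  # estimate without disturbance
    est_dist = model_estimate(X_dist)[:, sensor]   # estimate with disturbance
    followed = np.mean(est_dist - est_clean) / step
    return 1.0 - followed  # near 1: the estimate ignores the disturbance

# Toy usage: a "model" that ignores its input never follows the disturbance.
toy = lambda X: np.zeros_like(X)
X = np.random.default_rng(1).normal(size=(100, 4))
print(robustness(toy, X, sensor=0, step=2.0))   # prints 1.0
```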


The residual signals from the best candidate models are further analyzed to determine the normal variation in model behavior. In this calculation, a normal residual signal is generated for each modeled sensor by a leave-one-out cross-validation algorithm applied to the H matrix. A statistical analysis of the normal residual signals measures upper and lower variation in normal residual response. Finally, these upper and lower values are multiplied by a user-specified factor, which typically varies between a value of 2 and 3, to set residual thresholds for each modeled sensor. The residual thresholds form the basis for distinguishing between normal and abnormal sensor behavior.
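
A minimal sketch of this threshold-setting step, assuming a leave-one-out loop over the H matrix and a percentile-based measure of the normal residual variation; the percentiles are an assumption, since the text does not name the statistic, and estimate_from is a placeholder for the kernel model.

```python
import numpy as np

def residual_thresholds(H, estimate_from, factor=2.5):
    """Leave-one-out residuals over the model memory H, then upper and lower
    limits scaled by a user-specified factor (typically 2 to 3)."""
    residuals = []
    for i in range(len(H)):
        D = np.delete(H, i, axis=0)                 # leave observation i out
        residuals.append(H[i] - estimate_from(D, H[i]))
    residuals = np.asarray(residuals)
    upper = factor * np.percentile(residuals, 97.5, axis=0)
    lower = factor * np.percentile(residuals, 2.5, axis=0)
    return lower, upper
```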


According to the present invention, the modeling technique can be chosen from a variety of known empirical kernel-based modeling techniques. By way of example, models based on kernel regression, radial basis functions and similarity-based modeling are usable in the context of the present invention. These methods can be described by the equation:










$$x_{est} = \sum_{i=1}^{L} c_i \, K(x_{new}, x_i) \qquad (1)$$








where a vector x_est of sensor signal estimates is generated as a weighted sum of results of a kernel function K, which compares the input vector x_new of sensor signal measurements to multiple learned snapshots of sensor signal combinations, x_i. The kernel function results are combined according to weights c_i, which can be determined in a number of ways. The above form is an “autoassociative” form, in which all estimated output signals are also represented by input signals. This contrasts with the “inferential” form in which certain output signal estimates are provided that are not represented as inputs, but are instead inferred from the inputs:










$$\hat{y} = \sum_{i=1}^{L} c_i \, K(x_{new}, x_i) \qquad (2)$$








where in this case, y-hat is an inferred sensor estimate. In a similar fashion, more than one sensor can be simultaneously inferred.


In a preferred embodiment of the invention, the modeling technique used is similarity-based modeling, or SBM. According to this method, multivariate snapshots of sensor data are used to create a model comprising a matrix D of learned reference observations. Upon presentation of a new input observation x_in comprising sensor signal measurements of equipment behavior, autoassociative estimates x_est are calculated according to:

$$x_{est} = D \cdot (D^T \otimes D)^{-1} \cdot (D^T \otimes x_{in}) \qquad (3)$$

or more robustly:










$$x_{est} = \frac{D \cdot (D^T \otimes D)^{-1} \cdot (D^T \otimes x_{in})}{\sum \left( (D^T \otimes D)^{-1} \cdot (D^T \otimes x_{in}) \right)} \qquad (4)$$








where the similarity operator is signified by the symbol ⊗, and can be chosen from a number of alternative forms. Generally, the similarity operator compares two vectors at a time and returns a measure of similarity for each such comparison. The similarity operator can operate on the vectors as a whole (vector-to-vector comparison) or elementally, in which case the vector similarity is provided by averaging the elemental results. The similarity operator is such that it ranges between two boundary values (e.g., zero to one), takes on the value of one of the boundaries when the vectors being compared are identical, and approaches the other boundary value as the vectors being compared become increasingly dissimilar.


An example of one similarity operator that may be used in a preferred embodiment of the invention is given by:









$$s = e^{-\frac{\left\| x_{in} - x_i \right\|}{h}} \qquad (5)$$








where h is a width parameter that controls the sensitivity of the similarity to the distance between the input vector x_in and the example vector x_i. Another example of a similarity operator is given by:









$$s = \frac{1}{N} \sum_{i=1}^{N} \left( \left[ 1 + \frac{\left[ (x_i^A - x_i^B)/R_i \right]^{\lambda}}{C} \right]^{-1} \right) \qquad (6)$$








where N is the number of sensor variables in a given observation, C and λ are selectable tuning parameters, R_i is the expected range for sensor variable i, and the elements of the vectors x^A and x^B corresponding to sensor i are treated individually.
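
For concreteness, the following sketch implements the autoassociative SBM estimate of equations (3) and (4), using the exponential similarity operator of equation (5) as the ⊗ operation. The kernel width, the matrix sizes, and the toy data are illustrative assumptions.

```python
import numpy as np

def similarity(a, b, h=1.0):
    # Similarity operator per equation (5): 1 for identical vectors, falling
    # toward 0 as the vectors grow apart; h is the width parameter.
    return np.exp(-np.linalg.norm(a - b) / h)

def sbm_estimate(D, x_in, h=1.0):
    """Autoassociative SBM estimate per equations (3)-(4).  D holds one
    learned observation per row; the similarity operator replaces the
    ordinary inner product in the matrix products."""
    # D^T ⊗ D : pairwise similarities among the learned observations.
    G = np.array([[similarity(a, b, h) for b in D] for a in D])
    # D^T ⊗ x_in : similarity of each learned observation to the input.
    s = np.array([similarity(d, x_in, h) for d in D])
    w = np.linalg.solve(G, s)        # (D^T ⊗ D)^-1 · (D^T ⊗ x_in)
    w = w / np.sum(w)                # equation (4): normalize by the weight sum
    return w @ D                     # weighted sum of the learned observations

# Toy usage: the estimate of a slightly perturbed learned vector stays close.
rng = np.random.default_rng(0)
D = rng.normal(size=(30, 5))
x_in = D[3] + 0.01 * rng.normal(size=5)
print(np.round(sbm_estimate(D, x_in) - x_in, 3))
```

The division by the summed weights in the second-to-last step is what makes the estimate the normalized, more robust form of equation (4) rather than the raw form of equation (3).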


Further according to a preferred embodiment of the present invention, an SBM-based model can be created in real-time with each new input observation by localizing within the learned reference library to those learned observations with particular relevance to the input observation, and constituting the D matrix from just those observations. With the next input observation, the D matrix would be reconstituted from a different subset of the learned reference matrix, and so on. A number of means of localizing may be used, including nearest neighbors to the input vector, and highest similarity scores.


By way of example, another example-learning kernel-based method that can be used to generate estimates according to the invention is kernel regression, as exemplified by the Nadaraya-Watson equation (in autoassociative form):










$$\hat{x} = \frac{\sum_{i=1}^{L} d_i \, K(x_{new}, d_i)}{\sum_{i=1}^{L} K(x_{new}, d_i)} = \frac{D \cdot (D^T \otimes x_{new})}{\sum (D^T \otimes x_{new})} \qquad (7)$$








which in inferential form becomes:










$$\hat{y} = \frac{\sum_{i=1}^{L} d_i^{out} \, K(x_{new}, d_i^{in})}{\sum_{i=1}^{L} K(x_{new}, d_i^{in})} = \frac{D^{out} \cdot (D_{in}^{T} \otimes x_{new})}{\sum (D_{in}^{T} \otimes x_{new})} \qquad (8)$$







Localization again is used to preselect the reference observations that will comprise the D matrix.
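
A short sketch of the Nadaraya-Watson estimators of equations (7) and (8); the Gaussian-type kernel and its width are assumptions, and D is stored with one learned observation per row.

```python
import numpy as np

def kernel(a, b, h=1.0):
    # Kernel comparing the input vector to a learned vector (width h assumed).
    return np.exp(-np.linalg.norm(a - b) ** 2 / h ** 2)

def nw_autoassociative(D, x_new, h=1.0):
    """Equation (7): kernel-weighted average of the learned observations."""
    w = np.array([kernel(d, x_new, h) for d in D])
    return (w @ D) / w.sum()

def nw_inferential(D_in, D_out, x_new, h=1.0):
    """Equation (8): weights come from the input portion of each learned
    observation; the estimate is built from the matching output portion."""
    w = np.array([kernel(d, x_new, h) for d in D_in])
    return (w @ D_out) / w.sum()
```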


Turning to the specific details of localization, a number of methods can be used to localize on the subset of available reference observations used to constitute the D matrix, based on the input observation for which the estimate is to be generated. According to a first way, the nearest neighbors to the input observation can be used, as determined with a number of distance metrics, including Euclidean distance. Reference observations can be included based on nearest neighbor either (a) so that a requisite minimum number of reference observations are selected for inclusion in the D matrix, regardless of how distant the furthest included observation is, or (b) so that all reference observations within a selected distance are included, no matter how many or few there are.


According to another way of localizing, the kernel similarity operator K itself is used to measure the similarity of every available reference vector or observation, with the input observation. Again, either (a) a requisite number of the most similar reference observations can be included, or (b) all reference observations above a threshold similarity can be included.


According to a variation of the above, another way of choosing reference vectors for the D matrix combines the above distance and similarity approaches with the criterion that the D matrix must contain at least enough reference vectors that each sensor value of the input observation is bracketed by a low and a high sensor value from the reference observations; that is, the input observation should not have a sensor value that is outside the range of values seen for that sensor across all the reference observations included in the D matrix. If an input sensor is out of range in this sense, then further reference vectors are added until the range for that sensor is covered. A minimum threshold of similarity or distance can be used such that if no reference vector with at least that similarity, or at least within that distance, is found to cover the range of the sensor, then the D matrix is used as is, with the input observation sensor lying outside the covered range.
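
The sketch below combines these localization criteria: a most-similar (or nearest-neighbor) preselection, extended as needed so that each input sensor value is bracketed by the selected reference observations. The neighborhood size and the Euclidean default are illustrative assumptions, and the minimum-similarity cutoff mentioned above is omitted for brevity.

```python
import numpy as np

def localize(H, x_in, k=30, score=None):
    """Select reference observations (rows of H) to constitute the D matrix."""
    if score is None:
        score = lambda a, b: -np.linalg.norm(a - b)  # nearest-neighbor scoring
    order = np.argsort([score(row, x_in) for row in H])[::-1]  # best first
    chosen = list(order[:k])
    # Bracketing criterion: if an input sensor value falls outside the range
    # covered by the chosen observations, add further candidates until it is
    # bracketed (or candidates run out, in which case D is used as is).
    for j in range(H.shape[1]):
        for idx in order[k:]:
            sel = H[chosen, j]
            if sel.min() <= x_in[j] <= sel.max():
                break
            chosen.append(idx)
    return H[chosen]
```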


The basic approach for modeling of a boiler, as discussed herein, is to use one model to monitor boiler performance and a number of other models to monitor various tube bundle regions, such as the primary superheater, secondary superheater, reheater, furnace waterwall and economizer sections.


The boiler performance model is designed to provide the earliest indications of developing tube leaks by detecting subtle deviations in boiler performance induced by the tube leaks. The main constituents of the boiler performance model are the sensors that monitor the input and output conditions of both the fuel/air side and water/steam sides of the boiler. On the fuel/air side of the boiler, these include sensors that measure the flow of fuel and air into the furnace section of the boiler and the flow of air and combustion products out of the boiler to the plant's stack. On the water/steam side, these include sensors that measure the flow of feedwater into the first heat transfer section of the boiler (typically the economizer) and all flows of saturated and superheated steam out of the boiler leading to various turbine stages. In addition, sensors that measure the energy content of these flows, such as power expended by system pumps and the power generated by the plant, are included in the model. Conceptually, the boiler performance model is constructed of the constituent elements in mass and energy balances across the fuel/air and water/steam components of the boiler. Since the model is trained with data collected while the boundary between the two sides is intact, the model is designed to detect changes in the mass and energy balances when the boundary between the two sides is breached by boiler tube leaks.


Experience with the boiler performance model during boiler tube faults has revealed that key boiler sensors that show deviations correlated with tube leaks include: air flows, forced and induced draft pump currents, outlet gas pressures and temperatures, excess (i.e., uncombusted) oxygen fractions, and steam drum levels and temperatures. Most of the boiler model sensors that provide early warning monitor the flow of air and combustion products through the boiler. The effect of tube leaks on water/steam side parameters tends to show up in the later stages of the fault progression.


The heat transfer regions of the boiler are typically composed of tube bundles, with high pressure steam/water mixture on the inside of the tubes and hot air/combustion product mixture on the outside. The number and composition of the heat transfer models depends upon the boiler design and the installed instrumentation. The bulk of sensors included in the models are thermocouples that monitor the temperature of the steam/water mixture within individual tubes. For better instrumented boilers, the number of tube bundle thermocouples can easily run into the hundreds. For the most part, these tube bundle thermocouples are located outside of the heat transfer region, away from the caustic air/combustion product mixture, and are located near the tops of the tubes where they connect with steam headers.


Residual Signal Analysis for Rule Development


After development of a set of models for a targeted boiler, all historic data collected from the boiler are analyzed. These calculations include any data prevented from being added to the reference matrix by the data filtering algorithms or due to concurrence with tube leak events. In the event that an observation vector contains nonnumeric data values, the autoassociative form of the model can be switched to an inferential form of the model for the missing sensor value. These calculations produce residual signals that bear signatures of boiler tube leaks.


The residual signals generated during the modeling of all collected operating data are analyzed to detect sensor abnormalities. The first step in residual signal analysis is to apply linear or nonlinear windowed-smoothing algorithms (e.g., moving average, median and olympic filters) to accentuate residual signal deviations. Next, smoothed and unsmoothed residual signals are analyzed with the residual threshold alerting and window ratio rule algorithms. These algorithms provide simple means to detect the onset and measure the persistence of sensor abnormalities. Other sensitive statistical techniques, including the sequential probability ratio test and run-of-signs tests can be used to provide additional means of detecting onset and measuring persistence of sensor abnormalities. For residual signals that display deviations, one-dimensional kernel techniques, including kernel regression and SBM regression algorithms, are used to calculate the rate-of-change of the deviations.
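
As an illustration of this analysis stage, the sketch below applies a moving-average smoother, a residual-threshold alert, and a simple window ratio rule. The window lengths and the 50% ratio are assumed values, and the more sensitive statistical tests named above (sequential probability ratio test, run-of-signs) are not shown.

```python
import numpy as np

def moving_average(r, window=12):
    # Windowed smoothing to accentuate sustained residual deviations.
    return np.convolve(r, np.ones(window) / window, mode="same")

def threshold_alerts(r, lower, upper):
    # Residual threshold alerting: flag samples outside the normal band.
    return (r > upper) | (r < lower)

def window_ratio_rule(alerts, window=20, ratio=0.5):
    """Raise a persistence flag when more than `ratio` of the samples in a
    trailing window are in alert (window and ratio are illustrative)."""
    flags = np.zeros_like(alerts, dtype=bool)
    for i in range(window - 1, len(alerts)):
        flags[i] = alerts[i - window + 1:i + 1].mean() >= ratio
    return flags
```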


The residual signal analysis provides a database of time-varying measurements that can be used to characterize the tube leak faults. These measurements include time of onset, direction of deviation (i.e., negative or positive), duration, amplitude and rate-of-change for all sensors that exhibit residual signal abnormalities. Utilizing maintenance records, the residual signals and time-varying measurements can be recast as functions relative to the time at which the tube leak is detected or time at which the boiler is shutdown to repair the leak. Collectively, the residual signals and measurements form a set of residual signal signatures for each boiler fault.


Utilizing maintenance records and knowledge of boiler design and boiler fault mechanisms to group similar tube leak events, the residual signal signatures are reviewed to identify the salient characteristics of individual fault types. An important aspect of this task is to review operational records to determine whether any of the residual signal signatures can be explained by changes in operation that were not captured during the training process. Because of the high-dimensionality of the residual signal signatures, classification algorithms, such as K-means, LVQ neural network and SBM classification algorithms can be used to reveal common features hidden within the residual signal signatures that may escape expert review. Salient residual signal features that are identified for a given fault type are cast into diagnostic rules to provide a complete boiler tube fault solution.


An important application of the heat transfer models constructed to monitor tube bundle sections of the boiler, as discussed herein, is to merge model results with data defining the physical location of tube bundle thermocouples to infer the location of tube leaks. Merging of these data allows for the development of intuitive visual interfaces.



FIG. 2 illustrates an intuitive visual interface 200 according to an embodiment of the invention. The visual interface 200 may be displayed on a computer monitor or on some other type of display viewable by an operator. FIG. 2 represents a bird's-eye view of a boiler 205, looking from the highest region of the boiler 205, called the penthouse, down into the heat transfer sections of the boiler 205. The left side of the figure, labeled “furnace” 210, represents the combustion zone of the boiler 205. Hot combustion gases rise from the furnace 210 and are redirected horizontally across the tube bundle regions where they heat the water/steam mixture in the tubes. As represented by the figure, the hot combustion gases flow from left to right, passing through the secondary superheater 215, reheater 220, and then primary superheater 225 sections, in turn. These sections are labeled along the top of the figure and are represented by gray shaded regions within the figure. Embedded within these regions are rectangular grids which are used to roughly represent the location of the various tube bundle thermocouples. The numbers that are arrayed vertically along the sides of the rectangular grid indicate pendant numbers. A pendant is a collection of steam tubes that are connected to a common header. For the boiler 205 represented in the figure, a pendant contains from 22 to 36 individual tubes, depending on tube bundle region. Within a particular pendant, two or three of the steam tubes may contain thermocouples. Pendants that contain tubes monitored by thermocouples are indicated by colors that vary from red to blue. Pendants that lack tube thermocouples or whose thermocouple(s) are inoperable are represented by white rectangles in the grids.


The colors are used to indicate the value of the normalized residual produced by the model of a thermocouple at a given moment in time. Residual values for a thermocouple are normalized by a statistical measure of the normal amount of variation in the model for that thermocouple. The relationship between individual shades of color and corresponding normalized residual values is depicted by the vertical color bar located to the right of the figure.


The results depicted in FIG. 2 were generated for a boiler 205 that experienced boiler tube leaks in its reheater 220. The figure shows normalized residual signals for tube bundle thermocouples for a time that was six hours prior to the time at which the operator suspected that a tube fault event had occurred and initiated a boiler 205 shutdown. Following shutdown of the boiler 205, maintenance personnel inspected all tube bundle regions of the boiler and discovered that two reheater steam tubes, one in pendant 33 and the other in pendant 35, had failed. The locations of the pendants that contained the failed tubes are depicted by two solid black dots. FIG. 2 shows that thermocouples situated closest to the failed steam tubes exhibit the largest residual signal changes. The two thermocouples located in pendant 39 of the reheater are shaded to indicate that the normalized residuals for these sensors have shifted in the positive direction. The one operable thermocouple in pendant 31 of the reheater is shaded differently to indicate that its residual has shifted negatively to a large degree.



FIG. 2 shows that steam tubes located to the right of the failed tubes across the combustion gas flow path are experiencing higher temperatures than those expected by the model, while steam tubes located to the left of the failed tubes are experiencing lower temperatures than those expected by the model. These changes in temperature profile are due to the directional nature of the tube failure. In most cases, tube failures are characterized by a small opening in the tube or by a tear along the length of the tube. Rarely does the opening extend around the circumference of the tube. Thus the high pressure steam tends to escape from the leak preferentially in one direction. Since the high pressure steam flowing from the failed tube is cooler than the surrounding combustion gases, steam tubes along the direction of the leak are cooled. The high pressure steam disturbs the normal flow of the combustion gases, forcing the gases to the other side of the tube fault and heating the steam tubes on the opposite side of the leak. The normalized residual values for thermocouples located relatively far from the failed tubes are within the bounds of normal model variation, and thus are depicted by shaded rectangles in the grids of FIG. 2.


Adaptation


Because typical amounts of historical data used for model training do not necessarily contain all combinations of operating parameters observed through the normal lifetime of a boiler, the real-time monitoring solution is preferably coupled with a means to maintain model accuracy, by application of various adaptation algorithms, including manual (user-driven), trailing, out-of-range, in-range and control-variable driven adaptation algorithms. In manual adaptation, the user identifies a stretch of data which has been validated as clear of faults, and that data is added to the repertoire of reference data from which the H matrix is then reconstituted. In out-of-range adaptation, observations that contain new highs or lows for a sensor, beyond the range of what was seen across all the available reference data, are added to the reference data (optionally after validating that no fault is occurring at the time) and the H matrix is instantly or occasionally reconstituted. Alternatively, the new observation can be added directly to the H matrix. In control-variable driven adaptation, observations corresponding to new control settings not used during the time frame of the original reference data are added to the reference data, and the H matrix is reconstituted. In in-range adaptation, observations falling into sparsely represented spaces in the dimensional space of the model are added to the reference data upon determination that no fault was occurring during that observation. The preferred embodiment uses the trailing adaptation algorithm (detailed below) coupled with manual adaptation as needed.


In the trailing adaptation algorithm, historical data that lag the data currently being analyzed are continually added to the H matrix and thus are available for future modeling. The trailing adaptation algorithm applies the same data filtering algorithms used during model development to test the suitability of trailing data encountered during monitoring. This prevents bad data (nonnumeric values, sensor drop-outs, spiking and flat-lined data) from being added to the H matrix. To apply the trailing adaptation algorithm, the user needs to set the lag time, set the maximum size of the H matrix, and determine how to remove data from the H matrix when the maximum size is reached. The lag time is set to the maximum length of time that a boiler can operate with a tube leak, which typically equals one to two weeks. The maximum H matrix size is set based on balancing adequate model response with algorithm performance (CPU time). Experience has shown that a maximum H matrix size of 1000 observation vectors provides a good balance between model accuracy and algorithm performance. The preferred method for removing data from the H matrix is to remove the oldest observation vectors from the matrix. Other techniques based on similarity can be used in the alternative (e.g., remove the vector most similar to the current observation vector, remove the vector least similar to all other vectors in H, etc.).
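
A sketch of the trailing adaptation bookkeeping described here, assuming hourly observations (so a two-week lag is 14 × 24 samples), a 1000-vector cap on H, oldest-first removal, and a passes_filters hook standing in for the data filtering algorithms; the fault flag corresponds to the binary flag set during manual adaptation to keep leak-period data out of H.

```python
import numpy as np
from collections import deque

class TrailingAdapter:
    """Fold sufficiently lagged, filter-passing, non-fault observations into
    the model memory H, dropping the oldest vectors once H is full."""

    def __init__(self, H, lag_obs=14 * 24, max_size=1000,
                 passes_filters=lambda x: True):
        self.H = list(H)
        self.pending = deque()          # observations waiting out the lag
        self.lag_obs = lag_obs
        self.max_size = max_size
        self.passes_filters = passes_filters

    def add_observation(self, x, fault_flag=False):
        self.pending.append((np.asarray(x), fault_flag))
        # Once an observation has lagged long enough, consider adapting on it.
        while len(self.pending) > self.lag_obs:
            obs, flagged = self.pending.popleft()
            if flagged or not self.passes_filters(obs):
                continue                # never learn data tied to a fault
            self.H.append(obs)
            if len(self.H) > self.max_size:
                self.H.pop(0)           # remove the oldest vector first

    def memory(self):
        return np.array(self.H)
```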


The trailing adaptation algorithm continually adjusts the kernel-based localized model to maintain model accuracy, despite varying operating conditions. Because the trailing adaptation algorithm works in a delayed manner, gross changes in operating conditions such as resetting of baseline power level can cause residual signal signatures that are misinterpreted by the diagnostic rules. To ameliorate this effect, the manual adaptation algorithm is used to provide immediate model adjustment when gross changes in operating conditions are observed. In the manual adaptation algorithm, the user identifies the time at which the gross change in operating conditions is first observed. The algorithm then collects all recently analyzed data from that time up to the current time, passes the data through the same data filtering algorithms used during model development, and adds the remaining data to the H matrix. Another aspect of the manual adaptation algorithm is that it can be used to prevent automatic model adjustment by the trailing adaptation algorithm when abnormal changes in operating conditions are observed. For instance when a boiler tube leak occurs, the user specifies the period of recently collected data that corresponds to the leak and identifies the data as being unsuitable for consideration by the trailing adaptation algorithm. This is accomplished by the simple setting of a binary flag that is attached to each observation vector processed by the system. When the trailing adaptation algorithm encounters these vectors, the algorithm reads the binary flags and prevents the vectors from being added to the H matrix.


Because the trailing and manual adaptation algorithms continually modify the H matrix to capture changing operating conditions, the residual thresholds need to be recalculated occasionally. Since the thresholds are a function of the statistical width of normal residual signals generated from the H matrix, the thresholds need to be recalculated only when a sizable fraction (e.g., 10%) of the H matrix is replaced.


Example

Turning to FIG. 3, two plots are shown. A first plot 300 shows the raw data 305 and the corresponding model estimate 307 for a pressure drop sensor in an air heater section of a boiler. The difference between the raw data 305 and the estimate 307 is the residual 315 which is shown in the bottom plot 310. The residual 315 is tested against statistically determined upper and lower thresholds 320 and 322 respectively. As can be seen, the estimate 307 and the raw data 305 are very close, and the residual 315 is well behaved between thresholds 320 and 322 until late in the plots, where a residual exceedance 325 is seen corresponding to a shutdown of the boiler for repair. The model of the invention found no problem with this portion of the boiler.


Turning to FIG. 4, a corresponding parallel air heater section of the boiler of FIG. 3 is shown, again with two plots. The top plot 400 shows the raw data 405 and the corresponding model estimate 407 for a pressure drop sensor in the air heater section parallel to that shown in FIG. 3. The difference between the raw data 405 and the estimate 407 is the residual 415 that is shown in the bottom plot 410. The residual 415 is tested against statistically determined upper and lower thresholds 420 and 422 respectively. As can be seen, the estimate 407 and the raw data 405 deviate over time, with raw data 405 moving lower than was expected according to estimate 407. Correspondingly, the residual 415 exceeds lower threshold 422 further and further leading up to the shutdown of the boiler for repair. The deviation 425 shown here evidences the boiler tube leak that led to the shutdown of the boiler.



FIG. 5 illustrates a monitoring apparatus 500 for diagnosing faults in a system according to an embodiment of the invention. As shown, the monitoring apparatus monitors a boiler 505 for faults. Sensors are utilized to monitor the boiler 505. A first set of the sensors 510 monitors conditions of the fuel/gas mixture of the boiler 505. A second set of the sensors 515 monitors conditions of the water/steam mixture of the boiler 505. The conditions being monitored by the first set of sensors 510 and the second set of sensors 515 include pressures, temperatures, and flow rates.


The monitoring apparatus 500 includes a reference data store 520 containing instrumented data corresponding to the boiler 505. The monitoring apparatus 500 also includes a processor 525 to (a) construct an empirical model for the targeted component of the system according to a nonparametric kernel-based method trained from example observations of sensors monitoring the system; (b) generate substantially real-time estimates based on the instrumented data corresponding to the targeted component; (c) compare and difference the substantially real-time estimates with instrumented readings from the sensors to provide residual values; and (d) analyze the residual values to detect the faults and determine a location of the faults in the monitored system.


Teachings discussed herein are directed to a method, system, and apparatus for diagnosing faults in a system monitored by sensors. An empirical model is constructed for a targeted component of the monitored system. The empirical model is trained with an historical data source that contains example observations of the sensors. Substantially real-time estimates are generated based on instrumented data corresponding to the targeted component. The substantially real-time estimates are compared and differenced with instrumented readings from the sensors to provide residual values. The residual values are analyzed to detect the faults and determine a location of the faults in the monitored system. At least one inferred real-time estimate may be generated based on data corresponding to the targeted component.


The empirical model may be utilized to generate estimated sensor values according to a nonparametric kernel-based method. The empirical model may further generate estimated sensor values according to a similarity-based modeling method or a kernel regression modeling method.


The empirical model may be updated in real-time with each new input observation localized within a learned reference library to those learned observations that are relevant to the input observation.


The empirical model may implement an adaptation algorithm to learn new normal variation patterns in operation of the monitored system. The adaptation may utilize at least one of: lagging-window, manual (user-driven), trailing, out-of-range, in-range, and control-variable driven adaptation algorithms.


A first set of the sensors may be utilized to monitor fuel/gas conditions of the boiler, and a second set of the sensors to monitor water/steam conditions of the boiler. The targeted component may be a boiler of a fossil fueled power plant environment.


Some embodiments described above include a boiler in a fossil fuel power plant. However, those skilled in the art will recognize that the targeted component may instead be the steam generator of a nuclear power plant. In such case, the first set of sensors would monitor high pressure water conditions of the primary side of a nuclear power plant steam generator. In general, the first set of sensors utilized by the method discussed herein monitors the “hot side conditions” of steam generating equipment. The “hot side” contains the fluid that transfers thermal energy from the power source. The power source is the reactor core of a nuclear power plant or the combustion region of a fossil fuel plant.


A visual interface may be provided to graphically display components of the steam generating equipment and indicate residual values for locations of thermocouples within the tube bundle sections of the steam generating equipment.


Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the spirit and scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept.

Claims
  • 1. A method of diagnosing faults in a monitored system, the monitored system being monitored by sensors, comprising: constructing an empirical model for a targeted component of the monitored system, wherein the empirical model is trained with a historical data source that contains example observations of the sensors;generating substantially real-time estimates based on instrumented data corresponding to the targeted component;comparing and differencing the substantially real-time estimates with instrumented readings from the sensors in the form of input observations to provide residual values;analyzing the residual values to detect the faults and determine a location of the faults in the monitored system; andadapting, by a processor, the empirical model with input observations indicating normal operation of the monitored system only after a predetermined time period from acquiring each such input observation.
  • 2. The method of claim 1, wherein the targeted component consists of steam generating equipment which contains a first set of sensors to monitor hot side conditions of steam generating equipment and a second set of the sensors to monitor steam/water conditions of the steam generating equipment.
  • 3. The method of claim 2, further comprising determining the location of a tube leak by graphically displaying, on a visual interface, a representation of one or more components of the steam generating equipment and the locations of tubes within one or more of the components, and indicating residual values at the representation of physical locations of sensors that correspond to the residual values, the sensors measuring the temperature near tubes at different locations within the one or more components.
  • 4. The method of claim 3 wherein the sensors comprise tube-bundle thermocouples.
  • 5. The method of claim 2, wherein the steam generating equipment is a boiler of a fossil-fuel power plant.
  • 6. The method of claim 2, wherein the steam generating equipment is a steam generator of a nuclear power plant.
  • 7. A method of claim 1, wherein the empirical model generates estimated sensor values according to a nonparametric kernel-based method.
  • 8. A method of claim 7, wherein the empirical model generates estimated sensor values according to a similarity-based modeling method.
  • 9. A method of claim 7, wherein the empirical model generates estimated sensor values according to a kernel regression modeling method.
  • 10. The method of claim 1, wherein adapting comprises having the empirical model implement an adaptation algorithm to learn new normal variation patterns in operation of the monitored system.
  • 11. The method of claim 10, wherein the adaptation algorithm utilizes at least one of: manual (user-driven), trailing, out-of-range, in-range, and control-variable driven adaptation algorithms.
  • 12. The method of claim 1, wherein the empirical model is updated to form a subset of the example observations and in real-time with each new input observation localized within a learned reference library to those example observations that are relevant to the input observation, according to predetermined relevance criteria.
  • 13. The method of claim 1, wherein the generating substantially real-time estimates based on data corresponding to the targeted component comprises generating at least one inferred real-time estimate.
  • 14. The method of claim 1, wherein the predetermined time period is set to at least the amount of time the system being monitored can still operate with a known fault.
  • 15. The method of claim 3 wherein the representation is a grid with columns and rows, and wherein squares formed by the grid represent a bundle of tubes.
  • 16. A monitoring apparatus for diagnosing faults in a system monitored by sensors, comprising: a reference data store containing instrumented data corresponding to a targeted component of the system;a processor to construct an empirical model for a targeted component of the monitored system, wherein the empirical model is trained with a historical data source that contains example observations of the sensors;generate substantially real-time estimates based on the instrumented data corresponding to the targeted component;compare and difference the substantially real-time estimates with instrumented readings from the sensors in the form of input observations to provide residual values;analyze the residual values to detect the faults and determine a location of the faults in the monitored system; andadapt the empirical model with input observations indicating normal operation of the system only after a predetermined time period from acquiring each such input observation.
  • 17. The monitoring apparatus of claim 16, wherein the processor constructs an empirical model to generate estimated sensor values according to a nonparametric kernel-based method.
  • 18. The monitoring apparatus of claim 17, wherein the processor constructs an empirical model to generate estimated sensor values according to a similarity-based modeling method.
  • 19. The monitoring apparatus of claim 17, wherein the processor constructs an empirical model to generate estimated sensor values according to a kernel regression modeling method.
  • 20. The monitoring apparatus of claim 16, further comprising a visual interface to graphically display a representation of the physical location of components of the targeted component and indicate residual values at the physical locations of sensors that correspond to the residual values and on the representation.
  • 21. The monitoring apparatus of claim 20 wherein the sensors comprise tube-bundle thermocouples.
  • 22. The monitoring apparatus of claim 20 wherein the representation is a grid with columns and rows, and wherein squares formed by the grid represent a bundle of tubes.
  • 23. The monitoring apparatus of claim 16, wherein the processor adapts by constructing an empirical model that implements an adaptation algorithm to learn new normal variation patterns in operation of the system.
  • 24. The monitoring apparatus of claim 23, wherein the adaptation algorithm utilizes at least one of: manual (user-driven), trailing, out-of-range, in-range, and control-variable driven adaptation algorithms.
  • 25. The monitoring apparatus of claim 16, wherein the processor constructs an empirical model that is updated to form a subset of the example observations and in real-time with each new input observation localized within a learned reference library to those example observations that are relevant to the input observation, according to predetermined relevance criteria.
  • 26. The apparatus of claim 16, wherein the predetermined time period is set to at least the amount of time the system being monitored can still operate with a known fault.
  • 27. A method for characterizing tube leak faults in a monitored system, comprising: collecting historical sensor data for a targeted component of the monitored system;producing residual signals that are correlative of given tube leak fault types in the targeted component;analyzing the residual signals to generate residual signal signatures according to at least one predetermined analysis algorithm;collecting the residual signal signatures in a database of time-varying measurements;analyzing, by a processor, the database of residual signal signatures according to at least one predetermined classification algorithm to characterize salient features of the residual signal signatures relating to the given tube leak fault types, anddetermining the location of a tube leak at a particular tube by providing, on a visual interface, a graphical display of a representation of the physical location of tubes within the targeted component and indicating residual values at a representation of the physical location of sensors corresponding to the residual values and on the display, the sensors being disposed and arranged to measure the temperature at a plurality of locations near different tubes located within the targeted component.
  • 28. The method of claim 27, wherein the at least one predetermined analysis algorithm comprises at least one of residual threshold alerting, window ratio rule, sequential probability ratio test, and run-of-signs algorithms.
  • 29. The method of claim 27, wherein the at least one predetermined classification algorithm comprises at least one of K-means, LVQ neural network, and SBM classification algorithms.
  • 30. The method of claim 27 wherein the sensors comprise tube-bundle thermocouples.
  • 31. The method of claim 27 wherein the representation is a grid with columns and rows, and wherein squares formed by the grid represent a bundle of tubes.
  • 32. A method of monitoring a system, comprising: collecting historical sensor data for a component of the monitored system;generating estimate values by using the historical sensor data;producing residual values that indicate a difference between the estimate values and current input values;analyzing the residual values to determine if a particular type of fault exists; anddetermining the location of the part on the system that caused the fault by providing a graphical display, on a visual interface, of a representation of the physical location of the parts of the system being monitored, and a representation of residual values at a moment in time and on the graphical display, each residual value being represented at the physical location of a sensor on the display and corresponding to the residual value.
  • 33. The method of claim 32 wherein the system is a steam generating system, and wherein the parts are tubes within a system component, and wherein the sensors are disposed and arranged to measure the temperature at a plurality of locations near different tubes located within the system component.
  • 34. The method of claim 33 wherein the representation is a grid with columns and rows, and each square formed by the grid represents a bundle of tubes.
  • 35. The method of claim 32 wherein the location with a sensor changes in color to indicate the amount of the residual value at that location.
  • 36. The method of claim 32 wherein the residual values are generated by using similarity based modeling wherein the estimate values are generated by using a calculation that uses both the historical sensor data and the current input values.
RELATED APPLICATION DATA

This application claims priority to provisional application Ser. No. 60/826,203, filed Sep. 19, 2006, the disclosure of which is hereby incorporated by reference herein in its entirety.

5463769 Tate et al. Oct 1995 A
5465321 Smyth Nov 1995 A
5473532 Unno et al. Dec 1995 A
5479574 Glier et al. Dec 1995 A
5481647 Brody et al. Jan 1996 A
5481674 Mahavadi Jan 1996 A
5486997 Reismiller et al. Jan 1996 A
5495168 de Vries Feb 1996 A
5496450 Blumenthal et al. Mar 1996 A
5500940 Skeie Mar 1996 A
5502543 Aboujaoude Mar 1996 A
5526446 Adelson et al. Jun 1996 A
5539638 Keeler et al. Jul 1996 A
5544320 Konrad Aug 1996 A
5548528 Keeler et al. Aug 1996 A
5553239 Heath et al. Sep 1996 A
5559710 Shahraray et al. Sep 1996 A
5561431 Peele et al. Oct 1996 A
5566092 Wang et al. Oct 1996 A
5574387 Petsche et al. Nov 1996 A
5579232 Tong et al. Nov 1996 A
5586066 White et al. Dec 1996 A
5596507 Jones et al. Jan 1997 A
5600726 Morgan et al. Feb 1997 A
5602733 Rogers et al. Feb 1997 A
5608845 Ohtsuka et al. Mar 1997 A
5610339 Haseley et al. Mar 1997 A
5611052 Dykstra et al. Mar 1997 A
5612886 Weng Mar 1997 A
5617342 Elazouni Apr 1997 A
5623109 Uchida et al. Apr 1997 A
5629872 Gross et al. May 1997 A
5629878 Kobrosly May 1997 A
5629879 Lelle May 1997 A
5638413 Uematsu et al. Jun 1997 A
5640103 Petsche et al. Jun 1997 A
5644463 El-Sharkawi et al. Jul 1997 A
5657245 Hecht et al. Aug 1997 A
5663894 Seth et al. Sep 1997 A
5668944 Berry Sep 1997 A
5671635 Nadeau et al. Sep 1997 A
5680409 Qin et al. Oct 1997 A
5680541 Kurosu et al. Oct 1997 A
5682317 Keeler et al. Oct 1997 A
5689416 Shimizu et al. Nov 1997 A
5689434 Tambini et al. Nov 1997 A
5696907 Tom Dec 1997 A
5699403 Ronnen Dec 1997 A
5704029 Wright, Jr. Dec 1997 A
5708780 Levergood et al. Jan 1998 A
5710723 Hoth et al. Jan 1998 A
5714683 Maloney Feb 1998 A
5727144 Brady et al. Mar 1998 A
5727163 Bezos Mar 1998 A
5737228 Ishizuka et al. Apr 1998 A
5745382 Vilim et al. Apr 1998 A
5745654 Titan Apr 1998 A
5748496 Takahashi et al. May 1998 A
5751580 Chi May 1998 A
5753805 Maloney May 1998 A
5754451 Williams May 1998 A
5754965 Hagenbuch May 1998 A
5757309 Brooks et al. May 1998 A
5761090 Gross et al. Jun 1998 A
5761640 Kalyanswamy et al. Jun 1998 A
5764509 Gross et al. Jun 1998 A
5774379 Gross et al. Jun 1998 A
5774882 Keen et al. Jun 1998 A
5774883 Andersen et al. Jun 1998 A
5784285 Tamaki et al. Jul 1998 A
5787138 Ocieczek et al. Jul 1998 A
5790977 Ezekiel Aug 1998 A
5791147 Earley et al. Aug 1998 A
5792072 Keefe Aug 1998 A
5796633 Burgess et al. Aug 1998 A
5797133 Jones et al. Aug 1998 A
5799043 Chang et al. Aug 1998 A
5802509 Maeda et al. Sep 1998 A
5805442 Crater et al. Sep 1998 A
5808903 Schiltz et al. Sep 1998 A
5809490 Guiver et al. Sep 1998 A
5817958 Uchida et al. Oct 1998 A
5818716 Chin et al. Oct 1998 A
5819029 Edwards et al. Oct 1998 A
5819236 Josephson Oct 1998 A
5819291 Haimowitz et al. Oct 1998 A
5822212 Tanaka et al. Oct 1998 A
5832465 Tom Nov 1998 A
5841677 Yang et al. Nov 1998 A
5842157 Wehhofer et al. Nov 1998 A
5845230 Lamberson Dec 1998 A
5845627 Olin et al. Dec 1998 A
5848396 Gerace Dec 1998 A
5864773 Barna et al. Jan 1999 A
5867118 McCoy et al. Feb 1999 A
5870721 Norris Feb 1999 A
5878403 DeFrancesco et al. Mar 1999 A
5886913 Marguinaud et al. Mar 1999 A
5895177 Iwai et al. Apr 1999 A
5905989 Biggs May 1999 A
5909368 Nixon et al. Jun 1999 A
5911135 Atkins Jun 1999 A
5913911 Beck et al. Jun 1999 A
5917428 Discenzo et al. Jun 1999 A
5921099 Lee Jul 1999 A
5930156 Kennedy Jul 1999 A
5930776 Dykstra et al. Jul 1999 A
5930779 Knoblock et al. Jul 1999 A
5933352 Salut Aug 1999 A
5933818 Kasravi et al. Aug 1999 A
5940298 Pan et al. Aug 1999 A
5940811 Norris Aug 1999 A
5940812 Tengel et al. Aug 1999 A
5943634 Piety et al. Aug 1999 A
5946661 Rothschild et al. Aug 1999 A
5946662 Ettl et al. Aug 1999 A
5949678 Wold et al. Sep 1999 A
5950147 Sarangapani et al. Sep 1999 A
5950179 Buchanan et al. Sep 1999 A
5956487 Venkatraman et al. Sep 1999 A
5956664 Bryan Sep 1999 A
5960411 Hartman et al. Sep 1999 A
5960435 Rathmann et al. Sep 1999 A
5961560 Kemner Oct 1999 A
5963884 Billington et al. Oct 1999 A
5966699 Zandi Oct 1999 A
5970430 Burns et al. Oct 1999 A
5970478 Walker et al. Oct 1999 A
5987399 Wegerich et al. Nov 1999 A
5987434 Libman Nov 1999 A
5991525 Shah et al. Nov 1999 A
5991735 Gerace Nov 1999 A
5993041 Toba Nov 1999 A
5995911 Hart Nov 1999 A
5995916 Nixon et al. Nov 1999 A
5995947 Fraser et al. Nov 1999 A
6000832 Franklin et al. Dec 1999 A
6002839 Keeler et al. Dec 1999 A
6006192 Cheng et al. Dec 1999 A
6006260 Barrick, Jr. et al. Dec 1999 A
6009381 Ono Dec 1999 A
6013108 Karolys et al. Jan 2000 A
6014598 Duyar et al. Jan 2000 A
6014645 Cunningham Jan 2000 A
6021396 Ramaswamy et al. Feb 2000 A
6023507 Wookey Feb 2000 A
6026348 Hala Feb 2000 A
6029097 Branicky et al. Feb 2000 A
6029149 Dykstra et al. Feb 2000 A
6029890 Austin Feb 2000 A
6041287 Dister et al. Mar 2000 A
6049738 Kayama et al. Apr 2000 A
6049741 Kawamura Apr 2000 A
6049827 Sugauchi et al. Apr 2000 A
6064916 Yoon May 2000 A
6076048 Gunther et al. Jun 2000 A
6076088 Paik et al. Jun 2000 A
6088626 Lilly et al. Jul 2000 A
6088686 Walker et al. Jul 2000 A
6100901 Mohda et al. Aug 2000 A
6104965 Lim et al. Aug 2000 A
6105007 Norris Aug 2000 A
6107919 Wilks et al. Aug 2000 A
6108616 Borchers et al. Aug 2000 A
6110214 Klimasauskas Aug 2000 A
6112190 Fletcher et al. Aug 2000 A
6115653 Bergstrom et al. Sep 2000 A
6119111 Gross et al. Sep 2000 A
6125351 Kauffman Sep 2000 A
6128540 Van Der Vegt et al. Oct 2000 A
6128543 Hitchner Oct 2000 A
6131076 Stephan et al. Oct 2000 A
6141647 Meijer et al. Oct 2000 A
6141674 Unkrich et al. Oct 2000 A
6144893 Van Der Vegt et al. Nov 2000 A
6181975 Gross et al. Jan 2001 B1
6182022 Mayle et al. Jan 2001 B1
6202038 Wegerich et al. Mar 2001 B1
6236908 Cheng et al. May 2001 B1
6240372 Gross et al. May 2001 B1
6245517 Chen et al. Jun 2001 B1
6246972 Klimasauskas Jun 2001 B1
6272449 Passera Aug 2001 B1
6278962 Klimasauskas et al. Aug 2001 B1
6289330 Jannarone Sep 2001 B1
6327574 Kramer et al. Dec 2001 B1
6331864 Coco et al. Dec 2001 B1
6331964 Barone Dec 2001 B1
6356857 Qin et al. Mar 2002 B1
6393373 Duyar et al. May 2002 B1
6418431 Mahajan et al. Jul 2002 B1
6424958 Pappalardo et al. Jul 2002 B1
6480810 Cardella et al. Nov 2002 B1
6502082 Toyama et al. Dec 2002 B1
6519552 Sampath et al. Feb 2003 B1
6522978 Chen et al. Feb 2003 B1
6526356 DiMaggio et al. Feb 2003 B1
6532426 Hooks et al. Mar 2003 B1
6539343 Zhao et al. Mar 2003 B2
6553334 Gross et al. Apr 2003 B2
6556939 Wegerich Apr 2003 B1
6567752 Cusumano et al. May 2003 B2
6567795 Alouani et al. May 2003 B2
6587737 Voser et al. Jul 2003 B2
6590362 Parlos et al. Jul 2003 B2
6591166 Millett et al. Jul 2003 B1
6591296 Ghanime Jul 2003 B1
6609036 Bickford Aug 2003 B1
6609212 Smith Aug 2003 B1
6625569 James et al. Sep 2003 B2
6651035 Lang Nov 2003 B1
6678639 Little et al. Jan 2004 B2
6687654 Smith, Jr. et al. Feb 2004 B2
6731990 Carter et al. May 2004 B1
6751575 Lenz et al. Jun 2004 B2
6775641 Wegerich et al. Aug 2004 B2
6804628 Gross et al. Oct 2004 B2
6826552 Grosser et al. Nov 2004 B1
6839660 Eryurek et al. Jan 2005 B2
6853920 Hsiung et al. Feb 2005 B2
6859739 Wegerich et al. Feb 2005 B2
6876943 Wegerich Apr 2005 B2
6892163 Herzog et al. May 2005 B1
6898469 Bickford May 2005 B2
6898554 Jaw et al. May 2005 B2
6917839 Bickford Jul 2005 B2
6941287 Vaidyanathan et al. Sep 2005 B1
6952662 Wegerich et al. Oct 2005 B2
6957172 Wegerich Oct 2005 B2
6975962 Wegerich et al. Dec 2005 B2
6999899 Gross et al. Feb 2006 B2
7016816 Mott Mar 2006 B2
7027953 Klein Apr 2006 B2
7050875 Cribbs et al. May 2006 B2
7085675 Wegerich Aug 2006 B2
7089154 Rasmussen et al. Aug 2006 B2
7142990 Bouse et al. Nov 2006 B2
7233886 Wegerich et al. Jun 2007 B2
7308385 Wegerich et al. Dec 2007 B2
7373283 Herzog et al. May 2008 B2
7386426 Black et al. Jun 2008 B1
7403869 Wegerich et al. Jul 2008 B2
7409320 Wegerich Aug 2008 B2
7539597 Wegerich et al. May 2009 B2
7621141 McCormick et al. Nov 2009 B2
7640145 Wegerich et al. Dec 2009 B2
7739096 Wegerich et al. Jun 2010 B2
7941701 Wegerich et al. May 2011 B2
20020065698 Schick May 2002 A1
20020152056 Herzog et al. Oct 2002 A1
20020183971 Wegerich et al. Dec 2002 A1
20030028269 Spriggs Feb 2003 A1
20030040878 Rasmussen Feb 2003 A1
20030055666 Roddy Mar 2003 A1
20030060808 Wilk Mar 2003 A1
20030093521 Schlonski May 2003 A1
20030109951 Hsiung Jun 2003 A1
20030125248 Hair Jul 2003 A1
20030126258 Conkright Jul 2003 A1
20040019406 Wang Jan 2004 A1
20040088093 Yao May 2004 A1
20040243636 Hasiewicz et al. Dec 2004 A1
20050021187 Wang Jan 2005 A1
20050021212 Gayme et al. Jan 2005 A1
20050027400 Wang Feb 2005 A1
20050096757 Frerichs et al. May 2005 A1
20050210337 Chester Sep 2005 A1
20050261837 Wegerich Nov 2005 A1
20080071501 Herzog Mar 2008 A1
20080183425 Hines Jul 2008 A1
20080215291 Wegerich Sep 2008 A1
20090043405 Chester Feb 2009 A1
20090043467 Filev Feb 2009 A1
Foreign Referenced Citations (20)
Number Date Country
0840244 May 1998 EP
61160111 Jul 1986 JP
02004300 Sep 1990 JP
05126980 May 1993 JP
06274784 Sep 1994 JP
06278179 Oct 1994 JP
7243876 Sep 1995 JP
08220279 Aug 1996 JP
09166483 Jun 1997 JP
11311591 Sep 1999 JP
06161982 Jun 2006 JP
9016048 Dec 1990 WO
WO9504878 Feb 1995 WO
WO9722073 Jun 1997 WO
WO0067412 Nov 2000 WO
WO0167262 Sep 2001 WO
WO0235299 May 2002 WO
WO02057856 Jul 2002 WO
WO02086726 Oct 2002 WO
WO2005038545 Apr 2005 WO
Related Publications (1)
Number Date Country
20080071501 A1 Mar 2008 US
Provisional Applications (1)
Number Date Country
60826203 Sep 2006 US