This application is related to the following commonly-owned, co-pending patent application: U.S. patent application Ser. No. 11/492,467, entitled “METHOD AND SYSTEM FOR DETECTING ABNORMAL OPERATION IN A PROCESS PLANT,” filed on the same day as the present application. The above-referenced patent application is hereby incorporated by reference herein, in its entirety.
This disclosure relates generally to process control systems and, more particularly, to systems for monitoring and/or modeling level regulatory control loops.
Process control systems, such as distributed or scalable process control systems like those used in chemical, petroleum or other processes, typically include one or more process controllers communicatively coupled to each other, to at least one host or operator workstation and to one or more field devices via analog, digital or combined analog/digital buses. The field devices, which may be, for example, valves, valve positioners, switches and transmitters (e.g., temperature, pressure and flow rate sensors), perform functions within the process such as opening or closing valves and measuring process parameters. The process controller receives signals indicative of process measurements made by the field devices and/or other information pertaining to the field devices, uses this information to implement a control routine and then generates control signals which are sent over the buses to the field devices to control the operation of the process. Information from the field devices and the controller is typically made available to one or more applications executed by the operator workstation to enable an operator to perform any desired function with respect to the process, such as viewing the current state of the process, modifying the operation of the process, etc.
In the past, conventional field devices were used to send and receive analog (e.g., 4 to 20 milliamps) signals to and from the process controller via an analog bus or analog lines. These 4 to 20 mA signals were limited in nature in that they were indicative of measurements made by the device or of control signals generated by the controller required to control the operation of the device. However, in the past decade or so, smart field devices including a microprocessor and a memory have become prevalent in the process control industry. In addition to performing a primary function within the process, smart field devices store data pertaining to the device, communicate with the controller and/or other devices in a digital or combined digital and analog format, and perform secondary tasks such as self calibration, identification, diagnostics, etc. A number of standard and open smart device communication protocols such as the HART®, PROFIBUS®, WORLDFIP®, Device Net®, and CAN protocols, have been developed to enable smart field devices made by different manufacturers to be used together within the same process control network. Moreover, the all digital, two wire bus protocol promulgated by the Fieldbus Foundation, known as the FOUNDATION™ Fieldbus (hereinafter “Fieldbus”) protocol uses function blocks located in different field devices to perform control operations previously performed within a centralized controller. In this case, the Fieldbus field devices are capable of storing and executing one or more function blocks, each of which receives inputs from and/or provides outputs to other function blocks (either within the same device or within different devices), and performs some process control operation, such as measuring or detecting a process parameter, controlling a device or performing a control operation, like implementing a proportional-integral-derivative (PID) control routine. The different function blocks within a process control system are configured to communicate with each other (e.g., over a bus) to form one or more process control loops, the individual operations of which are spread throughout the process and are, thus, decentralized.
Information from the field devices and the process controllers is typically made available to one or more other hardware devices such as operator workstations, maintenance workstations, personal computers, handheld devices, data historians, report generators, centralized databases, etc., to enable an operator or a maintenance person to perform desired functions with respect to the process such as, for example, changing settings of the process control routine, modifying the operation of the control modules within the process controllers or the smart field devices, viewing the current state of the process or of particular devices within the process plant, viewing alarms generated by field devices and process controllers, simulating the operation of the process for the purpose of training personnel or testing the process control software, diagnosing problems or hardware failures within the process plant, etc.
While a typical process plant has many process control and instrumentation devices such as valves, transmitters, sensors, etc. connected to one or more process controllers, there are many other supporting devices that are also necessary for or related to process operation. These additional devices include, for example, power supply equipment, power generation and distribution equipment, rotating equipment such as turbines, motors, etc., which are located at numerous places in a typical plant. While this additional equipment does not necessarily create or use process variables and, in many instances, is not controlled or even coupled to a process controller for the purpose of affecting the process operation, this equipment is nevertheless important to, and ultimately necessary for, proper operation of the process.
As is known, problems frequently arise within a process plant environment, especially a process plant having a large number of field devices and supporting equipment. These problems may take the form of broken or malfunctioning devices, logic elements, such as software routines, being in improper modes, process control loops being improperly tuned, one or more failures in communications between devices within the process plant, etc. These and other problems, while numerous in nature, generally result in the process operating in an abnormal state (i.e., the process plant being in an abnormal situation) which is usually associated with suboptimal performance of the process plant. Many diagnostic tools and applications have been developed to detect and determine the cause of problems within a process plant and to assist an operator or a maintenance person to diagnose and correct the problems, once the problems have occurred and been detected. For example, operator workstations, which are typically connected to the process controllers through communication connections such as a direct or wireless bus, Ethernet, modem, phone line, and the like, have processors and memories that are adapted to run software or firmware, such as the DeltaV™ and Ovation control systems sold by Emerson Process Management, which include numerous control module and control loop diagnostic tools. Likewise, maintenance workstations, which may be connected to the process control devices, such as field devices, via the same communication connections as the controller applications, or via different communication connections, such as OPC connections, handheld connections, etc., typically include one or more applications designed to view maintenance alarms and alerts generated by field devices within the process plant, to test devices within the process plant and to perform maintenance activities on the field devices and other devices within the process plant. Similar diagnostic applications have been developed to diagnose problems within the supporting equipment within the process plant.
Thus, for example, the AMS™ Suite: Intelligent Device Manager application (at least partially disclosed in U.S. Pat. No. 5,960,214 entitled “Integrated Communication Network for use in a Field Device Management System”) sold by Emerson Process Management, enables communication with and stores data pertaining to field devices to ascertain and track the operating state of the field devices. In some instances, the AMS™ application may be used to communicate with a field device to change parameters within the field device, to cause the field device to run applications on itself such as, for example, self-calibration routines or self-diagnostic routines, to obtain information about the status or health of the field device, etc. This information may include, for example, status information (e.g., whether an alarm or other similar event has occurred), device configuration information (e.g., the manner in which the field device is currently or may be configured and the type of measuring units used by the field device), device parameters (e.g., the field device range values and other parameters), etc. Of course, this information may be used by a maintenance person to monitor, maintain, and/or diagnose problems with field devices.
Similarly, many process plants include equipment monitoring and diagnostic applications such as, for example, RBMware provided by CSI Systems, or any other known applications used to monitor, diagnose, and optimize the operating state of various rotating equipment. Maintenance personnel usually use these applications to maintain and oversee the performance of rotating equipment in the plant, to determine problems with the rotating equipment, and to determine when and if the rotating equipment must be repaired or replaced. Similarly, many process plants include power control and diagnostic applications such as those provided by, for example, the Liebert and ASCO companies, to control and maintain the power generation and distribution equipment. It is also known to run control optimization applications such as, for example, real-time optimizers (RTO+), within a process plant to optimize the control activities of the process plant. Such optimization applications typically use complex algorithms and/or models of the process plant to predict how inputs may be changed to optimize operation of the process plant with respect to some desired optimization variable such as, for example, profit.
These and other diagnostic and optimization applications are typically implemented on a system-wide basis in one or more of the operator or maintenance workstations, and may provide preconfigured displays to the operator or maintenance personnel regarding the operating state of the process plant, or the devices and equipment within the process plant. Typical displays include alarming displays that receive alarms generated by the process controllers or other devices within the process plant, control displays indicating the operating state of the process controllers and other devices within the process plant, maintenance displays indicating the operating state of the devices within the process plant, etc. Likewise, these and other diagnostic applications may enable an operator or a maintenance person to retune a control loop or to reset other control parameters, to run a test on one or more field devices to determine the current status of those field devices, to calibrate field devices or other equipment, or to perform other problem detection and correction activities on devices and equipment within the process plant.
While these various applications and tools are very helpful in identifying and correcting problems within a process plant, these diagnostic applications are generally configured to be used only after a problem has already occurred within a process plant and, therefore, after an abnormal situation already exists within the plant. Unfortunately, an abnormal situation may exist for some time before it is detected, identified and corrected using these tools, resulting in the suboptimal performance of the process plant for the period of time during which the problem is detected, identified and corrected. In many cases, a control operator will first detect that some problem exists based on alarms, alerts or poor performance of the process plant. The operator will then notify the maintenance personnel of the potential problem. The maintenance personnel may or may not detect an actual problem and may need further prompting before actually running tests or other diagnostic applications, or performing other activities needed to identify the actual problem. Once the problem is identified, the maintenance personnel may need to order parts and schedule a maintenance procedure, all of which may result in a significant period of time between the occurrence of a problem and the correction of that problem, during which time the process plant runs in an abnormal situation generally associated with the sub-optimal operation of the plant.
Additionally, many process plants can experience an abnormal situation which results in significant costs or damage within the plant in a relatively short amount of time. For example, some abnormal situations can cause significant damage to equipment, the loss of raw materials, or significant unexpected downtime within the process plant if these abnormal situations exist for even a short amount of time. Thus, merely detecting a problem within the plant after the problem has occurred, no matter how quickly the problem is corrected, may still result in significant loss or damage within the process plant. As a result, it is desirable to try to prevent abnormal situations from arising in the first place, instead of simply trying to react to and correct problems within the process plant after an abnormal situation arises.
One technique that may be used to collect data that enables a user to predict the occurrence of certain abnormal situations within a process plant before these abnormal situations actually arise, with the purpose of taking steps to prevent the predicted abnormal situation before any significant loss within the process plant takes place, is disclosed in U.S. patent application Ser. No. 09/972,078, entitled “Root Cause Diagnostics” (based in part on U.S. patent application Ser. No. 08/623,569, now U.S. Pat. No. 6,017,143). The entire disclosures of both of these applications are hereby incorporated by reference herein. Generally speaking, this technique places statistical data collection and processing blocks or statistical process monitoring (SPM) blocks in each of a number of devices, such as field devices, within a process plant. The statistical data collection and processing blocks collect, for example, process variable data and determine certain statistical measures associated with the collected data, such as a mean, a median, a standard deviation, etc. These statistical measures may then be sent to a user and analyzed to recognize patterns suggesting the future occurrence of a known abnormal situation. Once a particular suspected future abnormal situation is detected, steps may be taken to correct the underlying problem, thereby avoiding the abnormal situation in the first place.
Other techniques have been developed to monitor and detect problems in a process plant. One such technique is referred to as Statistical Process Control (SPC). SPC has been used to monitor variables, such as quality variables, associated with a process and flag an operator when the quality variable is detected to have moved from its “statistical” norm. With SPC, a small sample of a variable, such as a key quality variable, is used to generate statistical data for the small sample. The statistical data for the small sample is then compared to statistical data corresponding to a much larger sample of the variable. The variable may be generated by a laboratory or analyzer, or retrieved from a data historian. SPC alarms are generated when the small sample's average or standard deviation deviates from the large sample's average or standard deviation, respectively, by some predetermined amount. An intent of SPC is to avoid making process adjustments based on normal statistical variation of the small samples. Charts of the average or standard deviation of the small samples may be displayed to the operator on a console separate from a control console.
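By way of illustration only, an SPC check of the kind described above might compare a small sample's mean and standard deviation against baseline values computed from a much larger sample; in the following sketch the function name, tolerance values, and example data are illustrative and are not taken from any particular SPC product.

```python
import statistics

def spc_alarm(small_sample, baseline_mean, baseline_stdev,
              mean_tolerance, stdev_tolerance):
    # Flag a deviation when the small sample's mean or standard deviation
    # differs from the large-sample baseline by more than a predetermined
    # amount (the tolerances here are illustrative).
    sample_mean = statistics.mean(small_sample)
    sample_stdev = statistics.stdev(small_sample)
    mean_alarm = abs(sample_mean - baseline_mean) > mean_tolerance
    stdev_alarm = abs(sample_stdev - baseline_stdev) > stdev_tolerance
    return mean_alarm or stdev_alarm

# Example: baseline statistics from a data historian, small sample from an analyzer.
print(spc_alarm([10.2, 10.4, 9.9, 10.1], baseline_mean=10.0,
                baseline_stdev=0.2, mean_tolerance=0.5, stdev_tolerance=0.3))
```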
Another technique analyzes multiple variables and is referred to as multivariable statistical process control (MSPC). This technique uses algorithms such as principal component analysis (PCA) and projections to latent structures (PLS) which analyze historical data to create a statistical model of the process. In particular, samples of variables corresponding to normal operation and samples of variables corresponding to abnormal operation are analyzed to generate a model to determine when an alarm should be generated. Once the model has been defined, variables corresponding to a current process may be provided to the model, which may generate an alarm if the variables indicate an abnormal operation.
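As a rough illustration of the PCA portion of such an approach (a sketch under simplifying assumptions, not the MSPC algorithms referenced above), historical data from normal operation can be used to derive principal components, and a new sample can then be scored against that model, for example with a Hotelling-style T² statistic:

```python
import numpy as np

def fit_pca_model(normal_data, n_components=2):
    # normal_data: (samples x variables) array of historical data from
    # normal operation; variables are scaled to zero mean, unit variance.
    mean = normal_data.mean(axis=0)
    std = normal_data.std(axis=0)
    scaled = (normal_data - mean) / std
    eigvals, eigvecs = np.linalg.eigh(np.cov(scaled, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]  # largest variances first
    return {"mean": mean, "std": std,
            "components": eigvecs[:, order], "variances": eigvals[order]}

def t2_score(model, sample):
    # Hotelling-style T^2 statistic; large values suggest the sample is
    # unlike the normal training data and could trigger an alarm.
    scaled = (sample - model["mean"]) / model["std"]
    scores = scaled @ model["components"]
    return float(np.sum(scores ** 2 / model["variances"]))
```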
With model-based performance monitoring system techniques, a model is utilized, such as a correlation-based model or a first-principles model, that relates process inputs to process outputs. The model may be calibrated to the actual plant operation by adjusting internal tuning constants or bias terms. The model can be used to predict when the process is moving into an abnormal region and alert the operator to take action. An alarm may be generated when there is a significant deviation in actual versus predicted behavior or when there is a big change in a calculated efficiency parameter. Model-based performance monitoring systems typically cover anything from a single unit operation (e.g., a pump, a compressor, a heater, a column, etc.) to a combination of operations that make up a process unit (e.g., a crude unit, a fluid catalytic cracking unit (FCCU), a reformer, etc.).
Example methods and systems are disclosed that may facilitate detecting an abnormal operation associated with a level regulatory control loop in a process plant. Generally speaking, a model for modeling at least a portion of the level regulatory control loop may be utilized with respect to first and second signals associated with regulatory control of a level of material in a tank. More specifically, the model may generate a prediction of the second signal as a function of the first signal. The model may include a first regression model in a first range corresponding to a first operating region of the level regulatory control loop, and the model may be capable of being subsequently configured to include at least a second regression model in at least a second respective range corresponding to at least a second respective operating region different than the first operating region. It may be determined whether the second signal significantly deviates from the prediction of the second signal generated by the model. If there is a significant deviation, this may indicate an abnormal operation associated with the level regulatory control loop.
Referring now to
Still further, maintenance systems, such as computers executing the AMS™ Suite: Intelligent Device Manager application or any other device monitoring and communication applications may be connected to the process control systems 12 and 14 or to the individual devices therein to perform maintenance and monitoring activities. For example, a maintenance computer 18 may be connected to the controller 12B and/or to the devices 15 via any desired communication lines or networks (including wireless or handheld device networks) to communicate with and, in some instances, reconfigure or perform other maintenance activities on the devices 15. Similarly, maintenance applications such as the AMS application may be installed in and executed by one or more of the user interfaces 14A associated with the distributed process control system 14 to perform maintenance and monitoring functions, including data collection related to the operating status of the devices 16.
The process plant 10 also includes various rotating equipment 20, such as turbines, motors, etc. which are connected to a maintenance computer 22 via some permanent or temporary communication link (such as a bus, a wireless communication system or hand held devices which are connected to the equipment 20 to take readings and are then removed). The maintenance computer 22 may store and execute known monitoring and diagnostic applications 23 provided by, for example, CSI (an Emerson Process Management Company) or any other known applications used to diagnose, monitor and optimize the operating state of the rotating equipment 20. Maintenance personnel usually use the applications 23 to maintain and oversee the performance of rotating equipment 20 in the plant 10, to determine problems with the rotating equipment 20 and to determine when and if the rotating equipment 20 must be repaired or replaced. In some cases, outside consultants or service organizations may temporarily acquire or measure data pertaining to the equipment 20 and use this data to perform analyses for the equipment 20 to detect problems, poor performance or other issues affecting the equipment 20. In these cases, the computers running the analyses may not be connected to the rest of the system 10 via any communication line or may be connected only temporarily.
Similarly, a power generation and distribution system 24 having power generating and distribution equipment 25 associated with the plant 10 is connected via, for example, a bus, to another computer 26 which runs and oversees the operation of the power generating and distribution equipment 25 within the plant 10. The computer 26 may execute known power control and diagnostics applications 27 such as those provided by, for example, Liebert and ASCO or other companies to control and maintain the power generation and distribution equipment 25. Again, in many cases, outside consultants or service organizations may use service applications that temporarily acquire or measure data pertaining to the equipment 25 and use this data to perform analyses for the equipment 25 to detect problems, poor performance or other issues affecting the equipment 25. In these cases, the computers (such as the computer 26) running the analyses may not be connected to the rest of the system 10 via any communication line or may be connected only temporarily.
As illustrated in
Generally speaking, the abnormal situation prevention system 35 may communicate with abnormal operation detection systems (not shown in
The portion 50 of the process plant 10 illustrated in
In any event, one or more user interfaces or computers 72 and 74 (which may be any types of personal computers, workstations, etc.) accessible by plant personnel such as configuration engineers, process control operators, maintenance personnel, plant managers, supervisors, etc. are coupled to the process controllers 60 via a communication line or bus 76 which may be implemented using any desired hardwired or wireless communication structure, and using any desired or suitable communication protocol such as, for example, an Ethernet protocol. In addition, a database 78 may be connected to the communication bus 76 to operate as a data historian that collects and stores configuration information as well as on-line process variable data, parameter data, status data, and other data associated with the process controllers 60 and field devices 64 and 66 within the process plant 10. Thus, the database 78 may operate as a configuration database to store the current configuration, including process configuration modules, as well as control configuration information for the process control system 54 as downloaded to and stored within the process controllers 60 and the field devices 64 and 66. Likewise, the database 78 may store historical abnormal situation prevention data, including statistical data collected by the field devices 64 and 66 within the process plant 10, statistical data determined from process variables collected by the field devices 64 and 66, and other types of data that will be described below.
While the process controllers 60, I/O devices 68 and 70, and field devices 64 and 66 are typically located down within and distributed throughout the sometimes harsh plant environment, the workstations 72 and 74, and the database 78 are usually located in control rooms, maintenance rooms or other less harsh environments easily accessible by operators, maintenance personnel, etc.
Generally speaking, the process controllers 60 store and execute one or more controller applications that implement control strategies using a number of different, independently executed, control modules or blocks. The control modules may each be made up of what are commonly referred to as function blocks, wherein each function block is a part or a subroutine of an overall control routine and operates in conjunction with other function blocks (via communications called links) to implement process control loops within the process plant 10. As is well known, function blocks, which may be objects in an object-oriented programming protocol, typically perform one of an input function, such as that associated with a transmitter, a sensor or other process parameter measurement device, a control function, such as that associated with a control routine that performs PID, fuzzy logic, etc. control, or an output function, which controls the operation of some device, such as a valve, to perform some physical function within the process plant 10. Of course, hybrid and other types of complex function blocks exist, such as model predictive controllers (MPCs), optimizers, etc. It is to be understood that while the Fieldbus protocol and the DeltaV™ system protocol use control modules and function blocks designed and implemented in an object-oriented programming protocol, the control modules may be designed using any desired control programming scheme including, for example, sequential function blocks, ladder logic, etc., and are not limited to being designed using function blocks or any other particular programming technique.
As illustrated in
Each of one or more of the field devices 64 and 66 may include a memory (not shown) for storing routines such as routines for implementing statistical data collection pertaining to one or more process variables sensed by a sensing device and/or routines for abnormal operation detection, which will be described below. Each of one or more of the field devices 64 and 66 may also include a processor (not shown) that executes routines such as routines for implementing statistical data collection and/or routines for abnormal operation detection. Statistical data collection and/or abnormal operation detection need not be implemented by software. Rather, one of ordinary skill in the art will recognize that such systems may be implemented by any combination of software, firmware, and/or hardware within one or more field devices and/or other devices.
As shown in
Generally speaking, the blocks 80 and 82, or sub-elements of these blocks, collect data, such as process variable data, from the device in which they are located and/or from other devices. Additionally, the blocks 80 and 82 or sub-elements of these blocks may process the variable data and perform an analysis on the data for any number of reasons. For example, the block 80, which is illustrated as being associated with a valve, may have a stuck valve detection routine which analyzes the valve process variable data to determine if the valve is in a stuck condition. In addition, the block 80 may include a set of one or more statistical process monitoring (SPM) blocks or units such as blocks SPM1-SPM4 which may collect process variable or other data within the valve and perform one or more statistical calculations on the collected data to determine, for example, a mean, a median, a standard deviation, a root-mean-square (RMS), a rate of change, a range, a minimum, a maximum, etc. of the collected data and/or to detect events such as drift, bias, noise, spikes, etc., in the collected data. Neither the specific statistical data generated nor the method by which it is generated is critical. Thus, different types of statistical data can be generated in addition to, or instead of, the specific types described above. Additionally, a variety of techniques, including known techniques, can be used to generate such data. The term statistical process monitoring (SPM) block is used herein to describe functionality that performs statistical process monitoring on at least one process variable or other process parameter, and may be performed by any desired software, firmware or hardware within the device or even outside of a device for which data is collected. It will be understood that, because the SPMs are generally located in the devices where the device data is collected, the SPMs can acquire quantitatively more and qualitatively more accurate process variable data. As a result, the SPM blocks are generally capable of determining better statistical calculations with respect to the collected process variable data than a block located outside of the device in which the process variable data is collected.
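Purely as an illustrative sketch of the kinds of calculations such an SPM block might perform on a window of collected samples (the function name and result layout are assumptions, not part of any particular device implementation):

```python
import math
import statistics

def spm_statistics(samples, sample_period_s=1.0):
    # Statistics an SPM-style block might report for a window of
    # process-variable samples (assumes at least two samples).
    duration = (len(samples) - 1) * sample_period_s
    return {
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "stdev": statistics.stdev(samples),
        "rms": math.sqrt(sum(x * x for x in samples) / len(samples)),
        "rate_of_change": (samples[-1] - samples[0]) / duration,
        "range": max(samples) - min(samples),
        "minimum": min(samples),
        "maximum": max(samples),
    }
```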
It is to be understood that although the blocks 80 and 82 are shown to include SPM blocks in
The block 82 of
Overview of an Abnormal Operation Detection (AOD) System
The model 112 includes an independent variable X input and a dependent variable Y. As will be described in more detail below, the model 112 may be trained using a plurality of data sets (X, Y), to model Y (dependent variable) as a function of X (independent variable). As will be described in more detail below, the model 112 may include one or more regression models, each regression model for a different operating region. Each regression model may utilize a function to model the dependent variable Y as a function of the independent variable X over some range of X. The regression model may be a linear regression model, for example, or some other type of regression model. Generally, a linear regression model comprises some linear combination of functions ƒ(X), g(X), h(X), . . . For modeling an industrial process, a typically adequate linear regression model may comprise a first order function of X (e.g., Y=m*X+b) or a second order function of X (e.g., Y=a*X^2+b*X+c). Of course, other types of functions may be utilized as well such as higher order polynomials, sinusoidal functions, logarithmic functions, exponential functions, power functions, etc.
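As a minimal sketch of fitting one such regression model over a single operating region (assuming an ordinary least-squares fit via NumPy; the helper name and range handling are illustrative only):

```python
import numpy as np

def fit_region_model(x_values, y_values, order=1):
    # Fit, e.g., Y = m*X + b (order 1) or Y = a*X^2 + b*X + c (order 2)
    # over the range of X covered by the training data sets.
    coeffs = np.polyfit(x_values, y_values, deg=order)
    x_min, x_max = min(x_values), max(x_values)

    def predict(x):
        if not (x_min <= x <= x_max):
            raise ValueError("X is outside this model's operating region")
        return float(np.polyval(coeffs, x))

    return predict, (x_min, x_max)
```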
After it has been trained, the model 112 may be used to generate a predicted value (YP) of a dependent variable Y based on a given independent variable X input. The output YP of the model 112 is provided to a deviation detector 116. The deviation detector 116 receives the output YP of the regression model 112 as well as the dependent variable input Y to the model 112. Generally speaking, the deviation detector 116 compares the dependent variable Y to the value YP generated by the model 112 to determine if the dependent variable Y is significantly deviating from the predicted value YP. If the dependent variable Y is significantly deviating from the predicted value YP, this may indicate that an abnormal situation has occurred, is occurring, or may occur in the near future, and thus the deviation detector 116 may generate an indicator of the deviation. In some implementations, the indicator may comprise an alert or alarm.
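A simplified sketch of the comparison performed by such a deviation detector might look as follows; the fixed threshold is only one possible criterion, and alternatives discussed later include percentage differences, thresholds that depend on X, and analysis of past values.

```python
def detect_deviation(y_actual, y_predicted, threshold):
    # Return an indicator when the dependent variable deviates from the
    # model's prediction by more than a threshold (illustrative criterion).
    deviation = y_actual - y_predicted
    return {"alert": abs(deviation) > threshold, "deviation": deviation}
```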
One of ordinary skill in the art will recognize that the AOD system 100 can be modified in various ways. For example, the SPM blocks 104 and 108 could be omitted. As another example, other types of processing in addition to or instead of the SPM blocks 104 and 108 could be utilized. For example, the process variable data could be filtered, trimmed, etc., prior to the SPM blocks 104 and 108, or rather than utilizing the SPM blocks 104 and 108.
Additionally, although the model 112 is illustrated as having a single independent variable input X, a single dependent variable input Y, and a single predicted value YP, the model 112 could include a regression model that models multiple variables Y as a function of multiple variables X. For example, the model 112 could comprise a multiple linear regression (MLR) model, a principal component regression (PCR) model, a partial least squares (PLS) model, a ridge regression (RR) model, a variable subset selection (VSS) model, a support vector machine (SVM) model, etc.
The AOD system 100 could be implemented wholly or partially in a field device. As just one example, the SPM blocks 104 and 108 could be implemented in a field device 66 and the model 112 and/or the deviation detector 116 could be implemented in the controller 60 or some other device. In one particular implementation, the AOD system 100 could be implemented as a function block, such as a function block to be used in a system that implements a Fieldbus protocol. Such a function block may or may not include the SPM blocks 104 and 108. In another implementation, each of at least some of the blocks 104, 108, 112, and 116 may be implemented as a function block.
The AOD system 100 may be in communication with the abnormal situation prevention system 35 (
Additionally, the AOD system 100 may provide information to the abnormal situation prevention system 35 and/or other systems in the process plant. For example, the deviation indicator generated by the deviation detector 116 could be provided to the abnormal situation prevention system 35 and/or the alert/alarm application 43 to notify an operator of the abnormal condition. As another example, after the model 112 has been trained, parameters of the model could be provided to the abnormal situation prevention system 35 and/or other systems in the process plant so that an operator can examine the model and/or so that the model parameters can be stored in a database. As yet another example, the AOD system 100 may provide X, Y, and/or YP values to the abnormal situation prevention system 35 so that an operator can view the values, for instance, when a deviation has been detected.
Then, at a block 158, the trained model generates predicted values (YP) of the dependent variable Y using values of the independent variable X that it receives. Next, at a block 162, the actual values of Y are compared to the corresponding predicted values YP to determine if Y is significantly deviating from YP. For example, the deviation detector 116 receives the output YP of the model 112 and compares it to the dependent variable Y. If it is determined that Y has significantly deviated from YP, an indicator of the deviation may be generated at a block 166. In the AOD system 100, for example, the deviation detector 116 may generate the indicator. The indicator may be an alert or alarm, for example, or any other type of signal, flag, message, etc., indicating that a significant deviation has been detected.
As will be discussed in more detail below, the block 154 may be repeated after the model has been initially trained and after it has generated predicted values YP of the dependent variable Y. For example, the model could be retrained if a set point in the process has been changed.
Overview of the Model
Referring again to
At a block 212, a regression model for the range [XMIN, XMAX] may be generated based on the data sets (X, Y) received at the block 204. Any of a variety of techniques, including known techniques, may be used to generate the regression model, and any of a variety of functions could be used as the model. For example, the model could comprise a linear equation, a quadratic equation, a higher order equation, etc. In
Utilizing the Model through Operating Region Changes
It may be that, after the model has been initially trained, the system that it models may move into a different, but normal operating region. For example, a set point may be changed.
At a block 244, a data set (X, Y) is received. In the AOD system 100 of
At the block 252, a predicted value YP of the dependent variable Y may be generated using the model. In particular, the model generates the predicted value YP from the value X received at the block 244. In the AOD system 100 of
Then, at a block 256, the value Y received at the block 244 may be compared with the predicted value YP. The comparison may be implemented in a variety of ways. For example, a difference or a percentage difference could be generated. Other types of comparisons could be used as well. Referring now to
Referring again to
In general, determining if the value Y significantly deviates from the predicted value YP may be implemented using a variety of techniques, including known techniques. For instance, determining if the value Y significantly deviates from the predicted value YP may include analyzing the present values of Y and YP. For example, Y could be subtracted from YP, or vice versa, and the result may be compared to a threshold to see if it exceeds the threshold. It may optionally comprise also analyzing past values of Y and YP. Further, it may comprise comparing Y or a difference between Y and YP to one or more thresholds. Each of the one or more thresholds may be fixed or may change. For example, a threshold may change depending on the value of X or some other variable. U.S. patent application Ser. No. 11/492,347, entitled “METHODS AND SYSTEMS FOR DETECTING DEVIATION OF A PROCESS VARIABLE FROM EXPECTED VALUES,” filed on the same day as the present application, and which is hereby incorporated by reference herein, describes example systems and methods for detecting whether a process variable significantly deviates from an expected value, and any of these systems and methods may optionally be utilized. One of ordinary skill in the art will recognize many other ways of determining if the value Y significantly deviates from the predicted value YP. Further, blocks 256 and 260 may be combined.
Some or all of the criteria to be used in comparing Y to YP (block 256) and/or the criteria to be used in determining if Y significantly deviates from YP (block 260) may be configurable by a user via the configuration application 38 (
Referring again to
Referring again to the block 248 of
Then, at a block 272, it may be determined if enough data sets are in the data group to which the data set was added at the block 268 in order to generate a regression model corresponding to the data in that group. This determination may be implemented using a variety of techniques. For example, the number of data sets in the group may be compared to a minimum number, and if the number of data sets in the group is at least this minimum number, it may be determined that there are enough data sets in order to generate a regression model. The minimum number may be selected using a variety of techniques, including techniques known to those of ordinary skill in the art. If it is determined that there are enough data sets in order to generate a regression model, the model may be updated at a block 276, as will be described below with reference to
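As a sketch of this bookkeeping (the group keys, minimum count, and function name are assumptions made for illustration), out-of-range data sets might be accumulated into a group on each side of the validity range until enough points exist to fit a new regression model:

```python
def add_out_of_range_point(groups, x, y, x_min, x_max, min_points=10):
    # Collect an out-of-range data set into a training group and report
    # whether that group now has enough points to generate a regression
    # model for a new operating region.
    key = "below" if x < x_min else "above"
    groups.setdefault(key, []).append((x, y))
    return len(groups[key]) >= min_points
```

When this check succeeds, a new regression model may be fitted from the accumulated group and the overall model updated as described below.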
At a block 308, a regression model for the range [X′MIN, X′MAX] may be generated based on the data sets (X, Y) in the group. Any of a variety of techniques, including known techniques, may be used to generate the regression model, and any of a variety of functions could be used as the model. For example, the model could comprise a linear equation, a quadratic equation, etc. In
For ease of explanation, the range [XMIN, XMAX] will now be referred to as [XMIN1, XMAX1], and the regression model corresponding to that range will be referred to as f1(X). Likewise, the range [X′MIN, X′MAX] will be referred to as [XMIN2, XMAX2], and the corresponding regression model will be referred to as f2(X).
Referring again to
Similarly, if XMAX
Thus, the model may now be represented as:
if XMAX
As can be seen from equations 1, 4 and 5, the model may comprise a plurality of regression models. In particular, a first regression model (i.e., f1(X)) may be used to model the dependent variable Y in a first operating region (i.e., XMIN1 ≤ X ≤ XMAX1), and a second regression model (i.e., f2(X)) may be used to model the dependent variable Y in a second operating region (i.e., XMIN2 ≤ X ≤ XMAX2), with an interpolation model used for values of X between the two operating regions.
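The resulting composite model can be sketched as a piecewise prediction, with each regression model used inside its own range; since the interpolation equations are not reproduced above, the straight-line bridge between the two regions in the sketch below is only one plausible choice:

```python
def composite_predict(x, f1, region1, f2, region2):
    # f1 is valid on region1 = (XMIN1, XMAX1), f2 on region2 = (XMIN2, XMAX2);
    # region1 is assumed to lie entirely below region2.
    x_min1, x_max1 = region1
    x_min2, x_max2 = region2
    if x_min1 <= x <= x_max1:
        return f1(x)
    if x_min2 <= x <= x_max2:
        return f2(x)
    if x_max1 < x < x_min2:
        # Illustrative interpolation: straight line joining the two models.
        slope = (f2(x_min2) - f1(x_max1)) / (x_min2 - x_max1)
        return f1(x_max1) + slope * (x - x_max1)
    raise ValueError("X is outside the model's validity range")
```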
Referring again to
Referring now to
The abnormal situation prevention system 35 (
The mean output of the SPM block 404 is provided as an independent variable (X) input of a model 412, and the mean output of the SPM block 408 is provided as a dependent variable (Y) input of the model 412. The model 412 may comprise a model such as the model 112 of
In the AOD system 400, the model 412 generally models the mean of the monitored variable as a function of the mean of the load variable. The model 416 generally models the standard deviation of the monitored variable as a function of the mean of the load variable. This may be useful in situations where the standard deviation of the monitored variable tends to change as the load variable changes.
The YP outputs of the models 412 and 416 are provided to a deviation detector 420. Additionally, the mean output of the SPM block 408 is provided to the deviation detector 420. The deviation detector 420 generally compares the mean (μmv) of the monitored variable to the predicted mean (μPmv) generated by the model 412. Additionally, the deviation detector 420 utilizes this comparison as well as the predicted standard deviation (σPmv) generated by the model 416 to determine if a significant deviation has occurred. More specifically, the deviation detector 420 generates a status signal as follows:
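The specific rules used by the deviation detector 420 are not reproduced here; purely as an illustrative sketch, one plausible rule compares the monitored variable's mean to the predicted mean within a band scaled by the predicted standard deviation:

```python
def status_signal(mean_mv, predicted_mean, predicted_stdev, n_sigma=3.0):
    # Illustrative status rule; the multiplier n_sigma is an assumption.
    delta = mean_mv - predicted_mean
    if delta > n_sigma * predicted_stdev:
        return "UP DEVIATION"
    if delta < -n_sigma * predicted_stdev:
        return "DOWN DEVIATION"
    return "NO DEVIATION"
```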
In one particular implementation, the AOD system 400 could be implemented as a function block, such as a function block to be used in a system that implements a Fieldbus protocol. In another implementation, each of some or all of the blocks 404, 408, 412, 416 and 420 may be implemented as a separate function block.
Using an AOD System in a Level Regulatory Control Loop
AOD systems such as those described above can be used in various ways in a process plant to facilitate abnormal situation prevention. An example of using AOD systems to prevent an abnormal situation in a process plant will be described with reference to
A pump 470 may facilitate draining material from the tank 454, and a valve 474 may be used to regulate the flow rate of material exiting the tank. A position of the valve may be altered using a control demand (CD) signal in a manner well known to those of ordinary skill in the art. The valve 474 may include a sensor that generates a signal VP indicative of the position of the valve.
A PID control routine 478 may be used to control the valve 474 in order to regulate the level of material in the tank 454 according to a set point (SP). Any of a variety of suitable control routines may be utilized for the PID control routine 478. In general, such a routine may utilize one or more of the following signals to generate a control demand (CD) signal to appropriately control the valve 474: SP, LVL, VP, IF and/or OF.
In control systems such as the control system 450, two typical abnormal conditions are encountered: a measurement drift and a valve problem. The measurement drift condition may be indicative of a problem with a sensor, such as the level sensor 466. For example, a measurement drift condition may result in the signal LVL not accurately indicating the actual level in the tank 454. The valve problem condition may indicate a problem with the valve 474. This may result, for example, in the VP signal indicating a different valve position than that indicated by the CD signal. With prior art techniques, such underlying problems may cause another problem to occur, such as the level in the tank becoming too high or too low. This may lead to an alert or alarm being generated. But it may take an operator some time to determine the underlying problem that led to the alert/alarm.
One prior art technique for detecting an abnormal condition associated with a control system such as the control system 450 of
For each of PV, CD and VP, a state was determined based on MEAN according to Table 1.
Then, based on the states of the MEANs of PV, CD and VP, a diagnostics decision was made according to Table 2.
As described above, this prior art technique was intended for use in situations in which the process is to remain in a steady state for a relatively long period of time.
The system 500 includes a first AOD block 504 and a second AOD block 508. Each of the AOD blocks 504 and 508 may comprise an AOD system such as the AOD system 400 of
Referring now to
A status signal S1 generated by the AOD block 504 and a status signal S2 generated by the AOD block 508 may be provided to a logic block 516. The signals S1 and S2 may be generated in the manner described with respect to
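Since the specific mapping from pairs of status signals to diagnoses is not reproduced here, the following sketch leaves that mapping as a configurable table supplied by the caller; only the case where both AOD blocks report no deviation is hard-coded as normal operation.

```python
def logic_block(s1, s2, diagnosis_table=None):
    # s1, s2: status signals from the two AOD blocks (e.g., "NO DEVIATION",
    # "UP DEVIATION", "DOWN DEVIATION"). The table mapping status pairs to
    # diagnoses such as "measurement drift" or "valve problem" is supplied
    # by the caller; it is not taken from the description above.
    diagnosis_table = diagnosis_table or {}
    if (s1, s2) == ("NO DEVIATION", "NO DEVIATION"):
        return "NORMAL"
    return diagnosis_table.get((s1, s2), "ABNORMAL - cause undetermined")
```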
One of ordinary skill in the art will recognize that a system similar to the system 500 of
In one particular implementation, the system 500 could be a function block, such as a function block to be used in a system that implements a Fieldbus protocol. In another implementation, each of at least some of the blocks 504, 508, 512, and 516 may be implemented as a function block.
Manual Control of the AOD System
In the AOD systems described with respect to
An initial state of the AOD system may be an UNTRAINED state 560, for example. The AOD system may transition from the UNTRAINED state 560 to the LEARNING state 554 when a LEARN command is received. If a MONITOR command is received, the AOD system may remain in the UNTRAINED state 560. Optionally, an indication may be displayed on a display device to notify the operator that the AOD system has not yet been trained.
In an OUT OF RANGE state 562, each received data set may be analyzed to determine if it is in the validity range. If the received data set is not in the validity range, the AOD system may remain in the OUT OF RANGE state 562. If, however, a received data set is within the validity range, the AOD system may transition to the MONITORING state 558. Additionally, if a LEARN command is received, the AOD system may transition to the LEARNING state 554.
In the LEARNING state 554, the AOD system may collect data sets so that a regression model may be generated in one or more operating regions corresponding to the collected data sets. Additionally, the AOD system optionally may check to see if a maximum number of data sets has been received. The maximum number may be governed by storage available to the AOD system, for example. Thus, if the maximum number of data sets has been received, this may indicate that the AOD system is running low, or is in danger of running low, on available memory for storing data sets, for example. In general, if it is determined that the maximum number of data sets has been received, or if a MONITOR command is received, the model of the AOD system may be updated and the AOD system may transition to the MONITORING state 558.
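A minimal sketch of these state transitions follows; the class and method names and the command/data representation are illustrative, and per-state data handling such as model updates is reduced to comments.

```python
class AODStateMachine:
    def __init__(self):
        self.state = "UNTRAINED"

    def on_command(self, command):
        if command == "LEARN":
            self.state = "LEARNING"
        elif command == "MONITOR":
            if self.state == "LEARNING":
                # The model would be updated here before monitoring begins.
                self.state = "MONITORING"
            # In the UNTRAINED state, a MONITOR command leaves the state
            # unchanged (optionally notifying the operator).

    def on_data_set(self, in_validity_range, max_data_sets_reached=False):
        if self.state == "LEARNING" and max_data_sets_reached:
            # The model would be updated here as well.
            self.state = "MONITORING"
        elif self.state in ("MONITORING", "OUT OF RANGE"):
            self.state = "MONITORING" if in_validity_range else "OUT OF RANGE"
```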
If, on the other hand, the minimum number of data sets has been collected, the flow may proceed to a block 612. At the block 612, the model of the AOD system may be updated as will be described in more detail with reference to
If, at the block 604 it has been determined that a MONITOR command was not received, the flow may proceed to a block 620, at which a new data set may be received. Next, at a block 624, the received data set may be added to an appropriate training group. An appropriate training group may be determined based on the X value of the data set, for instance. As an illustrative example, if the X value is less than XMIN of the model's validity range, the data set could be added to a first training group. And, if the X value is greater than XMAX of the model's validity range, the data set could be added to a second training group.
At a block 628, it may be determined if a maximum number of data sets has been received. If the maximum number has been received, the flow may proceed to the block 612, and the AOD system will eventually transition to the MONITORING state 558 as described above. On the other hand, if the maximum number has not been received, the AOD system will remain in the LEARNING state 554. One of ordinary skill in the art will recognize that the method 600 can be modified in various ways. As just one example, if it is determined that the maximum number of data sets has been received at the block 628, the AOD system could merely stop adding data sets to a training group. Additionally or alternatively, the AOD system could cause a user to be prompted to give authorization to update the model. In this implementation, the model would not be updated, even if the maximum number of data sets had been obtained, unless a user authorized the update.
At a block 662, it may be determined if this is the initial training of the model. As just one example, it may be determined if the validity range [XMIN, XMAX] is some predetermined range that indicates that the model has not yet been trained. If it is the initial training of the model, the flow may proceed to a block 665, at which the validity range [XMIN, XMAX] will be set to the range determined at the block 654.
If at the block 662 it is determined that this is not the initial training of the model, the flow may proceed to a block 670. At the block 670, it may be determined whether the range [X′MIN, X′MAX] overlaps with the validity range [XMIN, XMAX]. If there is overlap, the flow may proceed to a block 674, at which the ranges of one or more other regression models or interpolation models may be updated in light of the overlap. Optionally, if a range of one of the other regression models or interpolation models is completely within the range [X′MIN, X′MAX], the other regression model or interpolation model may be discarded. This may help to conserve memory resources, for example. At a block 678, the validity range may be updated, if needed. For example, if X′MIN is less than XMIN of the validity range, XMIN of the validity range may be set to X′MIN.
If at the block 670 it is determined that the range [X′MIN, X′MAX] does not overlap with the validity range [XMIN, XMAX], the flow may proceed to a block 682. At the block 682, an interpolation model may be generated, if needed. At the block 686, the validity range may be updated. The blocks 682 and 686 may be implemented in a manner similar to that described with respect to blocks 316 and 320 of
One of ordinary skill in the art will recognize that the method 650 can be modified in various ways. As just one example, if it is determined that the range [X′MIN, X′MAX] overlaps with the validity range [XMIN, XMAX], one or more of the range [X′MIN, X′MAX] and the operating ranges for the other regression models and interpolation models could be modified so that none of these ranges overlap.
At the block 712, a data set (X,Y) may be received as described previously. Then, at a block 716, it may be determined whether the received data set (X,Y) is within the validity range [XMIN, XMAX]. If the data set is outside of the validity range [XMIN, XMAX], the flow may proceed to a block 720, at which the AOD system may transition to the OUT OF RANGE state 562. But if it is determined at the block 716 that the data set is within the validity range [XMIN, XMAX], the flow may proceed to blocks 724, 728 and 732. The blocks 724, 728 and 732 may be implemented similarly to the blocks 158, 162 and 166, respectively, as described with reference to
To help further explain state transition diagram 550 of
The graph 800 of
If the operator subsequently causes a LEARN command to be issued, the AOD system will transition again to the LEARNING state 554. The graph 800 of
Then, the AOD system may transition back to the MONITORING state 558. The graph of
If the operator again causes a LEARN command to be issued, the AOD system will again transition to the LEARNING state 554. The graph 800 of
Next, ranges of the other regression models may be updated. For example, referring to
After transitioning to the MONITORING state 558, the AOD system may operate as described previously. For example, the graph of
Examples of Implementing AOD Systems in One or More Process Plant Devices
As described previously, AOD systems such as those described herein may be implemented in a variety of devices within a process plant.
In operation, the analog input function block 914 may provide a process variable signal to the SPM block 916. In turn, the SPM block 916 may generate one or more statistical signals based on the process variable signal, and may provide the statistical signals to the abnormal operation detection function block 918. Similarly, the analog input function block 922 may provide a process variable signal to the SPM block 924. In turn, the SPM block 924 may generate one or more statistical signals based on the process variable signal, and may provide the statistical signals to the abnormal operation detection function block 918 via the Fieldbus segment 912.
In another implementation, the SPM blocks 916 and 924 may be incorporated within the abnormal operation detection function block 918. In this implementation, the analog input function block 914 may provide its process variable signal to the abnormal operation detection function block 918. Similarly, the analog input function block 922 may provide its process variable signal to the abnormal operation detection function block 918 via the Fieldbus segment 912. Of course, as described above, SPM blocks may not always be utilized in connection with abnormal operation detection function block 918, and thus may be omitted in some implementations.
As is known, some field devices are capable of sensing two or more process variables. Such a field device may be capable of implementing all of blocks 914, 916, 918, 922, and 924.
The interface device 950 may communicate with other devices such as a host workstation 958 via a hardwired connection, such as a 2-wire, a 3-wire, a 4-wire, etc. connection, to provide SPM data, or data developed therefrom, such as alerts, data plots, etc. to those devices for viewing by a user. Additionally, as illustrated in
One of ordinary skill in the art will recognize that the example systems and methods described above may be modified in various ways. For example, blocks may be omitted or reordered, additional blocks may be added, etc. For example, with regard to
Although examples were described in which a regression model comprised a linear regression model of a single dependent variable as a function of a single independent variable, one of ordinary skill in the art will recognize that other linear regression models and non-linear regression models may be utilized. One of ordinary skill in the art will also recognize that the linear or non-linear regression models may model multiple dependent variables as functions of multiple independent variables.
The AOD systems, models, regression models, interpolation models, deviation detectors, logic blocks, method blocks, etc., described herein may be implemented using any combination of hardware, firmware, and software. Thus, systems and techniques described herein may be implemented in a standard multi-purpose processor or using specifically designed hardware or firmware as desired. When implemented in software, the software may be stored in any computer readable memory such as on a magnetic disk, a laser disk, or other storage medium, in a RAM or ROM or flash memory of a computer, processor, I/O device, field device, interface device, etc. Likewise, the software may be delivered to a user or a process control system via any known or desired delivery method including, for example, on a computer readable disk or other transportable computer storage mechanism or via communication media. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared and other wireless media. Thus, the software may be delivered to a user or a process control system via a communication channel such as a telephone line, the Internet, etc. (which are viewed as being the same as or interchangeable with providing such software via a transportable storage medium).
Thus, while the present invention has been described with reference to specific examples, which are intended to be illustrative only and not to be limiting of the invention, it will be apparent to those of ordinary skill in the art that changes, additions or deletions may be made to the disclosed embodiments without departing from the spirit and scope of the invention.
5819232 | Shipman | Oct 1998 | A |
5825645 | Konar et al. | Oct 1998 | A |
5826249 | Skeirik | Oct 1998 | A |
5842189 | Keeler et al. | Nov 1998 | A |
5847952 | Samad | Dec 1998 | A |
5859773 | Keeler et al. | Jan 1999 | A |
5859964 | Wang et al. | Jan 1999 | A |
5877954 | Klimasauskas et al. | Mar 1999 | A |
5892679 | He | Apr 1999 | A |
5892939 | Call et al. | Apr 1999 | A |
5898869 | Anderson | Apr 1999 | A |
5901058 | Steinman et al. | May 1999 | A |
5905989 | Biggs | May 1999 | A |
5907701 | Hanson | May 1999 | A |
5909370 | Lynch | Jun 1999 | A |
5909541 | Sampson et al. | Jun 1999 | A |
5909586 | Anderson | Jun 1999 | A |
5918233 | La Chance et al. | Jun 1999 | A |
5924086 | Mathur et al. | Jul 1999 | A |
5940290 | Dixon | Aug 1999 | A |
5948101 | David et al. | Sep 1999 | A |
5949417 | Calder | Sep 1999 | A |
5960214 | Sharpe, Jr. et al. | Sep 1999 | A |
5960441 | Bland et al. | Sep 1999 | A |
5975737 | Crater et al. | Nov 1999 | A |
5984502 | Calder | Nov 1999 | A |
5988847 | McLaughlin et al. | Nov 1999 | A |
6008985 | Lake et al. | Dec 1999 | A |
6014598 | Duyar et al. | Jan 2000 | A |
6017143 | Eryurek et al. | Jan 2000 | A |
6026352 | Burns et al. | Feb 2000 | A |
6033257 | Lake et al. | Mar 2000 | A |
6041263 | Boston et al. | Mar 2000 | A |
6047220 | Eryurek | Apr 2000 | A |
6047221 | Piche et al. | Apr 2000 | A |
6055483 | Lu | Apr 2000 | A |
6061603 | Papadopoulos et al. | May 2000 | A |
6067505 | Bonoyer et al. | May 2000 | A |
6076124 | Korowitz et al. | Jun 2000 | A |
6078843 | Shavit | Jun 2000 | A |
6093211 | Hamielec et al. | Jul 2000 | A |
6102164 | McClintock et al. | Aug 2000 | A |
6106785 | Havlena et al. | Aug 2000 | A |
6108616 | Borchers et al. | Aug 2000 | A |
6110214 | Klimasauskas | Aug 2000 | A |
6119047 | Eryurek et al. | Sep 2000 | A |
6122555 | Lu | Sep 2000 | A |
6128279 | O'Neil et al. | Oct 2000 | A |
6144952 | Keeler et al. | Nov 2000 | A |
6169980 | Keeler et al. | Jan 2001 | B1 |
6224121 | Laubach | May 2001 | B1 |
6246950 | Bessler et al. | Jun 2001 | B1 |
6266726 | Nixon et al. | Jul 2001 | B1 |
6298377 | Hartikainen et al. | Oct 2001 | B1 |
6298454 | Schleiss et al. | Oct 2001 | B1 |
6317701 | Pyotsia et al. | Nov 2001 | B1 |
6332110 | Wolfe | Dec 2001 | B1 |
6397114 | Eryurek et al. | May 2002 | B1 |
6421571 | Spriggs et al. | Jul 2002 | B1 |
6445963 | Blevins et al. | Sep 2002 | B1 |
6532392 | Eryurek et al. | Mar 2003 | B1 |
6539267 | Eryurek et al. | Mar 2003 | B1 |
6594589 | Coss, Jr. et al. | Jul 2003 | B1 |
6609036 | Bickford | Aug 2003 | B1 |
6615090 | Blevins et al. | Sep 2003 | B1 |
6633782 | Schleiss et al. | Oct 2003 | B1 |
6795798 | Eryurek et al. | Sep 2004 | B2 |
6901300 | Blevins et al. | May 2005 | B2 |
6954721 | Webber | Oct 2005 | B2 |
7079984 | Eryurek et al. | Jul 2006 | B2 |
7085610 | Eryurek et al. | Aug 2006 | B2 |
7221988 | Eryurek et al. | May 2007 | B2 |
7233834 | McDonald, Jr. et al. | Jun 2007 | B2 |
7269599 | Andreev et al. | Sep 2007 | B2 |
7321848 | Tuszynski | Jan 2008 | B2 |
7526405 | Miller | Apr 2009 | B2 |
7567887 | Emigholz et al. | Jul 2009 | B2 |
7657399 | Miller et al. | Feb 2010 | B2 |
7912676 | Miller | Mar 2011 | B2 |
20020022894 | Eryurek et al. | Feb 2002 | A1 |
20020038156 | Eryurek et al. | Mar 2002 | A1 |
20020077711 | Nixon et al. | Jun 2002 | A1 |
20020107858 | Lundahl et al. | Aug 2002 | A1 |
20020123864 | Eryurek et al. | Sep 2002 | A1 |
20020133320 | Wegerich et al. | Sep 2002 | A1 |
20020147511 | Eryurek et al. | Oct 2002 | A1 |
20020161940 | Eryurek et al. | Oct 2002 | A1 |
20020163427 | Eryurek et al. | Nov 2002 | A1 |
20030014500 | Schleiss et al. | Jan 2003 | A1 |
20030074159 | Bechhoefer et al. | Apr 2003 | A1 |
20040039556 | Chan et al. | Feb 2004 | A1 |
20040064465 | Yadav et al. | Apr 2004 | A1 |
20040078171 | Wegerich et al. | Apr 2004 | A1 |
20040168108 | Chan et al. | Aug 2004 | A1 |
20050060103 | Chamness | Mar 2005 | A1 |
20050143873 | Wilson | Jun 2005 | A1 |
20050197792 | Haeuptle | Sep 2005 | A1 |
20050210337 | Chester et al. | Sep 2005 | A1 |
20050246149 | Tuszynski | Nov 2005 | A1 |
20050256601 | Lee et al. | Nov 2005 | A1 |
20060020423 | Sharpe | Jan 2006 | A1 |
20060052991 | Pflugl et al. | Mar 2006 | A1 |
20060067388 | Sedarat | Mar 2006 | A1 |
20060074598 | Emigholz et al. | Apr 2006 | A1 |
20060157029 | Suzuki et al. | Jul 2006 | A1 |
20060200549 | Soto et al. | Sep 2006 | A1 |
20060265625 | Dubois et al. | Nov 2006 | A1 |
20070005298 | Allen et al. | Jan 2007 | A1 |
20070097873 | Ma et al. | May 2007 | A1 |
20070109301 | Smith | May 2007 | A1 |
20080027677 | Miller et al. | Jan 2008 | A1 |
20080027678 | Miller | Jan 2008 | A1 |
20080052039 | Miller et al. | Feb 2008 | A1 |
20080082295 | Kant et al. | Apr 2008 | A1 |
20080082304 | Miller | Apr 2008 | A1 |
20080082308 | Kant et al. | Apr 2008 | A1 |
20080097637 | Nguyen et al. | Apr 2008 | A1 |
20080116051 | Miller et al. | May 2008 | A1 |
20080120060 | Kant et al. | May 2008 | A1 |
20080167839 | Miller | Jul 2008 | A1 |
20080177513 | Miller | Jul 2008 | A1 |
20080208527 | Kavaklioglu | Aug 2008 | A1 |
20090089009 | Miller | Apr 2009 | A1 |
20090097537 | Miller | Apr 2009 | A1 |
Number | Date | Country |
---|---|---|
0 612 039 | Aug 1994 | EP |
0 626 697 | Nov 1994 | EP |
0 961 184 | Dec 1999 | EP |
0 964 325 | Dec 1999 | EP |
0 965 897 | Dec 1999 | EP |
2 294 129 | Apr 1996 | GB |
2 294 793 | May 1996 | GB |
2 347 234 | Aug 2000 | GB |
2 360 357 | Sep 2001 | GB |
07-152714 | Jun 1995 | JP |
WO-0179948 | Oct 2001 | WO |
WO-2006026340 | Mar 2006 | WO |
WO-2006107933 | Oct 2006 | WO |
Number | Date | Country | |
---|---|---|---|
20080125879 A1 | May 2008 | US |