Fluids, such as water or hydrocarbons, may be moved over distances using pipelines or flowlines. In general, flowlines refer to conveyances for fluids at a single site, and pipelines refer to fluid conveyances over greater distances. Anomalies may develop in both pipelines and flowlines. Existing anomaly detection systems for pipelines and flowlines are generally based on fluid modeling and suffer from both too many false positives and too many false negatives. Existing pipeline leak-detection systems are, in general, loosely integrated, disparate systems in the enterprise. Furthermore, existing pipeline leak-detection systems have high administration and engineering support costs. Furthermore, existing pipeline leak-detection systems are technology dependent and are generally limited to a real-time transient model (RTTM) for features related to leak size and leak localization. Flowline leak-detection systems are nonexistent in commercially available solutions.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
While embodiments of this disclosure have been depicted and described and are defined by reference to exemplary embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and not exhaustive of the scope of the disclosure.
For the purposes of this disclosure, computer-readable media may include any instrumentality or aggregation of instrumentalities that may retain data and/or instructions for a period of time. Computer-readable media may include, for example, without limitation, storage media such as a direct access storage device (e.g., a hard disk drive or floppy disk drive), a sequential access storage device (e.g., a tape disk drive), compact disk, CD-ROM, DVD, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), and/or flash memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic and/or optical carriers; and/or any combination of the foregoing.
Illustrative embodiments of the present invention are described in detail herein. In the interest of clarity, not all features of an actual implementation may be described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions may be made to achieve the specific implementation goals, which may vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of the present disclosure.
To facilitate a better understanding of the present invention, the following examples of certain embodiments are given. In no way should the following examples be read to limit, or define, the scope of the invention. The terms “couple” or “couples,” as used herein, are intended to mean either an indirect or a direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect electrical connection via other devices and connections. Similarly, the term “communicatively coupled” as used herein is intended to mean either a direct or an indirect communication connection. Such a connection may be a wired or wireless connection such as, for example, Ethernet or LAN. Thus, if a first device communicatively couples to a second device, that connection may be through a direct connection, or through an indirect communication connection via other devices and connections.
The present disclosure includes methods, systems, and software to perform anomaly detection in pipelines or flowlines. Fluids in both pipelines and flowlines may exhibit one or both of turbulent (for example, non-steady state) and laminar flow. In certain example implementations, pipelines include one or more inlets and one or more outlets. In certain example implementations, flowline leak detection covers a single inlet feeding multiple outlet streams on the separation vessel(s) or at the product processing facility. Other example factors that influence pipeline flow characteristics are elevation deviations, which can contribute to increased transient volumes. In certain example implementations, a well's flowline typically has differing qualities of gas-to-liquid ratio (GLR), which contribute to the non-steady state and are also impacted by one or more of pipe diameter, inclination, and/or elevation.
FIGURES 1A, 1B, 1C, and 1D are diagrams of example pipelines according to the present disclosure. The pipeline of FIGURE 1A has multiple inlets with a single outlet. The pipeline of
Although control unit 300 is illustrated as including two databases, control unit 300 may contain any suitable number of databases and machine learning algorithms.
Control unit 300 may be communicatively coupled to one or more displays 316 such that information processed by sensor control system 302 may be conveyed to operators at or near the pipeline or flowline or may be displayed at a location offsite.
Modifications, additions, or omissions may be made to
An example monitoring service (block 405) is shown in greater detail in
The control unit determines outlet flow rates (block 510). In certain example embodiments, the outlet flow rate is calculated as a sum of flow rates from all active outlets. This may represent the total volume of fluid removed from the system of the pipeline or flowline in a given period of time. One or more of the inlet and outlet flow rates may be standardized to 60° F. and 1 atmosphere of pressure.
The control unit determines a listing of active inlets (block 515) and a number of active inlets (block 525). The control unit determines a listing of active outlets (block 520) and a number of active outlets (block 530). In certain example embodiments, the listing of inlets and outlets generates a listing of (primo, metric) pairs for the instance where multiple LACTs have the same primo but different metrics.
The control unit then determines a relative flow rate and/or a standard relative flow rate difference (block 535). In embodiments of the present disclosure, one or more of blocks 505-535 may be omitted, repeated, or performed in a different order. In one example embodiment, the monitoring service 405 runs in an infinite loop and waits five seconds between iterations. The delay may be based, in part, on the time for sensor data to be transmitted to and ingested into the database 308. In other example embodiments, the monitoring service loops more frequently. In other example embodiments, the monitoring service loops less frequently. In certain embodiments, the delay between iterations is based on how frequently the sensors collect data. In certain embodiments, an algorithm iteration takes less than 1 second, while data arrive from the sensors with a period of 5 seconds.
An example prediction service (block 410) is shown in
An example training method for the prediction service (block 410) is shown in
In certain embodiments, the neural network model does not operate on each data point individually, but instead works with a set of data points. In some embodiments, the set of data points is a time sequence of 32-36 data points. In certain embodiments, this data set corresponds to 2-3 minutes of data. In other example embodiments, the data set may correspond to 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, or 60 minutes of data. In such an embodiment, the control unit converts the sequence of labels into a single label. As an example, assume that there is a sequence of 10 data points S=[0.011, 0.012, 0.014, 0.012, 0.015, 0.016, 0.230, 0.340, 0.440, 0.420]. In this example, labels for such a sequence would be L=[0, 0, 0, 0, 0, 0, 1, 1, 1, 1]. In order to generate a single label for a sequence, we take an average of all labels for each data point:

p = (1/N)·Σ Li, i ∈ [1, . . . , N]
where N is the total number of points in a sequence (32 or 36) and the term “p” is the probability of a leak event on the given timeframe. Each probability is associated with a timestamp—the maximum timestamp for the sequence of points, for example, the timestamp associated with the last data point in the sequence. For the example sequence above, p = 4/10 = 0.4.
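For illustration only, a minimal Python sketch of this label-averaging step (the helper name is ours, not from the disclosure):

```python
# Illustrative sketch of the label-averaging step described above.
def sequence_label(labels):
    """Collapse per-point leak labels into a single probability p."""
    return sum(labels) / len(labels)

# The 10-point example from the text: the last 4 points are labeled 1.
L = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
p = sequence_label(L)  # 0.4
```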
As a result, we get a dataset of sequences, each associated with a label—a probability. An example of such a dataset is given in the table below:
The controller generates features for branch 1 of the model (block 1210). In this example embodiment, the dataset from table 2 is passed to branch 1 of the neural network model.
The controller generates features for branch 2 of the model (block 1215). Unlike the example branch 1 described above, where each data row is a sequence of numbers that relate to each other (a time series of relative flow rate difference values), the features in branch 2 may not physically relate to one another. Therefore, the features in branch 2 are simply a collection of derived properties. Example properties that can be used as features in branch 2 of the neural network model include, but are not limited to:
As a result, the controller generates the new feature values shown in the table below, which will be used for branch 2 training:
The controller trains the model in block 1220. In one example embodiment, to train the machine learning model the dataset is split into 3 parts: a training dataset (40%), a validation dataset (20%), and a test dataset (40%). The 40:20:40 ratio was chosen arbitrarily, and other example embodiments may use different splitting ratios, including 40:30:30, 40:40:20, 50:25:25, and 50:30:20. The training and validation datasets are used in the training procedure directly, while the test dataset is required only for model evaluation. In certain embodiments, each machine learning model may require its own training parameters. Example parameters to train the neural network model include a batch size of 32. In other example embodiments, the batch size may be 8, 16, 48, 64, 128, or 256. Other example training parameters include the number of epochs. In general, the number of “epochs” refers to how many times each entry in the training dataset is passed through the backpropagation algorithm to optimize the neural network's weights. In one example embodiment, 1000 epochs was the selected parameter. In certain example embodiments, the data entries were shuffled in each epoch. In other example embodiments, the data entries are not shuffled, or are shuffled less frequently. In certain example embodiments, after each epoch the resulting model weights were saved if the validation score decreased. At the end of the training procedure, this provides the model with the lowest validation score. In certain example embodiments, one or more optimizers are used as part of the training. In one example embodiment, the following optimizers were used: Adam and Stochastic Gradient Descent (SGD). For the SGD optimizer, the following parameters were used:
In certain example embodiments, as an evaluation metric, the “Area under the Receiver Operating Characteristic Curve” was used, also known as ROC-AUC score.
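A hedged sketch of the training and evaluation procedure just described, written against Keras and scikit-learn APIs. The 40:20:40 split, batch size 32, 1000 epochs, shuffling, checkpoint-on-best-validation, and ROC-AUC metric come from the text; the stand-in data, stand-in model, and the 0.5 label threshold are illustrative assumptions (for the two-branch networks described below, X would be a list of input arrays):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import ModelCheckpoint

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32, 1))   # stand-in sequences
y = rng.uniform(size=1000)           # stand-in probability labels
model = models.Sequential([layers.Input(shape=(32, 1)), layers.Flatten(),
                           layers.Dense(1, activation="sigmoid")])  # stand-in

# 40% train, 20% validation, 40% test (1/3 of the remaining 60% is 20%).
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.4)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, train_size=1/3)

# Keep only the weights with the best (lowest) validation score.
ckpt = ModelCheckpoint("best.weights.h5", monitor="val_loss",
                       save_best_only=True, save_weights_only=True)
model.compile(optimizer="adam", loss="binary_crossentropy")  # or SGD
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          batch_size=32, epochs=1000, shuffle=True, callbacks=[ckpt])

model.load_weights("best.weights.h5")
# ROC-AUC on the held-out test set; probability labels are thresholded
# at 0.5 here purely for illustration.
auc = roc_auc_score(y_test > 0.5, model.predict(X_test).ravel())
```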
An example trained machine learning algorithm used by the decision-making service is shown in detail in
The example artificial neural network shown in
In one example embodiment, the RFRD is determined for the last 32 data points, where F is a total flow on the inlets (in) and outlets (out). In certain example embodiments, the RFRD metric is normalized in a [−1, 1] range. In one example embodiment, the control system determines a logarithmic flow ratio (LFR).
In certain example embodiments, the control system normalizes the LFR values.
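The exact RFRD and LFR expressions are not reproduced in this text, so the sketch below uses one plausible form of each purely as an assumption: RFRD = (Fin − Fout)/(Fin + Fout), which is naturally bounded in [−1, 1] for non-negative flows, and LFR = log(Fin/Fout):

```python
import numpy as np

def rfrd(f_in, f_out):
    """Assumed relative flow rate difference; bounded in [-1, 1]."""
    return (f_in - f_out) / (f_in + f_out)

def lfr(f_in, f_out):
    """Assumed logarithmic flow ratio."""
    return np.log(f_in / f_out)

# Applied to the last 32 points of total inlet/outlet flow:
f_in = np.array([100.0, 101.2, 99.8])    # illustrative totals
f_out = np.array([99.5, 100.9, 100.1])
print(rfrd(f_in, f_out), lfr(f_in, f_out))
```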
The control system then performs a convolution in block 710. With respect to the convolution, the parameters are weights generated by the trained model from the given data using the backpropagation algorithm. In certain embodiments, the weights are selected by the training procedure to minimize the error. The resulting output is then batch normalized in block 715. The control system then performs an activation function at block 720. In one example embodiment, the ELU activation function is performed. Other example embodiments may use different activation functions, such as TANH, SOFTMAX, or RELU. Block 725 is a pooling layer that, in one embodiment, performs max pooling. Block 730 is a convolution layer. In one example embodiment, the filter size is 32 and the kernel size is 5. In other example embodiments, the kernel size is 3 or 7. In general, however, the filter size may be between 1 and infinity. Block 735 is a batch normalization layer. In block 740, the control system performs an ELU activation function. In block 745, the control system performs a pooling layer.
In the second branch, at block 750, the control system determines thirteen input parameters. In other example embodiments, more, fewer, or different input parameters are determined. In one example embodiment, the control system determines a transient stage indicator. That is, if the number of active inlets increases with the next data point, then a value of 0.01 is assigned to the current data point. If the number of active inlets decreases with the next data point, then the control system assigns a scaling value of −0.01 to the current data point. In certain example embodiments, the scaling parameters are 0, 0.01, or −0.01. In general, however, the scaling parameters may be any real number. If the number of inlets remains the same, then the control system assigns a value of 0 to the current data point. Other example embodiments may use different numbers for the transient stage analysis.
In block 750, the control system also determines a mean relative flow rate difference over the last 32 data points. The control system may also determine a standard deviation of the flow rate over the last 32 data points. The control system may also determine a total average inlet flow rate over the last 32 data points. In certain embodiments, this average inlet flow rate is normalized. The control system may also determine a total average outlet flow rate over the last 32 data points. In certain embodiments, this average outlet flow rate is normalized. In certain embodiments, the control system determines the relative number of data points in RFRD that are larger than 0. In certain embodiments, the control system determines the relative number of data points in RFRD that are smaller than 0. In certain embodiments, the control system determines the relative number of data points in RFRD that are in a range of [−1, −0.9). In certain embodiments, the control system determines the relative number of data points in RFRD that are in a range of [−0.9, −0.5). In certain embodiments, the control system determines the relative number of data points in RFRD that are in a range of [−0.5, −0.02). In certain embodiments, the control system determines the relative number of data points in RFRD that are in a range of [−0.02, 0.04). In certain example embodiments, the second branch takes derived features as inputs. Derived features include all features described above, and additionally one or more of cumulative flow rate, cumulative flow rate difference, normalized flow rate, standardized flow rate, deviations in active inlet count, and deviations in active outlet count. In block 755, the control system performs a batch normalization layer and, in block 760, it performs an activation function layer. In one example embodiment, the block 760 activation is an ELU activation layer.
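A sketch of the block 750 feature construction, following the properties enumerated above (the text names thirteen inputs; the remaining derived features, such as cumulative flow rate, would extend this vector in the same way, and the exact stream used for the flow-rate standard deviation is an assumption):

```python
import numpy as np

def branch2_features(rfrd_win, inlet_win, outlet_win, transient):
    """Derived features over the last 32 points. rfrd_win, inlet_win,
    outlet_win are numpy arrays; transient is the 0 / 0.01 / -0.01
    transient stage indicator described above."""
    n = len(rfrd_win)
    frac = lambda mask: np.count_nonzero(mask) / n  # relative counts
    return np.array([
        transient,
        rfrd_win.mean(),    # mean relative flow rate difference
        inlet_win.std(),    # "standard deviation of the flow rate" (stream assumed)
        inlet_win.mean(),   # may be normalized in practice
        outlet_win.mean(),  # may be normalized in practice
        frac(rfrd_win > 0),
        frac(rfrd_win < 0),
        frac((rfrd_win >= -1.0) & (rfrd_win < -0.9)),
        frac((rfrd_win >= -0.9) & (rfrd_win < -0.5)),
        frac((rfrd_win >= -0.5) & (rfrd_win < -0.02)),
        frac((rfrd_win >= -0.02) & (rfrd_win < 0.04)),
    ])
```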
The control system concatenates the output of the two branches at block 765. The output of the concatenation is then subjected to a dense layer at block 770. The control system then performs a batch normalization at block 775, an activation layer at block 780, and a dropout layer at block 785. In example embodiments, a different number of nodes can be used in the dense layer. In certain example embodiments, the number of nodes is any integer value between 1 and infinity. In example embodiments, the number of nodes is between 10 and 1000. In example embodiments, the number of nodes is optimized by an external algorithm. In embodiments of the present system, the dropout value may be any real value between 0 and 1. The control system then performs an activation function at block 785 to generate an output. In one example embodiment, the activation function at block 785 is a sigmoid activation function.
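A hedged Keras-style sketch of the two-branch network walked through in blocks 705-785. Layer choices given in the text (ELU activations, max pooling, filter size 32 and kernel size 5 at block 730, sigmoid output) are used directly; everything else (input channels, first-conv parameters, dense width, dropout rate, final pooling type) is an illustrative assumption:

```python
from tensorflow.keras import layers, Model

# Branch 1: convolutions over the 32-point input sequences.
seq_in = layers.Input(shape=(32, 2))           # e.g. RFRD + LFR channels (assumed)
x = layers.Conv1D(32, 5)(seq_in)               # block 710 (parameters assumed)
x = layers.BatchNormalization()(x)             # block 715
x = layers.Activation("elu")(x)                # block 720
x = layers.MaxPooling1D()(x)                   # block 725: max pooling
x = layers.Conv1D(32, 5)(x)                    # block 730: filter 32, kernel 5
x = layers.BatchNormalization()(x)             # block 735
x = layers.Activation("elu")(x)                # block 740
x = layers.GlobalMaxPooling1D()(x)             # block 745 (pooling type assumed)

# Branch 2: the thirteen derived input parameters.
feat_in = layers.Input(shape=(13,))            # block 750
y = layers.BatchNormalization()(feat_in)       # block 755
y = layers.Activation("elu")(y)                # block 760

z = layers.Concatenate()([x, y])               # block 765
z = layers.Dense(64)(z)                        # block 770 (node count assumed)
z = layers.BatchNormalization()(z)             # block 775
z = layers.Activation("elu")(z)                # block 780
z = layers.Dropout(0.5)(z)                     # block 785 (rate assumed)
out = layers.Dense(1, activation="sigmoid")(z) # sigmoid output

model = Model([seq_in, feat_in], out)
```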
A second example machine learning algorithm used by the decision-making service is shown in detail in
The first branch receives four inputs of relative flow rate difference, inlet configuration change, standardized inlet flow rate, and standardized outlet flow rate (block 805). The first branch then passes the inputs to a GRU (Gated Recurrent Unit) (block 810) and a dropout layer (block 815). In certain embodiments, the GRU is able to extract features from time series data more efficiently than the convolutional layers of the system of
The second branch of the machine learning algorithm receives five inputs: a mean relative flow rate difference, a relative flow rate difference standard deviation, an area under the RFRD curve, an inlet flow rate scaled by 5000, and an outlet flow rate scaled by 5000 (block 820). In general, the scaling parameters may vary in different implementations. In certain example embodiments, the scaling parameters are chosen so that all or most of the output values are between 0 and 1. The second branch further includes a dense layer (block 825), a batch normalization layer (block 830), an activation layer (block 835), and a dropout layer (block 840). The first and second branches are concatenated at a concatenation layer (block 845). The combined branches are then passed through a dense layer (block 850), a batch normalization layer (block 855), an activation layer (block 860), a dropout layer (block 865), and an output layer (block 870).
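A corresponding hedged sketch of the GRU-based model of blocks 805-870; the GRU and layer sequence follow the text, while unit counts, dropout rates, sequence length, and the activation choice are illustrative assumptions:

```python
from tensorflow.keras import layers, Model

# Branch 1: four time series (RFRD, inlet configuration change,
# standardized inlet flow rate, standardized outlet flow rate).
seq_in = layers.Input(shape=(32, 4))            # block 805 (length assumed)
x = layers.GRU(32)(seq_in)                      # block 810 (units assumed)
x = layers.Dropout(0.3)(x)                      # block 815 (rate assumed)

# Branch 2: five scalar inputs (block 820).
feat_in = layers.Input(shape=(5,))
y = layers.Dense(16)(feat_in)                   # block 825 (width assumed)
y = layers.BatchNormalization()(y)              # block 830
y = layers.Activation("elu")(y)                 # block 835 (activation assumed)
y = layers.Dropout(0.3)(y)                      # block 840

z = layers.Concatenate()([x, y])                # block 845
z = layers.Dense(32)(z)                         # block 850 (width assumed)
z = layers.BatchNormalization()(z)              # block 855
z = layers.Activation("elu")(z)                 # block 860
z = layers.Dropout(0.3)(z)                      # block 865
out = layers.Dense(1, activation="sigmoid")(z)  # block 870 (assumed sigmoid)

model = Model([seq_in, feat_in], out)
```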
An example decision-making service (block 415) is shown in
Values from the sensors are cached (block 1010). The cached sensor values may be used to generate a model (block 1015). The resulting model may be cached (block 1020). With respect to detecting an anomaly, the system receives data from the monitor requests into one or more data queues (block 1025). The system determines whether a model is present in the model cache (block 1035) and, if a model is present, publishes evaluated data using the cached model (block 1040). If, however, the system determines that no model is present (block 1035), then the system publishes the data without evaluation (block 1045).
An example of model generation (block 1015) is shown in greater detail in
In certain example embodiments, the control system also normalizes data relative to the well's decline curve. An example decline curve of a well is shown in
The decline curve of
In example embodiments, over a small enough window in which the well is not declining steeply, the decline curve can be approximated as a line. In example embodiments, the control system determines the mean value at each point along the decline, using a rolling window, and takes the resulting points to be the decline curve.
In certain embodiments, the control system subtracts the expected decline of pressure from the data, resulting in a pressure vs. time plot where the pressure is distributed around 0 PSI. In certain embodiments, the control system ensures that the data has a standard deviation of 1. In example embodiments, the control system ensures that the data has the expected standard deviation by using the standard deviation of the pressure throughout the entire window being considered. However, because the data might be distributed differently at different points in the well's decline—for example, some parts of a flow regime might be more turbulent than others—the control system may get a measure of standard deviations at various times throughout the window being considered and normalize based on the changing standard deviation. In example embodiments, the control system performs a windowed average standard deviation:
x_norm = (x_actual − x_expectedDeclineValue)/σ_standardDeviationForDataPoint (Eq. 4)
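A minimal sketch of Eq. 4 with rolling-window statistics (pandas assumed; the window length, edge handling, and NaN fill are illustrative choices, not from the disclosure):

```python
import pandas as pd

def normalize_to_decline(pressure: pd.Series, window: int = 500) -> pd.Series:
    """Eq. 4: subtract the expected decline value and divide by a windowed
    standard deviation. The rolling mean stands in for the locally linear
    decline curve; the rolling std captures regime-dependent spread."""
    expected = pressure.rolling(window, center=True, min_periods=1).mean()
    sigma = pressure.rolling(window, center=True, min_periods=2).std()
    return ((pressure - expected) / sigma).fillna(0.0)
```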
In this example, the control system would analyze data when equipment changes have not been made that would affect the flowline pressure behavior.
In example embodiments, more recent data may be excluded from model training so that the system has time to collect data from sensors that may lag other sensors. The system filters ESD events before training the model (block 1105). ESD events will vary based on the sensors used. In example embodiments, “ESD events” refers to an electronic signal given by one or more sensors in the field to indicate that a sensor is reading some sort of value that warrants starting an emergency shut-down procedure, or that an emergency shut-down has been manually initiated by a stakeholder. In example embodiments, the control system also filters other, non-emergency shut-downs, which are not sent in the ESD signal from the field. Planned shutdown events could happen for a number of reasons, such as planned maintenance, equipment switches, or offset-frac jobs. In certain embodiments, the control system recognizes these events by looking for oil/gas production values close to 0 that span a period of time. In certain embodiments, the control system recognizes the events by seeing if the well is shut in.
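For illustration, a sketch of the planned-shutdown filter described above, flagging sustained near-zero production spans (the column name, threshold, and minimum span are assumptions):

```python
import pandas as pd

def filter_shutdowns(df: pd.DataFrame, threshold: float = 1.0,
                     min_span: str = "30min") -> pd.DataFrame:
    """Drop rows inside spans where production sits near 0 for a
    sustained period. Expects a DatetimeIndex and a 'production' column."""
    low = df["production"] < threshold
    run_id = (low != low.shift()).cumsum()       # label contiguous runs
    span = df.index.to_series().groupby(run_id).transform(
        lambda s: s.iloc[-1] - s.iloc[0])        # duration of each run
    shut_in = low & (span >= pd.Timedelta(min_span))
    return df[~shut_in]
```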
Example ESD events include one or more of treater level switch, treater pressure above a treater pressure threshold, large blow case, small blow case, fire tube, KO drum switch, flowline hi pressure, flare status, water tank high level, level ESD, oil tank heights, battery voltage, separator communication down, tank communication down, combustor communication down, wellhead communication down, bath temp, treater temp, VRT high level switch, VRT scrubber switch, VRU fault, power fail, sales line hi PSI, group water tank high level, group oil tank high level, Low Line pressure, High Line pressure, High Level in production separator, High Level in water tank, High Level in sand box, High Pipeline Pressure.
In example embodiments, the events may be organized by sensor type. With respect to pressure transducers, events may include high pressure and low pressure. In example embodiments, the pressure transducer high pressure event may reflect one or more of high casing, tubing, flowline, high pressure separator, wellhead, flare-line, etc. pressures. In example embodiments, the pressure transducer low pressure event may reflect one or more of low casing, tubing, flowline, high pressure separator, wellhead, flare-line, etc. pressures. Events related to a temperature transducer may include high wellhead, flowline, high pressure separator, etc. temperatures. Events related to communication issues may include one or more of separator, combustor, tank, wellhead, etc. communication down. Events related to material management sensors, including radar or tuning fork sensors, may include one or more of high oil tank level, high water tank level, high pressure separator level, high level in sandbox, etc. Events related to equipment failures may include one or more of pump failure, compressor failure, etc. Events related to electrical failure may include one or more of low battery level and power failure. Other events may include high H2S level, scheduled shut-in, etc.
The system then trains the model in block 1105. Examples of model training include generating a plurality of kernel density estimation models. The system then tests the kernel density estimation models to determine the best model. The testing may include recursive gradient descent to find the highest log-likelihood for the selected model. The model training may further include scaling the chosen model. Example implementations of model training may include one or more hyper-parameter optimizations. Example hyper-parameter optimizations include brute-force optimization, such as grid search, random search, or random parameter optimization. In random search, the control system leaves out or puts in random parameters for various cases. By using random search, the control system may not have to look at as many cases, which saves computational resources. In certain embodiments, random search may result in a sub-optimal model when compared to grid search, which is exhaustive.
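The text names recursive gradient descent for the log-likelihood search; the sketch below swaps in the grid-search option also named above, using scikit-learn's KernelDensity (the bandwidth grid, kernels, and stand-in data are illustrative):

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
training_data = rng.normal(size=(500, 2))   # stand-in for sensor data

# Fit a family of KDE models and keep the one with the highest
# cross-validated log-likelihood (KernelDensity.score returns the
# total log-likelihood, which GridSearchCV maximizes).
params = {"bandwidth": np.logspace(-2, 1, 20),
          "kernel": ["gaussian", "tophat", "exponential"]}
search = GridSearchCV(KernelDensity(), params, cv=5)
search.fit(training_data)
best_kde = search.best_estimator_           # the chosen model
```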
Example implementations of model training may include one or more model-specific cross-validation techniques such as ElasticNetCV, LarsCV, LassoCV, LassoLarsCV, LogisticRegressionCV, MultiTaskLassoCV, OrthogonalMatchingPursuitCV, RidgeCV, or RidgeClassifierCV. Example implementations of model training may include one or more out-of-bag estimates. Some ensemble models—which are models that use groups of other models to make predictions, like random forest, which uses a multitude of tree models—use bagging. Bagging is a method by which new training sets are generated by sampling with replacement, while part of the training set remains unused. For each classifier or regressor in the ensemble, a different part of the training set is left out. The left-out portion can be used to estimate how accurate the models are without having a separate validation set, making the estimate come “for free”: in other training processes, cross-validation requires holding out data that could otherwise be used for model training.
Example Models that use out of bag estimations include Random Forest Classification, Random Forest Regression, Gradient Boost Classifier, Gradient Boost Regression, Extra-Tree Classifier, and Extra-Tree Regression.
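A short sketch of an out-of-bag estimate with one of the listed models, using scikit-learn's oob_score flag (the stand-in data are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_train, y_train = make_classification(n_samples=500)  # stand-in data

# Each tree trains on a bootstrap sample; the rows a tree never saw
# score it, giving an accuracy estimate without a separate validation set.
forest = RandomForestClassifier(n_estimators=200, oob_score=True)
forest.fit(X_train, y_train)
print(forest.oob_score_)   # "free" out-of-bag accuracy estimate
```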
In example embodiments, the kernel density estimation is for one stream. In other example embodiments, the kernel density estimate trains on two or more streams. Where the kernel density estimation trains on multiple streams, the control system pulls data streams from one or more databases. In example embodiments, the data table is pivoted, such that the time stamp is the index and the metric names are the columns. In example embodiments, null values are filled in using interpolation, single-variate imputation, multi-variate imputation, or other methods. In example embodiments with multiple streams, the training process then proceeds as before, performing hyper-parameter optimization (such as grid search), etc.
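A sketch of the multi-stream preparation: pivoting timestamped readings so the timestamp is the index and the metric names are the columns, then filling nulls by interpolation, one of the options named above (column names and values are illustrative):

```python
import pandas as pd

raw = pd.DataFrame({  # one row per reading, illustrative values
    "timestamp": pd.to_datetime(["2019-10-22 00:00:00", "2019-10-22 00:00:00",
                                 "2019-10-22 00:00:05", "2019-10-22 00:00:10"]),
    "metric_name": ["pressure", "flow", "pressure", "flow"],
    "value": [450.0, 98.2, 449.5, 98.4],
})

wide = raw.pivot_table(index="timestamp", columns="metric_name", values="value")
wide = wide.interpolate(method="time").ffill().bfill()  # fill null values
```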
In example embodiments, when it comes to caching the model (or otherwise saving it), the model is given a unique ID. The model is cached with the key being the unique ID and the model itself being the value. In example embodiments, each data stream the model relies on is cached as well, with the key being the name of the data stream and the value being a list of all unique model IDs associated with that data stream. In example embodiments, when a data message comes in, the system first checks the cache whose keys are the specific data stream names and whose values are lists or sets of all unique models associated with each data stream. If no models are found, then nothing is done. If models are found, the control system runs each model. If a model needs more than one data stream, it can check the cache generated by the “new process.” In example embodiments, the model can be called in a number of ways—not just triggered by an incoming data message, but by being on a timer, by a user calling it to run, or in any multitude of ways. In example embodiments, the saving of data can be handled by a process that saves it to a database or elsewhere, in which case, when a process is triggered to run the model, it will reference wherever that data is stored. In example embodiments, instead of storing the data, the control system may poll equipment to get the latest sensor readings for the model to run.
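A minimal sketch of the two-cache scheme just described, with plain dicts standing in for the cache service (all names and the model interface are illustrative):

```python
import uuid

model_cache = {}    # unique model ID -> model object
stream_cache = {}   # data stream name -> set of model IDs using it

def register_model(model, stream_names):
    """Cache a model under a fresh unique ID and index its streams."""
    model_id = str(uuid.uuid4())
    model_cache[model_id] = model
    for name in stream_names:
        stream_cache.setdefault(name, set()).add(model_id)
    return model_id

def on_data_message(stream_name, value):
    """Run every cached model associated with the incoming stream;
    if no models are found, nothing is done."""
    for model_id in stream_cache.get(stream_name, ()):
        model_cache[model_id].evaluate(value)   # illustrative interface
```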
After the model is chosen, in example embodiments, the chosen model is stored (block 1105). In certain embodiments, the storage is performed by caching. In example embodiments, after the model is chosen, the model is stored along with metadata about the model. The metadata is used when the model is called. The metadata can also be used to determine how to route incoming requests to the appropriate models.
In certain example embodiments, the system performs an analysis of static fluids in pipelines or flowlines.
In block 1610, the system fits a pressure curve over the period Tfit. In certain example embodiments, the pressure fit curve is hyperbolic. In other example embodiments, the curve is linear. In some example embodiments, the system determines the largest absolute deviation of the actual data from the linear fit, as defined by the function:
∂y_fit^i = s·max(|y_k^i − ŷ_k^i|), k ∈ [1, . . . , Tfit] (Eq. 5)
In block 1615, the system determines how much data will be used for fitting the curve and how much data will be used for prediction of an anomaly. In certain example embodiments, the fitting time is referred to as Tfit and the length of the prediction period is Tpredict. In certain example embodiments, the value of Tfit can be any positive value. In certain example embodiments, Tfit is an index and therefore must be an integer greater than 1. In certain example embodiments, Tfit is chosen empirically based on one or more of the frequency of data, the meter accuracy, and the presence of false alarms. For example, if the system receives data points every 5 seconds and accuracy is good, Tfit may vary between 10 and 60, which roughly corresponds to 1-5 minutes of data. This is enough to make a prediction. In example embodiments, Tfit is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, or 100.
In block 1620, the system determines a minimum pressure drop parameter pmin for which the system should not label such a pressure change as an anomaly. In certain example embodiments, there will be oscillations that are larger than pmin, but the example system will not label such oscillations as anomalies because they are normal, as discussed above with respect to
∂p = max(pmin, ∂y_fit^i) (Eq. 6)
In block 1625, the system determines the difference between the predicted pressures and the measured pressures in the prediction region. In one example embodiment, the system extrapolates the line Ŷi into the prediction region to find the differences between the predicted and measured pressures. In example embodiments, for each difference, the system determines a value of partial probability as:
where σ is a smoothing parameter and ∂y_j^i = y_j^i − ŷ_j^i is the difference between observed pressure and predicted pressure at a point j for a curve i. The indices j are valid for a prediction segment and start at the left part of the prediction segment. After this step, the system has calculated partial probabilities for each point within the prediction segment. In certain embodiments, the value of the σ smoothing parameter is chosen to prevent probability spiking. In certain embodiments, the σ smoothing parameter is any positive floating point value from the range (0, +inf). In certain embodiments, the σ smoothing parameter is a value from the range (0, 10] such that it prevents probability spiking and false alarms. In certain embodiments, the σ smoothing parameter is 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10, or an intermediate value.
In block 1630, the system determines a probability for the curve i. In one example embodiment, the probability for the curve i is calculated as:
where w_j^i is the weight of the j-th probability. In example embodiments, the weights are chosen to limit the influence of any single partial (j-th) probability on the total probability value. In example embodiments, the weights can be set equal to each other. In example embodiments, the weights are assigned as w_j^i = e^(−(Tpredict−j)/Tpredict).
There are no restrictions on how the weights must be assigned. In the example embodiment, the weights are calculated according to the exponential formula above, but other embodiments may, for example, set all the weights equal to each other. The effect of a particular weighting scheme on the final result is difficult to predict in advance; at the same time, because the total probability is normalized, there is no difference between setting all weights to 1 and setting all weights to 20. In certain example embodiments, the most recent pressure point has the largest index Tpredict and therefore the largest corresponding weight (1). In example embodiments, the oldest point, with index 1, has weight e^(−(Tpredict−1)/Tpredict).
If we assume that one minute of 5-second data is used to make a prediction, then Tpredict = 12 and the oldest point's weight is w_1^i ≈ 0.4.
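A short sketch of this weight assignment using the exponential form reconstructed above; it reproduces w ≈ 0.4 for the oldest point when Tpredict = 12:

```python
import numpy as np

def partial_weights(t_predict: int) -> np.ndarray:
    """w_j = exp(-(Tpredict - j) / Tpredict) for j = 1..Tpredict: the
    newest point gets weight 1; older points decay toward e**-1."""
    j = np.arange(1, t_predict + 1)
    return np.exp(-(t_predict - j) / t_predict)

w = partial_weights(12)
print(w[0], w[-1])   # ~0.40 for the oldest point, 1.0 for the newest
```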
The process of blocks 1605-1630 is repeated for each of the node pressure curves. In block 1635, the system determines which of the pressure curves returns the largest probability.
In other example embodiments, the system returns probabilities for two, three, four, five, six, seven, eight, nine, ten, or all pressure curves.
In certain example embodiments, the process of
Therefore, the present invention is well adapted to attain the ends and advantages mentioned as well as those that are inherent therein. The particular embodiments disclosed above are illustrative only, as the present invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular illustrative embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the present invention. Also, the terms in the claims have their plain, ordinary meaning unless otherwise explicitly and clearly defined by the patentee. The indefinite articles “a” or “an,” as used in the claims, are each defined herein to mean one or more than one of the element that it introduces.
A number of examples have been described. Nevertheless, it will be understood that various modifications can be made. Accordingly, other implementations are within the scope of the following claims.
This application claims priority to U.S. Provisional Application No. 62/924,457 filed Oct. 22, 2019 entitled “Anomaly Detection in Pipelines and Flowlines” by Justin Alan Ward, Alexey Lukyanov, Ashley Sean Kessel, Alexander P. Jones, Bradley Bennett Burt, and Nathan Rice.