The extraction and production of oil and gas from a well, or from an oil and gas field composed of at least one well, is a complex process. Over the life cycle of the oil and gas field, many decisions are taken in order to meet both short- and long-term goals and to extend the life cycle of each well. In general, optimization of an oil and gas field seeks to maximize hydrocarbon recovery while minimizing cost, wherein cost is accrued through the allocation of resources and energy. Additionally, oil and gas field optimization seeks to mitigate the production of process byproducts, such as water or acidic gases like CO2 and H2S.
In other words, optimization refers to the various activities of measuring, modelling, and taking actions to augment productivity of an oil and gas field. Optimization activities may include, or affect: field exploration, subsurface modelling, preliminary reservoir simulations, well-bore trajectory planning, and completions.
A completed and producing oil and gas field is composed of many components and
sub-processes, both above and below the surface of the Earth. A plurality of oil and gas field devices are disposed at various locations throughout the oil and gas field. These devices include sensors and controllers which monitor and govern the behavior of the components and sub-processes of the oil and gas field. The productivity of the oil and gas field is directly affected, and may be altered, by the devices. Generally, complex interactions between oil and gas field components and sub-processes exist such that configuring field devices for optimal production is a difficult and laborious task. Further, the state and behavior of an oil and gas field are transient over its lifetime, requiring continual changes to the field devices to enhance production.
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
One or more embodiments disclosed herein generally relate to a method to optimize an oil and gas field. The method includes receiving oil and gas field device data from a plurality of devices disposed throughout the oil and gas field, where at least one device of the plurality of devices monitors oil and gas production. The method further includes processing, by a computer processor, the device data to determine optimal inflow control valve (ICV) and choke valve settings, adjusting the ICV and choke valve settings to the optimal ICV and choke valve settings, and validating that the optimal ICV and choke valve settings optimize oil and gas production with the at least one device that monitors oil and gas production.
One or more embodiments disclosed herein generally relate to a non-transitory computer readable medium storing instructions executable by a computer processor. The instructions include functionality for receiving oil and gas field device data from a plurality of devices disposed throughout an oil and gas field, where at least one device of the plurality of devices monitors oil and gas production, processing the device data to determine optimal inflow control valve (ICV) and choke valve settings, and returning the optimal ICV and choke valve settings.
One or more embodiments disclosed herein generally relate to a system. The system includes an oil and gas field, a plurality of devices disposed throughout the oil and gas field, where at least one device from the plurality of devices is configured to measure oil and gas production of the oil and gas field. The plurality of devices includes, at least, a plurality of inflow control valves (ICVs), and a plurality of choke valves. The system further includes a computer communicably connected to the plurality of devices. The computer includes one or more computer processors and a non-transitory computer readable medium storing instructions executable by a computer processor. The instructions include functionality for receiving oil and gas field device data from the plurality of devices, processing the device data to determine optimal inflow control valve (ICV) and choke valve settings, adjusting the ICV and choke valve settings to the optimal ICV and choke valve settings, and validating that the optimal ICV and choke valve settings optimize oil and gas production with the at least one device that monitors oil and gas production.
Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.
Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a device” includes reference to one or more of such devices.
Terms such as “approximately,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
It is to be understood that one or more of the steps shown in the flowchart may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope disclosed herein should not be considered limited to the specific arrangement of steps shown in the flowchart.
Although multiple dependent claims are not introduced, it would be apparent to one of ordinary skill that the subject matter of the dependent claims of one or more embodiments may be combined with other dependent claims.
In accordance with one or more embodiments, FIG. 1 depicts a pipeline (100) of an oil and gas field. For clarity, the pipeline (100) is divided into three sections; namely, a subsurface (102) section, a tree (104) section, and a flowline (106) section. It is emphasized that pipelines (100) and other components of wells and, more generally, oil and gas fields may be configured in a variety of ways. As such, one with ordinary skill in the art will appreciate that the simplified view of FIG. 1 does not limit the scope of the present disclosure.
Turning to the tree (104) section of FIG. 1, the tree (104) generally includes an assembly of valves and gauges disposed at the wellhead to monitor and control the flow of fluid exiting the well.
Also shown in FIG. 1 are additional oil and gas field devices disposed along the pipeline (100).
Turning to the flowline (106) section, the flowline (106) transports (108) the fluid from the well to a storage or processing facility (not shown). A choke valve (119) is disposed along the flowline (106). The choke valve (119) is used to control flow rate and reduce pressure for processing the extracted fluid at a downstream processing facility. In particular, effective use of the choke valve (119) prevents damage to downstream equipment and promotes longer periods of production without shut-down or interruptions. The choke valve (119) is bordered by an upstream pressure transducer (115) and a downstream pressure transducer (117) which monitor the pressure of the fluid entering and exiting the choke valve (119), respectively. The flowline (106) shown in FIG. 1 is simplified for illustrative purposes.
The various valves, pressure gauges and transducers, sensors, and flow meters depicted in FIG. 1 form part of a plurality of oil and gas field devices disposed throughout the oil and gas field.
The plurality of oil and gas field devices may be distributed, local to the sub-processes and associated components, global, connected, etc. The devices may be of various control types, such as a programmable logic controller (PLC) or a remote terminal unit (RTU). For example, a programmable logic controller (PLC) may control valve states, pipe pressures, warning alarms, and/or pressure releases throughout the oil and gas field. In particular, a programmable logic controller (PLC) may be a ruggedized computer system with functionality to withstand vibrations, extreme temperatures, wet conditions, and/or dusty conditions, for example, around a pipeline (100). With respect to an RTU, an RTU may include hardware and/or software, such as a microprocessor, that connects sensors and/or actuators using network connections to perform various processes in the automation system. As such, a distributed control system may include various autonomous controllers (such as remote terminal units) positioned at different locations throughout the oil and gas field to manage operations and monitor sub-processes. Likewise, a distributed control system may include no single centralized computer for managing control loops and other operations.
In accordance with one or more embodiments, oil and gas field devices, like those shown in FIG. 1, collect and transmit data characterizing the state of the oil and gas field and its sub-processes.
In accordance with one or more embodiments, data from the oil and gas field devices are processed with a machine-learned model to determine the optimal ICV (101) and choke valve (119) settings for the oil and gas field. Machine learning, broadly defined, is the extraction of patterns and insights from data. The phrases “artificial intelligence”, “machine learning”, “deep learning”, and “pattern recognition” are often conflated, interchanged, and used synonymously throughout the literature. This ambiguity arises because the field of “extracting patterns and insights from data” was developed simultaneously and disjointedly among a number of classical arts like mathematics, statistics, and computer science. For consistency, the term machine learning, or machine-learned, will be adopted herein; however, one skilled in the art will recognize that the concepts and methods detailed hereafter are not limited by this choice of nomenclature.
Machine-learned model types may include, but are not limited to, neural networks, random forests, generalized linear models, and Bayesian regression. Further, as defined herein, machine learning may include algorithmic search methods and optimization methods such as a line search or the genetic algorithm. Machine-learned model types are usually associated with additional “hyperparameters” which further describe the model. For example, hyperparameters providing further detail about a neural network may include, but are not limited to, the number of layers in the neural network, choice of activation functions, inclusion of batch normalization layers, and regularization strength. The selection of hyperparameters surrounding a model is referred to as selecting the model “architecture”. Generally, multiple model types and associated hyperparameters are tested, and the model type and hyperparameters that yield the greatest predictive performance on a hold-out set of data are selected.
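By way of a purely illustrative, non-limiting sketch, the hold-out-based architecture selection described above may be expressed as follows. The synthetic data, the polynomial model family, and the candidate degrees (standing in for hyperparameters) are all hypothetical and are not part of the disclosed method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for historical field data: a response that varies
# nonlinearly with a single hypothetical valve setting x in [0, 1].
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.normal(size=x.size)

# Reserve a hold-out portion of the data for model selection.
x_train, y_train = x[:150], y[:150]
x_hold, y_hold = x[150:], y[150:]

def fit_and_score(degree):
    """Fit a polynomial of the given degree (the 'hyperparameter')
    and return its mean-squared error on the hold-out set."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_hold)
    return np.mean((pred - y_hold) ** 2)

# Try each candidate architecture; keep the best hold-out performer.
candidate_degrees = [1, 3, 5, 7]
scores = {d: fit_and_score(d) for d in candidate_degrees}
best_degree = min(scores, key=scores.get)
```

In practice the candidates would span model types (neural network, random forest, etc.) as well as hyperparameters, but the selection logic — fit each candidate, score it on held-out data, keep the best — is the same.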
As noted, the objective of the machine-learned model is to determine the optimal settings for ICVs (101) and choke valves (119) in an oil and gas field. In accordance with one or more embodiments, this process is depicted in FIG. 2.
As seen, oil and gas field device data (202) are collected from the plurality of devices of the oil and gas field. The device data (202) may include measurements of temperature, pressure, percent water cut (% WC), and gas-to-oil ratio (GOR) from one or more multiphase flow meters (123) disposed throughout the oil and gas field. Likewise, subsurface measurements, such as temperature and pressure, may be collected and received from a permanent downhole monitoring system (PDHMS) (124). The device data (202) may further include frequency, speed, pressure, and temperature measurements from one or more electrical submersible pumps (ESPs), pressure readings from a plurality of pressure transducers (115, 117), and pressure, temperature, and valve states at the tree (104) section. Additionally, the device data (202) include the current settings of the ICVs (101) and choke valves (119) of the oil and gas field. Finally, the device data (202) include measurements of oil and gas production such that it may be determined when oil and gas production has been optimized. One with ordinary skill in the art will appreciate that additional field devices may be employed in an oil and gas field and that additional associated device data (202) may be collected without departing from the scope of this disclosure.
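For illustration only, a single device reading of the kind enumerated above might be represented as a simple record. The schema, field names, and units below are hypothetical and do not correspond to any particular vendor's telemetry format:

```python
from dataclasses import dataclass
import time

@dataclass
class DeviceReading:
    """One timestamped reading from one oil and gas field device."""
    device_id: str      # e.g. a multiphase flow meter or PDHMS tag
    timestamp: float    # epoch seconds at which the reading was taken
    measurements: dict  # measurement name -> numeric value

# A hypothetical multiphase-flow-meter reading.
reading = DeviceReading(
    device_id="MPFM-01",
    timestamp=time.time(),
    measurements={
        "temperature_degC": 71.3,
        "pressure_kPa": 9800.0,
        "water_cut_pct": 12.5,
        "gas_oil_ratio": 640.0,
    },
)
```

A stream of such records, together with current ICV and choke valve settings and production measurements, would constitute the device data (202).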
In accordance with one or more embodiments, and as shown in FIG. 2, the device data (202) may be pre-processed (204) before being supplied to the machine-learned model (206).
The pre-processed (204) device data (202), which may be identical to the original device data (202) or contain derived features or a sub-selection of features, is processed by the machine-learned model (206). The machine-learned model (206) is configured to output optimal ICV (208) and choke valve (210) settings. The optimal settings are those which, if applied, will result in the optimal oil and gas production as determined by the machine-learned model (206).
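A minimal sketch of such a pre-processing (204) step is given below. The function name, the feature names, and the derived pressure-drop feature are invented for illustration; the step retains a sub-selection of features, standardizes them, and derives a new feature, consistent with the description above:

```python
import numpy as np

def preprocess(raw, keep):
    """Standardize each retained column to zero mean, unit variance.

    raw  : dict of feature name -> 1-D numpy array of readings
    keep : feature names to retain (the sub-selection); all other
           raw features are dropped before modelling.
    """
    out = {}
    for name in keep:
        col = raw[name]
        std = col.std()
        out[name] = (col - col.mean()) / std if std > 0 else col - col.mean()
    # Example derived feature: pressure drop across the choke valve,
    # computed from upstream/downstream transducer readings.
    if {"p_upstream", "p_downstream"} <= raw.keys():
        out["dp_choke"] = raw["p_upstream"] - raw["p_downstream"]
    return out

raw = {
    "p_upstream": np.array([10.0, 11.0, 12.0]),
    "p_downstream": np.array([7.0, 7.5, 8.0]),
    "ambient_noise": np.array([0.1, 0.2, 0.3]),  # dropped by sub-selection
}
features = preprocess(raw, keep=["p_upstream"])
```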
As shown in FIG. 3, in accordance with one or more embodiments, the machine-learned model (206) comprises a trained machine-learned model A (305) embedded within an optimization wrapper (308). Mathematically, the optimization may be stated as

{right arrow over (x)}*=argmax{right arrow over (x)}ƒ({right arrow over (x)}|D) subject to constraints, (1),
where the machine-learned model A (305), represented by the function ƒ, is maximized with respect to the ICV (101) and choke valve (119) settings subject to oil and gas field constraints. One with ordinary skill in the art will appreciate that maximization and minimization may be made equivalent through simple techniques such as negation. As such, the choice to represent the optimization as a maximization as shown in EQ. 1 does not limit the scope of the present disclosure. Whether done through minimization or maximization, the optimization wrapper (308) identifies the ICV (101) and choke valve (119) values which optimize oil and gas production according to the trained machine-learned model A (305). An oil and gas field may be subject to constraints, such as safety limits imposed on various devices and sub-processes of an oil and gas field. For example, it may be determined that in order for an oil and gas field to operate safely, pressure, as measured by a given field device, should not exceed a prescribed value. In the embodiment described by FIG. 3, the optimization wrapper (308) may employ an intelligent search method, denoted machine-learned model B (307), to propose candidate ICV (101) and choke valve (119) settings.
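A minimal, non-limiting sketch of such a constrained maximization follows. The production surrogate standing in for machine-learned model A, the pressure surrogate, the safety limit, and the use of random search as the wrapper are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def production_model(icv, choke):
    """Invented stand-in for the trained machine-learned model A:
    predicted production as a function of ICV and choke openings
    in [0, 1]. The functional form is purely illustrative."""
    return icv * (1.0 - 0.5 * icv) + 0.8 * choke * (1.0 - choke)

def pressure(icv, choke):
    """Invented surrogate for a monitored line pressure."""
    return 5.0 + 4.0 * icv - 3.0 * choke

P_MAX = 7.0  # illustrative safety limit on the monitored pressure

# Optimization wrapper: random search over settings, discarding any
# candidate that violates the field constraint (EQ. 1 style).
best, best_val = None, -np.inf
for _ in range(5000):
    icv, choke = rng.uniform(0.0, 1.0, 2)
    if pressure(icv, choke) > P_MAX:  # infeasible candidate
        continue
    val = production_model(icv, choke)
    if val > best_val:
        best, best_val = (icv, choke), val
best_icv, best_choke = best
```

A production implementation would replace the random search with a more sample-efficient method, but the pattern — evaluate the trained model inside a constraint-aware search — is the same.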
Intelligent search methods suitable for the machine-learned model B (307) may include a genetic algorithm, a Bayesian search, or a Gaussian process. For example, while a full description of a Gaussian process exceeds the scope of this disclosure, it may simply be said that a Gaussian process is a machine-learned method which, in the present case, may be used to construct a relationship between oil and gas production and the ICV (101) and choke valve (119) settings given the remaining device data (202). Such a relationship may be mathematically described as
{right arrow over (y)}=ƒ({right arrow over (x)}|D)˜N(μ({right arrow over (x)}), σ²({right arrow over (x)})), (2),
where {right arrow over (y)} is a vector of quantities indicating oil and gas production, {right arrow over (x)} is a vector of ICV (101) and choke valve (119) settings, and D is the remaining pre-processed device data (202). The output {right arrow over (y)} of a Gaussian process for a given input {right arrow over (x)} will follow a normal distribution with a mean value and a variance. Because the outputs of a Gaussian process follow a normal distribution, the Gaussian process naturally lends itself to uncertainty quantification. As such, the domain of inputs {right arrow over (x)} may be intelligently searched to discover the optimal outputs {right arrow over (y)} within the bounds of uncertainty.
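A minimal numerical sketch of EQ. 2 is given below, computing the Gaussian-process posterior mean and variance over a one-dimensional setting. The observed (setting, production) pairs, the squared-exponential kernel, and its length scale are all invented for illustration:

```python
import numpy as np

def rbf(a, b, length=0.2):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Invented observations: x stands in for a single valve setting,
# y for the measured production at that setting.
x_obs = np.array([0.0, 0.3, 0.5, 0.9])
y_obs = np.array([0.1, 0.7, 0.9, 0.2])

noise = 1e-4
K = rbf(x_obs, x_obs) + noise * np.eye(x_obs.size)

# Query a grid of candidate settings.
x_new = np.linspace(0.0, 1.0, 101)
K_s = rbf(x_new, x_obs)

# GP posterior mean and variance (zero prior mean, unit prior variance).
alpha = np.linalg.solve(K, y_obs)
mean = K_s @ alpha
var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)

# The variance quantifies uncertainty; here the next trial setting is
# simply the grid point with the best posterior mean.
proposal = x_new[np.argmax(mean)]
```

The posterior variance is what makes the uncertainty-aware search possible: an acquisition rule can trade off high predicted production against high uncertainty when choosing the next settings to try.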
Once selected, the new ICV (101) and choke valve (119) settings are applied and used in the oil and gas field. This process is repeated until the optimal settings have been discovered. Again, like the previously described embodiments, the search may be subject to oil and gas field constraints.
In accordance with one or more embodiments, the procedures depicted in FIGS. 2 and 3 are repeated, continuously or periodically, over the life of the oil and gas field such that the ICV (101) and choke valve (119) settings track the transient state of the field.
The process of using the device data (202) to determine the ICV (101) and choke valve (119) settings which optimize the oil and gas production of an oil and gas field is summarized in the flow chart of FIG. 4.
While the various blocks in FIG. 4 are presented and described sequentially, one of ordinary skill in the art will appreciate that some or all of the blocks may be executed in different orders, may be combined or omitted, and some or all of the blocks may be executed in parallel.
Embodiments of the present disclosure may provide at least one of the following advantages. As noted, complex interactions between oil and gas field components and sub-processes exist such that configuring a plurality of devices for optimal production is a difficult and laborious task. For example, device settings may be adjusted to prevent or mitigate unwanted activities such as coning and cusping. Further, the state and behavior of oil and gas fields are transient over the lifetime of the constituent wells, requiring continual changes to the plurality of field devices to enhance production. By continuously receiving and processing device data (202) with a machine-learned model (206), the oil and gas field can be maintained in an optimal state, greatly reducing the cost and time required to identify optimal settings which change with the transient nature of the wells. This, in turn, improves oil and gas yield and prolongs the life of constituent wells. Further, optimal choke valve (119) settings serve to prevent damage to downstream equipment and promote longer periods of production without shut-downs or interruptions.
In accordance with one or more embodiments, one or more of the machine-learned models (206) discussed herein, such as the machine-learned model A (305), is a neural network. A diagram of a neural network is shown in FIG. 5. The neural network (500) is composed of nodes (502) connected by edges (504).
Nodes (502) and edges (504) carry additional associations. Namely, every edge is associated with a numerical value. The edge numerical values, or even the edges (504) themselves, are often referred to as “weights” or “parameters”. While training a neural network (500), numerical values are assigned to each edge (504). Additionally, every node (502) is associated with a numerical variable and an activation function. Activation functions are not limited to any functional class, but traditionally follow the form
A=ƒ(Σi∈(incoming)[(node value)i(edge value)i]), (3),
where i is an index that spans the set of “incoming” nodes (502) and edges (504) and ƒ is a user-defined function. Incoming nodes (502) are those that, when viewed as a graph (as in FIG. 5), have directed arrows that point to the node (502) whose value is being computed. Some common choices for the function ƒ include the linear function ƒ(x)=x, the sigmoid function ƒ(x)=1/(1+e−x),
and the rectified linear unit function ƒ(x)=max(0, x); however, many additional functions are commonly employed. Every node (502) in a neural network (500) may have a different associated activation function. Often, as a shorthand, activation functions are described by the function ƒ of which they are composed. That is, an activation function composed of a linear function ƒ may simply be referred to as a linear activation function without undue ambiguity.
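The activation of EQ. 3 can be sketched directly. A minimal illustration, with invented node values and edge weights:

```python
import math

def sigmoid(x):
    """Sigmoid activation: f(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear unit: f(x) = max(0, x)."""
    return max(0.0, x)

def node_activation(incoming_values, edge_weights, f):
    """EQ. 3: A = f(sum over incoming nodes of node_value * edge_value)."""
    s = sum(v * w for v, w in zip(incoming_values, edge_weights))
    return f(s)

# Two incoming nodes (values 0.5 and -1.0) with edge weights 2.0 and 1.0;
# the weighted sum is 0.5*2.0 + (-1.0)*1.0 = 0.0.
a_linear = node_activation([0.5, -1.0], [2.0, 1.0], lambda x: x)  # 0.0
a_relu = node_activation([0.5, -1.0], [2.0, 1.0], relu)           # 0.0
a_sig = node_activation([0.5, -1.0], [2.0, 1.0], sigmoid)         # 0.5
```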
When the neural network (500) receives an input, the input is propagated through the network according to the activation functions and incoming node (502) values and edge (504) values to compute a value for each node (502). That is, the numerical value for each node (502) may change for each received input. Occasionally, nodes (502) are assigned fixed numerical values, such as the value of 1, that are not affected by the input or altered according to edge (504) values and activation functions. Fixed nodes (502) are often referred to as “biases” or “bias nodes” (506), as displayed in FIG. 5.
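The propagation just described can be sketched as a layer-by-layer forward pass. The network shape, weights, and biases below are invented for illustration; biases play the role of the constant-value bias nodes:

```python
import numpy as np

def forward(x, weights, biases, act):
    """Propagate an input vector through a fully connected network.
    Each layer computes act(W @ h + b): the weighted sum over incoming
    nodes/edges (EQ. 3), with the bias added as a constant-1 node."""
    h = x
    for W, b in zip(weights, biases):
        h = act(W @ h + b)
    return h

relu = lambda z: np.maximum(z, 0.0)

# A tiny 2-3-1 network with fixed, illustrative edge values.
weights = [np.array([[1.0, -1.0],
                     [0.5, 0.5],
                     [0.0, 2.0]]),
           np.array([[1.0, 1.0, 1.0]])]
biases = [np.zeros(3), np.array([0.5])]

# Hidden layer: relu([-1.0, 1.5, 4.0]) = [0.0, 1.5, 4.0];
# output layer: relu(0.0 + 1.5 + 4.0 + 0.5) = 6.0.
out = forward(np.array([1.0, 2.0]), weights, biases, relu)
```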
In some implementations, the neural network (500) may contain specialized layers (505), such as a normalization layer, or additional connection procedures, like concatenation. One skilled in the art will appreciate that these alterations do not exceed the scope of this disclosure.
As noted, the training procedure for the neural network (500) comprises assigning values to the edges (504). To begin training, the edges (504) are assigned initial values. These values may be assigned randomly, assigned according to a prescribed distribution, assigned manually, or assigned by some other mechanism. Once edge (504) values have been initialized, the neural network (500) may act as a function, such that it may receive inputs and produce an output. As such, at least one input is propagated through the neural network (500) to produce an output. Recall that a given data set will be composed of inputs and associated target(s), where the target(s) represent the “ground truth”, or the otherwise desired output. The neural network (500) output is compared to the associated input data target(s). The comparison of the neural network (500) output to the target(s) is typically performed by a so-called “loss function”; although other names for this comparison function, such as “error function”, “misfit function”, and “cost function”, are commonly employed. Many types of loss functions are available, such as the mean-squared-error function; however, the general characteristic of a loss function is that the loss function provides a numerical evaluation of the similarity between the neural network (500) output and the associated target(s). The loss function may also be constructed to impose additional constraints on the values assumed by the edges (504), for example, by adding a penalty term, which may be physics-based, or a regularization term. Generally, the goal of a training procedure is to alter the edge (504) values to promote similarity between the neural network (500) output and associated target(s) over the data set. Thus, the loss function is used to guide changes made to the edge (504) values, typically through a process called “backpropagation”.
While a full review of the backpropagation process exceeds the scope of this disclosure, a brief summary is provided. Backpropagation consists of computing the gradient of the loss function over the edge (504) values. The gradient indicates the direction of change in the edge (504) values that results in the greatest change to the loss function. Because the gradient is local to the current edge (504) values, the edge (504) values are typically updated by a “step” in the direction indicated by the gradient. The step size is often referred to as the “learning rate” and need not remain fixed during the training process. Additionally, the step size and direction may be informed by previously seen edge (504) values or previously computed gradients. Such methods for determining the step direction are usually referred to as “momentum” based methods.
Once the edge (504) values have been updated, or altered from their initial values, through a backpropagation step, the neural network (500) will likely produce different outputs. Thus, the procedure of propagating at least one input through the neural network (500), comparing the neural network (500) output with the associated target(s) with a loss function, computing the gradient of the loss function with respect to the edge (504) values, and updating the edge (504) values with a step guided by the gradient, is repeated until a termination criterion is reached. Common termination criteria are: reaching a fixed number of edge (504) updates, otherwise known as an iteration counter; a diminishing learning rate; noting no appreciable change in the loss function between iterations; reaching a specified performance metric as evaluated on the data or a separate hold-out data set. Once the termination criterion is satisfied, and the edge (504) values are no longer intended to be altered, the neural network (500) is said to be “trained”.
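The training loop described above — propagate, compare with a loss function, compute the gradient, step, and check a termination criterion — can be sketched on the smallest possible "network": a single weight and bias fit by gradient descent under a mean-squared-error loss. The data, learning rate, and termination threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data set: inputs and targets (the "ground truth") for y = 3x - 1.
x = rng.uniform(-1.0, 1.0, 64)
t = 3.0 * x - 1.0

# Initialize the edge values (one weight, one bias) randomly.
w, b = rng.normal(), rng.normal()
lr = 0.1  # learning rate (the step size)

for step in range(500):
    y = w * x + b                    # propagate inputs through the model
    loss = np.mean((y - t) ** 2)     # mean-squared-error loss
    # Backpropagation: gradient of the loss over the edge values.
    grad_w = np.mean(2.0 * (y - t) * x)
    grad_b = np.mean(2.0 * (y - t))
    w -= lr * grad_w                 # step against the gradient
    b -= lr * grad_b
    if loss < 1e-8:                  # termination criterion
        break
```

For a real neural network the gradient is computed layer by layer through the chain rule rather than in closed form, but the loop structure is identical.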
While multiple embodiments using different machine-learned models (206) have been suggested, one skilled in the art will appreciate that this process, of determining the optimal ICV (101) and choke valve (119) settings, is not limited to the listed machine-learned models. Machine-learned models (206) such as a random forest, support vector machines, or non-parametric methods such as K-nearest neighbors may be readily inserted into this framework and do not depart from the scope of this disclosure.
Embodiments may be implemented on a computer system. FIG. 6 is a block diagram of a computer (602) used to provide computational functionalities associated with the described algorithms, methods, functions, processes, flows, and procedures, according to one or more embodiments.
The computer (602) can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. In some implementations, one or more components of the computer (602) may be configured to operate within environments, including cloud-computing-based, local, global, or other environment (or a combination of environments).
At a high level, the computer (602) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (602) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).
The computer (602) can receive requests over network (630) from a client application (for example, executing on another computer (602)) and respond to the received requests by processing said requests in an appropriate software application. In addition, requests may also be sent to the computer (602) from internal users (for example, from a command console or by another appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.
Each of the components of the computer (602) can communicate using a system bus (603). In some implementations, any or all of the components of the computer (602), both hardware or software (or a combination of hardware and software), may interface with each other or the interface (604) (or a combination of both) over the system bus (603) using an application programming interface (API) (612) or a service layer (613) (or a combination of the API (612) and service layer (613)). The API (612) may include specifications for routines, data structures, and object classes. The API (612) may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer (613) provides software services to the computer (602) or other components (whether or not illustrated) that are communicably coupled to the computer (602). The functionality of the computer (602) may be accessible for all service consumers using this service layer. Software services, such as those provided by the service layer (613), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or another suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer (602), alternative implementations may illustrate the API (612) or the service layer (613) as stand-alone components in relation to other components of the computer (602) or other components (whether or not illustrated) that are communicably coupled to the computer (602). Moreover, any or all parts of the API (612) or the service layer (613) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.
The computer (602) includes an interface (604). Although illustrated as a single interface (604) in FIG. 6, two or more interfaces (604) may be used according to particular needs, desires, or particular implementations of the computer (602). The interface (604) is used by the computer (602) for communicating with other systems that are connected to the network (630).
The computer (602) includes at least one computer processor (605). Although illustrated as a single computer processor (605) in FIG. 6, two or more computer processors (605) may be used according to particular needs, desires, or particular implementations of the computer (602). Generally, the computer processor (605) executes instructions and manipulates data to perform the operations of the computer (602).
The computer (602) also includes a memory (606) that holds data for the computer (602) or other components (or a combination of both) that can be connected to the network (630). The memory may be a non-transitory computer readable medium. For example, memory (606) can be a database storing data consistent with this disclosure. Although illustrated as a single memory (606) in FIG. 6, two or more memories (606), of the same or a combination of types, can be used according to particular needs, desires, or particular implementations of the computer (602).
The application (607) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (602), particularly with respect to functionality described in this disclosure. For example, application (607) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (607), the application (607) may be implemented as multiple applications (607) on the computer (602). In addition, although illustrated as integral to the computer (602), in alternative implementations, the application (607) can be external to the computer (602).
There may be any number of computers (602) associated with, or external to, a computer system containing computer (602), wherein each computer (602) communicates over network (630). Further, the terms “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (602), or that one user may use multiple computers (602).
Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.