In the petroleum industry, hydrocarbons are located in reservoirs far beneath the surface of the Earth. Wells are drilled into these reservoirs to access and produce the hydrocarbons. In the production stage, hydrocarbons flow from the reservoir up through the wellbore to a wellhead at a surface location. The wellhead may include a series of valves, fittings, and connectors that regulate the flow of hydrocarbons from the reservoir to the surface and onto a series of processing facilities along a production line. Often, a hydrocarbon production rate is measured along the production pipeline or inside the wellbore. A hydrocarbon production rate is a measure of the volume of oil or gas that a well produces over a given period and is a key indicator of the performance of an oil or gas field.
The production rate of a hydrocarbon well can vary over time due to several factors, including the characteristics of the reservoir, the well's completion design, the production technology used, and the operational practices. The production rate of a well is usually highest when the well is first brought into production and then gradually declines as the reservoir pressure and the amount of recoverable fluids decrease. Often, it is advantageous to generate a production decline curve for a production well to understand the rate at which the well's production is declining. A production decline curve is a graphical representation of the decline in production rate of a hydrocarbon well over time. It is a useful tool for predicting the future performance of a well and estimating its ultimate recovery.
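As a minimal, non-limiting sketch (the rate and decline values below are hypothetical, not taken from any actual field), the exponential form such a decline curve often takes can be tabulated as follows:

```python
import math

def exponential_decline(qi, d, t):
    """Rate after t periods for an exponential decline curve,
    given an initial rate qi and a nominal decline fraction d
    per period (illustrative values only)."""
    return qi * math.exp(-d * t)

# Hypothetical well: 1,000 bbl/day initial rate, 15%/year nominal decline
rates = [exponential_decline(1000.0, 0.15, t) for t in range(5)]
```

Plotting such rates against time yields the production decline curve described above.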
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
In general, in one aspect, embodiments disclosed herein relate to a computer-implemented method including obtaining a well production dataset for a plurality of wells penetrating a reservoir, where the well production dataset includes a well production parameter set for each of the plurality of wells, and partitioning the well production dataset into a plurality of training datasets based, at least in part, on a criterion. For each of the plurality of training datasets, the method further includes training a machine learning (“ML”) network to predict a future hydrocarbon production rate for a candidate well, where the training includes generating a production decline curve using the well production parameter set for each of the plurality of wells within the training dataset, fitting an empirical model to the production decline curve, and determining the future hydrocarbon production rate for the candidate well based, at least in part, on the empirical model. The method still further includes determining a drilling target, using the trained ML network, for a future production well based, at least in part, on evaluating the future hydrocarbon production rate for each candidate well.
In general, in one aspect, embodiments disclosed herein relate to a system including a computer system and a trained machine learning (“ML”) network. The computer system is configured to receive a well production dataset for a plurality of wells penetrating a reservoir, where the well production dataset comprises a well production parameter set for each of the plurality of wells, and partition the well production dataset into a plurality of training datasets based, at least in part, on a criterion. The ML network is configured to receive the plurality of training datasets and predict a future hydrocarbon production rate for a candidate well for each of the training datasets, and determine a drilling target for a future production well based, at least in part, on evaluating the future hydrocarbon production rates for each candidate well.
Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.
Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a production decline curve” includes reference to one or more of such production decline curves.
Terms such as “approximately,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
It is to be understood that one or more of the steps shown in the flowchart may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope disclosed herein should not be considered limited to the specific arrangement of steps shown in the flowchart.
Although multiple dependent claims are not introduced, it would be apparent to one of ordinary skill that the subject matter of the dependent claims of one or more embodiments may be combined with other dependent claims.
Embodiments disclosed herein relate to developing an empirical model, built on a decline rate analysis profile, for predicting well productivity performance (oil, gas, and water). The model is built on a set of historical data for the field based on individual well performance over time. The model is used to generate a decline curve for the specific field, which can be utilized to predict the performance of any upcoming new well over its entire life span. The model also acts as a reference against which individual well performance may be compared; in the case of any deviation, the well is marked as a sick (unhealthy) well and considered for remedial actions to restore its performance. The empirical model can also be used for many other beneficial purposes, such as economic evaluation of maintain-potential gain jobs, budget planning, resource allocation, workover planning, and well intervention job preparation and prioritization, as well as other purposes.
More specifically, one or more embodiments disclosed herein disclose methods and systems to determine an underperforming well and/or to predict a future hydrocarbon production rate for a candidate well. The methods include both training a ML network to form a trained ML network and using the trained ML network to determine expected hydrocarbon production rates for existing wells, future hydrocarbon production rates for candidate wells, and a drilling target based on the evaluation of the hydrocarbon production rates. Training the ML network includes obtaining a well production dataset for a plurality of wells penetrating a reservoir, wherein the well production dataset comprises a well production parameter set for each of the plurality of wells. The well production parameter set may include a hydrocarbon production rate history and information regarding the producing reservoir zone. The well production dataset may then be partitioned into a plurality of training datasets based, at least in part, on a criterion, which may include well type, well location, and a well length within the reservoir zone.
Each of the training datasets may be used to train a machine learning (ML) network to predict a future hydrocarbon production rate for a candidate well. The training may include generating a production decline curve using the well production parameter set for each of the plurality of wells within the training dataset, fitting an empirical model to the production decline curve, and determining the future hydrocarbon production rate for the candidate well based, at least in part, on the empirical model. A drilling target may also be determined using the trained ML network for a future production well based, at least in part, on evaluating the future hydrocarbon production rate for each candidate well. With the drilling target determined, the method continues with planning a wellbore path using a wellbore planning system based, at least in part, on the drilling target, and drilling a wellbore guided by the wellbore path using a drilling system.
In some embodiments, the ML network may also be used to identify under-performing wells based on the empirical model and to determine a remedial action for each under-performing well. Identifying an under-performing well may include identifying at least one of the plurality of wells within the training dataset that has a hydrocarbon production rate that underperforms the empirical model. The remedial actions for under-performing wells may include stimulating the under-performing well to restore production.
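One possible, non-limiting way to flag an under-performing well is to compare each observed rate against the empirical-model prediction; the fractional `tolerance` threshold, well names, and rate values below are illustrative assumptions:

```python
def flag_sick_wells(observed, predicted, tolerance=0.2):
    """Mark a well as 'sick' when its observed production rate
    falls more than a fractional tolerance below the rate the
    empirical model predicts for it."""
    return {well: rate < (1.0 - tolerance) * predicted[well]
            for well, rate in observed.items()}

# Hypothetical rates: well "A" underperforms its prediction by 50%
flags = flag_sick_wells({"A": 50.0, "B": 95.0}, {"A": 100.0, "B": 100.0})
```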
In some embodiments, the well production system (100) includes a wellbore (102), a well surface system (124), and a well control system (“control system”) (116). The area where the components of the well production system (100) are located is referred to as a wellsite. The wellbore (102) may include a bored hole that extends from the surface (108) into a target zone (120) of the reservoir (114). An upper end of the wellbore (102), terminating at or near the surface (108), may be referred to as the “up-hole” end of the wellbore (102), and a lower end of the wellbore, terminating in the hydrocarbon-bearing formation, may be referred to as the “down-hole” end of the wellbore (102). The wellbore (102) may facilitate the circulation of drilling fluids during drilling operations, the flow of hydrocarbon production (“production”) (e.g., oil and gas) from the reservoir (114) to the surface (108) during production operations, or the communication of monitoring devices (e.g., logging tools) into the hydrocarbon-bearing formation or the reservoir (114) during monitoring operations (e.g., during in situ logging operations).
Once the wellbore (102) has been drilled during a drilling operation, the wellbore (102) is prepared for production in a well completion process. The well completion process may include lining the wellbore (102) with a steel casing and cementing the wellbore (102) in place to prevent any leakage of production fluids, perforating the casing and surrounding rock formation within the reservoir (114), installing completion equipment to facilitate the flow of hydrocarbons, and initiating production using artificial lift systems.
In some embodiments, the well surface system (124) includes a wellhead (134), flow regulating and measuring devices, a flowline (121) and various well production facilities (130). During production, hydrocarbons flow up the wellbore (102) to the wellhead (134) located at the surface (108), where the wellhead (134) serves as an interface between the wellbore (102) and the well surface system (124). The wellhead (134) may include a rigid structure installed at the “up-hole” end of the wellbore (102), at or near where the wellbore (102) terminates at the Earth's surface (108). The wellhead (134) may include structures for supporting (or “hanging”) casing and production tubing extending into the wellbore (102). The wellhead (134) may also include a series of valves and chokes used to control the flow of the fluids, blowout preventers (BOPs) and other safety devices for well control. A Christmas tree or “production tree” may be installed at the top of the wellhead (134) and includes a series of valves and fittings for directing the hydrocarbons from the wellbore (102) to the well production facilities (130) along a flowline (121). The flowline (121) includes a series of pipelines for channeling the produced hydrocarbons to production facilities (130) where the hydrocarbons may be separated, processed, and stored.
In some embodiments, the well surface system (124) includes flow regulating devices that are operable to control the flow of substances into and out of the wellbore (102). For example, the well surface system (124) may include one or more production valves (128) that are operable to control the flow of production. For example, a production valve (128) may be fully opened to enable unrestricted flow of production from the wellbore (102), the production valve (128) may be partially opened to partially restrict (or “throttle”) the flow of production from the wellbore (102), and the production valve (128) may be fully closed to fully restrict (or “block”) the flow of production from the wellbore (102) and through the well surface system (124).
A flowmeter (126) may be installed along the flowline (121) to measure the hydrocarbon flow rate. In some embodiments, the flowmeter (126) may be installed at the surface wellhead, but in other embodiments the flowmeter (126) may be installed within the wellbore (i.e., “downhole”) or between the wellhead and a production processing facility or storage tank (132). The method and location of flow rate measurement are not intended to limit the scope of the invention in any way.
The well control system (116) may control various operations of the well production system (100), such as well production operations, well completion operations, well maintenance operations, reservoir monitoring, assessment, and development operations. In some embodiments, the well control system (116) may include a computer system, such as the illustrated computer system (800).
Other parameters of a well production dataset may be recorded using the well control system (116) relating to a well type (e.g., vertical, highly-deviated, horizontal, monobore or multilateral), a well location, and a well length within a reservoir zone. The well type may also refer to the type of reservoir fluid being produced by the well, e.g., an oil, gas or water well. A well location may refer to the surface and horizontal location of the production zone, which is determined by an x-y geographical coordinate system. Other aspects of the reservoir (114) may be recorded by the well control system (116), such as the vertical thickness of the production layer and a reservoir zone identifier. In some embodiments, a well production system (100) may divide the reservoir into different zones or segments based on their intended use or reservoir characteristics in a process known as reservoir zoning. Each reservoir zone may have specific production characteristics such as field location (e.g., North and South Field), vertical thickness of the production zone, or other reservoir rock properties such as porosity, permeability, saturation, resistivity, hydrocarbon class, rock type, pressure, temperature and pay zone depth. When a well production system (100) includes multiple wellbores (102) each having different target zones (120), dividing the reservoir (114) into separate zones may aid in identifying the characteristics of the reservoir (114) more accurately when recording information using the well control system (116) for a particular wellbore (102).
The well control system (116) may record the hydrocarbon production rate from the reservoir zone of the wellbore (102). The well production system (100) may include multiple wellbores (102) producing from multiple reservoir zones, and the well control system (116) may record the hydrocarbon production rate for each of these wellbores (102). The full extent of the parameters of the wellbore (102), the reservoir (114), and the well production parameter set may be referred to as a well production dataset and may be recorded using the well control system (116). Using the well production dataset, a production decline curve may be generated for each zone within the reservoir (114), as discussed in more detail below.
Using the hydrocarbon production rate history for each well, a curve fitting application may be used to generate an empirical best-fitting equation to the data. Frequently, this best-fitting equation may be an exponential decline curve (206). As fluid is produced from a well or reservoir (114), the pressure in the reservoir surrounding the wellbore typically decreases and the rate of production typically decreases as well. The exponential decline curve (206) is based on the assumption that the rate of decline in production will be proportional to the amount of oil or gas remaining in the reservoir (114). An exponential decline curve (206) is typically used to forecast future production and to estimate the remaining reserves for a reservoir (114).
In other embodiments, the best-fitting equation may be a hyperbolic decline curve, a harmonic decline curve, or a combination of any of the three mentioned decline curves. An exponential decline curve (206) is the most common type of decline seen in conventional oil and gas reservoirs and signifies a steady decline in production over time. A hyperbolic decline curve may suggest a less permeable and tightly formed reservoir and may fluctuate over time. A harmonic decline is most often seen in shale horizontal wells characterized by a very steep initial decline after production.
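The three decline forms mentioned above are commonly expressed through the Arps family of equations; the sketch below is one conventional formulation, and all numeric inputs are illustrative rather than field data:

```python
import math

def arps_rate(qi, di, t, b=0.0):
    """Arps decline family: b = 0 gives exponential decline,
    b = 1 gives harmonic decline, and 0 < b < 1 gives
    hyperbolic decline. qi is the initial production rate
    and di the initial nominal decline rate."""
    if b == 0.0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)
```

For the same qi and di, the hyperbolic and harmonic forms decline more slowly at late time than the exponential form.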
Once a sick well (408) has been identified, a remedial action for the sick well (408) may be determined to restore the hydrocarbon production rate to expected levels. The remedial action may include both diagnostic work and economic analysis to propose the most economical remedial actions. The remedial actions may include sidetracking the well, or drilling a secondary wellbore away from the original well to bypass an unusable section of wellbore or a certain geologic feature. The remedial actions may also include stimulating the well with acid treatments to increase production, isolating water entry zones, or adding more reservoir channels to better perforate the reservoir (114). Furthermore, identifying sick wells (408) may aid in determining a drilling target by avoiding similar production wells having the same characteristics as the sick well (408).
Any curve fitting method, including regression analysis, may be used to determine a decline curve that fits the production data. For example, a best-fitting equation may be fit by forming and minimizing a least-squares cost function. Existing methods of determining an exponential decline curve (206) are typically generic and created only for the entire reservoir (114) within a well production system (100). However, these methods of predicting future hydrocarbon production rates may be flawed because different zones within a reservoir (114) have different characteristics and may be characterized by differing exponential decline curves (206). Furthermore, traditional methods are often time consuming and must be computed numerous times using various curve fitting applications over a well's lifespan.
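As one concrete, non-limiting example of such a regression, an exponential decline curve can be fit by ordinary least squares after taking logarithms of the rates (the helper name below is an illustrative assumption, not part of any particular curve fitting application):

```python
import math

def fit_exponential_decline(times, rates):
    """Least-squares fit of q(t) = qi * exp(-d * t) by linear
    regression of ln(q) on t, which minimizes a log-space
    least-squares cost function."""
    n = len(times)
    ys = [math.log(q) for q in rates]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
    den = sum((t - t_mean) ** 2 for t in times)
    slope = num / den
    qi = math.exp(y_mean - slope * t_mean)
    return qi, -slope  # initial rate qi and decline constant d
```

Because ln q(t) = ln qi − d·t is linear in t, the least-squares slope and intercept recover d and qi directly.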
Partitioning the wells manually by reservoir characteristics and well characteristics or type prior to fitting empirical best-fit models is time-consuming and prone to error and human bias. As such, the results may be non-unique and non-repeatable. Consequently, it is desirable to have systems and methods that overcome these challenges. The use of artificial intelligence (“AI”) machine learning (“ML”) networks as disclosed herein provides an improvement over conventional manual methods for these tasks.
Machine learning (ML), broadly defined, is the extraction of patterns and insights from data. The phrases “artificial intelligence,” “machine learning,” “deep learning,” and “pattern recognition” are often conflated, interchanged, and used synonymously. This ambiguity arises because the field of “extracting patterns and insights from data” was developed simultaneously and disjointedly among a number of classical arts like mathematics, statistics, and computer science. For consistency, the term machine learning, or machine learned, will be adopted herein. However, one skilled in the art will recognize that the concepts and methods detailed hereafter are not limited by this choice of nomenclature.
In some embodiments, the ML model may be a recurrent convolutional neural network (RCNN), such as the Pixel convolutional neural network (PixelCNN). An RCNN may be more readily understood as a specialized neural network (NN) and, from there, as a specialized convolutional neural network (CNN). Thus, a cursory introduction to an NN and a CNN are provided herein. However, note that many variations of an NN and CNN exist. Therefore, one of ordinary skill in the art will recognize that any variation of an NN or CNN (or any other ML model) may be employed without departing from the scope of this disclosure. Further, it is emphasized that the following discussions of an NN and CNN are basic summaries and should not be considered limiting.
An NN 500 will have at least two layers 505, where the first layer 508 is the “input layer” and the last layer 514 is the “output layer.” Any intermediate layer 510, 512 is usually described as a “hidden layer.” An NN 500 may have zero or more hidden layers 510, 512. An NN 500 with at least one hidden layer 510, 512 may be described as a “deep” neural network or “deep learning method.” In general, an NN 500 may have more than one node 502 in the output layer 514. In these cases, the neural network 500 may be referred to as a “multi-target” or “multi-output” network.
Nodes 502 and edges 504 carry associations. Namely, every edge 504 is associated with a numerical value. The edge numerical values, or even the edges 504 themselves, are often referred to as “weights” or “parameters.” While training an NN 500, a process that will be described below, numerical values are assigned to each edge 504. Additionally, every node 502 is associated with a numerical value and may also be associated with an activation function. Activation functions are not limited to any functional class, but traditionally follow the form

A = ƒ(Σᵢ (node value)ᵢ (edge value)ᵢ),

where i is an index that spans the set of “incoming” nodes 502 and edges 504 and ƒ is a user-defined function. Incoming nodes 502 are those that, when viewed as a graph, feed into the node 502 of interest. Common choices for ƒ include the linear function ƒ(x)=x, the sigmoid function ƒ(x)=1/(1+e^(−x)), and the rectified linear unit function ƒ(x)=max(0, x); however, many additional functions are commonly employed. Every node 502 in an NN 500 may have a different associated activation function. Often, as a shorthand, activation functions are described by the function ƒ by which it is composed. That is, an activation function composed of a linear function ƒ may simply be referred to as a linear activation function without undue ambiguity.
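As an illustrative, non-limiting sketch, common activation functions such as the linear, sigmoid, and rectified linear unit functions may be written as:

```python
import math

def linear(x):
    """Linear activation: f(x) = x."""
    return x

def sigmoid(x):
    """Sigmoid activation: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear unit: f(x) = max(0, x)."""
    return max(0.0, x)
```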
When the NN 500 receives an input, the input is propagated through the network according to the activation functions and incoming node values and edge values to compute a value for each node 502. That is, the numerical value for each node 502 may change for each received input while the edge values remain unchanged. Occasionally, nodes 502 are assigned fixed numerical values, such as the value of 1. These fixed nodes 506 are not affected by the input or altered according to edge values and activation functions. Fixed nodes 506 are often referred to as “biases” or “bias nodes.”
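The forward propagation of a single node's value, including a fixed bias node contribution, can be sketched as follows (the function and argument names are illustrative assumptions):

```python
def node_value(incoming_values, edge_weights, bias, activation):
    """Compute one node's value: the weighted sum of incoming
    node values over their edges, plus a fixed bias node
    contribution, passed through the activation function."""
    total = sum(v * w for v, w in zip(incoming_values, edge_weights))
    return activation(total + bias)
```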
In some implementations, the NN 500 may contain specialized layers 505, such as a normalization layer, pooling layer, or additional connection procedures, like concatenation. One skilled in the art will appreciate that these alterations do not exceed the scope of this disclosure.
The number of layers in an NN 500, choice of activation functions, inclusion of batch normalization layers, and regularization strength, among others, may be described as “hyperparameters” that are associated to the ML model. It is noted that in the context of ML, the regularization of a ML model refers to a penalty applied to the loss function of the ML model. The selection of hyperparameters associated to a ML model is commonly referred to as selecting the ML model “architecture.”
Once a ML model, such as an NN 500, and associated hyperparameters have been selected, the ML model may be trained. To do so, M training pairs may be provided to the NN 500, where M is an integer greater than or equal to one. The variable m maintains a count of the M training pairs. As such, m is an integer between 1 and M inclusive of 1 and M where m is the current training pair of interest. For example, if M=2, the two training pairs include a first training pair and a second training pair each of which may be generically denoted an mth training pair. In general, each of the M training pairs includes an input and an associated target output. Each associated target output represents the “ground truth,” or the otherwise desired output upon processing the input. During training, the NN 500 processes at least one input from an mth training pair in the form of an mth training geological data patch to produce at least one output. Each NN output is then compared to the associated target output from the mth training pair in the form of an mth training feature image patch.
The comparison of the NN output to the associated target output from the mth training pair is typically performed by a “loss function.” Other names for this comparison function include an “error function,” “misfit function,” and “cost function.” Many types of loss functions are available, such as the log-likelihood function. However, the general characteristic of a loss function is that the loss function provides a numerical evaluation of the similarity between the NN output and the associated target output from the mth training pair. The loss function may also be constructed to impose additional constraints on the values assumed by the edges 504. For example, a penalty term, which may be physics-based, or a regularization term may be added. Generally, the goal of a training procedure is to alter the edge values to promote similarity between the NN output and associated target output for most, if not all, of the M training pairs. Thus, the loss function is used to guide changes made to the edge values. This process is typically referred to as “backpropagation.”
While a full review of the backpropagation process exceeds the scope of this disclosure, a brief summary is provided. Backpropagation consists of computing the gradient of the loss function over the edge values. The gradient indicates the direction of change in the edge values that results in the greatest change to the loss function. Because the gradient is local to the current edge values, the edge values are typically updated by a “step” in the direction indicated by the gradient. The step size is often referred to as the “learning rate” and need not remain fixed during the training process. Additionally, the step size and direction may be informed by previous edge values or previously computed gradients. Such methods for determining the step direction are usually referred to as “momentum” based methods.
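A toy, single-edge illustration of this process: the network output is w·x, the loss is the squared error, and each update steps the edge value against its gradient, scaled by the learning rate. All values are hypothetical:

```python
def train_single_edge(x, target, w=0.0, learning_rate=0.1, iterations=100):
    """Gradient descent on one edge value w for the model
    output = w * x with loss (w * x - target) ** 2; the
    gradient of the loss with respect to w is
    2 * x * (w * x - target)."""
    for _ in range(iterations):
        gradient = 2.0 * x * (w * x - target)
        w -= learning_rate * gradient  # step against the gradient
    return w
```

With a fixed learning rate the edge value converges geometrically toward the value that minimizes the loss.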
Once the edge values of the NN 500 have been updated through the backpropagation process, the NN 500 will likely produce different outputs than it did previously. Thus, the procedure of propagating at least one input from an mth training pair through the NN 500, comparing the NN output with the associated target output from the mth training pair with a loss function, computing the gradient of the loss function with respect to the edge values, and updating the edge values with a step guided by the gradient is repeated until a termination criterion is reached. Common termination criteria include, but are not limited to, reaching a fixed number of edge updates (otherwise known as an iteration counter), reaching a diminishing learning rate, noting no appreciable change in the loss function between iterations, or reaching a specified performance metric as evaluated on the M training pairs or separate hold-out training pairs (often denoted “validation data”). Once the termination criterion is satisfied, the edge values are no longer altered and the neural network 500 is said to be “trained.”
Turning to a CNN, a CNN is similar to an NN 500 in that it can technically be graphically represented by a series of edges 504 and nodes 502 grouped to form layers 505. However, it is more informative to view a CNN as structural groupings of weights. Here, the term “structural” indicates that the weights within a group have a relationship, often a spatial relationship. CNNs are widely applied when the input also has a relationship. For example, the pixels of a seismic image have a spatial relationship where the value associated to each pixel is spatially dependent on the value of other pixels of the seismic image. Consequently, a CNN is an intuitive choice for processing geological data 415 that includes a seismic image and may include other spatially dependent data.
A structural grouping of weights is herein referred to as a “filter” or “convolution kernel.” The number of weights in a filter is typically much less than the number of inputs, where now, each input may refer to a pixel in an image. For example, a filter may take the form of a square matrix, such as a 3×3 or 7×7 matrix. In a CNN, each filter can be thought of as “sliding” over, or convolving with, all or a portion of the inputs to form an intermediate output or intermediate representation of the inputs which possess a relationship. The portion of the inputs convolved with the filter may be referred to as a “receptive field.” Like the NN 500, the intermediate outputs are often further processed with an activation function. Many filters of different sizes may be applied to the inputs to form many intermediate representations. Additional filters may be formed to operate on the intermediate representations creating more intermediate representations. This process may be referred to as a “convolutional layer” within the CNN. Multiple convolutional layers may exist within a CNN as prescribed by a user.
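The “sliding” of a filter over an input can be sketched as a valid-mode two-dimensional convolution. This is a minimal pure-Python version for illustration only; practical CNN implementations are far more general:

```python
def convolve2d(image, kernel):
    """Slide kernel over image (valid mode): each output value
    is the weighted sum of the kernel applied over one
    receptive field of the input."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh)
                           for dj in range(kw)))
        out.append(row)
    return out
```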
There is a “final” group of intermediate representations, wherein no filters act on these intermediate representations. In some instances, the relationship of the final intermediate representations is ablated, which is a process known as “flattening.” The flattened representation may be passed to an NN 500 to produce a final output. Note that, in this context, the NN 500 is considered part of the CNN.
Like an NN 500, a CNN is trained. The filter weights and the edge values of the internal NN 500, if present, are initialized and then determined using the M training pairs and backpropagation as previously described.
Training a ML network may be performed using a plurality of training datasets. In some embodiments, the well production dataset may be partitioned into a plurality of training datasets either manually or using an AI unsupervised clustering technique, such as, without limitation, K-means clustering, mean-shift clustering, density-based spatial clustering of applications with noise (DBSCAN), hierarchical clustering, agglomerative hierarchical clustering, and spectral clustering. Such unsupervised clustering techniques automatically separate wells into partitions containing similar wells. Alternatively, in other embodiments, the well production dataset may be partitioned into a plurality of training datasets based on one or more manually selected criteria. The criterion may include a well type, a well location, and a well length within the reservoir zone. The criterion may also be based, at least in part, on the specific reservoir zone within the reservoir (114).
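As an illustrative sketch of the manual partitioning described above, a dataset may be grouped into training datasets by a chosen criterion. The code below is not part of the claimed method; the well names, zone labels, and rate histories are hypothetical, and the criterion used is the producing reservoir zone:

```python
from collections import defaultdict

def partition_by_criterion(wells, criterion):
    """Partition a well production dataset into training datasets,
    grouping wells that share the same value of the chosen criterion
    (e.g., well type, well location, or reservoir zone)."""
    partitions = defaultdict(list)
    for well in wells:
        partitions[well[criterion]].append(well)
    return dict(partitions)

# Hypothetical wells, each with a "zone" attribute used as the criterion
wells = [
    {"name": "W-1", "zone": "A", "rates": [900, 760, 655]},
    {"name": "W-2", "zone": "B", "rates": [500, 470, 445]},
    {"name": "W-3", "zone": "A", "rates": [880, 745, 640]},
]
training_sets = partition_by_criterion(wells, "zone")
```

Each resulting partition (here, zones "A" and "B") would then serve as one training dataset for the ML network. An unsupervised technique such as K-means would replace the explicit `criterion` lookup with cluster labels learned from the well production parameters.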
Each of the training datasets may be input into a ML network to train the ML network to predict a past and future hydrocarbon production rate for a well, or a future hydrocarbon production rate for a candidate well. Training the ML network may or may not include generating an explicit production decline curve using the well production parameter set for each of the plurality of wells within the training dataset. In addition, training the ML network may include training the ML network to produce an uncertainty metric corresponding to each prediction. Such an uncertainty metric may indicate the expected error in the prediction. The metric may include, without limitation, a probability distribution, a standard deviation, or a variance. A lower decline rate uncertainty parameter suggests a higher confidence in the future hydrocarbon production rate. Comparing the two reservoir zone specific production decline curves (300, 350), the reservoir zone specific production decline curve (350) may produce a smaller uncertainty parameter because the spread of decline curves between its wells is smaller.
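One simple uncertainty metric of the kind described above is the standard deviation of the per-well decline rate parameters within a partition: a tighter spread suggests higher confidence in the partition's predicted decline. The following sketch is illustrative only, and the decline rate values are hypothetical:

```python
import math

def decline_rate_uncertainty(decline_rates):
    """Summarize the spread of per-well decline rates within a partition
    as a mean and (population) standard deviation; a smaller standard
    deviation suggests higher confidence in the partition's decline."""
    n = len(decline_rates)
    mean = sum(decline_rates) / n
    var = sum((d - mean) ** 2 for d in decline_rates) / n
    return mean, math.sqrt(var)

# Hypothetical per-well decline rates (per month) for one partition
mean, std = decline_rate_uncertainty([0.10, 0.12, 0.11, 0.09])
```

A partition whose wells decline at nearly the same rate (small `std`) corresponds to the tighter decline curve distribution described above, and hence a smaller decline rate uncertainty parameter.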
In Step 602, a well production dataset may be obtained for a plurality of wells penetrating a reservoir (114) having a well production parameter set for each of the plurality of wells. The well production parameter set may include a hydrocarbon production rate history and information regarding the producing reservoir zone.
In Step 604, the well production dataset may be partitioned into a plurality of training datasets based at least in part on a criterion. In some embodiments, the criterion includes a well type, a well location, or a well length within a reservoir zone. In some embodiments, the well location may include a particular reservoir zone from which the hydrocarbons are produced.
For each of the plurality of partitions a ML network may be trained to predict a future hydrocarbon production rate for a candidate well. Training the ML network is described in more detail in Steps 608-612.
In Step 608, a production decline curve may be generated using the well production parameter set for each of the plurality of wells within the training dataset.
In Step 610, an empirical model may be fit to the production decline curve. Fitting the empirical model to the production decline curve may include determining a decline rate parameter. Fitting the empirical model to the production decline curve may also include determining a decline rate uncertainty parameter. The decline rate uncertainty parameter may be used to perform an uncertainty analysis on the decline rate parameter. In some embodiments, the empirical model may include an exponential decline curve. In other embodiments, the empirical model may be any other best-fit empirical curve, including a hyperbolic decline curve, a harmonic decline curve, or a combination of any of the three mentioned decline curves. The empirical model may be fit to the data using any curve-fitting technique, including regression analysis.
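For the exponential decline case, the fit described in Step 610 can be sketched as a linear regression on the logarithm of the rate, since q(t) = qi · exp(−D·t) implies ln q = ln qi − D·t. The code below is an illustrative sketch using noise-free synthetic data (the qi and D values are hypothetical):

```python
import math

def fit_exponential_decline(times, rates):
    """Fit q(t) = qi * exp(-D * t) by linear least squares on ln(q),
    returning the initial rate qi and the decline rate parameter D."""
    logs = [math.log(q) for q in rates]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    # Slope and intercept of the least-squares line ln(q) = a + b*t
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = y_mean - slope * t_mean
    return math.exp(intercept), -slope  # qi, D

# Synthetic decline data: qi = 1000 (e.g., bbl/day), D = 0.1 per month
times = list(range(12))
rates = [1000 * math.exp(-0.1 * t) for t in times]
qi, D = fit_exponential_decline(times, rates)
```

With real production histories the data would be noisy, and the scatter of residuals about the fitted line could supply the decline rate uncertainty parameter described above.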
In Step 612, a future hydrocarbon production rate may be determined for the candidate well based, at least in part, on the empirical model. The empirical model gives the expected hydrocarbon production trend.
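Given fitted decline parameters, the forecast of Step 612 reduces to evaluating the empirical model at a future time. The sketch below is illustrative; the fitted parameter values (qi = 1000 bbl/day, D = 0.1 per month) and the 24-month horizon are hypothetical:

```python
import math

def forecast_rate(qi, D, t):
    """Forecast the expected hydrocarbon production rate at future time t
    from a fitted exponential decline model q(t) = qi * exp(-D * t)."""
    return qi * math.exp(-D * t)

# Hypothetical fitted parameters: qi = 1000 bbl/day, D = 0.1 per month
rate_24_months = forecast_rate(1000.0, 0.1, 24)
```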
Steps 606-612 are performed for each one of the plurality of training datasets creating a plurality of future hydrocarbon production rates for a plurality of candidate wells and a trained ML network.
In Step 614, a drilling target may be determined, using the trained ML network, for a future production well based, at least in part, on evaluating the future hydrocarbon production rate for each candidate well. Evaluating the future production rates for each candidate well may include an economic analysis to determine a drilling target that offers the highest rate of return. Once a drilling target is determined using the trained ML network, a wellbore path may be planned, using a wellbore planning system, based on the drilling target. A wellbore may be drilled, using a drilling system, guided by the wellbore path. The wellbore plan may be additionally informed by the best available information at the time of planning. This may include models encapsulating subterranean stress conditions, the trajectory of any existing wellbores (which may be desirable to avoid), and the existence of other drilling hazards, such as shallow gas pockets, over-pressure zones, and active fault planes.
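In its simplest form, the evaluation in Step 614 amounts to ranking candidate wells by their forecast metric and selecting the best. The sketch below is illustrative only; the candidate names and forecast rates are hypothetical, and a full implementation would rank on an economic measure such as rate of return rather than raw rate:

```python
def select_drilling_target(candidates):
    """Pick the candidate with the highest forecast value.
    candidates: dict mapping candidate-well name -> forecast metric
    (here a forecast rate; an economic metric could be used instead)."""
    return max(candidates, key=candidates.get)

# Hypothetical forecast future rates (bbl/day) for three candidates
candidates = {"C-1": 310.0, "C-2": 455.0, "C-3": 120.0}
target = select_drilling_target(candidates)
```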
The method (600) may further include using the trained ML network to identify an under-performing well based, at least in part, on the empirical model, and determining a remedial action for the under-performing well (408). Identifying the under-performing well may include using the trained ML network to identify at least one of the plurality of wells that demonstrates a hydrocarbon production rate that underperforms the empirical model. The remedial actions may include stimulating the under-performing well to restore production. The remedial actions may also include sidetracking, hydrofracturing, acidizing, performing scale removal procedures, or isolating water entry zones. Furthermore, identifying sick wells (408) may aid in determining a drilling target by avoiding production wells having the same characteristics as the sick well (408).
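Identification of under-performing wells, as described above, can be sketched as comparing each well's latest observed rate to the rate expected from the partition's fitted decline model. The code below is illustrative only; the well names, times, observed rates, fitted parameters, and the 80% tolerance threshold are all hypothetical:

```python
import math

def underperforming_wells(wells, qi, D, tolerance=0.8):
    """Flag wells whose latest observed rate falls below a fraction
    (tolerance) of the fitted decline model q(t) = qi * exp(-D * t)."""
    flagged = []
    for name, t, observed in wells:
        expected = qi * math.exp(-D * t)
        if observed < tolerance * expected:
            flagged.append(name)
    return flagged

# Hypothetical (name, months on production, latest rate bbl/day) tuples
wells = [("W-1", 10, 380.0), ("W-2", 10, 150.0), ("W-3", 10, 360.0)]
flagged = underperforming_wells(wells, qi=1000.0, D=0.1)
```

A flagged well would then be evaluated for a remedial action (e.g., stimulation, sidetracking, or water shut-off), and its characteristics could be excluded when screening future drilling targets.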
As shown in
To start drilling, or “spudding in,” the wellbore 705, the hoisting system lowers the drillstring 720 suspended from the derrick 715 towards the planned surface location of the wellbore 705. An engine, such as a diesel engine, may be used to supply power to the top drive 735 to rotate the drillstring 720 via the drive shaft 740. The weight of the drillstring 720 combined with the rotational motion enables the drill bit 730 to bore the wellbore 705.
The near-surface of the subterranean region of interest 755 is typically made up of loose or soft sediment or rock 110, so large diameter casing 745 (e.g., “base pipe” or “conductor casing”) is often put in place while drilling to stabilize and isolate the wellbore 705. At the top of the base pipe is the wellhead, which serves to provide pressure control through a series of spools, valves, or adapters (not shown). Once near-surface drilling has begun, water or drill fluid may be used to force the base pipe into place using a pumping system until the wellhead is situated just above the surface of the earth 135.
Drilling may continue without any casing 745 once deeper or more compact rock is reached. While drilling, a drilling mud system 750 may pump drilling mud from a mud tank on the surface of the earth 135 through the drill pipe. Drilling mud serves various purposes, including pressure equalization, removal of rock cuttings, and drill bit cooling and lubrication.
At planned depth intervals, drilling may be paused and the drillstring 720 withdrawn from the wellbore 705. Sections of casing 745 may be connected, inserted, and cemented into the wellbore 705. The casing string may be cemented in place by pumping cement and mud, separated by a “cementing plug,” from the surface of the earth 135 through the drill pipe. The cementing plug and drilling mud force the cement through the drill pipe and into the annular space between the casing 745 and the wall of the wellbore 705. Once the cement cures, drilling may recommence. The drilling process is often performed in several stages. Therefore, the drilling and casing cycle may be repeated more than once, depending on the depth of the wellbore 705 and the pressure on the walls of the wellbore 705 from the surrounding rock 110.
Due to the high pressures experienced by deep wellbores, a blowout preventer (BOP) may be installed at the wellhead to protect the rig and environment from unplanned oil or gas releases. As the wellbore 705 becomes deeper, both successively smaller drill bits 730 and casing 745 may be used. Drilling deviated or horizontal wellbores 705 may require specialized drill bits 730 or drill assemblies.
The drilling system 700 may be disposed at and communicate with other systems in the wellbore environment. The drilling system 700 may control at least a portion of a drilling operation by providing controls to various components of the drilling operation. In one or more embodiments, the system may receive data from one or more sensors arranged to measure controllable parameters of the drilling operation. As a non-limiting example, sensors may be arranged to measure weight-on-bit, drill rotational speed (RPM), flow rate of the mud pumps (GPM), and rate of penetration of the drilling operation (ROP). Each sensor may be positioned or configured to measure a desired physical stimulus. Drilling may be considered complete when a drilling target within the hydrocarbon reservoir 114 is reached or the presence of hydrocarbons is established.
The computer (802) can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. The illustrated computer (802) is communicably coupled with a network (830). In some implementations, one or more components of the computer (802) may be configured to operate within environments, including cloud-computing-based, local, global, or other environment (or a combination of environments).
At a high level, the computer (802) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (802) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).
The computer (802) can receive requests over the network (830) from a client application (for example, executing on another computer (802)) and respond to the received requests by processing them in an appropriate software application. In addition, requests may also be sent to the computer (802) from internal users (for example, from a command console or by another appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.
Each of the components of the computer (802) can communicate using a system bus (803). In some implementations, any or all of the components of the computer (802), both hardware and software (or a combination of hardware and software), may interface with each other or the interface (804) (or a combination of both) over the system bus (803) using an application programming interface (API) (812) or a service layer (813) (or a combination of the API (812) and the service layer (813)). The API (812) may include specifications for routines, data structures, and object classes. The API (812) may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer (813) provides software services to the computer (802) or other components (whether or not illustrated) that are communicably coupled to the computer (802). The functionality of the computer (802) may be accessible for all service consumers using this service layer. Software services, such as those provided by the service layer (813), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or another suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer (802), alternative implementations may illustrate the API (812) or the service layer (813) as stand-alone components in relation to other components of the computer (802) or other components (whether or not illustrated) that are communicably coupled to the computer (802). Moreover, any or all parts of the API (812) or the service layer (813) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.
The computer (802) includes an interface (804). Although illustrated as a single interface (804) in
The computer (802) includes at least one computer processor (805). Although illustrated as a single computer processor (805) in
The computer (802) also includes a memory (806) that holds data for the computer (802) or other components (or a combination of both) that can be connected to the network (830). For example, memory (806) can be a database storing data consistent with this disclosure. Although illustrated as a single memory (806) in
The application (807) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (802), particularly with respect to functionality described in this disclosure. For example, application (807) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (807), the application (807) may be implemented as multiple applications (807) on the computer (802). In addition, although illustrated as integral to the computer (802), in alternative implementations, the application (807) can be external to the computer (802).
There may be any number of computers (802) associated with, or external to, a computer system containing computer (802), wherein each computer (802) communicates over the network (830). Further, the terms “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (802), or that one user may use multiple computers (802).
In some embodiments, a well control system (116) may describe a first computer (802) and one or more first applications (807) capable of performing functions related to the sample labeling method as described herein, including storing the first rollout image in the secondary label digital database, searching the secondary label digital database using the second rollout image, and creating a plurality of matching scores using a ML or DL technique. In some embodiments, this first computer (802) and one or more first applications (807) may perform all of the previously mentioned functions, while in other embodiments at least one of the functions may be performed using a second computer (802) and one or more second applications (807).
Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible, including dimensions, in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.