AI-ENABLED UNCONFINED COMPRESSIVE STRENGTH (UCS) REAL-TIME PREDICTION UTILIZING LOGGING-WHILE-DRILLING MEASUREMENTS

Abstract
A method for optimizing a drilling performance of a drilling operation, based on process data. The method includes obtaining process data while conducting a drilling operation through a subsurface, where the drilling operation is controlled by a set of drilling parameters, and determining, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface. The method further includes determining, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation, and, upon determining that the drilling performance is not optimum, adjusting one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.
Description
BACKGROUND

Unconfined compressive strength is a key measure of a rock's ability to withstand compressive stress. During hydrocarbon well drilling operations, the bit wear and rate of penetration of the drill bit depend on the unconfined compressive strength of the rock being perforated.


A continuous profile of the unconfined compressive strength of a subsurface may be obtained from well logs when they become available, after drilling a well. However, to optimize drilling operations, it is desirable to know the unconfined compressive strength during drilling, rather than after drilling, so the drilling parameters can be adjusted accordingly.


In that regard, artificial intelligence models may be trained using existing wells and offer a potential solution to predict the unconfined compressive strength of a subsurface in real-time, by receiving input data from the drilling operation, while drilling.


SUMMARY

This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.


Embodiments disclosed herein generally relate to a method for optimizing a drilling performance of a drilling operation, based on process data. The method includes obtaining process data while conducting a drilling operation through a subsurface, where the drilling operation is controlled by a set of drilling parameters, and determining, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface. The method further includes determining, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation, and, upon determining that the drilling performance is not optimum, adjusting one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.


Embodiments disclosed herein generally relate to a system for optimizing a drilling performance of a drilling operation, based on process data. The system includes a drilling system performing a drilling operation through a subsurface, including a drilling rig, a drill string, connected to the drilling rig, and a drill bit, connected to the drill string, where the drilling operation is controlled by a set of drilling parameters. The system further includes a plurality of sensors, connected to the drilling system, the plurality of sensors collecting process data from the drilling operation, and a computer, configured to receive the process data from the plurality of sensors and determine, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface. The computer is further configured to determine, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation, and, upon determining that the drilling performance is not optimum, adjust one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.


Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.





BRIEF DESCRIPTION OF DRAWINGS

Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.



FIG. 1 depicts a well drilling site, in accordance with one or more embodiments disclosed herein.



FIG. 2 depicts a flow chart of a method for optimizing a performance of a drilling operation, in accordance with one or more embodiments disclosed herein.



FIG. 3 depicts a system for computing an unconfined compressive strength, in accordance with one or more embodiments disclosed herein.



FIG. 4 depicts a system for computing an unconfined compressive strength, in accordance with one or more embodiments disclosed herein.



FIG. 5 depicts a box diagram of a system for adjusting drilling parameters of a drilling operation, in accordance with one or more embodiments disclosed herein.



FIG. 6 depicts an example diagram of a neural network, in accordance with one or more embodiments disclosed herein.



FIG. 7 depicts an example diagram of a gradient boosted tree algorithm, in accordance with one or more embodiments disclosed herein.



FIG. 8 depicts an example diagram of a computer, in accordance with one or more embodiments disclosed herein.



FIG. 9 depicts examples of data profiles, in accordance with one or more embodiments disclosed herein.





DETAILED DESCRIPTION

In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.


Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.


It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. For example, a computer may reference two or more such computers.


Terms such as “approximately,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.


It is to be understood that one or more of the steps shown in a flowchart may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope disclosed herein should not be considered limited to the specific arrangement of steps shown in the flowchart.


Although multiple dependent claims are not introduced, it would be apparent to one of ordinary skill that the subject matter of the dependent claims of one or more embodiments may be combined with other dependent claims.


In the following description of FIGS. 1-9, any component described with regard to a figure, in various embodiments disclosed herein, may be equivalent to one or more like-named components described with regard to any other figure. For brevity, descriptions of these components will not be repeated with regard to each figure. Thus, each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments disclosed herein, any description of the components of a figure is to be interpreted as an optional embodiment which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.


Embodiments disclosed herein are directed to a workflow that harnesses artificial intelligence (AI) capabilities to generate continuous unconfined compressive strength (UCS) data in real-time from sonic predictions. The generated real-time UCS data is then used to steer the lateral and optimize the drilling parameters to increase the rate of penetration (ROP) and extend the life-cycle of the bit. The UCS of drilled formations controls the ROP and bit wear; therefore, it holds crucial information for drilling engineers during drilling operations. By considering the UCS, drilling operations can be improved by optimizing the drilling parameters. For example, the weight-on-bit (WOB) can be reduced while drilling across zones with an increasing UCS trend in order to extend the life-cycle of the bit. On the other hand, the WOB can be increased while drilling across zones with a decreasing UCS trend in order to improve the ROP. Ultimately, this leads to optimum drilling performance with minimal trips to change bits, each of which costs 1-1.5 days of rig time.



FIG. 1 illustrates an exemplary well site (100) where a drilling operation is conducted. In general, well sites may be configured in a myriad of ways. Therefore, well site (100) is not intended to be limiting with respect to the particular configuration of the drilling equipment. The well site (100) is depicted as being on land. In other examples, the well site (100) may be offshore, and drilling may be carried out with or without use of a marine riser. A drilling operation at well site (100) may include drilling a wellbore (102) into a subsurface (103). Generally, the subsurface (103) may include various geological formations, such as, in the specific example in FIG. 1, geological formations (104) and (106). Each geological formation may be composed of various rocks.


For the purpose of drilling, a drill string (108) is suspended within the wellbore (102). The drill string (108) may include one or more drill pipes (109) connected to form conduit and a bottom hole assembly (BHA) (110) disposed at the distal end of the conduit. The BHA (110) may include a drill bit (112) to cut into the subsurface rock. In one or more embodiments, the BHA (110) may include measurement tools, such as a measurement-while-drilling (MWD) tool (114) and logging-while-drilling (LWD) tool (116). Measurement tools (114) and (116) may include sensors and hardware to measure downhole drilling parameters, and these measurements may be transmitted to the surface using any suitable telemetry system known in the art. The BHA (110) and the drill string (108) may include other drilling tools known in the art but not specifically shown. The drill string (108) may be suspended in the wellbore (102) by a derrick (118). A crown block (120) may be mounted at the top of the derrick (118), and a traveling block (122) may hang down from the crown block (120) by means of a cable or drilling line (124). One end of the cable or drilling line (124) may be connected to a drawworks (126), which is a reeling device that can be used to adjust the length of the cable or drilling line (124) so that the traveling block (122) may move up or down the derrick (118). The traveling block (122) may include a hook (128) on which a top drive (130) is supported.


During a drilling operation at the well site (100), the drill string (108) is rotated relative to the wellbore (102), and weight is applied to the drill bit (112) to enable the drill bit (112) to break rock as the drill string (108) is rotated. In one or more embodiments, the drill string (108) is rotated by operating the top drive (130), which is coupled to the top of the drill string (108). Alternatively, the drill string (108) may be rotated by means of a rotary table (not shown) on the drilling floor (131), or independently with a downhole drilling motor. In further embodiments, the drill bit (112) may be rotated using a combination of the drilling motor and the top drive (130) (or a rotary swivel if a rotary table is used instead of a top drive to rotate the drill string (108)). Drilling fluid (commonly called mud) may be stored in a mud pit (132), and at least one pump (134) may pump the mud from the mud pit (132) into the drill string (108). The mud may flow into the drill string (108) through appropriate flow paths in the top drive (130) (or a rotary swivel if a rotary table is used instead of a top drive to rotate the drill string (108)), and exit into the bottom of the wellbore (102) through nozzles in the drill bit (112). The mud in the wellbore (102) then flows back up to the surface in an annular space between the drill string (108) and the wellbore (102) with entrained cuttings. The mud with the cuttings is returned to the pit (132) to be circulated back again into the drill string (108). Typically, the cuttings are removed from the mud, and the mud is reconditioned as necessary, before pumping the mud again into the drill string (108).


Generally, a drilling operation, such as the one depicted in FIG. 1, is controlled by a set of adjustable drilling parameters and results in a set of non-adjustable drilling performance parameters. Examples of drilling parameters controlling a drilling operation include, but are not limited to, a weight on bit (WOB), a drill string rotational speed (RPM), a torque of the drill bit, a mud flow rate (e.g., in the units of gallons per minute (GPM)), a drilling direction, and properties of the mud, such as a viscosity or a hydraulic pressure of the mud injection. Examples of drilling performance parameters include, but are not limited to, a rate of penetration (ROP), equipment wear and tear, and a number of bit runs. In one or more embodiments, it may be desirable to select the drilling parameters with the purpose of optimizing the drilling performance parameters.


In one or more embodiments, a control system (162) may be disposed at, or communicate with, the well site (100). The control system (162) may control one or more drilling parameters and receive data from one or more sensors (160). As a non-limiting example, sensors (160) may be arranged to measure one or more drilling parameters and drilling performance parameters, such as the mud-flow rate or the ROP. For illustration purposes, sensors (160) are shown on drill string (108) and proximate mud pump (134). The illustrated locations of sensors (160) are not intended to be limiting, and sensors (160) could be disposed wherever drilling parameters need to be measured. Moreover, there may be many more sensors (160) than shown in FIG. 1 to measure various other parameters of the drilling operation. Each sensor (160) may be configured to measure a desired physical stimulus. One or more sensor systems (164) according to embodiments disclosed herein may be fitted into nozzle receptacles in the drill bit (112), which may collect downhole data in addition to or alternatively to data collected by sensors (160). In some embodiments, sensor systems (164) according to embodiments disclosed herein may be used to collect downhole data that other sensors (160) would otherwise not be able to collect, e.g., downhole data related to conditions at the drill bit (112), such as temperature at the bit, bit vibration, and drilling fluid exit flow rate. In one or more embodiments, downhole data collected from the sensor system (164) may further be sent to or collected by the system (162).


Generally, the subsurface (103) is attributed a set of subsurface properties that may depend strongly on the geographical location of the well site (100) and may further vary with depth within the subsurface (103). Examples of subsurface properties that may be attributed to the subsurface (103) include, but are not limited to, a density, a porosity, a permeability or a mineral composition of the rocks composing the subsurface (103). Another notable example of a subsurface property is the UCS of the rocks within the subsurface (103), referred to as the UCS of the subsurface (103) in this disclosure. In one or more embodiments, the UCS of a rock is defined as a measure of how much compressive stress the rock can withstand without deforming. It is noted that, as a measure of a strength of the subsurface (103), the UCS of the subsurface (103) may influence the drilling operation significantly. For example, perforating a rock with a high UCS may be more difficult than perforating a rock with a low UCS. Another notable example of a subsurface property for the subsurface (103) is a sonic compressional wave propagation slowness (DTC) in the subsurface (103), defined as the inverse of the speed of compressional sonic waves through the subsurface (103). In some embodiments, the DTC of the subsurface (103) may be approximated before the drilling operation begins, by using, for example, geological or physical models, such as the geological or physical models described in later paragraphs of this disclosure. In other instances, the DTC of the subsurface may be obtained after drilling, by analyzing well logs. In this disclosure, a method to compute the DTC during a drilling operation is described.



FIG. 2 depicts a method for optimizing a performance of a drilling operation by adjusting one or more drilling parameters, based on the UCS of a subsurface that is being drilled. In Step 203, process data are obtained while conducting a drilling operation through a subsurface. The drilling operation is controlled by a set of drilling parameters, such as, for example, a weight on bit (WOB), a drill string rotational speed (RPM), a torque of the drill bit, a mud flow rate (e.g., in the units of gallons per minute (GPM)), properties of the mud, such as its viscosity, a hydraulic pressure of the mud injection, and an inclination of a drilling direction. Examples of process data that may be obtained while conducting a drilling operation include logging-while-drilling (LWD) data. Examples of LWD data include a gamma ray, a bulk formation density, a thermal neutron porosity, a photoelectric factor, a resistivity and a pressure of the portion of the subsurface being perforated. The gamma ray of a portion of the subsurface is the natural gamma radiation emitted by the surrounding rocks. The photoelectric factor is a response, from the subsurface, to an exposure to gamma rays emitted by an LWD tool.


In one or more embodiments, the LWD data are acquired in real-time by using LWD tools, such as the LWD tool (116) located within the BHA (110) in FIG. 1. Some LWD tools are able to emit thermal neutrons that scatter through the subsurface, measure how many of those neutrons are captured by the subsurface and compute, as a result, a hydrogen index of the subsurface. In one or more embodiments, the hydrogen index determined as such is called the thermal neutron porosity of the subsurface. Examples of process data that may be obtained while conducting a drilling operation further include measurement-while-drilling (MWD) data. Examples of MWD data include a downhole temperature, a magnetic field amplitude and a measurement of gravity within the subsurface. MWD data may further include measurements of the drilling parameters, such as the WOB, drill string rotational speed or inclination of the wellbore. For quality control purposes, such MWD measurements of the drilling parameters may be compared and assessed against the drilling parameters that are planned for the drilling operation. In one or more embodiments, the MWD data are acquired in real-time by using MWD tools, such as the MWD tool (114) located within the BHA (110) in FIG. 1.


In one or more embodiments, the process data that may be obtained while conducting a drilling operation further include the DTC of the subsurface. In some situations, a DTC profile of the subsurface is obtained prior to drilling, and the real-time DTC, in this case, is simply obtained by selecting the value of the DTC at any wanted depth while drilling. Examples of methods that may be used to compute the DTC prior to a drilling operation include geological interpolation techniques, geophysical techniques or any combination thereof. For example, the DTC at the drilling location may be interpolated from one or more available DTCs from existing wells in a vicinity of the drilling location, as sketched below. Another example of obtaining a DTC prior to the drilling operation is using seismic geophysics to compute a velocity model in an area containing the drilling location, extracting a velocity at the well location and computing the DTC as the inverse of the velocity. Examples of seismic geophysical methods to determine a velocity include residual moveout tomography, full waveform inversion, any combination thereof, and multiple iterations of any combination thereof. In other situations, no DTC is available prior to drilling, or the DTC available prior to drilling is not considered accurate. In such situations, the real-time DTC of the subsurface may be computed. In one or more embodiments, the real-time DTC of the subsurface is computed using artificial intelligence, as described in later paragraphs of this disclosure.
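As a non-limiting illustration of the interpolation option, the following Python sketch estimates a DTC profile at a planned well location by inverse-distance weighting of DTC logs from nearby existing wells. The function name, coordinates and log values are hypothetical placeholders, and the weighting exponent is one common choice among many:

import numpy as np

def idw_dtc(target_xy, well_xy, well_dtc, power=2.0):
    # Inverse-distance weights from the planned location to each offset well.
    d = np.linalg.norm(np.asarray(well_xy, float) - np.asarray(target_xy, float), axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power   # guard against a zero distance
    w /= w.sum()
    return w @ np.asarray(well_dtc, float)   # weighted average, depth by depth

wells = [(0.0, 0.0), (1.0, 0.5), (0.2, 1.3)]          # map coordinates (km)
dtc_logs = np.array([[85.0, 84.0, 80.0, 78.0, 75.0],  # DTC (microseconds/ft)
                     [88.0, 86.0, 82.0, 79.0, 76.0],  # on a shared depth grid
                     [84.0, 83.0, 81.0, 77.0, 74.0]])
print(idw_dtc((0.5, 0.5), wells, dtc_logs))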


In Step 205, a UCS of the subsurface is obtained in real-time, using a computational model that receives the process data from Step 203 as input. The computational model may be of various types. In one or more embodiments, the computational model in Step 205 makes use of artificial intelligence. Examples of AI models that may be used to compute the real-time UCS of the subsurface from process data include regression models and neural networks. In one or more embodiments, the computational model in Step 205 includes a physical model that receives the process data from Step 203 as input and returns, as output, the real-time UCS of the subsurface. The physical model may be of various forms, including a formula that provides a value of the real-time UCS output directly, or an equation that needs to be solved to find the output, such as a numerical equation, a differential equation or an integral equation. In that respect, the computational model may further include methods to solve an equation, such as an iterative solver or a numerical method. Examples of iterative solvers include Newton methods and pseudo-Newton methods, which seek the solution of a non-linear equation by computing a sequence intended to converge towards that solution. Numerical methods include quadrature formulas that approximate integrals, such as a method of rectangles or Simpson's rule. Numerical methods further include discretization methods for differential equations, such as Runge-Kutta methods, finite differences and finite element methods.
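As a non-limiting illustration of the iterative-solver option, the following Python sketch applies a Newton iteration to a purely hypothetical implicit relation between the UCS and a measured quantity; the relation itself is invented for illustration and is not one of the models of this disclosure:

import math

def newton(f, dfdx, x0, tol=1e-8, max_iter=50):
    # Generic Newton iteration: x_{k+1} = x_k - f(x_k) / f'(x_k).
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical implicit relation between the UCS and a measurement m:
# exp(UCS / 40) - m = 0, i.e., UCS = 40 ln(m).
m = 12.0
ucs = newton(lambda u: math.exp(u / 40.0) - m,
             lambda u: math.exp(u / 40.0) / 40.0, x0=50.0)
print(ucs)   # analytically, 40 * ln(12), approximately 99.4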


It is noted that there may be a delay between an instant during the drilling operation and the time at which any process that is consequential to the drilling operation is performed. For instance, there may be a delay between the time at which a point of the subsurface is drilled and the time at which the computation of the UCS at that point is complete. In that regard, denoting t as a current time during a drilling operation, the term “real-time”, in this disclosure, is defined as any instant in a tolerance interval [t, t+S], where S≥0 is an acceptable tolerance. Generally, the acceptable tolerance S is defined as any delay that is small enough so that a certain process performed during the drilling operation is useful. For instance, as long as the time taken to compute the UCS of the rock being drilled at a time t is short enough so that this UCS can be used to optimize a performance of the drilling operation, the time taken to compute the UCS may be considered an acceptable tolerance, and the computation of the UCS is said to occur in real-time. For the scope of this disclosure, the terms “real-time” and “instantly” may be used interchangeably, and the term “current time” may refer to an actual time t during a drilling operation or any instant within the tolerance interval [t, t+S]. Furthermore, in some embodiments, the acceptable tolerance S is defined by an entity in charge of the drilling operation or any person making use of the invention, at least in part, presented in this disclosure.


Many formulas have been developed that express the real-time UCS directly as a function of one or more pieces of the process data from Step 203. Examples of such formulas are given in Table I. In Table I, Δt denotes the DTC of the subsurface expressed in μs/ft, ϕ denotes the total porosity, ρ denotes the density expressed in kg/m3 and E denotes the Young modulus, expressed in GPa, of the rock for which the UCS, given in MPa in Table I, is computed. The first column in Table I contains, for reference, an ID for each example formula; the second column contains the formula that expresses the UCS as a function of the process data. The next column, with the heading “Type of rock”, contains the type of rock for which the formula is valid. The rightmost column of Table I, with the heading “Geographical region”, contains the geographical region where the formula is valid. It is noted that the process data in Step 203 are obtained in real-time, so the UCS in Step 205 is obtained in real-time.









TABLE I
Examples of formulas that express the UCS as a function of process data.

ID   UCS (MPa)                                Type of rock            Geographical region
 1   143000 exp(−0.035Δt)                     Sandstone               Australia
 2   9.843 · 10^−8/Δt − 31.5                  Sandstone               Thuringia, Germany
 3   1200 exp(−0.036Δt)                       Sandstone               Bowen Basin, Australia
 4   1.4138 · 10^7 Δt^−3                      Sandstone               Gulf Coast
 5   18.78486 · 10^−21 ρ/Δt^2 − 21            Sandstone               Cook Inlet, Alaska
 6   42.1 exp(20.45342 · 10^−23 ρ/Δt^2)       Sandstone               Australia
 7   3.87 exp(12.27296 · 10^−22 ρ/Δt^2)       Sandstone               Gulf of Mexico
 8   2.28 + 4.1089E                           Sandstone               Worldwide
 9   254(1 − 2.7ϕ)^2                          Sandstone               Sedimentary basins worldwide
10   0.77(304.8/Δt)^2.93                      Shale                   North Sea
11   0.43(304.8/Δt)^3.2                       Shale                   Gulf of Mexico
12   1.35(304.8/Δt)^2.6                       Shale                   Worldwide
13   0.5(304.8/Δt)^3                          Shale                   Gulf of Mexico
14   10(304.8/Δt − 1)                         Shale                   North Sea
15   7.97E^0.91                               Shale                   North Sea
16   1.001ϕ^−0.96                             Shale                   North Sea
17   2.922ϕ^−0.96                             Shale                   North Sea
18   (7682/Δt)^1.82/145                       Limestone or Dolomite   Korobcheyev deposit, Russia
19   10^(2.44 + 109.14/Δt)/145                Limestone or Dolomite   Korobcheyev deposit, Russia
20   13.8E^0.51                               Limestone or Dolomite   Korobcheyev deposit, Russia
21   276(1 − 3ϕ)^2                            Limestone or Dolomite   Korobcheyev deposit, Russia
22   143.8 exp(−6.95ϕ)                        Limestone or Dolomite   Middle East
23   135.9 exp(−4.8ϕ)                         Limestone or Dolomite   Middle East
In one or more embodiments, the Young modulus E in Table I is computed from the process data, such as the DTC and density of the rock considered. In such a scenario, the physical model may further include a formula that expresses E as a function of other process data. Generally, the physical model may include any transformation model that computes a physical quantity from the process data. It is emphasized that the formulas expressing the UCS in Table I, and the rock types, geographical regions and conditions under which those formulas are valid, are given only as examples and should not be considered limiting. One with ordinary skill in the art will recognize that other formulas for computing the UCS may be used, and other conditions may apply, without departing from the scope of this disclosure.
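As a minimal sketch of the physical-model option, and assuming the units stated above (Δt in μs/ft, UCS in MPa), the following Python functions evaluate two of the example correlations from Table I (IDs 3 and 12); selecting a correlation appropriate to the rock type and region remains the responsibility of the user:

import math

def ucs_sandstone_bowen(dtc):
    # Table I, ID 3: sandstone, Bowen Basin, Australia; dtc in microseconds/ft.
    return 1200.0 * math.exp(-0.036 * dtc)   # UCS in MPa

def ucs_shale_worldwide(dtc):
    # Table I, ID 12: shale, worldwide; dtc in microseconds/ft.
    return 1.35 * (304.8 / dtc) ** 2.6       # UCS in MPa

print(ucs_sandstone_bowen(70.0))   # approximately 96.6 MPa
print(ucs_shale_worldwide(90.0))   # approximately 32.2 MPa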


Some formulas in Table I require the DTC of the subsurface as an argument. In one or more embodiments, the computational model in Step 205 includes an artificial intelligence (AI) model that computes the DTC of the subsurface from other pieces of process data obtained in Step 203, such as LWD data. Thus, in some embodiments, the UCS in Step 205 is obtained by using, sequentially, the AI model that receives LWD or MWD data, or both, as input and returns the DTC as output, followed by the physical model that receives the DTC, and possibly LWD or MWD data, or both, as input and returns the UCS as output. Examples of AI models that may be used in Step 205 to compute the real-time DTC of the subsurface from LWD or MWD data, or both, include regression models, neural networks such as fully connected neural networks (DNN) or convolutional neural networks (CNN), decision trees and random forests. Considering the DTC as a series of values given over a range of depths in the subsurface, these models may be combined with natural language processing (NLP) models, such as recurrent neural network (RNN) models, long short-term memory (LSTM) models and gated recurrent unit (GRU) models. The examples of AI models given herein should not be considered limiting. One with ordinary skill in the art will recognize that other examples of AI models that compute a DTC from LWD or MWD data, or both, may be used without departing from the scope of this disclosure.
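As one hedged illustration of such an AI model, the following Python sketch fits a random forest regressor mapping LWD samples to DTC values. The feature set (gamma ray, bulk density, neutron porosity), the array shapes and the synthetic data are placeholders standing in for real logs from existing wells:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder training set standing in for logs from existing wells:
# features = [gamma ray (API), bulk density (g/cc), neutron porosity (v/v)].
X_train = rng.uniform([20.0, 2.0, 0.05], [150.0, 2.8, 0.40], size=(500, 3))
y_train = rng.uniform(60.0, 120.0, size=500)   # DTC targets (microseconds/ft)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Real-time use: predict the DTC from the latest LWD sample.
latest_lwd = np.array([[75.0, 2.45, 0.18]])
print(model.predict(latest_lwd)[0])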


In one or more embodiments, the AI model designed to produce the DTC of the subsurface may further require, in addition to other pieces of process data, real-time petrophysical data as input. Examples of real-time petrophysical data that may be required by the AI model as input include, but are not limited to, a total porosity of the subsurface and a volume of gas hydrocarbon of the subsurface. In one or more embodiments, the computational model in Step 205 further includes a petrophysical model that computes petrophysical data in real-time from process data, such as LWD or MWD data, or both, acquired in real-time. In these scenarios, the UCS in Step 205 is obtained in three steps. First, the petrophysical data are computed by using the petrophysical model that receives process data as input, which can be any process data excluding the DTC. Second, the AI model is used to compute the DTC of the subsurface from the petrophysical data and other process data as input. Third, the physical model is used to compute the UCS from the DTC and other process data.


In Step 207, a drilling performance of the drilling operation is determined, based on the real-time UCS computed in Step 205. The drilling performance of the drilling operation can be defined in many ways. In one or more embodiments, the drilling performance of the drilling operation is an estimate of an average rate of penetration (AROP) of the drilling operation. The AROP of the drilling operation is defined as the total length of the wellbore at the end of the drilling operation, divided by the duration of the drilling operation. As such, the AROP can only be known at the end of the drilling operation. This is in contrast with the instantaneous rate of penetration, denoted as ROP, which defines the rate of penetration of the drill bit at a given instant. In some embodiments, the ROP may be measured by downhole sensors, such as the sensors (160) in FIG. 1. The ROP does not take into account bit wear, nor the fact that, when a bit wears out beyond a maximum limit, the bit may need to be replaced. Replacing the bit requires some time during which the drilling operation is halted, which eventually affects the AROP. In some scenarios, replacing a bit may take between one day and one day and a half. In this disclosure, the instantaneous average rate of penetration (IAROP) is defined as an estimate, at a given instant, of the AROP that would be obtained if the whole well were drilled under the same drilling conditions as at that instant. The drilling conditions at an instant include, at that instant, all drilling parameters, the drilling equipment, the rock being perforated, and rock properties of the rock being perforated, such as its UCS, total porosity and pressure.


In one or more embodiments, the drilling performance of the drilling operation is determined as the IAROP of the drilling operation. The IAROP may be computed in many ways. In one or more embodiments, the IAROP is defined as a function of process data, UCS, and some of the drilling parameters, such as the WOB and the torque. A notable example of such a model, defining the IAROP, is a Bourgoyne model:









\[ \mathrm{IAROP} = \exp\left( a_1 + \sum_{i=2}^{8} a_i P_i \right). \tag{EQ. 1} \]







In EQ. 1, a1 is a measure of the rock drillability, a2 is a normal compaction constant, P2=10000−D, a3 is an under-compaction constant, P3=0.69D(gp−9), a4 is a pressure differential constant, P4=D(gp−ρc), a5 is the WOB constant,

\[ P_5 = \ln\left( \frac{\mathrm{WOB}/d_b - \mathrm{WOB}_0}{4 - \mathrm{WOB}_0} \right), \]

a6 is the RPM constant, P6=ln(N/60), a7 is a bit wear constant, P7=−h, a8 is a hydraulic parameter and P8 is a hydraulic jet impact force beneath the bit. Besides, D is the true vertical depth expressed in feet, gp is the pore pressure gradient expressed in lbm/gal, ρc is the mud density expressed in lbm/gal, db is a bit diameter expressed in inches, WOB0 is a threshold bit weight per inch, N is the rotary speed of the bit, expressed in rotations per minute, and h is a fractional bit wear. Note that in some embodiments, the rock drillability, a1, the normal compaction constant, a2, and the under-compaction constant, a3, may depend on the UCS and are, as such, denoted a1(UCS), a2(UCS) and a3(UCS). An example of the parameters a1, . . . , a8, defined for a specific drilling operation in a specific subsurface, is given in Table II. Those skilled in the art will readily appreciate that, in other drilling conditions, the parameters in EQ. 1 may have different values from the ones expressed in Table II.
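A minimal Python sketch of evaluating EQ. 1 is given below, with the Pi terms computed from the definitions above; whether P8 is the jet impact force itself or a transform of it, and the units of the resulting IAROP, depend on how the constants a1, . . . , a8 were calibrated, so the functions are illustrative only (example constants appear in Table II below):

import math

def bourgoyne_terms(D, gp, rho_c, wob, db, wob0, N, h, Fj):
    # P2..P8 of EQ. 1, following the definitions above (units as stated there).
    return [10000.0 - D,                                  # P2, normal compaction
            0.69 * D * (gp - 9.0),                        # P3, under-compaction
            D * (gp - rho_c),                             # P4, pressure differential
            math.log((wob / db - wob0) / (4.0 - wob0)),   # P5, bit weight
            math.log(N / 60.0),                           # P6, rotary speed
            -h,                                           # P7, fractional bit wear
            Fj]                                           # P8, hydraulic jet impact force (assumed used directly)

def iarop_bourgoyne(a, p):
    # EQ. 1: IAROP = exp(a1 + sum over i = 2..8 of a_i * P_i).
    # a = [a1, ..., a8]; p = [P2, ..., P8].
    return math.exp(a[0] + sum(ai * pi for ai, pi in zip(a[1:], p)))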









TABLE II
Examples of constants in the Bourgoyne model EQ. 1.

Parameter                      Constant   Value
Rock drillability              a1         30.7
Normal compaction              a2         0.00025
Under-compaction               a3         0.00059
Pressure differential          a4         0.0157
WOB                            a5         1.1877
RPM                            a6         4.023
Bit wear                       a7         8.2355
Hydraulic jet impact force     a8         5.0548


In one or more embodiments, the IAROP is defined as the difference between the ROP (instantaneous rate of penetration) and a loss term that accounts for the bit wear:









\[ \mathrm{IAROP} = \mathrm{ROP} - A \cdot B / B_{\max}. \tag{EQ. 2} \]







In EQ. 2, the ROP is expressed in m/s, B is a bit wear factor, expressed in m/s, Bmax is the maximum bit wear allowed before the bit needs to be replaced, and the constant A is the time it takes to replace a bit, expressed in seconds, during which the drilling operation is halted. This way, the term A·B/Bmax models the drill time that is lost when the bit is being replaced. In one or more embodiments, the bit wear factor B is given by a bit wear model. In some embodiments, the bit wear factor depends on the UCS and may be written as B(UCS). The ROP may be defined in many ways. In some embodiments, the ROP is measured by sensors, such as the sensors (160) in FIG. 1. In other embodiments, the ROP is defined by a ROP model, such as a Teale model:









\[ \mathrm{ROP} = \frac{\pi \cdot \mu \cdot d_b \cdot \mathrm{RPM} \cdot \mathrm{WOB} / (90\, d_a)}{\mathrm{UCS}/C_e - \mathrm{WOB}/d_a}. \tag{EQ. 3} \]







In EQ. 3, db is the bit diameter, da is the bit face area, μ is a friction coefficient and Ce is a parameter that measures the efficiency of transmitting the penetration of the bit to the rock. It is noted that EQ. 3 depends on the UCS.
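A hedged Python sketch combining EQ. 3 and EQ. 2 follows: the Teale-type expression yields an instantaneous ROP from the real-time UCS, and the bit-wear penalty of EQ. 2 then converts it into an IAROP. One consistent unit system is assumed for all arguments, and the function names are illustrative:

import math

def rop_teale(ucs, wob, rpm, db, da, mu, ce):
    # EQ. 3, assuming one consistent unit system for all arguments.
    numerator = math.pi * mu * db * rpm * wob / (90.0 * da)
    return numerator / (ucs / ce - wob / da)

def iarop_from_rop(rop, B, B_max, A):
    # EQ. 2: IAROP = ROP - A * B / B_max, where A is the bit replacement time.
    return rop - A * B / B_max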


In one or more embodiments, the IAROP may be determined by AI. An AI model may be trained by using information from existing wells. Examples of information from an existing well that may be used to train an AI model include an AROP of the existing well, a UCS of the existing well, and process data of the existing well, such as the WOB, RPM profile or torque. The AI model is then trained to match the UCS and process data to the AROP. The trained AI model can then be used to determine the IAROP of the well being drilled. The AI model receives, as input, the process data obtained in Step 203, such as LWD or MWD data, and the UCS obtained in Step 205, and returns, as output, the IAROP of the well being drilled. Examples of AI models that may be used to determine the IAROP of the well being drilled include, but are not limited to, regression models, neural networks such as fully connected neural networks (DNN) or convolutional neural networks (CNN), decision trees and random forests.


In one or more embodiments, the drilling performance in Step 207 is obtained based on qualitative prior experience. For example, based on qualitative experience, conditions, such as UCS and process data, may be known to favor bit wear, which could lead to having to replace the bit often. In this case, applying a high WOB while drilling a rock with a high UCS may lead to excessive bit wear while not increasing the ROP enough to justify such excessive bit wear. An example of defining whether bit wear is excessive is to estimate an expected bit life of the bit, based on experience, and compare it with a minimum bit life threshold. If the estimated bit life is below the minimum bit life threshold, the bit wear is said to be excessive. If the estimated bit life is above the minimum bit life threshold, the bit wear is said not to be excessive. On the other hand, based on qualitative experience, conditions, such as UCS and process data, may be known not to favor bit wear, and the WOB might be increased, improving the ROP without excessively increasing bit wear. Thus, in some embodiments, the drilling performance in Step 207 may be defined as a status, equal to positive if the WOB might be increased without triggering excessive bit wear, negative if the WOB should be reduced in order not to trigger excessive bit wear, or neutral if the WOB should be kept as it is; a minimal encoding of this rule is sketched below. It is emphasized that the examples of drilling performances and the example definitions of the IAROP given in this disclosure are given only as examples and should not be considered limiting. One with ordinary skill in the art will recognize that other examples may be used in Step 207 and other steps of FIG. 2, or other figures in this disclosure, without departing from the scope of this disclosure.
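One possible encoding of this qualitative status, with entirely hypothetical thresholds and function names, is the following Python sketch:

def wob_status(estimated_bit_life, min_bit_life, ucs, ucs_high=120.0):
    # Encode the qualitative rule above; both thresholds are hypothetical
    # placeholders (min_bit_life in hours, ucs_high in MPa).
    if estimated_bit_life < min_bit_life:
        return "negative"   # reduce the WOB to avoid excessive bit wear
    if ucs < ucs_high:
        return "positive"   # the WOB might be increased to improve the ROP
    return "neutral"        # keep the WOB as it is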


In Step 209, a determination is made whether the drilling performance obtained in Step 207 is optimum. The determination Step 209 can be performed in many ways, depending on how the drilling performance is determined in Step 207. In one or more embodiments, a drilling performance threshold is defined, and the drilling performance is said to be optimum if it is greater than or equal to the drilling performance threshold, or not optimum if it is less than the drilling performance threshold.


In other embodiments, the determination Step 209, of whether the drilling performance obtained in Step 207 is optimum, is made by solving an optimization problem, for example in the case where the drilling performance obtained in Step 207 is given by a formula that expresses the drilling performance as a function of one or more drilling parameters and, possibly, other data, such as one or more pieces of the process data from Step 203, or the UCS obtained in Step 205. In such scenarios, denoting X as the one or more drilling parameters and P as the one or more pieces of the process data from Step 203, the performance of the drilling operation may be denoted as F(X, UCS, P). For example, the set X may include the WOB, RPM, mud flow rate or torque, or any combination thereof, and the set P may include the total porosity, density or pressure, or any combination thereof, of the rock being drilled. Examples of such a function F include formulations of the IAROP, such as the formulations given in EQ. 1 or EQ. 2, or an AI model, such as the AI model given as an embodiment in the description of Step 207, that receives drilling parameters and process data as inputs and predicts the IAROP of the drilling operation as output. Given the real-time UCS from Step 205 and P from Step 203, a function G may be defined, expressing the performance of the drilling operation with respect to the one or more drilling parameters X:










\[ G(X) = F(X, \mathrm{UCS}, P). \tag{EQ. 4} \]







A way to determine whether the drilling performance is optimum, in this case, is to maximize G by solving the following maximization problem:










\[ \text{Find } X^* \text{ such that, for all } X, \quad G(X) \leq G(X^*). \tag{EQ. 5} \]







Finding a set X* that satisfies EQ. 5 is only possible in rare cases, for instance, in cases for which the gradient ∇G can be computed, the equation ∇G(X)=0 can be solved for X, and it can be shown that at least one solution, denoted by X* and thus satisfying ∇G(X*)=0, also satisfies EQ. 5. If a set X* satisfying EQ. 5 can be found, the determination in Step 209, of whether the drilling performance obtained in Step 207 is optimum, is made by comparing the drilling performance obtained in Step 207 with G(X*). If the drilling performance obtained in Step 207 is equal to G(X*), it is said to be optimum. If it is less than G(X*), it is said to be not optimum. Generally, the optimization problem in EQ. 5 is solved in an approximate sense, by iterating an algorithm, called an optimizer, until a certain convergence criterion is reached. In one or more embodiments, the optimizer is a gradient ascent method. Given an initial set of drilling parameters, X0, the optimizer produces a recurrent sequence, indexed by an integer iteration number q≥1, of sets Xq such that Xq only depends on the values of the sets Xs, for s<q. In one or more embodiments, the set X0 may be defined randomly, or defined as the current drilling parameters of the drilling operation. In one or more embodiments, the optimizer is defined such that the set Xq, at each iteration q, only depends on the value of Xq-1. Intuitively, the goal of the optimizer is that the drilling performance G(Xq*), for some iteration q*, be as large as possible. In one or more embodiments, the optimizer is defined such that the sequence G(Xq) is increasing; iterating the optimizer then always produces a set of drilling parameters Xq associated with a larger drilling performance than the drilling parameters at the previous iteration, Xq-1. The optimizer runs for a certain number of iterations, Q≥1, called the maximum iteration number. In one or more embodiments, the maximum iteration number Q is pre-defined, and the convergence criterion for the iterative optimizer is that the iteration number reach Q. The convergence criterion for the optimizer can be defined in many other ways. In other embodiments, the convergence criterion is that the distance |G(Xq)−G(Xq-1)| be less than a predefined threshold for a certain q≥1. If a convergence criterion is met at some iteration, the optimizer is said to have converged, and the iterative process stops. Regardless of the definition of the convergence criterion, the iteration number at which the convergence criterion is met is denoted as Q. An optimal set of drilling parameters, in the scope of this disclosure, can then be defined in many ways. In one or more embodiments, the optimal set of drilling parameters is defined as










\[ X_Q, \tag{EQ. 6} \]







that is, the last value obtained by the optimizer when the convergence criterion is met, and an approximate maximum drilling performance is defined as G(XQ). In other embodiments, the optimal set of drilling parameters is defined as the set Xq*, obtained at some integer q*, such that 0≤q*≤Q, that maximizes the drilling performance in the following sense:











\[ \text{for all } q \text{ such that } 0 \leq q \leq Q, \quad G(X_{q^*}) \geq G(X_q), \tag{EQ. 7} \]







and the approximate maximum drilling performance is defined as G(Xq*). The determination of whether the drilling performance obtained in Step 207 is optimum is made by comparing it with the approximate maximum drilling performance. If the drilling performance obtained in Step 207 is greater than or equal to the approximate maximum drilling performance, it is said to be optimum. If it is less than the approximate maximum drilling performance, it is said to be not optimum.
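As a hedged sketch of the iterative-optimizer option, the following Python code maximizes G(X) over the WOB and RPM with a quasi-Newton, bound-constrained optimizer by minimizing −G; the objective g is a made-up stand-in for F(X, UCS, P), not one of the performance models of this disclosure:

import numpy as np
from scipy.optimize import minimize

ucs = 90.0   # real-time UCS from Step 205, in MPa (placeholder value)

def g(x):
    # Hypothetical stand-in for G(X) = F(X, UCS, P); X = [WOB, RPM].
    wob, rpm = x
    s = wob * rpm
    return (s - 0.002 * s ** 1.5) / ucs

x0 = np.array([15.0, 120.0])   # current drilling parameters X0
res = minimize(lambda x: -g(x), x0, method="L-BFGS-B",
               bounds=[(5.0, 30.0), (60.0, 200.0)])   # equipment limits
x_star, g_star = res.x, -res.fun
# Step 209: the current performance g(x0) is deemed optimum if
# g(x0) >= g_star; otherwise Step 211 adjusts the parameters to x_star.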


In one or more embodiments, a set of constraints may be added to the optimization problem in EQ. 5, in which case EQ. 5 and the constraints form a constrained optimization problem. Such a constrained optimization problem can be solved approximately, for example, by using a generalized reduced gradient optimizer. Examples of constraints that may be added to EQ. 5 include ranges for some of the drilling parameters, such as a maximum WOB or a maximum torque, based on equipment specifications. In such scenarios, the optimizer may be designed such that each term Xq satisfies the constraints at each iteration q.


In other embodiments, the determination Step 209, of whether the drilling performance in Step 207 is optimum, is made by simply selecting a status. In scenarios in which the drilling performance in Step 207 is defined qualitatively, based on prior experience, as a status that can be positive if the WOB might be increased without triggering excessive bit wear, negative if the WOB should be reduced in order not to trigger excessive bit wear, or neutral if the WOB should be kept as it is, the drilling performance may be determined as optimum if the status is neutral, or not optimum if the status is positive or negative.


If the drilling performance is determined as optimum in Step 209, no action is taken on the drilling parameters, and Steps 203-209 are repeated while the drilling operation continues, controlled by the drilling parameters. If the drilling performance is determined as not optimum in Step 209, one or more drilling parameters are adjusted in Step 211, in order to optimize the drilling performance. Adjusting one or more drilling parameters to optimize the drilling performance can be done in many ways. In one or more embodiments, the drilling parameters may be adjusted by using a grid search technique. To perform a grid search over a subset of N≥1 drilling parameters within the set of drilling parameters, denoted as {Xi, 1≤i≤N}, where each Xi is a drilling parameter, a feasible range Ri is first defined for each Xi. Then, for each feasible range Ri, an integer number Ki≥1 of values, Xik, 1≤k≤Ki, is taken in the feasible range Ri, resulting in a grid 𝒳 of Πi=1N Ki sets of drilling parameters. For each set X̂ of drilling parameters on the grid 𝒳, a drilling performance G(X̂) is determined, for instance, in a similar fashion as in Step 207, and optimum drilling parameters are defined as the set X̂* such that:











\[ \text{for all } \hat{X} \in \mathcal{X}, \quad G(\hat{X}) \leq G(\hat{X}^*). \tag{EQ. 8} \]







In Step 211, the one or more drilling parameters Xi, 1≤i≤N, are then adjusted to be equal to the set X̂* satisfying EQ. 8, as illustrated in the sketch below.
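A minimal Python sketch of this grid search is given below; the feasible ranges, sample counts and stand-in performance function are placeholders:

import itertools
import numpy as np

# Feasible ranges R_i sampled with K_i values each (placeholders).
wob_values = np.linspace(5.0, 30.0, 11)     # K_1 = 11 candidate WOB values
rpm_values = np.linspace(60.0, 200.0, 15)   # K_2 = 15 candidate RPM values

def g(x):
    # Hypothetical stand-in for the drilling performance G.
    wob, rpm = x
    return wob * rpm - 0.002 * (wob * rpm) ** 1.5

# Evaluate G on the K_1 * K_2 grid and keep the best parameter set.
best = max(itertools.product(wob_values, rpm_values), key=g)
print(best)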


In case the determination in Step 209, of whether the drilling performance is optimum, is performed by solving an optimization problem, the optimum values for one or more drilling parameters are the values that solve the optimization problem, in a sense defined in the description of Step 209, in accordance with one or more embodiments. In Step 211, the drilling parameters are then adjusted to be equal to the optimum values for the one or more drilling parameters obtained in Step 209. For example, if the determination of whether the drilling performance is optimum is performed by solving the optimization problem in EQ. 5, optimum values for the one or more drilling parameters may be defined by X* in EQ. 5, XQ in EQ. 6, or Xq* in EQ. 7.


If the drilling parameters are adjusted in Step 211, the drilling operation continues, controlled by the adjusted drilling parameters, and the Steps 203-209 are repeated while the drilling operation continues.



FIG. 3 depicts an example of a system for computing the UCS of a subsurface, using AI, while drilling through the subsurface. For concision, a full description of components and/or elements depicted in FIG. 3 is not provided anew for those components and/or elements that have been previously described with reference to the preceding figures. As part of process data (303), logging-while-drilling (LWD) data (305) are obtained in real-time during a drilling operation and passed on as input to a computational model (309). Examples of LWD data (305) include a gamma ray, a bulk formation density, a thermal neutron porosity, a photoelectric factor, a resistivity and a pressure of the portion of the subsurface being perforated. In this disclosure, a component of LWD data is defined as a piece of LWD data, such as a gamma ray, a bulk formation density, a thermal neutron porosity, a photoelectric factor, a resistivity or a pressure of the portion of the subsurface being perforated.


In this disclosure, a C profile on a time interval [T1, T2], for a certain quantity C, such as a component of the LWD data, is defined as a vector {Cn such that 0≤n≤N}, where each Cn, called a sample of C at time tn, is a value of C obtained at time tn, and where the times tn, for a sequence of consecutive integers n∈[0, N], for a given integer N≥1, discretize the interval [T1, T2] so that t0=T1, tN=T2 and tn-1<tn, for all n∈[1, N]. The times tn, for n∈[0, N], are called a sampling of the interval [T1, T2]. Furthermore, denoting Z=0 as the origin of the well being drilled, and Zt as the distance along the wellbore from the origin to the point in space of the subsurface that is being drilled at a time t, there is no distinction between a value of the quantity C obtained at time t and the value of the quantity C obtained at distance Zt. Therefore, Cn denotes, interchangeably, the value of C obtained at time tn and the value of C obtained at distance Ztn. Furthermore, a profile on a time interval [T1, T2] and a profile on the depth interval [ZT1, ZT2] denote the same profile and may be referred to interchangeably.


In one or more embodiments, the LWD data are obtained at discrete times during the drilling operation, rather than continuously, and therefore the obtained components of the LWD data form profiles whose samples are obtained at those discrete times. Although, at any time, full profiles are available on the time interval from the first discrete time to the latest discrete time, shorter profiles may be extracted from the full profiles on time intervals included in that interval. Also, if the LWD data have multiple components, the term “LWD profiles” refers to the set of profiles for all the components of the LWD data. For example, if the LWD data is composed of a gamma ray and a thermal neutron porosity, the LWD profiles on a time interval [T1, T2] refer to the set composed of a gamma ray profile on the time interval [T1, T2] and a thermal neutron porosity profile on the time interval [T1, T2].


In FIG. 3, the computational model (309) includes an AI model (311) that receives the LWD data (305) as input and returns, as output, a sonic compressional wave propagation slowness (DTC) (307) of the rock being drilled. The DTC (307) is then included in the process data (303). The AI model (311) can be of various types. Examples of AI models that may be used in Step 205 to compute the real-time DTC (307) of the subsurface from LWD data include supervised machine learning models, such as regression models, decision trees, random forests, and neural networks, such as fully connected neural networks (DNN) or convolutional neural networks (CNN). In one or more embodiments, the AI model (311) may include a natural language processing (NLP) model. Examples of NLP models that may be included in the AI model (311) include, but are not limited to, a recurrent neural network (RNN) model, a long short-term memory (LSTM) model, a gated recurrent unit (GRU) model, and a model within a family of encoder-decoder models, such as a transformer model.


The AI model (311) may be configured in many ways. In one or more embodiments, the AI model (311) receives the LWD data acquired at a given time, T, and returns, as output, a prediction of the DTC at time T. In one or more embodiments, denoting 0 as the first recording time, the AI model (311) receives profiles for all the components of the LWD data on a time interval [T0, T], where T0 and T are two times at which the LWD data are obtained, such that 0≤T0<T, and returns, as output, a prediction of the DTC at time T. In one or more embodiments, the AI model (311) receives profiles for all the components of the LWD data on [T0, T] and returns, as output, a prediction of a DTC profile on [T0′, T], where T0′ is any number such that T0≤T0′<T. The value of the DTC at time T is then extracted as the last sample of the DTC profile on [T0′, T]. A notable example is obtained by considering the full profiles, that is, by setting T0=T0′=0. Note that in this scenario, in which the AI model (311) receives profiles for all the components of the LWD data as input and returns a DTC profile as output, the sampling of the DTC profile may be different from the sampling of LWD profiles. Also, in some embodiments, the computational model (309) includes a pre-processing step that converts the received LWD data into pre-processed LWD data, the pre-processing step including converting the received LWD data into a format that is suitable as an input for the AI model (311). In some embodiments in which the AI model (311) is configured to receive LWD profiles, the pre-processing step includes re-sampling LWD profiles to a sampling for which the AI model (311) is configured to receive the LWD profiles.
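As a hedged illustration of the re-sampling pre-processing step, the following Python sketch linearly re-samples a single LWD component profile, recorded at irregular times, onto the uniform sampling assumed for the AI model (311); the times, gamma-ray values and function name are hypothetical:

import numpy as np

def resample_profile(t_raw, c_raw, t_model):
    # Linearly re-sample a profile {C_n at t_n} onto the model's sampling.
    return np.interp(t_model, t_raw, c_raw)

t_raw = np.array([0.0, 3.1, 7.4, 9.8, 15.0])        # irregular times (s)
gr_raw = np.array([62.0, 70.5, 88.0, 91.2, 84.0])   # gamma ray samples (API)
t_model = np.linspace(0.0, 15.0, 16)                # uniform 1 s sampling
print(resample_profile(t_raw, gr_raw, t_model))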


Before being put into production and applied to current data as predictors, AI models typically involve a training phase and a testing phase, both using previously acquired data. It is noted that supervised machine-learned models require examples of input and associated output (i.e., target) pairs in order to learn a desired functional mapping. As such, in one or more embodiments, the AI model (311) is trained using known data from existing wells. A dataset of examples may be constructed, each example including an input and an associated output (i.e., target) for a distinct existing well. In some embodiments, the input of an example is an LWD sample, that is, the set of all the components of the LWD data at a given time, that are known for the existing well, and the associated output is a value of the DTC at the same time, that is known for the existing well. In other embodiments, the input of an example is a set of LWD profiles on a time interval of the form [T0, T], that are known for the existing well, where T0 and T are two times such that T0<T. In such scenarios, the associated output may be defined as a value of the DTC at time T, that is known for the existing well, or a DTC profile on a time interval [T0′, T], that is known for the existing well, where T0′ is any number such that T0≤T0′<T. In one or more embodiments, the dataset is split into a training dataset and a testing dataset, the example input and associated output pairs of the training dataset being called the training examples, and the example input and associated output pairs of the testing dataset being called the testing examples. It is common practice to split the dataset in a way that the training dataset contains more examples than the testing dataset, as sketched below. Because data splitting is a common practice when training and testing a machine-learned model, it is not described in detail in this disclosure. One with ordinary skill in the art will recognize that any data splitting technique may be applied to the dataset without departing from the scope of this disclosure. The AI model (311) is trained as a functional mapping that optimally matches the inputs of the training examples to the associated outputs of the training examples.
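A common way to realize such a split is sketched below in Python; the 80/20 proportion is a typical convention, not a requirement of this disclosure, and the synthetic arrays stand in for real example pairs:

import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # placeholder LWD inputs
y = rng.normal(loc=90.0, scale=10.0, size=1000)     # placeholder DTC targets

# A typical 80 % training / 20 % testing split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)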


Once trained, the AI model (311) is validated by computing a metric for the testing examples, in accordance with one or more embodiments. Examples of metrics that may be used to validate the AI model (311) include any scoring or comparison function known in the art, including but not limited to: a mean square error (MSE), a root mean square error (RMSE), and a coefficient of determination (R²), defined as:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_{i}-y_{i}\right|^{2},\qquad\text{EQ. 9}$$

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_{i}-y_{i}\right|^{2}},\qquad\text{EQ. 10}$$

$$R^{2}=1-\frac{\displaystyle\sum_{i=1}^{n}\left|\hat{y}_{i}-y_{i}\right|^{2}}{\displaystyle\sum_{i=1}^{n}\left|y_{i}-\bar{y}\right|^{2}}.\qquad\text{EQ. 11}$$
In EQ. 9, EQ. 10, and EQ. 11, n denotes the number of testing examples, each testing example being defined as an input-output pair, (xi, yi), for i=1, . . . , n, in which xi is the input, yi is the output associated with xi,

$$\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_{i},$$
and ŷi denotes the value of the DTC predicted by the AI model (311) when receiving xi as input, for i=1, . . . , n. The notation |·| denotes a norm that can be applied to the object in between. For example, if the outputs are real-valued, such as a DTC value at a given time, the notation |·| may denote an absolute value. If the outputs are vector-valued, such as a DTC profile on a time interval, the notation |·| may denote an l2 norm.
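The three metrics may be computed directly, as in the following sketch (real-valued outputs are assumed, so |·| is the absolute value; for profile outputs, a per-example l2 norm would take its place):

    import numpy as np

    def mse(y_hat, y):   # EQ. 9
        return np.mean(np.abs(y_hat - y) ** 2)

    def rmse(y_hat, y):  # EQ. 10
        return np.sqrt(mse(y_hat, y))

    def r2(y_hat, y):    # EQ. 11
        return 1.0 - np.sum(np.abs(y_hat - y) ** 2) \
                   / np.sum(np.abs(y - np.mean(y)) ** 2)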


Some NLP models, such as the family of text-predicting encoder-decoder models, including transformer models, are configured as data generators. They receive and encode an input by using the encoder, and then, with the decoder, generate an output, one sample at a time, until stopped by a stopping criterion. In scenarios in which the AI model (311) returns a DTC profile as output using such NLP models, the AI model (311) may be configured to stop generating the DTC profile at a sample occurring in the future. In turn, the AI model (311), in addition to predicting a DTC profile in real-time, may further offer a prediction of the DTC at future times, that is, the DTC of the subsurface that is to be drilled but has not been drilled yet. Predicting the DTC, and thus the UCS, at future times may be used to anticipate which drilling parameters to tune in the present to optimize the drilling performance at future times.
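A generation loop of this kind may be sketched as follows; encoder, decoder, and target_len are hypothetical placeholders standing in for whichever encoder-decoder model and stopping criterion are used:

    def generate_dtc_profile(encoder, decoder, lwd_profiles, target_len):
        # Encode the LWD profiles once, then decode the DTC one sample at a
        # time until the profile reaches target_len samples; choosing
        # target_len beyond the current time yields a forecast of the DTC
        # for the subsurface that has not been drilled yet.
        context = encoder(lwd_profiles)
        dtc = []
        while len(dtc) < target_len:
            dtc.append(decoder(context, dtc))  # generate the next DTC sample
        return dtc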


The DTC (307), computed in real-time, is passed, as input, to a physical model (313) that outputs a UCS (315) in real-time. Summarizing FIG. 3: at any time T at which the LWD data (305) is acquired, the LWD data (305) is passed on to the computational model (309) as input. The computational model (309) conveys the LWD data (305) at time T, or LWD data profiles on a time interval including T, to the part of the computational model (309) that is the AI model (311), which returns, as output, the DTC (307) at time T. The DTC (307) at time T, and possibly one or more components of the LWD data at time T, is passed on as input to the physical model (313), which returns, as output, the UCS at time T. As discussed in the description of FIG. 2, the physical model (313) may be defined in many ways. Examples of physical models that compute a value of the UCS from values of the DTC and/or LWD data are included in Table I.
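As one concrete instance, the exponential physical model quoted later for FIG. 9 (formula 1 in Table I, in the form 143000·exp(−0.035·DTC)) may be sketched as below; the coefficients are formation-specific, and the defaults shown are illustrative assumptions only:

    import numpy as np

    def ucs_from_dtc(dtc, a=143000.0, b=-0.035):
        # Physical-model sketch of the form UCS = a * exp(b * DTC), with DTC
        # in microseconds per foot and UCS in psi; a and b are calibrated
        # per formation.
        return a * np.exp(b * np.asarray(dtc))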



FIG. 4 depicts a variant of the system in FIG. 3, for computing the UCS of a subsurface, using AI, while drilling through the subsurface. As part of the process data (303), the logging-while-drilling (LWD) data (305) are obtained in real-time during a drilling operation and passed on, as input, to the computational model (309). In FIG. 4, the AI model (311) further requires petrophysical data (405) as input, in addition to the LWD data (305). The petrophysical data (405) are computed in real-time by a petrophysical model (403). Examples of petrophysical data (405) that may be required by the AI model as input include, but are not limited to, a total porosity (PHIT) of the subsurface and a volume of hydrocarbon gas (VOL_GAS) of the subsurface. An example of a petrophysical model used to compute the total porosity is a ratio model, defined as ϕ=Vƒ/Vtotal, where Vtotal is a volume of any portion of the subsurface and Vƒ is a volume of fluid in the same portion of the subsurface. In one or more embodiments, the physical model (313) also requires some of the petrophysical data (405) as input, as indicated by formulas 9, 16, 17, 21, 22, and 23 in Table I, which express the UCS as a function of the total porosity.


In turn, the system in FIG. 4 includes four steps. First, the LWD data (305) is obtained in real-time during the drilling operation. Second, the petrophysical model (403) computes the petrophysical data (405) from the LWD data (305). Third, the AI model (311) computes the DTC (307) from the LWD data (305) and the petrophysical data (405). In the last step, the physical model (313) computes the UCS (315) from the DTC (307), and possibly the LWD data (305) or the petrophysical data (405). In the specific embodiment in FIG. 4, the computational model (309) includes the petrophysical model (403), the AI model (311), and the physical model (313).
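The four steps may be summarized in a single pipeline sketch, with the three model components passed in as callables (the names below are placeholders, not part of the disclosure):

    def compute_ucs_realtime(lwd, petrophysical_model, ai_model, physical_model):
        petro = petrophysical_model(lwd)        # e.g., PHIT = V_fluid / V_total
        dtc = ai_model(lwd, petro)              # AI model predicts the DTC
        return physical_model(dtc, lwd, petro)  # physical model maps DTC to UCS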


The block diagram in FIG. 5 depicts a system for optimizing a performance of a drilling operation by using the UCS in real-time, computed using AI. Some elements of the block diagram in FIG. 5 are discussed, at least in part, in the description of FIGS. 1-4 and, for simplicity, will not be re-explained here. The system in FIG. 5 includes a drilling system (510), a data acquisition system (520), a database (530), and a drilling management system (540). The drilling operation of a subsurface is performed by the drilling system (510), which includes the derrick (118), the drill string (108), and the drill bit (112). The drilling operation is controlled by a set of drilling parameters (513) that can be tuned to control the drilling operation. As stated in other paragraphs of this disclosure, the drilling parameters (513) may include one or more of a weight on bit (WOB), a drill string rotational speed (RPM), a torque of the drill bit, a mud flow rate, and an inclination of the wellbore. Generally, the drilling parameters (513) have a critical influence on the performance of the drilling operation, such as on the average ROP of the drilling operation.


The data acquisition system (520) includes the LWD tool (116), described in the description of FIG. 1, that measures the LWD data (305) in real-time. Examples of the LWD data (305) include a gamma ray, a bulk formation density, a thermal neutron porosity, a photoelectric factor, a resistivity, and a pressure of the portion of the subsurface being perforated. The acquired LWD data (305) is conveyed to the drilling management system (540). The drilling management system (540) includes the computational model (309) that, as described in FIG. 3, receives the LWD data (305) as input and returns, as output, the real-time UCS (315) of the subsurface. In one or more embodiments, the computation of the UCS (315) by the computational model (309) includes two steps: first, the AI model (311) is used to compute the DTC of the subsurface in real-time, from the LWD data (305), and then, the physical model (313) is used to compute the UCS (315) from the DTC, and possibly the LWD data (305). In some embodiments, as described in FIG. 4, the computational model (309) further includes the petrophysical model (403), which produces petrophysical data that are further used as input to the AI model (311) to compute the DTC. In further embodiments, the petrophysical data computed by the petrophysical model (403) are also used as input to the physical model (313), as illustrated by the example formulas 9, 16, 17, 21, 22, and 23 in Table I, which connect the UCS with the total porosity of the subsurface. The drilling management system (540) further includes a computer (543) on which the computational model (309) is hosted and run.


The AI model (311) is connected to both the data acquisition system (520) and the database (530). The database (530) contains previously acquired LWD data from known wells (531) and DTC from known wells (533), which may be used to train and test the AI model (311). Note that during the drilling operation, the AI model (311) may be re-trained or fine-tuned using the database (530). In one or more embodiments, the DTC computed by the AI model (311) is validated through one or more tests after drilling (e.g., well log analysis). In instances where the DTC is directly measured using a post-drilling test, the measured DTC values may be appended to the DTC from known wells (533) in the database (530), and the associated LWD data (305) may likewise be appended to the LWD data from known wells (531) in the database (530). In this way, newly acquired data may also be used to train, re-train, or fine-tune the AI model (311). Training the AI model (311) is described in greater detail later in the instant disclosure.


Returning to the drilling management system (540), the UCS computed by the computational model (309), together with the LWD data (305), may be used to assess and optimize the drilling performance of the drilling operation. In one or more embodiments, the drilling performance is a prediction of the average ROP of the drilling operation from start to completion, such as the instantaneous average ROP (IAROP), as described in the description of FIG. 2. The average ROP (AROP) differs from the ROP in that the ROP is defined as the rate at which the drill bit (112) penetrates the subsurface at a given instant, while the AROP is defined as the length of the wellbore after drilling divided by the time of completion of the drilling operation. The AROP thus reflects the ROP and other factors, such as disruptions of the drilling operation. The IAROP, at a given instant during the drilling operation, is defined as an estimate, at that instant, of the AROP that will result upon completion of the drilling operation. Examples of formulas for the IAROP are given by EQ. 1 and EQ. 2. It is noted that EQ. 1 and EQ. 2 take into account the bit-wear, which may result in temporary disruptions of the drilling operation. In one or more embodiments, the drilling performance, such as the IAROP, depends on the drilling parameters (513), which are tunable, as well as the LWD data (305) and the UCS (315).


Using an optimizer (541), a determination is made whether the drilling parameters (513) are optimum to maximize the drilling performance. If the drilling parameters (513) are determined to be optimum, the drilling operation continues using the drilling parameters (513). If the drilling parameters (513) are considered non-optimum, one or more drilling parameters within the drilling parameters (513) are adjusted by the control system (162) through the adjustment action (550), and the drilling operation continues with the adjusted drilling parameters, which are assigned as the drilling parameters (513). In one or more embodiments, the control system (162) may communicate with external entities. For example, in situations in which the one or more drilling parameters to be adjusted include the inclination of the wellbore, the control system may communicate with a geosteering operation center to obtain an assessment of the feasibility, requirements, and consequences of such an adjustment.


In one or more embodiments, the drilling performance, F(X, UCS, P), is expressed as the result of applying a mathematical function, F, to one or more drilling parameters, X, within the drilling parameters (513), the LWD data (305), P, and the UCS. In such scenarios, as shown by EQ. 4, a function G may be defined as an expression of the drilling performance as a function of only the tunable variables among X, UCS, and P, namely X: G(X)=F(X, UCS, P). Then, in some embodiments, the optimizer (541) may be defined as any method that solves the maximization problem in EQ. 5, exactly or in an approximate sense, such as EQ. 7. Embodiments for such an optimizer are described in the description of FIG. 2. The optimizer (541) may be hosted and run on the computer (543) and return both a determination whether the drilling parameters (513) are optimum, and optimum drilling parameters defined as the X that maximizes G(X) in the sense of EQ. 5 or EQ. 7. In other embodiments, the drilling performance is obtained based on qualitative prior experience, and the optimizer (541) returns a determination whether the drilling parameters (513) are optimum and, in case they are not optimum, determines adjustments to be made to one or more drilling parameters in order to optimize the drilling performance, also based on prior experience. In some embodiments, the computer (543) is further configured to make the determined adjustments to the one or more drilling parameters, within the drilling parameters (513), that need to be adjusted in order to optimize the drilling performance.
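Assuming G is available as an evaluable function of X, one possible optimizer is a bound-constrained numerical solver, sketched below with SciPy; the solver choice and the bounds are assumptions, not prescribed by the disclosure:

    import numpy as np
    from scipy.optimize import minimize

    def optimize_drilling_parameters(G, x0, bounds):
        # Maximize G(X) = F(X, UCS, P), with UCS and P held fixed, by
        # minimizing -G(X) over the tunable parameters X.
        result = minimize(lambda x: -G(x), x0=np.asarray(x0, dtype=float),
                          bounds=bounds)
        return result.x, -result.fun  # candidate optimum X and its performance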


As stated, the computational model as defined in Step 205 of the method in FIG. 2 may include artificial intelligence, which in some embodiments includes the AI model (311) in FIGS. 3 and 4. Artificial intelligence, broadly defined, is the extraction of patterns and insights from data. The phrases “artificial intelligence,” “machine learning,” “deep learning,” and “pattern recognition” are often conflated, interchanged, and used synonymously throughout the literature. This ambiguity arises because the field of “extracting patterns and insights from data” was developed simultaneously and disjointedly among a number of classical arts like mathematics, statistics, and computer science. For consistency, the term artificial intelligence (AI) will be adopted herein; however, one skilled in the art will recognize that the concepts and methods detailed hereafter are not limited by this choice of nomenclature.


AI model types may include, but are not limited to, generalized linear models, Bayesian regression, random forests, and deep models such as neural networks, convolutional neural networks, and recurrent neural networks. AI model types, whether they are considered deep or not, are usually associated with additional “hyperparameters” which further describe the model. For example, hyperparameters providing further detail about a neural network may include, but are not limited to, the number of layers in the neural network, choice of activation functions, inclusion of batch normalization layers, and regularization strength. Commonly, in the literature, the selection of hyperparameters surrounding an AI model is referred to as selecting the model “architecture.” Once an AI model type and hyperparameters have been selected, the AI model is trained to perform a task.


A brief discussion and summary of some machine-learned model types is provided herein. However, one with ordinary skill in the art will recognize that a full discussion of every type of machine-learned model applicable to the methods and systems disclosed herein is neither possible nor required to describe the AI model (311). Consequently, the following discussion of machine-learned models is provided by way of introduction to the art of machine-learning and does not impose a limitation on the present disclosure.


A first, notable example of an AI model that may be included in the computational model in Step 205 in FIG. 2, such as the AI model (311), is a neural network (NN), such as a convolutional neural network (CNN) or a recurrent neural network (RNN). A cursory introduction to a NN is provided herein. However, it is noted that many variations of a NN exist. Therefore, one with ordinary skill in the art will recognize that any variation of the NN (or any other AI model) may be employed without departing from the scope of this disclosure. Further, it is emphasized that the following discussion of a NN is a basic summary and should not be considered limiting.


A diagram of a neural network is shown in FIG. 6. At a high level, a neural network (600) may be graphically depicted as being composed of nodes (602), where each circle represents a node, and edges (604), shown here as directed lines. The nodes (602) may be grouped to form layers (605). FIG. 6 displays four layers (608, 610, 612, 614) of nodes (602) where the nodes (602) are grouped into columns; however, the grouping need not be as shown in FIG. 6. The edges (604) connect the nodes (602). Edges (604) may connect, or not connect, to any node(s) (602) regardless of which layer (605) the node(s) (602) is in. That is, the nodes (602) may be sparsely and residually connected. A neural network (600) will have at least two layers (605), where the first layer (608) is considered the “input layer” and the last layer (614) is the “output layer.” Any intermediate layer (610, 612) is usually described as a “hidden layer.” A neural network (600) may have zero or more hidden layers (610, 612), and a neural network (600) with at least one hidden layer (610, 612) may be described as a “deep” neural network or as a “deep learning method.” In general, a neural network (600) may have more than one node (602) in the output layer (614). In this case the neural network (600) may be referred to as a “multi-target” or “multi-output” network.


Nodes (602) and edges (604) carry additional associations. Namely, every edge is associated with a numerical value. The edge numerical values, or even the edges (604) themselves, are often referred to as “weights” or “parameters.” While training a neural network (600), numerical values are assigned to each edge (604). Additionally, every node (602) is associated with a numerical variable and an activation function. Activation functions are not limited to any functional class, but traditionally follow the form

$$A=f\left(\sum_{i\,(\text{incoming})}\left[(\text{node value})_{i}\cdot(\text{edge value})_{i}\right]\right),\qquad\text{EQ. 12}$$
where i is an index that spans the set of “incoming” nodes (602) and edges (604) and ƒ is a user-defined function. Incoming nodes (602) are those that, when the neural network (600) is viewed or depicted as a directed graph (as in FIG. 6), have directed arrows that point to the node (602) where the numerical value is being computed. Some functions for ƒ may include the linear function ƒ(x)=x, the sigmoid function

$$f(x)=\frac{1}{1+e^{-x}},$$
and rectified linear unit function ƒ(x)=max (0, x), however, many additional functions are commonly employed. Every node (602) in a neural network (600) may have a different associated activation function. Often, as a shorthand, activation functions are described by the function ƒ by which it is composed. That is, an activation function composed of a linear function ƒ may simply be referred to as a linear activation function without undue ambiguity.
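EQ. 12 and the activation functions named above may be rendered as the following sketch:

    import numpy as np

    def linear(x):
        return x

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):
        return np.maximum(0.0, x)

    def node_value(incoming_values, edge_values, f=relu):
        # EQ. 12: the weighted sum of incoming node values and edge values,
        # passed through the user-defined function f.
        return f(np.dot(incoming_values, edge_values))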


When the neural network (600) receives an input, the input is propagated through the network according to the activation functions and incoming node (602) values and edge (604) values to compute a value for each node (602). That is, the numerical value for each node (602) may change for each received input. Occasionally, nodes (602) are assigned fixed numerical values, such as the value of 1, that are not affected by the input or altered according to edge (604) values and activation functions. Fixed nodes (602) are often referred to as “biases” or “bias nodes” (606), displayed in FIG. 6 with a dashed circle.


In some implementations, the neural network (600) may contain specialized layers (605), such as a normalization layer, or additional connection procedures, like concatenation. One skilled in the art will appreciate that these alterations do not exceed the scope of this disclosure.


As noted, the training procedure for the neural network (600) comprises assigning values to the edges (604). To begin training, the edges (604) are assigned initial values. These values may be assigned randomly, assigned according to a prescribed distribution, assigned manually, or by some other assignment mechanism. Once edge (604) values have been initialized, the neural network (600) may act as a function, such that it may receive inputs and produce an output. As such, at least one input is propagated through the neural network (600) to produce an output. Training data is provided to the neural network (600). Generally, training data consists of pairs of inputs and associated targets. The targets represent the “ground truth,” or the otherwise desired output, upon processing the inputs. In the context of the AI model (311), an input is an LWD sample that is known from an existing well at a given time T, or an LWD profile on a time interval ending at T for the existing well, that can be obtained, for example, from the LWD data from known wells (531). An output, or target, is a value of the DTC for the existing well at the same time T, or a DTC profile on a time interval ending at T for the existing well, that can be obtained, for example, from the DTC from known wells (533). During training, the neural network (600) processes at least one input from the training data and produces at least one output. Each neural network (600) output is compared to its associated input data target. The comparison of the neural network (600) output to the target is typically performed by a so-called “loss function,” although other names for this comparison function, such as “error function,” “misfit function,” and “cost function,” are commonly employed. Many types of loss functions are available, such as the mean-squared-error function; however, the general characteristic of a loss function is that the loss function provides a numerical evaluation of the similarity between the neural network (600) output and the associated target. The loss function may also be constructed to impose additional constraints on the values assumed by the edges (604), for example, by adding a penalty term, which may be physics-based, or a regularization term. Generally, the goal of a training procedure is to alter the edge (604) values to promote similarity between the neural network (600) output and associated target over the training data. Thus, the loss function is used to guide changes made to the edge (604) values, typically through a process called “backpropagation.”


While a full review of the backpropagation process exceeds the scope of this disclosure, a brief summary is provided. Backpropagation consists of computing the gradient of the loss function with respect to the edge (604) values. The gradient indicates the direction of change in the edge (604) values that results in the greatest change to the loss function. Because the gradient is local to the current edge (604) values, the edge (604) values are typically updated by a “step” in the direction indicated by the gradient. The step size is often referred to as the “learning rate” and need not remain fixed during the training process. Additionally, the step size and direction may be informed by previously seen edge (604) values or previously computed gradients. Such methods for determining the step direction are usually referred to as “momentum” based methods.
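A single such update may be sketched as follows; the learning rate and momentum coefficient are illustrative hyperparameter values, and the gradient is assumed to have been computed by backpropagation:

    import numpy as np

    def gradient_step(edges, grad, velocity, learning_rate=0.01, momentum=0.9):
        # Step the edge values against the gradient of the loss; the velocity
        # term blends in previously computed gradients ("momentum" methods).
        velocity = momentum * velocity - learning_rate * grad
        return edges + velocity, velocity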


Once the edge (604) values have been updated, or altered from their initial values, through a backpropagation step, the neural network (600) will likely produce different outputs. Thus, the procedure of propagating at least one input through the neural network (600), comparing the neural network (600) output with the associated target with a loss function, computing the gradient of the loss function with respect to the edge (604) values, and updating the edge (604) values with a step guided by the gradient, is repeated until a termination criterion is reached. Common termination criteria are: reaching a fixed number of edge (604) updates, otherwise known as an iteration counter; a diminishing learning rate; noting no appreciable change in the loss function between iterations; reaching a specified performance metric as evaluated on the data or a separate hold-out data set. Once the termination criterion is satisfied, and the edge (604) values are no longer intended to be altered, the neural network (600) is said to be “trained”.


Returning to the CNN, a structural grouping, or group, of weights is herein referred to as a “filter.” The number of weights in a filter is typically much less than the number of inputs. In a CNN, the filters can be thought of as “sliding” over, or convolving with, the inputs to form an intermediate output or intermediate representation of the inputs which still possesses a structural relationship. Like unto the neural network (600), the intermediate outputs are often further processed with an activation function. Many filters may be applied to the inputs to form many intermediate representations. Additional filters may be formed to operate on the intermediate representations, creating more intermediate representations. This process may be repeated as prescribed by a user. There is a “final” group of intermediate representations, wherein no more filters act on these intermediate representations. In some instances, the structural relationship of the final intermediate representations is ablated, a process known as “flattening.” The flattened representation may be passed to a neural network (600) to produce a final output. Note that, in this context, the neural network (600) is still considered part of the CNN. Like unto a neural network (600), a CNN is trained, after initialization of the filter weights and the edge (604) values of the internal neural network (600), if present, with the backpropagation process in accordance with a loss function.
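The “sliding” of a one-dimensional filter may be sketched as follows (no padding and unit stride are assumptions; CNN layers commonly offer both as options):

    import numpy as np

    def conv1d(signal, filt):
        # Slide the filter over the input; each output sample is the dot
        # product of the filter with one window of the input.
        n = len(signal) - len(filt) + 1
        return np.array([np.dot(signal[i:i + len(filt)], filt)
                         for i in range(n)])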


Another example of a machine-learned model type is a decision tree. As will be described, decision trees often act as components, or sub-models, of other types of machine-learned models, such as random forests and gradient boosted machines. A decision tree is composed of nodes. A decision is made at each node such that data present at the node are segmented. Typically, at each node, the data at said node are split into two parts, or segmented bimodally; however, multimodal segmentation is possible. The segmented data can be considered another node and may be further segmented. As such, a decision tree represents a sequence of segmentation rules. The segmentation rule (or decision) at each node is determined by an evaluation process. The evaluation process usually involves calculating which segmentation scheme results in the greatest homogeneity or reduction in variance in the segmented data. However, a detailed description of this evaluation process, or other potential segmentation scheme selection methods, is omitted for brevity and does not limit the scope of the present disclosure.


Further, if at a node in a decision tree, the data are no longer to be segmented, that node is said to be a “leaf node.” Commonly, values of data found within a leaf node are aggregated, or further modeled, such as by a linear model, so that a leaf node represents a class or an aggregated value (e.g., an average). A decision tree can be configured in a variety of ways, such as, but not limited to, choosing the segmentation scheme evaluation process, limiting the number of segmentations, and limiting the number of leaf nodes. Generally, when the number of segmentations or leaf nodes in a decision tree is limited, the decision tree is said to be a “weak learner”.
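By way of a sketch, the variance-reduction evaluation process mentioned above, for a bimodal split on a single feature, may be implemented as follows (x and y are assumed to be NumPy arrays of equal length):

    import numpy as np

    def best_split(x, y):
        # Try every candidate threshold on feature x and keep the one that
        # minimizes the summed within-segment variance of the target y.
        best_t, best_score = None, np.inf
        for t in np.unique(x)[:-1]:
            left, right = y[x <= t], y[x > t]
            score = len(left) * left.var() + len(right) * right.var()
            if score < best_score:
                best_t, best_score = t, score
        return best_t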


As stated, another example of a machine-learned model type, based on decision trees, is a random forest model, which may operate as a supervised machine learning algorithm performing a regression, for example, to predict the DTC from which the UCS is derived. A random forest model is an ensemble machine learning algorithm that uses multiple decision trees to make predictions. The architecture of random forest models is unique in that it combines multiple decision trees to reduce the risk of overfitting and improve the overall generalization of the model and the accuracy of predictions, in comparison to individual trees. This is based on the idea that multiple “weak learners” can combine to create a “strong learner.” Each individual tree is considered a “weak learner,” while the group of trees functioning together is regarded as a “strong learner.” This approach allows random forests to effectively capture complex relationships and interactions between features, resulting in better predictive performance.


Each of the multiple decision trees operates on a different subset of the same training dataset, and the results are averaged to improve the overall accuracy of the predictions. In other words, instead of relying on a single decision tree, the random forest gathers predictions from each tree and makes a final prediction based on the average of these predictions for regression (or the majority vote for classification).
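A usage sketch with scikit-learn, on synthetic stand-in data (the data shapes and hyperparameter values are illustrative assumptions):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))  # toy LWD samples (6 components)
    y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

    # Each tree fits a bootstrap subset; regression predictions are averaged.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)
    print(model.predict(X[:3]))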


As stated, another example of a machine-learned model type, based on decision trees, is a gradient boosted machine. Hereafter, a gradient boosted machine model using decision trees is referred to as a gradient boosted trees model. In most implementations, the decision trees from which a gradient boosted trees model is composed are weak learners. In a gradient boosted trees model, the decision trees are ensembled in series, wherein each decision tree makes a weighted adjustment to the output of the preceding decision trees in the series. The process of ensembling decision trees in series, and making weighted adjustments, to form a gradient boosted trees model is best illustrated by considering the training process of a gradient boosted trees model.


Training a gradient boosted trees model consists of the selection of segmentation rules for each node in each decision tree; that is, training each decision tree. Once trained, a decision tree is capable of processing data. For example, a decision tree may receive a data input (e.g., a set of LWD components). The data input is sequentially transferred to nodes within the decision tree according to the segmentation rules of the decision tree. Once the data input is transferred to a leaf node, the decision tree outputs the assigned class or aggregate value (e.g., a DTC value) of the associated leaf node.


Generally, training a gradient boosted trees model firstly consists of making a simple prediction (SP) for the target data (i.e., the DTC). The simple prediction (SP) may be the average target value over the training examples of a training dataset. The simple prediction (SP) is subtracted from the targets to form first residuals. The first decision tree in the series is created and trained, wherein the first decision tree attempts to predict the first residuals, forming first residual predictions. The first residual predictions from the first decision tree are scaled by a scaling parameter. In the context of gradient boosted trees, the scaling parameter is known as the “learning rate” (η). The learning rate is one of the hyperparameters governing the behavior of the gradient boosted trees model. The learning rate (η) may be fixed for all decision trees or may be variable or adaptive. The first residual predictions of the first decision tree are multiplied by the learning rate (η) and added to the simple prediction (SP) to form first predictions. The first predictions are subtracted from the targets to form second residuals. A second decision tree is created and trained using the data inputs and the second residuals as targets, such that it produces second residual predictions. The second residual predictions are multiplied by the learning rate (η) and are added to the first predictions, forming second predictions. This process is repeated recursively until a termination criterion is achieved.
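The recursion may be sketched directly, with scikit-learn decision trees standing in as the weak learners (the depth limit, tree count, and η values are illustrative assumptions):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_gbt(X, y, n_trees=100, eta=0.1, max_depth=3):
        # Start from the simple prediction SP (the mean target), then fit
        # each tree to the current residuals and add eta times its
        # predictions to the running prediction.
        sp = float(np.mean(y))
        pred = np.full(len(y), sp)
        trees = []
        for _ in range(n_trees):
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y - pred)
            pred = pred + eta * tree.predict(X)
            trees.append(tree)
        return sp, trees

    def predict_gbt(X, sp, trees, eta=0.1):
        # Sum the scaled residual predictions from every tree and add SP.
        return sp + eta * sum(tree.predict(X) for tree in trees)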


Many termination criteria exist and are not all enumerated here for brevity. Common termination criteria are terminating training when a pre-defined number of decision trees has been reached, or when improvement in the residuals is no longer observed.


Once trained, a gradient boosted trees model may make predictions using input data. To do so, the input data is passed to each decision tree, and each decision tree forms residual predictions. The residual predictions are multiplied by the learning rate (η), summed across every decision tree, and added to the simple prediction (SP) formed during training to produce the gradient boosted trees predictions.


One with ordinary skill in the art will appreciate that many adaptations may be made to gradient boosted trees models and that these adaptations do not exceed the scope of this disclosure. Some adaptations are algorithmic optimizations, efficient handling of sparse data, use of out-of-core computing, and parallelization for distributed computing. Commonly, when such adaptations are applied to a gradient boosted trees model, the model is known in the literature as XGBoost.
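A minimal usage sketch, assuming the xgboost package is installed (the data and hyperparameter values are illustrative only):

    import numpy as np
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    y = X @ rng.normal(size=6)

    model = XGBRegressor(n_estimators=300, learning_rate=0.1, max_depth=4)
    model.fit(X, y)
    print(model.predict(X[:3]))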



FIG. 7 depicts, generally, the flow of data through a trained gradient boosted trees model (702) in accordance with one or more embodiments. Input data (706) are passed to the gradient boosted trees model (702), composed of a plurality of decision trees (712). As such, the input data (706) are processed by each decision tree (712), and the output of each decision tree is collected, multiplied by the learning rate (η), summed, and added to the simple prediction (SP) established during training, forming an ensemble (714). The result of the ensemble (714) is returned as the gradient boosted trees model prediction (716). In the context of the current disclosure, the input data (706) is the set of components of the LWD data that is received by the AI model (311), and the gradient boosted trees model prediction (716) is the real-time DTC from which the real-time UCS of the subsurface being drilled is obtained.


The computations mentioned in this disclosure may be performed by a computer, such as the computer (543) in FIG. 5. In that regard, FIG. 8 depicts a block diagram of a computer (802) used to provide computational functionalities associated with described algorithms, methods, functions, processes, flows, and procedures as described in this disclosure, according to one or more embodiments. The illustrated computer (802) is intended to encompass any computing device such as a server, desktop computer, laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, one or more processors within these devices, or any other suitable processing device, including physical or virtual instances of the computing device, or both. Additionally, the computer (802) may include an input device, such as a keypad, keyboard, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the computer (802), including digital data, visual or audio information (or a combination thereof), or a GUI.


The computer (802) can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. In some implementations, one or more components of the computer (802) may be configured to operate within environments, including cloud-computing-based, local, global, or other environments (or a combination of environments).


At a high level, the computer (802) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (802) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).


The computer (802) can receive requests over the network (830) from a client application (for example, a client application executing on another computer (802)) and respond to the received requests by processing said requests in an appropriate software application. In addition, requests may also be sent to the computer (802) from internal users (for example, from a command console or by another appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.


Each of the components of the computer (802) can communicate using a system bus (803). In some implementations, any or all of the components of the computer (802), both hardware or software (or a combination of hardware and software), may interface with each other or the interface (804) (or a combination of both) over the system bus (803) using an application programming interface (API) (812) or a service layer (813) (or a combination of the API (812) and the service layer (813)). The API (812) may include specifications for routines, data structures, and object classes. The API (812) may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer (813) provides software services to the computer (802) or other components (whether or not illustrated) that are communicably coupled to the computer (802). The functionality of the computer (802) may be accessible for all service consumers using this service layer. Software services, such as those provided by the service layer (813), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or another suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer (802), alternative implementations may illustrate the API (812) or the service layer (813) as stand-alone components in relation to other components of the computer (802) or other components (whether or not illustrated) that are communicably coupled to the computer (802). Moreover, any or all parts of the API (812) or the service layer (813) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.


The computer (802) includes an interface (804). Although illustrated as a single interface (804) in FIG. 8, two or more interfaces (804) may be used according to particular needs, desires, or particular implementations of the computer (802). The interface (804) is used by the computer (802) for communicating with other systems in a distributed environment that are connected to the network (830). Generally, the interface (804) includes logic encoded in software or hardware (or a combination of software and hardware) and operable to communicate with the network (830). More specifically, the interface (804) may include software supporting one or more communication protocols associated with communications such that the network (830) or interface's hardware is operable to communicate physical signals within and outside of the illustrated computer (802).


The computer (802) includes at least one computer processor (805). Although illustrated as a single computer processor (805) in FIG. 8, two or more processors may be used according to particular needs, desires, or particular implementations of the computer (802). Generally, the computer processor (805) executes instructions and manipulates data to perform the operations of the computer (802) and any algorithms, methods, functions, processes, flows, and procedures as described in the instant disclosure.


The computer (802) also includes a memory (806) that holds data for the computer (802) or other components (or a combination of both) that can be connected to the network (830). The memory may be a non-transitory computer readable medium. For example, memory (806) can be a database storing data consistent with this disclosure. Although illustrated as a single memory (806) in FIG. 8, two or more memories may be used according to particular needs, desires, or particular implementations of the computer (802) and the described functionality. While memory (806) is illustrated as an integral component of the computer (802), in alternative implementations, memory (806) can be external to the computer (802).


The application (807) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (802), particularly with respect to functionality described in this disclosure. For example, application (807) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (807), the application (807) may be implemented as multiple applications (807) on the computer (802). In addition, although illustrated as integral to the computer (802), in alternative implementations, the application (807) can be external to the computer (802).


There may be any number of computers such as the computer (802) associated with, or external to, a computer system containing computer (802), wherein each computer (802) communicates over network (830). Further, the term “client,” “user,” and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (802), or that one user may use multiple computers such as the computer (802).



FIG. 9 depicts examples of profiles for the DTC, in μs/ft, and the UCS, in psi, obtained by using the method in this disclosure, for a drilling operation of the Pre-Khuff well in the HRDH field. The DTC profile (DTC_AI), in plot (905), was obtained by using the AI model (311). The resulting UCS profile (AI-Based UCS), in plot (911), was obtained from the predicted DTC profile in plot (905) by using the physical model (313) in the form 143000 exp(−0.035·DTC), as in formula 1 in Table I. FIG. 9 further includes profiles for LWD data acquired during the drilling operation. Plot (903) features a gamma ray profile measured in gAPI (GR); plot (905) features a bulk formation density profile measured in g/cm3 (RHOB) and a thermal neutron porosity profile (NPHI); plot (907) features a resistivity profile measured in ohm·m (RES). Plot (909) features a volume fraction of a lithology (Litho), as an example of petrophysical data extracted from the LWD data by using a petrophysical model, such as the petrophysical model (403).


Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.

Claims
1. A method, comprising:
  obtaining process data while conducting a drilling operation through a subsurface, wherein the drilling operation is controlled by a set of drilling parameters;
  determining, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface;
  determining, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation; and
  upon determining that the drilling performance is not optimum, adjusting one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.

2. The method of claim 1, wherein:
  the set of drilling parameters comprises one or more of: a weight-on-bit on a drill bit, a torque of the drill bit, and a mud flow rate;
  the drilling performance is based on one or more of: a rate of penetration, and an estimate of a drill bit life cycle; and
  optimizing the drilling performance comprises one or more of: maximizing the rate of penetration, and minimizing bit-runs.

3. The method of claim 1, wherein the process data comprise logging-while-drilling (LWD) data, comprising one or more of: a gamma ray of the subsurface, a bulk formation density of the subsurface, and a thermal neutron porosity of the subsurface.

4. The method of claim 3, wherein:
  the process data further comprise a sonic compressional wave propagation slowness (DTC) of the subsurface; and
  the computational model comprises a physical model that receives the DTC of the subsurface and outputs the real-time UCS of the subsurface.

5. The method of claim 4, wherein the computational model further comprises an artificial intelligence (AI) model that receives the LWD data and outputs the DTC of the subsurface.

6. The method of claim 5, wherein:
  the computational model further comprises a petrophysical model that receives the LWD data and outputs a set of petrophysical data; and
  the AI model further receives the set of petrophysical data.

7. The method of claim 6, wherein the petrophysical data comprise one or more of: a total porosity of the subsurface, and a volume of hydrocarbon gases within the subsurface.

8. The method of claim 7, further comprising:
  obtaining training process data and a training DTC for each well within a plurality of existing wells;
  constructing a training dataset of training examples, each training example comprising: training process data for a well within the plurality of existing wells, and a training DTC for the well; and
  training the AI model using the training dataset.

9. The method of claim 5, wherein the AI model comprises a random forest.

10. A system, comprising:
  a drilling system performing a drilling operation through a subsurface, wherein:
    the drilling operation is controlled by a set of drilling parameters; and
    the drilling system comprises: a drilling rig, a drill string, connected to the drilling rig, and a drill bit, connected to the drill string;
  a plurality of sensors, connected to the drilling system, the plurality of sensors collecting process data from the drilling operation; and
  a computer, configured to:
    receive the process data from the plurality of sensors,
    determine, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface,
    determine, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation, and
    upon determining that the drilling performance is not optimum, adjust one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.

11. The system of claim 10, wherein:
  the set of drilling parameters comprises one or more of: a weight-on-bit on the drill bit, a torque of the drill bit, and a mud flow rate;
  the drilling performance is based on one or more of: a rate of penetration, and an estimate of a drill bit life cycle; and
  optimizing the drilling performance comprises one or more of: maximizing the rate of penetration, and minimizing bit-runs.

12. The system of claim 10, wherein the process data comprise logging-while-drilling (LWD) data, comprising one or more of: a gamma ray of the subsurface, a bulk formation density of the subsurface, and a thermal neutron porosity of the subsurface.

13. The system of claim 12, wherein:
  the process data further comprise a sonic compressional wave propagation slowness (DTC) of the subsurface; and
  the computational model comprises a physical model that receives the DTC of the subsurface and outputs the real-time UCS of the subsurface.

14. The system of claim 13, wherein the computational model further comprises an artificial intelligence (AI) model that receives the LWD data and outputs the DTC of the subsurface.

15. The system of claim 14, wherein:
  the computational model further comprises a petrophysical model that receives the LWD data and outputs a set of petrophysical data; and
  the AI model further receives the set of petrophysical data.

16. The system of claim 15, wherein the petrophysical data comprise one or more of: a total porosity of the subsurface, and a volume of hydrocarbon gases within the subsurface.

17. The system of claim 16, wherein the computer is further configured to:
  receive training process data and a training DTC for each well within a plurality of existing wells;
  construct a training dataset of training examples, each training example comprising: training process data for a well within the plurality of existing wells, and a training DTC for the well; and
  train the AI model using the training dataset.

18. The system of claim 14, wherein the AI model comprises a random forest.

19. A non-transitory computer-readable memory comprising computer-executable instructions stored thereon that, when executed on a processor, cause the processor to perform steps comprising:
  obtaining process data while conducting a drilling operation through a subsurface, wherein the drilling operation is controlled by a set of drilling parameters;
  determining, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface;
  determining, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation; and
  upon determining that the drilling performance is not optimum, adjusting one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.

20. The non-transitory computer-readable memory of claim 19, wherein:
  the process data comprise logging-while-drilling (LWD) data;
  the computational model comprises: an artificial intelligence (AI) model that receives the LWD data and outputs a sonic compressional wave propagation slowness (DTC) of the subsurface, and a physical model that receives the DTC of the subsurface and outputs the real-time UCS of the subsurface;
  the set of drilling parameters comprises one or more of: a weight-on-bit on a drill bit, a torque of the drill bit, and a mud flow rate;
  the drilling performance is based on one or more of: a rate of penetration, and an estimate of a drill bit life cycle;
  optimizing the drilling performance comprises one or more of: maximizing the rate of penetration, and minimizing bit-runs; and
  the LWD data comprises one or more of: a gamma ray of the subsurface, a bulk formation density of the subsurface, and a thermal neutron porosity of the subsurface.
  • 20. The non-transitory computer-readable memory of claim 19, wherein: the process data comprise logging-while-drilling (LWD) data;the computational model comprises: an artificial intelligence (AI) model that receives the LWD data and outputs a sonic compressional wave propagation slowness (DTC) of the subsurface, anda physical model that receives the DTC of the subsurface and outputs the real-time UCS of the subsurface;the set of drilling parameters comprises one or more of: a weight-on-bit on a drill bit,a torque of the drill bit, anda mud flow rate;the drilling performance is based on one or more of: a rate of penetration, andan estimate of a drill bit life cycle;optimizing the drilling performance comprises one or more of: maximizing the rate of penetration, and minimizing bit-runs; andthe LWD data comprises one or more of: a gamma ray of the subsurface,a bulk formation density of the subsurface; anda thermal neutron porosity of the subsurface.