Unconfined compressive strength is a key measure of a rock's ability to withstand compressive stress. During hydrocarbon well drilling operations, the bit wear and the rate of penetration of the drill bit depend on the unconfined compressive strength of the rock being perforated.
A continuous profile of the unconfined compressive strength of a subsurface may be obtained from well logs when they become available, after drilling a well. However, to optimize drilling operations, it is desirable to know the unconfined compressive strength during drilling, rather than after drilling, so the drilling parameters can be adjusted accordingly.
In that regard, artificial intelligence models may be trained using existing wells and offer a potential solution to predict the unconfined compressive strength of a subsurface in real-time, by receiving input data from the drilling operation, while drilling.
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
Embodiments disclosed herein generally relate to a method for optimizing a drilling performance of a drilling operation, based on process data. The method includes obtaining process data while conducting a drilling operation through a subsurface, where the drilling operation is controlled by a set of drilling parameters, and determining, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface. The method further includes determining, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation, and, upon determining that the drilling performance is not optimum, adjusting one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.
Embodiments disclosed herein generally relate to a system for optimizing a drilling performance of a drilling operation, based on process data. The system includes a drilling system performing a drilling operation through a subsurface, including a drilling rig, a drill string, connected to the drilling rig, and a drill bit, connected to the drill string, where the drilling operation is controlled by a set of drilling parameters. The system further includes a plurality of sensors, connected to the drilling system, the plurality of sensors collecting process data from the drilling operation, and a computer, configured to receive the process data from the plurality of sensors and determine, with a computational model that receives the process data as input, a real-time unconfined compressive strength (UCS) of the subsurface. The computer is further configured to determine, based on the real-time UCS of the subsurface, a drilling performance of the drilling operation, and, upon determining that the drilling performance is not optimum, adjust one or more drilling parameters, within the set of drilling parameters, to optimize the drilling performance.
Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.
Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. For example, a computer may reference two or more such computers.
Terms such as “approximately,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
It is to be understood that one or more of the steps shown in a flowchart may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope disclosed herein should not be considered limited to the specific arrangement of steps shown in the flowchart.
Although multiple dependent claims are not introduced, it would be apparent to one of ordinary skill that the subject matter of the dependent claims of one or more embodiments may be combined with other dependent claims.
In the following description of
Embodiments disclosed herein are directed to a workflow that harnesses artificial intelligence (AI) capabilities to generate continuous Unconfined Compressive Strength (UCS) data in real-time from sonic predictions. The generated real-time UCS data is then used to steer the lateral and optimize the drilling parameters to increase the rate of penetration (ROP) and extend the life-cycle of the bit. The UCS of drilling formations controls the drilling ROP and bit wear, and therefore holds crucial information for drilling engineers during drilling operations. By considering the UCS, drilling operations can be improved by optimizing the drilling parameters. For example, the weight-on-bit (WOB) can be reduced while drilling across zones with an increasing UCS trend in order to extend the life-cycle of the bit. On the other hand, the WOB can be increased while drilling across zones with a decreasing UCS trend in order to improve the ROP. Ultimately, this leads to optimum drilling performance with minimal trips to change bits (each trip costing 1-1.5 days of rig time).
For the purpose of drilling, a drill string (108) is suspended within the wellbore (102). The drill string (108) may include one or more drill pipes (109) connected to form a conduit and a bottom hole assembly (BHA) (110) disposed at the distal end of the conduit. The BHA (110) may include a drill bit (112) to cut into the subsurface rock. In one or more embodiments, the BHA (110) may include measurement tools, such as a measurement-while-drilling (MWD) tool (114) and a logging-while-drilling (LWD) tool (116). Measurement tools (114) and (116) may include sensors and hardware to measure downhole drilling parameters, and these measurements may be transmitted to the surface using any suitable telemetry system known in the art. The BHA (110) and the drill string (108) may include other drilling tools known in the art but not specifically shown. The drill string (108) may be suspended in the wellbore (102) by a derrick (118). A crown block (120) may be mounted at the top of the derrick (118), and a traveling block (122) may hang down from the crown block (120) by means of a cable or drilling line (124). One end of the cable or drilling line (124) may be connected to a drawworks (126), which is a reeling device that can be used to adjust the length of the cable or drilling line (124) so that the traveling block (122) may move up or down the derrick (118). The traveling block (122) may include a hook (128) on which a top drive (130) is supported.
During a drilling operation at the well site (100), the drill string (108) is rotated relative to the wellbore (102), and weight is applied to the drill bit (112) to enable the drill bit (112) to break rock as the drill string (108) is rotated. In one or more embodiments, the drill string (108) is rotated by operating the top drive (130), which is coupled to the top of the drill string (108). Alternatively, the drill string (108) may be rotated by means of a rotary table (not shown) on the drilling floor (131), or independently with a downhole drilling motor. In further embodiments, the drill bit (112) may be rotated using a combination of the drilling motor and the top drive (130) (or a rotary swivel if a rotary table is used instead of a top drive to rotate the drill string (108)). Drilling fluid (commonly called mud) may be stored in a mud pit (132), and at least one pump (134) may pump the mud from the mud pit (132) into the drill string (108). The mud may flow into the drill string (108) through appropriate flow paths in the top drive (130) (or a rotary swivel if a rotary table is used instead of a top drive to rotate the drill string (108)), and exit into the bottom of the wellbore (102) through nozzles in the drill bit (112). The mud in the wellbore (102) then flows back up to the surface in an annular space between the drill string (108) and the wellbore (102) with entrained cuttings. The mud with the cuttings is returned to the pit (132) to be circulated back again into the drill string (108). Typically, the cuttings are removed from the mud, and the mud is reconditioned as necessary, before pumping the mud again into the drill string (108).
Generally, a drilling operation, such as the one depicted in
In one or more embodiments, a control system (162) may be disposed at, or communicate with, the well site (100). The control system (162) may control one or more drilling parameters and receive data from one or more sensors (160). As a non-limiting example, sensors (160) may be arranged to measure one or more drilling parameters and drilling performance parameters, such as the mud-flow rate or the ROP. For illustration purposes, sensors (160) are shown on drill string (108) and proximate mud pump (134). The illustrated locations of sensors (160) are not intended to be limiting, and sensors (160) could be disposed wherever drilling parameters need to be measured. Moreover, there may be many more sensors (160) than shown in
Generally, the subsurface (103) is attributed a set of subsurface properties that may depend strongly on the geographical location of the well site (100) and may further vary with depth within the subsurface (103). Examples of subsurface properties that may be attributed to the subsurface (103) include, but are not limited to, a density, a porosity, a permeability, or a mineral composition of the rocks composing the subsurface (103). Another notable example of a subsurface property is the UCS of the rocks within the subsurface (103), referred to as the UCS of the subsurface (103) in this disclosure. In one or more embodiments, the UCS of a rock is defined as a measure of how much compressive stress the rock can withstand without deforming. It is noted that, as a measure of the strength of the subsurface (103), the UCS of the subsurface (103) may influence the drilling operation significantly. For example, perforating a rock with a high UCS may be more difficult than perforating a rock with a low UCS. Another notable example of a subsurface property of the subsurface (103) is the sonic compressional wave propagation slowness (DTC) in the subsurface (103), defined as the inverse of the speed of compressional sonic waves through the subsurface (103). In some embodiments, the DTC of the subsurface (103) may be approximated before the drilling operation begins by using, for example, geological or physical models, such as those described in later paragraphs of this disclosure. In other instances, the DTC of the subsurface may be obtained after drilling, by analyzing well logs. In this disclosure, a method to compute the DTC during a drilling operation is described.
In one or more embodiments, the LWD data are acquired in real-time by using LWD tools, such as the LWD tool (116) located within the BHA (110) in
In one or more embodiments, the process data that may be obtained while conducting a drilling operation further includes the DTC of the subsurface. In some situations, a DTC profile of the subsurface is obtained prior to drilling, and the real-time DTC, in this case, is simply obtained by selecting the value of the DTC at any desired depth while drilling. Examples of methods that may be used to compute the DTC prior to a drilling operation include geological interpolation techniques, geophysical techniques, or any combination thereof. For example, the DTC at the drilling location may be interpolated from one or more available DTCs from existing wells in a vicinity of the drilling location. Another example of obtaining a DTC prior to the drilling operation is using seismic geophysics to compute a velocity model in an area containing the drilling location, extracting a velocity at the well location, and computing the DTC as the inverse of the velocity. Examples of seismic geophysical methods to determine a velocity include residual moveout tomography, full waveform inversion, any combination thereof, and multiple iterations of any combination thereof. In other situations, no DTC is available prior to drilling, or the DTC available prior to drilling is not considered accurate. In such situations, the real-time DTC of the subsurface may be computed. In one or more embodiments, the real-time DTC of the subsurface is computed using artificial intelligence, as described in later paragraphs of this disclosure.
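As a non-limiting illustration of the interpolation approach, a minimal sketch of inverse-distance weighting is given below; the offset-well coordinates and DTC values are hypothetical placeholders, not data from any particular well.

```python
import numpy as np

def idw_dtc(well_xy, well_dtc, target_xy, power=2.0):
    """Inverse-distance-weighted DTC estimate at a target drilling location.

    well_xy  : (n_wells, 2) array of offset-well coordinates.
    well_dtc : (n_wells,) array of DTC values (us/ft) at a common depth.
    target_xy: (2,) coordinates of the drilling location.
    """
    d = np.linalg.norm(well_xy - target_xy, axis=1)
    if np.any(d == 0):                 # target coincides with an existing well
        return float(well_dtc[np.argmin(d)])
    w = 1.0 / d**power                 # closer wells receive larger weights
    return float(np.sum(w * well_dtc) / np.sum(w))

# Hypothetical example: three offset wells, one DTC value per well at one depth
wells = np.array([[0.0, 0.0], [1.5, 0.2], [0.3, 2.1]])  # coordinates, km
dtc = np.array([65.0, 72.0, 68.0])                      # DTC, us/ft
print(idw_dtc(wells, dtc, np.array([0.8, 0.9])))
```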
In Step 205, a UCS of the subsurface is obtained in real-time, using a computational model that receives the process data from Step 203 as input. The computational model may be of various types. In one or more embodiments, the computational model in Step 205 makes use of artificial intelligence. Examples of AI models that may be used to compute the real-time UCS of the subsurface from process data include regression models and neural networks. In one or more embodiments, the computational model in Step 205 includes a physical model that receives the process data from Step 203 as input and returns, as output, the real-time UCS of the subsurface. The physical model may be of various forms, including a formula that provides a value of the real-time UCS output directly, or an equation that needs to be solved to find the output, such as a numerical equation, a differential equation, or an integral equation. In that respect, the computational model may further include methods to solve an equation, such as an iterative solver or a numerical method. Examples of iterative solvers include Newton methods and pseudo-Newton methods, which seek the solution of a non-linear equation by computing a sequence intended to converge towards that solution. Numerical methods include quadrature formulas that approximate integrals, such as the rectangle method or Simpson's rule. Numerical methods further include discretization methods for differential equations, such as Runge-Kutta methods, finite differences, and finite element methods.
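As a minimal sketch of the iterative solvers mentioned above, a Newton iteration for a scalar nonlinear equation g(u) = 0 may be written as follows; the example residual is hypothetical, since the disclosure does not fix a particular equation.

```python
def newton_solve(g, dg, u0, tol=1e-8, max_iter=50):
    """Newton's method for a scalar nonlinear equation g(u) = 0."""
    u = u0
    for _ in range(max_iter):
        step = g(u) / dg(u)        # Newton update: u <- u - g(u)/g'(u)
        u -= step
        if abs(step) < tol:        # the sequence has converged
            return u
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical example: solve u**3 - 2*u - 5 = 0
root = newton_solve(lambda u: u**3 - 2*u - 5, lambda u: 3*u**2 - 2, u0=2.0)
print(root)  # approximately 2.0946
```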
It is noted that there may be a delay between an instant during the drilling operation and the time at which any process that is consequential to the drilling operation is performed. For instance, there may be a delay between the time at which a point of the subsurface is drilled and the time at which the computation of the UCS at that point is complete. In that regard, denoting t as a current time during a drilling operation, the term “real-time,” in this disclosure, is defined as any instant in a tolerance interval [t, t+S], where S ≥ 0 is an acceptable tolerance. Generally, the acceptable tolerance S is defined as any delay that is small enough so that a certain process performed during the drilling operation remains useful. For instance, as long as the time taken to compute the UCS of the rock being drilled at a time t is short enough that the computed UCS can still be used to optimize a performance of the drilling operation, the time taken to compute the UCS may be considered an acceptable tolerance, and the computation of the UCS is said to occur in real-time. For the scope of this disclosure, the terms “real-time” and “instantly” may be used interchangeably, and the term “current time” may refer to an actual time t during a drilling operation or any instant within the tolerance interval [t, t+S]. Furthermore, in some embodiments, the acceptable tolerance S is defined by an entity in charge of the drilling operation or any person making use of the invention, at least in part, presented in this disclosure.
Many formulas have been developed that express the real-time UCS directly as a function of one or more pieces of the process data from Step 203. Examples of such formulas are given in Table I. In Table I, Δt denotes the DTC of the subsurface expressed in μs/ft, ϕ denotes the total porosity, ρ denotes the density expressed in kg/m3, and E denotes the Young's modulus, expressed in GPa, of the rock for which the UCS, given in psi in Table I, is computed. The first column of Table I contains, for reference, an ID for each example formula; the second column from the left contains the formula expressing the UCS as a function of the process data. The next column, with the heading “Type of rock”, contains the type of rock for which the UCS formula in the second column is valid. The rightmost column of Table I, with the heading “Geographical region”, contains the geographical region where the UCS formula in the second column is valid. It is noted that the process data in Step 203 are obtained in real-time, so the UCS in Step 205 is obtained in real-time.
In one or more embodiments, the Young's modulus E in Table I is computed from the process data, such as the DTC and density of the rock considered. In such a scenario, the physical model may further include a formula that expresses E as a function of other process data. Generally, the physical model may include any transformation model that computes a physical quantity from the process data. It is emphasized that the example formulas expressing the UCS in Table I, and the types of rock, geographical regions, and any conditions in which those formulas are valid, are given in this paragraph only as examples and should not be considered limiting. One with ordinary skill in the art will recognize that other formulas for computing the UCS may be used, and other conditions may apply, without departing from the scope of this disclosure.
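As a non-limiting sketch of applying one such correlation, the physical model below maps the real-time DTC to a UCS value with a generic exponential form; the coefficients a and b are hypothetical placeholders and do not reproduce any entry of Table I.

```python
import math

def ucs_from_dtc(dtc, a=1.0e5, b=0.03):
    """Generic empirical correlation UCS = a * exp(-b * DTC).

    dtc : sonic slowness in us/ft; a, b : hypothetical rock- and
    region-specific coefficients, in the spirit of Table I.
    Returns a UCS value in psi.
    """
    return a * math.exp(-b * dtc)

# A harder (low-slowness) rock yields a higher UCS than a softer one
print(ucs_from_dtc(55.0), ucs_from_dtc(95.0))
```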
Some formulas in Table I require the DTC of the subsurface as an argument. In one or more embodiments, the computational model in Step 205 includes an artificial intelligence (AI) model that computes the DTC of the subsurface from other pieces of process data obtained in Step 203, such as LWD data. Thus, in some embodiments, the UCS in Step 205 is obtained by using, sequentially, the AI model, which receives LWD or MWD data, or both, as input and returns the DTC as output, followed by the physical model, which receives the DTC and possibly LWD or MWD data, or both, as input and returns the UCS as output. Examples of AI models that may be used in Step 205 to compute the real-time DTC of the subsurface from LWD or MWD data, or both, include regression models, neural networks such as fully connected neural networks (DNN) or convolutional neural networks (CNN), decision trees, and random forests. Considering the DTC as a series of values given over a range of depths in the subsurface, these models may be combined with sequence models used in natural language processing (NLP), such as recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models. The examples of AI models given herein should not be considered as limiting. One with ordinary skill in the art will recognize that other examples of AI models that compute a DTC from LWD or MWD data, or both, may be used without departing from the scope of this disclosure.
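For illustration only, a minimal sketch of such a sequence model is given below, using an LSTM that maps LWD profiles to a DTC profile; the channel count and tensor shapes are hypothetical.

```python
import torch
import torch.nn as nn

class DtcLstm(nn.Module):
    """Sequence model mapping LWD profiles to a DTC profile (a sketch)."""
    def __init__(self, n_lwd_channels=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_lwd_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one DTC value per time sample

    def forward(self, lwd):                # lwd: (batch, time, channels)
        h, _ = self.lstm(lwd)
        return self.head(h).squeeze(-1)    # (batch, time) DTC profile

# Hypothetical shapes: 8 wells, 128 time samples, 4 LWD channels
model = DtcLstm()
dtc_pred = model(torch.randn(8, 128, 4))
print(dtc_pred.shape)  # torch.Size([8, 128])
```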
In one or more embodiments, the AI model designed to produce the DTC of the subsurface may further require, in addition to other pieces of process data, petrophysical data in real-time as input. Examples of real-time petrophysical data that may be required by the AI model as input include, but are not limited to, a total porosity of the subsurface and a volume of gas hydrocarbon of the subsurface. In one or more embodiments, the computational model in Step 205 further includes a petrophysical model that computes petrophysical data in real-time from process data, such as LWD or MWD data, or both, acquired in real-time. In these scenarios, the UCS in Step 205 is obtained in three steps. First, the petrophysical data are computed by using the petrophysical model, which receives, as input, any process data excluding the DTC. Second, the AI model computes the DTC of the subsurface from the petrophysical data and other process data. Third, the physical model computes the UCS from the DTC and other process data.
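The three-step composition described above may be sketched as follows; the three stub models are hypothetical placeholders standing in for the petrophysical, AI, and physical models.

```python
def ucs_pipeline(process_data, petro_model, ai_model, physical_model):
    """Three-stage composition: process data -> petrophysical data -> DTC -> UCS."""
    petro = petro_model(process_data)          # e.g., total porosity, gas volume
    dtc = ai_model(process_data, petro)        # AI model predicts the real-time DTC
    return physical_model(dtc, process_data)   # physical model returns the UCS

# Hypothetical stub models, for illustration only
ucs = ucs_pipeline(
    {"gamma_ray": 45.0, "density": 2.45},
    petro_model=lambda p: {"porosity": 0.18},
    ai_model=lambda p, petro: 70.0,            # DTC in us/ft
    physical_model=lambda dtc, p: 1.0e5 * 2.718281828 ** (-0.03 * dtc),
)
print(ucs)
```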
In Step 207, a drilling performance of the drilling operation is determined, based on the real-time UCS computed in Step 205. The drilling performance of the drilling operation can be defined in many ways. In one or more embodiments, the drilling performance of the drilling operation is an estimate of an average rate of penetration (AROP) of the drilling operation. The AROP of the drilling operation is defined as the total length of the wellbore at the end of the drilling operation, divided by the duration of the drilling operation. As such, the AROP can only be known at the end of the drilling operation. This is in contrast with the instantaneous rate of penetration, denoted ROP, which is the rate of penetration of the drill bit at a given instant. In some embodiments, the ROP may be measured by downhole sensors, such as the sensors (160) in
In one or more embodiments, the drilling performance of the drilling operation is determined as the IAROP of the drilling operation. The IAROP may be computed in many ways. In one or more embodiments, the IAROP is defined as a function of process data, UCS, and some of the drilling parameters, such as the WOB and the torque. A notable example of such a model, defining the IAROP, is a Bourgoyne model:
IAROP = exp(a1 + a2P2 + a3P3 + a4P4 + a5P5 + a6P6 + a7P7 + a8P8)  (EQ. 1)

In EQ. 1, a1 is a measure of the rock drillability, a2 is a normal compaction constant, P2 = 10000 − D, a3 is an under-compaction constant, P3 = D^0.69 (gp − 9), a4 is a pressure differential constant, P4 = D(gp − ρc), a5 is a bit weight constant, P5 = ln((WOB/db − WOB0)/(4 − WOB0)), a6 is a rotary speed constant, P6 = ln(N/60), a7 is a bit wear constant, P7 = −h, a8 is a hydraulic parameter, and P8 is a hydraulic jet impact force beneath the bit. Besides, D is the true vertical depth expressed in feet, gp is the pore pressure gradient expressed in lbm/gal, ρc is the mud density expressed in lbm/gal, db is the bit diameter expressed in inches, WOB0 is a threshold bit weight per inch, N is the rotary speed of the bit, expressed in rotations per minute, and h is a fractional bit wear. Note that in some embodiments, the rock drillability a1, the normal compaction constant a2, and the under-compaction constant a3 may depend on the UCS and, as such, are denoted a1(UCS), a2(UCS), and a3(UCS). An example of the parameters a1, . . . , a8, defined for a specific drilling operation in a specific subsurface, is given in Table II. Those skilled in the art will readily appreciate that, in other drilling conditions, the parameters in EQ. 1 may have different values from the ones expressed in Table II.
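A minimal sketch of evaluating the reconstructed form of EQ. 1 is given below; the coefficients and P-terms are hypothetical placeholders, as Table II is not reproduced here.

```python
import math

def bourgoyne_iarop(a, P):
    """Bourgoyne-type model: IAROP = exp(a1 + sum of aj*Pj for j = 2..8).

    a : sequence of coefficients a1..a8; P : sequence of terms P2..P8,
    computed from depth, pore pressure gradient, mud density, WOB, RPM,
    bit wear, and jet impact force, as defined in the text.
    """
    return math.exp(a[0] + sum(aj * pj for aj, pj in zip(a[1:], P)))

# Hypothetical coefficients and P-terms, for illustration only
a = [3.5, 1.2e-4, 2.0e-4, 4.3e-5, 0.43, 0.21, 0.41, 0.16]
P = [4000.0, 1.1, -0.5, 0.35, 0.4, -0.2, 0.9]
print(bourgoyne_iarop(a, P))
```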
In one or more embodiments, the IAROP is defined as the difference between the ROP (instantaneous ROP) and a loss term that accounts for the bit wear:

IAROP = ROP − A·B/Bmax  (EQ. 2)
In EQ. 2, ROP is expressed in m/s, B is a bit wear factor, expressed in m/s, Bmax is the maximum bit wear allowed before the bit needs to be replaced, and the constant A is the time it takes to replace a bit, expressed in seconds, during which the drilling operation is halted. This way, the term A·B/Bmax models the drill time that is lost when the bit is being replaced. In one or more embodiments, the bit wear factor B is given by a bit wear model. In some embodiments, the bit wear factor depends on the UCS and may be written as B(UCS). The ROP may be defined in many ways. In some embodiments, the ROP is measured by sensors, such as the sensors (160) in
In EQ. 3, db is the bit diameter, da is the bit face area, μ is a friction coefficient and Ce is a parameter that measures the efficiency of transmitting the penetration of the bit to the rock. It is noted that EQ. 3 depends on the UCS.
In one or more embodiments, the IAROP may be determined by AI. An AI model may be trained by using information from existing wells. Examples of information from an existing well that may be used to train an AI model include an AROP of the existing well, a UCS of the existing well, and process data of the existing well, such as the WOB, RPM profile, or torque. The AI model is then trained to match the UCS and process data to the AROP. The trained AI model can then be used to determine the IAROP of the well being drilled. The AI model receives, as input, the process data obtained in Step 203, such as LWD or MWD data, and the UCS data obtained in Step 205, and returns, as output, the IAROP of the well being drilled. Examples of AI models that may be used to determine the IAROP of the well being drilled include, but are not limited to, regression models, neural networks such as fully connected neural networks (DNN) or convolutional neural networks (CNN), decision trees, and random forests.
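As a non-limiting sketch of this approach, a random forest may be trained on synthetic stand-in data and queried in real-time; the features [UCS, WOB, RPM, torque] and the synthetic target are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training set from existing wells: each row holds
# [UCS (psi), WOB (klbf), RPM, torque (kft-lbf)]; the target is the AROP.
rng = np.random.default_rng(0)
X = rng.uniform([5e3, 5, 60, 2], [4e4, 40, 200, 20], size=(500, 4))
y = 50.0 / (1.0 + X[:, 0] / 1e4) + 0.3 * X[:, 1]   # synthetic AROP, ft/hr

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))                 # R2 on held-out examples

# Real-time inference: predict the IAROP from the current UCS and process data
print(model.predict([[1.8e4, 25.0, 120.0, 8.0]]))
```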
In one or more embodiments, the drilling performance in Step 207 is obtained based on qualitative prior experience. For example, based on qualitative experience, some conditions, such as UCS and process data, may be known to favor bit wear, which could lead to having to replace the bit often. In this case, applying a high WOB while drilling a rock with a high UCS may lead to excessive bit wear while not increasing the ROP enough to justify such excessive bit wear. One example of defining whether a bit wear is excessive is to estimate an expected bit life, based on experience, and compare it with a minimum bit life threshold. If the estimated bit life is below the minimum bit life threshold, the bit wear is said to be excessive. If the estimated bit life is above the minimum bit life threshold, the bit wear is said not to be excessive. On the other hand, based on qualitative experience, conditions, such as UCS and process data, may be known not to favor bit wear, and the WOB might be increased, improving the ROP without excessively increasing bit wear. Thus, in some embodiments, the drilling performance in Step 207 may be defined as a status, equal to positive if the WOB might be increased without triggering excessive bit wear, negative if the WOB should be reduced in order not to trigger excessive bit wear, or neutral if the WOB should be kept as it is. It is emphasized that the examples of drilling performances and the example definitions of IAROP given in this disclosure are given only as examples and should not be considered limiting. One with ordinary skill in the art will recognize that other examples may be used in Step 207 and other steps of
In Step 209, a determination is made whether the drilling performance obtained in Step 207 is optimum. The determination in Step 209 can be performed in many ways, depending on how the drilling performance is determined in Step 207. In one or more embodiments, a drilling performance threshold is defined, and the drilling performance is said to be optimum if it is greater than or equal to the drilling performance threshold, or not optimum if it is less than the drilling performance threshold.
In other embodiments, the determination in Step 209, of whether the drilling performance obtained in Step 207 is optimum, is made by solving an optimization problem, for example when the drilling performance obtained in Step 207 is given by a formula that expresses the drilling performance as a function of one or more drilling parameters and, possibly, other data, such as one or more pieces of the process data from Step 203 or the UCS obtained in Step 205. In such scenarios, denoting X as the one or more drilling parameters and P as the one or more pieces of the process data from Step 203, the performance of the drilling operation may be denoted as F(X, UCS, P). For example, the set X may include the WOB, RPM, mud-flow rate, torque, or any combination thereof, and the set P may include the total porosity, density, pressure, or any combination thereof, of the rock being drilled. Examples of such a function F include formulations of the IAROP, such as the formulations given in EQ. 1 and EQ. 2, or an AI model, such as the AI model given as an embodiment in the description of Step 207, that receives drilling parameters and process data as inputs and predicts the IAROP of the drilling operation as output. Given the real-time UCS from Step 205 and P from Step 203, a function G may be defined, expressing the performance of the drilling operation with respect to the one or more drilling parameters X:

G(X) = F(X, UCS, P)  (EQ. 4)
A way to determine whether the drilling performance is optimum, in this case, is by maximizing G, that is, by solving the following maximization problem:

X* = arg maxX G(X)  (EQ. 5)
Finding a set X* that satisfies EQ. 5 exactly is only possible in rare cases, for instance, in cases for which the gradient ∇G can be computed, the equation ∇G(X) = 0 can be solved for X, and it can be shown that at least one solution, denoted by X*, thus satisfying ∇G(X*) = 0, also satisfies EQ. 5. If a set X* satisfying EQ. 5 can be found, the determination in Step 209, of whether the drilling performance obtained in Step 207 is optimum, is made by comparing the drilling performance obtained in Step 207 with G(X*). If the drilling performance obtained in Step 207 is equal to G(X*), it is said to be optimum. If it is less than G(X*), it is said to be not optimum.

Generally, the optimization problem in EQ. 5 is solved in an approximate sense, by iterating an algorithm, called an optimizer, until a certain convergence criterion is reached. In one or more embodiments, the optimizer is a gradient ascent method. Given an initial set of drilling parameters, X0, the optimizer produces a recurrent sequence, indexed by an integer iteration number q ≥ 1, of sets Xq, such that Xq only depends on the values of the sets Xs, for s < q. In one or more embodiments, the set X0 may be defined randomly, or defined as the current drilling parameters of the drilling operation. In one or more embodiments, the optimizer is defined such that the set Xq, at each iteration q, only depends on the value of Xq−1. Intuitively, the goal of the optimizer is that the drilling performance G(Xq*), evaluated at one of the terms Xq* of the sequence, be as large as possible. In one or more embodiments, the optimizer is defined such that the sequence G(Xq) is increasing; iterating the optimizer then always produces a set of drilling parameters Xq associated with a larger drilling performance than the drilling parameters Xq−1 at the previous iteration.

The optimizer runs for a certain number of iterations, Q ≥ 1, called the maximum iteration number. In one or more embodiments, the maximum iteration number Q is pre-defined, and the convergence criterion for the iterative optimizer is that the iteration number reaches Q. The convergence criterion for the optimizer can be defined in many other ways. In other embodiments, the convergence criterion is that the distance |G(Xq) − G(Xq−1)| be less than a predefined threshold for a certain q ≥ 1. If a convergence criterion is met at some iteration, the optimizer is said to have converged, and the iterative process stops. Regardless of the definition of the convergence criterion, the iteration at which the convergence criterion is met is denoted as Q. An optimal set of drilling parameters, in the scope of this disclosure, can then be defined in many ways. In one or more embodiments, the optimal set of drilling parameters is defined as

XQ  (EQ. 6)
that is, the last value obtained by the optimizer when the convergence criterion is met, and an approximate maximum drilling performance is defined as G(XQ). In other embodiments, the optimal set of drilling parameters is defined as the set Xq*, obtained at some integer q* such that 0 ≤ q* ≤ Q, that maximizes the drilling performance in the following sense:

G(Xq*) = max over 0 ≤ q ≤ Q of G(Xq)  (EQ. 7)
and the approximate maximum drilling performance is defined as G(Xq*). The determination of whether the drilling performance obtained in Step 207 is optimum is then made by comparing the drilling performance obtained in Step 207 with the approximate maximum drilling performance. If the drilling performance obtained in Step 207 is greater than or equal to the approximate maximum drilling performance, the drilling performance obtained in Step 207 is said to be optimum. If the drilling performance obtained in Step 207 is less than the approximate maximum drilling performance, the drilling performance obtained in Step 207 is said to be not optimum.
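A minimal sketch of such a gradient ascent optimizer, with the convergence criterion |G(Xq) − G(Xq−1)| < threshold described above, is given below; the performance surface G is a hypothetical placeholder, and the gradient is approximated by finite differences.

```python
import numpy as np

def gradient_ascent(G, x0, lr=0.1, max_iter=200, tol=1e-6, eps=1e-6):
    """Iterative optimizer for max G(X); stops when |G(Xq) - G(Xq-1)| < tol."""
    x = np.asarray(x0, dtype=float)
    g_prev = G(x)
    for _ in range(max_iter):
        grad = np.array([(G(x + eps * e) - g_prev) / eps
                         for e in np.eye(x.size)])  # forward differences
        x = x + lr * grad                           # ascent step
        g_new = G(x)
        converged = abs(g_new - g_prev) < tol       # convergence criterion
        g_prev = g_new
        if converged:
            break
    return x, g_prev

# Hypothetical smooth performance surface over (WOB, RPM), peaked at (25, 120)
G = lambda x: -((x[0] - 25.0) ** 2) / 50.0 - ((x[1] - 120.0) ** 2) / 800.0
x_opt, g_opt = gradient_ascent(G, x0=[15.0, 90.0])
print(x_opt, g_opt)
```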
In one or more embodiments, a set of constraints may be added to the optimization problem in EQ. 5, in which case EQ. 5 and the constraints form a constrained optimization problem. Such a constrained optimization problem can be solved approximately, for example, by using a generalized reduced gradient optimizer. Examples of constraints that may be added to EQ. 5 include ranges for some of the drilling parameters, such as a maximum WOB or a maximum torque, based on equipment specifications. In such scenarios, the optimizer may be designed such that each term Xq satisfies the constraints at each iteration q.
In other embodiments, the determination in Step 209, of whether the drilling performance in Step 207 is optimum, is made by simply selecting a status. In scenarios in which the drilling performance in Step 207 is defined qualitatively, based on prior experience, as a status that can be positive if the WOB might be increased without triggering excessive bit wear, negative if the WOB should be reduced in order not to trigger excessive bit wear, or neutral if the WOB should be kept as it is, the drilling performance may be determined as optimum if the status is neutral, or not optimum if the status is positive or negative.
If the drilling performance is determined as optimum in Step 209, no action is taken on the drilling parameters, and Steps 203-209 are repeated while the drilling operation continues, controlled by the drilling parameters. If the drilling performance is determined as not optimum in Step 209, one or more drilling parameters are adjusted in Step 211, in order to optimize the drilling performance. Adjusting one or more drilling parameters to optimize the drilling performance can be done in many ways. In one or more embodiments, the drilling parameters may be adjusted by using a grid search technique. To perform a grid search over a subset of N ≥ 1 drilling parameters within the set of drilling parameters, denoted as {Xi, 1 ≤ i ≤ N}, where each Xi is a drilling parameter, a feasible range Ri is first defined for each Xi. Then, an integer number Ki ≥ 1 of values Xik, 1 ≤ k ≤ Ki, is taken on each feasible range Ri, resulting in a grid X of Πi=1..N Ki candidate sets of drilling parameters. For each set X̂ of drilling parameters on the grid X, a drilling performance G(X̂) is determined, for instance, in a similar fashion as in Step 207, and optimum drilling parameters are defined as the set X̂* such that:

G(X̂*) = max over X̂ ∈ X of G(X̂)  (EQ. 8)
In Step 211, the one or more drilling parameters Xi, 1 ≤ i ≤ N, are then adjusted to be equal to X̂* satisfying EQ. 8.
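A minimal sketch of the grid search of EQ. 8 is given below; the feasible ranges and the performance function G are hypothetical placeholders.

```python
import itertools
import numpy as np

def grid_search(G, feasible_ranges, n_points=5):
    """Discretize each feasible range Ri into Ki values and evaluate G on
    every combination of the resulting grid, keeping the best candidate."""
    axes = [np.linspace(lo, hi, n_points) for lo, hi in feasible_ranges]
    best_x, best_g = None, -np.inf
    for candidate in itertools.product(*axes):  # the grid of parameter sets
        g = G(candidate)
        if g > best_g:
            best_x, best_g = candidate, g
    return best_x, best_g

# Hypothetical feasible ranges for (WOB in klbf, RPM) and a toy G
G = lambda x: -((x[0] - 22.0) ** 2) - 0.01 * (x[1] - 110.0) ** 2
print(grid_search(G, [(10.0, 40.0), (60.0, 200.0)]))
```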
In case the determination in Step 209, of whether the drilling performance is optimum, is performed by solving an optimization problem, optimum values for one or more drilling parameters are the values that solve the optimization problem, in a sense defined in the description of Step 209, in accordance with one or more embodiments. In Step 211, the drilling parameters are then adjusted to be equal to the optimum values obtained in Step 209. For example, if the determination of whether the drilling performance is optimum is performed by solving the optimization problem in EQ. 5, optimum values for the one or more drilling parameters may be defined by X* in EQ. 5, XQ in EQ. 6, or Xq* in EQ. 7.
If the drilling parameters are adjusted in Step 211, the drilling operation continues, controlled by the adjusted drilling parameters, and the Steps 203-209 are repeated while the drilling operation continues.
In this disclosure, a C profile on a time interval [T1, T2], for a certain quantity C, such as a component of the LWD data, is defined as a vector {Cn such that 0 ≤ n ≤ N}, where each Cn, called a sample of C at time tn, is a value of C obtained at time tn, and where the times tn, for a sequence of consecutive integers n ∈ [0, N], for a given integer N ≥ 1, discretize the interval [T1, T2] so that t0 = T1, tN = T2, and tn−1 < tn for all n ∈ [1, N]. The times tn, for n ∈ [0, N], are called a sampling of the interval [T1, T2]. Furthermore, denoting Z = 0 as the origin of the well being drilled, and Zt as the distance along the wellbore from the origin to the point of the subsurface that is being drilled at a time t, there is no distinction between a value of the quantity C obtained at time t and the value of the quantity C obtained at distance Zt. Therefore, Cn denotes, interchangeably, the value of C obtained at time tn and the value of C obtained at distance Ztn.
In one or more embodiments, the LWD data are obtained at discrete times during the drilling operation, rather than continuously, and therefore the obtained components of the LWD data form profiles whose samples are obtained at those discrete times. Although, at any time, full profiles are available on the time interval from the first discrete time to the latest discrete time, shorter profiles may be extracted from the full profiles on time intervals included in that interval. Also, if the LWD data have multiple components, the term “LWD profiles” refers to the set of profiles for all the components of the LWD data. For example, if the LWD data is composed of a gamma ray and a thermal neutron porosity, the LWD profiles on a time interval [T1, T2] refer to the set composed of a gamma ray profile on the time interval [T1, T2] and a thermal neutron porosity profile on the time interval [T1, T2].
In
The AI model (311) may be configured in many ways. In one or more embodiments, the AI model (311) receives the LWD data acquired at a given time, T, and returns, as output, a prediction of the DTC at time T. In one or more embodiments, denoting 0 as the first recording time, the AI model (311) receives profiles for all the components of the LWD data on a time interval [T0, T], where T0 and T are two times at which the LWD data are obtained, such that 0 ≤ T0 < T, and returns, as output, a prediction of the DTC at time T. In one or more embodiments, the AI model (311) receives profiles for all the components of the LWD data on [T0, T] and returns, as output, a prediction of a DTC profile on [T0′, T], where T0′ is any number such that T0 ≤ T0′ < T. The value of the DTC at time T is then extracted as the last sample of the DTC profile on [T0′, T]. A notable example is obtained by considering the full profiles, that is, by setting T0 = T0′ = 0. Note that in this scenario, in which the AI model (311) receives profiles for all the components of the LWD data as input and returns a DTC profile as output, the sampling of the DTC profile may be different from the sampling of the LWD profiles. Also, in some embodiments, the computational model (309) includes a pre-processing step that converts the received LWD data into a format that is suitable as an input for the AI model (311). In some embodiments in which the AI model (311) is configured to receive LWD profiles, the pre-processing step includes re-sampling the LWD profiles to the sampling that the AI model (311) is configured to receive.
Before being put into production and applied to current data as predictors, AI models typically involve a training phase and a testing phase, both using previously acquired data. It is noted that supervised machine-learned models require examples of input and associated output (i.e., target) pairs in order to learn a desired functional mapping. As such, in one or more embodiments, the AI model (311) is trained using known data from existing wells. A dataset of examples may be constructed, each example including an input and an associated output (i.e., target) for a distinct existing well. In some embodiments, the input of an example is an LWD sample, that is, the set of all the components of the LWD data at a given time, that is known for the existing well, and the associated output is a value of the DTC at the same time, that is known for the existing well. In other embodiments, the input of an example is a set of LWD profiles on a time interval of the form [T0, T], that are known for the existing well, where T0 and T are two times such that T0 < T. In such scenarios, the associated output may be defined as a value of the DTC at time T, that is known for the existing well, or a DTC profile on a time interval [T0′, T], that is known for the existing well, where T0′ is any number such that T0 ≤ T0′ < T. In one or more embodiments, the dataset is split into a training dataset and a testing dataset, the example input and associated output pairs of the training dataset being called the training examples, and the example input and associated output pairs of the testing dataset being called the testing examples. It is common practice to split the dataset in a way that the training dataset contains more examples than the testing dataset. Because data splitting is a common practice when training and testing a machine-learned model, it is not described in detail in this disclosure. One with ordinary skill in the art will recognize that any data splitting technique may be applied to the dataset without departing from the scope of this disclosure. The AI model (311) is trained as a functional mapping that optimally matches the inputs of the training examples to the associated outputs of the training examples.
Once trained, the AI model (311) is validated by computing a metric for the testing examples, in accordance with one or more embodiments. Examples of metrics that may be used to validate the AI model (311) include any scoring or comparison function known in the art, including but not limited to: a mean square error (MSE), a root mean square error (RMSE), and a coefficient of determination (R2), defined as:

MSE = (1/n) Σi |yi − ŷi|²  (EQ. 9)

RMSE = √MSE  (EQ. 10)

R2 = 1 − (Σi |yi − ŷi|²)/(Σi |yi − ȳ|²)  (EQ. 11)

In EQ. 9, EQ. 10, and EQ. 11, the sums run over i = 1, . . . , n, where n denotes the number of testing examples, each testing example being defined as an input-output pair (xi, yi), in which xi is the input, yi is the output associated with xi, ȳ = (1/n) Σi yi is the mean of the outputs, and ŷi denotes the value predicted by the AI model (311) when receiving xi as input, for i = 1, . . . , n. The notation |·| denotes a norm that can be applied to the object in between. For example, if the outputs are real-valued, such as a DTC at a given time, the notation |·| may denote an absolute value. If the outputs are vector-valued, such as a DTC profile at a given time, the notation |·| may denote an l2 norm.
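For illustration, the three metrics of EQ. 9, EQ. 10, and EQ. 11 may be computed as follows for real-valued outputs; the testing values shown are hypothetical.

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """MSE, RMSE, and R2 over the testing examples, per EQ. 9-11."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mse, rmse, r2

# Hypothetical DTC values (us/ft) on a small testing set
print(validation_metrics([65.0, 70.0, 82.0, 90.0], [63.5, 71.0, 80.0, 93.0]))
```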
Some NLP models, such as the family of text-predicting encoder-decoder models, including transformer models, are configured as data generators. They receive and encode an input by using the encoder and then, with the decoder, generate an output, one sample at a time, until stopped by a stopping criterion. In scenarios in which the AI model (311) returns a DTC profile as output using such NLP models, the AI model (311) may be configured to stop generating the DTC profile at a sample occurring in the future. In turn, the AI model (311), in addition to predicting a DTC profile in real-time, may further offer a prediction of the DTC at future times, that is, the DTC of the subsurface that is to be drilled but has not been drilled yet. Predicting the DTC, and thus the UCS, at future times may be used to anticipate which drilling parameters to tune in the present to optimize the drilling performance at future times.
The DTC (307), computed in real-time, is passed, as input, to a physical model (313), that outputs a UCS (315) in real-time. Summarizing,
In turn, the system in
The block diagram in
The data acquisition system (520) includes the LWD tool (116), described in the description of
The AI model (311) is connected to both the data acquisition system (520) and the database (530). The database (530) contains previously acquired LWD data from known wells (531) and DTC from known wells (533), that may be used to train and test the AI model (311). Note that during the drilling operation, the AI model (311) may be re-trained or fine-tuned using the database (530). In one or more embodiments, the DTC computed by the AI model (311) is validated through one or more tests after drilling (e.g., well log analysis). In instances where the DTC is directly measured using a post-drilling test, the measured DTC values may be appended to the DTC from known wells (533) in the database (530), and the associated LWD data (305) may likewise be appended to the LWD data from known wells (531) in the database (530). In this way, newly acquired data may also be used to train, re-train, or fine-tune the AI model (311). Training the AI model (311) is described in greater detail later in the instant disclosure.
Returning to the drilling management system (540), the UCS computed by the computational model (309) and the LWD data (305) may be used to assess and optimize the drilling performance of the drilling operation. In one or more embodiments, the drilling performance is a prediction of the average ROP of the drilling operation from start to completion, such as the instantaneous average ROP (IAROP), as described in the description of
Using an optimizer (541), a determination is made whether the drilling parameters (513) are optimum to maximize the drilling performance. If the drilling parameters (513) are determined to be optimum, the drilling operation continues using the drilling parameters (513). If the drilling parameters (513) are determined to be non-optimum, one or more drilling parameters within the drilling parameters (513) are adjusted by the control system (162) through the adjustment action (550), and the drilling operation continues with the adjusted drilling parameters, which are assigned as the drilling parameters (513). In one or more embodiments, the control system (162) may communicate with external entities. For example, in situations in which the one or more drilling parameters to be adjusted include the inclination of the wellbore, the control system may communicate with a geosteering operation center to obtain an assessment of the feasibility, requirements, and consequences of such an adjustment.
In one or more embodiments, the drilling performance F(X, UCS, P), is expressed as the result of applying a mathematical function, F, to one or more drilling parameters, X, within the drilling parameters (513), LWD data (305), P, and the UCS. In such scenarios, as shown by EQ. 4, a function G may be defined as an expression of the drilling performance as a function of the only tunable variables within X, UCS, P, namely, X: G(X)=F(X, UCS, P). Then, in some embodiments, the optimizer (541) may be defined as any method that solves the maximization problem in EQ. 5, exactly, or in an approximate sense, such as EQ. 7. Embodiments for such an optimizer are described in the description of
As stated, the computational model as defined in Step 205 of the method in
AI model types may include, but are not limited to, generalized linear models, Bayesian regression, random forests, and deep models such as neural networks, convolutional neural networks, and recurrent neural networks. AI model types, whether they are considered deep or not, are usually associated with additional “hyperparameters” which further describe the model. For example, hyperparameters providing further detail about a neural network may include, but are not limited to, the number of layers in the neural network, choice of activation functions, inclusion of batch normalization layers, and regularization strength. Commonly, in the literature, the selection of hyperparameters surrounding an AI model is referred to as selecting the model “architecture.” Once an AI model type and hyperparameters have been selected, the AI model is trained to perform a task.
A brief discussion and summary of some machine-learned model types is provided herein. However, one with ordinary skill in the art will recognize that a full discussion of every type of machine-learned model applicable to the methods and systems disclosed herein is not possible nor required to describe the AI model (311). Consequently, the following discussion of machine-learned models is provided by way of introduction to the art of machine-learning and does not impose a limitation on the present disclosure.
A first, notable example of an AI model that may be included in the computational model in Step 205 in
A diagram of a neural network is shown in
Nodes (602) and edges (604) carry additional associations. Namely, every edge is associated with a numerical value. The edge numerical values, or even the edges (604) themselves, are often referred to as “weights” or “parameters.” While training a neural network (600), numerical values are assigned to each edge (604). Additionally, every node (602) is associated with a numerical variable and an activation function. Activation functions are not limited to any functional class, but traditionally follow the form

A = ƒ(Σi wi·vi),

where i is an index that spans the set of “incoming” nodes (602) and edges (604), wi is the weight of the i-th incoming edge (604), vi is the value of the i-th incoming node (602), and ƒ is a user-defined function. Incoming nodes (602) are those that, when the neural network (600) is viewed or depicted as a directed graph (as in
Commonly employed choices of ƒ include, for example, the rectified linear unit function ƒ(x) = max(0, x), among many others. Every node (602) in a neural network (600) may have a different associated activation function. Often, as a shorthand, activation functions are described by the function ƒ of which they are composed. That is, an activation function composed of a linear function ƒ may simply be referred to as a linear activation function without undue ambiguity.
When the neural network (600) receives an input, the input is propagated through the network according to the activation functions and incoming node (602) values and edge (604) values to compute a value for each node (602). That is, the numerical value for each node (602) may change for each received input. Occasionally, nodes (602) are assigned fixed numerical values, such as the value of 1, that are not affected by the input or altered according to edge (604) values and activation functions. Fixed nodes (602) are often referred to as “biases” or “bias nodes” (606), displayed in
In some implementations, the neural network (600) may contain specialized layers (605), such as a normalization layer, or additional connection procedures, like concatenation. One skilled in the art will appreciate that these alterations do not exceed the scope of this disclosure.
As noted, the training procedure for the neural network (600) comprises assigning values to the edges (604). To begin training, the edges (604) are assigned initial values. These values may be assigned randomly, assigned according to a prescribed distribution, assigned manually, or by some other assignment mechanism. Once edge (604) values have been initialized, the neural network (600) may act as a function, such that it may receive inputs and produce an output. As such, at least one input is propagated through the neural network (600) to produce an output. Training data is provided to the neural network (600). Generally, training data consists of pairs of inputs and associated targets. The targets represent the “ground truth,” or the otherwise desired output, upon processing the inputs. In the context of the AI model (311), an input is a piece of LWD data that is known from an existing well at a given time T, or an LWD profile on a time interval ending in T, for the existing well, that can be obtained, for example, from the LWD data from known wells (531). An output, or target, is a value of the DTC for the existing well at the same time T, or a DTC profile on a time interval ending in T for the existing well, that can be obtained, for example, from the DTC from known wells (533). During training, the neural network (600) processes at least one input from the training data and produces at least one output. Each neural network (600) output is compared to its associated input data target. The comparison of the neural network (600) output to the target is typically performed by a so-called “loss function,” although other names for this comparison function, such as “error function,” “misfit function,” and “cost function,” are commonly employed. Many types of loss functions are available, such as the mean-squared-error function; however, the general characteristic of a loss function is that it provides a numerical evaluation of the similarity between the neural network (600) output and the associated target. The loss function may also be constructed to impose additional constraints on the values assumed by the edges (604), for example, by adding a penalty term, which may be physics-based, or a regularization term. Generally, the goal of a training procedure is to alter the edge (604) values to promote similarity between the neural network (600) output and associated target over the training data. Thus, the loss function is used to guide changes made to the edge (604) values, typically through a process called “backpropagation.”
While a full review of the backpropagation process exceeds the scope of this disclosure, a brief summary is provided. Backpropagation consists of computing the gradient of the loss function with respect to the edge (604) values. The gradient indicates the direction of change in the edge (604) values that results in the greatest change to the loss function. Because the gradient is local to the current edge (604) values, the edge (604) values are typically updated by a “step” in the direction indicated by the gradient. The step size is often referred to as the “learning rate” and need not remain fixed during the training process. Additionally, the step size and direction may be informed by previously seen edge (604) values or previously computed gradients. Such methods for determining the step direction are usually referred to as “momentum” based methods.
Once the edge (604) values have been updated, or altered from their initial values, through a backpropagation step, the neural network (600) will likely produce different outputs. Thus, the procedure of propagating at least one input through the neural network (600), comparing the neural network (600) output with the associated target with a loss function, computing the gradient of the loss function with respect to the edge (604) values, and updating the edge (604) values with a step guided by the gradient, is repeated until a termination criterion is reached. Common termination criteria are: reaching a fixed number of edge (604) updates, otherwise known as an iteration counter; a diminishing learning rate; noting no appreciable change in the loss function between iterations; reaching a specified performance metric as evaluated on the data or a separate hold-out data set. Once the termination criterion is satisfied, and the edge (604) values are no longer intended to be altered, the neural network (600) is said to be “trained”.
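A minimal training-loop sketch of the procedure described above is given below; the network shape, the data, and the fixed iteration count are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical setup: inputs are 4-channel LWD samples, targets are DTC values
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()                                # the loss function
opt = torch.optim.SGD(model.parameters(), lr=1e-2)    # step size = learning rate

X = torch.randn(256, 4)                               # training inputs
y = torch.randn(256, 1)                               # associated targets

for iteration in range(100):      # termination: fixed number of edge updates
    pred = model(X)               # propagate inputs through the network
    loss = loss_fn(pred, y)       # compare outputs with associated targets
    opt.zero_grad()
    loss.backward()               # backpropagation: gradient of the loss
    opt.step()                    # update edge values with a gradient-guided step
```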
A structural grouping, or group, of weights is herein referred to as a “filter.” The number of weights in a filter is typically much less than the number of inputs. In a CNN, the filters can be thought of as “sliding” over, or convolving with, the inputs to form an intermediate output, or intermediate representation, of the inputs that still possesses a structural relationship. As with the neural network (600), the intermediate outputs are often further processed with an activation function. Many filters may be applied to the inputs to form many intermediate representations. Additional filters may be formed to operate on the intermediate representations, creating more intermediate representations. This process may be repeated as prescribed by a user. There is a “final” group of intermediate representations upon which no more filters act. In some instances, the structural relationship of the final intermediate representations is ablated, a process known as “flattening.” The flattened representation may be passed to a neural network (600) to produce a final output. Note that, in this context, the neural network (600) is still considered part of the CNN. As with a neural network (600), a CNN is trained, after initialization of the filter weights and the edge (604) values of the internal neural network (600), if present, with the backpropagation process in accordance with a loss function.
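By way of a non-limiting illustration, the sliding of filters over a one-dimensional input, followed by an activation function and flattening, may be sketched as follows; the names and values shown are illustrative assumptions:

```python
import numpy as np

def convolve1d(inputs, filt):
    """Slide a filter over a 1-D input; each intermediate output retains the
    positional (structural) relationship of the inputs."""
    n = len(inputs) - len(filt) + 1
    return np.array([np.dot(inputs[i:i + len(filt)], filt) for i in range(n)])

inputs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
filters = [np.array([0.5, -0.5]), np.array([1.0, 1.0])]

# Each filter yields an intermediate representation, here further processed
# with a ReLU activation function.
intermediate = [np.maximum(convolve1d(inputs, f), 0.0) for f in filters]

# "Flattening" ablates the structural relationship before a final, fully
# connected neural network, if present, produces the output.
flattened = np.concatenate(intermediate)
```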
Another example of a machine-learned model type is a decision tree. As will be described, decision trees often act as components, or sub-models, of other types of machine-learned models, such as random forests and gradient boosted machines. A decision tree is composed of nodes. A decision is made at each node such that data present at the node are segmented. Typically, the data at a node are split into two parts, or segmented bimodally; however, multimodal segmentation is possible. Each segment of the data can be considered another node and may be further segmented. As such, a decision tree represents a sequence of segmentation rules. The segmentation rule (or decision) at each node is determined by an evaluation process. The evaluation process usually involves calculating which segmentation scheme results in the greatest homogeneity of, or reduction in variance in, the segmented data. However, a detailed description of this evaluation process, or of other potential segmentation scheme selection methods, is omitted for brevity and does not limit the scope of the present disclosure.
Further, if at a node in a decision tree, the data are no longer to be segmented, that node is said to be a “leaf node.” Commonly, values of data found within a leaf node are aggregated, or further modeled, such as by a linear model, so that a leaf node represents a class or an aggregated value (e.g., an average). A decision tree can be configured in a variety of ways, such as, but not limited to, choosing the segmentation scheme evaluation process, limiting the number of segmentations, and limiting the number of leaf nodes. Generally, when the number of segmentations or leaf nodes in a decision tree is limited, the decision tree is said to be a “weak learner”.
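For purposes of illustration only, the evaluation of candidate bimodal segmentations of a single feature by variance reduction may be sketched as follows; the function name best_split and the commented leaf aggregation are illustrative assumptions:

```python
import numpy as np

def best_split(x, y):
    """Evaluate candidate bimodal segmentations of one feature and return the
    threshold giving the greatest reduction in variance (most homogeneity)."""
    best_t, best_score = None, np.inf
    for t in np.unique(x)[:-1]:                  # candidate thresholds
        left, right = y[x <= t], y[x > t]
        # weighted variance of the two segments; lower means more homogeneous
        score = (left.size * left.var() + right.size * right.var()) / y.size
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# At a leaf node, the values are aggregated, for example by their average:
# leaf_value = y_at_leaf.mean()
```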
As stated, another example of a machine-learned model type, based on decision trees, is a random forest model, which may operate as a supervised machine learning algorithm performing a regression to predict the UCS. A random forest model is an ensemble machine learning algorithm that uses multiple decision trees to make predictions. The architecture of random forest models is unique in that it combines multiple decision trees to reduce the risk of overfitting and to improve the overall generalization of the model and the accuracy of its predictions, in comparison to individual trees. This is based on the idea that multiple “weak learners” can combine to create a “strong learner.” Each individual decision tree is considered a “weak learner,” while the group of trees functioning together is regarded as a “strong learner.” This approach allows random forests to effectively capture complex relationships and interactions between features, resulting in better predictive performance.
Each of the multiple decision trees operates on a different subset of the same training dataset. In a regression task, such as predicting the UCS, the random forest averages the predictions of the individual trees to form the final prediction, which improves the overall accuracy; in a classification task, the final prediction is instead the majority vote of the trees. In other words, instead of relying on a single decision tree, the random forest gathers predictions from every tree and combines them into a final prediction.
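By way of a non-limiting illustration, a random forest regression of the kind described may be sketched using the scikit-learn library as follows; the synthetic arrays lwd_features and ucs_targets are illustrative stand-ins, not data from any actual well:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(seed=0)
lwd_features = rng.normal(size=(500, 4))                  # stand-in for LWD inputs
ucs_targets = (lwd_features @ np.array([1.0, 2.0, -0.5, 0.3])
               + rng.normal(scale=0.1, size=500))         # stand-in for UCS targets

model = RandomForestRegressor(n_estimators=100)   # ensemble of decision trees
model.fit(lwd_features, ucs_targets)              # each tree sees a bootstrap subset
ucs_prediction = model.predict(lwd_features[:5])  # regression: average of tree outputs
```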
As stated, another example of a machine-learned model type, based on decision trees, is a gradient boosted machine. Hereafter, a gradient boosted machine model using decision trees is referred to as a gradient boosted trees model. In most implementations, the decision trees of which a gradient boosted trees model is composed are weak learners. In a gradient boosted trees model, the decision trees are ensembled in series, wherein each decision tree makes a weighted adjustment to the output of the preceding decision trees in the series. The process of ensembling decision trees in series and making weighted adjustments to form a gradient boosted trees model is best illustrated by considering the training process of such a model.
Training a gradient boosted trees model consists of the selection of segmentation rules for each node in each decision tree; that is, training each decision tree. Once trained, a decision tree is capable of processing data. For example, a decision tree may receive a data input (e.g., a pre-processed LWD measurement). The data input is sequentially transferred to nodes within the decision tree according to the segmentation rules of the decision tree. Once the data input is transferred to a leaf node, the decision tree outputs the assigned class or aggregate value (e.g., a UCS value) of the associated leaf node.
Generally, training a gradient boosted trees model firstly consists of making a simple prediction (SP) for the target data (i.e., the UCS). The simple prediction (SP) may be the average UCS value over the training examples of a training dataset. The simple prediction (SP) is subtracted from the targets to form first residuals. The first decision tree in the series is created and trained, wherein the first decision tree attempts to predict the first residuals, forming first residual predictions. The first residual predictions from the first decision tree are scaled by a scaling parameter. In the context of gradient boosted trees, the scaling parameter is known as the “learning rate” (η). The learning rate is one of the hyperparameters governing the behavior of the gradient boosted trees model. The learning rate (η) may be fixed for all decision trees or may be variable or adaptive. The first residual predictions of the first decision tree are multiplied by the learning rate (η) and added to the simple prediction (SP) to form first predictions. The first predictions are subtracted from the targets to form second residuals. A second decision tree is created and trained using the data inputs and the second residuals as targets, such that it produces second residual predictions. The second residual predictions are multiplied by the learning rate (η) and added to the first predictions, forming second predictions. This process is repeated recursively until a termination criterion is achieved.
Many termination criteria exist and are not all enumerated here for brevity. Common termination criteria are terminating training when a pre-defined number of decision trees has been reached, or when improvement in the residuals is no longer observed.
Once trained, a gradient boosted trees model may make predictions using input data. To do so, the input data is passed to each decision tree, and each decision tree forms its residual predictions. The residual predictions are multiplied by the learning rate (η), summed across every decision tree, and added to the simple prediction (SP) formed during training to produce the gradient boosted trees prediction.
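For purposes of illustration only, the training and prediction procedures described above may be sketched as follows, with each weak learner implemented using the scikit-learn library's DecisionTreeRegressor; all function names and hyperparameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_gbt(X, y, n_trees=100, learning_rate=0.1, max_depth=2):
    """Train gradient boosted trees: start from the simple prediction (SP),
    then fit each tree to the current residuals and add its scaled output."""
    sp = y.mean()                                 # simple prediction (SP)
    prediction = np.full(len(y), sp)
    trees = []
    for _ in range(n_trees):                      # termination: fixed tree count
        residuals = y - prediction                # targets minus current predictions
        tree = DecisionTreeRegressor(max_depth=max_depth)  # a weak learner
        tree.fit(X, residuals)
        prediction = prediction + learning_rate * tree.predict(X)  # scaled residuals
        trees.append(tree)
    return sp, trees

def predict_gbt(sp, trees, X, learning_rate=0.1):
    """Sum every tree's scaled residual predictions and add the SP."""
    return sp + learning_rate * sum(tree.predict(X) for tree in trees)
```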
One with ordinary skill in the art will appreciate that many adaptations may be made to gradient boosted trees models and that these adaptations do not exceed the scope of this disclosure. Some adaptations include algorithmic optimizations, efficient handling of sparse data, use of out-of-core computing, and parallelization for distributed computing. Commonly, when such adaptations are applied to a gradient boosted trees model, the model is known in the literature as XGBoost.
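By way of a non-limiting illustration, such an adapted model may be configured through the XGBoost library's scikit-learn interface as follows; the hyperparameter values are illustrative assumptions, and lwd_features and ucs_targets are the stand-in arrays from the random forest sketch above:

```python
import xgboost as xgb

# Illustrative configuration only; no particular hyperparameter values
# are implied by this disclosure.
model = xgb.XGBRegressor(n_estimators=200, learning_rate=0.1, max_depth=4)
model.fit(lwd_features, ucs_targets)
ucs_prediction = model.predict(lwd_features)
```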
The computations mentioned in this disclosure may be performed by a computer, such as the computer (543) in FIG. 5, an example of which is the computer (802) in FIG. 8, described below.
The computer (802) can serve in a role as a client, network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. In some implementations, one or more components of the computer (802) may be configured to operate within environments, including cloud-computing-based, local, global, or other environments (or a combination of environments).
At a high level, the computer (802) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (802) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).
The computer (802) can receive requests over the network (830) from a client application (for example, an application executing on another computer (802)) and respond to the received requests by processing said requests in an appropriate software application. In addition, requests may also be sent to the computer (802) from internal users (for example, from a command console or by another appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.
Each of the components of the computer (802) can communicate using a system bus (803). In some implementations, any or all of the components of the computer (802), whether hardware or software (or a combination of hardware and software), may interface with each other or with the interface (804) over the system bus (803) using an application programming interface (API) (812), a service layer (813), or a combination of the API (812) and the service layer (813). The API (812) may include specifications for routines, data structures, and object classes. The API (812) may be either computer-language independent or dependent and may refer to a complete interface, a single function, or even a set of APIs. The service layer (813) provides software services to the computer (802) or to other components (whether or not illustrated) that are communicably coupled to the computer (802). The functionality of the computer (802) may be accessible to all service consumers using this service layer. Software services, such as those provided by the service layer (813), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or another suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer (802), alternative implementations may illustrate the API (812) or the service layer (813) as stand-alone components in relation to other components of the computer (802) or other components (whether or not illustrated) that are communicably coupled to the computer (802). Moreover, any or all parts of the API (812) or the service layer (813) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.
The computer (802) includes an interface (804). Although illustrated as a single interface (804) in FIG. 8, two or more interfaces (804) may be used according to particular needs, desires, or particular implementations of the computer (802).
The computer (802) includes at least one computer processor (805). Although illustrated as a single computer processor (805) in FIG. 8, two or more processors may be used according to particular needs, desires, or particular implementations of the computer (802).
The computer (802) also includes a memory (806) that holds data for the computer (802) or other components (or a combination of both) that can be connected to the network (830). The memory may be a non-transitory computer readable medium. For example, the memory (806) can be a database storing data consistent with this disclosure. Although illustrated as a single memory (806) in FIG. 8, two or more memories (806) may be used according to particular needs, desires, or particular implementations of the computer (802).
The application (807) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (802), particularly with respect to functionality described in this disclosure. For example, application (807) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (807), the application (807) may be implemented as multiple applications (807) on the computer (802). In addition, although illustrated as integral to the computer (802), in alternative implementations, the application (807) can be external to the computer (802).
There may be any number of computers such as the computer (802) associated with, or external to, a computer system containing the computer (802), wherein each computer (802) communicates over the network (830). Further, the terms “client,” “user,” and other appropriate terminology may be used interchangeably, as appropriate, without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (802), or that one user may use multiple computers such as the computer (802).
Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.