In the oil and gas industry, seismic surveys are conducted over subsurface regions of interest during the search for, and characterization of, hydrocarbon reservoirs. In seismic surveys, a seismic source generates seismic waves that propagate through the subterranean region of interest and are detected by seismic receivers. The seismic receivers detect and store a time-series of samples of earth motion caused by the seismic waves. The collection of time-series of samples recorded at many receiver locations generated by a seismic source at many source locations constitutes a seismic dataset.
To determine the earth structure, including the presence of hydrocarbons, the seismic dataset may be processed. Processing a seismic dataset includes a sequence of steps designed to correct for a number of issues, such as near-surface effects, irregularities in the seismic survey geometry, etc. Another step in processing a seismic dataset may be noise filtering and other noise removal operations. A properly processed seismic dataset may aid in decisions as to if and where to drill for hydrocarbons.
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
In one aspect, embodiments disclosed herein relate to a method. The method includes receiving, by a seismic processing system from a seismic acquisition system, seismic data regarding a subsurface region of interest, where the seismic data comprises a plurality of time-space waveforms. The method also includes generating, using the seismic processing system and statistical sampling, a plurality of pilot waveforms based on the plurality of time-space waveforms. The method further includes forming, using the seismic processing system, a training seismic dataset comprising an input training dataset and an output training dataset, where the input training dataset is based on the plurality of time-space waveforms and the output training dataset is based on the plurality of pilot waveforms. The method still further includes training, using the seismic processing system and the training seismic dataset, a machine-learning (ML) model to predict the output training dataset, at least in part, from the input training dataset.
In general, in one aspect, embodiments disclosed herein relate to a system. The system includes a seismic acquisition system and a seismic processing system. The seismic acquisition system is configured to record seismic data regarding a subsurface region of interest, wherein the seismic data comprises a plurality of time-space waveforms. The seismic processing system is configured to receive the seismic data from the seismic acquisition system and generate, using statistical sampling, a plurality of pilot waveforms based on the plurality of time-space waveforms. The seismic processing system is also configured to form a training seismic dataset comprising an input training dataset and an output training dataset, where the input training dataset is based on the plurality of time-space waveforms and the output training dataset is based on the plurality of pilot waveforms. The seismic processing system is further configured to train, using the training seismic dataset, a machine-learning (ML) model to predict the output training dataset, at least in part, from the input training dataset.
It is intended that the subject matter of any of the embodiments described herein may be combined with other embodiments described separately, except where otherwise contradictory.
Other aspects and advantages of the claimed subject matter will be apparent from the following description and the appended claims.
Specific embodiments of the disclosed technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the disclosure, numerous specific details are set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art that the disclosure may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as using the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
In the following description of
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a seismic signal” includes reference to one or more of such seismic signals.
Terms such as “approximately,” “substantially,” etc., mean that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
It is to be understood that one or more of the steps shown in the flowcharts may be omitted, repeated, and/or performed in a different order than the order shown. Accordingly, the scope disclosed herein should not be considered limited to the specific arrangement of steps shown in the flowcharts.
In general, disclosed embodiments include systems and methods to reduce noise by filtering seismic data. In particular, in some embodiments seismic data may be filtered with a machine-learning (ML) model trained with “virtual waveforms”, i.e., waveforms that are less noisy representations of waveforms of seismic data. The virtual waveforms may be generated directly from the same waveforms of seismic data. Further, the virtual waveforms, when used to train a ML model, provide a ML model that is capable of removing or reducing noise in the waveforms of seismic data.
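By way of a non-limiting illustration, a virtual (pilot) waveform may be generated as a sample-wise median over a small neighborhood of traces. The function name, the neighborhood width, and the choice of the median as the statistic are illustrative assumptions, not a definitive implementation of the disclosed statistical sampling:

```python
import numpy as np

def pilot_waveform(traces, center, half_width=2):
    """Illustrative sketch: build a less-noisy 'pilot' (virtual) waveform
    for the trace at index `center` by taking a sample-wise median over a
    neighborhood of traces. The statistical sampling of the disclosed
    embodiments may differ.

    traces: 2D array of shape (n_traces, n_time_samples)."""
    lo = max(0, center - half_width)
    hi = min(traces.shape[0], center + half_width + 1)
    return np.median(traces[lo:hi], axis=0)
```

Because the median is robust to outliers, the resulting pilot waveform suppresses incoherent noise while retaining the coherent signal shared by neighboring traces.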
Because of their lower noise content, virtual waveforms may facilitate various seismic data processing operations, such as first arrival picking (first break picking) or velocity picking (normal move-out velocity). However, performing such operations with virtual waveforms may not capture details or information that is contained in the actual waveforms of seismic data but not in the virtual waveforms. Performing seismic data processing operations using the actual waveforms of seismic data may provide results with higher precision and, thus, higher resolution. Therefore, processing techniques to reduce or remove seismic noise from the actual waveforms may assist in improving the quality of all seismic processing operations.
It is noted that while the methods described herein will be described in the context of seismic data in two dimensions, these methods are not limited to these types of seismic data. In general, embodiments disclosed herein can be applied to any pre-stack seismic data. For example, embodiments disclosed herein can be applied to a collection of shot gathers. One with ordinary skill in the art will appreciate that the methods disclosed herein are applicable to seismic data that have undergone any number of pre-processing steps commonly employed in the art.
Because seismic data may contain spatial and lithology information about a subterranean region of interest, the seismic data may be used to construct a seismic image of the subterranean region of interest. The resulting seismic image may then be used for further seismic data interpretation, such as in updating the spatial extension of a hydrocarbon reservoir. Thus, the disclosed methods are integrated into the established practical applications of improving seismic images and searching for and extracting hydrocarbons from subsurface hydrocarbon reservoirs. The disclosed methods represent an improvement over existing methods for at least the reasons of lower cost and increased efficacy.
Refracted seismic waves (110) and reflected seismic waves (114) may occur, for example, due to geological discontinuities (112) that may be also known as “seismic reflectors”. The geological discontinuities (112) may be, for example, planes or surfaces that mark changes in physical or chemical characteristics in a geological structure. The geological discontinuities (112) may be also boundaries between faults, fractures, or groups of fractures within a rock. The geological discontinuities (112) may delineate a hydrocarbon reservoir (104).
At the surface, refracted seismic waves (110) and reflected seismic waves (114) may be detected by seismic receivers (116). Radiated seismic waves (108) that propagate from the seismic source (106) directly to the seismic receivers (116), known as direct seismic waves (122), may also be detected by the seismic receivers (116).
In some embodiments, a seismic source (106) may be positioned at a location denoted (xs, ys), where x and y represent orthogonal axes on the earth's surface above the subsurface region of interest (102). The seismic receivers (116) may be positioned at a plurality of seismic receiver locations denoted (xr, yr), with the distance between each receiver and the source being termed “the source-receiver offset”, or simply “the offset”. Thus, the direct seismic waves (122), refracted seismic waves (110), and reflected seismic waves (114) generated by a single activation of the seismic source (106) may be represented in the axes (xs, ys, xr, yr, t). The t-axis denotes the time, relative to the activation of the seismic source (106) by the seismic acquisition system (100), at which each sample of seismic data was acquired by the seismic receivers (116).
Once acquired, seismic data may undergo a myriad of pre-processing steps. These pre-processing steps may include but are not limited to: reducing signal noise; applying move-out corrections; organizing or resampling the traces according to a regular spatial pattern (i.e., regularization); and data visualization. One with ordinary skill in the art will recognize that many pre-processing (or processing) steps exist for dealing with a seismic dataset. As such, one with ordinary skill in the art will appreciate that not all pre-processing (or processing) steps can be enumerated herein and that zero or more pre-processing (or processing) steps may be applied with the methods disclosed herein without imposing a limitation on the instant disclosure.
In some instances, seismic processing may reduce five-dimensional seismic data produced by a seismic acquisition system (100) to three-dimensional (x,y,t) seismic data by, for example, correcting the recorded time for the time of travel from the seismic source (106) to the seismic receiver (116) and summing (“stacking”) samples over two horizontal space dimensions. Stacking of samples over a predetermined time interval may be performed as desired, for example, to reduce noise and improve the quality of the signals.
Seismic data may also refer to data acquired over different time intervals, such as, for example, in cases where seismic surveys are repeated to obtain time-lapse data. Seismic data may also be pre-processed data, e.g., arranged in a “common shot gather” (CSG) domain, in which waveforms are acquired by different receivers for a single source location. Further, seismic data may also refer to datasets generated via numerical simulations by modeling wave propagation phenomena in the subsurface region of interest (102). The noted seismic data is not intended to be limiting, and any other suitable seismic data is intended to fall within the scope of the present disclosure.
In the CSG (204) shown in
The CSG (204) illustrates how the arrivals are detected at later times by the seismic receivers (116) that are farther from the seismic source (106). In some embodiments, arrivals may have distinctive geometric shapes. For example, direct seismic waves (122) in the CSG (204) may be characterized by a straight line, while arrivals of reflected seismic waves (114) may present a hyperbolic shape, as seen in
In one or more embodiments, seismic data (202) acquired by a seismic acquisition system (100) may be arranged in a plurality of CSGs (210) to create a 3D seismic dataset, as illustrated in
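The arrangement of recorded traces into a plurality of CSGs may be sketched as follows. All names and the array layout (source, receiver, time sample) are illustrative assumptions:

```python
import numpy as np

def arrange_csgs(records, n_sources, n_receivers, n_samples):
    """Illustrative sketch: sort a flat collection of trace records,
    each a (source_index, receiver_index, trace) tuple, into a 3D array
    indexed as [source, receiver, time sample], i.e., a stack of common
    shot gathers (CSGs). Missing traces remain zero-filled."""
    data = np.zeros((n_sources, n_receivers, n_samples))
    for src, rcv, trace in records:
        data[src, rcv, :] = trace
    return data
```

Each 2D slice `data[s]` of the resulting array is then one CSG, with receivers along one axis and time samples along the other.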
Seismic data (202) may be processed by a seismic processing system (220) to generate a seismic velocity model (219) of the subsurface region of interest (102). A seismic velocity model (219) is a representation of seismic velocity at a plurality of locations within a subsurface region of interest (102). Seismic velocity is the speed at which a seismic wave, which may be a pressure-wave or a shear-wave, travels through a medium. Pressure waves are often referred to as “primary-waves” or “P-waves”. Shear waves are often referred to as “secondary waves” or “S-waves”. Seismic velocities in a seismic velocity model (219) may vary in vertical depth, in one or more horizontal directions, or both. Layers of rock may be created from different materials or created under varying conditions. Each layer of rock may have different physical properties from neighboring layers, and these different physical properties may include seismic velocity. The seismic processing system (220) may provide multiple methods of performing velocity analysis, including normal moveout analysis, iterative Kirchhoff time- and depth-migration, tomography, and full waveform inversion.
In some embodiments, seismic data (202) may be processed by a seismic processing system (220) to generate a seismic image (230) of the subsurface region of interest (102). For example, a time-domain seismic image (232) may be generated using a process called seismic migration (also referred to as “migration” herein) using a seismic velocity model (219). In seismic migration, events of seismic reflectivity recorded at the surface are relocated in either time or space to the locations at which the events occurred in the subsurface. In some embodiments, migration may transform pre-processed shot gathers from a time-domain to a depth-domain seismic image (234). In a depth-domain seismic image (234), seismic events in a migrated shot gather may represent geological boundaries (236, 238) in the subsurface. Various types of migration algorithms may be used in seismic imaging. For example, one type of migration algorithm corresponds to reverse time migration.
Processing seismic data (202) may consist of several key groups of functions, each serving a specific purpose in the processing workflow. For example, the steps may include data ingestion: the loading, sorting, and arrangement of raw seismic data, acquired from various sources such as seismographs or land-based sensors, into the processing system. This data may include seismic waveforms, and may also include well logs and survey information.
Data quality control is critical in seismic data processing. A seismic processing system (220) employs various tools and techniques to identify and correct any artifacts, noise, or errors in the data. This step ensures the accuracy and reliability of subsequent processing steps.
Further, the raw seismic data may be “conditioned”, i.e., the raw seismic data is pre-processed to enhance its quality and make it suitable for further analysis. This step may include procedures such as filtering, deconvolution, noise suppression, and signal enhancement.
In addition, data may be “stacked”. Stacking involves combining multiple seismic traces to improve data quality and increase the signal-to-noise ratio. This may enhance the identification of subsurface features and reduce random noise interference.
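The stacking step described above may be sketched as follows. This is a minimal illustration: averaging N repeated traces preserves the coherent signal while reducing random noise by roughly a factor of the square root of N:

```python
import numpy as np

def stack_traces(traces):
    """Illustrative sketch: stack (average) N repeated traces of a 2D
    array of shape (n_traces, n_time_samples). Coherent signal adds
    constructively; zero-mean random noise averages toward zero."""
    return np.mean(traces, axis=0)
```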
The seismic processing system (220) may provide visualization tools to render the seismic data (202) in a visual format, enabling geoscientists to analyze, interpret, and perform visual quality control more effectively. This can include 2D/3D seismic displays, depth slices, horizon maps, and virtual reality visualization.
The final step involves generating reports and documenting the results of the seismic processing workflow. This includes recording the processing parameters, interpretation results, and any uncertainties or limitations associated with the data processing. The seismic processing system (220) is used to perform these groups of steps for even a small commercial seismic survey.
The seismic processing system (220) may consist of various hardware components that work together to process and analyze seismic data (202). Seismic processing may require significant computational power and storage capacity. High-performance servers and workstations are used to handle the massive amount of seismic data (202) and perform complex processing algorithms efficiently. Seismic data (202) can be massive, reaching terabytes or even petabytes in size. Reliable and high-capacity storage systems, such as Network Attached Storage (NAS) or Storage Area Networks (SAN), are utilized to store and manage the seismic data (202) effectively. In some cases, where processing demands are extremely high, the seismic processing system (220) may utilize cluster systems. Clusters are groups of interconnected computers or servers that work together to distribute the processing workload, enabling parallel processing and faster data analysis. A robust and high-speed network infrastructure allows seamless data transfer between different components of the seismic processing system (220). This ensures efficient communication and data sharing, especially in multi-node or distributed processing environments.
The seismic processing system (220) may use GPUs for accelerating the computation of seismic processing algorithms. Their parallel processing capabilities significantly speed up tasks such as migration, inversion, and visualization. Despite advances in storage technology, data on tapes is still often used for long-term archiving and backup purposes. Tape systems provide high-capacity, cost-effective, and reliable storage solutions for seismic data. Various peripherals such as monitors, keyboards, mice, network switches, uninterruptible power supply (UPS), and backup power generators complete the hardware setup of a seismic processing system. These peripherals ensure smooth operation, user interaction, and data integrity.
The software/firmware is at least as integral a part of the seismic processing system (220) as the hardware components. Moreover, a seismic processing system (220) equipped with a software program is at least as distinctively different from other seismic processing systems without the software program as a seismic processing system with GPUs is from one without GPUs.
As illustrated in
However, seismic data (202) may contain noise or vibration energy that is often unrelated to the geological features of the subsurface region of interest (102). Noise in seismic data may arise from human activities and other natural surficial sources, such as oceans, rivers, and atmospheric phenomena. Noise in seismic data (202) may also be related to seismic waves reflected at the surface, trapped waves, scattered waves, or any kind of seismic energy that is not related to the geological features of interest. Acquisition and processing operations of seismic data (202) may also introduce errors that can be considered seismic noise. Denoising operations, such as filtering, are commonly performed in processing seismic data (202).
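One common denoising operation, a frequency-domain band-pass filter, may be sketched as follows. The cutoff frequencies and implementation are illustrative assumptions and are not the specific filtering of the disclosed embodiments:

```python
import numpy as np

def bandpass(trace, dt, f_lo, f_hi):
    """Illustrative sketch of a simple frequency-domain band-pass
    filter: transform a trace to the frequency domain, zero out Fourier
    components outside [f_lo, f_hi] (in Hz), and transform back.

    dt is the time sampling interval in seconds."""
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(trace))
```

In practice, tapered (rather than hard) cutoffs are preferred to avoid ringing artifacts, but the sketch conveys the principle.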
The ML model (304) may be trained with a training seismic dataset (306) that may include at least an input seismic dataset (370) and an output seismic dataset (380). In some embodiments the input seismic dataset (370) may include the plurality of time-space waveforms (302), and the output seismic dataset (380) may include the plurality of pilot waveforms (308). In other embodiments, the input seismic dataset (370) may include a subset of the plurality of time-space waveforms (302), and the output seismic dataset (380) may include the corresponding subset of the plurality of pilot waveforms (308). Typically, the seismic dataset may be recorded or simulated (in the case of the input seismic dataset) or predicted (in the case of the output seismic dataset).
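Forming the training pairs described above may be sketched as follows. The function name, the array layout, and the optional subset selection are illustrative assumptions:

```python
import numpy as np

def form_training_dataset(time_space_waveforms, pilot_waveforms, subset=None):
    """Illustrative sketch: pair each (noisy) time-space waveform with
    its corresponding pilot waveform to form input and output training
    arrays. `subset` optionally selects trace indices, matching the
    embodiment that trains on corresponding subsets of both pluralities."""
    idx = np.arange(len(time_space_waveforms)) if subset is None else np.asarray(subset)
    x = np.asarray(time_space_waveforms)[idx]   # input training dataset
    y = np.asarray(pilot_waveforms)[idx]        # output training dataset
    return x, y
```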
Machine-learning (ML), broadly defined, is the extraction of patterns and insights from data. The phrases “artificial intelligence”, “machine-learning”, “deep-learning”, and “pattern recognition” are often conflated, interchanged, and used synonymously throughout the literature. This ambiguity arises because the field of “extracting patterns and insights from data” was developed simultaneously and disjointedly among a number of classical disciplines, such as mathematics, statistics, and computer science. The term machine-learning will be adopted herein. However, one skilled in the art will recognize that the concepts and methods detailed hereafter are not limited by this choice of nomenclature.
ML model types may include, but are not limited to, generalized linear models, Bayesian regression, random forests, and deep models such as neural networks, convolutional neural networks, and recurrent neural networks. ML model types, whether they are considered deep or not, are usually associated with additional “hyperparameters” which further describe the model. For example, hyperparameters providing further detail about a neural network may include, but are not limited to, the number of layers in the neural network, choice of activation functions, inclusion of batch normalization layers, and regularization strength. It is noted that in the context of machine-learning (ML), the regularization of a ML model (304) refers to a penalty applied to the loss function of the ML model (304) and should not be confused with the regularization of a seismic dataset. Commonly, in the literature, the selection of hyperparameters surrounding a ML model (304) is referred to as selecting the model “architecture”.
In some embodiments, once a ML model (304) type and hyperparameters have been selected, the ML model (304) is “trained” to perform a task. In some implementations, the ML model (304) is trained using supervised learning. A training seismic dataset (306) for supervised learning consists of pairs of input and output seismic datasets. The output seismic datasets (380) represent desired outputs, upon processing the input seismic datasets (370). During training, the ML model (304) processes at least one input seismic dataset (370) from the training seismic dataset (306) and produces at least one predicted dataset. Each predicted dataset is compared to the output seismic dataset (380) associated with the input seismic dataset (370). The comparison of the predicted dataset to the output seismic dataset (380) may be performed in an iterative manner until a termination criterion is satisfied, and the ML model (304) may be said to be trained.
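The iterative compare-and-update training described above may be sketched with a deliberately minimal one-parameter model. The model, learning rate, and termination tolerance are illustrative assumptions, not the disclosed ML model (304):

```python
import numpy as np

def train_supervised(x, y, lr=0.1, tol=1e-6, max_iter=1000):
    """Illustrative sketch of supervised training: fit a one-parameter
    linear model y = w * x by gradient descent on a mean-squared-error
    loss, iterating until a termination criterion is satisfied (loss
    change below `tol`, or `max_iter` updates)."""
    w = 0.0
    prev_loss = np.inf
    for _ in range(max_iter):
        pred = w * x                          # predicted dataset
        loss = np.mean((pred - y) ** 2)       # compare to desired output
        if prev_loss - loss < tol:            # termination criterion
            break
        grad = 2.0 * np.mean((pred - y) * x)  # gradient of the loss
        w -= lr * grad                        # parameter update
        prev_loss = loss
    return w
```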
In accordance with one or more embodiments, the ML model (304) type may be a convolutional neural network (CNN). A CNN may be more readily understood as a specialized neural network (NN). One with ordinary skill in the art will recognize that any variation of the NN or CNN (or any other ML model) may be employed without departing from the scope of this disclosure. Further, it is emphasized that the following discussions of a NN and a CNN are basic summaries and should not be considered limiting.
A diagram of a neural network is shown in
Nodes (402) and edges (404) carry additional associations. Namely, every edge is associated with a numerical value. The edge numerical values, or even the edges (404) themselves, are often referred to as “weights” or “parameters”. While training a neural network (400), numerical values are assigned to each edge (404). Additionally, every node (402) is associated with a numerical variable and an activation function. Activation functions are not limited to any functional class, but traditionally follow the form:

A = f(Σi wi Ai),
where i is an index that spans the set of “incoming” nodes (402) and edges (404) and f is a user-defined function. Incoming nodes (402) are those that, when viewed as a graph (as in
When the neural network (400) receives an input, the input is propagated through the network according to the activation functions and incoming node (402) values and edge (404) values to compute a value for each node (402). That is, the numerical value for each node (402) may change for each received input. Occasionally, nodes (402) are assigned fixed numerical values, such as the value of 1, that are not affected by the input or altered according to edge (404) values and activation functions. Fixed nodes (402) are often referred to as “biases” or “bias nodes” (406), displayed in
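The propagation of incoming node values and edge weights through a single node may be sketched as follows. The function name, the default activation, and the bias handling are illustrative assumptions:

```python
import numpy as np

def node_value(incoming_values, edge_weights, bias=0.0, f=np.tanh):
    """Illustrative sketch: compute a node's value by applying a
    user-defined activation function f to the weighted sum of incoming
    node values plus a fixed bias term."""
    return f(np.dot(incoming_values, edge_weights) + bias)
```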
In some implementations, the neural network (400) may contain specialized layers (405), such as a normalization layer, or additional connection procedures, like concatenation. One skilled in the art will appreciate that these alterations do not exceed the scope of this disclosure.
As noted, the training procedure for the neural network (400) comprises assigning values to the edges (404). To begin training, the edges (404) are assigned initial values. These values may be assigned randomly, assigned according to a prescribed distribution, assigned manually, or by some other assignment mechanism. Once edge (404) values have been initialized, the neural network (400) may act as a function, such that it may receive inputs and produce an output. As such, at least one input is propagated through the neural network (400) to produce an output.
A training seismic dataset (306) is provided to the neural network (400). A training seismic dataset (306) consists of pairs of an input seismic dataset (370) and an output seismic dataset (380). Each neural network (400) predicted dataset is compared to the associated output seismic dataset (380). The comparison of the neural network (400) predicted dataset to the output seismic dataset (380) is typically performed by a so-called “loss function”, although other names for this comparison function, such as “error function”, “misfit function”, and “cost function”, are commonly employed. Many types of loss functions are available, such as the mean-squared-error function and the Huber loss function; however, the general characteristic of a loss function is that the loss function provides a numerical evaluation of the similarity between the neural network (400) output and the associated target. The loss function may also be constructed to impose additional constraints on the values assumed by the edges (404), for example, by adding a penalty term, which may be physics-based, or a regularization term (not to be confused with regularization of seismic data).
Generally, the goal of a training procedure is to alter the edge (404) values to promote similarity between the neural network (400) output and associated target over the training seismic dataset (306). Thus, the loss function is used to guide changes made to the edge (404) values, typically through a process called “backpropagation”. While a full review of the backpropagation process exceeds the scope of this disclosure, a brief summary is provided. Backpropagation consists of computing the gradient of the loss function over the edge (404) values. The gradient indicates the direction of change in the edge (404) values that results in the greatest change to the loss function. Because the gradient is local to the current edge (404) values, the edge (404) values are typically updated by a “step” in the direction indicated by the gradient. The step size is often referred to as the “learning rate” and need not remain fixed during the training process. Additionally, the step size and direction may be informed by previously seen edge (404) values or previously computed gradients. Such methods for determining the step direction are usually referred to as “momentum” based methods.
Once the edge (404) values have been updated, or altered from their initial values, through a backpropagation step, the neural network (400) will likely produce different outputs. Thus, the procedure of propagating at least one input through the neural network (400), comparing the neural network (400) output with the associated target with a loss function, computing the gradient of the loss function with respect to the edge (404) values, and updating the edge (404) values with a step guided by the gradient, is repeated until a termination criterion is reached. Common termination criteria are: reaching a fixed number of edge (404) updates, otherwise known as an iteration counter; a diminishing learning rate; noting no appreciable change in the loss function between iterations; reaching a specified performance metric as evaluated on the data or a separate hold-out dataset. Once the termination criterion is satisfied, and the edge (404) values are no longer intended to be altered, the neural network (400) is said to be “trained.”
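A single backpropagation update for a deliberately tiny two-weight network may be sketched as follows. This is illustrative only: real networks apply the same chain-rule logic across many layers, and the weights, input, and learning rate here are placeholders:

```python
import numpy as np

def backprop_step(w1, w2, x, target, lr=0.01):
    """Illustrative sketch of one backpropagation update for the tiny
    network y = w2 * tanh(w1 * x): forward pass, squared-error loss,
    chain-rule gradients, and a gradient-descent step of size `lr`
    (the learning rate). Returns the updated weights and the loss
    evaluated at the current (pre-update) weights."""
    h = np.tanh(w1 * x)                  # hidden node value
    y = w2 * h                           # network output
    loss = (y - target) ** 2             # loss function
    dy = 2.0 * (y - target)              # dL/dy
    dw2 = dy * h                         # dL/dw2
    dw1 = dy * w2 * (1.0 - h ** 2) * x   # dL/dw1 via the chain rule
    return w1 - lr * dw1, w2 - lr * dw2, loss
```

Repeating such steps until a termination criterion is reached is exactly the training loop described above.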
A CNN is similar to a neural network (400) in that it can technically be graphically represented by a series of edges (404) and nodes (402) grouped to form layers. However, it is more informative to view a CNN as structural groupings of weights; where here the term structural indicates that the weights within a group have a relationship. CNNs are widely applied when the data inputs also have a structural relationship, for example, a spatial relationship where one input is always considered “to the left” of another input. Images have such a structural relationship. A seismic dataset may be organized and visualized as an image. Consequently, a CNN is an intuitive choice for processing seismic data (202).
A structural grouping, or group, of weights is herein referred to as a “filter”. The number of weights in a filter is typically much less than the number of inputs, where here the number of inputs refers to the number of pixels in an image or the number of trace-time (or trace-depth) values in a seismic dataset. In a CNN, the filters can be thought of as “sliding” over, or convolving with, the inputs to form an intermediate output or intermediate representation of the inputs which still possesses a structural relationship. As with the neural network (400), the intermediate outputs are often further processed with an activation function. Many filters may be applied to the inputs to form many intermediate representations. Additional filters may be formed to operate on the intermediate representations creating more intermediate representations. This process may be repeated as prescribed by a user. There is a “final” group of intermediate representations, wherein no more filters act on these intermediate representations. In some instances, the structural relationship of the final intermediate representations is ablated; a process known as “flattening”. The flattened representation may be passed to a neural network (400) to produce a final output. Note that, in this context, the neural network (400) is still considered part of the CNN. As with a neural network (400), a CNN is trained, after initialization of the filter weights, and the edge (404) values of the internal neural network, if present, with the backpropagation process in accordance with a loss function.
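The “sliding” of a filter over a 2D input may be sketched as a valid-mode cross-correlation. The explicit loop is illustrative and unoptimized; CNN libraries implement the same operation far more efficiently:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Illustrative sketch: slide a small filter of weights over a 2D
    input (no padding, stride 1, cross-correlation convention) to form
    an intermediate representation, as in a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # weighted sum of the input patch under the filter
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Note that the output retains a spatial (structural) relationship with the input, which is why further filters can be applied to it in turn.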
Turning to
The output o of a neural network can be expressed as a nonlinear function h of the input i and of the network model parameters (weights and biases) θ:

o = h(i, θ).
The previous equation can be used to train the network for an inverse problem by assuming the input dt and the output mt are known, and minimizing a least-squares deep learning (DL) objective function ϕ (i.e., loss function) over the network model parameters θ:

ϕ(θ) = ‖Hθ†dt − mt‖²,
where the term Hθ† is a pseudoinverse operator parameterized by θ. The loss function ϕ is minimized to obtain an optimized set of network model parameters θ. The pseudoinverse operator Hθ† is thereby optimized to minimize the discrepancy between the models predicted with the current network model parameters θ and the corresponding known models provided for the training (mt). The network model parameters θ may be selected by a user or may be identified during training using a set of data similar to the training seismic dataset (306), known as a validation dataset. A testing phase may also be added to further refine and generalize the network model parameters θ. In the testing phase, the optimized pseudoinverse operator Hθ† is used to predict the models using a testing dataset. The procedure may be iterated through additional loss function evaluations and parameter optimization until a stopping criterion, or multiple stopping criteria, are reached.
The trained ML model (304) may then be used to predict the output m from the plurality of time-space waveforms dobs through the optimized pseudoinverse operator Hθ†:

m = Hθ†dobs.
The predicted model m can be embedded in a denoising or filtering scheme, where the data dt represent a plurality of time-space waveforms (302) and the model mt represents the plurality of filtered time-space waveforms (314).
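The least-squares training scheme above may be sketched in numpy as follows. This is a toy illustration under stated assumptions: the pseudoinverse operator Hθ† is parameterized as a single weight matrix (a real implementation would use a deep network such as a CNN), and the training pairs (dt, mt) are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pairs: known models m_t and noisy observed data d_t
# (shapes and the linear parameterization are illustrative assumptions).
n_samples, n_features = 200, 8
m_t = rng.normal(size=(n_samples, n_features))           # known models
d_t = m_t + 0.1 * rng.normal(size=(n_samples, n_features))  # noisy data

# Parameterize the pseudoinverse operator H† as one weight matrix theta.
theta = rng.normal(scale=0.1, size=(n_features, n_features))

def loss(theta):
    """Least-squares loss: mean of || H†_theta d_t - m_t ||^2 entries."""
    residual = d_t @ theta - m_t
    return np.mean(residual ** 2)

lr = 0.1
for _ in range(500):
    residual = d_t @ theta - m_t
    grad = 2.0 * d_t.T @ residual / residual.size  # gradient of the mean squared loss
    theta -= lr * grad                             # gradient-descent update

m_pred = d_t @ theta  # predict models with the optimized operator
```

The loop plays the role of the backpropagation-based optimization; validation and testing phases would evaluate `loss` on held-out partitions.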
While an embodiment using a CNN for the ML model (304) has been suggested, one skilled in the art will appreciate that the instant disclosure is not limited to this ML model type. ML model types such as a random forest, vision transformers (ViTs), or non-parametric methods such as K-nearest neighbors or a Gaussian process may be readily inserted into this framework and do not depart from the scope of this disclosure.
In some embodiments, with an ML model type and associated architecture selected, the ML model (304) is trained using one or more input seismic datasets (370) and one or more output seismic datasets (380). Each input seismic dataset (370) may have an associated output seismic dataset (380). During training, the input seismic dataset (370) may be provided to the ML model (304). The ML model (304) may process the input seismic dataset (370) and produce a predicted output. The predicted output may be compared to the associated output seismic dataset (380). In some embodiments, during training, the ML model (304) is adjusted such that the predicted output, upon receiving one or more input seismic datasets (370), is similar to the associated output seismic datasets (380).
Once the ML model (304) is trained, a subset of the plurality of time-space waveforms (302) and its corresponding plurality of pilot waveforms (308) may form a validation dataset and may be processed by the trained ML model (304) for validation. The predicted dataset is then compared to the associated output seismic datasets (380) of the validation dataset. Thus, the performance of the trained ML model (304) may be evaluated. An indication of the performance of the ML model (304) may be acquired by estimating the generalization error of the trained ML model (304). The generalization error is estimated by evaluating the performance of the trained ML model (304) on the testing dataset, after a suitable model has been found. One with ordinary skill in the art will recognize that the training procedure described herein is general and that many adaptations can be made without departing from the scope of the present disclosure. For example, common training techniques, such as early stopping, adaptive or scheduled learning rates, and cross-validation, may be used during training without departing from the scope of this disclosure.
According to one or more embodiments, the trained ML model (304) may be retrained using transfer learning to process seismic data (202) of a different type or domain. Transfer learning may be performed, for example, by using the values of the edges (404) of the trained ML model (304) as initial values of the ML model (304) to be trained with the different seismic data. Training time may be significantly reduced by making use of transfer learning.
The plurality of filtered time-space waveforms obtained with the ML model (304) may then be used by the seismic processing system (220) in a number of data processing operations, such as, for example, first arrival picking, generation of a velocity model, and generation of a seismic image. In particular, a seismic image may assist in identifying geological boundaries (236, 238) and other geological objects, such as faults. If a seismic image (230) indicates the potential presence of hydrocarbons in the subsurface region of interest (102), a wellbore (118) may be planned using a wellbore planning system. Further, a drilling system may drill a wellbore (118) to confirm the presence of those hydrocarbons.
A top drive (616) provides clockwise torque via the drive shaft (618) to the drillstring (608) in order to drill the wellbore (118). The drillstring (608) may comprise a plurality of sections of drillpipe attached at the uphole end to the drive shaft (618) and downhole to a bottomhole assembly (“BHA”) (620). The BHA (620) may be composed of a plurality of sections of heavier drillpipe and one or more measurement-while-drilling (“MWD”) tools configured to measure drilling parameters, such as torque, weight-on-bit, drilling direction, temperature, etc., and one or more logging tools configured to measure parameters of the rock surrounding the wellbore (118), such as electrical resistivity, density, sonic propagation velocities, gamma-ray emission, etc. MWD and logging tools may include sensors and hardware to measure downhole drilling parameters, and these measurements may be transmitted to the surface (124) using any suitable telemetry system known in the art. The BHA (620) and the drillstring (608) may include other drilling tools known in the art but not specifically shown.
The wellbore (118) may traverse a plurality of overburden (622) layers and one or more formations (624) to a hydrocarbon reservoir (104) within the subterranean region (628), and specifically to a drilling target (630) within the hydrocarbon reservoir (104). The wellbore trajectory (604) may be a curved or a straight trajectory. All or part of the wellbore trajectory (604) may be vertical, and some parts of the wellbore trajectory (604) may be deviated or have horizontal sections. One or more portions of the wellbore (118) may be cased with casing (632) in accordance with a wellbore plan.
To start drilling, or “spudding in” the well, the hoisting system lowers the drillstring (608) suspended from the derrick (614) towards the planned surface location of the wellbore (118). An engine, such as an electric motor, may be used to supply power to the top drive (616) to rotate the drillstring (608) through the drive shaft (618). The weight of the drillstring (608) combined with the rotational motion enables the drill bit (606) to bore the wellbore (118).
The drilling system (600) may be disposed at and communicate with other systems in the well environment, such as a seismic processing system (220), a seismic interpretation system (640), and a wellbore planning system (638). The drilling system (600) may control at least a portion of a drilling operation by providing controls to various components of the drilling operation. In one or more embodiments, the drilling system (600) may receive well-measured data from one or more sensors and/or logging tools arranged to measure controllable parameters of the drilling operation. During operation of the drilling system (600), the well-measured data may include mud properties, flow rates, drill volume and penetration rates, rock physical properties, etc.
A seismic interpretation system (640) is primarily used by geoscientists, seismic interpreters, and exploration teams in the oil and gas industry for analyzing seismic data to understand subsurface geological structures. Seismic interpreters use the workstation to visualize seismic data, including 2D and 3D seismic volumes, cross-sections, time slices, and attribute maps. These visualizations provide insights into subsurface structures, faults, and potential hydrocarbon reservoirs.
Interpreters may pick and interpret key geological horizons within seismic data to identify stratigraphic layers, boundaries, and structural features. Horizon interpretation tools and workflows allow for the accurate extraction of geological information from seismic volumes. A seismic interpretation system (640) enables interpreters to identify and interpret subsurface faults that may impact hydrocarbon reservoirs. Fault interpretation tools and visualization techniques help in understanding fault geometry, connectivity, and spatial relationships. Seismic attributes, such as amplitude, frequency, and gradient, provide additional information about subsurface properties and can be analyzed using various algorithms and statistical methods. Attribute analysis tools in the workstation aid in defining reservoir characteristics, identifying anomalies, and highlighting potential hydrocarbon traps.
Interpreters may use the seismic interpretation system (640) to build 3D geological models by integrating seismic data with well-log data, geological knowledge, and other geophysical information. These models help in estimating reservoir properties, optimizing well locations, and predicting hydrocarbon distribution. Interpreters may analyze and characterize hydrocarbon reservoirs by integrating different data sources, including seismic data, well logs, production data, and seismic inversion results. Workstations provide tools for reservoir property estimation, quantitative analysis, and reservoir performance evaluation.
The seismic interpretation system (640) may facilitate prospect generation and evaluation, where interpreters identify and assess areas with high hydrocarbon exploration potential. They can perform detailed geological and geophysical analysis, identify drilling targets, and quantify the risk and uncertainty associated with potential prospects. Finally, workstations enable interpreters to collaborate with team members, share interpretation results, and communicate findings effectively. Interpretation software allows for the creation of reports, annotated images, and presentations to communicate geological interpretations to stakeholders.
The seismic interpretation system (640) is an important tool for geoscientists involved in exploration and production activities, helping them make informed decisions about drilling locations, optimize production strategies, and understand complex subsurface geological structures. The seismic interpretation system (640) may be a specialized computer system used by geoscientists and seismic interpreters for analyzing and interpreting seismic data. The seismic interpretation system (640) may be implemented on a computing device such as that shown in
Seismic interpretation involves intensive tasks like data visualization, horizon picking, attribute analysis, and 3D modeling. A high-performance seismic interpretation system (640) with a powerful processor, ample memory, and a high-resolution display is necessary to handle these computationally demanding tasks efficiently. Dedicated GPUs allow real-time rendering of seismic data, and enable smooth and interactive visualization. GPUs with high memory and parallel processing capabilities accelerate tasks like volume rendering and horizon visualization.
Seismic interpretation often involves working with large and complex datasets. Multiple high-resolution monitors allow interpreters to view seismic data, cross-sections, time slices, attribute maps, and other visualizations simultaneously, enhancing productivity and analysis accuracy. The seismic interpretation system (640) may be equipped with industry-standard software applications tailored for seismic interpretation, such as seismic data processing and visualization tools, horizon and fault interpretation systems, attribute analysis software, and 3D modeling software.
Seismic interpretation projects generate substantial amounts of data, including seismic volumes, processed data, interpretation results, and velocity models. A high-capacity and fast storage system, such as solid-state drives (SSDs) or RAID arrays, is necessary to store and access this data efficiently. The seismic interpretation system (640) often requires network connectivity to access centralized data repositories, collaborate with colleagues, and share interpretation results. A robust network infrastructure with fast Ethernet or fiber connections ensures smooth data transfer and collaboration capabilities.
Peripherals like keyboards, mice, and graphics tablets enable efficient interaction with data and software interfaces. Additionally, color-calibrated and high-accuracy input devices enhance the precision of interpretation tasks like picking horizons or drawing geological features. The seismic interpretation system (640) may have backup solutions in place to protect valuable data from loss or damage. Automated backup systems, external storage devices, or network-attached storage (NAS) can be utilized to ensure data safety. In some cases, seismic interpreters may need remote access to the seismic interpretation system (640) or collaborate with colleagues remotely. Setting up remote access capabilities, such as Virtual Private Networks (VPNs) or remote desktop solutions, allows interpreters to work from different locations and share their work effectively. The seismic interpretation system (640) may be customized to meet the needs of interpreters and the specific requirements of projects. The hardware specifications may vary based on factors like the complexity of interpretations, the size of datasets, and the software tools utilized.
In some embodiments, the rock physical properties may be used by a seismic interpretation system (640) to help determine a location of a hydrocarbon reservoir (104). In some implementations, the rock physical properties and other subterranean features may be represented in a seismic image (230) that may be transferred from the seismic processing system (220) to the seismic interpretation system (640). Knowledge of the existence and location of the hydrocarbon reservoir (104) and the seismic image (230) may be transferred from the seismic interpretation system (640) to a wellbore planning system (638). The wellbore planning system (638) may use information regarding the hydrocarbon reservoir (104) location to plan a well, including a wellbore trajectory (604) from the surface (124) of the earth to penetrate the hydrocarbon reservoir (104). In addition to the depth and geographic location of the hydrocarbon reservoir (104), the planned wellbore trajectory (604) may be constrained by surface limitations, such as suitable locations for the surface position of the wellhead, i.e., the location of potential or preexisting drilling rigs, drilling ships, or natural or man-made islands.
Typically, the wellbore plan is generated based on best available information at the time of planning from a geophysical model, geomechanical models encapsulating subterranean stress conditions, the trajectory of any existing wellbores (which it may be desirable to avoid), and the existence of other drilling hazards, such as shallow gas pockets, over-pressure zones, and active fault planes. Information regarding the planned wellbore trajectory (604) may be transferred to the drilling system (600) described in
The wellbore planning system (638) is used in the oil and gas industry for designing and planning drilling operations. It assists drilling engineers and teams in making strategic decisions related to wellbore placement, casing design, trajectory planning, and well path optimization. The wellbore planning system (638) allows drilling engineers to visualize and interact with wellbore data in a 3D environment. It provides a graphical representation of the planned well trajectory, existing well paths, geological formations, and potential hazards.
The wellbore planning system (638) integrates geological models, well logs, seismic data, and other subsurface information to facilitate the creation of accurate and realistic wellbore plans. By incorporating geological models, drilling engineers can optimize well placement in reservoir targets and avoid geohazards. Furthermore, the wellbore planning system (638) may assist in designing optimal well trajectories based on reservoir targets, geologic constraints, and drilling objectives. Engineers can define well paths that maximize drilling efficiency, reach specific targets (horizontal or vertical), and account for geological formations and structural complexities.
The wellbore planning system (638) incorporates collision-avoidance algorithms to assess potential collision risks between nearby wells, salt bodies, or other subsurface infrastructure. By considering uncertainties in subsurface data and drilling conditions, the wellbore planning system (638) may assess collision probabilities for planned well paths. This analysis helps in quantifying risks associated with collision potential and improving well placement decisions. The wellbore planning system (638) provides real-time alerts to prevent wellbore collisions and maintain drilling safety.
The wellbore planning system (638) helps drilling engineers in designing casing strings and selecting appropriate tubulars based on the wellbore conditions, planned drilling operations, and regulatory requirements. It considers factors such as pressure, temperature, well depth, formation properties, and casing load capacity. Furthermore, the wellbore planning system (638) performs torque and drag analysis to evaluate the forces and stresses acting on the drillstring during drilling operations. This analysis helps in identifying potential issues such as differential sticking, buckling, or limitations in the drilling equipment. The wellbore planning system (638) may have the capability to integrate real-time drilling data, such as downhole measurements, drilling parameters, and formation evaluation results. This integration allows engineers to monitor the drilling progress, make on-the-fly adjustments to the well plan, and optimize drilling efficiency. Furthermore, the wellbore planning system (638) provides tools for generating reports, exporting data, and documenting drilling plans and decisions. These reports can be shared with regulatory agencies, drilling contractors, and other stakeholders to ensure alignment and compliance throughout the drilling lifecycle.
The wellbore planning system (638) assists drilling engineers in designing optimal well trajectories, minimizing risks, and maximizing drilling efficiency. It integrates various subsurface data sources, performs complex analyses, and provides visualization tools to support informed decision-making in well planning and drilling operations.
Turning to
In Block 700, seismic data regarding a subsurface region is received, in accordance with one or more embodiments. The seismic data (202) includes a plurality of time-space waveforms (302). For example, seismic data (202) acquired with a seismic acquisition system (100) may be organized in one or more spatial dimensions (216, 218) and a time axis (214) to form the plurality of time-space waveforms (302). In some embodiments, the plurality of time-space waveforms (302) may be obtained with numerical simulations of seismic waves propagating in a model of the subsurface region (102). The plurality of time-space waveforms (205, 302) may be similar to the waveforms described in
In some embodiments, the plurality of time-space waveforms (302) may be ordered based on a seismic parameter. The seismic parameter may be, for example, a distance from a common midpoint, and the plurality of time-space waveforms (302) is said to be in the “common midpoint domain”. Another example of the seismic parameter may be an azimuth of a waveform centered on a common midpoint, and the plurality of time-space waveforms (302) is said to be in the “azimuth domain”. Other seismic parameters known to those skilled in the art can be used without departing from the scope of the present disclosure.
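Ordering traces by a seismic parameter such as offset from a common midpoint, or azimuth, may be sketched as follows; the coordinates are hypothetical and the sketch assumes simple 2D (x, y) survey geometry:

```python
import numpy as np

# Hypothetical source and receiver (x, y) positions for four traces.
src = np.array([[0.0, 0.0], [0.0, 0.0], [100.0, 0.0], [100.0, 0.0]])
rec = np.array([[400.0, 0.0], [0.0, 300.0], [100.0, 200.0], [500.0, 0.0]])

midpoint = (src + rec) / 2.0                            # common midpoint of each trace
vec = rec - src
offset = np.hypot(vec[:, 0], vec[:, 1])                 # source-receiver distance
azimuth = np.degrees(np.arctan2(vec[:, 1], vec[:, 0]))  # source-to-receiver azimuth

# Sort the traces by offset, i.e., order them in the common-midpoint domain;
# sorting by `azimuth` instead would order them in the azimuth domain.
order = np.argsort(offset)
```

Each row of `midpoint`, together with `offset[order]` or `azimuth`, gives the ordering key for the corresponding time-space waveform.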
In Block 710, a plurality of pilot waveforms are generated using statistical sampling, in accordance with one or more embodiments. The plurality of pilot waveforms (308) is based on the plurality of time-space waveforms (302).
In some embodiments, the plurality of pilot waveforms is generated by applying statistical sampling to the plurality of time-space waveforms, as shown in Block 712. Random sampling (RS) and active learning (AL) queries using heuristics are non-limiting examples of statistical sampling techniques that may be implemented. In some random sampling techniques, every sample available for selection has the same probability of being selected, and each sample selection is independent of any other sample selection. In contrast, the goal of AL is to implement sample selection criteria that generate a subset of the dataset that maintains the diversity of the dataset and fully represents it. AL techniques are based on the definition of query frameworks using heuristics built on uncertainty criteria and diversity criteria.
Uncertainty sampling and diversity sampling are two types of active learning that may be used individually or in combination. Uncertainty sampling targets uncertain or unexpected samples in the dataset. Such samples would require the collection of new evidence to be used in additional training of the ML model (304). Other examples of active learning include uncertainty sampling with model-based outliers, sampling from the highest entropy cluster, uncertainty sampling and representative sampling, etc.
Diversity sampling targets gaps in the dataset. Diversity criteria heuristics focus on generating diverse samples for the ML model (304). A homogeneous representation of the model space may be generated through discrete sampling. Statistical measures, such as variance and Euclidean distance, among others, may be used to evaluate uncertainty and diversity heuristics. In some embodiments, various clustering techniques may be used to provide a selection of diverse and representative pilot waveforms. Non-limiting examples of clustering schemes include k-means clustering, fuzzy c-means clustering, and DBSCAN. Diversity heuristics allow a significant reduction in data size without loss of information.
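A diversity-sampling selection via k-means, contrasted with simple random sampling, may be sketched as follows; the "waveforms" are synthetic stand-ins, and the helper name is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "waveforms": 60 traces of 32 samples each, drawn around three cluster
# centers (purely illustrative stand-ins for time-space waveforms).
centers = rng.normal(size=(3, 32))
waveforms = np.vstack([c + 0.1 * rng.normal(size=(20, 32)) for c in centers])

def kmeans_representatives(data, k, n_iter=50):
    """Diversity-sampling sketch: cluster the traces with k-means and keep
    the trace nearest each centroid as a representative pilot candidate."""
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep centroid if its cluster is empty
                centroids[j] = data[labels == j].mean(axis=0)
    dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return np.unique(dist.argmin(axis=0))  # nearest trace index per centroid

representatives = kmeans_representatives(waveforms, k=3)

# Random sampling alternative: every trace equally likely, picks independent.
random_pick = rng.choice(len(waveforms), size=3, replace=False)
```

The k-means selection tends to cover all three clusters, whereas the random pick may miss one, illustrating why diversity heuristics can shrink the dataset without losing representativeness.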
In some embodiments, generating the plurality of pilot waveforms includes generating virtual waveforms. As shown in Block 714 of
In Block 716, the plurality of virtual waveforms may be generated based on a combination of the time-space waveforms of each bin, in accordance with one or more embodiments. A plurality of virtual waveforms may be generated using various processing techniques, including the examples discussed below. Different sorting domains may be used to generate the plurality of virtual waveforms. For example, the actual waveforms (312) of each bin may be processed in the time domain, in a frequency domain, or in the Laplace-Fourier domain.
In some embodiments, an elevation correction is needed before combining the actual waveforms (312) of each bin to account for differences in elevation (e.g., for land seismic data). As such, seismic sources (106) and seismic receivers (116) may need to be referenced to a common datum before combining the actual waveforms (312) in order to accurately model the subsurface. This operation may be performed with various approaches, such as statics corrections (i.e., a vertical-travel-path time shift) or redatuming (e.g., using wave-based processes to move the seismic trace to the datum plane).
In some embodiments, a horizontal datum plane may be used as a reference datum whose X-Y-Z position coincides with the center of a particular bin. Likewise, the elevations of seismic sources (106) and seismic receivers (116) may be inferred using an interpolation process from nearby source-receiver positions. Using a velocity of the respective pressure wave, a time shift may be applied to each actual waveform (312) within a bin by assuming a vertical travel path from an actual source/receiver position to the horizontal datum plane. This elevation correction may correspond to a static correction or a phase shift. Additional corrections may be applied to a seismic trace with a residual component of the time shift (which may be referred to as a "residual static" or "residual phase shift") that is calculated with surface-consistent methods.
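The vertical-travel-path static described above may be sketched as a sample shift; the sample interval, near-surface velocity, datum elevation, and function names below are illustrative assumptions:

```python
import numpy as np

dt = 0.004                 # sample interval, s (assumed)
v_near_surface = 2000.0    # assumed near-surface velocity, m/s
datum_z = 100.0            # elevation of the horizontal datum plane, m

def static_shift_samples(station_z):
    """Vertical-travel-path static: shift (in samples) needed to move a
    source or receiver from its actual elevation to the datum plane."""
    delta_t = (datum_z - station_z) / v_near_surface
    return int(round(delta_t / dt))

def apply_static(trace, n_shift):
    """Shift a trace by n_shift samples, zero-padding (a crude static)."""
    shifted = np.zeros_like(trace)
    if n_shift >= 0:
        shifted[n_shift:] = trace[:len(trace) - n_shift]
    else:
        shifted[:n_shift] = trace[-n_shift:]
    return shifted

trace = np.zeros(50)
trace[10] = 1.0  # a single impulse event
corrected = apply_static(trace, static_shift_samples(station_z=60.0))
```

A redatuming implementation would instead propagate the wavefield to the datum; the static above is the simpler vertical-path approximation.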
Other processing operations include, for example, applying alignment and amplitude corrections to each time-space waveform of each bin, as shown in Block 717. Waveform alignment is desirable before the waveforms are combined, because it may avoid smearing of phases (leading to a loss of frequency content) or amplitudes. For example, alignment may be based on differences in offset among the waveforms within a bin. Stacked waveforms may also display a position-related time delay. Such time delays may become larger as the physical dimensions (i.e., coverage area) of the bin increase. In other words, waveforms at slightly different distances from a seismic source may need compensation to account for the differences in travel times. In some embodiments, for example, a linear moveout (LMO) is determined for a bin. An LMO value may compensate for the difference in position within the bin for linear events, such as diving waves, but may not properly correct reflected events, especially at a short difference in position.
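An LMO alignment can be sketched as an offset-proportional time advance; the velocity, sample interval, and event position below are hypothetical:

```python
import numpy as np

dt = 0.004       # sample interval, s (assumed)
v_lmo = 1800.0   # assumed LMO velocity for near-linear events, m/s

def apply_lmo(trace, offset):
    """Linear moveout: advance the trace by offset / v_lmo so that linear
    events (e.g., diving waves) align across the bin before stacking.
    (Negative offsets are not handled in this simplified sketch.)"""
    n_shift = int(round((offset / v_lmo) / dt))
    aligned = np.zeros_like(trace)
    aligned[:len(trace) - n_shift] = trace[n_shift:]
    return aligned

trace = np.zeros(100)
trace[50] = 1.0                       # linear event arriving at sample 50
aligned = apply_lmo(trace, offset=360.0)
```

After this correction, traces at different offsets within the bin can be summed without smearing the linear event; hyperbolic (reflected) events would need a moveout correction of their own.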
In some embodiments, a waveform correction may be performed using a slant stack technique. For example, various ray parameters may be determined for the actual waveforms (312) within a bin. The ray parameters may be derived from knowledge of the local near-surface velocity obtained with various seismic preprocessing steps. For relatively small bins, the moveout of seismic events in the bin may have a small amount of curvature. Likewise, refracted events may have zero curvature (i.e., refracted waves may have exactly linear moveout) and reflection events may be almost linear, with small amounts of curvature. With small bins, the actual waveforms (312) belonging to a bin may experience approximately the same velocity. Thus, the actual waveforms (312) may have signatures of pressure waves traveling from approximately the same origin (i.e., seismic source) to almost the same destination (i.e., seismic receiver).
Various combination methods may be used to generate each virtual waveform (310). In some embodiments, different actual waveforms (312) within a bin may be weighted in a summation for producing a virtual waveform (310). For example, Gaussian tapering may be used in a stacking operation to provide more weight to actual waveforms (312) located closer to a beam center of a virtual waveform (310) and progressively reduce the weight of actual waveforms (312) as the distance from the center increases. However, while weights may be determined as part of an elevation correction or bin correction, weights may also be used in other virtual waveform processing, such as noise reduction.
In some embodiments, various weighted actual waveforms (312) are determined within a bin using a predetermined weight distribution. For example, the predetermined weight distribution may assign a larger weight value to a respective actual waveform (312) closer to a beam center of a seismic survey than other time-space waveforms. The predetermined weight distribution may be defined as various weight values (e.g., a vector or a function of distance or other parameters for determining weights). Likewise, weight values may change based on a predetermined increment as a function of distance from the beam center.
Stacking with weighting algorithms may facilitate determining how different actual waveforms (312) contribute to each virtual waveform (310). The virtual waveform (310) may be generated by stacking in a particular domain, such as the common midpoint domain or the azimuth domain. In some embodiments, corrected and non-corrected actual waveforms (312) may be combined to generate a virtual waveform (310), for example, by a summation process.
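The Gaussian-tapered weighted stack described above may be sketched as follows; the traces, distances, and taper width are illustrative assumptions:

```python
import numpy as np

def gaussian_stack(traces, distances, sigma):
    """Combine the actual waveforms of a bin into one virtual waveform,
    weighting each trace with a Gaussian taper of its distance from the
    beam center so that nearer traces contribute more."""
    weights = np.exp(-0.5 * (np.asarray(distances) / sigma) ** 2)
    weights = weights / weights.sum()          # normalize the taper
    return np.sum(weights[:, None] * traces, axis=0)

# Three toy traces; the third lies far from the beam center and is noisier.
traces = np.array([[1.0, 2.0, 3.0],
                   [1.0, 2.0, 3.0],
                   [9.0, 9.0, 9.0]])
virtual = gaussian_stack(traces, distances=[0.0, 10.0, 200.0], sigma=25.0)
```

The distant outlying trace receives a nearly zero weight, so the virtual waveform closely follows the consistent near-center traces, which is the noise-suppressing effect the tapered stack is meant to achieve.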
Each virtual waveform (310) may be considered as an enhanced representation of the time-space waveforms of the bin, since each virtual waveform may result from combinations and corrections of the time-space waveforms of the bin, leading to waveforms with lower noise content.
The collection of virtual waveforms may be called a Virtual Super Gather (VSG). The VSG may be considered a volumetric, averaged representation of each of the plurality of time-space waveforms (302). The generated VSG may be used for velocity analysis and hyperbolic moveout corrections.
Once the plurality of virtual waveforms is obtained, the plurality of pilot waveforms may then be generated by applying statistical sampling to the plurality of virtual waveforms, as shown in Block 718.
In Block 720, a training seismic dataset is formed, in accordance with one or more embodiments. The training seismic dataset includes an input training dataset (370) and an output training dataset (380). The input training dataset is based on the plurality of time-space waveforms (302) and the output training dataset is based on the plurality of pilot waveforms (308).
For example, in some embodiments the input training dataset (370) may include the plurality of time-space waveforms (302), and the output training dataset (380) may include the plurality of pilot waveforms (308). In other embodiments, the input training dataset (370) may include a first partition of the plurality of time-space waveforms (302), and the output training dataset (380) may include the pilot waveforms corresponding to the first partition. The first partition may be, for example, 70% of the plurality of time-space waveforms (302).
By generating the plurality of pilot waveforms with the process described in Blocks (710-718) of
In Block 730, a machine-learning (ML) model is trained to predict the output training dataset, at least in part, from the input training dataset, in accordance with one or more embodiments. The ML model (304) may be trained to remove or reduce the seismic noise from a given input seismic dataset (370). In accordance with one or more embodiments, the ML model type may be a convolutional neural network (CNN).
In some embodiments, the ML model (304) may be validated using a validation dataset that is based on a second partition of the plurality of time-space waveforms, as shown in Block 732. The validation dataset may also include the pilot waveforms corresponding to the second partition. The second partition may be, for example, 15% of the plurality of time-space waveforms (302).
Further, the ML model (304) may be tested using a testing dataset that is based on a third partition of the plurality of time-space waveforms, as shown in Block 734. The testing dataset may also include the pilot waveforms corresponding to the third partition. The third partition may be, for example, 15% of the plurality of time-space waveforms (302).
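The 70/15/15 partition into training, validation, and testing subsets described in Blocks 732-734 may be sketched as follows; the dataset size is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

n_waveforms = 1000                        # hypothetical number of waveforms
indices = rng.permutation(n_waveforms)    # shuffle before partitioning

# 70% training, 15% validation, 15% testing, as in the example percentages.
n_train = int(0.70 * n_waveforms)
n_val = int(0.15 * n_waveforms)

train_idx = indices[:n_train]                     # first partition
val_idx = indices[n_train:n_train + n_val]        # second partition
test_idx = indices[n_train + n_val:]              # third partition
```

The same index arrays would select both the time-space waveforms and their corresponding pilot waveforms, keeping input/output pairs together within each partition.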
In Block 740, a plurality of filtered time-space waveforms is predicted using the ML model. The plurality of filtered time-space waveforms is based, at least in part, on the plurality of time-space waveforms. The trained ML model (304) is configured to receive the plurality of time-space waveforms (302) and output a plurality of filtered time-space waveforms (314). The signal-to-noise ratio of the plurality of filtered time-space waveforms (314) may be increased because the ML model (304) is trained to remove or reduce seismic noise.
In some embodiments, the signal-to-noise ratio of the plurality of filtered time-space waveforms (314) may be further improved in several iterations, until a desired level of signal-to-noise ratio is obtained, as illustrated in the method of
Alternatively, the plurality of filtered time-space waveforms may be iteratively or recursively updated until a stopping condition is reached. The stopping condition may include the signal-to-noise ratio exceeding a predetermined threshold.
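The iterative update with a signal-to-noise ratio stopping condition may be sketched, for example, as follows. The SNR estimate, the threshold value, and the smoothing filter used in the test below are illustrative assumptions rather than required features of any embodiment:

```python
import numpy as np

def snr_db(signal_estimate, noisy):
    """Rough SNR estimate in dB: estimated signal power vs. residual power."""
    noise = noisy - signal_estimate
    return 10.0 * np.log10(np.sum(signal_estimate ** 2) / np.sum(noise ** 2))

def iterative_filtering(waveforms, filter_fn, snr_threshold_db=20.0, max_iters=10):
    """Repeatedly apply a trained filter (filter_fn) to the current waveforms
    until the estimated SNR exceeds the threshold or max_iters is reached."""
    current = waveforms
    for _ in range(max_iters):
        filtered = filter_fn(current)
        if snr_db(filtered, current) > snr_threshold_db:
            return filtered              # stopping condition reached
        current = filtered               # feed the filtered output back in
    return current
```

Each pass treats the previous output as the new input, so the residual removed per pass shrinks as the waveforms approach the desired signal-to-noise ratio.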
In Block 750, a seismic image is determined based, at least in part, on the plurality of filtered time-space waveforms (314), in accordance with one or more embodiments. In some embodiments, the seismic data (202) may include a plurality of gathers and each gather may include a plurality of time-space waveforms (205, 302), as illustrated in
Processing to generate a plurality of filtered time-space waveforms (314) may be performed for each gather of the seismic data (202), and the plurality of processed gathers may then be used by the seismic processing system (220) to generate a seismic image (230). For example, a partial seismic image may be generated for each of the plurality of filtered gathers, thereby obtaining a plurality of partial seismic images. A stacked seismic image may then be constructed, for example, by summing the partial seismic images.
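The per-gather processing and stacking described above may be sketched, for example, as follows, where `filter_fn` and `migrate_fn` are hypothetical stand-ins for the trained ML filter and for the imaging (migration) step, respectively:

```python
import numpy as np

def stacked_image(gathers, filter_fn, migrate_fn):
    """Filter each gather, form a partial seismic image from each filtered
    gather, and sum the partial images into a stacked seismic image."""
    partial_images = [migrate_fn(filter_fn(g)) for g in gathers]
    return np.sum(partial_images, axis=0)
```

Because summation is linear, coherent reflection events reinforce across the partial images while residual incoherent noise tends to cancel, further improving the stacked image.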
In Block 760, a drilling target in the subsurface region may be determined based on the seismic image (230), in accordance with one or more embodiments. The seismic image (230) may be transferred to a seismic interpretation system (640). The seismic interpretation system (640) may use the seismic image (230) to determine the location of a drilling target (630) to be penetrated by a wellbore (118). The location of the drilling target (630) may be based on, for example, an expected presence of gas or another hydrocarbon within the seismic image (230). Locations in a seismic image (230) that indicate an elevated probability of the presence of a hydrocarbon may be targeted by well designers. On the other hand, locations in a seismic image (230) indicating a low probability of the presence of a hydrocarbon may be avoided by well designers.
In Block 770, a planned wellbore trajectory (604) to intersect the drilling target (630) is planned, in accordance with one or more embodiments. Knowledge of the location of the drilling target (630) and the seismic image (230) may be transferred to a wellbore planning system (638). Instructions associated with the wellbore planning system (638) may be stored, for example, in the memory (1009) within the computer system (1000) described in
In Block 780, a portion of a wellbore is drilled guided by the planned wellbore trajectory, in accordance with one or more embodiments. The wellbore planning system (638) may transfer the planned wellbore trajectory (604) to the drilling system (600) described in
As an illustrative example of the methods, processes, models, and techniques described herein, a plurality of time-space waveforms (302) was acquired, an ML model (304) was trained, and a plurality of filtered waveforms (314) was generated. The results are shown in
The training seismic dataset (306) is generated using virtual waveforms as described in
The ML model (304) was patterned after the architecture shown in
Returning to
In some embodiments the wellbore planning system (638), the seismic interpretation system (640), and the seismic processing system (220) may each be implemented within the context of a computer system.
The computer (1100) can serve as a client, a network component, a server, a database or other persistency, or any other component (or a combination of roles) of a computer system for performing the subject matter described in the instant disclosure. The illustrated computer (1100) is communicably coupled with a network (1102). In some implementations, one or more components of the computer (1100) may be configured to operate within various environments, including a cloud-computing-based, local, global, or other environment (or a combination of environments).
At a high level, the computer (1100) is an electronic computing device operable to receive, transmit, process, store, or manage data and information associated with the described subject matter. According to some implementations, the computer (1100) may also include or be communicably coupled with an application server, e-mail server, web server, caching server, streaming data server, business intelligence (BI) server, or other server (or a combination of servers).
The computer (1100) can receive requests over the network (1102) from a client application (for example, executing on another computer (1100)) and respond to the received requests by processing them in an appropriate software application. In addition, requests may also be sent to the computer (1100) from internal users (for example, from a command console or by another appropriate access method), external or third parties, other automated applications, as well as any other appropriate entities, individuals, systems, or computers.
Each of the components of the computer (1100) can communicate using a system bus (1103). In some implementations, any or all of the components of the computer (1100), both hardware or software (or a combination of hardware and software), may interface with each other or the interface (1104) (or a combination of both) over the system bus (1103) using an application programming interface (API) (1107) or a service layer (1108) (or a combination of the API (1107) and the service layer (1108)). The API (1107) may include specifications for routines, data structures, and object classes. The API (1107) may be either computer-language independent or dependent and refer to a complete interface, a single function, or even a set of APIs. The service layer (1108) provides software services to the computer (1100) or other components (whether or not illustrated) that are communicably coupled to the computer (1100). The functionality of the computer (1100) may be accessible to all service consumers using this service layer (1108). Software services, such as those provided by the service layer (1108), provide reusable, defined business functionalities through a defined interface. For example, the interface may be software written in JAVA, C++, or another suitable language providing data in extensible markup language (XML) format or another suitable format. While illustrated as an integrated component of the computer (1100), alternative implementations may illustrate the API (1107) or the service layer (1108) as stand-alone components in relation to other components of the computer (1100) or other components (whether or not illustrated) that are communicably coupled to the computer (1100). Moreover, any or all parts of the API (1107) or the service layer (1108) may be implemented as child or sub-modules of another software module, enterprise application, or hardware module without departing from the scope of this disclosure.
The computer (1100) includes an interface (1104). Although illustrated as a single interface (1104) in
The computer (1100) includes at least one computer processor (1105). Although illustrated as a single computer processor (1105) in
The computer (1100) also includes a memory (1109) that holds data for the computer (1100) or other components (or a combination of both) that may be connected to the network (1102). For example, memory (1109) may be a database storing data consistent with this disclosure. Although illustrated as a single memory (1109) in
The application (1106) is an algorithmic software engine providing functionality according to particular needs, desires, or particular implementations of the computer (1100), particularly with respect to functionality described in this disclosure. For example, application (1106) can serve as one or more components, modules, applications, etc. Further, although illustrated as a single application (1106), the application (1106) may be implemented as multiple applications (1106) on the computer (1100). In addition, although illustrated as integral to the computer (1100), in alternative implementations, the application (1106) may be external to the computer (1100).
There may be any number of computers (1100) associated with, or external to, a computer system containing computer (1100), each computer (1100) communicating over network (1102). Further, the terms "client," "user," and other appropriate terminology may be used interchangeably as appropriate without departing from the scope of this disclosure. Moreover, this disclosure contemplates that many users may use one computer (1100), or that one user may use multiple computers (1100).
In some embodiments, the computer (1100) is implemented as part of a cloud computing system. For example, a cloud computing system may include one or more remote servers along with various other cloud components, such as cloud storage units and edge servers. In particular, a cloud computing system may perform one or more computing operations without direct active management by a user device or local computer system. As such, a cloud computing system may have different functions distributed over multiple locations from a central server, which may be performed using one or more Internet connections. More specifically, a cloud computing system may operate according to one or more service models, such as infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), mobile "backend" as a service (MBaaS), serverless computing, artificial intelligence (AI) as a service (AIaaS), and/or function as a service (FaaS).
Although only a few example embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from this invention. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims.