The present invention relates to backpropagation-enabled processes, and in particular, to a method for capturing long-range dependencies in seismic images.
Backpropagation-enabled machine learning processes offer the opportunity to speed up time-intensive seismic interpretation processes. Many investigators are using field-acquired seismic data for training the backpropagation-enabled processes. In such cases, investigators apply labels to identified geologic features as a basis for training the backpropagation-enabled process.
For example, Salman et al. (WO2018/026995A1, 8 Feb. 2018) describe a method for “Multi-Scale Deep Network for Fault Detection” that generates patches from a known seismic volume acquired from field data, the known seismic volume having known faults. Labels are assigned to the patches and represent a subset of the training areas in a patch. Each patch is a contiguous portion of a section of the known seismic volume and has multiple pixels (e.g., 64×64 pixels). The patch is intersected by a known fault specified by a user. A machine learning model is trained on the labels to predict a result that identifies an unknown fault in a target seismic volume.
Waldeland et al. also describe using deep learning techniques for seismic data analysis in “Salt classification using deep learning” (79th EAGE Conference & Exhibition, 2017, Paris, France, 12-15 Jun. 2017). As noted by Waldeland et al., deep learning on images is most often done using a group of convolutional neural networks. A group of convolutional neural networks (CNN) is a cascade of convolutions that can be used to construct attributes for solving a problem of classifying salt bodies. With a view to reducing computation time, Waldeland et al. train a CNN to classify each pixel in a dataset as either “salt” or “not salt.” The CNN is trained on one inline slice of the dataset, and the trained CNN is subsequently used to classify a different slice in the same dataset.
The results on two datasets show that salt bodies can be labelled in 3D datasets using one manually labelled slice. Waldeland et al. state that one advantage of using CNN for salt classification is that the input is just a small cube from the raw data, removing the need for attribute-engineering and making it easier to classify any given location in the dataset without computing attribute-sections. A coarse classification is done by evaluating every n-th pixel, while a more refined classification requires evaluating every pixel.
Waldeland et al. acknowledge the difficulty of working with full seismic data, which may be 3D, 4D or 5D, for producing a fully classified image. Accordingly, small cubes of input data of dimension 65×65×65 are selected from the full cube of seismic data. The goal is to have the network predict the class of the center pixel of each small cube. The network is trained on one manually labeled inline slice (see also Waldeland et al. “Convolutional neural networks for automated seismic interpretation” The Leading Edge 529-537; July 2018) with selected 3D cubes around the pixels in the slice. Random augmentation is applied to the training slice to simulate a larger training set by random scaling, random flipping of non-depth axes, random rotation, and random tilting.
While Waldeland et al. were motivated to reduce computational time by reducing data to center pixels of a seismic cube, the computational time actually increases significantly when a more detailed and refined classification is required, especially when it is desired to identify the occurrence of other types of subsurface features.
Griffith et al. disclose methods for training backpropagation-enabled processes to improve accuracy and efficiency, while reducing the need for computational resources. In WO2020/053197A1 (19 Mar. 2020), a backpropagation-enabled segmentation process for identifying an occurrence of a subsurface feature computes a prediction of the occurrence of the subsurface feature that has a prediction dimension of at least 1 and is at least 1 dimension less than the input dimension. In WO2020/053199A1 (19 Mar. 2020), a backpropagation-enabled regression process for predicting values of an attribute of subsurface data computes a predicted value that has a dimension of at least 1 and is at least 1 dimension less than the input dimension.
A challenge for currently available backpropagation-enabled processes is the field of view in a seismic data set. Current processes look at a single pixel and the neighboring pixels. For example, for a typical seismic data set of 64×64×64 pixels, current processes have a limited field of view of 3×3×3 pixels. Achieving a wider field of view requires larger filter sizes and/or recursively applying a large number of small filters, both of which are computationally prohibitive for currently available backpropagation-enabled processes.
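To make the cost of widening the field of view concrete, the following sketch (illustrative only, and not from the specification; the 3×3×3 kernel and 64-pixel cube follow the example above) computes the per-axis receptive field of a stack of small stride-1 convolutional filters, showing how many layers must be applied recursively before a network can relate opposite ends of a 64-pixel axis:

```python
def receptive_field(num_layers, kernel=3):
    """Per-axis receptive field (in pixels) of `num_layers` stacked
    stride-1 convolutions with the given kernel size: each layer
    widens the field by (kernel - 1) pixels."""
    rf = 1
    for _ in range(num_layers):
        rf += kernel - 1
    return rf

# One 3x3x3 filter sees only 3 pixels per axis, as noted above.
assert receptive_field(1) == 3
# Roughly 31 stacked 3x3x3 layers are needed to span a 64-pixel cube.
assert receptive_field(31) == 63
```

This is why, as stated above, achieving a wide field of view by recursively applying many small filters (or by enlarging the filters themselves) quickly becomes computationally prohibitive.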
In a non-analogous field of machine translation, Vaswani et al. (“Attention is all you need” arXiv1706.03762; 6 Dec. 2017) indicate that, in models using convolutional neural networks as basic building blocks, the number of operations required to relate signals from two arbitrary input or output positions grows as the distance between words increases. This makes it more difficult to learn dependencies between distant word positions.
A disadvantage of the limited field of view of conventional backpropagation-enabled processes for seismic images is that the context of geological structures is lost. Accordingly, current processes may not, for example, capture connections between ends of a syncline structure. There is a need to capture long-range dependencies, beyond the limited field of view of conventional processes, in seismic data sets, thereby improving accuracy and efficiency of the trained process.
According to one aspect of the present invention, there is provided a method for capturing long-range dependencies in seismic images, comprising the steps of: providing a training seismic data set, the training seismic data set having a set of associated training labels; dependency-training a backpropagation-enabled process to compute spatial relationships between elements of the training seismic data set, thereby producing a dependency-trained backpropagation-enabled process; label-training the dependency-trained backpropagation-enabled process using the training seismic data set and the associated training labels to compute a prediction selected from an occurrence, a value of an attribute, and combinations thereof, thereby producing a label-trained backpropagation-enabled process; and using the label-trained backpropagation-enabled process to capture long-range dependencies in a non-training seismic data set by computing a prediction selected from the group consisting of a geologic feature occurrence, a geophysical property occurrence, a hydrocarbon occurrence, an attribute of subsurface data, and combinations thereof.
The present invention provides a method for capturing long-range dependencies in seismic images using a backpropagation-enabled process that has been trained by dependency-training for spatial relationships and label-training for predicting one or more of a geologic feature occurrence, a geophysical property occurrence, a hydrocarbon occurrence, an attribute of subsurface data, and combinations thereof.
Analysis of subsurface data, including seismic data, is important for improving efficiency and accuracy of hydrocarbon exploration. However, seismic data is often voluminous and subject to human error in interpretation. Moreover, the spatial relationship between spaced-apart elements of the seismic data is often lost because backpropagation-enabled processes often have a short-range dependency, thereby restricting the field of view. In other words, conventional backpropagation-enabled processes can look only at the immediate neighborhood of an element to obtain the statistics and/or information they need.
The inventor has surprisingly discovered that by first dependency-training the backpropagation-enabled process to compute spatial relationships between spaced-apart elements of a training seismic data set, followed by label-training the dependency-trained backpropagation-enabled process, the predictions of geologic feature occurrences, geophysical property occurrences, hydrocarbon occurrences, and/or attributes of subsurface data can be improved, thereby improving the prospectivity of the region targeted by a non-training seismic data set.
Accordingly, the backpropagation-enabled process can be leveraged to predict a geologic feature occurrence, a geophysical property occurrence, a hydrocarbon occurrence, an attribute of subsurface data, and combinations thereof.
Examples of geologic features include, without limitation, boundary layer variations, overlapping beds, rivers, channels, tributaries, salt domes, basins, and combinations thereof. Geologic features also include indicators of geologic processes including, without limitation, tectonic deformation, erosion, infilling, and combinations thereof. Examples of tectonic deformation processes include, without limitation, earthquakes, creep, subsidence, uplift, erosion, tensile fractures, shear fractures, thrust faults, and combinations thereof. Geologic features may also include lithofacies, the geologic environment in which the rocks were deposited. Geologic features may also include elements of a working petroleum system such as source rocks, migration pathways, reservoir rocks, seal (a.k.a. cap rock) and trapping elements.
Examples of geophysical properties include, without limitation, elastic parameters of the subsurface (such as λ and μ), P-wave velocity, S-wave velocity, porosity, impedance, reservoir thickness, and combinations thereof.
Examples of hydrocarbon occurrences include, without limitation, the occurrence of any combination of oil, gas, or brine occupying the pore space of the rock matrix.
Examples of attributes of subsurface data include any quantity derived from the seismic data such as, without limitation, spectral content, energy associated with changes in frequency bands, signals associated with filters including, without limitation, noise-free filters, low-pass filters, high-pass filters, and band-pass filters, acoustic impedance, reflectivity, semblance, loop-based properties, envelope, phase, dip, azimuth, curvature and the like.
Examples of backpropagation-enabled processes include, without limitation, artificial intelligence, machine learning, and deep learning. It will be understood by those skilled in the art that advances in backpropagation-enabled processes continue rapidly. The method of the present invention is expected to be applicable to those advances even if under a different name. Accordingly, the method of the present invention is applicable to the further advances in backpropagation-enabled processes, even if not expressly named herein.
A preferred embodiment of a backpropagation-enabled process is a deep learning process, including, but not limited to, a convolutional neural network.
The backpropagation-enabled process may be supervised, semi-supervised, unsupervised or a combination thereof. In one embodiment, a supervised process is made semi-supervised by the addition of an unsupervised technique. In another embodiment, a subset of the seismic data is labeled in a semi-supervised process. As an example, the unsupervised technique may be an auto-encoder step. Examples of an unsupervised backpropagation-enabled process include, without limitation, a variational autoencoder (VAE) process and a generative adversarial network (GAN) process. Examples of a semi-supervised backpropagation-enabled process include, without limitation, a semi-supervised VAE process and a semi-supervised GAN process.
In a supervised backpropagation-enabled process, the training seismic data set is labeled to provide examples of geologic features, geophysical properties, hydrocarbons, and/or attributes of interest. In an unsupervised backpropagation-enabled process, a feature, property or attribute of interest may be identified by, for example, drawing a polygon around the image of interest in the seismic data. The trained process will then identify areas of interest having similar latent space characteristics. When the training seismic data set is labeled seismic data, the labels may have a dimension of 1D-3D.
In one embodiment, the supervised backpropagation-enabled process is a classification process. The classification process may be conducted voxel-wise, slice-wise and/or volume-wise.
In another embodiment, the unsupervised backpropagation-enabled process is a clustering process. The clustering process may be conducted voxel-wise, slice-wise and/or volume-wise.
In another embodiment, the unsupervised backpropagation-enabled process is a generative process. The generative process may be conducted voxel-wise, slice-wise and/or volume-wise.
In accordance with the present invention, a training seismic data set has a set of associated training labels. The training seismic data set may have a dimension in the range of from 1 to 6. An example of a 1D seismic data set is a ribbon (for example representing a line in a 2-dimensional slice or grid, for example a line in an x or y direction), or a trace (for example, an amplitude in a z-direction at an x-value). A seismic array is an example of 2D or 3D data, while pre-stack seismic response data may be 4D and/or 5D. An example of 6D data may be 5D data with time-lapse data. Seismic response data may be field-acquired and/or simulated seismic data from multiple field or simulated source locations and/or multiple field or simulated receiver locations. Seismic response data includes, for example, without limitation, single offset, multiple offsets, single azimuth, multiple azimuths, and combinations thereof for all common midpoints of field-acquired and/or simulated seismic data. 4D-6D data may also be 3D seismic data with attributes related to seismic survey acquisition or the result of multiple attribute computations. As an example, multiple attributes preferably comprise 3 color channels. The seismic response data may be measured in a time domain and/or a depth domain.
The 2D data set may, for example, be 2D seismic data or 2D data extracted from seismic data of 3 or more dimensions. Likewise, the 3D data set may, for example, be 3D seismic data or 3D data extracted from seismic data of 4 or more dimensions. And the 4D data set may, for example, be 4D seismic data or 4D data extracted from seismic data of 5 or more dimensions.
The training seismic data set may be selected from real seismic data, synthetically generated seismic data, augmented seismic data, and combinations thereof.
For real seismic data, the associated labels describing subsurface features in the image are manually generated, while labels for simulated seismic data are automatically generated. The generation of labels, especially manual label generation, is time-intensive and requires expertise and precision to produce an effective set of labels.
By augmented data, we mean field-acquired and/or synthetically generated data that is modified, for example, by conventional DL data-augmentation techniques, as described in Taylor et al. (“Improved deep learning with generic data augmentation” IEEE Symposium Series on Computational Intelligence SSCI 2018 1542-1547; 2018), which describes conventional augmenting by geometric transformations (flipping, cropping, scaling and rotating) and photometric transformations (amending color channels to change lighting and color by color jittering and Fancy Principal Component Analysis). Augmented data may also be generated, for example, as described in Liu et al. (US2020/0183035A1), which relates to data augmentation for seismic interpretation, recognizing that standard data augmentation strategies may produce limited plausible alternative samples and/or may lead to geologically or geophysically infeasible or implausible alternative samples. The machine learning method involves extracting patches from input data and transforming that data based on the input data and geologic and/or geophysical domain knowledge to generate augmented data. The transformation is selected from an identity transformation, a spatial filter, a temporal filter, an amplitude scaling, a rotational transformation, a dilatational transformation, a deviatoric transformation, a resampling using interpolation or extrapolation, a spatial and temporal frequency modulation, a spectral shaping filter, an elastic transformation, an inelastic transformation, and a geophysical model transformation. In another embodiment, two pieces of data are blended together to generate a new piece of data. Other geophysical augmenting methods may also be used to generate augmented data. The labels may be preserved or modified in the augmentation.
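As a minimal sketch of the conventional geometric augmentations named above (flipping of non-depth axes, rotation, and amplitude scaling), the following is illustrative only; the patch shape, scaling range, and function name are assumptions, not taken from the cited references:

```python
import numpy as np

def augment_patch(patch, rng):
    """Apply simple geometric/amplitude augmentations to a 3D seismic patch.

    Hypothetical sketch: random flips of the non-depth (x, y) axes,
    a random 90-degree rotation in the x-y plane, and mild random
    amplitude scaling. Axis 2 is assumed to be the depth/time axis.
    """
    out = patch.copy()
    for axis in (0, 1):                 # flip only the non-depth axes
        if rng.random() < 0.5:
            out = np.flip(out, axis=axis)
    k = rng.integers(0, 4)              # 0-3 quarter turns in the x-y plane
    out = np.rot90(out, k=k, axes=(0, 1))
    out *= rng.uniform(0.9, 1.1)        # mild amplitude scaling (assumed range)
    return out

rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64, 64))
aug = augment_patch(patch, rng)
assert aug.shape == patch.shape         # label-preserving transforms keep shape
```

Because these transforms act on geometry and amplitude only, a voxel-wise label volume can be passed through the same flips and rotations so that labels are preserved, as noted above.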
In this way, the data set size may be augmented to improve the model by introducing variations of data without requiring resources of acquiring and labeling field-acquired data or generating new synthetic data. Preferably, the augmented data is generated by a test-time augmentation technique.
The backpropagation-enabled process is dependency-trained to compute spatial relationships or connections between elements of the training seismic data set.
The dependency-training step preferably computes spatial relationships between elements of the training seismic data set by applying self-attention weights to the training seismic data set.
In a preferred embodiment, the dependency-training step involves preparing a square self-attention matrix using the training seismic data set. Where the training seismic data set is 1D, for example, 1×N, the square self-attention matrix is preferably N×N. Where the training seismic data set is 2D or greater, it is preferably flattened to a 1D representation, for example, 1×M, and the square self-attention matrix is preferably M×M.
So, for example, where the training seismic data set is 64×64×64, the square self-attention matrix will have a dimension of 262,144×262,144. The self-attention matrix uses the seismic image to correlate pixels within the whole of the training seismic data set. By providing an unrestricted field of view, long-range dependencies can be captured because the backpropagation-enabled process is allowed to make connections between inter-dependent pixels in all directions.
Preferably, at least a portion of the square self-attention matrix is populated with values defining the spatial relationships between any two elements in the square self-attention matrix. Each value represents the strength of the spatial relationship between two elements in the matrix. Preferably, the values are provided on a scale of 0-1, where 1 indicates the highest similarity.
An updated training seismic data set is defined by combining the training seismic data set with the scores of the self-attention matrix, preferably by performing a linear transformation of the populated square self-attention matrix with the training seismic data set. Examples of suitable linear transformations include, without limitation, convolution, pooling, softmax, Fourier, and combinations thereof. The updated training seismic data set preferably has a dimension equal to that of the training seismic data set.
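One common formulation of the steps above is the scaled dot-product self-attention of Vaswani et al., cited earlier. The sketch below is illustrative only: the per-element feature dimension `d` and the projection matrices `wq`, `wk`, `wv` (random here; fitted by backpropagation during dependency-training) are assumptions not specified in the description, and the tiny M stands in for the 262,144 flattened pixels of the example above:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax; rows become scores on a 0-1 scale."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_update(data, wq, wk, wv):
    """One self-attention pass over a flattened training data set.

    data : (M, d) array -- M flattened elements, d features per element.
    wq, wk, wv : (d, d) projection matrices (learned in practice).
    Builds an M-by-M matrix of pairwise relationship strengths, then
    linearly transforms the data with it; output keeps the input shape.
    """
    q, k, v = data @ wq, data @ wk, data @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))  # (M, M), rows sum to 1
    return scores @ v                                # same shape as input

rng = np.random.default_rng(1)
M, d = 16, 4
data = rng.standard_normal((M, d))
w = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
updated = self_attention_update(data, *w)
assert updated.shape == data.shape
```

The softmax keeps each row of the M×M matrix on the 0-1 scale described above, with larger values indicating stronger spatial relationships between the corresponding pair of elements.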
The updated training seismic data set may be used in the next step, or the steps of preparing and populating the self-attention matrix and updating the training seismic data set may be repeated one or more times. As the dependency-training progresses, the backpropagation-enabled process with self-attention learns to put correct values in the matrix to properly capture relationships between elements. Preferably, the steps are repeated from 1 to 25 times, more preferably from 1 to 10 times, most preferably from 2 to 8 times. By repeating the steps, the strength of connections between elements is improved.
A sequence of one or more mathematical operations is executed on the updated training seismic data set. The mathematical operations may be multiplying and/or adding in any sequence. The dimension of the mathematical operation is preferably less than or equal to that of the training seismic data set. The steps of preparing and populating the self-attention matrix, updating the training seismic data set, and executing a sequence of layers may be repeated one or more times. Preferably, the steps are repeated until the prediction accuracy on the training seismic data set exceeds 80%, preferably 85%, or until the prediction accuracy substantially plateaus or stops increasing.
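The stopping rule described above (repeat until accuracy exceeds a target or substantially plateaus) can be sketched as a simple control loop. All names and thresholds here are illustrative; `step_fn` stands in for one round of preparing and populating the self-attention matrix, updating the data set, and executing the layer sequence, returning the current prediction accuracy:

```python
def train_until_plateau(step_fn, max_iters=25, target=0.85, tol=1e-3):
    """Repeat a training step until accuracy exceeds `target` or
    improves by less than `tol` (a plateau), up to `max_iters` rounds.

    step_fn() performs one dependency-training round and returns the
    current prediction accuracy on the training seismic data set.
    Returns (iterations used, final accuracy).
    """
    prev = -1.0
    for i in range(max_iters):
        acc = step_fn()
        if acc >= target or abs(acc - prev) < tol:
            return i + 1, acc
        prev = acc
    return max_iters, prev

# Usage with a stand-in accuracy sequence (hypothetical values):
accs = iter([0.60, 0.70, 0.80, 0.86])
n, final = train_until_plateau(lambda: next(accs))
assert n == 4 and final == 0.86     # stops once the 85% target is exceeded
```

The `max_iters` bound mirrors the preferred 1-to-25 repetition range stated above, while `target` and `tol` correspond to the accuracy-exceeded and accuracy-plateau stopping conditions, respectively.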
The dependency-trained backpropagation-enabled process is then label-trained using the training seismic data set and the associated training labels to compute a prediction of an occurrence and/or a value of an attribute.
The label-trained backpropagation-enabled process can now be used to capture long-range dependencies in a non-training seismic data set. Preferably, the trained backpropagation-enabled process computes a regression prediction and/or a segmentation prediction. The prediction may be a geologic feature occurrence, a geophysical property occurrence, a hydrocarbon occurrence, and/or an attribute of subsurface data.
For example, a suitable backpropagation-enabled segmentation process is described in Griffith et al. WO2020/053197A1 (19 Mar. 2020). A suitable backpropagation-enabled regression process is described in Griffith et al. WO2020/053199A1 (19 Mar. 2020).
While preferred embodiments of the present invention have been described, it should be understood that various changes, adaptations and modifications can be made therein within the scope of the invention(s) as claimed below.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/US2022/035475 | 6/29/2022 | WO | |
| Number | Date | Country |
|---|---|---|
| 63216021 | Jun 2021 | US |