The disclosed embodiments relate generally to techniques for determining reservoir properties in subsurface reservoirs and, in particular, to a method of deriving high-resolution reservoir parameters for a subsurface reservoir that honors both seismic and flow-related data using a combination of stochastic inversion and deep learning.
Seismic exploration involves surveying subterranean geological media for hydrocarbon deposits. A survey typically involves deploying seismic sources and seismic sensors at predetermined locations. The sources generate seismic waves, which propagate into the geological medium, creating pressure changes and vibrations. Variations in the physical properties of the geological medium give rise to changes in certain properties of the seismic waves, such as their direction of propagation.
Portions of the seismic waves reach the seismic sensors. Some seismic sensors are sensitive to pressure changes (e.g., hydrophones), others to particle motion (e.g., geophones), and industrial surveys may deploy one type of sensor or both. In response to the detected seismic waves, the sensors generate corresponding electrical signals, known as traces, and record them in storage media as seismic data. Seismic data will include a plurality of “shots” (individual instances of the seismic source being activated), each of which is associated with a plurality of traces recorded at the plurality of sensors.
Seismic data, particularly the amplitude-versus-angle or amplitude-versus-offset (AVA or AVO) data, may be inverted to estimate reservoir properties. The typical workflow transforms coarse-scale geophysical parameters, θc (acoustic velocity Vp, shear velocity Vs, density ρ, and porosity ϕ), derived from some form of AVA inversion, through a rock physics model derived from log data, to a fine-scale permeability model, kƒ.
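As a purely illustrative example of such a rock-physics transform (not the disclosed model), a Kozeny-Carman-type porosity-to-permeability relation, with coefficients that in practice would be calibrated to well-log data, might look like the following sketch; the coefficient values are placeholder assumptions:

```python
import numpy as np

def kozeny_carman_permeability(phi, c=1.0e-12, m=3.0):
    """Illustrative Kozeny-Carman-style transform: porosity -> permeability (m^2).

    c (pore-geometry constant) and m (porosity exponent) are hypothetical
    placeholders; real values would be fit to log data.
    """
    phi = np.asarray(phi, dtype=float)
    return c * phi**m / (1.0 - phi) ** 2

# Example: a porosity of 0.25 maps to roughly 2.8e-14 m^2 (about 28 millidarcy).
print(kozeny_carman_permeability(0.25))
```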
New developments in machine learning (ML) technology have provided methods that can link the coarse-scale θc to the fine-scale kƒ. However, unlike facial recognition applications, where ML has performed well, ML applications in the earth sciences are limited by a relative lack of training data (a company may have tens of seismic datasets, compared to the millions of photographs of faces in a facial recognition database). The lack of real data examples can be ameliorated using the method of transfer learning (Goodfellow et al., 2016), in which numerical models are used to produce synthetic data for ML training.
There exists a need for determining high-resolution reservoir parameters that honor both the seismic data and flow-related data to enable improved hydrocarbon production from hydrocarbon reservoirs.
In accordance with some embodiments, a method for deriving high-resolution reservoir parameters for a subsurface reservoir is disclosed. The method may include receiving a seismic dataset; inverting the seismic dataset to generate an ensemble of coarse-scale seismic parameters, wherein the inverting may use one of Bayesian models with Markov Chain Monte Carlo (MCMC) sampling, simulated annealing, particle swarm, or analytic Bayes formulations; receiving fine-scale lithotype models; developing a deep learning neural network based on transfer learning, using the fine-scale lithotype models, to generate a conditional probability distribution of high-resolution reservoir parameters; generating an ensemble of high-resolution reservoir parameters using the deep learning neural network to condition the ensemble of coarse-scale seismic parameters; and displaying, on a user interface, the ensemble of high-resolution reservoir parameters. The method may also include performing flow simulation for each image in the ensemble of high-resolution reservoir parameters to generate an ensemble of flow simulation results. The method may also include receiving flow-related data and comparing the flow-related data to the ensemble of flow simulation results. The method may also include applying selection criteria to select those models from the ensemble of flow simulation results that fit the flow-related data within a set variance.
In another aspect of the present invention, to address the aforementioned problems, some embodiments provide a non-transitory computer readable storage medium storing one or more programs. The one or more programs comprise instructions, which when executed by a computer system with one or more processors and memory, cause the computer system to perform any of the methods provided herein.
In yet another aspect of the present invention, to address the aforementioned problems, some embodiments provide a computer system. The computer system includes one or more processors, memory, and one or more programs. The one or more programs are stored in memory and configured to be executed by the one or more processors. The one or more programs include an operating system and instructions that when executed by the one or more processors cause the computer system to perform any of the methods provided herein.
Like reference numerals refer to corresponding parts throughout the drawings.
Described below are methods, systems, and computer readable storage media that provide a manner of deriving high-resolution reservoir parameters.
Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure and the embodiments described herein. However, embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures, components, and mechanical apparatus have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
The present invention includes embodiments of a method and system for deriving high-resolution reservoir parameters for a subsurface reservoir. Borrowing from the transfer-learning approach, the present invention uses a combination of stochastic Markov Chain Monte Carlo (MCMC)-based AVA inversion (Hoversten et al. 2017) with conditional generative adversarial networks (cGAN). The starting point is a Bayesian model for full joint AVA and production data inversion. A simplified form of the complete model is used to develop a workflow that generates an ensemble of θc and kƒ models that fit both the AVA data and the production data.
The Bayesian model for the joint posterior distribution of θc and kƒ, ƒ(θc, kƒ|S, D), given AVA data (S) and production data (D) is given by
ƒ(θc, kƒ|S, D) ∝ ƒ(S|θc) × ƒ(D|kƒ) × ƒ(kƒ|θc) × ƒ(θc)  (1)
where ƒ(S|θc) is the likelihood of S given θc and ƒ(D|kƒ) is the likelihood of D given kƒ. The term that integrates cGAN into the stochastic inversion is ƒ(kƒ|θc), the conditional probability of kƒ given θc. The final term in (1) is the prior probability of θc. The standalone stochastic AVA inversion (Hoversten et al. 2017) would remove ƒ(D|kƒ) and ƒ(kƒ|θc) from the right-hand side, leaving only the posterior ƒ(θc|S).
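For illustration only, the factorization in equation (1) can be evaluated in log space as a sum of four terms during sampling. In the minimal sketch below, every callable is a hypothetical placeholder rather than part of any disclosed implementation:

```python
def log_posterior(theta_c, k_f, S, D,
                  log_lik_ava, log_lik_flow, log_cond, log_prior):
    """Log of equation (1), up to an additive constant.

    log_lik_ava(S, theta_c) : log f(S | theta_c), the AVA likelihood
    log_lik_flow(D, k_f)    : log f(D | k_f), the production-data likelihood
    log_cond(k_f, theta_c)  : log f(k_f | theta_c), the cGAN-derived conditional
    log_prior(theta_c)      : log f(theta_c), the prior
    All four callables are placeholders for this sketch.
    """
    return (log_lik_ava(S, theta_c)
            + log_lik_flow(D, k_f)
            + log_cond(k_f, theta_c)
            + log_prior(theta_c))
```

An MCMC sampler such as Metropolis-Hastings would accept or reject proposed (θc, kƒ) pairs based on differences of this quantity.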
In an embodiment, cGAN is used to generate ƒ(kƒ|θc) because it is currently under investigation by many researchers, leading to open source code that could be quickly tested and modified. However, other embodiments could use any number of different ML techniques to generate ƒ(kƒ|θc). In one embodiment, the original Torch code developed by Isola et al. (2017) was modified for use. Two options for the generator were considered: 1) encoder-decoder networks (Hinton and Salakhutdinov, 2006; Badrinarayanan et al., 2016), and 2) U-net (Ronneberger et al., 2015). Further, two options for the discriminator were considered: 1) the Markovian discriminator (PatchGAN) (Li and Wand, 2016), and 2) conventional neural networks with variable layers. The choice of options is part of the model selection process. For the results shown here we chose U-net as the generator and conventional neural networks as the discriminator. The tuning parameter values were taken from Isola et al. (2017).
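The following compressed PyTorch sketch illustrates the selected pairing (a U-net-style generator mapping θc images to kƒ images, with a conventional convolutional discriminator) and the adversarial-plus-L1 generator objective of Isola et al. (2017); the layer sizes are placeholder assumptions, far smaller than a practical network:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Two-level U-net-style generator mapping theta_c images to k_f images."""
    def __init__(self, in_ch=4, out_ch=1):  # e.g., (Vp, Vs, rho, phi) -> permeability
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(in_ch, 32, 4, 2, 1), nn.LeakyReLU(0.2))
        self.bottom = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(64, out_ch, 4, 2, 1)  # 64 = 32 (skip) + 32 (up)

    def forward(self, x):
        d = self.down(x)
        b = self.bottom(d)
        u = self.up1(b)
        return self.up2(torch.cat([u, d], dim=1))  # U-net skip connection

class ConvDiscriminator(nn.Module):
    """Conventional CNN discriminator on concatenated (theta_c, k_f) pairs."""
    def __init__(self, ch=5):  # 4 theta_c channels + 1 k_f channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, theta_c, k_f):
        return self.net(torch.cat([theta_c, k_f], dim=1))

def generator_loss(disc, theta_c, k_f_true, k_f_fake, lam=100.0):
    """Adversarial + lambda * L1 generator objective, as in Isola et al. (2017)."""
    logits = disc(theta_c, k_f_fake)
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    return adversarial + lam * nn.functional.l1_loss(k_f_fake, k_f_true)
```

The λ=100 weighting on the L1 term follows Isola et al. (2017); the skip connection in the generator is what distinguishes the U-net option from the plain encoder-decoder option.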
Equation (1) represents a full joint inversion of AVA and production data. For the AVA inversion, the forward problem required to evaluate ƒ(S|θc) at each element of the Markov chain is a convolution, which is fast; however, the forward problem required to evaluate ƒ(D|kƒ) is a flow simulation, which is numerically intensive. While this is not impossible, and the end product may justify the cost, we are investigating ways to speed the evaluation of ƒ(D|kƒ). Approximations such as streamline simulation, emulation, and even ML can be used. In one embodiment, we make the approximation that ƒ(kƒ|θc) is smaller than ƒ(S|θc), allowing ƒ(kƒ|θc) to be dropped and thus decoupling ƒ(D|kƒ) and ƒ(S|θc). Concretely, with ƒ(kƒ|θc) dropped, equation (1) factorizes as ƒ(θc, kƒ|S, D) ∝ ƒ(S|θc) × ƒ(θc) × ƒ(D|kƒ), so the MCMC chain needs only the fast convolutional evaluation of ƒ(S|θc), while ƒ(kƒ|θc) and ƒ(D|kƒ) are applied to the sampled ensemble after the inversion. This results in a workflow that eliminates evaluation of ƒ(kƒ|θc) during the MCMC sampling and moves it to a post-inversion step, thus reducing the number of simulations required.
The process is summarized in five steps (a hypothetical pseudocode sketch of the decoupled workflow follows the list):
1) Numerical models based on flow simulations or computational stratigraphy and rock-physics provide synthetic AVA data from kƒ models.
2) Stochastic AVA inversion of the synthetic data from 1) provides an ensemble of coarse-scale θc (i.e., Vp, Vs, ρ, ϕ, permeability k) models.
3) cGAN is trained on the synthetic θc and kƒ to produce ƒ(kƒ|θc).
4) Field data is inverted, and the conditional probability ƒ(kƒ|θc) is applied to the ensemble of stochastic θc models, producing an ensemble of kƒ models.
5) Production is simulated for all or a subset of the ensemble of kƒ models, and the models that match the field production data within the estimated variance are selected.
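The sketch below walks through steps 3-5 of this workflow. Every callable argument (training, inversion, and flow-simulation routines) is a placeholder standing in for a component described above, not a disclosed implementation:

```python
import numpy as np

def decoupled_workflow(S_field, D_field, synthetic_pairs,
                       stochastic_ava_invert, train_cgan,
                       flow_simulate, var_D):
    """Hypothetical sketch of steps 3-5 of the decoupled workflow.

    synthetic_pairs       : (theta_c, k_f) training models from steps 1-2
    stochastic_ava_invert : MCMC AVA inversion returning an ensemble of theta_c
    train_cgan            : learns a sampler for f(k_f | theta_c) (step 3)
    flow_simulate         : forward flow simulation for one k_f model (step 5)
    var_D                 : estimated variance of the field production data
    """
    # Step 3: transfer learning -- train the cGAN on the synthetic pairs.
    sample_kf_given = train_cgan(synthetic_pairs)

    # Step 4: invert the field AVA data, then apply the cGAN conditional
    # to each coarse-scale model to obtain fine-scale permeability models.
    theta_c_ensemble = stochastic_ava_invert(S_field)
    k_f_ensemble = [sample_kf_given(theta_c) for theta_c in theta_c_ensemble]

    # Step 5: keep the fine-scale models whose simulated production matches
    # the field data within the estimated variance.
    selected = []
    for k_f in k_f_ensemble:
        D_sim = flow_simulate(k_f)
        if np.mean((D_sim - D_field) ** 2 / var_D) <= 1.0:  # placeholder threshold
            selected.append(k_f)
    return selected
```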
Steps 1-3 represent the transfer learning: using models to generate training data and training the cGAN network. Step 4 provides an ensemble of kƒ models that can be used as is or as input to step 5. Step 5 refines the kƒ ensemble to those models that fit the available production data. In the best-case scenario, kƒ models from step 5 may be used directly for production predictions; however, it is likely that these models will still require human interaction before they are field-ready. Nevertheless, our goal is to significantly accelerate the workflow and provide more robust kƒ models compared to starting directly from rock-physics transformation of the coarse-scale θc.
One embodiment bridges the gap between the resolution of seismic AVA inversion models and the fine-scale models required to match production data by combining Bayesian models with deep learning. Tests show that applying cGAN conditional probabilities to AVA-generated coarse-scale models significantly improves production history matches without additional flow simulations. Further, flow simulation of the ensemble of fine-scale models generated by cGAN conditional probabilities applied to AVA-generated coarse-scale parameters, followed by selection of those models that best fit production data, provides additional significant improvement. Since the seismic inversion, the training of the deep neural networks, and the forward flow simulation are carried out separately, the workflow is scalable and can be applied to large-scale problems.
The ensemble of estimated coarse-scale seismic parameters B is input to the deep learning operation 12. The deep learning operation 12 is detailed in
Referring again to
The ensemble of flow simulation results D can be matched to observed production data, such as the measured borehole pressure and/or saturation data E. The models from the ensemble that fit the observed production data within the observed variance are then selected. This produces the fine-scale images of reservoir parameters F that are consistent with both the seismic AVA data A and the measured borehole pressure and/or saturation data E (i.e., production data, flow-related data). The process can be stopped after C and the mean or maximum a posteriori (MAP) solution from the ensemble used; however, better results are obtained if D is performed, resulting in the fine-scale images F, which fit both the AVA data and the production data.
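As a minimal sketch of this selection step (assuming Gaussian observation errors with known standard deviations, an assumption of this illustration rather than a statement of the disclosed criterion), models can be retained using a reduced chi-squared misfit against the observed data:

```python
import numpy as np

def select_within_variance(D_sim_ensemble, D_obs, sigma_obs, chi2_max=1.0):
    """Return indices of ensemble members that fit the observed production
    data within the observed variance.

    D_sim_ensemble : (n_models, n_times) simulated pressure/saturation histories
    D_obs          : (n_times,) observed production data
    sigma_obs      : (n_times,) observed standard deviations
    chi2_max       : placeholder acceptance threshold on the reduced chi-squared
    """
    D_sim_ensemble = np.asarray(D_sim_ensemble, dtype=float)
    chi2 = np.mean(((D_sim_ensemble - D_obs) / sigma_obs) ** 2, axis=1)
    return np.flatnonzero(chi2 <= chi2_max)
```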
To that end, the high-resolution imaging system 500 includes one or more processing units (CPUs) 502, one or more network interfaces 508 and/or other communications interfaces 503, memory 506, and one or more communication buses 504 for interconnecting these and various other components. The high-resolution imaging system 500 also includes a user interface 505 (e.g., a display 505-1 and an input device 505-2). The communication buses 504 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Memory 506 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 506 may optionally include one or more storage devices remotely located from the CPUs 502. Memory 506, including the non-volatile and volatile memory devices within memory 506, comprises a non-transitory computer readable storage medium and may store seismic data, production data, various products of the methods described herein, and/or geologic information.
In some embodiments, memory 506 or the non-transitory computer readable storage medium of memory 506 stores the following programs, modules and data structures, or a subset thereof including an operating system 516, a network communication module 518, and a high-resolution model module 520.
The operating system 516 includes procedures for handling various basic system services and for performing hardware dependent tasks.
The network communication module 518 facilitates communication with other devices via the communication network interfaces 508 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on.
In some embodiments, the high-resolution model module 520 executes the operations of the method 100 shown in
Bayesian sub-module 522 contains a set of instructions 522-1 and accepts metadata and parameters 522-2 that will enable it to execute operation 10 of method 100. The deep learning sub-module 523 contains a set of instructions 523-1 and accepts metadata and parameters 523-2 that will enable it to execute at least operation 12 of method 100. The flow simulation sub-module 524 contains a set of instructions 524-1 and accepts metadata and parameters 524-2 that will enable it to execute operation 14 of method 100. Although specific operations have been identified for the sub-modules discussed herein, this is not meant to be limiting. Each sub-module may be configured to execute operations identified as being a part of other sub-modules, and may contain other instructions, metadata, and parameters that allow it to execute other operations of use in processing seismic data and generating the images. For example, any of the sub-modules may optionally be able to generate a display that would be sent to and shown on the user interface display 505-1. In addition, any of the data or processed data products may be transmitted via the communication interface(s) 503 or the network interface 508 and may be stored in memory 506.
Method 100 is, optionally, governed by instructions that are stored in computer memory or a non-transitory computer readable storage medium (e.g., memory 506 in
While particular embodiments are described above, it will be understood it is not intended to limit the invention to these particular embodiments. On the contrary, the invention includes alternatives, modifications and equivalents that are within the spirit and scope of the appended claims. Numerous specific details are set forth in order to provide a thorough understanding of the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that the subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Although some of the various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art and so do not present an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
Isola, P., Zhu, J., Zhou, T., and Efros, A. A. [2017] Image-to-image translation with conditional adversarial networks. In CVPR 2017.
Li, C. and Wand, M. [2016] Precomputed real-time texture synthesis with Markovian generative adversarial networks. In ECCV 2016.
Ronneberger, O., Fischer, P., and Brox, T. [2015] U-net: Convolutional networks for biomedical image segmentation. In MICCAI, volume 9351, pp. 234-241.
Sulistiono, D., Vaughan, R., Ali, M., and Rasoulzadeh, S. [2015] Integrating seismic and well data into highly detailed reservoir model through AVA geostatistical inversion. Abu Dhabi International Petroleum Exhibition and Conference, 9-12 Nov. 2015, SPE-177963-MS.
This application claims the benefit of U.S. Provisional Patent Application No. 62/790,281, filed Jan. 9, 2019, which is incorporated by reference herein in its entirety.
This invention was made with government support under Contract No. DE-AC02-05CH11231 awarded by the U.S. Department of Energy. The government has certain rights in this invention.