The present invention relates to the general area of the analysis and interpretation of subsurface regions on the basis of seismic data, and in particular to the processing of seismic inversion data for use in reservoir models.
At the time an oil or gas field is being appraised or developed, the development of a reservoir model usually centers on the task of building computer models suitable for forward flow simulation. Prior to this, much of the work will have focused on data acquisition and interpretation, and the construction of models suitable for simple volumetric calculations or drilling decisions. In particular, much of the interpretative work will be based on surface seismic data, and this is routinely fed into various inversion routines which produce pointwise (trace-local) estimates of the properties of direct interest, such as surface positions, layer thicknesses, hydrocarbon content, net-to-gross (NG), etc.
In previous publications (Gunning and Glinsky, 2004, 2005; Gunning, 2003), the inventors have introduced an open source tool Delivery that enables users to perform a fully probabilistic seismic inversion for a layer-based model of the reservoir. This is a trace-based inversion, so it produces an ensemble of realisations of the relevant reservoir parameters at each point in the imaged seismic grid over a field. The ‘meso-scale’ layer resolution is usually around 5-20 m. At each ‘common mid-point’ (CMP) location, the inversion provides a full joint-probability distribution of quantities like the layer thicknesses, fluid content, NG, layer times, and velocities. The seismic inversion data produced by Delivery is an array of trace-local stochastic samples from a Bayesian posterior distribution of reservoir layer parameters, which contains complex correlations between layer boundaries, rock properties and fluid information, but no transverse correlations. This inversion data produced by the program is suitable for answering the simple kind of questions mentioned above, such as pointwise histograms of layer thickness, maps of hydrocarbon probability, etc., but is not directly suitable for flow calculations per se.
In the past, trend maps have been formed from the results of seismic inversion for properties such as porosity and net sand. These trend maps have then been used to control the geostatistical population of properties and/or objects into the reservoir simulation model. However, none of the rich inter-property and inter-layer correlations were respected. The models have also tended to be built at extremely fine scale (less than a few meters) and then upscaled to a coarser scale for reservoir simulation (less than tens of meters).
It would therefore be desirable to provide a method for converting seismic inversion data into a form suitable for use in reservoir models for flow simulation and/or volumetric calculations, while preserving the correlations available from the seismic inversion, and also honouring measured well data and preferably other geological constraints. In particular, it would be desirable to provide such a method which makes it possible to estimate and/or reduce uncertainty in the reservoir model.
A new and improved method is disclosed for estimating and/or reducing uncertainty in reservoir models of potential petroleum reservoirs, on the basis of seismic inversion information. The method provides for the conversion of the results of a stochastic seismic inversion into a set of realisations of a plurality of properties suitable for use in a reservoir model, while preserving inter-property and inter-layer correlations to reduce uncertainty. Geological constraints and well measurements may also be honoured in the realisations.
More specifically, the method comprises the steps of:
Preferably, the method further comprises the step of simulating the spatial dispersion of material within a layer of the seismic inversion by vertically subdividing the layer and modelling a vertical distribution of impermeable material within the layer consistently with a vertical average of impermeable material content obtained from the seismic inversion. This allows the output realisations to capture the effect on fluid flow of the dispersion of impermeable material, or shale, within the region to be modelled.
For the calculation of volumetric (not pointwise) uncertainties, and the task of flow simulation, it is necessary to carry the inversion calculations over to grid formats that are more directly useful in 3D volumetric calculations and flow calculations. Various types of 3D grids are in common use, but perhaps the most ubiquitous is the cornerpoint grid, which is used in the commercially dominant ECLIPSE family of flow simulators (see Schlumberger ECLIPSE website: http://www.slb.com/content/services/software/reseng/eclipse_simulators/index.asp). The cornerpoint grid is therefore preferably used in the present embodiment of the invention. Moreover, the effect of intertrace correlations in the seismic data (which is deliberately neglected in Delivery) can be approximately modelled in this remapping calculation. The aim is to produce 3D models that capture both transverse correlations known from either well data or analogues, and the vertical inter-layer and inter-property correlations that seismic inversion can reveal. The joint process of remapping and merging of correlations is one which has been dubbed ‘massaging’, and the software implementation of the embodiment of the invention is referred to as DeliveryMassager.
Since the inversion models are probabilistic, it is natural for the remapped or ‘massaged’ models to inherit this probabilistic character. Objects of interest will then naturally be the ‘most likely’ massaged model, as well as a suite of ‘realisation’ models, which enable stochastic forward flow simulations to be performed for risking purposes. Volumetric statistics of interest can of course be calculated on the fly as well.
Further embodiments, advantages, features and details of the present invention will be set out in the following description with reference to the drawings, in which:
FIG. 1(a) shows a typical layout of an input-grid trace array for seismic inversion, and
a) shows a cross-section of a cornerpoint model,
FIGS. 7(a) and 7(b) respectively show plan and elevation views of an example region (the Stybarrow Field);
a) and (b) show median maps of net-sand for the Stybarrow Field, directly from a seismic inversion and after processing in accordance with one embodiment of the invention, respectively;
a) shows realisations of the main sand net-sand map for the Stybarrow Field in plan view,
The present invention can be embodied in many different forms. The disclosure and description of the invention in the drawings and in this description are illustrative and explanatory thereof, and various changes in the sequence of processing steps, of the parameters in the processing and of the process details may be made without departing from the scope of the invention.
The conceptual problem will first be defined in more detail.
The trace-based Bayesian inversion model implemented by Delivery is a typical input to the method in one embodiment of the invention, and produces an ensemble of realisations from the posterior distribution of the model parameters at each common midpoint (CMP) gather, or trace, of the imaged seismic data. At each trace, the inversion model is quasi-1D, with a sequence of layers parametrised by times describing the (local) geometry, and each layer is characterised by a laminated mixture of permeable and impermeable rocks, with rock velocities, density, porosity and fluid content as additional model parameters. The depths of each layer are computed from relative traveltimes and velocities, hung from a nominated reference layer and supplied reference depth. For the purposes of reservoir simulation, the model parameters of interest are typically the layer depth ‘d’, the ‘thickness’, the ‘net-to-gross’ (NG), the fluid content ‘net-hydrocarbon’, and the ‘porosity’ (to name in inverted commas the interesting quantities accessible from Delivery inversions). The Monte Carlo ensemble produced by the inversion encapsulates the coupling or correlations between these properties which is demanded by consistency with the seismic data and the prior model.
The imaged traces are typically spaced anywhere from 15 m to 200 m apart, in a regular array, and the inversion does not model the coupling that may occur between model parameters at different traces, chiefly in order to make the inversion problem tractable (a more detailed discussion of these issues can be found in Gunning and Glinsky (2004)). Very strong lateral correlations are induced in the mean (or most-likely) posterior models by the prior and seismic data, but the overall distribution describing the model fluctuations formed by a naive resampling from the Delivery outputs is a product of trace-local distributions, and thus contains no lateral correlation. A necessary and strong qualification to this statement is that any spatial interpolations of the Delivery outputs will necessarily induce correlations via the interpolation algorithms, and indeed we often recommend running the inversion on relatively coarse spacings and using interpolation to smooth; this has the merit of reducing the inversion run-time considerably.
Geologists are accustomed to thinking of transverse correlations in terms of ‘characteristic’ body sizes and depositional directions. Large scale body shapes will usually be explicitly visible in the seismic, and thus will propagate into the mean posterior models. The residual fluctuations about these means, for many environments, are most simply characterised by two-point statistics and a distance metric which reflects depositional directions, a construct which is familiar to geologists as the conventional semivariogram. Continuity of surfaces is broken at fault locations, though many internal properties may be preserved across the fault, and such geological constraints are preferably also used. Faulting is usually modelled explicitly in reservoir modelling packages like PETREL, and these faults will be embedded in the cornerpoint grids we use as the receptacle for the massaging process. Additionally, hard data for various properties will be available at well locations that have been logged, and is also used in one embodiment of the invention.
It is clear that the combined spatio-multi-property posterior distribution we are able to build, having neglected inter-trace correlations in the inversion, will be a constructed entity. The overall dimensions of the problem, for a large model, will be 10^6 parameters or more, so perhaps the only computationally feasible way to proceed is to merge the first and second order statistics from the inversion with the second-order statistics implicit in the variograms in a pseudo-Gaussian framework. In terms of second order statistics, it is natural to think of the correlation matrix of the overall structure as a block matrix, with the inversion ensemble furnishing the blocks for the inter-property correlations at each location, and the spatial variogram defining how off-diagonal blocks are coupled. The overall correlation matrix is then a direct (or Kronecker) product of correlation matrices, and the natural and efficient approximation to sampling from the Gaussian distribution attached to this correlation is a generalised p-field algorithm. The great advantage of this preferred approach is that it is only necessary to form approximations to the first and second order statistics (mean and covariance), and then simulation is direct.
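The Kronecker structure can be illustrated with a small, non-limiting sketch (in Python, with hypothetical grids and values): the spatial correlation matrix implied by a variogram and a trace-local inter-property covariance block are factored separately, and a joint realisation is drawn without ever assembling the full matrix, which is the essential economy exploited by the generalised p-field approach.

    # Non-limiting sketch: sampling a joint spatial/inter-property Gaussian
    # field whose covariance is the Kronecker product of a spatial correlation
    # matrix (from a variogram) and a local inter-property covariance block.
    # All names, ranges and values here are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1-D spatial grid and exponential-variogram correlation (range 300 m)
    x = np.linspace(0.0, 1000.0, 21)
    h = np.abs(x[:, None] - x[None, :])
    C_space = np.exp(-3.0 * h / 300.0)                 # spatial correlation

    # Toy inter-property covariance (say thickness and net-sand) estimated
    # from a trace-local inversion ensemble
    ensemble = rng.multivariate_normal([10.0, 6.0],
                                       [[4.0, 2.5], [2.5, 3.0]], size=500)
    C_prop = np.cov(ensemble, rowvar=False)            # 2 x 2 block

    # The full covariance is the Kronecker product C_space (x) C_prop.  Its
    # Cholesky factor is the Kronecker product of the individual factors, so
    # a realisation can be drawn without assembling the full matrix:
    L_space = np.linalg.cholesky(C_space + 1e-10 * np.eye(len(x)))
    L_prop = np.linalg.cholesky(C_prop)

    Z = rng.standard_normal((C_prop.shape[0], len(x)))  # iid N(0,1) deviates
    Y = L_prop @ Z @ L_space.T                          # properties x locations
    # cov(Y[p, j], Y[q, k]) = C_prop[p, q] * C_space[j, k], as required
    print(Y.shape)                                      # (2, 21)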
Some approximations are required in forming the statistics from the seismic inversion ensemble. All of the quantities of interest are non-negative, and will be approximately Gaussian if the forward seismic model is reasonably linear over the support of the prior (e.g. the thickness of a reservoir layer that is well above seismic resolution). But there are regions where approximate linearity does not hold, and model parameters are also often truncated at one end, so their posterior univariate distribution is a mixture of a spike at the truncation and a continuous tail (see especially the first, ‘simple wedge’, example in the description below, and
The main steps in the method according to one embodiment of the invention are set out in the flowchart of
In the next step, the geometry of the input grid is modified, based on the observed fault structure, to form an output grid for each layer, onto which the property information will be mapped. The values of the properties are then interpolated from the points on the input grid onto the output grid, following which the means and covariances of the output ensembles are spatially smoothed to produce a smooth trend map for each property and a smooth inter-property trend covariance.
A kriging adjustment of the property trends is then performed on the basis of the well log data, so that the trends pass through the well log data. A set of geological constraints may also be applied, including continuity requirements in respect of one or more properties between layers and/or across faults, in order to truncate or otherwise constrain some of the property values at given points. For example, certain properties will be continuous across faults, and others will not.
A set of spatially correlated random fields is then simulated on the output grid for each property, preferably using a p-field technique, and used together with the smooth inter-property trend covariance and the property trend maps to generate a set of realisations for each property which honour inter-property and inter-layer correlations from the seismic inversion, measured well data and, where used, the set of geological constraints.
The generated realisations may then be used in a reservoir model to conduct flow simulation and/or volumetric calculations in respect of the region, including the estimation of uncertainty in flow calculations.
The gridding considerations in processing seismic data will now be discussed.
In accordance with one embodiment of the invention, the geometry of the input grid is modified to form an output grid for each layer, based on an observed fault structure.
The input-grid formed by the x, y locations of the sequence of seismic traces is usually regularly sampled transversely, but the inversion region may have been confined to some polygon of interest. This grid may or may not strictly contain the extremity of the desired output grid. FIG. 1(a) shows a typical layout of an input-grid trace array for seismic inversion, with regularly spaced points in a polygon.
Typically from the inversion, we have available the distribution of (among others) layer-thickness, NG, layer-top depth, and net-hydrocarbon for the sequence of model layers at each trace x, y location.
Conversely, reservoir geological models are usually built with uneven spatial sampling, and often with less transverse resolution than that available in the seismic inversion. Various kinds of grid geometry are possible (Deutsch, 2002), but the present embodiment of the invention preferably uses the widely used cornerpoint grid format for exporting inversion information to reservoir modelling packages. These grids are used by the ECLIPSE reservoir simulator, so we use the adjectives ‘cornerpoint’ and ‘Eclipse’ loosely interchangeably.
Cornerpoint grids specify lines for the ‘vertical’ corners of each column of gridblocks, and a set of 8 depth (z) points which define the top and bottom face of any particular gridblock. (The faces are not strictly planar, but flow calculations make suitable projections so as to conserve mass and represent flux correctly). Blocks are indexed by an i, j, k triple, and usually many blocks are tagged as ‘inactive’ if they represent an uninteresting region of space. The file formats consist of chunks denoting the grid size, local origin, block-corner lines (COORD), corner-depths (ZCORN), and block-centred properties, such as ‘active’ flags, and segment labels.
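By way of a non-limiting illustration, and assuming the standard ECLIPSE keyword layout, the following sketch (in Python, with hypothetical helper names) builds the COORD and ZCORN bookkeeping for a trivial one-layer box grid with vertical pillars; it is intended only to make the array sizes and ordering concrete.

    # Non-limiting sketch of the cornerpoint bookkeeping described above,
    # for a trivial box-shaped grid with vertical pillars and one flat layer.
    # Keyword layouts follow the standard ECLIPSE conventions; the helper
    # name is hypothetical.
    import numpy as np

    def box_cornerpoint_grid(nx, ny, nz, dx, dy, z_top, z_bot):
        assert nz == 1   # this sketch only handles a single flat layer
        # COORD: one pillar line per grid-column corner, (nx+1)*(ny+1) pillars,
        # each stored as six floats (x, y, z of the top and bottom of the line).
        coord = []
        for j in range(ny + 1):
            for i in range(nx + 1):
                x, y = i * dx, j * dy
                coord += [x, y, z_top, x, y, z_bot]     # vertical pillar
        # ZCORN: 8 corner depths per block, i.e. 2*nx * 2*ny * 2*nz values,
        # ordered top surface then bottom surface for each layer.  For one
        # flat layer this is just the two constant depths repeated.
        zcorn = [z_top] * (4 * nx * ny) + [z_bot] * (4 * nx * ny)
        actnum = [1] * (nx * ny * nz)                   # all blocks active
        return np.array(coord), np.array(zcorn), np.array(actnum)

    coord, zcorn, actnum = box_cornerpoint_grid(3, 2, 1, 100.0, 100.0, 2000.0, 2020.0)
    print(coord.size, zcorn.size, actnum.size)          # 72, 48, 6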
In a preferred embodiment, for the massaging calculation, we form, for each layer, a 2d output grid comprising the (x, y) projections of the midpoints of the edges of each corner point block (see little circles in
A typical plan view of the cornerpoint geometry is shown in
The smoothing and geostatistical simulation, including notation and requisite algorithms, will now be described.
The overall simulation process is perhaps easier to describe in words than mathematically. Roughly speaking, in accordance with a preferred embodiment, the means and covariances of the multi-property Delivery output ensembles (or other inversion results) are spatially smoothed to produce a smooth trend map for each property and a smooth between-properties trend covariance. The trend map may be allowed to have discontinuities at faults for certain properties. The property trends are then deformed by a kriging adjustment from well observations to produce a trend that passes through all hard observations. The estimation-variance maps produced in the kriging calculation are normalised and stored for each property, for later use. The maps may be truncated or clipped depending on consistency requirements for the properties (e.g. net-sand will be clipped to be no greater than the thickness). The trend maps then constitute the most-likely maps of properties. The uncertainty maps are defined by the diagonal entries of the smooth trend covariance multiplied by the kriging variance maps. The square root of each such product map is then a local ‘standard deviation’ for each property, which honours the inversion uncertainties and the well data.
To generate realisations, a set of normalised, spatially correlated random fields (preferably p-fields) are simulated for each property and layer. At each spatial location these fields are then preferably mixed in a linear combination described by the Cholesky factor of the smoothed between-properties trend covariance. The final set of fields may then be scaled by the normalised kriging-variance maps and added to the trend maps to produce a series of realisations.
The method will be described below with reference to execution as a computer program. The actual program execution follows the initialisation and realisation generating steps set out below reasonably literally, but some notation will first be described.
In general there exists a seismic grid on which the inversion is run, which we call the input grid (GI). The properties are to be generated on a different grid, called the output grid (GO). This is treated as a sequence of 2D grids for each layer, GO,l. The index l pertains to layers, p to properties, j to nodes on GI or a particular GO,l (as arranged in convenient ordering—say a raster scan). The set of hard well observations (O) may be suffixed l or p with implied constraint to layer l or property p. A vector m of properties of interest may be suffixed m(l,p), which denotes a generic unrolled index of property p on layer l. Segment labels for node j are denoted Sj.
The local neighbours of node j are denoted by j′˜j, or j′∈∂j. Neighbours are defined by a Euclidean distance metric confined to the same layer, with azimuth angle and principal ranges inherited from a conventional variogram specified by the user. Neighbours used in the trend smoothing are defined by the trend-smoothing variogram, whereas observation-kriging and simulations are associated with a separate, layer-specific ‘interpolation’ variogram. Since the grids are large and irregular in general, a kd-tree algorithm is preferably used for efficient nearest neighbour searching (Skiena, 1997). Nearest neighbour searches in the sequential simulation algorithms must be confined to previously visited nodes, and this may be achieved by a naive dynamic kd-tree implementation.
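A minimal sketch of such a restricted nearest-neighbour search is given below (Python, hypothetical class name), using a kd-tree that is naively rebuilt whenever the set of visited nodes has grown; this stands in for, and is not identical to, the dynamic kd-tree implementation mentioned above.

    # Non-limiting sketch: nearest neighbours restricted to previously
    # visited nodes, with a naively rebuilt kd-tree.
    import numpy as np
    from scipy.spatial import cKDTree

    class VisitedNeighbourSearch:
        def __init__(self):
            self.pts = []          # coordinates of nodes visited so far
            self.tree = None
            self.built_size = 0

        def add(self, xy):
            self.pts.append(xy)

        def neighbours(self, xy, k):
            if not self.pts:
                return np.empty(0, dtype=int)
            if self.tree is None or self.built_size != len(self.pts):
                self.tree = cKDTree(np.asarray(self.pts))   # naive: rebuild when stale
                self.built_size = len(self.pts)
            _, idx = self.tree.query(xy, k=min(k, len(self.pts)))
            return np.atleast_1d(idx)

    search = VisitedNeighbourSearch()
    rng = np.random.default_rng(0)
    for p in rng.uniform(0.0, 1000.0, size=(200, 2)):
        nbrs = search.neighbours(p, k=8)   # condition only on already-visited nodes
        search.add(p)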
Initialisation steps will now be described in accordance with the preferred embodiment.
Re-Ordering of Properties
Internally, the property vectors are re-ordered to the following sequence: {depth, thickness, net-sand, other properties}, if all the italicised properties are available. This enables truncation rules to be sensibly applied from known quantities later in the calculation.
Input Grid Segmentation
Fault-sensitive smoothing of the Delivery statistics requires a segment label to be attached to each input grid point. In cornerpoint-style grids, segment labels are associated with block centers. The input grid can then naturally inherit the segment label associated with the corner-point grid block in which it falls, as computed for the user-specified reference layer.
Kriging
In accordance with one embodiment of the invention, a kriging adjustment of the property trends is performed on the basis of the well log data so that the trends pass through the well log data.
Kriging is preferably used for both integrating well observations and interpolating from the input to the output grid. Interpolation kriging calculations are performed with a fixed number of nearest neighbours, typically 8. For well observation kriging, all hard-data values are used, and for the sequential simulation routine described later, around 25 neighbours are used. Because the output grid may contain duplicated points, some rank-deficient kriging systems can arise. Robust solution of these is performed using an adaptation of the Schnabel-Eskow modified Cholesky decomposition (Schnabel and Eskow, 1999).
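The following non-limiting sketch (Python, hypothetical names and covariance) sets up and solves an ordinary kriging system over a handful of nearest neighbours with a unit-sill covariance; a simple diagonal-jitter fallback is used here purely as a stand-in for the Schnabel-Eskow modified Cholesky treatment of rank-deficient systems.

    # Non-limiting sketch of an ordinary-kriging solve over a small set of
    # nearest neighbours, with a diagonal-jitter fallback standing in for
    # the Schnabel-Eskow modified Cholesky mentioned above.
    import numpy as np

    def ok_weights(cov, pts, target, eps=1e-8):
        """Ordinary kriging weights for `target` given neighbour points `pts`.
        cov(h) is an isotropic covariance function with unit sill."""
        n = len(pts)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = cov(d)
        A[n, n] = 0.0                      # Lagrange-multiplier row/column
        b = np.ones(n + 1)
        b[:n] = cov(np.linalg.norm(pts - target, axis=1))
        try:
            sol = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            A[:n, :n] += eps * np.eye(n)   # regularise duplicated points
            sol = np.linalg.solve(A, b)
        return sol[:n], sol[n]             # weights, Lagrange multiplier

    gauss_cov = lambda h: np.exp(-3.0 * (h / 600.0) ** 2)   # unit-sill covariance
    pts = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
    w, gamma = ok_weights(gauss_cov, pts, np.array([20.0, 20.0]))
    print(w, w.sum())                      # weights sum to 1 (unbiasedness)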
Well Observations
For each layer l, for the observation q at location rq, we construct and solve the ordinary kriging (OK) system for {ωj′,γ},
where the covariance used is that proper to interpolation for the layer, with unitised sill. We store a data structure for the kriging weights and neighbours, viz. {ω(l,q);j′(O),j′ε∂j⊂GO}.
Grid Interpolation
In accordance with one embodiment of the invention, the values of the plurality of properties are interpolated from the points on the input grid onto the output grid.
For each layer l, at the output grid location j∈GO, we preferably construct and solve the OK system for interpolation from the input grid to the output grid
using, again, the interpolation covariance with unit sill. We save a data structure for the kriging weights and neighbours
Smoothing
In accordance with one embodiment of the invention, the means and covariances of the output ensembles are spatially smoothed to produce a smooth trend map for each property and a smooth inter-property trend covariance.
If
is the p50 statistic from the Delivery inversion for property p, layer l, location j∈GI, we preferably smooth this onto the output grid using a moving average filter whose weights are based on the covariance Csm( ) specified by the trend-smoothing variogram. Specifically,
where the weights W are defined by
and the normalisation constant N≡Σj′˜jWl,p,j′. A larger number of nearest neighbours is preferably used here, typically 50 or so.
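A non-limiting sketch of this smoothing step is given below (Python, with a hypothetical grid, range and segment labels); the moving average is restricted to neighbours carrying the same segment label, as one reading of the fault-sensitive smoothing described above.

    # Non-limiting sketch of the trend-smoothing step: a moving-average
    # filter whose weights come from a trend-smoothing covariance, with the
    # average restricted to neighbours in the same fault segment.
    import numpy as np

    def smooth_trend(xy, p50, segments, cov, n_neigh=50):
        out = np.empty_like(p50)
        for j in range(len(xy)):
            d = np.linalg.norm(xy - xy[j], axis=1)
            order = np.argsort(d)[:n_neigh]               # nearest neighbours
            nb = order[segments[order] == segments[j]]    # same-segment only (assumed)
            w = cov(d[nb])
            out[j] = np.sum(w * p50[nb]) / np.sum(w)      # normalised average
        return out

    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 2000, size=(400, 2))
    p50 = 10.0 + 0.005 * xy[:, 0] + rng.normal(0, 2.0, 400)   # noisy p50 map
    segments = (xy[:, 0] > 1000).astype(int)                  # two hypothetical fault blocks
    cov = lambda h: np.exp(-3.0 * (h / 600.0) ** 2)           # smoothing covariance
    trend = smooth_trend(xy, p50, segments, cov)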
For the covariances, the full covariance matrix (coupling all layers and properties) is preferably provided by the inversion, and is preferably smoothed back onto the input grid (for reasons which will become apparent later), without regard to segmentation. The segmentation is ignored, as uncertainties can be expected to be continuous across faults. In accord with the usual rules for the sum of independent random processes (and also, conveniently, to ensure positive definiteness), the smoothed covariance is
here,
is the Delivery covariance statistic for property p, layer l with property p′, layer l′, evaluated at location j∈GI. The normalisation constant N′ is the sum of the weights
It is also useful to interpolate, for later use, the smoothed trend surface at the well observation points.
FFT based methods for smoothing are awkward to use for this problem, on account of the segmentation and the irregular grid.
Trend Adjustment from Well Observations
A final initialisation calculation is the adjustment of the trend surface preferably via simple kriging (SK) so it passes through the well data. Preferably, for each layer and property, at each location jεGO we compute the residual trend adjustment
where
is the qth observation of property p on layer l. The simple kriging weights ωq are the solution of the set of equations
Again, the covariance used in this SK step is normalised to unit sill. The new trend surface is then defined to be
We store also the kriging variance
which is used in the subsequent p-field simulation.
A final preferred step in the trend adjustment is the application of a set of truncation rules. This may take the form of a set of geological constraints including continuity requirements in respect of one or more properties between layers and/or across faults. The loop over properties p occurs innermost in the calculation, and the internal ordering of properties described earlier enables successive application of these rules: i) net-sand=min(net-sand, thickness), ii) NG=net-sand/thickness, iii) p=max(p, 0), iv) if p is normalised, p=min(p, 1).
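A non-limiting sketch of rules i)-iv) applied to example arrays (Python, hypothetical values; the handling of zero thickness is an assumed convention) is:

    # Non-limiting sketch of the truncation rules i)-iv) applied to a trend
    # (or realisation) map; property names follow the internal ordering above.
    import numpy as np

    def apply_truncation_rules(thickness, net_sand, other_normalised):
        net_sand = np.minimum(net_sand, thickness)               # i) net-sand <= thickness
        with np.errstate(divide='ignore', invalid='ignore'):
            # ii) NG = net-sand/thickness; NG set to 0 where thickness is 0 (assumed convention)
            ng = np.where(thickness > 0.0, net_sand / thickness, 0.0)
        net_sand = np.maximum(net_sand, 0.0)                     # iii) non-negativity
        ng = np.clip(ng, 0.0, 1.0)                               # iii) + iv) NG normalised
        other = np.clip(other_normalised, 0.0, 1.0)              # iv) e.g. porosity fraction
        return net_sand, ng, other

    thickness = np.array([12.0, 0.0, 5.0])
    net_sand  = np.array([14.0, 1.0, -0.5])
    porosity  = np.array([0.28, 1.2, 0.19])
    print(apply_truncation_rules(thickness, net_sand, porosity))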
Generation of realisations will now be described.
In accordance with one embodiment of the invention, a set of spatially correlated random fields is simulated on the output grid for each property, and a set of realisations is generated for each property on the basis of the random fields, the smooth inter-property trend covariance and the property trend maps, which honour inter-property and inter-layer correlations from the seismic inversion and measured well data.
Realisations are preferably generated using a generalized p-field technique (Deutsch and Journel, 1998), which requires a set of unconditional correlated realisations on the output grid. There are various ways to do this, but the sequential simulation technique is most easily adapted to the unstructured grid.
Some notation and apparatus is necessary. For each layer l, we preferably construct a pseudo-multigrid path Pl, which is a visiting sequence for all the nodes in the layer. The sequence is pseudo-multigrid in the sense that the grid nodes are visited in a sequence derived from a breadth-first traversal of a binary tree representation of the nodes. This ensures that widely spaced points are visited early in the path. For the visited node ĵ, we denote by ∂ĵ the nearest neighbouring points of ĵ that have already been visited, up to some maximum of Nn neighbours, and with the notion of distance derived from the layer variogram. We will generate and store α=1 . . . NR realisations at each grid point during the path traversal.
The conditional distribution for the p-field ζl,p,ĵ,α, is
where the conditional mean is
the conditional variance (geometry dependent only) is
and the simple kriging weights in these last two relations are the solution of the SK system
Again, the variogram is normalised, and the p-fields ζ have univariate distribution N(0,1). The fields ζl,p,ĵ,α are stored by a fully nested loop on l, ĵ, p and α.
These p-fields are now unconditional correlated fields that contain the necessary spatial correlation on the output grid, but honour neither the inter-property/inter-layer correlations from the seismic inversion nor the hard well data.
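A non-limiting sketch of the sequential simulation of one such normalised p-field is given below (Python, hypothetical names); a random visiting order stands in for the pseudo-multigrid path, and each node is drawn from the simple-kriging conditional mean and variance given previously visited neighbours.

    # Non-limiting sketch: sequential simulation of one normalised,
    # spatially correlated p-field on an unstructured set of output nodes.
    import numpy as np

    def simulate_pfield(xy, cov, n_neigh=25, seed=0):
        rng = np.random.default_rng(seed)
        n = len(xy)
        path = rng.permutation(n)            # stand-in for the multigrid path
        z = np.zeros(n)
        visited = []
        for jhat in path:
            if visited:
                d = np.linalg.norm(xy[visited] - xy[jhat], axis=1)
                nb = np.array(visited)[np.argsort(d)[:n_neigh]]
                C = cov(np.linalg.norm(xy[nb][:, None] - xy[nb][None, :], axis=-1))
                c0 = cov(np.linalg.norm(xy[nb] - xy[jhat], axis=1))
                w = np.linalg.solve(C + 1e-10 * np.eye(len(nb)), c0)   # SK weights
                mean = w @ z[nb]                                       # conditional mean
                var = max(1.0 - w @ c0, 1e-12)                         # conditional variance
            else:
                mean, var = 0.0, 1.0
            z[jhat] = rng.normal(mean, np.sqrt(var))
            visited.append(jhat)
        return z

    cov = lambda h: np.exp(-3.0 * (h / 500.0) ** 2)       # unit-sill covariance
    xy = np.random.default_rng(1).uniform(0, 2000, size=(300, 2))
    zeta = simulate_pfield(xy, cov)                        # ~ N(0,1) marginals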
To introduce the inter-property/layer correlations, a local interpolated covariance may be computed at each node j∈GO,l (in a conventional loop over the output grid l) as
using the saved OK/interpolation weights from equation (2). The normalisation N is again defined as the sum of the squared OK/interpolation weights: squaring is used again to ensure positive definiteness. Define by
the conventional (left) Cholesky factor of
where we unroll the indices l,p in the usual way. Define also the diagonal scaling matrix
using equation (10), with the same indexing. Realisations may then be computed on the fly by ‘mixing’ the correlated p-fields and adding back the trend:
This is the p-field mixing equation which imposes the inter-property/layer correlations and well constraints on the correlated fields to produce the realisations. A final step preferably consists in the imposing of the property truncation rules on
as per the mean trend calculation.
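The mixing step at a single output node can be sketched as follows (Python, hypothetical values); the diagonal scaling is taken here as the square root of the normalised kriging variance, which is one reading of the description above and is flagged as an assumption.

    # Non-limiting sketch of the p-field 'mixing' step at a single output
    # node: the correlated p-field values for the properties are mixed by
    # the Cholesky factor of the local inter-property trend covariance,
    # scaled by local standard deviations, and added to the trend.
    import numpy as np

    # Local inter-property trend covariance (depth, thickness, net-sand),
    # adjusted trend, and normalised kriging variances at this node
    C_local = np.array([[9.0, 1.0, 0.5],
                        [1.0, 4.0, 2.0],
                        [0.5, 2.0, 1.5]])
    trend    = np.array([2005.0, 12.0, 8.0])
    krig_var = np.array([0.7, 0.9, 0.9])       # 1 far from wells, 0 at a well

    L = np.linalg.cholesky(C_local)            # left Cholesky factor
    D = np.diag(np.sqrt(krig_var))             # diagonal scaling (assumed sqrt of kriging variance)

    rng = np.random.default_rng(7)
    zeta = rng.standard_normal(3)              # p-field values at this node, one per property
    realisation = trend + D @ L @ zeta         # mixing: trend + D L zeta
    print(realisation)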
Under this construction, the covariance of the residuals
(the first term in equation (15)) can be shown to have these reasonable special cases:
The downscaling, or ‘decoration’ algorithms of a particularly preferred embodiment will now be described.
In most realistic applications, fluid flow will be sensitive to the manner in which impermeable material (usually clay: we will use the placeholding name ‘shale’ hereon) is spatially dispersed within the ‘meso-scale’ reservoir layers used for the inversion. Capturing this effect will then require subdivision of the vertical gridding and suitable categorical simulation of the shales within a meso-scale layer.
This categorical simulation must be consistent with the net-to-gross NG obtained from the seismic inversion, or, equivalently, a ‘massaged’ realisation. The inversion forward model typically uses an effective-medium approximation based on a separation of length scales between the vertical spatial scales characterising the shale distribution and the seismic wavelength. In this regime, the effect of the shale on the seismic response is then captured by an effective macroscopic parameter, the layer net-to-gross (NG), via the Backus average. The preferred model also assumes a laminated distribution of shale, which is a respectable assumption for reservoirs where internal shales are gently dipping.
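A non-limiting sketch of the Backus average for such a laminated sand/shale layer is given below (Python, hypothetical end-member values): the effective vertical P-wave modulus is the NG-weighted harmonic average of the laminae moduli and the effective density is the arithmetic average.

    # Non-limiting sketch of the Backus average for a laminated sand/shale
    # layer, assuming normal-incidence P-wave propagation and hypothetical
    # end-member properties.
    import numpy as np

    def backus_effective(ng, vp_sand, rho_sand, vp_shale, rho_shale):
        m_sand = rho_sand * vp_sand ** 2          # P-wave moduli of the laminae
        m_shale = rho_shale * vp_shale ** 2
        m_eff = 1.0 / (ng / m_sand + (1.0 - ng) / m_shale)   # harmonic (Backus) average
        rho_eff = ng * rho_sand + (1.0 - ng) * rho_shale     # arithmetic average
        return np.sqrt(m_eff / rho_eff), rho_eff             # effective Vp, density

    vp, rho = backus_effective(ng=0.7, vp_sand=3200.0, rho_sand=2.25,
                               vp_shale=2800.0, rho_shale=2.45)
    print(vp, rho)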
A variety of categorical simulation techniques are in common use in petroleum geostatistics. Perhaps the best understood algorithms for binary simulation are truncated Gaussian fields (Le Loc'h and Galli, 1996), and the inventors have chosen to adapt this method to the ‘decoration’ problem in a preferred embodiment on account of the efficiency of simulation of the underlying continuous field. Users are expected to furnish a 3D variogram describing the spatial continuity of the underlying Gaussian field, which can be estimated in consultation with a geologist. This variogram is embedded in the (normalised) covariance function CTG( ). The algorithm we describe is somewhat heuristic, but very efficient, and strikes a good compromise between the connectivity embedded in the variogram and the coarse-scale constraints. It may be loosely described as a greedy, optimising, sequential truncated Gaussian simulation.
Rigorous sampling of high dimensional categorical spatial distributions with tight likelihoods is notoriously difficult (Winkler, 2003). A formulation in terms of discrete Markov Random Fields (MRFs) would have been more satisfactory in terms of incorporating the net-to-gross likelihood constraint, but explicit control of correlation length scales is much more difficult with MRFs. The preferred embodiment of the invention also does not try to provide a ‘most-likely’ categorical map, since this object is highly (combinatorially) non-unique, and any of the most-likely models is very non-representative. An analogy with the celebrated Ising model of statistical mechanics is helpful (Winkler, 2003), since this represents by far the best understood MRF model in the literature. If we map Ising ±1 spin states to rock categories, the temperature of the Ising model determines the correlation length of the realisations, but at any temperature, the ‘most-likely’ model is all one category or spin in an unconstrained model. For the case of smooth NG constraints, the most likely model(s) will be a layered two-zone partitioning, with the zone boundary of minimal length: this will clearly violate the homogenisation assumptions used in the inversion.
A fast approximate method of downscaling the ‘massaged’ seismic inversion results comprises (a) an initial subgridding to the desired resolution and geometry, followed by (b) a sequential truncated Gaussian simulation for the rock categories which is stochastically optimised to match the net-to-gross (NG) values produced by the seismic inversion. The stochastic optimisation is performed by a greedy selection of the best match from multiple simulations of sets of gridblocks in a column, with the columns of the cornerpoint grid visited in a multigrid sequence to ensure reproduction of the longest length scales of the supplied variogram. The truncated Gaussian threshold is computed from the target NG.
The algorithm preferably runs as follows, conditional on some known realisation of NG on the coarser grid:
for all Nn nearest neighbours ĵ′ of all blocks ĵ in the column.
with conditional mean
and conditional variance
This is very fast, requiring only O(NzNn) flops per simulation. We greedily accept the simulation whose associated truncated field
best matches the column NG (a columnwise sum), and proceed to the next column in the 2D multigrid column path.
The complexity of this algorithm is only a small multiple of the workload of a conventional sequential simulation, typically O(103) flops per node.
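The greedy column-wise step can be sketched as follows (Python, hypothetical parameters); for brevity the candidate Gaussian sub-columns are drawn without the sequential spatial conditioning described above, so the sketch illustrates only the thresholding and greedy NG-matching.

    # Non-limiting sketch of the greedy column step: several candidate
    # truncated-Gaussian simulations of a column's sub-blocks are drawn, and
    # the candidate whose truncated (sand/shale) column average best matches
    # the target column NG is accepted.
    import numpy as np
    from scipy.stats import norm

    def simulate_column(ng_target, nz, n_candidates=20, seed=0):
        rng = np.random.default_rng(seed)
        threshold = norm.ppf(1.0 - ng_target)     # truncation from the target NG
        best, best_err = None, np.inf
        for _ in range(n_candidates):
            g = rng.standard_normal(nz)           # candidate Gaussian sub-column
            sand = (g > threshold).astype(int)    # truncated (categorical) field
            err = abs(sand.mean() - ng_target)    # mismatch of column NG
            if err < best_err:
                best, best_err = sand, err        # greedy acceptance
        return best

    print(simulate_column(ng_target=0.65, nz=12))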
The Stybarrow field-case example described later illustrates some realisations drawn using this algorithm, especially
Other methods may be used for the downscaling, or ‘decoration’, operation. For example, the impermeable portion may be conceived as objects, the dimensions of which are drawn randomly from distributions specified by geoscientists, and the location and dimensions of which are annealed using maximum a posteriori probabilities constructed from the coarse inversion results, and potentially conditional to trends specified from geologic concepts or geophysical surveys.
Alternatively, all downscaled layer thicknesses may be conceived as being drawn from truncated Gaussian processes (possibly nonstationary), and simulated using hierarchical sampling for auxiliary and continuous variables which specify the active layers and the joint downscaled thickness distributions, respectively, at a particular trace. The simulation may proceed by blocks or strictly tracewise in a sequential manner, possibly using Markov Chain Monte Carlo methods. The sequential algorithm samples from the posterior distribution for the sublayer thicknesses, and is dependent on user-supplied variograms, well data, and the accumulating history of the algorithm. Fast approximate alternatives to this algorithm, using deterministic topological distortions of a truncated-Gaussian simulated sub-layer array which optimally match the coarse scale constraints, are also feasible.
The preferred embodiment of the invention may be implemented in software, preferably written in ANSI C, and distributed along with the known open-source Delivery and WaveletExtractor codes (Gunning and Glinsky, 2004, 2005; Gunning, 2003). Library dependencies are all open source. It contains an ECLIO library for handling ECLIPSE-style cornerpoint grids, and links to the high performance ATLAS library for the intensive linear algebra work (Whaley et al., 2001). The supplied kd-tree range-search library is based on Ranger from Stonybrook (Skiena, 1997). For large models, both the grid smoothing and the sequential simulation are computationally intensive, and RAM sizes over 1 GB may be required.
Compilation instructions are provided in the README file at the top level of the source tree. Installation of the third party ATLAS, glib, and libxml libraries is straightforward on any variety of Unix/Linux, but should also be possible on other architectures with an ANSI compiler. The binaries supplied will be valid for current Intel Linux architectures.
Examples of the use of one embodiment of the invention will now be described.
The first example relates to a simple wedge with Graben-like fault.
This very simple synthetic example has been constructed to illustrate some of the main considerations in constructing a workflow involving the massaging method and software of one embodiment of the invention.
This model is constructed as a 2D case for visualisation simplicity.
It is common to fix a reference depth layer to the strongest reflection—often the top of a reservoir, which is chosen as the top of layer 2. This reference is invisible when the sand pinches out, so is extrapolated horizontally for simplicity. A depth uncertainty of σz=5 m is attached to this surface in the Delivery prior; all other depths are referenced to this and computed from layer velocities and times. After the Delivery inversion is run, typically with
% delivery -PD -v 3 -RWS -N 100 -m prio_traces.su ModelDescription.xml,
the summary file of statistics is generated for use in the massaging step:
% deliveryAnalyser -I realisations.su --massage-analyse ep,cdp,gx,gy \ 1,2,3 d,thickness,net-sand, NG massage_analysis.mab
which produces a set of median and covariance statistics for layers 1,2,3 and properties depth, thickness, net-sand, and NG in the file massage_analysis.mab. The seismic header words ep,cdp,gx,gy are reproduced in the file for spatial locations. This file will be very large for big models, so special binary compression techniques are used.
The detail of what happens with the statistics near the pinchout is interesting.
The histogram is of the reservoir (layer 2) thickness at the four traces tracl=12,13, 16 and 25 (shown at right). The distribution gradually evolves from a pure spike at thickness=0, through a mixed spike and minor mode (tracl=13), to a nearly Gaussian single mode when the amplitudes improve. The median (50% quantile) statistic used in the analysis to define the most-likely value at each trace is highlighted on the axis.
Producing corner-point style grids is then straightforward. An XML file Massager.xml is created with suitable entries for the required properties, smoothing variograms, residuals' variograms, and hard well data, etc. A typical runtime command is then
% deliveryMassager Massager.xml -v 3 -a -N 10 -ecl
which would produce 10 realisations of the cornerpoint grid, plus the p50 model, in files with obvious names like MassagedEclPropertiesLayer* and suitable suffixes. See Appendices 1 and 2 for more details.
The layer depths extracted from the p50 statistics at each seismic trace lack spatial continuity for two reasons: 1) noise in the seismic traces feeding the inversion, and 2) sampling error in the MCMC ensemble, which will usually scale like N^(-1/2) if the ensemble has N samples, but may central limit more slowly if the posterior contains many modes and/or the modes have eccentric shapes (like pinchouts, which are ‘half-Gaussians’). The example above, with only 100 realisations, can be expected to have substantial sampling noise (in practice, more realisations would be generated to reduce this).
If we impose smoothing with an isotropic Gaussian variogram of range 600 m, and form a most-likely model and several realisations, the cornerpoint grids look like those depicted in
The XML property attribute smooth_across_faults controlling continuity across faults has been altered to produce the rather non-geological realisations of
The second example relates to the Stybarrow field off Western Australia, which has been subjected to the full gamut of the Delivery-style workflow. A more comprehensive overview is given in Glinsky et al. (2005). The field is an early Cretaceous turbidite sandstone, whose structure comprises a narrow, wedge-like NE-to-SW tilted fault block, with normal faults providing closure to the SW. Cross-section and elevation views are in
Four wells were used for simultaneous wavelet extraction, using the software of Gunning and Glinsky (2005). The coarse layer-based model constructed for Delivery inversion comprised 6 layers in the sequence shale/thin-sand/thin-shale/main-sand/hard-shale/shale. The seismic inversion was run only on traces within the fault block/hydrocarbon trap region. The asset geological team built an ECLIPSE model of the reservoir using the same layering, identifying about a dozen internal faults and associated segments.
Since one of the wells (Stybarrow 4) penetrated the lower fluid contact, the uncertainty of chief interest was that of the net-sand volume within closure above the known contact.
The asset geologist suggested transverse correlation lengths in the km range for the main bodies in this field, and the smoothing effect of this on the p50 ‘massaged’ map is evident in
The uncertainty in the main-sand net-sand volume is strongly influenced by the correlation lengths of the allowable body fluctuations, as the extent to which the stochastic volumes will central-limit (within the fault-block integration area) to a sharply defined average is strongly controlled by these lengths. The distribution of this volume was estimated by drawing an ensemble of realisations of net-sand, conditioned on well data, and integrating over the maps above the contact level. The code produces ascii files with summary statistics of 200 realisations by using the typical runtime command:
% deliveryMassager StybarrowMassager.xml -v 4 -a -N 200 --stats
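A non-limiting sketch of how such volumetric statistics can be assembled from an ensemble of net-sand realisations (Python, with hypothetical grids, contact depth and stand-in realisations) is:

    # Non-limiting sketch of the volumetric summary: each net-sand realisation
    # is integrated over the map area above the known fluid contact, and the
    # ensemble of volumes gives the cumulative distribution used for
    # uncertainty statements.
    import numpy as np

    rng = np.random.default_rng(42)
    n_nodes, n_real = 500, 200
    cell_area = 50.0 * 50.0                          # m^2 per output-grid node
    depth = rng.uniform(2000.0, 2100.0, n_nodes)     # crest-to-flank depths (hypothetical)
    contact = 2080.0                                 # known fluid contact (m)

    # Stand-in ensemble of net-sand maps (m); in practice these would be the
    # massager realisations conditioned on the well data.
    net_sand = rng.gamma(shape=4.0, scale=2.0, size=(n_real, n_nodes))

    above = depth < contact                          # nodes above the contact
    volumes = (net_sand[:, above] * cell_area).sum(axis=1)   # m^3 per realisation

    p10, p50, p90 = np.percentile(volumes, [10, 50, 90])
    print(p10, p50, p90)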
As an example of the ‘decoration’ algorithm,
The method and deliveryMassager computer program according to one embodiment of the invention is an essential tool for coercing the stochastic outputs from a stochastic seismic inversion, such as the Delivery seismic inversion tool, to formats suitable for flow simulation or further 3D modelling and analysis. It performs a merging of expert-prescribed lateral correlations with the vertical correlations inferred in the inversion, which is essential for the generation of both realistic most-likely-case models and for uncertainty studies using stochastic realisations. Hard observations, faulting information, and segmentation requirements are honoured. The preferred embodiment of the invention generates industry-standard cornerpoint grid formats usable directly by common 3D modelling tools and flow simulators.
References have been made in the foregoing description to the known Delivery software, although the preferred embodiment of the invention may use seismic inversion data of any other suitable form.
The above describes a particular preferred embodiment of the invention. However, modifications may be made within the scope of the claims. In particular, the different steps of the method set out in claim 1 may be implemented in the particular ways set out in the description above, or in equivalent ways, and it should particularly be noted that the specifically described method of any given step may be carried out in combination with implementations of other steps which are different from the specific examples given. Furthermore, several of the method steps set out in the claims may be merged and carried out at the same time.
When running the deliveryMassager code, very frequently changed runtime options are reserved for the commandline: the executable deliveryMassager self-documents if no arguments are supplied, for those wishing to peruse these options. Otherwise, all input parameters are specified in an XML file (see 1.1 below), but this will in turn reference other files that may be required:
Here, each of headerwords/layer-numbers/properties is a comma separated list, as per the examples.
If no corner point grid is supplied, the massaging code can produce most-likely maps and realisation files on the same grid as the seismic inversion (‘duplicate’ mode), which is often very useful. No fault block information is available when using this mode.
1.1 XML formats and Schema
The XML format used to control the massaging process has a meta-description in the associated Massager.xsd XML schema file, which can be used in the XML editor supplied with the Delivery distribution to produce strictly legal XML files. The format of the XML is largely self-explanatory, but a few explanations may be helpful.
The code produces a variety of output files, with names constructed from relevant entries in the master XML file. The simpler files are in the naive geoEAS ascii format used by GSLIB (Deutsch and Journel, 1998) for ease of parsing. Stochastic outputs (‘realisations’) are generated if the -N number flag is supplied.
If ascii-mode is used (runtime flag -a)
If files are being written to the cornerpoint grids, we get
If volumetric statistics of certain properties are requested (runtime flag --stats), simple ascii files (Realisation_summary_Stats*) with the cumulative distribution of a requested ordering property (e.g. net-sand) are generated.
Appendix 3: The Thin-Layer Detection Problem
The strong nonlinearity of the forward model in the regime of thin layers makes the correct introduction of trace correlations difficult. Thin layers are always difficult to detect (or reject) with strong probability in single traces, as a layer introduced between identical bounding layers will introduce equal and opposite reflectivities, which will nearly cancel each other in the convolution, and thus produce very weak (i.e. within-noise level) amplitudes. But the inversion at each trace can still provide a (perhaps weakly) updated estimate of the probability p that the layer is present. The case p≈½ is most interesting.
There is a simple mapping of this problem to a Bayesian Markov Random Field (MRF) model (Winkler, 2003; Besag, 1986) which offers considerable insight. If we think of an array of traces i characterised by an integer xi=±1 denoting ‘layer present/absent’ at each trace, then the product of (independent) updated likelihoods pi over all traces in the set can be written in the form
where Bi=(½)log(pi/(1−pi)), and ai=(1−(1/Bi)log pi) are constants that come from setting the odds ratio
pi/(1−pi)≡exp(Bi(+1−ai))/exp(Bi(−1−ai))
We may think of the exponent in equation (19) as a (-ve) ‘likelihood’ Hamiltonian for the problem, which needs to be added to a Hamiltonian expressing the prior mean and correlations between the states xi, as they might plausibly be related in a model prior to any observations (i.e. inversion results). If we write the prior for the model {xi} as a MRF with coupling over nearest neighbours given by the Hamiltonian
then the prior model corresponds to an Ising model with inverse temperature β, mean state <xi>=0 (i.e. agnostic view of layer presence/absence), and, in 1d, an exactly derivable correlation function (in the large system limit):
ρi,j≡<xixj>˜(tanh β)^|i−j|.
Clearly the correlation decays geometrically/exponentially between traces, so we define a correlation length λc by
ρi,j˜(tanh β)^|i−j|≡exp(−|i−j|/λc)
Clearly, longer correlations (large λc) correspond to ‘colder’ temperatures (large β).
When we add the likelihood Hamiltonian to the prior Hamiltonian, the overall system is
which corresponds exactly to the Bayesian image models discussed by Winkler (2003), in the binary case. Exact MAP estimates of the most probable state can be computed by annealing or the Ford-Fulkerson algorithm (Greig et al., 1989).
Some insight into the effect of the correlations can be gleaned by considering the one dimensional case with a common update probability p=pi. This then corresponds to the Ising model in an external magnetic field B. The question of interest is then, given a set of (identical) likelihood updates at each trace corresponding to B=(½)log(p/(1−p)), what the expected state of the system is. This corresponds precisely to the mean Ising magnetisation, which is known (Thompson, 1972) for the 1D case to be
Graphs of this curve show that the correlation in the prior strongly ‘corroborates’ any weak inclinations in the likelihood p. E.g. for a correlation length λc=10 and p=0.6, the expected state is almost certainly ‘layer present’.
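These numbers can be reproduced with the following non-limiting sketch (Python), using the textbook 1D Ising magnetisation in an external field and assuming the couplings β and B are exactly as defined in the Hamiltonians above:

    # Non-limiting sketch reproducing the worked numbers quoted above, using
    # the textbook 1-D Ising magnetisation in an external field.
    import numpy as np

    def expected_state(p, corr_length):
        B = 0.5 * np.log(p / (1.0 - p))                 # likelihood field per trace
        beta = np.arctanh(np.exp(-1.0 / corr_length))   # from (tanh beta)^|i-j| = exp(-|i-j|/lambda_c)
        m = np.sinh(B) / np.sqrt(np.sinh(B) ** 2 + np.exp(-4.0 * beta))
        return m, 0.5 * (1.0 + m)                       # mean state, P(layer present)

    m, prob = expected_state(p=0.6, corr_length=10.0)
    print(round(m, 3), round(prob, 3))                  # ~0.97 and ~0.99: 'layer present'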
This behaviour is reasonable: we expect a particular observation to be repeated many times if the correlation lengths are long, and if the observations are truly independent, the multiplication of probabilities forces the ‘suspected’ state to be very much more likely. In the inversion context, we would have to be very careful with asserting true independence of observations, since the imaged amplitudes may well have systematic effects from the processing or other geological effects in the overburden.