Surveys can be performed to acquire survey data regarding a target structure, such as a subsurface structure. Examples of surveys that can be performed include seismic surveys, electromagnetic (EM) surveys, wellbore surveys, and so forth. In a survey operation, one or more survey sources are used to generate survey signals (e.g. seismic signals, EM signals, etc.) that are propagated into the subsurface structure. Survey receivers are then used to measure signals reflected from or affected by the subsurface structure.
The acquired survey data can be processed to characterize the subsurface structure. Based on the characterization, decisions can be made with respect to operations to be performed with respect to the subsurface structure, including additional survey operations, drilling of a wellbore, completion of a wellbore, and so forth.
An issue associated with obtaining information based on survey data is uncertainty associated with models that characterize the subsurface structure. Failing to properly consider model uncertainty can lead to increased risks as part of decision-making associated with operations performed with respect to a subsurface structure.
In general, according to some implementations, an objective function is based on variation in predicted data over multiple sets of candidate model parameterizations that characterize a target structure. A computation is performed with respect to the objective function to produce an output. An action is performed that is selected from the group consisting of: selecting, using the output of the computation, at least one design parameter relating to performing a survey acquisition that is one of an active source survey acquisition and a non-seismic passive survey acquisition; and selecting, using the output of the computation, a data processing strategy.
In general, according to further or other implementations, the objective function is based on covariance of differences in the predicted data over the multiple sets of candidate model parameterizations.
In general, according to further or other implementations, the objective function is maximized.
In general, according to further or other implementations, maximizing the objective function comprises maximizing a nonlinear objective function.
In general, according to further or other implementations, maximizing the nonlinear objective function comprises maximizing a DN-criterion.
In general, according to further or other implementations, the at least one design parameter relating to performing the survey acquisition is selected to increase expected information in data acquired by the survey acquisition.
In general, according to further or other implementations, the data acquired by the survey acquisition is selected from the group consisting of seismic data, electromagnetic data, data acquired by a cross-well survey acquisition, data acquired by an ocean-bottom cable acquisition arrangement, data acquired by a vertical seismic profile (VSP) survey acquisition arrangement, gravity data, geodetic data, laser data, and satellite data.
In general, according to further or other implementations, selecting the at least one design parameter comprises selecting a parameter defining an offset of a survey source to a wellhead of a wellbore in which survey equipment is provided.
In general, according to further or other implementations, selecting the at least one design parameter comprises defining a region in which a spiral survey operation is performed.
In general, according to further or other implementations, selecting the at least one design parameter further comprises defining a rate of increase of a radius of a spiral pattern for the spiral survey operation.
In general, according to further or other implementations, selecting the at least one design parameter comprises selecting a design parameter for a time-lapse survey.
In general, according to further or other implementations, selecting the at least one design parameter relates to a survey data acquisition operation for survey-guided drilling of a wellbore.
In general, according to further or other implementations, selecting the at least one design parameter comprises modifying the at least one design parameter of a survey arrangement as the survey acquisition is being performed.
In general, according to further or other implementations, selecting the data processing strategy comprises selecting one or more subsets of a dataset containing acquired survey data, where the selected one or more subsets of data are processed.
In general, according to further or other implementations, the processing is selected from the group consisting of a full waveform inversion, reverse time migration processing, least squares migration processing, tomography processing, velocity analysis, noise suppression, seismic attribute analysis, static removal, and quality control at a control system.
In general, according to some implementations, a computer system includes at least one processor to provide an objective function based on a variation in predicted data over multiple sets of candidate model parameterizations that characterize a target structure, and perform a computation with respect to the objective function to produce an output. An action is performed that is selected from the group consisting of: selecting, using the output of the computation, at least one design parameter relating to performing a survey acquisition that is one of an active source survey acquisition and a non-seismic passive survey acquisition; and selecting, using the output of the computation, a data processing strategy.
In general, according to further or other implementations, the computation computes values pertaining to a DN-criterion, wherein the values identify positions of shots that are more likely to produce more informative data.
Other or alternative features will become apparent from the following description, from the drawings, and from the claims.
Some embodiments are described with respect to the following figures:
Statistical experimental design (SED) techniques can be applied to reduce (or minimize) risks associated with model uncertainty. Model uncertainty is based on the fact that a model may not actually be an accurate representation of a target structure, such as a subsurface structure. An SED technique enhances experiments to improve (or maximize) the expected information that can be obtained in observed data.
Model-oriented design is a subdiscipline of SED. With a model-oriented design, information is obtained regarding how observed data can vary with models of the subsurface structure. With model-oriented design, model discrimination may be performed. The object of model discrimination is to perform experiments to discriminate between two or more models that describe a phenomenon of interest (e.g. a target structure such as a subsurface structure). In some implementations, a hypothesis test may be enhanced by model-oriented design. Two hypotheses are proposed that provide two competing models to explain observed data. An experiment can be defined that increases (or maximizes) the odds that one model is correct and the other model is incorrect. The correct or true model can be considered the null hypothesis, while the other model is treated as an alternate hypothesis. Stated differently, a goal of the hypothesis test is to optimize an experiment to increase (or maximize) the odds that the alternative hypothesis is rejected, which helps ensure that the model parameters most likely to explain the observed data are in fact the correct ones.
In accordance with some implementations, a nonlinear design objective function relating to an experimental design (and more specifically to a model-oriented design) is used for reducing (or minimizing) risk associated with uncertainty, such that the expected information that can be obtained from observed data can be increased (or maximized). In some implementations, the non-linear design objective function that is used includes a DN-criterion.
Nonlinear model-oriented design relates to nonlinear data-model relationships, in which the information content of data varies nonlinearly with the model of the target structure. It is desirable to address nonlinearity because many data-model relationships (represented by theoretical functions) in subsurface exploration are nonlinear and affect model uncertainty in complicated ways.
The DN-criterion is a nonlinear design objective function that can be maximized using relatively efficient algorithms from linearized design theory. This makes the DN-criterion capable of optimizing large-scale experiments (that can contain a relatively large amount of data).
Maximizing the DN-criterion, where “criterion” refers to “objective function,” produces experiments that are expected to optimally discriminate between competing model parameterizations. A parameterization of a model refers to assigning values to one or more parameters of the model. Different parameterizations involve assigning different values to the parameter(s). For example, a model can include a velocity parameter, which represents a velocity of a seismic wave. A model can include different values of the velocity parameter at different geometric points to characterize respective portions of the subsurface structure. In other examples, a model can include additional or alternative parameters, such as a density parameter, a resistivity parameter, and so forth.
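As a rough numerical sketch of this notion of parameterization (all values below are illustrative, not from any actual survey), competing parameterizations of a simple gridded velocity model, and a prior-sampled ensemble of candidates, might be represented as:

```python
import numpy as np

# A model parameterization assigns concrete values to the model's
# parameters. This toy example uses a 1-D velocity model discretized
# at four cells; the numbers are purely illustrative.
m0 = np.array([1500.0, 1800.0, 2200.0, 2600.0])  # velocity (m/s) per cell
m1 = np.array([1500.0, 1750.0, 2300.0, 2600.0])  # a competing parameterization

# An ensemble of candidate parameterizations, e.g. sampled from a
# prior distribution centered on m0:
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=m0, scale=50.0, size=(500, m0.size))
```

Each row of `ensemble` is one candidate parameterization; the ensemble as a whole stands in for the multiple sets of candidate model parameterizations referred to above.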
In the ensuing discussion, reference is made to subsurface structures that may contain items that are of interest, such as hydrocarbon reservoirs, fresh water aquifers, and so forth. However, in other examples, techniques or mechanisms according to some implementations can also be applied to other types of target structures, such as human tissue, mechanical structures, structures relating to mining, and so forth.
In addition, a wellbore 110 can be drilled into a subsurface structure 112. A survey string 114 can be deployed in the wellbore 110, where the survey string 114 can include survey receivers 116. In some examples, the streamer 106 can be omitted. In other examples, the survey source(s) 104 can be omitted. Also, instead of, or in addition to, providing the survey receivers 116 in the survey string 114, one or more survey sources can also be provided in the survey string 114.
In examples where the survey sources and survey receivers of the arrangement of
In other example arrangements in which the wellbore 110 and survey string 114 are omitted, the survey arrangement of
In further examples, the survey sources and survey receivers in a survey arrangement can include electromagnetic (EM) sources and EM receivers, which can be used in a controlled source EM (CSEM) survey operation. In addition, other types of survey sources and survey receivers can be employed in other implementations. For example, other survey receivers can measure gravity data, magnetotelluric data, geodetic data (to measure a shape of the earth), laser data, satellite data (e.g. global positioning system data or other type of satellite data), and so forth. In other examples, other types of data can be measured by survey receivers.
Although reference is made to an example marine survey arrangement, it is noted that techniques or mechanisms according to some implementations can also be applied to land-based survey arrangements, cross-well survey arrangements (where survey source(s) are placed in a first wellbore and survey receivers are placed in a second wellbore), and so forth.
In some examples, the computer system 120 is also able to perform processing of the data acquired by the survey receivers 108 and 116. Alternatively, the computer system 120 for performing processing according to some implementations can be located remotely from the marine vessel 102, such as at a land-based facility.
The process performs (at 204) a computation with respect to the objective function to produce an output. In some implementations, performing the computation with respect to the objective function includes maximizing the objective function, such as maximizing the DN-criterion, which is discussed further below.
Based on the output of the computation performed with respect to the objective function, one or both of tasks 206 and 208 can be performed. Task 206 includes selecting at least one design parameter relating to performing an active source survey acquisition or a non-seismic passive survey acquisition, where the selecting uses the output of the computation performed with respect to the objective function. An active source survey acquisition refers to a survey acquisition performed using a survey arrangement, such as that depicted in
A design parameter relating to performing a survey acquisition can refer to any parameter that defines how the survey is performed. For example, a design parameter can define a path or a location in which survey source(s) and/or survey receiver(s) is (are) to be provided. Another design parameter can define the type of survey to be performed. As yet another example, a design parameter can define how long a survey is to be performed. There can be numerous other design parameters associated with a survey acquisition (active source survey acquisition or non-seismic passive acquisition).
Task 208 involves the selection of a data processing strategy to be employed with respect to survey data acquired in a survey acquisition. The selection uses the output of the computation performed with respect to the objective function.
In some examples, selecting the data processing strategy includes selecting one or more subsets (where each subset is less than the entirety) of data acquired in the survey acquisition. Selecting subset(s) of acquired data for processing allows for more efficient processing, since the total acquired data can include a relatively large amount of data that can be computationally expensive to process.
In other examples, selecting the data processing strategy can include selecting a strategy for attenuating noise (such as to attenuate surface noise), selecting a strategy relating to migration of acquired data, selecting a strategy relating to filtering data, selecting a strategy relating to analyzing a parameter (or parameters) of interest, and so forth. Multiple candidate data processing strategies may be available, and the selection at 208 can include selecting from among the multiple candidate data processing strategies for processing the acquired data.
As noted above, performing the computation (at 204) with respect to the objective function includes maximizing the DN-objective, in some implementations. Although the following describes details associated with use of the DN-objective, it is noted that in other implementations, other types of objective functions can be used, where such objective functions are based on covariance of differences in predicted data over multiple sets of candidate model parameterizations that characterize the subsurface structure.
Maximizing the DN-criterion produces experiments that are expected to optimally discriminate between competing model parameterizations that characterize the subsurface structure. Maximizing model discriminability over multiple model parameterizations is equivalent to minimizing the expected model uncertainty. Thus, the DN-criterion can be considered to measure experimental quality.
The following provides details relating to derivation of the DN-criterion according to some implementations.
Let
d(m,ξ)=g(m,ξ)+ε(m,ξ) (Eq. 1)
be a mathematical model of interest, where d is a vector of data observations made at observation points (geometric coordinates), m is a vector of model parameters, g is a deterministic theoretical function relating d and m, and ε is a vector of stochastic measurement errors.
It is assumed that m (which is a vector of model parameters) has a known prior distribution, ρ(m), which characterizes the state of knowledge about m before any new data is acquired. Likewise, it is assumed that ε has a known distribution, such as a probability distribution function (PDF).
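A minimal sketch of Eq. 1 follows, with a hypothetical stand-in forward function g (real forward modeling, e.g. ray tracing for travel times, would take its place) and Gaussian measurement errors as the assumed distribution of ε:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(m, xi):
    # Hypothetical deterministic theoretical function: predicted travel
    # times for design xi (source offsets) through model m (slownesses).
    # This is a stand-in for g in Eq. 1, not a real forward model.
    return xi * m.sum()

m = np.array([0.5, 0.4, 0.3])        # model parameters (illustrative)
xi = np.linspace(100.0, 500.0, 8)    # design: 8 source offsets

# Eq. 1: d(m, xi) = g(m, xi) + eps(m, xi)
eps = rng.normal(0.0, 5.0, size=xi.size)  # stochastic measurement errors
d = g(m, xi) + eps
```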
A discriminating test that can be used in experimental design is the likelihood-ratio test, or its logarithm, the log-likelihood-ratio test. The test considers the ratio of the likelihoods of a null hypothesis and an alternative hypothesis, and thus is effectively an odds ratio: it expresses how much more likely it is that the data is explained by one model parameterization than by another model parameterization. Maximizing the likelihood ratio maximizes the odds that the alternative hypothesis is rejected, which is equivalent to maximizing the odds that the true model parameterization is accepted.
Denoting the true model parameterization and its corresponding data by m0 and d0 (or g(m0)+ε), respectively, and denoting an alternative model parameterization m1, the log-likelihood-ratio can be expressed as:

Λ=L(d0|m0)/L(d0|m1), (Eq. 2a)

ln Λ=ln L(d0|m0)−ln L(d0|m1), (Eq. 2b)
where L is the data likelihood function (dependence on ξ is suppressed for ease of notation). Maximizing Λ with respect to ξ maximizes the odds that the true model, m0, is accepted and the alternative model parameterization, m1, is rejected.
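For Gaussian errors, the log-likelihood-ratio of Eq. 2b can be sketched as follows; the predicted data vectors g0 and g1 and the noise level are assumed, illustrative values:

```python
import numpy as np

def log_likelihood(d, g_m, Cd_inv):
    # Gaussian data log-likelihood, up to an additive constant:
    # ln L(d | m) = -1/2 (d - g(m))^T Cd^{-1} (d - g(m)) + const
    r = d - g_m
    return -0.5 * r @ Cd_inv @ r

# Predicted data under the true (m0) and alternative (m1) parameterizations.
g0 = np.array([2.00, 3.00, 4.00])
g1 = np.array([2.10, 2.80, 4.30])
Cd_inv = np.eye(3) / 0.04  # Cd = (0.2)^2 I

rng = np.random.default_rng(2)
d0 = g0 + rng.normal(0.0, 0.2, size=3)  # data generated by m0 (Eq. 1)

# Eq. 2b: ln Lambda = ln L(d0 | m0) - ln L(d0 | m1).
# Positive values favor the true parameterization m0.
ln_lambda = log_likelihood(d0, g0, Cd_inv) - log_likelihood(d0, g1, Cd_inv)
```

For a single noise realization ln Λ can be of either sign; it is its expectation over models and noise (Eq. 3 below) that the design seeks to maximize.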
The log-likelihood ratio in Eq. 2a or 2b is defined for a single pair of m0 and m1. Note that there can be a relatively large number of model parameterizations that have to be compared using the log-likelihood-ratio test. In some implementations, a Bayesian approach is used in which the hypothesis test does not depend on a single pair of model parameterizations but is instead integrated over a plurality of probable model parameterizations. This leads to taking the expectation of ln Λ over m0 and m1,

Eπln Λ=∫∫ln Λ(m0,m1)π(m0,m1)dm0 dm1, (Eq. 3)
where π(m0,m1)=ρ(m0)ρ(m1) is the joint distribution of m0 and m1 (which may be treated as statistically independent, since they enter Eq. 3 as independent variables), and Eπ is the expectation operator over that joint distribution.
Maximizing the average log-likelihood ratio in Eq. 3 should therefore maximize the odds that the true model parameterization is accepted over candidate alternative model parameterizations.
When ε is Gaussian with zero mean and covariance Cd, substituting d0=g0+ε into Eq. 2b gives

ln Λ=½(g0−g1)TCd−1(g0−g1)+εTCd−1(g0−g1), (Eq. 4)

and taking the expectation in Eq. 3 (the second term vanishes because ε has zero mean) simplifies to

Eπln Λ=½Eπ(g0−g1)TCd−1(g0−g1). (Eq. 5)
Defining

δ≡Cd−1/2(g0−g1), (Eq. 6)

Eq. 5 can be further simplified to

Eπln Λ=Eπ½δTδ=Eπ½tr δδT=½tr EπδδT. (Eq. 7)
Effectively, Eqs. 5 and 7 provide a hypothesis test over multiple pairs of candidate model parameterizations (or more generally, multiple sets of candidate parameterizations), where a pair includes m0 and m1. More generally, Eqs. 5 and 7 involve a covariance matrix, EπδδT, that describes how predicted data (based on corresponding model parameterizations) vary with respect to each other.
If EπδδT has any zero eigenvalues then there has to exist some m0≠m1 for which Cd−1/2(g0−g1) is parallel to the null vector(s) of EπδδT, resulting in a perfect match between g0 and g1 despite m0 not equaling m1, which can lead to non-uniqueness.
To address the foregoing non-uniqueness issue, eigenvalues can be forced to be nonzero, which can lead to achieving data-model uniqueness. To do this, log eigenvalues can be summed. This sum is automatically negative infinity for any experiment that causes EπδδT to be singular, which has the effect of eliminating those experiments as potential optima. This is essentially an additional criterion for the objective function according to some implementations. A first criterion is still to maximize the expected log likelihood ratio; a second criterion is to ensure that the maximizing experiment honors the degrees of freedom in the data-model relationship (inasmuch as this is achievable). The sum of the log eigenvalues of a matrix is equal to the log of the determinant of that matrix, which gives the objective function
Φ=ln det(EπδδT), (Eq. 8)
which is the DN-criterion. Note that det EπδδT is the so-called generalized variance of δ. Note also that this derivation avoids the assumption that g(m0)−g(m1) is multivariate Gaussian (multinormal), so g0−g1 is allowed to be non-Gaussian.
The DN-criterion thus includes two objectives, one that maximizes the expected data likelihood ratio and the other that honors the degrees of freedom in the data-model relationship.
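The behavior of Eq. 8, including the elimination of experiments that make EπδδT singular, can be sketched as follows; the δ samples are synthetic stand-ins for Cd−1/2(g0−g1) over sampled model pairs:

```python
import numpy as np

def dn_criterion(delta_samples):
    # delta_samples: (n_pairs, n_data) array of delta = Cd^{-1/2}(g0 - g1)
    # over sampled pairs of candidate model parameterizations.
    # Eq. 8: Phi = ln det(E_pi[delta delta^T]).
    n_pairs = delta_samples.shape[0]
    C = delta_samples.T @ delta_samples / n_pairs  # estimate of E_pi[dd^T]
    sign, logdet = np.linalg.slogdet(C)
    # A zero eigenvalue (singular covariance) drives Phi to -infinity,
    # eliminating that experiment as a potential optimum.
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(3)

# Well-conditioned case: the deltas span the full data space.
good = rng.normal(size=(200, 3))
# Degenerate case: all deltas lie along one direction (rank-1 covariance),
# so different model pairs predict perfectly correlated data.
bad = np.outer(rng.normal(size=200), np.array([1.0, 1.0, 1.0]))

phi_good = dn_criterion(good)
phi_bad = dn_criterion(bad)
```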
It is easier to discriminate between competing model parameterizations (for explaining observed data), if the predicted data vary greatly from model parameterization to model parameterization. Also, it is easier to discriminate between model parameterizations if their predicted data are expected to vary independently of one another. Looked at the other way round, if different model parameterizations predict nearly the same data, then, accounting for measurement noise, it may be difficult to discriminate which model parameterization best explains the observed data. Likewise, it may be difficult to discriminate between model parameterizations whose predicted data are perfectly correlated, because this creates the possibility that many model parameterizations can equally honor the observed data.
The DN-criterion seeks to maximize data variability while minimizing data correlation.
Several optimization algorithms for maximizing the DN-criterion can be used, including algorithms described in Darrell Coles et al., “A Free Lunch In Linearized Experimental Design?” Computers & Geosciences, pp. 1026-1034 (2011).
The algorithms described in Coles et al. are greedy: a solution is optimized through a sequence of local updates, each optimal with respect to the current solution but not necessarily with respect to the global (overall) objective function, in the hope that the result is close to the global optimum.
Sequential algorithms can be formulated to use a recursion on the design objective function which relates its current value to its future value at a subsequent stage of the optimization. Such relations are often more efficient to evaluate than the objective function itself. In particular, the D-criterion, a linearized design objective function, can be defined as the determinant of the posterior model covariance matrix. The DN-criterion is a generalization of the D-criterion to nonlinear data-model relationships, and there is a simple recursion formula (for both criteria) that obviates explicit computation of a determinant, replacing it with an efficient matrix-vector product. The recursion is simply a rank-k update formula for the determinant of a square symmetric matrix (e.g. the data covariance EπδδT). Determinant-based design criteria can take advantage of the fact that the data covariance matrix of any subset of the candidate set of observed data is a principal submatrix (the matrix obtained by deleting similarly indexed rows and columns of a square matrix) of the data covariance matrix of the candidate set. Thus, it is sufficient to calculate EπδδT once, for the complete candidate set of observation points, and then use the aforementioned rank-k update formulas to find the optimum (or improved) subset of observations.
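A sketch of the greedy, determinant-update-based subset selection described above follows. It uses the Schur-complement identity det(C[S+i])=det(C[S])·(Cii−ciT C[S]−1 ci) for the determinant of an augmented principal submatrix, so no full determinant is recomputed at each step; the covariance matrix here is a synthetic stand-in for EπδδT:

```python
import numpy as np

def greedy_subset(C, k):
    # Greedily choose k observation indices maximizing ln det of the
    # principal submatrix C[S, S]. At each step the gain of adding
    # index i is the Schur complement C_ii - c_i^T C[S]^{-1} c_i,
    # which multiplies (hence adds in log) onto det(C[S]).
    n = C.shape[0]
    selected = []
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        if selected:
            Cs_inv = np.linalg.inv(C[np.ix_(selected, selected)])
        for i in range(n):
            if i in selected:
                continue
            if selected:
                c_i = C[np.ix_(selected, [i])].ravel()
                gain = C[i, i] - c_i @ Cs_inv @ c_i  # Schur complement
            else:
                gain = C[i, i]
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
    return selected

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 10))
C = A @ A.T / 10  # toy 50x50 data covariance over candidate observations
subset = greedy_subset(C, 5)  # improved 5-observation sub-experiment
```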
As discussed in connection with
In some examples, a model of a subsurface structure can be characterized by using an uncertainty workflow that can provide multiple candidate parameterizations of the model that is consistent with observed survey data. The candidate model parameterizations can be randomly sampled, to provide an ensemble (collection) of candidate model parameterizations that can be used in the process of
Because DN-optimization operates in the data space, seismic wave (e.g. compression or P-wave) travel times for candidate combinations of shots, receivers, and models are computed, which can provide a relatively large number of data points. A “shot” refers to a particular activation of at least one survey source, which produces a survey signal that is propagated through a subsurface structure, where reflected or affected signals can be detected by survey receivers.
In some cases, missing data points can result from the presence of certain structures (e.g. salt structures) in the subsurface structure. The presence of such structures may prevent a ray tracer from computing travel times. Because the DN-criterion operates on a data covariance matrix (as expressed in Eqs. 5 and 7 described above, for example), it is helpful to find a statistically consistent way of computing covariances in the presence of missing data.
In some examples, each shot-receiver pair can be weighted according to the percentage of successful travel times computed for the shot-receiver pair (over a set of candidate models). For example, a shot-receiver pair where 100% of travel times can be computed can be given a weight of 1; a shot-receiver pair where 80% of travel times can be computed can be given a weight of 0.8; and so on. This approach ensures that the computed covariance matrix can be positive semi-definite, and it also builds in a bias toward shot-receiver combinations with high success rates (for which a relatively large percentage of travel times can be computed), which is desirable since these combinations are most likely to produce informative data in a real acquisition setting, given the current state of model uncertainty.
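The weighting scheme above can be sketched as a symmetric scaling of the covariance matrix, which preserves positive semi-definiteness; the success rates and δ samples below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 40

# success[i] = fraction of candidate models for which a travel time
# could be computed for shot-receiver pair i (ray tracing may fail,
# e.g. beneath salt). Values here are illustrative.
success = rng.uniform(0.5, 1.0, size=n_pairs)
weights = success  # weight 1.0 for 100% success, 0.8 for 80%, etc.

# Raw covariance estimate over sampled delta vectors.
deltas = rng.normal(size=(200, n_pairs))
C_raw = deltas.T @ deltas / deltas.shape[0]

# Symmetric weighting W C W keeps the matrix positive semi-definite
# while biasing toward shot-receiver pairs with high success rates.
W = np.diag(weights)
C_weighted = W @ C_raw @ W

eigs = np.linalg.eigvalsh(C_weighted)  # all eigenvalues remain >= 0
```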
To maximize the DN-objective according to Eq. 8, estimation of EπδδT (used in Eq. 8) for the complete set of shot-receiver pairs can be performed as follows:
G≡{gk|gk=g(mk,Ξ), mk∈M}.
D(:,m)≡δm=Cd−1/2(gk−gl), where m=500(k−1)+l and k,l=1, . . . , 500.
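The construction of G and D can be sketched with a reduced ensemble (20 candidate models rather than 500, and 0-based indices rather than the 1-based m=500(k−1)+l of the text) and a random stand-in for the forward function:

```python
import numpy as np

rng = np.random.default_rng(6)
N, n_data = 20, 15  # 20 candidate models (500 in the text), 15 data points

# G: predicted data g(m_k, Xi) for each candidate model m_k in M.
# A random stand-in replaces real forward modeling here.
G = rng.normal(size=(N, n_data))

# Cd^{-1/2} for uncorrelated errors with standard deviation 0.2.
Cd_inv_sqrt = np.eye(n_data) / 0.2

# D(:, m) = delta_m = Cd^{-1/2}(g_k - g_l), with column index
# m = N*k + l (0-based analogue of m = 500(k-1) + l).
D = np.empty((n_data, N * N))
for k in range(N):
    for l in range(N):
        D[:, N * k + l] = Cd_inv_sqrt @ (G[k] - G[l])

# E_pi[delta delta^T] is then estimated from the columns of D.
C_est = D @ D.T / (N * N)
```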
As noted above, the DN-criterion seeks to maximize data variability (e.g. travel time variability) while minimizing data correlation (e.g. spatial correlation of travel times of a candidate shot with respect to an entire shot carpet).
The DN-values of
Higher DN-values depicted in
The annular region 304 has an inner radius (from the wellhead 302) of R1, and an outer radius of R2, as shown in
In some examples, a spiral three-dimensional (3D) VSP survey acquisition operation can be performed. A spiral 3D VSP survey acquisition operation involves towing at least one survey source (e.g. 104 in
The ability to systematically recommend a specific region for spiral VSP is useful because it ensures that the most informative data is collected while reducing acquisition costs. In addition, the rate of increase of the radius of the spiral pattern can also be determined.
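A sketch of generating spiral source positions confined to a recommended annular region follows; an Archimedean spiral (radius growing linearly with angle, at the determined rate) is assumed here purely for illustration:

```python
import numpy as np

def spiral_positions(R1, R2, rate, n_points=200):
    # Archimedean spiral r = R1 + rate * theta, truncated so that the
    # towed source stays inside the annular region [R1, R2] around the
    # wellhead. "rate" is the rate of increase of the spiral radius
    # per radian of turn.
    theta_max = (R2 - R1) / rate
    theta = np.linspace(0.0, theta_max, n_points)
    r = R1 + rate * theta
    x = r * np.cos(theta)  # easting relative to the wellhead
    y = r * np.sin(theta)  # northing relative to the wellhead
    return x, y, r

# Example: annulus from 2 km to 5 km, radius growing 150 m per radian.
x, y, r = spiral_positions(R1=2000.0, R2=5000.0, rate=150.0)
```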
DN-optimization can also be used to design a time-lapse survey acquisition operation. Time-lapse survey acquisition refers to performing data acquisition at different times over the same regions. DN-optimization can define the time offsets at which the time-lapse survey acquisition is to be performed.
DN-optimization can also be used to design a pre-survey acquisition geometry in which the expected measurement noise can be characterized to increase data quality during actual data acquisition. For example, a geometry referred to as a “Z-survey” can be designed for a survey data acquisition operation, where at least one survey source follows a general Z pattern, as depicted in
Another use for DN-optimization can be to produce real-time information maps for steering a marine vessel or to place the marine vessel in a vicinity where more informative shots are expected to occur. For example, the graph of
In some examples, the ability to steer a marine vessel to locations that are likely to produce more informative data can be used to identify far-offset checkshot positions to constrain look-ahead models in real-time drilling of a wellbore. A checkshot can be used to provide accurate time/depth correlation to give a confirmation of where a drillstring is in both time and depth, regardless of wellbore geometry, so that a drilling operator can quickly make informed drilling decisions in the wellbore.
DN-optimization can also be used for post-acquisition quality control of acquired survey data. As discussed above, in some examples, the more informative shots occur in the annular region 304 depicted in
DN-optimization can also be used to decimate a dataset containing acquired survey data, which can be too large to be practically analyzed or to be analyzed within a desired time interval. The idea would be to systematically find a suitably small subset (or subsets) of the dataset which can be used to develop a relatively accurate model parameterization of the subsurface structure, or any portion of the subsurface structure. Selecting subset(s) of the dataset can be used in various types of processing, including tomography, least squares migration, full waveform inversion, reverse time migration, velocity analysis, noise suppression, seismic attribute analysis, static removal, and quality control at a control system (such as on a marine vessel), and so forth. In other examples, other types of processing can be performed.
Apart from the applications noted above, DN-optimization can also handle industrial-scale nonlinear design problems. The ability to probabilistically optimize experiments for the nonlinear case is desirable because posterior model distributions are complicated by nonlinearity and DN-optimization accomplishes this while still being computationally feasible for real-world problems. In many real-world applications, the true posterior model distribution is non-Gaussian because the forward operator is nonlinear, and DN-optimization properly accounts for this.
The processor(s) 504 can be connected to a network interface 506, which allows the computer system 120 to communicate over a network, such as to download data acquired by survey receivers. The processor(s) 504 can also be connected to a computer-readable or machine-readable storage medium (or storage media) 508, to store data and instructions. The storage media include different forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy and removable disks; other magnetic media including tape; optical media such as compact disks (CDs) or digital video disks (DVDs); or other types of storage devices. Note that the instructions discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes. Such computer-readable or machine-readable storage medium or media is (are) considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to any manufactured single component or multiple components. The storage medium or media can be located either in the machine running the machine-readable instructions, or located at a remote site from which machine-readable instructions can be downloaded over a network for execution.
Those with skill in the art will appreciate that while some embodiments discussed herein include terms that could be interpreted as potentially absolute or requiring a given thing (e.g., including without limitation “exactly”, “exact”, “only”, “key”, “important”, “requires”, “all”, “maximizing”, “maximum”, “each”, “minimize”, “minimum”, “must”, “always”, etc.), the various systems, methods, processing procedures, techniques, and workflows disclosed herein are not to be understood as limited by the use of those terms, nor are any claims that issue from this patent application necessarily limited by the use of those terms.
In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some or all of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.
This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Application Ser. No. 61/616,499, entitled “SURVEY DESIGN FOR MARINE BOREHOLE SEISMICS,” filed Mar. 28, 2012, which is hereby incorporated by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2013/034193 | 3/28/2013 | WO | 00 |
Number | Date | Country | |
---|---|---|---|
61616499 | Mar 2012 | US |