The present disclosure relates to methods for acquiring and separating contributions from two or more different simultaneously emitting sources in a common set of measured signals representing a wavefield, where the sources traverse general, related sampling grids. In particular, the present invention relates to acquiring contributions from two or more different simultaneously emitting seismic sources traversing general, related sampling grids and to separating the resulting sets of recorded and/or processed seismic signals.
The current disclosure relates to marine seismic surveying, including in particular marine seismic data acquisition. The general practice of marine seismic surveying is described below in relation to
Prospecting for subsurface hydrocarbon deposits (1801) in a marine environment (
Seismic sources typically employ a number of so-called airguns (1809-1811), which operate by repeatedly filling up a chamber in the gun with a volume of air using a compressor and releasing the compressed air at suitably chosen times (and depths) into the water column (1812).
The sudden release of compressed air momentarily displaces the seawater, imparting energy to it and setting up an impulsive pressure wave in the water column that propagates away from the source at the speed of sound in water (with a typical value of around 1500 m/s) (1813).
Upon incidence at the seafloor (or seabed) (1814), the pressure wave is partially transmitted deeper into the subsurface as elastic waves of various types (1815-1817) and partially reflected upwards (1818). The elastic wave energy propagating deeper into the subsurface partitions whenever discontinuities in subsurface material properties occur. The elastic waves in the subsurface are also subject to an elastic attenuation which reduces the amplitude of the waves depending on the number of cycles or wavelengths.
Some of the energy reflected upwards (1820-1821) is sensed and recorded by suitable receivers placed on the seabed (1806-1808), or towed behind one or more vessels. The receivers, depending on the type, sense and record a variety of quantities associated with the reflected energy, for example, one or more components of the particle displacement, velocity or acceleration vector (using geophones, MEMS [micro-electromechanical systems] or other devices, as is well known in the art), or the pressure variations (using hydrophones). The wavefield recordings made by the receivers are stored locally in a memory device and/or transmitted over a network for storage and processing by one or more computers.
Waves emitted by the source in the upward direction also reflect downward from the sea surface (1819), which acts as a nearly perfect mirror for acoustic waves.
One seismic source typically includes one or more airgun arrays (1803-1805): that is, multiple airgun elements (1809-1811) towed in, e.g., a linear configuration spaced apart several meters and at substantially the same depth, whose air is released (near-) simultaneously, typically to increase the amount of energy directed towards (and emitted into) the subsurface.
Seismic acquisition proceeds by the source vessel (1802) sailing along many lines or trajectories (1822) and releasing air from the airguns from one or more source arrays (also known as firing or shooting) once the vessel or arrays reach particular pre-determined positions along the line or trajectory (1823-1825), or, at fixed, pre-determined times or time intervals. In
Typically, subsurface reflected waves are recorded with the source vessel occupying and shooting at hundreds of shot positions. A combination of many sail-lines (1822) can form, for example, an areal grid of source positions with associated inline source spacings (1826) and crossline source spacings. Receivers can be similarly laid out in one or more lines forming an areal configuration with associated inline receiver spacings (1827) and crossline receiver spacings.
A common and long-standing problem in physical wavefield experimentation is how to separate recorded signals from two or more simultaneously emitting sources. In particular, for more than a decade, the simultaneous source problem has (arguably) been the most pertinent problem to solve to efficiently acquire data for 3D reflection seismic imaging of complex Earth subsurface structures.
Simultaneously emitting sources, such that their signals overlap in the (seismic) record, is also known in the industry as “blending”. Conversely, separating signals from two or more simultaneously emitting sources is also known as “deblending”, and the data from such acquisitions are known as “blended data”. Halliday et al. (2014) describe methods for simultaneous source acquisition relying on highly-controlled marine vibroseis records.
Modern digital data processing of wavefields (or signals) uses a discretized version of the original wavefield, say g, that is obtained by sampling g on a discrete set. The Nyquist-Shannon sampling theorem shows how g can be recovered from its samples; for an infinite number of equidistant samples and given sample rate ks, perfect reconstruction is guaranteed provided that the underlying signal was bandlimited to |k|≤kN=ks/2 (Shannon, 1949; Nyquist, 1928), where kN is the so-called Nyquist wavenumber. The Nyquist-Shannon sampling theorem is equally relevant both to signals generated from a single source being recorded on multiple receivers (receiver-side sampling) as well as signals generated from multiple sources and recorded at a single receiver (source-side sampling).
Assume that the wavefield g is measured at a specific recording location for a source that is excited at different source positions along a straight line. The sampling theorem then dictates how the source locations must be sampled for a given frequency of the source and phase velocity of the wavefield. One aspect of the sampling problem is as follows. Consider that instead of using one source, one wants to use two (or more) sources to, for instance, increase the rate at which data can be acquired. The second source is triggered simultaneously or close in time with the first source while moving along another arbitrarily oriented line to excite the wavefield h. At the recording location the wavefields interfere and the sum of the two wavefields, f=g+h, is measured. There is no known published exact solution to perfectly separate the wavefields g and h that were produced from each source from the combined measurement f (e.g., see Ikelle, 2010; Abma et al., 2015; Kumar et al., 2015; Mueller et al., 2015).
It may therefore be seen as an object of the invention to present new and/or improved methods for acquiring and separating simultaneous-source data, particularly including methods that accommodate and can exploit sources acquired on general but related sampling grids to increase the efficiency and/or the quality with which sources can be simultaneously acquired and separated.
Methods for separating or deblending wavefields generated by two or more sources contributing to a common set of measured or recorded signals are provided, where the sources have traversed general, related sampling grids, suited for seismic applications and other purposes, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.
Advantages, aspects and novel features of the present inventions, as well as details of an illustrated embodiment thereof, may be more fully understood from the following description and drawings.
In the following description reference is made to the attached figures, in which:
The following examples may be better understood using a theoretical overview as presented below.
The slowest observable (apparent) velocity of a signal along a line of recordings in any kind of wave experimentation is identical to the slowest physical propagation velocity in the medium where the recordings are made. As a result, after a spatial and temporal Fourier transform, large parts of the frequency-wavenumber (ωk) spectrum inside the Nyquist frequency and wavenumber tend to be empty. In particular, for marine reflection seismic data (Robertsson et al., 2015), the slowest observable velocity of arrivals corresponds to the propagation velocity in water (around 1500 m/s).
It is well known, for example, that due to the “uncertainty principle”, a function and its Fourier transform cannot both have bounded support. As (seismic) data are necessarily acquired over a finite spatial (and temporal) extent, the terms “bounded support” and “limited support” herein are used not in the strict mathematical sense, but rather to describe an “effective numerical support”, that can be characterised, e.g., by the (amplitude) spectrum being larger than a certain value. For instance, larger than a certain noise threshold, or larger than the quantization error of the analog-to-digital converters used in the measurement equipment. Further, it is understood that by explicitly windowing space and/or space-time domain data, the support of a function may be spread over a larger region of, e.g., the wavenumber-frequency domain and in such cases the term “bounded support” and “limited support” will also be understood as “effective numerical support” as it will still be possible to apply the methods described herein.
Furthermore, the terms “cone” and “cone-shaped” used herein are used to indicate the shape of the “bounded” or “effective numerical” support of the data of interest (e.g., the data that would be recorded firing the sources individually [i.e. non-simultaneously]) in the frequency-wavenumber domain. In many cases, it will still be possible to apply the methods described herein if the actual support is approximately conic or approximately cone-shaped. For example, at certain frequencies or across certain frequency ranges the support could be locally wider or less wide than strictly defined by a cone. Such variations are contemplated and within the scope of the appended claims. That is, the terms “cone” and “cone-shaped” should be understood to include approximately conic and approximately cone-shaped. In addition, in some cases we use the terms “bounded support” or “limited support” and “effective numerical support” to refer to data with “conic support” or “cone-shaped support” even though in the strict mathematical sense a “cone” is not bounded (as it extends to infinite temporal frequency). In such cases, the “boundedness” should be understood to refer to the support of the data along the wavenumber axis/axes, whereas “conic” refers to the overall shape of the support in the frequency-wavenumber domain.
Note that the term “cone-shaped support” or similar refers to the shape of the support of e.g. the data of interest (in the frequency-wavenumber domain), if it were regularly sampled along a linear trajectory in 2D or Cartesian grid in 3D. That is, it refers only to the existence of such a support and not to the actual observed support of the data of interest in the simultaneous source input data or of the separated data of interest sampled as desired. The support of both of these depends on the chosen regularly or irregularly sampled straight or curved input (activation) and output (separation) lines or grids. Such variations are within the scope of the appended claims.
For example, consider a case where the input data are acquired using simultaneous curved shot lines. In this case, the methods described herein can either be applied directly to the input data, provided the curvature has not widened the support of the data of interest such that it significantly overlaps with itself. In this case, the support used in the methods described herein can be different from cone-shaped. Alternatively, the methods described herein are used to reconstruct the data of interest in a transform domain which corresponds to, e.g., best-fitting regularly sampled and/or straight activation lines or Cartesian grids, followed by computing the separated data of interest in the non-transformed domain at desired regular or irregularly sampled locations.
In a wavefield experiment it may be that a source is excited sequentially for multiple source locations along a line while recording the reflected wavefield on at least one receiver. The source may be characterized by its temporal signature. In the conventional way of acquiring signals representing a wavefield, the source may be excited using the same signature from source location to source location, denoted by integer n. Next, consider the alternative way of acquiring such a line of data using a periodic sequence of source signatures: every second source may have a constant signature and every other second source may have a signature which can for example be a scaled or filtered function of the first source signature. Let this scaling or convolution filter be denoted by a(t), with frequency-domain transform A(ω). Analyzed in the frequency domain, a receiver gather (one receiver station measuring the response from a sequence of sources) recorded in this way can be constructed from the following modulating function m(n) applied to a conventionally sampled and recorded set of wavefield signals:
m(n) = 1/2[1 + (−1)^n] + 1/2 a(t) * [1 − (−1)^n],   (0.1)
which can also be written as
m(n) = 1/2[1 + e^{iπn}] + 1/2 a(t) * [1 − e^{iπn}].
By applying the function m in Eq. 0.1 as a modulating function to data f(n) before taking a discrete Fourier transform in space (over n), F(k) = ℱ(f(n)), the following result can be obtained:
ℱ(m(n) f(n)) = 1/2(1 + A) F(k) + 1/2(1 − A) F(k − k_N),   (0.2)
which follows from a standard Fourier transform result (wavenumber shift) (Bracewell, 1999).
Eq. 0.2 shows that the recorded data f will be scaled and replicated into two places in the spectral domain as illustrated in
Part of the data will remain at the signal cone centered around k=0 (denoted by H+ in
This process may be referred to as “wavefield apparition” or “signal apparition” in the meaning of “the act of becoming visible”. In the spectral domain, the wavefield caused by the periodic source sequence is nearly “ghostly apparent” and isolated.
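As a simple illustration of this effect, the following minimal sketch (a toy example with arbitrary sampling, velocity and time-shift values, not a description of an actual acquisition system) encodes every second shot of a synthetic, conventionally sampled gather with a time shift and shows that part of the energy of the encoded gather appears around the Nyquist wavenumber, where the unencoded gather has essentially none:

import numpy as np

# Toy example: encode every second shot of a synthetic gather with a time
# shift T (apparition-style periodic modulation) and compare the spatial
# spectra. All numbers (sampling, velocity, shift) are arbitrary assumptions.
nt, nx = 512, 128              # time samples, shot positions
dt, dx = 0.004, 25.0           # s, m
t = np.arange(nt) * dt
x = np.arange(nx) * dx

# Unencoded gather: a single linear event with 2000 m/s apparent velocity
f = np.zeros((nx, nt))
for n in range(nx):
    f[n] = np.exp(-((t - (0.2 + x[n] / 2000.0)) / 0.02) ** 2)

# Periodic encoding: odd-numbered shots are delayed by T
T = 0.012
omega = 2.0 * np.pi * np.fft.rfftfreq(nt, dt)
delayed = np.fft.irfft(np.fft.rfft(f, axis=1) * np.exp(-1j * omega * T), n=nt, axis=1)
encoded = f.copy()
encoded[1::2] = delayed[1::2]

# Spatial DFT: the encoded gather has energy replicated around the Nyquist wavenumber
k = np.fft.fftfreq(nx, dx)
kN = 1.0 / (2.0 * dx)
near_nyquist = np.abs(np.abs(k) - kN) < 3.0 / (nx * dx)
for name, g in [("unencoded", f), ("encoded", encoded)]:
    G = np.fft.fft(g, axis=0)
    print(name, "energy near k_N:", float(np.sum(np.abs(G[near_nyquist]) ** 2)))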
A particular application of interest that can be solved by using the result in Eq. (0.2) is that of simultaneous source separation. Assume that a first source with constant signature is moved along an essentially straight line with uniform sampling of the source locations where it generates the wavefield g. Along another essentially straight line a second source is also moved with uniform sampling. Its signature is varied for every second source location according to the simple deterministic modulating sequence m(n), generating the wavefield h. The summed, interfering data f=g+h are recorded at a receiver location.
In the frequency-wavenumber domain, where the recorded data are denoted by F=G+H, the H-part is partitioned into two components H+ and H− with H=H++H− where the H−-component is nearly “ghostly apparent” and isolated around the Nyquist-wavenumber [
Although the above description has focused on acquisition along essentially straight lines, the methodology applies equally well to curved trajectories such as coil-shaped trajectories, circles, or other smoothly varying trajectories or sequences of source activations.
The concept may be extended to the simultaneous acquisition of more than two source lines by choosing different modulation functions for each source.
Acquiring a source line where the first two source locations have the same signature, followed by two again with the same signature but modified from the previous two by the function A(ω) and then repeating the pattern again until the full source line has been acquired, will generate additional signal cones centered around ±kN/2.
As is evident from Tab. I, the special case A=1 corresponds to regular acquisition and thus produces no signal apparition. Obviously, it is advantageous to choose A significantly different from unity so that signal apparition becomes significant and above noise levels. The case where A=−1 (acquisition of data where the source signature flips polarity between source locations) may appear to be the optimal choice as it fully shifts all energy from k=0 to k_N (Bracewell, 1999). Although this is a valid choice for modeling, it is not practical for many applications (e.g., for marine air gun sources, see Robertsson et al., 2015) as it requires the ability to flip the polarity of the source signal. The case where A=0 (source excited every second time only) may be a straightforward way to acquire simultaneous source data but has the limitation of reduced sub-surface illumination. A particularly attractive choice of A(ω) for wave experimentation is to let every second source be excited a time shift T later compared to neighbouring recordings, that is, to select A = e^{iωT}.
The above description assumes a simple periodic modulating sequence m(n) generating the wavefield h. In practice it is difficult to obtain perfectly periodic time shifts from a measurement setup. It is for example common practice for seismic vessels to shoot or trigger their sources at predetermined (essentially equidistant) positions, and due to practical variations (vessel velocity etc.) it will be difficult to realize shots at both predetermined locations and times.
Deviations from perfectly periodic acquisition can be termed non-periodic and grouped into non-periodic controlled (or intentional) and non-periodic uncontrolled cases (such as caused by currents, rough seas, etc., which are beyond influence by the acquisition crew). Furthermore, non-periodic acquisition can be composed of a periodic part, overlain by a non-periodic part. In all these cases, the signal cone will be scaled and replicated additional times along the wavenumber axis and the effects can be dealt with by various methods, including modelling and inverting such scaled replications using cyclic convolution functions as described in more detail later.
Note that periodic or aperiodic variations in source locations can similarly be used to apparate the wavefield signals. This can be understood for example by noting that a variation in the source location produces (angle-dependent) time shifts and therefore can be used to encode the data using the apparition methods described above.
For a sub-horizontally layered Earth, the recorded reflections from the interfaces between the strata lie (approximately) on hyperbolic trajectories in the space-time domain. The change in two-way traveltime of such reflections as a function of the source-receiver distance (or offset) is known as the normal moveout (NMO) and depends on the zero-offset two-way traveltime and the corresponding average sound speed in the overlying strata.
Correction of the normal moveout (NMO correction) is a standard procedure in seismic data processing which aims to remove the offset dependent part of the traveltime and align the reflected arrivals according to their zero-offset traveltime such that they can be summed yielding an initial “stack image” of the subsurface with increased signal-to-noise ratio.
NMO correction is a very efficient way to reduce the maximum time-dip in the recorded data. On the other hand NMO correction tends to stretch the data at large offsets and at early times, effectively changing (lowering) the frequency content in a space- and time-dependent manner. Let us consider the effect of NMO correction on simultaneous source data that have been acquired using e.g. seismic apparition, or similar, principles.
Because of the stretch, it follows that the NMO correction also modifies the apparition encoding filters a(t) in an offset- and time-dependent manner. However, note that such effects can be accurately predicted or modelled from theory and first principles and/or from numerical experiments. For example, if the encoding filters used were pure time delays, then the time delay after NMO correction can be predicted accurately by multiplying with an expression for NMO stretch due to Barnes (1992):
Alternatively, the space-time dependent effect of the NMO correction on encoding filters can be considered by evaluating the effect of NMO correction at t0 on a discrete delta function δ(t−tx) and on a(t)*δ(t−tx), respectively, and computing, e.g., the ratio of the resulting responses in the frequency domain. This yields a time- and offset-dependent frequency filter which can be used to predict the effective modulation function (also time- and offset dependent in general) after NMO correction.
Thus, an effective modulation function takes into account, e.g., the space-time dependent effects of the NMO correction, or any other coordinate transform, on the encoding filters.
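The following sketch illustrates one way such an effective modulation function could be estimated numerically, along the lines described above: a delayed spike and its encoded (time-shifted) counterpart are both NMO-corrected, and the ratio of their spectra yields the time- and offset-dependent filter. The velocity, offset, time shift and the simple interpolation-based NMO correction are all illustrative assumptions of the sketch:

import numpy as np

# Sketch: estimate the effective (time- and offset-dependent) encoding filter
# after NMO correction by NMO-correcting a spike and a time-shifted spike and
# taking the ratio of their spectra.
nt, dt = 2048, 0.002
t = np.arange(nt) * dt

def nmo_correct(trace, offset, v, t_axis):
    # Output sample at zero-offset time t0 is the input sample at sqrt(t0^2 + (offset/v)^2)
    t_src = np.sqrt(t_axis ** 2 + (offset / v) ** 2)
    return np.interp(t_src, t_axis, trace, left=0.0, right=0.0)

def spike(t_axis, t_event, width=0.004):
    return np.exp(-((t_axis - t_event) / width) ** 2)   # band-limited "delta"

offset, v, t0 = 1500.0, 2000.0, 0.8
tx = np.sqrt(t0 ** 2 + (offset / v) ** 2)     # moveout time of the event at this offset
T = 0.012                                     # encoding filter: a pure delay of T seconds

plain = nmo_correct(spike(t, tx), offset, v, t)
shifted = nmo_correct(spike(t, tx + T), offset, v, t)

# Ratio of spectra = effective encoding filter at this (t0, offset) after NMO
A_eff = np.fft.rfft(shifted) / (np.fft.rfft(plain) + 1e-9)
freqs = np.fft.rfftfreq(nt, dt)
band = (freqs > 0.0) & (freqs < 60.0)
delays = -np.unwrap(np.angle(A_eff[band])) / (2.0 * np.pi * freqs[band])
print("effective delay after NMO (s):", float(np.median(delays)))
print("nominal delay before NMO (s):", T)
print("predicted stretch factor t_x/t_0:", round(float(tx / t0), 3))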
The well-known convolution theorem states that convolution in the time or space domain corresponds to multiplication in the temporal frequency or spatial frequency domain, respectively. The lesser-known dual of the convolution theorem states that multiplication in the space domain of d(n) with a so-called modulation operator m(n) corresponds to cyclic convolution of the (discrete) Fourier transform of the data, D(k), with the (discrete) Fourier transform of the modulation operator, M(k) = ℱ(m(n)), followed by an inverse (discrete) Fourier transform. Further, we note that cyclic convolution can be implemented conveniently as a matrix multiplication or using computationally fast methods such as the fast Fourier transform (FFT).
Thus, for general aperiodic modulation functions, the recorded simultaneous source data can be modelled in the frequency-wavenumber domain as the sum of the fk-domain wavefields due to the individual sources, multiplied by one or more corresponding cyclic convolution matrices. Then, the fk-domain wavefields due to the individual sources can be obtained by inverting the data through the model. Note that in this context, here and elsewhere, setting up and solving a system of equations can equally be understood as modelling and inversion, respectively.
Note that the effect of a general aperiodic modulation as compared to a periodic modulation can thus be understood as introducing additional, scaled replications (beyond the replications at (multiples of) +/− the Nyquist horizontal wavenumber) of the individual signal cones of the sources (which describe the known, compact support of the sources) along the wavenumber axis/axes. Both the position and the scaling factor of the replications are then exactly given by the (discrete) Fourier transform of the aperiodic modulation function.
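A minimal numerical check of this cyclic-convolution model (for a single frequency slice and an assumed, arbitrary near-periodic modulation) might look as follows; the circulant matrix plays the role of the cyclic convolution matrix mentioned above:

import numpy as np
from scipy.linalg import circulant

# One frequency slice f(n) of a receiver gather is modulated by an aperiodic
# sequence m(n); its spatial DFT then equals the cyclic convolution of the DFT
# of the data with the DFT of the modulation, here realized as a circulant
# matrix multiplication. All input numbers are arbitrary.
rng = np.random.default_rng(0)
nx = 64
f = rng.standard_normal(nx)                                   # data f(n), one frequency
m = np.where(np.arange(nx) % 2 == 0, 1.0, np.exp(1j * 0.3))   # periodic part
m = m * np.exp(1j * 0.05 * rng.standard_normal(nx))           # plus an aperiodic part

lhs = np.fft.fft(m * f)                    # modulate in space, then DFT

F = np.fft.fft(f)
M = np.fft.fft(m) / nx                     # DFT of the modulation function
C = circulant(M)                           # C[k, l] = M[(k - l) mod nx]
rhs = C @ F                                # cyclic convolution as a matrix multiply

print("max |difference| between the two routes:", float(np.max(np.abs(lhs - rhs))))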
Finally, the S-transform (Stockwell, 1996) decomposes a time-signal into a time-frequency representation, localizing signals in both time and frequency. It provides a frequency-dependent resolution in accordance with the uncertainty principle while maintaining a direct relationship with the Fourier spectrum.
It is possible, then, to use the cyclic convolution principle in conjunction with the S-transform (or similar time-frequency decomposition) and NMO correction to improve the separation of aliased simultaneous source data, acquired, e.g., using seismic apparition principles, in the following manner:
The list of steps is merely included for completeness of the description of a method which improves the separation of aliased simultaneous source data.
NMO correction is a coordinate transformation that reduces the spatial bandwidth of the recorded data and therefore limits the effect of aliasing. We now proceed to discuss methods that use other coordinate transformations, and also how several coordinate transformations can be used simultaneously. Moreover, we also discuss how to make the reconstruction in two steps: first by making partial reconstructions using only the non-aliased parts, and secondly by using these partial reconstructions to regularize and solve the full reconstruction problem by means of directionality estimates, which imply local coordinate transformations specifying directions with reduced bandwidth and, hence, reduced aliasing effects.
Further, to provide a more complete summary of methods for dealing with aliased simultaneous source data, we review the notation and recapitulate the theory for regular seismic apparition. We use the notation
f̂(ξ) = ∫_{−∞}^{∞} f(x) e^{−2πixξ} dx,
for the Fourier transform in one variable, and consequently f̂(ω, ξ) for the Fourier transform of the two-dimensional function f(t, x) with a time (t) and spatial (x) dependence.
Suppose that f_1 = f_1(t, x) and f_2 = f_2(t, x) are two functions with frequency support in two cones of the form
The constraint comes from assuming that the functions f1 and f2 represent the recording of a wavefield at time t at a fixed receiver coordinate, source coordinate x, and fixed depth, where the recorded wave field at this depth is a solution to the homogeneous wave equation with a velocity c. The wavefields are generated at discrete values of x which we assume to be equally spaced, i.e. of the form x=Δxk.
We now assume that the two sources are used simultaneously, in such a way that their mixture takes the form
i.e., the recorded data are now modelled as a sum of the two functions, but where one of them has been subjected to a periodic time shift. In a more general version, more general filtering operations than time shifts can be applied. Let a_k be filter operators (acting on the time variable) where the k dependence is such that it only depends on whether k is odd or even, i.e., a_k = a_{k (mod 2)}.
It can be shown that
Now, due to the assumption of conic support of {circumflex over (f)}1 and {circumflex over (f)}2 it holds that if
then only the terms where k=0 above contribute, and the following simplified relation holds
In a similar fashion it holds for
This implies that for each pair (ω, ξ) satisfying (4), the values of {circumflex over (f)}1(ω, ξ) and {circumflex over (f)}2(ω, ξ) can be obtained by solving the linear system of equations
This provides information on how to recover the wavefields f1 and f2 for frequencies either up to the limit c/(4 Δx), or more generally, satisfying the (diamond shaped) condition (4). The overlaps of the cones are illustrated in
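A hedged sketch of this point-wise solve is given below for one concrete encoding choice, namely a second source whose shots are alternately advanced and delayed by a time shift; the 2×2 encoding matrix used in the sketch follows from that assumption (via exp(−iωΔt(−1)^k) = cos(ωΔt) − i(−1)^k sin(ωΔt)) and is not a quotation of the equations referred to above. All sampling and encoding numbers are arbitrary:

import numpy as np

# For every (omega, xi) in the unaliased region the pair (f1_hat, f2_hat) is
# recovered from the blended spectrum at xi and at xi shifted by the Nyquist
# wavenumber, by solving a small 2x2 linear system point by point.
nt, nx, dt, dx = 256, 64, 0.004, 25.0
rng = np.random.default_rng(1)

def bandlimited_gather(frac=0.2):
    # Random gather whose spatial spectrum is confined to |k| < frac * k_Nyquist
    kxi = np.fft.fftfreq(nx, dx)
    keep = np.abs(kxi) < frac / (2.0 * dx)
    n_keep = int(keep.sum())
    spec = np.zeros((nx, nt), dtype=complex)
    spec[keep] = rng.standard_normal((n_keep, nt)) + 1j * rng.standard_normal((n_keep, nt))
    return np.real(np.fft.ifft(spec, axis=0))

f1, f2 = bandlimited_gather(), bandlimited_gather()

dt_shift = 0.008
W = 2.0 * np.pi * np.fft.rfftfreq(nt, dt)
F2_t = np.fft.rfft(f2, axis=1)
f2_enc = np.zeros_like(f2)
for k in range(nx):                        # shot-dependent +/- time shift on source 2
    f2_enc[k] = np.fft.irfft(F2_t[k] * np.exp(-1j * W * dt_shift * (-1) ** k), n=nt)

d = f1 + f2_enc                            # blended record

def fk(g):                                 # temporal rfft then spatial fft -> (k, omega)
    return np.fft.fft(np.fft.rfft(g, axis=1), axis=0)

D, F1, F2 = fk(d), fk(f1), fk(f2)
rec1, rec2 = np.zeros_like(D), np.zeros_like(D)
good_w = np.abs(np.sin(W * dt_shift)) > 1e-2     # frequencies where the encoding acts
for ik in range(nx):
    ik_shift = (ik + nx // 2) % nx               # index shifted by the Nyquist wavenumber
    for iw in np.nonzero(good_w)[0]:
        c = np.cos(W[iw] * dt_shift)
        s = -1j * np.sin(W[iw] * dt_shift)
        A = np.array([[1.0, c], [0.0, s]])
        b = np.array([D[ik, iw], D[ik_shift, iw]])
        rec1[ik, iw], rec2[ik, iw] = np.linalg.solve(A, b)

mask = (np.abs(F1) + np.abs(F2) > 1e-8) & good_w[None, :]
print("relative error f1:", float(np.linalg.norm((rec1 - F1)[mask]) / np.linalg.norm(F1[mask])))
print("relative error f2:", float(np.linalg.norm((rec2 - F2)[mask]) / np.linalg.norm(F2[mask])))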
An alternative approach for reconstruction, is by noting that if either of the support constraints (1) or (4) are satisfied, then it holds that for the values of (ω, ξ) of interest that (3) reduces for instance to
implying that {circumflex over (f)}2(ω, ξ) can be recovered from
In a similar fashion, {circumflex over (f)}1(ω, ξ) can be recovered from
In this way, the deblending can be achieved by direct consideration of the data in the shifted cones illustrated in
From (6) it is possible to recover the functions f1 and f2 partially. Let w be a filter such that ŵ has support inside the region described by (4). It is then possible to recover
h_1 = w * f_1,   h_2 = w * f_2.   (7)
For values of (ω, ξ) outside the region described by (4), it is not possible to determine {circumflex over (f)}1(ω, ξ) and {circumflex over (f)}2 (ω, ξ) uniquely without imposing additional constraints. Typically, seismic data can locally be well described by sums of plane waves with different directions. The plane waves carry the imprint of the source wavelet, and according to ray theory the data from such a plane event should have the same directionality for the frequency range that covers the source wavelet. We can use this information to construct a directionality penalty that we can use for the separation of the two wavefields f1 and f2 from the blended data d. This directionality penalty is equivalent to, by means of local coordinate transformations, imposing a bandwidth limitation in undesired directions for the purpose of suppressing aliasing.
One way of estimating local directionality is by means of so-called structure tensors. For the two known wavefields h1 and h2 the corresponding structure tensors are defined as
and similarly for T_2 and h_2. Above, the function K describes a smooth, localizing window, for instance a Gaussian. The eigenvalues of T_1 and T_2 will point-wise describe the local energy in the directions of maximum and minimum variation, and the associated eigenvectors contain the corresponding directions. The tensors are computed as element-wise convolutions of the outer product of the gradient of the underlying function, and this directly defines the generalization to higher dimensions. For the sake of simplicity, we describe here the two-dimensional case.
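A minimal sketch of such a structure-tensor computation (with an assumed Gaussian window width and unit grid spacing) is given below; the eigen-decomposition provides the local directions and energies referred to in the text:

import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(h, sigma=2.0):
    # h: 2-D array h(t, x), e.g. a bandpass-filtered partial reconstruction.
    # sigma: width (in grid points) of the Gaussian window K - an assumed value.
    ht, hx = np.gradient(h)                   # partial derivatives along t and x
    T11 = gaussian_filter(ht * ht, sigma)     # element-wise smoothing of the
    T12 = gaussian_filter(ht * hx, sigma)     # outer product of the gradient
    T22 = gaussian_filter(hx * hx, sigma)
    T = np.stack([np.stack([T11, T12], -1), np.stack([T12, T22], -1)], -2)
    eigval, eigvec = np.linalg.eigh(T)        # ascending eigenvalues per (t, x) point
    return eigval, eigvec

# Usage on a synthetic plane wave with a single local dip
t = np.linspace(0.0, 1.0, 200)[:, None]
x = np.linspace(0.0, 1.0, 150)[None, :]
h1 = np.sin(2.0 * np.pi * 15.0 * (t - 0.3 * x))
eigval, eigvec = structure_tensor(h1)
# The smallest eigenvalue is ~0 where the energy is locally one-directional
print("median eigenvalue ratio (small/large):",
      float(np.median(eigval[..., 0] / (eigval[..., 1] + 1e-12))))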
Let s_1^1(t, x) and s_2^1(t, x) be the eigenvalues of T_1(t, x), and let e_1^1(t, x) and e_2^1(t, x) denote the corresponding eigenvectors. If the wavefield f_1 only has energy in one direction in the vicinity around (t, x) covered by K, then this implies that
s_2^1(t, x) = 0,
which in turn means that
∇f_1 · e_2^1 = 0.   (8)
The eigenvectors e_1^1(t, x) and e_2^1(t, x) define local coordinate transformations that describe directions of large and small variation. Along the directions specified by e_2^1(t, x), only low-frequency components should be dominant, and suppressing the bandwidth of the reconstructions in these directions is an efficient way of de-aliasing the separated sources.
This property (8) is clearly not always satisfied (although counterparts in higher dimensions hold more frequently with increased dimensionality); however, it is a property that can be used as a penalty with which the blended data can be deblended. Even if (8) is not satisfied, the relation can be used to minimize the energy of the deblended data in the directions carried over from h_1 and h_2, respectively.
From (8) we have a condition on the gradients of f_1 and f_2 when one of the eigenvalues vanishes. For the more general case, we need to formulate a penalty function that can deal with the cases where the components change gradually; at places where the eigenvalues are equal in size, an equal amount of penalty should be used for the two directions. One such choice is to define
These functions have the property that
implying that (8) will be satisfied in the case where there is locally only energy in one direction, and where an equal amount of penalty will be applied in the case where there is the same amount of energy in both directions. Note that the local coordinate transformations are now implicitly given in the operator S.
This definition now allows for the generalization of (8) to penalty functionals
∫∫ ((∇f_1)^T S(T_1) ∇f_1)(t, x) dt dx,
and
∫∫ ((∇f_2)^T S(T_2) ∇f_2)(t, x) dt dx,
for the two wavefields. The expressions above describe the energy in the undesirable direction, given the knowledge of the bandpass-filtered versions h_1 and h_2, respectively. The de-aliasing now takes place by penalizing high frequencies (through the derivatives) along the directions given by the local coordinate transformations specified by e_1^1(t, x) and e_2^1(t, x).
Before we use these expressions to define a minimization problem that describes the deblending procedure, we incorporate the original cone condition (1) in the formulation. To this end, we will now work with sampled representations of {circumflex over (f)}1 and {circumflex over (f)}2. In the following, redefining the notation, we will also use {circumflex over (f)}1 and {circumflex over (f)}2 to denote these sampled values.
We let ℱ*_c denote the inverse Fourier operator that is restricted to functions supported on the cone defined by (1). Recall the definition of the apparition operator T from (2). The relationship (2) is then satisfied for (the non-unique) solutions to
with the additional constraint that {circumflex over (f)}1 and {circumflex over (f)}2 have support on the cone defined by (1). To obtain a unique approximate solution, we now add the directionality penalties and consider
with the same cone constraint. To find the minima of (9), we compute the Fréchet derivatives of the objective function (9) with respect to the functions f̂_1 and f̂_2 and equate them to zero, as they must be at a minimum. The first term in (9) is straightforward to differentiate, and for the other two terms it is readily verified using integration by parts that their Fréchet derivatives are described by the elliptic operators
D_m(f) = ∇(S(T_m)∇f)
To formulate the solution to (9), let
b_1 = ℱ*_c d,   b_2 = ℱ*_c T d,
Furthermore, introduce
Equating the Fréchet derivatives of (9) with respect to f̂_1 and f̂_2 to zero then yields the linear relationship
for the solution of (9). This equation can be solved using an iterative solver for linear equations, for instance the conjugate gradient method. The operators in A_F are realized using standard FFTs, and the operators in A_D are computed using a combination of Fourier transforms and differential schemes, which may also be implemented using FFTs. The operator A_F describes the fit to the data, while the operator A_D describes the de-aliasing that takes place using the local coordinate transformations induced from e_1^1(t, x) and e_2^1(t, x).
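The sketch below illustrates the overall structure of such a solve, i.e. a data-fit operator plus an elliptic smoothing penalty inverted with the conjugate gradient method. The specific operators, regularization weights and test data are stand-ins chosen for the illustration and do not reproduce the operators A_F and A_D defined above:

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Structural illustration only: a "data fit + smoothing penalty" normal system
# solved with conjugate gradients. F is a subsampled inverse Fourier operator,
# D a second-difference penalty standing in for the elliptic directional
# operators; a small Tikhonov term is added for conditioning.
n = 256
rng = np.random.default_rng(2)
x_true = np.zeros(n)
x_true[20], x_true[75] = 1.0, -0.7                         # two spectral "events"
keep = np.sort(rng.choice(n, size=n // 2, replace=False))  # observed time samples

def F(xhat):                 # spectrum -> observed samples
    return np.fft.ifft(xhat, norm="ortho")[keep]

def FH(y):                   # adjoint of F
    full = np.zeros(n, dtype=complex)
    full[keep] = y
    return np.fft.fft(full, norm="ortho")

def D(xhat):                 # simple symmetric second-difference penalty
    return np.convolve(xhat, [-1.0, 2.0, -1.0], mode="same")

lam, eps = 1e-3, 1e-3
d = F(x_true) + 0.01 * rng.standard_normal(n // 2)

A = LinearOperator((n, n), matvec=lambda z: FH(F(z)) + lam * D(z) + eps * z, dtype=complex)
b = FH(d)
x_rec, info = cg(A, b, maxiter=500)
print("cg info (0 = converged):", info)
print("largest recovered peak at index:", int(np.argmax(np.abs(x_rec))), "(true peaks at 20 and 75)")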
As seen, for the direct deblending using seismic apparition to work, it is required that the energy is contained either in a combination of the cone constraint (1) along with the condition |ω|<c/(4Δx), or in the larger domain determined by (4). Given the characteristics of seismic data, improvements to these constraints can sometimes be obtained by changes of coordinates. The simplest case where this can be utilized is a parabolic change of coordinates. This would apply in cases where the two source locations are close to each other. Let E_k describe the coordinate map
t = E_k(τ) = τ + qx²,
for some choice of parameter q. Now, let
d^par(τ, k) = d(E_k(τ), k) = (ℱ*(f̂_1))(τ+qx², kΔx) + (ℱ*(f̂_2))(τ−qx², kΔx),
and let
f_1^par(τ, x) = f_1(τ+qx², x) and f_2^par(τ, x) = f_2(τ+qx², x).
Clearly, it holds that
(ℱ*(f̂_2))(τ+qx², kΔx) = a_k f_2(τ+qx²−Δt(−1)^k, kΔx) = a_k f_2^par(τ−Δt(−1)^k, kΔx) = (ℱ*(f̂_2^par))(τ, kΔx).
It then holds that
d^par(τ, k) = (ℱ*(f̂_1^par))(τ, kΔx) + (ℱ*(f̂_2^par))(τ, kΔx),
and the regular apparition technique can now be directly applied to d^par(τ, k) to deblend f_1^par and f_2^par and hence also f_1 and f_2.
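A small sketch of this parabolic change of coordinates is shown below; the curvature parameter q, the synthetic event and the interpolation scheme are illustrative assumptions:

import numpy as np

# Sketch of the parabolic change of coordinates t = tau + q*x**2: each trace is
# re-interpolated so that an event with (approximately) parabolic moveout is
# flattened, narrowing its spatial bandwidth before the regular apparition
# processing.
nt, nx, dt, dx = 500, 80, 0.004, 25.0
t = np.arange(nt) * dt
x = np.arange(nx) * dx
q = 2.0e-7                                   # s/m^2, assumed curvature parameter

d = np.zeros((nx, nt))                       # synthetic event with parabolic moveout
for n in range(nx):
    d[n] = np.exp(-((t - (0.4 + q * x[n] ** 2)) / 0.02) ** 2)

def to_parabolic(data, q, t_axis, x_axis):
    # d_par(tau, k) = d(tau + q * x_k**2, k), evaluated by linear interpolation
    out = np.zeros_like(data)
    for n, xn in enumerate(x_axis):
        out[n] = np.interp(t_axis + q * xn ** 2, t_axis, data[n], left=0.0, right=0.0)
    return out

d_par = to_parabolic(d, q, t, x)

def total_moveout(data):                     # spread of the event's peak time over shots
    peaks = t[np.argmax(data, axis=1)]
    return float(peaks.max() - peaks.min())

print("moveout across the gather before the transform:", total_moveout(d), "s")
print("moveout across the gather after the transform :", total_moveout(d_par), "s")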
Next, still summarizing methods that can improve the separation of aliased simultaneous source data, we consider the situation where aliasing effects can be reduced for f1 and f2 when exposed to two different sets of coordinate transformation. Let
E_1^k: ℝ → ℝ, and E_2^k: ℝ → ℝ,
be continuous bijective coordinate transformations, and let
f_1^n(τ, kΔx) = f_1(E_1^k(τ), kΔx), and f_2^n(τ, kΔx) = f_2(E_2^k(τ), kΔx).
The purpose of the coordinate transformations is to make the effective support of f̂_1^n and f̂_2^n smaller than that of f̂_1 and f̂_2, or at least to ensure that the energy content contained in regions of the form (4), relative to the total energy, is higher for the functions f̂_1^n and f̂_2^n than for f̂_1 and f̂_2. In this way the aliasing effects when making reconstructions with limited frequency support will hopefully be reduced.
However, the reduction of the aliasing effect for, say the first source, may have a negative influence on the aliasing of the second source when applying the coordinate transformations to blended data. In many cases, the data cannot be separated directly, but the effects on both the sources need to be taken into account. The interplay between the two sources and the coordinate transformations can be modelled by means of linear systems, and by solving these systems we can obtain de-aliased deblending.
Recall that in the regular apparition setup, the functions f1 and f2 could be recovered using either the central frequency cone or one of the aliased versions. In the case where the central cone was used, a (2×2) system of equations has to be solved for the recovery of the point-wise values of {circumflex over (f)}1 (ω, ξ) and {circumflex over (f)}2 (ω, ξ), while when considering the shifted cones, the values of {circumflex over (f)}1 (ω, ξ) and {circumflex over (f)}2 (ω, ξ) were obtained directly by a correction factor. In the presence of the coordinate transformations, neither one of the solutions will be directly applicable. However, it will still be the case that most of the energy (excluding induced aliasing effects from the opposite coordinate map) in the shifted cones will be due to only one of the two sources. We will make use of this fact, in combination with the fact that the sum of the two representations should equal the measured data, and in this way set up a linear system of equations for the solution of the de-aliased-deblending problem.
To simplify the presentation, let us assume in this section that the filter operators ak are identity operators. Let us introduce
d_1^n(τ, k) = f_1(E_1^k(τ)+Δt(−1)^k, kΔx) + f_2(E_1^k(τ), kΔx)
and
d_2^n(τ, k) = f_1(E_2^k(τ), kΔx) + f_2(E_2^k(τ)−Δt(−1)^k, kΔx)
By assuming sufficiently dense sampling in time, d1n and d2n can be computed by resampling of the measured data d. We now seek representations of d1n and d2n using the (non-aliased) Fourier representations {circumflex over (f)}1n and {circumflex over (f)}2n. These representations read
d_1^n(τ, k) = (ℱ*f̂_1^n)((E_1^k)^{−1}(E_1^k(τ)+Δt(−1)^k), kΔx) + (ℱ*f̂_2^n)((E_2^k)^{−1}(E_1^k(τ)), kΔx)
and
d_2^n(τ, k) = (ℱ*f̂_1^n)((E_1^k)^{−1}(E_2^k(τ)), kΔx) + (ℱ*f̂_2^n)((E_2^k)^{−1}(E_2^k(τ)−Δt(−1)^k), kΔx)
respectively. Suppose now that τ is sampled discretely at equally spaced points, specifically
with a sampling in the corresponding frequency variable
where (ΔτΔΩ)^{−1} = N_τ. Also, let Ξ(k′) be an equally spaced sampling of the spatial frequency parameter ξ.
Let us introduce the coordinates
τ_{m,k}^{11} = (E_1^k)^{−1}(E_1^k(τ_m)+Δt(−1)^k),
τ_{m,k}^{12} = (E_2^k)^{−1}(E_1^k(τ_m)),
τ_{m,k}^{21} = (E_1^k)^{−1}(E_2^k(τ_m)),
τ_{m,k}^{22} = (E_2^k)^{−1}(E_2^k(τ_m)−Δt(−1)^k).
We seek discrete Fourier representations of d_1^n(τ, k) and d_2^n(τ, k), respectively. For these representations, we want to include only the frequencies that satisfy the condition (4) or something similar. To this end, we introduce the mapping ι: ℕ → ℤ² that indexes the points (Ω(j′), Ξ(k′)), i.e., such that all the points (Ω(ι(l)), Ξ(ι(l))) satisfy a condition similar to (4).
The Fourier representations can then be expressed using the element-wise defined Fourier matrices
ℱ_{11}(l; (j,k)) = e^{2πi(Ξ(ι(l))kΔx + Ω(ι(l))τ_{j,k}^{11})},
ℱ_{12}(l; (j,k)) = e^{2πi(Ξ(ι(l))kΔx + Ω(ι(l))τ_{j,k}^{12})},
ℱ_{21}(l; (j,k)) = e^{2πi(Ξ(ι(l))kΔx + Ω(ι(l))τ_{j,k}^{21})},
ℱ_{22}(l; (j,k)) = e^{2πi(Ξ(ι(l))kΔx + Ω(ι(l))τ_{j,k}^{22})}.
Again, redefining the notation, we now use {circumflex over (f)}1n and {circumflex over (f)}2n also for the discrete representations, and similarly for d1n and d2n. We then have that
d_1^n = ℱ_{11} f̂_1^n + ℱ_{12} f̂_2^n,
and
d_2^n = ℱ_{21} f̂_1^n + ℱ_{22} f̂_2^n.
The next step is now to provide a formulation where the contributions from the two sources can be separated. As a first step, we will make use of the fact that the energy present in the shifted Fourier cones of d_1^n and d_2^n corresponds primarily to energy from f_1^n and f_2^n, respectively. We define ℱ_0 element-wise by
ℱ_0(l; (j,k)) = e^{−2πi(Ξ(ι(l))kΔx + Ω(ι(l))τ_j)}.
Recall that a frequency shift in representations using the discrete Fourier transform can be represented by a multiplication by (−1)^k. Hence, define
S f(τ, k) = (−1)^k f(τ, k)
as the operator that shifts information to the cones of interest. This implies that f̂_2^n is almost in the null-space of the mapping
(f̂_1^n, f̂_2^n) ↦ ℱ_0 S(ℱ_{11} f̂_1^n + ℱ_{12} f̂_2^n),
and similarly, f̂_1^n is almost in the null-space of the mapping
(f̂_1^n, f̂_2^n) ↦ ℱ_0 S(ℱ_{21} f̂_1^n + ℱ_{22} f̂_2^n);
or in other words, the mapping
(f̂_1^n, f̂_2^n) ↦ (ℱ_0 S(ℱ_{11} f̂_1^n + ℱ_{12} f̂_2^n), ℱ_0 S(ℱ_{21} f̂_1^n + ℱ_{22} f̂_2^n))
is almost block-diagonal. The matrix describing the map above will be square, and hence an iterative method such as the generalized minimal residual method (GMRES) could be applied to this problem, given the right hand side
However, using this information by itself means some loss of control in the residual to the data fit. To counteract that effect, we may also include an error term for the fit to the measured data. An overdetermined system for this case then reads
Let us end by discussing how to choose the coordinate transformation when dealing with NMO for different parameterizations. Let us define the Heaviside function by
and define sign(t) = 2Θ(t) − 1. Let us define the NMO mapping E_{τ_0,t_0,q}^x by
t = E_{τ_0,t_0,q}^x(τ) = t_0 + sign(τ−τ_0)·sqrt((τ−τ_0)² + q(x−x_0)²) + 2Θ(τ−τ_0)·sqrt(q)·|x−x_0|.   (12)
This mapping is bijective with inverse
This coordinate transformation is chosen such that
is continuous.
Note that the presented methods of using coordinate transformations could favourably be applied simultaneously, where expected known aliasing effects can be reduced using for instance the NMO mappings mentioned above, along with a data driven de-aliasing part that uses the local coordinate transformations for the directionality penalties.
In yet another example of a method to improve the dealiasing of simultaneous source data, if we apply an NMO correction to simultaneous source data acquired using seismic apparition principles, one of the sources, say source A, will only produce a signal cone at k=0 (i.e., at zero horizontal wavenumber), which is now more compactly supported as a result of the NMO correction. However, the other source, say source B, will produce two more compactly supported signal cones centered at k=0 and k=k_N. We can therefore isolate the signal cone at k=k_N (i.e., at the Nyquist horizontal wavenumber) and apply inverse NMO correction to achieve approximate separation of the wavefield due to source A. Such approximate separation methods could yield acceptable results, particularly when combined with other methods such as least-squares adaptive subtraction (Dragoset, 1995).
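The sketch below illustrates on synthetic data the key observation behind this approximate separation: after NMO correction, the energy near the Nyquist wavenumber of the blended, apparition-encoded gather stems almost entirely from the encoded source, while the un-encoded source contributes essentially nothing there. Velocity, geometry and the encoding time shift are assumed values:

import numpy as np

# Synthetic check: after NMO correction, the energy near the Nyquist wavenumber
# of the blended gather comes (almost) only from the apparition-encoded source.
nt, nx, dt, dx = 1024, 96, 0.004, 25.0
t = np.arange(nt) * dt
x = np.arange(nx) * dx
v_nmo, dt_shift = 2000.0, 0.012

def hyperbolic_event(t0, shift_per_shot=lambda n: 0.0):
    g = np.zeros((nx, nt))
    for n in range(nx):
        tx = np.sqrt(t0 ** 2 + (x[n] / v_nmo) ** 2) + shift_per_shot(n)
        g[n] = np.exp(-((t - tx) / 0.02) ** 2)
    return g

f_A = hyperbolic_event(0.6)                                   # un-encoded source A
f_B = hyperbolic_event(0.9, lambda n: dt_shift * (n % 2))     # encoded source B
d = f_A + f_B                                                 # blended record

def nmo(data):
    out = np.zeros_like(data)
    for n in range(nx):
        t_src = np.sqrt(t ** 2 + (x[n] / v_nmo) ** 2)
        out[n] = np.interp(t_src, t, data[n], left=0.0, right=0.0)
    return out

def energy_near_nyquist(data, frac=0.25):
    K = np.fft.fft(data, axis=0)
    k = np.fft.fftfreq(nx, dx)
    band = np.abs(np.abs(k) - 1.0 / (2 * dx)) < frac / (2 * dx)
    return float(np.sum(np.abs(K[band]) ** 2))

print("near-Nyquist energy after NMO, blended         :", energy_near_nyquist(nmo(d)))
print("near-Nyquist energy after NMO, source A alone  :", energy_near_nyquist(nmo(f_A)))
print("near-Nyquist energy after NMO, encoded source B:", energy_near_nyquist(nmo(f_B)))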
Alternatively, one could use an apparition function that does not involve a time shift (e.g., A=0.5). In this case, the apparition encoding function will be “immune” to the stretch effect of NMO correction such that the described non time-variant separation methods can be used. Yet another option to continue using the described non time-variant separation methods would be to use stretch-free NMO operators which have been proposed previously (e.g., Trickett, 2003; Zhang et al., 2012).
Another method to handle aliased data when using, e.g., time shifts as apparition encoding functions is to use NMO and create a look-up table of many solutions for different time shifts. We then piece together the solution sample by sample, choosing for each sample in time and space the solution corresponding to the appropriate time shift, taking into account both the original time shift and the distortion introduced by the NMO correction.
We note that (source) ghost reflections may also be considered simultaneous sources and, e.g., the methods summarized above for improved separation in the presence of aliasing also apply to the problem of (source) deghosting by simultaneous source deblending in the presence of aliasing.
To understand source deghosting by simultaneous source deblending, consider the following example: a line of ghost sources with alternating depths such as shown in
p̃_g(n_x, n_y) = m_1(n_x, n_y) * p_g(n_x, n_y),   (13)
where the lower-case variables denote expressions in the time-space domain, n_x is the shot number (spaced uniformly along a line) in the x-direction, n_y is the number of the parallel shot line, and m_1(n_x, n_y) is the modulating function:
The function a is a redatuming operator which is both a temporal and spatial filter [hence the convolution in space in equation (13)], which can be written in the frequency-wavenumber domain as:
where Δz=z2−z1. The action of a can then be obtained from the relation
The above equation (15) may be considered as a general example of a possible mathematical description of a redatuming operator, here in the frequency-wavenumber domain. In the frequency-wavenumber domain we obtain the following expression for the ghost wavefield using capital letters for the variables in the frequency-wavenumber domain:
which follows from a standard Fourier transform result (wavenumber shift; Bracewell, 1999).
Equation (16) shows that data from (staggered) ghost sources {tilde over (p)}g(nx, ny) will be scaled and replicated or partitioned into two places in the spectral domain. Part of the data will remain in a cone centred around k=0 with the limits of the cone defined by the slowest propagation velocity of the wavefield in the medium and part of the data will be scaled and replicated or partitioned into a signal cone centered around kN along the kx-axis with kN denoting the Nyquist wavenumber representing the sampling frequency of the data. In other words, part of the data will remain at the signal cone centered around k=0 (denoted by H+ in
The carpet of desired (non-ghost) sources (solid black stars in
p̃_d(n_x, n_y) = m_2(n_x, n_y) * p_d(n_x, n_y),   (17)
where m2 (nx, ny) is the modulating function:
Again, the function b is a redatuming operator in the opposite direction compared to a such that:
B(ω, k_x, k_y) = A^{−1}.   (19)
In the frequency-wavenumber domain we therefore obtain the following expression:
Again, equation (20) shows that the data from the (staggered) desired sources {tilde over (p)}d(nx, ny) will be scaled and replicated or partitioned into two places in the spectral domain. Part of the data will remain at the signal cone centred around k=0 and part of the data will be scaled and replicated or partitioned to a signal cone centered around kN along the kx-axis. In other words, part of the data will remain at the signal cone centered around k=0 (denoted by H+ in
One method of source deghosting uses equations (16) and (20). Equations (16) and (20) tell us what the mix of desired and ghost sources is that occurs around k=0 (in the following denoted by F_0):
In addition, equations (16) and (20) also tell us what the mix of desired and ghost sources is that occurs around k_N along the k_x-axis (in the following denoted by F_N):
Equations (21) and (22) can be combined to give an expression for the wavefield of interest emitted from the desired source (i.e., the source-side deghosted wavefield):
Since the operator A is known from equation (15), the deghosted wavefield (i.e., the wavefield of interest) can be computed explicitly.
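The following sketch illustrates the explicit point-wise solve once the two spectral mixtures and the redatuming operator are available. The mixing coefficients of F0 and FN used below are simple stand-ins chosen for the demonstration and are not the patent's equations (21) and (22); the redatuming operator is taken as the usual phase shift exp(i k_z Δz), and all numerical values are assumptions:

import numpy as np

# Algebraic sketch of the explicit deghosting step: per (omega, kx) point, the
# desired and ghost wavefields are recovered from the two spectral mixtures F0
# and FN once the redatuming operator A is known.
c, dz = 1500.0, 6.0                               # water velocity, depth difference
nw, nk = 128, 64
omega = 2.0 * np.pi * np.linspace(1.0, 60.0, nw)  # angular frequencies
kx = np.linspace(-0.015, 0.015, nk)               # horizontal wavenumbers (cycles/m)
W, KX = np.meshgrid(omega, kx, indexing="ij")
kz = np.sqrt(np.maximum((W / c) ** 2 - (2.0 * np.pi * KX) ** 2, 0.0))
A = np.exp(1j * kz * dz)                          # redatuming phase shift, |A| = 1

rng = np.random.default_rng(3)
Pd = rng.standard_normal((nw, nk)) + 1j * rng.standard_normal((nw, nk))   # desired
Pg = rng.standard_normal((nw, nk)) + 1j * rng.standard_normal((nw, nk))   # ghost

F0 = 0.5 * (Pd + A * Pg)          # assumed mixture around k = 0
FN = 0.5 * (Pd - A * Pg)          # assumed mixture around k = k_N

Pd_rec = F0 + FN                  # point-wise 2x2 inversion (here: sums/differences)
Pg_rec = (F0 - FN) / A
print("max |error|, desired wavefield:", float(np.max(np.abs(Pd_rec - Pd))))
print("max |error|, ghost wavefield  :", float(np.max(np.abs(Pg_rec - Pg))))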
Note that A can be determined accurately for a 3D acquisition geometry. If the cross-line spacing is coarse or if only a single line of data is acquired, it may be necessary to resort to a 2D approximation of A.
Note that in this method for source deghosting we have not included any information about the reflection coefficient of the surface.
We can also make use of one more equation that relates the ghost sources to the desired sources:
P_d = −C P_g   (24)
where C is a redatuming operator that depends on the depth of the shallow sources z_1, but that is related to the above defined operators A and B and to knowledge or assumptions concerning the reflection coefficient of the free-surface reflector (e.g., the sea surface), which is taken to be −1 in equation (24):
C = A^{2z_1/Δz}.   (25)
Equations (16), (20), (24) and (25) therefore allow us to isolate the wavefields due to the (virtual) ghost sources and/or the desired (physical) sources separately without knowing the redatuming operators A, B or C. In fact, the system of equations also allows us to solve for the redatuming operator itself. Knowledge of such redatuming operators is important in many applications in seismic data processing, such as in imaging of seismic data or in removing multiples in the data, for instance. In this case, we obtain the following expression for the mix of desired and ghost sources that occurs around k=0 (again denoted by F_0):
The following expression is obtained for the mix of desired and ghost sources that occurs around k_N along the k_x-axis (again denoted by F_N):
Equations (26) and (27) can be solved explicitly or iteratively for arbitrary values of z_1 and Δz, such that we no longer need any information about what the redatuming operator looks like and instead can solve for the desired wavefield due to the physical sources only, and/or the wavefield due to the (virtual) ghost sources, and/or the redatuming operator.
Also in the source deghosting methods summarized above, for the direct deghosting by deblending to work it is required that the energy is contained in either a combination of a cone constraint such as (1) along with a condition such as |ω|<c/(4Δx), or in a larger domain such as determined by (4). Again, improvements to these constraints can be obtained by various changes of coordinates and considering their effect on the deblending as shown above, and, in some cases, even by using NMO or parabolic moveout corrections without considering their effect on the deblending, but instead using the unmodified deblending expressions. Note that the cone-shaped supports or constraints and the related diamond-shaped constraints in the frequency-wavenumber domain take on other shapes and forms in other transform domains and it should be understood when we refer to the support or the constraints as being so shaped in the fk-domain that this includes using the corresponding other shapes and forms in the other well-known transform domains as generally used in seismic data processing.
Finally, note that in the following we also refer to virtual ghost sources as effective simultaneous sources, as opposed to real simultaneous sources which we reserve for separate physical sources.
Following this summary of methods enabling acquisition and processing of perturbed and/or aliased simultaneous source or ghosted data, we now present the new and/or improved methods for acquiring and separating simultaneous-source data acquired with general, but related, sampling grids, which increase the efficiency and/or the quality with which sources can be simultaneously acquired and separated. Key observations shaping the present invention are that: (1) no information about the relative source locations is required for encoding and decoding using the apparition principles described above, (2) no information about the source spatial sampling interval is required, other than for defining the support of the wavefields due to the individual sources relative to the Nyquist wavenumber (effective Nyquist wavenumber, in the case of irregular sampling), (3) the order in which the shot positions are traversed and activated is immaterial as long as all the sources traverse and activate at the shot positions in the same order, i.e., they acquire the source locations synchronously, and (4) Fourier transforms for data sampled at regular locations are just one type of transform that possesses shift (for periodic modulation) and cyclic convolution (for general aperiodic modulation) properties.
A Bravais grid in two dimensions is a set of points of the form
B_A = {Ak, k = (k_1, k_2) ∈ ℤ²},
where A is a 2×2 matrix. Associated with this grid is the dual grid
B_A^⊥ = {x ∈ ℝ² : ⟨x, y⟩ ∈ ℤ ∀ y ∈ B_A}.
It can be verified that B_A^⊥ = B_{A^{−T}}.
Important examples include the regular quadratic grid for which
and the hexagon (rhombic) grid generated from
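The sketch below constructs such Bravais grids numerically and checks that the grid generated by A^{−T} is indeed dual to B_A (all pairwise inner products are integers). The quadratic and hexagonal generator matrices used here are assumed examples, since the specific matrices of the text are not reproduced:

import numpy as np
from itertools import product

# Construct Bravais grids B_A = {A k : k in Z^2} and verify numerically that
# the grid generated by the inverse transpose A^{-T} is dual to B_A.
A_square = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
A_hex = np.array([[1.0, 0.5],
                  [0.0, np.sqrt(3.0) / 2.0]])

def grid_points(A, radius=3):
    ks = np.array(list(product(range(-radius, radius + 1), repeat=2)))
    return ks @ A.T                                 # each row is A @ k

def dual_is_generated_by_inverse_transpose(A, tol=1e-9):
    B = grid_points(A)
    B_dual = grid_points(np.linalg.inv(A).T)        # candidate dual grid B_{A^{-T}}
    inner = B @ B_dual.T                            # all pairwise inner products
    return bool(np.all(np.abs(inner - np.round(inner)) < tol))

for name, A in [("quadratic", A_square), ("hexagonal", A_hex)]:
    print(name, "grid: dual generated by A^{-T}?", dual_is_generated_by_inverse_transpose(A))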
We will describe how the apparition principle works for Bravais grids. First, note that from the standard rules of the Fourier transform it holds that
Recall the two-dimensional Poisson sum formula
Using the Poisson sum formula in combination with the Fourier relation above, we have that
Now, let l∈BA and let m∈BA⊥. Each such element can thus be indexed as lk and mk,
respectively. Moreover, choose
for 0≤n1<N1 and 0≤n2<N2 where n1, n2, N1 and N2 are non-negative integers. The Poisson sum can then be written
We now introduce the notation
m_n^N = A^{−T} ξ_n,
which allows us to rewrite the above relation as
Note that if f has such compact support that
f(lk+b)=0,
if
k ∉ {(n_1, n_2) ∈ ℤ² : 0 ≤ n_1 < N_1, 0 ≤ n_2 < N_2},
then the left hand sum above reduces to a regular two-dimensional discrete Fourier transform which can be rapidly evaluated using FFT.
We now turn our attention to the apparition setup. Let P = {(p_1, p_2) ∈ ℤ² : 0 ≤ p_1 < P_1, 0 ≤ p_2 < P_2} and let N_P be equal to the number of elements in P. We will also use the notation mod P to denote the modulus operator with respect to the set P.
Suppose that measurements are made using the N_P functions f_p: ℝ³ → ℝ. Moreover, let h_{pq} be N_P² filter functions, for instance time-shift operators. We now assume that combinations of the N_P functions f_p are measured at grid points
At each such point, linear combinations of the form
d(k, q, t) = Σ_{p∈P} (h_{pq} * f_p(l_{k,p}, ·))(t),
are measured during the acquisition. A Fourier transformation with respect to the time variable thus gives
and an additional Fourier series expansion of {circumflex over (d)}t yields
Assume now that all of f̂_p(ξ_1, ξ_2, ω) for each fixed ω have support in some domain Ω such that the set
can be mapped to some domain such that
Ω∩BA⊥⊂.
Specifically, this means that there exists a mapping ι: ℤ⁴ → ℤ² such that
In this case it follows that for each ζ∈Ω∩BA⊥ it is possible to obtain NP linear equations for the NP unknowns {circumflex over (f)}p(ζ, ω) by a suitable (as above) remapping of the left hand expression of (28) with n1 and n2 varying over intervals of length N1P1 and N2P2, respectively.
Hexagonal sampling: A simple example of apparition for hexagonal sampling is shown in
The setup can be generalized by incorporating additional coordinate transformations. The simplest such class of transformations concerns individual affine maps for each of the functions. In this case the expression A in (28) is replaced by individual matrices Ap as well as modulation factors corresponding to shifts in the affine maps. The relations (28) now still relate the Fourier transform of the encoded data d on a Bravais grid, with the Fourier transforms of the underlying functions fp on individual Bravais grids.
An even more general setup can be obtained by considering arbitrary coordinate transformations (diffeomorphisms) Rp and consider samplings at grids of the type Rp (k). Such grids could for instance include unintentional curvature caused in the acquisition process or for instance coil sampling. Instead of considering the functions fp directly, we can consider new functions of the form
f
p(Rp(k))=gp(Ak).
The relation (28) can now be derived for the functions gp. If the maps Rp are mild in some sense (like in the cases mentioned above), then properties of support hold approximately (using essential support) for the remapped functions.
Thus, the term “synchronous shooting” or “synchronous activation” will be used in the context of: a way of acquiring data with multiple sources where the relative locations of the consecutive shots of each source trace out the same or an affinely transformed version of a spatial pattern (i.e., a grid), including Bravais sampling grids, as a function of time, and where the sources are activated near-simultaneously and one or more source activation parameters are varied (a)periodically across the spatial pattern of shots.
Further, the term “multi-scale apparition” will be used in the context of: synchronous shooting but where the (regular) spatial pattern traced out as a function of time by the relative locations of consecutive shots of each of the sources is a scaled version of the (rotated or skewed) spatial pattern of the other sources.
Note that in synchronous shooting it is the relative locations of consecutive shots (not sources) that form the same or an affinely transformed version of the spatial pattern (or grid). Hence the shot positions of the individual sources will, in general, be different and the actual shot positions of one source can be related to the actual shot positions of another source by, in the more general case, affine transformations.
Similarly, for the case of multi-scale apparition, the actual shot positions of one source can be related to the actual shot positions of another source by affine transformations. Thus, multi-scale apparition in the following is only a special case of the more general case of synchronous acquisition of affinely related grids.
A limited, non-exhaustive list of examples of (regular) spatial patterns that can be used for synchronous shooting is: (1) Equidistant sampling with the sampling interval in y equal to the sampling interval in x, (2) Equidistant sampling with the sampling interval in y unequal to the sampling interval in x, (3) (intentionally) Irregular sampling, and (4) Hexagonal sampling.
For synchronous shooting, the offset vectors relating the synchronously acquired positions, or more generally the affine transformations relating the shot locations, determine the amount of overlap in the subsurface coverage ranging from:
(1) no offset, i.e. the simultaneous shots are on top of each other and the amount of overlap is 100%. With the exception of acquiring the grids using two or more different source depths, there is nothing to encode (with the corresponding exception of the different source ghost responses), as all source-receiver positions define the same 3D Green's function, to
(2) small offsets on the order of Δx/2˜Δy/2, where Δx and Δy denote the inline and crossline shot spacings, respectively. The simultaneous shots are on grids/patterns that are staggered with respect to each other by a small amount, e.g., half the inline and crossline shot spacing. In this case the shifts of the simultaneous sources are clearly small compared to the extent (e.g., line length) of the shot pattern and all sources essentially still see the same subsurface. Reconstruction methods can be applied that take advantage of the fact that the sources see the same subsurface, to
(3) large offsets on the order of Lx˜Ly, where Lx and Ly denote, e.g., the shot line length and the crossline extent of all shot lines, respectively. In this case the simultaneous shots are completely separated in terms of their midpoints; all sources see a different part of the full 3D Green's function and illuminate a different part of the subsurface. No overlap can be exploited and the number of unknowns is not reduced. On the other hand, this type of acquisition is the most efficient way to acquire, e.g., a 3D common receiver gather in part or in full.
One advantageous aspect of the generalised apparition methods and equations described above for the cases where the sampling grids are related by affine transformations and/or are Bravais grids is now illustrated in more detail, namely the ability to better fill the available (empty) Kx-Ky space by choosing suitably scaled (and offset/rotated/skewed) versions of the related source sampling grids. This is illustrated in
The shot spacing of the shot grids associated with sources two and three is (1−sqrt(2)/2)/(sqrt(2)/2)˜0.414 times the scale and shot spacing of the shot grids associated with sources one and four, i.e., smaller.
Note that without changing the scale (shot spacing) of the shot grids, and assuming that the source wavefields all occupy the same bandwidth, aliasing occurs when the apparated part of the signal cone of sources two and three starts to overlap the unapparated part of these sources and the unapparated source one. On the other hand, at that frequency the apparated part of the signal cone of source four, which has been apparated to the corners of the Kx-Ky domain, does not yet overlap with the unapparated part of source four, nor with the unapparated source one. This is because the diagonal is a factor sqrt(2) longer, and overlap between sources four and one therefore occurs at a proportionally higher frequency. Thus, it is advantageous if we can reduce the relative bandwidth occupied by sources two and three. This is possible using multi-scale apparition and indeed is the reason for the different scales (shot spacings) in
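As a minimal numerical check of the two quantities used in the argument above (the numbers themselves are fixed by the geometry and involve no assumptions beyond a square Kx-Ky domain):

import numpy as np

# Shot-spacing ratio of sources two and three relative to sources one and four.
scale = (1 - np.sqrt(2) / 2) / (np.sqrt(2) / 2)
print(scale)                 # approximately 0.414

# Apparating source four to the corners shifts its cone along the diagonal,
# which is a factor sqrt(2) longer than the axis, so overlap with unapparated
# energy occurs at a sqrt(2) higher temporal frequency.
print(np.sqrt(2))            # approximately 1.414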
In the
In the
In the frequency slices of
Comparing
Comparing
Novel methods have been described herein for simultaneous source separation and for source deghosting. We now describe how both simultaneous source separation and source deghosting can be achieved on a single dataset (e.g., a common receiver gather for a single shot line) by acquiring the data in a specific and advantageous manner. Consider two sources S1 and S2 at different depths z1 and z2, where S2 is encoded according to seismic apparition principles (for example: on every first of two consecutive shots, S1 and S2 are activated simultaneously, whereas on every second of two consecutive shots S2 is activated with a variation as described herein, e.g., with a time delay of 10 ms with respect to the activation time of S1). Using the methods described herein, the data from S1 and S2 can be separated. Traces from the separated data, which still contain free-surface reflections or ghosts, can then be interleaved in such a manner that the previously described source deghosting method can be applied. More specifically, as an example, one can interleave all the odd traces from the separated S1 data with all the even traces from the separated S2 data, resulting in a dataset with varying source depth, on which the deghosting method can be applied.
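The interleaving step above can be sketched as follows, assuming the separated datasets are available as trace-by-sample arrays; the array names, sizes and random stand-in values are assumptions made only for illustration.

import numpy as np

# Hypothetical separated common receiver gathers for S1 (depth z1) and S2
# (depth z2), each of shape (n_traces, n_samples).
n_traces, n_samples = 100, 500
rng = np.random.default_rng(0)
d_s1 = rng.standard_normal((n_traces, n_samples))
d_s2 = rng.standard_normal((n_traces, n_samples))

interleaved = np.empty_like(d_s1)
interleaved[0::2] = d_s1[0::2]   # odd traces (1st, 3rd, ...) taken from S1 at z1
interleaved[1::2] = d_s2[1::2]   # even traces taken from S2 at z2
# 'interleaved' now alternates source depth trace by trace and forms the
# varying-depth input to the source deghosting method described earlier.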
An example is shown in
This example demonstrates how seismic apparition principles can be combined, in a single acquisition, with source deghosting principles, resulting in broadband data. More importantly, it shows how data can be double encoded and decoded with a particular acquisition and processing scheme. Variations of these methods may be derived, and further benefits may be obtained by performing the sim-source separation and the source-side deghosting jointly. Next, we disclose another acquisition and processing method where the data are double encoded and can be double decoded.
As another embodiment of double (or more) encoding and double (or more) decoding, consider an acquisition with three sources where the first source S1 is positioned half the inline shot spacing in front of the second source S2, and the third source S3 has an arbitrary crossline offset from the second source. In this particular embodiment, S2 and S3 are always activated at the same time. On every first of two consecutive shots, S2 and S3 are (jointly) activated simultaneously with S1, and on every second of two consecutive shots, S2 and S3 are (jointly) activated, e.g., 12 ms after S1. As S2 and S3 are jointly encoded relative to S1 using apparition principles, they can be separated from S1 using the techniques described earlier. Then, interleaving the traces from the separated S1 and S2+S3 datasets, it is clear that S1 and S2 form a shotline that is twice as densely sampled as the shot spacing with which the simultaneous source data were acquired. On the other hand, every second trace still consists of a superposition of S2 and S3, and it is advantageous to realize that this new dataset corresponds to another case of seismic apparition type simultaneous source acquisition, namely one where the S3 source is amplitude encoded relative to the interleaved separated S1 and S2 sources with encoding function A=0 on every first of two consecutive shots. In such a case, we will use the terminology that S3 is encoded implicitly. Thus, S3 can be separated from the interleaved separated S1 and S2 sources, yielding another dataset that can be used for processing. Once again, further benefits may be derived from performing the two decoding steps jointly.
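For illustration only, the two encoding levels of this three-source embodiment can be written down explicitly; the shot count is an assumption, while the 12 ms delay and the A=0/A=1 alternation are taken from the example above.

import numpy as np

n_shots = 16                                  # assumed number of shots per source
shot = np.arange(n_shots)

# Level 1: (S2 + S3) fired jointly, time-encoded relative to S1 following
# apparition principles: no delay on every first of two consecutive shots,
# a 12 ms delay on every second.
delay_s2s3 = np.where(shot % 2 == 0, 0.000, 0.012)     # seconds

# Level 2: after separating S1 from (S2 + S3) and interleaving the separated
# S1 and S2 traces, the line is twice as densely sampled; S3 is implicitly
# amplitude-encoded on this denser grid with A = 0 on every first of two
# consecutive interleaved shots and A = 1 otherwise.
dense_shot = np.arange(2 * n_shots)
amp_s3 = np.where(dense_shot % 2 == 0, 0.0, 1.0)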
The above methods show how different levels of encoding and decoding using apparition principles may be applied in a layered or recursive fashion, where the result of one decoding step makes a further decoding step possible.
In many cases, in marine seismic acquisition, the crossline source sampling is poor compared to the inline source sampling. In the following, we disclose two methods that take advantage of a denser inline shot spacing, to improve the crossline source sampling.
It is advantageous to select the inline shot spacing such that it is at least two times denser than the crossline shot line spacing. In the following, we describe, without loss of generality, the case where the inline shot spacing is around four times denser than the crossline line spacing (e.g., an inline shot spacing of 12.5m and a crossline shot line spacing of 50m). Inline apparition can then be used to simultaneously acquire two parallel shot lines and perfectly separate these in processing up to a temporal frequency corresponding to half the aliasing frequency of the unencoded data. Thus, if the 12.5m spacing aliases a horizontally propagating wave at 60 Hz, inline apparition decodes the data perfectly up to at least 30 Hz. On the other hand, since the shot line spacing in the crossline direction is 50m, a horizontally propagating wave travelling purely in the crossline direction aliases already at 15 Hz. An important realization is that the decoding of inline apparated data also decodes any energy propagating in the crossline direction perfectly between 15 and 30 Hz. So although the crossline line spacing of 50m still aliases such crossline propagating energy between 15 and 30 Hz, we can, using the inline apparition technique, obtain twice as many shot lines in the crossline direction and completely overcome the crossline aliasing between 15 and 30 Hz.
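The aliasing frequencies quoted above follow directly from the spacings, assuming a horizontal propagation velocity of 1500 m/s; a minimal check:

# Aliasing frequency of a horizontally propagating wave: f = v / (2 * spacing).
v = 1500.0                                  # assumed water velocity, m/s

dx_inline = 12.5                            # inline shot spacing, m
dy_xline = 50.0                             # crossline shot-line spacing, m

f_alias_inline = v / (2 * dx_inline)        # 60 Hz
f_alias_xline = v / (2 * dy_xline)          # 15 Hz
f_perfect_decode = f_alias_inline / 2       # 30 Hz: perfect apparition decoding
print(f_alias_inline, f_alias_xline, f_perfect_decode)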
It is clear that complete crossline dealiasing between 15 and 30 Hz is achieved when the inline apparition technique is used to acquire shot lines exactly in between the shot lines spaced every 50m. On the other hand, this is not the only way to achieve this. The simultaneous shot lines might be offset by a small amount in the crossline direction, resulting in effective crossline gradient measurements which could be used in a crossline multi-channel interpolation. Alternatively, the simultaneous shot lines can, when all taken together, be understood as filtered (shifted) and decimated versions of an underlying unaliased signal. Multiple such filtered (shifted) and decimated versions can be reconstructed using the well-known generalized sampling expansion of Papoulis (1977). Finally, if the simultaneous shot lines are not regularly sampled in the crossline direction, irregularly spaced Fourier transforms or spectral reconstruction techniques may be used to de-alias the sim-source separated data in the crossline direction.
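In the idealised case where the simultaneous shot lines are exact complementary decimated versions of an underlying densely sampled crossline signal, interleaving alone already reconstructs that signal; the following sketch assumes noise-free, regular sampling and a random stand-in signal, and is only meant to illustrate that limiting case.

import numpy as np

n = 64
rng = np.random.default_rng(1)
dense = np.cumsum(rng.standard_normal(n))   # stand-in for the dense crossline signal

line_a = dense[0::2]                        # conventional shot lines (every 50m)
line_b = dense[1::2]                        # simultaneous lines, staggered by 25m

recon = np.empty(n)
recon[0::2] = line_a
recon[1::2] = line_b                        # exact in this idealised setting
assert np.allclose(recon, dense)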
It should be noted that the above description adopts a worst-case analysis and that, by applying various of the de-aliasing techniques described herein, one can reconstruct signals to much higher frequencies than half the original aliasing frequencies. In such cases, it may also be advantageous to adopt data-driven de-aliasing techniques such as the Anti-Leakage Fourier Transform (Xu et al., 2005) in the crossline direction to reduce the crossline aliasing by more than a factor of two.
One embodiment will now be described in more detail, including a workflow. The idea is to use closely located sources and seismic apparition to estimate source-side derivatives and sources at sparse samples. From the (e.g., low-pass filtered) derivatives we can estimate a directional interpolation vector field and use it to interpolate the data. The measured data also need to be de-aliased, up to the Nyquist sampling rate of the un-apparated data. This can be done in the following steps: (1) estimate the directional interpolation vector field from the low-frequency data; (2) obtain local shifts (in both source and receiver coordinates); (3) partition the data into smaller domains (using a partition of unity) and, on these domains, use constant (local) shift parameters in the source and receiver coordinates; (4) include the shifts in the apparition setup so that only one function needs to be recovered, the derivative information now being captured in the shifts; and (5) reconstruct at the original sampling rate in the source coordinate. Following this, higher frequency content can be recovered using the directional interpolation vector field in a similar way as described for the directional de-aliasing. This procedure can also be iterated and/or combined with (normal) moveout corrections to broaden the frequency ranges. These ideas can also be used in combination with conventional sim-source separation methods where incoherency is exploited in the separation instead of apparition. In addition, these methods can be applied both inline and crossline.
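A minimal sketch of the partitioning in step (3), under assumptions: 50%-overlapping periodic Hann windows are used as the partition of unity, the window length and data length are arbitrary choices for illustration, and only the windowing itself (not the local shifts or the reconstruction) is shown. Such windows sum exactly to one in the interior, so data that are windowed, locally processed and summed back are not re-weighted by the partitioning.

import numpy as np

def partition_of_unity(n_total, n_win):
    # Periodic Hann windows hopped by half their length: they sum to 1
    # everywhere in the interior (constant-overlap-add property).
    w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n_win) / n_win)
    hop = n_win // 2
    starts = list(range(0, n_total - n_win + 1, hop))
    windows = np.zeros((len(starts), n_total))
    for i, s in enumerate(starts):
        windows[i, s:s + n_win] = w
    return windows

wins = partition_of_unity(n_total=256, n_win=64)
coverage = wins.sum(axis=0)
# Coverage equals 1 in the interior; the edges are only half-covered and would
# be handled by tapering or padding in practice.
print(coverage[64:192].min(), coverage[64:192].max())   # both approximately 1.0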
The methods described herein have mainly been illustrated using so-called common receiver gathers, i.e., all seismograms recorded at a single receiver. Note however, that these methods can be applied straightforwardly, after (local) transforms over one or more receiver coordinates, to individual or multiple receiver-side wavenumbers. Processing in such multi-dimensional or higher-dimensional spaces (i.e., higher than, e.g., the F,Kx,Ky space described previously) is known to reduce aliasing or overlapping issues, as the empty space increases dramatically relative to the bandwidth occupied by the seismic signals.
We note that further advantages may derive from applying the current invention to three-dimensional shot grids instead of two-dimensional shot grids, where, beyond the x- and y-locations of the simultaneous sources, the shot grids also extend in the vertical (z or depth) direction. Furthermore, the methods described herein could be applied to different two-dimensional shot grids, such as shot grids in the x-z plane or y-z plane. The vertical wavenumber is limited by the dispersion relation and hence the encoding and decoding can be applied similarly to 2D or 3D shot grids which involve the z (depth) dimension, including by making typical assumptions in the dispersion relation in the 2D case, such as Ky=0 (1/m) when acquiring, e.g., shot grids in the x-z plane.
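As an illustration of how the dispersion relation limits the vertical wavenumber in the 2D (x-z) case with the assumption Ky=0, the vertical wavenumber follows from the temporal frequency and the horizontal wavenumber; the velocity, frequency and wavenumber range below are assumptions for the sketch (wavenumbers in cycles per metre).

import numpy as np

c = 1500.0                                   # assumed propagation velocity, m/s
f = 40.0                                     # temporal frequency, Hz
kx = np.linspace(-0.04, 0.04, 81)            # horizontal wavenumber, 1/m
ky = 0.0                                     # 2D assumption: Ky = 0 (1/m)

arg = (f / c) ** 2 - kx ** 2 - ky ** 2
kz = np.sqrt(np.maximum(arg, 0.0))           # propagating part of the wavefield
kz[arg < 0] = np.nan                         # evanescent outside the signal cone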
Decoding After/During Imaging or Migration
The above discussion on separation after (local) transforms over one or more receiver coordinates also makes it clear that seismic apparition principles can be applied in conjunction with and/or during the imaging process: using one-way or two-way wavefield extrapolation methods, one can extrapolate the recorded receiver wavefields back into the subsurface, and separation using the apparition principles described herein can be applied after the receiver extrapolation. Alternatively, one could directly migrate the simultaneous source data (e.g., common receiver gathers); the apparated part of the simultaneous sources will then be radiated, and subsequently extrapolated, along aliased directions, which can be exploited for separation (e.g., by recording the wavefield not in a cone beneath the sources, but along the edges of the model).
The methods described herein facilitate more efficient data acquisition. On the source side, this includes configurations that allow for acquiring data at different source depths, in parallel (simultaneously) or sequentially.
In
In
In
As mentioned previously, the methods described herein are also applicable to the receiver side. A specific embodiment which enables deghosting data on the receiver side is now presented in more detail. This embodiment exploits the tides, which change the physical distance between receivers on the seafloor, the source position and the sea surface. When acquiring seismic data, for instance as part of a time-lapse survey, such changes can be utilized in a similar fashion to the source-side deghosting approach described above, using at least two vintages of seismic data, to deghost the seismic wavefield on the receiver side. This method can also be combined with the source-side methods, jointly or sequentially, to provide source-side and receiver-side deghosted seismic data.
Furthermore, periodic and aperiodic modulation functions can be imposed directly on individual receivers through various means, for instance to vary the amplitude strengths. Such modulation functions, potentially in combination with modulation functions on the source side, can be employed, taking advantage of the methods described herein, to, for instance, more efficiently de-alias the data as part of the acquisition and/or seismic processing.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not of limitation. For example, it should be noted that where filtering steps are carried out in the frequency-wavenumber space, filters with the equivalent effect can also be implemented in many other domains such as tau-p or space-time.
In addition, note that although the embodiments have focused on periodic and non-periodic variations in at least one source or source acquisition parameter, the methods described apply equally well to cases where such variations are in the receiver acquisition parameters, as can be understood by applying such principles as source-receiver reciprocity and/or invoking invariance for translation along the horizontal coordinate(s). A particular application in point is that of receiver-side deghosting.
Also note that while some of the methods and embodiments have been described and illustrated by way of two-dimensional theory, processing, data, and/or examples, they can be applied/apply equally to processing of three-dimensional data and, as can be appreciated by someone of ordinary skill in the art, can be generalised to operate with three-dimensional processing on three-dimensional data or even four- or five-dimensional processing by jointly considering simultaneous source data recorded at multiple receivers.
In addition, we note that it can be advantageous to process and separate local subsets of simultaneous source data acquired using the methods and principles described herein. Processing local subsets can further reduce the aliasing ambiguity and improve the separation.
As should be clear to one possessing ordinary skill in the art, the methods described herein apply to different types of wavefield signals recorded (simultaneously or non-simultaneously) using different types of sensors, including but not limited to: pressure and/or one or more components of the particle motion vector (where the motion can be displacement, velocity, or acceleration) associated with compressional waves propagating in acoustic media and/or shear waves in elastic media. When multiple types of wavefield signals are recorded simultaneously and are or can be assumed (or processed) to be substantially co-located, we speak of so-called “multi-component” measurements and we may refer to the measurements corresponding to each of the different types as a “component”. Examples of multi-component measurements are the pressure and vertical component of particle velocity recorded by an ocean bottom cable or node-based seabed seismic sensor, the crossline and vertical components of particle acceleration recorded in a multi-sensor towed-marine seismic streamer, or the three-component acceleration recorded by a microelectromechanical system (MEMS) sensor deployed, e.g., in a land seismic survey.
The methods described herein can be applied to each of the measured components independently, or to two or more of the measured components jointly. Joint processing may involve processing vectorial or tensorial quantities representing or derived from the multi-component data and may be advantageous as additional features of the signals can be used in the separation. For example, it is well known in the art that particular combinations of types of measurements enable, by exploiting the physics of wave propagation, processing steps whereby e.g. the multi-component signal is separated into contributions propagating in different directions (e.g., wavefield separation), certain spurious reflected waves are eliminated (e.g., deghosting), or waves with a particular (non-linear) polarization are suppressed (e.g., polarization filtering). Thus, the methods described herein may be applied in conjunction with, simultaneously with, or after such processing of two or more of the multiple components.
Furthermore, in case the obtained wavefield signals consist of/comprise one or more components, then it is possible to derive local directional information from one or more of the components and to use this directional information in the reduction of aliasing effects in the separation as described herein in detail.
Further, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the invention.
For example, it is understood that the techniques, methods and systems that are disclosed herein may be applied to all marine, seabed, borehole, land and transition zone seismic surveys, including planning, acquisition and processing. This includes, for instance, time-lapse seismic, permanent reservoir monitoring, VSP and reverse VSP, and instrumented borehole surveys (e.g., distributed acoustic sensing). Moreover, the techniques, methods and systems disclosed herein may also apply to non-seismic surveys that are based on wavefield data to obtain an image of the subsurface.
In
The methods described herein may be understood as a series of logical steps and (or grouped with) corresponding numerical calculations acting on suitable digital representations of the obtained wave field quantities, and hence can be implemented as computer programs or software comprising sequences of machine-readable instructions and compiled code, which, when executed on a computer, produce the intended output in a suitable digital representation. More specifically, a computer program can comprise machine-readable instructions to perform the following tasks:
(1) Reading all or part of a suitable digital representation of the obtained wave field quantities into memory from a (local) storage medium (e.g., disk/tape), or from a (remote) network location;
(2) Repeatedly operating on all or part of the digital representation of the obtained wave field quantities read into memory using a central processing unit (CPU), a (general purpose) graphical processing unit (GPU), or other suitable processor. As already mentioned, such operations may be of a logical nature or of an arithmetic (i.e., computational) nature. Typically, the results of many intermediate operations are temporarily held in memory or, in case of memory-intensive computations, stored on disk and used for subsequent operations; and
(3) Outputting all or part of a suitable digital representation of the results produced when there are no further instructions to execute, by transferring the results from memory to a (local) storage medium (e.g., disk/tape) or a (remote) network location.
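Purely by way of illustration, tasks (1) to (3) could be organised as in the following skeleton; the file names and the processing call are placeholders (assumptions), not a prescription of the actual separation or deghosting algorithms described in this disclosure.

import numpy as np

def process(gather):
    # Placeholder for the logical/arithmetic operations of task (2), e.g. the
    # encoding/decoding and separation steps described herein.
    return gather

def main():
    # Task (1): read a digital representation of the obtained wave field
    # quantities from local storage (hypothetical file name).
    data = np.load("common_receiver_gather.npy")

    # Task (2): repeatedly operate on the data held in memory.
    result = process(data)

    # Task (3): output the results to local storage once no further
    # instructions remain.
    np.save("separated_gather.npy", result)

if __name__ == "__main__":
    main()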
Computer programs may run with or without user interaction, which takes place using input and output devices such as keyboards or a mouse and display. Users can influence the program execution based on intermediate results shown on the display or by entering suitable values for parameters that are required for the program execution. For example, in one embodiment, the user could be prompted to enter information about e.g., the average inline shot point interval or source spacing. Alternatively, such information could be extracted or computed from metadata that are routinely stored with the seismic data, including for example data stored in the so-called headers of each seismic trace.
Next, a hardware description of a computer or computers used to perform the functionality of the above-described exemplary embodiments is described with reference to
Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 2000 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
The hardware elements needed to realize the computer can be implemented by various circuitry elements known to those skilled in the art. For example, CPU 2000 can be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be another processor type that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 2000 can be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 2000 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
Number | Date | Country | Kind
1608297.6 | May 2016 | GB | national
1619036.5 | Nov 2016 | GB | national
This application is a continuation of PCT Application No. PCT/IB2017/052667, filed May 8, 2017, which claims priority to Great Britain Application No. 1608297.6, filed May 12, 2016, and Great Britain Application No. 1619036.5, filed Nov. 10, 2016. The entire contents of the above-identified applications are incorporated herein by reference.
Relation | Number | Date | Country
Parent | PCT/IB2017/052667 | May 2017 | US
Child | 16185069 | | US