The present disclosure relates to methods for dealiasing data, such as are encountered when acquiring and separating contributions from two or more different, simultaneously emitting sources in a common set of measured signals representing a wavefield, in particular seismic sources and sets of aliased recorded and/or aliased processed seismic signals.
Seismic data can be acquired in land, marine, seabed, transition zone, and borehole environments, for instance. Depending on the environment in which the seismic survey takes place, the survey equipment and acquisition practices vary.
In towed marine seismic data acquisition, a vessel tows streamers that contain seismic sensors (hydrophones and sometimes particle motion sensors). A seismic source, usually towed by the same vessel, excites acoustic energy in the water that reflects from the sub-surface and is recorded by the sensors in the streamers. The seismic source is typically an array of airguns but can also be a marine vibrator, for instance. In modern marine seismic operations, many streamers are towed behind the vessel and the vessel sails along many parallel, closely spaced sail-lines (3D seismic data acquisition). It is also common for several source and/or receiver vessels to be involved in the same seismic survey in order to acquire data that is rich in offsets and azimuths between source and receiver locations.
In seabed seismic data acquisition, nodes or cables containing sensors (hydrophones and/or particle motion sensors) are deployed on the seafloor. These sensors can also record the waves on and below the sea bottom, and in particular shear waves, which are not transmitted into the water. Sources similar to those used in towed marine seismic data acquisition are employed, towed by one or several source vessels.
In land seismic data acquisition, the sensors on the ground are typically geophones and the sources are vibroseis trucks or dynamite. Vibroseis trucks are usually operated in arrays with two or three vibroseis trucks emitting energy close to each other roughly corresponding to the same shot location.
The general practice of marine and seabed seismic surveying is further described below in relation to
Prospecting for subsurface hydrocarbon deposits (1601) in a marine environment (
Seismic sources typically employ a number of so-called airguns (1609-1611), which operate by repeatedly filling up a chamber in the gun with a volume of air using a compressor and releasing the compressed air at suitably chosen times (and depths) into the water column (1612).
The sudden release of compressed air momentarily displaces the seawater, imparting energy to it and setting up an impulsive pressure wave in the water column that propagates away from the source at the speed of sound in water (with a typical value of around 1500 m/s) (1613).
Upon incidence at the seafloor (or seabed) (1614), the pressure wave is partially transmitted deeper into the subsurface as elastic waves of various types (1615-1617) and partially reflected upwards (1618). The elastic wave energy propagating deeper into the subsurface partitions whenever discontinuities in subsurface material properties occur. The elastic waves in the subsurface are also subject to an elastic attenuation which reduces the amplitude of the waves depending on the number of cycles or wavelengths.
Some of the energy reflected upwards (1620-1621) is sensed and recorded by suitable receivers placed on the seabed (1606-1608), or towed behind one or more vessels. The receivers, depending on the type, sense and record a variety of quantities associated with the reflected energy, for example, one or more components of the particle displacement, velocity or acceleration vector (using geophones, MEMS [micro-electromechanical systems] or other devices, as is well known in the art), or the pressure variations (using hydrophones). The wavefield recordings made by the receivers are stored locally in a memory device and/or transmitted over a network for storage and processing by one or more computers.
Waves emitted by the source in the upward direction also reflect downward from the sea surface (1619), which acts as a nearly perfect mirror for acoustic waves.
One seismic source typically includes one or more airgun arrays (1603-1605): that is, multiple airgun elements (1609-1611) towed in, e.g., a linear configuration, spaced several meters apart and at substantially the same depth, whose air is released (near-)simultaneously, typically to increase the amount of energy directed towards (and emitted into) the subsurface.
Seismic acquisition proceeds by the source vessel (1602) sailing along many lines or trajectories (1622) and releasing air from the airguns from one or more source arrays (also known as firing or shooting) once the vessel or arrays reach particular pre-determined positions along the line or trajectory (1623-1625), or, at fixed, pre-determined times or time intervals. In
Typically, subsurface reflected waves are recorded with the source vessel occupying and shooting hundreds of shot positions. A combination of many sail-lines (1622) can form, for example, an areal grid of source positions with associated inline source spacings (1626) and crossline source spacings. Receivers can be similarly laid out in one or more lines forming an areal configuration with associated inline receiver spacings (1627) and crossline receiver spacings.
The general practice of land seismic surveying is further described below in relation to
Prospecting for subsurface hydrocarbon deposits (1701) in a land environment (
Thus, one group of seismic sources could consist of the “array” of vibrators 1702 and 1703, while a second group of sources consists, e.g., of vibrators 1704 and 1705.
The elastic waves radiating away from the baseplate of the vibrators scatter, reflect (1708) and refract (1709) at locations or interfaces in the subsurface where the relevant material properties (e.g., mass density, bulk modulus, shear modulus) vary, and are recorded at hundreds of thousands of individual/single sensors (1710) or at thousands of sensor groups (1711). Sensor signals from one or more sensors in a group can be combined or summed in the field before being sent to the recording truck (1712) over cables or wirelessly.
Source positions may lie along straight lines (1714) or various other trajectories or grids. Similarly, receiver positions may lie along lines oriented in a similar direction as the source lines, e.g., 1720, and/or oriented perpendicularly to the source lines (1721). Receivers may also be laid out along other trajectories or grids. The source spacing along the line (1715) is the distance the sources in a group move between consecutive shotpoints. The inter-source spacing (1716) is the distance between two sources in the same source group. Similarly, the receiver spacing is the spacing between individual receivers (e.g., 1718), in the case of single sensors, or between sensor groups (e.g., 1717). The source line spacing (1719) is some representative distance between substantially parallel source lines, and similarly for the receiver line spacing. Waves may be affected by perturbations in the near surface (1713) which obscure the deeper structure of interest (i.e., possible hydrocarbon bearing formations).
In land seismic data acquisition, the sensors on the ground are typically geophones.
Traditionally, seismic data have been acquired sequentially: a source is excited over a limited period of time and data are recorded until the energy that comes back has diminished to an acceptable level and all reflections of interest have been captured, after which a new shot at a different shot location is excited. Being able to acquire data from several sources at the same time is clearly highly desirable. Not only would it allow expensive acquisition time to be cut drastically, or the wavefield to be better sampled on the source side, which is typically sampled much more sparsely than the distribution of receiver positions; it would also allow for better illumination of the target from a wide range of azimuths, as well as better sampling of the wavefield in areas with surface obstructions. In addition, for some applications such as 3D VSP acquisition, or marine seismic surveying in environmentally sensitive areas, reducing the duration of the survey is critical to save costs external to the seismic acquisition itself (e.g., down-time of a producing well) or to minimize the impact on marine life (e.g., avoiding mating or spawning seasons of fish species).
Simultaneously emitting sources, such that their signals overlap in the (seismic) record, is also known in the industry as “blending”. Conversely, separating signals from two or more simultaneously emitting sources is also known as “deblending” and the data from such acquisitions as “blended data”.
Simultaneous source acquisition has a long history in land seismic acquisition, dating back at least to the early 1980s. Commonly used seismic sources in land acquisition are vibroseis sources, which offer the possibility of designing source signal sweeps such that the sub-surface can be illuminated while "sharing" the use of certain frequency bands, avoiding simultaneous interference at a given time from different sources. By carefully choosing source sweep functions, activation times and locations of different vibroseis sources, it is to a large degree possible to mitigate interference between sources. Such approaches are often referred to as slip sweep acquisition techniques. In the marine seismic data context, the term overlapping shooting times is often used for related practices. Moreover, it is also possible to design sweeps that are mutually orthogonal to each other (in time) such that the response from different sources can be isolated after acquisition through simple cross-correlation procedures with sweep signals from individual sources. We refer to all of these methods and related methods as "time encoded simultaneous source acquisition" methods and "time encoded simultaneous source separation" methods.
The use of simultaneous source acquisition in marine seismic applications is more recent, as marine seismic sources (i.e., airgun sources) do not appear to offer the same orthogonality benefits as land seismic vibroseis sources, at least not at first glance. Western Geophysical was among the early proponents of simultaneous source marine seismic acquisition, suggesting carrying out the separation in a pre-processing step by assuming that the reflections caused by the interfering sources have different characteristics. Beasley et al. (1998) exploited the fact that, provided the sub-surface structure is approximately layered, a simple simultaneous source separation scheme can be achieved, for instance, by having one source vessel behind the spread acquiring data simultaneously with the source towed by the streamer vessel in front of the spread. Simultaneous source data recorded in such a fashion are straightforward to separate after a frequency-wavenumber (ωξ) transform, as the source in front of the spread generates data with positive wavenumbers only, whereas the source behind the spread generates data with negative wavenumbers only, to a first approximation.
Another method for enabling or enhancing separability is to make the delay times between interfering sources incoherent (Lynn et al., 1987). Since the shot time is known for each source, the arrivals can be lined up coherently for a specific source in, for instance, a common receiver gather or a common offset gather. In such a gather, all arrivals from all other simultaneously firing sources will appear incoherent. To a first approximation it may be sufficient to simply process the data for such a shot gather to a final image, relying on the processing chain to attenuate the random interference from the simultaneous sources (also known as passive separation). However, it is of course possible to achieve better results, for instance through random noise attenuation or more sophisticated methods to separate the coherent signal from the apparently incoherent signal (Stefani et al., 2007; Ikelle, 2010; Kumar et al., 2015). In recent years, with elaborate acquisition schemes to acquire, for instance, wide-azimuth data with multiple source and receiver vessels (Moldoveanu et al., 2008), several methods for simultaneous source separation of such data have been described, for example methods that separate "random dithered sources" through inversion, exploiting the sparse nature of seismic data in the time domain (i.e., seismic traces can be thought of as a subset of discrete reflections with "quiet periods" in between; e.g., Akerberg et al., 2008; Kumar et al., 2015). A recent state-of-the-art land example of simultaneous source separation applied to reservoir characterization is presented by Shipilova et al. (2016). Existing simultaneous source acquisition and separation methods based on similar principles include quasi-random shooting times and pseudo-random shooting times. We refer to all of these methods and related methods as "random dithered source acquisition" methods and "random dithered source separation" methods. "Random dithered source acquisition" methods and "random dithered source separation" methods are examples of "space encoded simultaneous source acquisition" methods and "space encoded simultaneous source separation" methods.
A different approach to simultaneous source separation has been to modify the source signature emitted by airgun sources. Airgun sources comprise multiple (typically three) sub-arrays, along which multiple clusters of smaller airguns are located. In contrast to land vibroseis sources, it is not possible to design arbitrary source signatures for marine airgun sources; however, one in principle has the ability to choose the firing time (and amplitude, i.e., volume) of individual airgun elements within the array. In such a fashion it is possible to choose source signatures that are dispersed, as opposed to focused in a single peak. Such approaches have been proposed in the past to reduce the environmental impact (Ziolkowski, 1987), but also for simultaneous source shooting.
Abma et al. (2015) suggested using a library of "popcorn" source sequences to encode multiple airgun sources such that the responses can be separated after simultaneous source acquisition by correlation with the corresponding source signatures, following a practice that is similar to land simultaneous source acquisition. The principle is based on the fact that the cross-correlation between two (infinite) random sequences is zero, whereas the autocorrelation is a spike. It is also possible to choose binary encoding sequences with better or optimal orthogonality properties, such as Kasami sequences, to encode marine airgun arrays (Robertsson et al., 2012). Mueller et al. (2015) propose to use a combination of random dithers from shot to shot with deterministically encoded source sequences at each shot point. Similar to the methods described above for land seismic acquisition, we refer to all of these methods and related methods as "time encoded simultaneous source acquisition" methods and "time encoded simultaneous source separation" methods.
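The correlation argument can be made concrete with a small numerical sketch. The sequence length and the binary random model below are illustrative assumptions, not parameters of any of the cited encoding schemes.

```python
import numpy as np

# Toy illustration of the correlation principle: for long (pseudo-)random
# sequences the autocorrelation approaches a spike at zero lag, while the
# cross-correlation between two independent sequences stays small.
rng = np.random.default_rng(1)
n = 4096
s1 = rng.choice([-1.0, 1.0], size=n)   # "popcorn"-like binary sequence 1
s2 = rng.choice([-1.0, 1.0], size=n)   # independent binary sequence 2

auto = np.correlate(s1, s1, mode="full") / n
cross = np.correlate(s1, s2, mode="full") / n

print(auto.max())            # 1.0 at zero lag (spike)
print(np.abs(cross).max())   # small, on the order of 1/sqrt(n)
```

In a blended record, the contribution encoded with s1 is therefore emphasized, and the contribution encoded with s2 suppressed, after correlation with s1.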
Recently there has been interest in industry in exploring the feasibility of marine vibrator sources, as they would, for instance, appear to provide more degrees of freedom to optimize mutually orthogonal source functions, beyond just binary orthogonal sequences, which would allow for a step change in simultaneous source separation of marine seismic data. Halliday et al. (2014) suggest shifting energy in ωk-space using the well-known Fourier shift theorem in space to separate the response from multiple marine vibrator sources. Such an approach is not possible with most other seismic source technology (e.g., marine airgun sources), which lacks the ability to carefully control the phase of the source signature (e.g., flip polarity).
A recent development, referred to as “seismic apparition” (also referred to as signal apparition or wavefield apparition in this invention), suggests an alternative approach to deterministic simultaneous source acquisition that belongs in the family of “space encoded simultaneous source acquisition” methods and “space encoded simultaneous source separation” methods. Robertsson et al. (2016) show that by using periodic modulation functions from shot to shot (e.g., a short time delay or an amplitude variation from shot to shot), the recorded data on a common receiver gather or a common offset gather will be deterministically mapped onto known parts of for instance the ωξ-space outside the conventional “signal cone” where conventional data is strictly located (
Methods for dealiasing recorded wavefield information making use of a non-aliased representation of a part of the recorded wavefield, and a phase factor derived from a representation of a non-aliased part of the wavefield, and combining both to a non-aliased function from which the further parts of the recorded wavefield information can be gained, suited particularly for seismic applications and other purposes, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.
In particular, the present disclosure provides a method of dealiasing recorded wavefield information recorded at a first sampling interval, the method comprising: forming an analytic part of the recorded wavefield information; extracting a non-aliased representation of a portion of the recorded wavefield information; forming a phase factor from a conjugate part of the analytic part of the non-aliased representation; combining the analytic part of the recorded wavefield information with the phase factor to derive a non-aliased function; applying a filtering operation to the derived non-aliased function; recombining the filtered non-aliased function with the phase factor from the analytic part of the non-aliased representation to reconstruct a representation of dealiased recorded wavefield information; generating a sub-surface representation of structures or Earth media properties from the reconstructed representation of the dealiased recorded wavefield information; and outputting the generated sub-surface representation.
Advantages, aspects and novel features of the present disclosure, as well as details of an illustrated embodiment thereof, may be more fully understood from the following description and drawings.
In the following description reference is made to the attached figures, in which:
The following examples may be better understood using a theoretical overview and its application to simultaneous source separation as presented below.
It should be understood that the same methods can be applied to the dealiasing of any wavefield in which a non-aliased content can be identified.
The method of seismic apparition (Robertsson et al., 2016) allows for exact simultaneous source separation given sufficient sampling along the direction of spatial encoding (there is always a lowest frequency below which source separation is exact). It is the only exact method that exists for conventional marine and land seismic sources such as airgun sources and dynamite sources. However, the method of seismic apparition requires good control of firing times, locations and other parameters. Seismic data are often shot on position, such that sources are triggered exactly when they reach a certain position. If a single vessel tows multiple sources, acquisition fit for seismic apparition is simply achieved by letting one of the sources be a master source that is shot on position. The other source(s) towed by the same vessel must then fire synchronized in time according to the firing time of the first source. As all sources are towed by the same vessel, the sources will automatically be located at the desired positions, at least if crab angles are not too extreme. In a recent patent application (van Manen et al., 2016a) we submitted methods that demonstrate how perturbations introduced by, e.g., a varying crab angle can be dealt with in an apparition-based simultaneous source workflow. The same approach can also be used for simultaneous source separation when sources are towed by different vessels or in land seismic acquisition. Robertsson et al. (2016b) suggest approaches to combine signal apparition simultaneous source separation with other simultaneous source separation methods.
We will use the notation
$\hat f(\omega,\xi) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(t,x)\, e^{-2\pi i (t\omega + x\xi)}\, dt\, dx$
for the Fourier transform.
Let $\mathcal{C}$ denote the cone $\mathcal{C} = \{(\omega,\xi) : |\omega| > |\xi|\}$, and let $\mathcal{D}$ denote the non-aliased (diamond-shaped) set $\mathcal{D} = \mathcal{C} \setminus \big(\{(\omega,\xi) : |\omega| > |\xi - \tfrac12|\} \cup \{(\omega,\xi) : |\omega| > |\xi + \tfrac12|\}\big)$.
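For illustration only, the following minimal sketch builds discrete masks for the cone and the diamond-shaped set on an (ω, ξ) grid; the grid size and the scaling of the ω axis (chosen so that the cone boundary is |ω| = |ξ|) are assumptions made for the example.

```python
import numpy as np

# Boolean masks for the signal cone C and the diamond-shaped set D,
# with xi the wavenumber in cycles per shot interval (Nyquist at 1/2).
n_omega, n_xi = 201, 201
omega = np.linspace(-0.5, 0.5, n_omega)[:, None]
xi = np.linspace(-0.5, 0.5, n_xi)[None, :]

cone = np.abs(omega) > np.abs(xi)                         # C
diamond = cone & ~((np.abs(omega) > np.abs(xi - 0.5)) |   # remove the two
                   (np.abs(omega) > np.abs(xi + 0.5)))    # shifted cones

print(cone.sum(), diamond.sum())   # D is a strict subset of C
```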
Suppose that
$d(t,j) = f_1(t,j) + f_2\!\left(t - \Delta t\,(-1)^j,\, j\right) \qquad (1)$
is a known discrete sampling in $x = j$, recorded at a first sampling interval. Note that, due to this type of apparition sampling, the data $d$ will always have aliasing effects present if the data is not band-limited in the temporal frequency direction.
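As an illustration of the sampling in equation (1), the hedged sketch below blends two synthetic events, applying the alternating time shift to the second one. The wavelet, moveout, time shift and sampling values are assumptions chosen for the example, not parameters taken from the text.

```python
import numpy as np

# Apparition-style blending: d(t, j) = f1(t, j) + f2(t - dt*(-1)^j, j).
nt, nx = 512, 64
dt_samp = 0.004                        # temporal sampling interval [s]
t = np.arange(nt) * dt_samp
j = np.arange(nx)                      # shot index (unit spacing)
b = 0.008                              # moveout per shot [s]
delta_t = 0.012                        # apparition time shift [s]

def ricker(t0, fpeak=25.0):
    """Ricker wavelet centred at t0 (illustrative source pulse)."""
    a = (np.pi * fpeak * (t[:, None] - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

f1 = ricker(0.4 + b * j)               # plane-wave-like event from source 1
f2 = ricker(0.6 + b * j)               # plane-wave-like event from source 2

# Apply the alternating delay to f2 via an FFT phase ramp (a delay by s
# multiplies the temporal spectrum by exp(-2*pi*i*omega*s)).
omega = np.fft.fftfreq(nt, d=dt_samp)[:, None]       # temporal frequencies [Hz]
shift = delta_t * (-1.0) ** j[None, :]               # +delta_t on even j, -delta_t on odd j
f2_shifted = np.fft.ifft(np.fft.fft(f2, axis=0)
                         * np.exp(-2j * np.pi * omega * shift), axis=0).real

d = f1 + f2_shifted                    # blended ("apparition-sampled") record
```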
If $f_1$ and $f_2$ represent seismic data recorded at a certain depth, it will hold that $\operatorname{supp}(\hat f_1)\subset\mathcal{C}$ and $\operatorname{supp}(\hat f_2)\subset\mathcal{C}$. We will assume that the source locations of $f_1$ and $f_2$ are relatively close to each other. Let
It is shown in Andersson 2016 that
For each pair of values $(\omega,\xi)\in\mathcal{D}$, most of the terms over $k$ in (2) vanish (and similarly for $D_2$), which implies that $\hat f_1(\omega,\xi)$ and $\hat f_2(\omega,\xi)$ can be recovered through
given that $\sin(2\pi\Delta t\,\omega) \neq 0$.
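For readers who want the intermediate step spelled out, the following is a hedged sketch, in our own notation, of the standard signal-apparition relations implied by the sampling in (1); it is not necessarily identical to the referenced equations (2) and (3), and the sign of the imaginary term depends on the Fourier convention. Assuming unit shot spacing (wavenumber Nyquist at ½) and temporal band-limitation, for $(\omega,\xi)$ in the diamond-shaped set $\mathcal{D}$ the shifted copies do not overlap the signal cone, so that

$\hat d(\omega,\xi) = \hat f_1(\omega,\xi) + \cos(2\pi\Delta t\,\omega)\,\hat f_2(\omega,\xi),$
$\hat d(\omega,\xi \pm \tfrac12) = -\,i\,\sin(2\pi\Delta t\,\omega)\,\hat f_2(\omega,\xi),$

from which

$\hat f_2(\omega,\xi) = \dfrac{i\,\hat d(\omega,\xi \pm \tfrac12)}{\sin(2\pi\Delta t\,\omega)}, \qquad \hat f_1(\omega,\xi) = \hat d(\omega,\xi) - \cos(2\pi\Delta t\,\omega)\,\hat f_2(\omega,\xi),$

which is well defined exactly when $\sin(2\pi\Delta t\,\omega) \neq 0$.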
By including an amplitude variation in (1), the last condition can be removed. For values of $(\omega,\xi)\notin\mathcal{D}$ it is not possible to determine the values of $\hat f_1(\omega,\xi)$ and $\hat f_2(\omega,\xi)$ without imposing further conditions on the data.
Given a real-valued function $f$ with zero average, let
$f_a(t) = 2\int_0^{\infty}\int_{-\infty}^{\infty} f(t')\, e^{2\pi i (t-t')\omega}\, dt'\, d\omega.$
The quantity $f_a$ is often referred to as the analytic part of $f$, a description that is natural when considering Fourier series expansions in the same fashion and comparing these to power series expansions of holomorphic functions. It is readily verified that $\operatorname{Re}(f_a) = f$.
As an illustrative example, consider the case where
$f(t) = \cos(2\pi t),$
for which it holds that
$f_a(t) = e^{2\pi i t}.$
Now, whereas $|f(t)|$ is oscillating, $|f_a(t)| = 1$, i.e., it has constant amplitude. In terms of aliasing, it can often be the case that a sampled version of $|f_a|$ exhibits no aliasing even if $f$ and $|f|$ do so.
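A minimal numerical sketch of this observation, assuming the usual FFT-based construction of the analytic part (the discretisation and the signal choice are illustrative):

```python
import numpy as np

n = 256
t = np.arange(n) / n
f = np.cos(2 * np.pi * 8 * t)     # a simple oscillatory signal (8 cycles)

# Analytic part via the FFT: double the strictly positive frequencies,
# zero the negative ones (DC and Nyquist are kept as they are).
F = np.fft.fft(f)
F[1:n // 2] *= 2.0
F[n // 2 + 1:] = 0.0
fa = np.fft.ifft(F)               # complex-valued analytic part

print(np.allclose(fa.real, f))    # Re(f_a) == f
print(np.ptp(np.abs(fa)))         # |f_a| is (numerically) constant
```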
Let us now turn our focus back to the problem of recovering $\hat f_1(\omega,\xi)$ and $\hat f_2(\omega,\xi)$ for $(\omega,\xi)\notin\mathcal{D}$. We note that due to linearity, it holds that
$d_a(t,j) = f_{1a}(t,j) + f_{2a}\!\left(t - \Delta t\,(-1)^j,\, j\right).$
A natural condition to impose is that the local directionality is preserved through the frequency range. The simplest such case is when $f_1$ and $f_2$ are plane waves (with the same direction), i.e., when
$f_1(t,x) = h_1(t+bx), \quad \text{and} \quad f_2(t,x) = h_2(t+bx).$
Without loss of generality, we assume that b>0. We note that
A similar formula holds for $\hat f_2$.
Let us now assume that ω<½. Inspecting (2) we see that if, e.g., −½<ξ<0 then all but three terms disappear and therefore the blended data satisfies
Let $w_h$ and $w_l$ be two filters (acting on the time variable) such that $w_h$ has unit $L^2$ norm, and such that $w_h$ has a central frequency of $\omega_0$ and $w_l$ has a central frequency of $\omega_0/2$. For the sake of transparency, let $w_h = w_l^2$. Suppose that we have knowledge of $\hat f_1(\omega,\xi)$ and $\hat f_2(\omega,\xi)$ for $\omega < \omega_0$, and that the bandwidth of $w_l$ is smaller than $\omega_0/2$.
Let $g_1 = f_{1a} * w_l$ and $g_2 = f_{2a} * w_l$. Note that, e.g.,
$g_1(t,x) = (h_{1a} * w_l)(t+bx),$
so that $g_1$ is a plane wave with the same direction as $f_1$. Moreover, $|g_1|$ will typically be mildly oscillating even when $f_1$ and $|f_1|$ are oscillating rapidly.
Let $p_1$ be the phase function associated with $g_1$, and define $p_2$ in a likewise manner as the phase function for $g_2$. If $w_l$ is narrowbanded, $g_1$ will essentially only contain oscillations of the form
i.e., |g1(t,x)| is more or less constant.
Under the narrowband assumption on $w_l$ (and the relation $w_h = w_l^2$), we consider
By multiplication by
$d_h(t,x)$
In the Fourier domain, this amounts to two delta-functions; one centered at the origin and one centered at (0,−½). Here, we may identify the contribution that comes from $f_2$ by inspecting the coefficient in front of the delta-function centered at (0,−½). With the aid of the low-frequency reconstructions $g_1$ and $g_2$, it is thus possible to move the energy that originates from the two sources so that one part is moved to the center, and one part is moved to the Nyquist wavenumber. Note that it is critical to use the analytic part of the data to obtain this result. If the contributions from the two parts can be isolated from each other, this allows for a recovery of the two parts in the same fashion as in (3). Moreover, as the data in the isolated centers is comparatively oversampled, a reconstruction can be obtained at a higher resolution than that of the original data. Afterwards, the effect of the phase factor can easily be reversed, and hence a reconstruction at a finer sampling, i.e., at a smaller sampling interval than the original, can be obtained.
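The mechanism can be illustrated with a hedged, monochromatic sketch: multiplying the analytic blended data by the conjugate square of a phase (known exactly in this toy case) moves the signal-cone energy to the origin and the apparition ghost of $f_2$ to the Nyquist wavenumber. All frequencies, slownesses, phases and the time shift below are assumptions made for the example.

```python
import numpy as np

nt, nx = 256, 64
dt_samp = 0.004
t = np.arange(nt)[:, None] * dt_samp
j = np.arange(nx)[None, :]
w0 = 30.0                 # temporal frequency of the monochromatic example [Hz]
b = 0.004                 # moveout per shot [s]; wavenumber w0*b = 0.12 < 0.5
delta_t = 0.004           # apparition time shift [s]
phi2 = 0.7                # arbitrary phase of the second source

# Analytic (complex) plane waves; f2 carries the alternating time shift.
f1a = np.exp(2j * np.pi * w0 * (t + b * j))
f2a = np.exp(2j * np.pi * w0 * (t - delta_t * (-1.0) ** j + b * j) + 1j * phi2)
da = f1a + f2a

# Conjugate square of the phase function p1 ~ exp(pi*i*w0*(t + b*x)).
phase_sq_conj = np.exp(-2j * np.pi * w0 * (t + b * j))
spec = np.fft.fft2(da * phase_sq_conj) / (nt * nx)

print(np.abs(spec[0, 0]))          # ~ |1 + cos(2*pi*w0*delta_t)*exp(i*phi2)|
print(np.abs(spec[0, nx // 2]))    # ~ |sin(2*pi*w0*delta_t)|, pure f2 ghost
```

The coefficient at the Nyquist wavenumber depends only on the second source, which is what allows the two contributions to be told apart in this sketch.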
A similar argument will hold in the case when the filters wl and wh have broader bandwidth. By making the approximation that
$p_1(t,x) \approx e^{\pi i \omega_0 (t+bx)},$
we get that
and since $w_h$ is a bandpass filter, $\hat d_h(\omega,\xi)$ will contain information around the same two energy centers, where the center around (0,−½) will contain only information about $f_2$. By suppressing such localized energy, we can therefore extract only the contribution from $f_2$, and likewise for $f_1$.
The above procedure can now be repeated with $\omega_0$ replaced by $\omega_1 = \beta\omega_0$ for some $\beta > 1$. In this fashion we can gradually recover (dealias) more of the data by stepping up in frequency. We can also treat more general cases. As a first improvement, we replace the plane-wave assumption by a locally plane-wave assumption, i.e., let $\varphi_\alpha$ be a partition of unity ($\sum_\alpha \varphi_\alpha^2 = 1$), and assume that
$f_1(t,x)\,\varphi_\alpha^2(t,x) \approx h_{1,\alpha}(t + b_\alpha x)\,\varphi_\alpha^2(t,x).$
In this case the phase functions will also be locally plane waves, and since they are applied multiplicatively on the space-time side, the effect of (4) will still be that energy will be injected in the frequency domain towards the two centers at the origin and the Nyquist wavenumber.
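As a concrete illustration of a partition of unity with $\sum_\alpha \varphi_\alpha^2 = 1$, the sketch below builds overlapping raised-cosine tapers in one dimension; the window length and overlap are illustrative assumptions, and in practice such windows would be applied over space-time patches.

```python
import numpy as np

n = 400
half = 50                                  # half-length of each window
centers = np.arange(0, n + 1, half)        # 50% overlap between neighbours

def taper(center):
    # Cosine taper that is 1 at its centre and 0 at distance `half`.
    x = np.clip((np.arange(n) - center) / half, -1.0, 1.0)
    return np.cos(0.5 * np.pi * x)

phis = np.stack([taper(c) for c in centers])
print(np.allclose((phis ** 2).sum(axis=0), 1.0))   # sum_alpha phi_alpha^2 == 1
```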
Now, in places where the locally plane-wave assumption does not hold, the above procedure will not work. This is because, as the phase function contains contributions from several directions at the same location, the effect of the multiplication in (4) will no longer correspond to injecting the energy of $D_1$ (and $D_2$) towards the two centers around the origin and the Nyquist wavenumber. However, some of this directional ambiguity can still be resolved.
In fact, upon inspection of (2), it is clear that the energy contributions to a region with center (ω0,ξ0) must originate from regions with centers at (ω0,ξ0+k/2) for some k. Hence, the directional ambiguity is locally limited to contributions from certain directions. We will now construct a setup with filters that will make use of this fact.
Let us consider the problem where we would like to recover the information from a certain region around $(\omega_0,\xi_0)$. Due to the assumption that $f_1$ and $f_2$ correspond to measurements that take place close to each other, we assume that $f_1$ and $f_2$ have a similar local directionality structure. From (2) we know that energy centered at $(\omega_0,\xi_0)$ will be visible in the measurements $D_1$ at locations $(\omega_0,\xi_0 + k/2)$. We therefore construct a (space-time) filter $w_h$ that satisfies
We now want to follow a similar construction for the filter $w_l$. Assuming that there is locally only a contribution from a direction associated with one of the terms over $k$ above, we want the action of multiplying with the square of the local phase to correspond to a filtering using the part of $w_h$ that corresponds to that particular $k$.
This is accomplished by letting
where $\hat\Psi = \hat\psi * \hat\psi$.
Under the assumption that $f_1 * w_h$ and $f_2 * w_h$ have a local plane-wave structure, we may now follow the above procedure to recover these parts of $f_1$ and $f_2$ (by suppressing localized energy as described above). We may then completely recover $f_1$ and $f_2$ up to the temporal frequency $\omega_0$ by combining several such reconstructions, and hence we may proceed by making gradual reconstructions in the $\omega$ variable as before.
As an example, we have applied one embodiment of the simultaneous source separation methodology presented here to a synthetic data set generated using an acoustic 3D finite-difference solver and a model based on salt structures in the sub-surface, with a free surface bounding the top of the water layer. A common-receiver gather located in the middle of the model was simulated using this model, with a vessel acquiring two shot lines with an inline shot spacing of 25 m. The vessel tows source 1 at 150 m cross-line offset from the receiver location, as well as source 2 at 175 m cross-line offset from the receiver location. The source wavelet comprises a Ricker wavelet with a maximum frequency of 30 Hz.
Sources 1 and 2, towed behind Vessel A, are encoded against each other using signal apparition with a modulation periodicity of 2 and a 12 ms time delay, such that source 1 fires regularly and source 2 has a time delay of 12 ms on all even shots.
In
$(f_1 * w_h)$
This part is expected to be well sampled since much of the oscillating parts are counteracted by the factor
In an alternative embodiment we will make use of quaternion Fourier transforms instead of standard Fourier transforms, and make use of a similar idea as for the analytic part.
Let $\mathbb{H}$ be the quaternion algebra (Hamilton, 1844). An element $q\in\mathbb{H}$ can be represented as $q = q_0 + iq_1 + jq_2 + kq_3$, where the $q_j$ are real numbers and $i^2 = j^2 = k^2 = ijk = -1$. We also recall Euler's formula, valid for $i, j, k$:
$e^{i\theta} = \cos\theta + i\sin\theta, \qquad e^{j\theta} = \cos\theta + j\sin\theta, \qquad e^{k\theta} = \cos\theta + k\sin\theta.$
Note that although $i, j, k$ commute with the reals, quaternions do not commute in general. For example, we generally have $e^{i\theta}e^{j\phi} \neq e^{j\phi}e^{i\theta}$, which can easily be seen by using Euler's formula. Also recall that the conjugate of $q = q_0 + iq_1 + jq_2 + kq_3$ is the element $q^* = q_0 - iq_1 - jq_2 - kq_3$. The norm of $q$ is defined as $\|q\| = (qq^*)^{1/2} = (q_0^2 + q_1^2 + q_2^2 + q_3^2)^{1/2}$.
Given a real-valued function $f = f(t,x)$, we define the quaternion Fourier transform (QFT) of $f$ by
$Qf(\omega,\xi) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-2\pi i t\omega}\, f(t,x)\, e^{-2\pi j x\xi}\, dt\, dx.$
Its inverse is given by
$f(t,x) = Q^{-1}(Qf)(t,x) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{2\pi i t\omega}\, Qf(\omega,\xi)\, e^{2\pi j x\xi}\, d\omega\, d\xi.$
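For a real-valued input, the two-sided QFT above can be evaluated with two ordinary complex FFTs. The sketch below is a hedged implementation under that assumption; the discretisation (a DFT in place of the integrals) and the storage of a quaternion as four real arrays are implementation choices made for the example.

```python
import numpy as np

def qft(f):
    """Discrete two-sided QFT of a real array f(t, x):
    sum over t, x of exp(-2*pi*i*t*w) f exp(-2*pi*j*x*xi),
    returned as the four real quaternion components (q0, q1, q2, q3)."""
    Ft = np.fft.fft(f, axis=0)     # left transform over t (imaginary unit i)
    A, B = Ft.real, Ft.imag        # Ft = A + iB
    Fa = np.fft.fft(A, axis=1)     # right transform over x (imaginary unit j)
    Fb = np.fft.fft(B, axis=1)
    # (A + iB)(cos - j sin) = [A cos] + i[B cos] + j[-A sin] + k[-B sin]
    q0, q2 = Fa.real, Fa.imag
    q1, q3 = Fb.real, Fb.imag
    return q0, q1, q2, q3

# Check against the definition at one (frequency, wavenumber) pair.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))
nt, nx = f.shape
tt, xx = np.arange(nt), np.arange(nx)
w, xi = 2, 3
ct, st = np.cos(2 * np.pi * w * tt / nt), np.sin(2 * np.pi * w * tt / nt)
cx, sx = np.cos(2 * np.pi * xi * xx / nx), np.sin(2 * np.pi * xi * xx / nx)
direct = [ct @ f @ cx, -st @ f @ cx, -ct @ f @ sx, st @ f @ sx]
print(np.allclose(direct, [c[w, xi] for c in qft(f)]))
```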
In a similar fashion, it is possible to extend the Fourier transform to other hypercomplex representations, e.g., octonions (van der Blij, 1961), sedenions (Smith, 1995) or other Cayley or Clifford algebras. A similar argument applies to other well-known transform domains (e.g., Radon, Gabor, curvelet, etc.).
Let
Using $\chi$ we define $f_q : \mathbb{R}^2 \to \mathbb{H}$ as
$f_q = Q^{-1}\chi\, Q f.$
We call $f_q$ the quaternion part of $f$. This quantity can be seen as a generalization of the concept of the analytic part. For the analytic part, half of the spectrum is redundant. For the case of quaternions, three quarters of the data is redundant.
In a similar fashion, it is possible to extend the analytic part to other hypercomplex representations, e.g., octonions (van der Blij, 1961), sedenions (Smith, 1995) or other Cayley or Clifford algebras.
The following results will prove to be important: Let $f(t,x) = \cos u$, where $u = 2\pi(at+bx)+c$ with $a>0$. If $b>0$ then
$f_q(t,x) = \cos u + i\sin u + j\sin u - k\cos u,$
and if $b<0$ then
$f_q(t,x) = \cos u + i\sin u - j\sin u + k\cos u.$
The result is straightforward to derive using the quaternion counterpart of Euler's formula. Note that whereas $|f(t,x)|$ is oscillating, $\|f_q\| = \sqrt{2}$, i.e., it has constant amplitude. In terms of aliasing, it can often be the case that a sampled version of $\|f_q\|$ exhibits no aliasing even if $f$ and $|f|$ do so.
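For completeness, the constant amplitude follows directly from the component formula above (the $b<0$ case is identical):

$\|f_q(t,x)\|^2 = \cos^2 u + \sin^2 u + \sin^2 u + \cos^2 u = 2,$

so that $\|f_q(t,x)\| = \sqrt{2}$ for all $(t,x)$.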
Assume that $f(t,x) = \cos(u)$, and that $g(t,x) = \cos(v)$, where $u = 2\pi(a_1 t + b_1 x) + e_1$ and $v = 2\pi(a_2 t + b_2 x) + e_2$ with $a_1, a_2 \ge 0$. It then holds that
with similar expressions if b1<0.
Let us describe how to recover $\hat f_1(\omega,\xi)$ and $\hat f_2(\omega,\xi)$ using the quaternion part. We will follow the same procedure as before, and hence it suffices to consider the case where $f_1 = h_1(t+bx)$, $f_2 = h_2(t+bx)$, with $b>0$, and $\omega < \tfrac12$. Let $w_h$ and $w_l$ be two (real-valued) narrowband filters with central frequencies of $\omega_0$ and $\omega_0/2$, respectively, as before. From (2), it now follows that
$(d * w_h)(t,x) \approx c_1\cos\!\big(2\pi\omega_0(t+bx)+e_1\big) + c_2\cos\!\big(2\pi\omega_0(t+bx)+e_2\big),$
for some coefficients c1,c2, and phases e1,e2, and with b<0.
Since $f_1$ and $f_2$ are known for $\omega = \omega_0/2$, we let
$g_1(t,x) = (f_1 * w_l)(t,x) \approx c_3\cos\!\big(\pi\omega_0(t+bx)+e_3\big).$
We compute the quaternion part $g_{1q}$ of $g_1$, and construct the phase function associated with it as
and define p2 in a likewise manner as the phase function for g2.
Let $d_q$ be the quaternion part of $d * w_h$. It then holds that, after left and right multiplication by the conjugates of the phase factors $p_1$ and $p_2$,
This result is remarkable, since the unaliased part of d is moved to the center, while the aliased part remains intact. Hence, it allows for a distinct separation between the two contributing parts.
We now use the same example data set as in the previous example. In
The methods described herein have mainly been illustrated using so-called common receiver gathers, i.e., all seismograms recorded at a single receiver. Note however, that these methods can be applied straightforwardly over one or more receiver coordinates, to individual or multiple receiver-side wavenumbers. Processing in such multi-dimensional or higher-dimensional spaces can be utilized to reduce data ambiguity due to sampling limitations of the seismic signals.
We note that further advantages may derive from applying the current invention to three-dimensional shot grids instead of two-dimensional shot grids, where beyond the x- and y-locations of the simultaneous sources, the shot grids also extend in the vertical (z or depth) direction. Furthermore, the methods described herein could be applied to different two-dimensional shot grids, such as shot grids in the x-z plane or y-z plane. The vertical wavenumber is limited by the dispersion relation and hence the encoding and decoding can be applied similarly to 2D or 3D shotgrids which involve the z (depth) dimension, including by making typical assumptions in the dispersion relation.
The above discussion on separation over one or more receiver coordinates also makes it clear that seismic apparition principles can be applied in conjunction with and/or during the imaging process: using one-way or two-way wavefield extrapolation methods, one can extrapolate the recorded receiver wavefields back into the subsurface, and separation using the apparition principles described herein can be applied after the receiver extrapolation. Alternatively, one could directly migrate the simultaneous source data (e.g., common receiver gathers), and the apparated part of the simultaneous sources will be radiated, and subsequently extrapolated, along aliased directions, which can be exploited for separation (e.g., by recording the wavefield not in a cone beneath the sources, but along the edges of the model).
As should be clear to one possessing ordinary skill in the art, the methods described herein apply to different types of wavefield signals recorded (simultaneously or non-simultaneously) using different types of sensors, including, but not limited to: pressure and/or one or more components of the particle motion vector (where the motion can be displacement, velocity, or acceleration) associated with compressional waves propagating in acoustic media and/or shear waves in elastic media. When multiple types of wavefield signals are recorded simultaneously and are or can be assumed (or processed) to be substantially co-located, we speak of so-called "multi-component" measurements, and we may refer to the measurements corresponding to each of the different types as a "component". Examples of multi-component measurements are the pressure and vertical component of particle velocity recorded by an ocean-bottom cable or node-based seabed seismic sensor, the crossline and vertical component of particle acceleration recorded in a multi-sensor towed-marine seismic streamer, or the three-component acceleration recorded by a micro-electromechanical system (MEMS) sensor deployed, e.g., in a land seismic survey.
The methods described herein can be applied to each of the measured components independently, or to two or more of the measured components jointly. Joint processing may involve processing vectorial or tensorial quantities representing or derived from the multi-component data and may be advantageous as additional features of the signals can be used in the separation. For example, it is well known in the art that particular combinations of types of measurements enable, by exploiting the physics of wave propagation, processing steps whereby e.g. the multi-component signal is separated into contributions propagating in different directions (e.g., wavefield separation), certain spurious reflected waves are eliminated (e.g., deghosting), or waves with a particular (non-linear) polarization are suppressed (e.g., polarization filtering). Thus, the methods described herein may be applied in conjunction with, simultaneously with, or after such processing of two or more of the multiple components.
Furthermore, in case the obtained wavefield signals consist of/comprise one or more components, then it is possible to derive local directional information (e.g. phase factors) from one or more of the components and to use this directional information in the reduction of aliasing effects in the separation as described herein in detail.
Further, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the invention.
For example, it is understood that the techniques, methods and systems that are disclosed herein may be applied to all marine, seabed, borehole, land and transition zone seismic surveys, including planning, acquisition and processing. This includes, for instance, time-lapse seismic, permanent reservoir monitoring, VSP and reverse VSP, and instrumented borehole surveys (e.g., distributed acoustic sensing). Moreover, the techniques, methods and systems disclosed herein may also apply to non-seismic surveys that are based on wavefield data to obtain an image of the subsurface.
In
The methods described herein may be understood as a series of logical steps and (or grouped with) corresponding numerical calculations acting on suitable digital representations of the acquired seismic recordings, and hence can be implemented as computer programs or software comprising sequences of machine-readable instructions and compiled code, which, when executed on the computer produce the intended output in a suitable digital representation. More specifically, a computer program can comprise machine-readable instructions to perform the following tasks:
(1) Reading all or part of a suitable digital representation of the obtained wave field quantities into memory from a (local) storage medium (e.g., disk/tape), or from a (remote) network location;
(2) Repeatedly operating on the all or part of the digital representation of the obtained wave field quantities read into memory using a central processing unit (CPU), a (general purpose) graphical processing unit (GPU), or other suitable processor. As already mentioned, such operations may be of a logical nature or of an arithmetic (i.e., computational) nature. Typically the results of many intermediate operations are temporarily held in memory or, in case of memory intensive computations, stored on disk and used for subsequent operations; and
(3) Outputting all or part of a suitable digital representation of the results produced, when there are no further instructions to execute, by transferring the results from memory to a (local) storage medium (e.g., disk/tape) or a (remote) network location.
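A generic, hedged sketch of how the three tasks above might be organized in software is given below; the file names, the array layout and the placeholder processing function are illustrative assumptions and not part of the disclosure.

```python
import numpy as np

def process_gather(d):
    """Placeholder for the numerical operations, e.g. a dealiasing or
    deblending step; here just an FFT round-trip as a stand-in."""
    return np.fft.ifft2(np.fft.fft2(d)).real

def run(input_path="blended_gather.npy", output_path="separated_gather.npy"):
    d = np.load(input_path)          # (1) read the digital representation into memory
    result = process_gather(d)       # (2) operate on it using the CPU/GPU
    np.save(output_path, result)     # (3) output the results to storage

if __name__ == "__main__":
    run()
```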
Computer programs may run with or without user interaction, which takes place using input and output devices such as keyboards or a mouse and display. Users can influence the program execution based on intermediate results shown on the display or by entering suitable values for parameters that are required for the program execution. For example, in one embodiment, the user could be prompted to enter information about e.g., the average inline shot point interval or source spacing. Alternatively, such information could be extracted or computed from metadata that are routinely stored with the seismic data, including for example data stored in the so-called headers of each seismic trace.
Next, a hardware description of a computer or computers used to perform the functionality of the above-described exemplary embodiments is described with reference to
Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1900 and an operating system such as Microsoft Windows 10, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
The hardware elements in order to achieve the computer can be realized by various circuitry elements known to those skilled in the art. For example, CPU 1900 can be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be of other processor types that would be recognized by one of ordinary skill in the art (for example, so-called GPUs or GPGPUs). Alternatively, the CPU 1900 can be implemented on an FPGA, ASIC, or PLD, or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 1900 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
Priority application: GB 1700520.8, Jan 2017 (national).
Related application data: parent application PCT/IB2018/050034, Jan 2018 (US); child application 16509151 (US).