It is not easy to analyze seismic data. Ghosts, free-surface multiples, and internal multiples can all get in the way of finding the desired information, such as the locations, thicknesses, and propagation velocities associated with the layers of interest.
I here disclose a new class of scattering events that we discovered while analyzing the scattering diagrams of the correlation-type representation theorem in inhomogeneous media. I also link these new scattering events with the notion of negative refraction in optics and that of virtual particles in quantum field theory. Just like the Feynman diagrams in quantum field theory, the scattering diagrams used in our analysis in this short note are a schematic representation of wave propagation that allows us to understand and develop wave-scattering theory and its applications in simple and natural terms rather than only in an abstract mathematical way.
I have called this new set of scattering events “virtual-reflection” events. Again, virtual events here are events which are not directly recorded in standard seismic data acquisition, but their existence allows us to construct, for instance, internal multiples with scattering points at the sea surface; the standard construct of internal multiples does not include any scattering points at the sea surface.
I also disclose here four applications of the virtual events: (i) attenuation of internal multiples, (ii) an integration of velocity estimation and migration (i.e., automated imaging: version 1), (iii) an integration of multiple attenuation, velocity analysis, and migration (i.e., automated imaging: version 2), and (iv) an integration of multiple attenuation, P/S decomposition, velocity analysis, and migration (i.e., automated imaging: version 3).
For imaging applications, I here describe methods in which interval-velocity estimation and depth migration are performed simultaneously. The methods work in a way similar to velocity-migration methods; that is, we perform several constant-velocity migrations (e.g., Stolt, 1978) and select the migration result and the corresponding velocity that produce the best-focused image of the subsurface. However, there are two key differences between our imaging method and classical velocity-migration methods. One difference is that our imaging produces the thicknesses of the layers of the subsurface and the actual velocities associated with these layers, instead of estimating root-mean-square velocities and locating the subsurface reflectors. We then use the fact that we can accurately image the first reflector in the subsurface (i.e., the sea floor in the marine case) to construct from these thicknesses a depth image and an interval-velocity model of the subsurface. The other key difference is that we operate on the new scattering events introduced above, known as virtual events, instead of on the actual data themselves.
FIGS. 5a and 5b: An illustration with scattering diagrams of the two-step process for generating internal multiples.
FIGS. 7a and 7b: An illustration with scattering diagrams of the first two iterations of the iterative process.
Putting Scattering Diagrams to Work
Examples of the wave-propagation paths which constitute towed-streamer data are depicted in
The key processes of marine seismic imaging (which are at the heart of modern oil and gas exploration and production) include (i) removing free-surface-reflection events from the data and leaving primaries and internal multiples, (ii) removing internal multiples from the data and leaving primaries, and then (iii) locating the scattering points and reflectors in the subsurface, which give rise to primaries and internal multiples in particular. The key point of our discovery is that all these processes can be explained, derived, and “optimized” using scattering diagrams (diagrammatica) in a way similar to how quantum field theory is often explained using Feynman diagrams. Our description of the optimization of seismic processes will become clearer in the next paragraph. Note that diagrammatica here mean a collection of scattering diagrams used to describe seismic events. We obviously expect this collection to grow significantly in the coming years, in such a way that it will enable us to describe the entire field of petroleum seismology through scattering diagrams.
Before we describe the convention used in drawing our scattering diagrams, let us recall that solutions of wave equations (wave equations are the building blocks of seismology) involve waves traveling in positive as well as negative time, the so-called “retarded” and “advanced” waves. Retarded waves progressively move with increasing time, as visualized in the classical movies of wave propagation (e.g., Ikelle and Amundsen, 2005). They are consistent with the way current seismic data acquisition is carried out; they arrive at receiver locations at some time after they have left the source location. Advanced waves travel backward in time; that is, they arrive at the hydrophones or geophones before they have left the source point. These waves are really an affront to our common sense and our understanding of how the world operates, our ever-aging bodies being an obvious testimony. So despite the fact that advanced waves are valid solutions to the wave equations, they are generally ignored in most seismology studies, at least in part because of their counterintuitive nature. One of the key features of our diagrammatica here is that these advanced waves are included in our constructions of the scattering diagrams of seismic events.
In our scattering diagrams, such as the ones in
As we can see in
So in addition to removing multiples from the data and leaving only primaries, modern seismic imaging methods also require the estimation of a smooth velocity model of the subsurface before applying migration algorithms to the seismic data. By using the concept of virtual events, we have found that internal multiples and primaries can be constructed with scattering points at the sea surface, just like free-surface multiples, as depicted in
Let us remark that one can establish an analogy between virtual events and the concept of virtual particles in quantum field theory. Just like virtual events, virtual particles are theoretical particles that cannot be detected directly but are nonetheless a fundamental part of quantum field theory. There is also a connection between virtual events and the notion of negative refraction in optics. This notion is generally attributed to Veselago (1968), who first hypothesized that materials with a negative refractive index could exist, so that light entering such a material from a material with a positive refractive index would bend in the direction opposite to the usual observation. The similarities between the last two legs of the virtual events and the path of negative refraction are included in this patent.
Our discussion of these various constructs of seismic events is centered on the convolution-type and crosscorrelation-type representation theorems, as derived, for example, in Bojarski (1980), de Hoop (1995), and Gangi (1970). Other studies, especially those related to multiple attenuation and up/down wavefield separation, have used the convolution-type representation theorem as the starting point of the development of their solutions. They include Kennett (1979), Fokkema and van den Berg (1990, 1993), Ziolkowski et al. (1999), Amundsen (2001), Amundsen et al. (2001), and Ikelle et al. (2003). We have included Fokkema and van den Berg (1990, 1993) in the list, although their starting point is actually the convolution-type reciprocity theorems from which the convolution-type representation theorem can be deduced. One of the novelties here is our use of both the crosscorrelation-type and convolution-type representation theorems in our constructs of the scattering diagrams of seismic events.
Let us also note that van Manen et al. (2005) have recently used the crosscorrelation-type representation theorem to improve the computation time of finite-difference modeling. Derode et al. (2003), Roux and Fink (2003), and Wapenaar (2004) have explicitly or implicitly used the correlation-type reciprocity theorems to retrieve the Green's function of inhomogeneous media from wavefield recordings. We also show in this patent that the intuitive results of internal multiple attenuation can be derived from the crosscorrelation-type representation theorem. Although not yet established, the works of Rickett and Claerbout (1999) and of Schuster et al. (2004) on daylight imaging, and those of Berkhout and Verschuur (2005) and Verschuur and Berkhout (2005) on internal multiple attenuation, can also be related to the crosscorrelation-type representation theorem because, at the very least, they invoke time reversal and the crosscorrelation of wavefields.
Examples of solutions to the problem of free-surface multiple attenuation include U.S. Pat. No. 6,763,304 B2, U.S. 2002/0118602, U.S. Pat. Nos. 5,051,960, 5,986,973, 5,995,905, 6,094,620, and 6,507,787, EP 0 541 265 A2, and U.S. Pat. No. 6,832,161 B1. A patent related to internal multiple attenuation is GB 2090407 A. Examples of imaging solutions are U.S. Pat. Nos. 4,760,563 and 4,766,574 and U.S. 2004/0196738 A1. A solution to wavefield separation into upgoing and downgoing waves is U.S. 2005/0117451.
Again, the invention described here is different from previous inventions in at least three aspects:
In physics, renormalization refers to a variety of theoretical concepts and computational techniques revolving either around the idea of a rescaling transformation or around the process of removing infinities from calculated quantities. Renormalization is used here in the context of a rescaling transformation, more precisely, rescaling the crosscorrelation operation.
The first question is, why do we need to renormalize virtual events? The second question is, how do we mathematically describe this renormalization? Our objective in this section is to answer these two questions.
The field of virtual events is generally defined as follows:
where PA denotes the virtual data, PP is the pressure data, and ν3 are the particle-velocity data.
To ensure an effective removal of the predicted internal multiples from the data, or a higher resolution in the imaging, it is important to make the amplitudes of the virtual events consistent with those of the data by replacing PP* in the computation of virtual events with PP−1. The field PP−1 is defined as follows:
or its equivalent,
Thus, (1) becomes
where PA′ denotes the field of normalized virtual events. The traveltimes of normalized virtual events in PA′ are unchanged compared to the traveltimes of the same events in PA. They only differ in amplitudes.
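Because equations (1) through (5) are not reproduced above, the following is only a minimal frequency-domain sketch of the idea: the virtual field PA is formed by multiplying ν3 by PP* (a crosscorrelation), and the normalized field PA′ is formed by replacing PP* with a stabilized approximation of PP−1. The trace-by-trace spectral division and the stabilization constant eps are assumptions of this sketch, not the patent's definitions, which may involve an integral over receiver positions.

```python
import numpy as np

def virtual_events(PP, V3, eps=1e-3, normalized=True):
    """Sketch of virtual-event construction in the frequency domain.

    PP, V3 : complex arrays (e.g., shape (n_freq, n_traces)) holding the
             pressure and vertical particle-velocity data.
    Returns the virtual field PA (normalized=False) or PA' (normalized=True).
    """
    if normalized:
        # PP^{-1} approximated by a stabilized spectral division (an assumption
        # of this sketch): PP* / (|PP|^2 + eps * max|PP|^2).
        weight = np.conj(PP) / (np.abs(PP) ** 2 + eps * np.max(np.abs(PP)) ** 2)
    else:
        # Plain crosscorrelation with the pressure data, as in equation (1).
        weight = np.conj(PP)
    return weight * V3
```

As stated above, PA and PA′ have identical traveltimes; only their amplitudes differ.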
The Concept of Virtual Events in the Attenuation of Internal Multiples
As shown in
To explicitly analyze the traveltimes of internal multiples predicted in
\nu_3 = \alpha_1' Y_1 + \alpha_2' Y_2 + \alpha_3' Y_3,   (6)
P_P = \beta_1 Z_1,   (7)
where
Z_1 = \exp\{-i\omega \tau_1(z)\}, \qquad Y_k = \exp\{-i\omega \tau_k(y)\},   (8)
and k takes the values 1, 2, and 3. If τ1, τ2, and τ3 denote the one-way traveltimes in the first layer, second layer, and third layer, respectively, then τ1(z)=2τ1, τ1(y)=2τ1+2τ2, τ2(y)=2τ1+2τ2+2τ3, and τ3(y)=2τ1+4τ2+2τ3. The crosscorrelation of ν3 and PP, which we have denoted γ′k1, is given by
\tilde{\gamma}_{k1}'(\omega) = \alpha_k' \beta_1 \exp\{-i\omega[\tau_k(y) - \tau_1(z)]\}.   (9)
In the time domain, this field is
\gamma_{k1}'(t) = \alpha_k' \beta_1 \, \delta[t - \tau_k(y) + \tau_1(z)] = \alpha_k' \beta_1 \, \delta[t - t_k(yz)],   (10)
where
t_k(yz) = \tau_k(y) - \tau_1(z).   (11)
γ′k1(t) is the inverse Fourier transform of γ′k1(ω). Notice that tk(yz)>0; thus all the virtual events in γ′k1(t) are causal. The convolution of γ′k1 with ν3 for predicting internal multiples, which we have denoted ηkn1, is given by
\eta_{kn1}(\omega) = \alpha_k' \alpha_n' \beta_1 \exp\{-i\omega[\tau_k(y) - \tau_1(z) + \tau_n(y)]\}.   (12)
In the time domain, this field is
\eta_{kn1}(t) = \alpha_k' \alpha_n' \beta_1 \, \delta[t - \tau_k(y) - \tau_n(y) + \tau_1(z)] = \alpha_k' \alpha_n' \beta_1 \, \delta[t - t_{kn}(yzy)],   (13)
where
t_{kn}(yzy) = \tau_k(y) + \tau_n(y) - \tau_1(z).   (14)
ηkn1(t) is the inverse Fourier transform of ηkn1(ω). So the traveltimes of internal multiples, denoted ηkn1 in
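As a quick check of equation (14), take k = 1 and n = 2 with the traveltimes listed above (this worked example assumes the same three-layer configuration):

t_{12}(yzy) = \tau_1(y) + \tau_2(y) - \tau_1(z) = (2\tau_1 + 2\tau_2) + (2\tau_1 + 2\tau_2 + 2\tau_3) - 2\tau_1 = 2\tau_1 + 4\tau_2 + 2\tau_3,

which is exactly τ3(y), the traveltime of the first-order internal multiple contained in the data, even though the construct itself passes through a scattering point at the sea surface.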
The results in
In the second iteration, we move the BIMG deeper, to a new position, say, the BIMG2, and define new fields PP2(xs, ω, x), ν32(xs, ω, x), P′P2(x, ω, xr), and ν′32(xs, ω, x), as depicted in
Notice that arbitrary trajectories can be used for selecting the BIMG locations. In other words, one portion of an event may be located above the BIMG while the other portion of the same event is located below the BIMG. This separation is not a problem; the portion of the event located above the BIMG will be used to predict internal multiples in one iteration, and the portion of the event located below the BIMG will be used in the next iteration to predict the second set of internal multiples associated with it. In short, the fact that some complex events may not fall completely above the BIMG or completely below the BIMG is another reason why the iterative process described in
The Concept of Virtual Events in Seismic Imaging
Let us now turn our attention to the problem of constructing scattering diagrams for seismic imaging—that is, locating scattering points instead of constructing seismic events. Our approach here is to image virtual events instead of primaries as is classically done in seismic processing.
By segmenting our data, we can turn the problem of depth imaging into that of imaging with constant velocities. Let us recall that imaging with constant velocities is very computation-efficient compared to depth imaging, but it generally leads to an erroneous model of the subsurface, just like time imaging, because constant-velocity imaging ignores the Fermat principle. As we explain below, however, our use of constant-velocity imaging does not suffer from this limitation, because each set of virtual events is imaged with the actual interval velocity of the corresponding layer.
So how do we construct our depth imaging from constant-velocity imaging? Let us start by looking at two examples of velocity-model imaging. Suppose that we construct the virtual events by crosscorrelating the sea-floor reflection with the data from the other reflectors. The results show that we can find the actual velocity which allows us to properly image the reflector R2. Unfortunately, that is the only reflector which is correctly imaged in this process. However, if we repeat the process using data constructed by crosscorrelating the response of R2 with the rest of the data, we can again find the actual velocity which allows us to properly image the reflector R3. The challenge, therefore, is how to construct an automated depth-imaging process, based on the idea of a sequential constant-velocity imaging of virtual data, that can simultaneously produce both the actual velocity model and the depth image of the subsurface. The resulting algorithm is iterative, and it uses the concept of the bottom-image generator (BIG).
By segmenting the data, we cannot image all the reflectors in the data simultaneously. However, an iterative approach can be used to produce a complete model of the subsurface. The basic idea is to move, at each iteration, the boundary between P(xs, ω, x) and ν3(xs, ω, xr). We will call this boundary the bottom-image generator (BIG).
Some Data-Processing Background Related to Up/Down Separation, P/S Separation, and Velocity-Migration Analysis
Deghosting and Up/Down Separation
Let us start by recalling the definitions of source and receiver ghosts. A receiver ghost is an event whose last bounce is at the free surface, whereas a source ghost is an event whose first bounce is at the free surface. In most marine seismic data, source ghosts are generally treated as a component of the source signature because the sources are very close to the sea surface. Thus they are indistinguishable from the events associated with them. The situation is quite different for receiver ghosts, especially in OBS (ocean-bottom seismic), VC (vertical cable), and walkaway VSP (vertical seismic profile) data, where the receivers are not close to the sea surface at all. Even for towed-streamer data, the receiver ghosts can be quite significant, especially in bad weather, as one of the current practices for reducing ocean-swell noise is to submerge the streamers quite deep in the water.
We can also notice that all receiver ghosts can be classified as downgoing events with respect to the receiver locations. In towed-streamer, OBS, and VC data, the entire downgoing wavefield is made of receiver ghosts only. So the receiver deghosting process, i.e., the process of removing receiver ghosts from the data, is equivalent to the up/down separation of these data. However, this equivalence does not hold for the walkaway VSP; there are downgoing events which are not receiver ghosts when the receivers are located below the sea floor. This is why the word deghosting is not generally used in borehole seismic processing, where the objective is to remove the entire downgoing wavefield from the data, including receiver ghosts.
In our imaging algorithm the up/down separation is applied at the BIMG locations at each iteration.
Here are exemplary formulae one can use for performing up/down separation:
where the filters
with k=[k1, k2]T, and σ takes the values 1 and 2. νj(up)(x, ω, xs) and p(up)(k, ω, xs) represent the upgoing particle velocity and pressure, respectively. The equations here are written, and apply, in the f-k (frequency-wavenumber) domain. Alternative formulae for up/down separation in other domains can also be used.
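Since the exemplary formulae themselves are not reproduced above, the following is a minimal sketch of one standard acoustic f-k decomposition in the spirit of Ikelle and Amundsen (2005), not necessarily the filters intended here; the signs assume a time dependence exp(−iωt) and x3 positive downward:

p^{(up)}(\mathbf{k},\omega,\mathbf{x}_s) = \frac{1}{2}\left[ p(\mathbf{k},\omega,\mathbf{x}_s) - \frac{\omega\rho}{k_3}\, v_3(\mathbf{k},\omega,\mathbf{x}_s) \right], \qquad v_3^{(up)}(\mathbf{k},\omega,\mathbf{x}_s) = \frac{1}{2}\left[ v_3(\mathbf{k},\omega,\mathbf{x}_s) - \frac{k_3}{\omega\rho}\, p(\mathbf{k},\omega,\mathbf{x}_s) \right],

where k_3 = \sqrt{\omega^2/c^2 - k_1^2 - k_2^2}, ρ is the water density, and c is the water velocity.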
P/S Splitting
In the third version of our imaging method, we require that the data be split into P-waves (i.e., P-P data) and S-waves (i.e., P-S data), in addition to the application of demultiple.
Let νj be the particle velocity. We define the P/S splitting, through the upgoing P-wave and S-wave potential fields φ and ψk, respectively, by applying the divergence and curl operators, respectively, to νj, i.e.,
\tilde{\varphi} = -i\omega\rho\, k_p^{-2}\, \partial_j \tilde{v}_j; \qquad \tilde{\chi}_j = i\omega\rho\, k_s^{-2}\, \varepsilon_{jkl}\, \partial_k \tilde{v}_l.   (22)
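As an illustration of the divergence and curl operations in equation (22), here is a minimal numerical sketch; it assumes, purely for illustration, that a single-frequency particle-velocity wavefield is available on a regular 3-D grid, which is not how the splitting is applied to recorded data in practice.

```python
import numpy as np

def ps_split(v, dx, omega, rho, vp, vs):
    """Apply equation (22) to a 3-component wavefield snapshot.

    v      : complex array of shape (3, n1, n2, n3), particle velocity at one frequency
    dx     : grid spacing; omega : angular frequency; rho : density
    vp, vs : P- and S-wave velocities (k_p = omega/vp, k_s = omega/vs)
    Returns the P-potential phi and the S-potential components chi[j].
    """
    kp, ks = omega / vp, omega / vs
    # Divergence d_j v_j via centered finite differences.
    div = sum(np.gradient(v[j], dx, axis=j) for j in range(3))
    phi = -1j * omega * rho * kp ** -2 * div
    # Curl components (epsilon_jkl d_k v_l).
    curl = np.stack([
        np.gradient(v[2], dx, axis=1) - np.gradient(v[1], dx, axis=2),
        np.gradient(v[0], dx, axis=2) - np.gradient(v[2], dx, axis=0),
        np.gradient(v[1], dx, axis=0) - np.gradient(v[0], dx, axis=1),
    ])
    chi = 1j * omega * rho * ks ** -2 * curl
    return phi, chi
```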
Velocity-Migration Analysis
The basic idea for reconstructing the background velocity is to image our data with various velocity models and to select the model which produces focused images of the subsurface. The two basic components of this approach to estimating the background velocity are (i) the tool used for imaging the data and (ii) the criterion for determining the best velocity model. As the imaging tool, we can, for example, use a prestack time-migration algorithm like Stolt migration. Many constant-velocity migrations are performed, for a number of velocities between Vmin and Vmax, with a step of ΔV. The criterion for selecting the correct velocity can be the amplitude of the migrated results. This criterion essentially amounts to “focusing” the seismic traces so that a large response is obtained. When the traces are properly lined up (i.e., properly moveout-corrected), the sum of the traces is maximized. This idea is similar to the focusing action of a lens or a parabolic reflector for plane waves.
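To make the scan concrete, here is a short sketch of the loop over trial velocities; stolt_migrate and focus_window are hypothetical placeholders for the constant-velocity migration routine and the image window in which focusing is measured:

```python
import numpy as np

def velocity_scan(virtual_data, v_min, v_max, dv, stolt_migrate, focus_window):
    """Migrate with a series of constant velocities and keep the best-focused one.

    stolt_migrate(data, v) : hypothetical constant-velocity (e.g., Stolt) migration
    focus_window           : slice (or mask) selecting the image region of interest
    """
    best_v, best_focus, best_image = None, -np.inf, None
    for v in np.arange(v_min, v_max + dv, dv):
        image = stolt_migrate(virtual_data, v)        # constant-velocity migration
        focus = np.sum(np.abs(image[focus_window]))   # simple focusing criterion
        if focus > best_focus:
            best_v, best_focus, best_image = v, focus, image
    return best_v, best_image
```

Other focusing measures (e.g., semblance) can be substituted for the summed amplitude without changing the structure of the scan.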
Algorithmic Steps: Version 1
Step 1: Use the actual data to reconstruct the first reflectors, such as the sea-floor reflection in the case of marine data. We also reconstruct the first velocity model. We use the standard velocity-migration method to reconstruct this image and velocity model of the subsurface.
Step 2: We design the bottom-image generator (BIG) location using the imaging result. The information above the BIG is assumed to be correct, and the remaining model below the BIG is a throwaway.
Step 3: We then use a demigration scheme of the image above the BIG obtained in Step 2 to also define a bottom-internal-multiple generator (BIMG). The demigration scheme will create only data above the BIMG.
Step 4: Create virtual events using data above and below the BIMG.
Step 5: Then remove the internal multiples as described in
Step 6: Scan the field of virtual events with a velocity-migration; that is, we perform several constant-velocity migrations (e.g., Stolt, 1978) and select the migration result and the corresponding velocity that produce the best-focused image of the subsurface. (See
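For illustration, the six steps above, together with the iterative repetition of Steps 2 through 6 (moving the BIG deeper at each pass, as described earlier), can be organized as in the following sketch; every helper on the hypothetical `tools` object is a placeholder for one of the operations described in this disclosure and is not defined here:

```python
def automated_imaging_v1(data, n_iterations, tools):
    """Sketch of the Version-1 flow (Steps 1-6), iterated with a deeper BIG each time."""
    # Step 1: image the first reflectors (e.g., the sea floor) and the first
    # velocity model with a standard velocity-migration scan of the actual data.
    image, velocity_model = tools.velocity_migration(data)
    for _ in range(n_iterations):
        # Step 2: place the bottom-image generator (BIG) below the trusted part
        # of the image; the model below the BIG is discarded.
        big = tools.design_big(image)
        # Step 3: demigrate the trusted image to define the bottom-internal-
        # multiple generator (BIMG); this creates data above the BIMG only.
        data_above = tools.demigrate(image, big)
        # Step 4: form virtual events from the data above and below the BIMG.
        virtual = tools.make_virtual_events(data_above, data)
        # Step 5: attenuate the predicted internal multiples.
        data = tools.remove_internal_multiples(data, virtual)
        # Step 6: velocity-migration scan of the virtual events; append the newly
        # focused layer to the image and to the interval-velocity model.
        image, velocity_model = tools.velocity_scan_of_virtual(virtual, image,
                                                               velocity_model)
    return image, velocity_model
```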
Algorithmic Steps: Version 2
Step 1: Use the actual data to reconstruct the first reflectors, such as the sea-floor reflection in the case of marine data. We also reconstruct the first velocity model. We use the standard velocity-migration method to reconstruct this image and velocity model of the subsurface.
Step 2: We design the bottom-image generator (BIG) location using the imaging result. The information above the BIG is assumed to be correct, and the remaining model below the BIG is a throwaway.
Step 3: We then use a demigration scheme of the image above the BIG obtained in Step 2 to also define a bottom internal multiple generator (BIMG). The demigration scheme will create only data above the BIMG.
Step 4: Create virtual events using data above and below the BIMG.
Step 5: Perform an up/down separation using one of the techniques described in Ikelle and Amundsen (2005).
Step 6: Scan the field of virtual events with a velocity-migration; that is, we perform several constant-velocity migrations (e.g., Stolt, 1978) and select the migration result and the corresponding velocity that produce the best-focused image of the subsurface. (See
Algorithmic Steps: Version 3
Step 1: Perform a P/S separation of the data.
Step 2: Use the P-P data to reconstruct the first reflectors, such as the sea-floor reflection in the case of marine data. We also reconstruct the first velocity model. We use the standard velocity-migration method to reconstruct this image and velocity model of the subsurface.
Step 3: Use the P-S data to reconstruct the first reflectors, such as the sea-floor reflection in the case of marine data. We also reconstruct the first velocity model. We use the standard velocity-migration method to reconstruct this image and velocity model of the subsurface.
Step 4: We design the bottom-image generator (BIG) location using the imaging result. The information above the BIG is assumed to be correct, and the remaining model below the BIG is a throwaway.
Step 5: We then use a demigration scheme of the image above the BIG obtained in Step 4 to also define a bottom-internal-multiple generator (BIMG). The demigration scheme will create only data above the BIMG.
Step 6: Create P-P and P-S virtual events using data above and below the BIMG.
Step 7: Perform an up/down separation using one of the techniques described in Ikelle and Amundsen (2005).
Step 8: Scan the field of P-P virtual events with a velocity-migration; that is, we perform several constant-velocity migrations (e.g., Stolt, 1978) and select the migration result and the corresponding velocity that produce the best-focused image of the subsurface. We then define a BIG corresponding to the shallowest set of reflectors which are best-focused. (See
Step 9: Scan the field of P-S virtual events with a velocity-migration; that is, we perform several constant-velocity migrations (e.g., Stolt, 1978) and select the migration result and the corresponding velocity that produce the best-focused image of the subsurface. We then define a BIG corresponding to the shallowest set of reflectors which are best-focused. (See
We start again from Step 3.
In each case we produce an image, which is in the nature of a concrete result viewable by humans. This image may be printed on paper in one color or in more than one color. It may be displayed on a screen in one color or more than one color.
One skilled in the art may, without undue experimentation, devise myriad obvious improvements and variations upon the invention as set forth herein, all of which are intended to be encompassed within the claims which follow.
This application claims the benefit of U.S. application No. 60/747,376 filed May 16, 2006, and U.S. application No. 60/747,921, filed May 22, 2006, each of which is hereby incorporated by reference for all purposes.
Number | Name | Date | Kind |
---|---|---|---
4682307 | Newman | Jul 1987 | A |
4760563 | Beylkin | Jul 1988 | A |
4766574 | Whitmore, Jr. et al. | Aug 1988 | A |
5051960 | Levin | Sep 1991 | A |
5629905 | Lau | May 1997 | A |
5812493 | Robein et al. | Sep 1998 | A |
5986973 | Jericevic et al. | Nov 1999 | A |
5987389 | Ikelle et al. | Nov 1999 | A |
5995905 | Ikelle et al. | Nov 1999 | A |
6094400 | Ikelle | Jul 2000 | A |
6094620 | Gasparotto et al. | Jul 2000 | A |
6101448 | Ikelle et al. | Aug 2000 | A |
6327537 | Ikelle | Dec 2001 | B1 |
6507787 | Filpo Ferreira Da Silva et al. | Jan 2003 | B1 |
6678207 | Duren | Jan 2004 | B2 |
6745129 | Li et al. | Jun 2004 | B1 |
6763304 | Schonewille | Jul 2004 | B2 |
6832161 | Moore | Dec 2004 | B1 |
20020118602 | Sen et al. | Aug 2002 | A1 |
20040059517 | Szajnowski | Mar 2004 | A1 |
20040196738 | Tal-Ezer | Oct 2004 | A1 |
20050117451 | Robertsson | Jun 2005 | A1 |
20050180262 | Robinson | Aug 2005 | A1 |
20050286344 | Li et al. | Dec 2005 | A1 |
20080106974 | Bergery | May 2008 | A1 |
Number | Date | Country
---|---|---
0541265 | May 1993 | EP
2090407 | Jul 1982 | GB
2312281 | Oct 1997 | GB
Number | Date | Country
---|---|---
20080162051 A1 | Jul 2008 | US
Number | Date | Country
---|---|---
60747376 | May 2006 | US
60747921 | May 2006 | US