In the related application mentioned above, processes are described that assist with the identification of potential hydrocarbon deposits. These processes include performing a structural interpretation of a three-dimensional seismic volume, transforming the three-dimensional seismic volume into a stratal-slice volume, performing a stratigraphic interpretation of the stratal-slice volume, which includes the extraction of bounding surfaces and faults, and transforming the stratal-slice volume back into the spatial domain. As illustrated, an exemplary seismic volume before domain transformation is presented in
Three-dimensional seismic data has been used to explore the Earth's crust for over 30 years, yet the imaging and subsequent identification of geologic features in the data remains a time-consuming manual task. Most current approaches fail to realistically model many 3-D geologic features and offer no integrated segmentation capabilities. In the image processing community, image structure analysis techniques have demonstrated encouraging results through filters that enhance feature structure using partial derivative information. These techniques are only beginning to be applied to the field of seismic interpretation, and the information they generate remains to be explored for feature segmentation. Dynamic implicit surfaces, implemented with level set methods, have shown great potential in the computational sciences for applications such as modeling, simulation, and segmentation. Level set methods allow implicit handling of complex topologies deformed by operations in which large changes can occur without destroying the level set representation. Many real-world objects can be represented as an implicit surface, but further interpretation of those surfaces, such as the growth and segmentation of plane-like and high positive curvature features, is often severely limited. In addition, the complexity of many evolving surfaces requires visual monitoring and user control in order to achieve preferred results.
Despite volatile economic conditions, long-term trends suggest that worldwide demand for energy will continue to grow in order to support continued world industrialization and improvement of living standards. Because an efficient and cost-effective global infrastructure for production and distribution of oil and gas already exists, it is expected to remain a convenient and economical source of energy for the foreseeable future. Extracting oil and gas from the Earth's crust begins with collecting sub-surface data and analyzing it for potential reservoirs and geologic features important to drilling and production. Unfortunately, most of the easy oil fields in the world have already been discovered, and therefore current exploration efforts focus on difficult-to-reach fields or missed plays in already developed fields.
Analyzing the seismic data used to locate reservoirs is a complex task due to the data's unique layered structure, the difficulty in identifying features, and the large size of data sets. In addition, increased acquisition has created an explosion in the amount of seismic data that needs to be analyzed, yet the oil industry is experiencing a drastic shortage of interpreters as the current generation nears retirement. This presents a great opportunity to develop computer-aided techniques that assist geoscientists in recognizing features in seismic datasets.
A similar problem exists in the medical imaging community for analyzing CT and MRI scans of patients, as well as data generated by the Visible Human Project. Research in this area has developed many fundamental techniques in image structure analysis and surface segmentation for 3-D volumetric data. There are many parallels between the features represented in medical and seismic datasets (e.g., depositional channel features have a similar character to vascular systems), and techniques developed for medical imaging can be applied when considerations such as noise character, local orientations, and the structure of features specific to seismic datasets are taken into account.
Image structure analysis techniques enhance feature structure using partial derivative information. This exemplary approach is powerful because it employs a combination of first and second order derivative information to differentiate between a wide variety of structures. These techniques are only beginning to be applied to the field of seismic interpretation and the information they generate remains to be explored for feature segmentation. This presents an opportunity to adapt image structure analysis in order to create representations for geologic features that can be used to allow for easier segmentation.
Surface segmentation separates features from background data using either an implicit or explicit representation of a surface. Until recently, most published work in computer graphics and vision for imaging applications has used explicit surfaces constructed from triangles. Triangulated surfaces require extreme care when discontinuous topological changes occur, and smoothness is difficult to guarantee. In addition, there is no guarantee that the result of an explicit surface will be physically realizable. Implicit surfaces are represented volumetrically using level set methods and have an advantage over explicit surfaces in how easily dynamic topological changes and geometric quantities, such as normals and curvatures, are determined. Also, the results of level set simulations are physically realizable implicit surface models, which is desirable when attempting to represent geologic features.
Three-dimensional seismic data interpretation is a challenge in both imaging and segmentation where the ultimate goal is to automatically segment features contained in a data set. Unfortunately, the science of geology has many unknowns, and the seismic data used to represent it requires a trained eye and subjective analysis that cannot be reliably automated. Similar problems are manifested in other imaging fields, such as when an evolving surface segmentation requires human knowledge to proceed, possibly due to poor imaging or a highly complex feature. In order to account for unknowns in the data, visual monitoring and user control of the segmentation, informed by domain knowledge, are necessary; this is a non-trivial exercise. Therefore, a need and opportunity exists to incorporate image structure analysis and implicit surface modeling into an interactive environment for segmentation. One exemplary embodiment presents a unified approach in the form of an Interactive "Visulation" (simultaneous visualization and simulation) Environment (IVE) designed to efficiently segment geologic features with high accuracy. The IVE unifies image structure analysis and implicit surface modeling as a surface-driven solution that assists geoscientists in the segmentation and modeling of faults, channels, and other geobodies in 3-D seismic data.
An exemplary embodiment of this invention therefore presents a unified approach that combines image structure analysis and implicit surface modeling in an Interactive “Visulation” Environment designed to segment geologic features. The IVE allows geoscientists to observe the evolution of surfaces and steer them toward features of interest using their domain knowledge. In accordance with one exemplary embodiment, the process is implemented on a GPU for increased performance and interaction. The resulting system is a surface-driven solution for the interpretation of 3-D seismic data, in particular for the segmentation and modeling of faults, channels, salt bodies and other geobodies.
It is an aspect of the present invention to provide systems, methods and techniques for data processing.
It is another aspect of this invention to provide systems, methods and techniques for seismic data processing.
It is a further aspect of this invention to provide systems, methods and techniques for 3-D seismic data processing.
Even further aspects of the invention relate to visualizing one or more faults in a data volume.
Even further aspects of the invention relate to visualizing one or more channels in a data volume.
Even further aspects of the invention relate to visualizing one or more salt bodies in a data volume.
Even further aspects of the invention relate to visualizing one or more geobodies in a data volume.
Additional aspects relate to performing structure analysis of an input volume.
Aspects also relate to applying a surface velocity procedure.
Aspects further relate to utilization of a gradient structure tensor to assist with determining an orientation of strata.
Even further aspects relate to using level sets to represent a deformable structure.
Additional aspects relate to usage of velocity of the level set to describe motion of a surface in space and time.
These and other features and advantages of this invention are described in, or are apparent from, the following detailed description of the exemplary embodiments.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The exemplary embodiments of the invention will be described in detail, with reference to the following figures. It should be understood that the drawings are not necessarily shown to scale. In certain instances, details which are not necessary for an understanding of the invention or which render other details difficult to perceive may have been omitted. It should be understood, of course, that the invention is not necessarily limited to the particular embodiments illustrated herein.
The exemplary embodiments of this invention will be described in relation to processing, interpretation, visualization and simulation of data, and in particular seismic data. However, it should be appreciated that, in general, the systems and methods of this invention will work equally well for any type of data representing any environment, object, body or article.
The exemplary systems and methods of this invention will also be described in relation to seismic data interpretation and manipulation. However, to avoid unnecessarily obscuring the present invention, the following description omits well-known structures and devices that may be shown in block diagram form or otherwise summarized.
For purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. However, it should be appreciated that the present invention may be practiced in a variety of ways beyond the specific details set forth herein.
Furthermore, while the exemplary embodiments illustrated herein show the various components of the system collocated, it is to be appreciated that the various components of the system can be located at distant portions of a distributed network, such as a communications network and/or the Internet, or within a dedicated secure, unsecured and/or encrypted system. Thus, it should be appreciated that the components of the system can be combined into one or more devices or collocated on a particular node of a distributed network, such as a communications network. As will be appreciated from the following description, and for reasons of computational efficiency, the components of the system can be arranged at any location within a distributed network without affecting the operation of the system.
Furthermore, it should be appreciated that various links can be used to connect the elements and can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. The term module as used herein can refer to any known or later developed hardware, software, firmware, or combination thereof that is capable of performing the functionality associated with that element. The terms determine, calculate and compute, and variations thereof, as used herein are used interchangeably and include any type of methodology, process, mathematical operation or technique, including those performed by a system, such as a processor, an expert system or neural network.
Additionally, all references identified herein are incorporated herein by reference in their entirety.
In step S130, the interactive visulation and manipulation environment is populated and displayed to a user. Next, in step S140, the "result" is steered and manipulated until a satisfactory representation is developed; this revising and steering of the result uses the level set technique that was initialized in step S120. Then, in step S150, a determination is made whether the representation is satisfactory. If so, control continues to step S160; otherwise, control jumps back to step S130 for further revision and adjustment of one or more parameters.
In step S160, one or more segmented surfaces that include a visulation of one or more features, such as geologic features, are saved and/or output.
The structure analysis module further includes a gradient structure tensor module 162 and a Hessian tensor module 164.
The operation of the above elements will now be discussed in relation to the corresponding overall theory behind an exemplary embodiment of this invention. Structure analysis of a 3-D dataset (input volume) finds its roots in the field of image processing, where the structure of a 2-D image is represented as gradients, edges, or similar types of information. This translates to 3-D data, where gradients, edges, curvature, and other image elements can be represented in three dimensions. This information is gained by calculating derivatives and partial derivatives, and then analyzing the vector representation of magnitude changes of pixel (or, in 3-D, voxel) values. In two dimensions, the orientation of maximum change in an image corresponds to Equation 1, θ = arctan(Iy/Ix), where Ix is the partial derivative of image I in the x-direction, and Iy is the partial derivative in the y-direction.
The vector resulting from this is directed according to the ordering of pixel points (high-to-low or low-to-high values) and points along the orientation of the angle θ, which varies over [0,π), with a magnitude given by g = √(Ix² + Iy²). Another helpful way to consider this vector is as the normal vector to a gradient contour in the image, an interpretation that will make more sense when working with level sets hereinafter. The calculation of the Ix, Iy partial derivatives (Equation 2) can be accomplished using standard central differences between neighboring pixels (voxels), e.g. Ix ≈ [I(x+1,y) − I(x−1,y)]/2, or more robustly by convolving neighboring voxels with a Gaussian mask over a range of voxels and then taking the difference of the Gaussian-smoothed neighbors.
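By way of illustration only (and not part of the original disclosure), the following NumPy/SciPy sketch computes Gaussian-smoothed partial derivatives and the resulting orientation and magnitude; the volume contents and the smoothing scale are placeholder assumptions:

import numpy as np
from scipy import ndimage

def gaussian_gradient(I, sigma=1.0):
    # Derivative-of-Gaussian filters: smooth and differentiate in one pass.
    Ix = ndimage.gaussian_filter(I, sigma, order=(1, 0, 0))
    Iy = ndimage.gaussian_filter(I, sigma, order=(0, 1, 0))
    Iz = ndimage.gaussian_filter(I, sigma, order=(0, 0, 1))
    return Ix, Iy, Iz

I = np.random.rand(64, 64, 64)      # stand-in for a seismic volume
Ix, Iy, Iz = gaussian_gradient(I)
theta = np.arctan2(Iy, Ix)          # orientation of maximum change (cf. Equation 1)
g = np.hypot(Ix, Iy)                # gradient magnitude in the x-y plane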
The orientation of seismic strata is generally not horizontal (parallel to the ground plane), which means that filtering techniques used on seismic images must take local orientations into account; otherwise, undesired blurring across horizons will inevitably result, as in the case of mean and median filtering. To measure the orientation of seismic strata, the gradient structure tensor (GST) is used. For a local neighborhood I(x,y,z) in a 3-D image, the GST is given by Equation 3, the outer product of the image gradient with itself:

T = ∇I∇Iᵀ =
[ IxIx IxIy IxIz ]
[ IxIy IyIy IyIz ]
[ IxIz IyIz IzIz ] Equation 3
Since the GST represents an orientation rather than a direction, this formulation allows the blurring of tensors in a way that lets vectors pointing in opposite directions support an orientation rather than counteract each other. In addition, the GST is a 3×3 positive semidefinite matrix, a property that is preserved under Gaussian convolution.
Using Gaussian convolution to average the tensors creates a more robust representation of the orientation field. The eigenanalysis of the GST provides information about the local orientation and coherence of the seismic data. Eigenvectors define a local coordinate axis, while eigenvalues describe the local coherence, which represents the strength of the gradient along the respective eigenvectors. The dominant eigenvector represents the direction of the gradient orthogonal to the seismic strata, while the two smaller eigenvectors span a plane parallel to the seismic strata. Near faults or other discontinuities in the data, the strength of the dominant eigenvector before Gaussian smoothing is not sufficient to confidently define a plane orthogonal to the strata (see FIG. 3: the seismic strata (red-to-blue layering) are rarely perfectly horizontal; the green surface depicts the correct local coordinate system for a small section of the volume). However, after Gaussian smoothing of the tensors, a more confident eigenstructure is obtained at faults and discontinuities that more accurately represents the true orientation. The orientation of the respective eigenvectors provides a robust estimate of the local orientation at each point in the image. This orientation may be described by two angles, the dip angle θ and the azimuth angle φ, using the three components of the eigenvector (ex, ey, ez) as defined by

φ = arctan(ey/ex) and θ = arccos(ez),
where 0°<φ<360° and 0°<θ<180°.
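For illustration, a NumPy/SciPy sketch of this eigenanalysis follows; the smoothing scales and the spherical-angle conventions for dip and azimuth are assumptions made for the example, not a reproduction of the original implementation:

import numpy as np
from scipy import ndimage

def gradient_structure_tensor(I, grad_sigma=1.0, tensor_sigma=2.0):
    # Gradient by derivative-of-Gaussian filters, one axis at a time.
    g = [ndimage.gaussian_filter(I, grad_sigma,
                                 order=tuple(int(i == a) for i in range(3)))
         for a in range(3)]
    # Six unique components of the symmetric 3x3 tensor, each smoothed independently.
    T = np.empty(I.shape + (3, 3))
    for a in range(3):
        for b in range(a, 3):
            T[..., a, b] = T[..., b, a] = ndimage.gaussian_filter(g[a] * g[b],
                                                                  tensor_sigma)
    return T

T = gradient_structure_tensor(np.random.rand(32, 32, 32))
w, v = np.linalg.eigh(T)                 # eigenvalues in ascending order
e = v[..., :, 2]                         # dominant eigenvector: normal to the strata
# Note: the sign of an eigenvector is arbitrary; angles fold accordingly.
dip = np.degrees(np.arccos(np.clip(e[..., 2], -1.0, 1.0)))
azimuth = np.degrees(np.arctan2(e[..., 1], e[..., 0])) % 360.0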
The Hessian is determined as the matrix of the second-order partial derivatives of the image (or volume). The Hessian tensor is given by

H =
[ Ixx Ixy Ixz ]
[ Ixy Iyy Iyz ]
[ Ixz Iyz Izz ] Equation 5
where the second partial derivatives of the image I(x,y,z) are represented as Ixx, Iyy, Izz, and so forth. The eigenvalues of this tensor are ordered as λ1>λ2>λ3 and their corresponding eigenvectors as v1, v2, v3, respectively. Using the eigenvalues, this tensor can classify local second-order structures that are plane-like, line-like, and blob-like. The conditions under which the eigenvalues describe these features are as follows:
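The specific inequality conditions are supplied here as a reasonable reconstruction from the standard Hessian-filtering literature (e.g., Sato et al.), stated for bright features on a dark background and consistent with the signed ordering λ1>λ2>λ3:

plane-like (sheet): λ3 ≪ 0 and λ1 ≈ λ2 ≈ 0;
line-like (tube): λ2 ≈ λ3 ≪ 0 and λ1 ≈ 0;
blob-like: λ1 ≈ λ2 ≈ λ3 ≪ 0.

For dark features on a bright background, the signs are reversed.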
Level sets are an implicit representation of a deformable surface. One advantage of level set methods is that, instead of the surface being manipulated directly, the surface is embedded as the zero level set of a higher-dimensional function called the level set function. The level set function is then evolved such that, at any time, the evolving surface can be implicitly obtained by extracting the zero level set.
An implicit representation of a surface consists of all points S = {i | φ(i) = 0}, where φ: ℝ³ → ℝ. Level sets relate the motion of the surface S to a PDE on the volume as

∂φ/∂t + V|∇φ| = 0 Equation 4
where V describes the motion of the surface in space and time. This framework allows for a wide variety of deformations to be implemented by defining an appropriate V. This velocity (or speed) term can be combined with several other terms such as geometric terms (e.g. mean-curvature) and image-dependent terms. Equation 4 is sometimes referred to as the level set equation.
The initial level set must be represented as a signed distance function, where each level set is given by its distance from the zero level set. The distance function is signed so that the inside and outside of the level set surface are differentiated. For this work, all points contained within the level set surface are considered to have negative distances. The distance function is computed using a technique that solves the Eikonal equation |∇φ| = 1, which is commonly done using the fast marching method or the fast sweeping method. This equates to a surface expanding in the normal direction with unit speed and can be considered a special case of the level set function.
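As a minimal illustration of this initialization, the sketch below uses SciPy's Euclidean distance transform as a stand-in for a fast marching or fast sweeping Eikonal solver; the seed geometry is a placeholder:

import numpy as np
from scipy import ndimage

def signed_distance(seed_mask):
    # Negative inside the seed, positive outside, zero near the boundary.
    inside = ndimage.distance_transform_edt(seed_mask)
    outside = ndimage.distance_transform_edt(~seed_mask)
    return outside - inside

seed = np.zeros((64, 64, 64), dtype=bool)
seed[28:36, 28:36, 28:36] = True     # small cubic seed region
phi = signed_distance(seed)          # initial level set function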
The surface integral (surface area) and the volume integral of the surface S can be easily defined using the implicit representation of the level set. The Dirac delta function on the interface is defined as

δ(φ(i))|∇φ(i)| Equation 7

and the Heaviside function (the integral of the Dirac delta function) as

H(φ(i)) = 1 if φ(i) > 0, 0 otherwise Equation 8

Using these functions one can derive the surface area integral (in 3-D)

Area(S) = ∫ δ(φ(i))|∇φ(i)| di Equation 9

and the volume integral

Vol(S) = ∫ H(−φ(i)) di Equation 10

where the sign reflects the convention above that points inside the surface have negative distances. Additional intrinsic geometric properties of the implicit surface can be easily determined using this formulation. For instance, the normal is computed on the level set as

n = ∇φ/|∇φ| Equation 11

and the curvature is obtained as the divergence of the normal as

κ = ∇·(∇φ/|∇φ|) Equation 12
The level set equation (Equation 6) contains a velocity term V. The velocity of the level set is a representation that describes the motion of the surface in space and time. This framework allows for a wide variety of deformations to be implemented by a combination of global, geometric, and image-dependent terms, depending on the application area. Equation 13 gives a basic template of a velocity equation as the combination of two data-dependent terms and a surface topology term:

∂φ/∂t = αD|∇φ| + β∇·(∇φ/|∇φ|)|∇φ| + γ∇A·∇φ Equation 13

The D term is a propagating advection term scaled according to α in the direction of the surface normal. The term ∇·(∇φ/|∇φ|) is the mean-curvature of the surface defined in Equation 12, and its influence is scaled by β. The final term, ∇A·∇φ, is the dot product of the gradient vector of an advection field with the surface normal (scaled by |∇φ|), and its influence is scaled by γ.
Velocity functions are considered that contain terms of advection and diffusion. It is important to understand the difference between these flows in the level set context: advective flow is a propagation of finite speed in a certain direction, while diffusive flow is defined everywhere in all directions. The numerical analysis of these terms relates to solving a hyperbolic PDE for advection, which is solved using an upwind scheme, and a parabolic PDE for diffusion, which is solved by central differences. In this scheme, stability can be enforced by using the Courant-Friedrichs-Lewy (CFL) condition, which states that numerical waves should propagate at least as fast as physical waves. Therefore, the time step used for iterating the level set must be less than the grid spacing divided by the fastest velocity term in the domain. The time step is restricted based on the velocity term as shown in Equation 14, where v(i) is the velocity calculated at voxel i and Δx, Δy, and Δz are the grid spacings in the three dimensions:

Δt < min(Δx, Δy, Δz) / maxᵢ|v(i)| Equation 14
With a velocity function consisting of advective and diffusive terms, image-based scaling factors can be used to guide the terms, such as ones derived from volume attributes. Hereinafter, a unique set of velocity functions is developed for evolving surfaces to segment geologic features in seismic data.
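To make the preceding machinery concrete before specializing it, the following NumPy sketch performs one explicit update combining a propagation speed D with mean-curvature smoothing; the first-order upwind scheme, the sign conventions (inside negative, positive D grows the surface), the wrap-around boundaries from np.roll, and the crude step-size cap are simplifying assumptions for illustration:

import numpy as np

def curvature_times_gradmag(phi, eps=1e-8):
    # kappa * |grad phi|, with kappa the divergence of the normalized gradient.
    gx, gy, gz = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2 + gz**2) + eps
    kappa = (np.gradient(gx / mag, axis=0) +
             np.gradient(gy / mag, axis=1) +
             np.gradient(gz / mag, axis=2))
    return kappa * mag

def upwind_gradmag(phi, F):
    # Godunov-style first-order upwind |grad phi|, chosen by the sign of the speed F.
    total = np.zeros_like(phi)
    for ax in range(3):
        fwd = np.roll(phi, -1, axis=ax) - phi   # forward difference (wraps at edges)
        bwd = phi - np.roll(phi, 1, axis=ax)    # backward difference
        total += np.where(F > 0,
                          np.maximum(bwd, 0)**2 + np.minimum(fwd, 0)**2,
                          np.maximum(fwd, 0)**2 + np.minimum(bwd, 0)**2)
    return np.sqrt(total)

def level_set_step(phi, D, alpha=0.5, beta=0.5):
    # phi_t = -alpha*D*|grad phi| + beta*kappa*|grad phi|; positive D grows the
    # (negative-inside) region, while the curvature term smooths the surface.
    delta = -alpha * D * upwind_gradmag(phi, alpha * D)
    delta += beta * curvature_times_gradmag(phi)
    dt = 0.9 / (np.abs(delta).max() + 1e-8)     # crude CFL-style step-size cap
    return phi + dt * delta

phi = np.random.rand(32, 32, 32) - 0.5          # stand-in level set function
D = np.random.rand(32, 32, 32)                  # stand-in propagation speed volume
phi = level_set_step(phi, D)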
Level set motion by mean curvature is considered such that the interface moves in the normal direction with a velocity proportional to its curvature v=−bκ where b>0 is a constant and κ is the mean curvature defined in Equation 15.
For b>0 the front moves in the direction of concavity, such that circles (in 2-D) or spheres (in 3-D) shrink to a single point and disappear (see
Returning to further functionality of the structure analysis module: after noise in the seismic data is removed by conducting anisotropic smoothing along stratigraphic layers, the result is a new seismic volume with attenuated noise and enhanced features. The next step is to use structure analysis for extracting information that helps identify data features. First, a more robust representation of the orientation field given by the structure tensor is computed using Gaussian convolution, which averages the tensor orientations. Next, the eigenanalysis of the smoothed structure tensor can be computed in order to provide the local orientations as well as indications of singularities in the data volume. Because the GST is symmetric and positive semidefinite, three real eigenvalues and eigenvectors will be found. The eigenvectors define a local coordinate axis while the eigenvalues describe the local coherence, which represents the strength of the gradient along each respective eigenvector. Potential critical points are located in the data volume by using the three-dimensional gradient magnitude given by Equation 16.
|∇I| = √(Ix² + Iy² + Iz²) Equation 16
The gradient magnitude is a simple and powerful technique for detecting singularities. When isolating medial surfaces in a distance transform volume, singularities are defined by areas of low gradient magnitude; the opposite is used when identifying channel edges from a seismic volume. After being isolated, singularities can be classified as 1-saddles, 2-saddles, and maxima, as depicted in
A structure analysis technique to specifically enhance geologic channels in seismic data is described hereinafter. Previous sections have described more general approaches to enhancing and locating features using the first-order structure tensor. Now, a mathematical model given by the second order tensor (Equation 5), called the Hessian matrix, is used to enhance translation invariant second order structures in a seismic dataset. The type of seismic data used in this section is one that has been smoothed to enhance continuous stratigraphy more than small discontinuities.
In similar work, Frangi et al. exploited the eigenvalues of the Hessian in order to develop an MRI vessel enhancement filter. This resulted in a vesselness function that integrated all three eigenvalues, computed at a range of scales, in order to detect vessels of various sizes. Sato et al. expanded on this work and used the Hessian to detect sheets and blobs in 3-D images. Seismic channels represent a domain-specific image feature that cannot be appropriately modeled using either of these existing techniques of Hessian analysis, due to a channel's unique layered structure. For that reason, a new confidence- and curvature-based attribute has been created that is able to enhance channel features and provide the terms for a PDE used in segmentation.
Bakker et al. detected channels in 3-D seismic datasets by using the first-order structure tensor (GST) to identify the location of features while honoring seismic orientation. In particular, they used an oriented GST and enhanced features while removing noise by filtered eigenanalysis. Through their oriented representation, they were able to create a curvature-corrected structure tensor that accounted for line-like and plane-like curvilinear structures. They obtained a confidence measure from the eigenvalues of the transformed GST, where larger eigenvalues provide stronger confidence in the orientation estimation. Their unique approach to extracting curvature information uses a parabolic transformation of the GST, which yields a curvature-corrected confidence measure that is maximized for the transformation most closely resembling local structure.
The exemplary method presented herein is similar to that of Bakker et al. in how confidence and curvature information is obtained from image structure analysis. However, there is a significant difference: the approach presented here uses the second-order tensor to directly extract confidence and curvature information with no intermediate transformation. The second-order tensor has the advantage of directly providing this information without the need for a parabolic transformation. Concerns are often raised about error in second-order calculations, which can result in unstable tensor fields. This problem is largely overcome by applying tensor smoothing across the volume using a Gaussian kernel, which stabilizes the tensor components without destroying the Hessian representation. The confidence and curvature information is later used to drive a segmentation process for completely extracting channel features, something that was not considered in previous work.
A measure of confidence and curvature in seismic data will correspond to regions of high depositional curvature that present a strong and confident amplitude response; this description maps well to the imaging of stratigraphic features such as channels. One exemplary goal is to define a channelness measure that captures the specific structure associated with channels. The first eigenvector v1 and its corresponding eigenvalue λ1 are a primary focus. Due to the layered structure of channels, they are approximated as planar features with high curvature along the gradient direction (
Since channels generally have a relatively constant cross section, the second order tensor is smoothed with a single Gaussian sigma value. Choosing a sigma value that approximates the distance across a channel results in optimal enhancement.
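A sketch of this enhancement is given below. The text describes the channelness measure only qualitatively (confidence from λ1, plane-like structure, high curvature along the gradient), so the specific combination used here, rewarding a strongly negative minimum eigenvalue while penalizing the remaining eigenvalues, is a hypothetical stand-in rather than the disclosed attribute:

import numpy as np
from scipy import ndimage

def hessian_tensor(I, sigma):
    # Gaussian-smoothed second-derivative (Hessian) components of the volume.
    H = np.empty(I.shape + (3, 3))
    for a in range(3):
        for b in range(a, 3):
            order = [0, 0, 0]
            order[a] += 1
            order[b] += 1
            H[..., a, b] = H[..., b, a] = ndimage.gaussian_filter(I, sigma,
                                                                  order=tuple(order))
    return H

def channelness(I, sigma=3.0):
    # sigma should approximate the distance across a channel (see text).
    w = np.linalg.eigvalsh(hessian_tensor(I, sigma))  # ascending eigenvalues
    lam_min = w[..., 0]            # most negative: curvature across a bright sheet
    rest = np.abs(w[..., 1]) + np.abs(w[..., 2])
    return np.maximum(-lam_min, 0.0) / (1.0 + rest)   # hypothetical combination

c = channelness(np.random.rand(32, 32, 32))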
Enhancing fault features directly from the seismic volume using the first-order structure tensor is described hereinafter. A long-established line of research in seismic interpretation has led to the characterization of geologic faults based on their structural character: faults are discontinuities in the strata that extend vertically. This characterization is the focus in developing a function that returns positive propagation values for features and negative propagation values for non-features.
Typically, faults are enhanced from raw seismic datasets using a 3-step approach: vertical discontinuities are detected, those discontinuities are enhanced laterally in 2-D, and then they are enhanced again laterally and vertically in 3-D. While this is an over-simplification of the fault enhancement technique, it should still be apparent that faults are never enhanced directly from a seismic volume; instead, a number of cascaded techniques are used to create a final volume that measures fault likelihood. An effective implementation of this technique provided by TerraSpark Geosciences (B. J. Kadlec, H. M. Tufo, Medial Surface Guided Level Sets for Shape Exaggeration, IASTED Visualization, Imaging, and Image Processing (VIIP), Special Session on Applications of Partial Differential Equations in Geometric Design and Imaging, September 2008) generates a measure of Fault Enhancement, which is essentially a probability that represents the likelihood for a fault to exist at a given voxel in the volume. The problem with using a number of cascaded attribute volumes to enhance fault structure is that incorrect information can be added anywhere along the pipeline, and it is difficult to trace the source of this misinformation. Although these cascaded techniques are computationally efficient and produce reliable, high-quality representations of fault features, it is still beneficial to generalize the approach to a single function that can be computed directly from the seismic data.
Therefore, it is desired to directly enhance faults from the seismic data using a single function. This can be accomplished by looking for discontinuous features in the seismic strata by calculating the variance across the strata. In particular, discontinuities can be located along the seismic strata defined by the two smaller eigenvectors of the structure tensor. Given this representation, the first step is to compute the variance within a user-defined planar window along the strata of the voxel under consideration. Next, moving along the positive and negative directions of the dominant eigenvector and using the same planar window, additional variances are calculated and summed together. The summation of these variances completes the fault attribute computation. Other fault imaging techniques, like coherence, semblance, or continuity, follow a similar approach and achieve comparable results in recognizing these discontinuous regions. Part of what makes this approach unique is that the local strata are used to guide the vertical summation of variances, which differs from traditional approaches that assume perfectly horizontal strata layering.
The function ƒ(x,y,z) in Equation 18 results in a scalar value such that higher values correspond to a greater likelihood of a fault and lower values correspond to a lower likelihood of a fault. It will be shown that this analysis of discontinuities in seismic data can be used to directly guide level sets for fault extraction. As discussed hereinafter, a technique is described for the enhancement of fault features calculated on the fly directly in seismic data during level set surface evolution.
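The following unoptimized sketch illustrates the variance summation just described; the window and step sizes, the eigenvector field name, and the nearest-voxel sampling are illustrative assumptions:

import numpy as np

def fault_attribute(I, e1, half_window=2, steps=2):
    # e1[p] is the (unit-length) dominant GST eigenvector, i.e. the strata normal, at voxel p.
    def plane_variance(center, n):
        # Sample a square window in the plane orthogonal to n.
        u = np.cross(n, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-6:           # n is parallel to z: use another axis
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        samples = []
        for i in range(-half_window, half_window + 1):
            for j in range(-half_window, half_window + 1):
                q = np.rint(center + i * u + j * v).astype(int)
                if np.all(q >= 0) and np.all(q < I.shape):
                    samples.append(I[tuple(q)])
        return np.var(samples) if samples else 0.0

    out = np.zeros(I.shape)
    for p in np.ndindex(I.shape):
        n = e1[p]
        # Sum window variances at offsets along +/- the local strata normal.
        out[p] = sum(plane_variance(np.asarray(p, float) + s * n, n)
                     for s in range(-steps, steps + 1))
    return out

I = np.random.rand(16, 16, 16)                 # stand-in seismic volume
e1 = np.zeros(I.shape + (3,)); e1[..., 2] = 1.0  # stand-in: vertical strata normal
attr = fault_attribute(I, e1)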
A new technique for segmenting channel features from 3-D seismic volumes is discussed in relation to and supplemental to previous teachings as well as
The confidence and curvature analysis of the Hessian allows for the volumetric enhancement of features, but it does not complete the segmentation required to fully represent a channel. Recall that 3-D image segmentation can be accomplished explicitly in the form of a parameterized surface or implicitly in the form of a level set. As described, the level set is the preferred technique because of its ability to handle complex geometries and topological changes, among other reasons. The level set method requires additional information about the regions to be segmented in order to drive the propagation of the implicit surface; this is commonly provided in the form of a scalar speed function that defines propagation speeds in the surface normal direction. Feddern et al. recently described a structure-tensor-valued extension of curvature flow for level sets. Their work generalized the use of the structure tensor for mean curvature flow by utilizing image tensor components in the curvature calculation. One exemplary embodiment expands on the generalization of Feddern et al. by allowing a level set surface to evolve towards specific features using a propagation speed given by a tensor-derived channelness term, an advection motion also based on the channelness term, and mean-curvature motion to encourage a smooth final segmentation.
In order to guide the level set evolution towards channel features, the velocity equation comprises two data-dependent terms and the mean-curvature term. The level set evolution is therefore defined as the combination of three terms as shown in Equation 19:

∂φ/∂t = αD|∇φ| + β∇·(∇φ/|∇φ|)|∇φ| + γ∇A·∇φ Equation 19
The D term is a propagating speed term defined by the channelness (Equation 17) and scaled according to α in the direction of the surface normal. The term ∇·(∇φ/|∇φ|) is the mean-curvature of the surface, and its influence is scaled by β. The final term, ∇A·∇φ, is the dot product of the gradient vector of the advecting force, defined as the inverse channelness, with the surface normal; the advecting inverse channelness gradient is scaled by γ. The contribution of each of these terms is generalized in
The combination of two image-fitting functions with a mean-curvature term is necessary to achieve realistic channel segmentation. The propagating channelness term is derived from the second-order structure tensor and drives the segmentation into regions with a high likelihood of containing a channel feature. This representation is appealing because the physical process being calculated in this term can be interpreted as an external convection field. Although far from realistically simulating the ancient fluid flow that created the channel, the channelness-guided propagation follows convective laws used in the erosion and deposition of a flowing medium and therefore has physical meaning. As channelness highlights the interior of a channel, the gradient of its inverse highlights feature boundaries and edges. Using this gradient as an advecting force represents the way in which the evolving surface moves towards channel edges when parallel to them, but does not cross over the edge. When driven by the channelness propagation, this advecting force acts like the bank of an ancient channel where the flowing medium is forced to stop and move parallel along the edge. The mean-curvature of the surface is useful for alleviating the effects of noise in the image by preventing the surface from leaking into non-channel regions and maintaining a smooth representation. The combined contribution of these terms can be adjusted using the α, β, and γ constants according to the nature of the feature being segmented. In general, an equal contribution value of ⅓ for each term is sufficient to accurately segment the channel. In the case of a greatly meandering channel, the mean-curvature term (β) should be de-emphasized in order to allow a more sinuous segmentation.
The results of segmentation using confidence- and curvature-guided level sets are shown for channels from two different 3-D seismic volumes. In practice, geoscientists prefer to manually define the centerline of a channel they hope to segment, since this is a relatively quick step compared to manually interpreting the entire 3-D channel surface, which requires far more time. For this reason, the initial seed used in each of the segmentations was a one-voxel-wide tube manually drawn to approximate the center of the channel from end to end. Level set seeding is discussed in further detail hereinafter.
The channel in
The channel shown in
In
In general, for this application, it is not desired to have a single parameter-set used for all channels, since over geologic time channels often deposit on top of one another. When this happens it becomes necessary to differentiate between two intersecting features by manually choosing these parameters. For this reason, the technique was developed such that a limited amount of user-control is necessary in order to allow semi-automated segmentation of channels.
This section focuses on representing planar level sets in 3-D for the purposes of fault segmentation. This is challenging for implicit surface modeling since a planar fault surface is a 2-D manifold in three dimensions, which is difficult both to represent and to compute. The reason for these problems is that derivatives are not defined everywhere on the fault manifold (for instance, at its edges), and that implicit surfaces require an inside and an outside of a surface to be defined, but a manifold has no inside points. Therefore, the approach taken is to represent a segmented fault as the bounding surface of the fault's 2-D manifold (see
The starting point for segmenting faults is the initial seeds, which are assumed to be either manually picked or automatically extracted. Level set seeding is covered in more detail hereinafter. Next, the initial seeds are represented as an implicit surface, which then requires a velocity function to drive growth for the accurate segmentation of faults. A natural representation for this function can be derived from the approaches described above. Given the success gained from using a fault likelihood measure for highlighting faults, this measurement is used as a basis for the level set velocity function. The fault likelihood is a scalar byte value ƒ from (0-255), and it can be thresholded for the level set velocity in a number of different ways. The goal of thresholding on the fault likelihood is to encourage growth in regions of high fault likelihood and shrinking in regions of low fault likelihood. The T term in the fault likelihood function specifies a threshold value around which faults are segmented. For the case of the sawtooth form (Equation 20), all voxels in the volume greater than T will grow the level set and all voxels less than T will contract it. For the case of the triangle form (Equation 21), all voxels within plus or minus some range (ε) of T will grow the level set, while all voxels outside of this range will contract it. The result of their corresponding speed functions is shown in
F(i)=i−T Equation 20
F(i)=ε−|i−T| Equation 21
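Both threshold forms are simple enough to state directly as code; the threshold and range values below are illustrative placeholders:

import numpy as np

def sawtooth_speed(i, T):
    # Equation 20: grow above the threshold, contract below it.
    return i.astype(float) - T

def triangle_speed(i, T, eps):
    # Equation 21: grow only within +/- eps of the threshold.
    return eps - np.abs(i.astype(float) - T)

likelihood = np.random.randint(0, 256, (8, 8, 8))  # stand-in fault-likelihood bytes
F1 = sawtooth_speed(likelihood, T=128)
F2 = triangle_speed(likelihood, T=128, eps=32)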
The threshold-based speed function is combined into the level set equation given as:

∂φ/∂t = αF(I)|∇φ| + β∇·(∇φ/|∇φ|)|∇φ| Equation 22
where F(I) is the fault likelihood propagation function on volume I, scaled by α. The term ∇·(∇φ/|∇φ|) is the mean curvature of the level set, scaled by β. As in other level set velocity functions, the coefficients α and β designate the amount of influence the terms of the equation have on the overall growth process. This velocity equation becomes more advanced with the addition of a feature exaggeration term, as will be covered hereinafter, and with the use of generalized advection constraints.
When level set growth is determined by parameters of fault likelihood and mean curvature, there is a challenge in determining the proper weighting of these terms in the velocity calculation. The tradeoff is to prevent leaking growth of the fault into undesirable regions while still allowing controlled growth into faulted regions. This tradeoff is controlled by the α and β coefficients. Determining the optimal values of these coefficients required significant testing on a number of different data sets in order to properly model the behavior of fault growth. Computing multiple iterations of the level set evolution with a range of coefficient values allowed for a determination of which coefficients produced the best growth.
In analyzing the results of this process, the advantages of using the level set representation for segmenting fault features should be noted. In
In accordance with one exemplary embodiment, implicit surface visulation is a task that is well suited to being computed on a GPU (Graphics Processing Unit) due to the dense volumetric representation of the level sets and the localized finite differencing used to calculate derivatives. The level set algorithm developed to compute the implicit surface visulation will be described in the context of stream processing, which is a SIMD model of parallel processing described by a data set (stream) and an operation applied to the stream (kernel function). This model of processing is suitable for applications that exhibit high compute intensity, data parallelism, and data locality, all of which are qualities of the implicit surface visulation technique.
The streaming level set implementation comprises three major components: data packing, numerical computation, and visualization. The data packing focuses on optimally storing the 3-D level set function in GPU texture memory such that it can be accessed and indexed efficiently. The numerical computation of the level set should be done in a way that takes advantage of data locality and maximizes the compute intensity of a kernel function during each iteration. The visualization component comprises a marching cubes kernel that extracts and displays the implicit surface at every iteration.
An initial seed point is used to start a level set segmentation, and this seed point should be represented by its signed distance transform in order to enable level sets to be computed. A signed distance transform represents the arrival times of an initial front moving in its normal direction with constant speed, and is negative inside and positive outside of the initial front. As mentioned, this is most often computed on the CPU using the fast marching method, which maintains a heap data structure to ensure correct ordering of point updates. Unfortunately, this technique does not map well to a streaming kernel due to the difficulty of maintaining the heap structure on a GPU. Therefore, an iterative method is used to allow the distance transform to be computed in-stream.
The fast iterative method (FIM) calculates the distance transform used for initializing the level set front. The FIM is an appropriate technique for streaming architectures, like GPUs, due to the way local and synchronous updates allow for better cache coherency and scalability. FIM works by managing a list of active blocks that are iteratively updated until convergence is reached. A convergence measure is used to determine whether or not blocks should be added or removed from the active list through synchronous tile updates.
In the thread decomposition paradigm, the threads that execute a kernel are organized as a grid of blocks. A block is a batch of threads that work together, communicate by sharing data through the local shared memory, and can synchronize their memory accesses. Threads in different blocks cannot communicate or synchronize with each other. At the lowest level, a warp is a subset of threads from a block that gets processed at the same time by the microprocessor. Warps are issued in an undefined order by a thread scheduler and therefore cannot be synchronized, so the lowest level of thread synchronization occurs at the block level. This block independence is what allows the CUDA architecture to scale well: as more processing units are added to future devices, more blocks can be independently computed in parallel.
A block-based updating scheme is used during computation on the IVE such that a block of threads shares resources and works in parallel to update blocks of the solution. In this work, blocks are fixed to a size of 8×8×4, such that 256 threads are executed in parallel and have access to the same region of the volume stored in shared memory. A one-to-one mapping of threads to voxels is used in this implementation, such that a block of 256 threads computes the solution iteratively for blocks of 256 voxels until the entire grid of voxels has been computed. For a grid size of 256³ voxels, it takes approximately 256² (65,536) individual block updates to compute a solution.
A 3-D array mapped to a texture is used to represent a volume on the GPU. The data is stored in 32-bit floating-point for both the input volumes and the level set volumes. It is necessary to store the level set volumes in floating point to ensure accurate calculations. Depending on the application, as many as four input volumes can be necessary for representing scalar values that control level set terms. In addition, at least two level set volumes are allocated for conducting a ping-pong computation where the active and result storage volumes are swapped each iteration. Along with these volumes, three large texture-mapped arrays are allocated for look-up tables to implement the isosurface extraction routine for storing edges, triangles, and numbers of vertices. Lastly, two vertex buffer objects (VBOs) are created for storing triangle vertices and normals used in rendering. It can be seen that this approach is greedy in its use of available GPU memory in order to enable fast computation.
In order to more efficiently move data from global to shared memory on the GPU, it should be stored in global memory (DRAM) in a way that allows reads to be as coalesced as possible. Coalesced memory accesses by a multiprocessor read consecutive global memory locations and create the best opportunity to maximize memory bandwidth. Therefore, packing a volume in global memory with the same traversing order as memory accesses made by the algorithm is the most efficient way to store a volume in global memory. This can be accomplished in a straightforward manner by re-ordering a volume such that 8×8×4 blocks of the volumes occur consecutively in linear memory. Next, the re-ordered volumes in global memory can be mapped to textures, which provides an opportunity for data to be entered in a local on-chip cache (8 KB) with significantly lower latency. With this combined approach, chances are significantly increased that when data is requested from global memory it will be cached either from texture-mapping or when requesting nearby memory locations. Memory reads are thereby optimized as long as there is some locality in the fetches. For the purposes of one exemplary embodiment, non-local texture fetches rarely need to be made since the level set computation requires access only to neighboring voxels in the volume. In practice, texture memory that has been cached can be accessed like an L1 cache (1-2 cycles) as compared to global (non-coalesced) memory reads that require a significant 400-600 cycle latency. In practice, these numbers will vary greatly depending on exact memory access patterns and how often the texture cache needs to be updated, which cannot be controlled by the programmer.
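A NumPy sketch of this re-ordering follows; it mirrors the 8×8×4 blocking described above, though the exact device-side layout is an implementation detail not fully specified in the text:

import numpy as np

def pack_into_blocks(volume, block=(8, 8, 4)):
    # Re-order so that each 8x8x4 block occupies consecutive linear memory.
    bx, by, bz = block
    X, Y, Z = volume.shape
    assert X % bx == 0 and Y % by == 0 and Z % bz == 0
    blocks = volume.reshape(X // bx, bx, Y // by, by, Z // bz, bz)
    # Block indices first, intra-block offsets last, then flatten.
    return np.ascontiguousarray(blocks.transpose(0, 2, 4, 1, 3, 5)).ravel()

# e.g., a 256-cubed volume in practice; a smaller one keeps this demo light.
packed = pack_into_blocks(np.random.rand(64, 64, 64).astype(np.float32))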
There are significant advantages to reading from texture memory as compared to global GPU memory, and doing so is necessary to experience the full benefits of the GPU architecture. Textures act as low-latency caches that provide higher bandwidth for reading and processing data. In particular, textures are optimized for 2-D spatial locality such that localized accesses to texture memory are cached on-chip. Textures also provide for linear interpolation of voxel values through texture filtering, which allows for easy renderings at sub-voxel precision (see FIG. 22).
Since shared memory provides over two orders of magnitude faster access to data than global memory, it should be pre-loaded with data that is expected to be frequently used. Shared memory is first set aside for storing the level set volume, since it is the volume most frequently accessed during computation of the evolving surface (e.g., when calculating finite differences). Blocks of size 8×8×4 comprise 256 floating-point values, or 1 KB of shared memory. Since each SM (Streaming Multiprocessor) has 16 KB of shared memory available, additional data can be stored in the remaining memory. This memory should next be assigned to the bordering voxels around each block (~1 KB) so that level set values computed on block edges do not have to access global memory. Next, 1 KB blocks from any of the feature volumes that provide information for the level set terms, such as the input seismic data or a fault likelihood volume, can be stored.
Since shared memory, caches, and registers are shared by one or more blocks on an SM, there is a limit to how many blocks can be launched at one time. This depends on how many SM resources are required by a single block. Therefore, there exists a tradeoff between block sizes and the use of shared memory, since larger block sizes will require more data to be accessed from shared memory, thereby reducing the amount of data that can be stored in the banks. An additional concern is that the 16 KB of shared memory is divided into 16 banks that allow for simultaneous access. When multiple threads access the same bank at the same time (i.e., a bank conflict), the requests must be serialized, which causes performance to degrade. Block sizes of 8×8×4 provide a good middle ground between resource allocation per thread and being large enough to hide memory latency through many parallel computations.
One of the many advantages of using an implicit surface representation for modeling geologic features, as opposed to an explicit representation like a triangulated mesh, is its ability to dynamically adapt to drastically changing topologies and maintain a stable representation during computation. A disadvantage of the implicit representation is that it poses a challenge for extracting and directly visualizing isosurfaces of the function, something that comes cheaply with an explicit surface representation. The natural way to visualize an implicit surface is direct volume rendering, which renders the implicit surface directly in its native state on the GPU. This could be accomplished by using a ray-marching pixel shader to render the level set directly in the GPU texture. By marching rays through the volume and accumulating densities from the 3-D texture, treated as a stack of 2-D textures, a value for the level set can be rendered. Unfortunately, direct volume rendering is only a rendering technique and may not provide a final form of the surface that can be used for further processing and editing by geoscientists in a geologic model. More important, volume rendering is much more computationally expensive than extracting an isosurface to visualize using marching cubes. In order to assure that the speed of the IVE is as fast as possible, an isosurface extraction technique can be used for visualization instead of volume rendering.
Isosurface extraction using the marching cubes algorithm extracts a triangulated mesh of the level set surface. This approach is desirable since the resulting surface is identical to the level set surface and can be used in the many mesh-based reservoir-modeling tools. For this reason, a new technique was developed for extracting the isosurface of a level set surface using a modified streaming marching cubes algorithm that allows for fast and easy visualization on the GPU. Marching cubes is efficiently implemented to run on the GPU in a way that extracts triangles directly from the level set representation. This approach requires no further processing or intermediate storage of triangles prior to rendering and is therefore able to run at interactive rates.
The first step is to classify each voxel of the level set surface based on the number of triangle vertices it will generate (if any). In this voxel classification step, the goal is to determine whether each vertex of a voxel is inside or outside of the isosurface (i.e., level set) of interest. Next, the process iterates over all voxels in the volume and records the number of vertices that lie within the isosurface. If there are no vertices found for a voxel that lies within the isosurface, that voxel is designated as inactive so that it will be skipped.
The next step is to compact all active voxels into a single array. This is accomplished by iterating over the array that designates whether or not a voxel is active and, where it is active, saving the voxel's index in a compacted array. In order to exploit parallelism during the isosurface extraction, this compaction is performed with a prefix sum (scan) across the volume, which determines which voxels contain vertices and can be computed efficiently in parallel. The scan results in a compacted array that ensures that, for the remaining steps, the only voxels being calculated are truly active. Well-balanced parallelism is then accomplished by evenly dividing this compacted array among GPU stream processors.
The final step is to iterate over the compacted active voxel array and generate triangles for rendering. This is done by simply checking all active voxels in the compacted array and calculating the points of their intersecting triangles. Since the compacted array contains the locations of vertices where the isosurface intersected a given voxel, 3-D point locations are readily available. The three points that make up the triangle are then used to calculate the surface normal for the triangle face using a cross product. The triangle vertices and normal vector are then saved into vertex buffer objects, which are buffers on the GPU for storing geometric data. Finally, the isosurface is displayed by rendering the triangles in the vertex buffer. The normals are used for calculating the lighting and shading of the triangles. The result is a triangulated mesh representation of the implicit surface that is readily visualized on the GPU and can easily be transferred to main system memory for post-processing and editing at the end of a simulation.
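The first two steps map naturally onto array operations; the sketch below flags active cells and compacts their indices with an explicit prefix sum, mirroring the scan-based GPU compaction (NumPy's flatnonzero would give the same result directly; the volume is a placeholder):

import numpy as np

def classify_and_compact(phi, iso=0.0):
    inside = phi < iso
    # A cell is active when its 8 corners are neither all inside nor all outside.
    n0, n1, n2 = inside.shape
    corners = sum(inside[dx:n0 - 1 + dx, dy:n1 - 1 + dy, dz:n2 - 1 + dz].astype(np.int32)
                  for dx in (0, 1) for dy in (0, 1) for dz in (0, 1))
    active = (corners > 0) & (corners < 8)
    flags = active.ravel().astype(np.int64)
    offsets = np.cumsum(flags) - flags        # exclusive prefix sum (scan)
    compacted = np.empty(int(flags.sum()), dtype=np.int64)
    compacted[offsets[flags == 1]] = np.flatnonzero(flags)
    return active, compacted

phi = np.random.rand(32, 32, 32) - 0.5        # stand-in level set volume
active_cells, compacted_indices = classify_and_compact(phi)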
Advanced techniques for level set modeling and a fast GPU solver have been presented, so the prospect of real-time structure analysis conducted on the fly as the level set evolves can now be discussed. Conducting structure analysis on a narrow band around the implicit surface in real time allows geologic features to be imaged on the fly for driving surface evolution (see
The techniques described for using implicit surfaces to model geologic features require many level set terms (propagation, advection, etc.) to be calculated before the implicit surfaces can be computed. The reason for this is largely computational efficiency, since it is more efficient to compute these terms all at once for the entire domain rather than on an as-needed basis. The advent of GPGPU (General-Purpose computation on GPUs) is providing significant changes to this paradigm by allowing for greater interaction and faster responses to data interrogations. The GPGPU paradigm provides sufficient computational power to calculate many level set terms on the fly in a way that steers the level set surface in real time. This removes many of the requirements to pre-process the structure analysis of an input volume before it can be interpreted using level sets. It also provides geoscientists with a more immediate response to their interrogations.
For applying this technique to seismic data features, the structure analysis techniques described need to be written as a kernel. Although the existence of a kernel has already been mentioned, what it means for this particular application has not yet been defined. A kernel refers to a function that is called on a single voxel and returns some value based on the structural analysis of an input dataset and/or topological properties of an implicit surface. This value is intended to be used as a term in the level set evolution. In the previous section, the 3-D edge detector was a kernel used to generate a propagation term.
Following from the definition of a kernel, it seems straightforward to implement the structure analysis techniques as function calls for a given voxel. Unfortunately, this approach does not take advantage of the volumetric representation of the input data or the level set representation, and therefore many redundant calculations result. In particular, in the case where a kernel is being calculated on an input dataset that is constant and never changes, a single voxel may have its kernel calculated multiple times. This is not a problem for a dataset where the ratio of feature to non-feature is very low, meaning the final surface will be small in relation to the size of the volume. In the case where the ratio of feature to non-feature is very large, such as when the final surface encompasses a large part of the input volume, this becomes a significant inefficiency. The reason is two-fold.
First, as already mentioned, as the narrow band of the level set computation moves through the volume, the same voxel may have its kernel calculated multiple times, resulting in redundant calculations. Second, many of the structure analysis techniques require a series of calculations on a region of voxels around the voxel on which the kernel is called. This regional computation becomes more pronounced in operations that conduct smoothing on a region of voxels around a center voxel, as the region of influence can be quite large. When a smoothing operation is cascaded with a subsequent computation that also requires a region of voxels, the smoothing must first be computed for each of the voxels in that region. The result is that for cascaded operations requiring a region of voxels, the kernel paradigm becomes far more computationally intensive than pre-calculating the level set terms at one time. Therefore, the kernels described in this section are adapted for simple structure analysis techniques and assume the input volume has been previously smoothed.
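The blow-up can be made concrete with a back-of-the-envelope operation count; the neighborhood sizes below are hypothetical.

    # Hypothetical support sizes for two cascaded regional operations.
    smoothing_region = 5 ** 3   # 5x5x5 smoothing support = 125 voxels
    derivative_region = 3 ** 3  # 3x3x3 finite-difference support = 27 voxels

    # Pre-calculated: each voxel is smoothed once, then differentiated once.
    precomputed_cost = smoothing_region + derivative_region   # 152 ops/voxel

    # On-the-fly kernel: the smoothing must be recomputed for every voxel
    # in the derivative's support each time the kernel is evaluated.
    kernel_cost = derivative_region * smoothing_region        # 3375 ops/voxel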
In order to calculate structural attributes of faults and channels in seismic data, the local horizon or stratum must first be found at a given location in the seismic data. This requires first calculating the structure tensor by finite differences and then finding the sorted eigenvalues and orthonormalized eigenvectors of the structure tensor. Since in the GPU implementation both the level set domain and the seismic data are stored in 3-D texture-mapped memory, memory values can be quickly retrieved for use in very fast derivative calculations for generating the structure tensor.
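A sketch of the structure tensor at a single voxel, computed from central finite differences in Python/NumPy rather than in texture-mapped GPU memory; the averaging of outer products over a local neighborhood (and the prior smoothing of the input) is assumed to have been handled separately.

    import numpy as np

    def structure_tensor(volume, i, j, k):
        # Gradient at the voxel from central finite differences.
        g = np.array([
            0.5 * (volume[i + 1, j, k] - volume[i - 1, j, k]),
            0.5 * (volume[i, j + 1, k] - volume[i, j - 1, k]),
            0.5 * (volume[i, j, k + 1] - volume[i, j, k - 1]),
        ])
        # The pointwise structure tensor is the outer product of the gradient
        # with itself: a 3x3 symmetric, positive semi-definite matrix.
        return np.outer(g, g)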
Next, the eigenvalues and eigenvectors of the structure tensor must be determined. This is accomplished using a noniterative algorithm from the medical imaging community. An algorithm that does not require iteration was chosen to allow fast calculation of the eigenvalues and eigenvectors while leveraging the high computational throughput of, for example, CUDA, NVIDIA's general-purpose parallel computing architecture that uses the parallel compute engine in graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU. Iterative techniques can decrease GPU throughput if they do not take advantage of the large number of calculations that can be quickly computed on the GPU.
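For illustration, NumPy's symmetric eigensolver is used below as a stand-in for the noniterative algorithm; it returns eigenvalues in ascending order, which are re-sorted so the dominant orientation comes first.

    import numpy as np

    def sorted_eigensystem(tensor):
        # eigh handles real symmetric matrices such as the structure tensor.
        vals, vecs = np.linalg.eigh(tensor)
        order = np.argsort(vals)[::-1]       # sort eigenvalues descending
        return vals[order], vecs[:, order]   # columns are orthonormal eigenvectors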
With a representation of the structure tensor and its eigenvalues and eigenvectors in hand, it is straightforward to compute the kernels described for imaging faults and channels. The kernels are computed during each block update of the level set domain described above. As every voxel in the evolving level set surface is solved, the feature kernel is computed first and the resulting values are immediately used in the level set equation. This order of computations is important because it results in feature kernels being computed only when the evolving surface is driven into that region of the volume. It also provides a layer of adaptivity to the technique, since kernels can incorporate information about the current position and shape of the implicit surface into the parameters and orientations of their computation.
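The ordering can be sketched as follows, where the per-voxel feature kernel is passed in as a callable (such as the edge kernel above) and the simple, non-upwind update is purely illustrative.

    import numpy as np

    def gradient_magnitude(phi, i, j, k):
        # Central-difference gradient magnitude of the level set function.
        gx = 0.5 * (phi[i + 1, j, k] - phi[i - 1, j, k])
        gy = 0.5 * (phi[i, j + 1, k] - phi[i, j - 1, k])
        gz = 0.5 * (phi[i, j, k + 1] - phi[i, j, k - 1])
        return np.sqrt(gx * gx + gy * gy + gz * gz)

    def update_block(phi, seismic, feature_kernel, active_voxels, dt=0.1):
        # For each active voxel, the feature kernel is evaluated first and its
        # value is used at once as the propagation speed in the update.
        # (For clarity the update is applied in place; a real solver would
        # double-buffer the level set volume.)
        for (i, j, k) in active_voxels:
            speed = feature_kernel(seismic, i, j, k)
            phi[i, j, k] -= dt * speed * gradient_magnitude(phi, i, j, k)
        return phi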
All level set evolutions require a seed or set of seed points from which the evolution begins to grow. One standard way of accomplishing this is to use a shape-prior model, which approximates the object being segmented and helps the evolution proceed to a solution faster. Another, more direct way is a manual seed, which is picked or drawn into the segmentation by the user. A number of techniques are described for creating seed inputs to level set evolutions for seismic interpretation applications. This section focuses on applications to fault segmentation, although the techniques described are generally applicable to other features.
Automatic seeds can be generated using techniques that approximate the location of features of interest. One exemplary goal of these techniques is that they be fast to compute and that their approximation be close enough to intersect the feature of interest in at least one point. In the case of geologic faults, an automatic seed input can come from a lineament extraction technique that auto-tracks peaks or from a traditional Hough transform operated on 2-D time slices of a 3-D volume. Both of these techniques attempt to trace features two-dimensionally (i.e., on horizontal slices) by following peaks in an attribute volume such as a channelness or fault-likelihood volume.
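A minimal NumPy sketch of a Hough accumulator over a thresholded 2-D time slice; the threshold and the number of angle bins are illustrative. The strongest accumulator peaks approximate linear fault traces, which can then be rasterized back into the slice as seed points.

    import numpy as np

    def hough_accumulator(slice_2d, threshold, n_theta=180):
        # Voxels above the attribute threshold vote for candidate lines
        # parameterized as rho = x*cos(theta) + y*sin(theta).
        ys, xs = np.nonzero(slice_2d > threshold)
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        diag = int(np.ceil(np.hypot(*slice_2d.shape)))
        acc = np.zeros((2 * diag, n_theta), dtype=np.int64)
        for t, theta in enumerate(thetas):
            rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
            np.add.at(acc, (rho + diag, t), 1)  # shift rho to a valid index
        return acc, thetas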
The use of local orientation information between features can help improve on the manual merging technique by identifying which surface patches are good candidates to be combined; this underlies a new technique referred to herein as “smart merging.” Two surfaces are typically merged manually when they have a common orientation and are a short distance from each other. A smart merging technique can therefore pre-empt the merging decisions a user would otherwise make on their own.
The smart merge works as follows: when a surface patch is first selected for merging, a search is made for all other surface patches being displayed that are within a given distance. The distance between two patches is measured using the distance between their midpoints. Although the midpoint approximation is not the most accurate way to compare the distance between two patches, it is fast, and it performs well when the patches are compact. For those patches within the distance cutoff, their orientation is then compared to the first-selected patch. There are many ways to compare the orientation between two surface patches; the technique used here is to calculate the dot product of the normals and the coplanarity of the two patches. The dot product of the two normals is close to 1.0 when the normals point in the same orientation. Coplanarity is calculated by taking the dot product of the first patch's normal with the vector between the two midpoints of the patches; this dot product is close to 0.0 when the patches are coplanar. The results of these calculations are compared to three user-defined parameters: a distance cutoff, a minimum normal dot product, and a maximum coplanarity dot product. If the result passes each of these parameter tests, the current patch in the search queue is automatically merged with the selected patch.
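The three tests can be expressed compactly as below; the Patch container, the field names, and the default parameter values are hypothetical.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Patch:
        midpoint: np.ndarray  # 3-D midpoint of the patch
        normal: np.ndarray    # unit surface normal

    def should_merge(selected, candidate, max_distance=20.0,
                     min_normal_dot=0.9, max_coplanarity_dot=0.1):
        # Distance test: compare the patch midpoints.
        offset = candidate.midpoint - selected.midpoint
        distance = np.linalg.norm(offset)
        if distance == 0.0:
            return True  # coincident midpoints: trivially mergeable
        if distance > max_distance:
            return False
        # Orientation test: parallel normals give a dot product near 1.0.
        if abs(np.dot(selected.normal, candidate.normal)) < min_normal_dot:
            return False
        # Coplanarity test: the normal is nearly perpendicular to the
        # midpoint-to-midpoint vector when the patches are coplanar.
        return abs(np.dot(selected.normal, offset / distance)) <= max_coplanarity_dot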
Another way of thinking about the smart merging technique is as a lightweight clustering technique. A more complex clustering and segmentation technique for decomposing a large surface into discrete components was described above. When choosing clustering parameters, a choice is made between erring on the side of over-segmentation (i.e., creating too many patches) or under-segmentation. Since it is generally easier to combine two discrete patches into one than to break an under-segmented component into two pieces, defaulting to over-segmentation is preferred. Due to its simplicity compared to the more complex clustering and segmentation techniques, smart merging may make wrong decisions by merging two patches that should not be merged. Therefore, the technique allows for easy undo operations if the result is less than desirable.
Motivated by the techniques developed for smart merging, a more effective technique was created for assisting with the merging of many discrete surface patches. Hide merging essentially works in the opposite manner to smart merging: it simplifies the visual display by hiding all patches that are certainly not going to be merged with the patch under consideration (See
Semi-automatic seeds use the previous output of a level set evolution as the input to a new simulation. This approach can be used to segment iteratively, with the output of a previous level set simulation serving as the input to a later one. This ordering is useful when a user does not know in advance how much evolution is necessary to segment a feature: if not enough evolution was done previously, the process can simply continue evolving further. Continuing a level set evolution that has reached its final iteration can be considered a special case of a semi-automatic seed.
Manual seeds are hand-drawn into the computer using an interaction technique similar to “painting.” This is the most versatile technique for creating seed points since it gives the user the most control over the process, but it can also be more time consuming than automatic and semi-automatic seeding. Manual seeding has been implemented in two ways: a cubic paintbrush that can be elongated in any direction, and a point set dropper that places points at mouse cursor locations. In either case, the user moves along 2-D slices of the 3-D volume and places seeds at locations that approximate the features of interest. The result of the painting procedure is then used as the initial zero level set for segmentation. The two approaches serve different use cases: the cubic paintbrush is typically used to enclose a feature of interest (as in
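A sketch of stamping a cubic brush into the level set volume to define an initial seed; the sign convention (negative inside) and the brush size are illustrative, and a reinitialization to a proper signed distance transform would follow in practice.

    import numpy as np

    def paint_cubic_seed(phi, center, half_size):
        # Mark the voxels inside the brush as interior (negative), placing
        # the initial zero level set on the boundary of the cube.
        i, j, k = center
        phi[i - half_size:i + half_size + 1,
            j - half_size:j + half_size + 1,
            k - half_size:k + half_size + 1] = -1.0
        return phi

    # Hypothetical usage: start from an "everywhere outside" volume.
    phi = paint_cubic_seed(np.ones((128, 128, 128), np.float32), (64, 64, 64), 3)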
Previous sections have described level set techniques that model a wide variety of features, but there still exists a need to modify these evolutions in an interactive setting. The interactive guiding of a level set evolution, also called interactive steering, can be accomplished through careful modifications of the velocity function. Applying user-defined control to the level set surface in this way allows growth and shrinkage in specific regions, which is the desired functionality.
Interactive steering is implemented using a shaped 3-D paintbrush, which defines the region of the surface where growing and shrinking occurs. Since both the implicit surface and the propagation function are stored in a volumetric format, there are two potential approaches. The first is to modify the surface directly by applying the 3-D paintbrush to the implicit surface volume. This requires dynamically modifying the distance transform representation of the implicit surface in order to redefine the zero-valued surface to encompass the changes made by the paintbrush. The implementation of this approach requires a reinitialization of the distance transform representation such that the user-defined modifications are treated as a zero crossing that is intersected with the implicit surface. In the case of growth, the result is the logical union of the zero crossings; in the case of shrinkage, the result is the logical difference of the implicit surface zero crossing with the zero crossing of the user-defined modification. The background motivation for this approach, based on CSG modeling, was described above. After the combination of zero crossings is complete, the domain needs to be reinitialized so that it again correctly represents a distance transform volume.
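With signed distance volumes that are negative inside, the CSG combinations described above reduce to pointwise minimum and maximum operations; a minimal sketch, assuming that convention:

    import numpy as np

    def csg_union(phi_surface, phi_brush):
        # Growth: the union of the two zero crossings (negative = inside).
        return np.minimum(phi_surface, phi_brush)

    def csg_difference(phi_surface, phi_brush):
        # Shrinkage: remove the brush region from the surface.
        return np.maximum(phi_surface, -phi_brush)

    # Either result is only approximately a distance transform, so the
    # domain must be reinitialized afterwards, as noted above.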
An alternative way to implement growth and shrinkage based on user-defined modifications is to work indirectly by changing the underlying velocity function, which in turn modifies the implicit surface after a few iterations. This approach was developed for its simplicity and speed. Since the first approach requires recomputing the distance transform, a computationally expensive operation, it is usually preferable to compute the surface evolution for extra iterations in order to incorporate the modifications via the velocity function. There is, however, a point at which recomputing the distance transform volume requires less computation: when user-defined modifications alter the implicit surface so significantly that a large number of iterations would be necessary to evolve the surface into its new shape.
The velocity-modifying approach works by using the 3-D paintbrush to directly assign velocity values to the volume representing the evolution velocity. In the case of growth, positive propagation values are assigned by the paintbrush to the velocity volume; in the case of shrinkage, negative propagation values are used to retract the surface. This allows for the real-time modification of the surface as shown in
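A sketch of painting propagation values into the velocity volume with a cubic brush; the magnitude and the brush shape are illustrative.

    import numpy as np

    def paint_velocity(velocity, center, half_size, grow=True, magnitude=1.0):
        # Positive propagation grows the surface locally; negative retracts it.
        value = magnitude if grow else -magnitude
        i, j, k = center
        velocity[i - half_size:i + half_size + 1,
                 j - half_size:j + half_size + 1,
                 k - half_size:k + half_size + 1] = value
        return velocity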
The techniques presented for level set modeling, based on the paradigm in which an initial seed computes a final solution surface, need to be expanded for situations where a surface requires further evolution in specific regions in order to fully describe a feature. Therefore, a technique is presented that allows an existing implicit surface to expand or shrink in specific regions while the remainder of the surface remains fixed. Instead of using a shaped 3-D paintbrush, the technique described for manual seed points using a point set dropper can be employed. Since a representation of the implicit surface already exists, seed points can be drawn directly on the region of the implicit surface where growing or shrinking should occur (see
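One way to hold the remainder of the surface fixed is to zero the velocity outside regions of interest around the user's seed points; the spherical region and its radius below are hypothetical.

    import numpy as np

    def restrict_velocity(velocity, seed_points, radius):
        # Build a mask that is 1 near any seed point and 0 elsewhere, so the
        # evolution proceeds only in the user-selected regions.
        mask = np.zeros_like(velocity)
        grid = np.indices(velocity.shape)
        for point in seed_points:
            dist2 = sum((g - c) ** 2 for g, c in zip(grid, point))
            mask[dist2 <= radius ** 2] = 1.0
        return velocity * mask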
In
In
In
In step S820, a surface velocity technique is applied based on the specific type of geologic feature for which modeling and visualization is desired. For faults, control continues to step S825. For channels, control continues to step S830. For salt bodies, control continues to step S835. For geobodies, control continues to step S840.
In step S825, a determination is made whether the fault is to be defined by attribute or by seismic amplitude. If the fault is to be defined by attribute, control continues to step S845; otherwise control continues to step S850. Next, in step S845, fault likelihood is determined, with control continuing to step S855 if it is an AFE-style fault-enhanced volume or to step S860 if it is a coherence- or edge-stacked volume. Then, in steps S865 and S870, respectively, a determination is made whether a threshold has been met. For example, one is directed toward
In step S880, one or more of the determined surfaces can optionally be merged. Next, in step S885, the data representing the geologic body is used to visualize the one or more geologic features, for example on a computer display. These features are then presented in step S890, with control continuing to step S895 where the control sequence ends.
In step S940, one or more critical points are determined. Next, in step S950, singularities are classified with control continuing to step S960 where the control sequence ends.
In step S220, the stratal domain or time/depth domain is determined. Next, in step S230, a determination is made whether the threshold has been met. If the threshold has not been met, control jumps back to step S220 with control otherwise continuing to step S240.
If the channel is defined by attribute, control continues to step S250 where, in conjunction with step S260, a determination is made whether the threshold has been met. Upon finding the boundaries of the surface, control continues to step S240.
In step S270, a similar procedure is performed based on channelocity. Again, when a threshold is met, control continues to step S240 with control otherwise jumping back to step S270.
In step S240, a channel surface is generated with control continuing to step S290 where the control sequence ends.
In step S320, and based on either analysis in the stratal domain or the time/depth domain, an analysis is performed and a determination is made in step S330 whether a threshold has been met. If the threshold has not been met, control jumps back to step S320, with control otherwise continuing to step S340. A similar methodology is applied if the body is defined by attribute, with an analysis of the data occurring in step S350 and control continuing to step S360 to determine whether a threshold has been met. If the threshold has not been met, control jumps back to step S350, with control otherwise continuing to step S340.
In step S340, one or more of the geobody surfaces are generated with control continuing to step S370 where the control sequence ends.
In
While the above-described flowcharts have been discussed in relation to a particular sequence of events, it should be appreciated that changes to this sequence can occur without materially affecting the operation of the invention. Additionally, the exact sequence of events need not occur as set forth in the exemplary embodiments. Additionally, the exemplary techniques illustrated herein are not limited to the specifically illustrated embodiments but can also be utilized with the other exemplary embodiments, and each described feature is individually and separately claimable.
The systems, methods and techniques of this invention can be implemented on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device such as a PLD, PLA, FPGA or PAL, or the like. In general, any device capable of implementing a state machine that is in turn capable of implementing the methodology illustrated herein can be used to implement the various methods and techniques according to this invention.
Furthermore, the disclosed methods may be readily implemented in processor-executable software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this invention is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized. The systems, methods and techniques illustrated herein can be readily implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the functional description provided herein and with a general basic knowledge of the computer and geologic arts.
Moreover, the disclosed methods may be readily implemented in software that can be stored on a computer-readable storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. The systems and methods of this invention can be implemented as a program embedded on a personal computer, such as an applet, JAVA® or CGI script, in C or C++, Fortran, or the like, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated system or system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system, such as the hardware and software systems of a dedicated seismic interpretation device.
It is therefore apparent that there has been provided, in accordance with the present invention, systems and methods for interpreting data. While this invention has been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications and variations would be or are apparent to those of ordinary skill in the applicable arts. Accordingly, it is intended to embrace all such alternatives, modifications, equivalents and variations that are within the spirit and scope of this invention.
One of ordinary skill in the art would be aware of the following references which are incorporated herein by reference in their entirety:
121. J. A. Sethian and J. Strain, "Crystal Growth and Dendritic Solidification," J. Comput. Phys., vol. 98, no. 2, pp. 231-253, 1992.
This application claims the benefit of and priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 61/044,150, filed 11 Apr. 2008, entitled “Channel Segmentation,” and is related to PCT Application PCT/US2007/071733 (published as WO2008/005690), U.S. Provisional Patent Application No. 61/018,958, entitled “Level Set Fault Segmentation,” and U.S. Provisional Patent Application No. 61/018,961, entitled “Structure Tensor Analysis For Seismic Data,” all of which are incorporated herein by reference in their entirety.
Filing Document: PCT/US09/40331 | Filing Date: 4/13/2009 | Country: WO | Kind: 00 | 371(c) Date: 1/4/2011
Priority Number: 61/044,150 | Date: Apr. 2008 | Country: US