Canvas control for 3D data volume processing

Information

  • Patent Grant
  • Patent Number
    9,595,129
  • Date Filed
    Tuesday, April 9, 2013
  • Date Issued
    Tuesday, March 14, 2017
  • Field of Search
    • US: 345/419; 345/424; 345/440; 703/10; 703/14
    • CPC: G06T15/08
  • International Classifications
    • G06T15/00
    • G06T15/08
Abstract
A method is provided for displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation. At least one two-dimensional (2D) canvas is generated. The 2D canvas corresponds to a plane in the 3D data set. The 2D canvas is shown in a first display window. One or more primitives are created on the 2D canvas. A volumetric region of the 3D volumetric data set corresponding to the one or more primitives is identified. The volumetric region is displayed in a 3D scene. The 3D scene is shown in a second display window.
Description
FIELD

The present techniques relate to providing three-dimensional (3D) data and/or visualizations of data corresponding to physical objects and analysis thereof. In particular, an exemplary embodiment of the present techniques relates to providing visualizations, interrogation, analysis and processing of user-selected portions of a 3D data volume.


BACKGROUND

This section is intended to introduce various aspects of the art, which may be associated with embodiments of the disclosed techniques. This discussion is believed to assist in providing a framework to facilitate a better understanding of particular aspects of the disclosed techniques. Accordingly, it should be understood that this section is to be read in this light, and not necessarily as admissions of prior art.


Volumetric (3D) model construction and visualization have been widely accepted by numerous disciplines as a mechanism for analyzing, communicating, and comprehending complex 3D datasets. Examples of structures that can be subjected to volumetric analysis include the earth's subsurface, facility designs and the human body. The ability to easily interrogate and explore 3D models is one aspect of 3D visualization. Relevant models may contain both 3D volumetric objects and co-located 3D polygonal objects. One example of a volumetric object is a seismic volume, shown in FIG. 1 at reference number 100. Other examples of volumetric objects include MRI scans, reservoir simulation models, and geologic models. Interpreted horizons, faults and well trajectories are examples of polygonal objects. In some cases, there is a need to view the volumetric and polygonal objects concurrently to understand their geometric and property relations. If every cell of the 3D volumetric object is rendered fully opaque, as is the case with seismic volume 100 in FIG. 1, other objects in the scene may be occluded, and so it becomes advantageous at times to render such volumetric objects with transparency so that other objects may be seen through them. As an example, FIG. 2 depicts seismic volume 100 displayed with a degree of transparency. These 3D model interrogation and exploration tasks are useful during exploration, development and production phases in the oil and gas industry. Similar needs exist in other industries.


3D volumetric objects may be divided into two basic categories: those rendered using structured grids and those rendered using unstructured grids. Other types of grids may be defined on a spectrum between purely structured grids and purely unstructured grids. Both structured and unstructured grids may be rendered for a user to explore and understand the associated data. Known volume rendering techniques for structured grids render a full 3D volume with some degree of transparency, which enables the user to see through the volume. However, determining relations of 3D object properties is difficult, because it is hard to determine the exact location of semi-transparent data.


One way to view and interrogate a 3D volume is to render a cross-section through the 3D volume. The surface of the intersection between the cross-section and the 3-D volume may be rendered as a polygon with texture-mapped volume cell properties added thereto. For a structured grid rendered for a seismic or a medical scan, the user can create cross-sections along one of the primary directions: XY (inline or axial), XZ (cross-line or coronal) and YZ (time slice or sagittal). A traditional cross-section spans the extent of the object. In this case other objects such as horizons, wells or the like are partially or completely occluded and it is difficult to discern 3D relationships between objects. This effect is shown in FIG. 3, which is a 3D graph 300 of a subsurface region. The graph 300, which may provide a visualization of 3D data for a structured grid or an unstructured grid, shows a first cross-section 302, a second cross-section 304, a third cross-section 306, and a fourth cross-section 308. Each of the four cross-sections is chosen to allow a user to see data in a physical property model that comprises data representative of a property of interest. However, a first horizon 310 and a second horizon 312, as well as data displayed on cross-sections 302, 304 and 306 which also may be of interest to a user, are mostly obscured or occluded by the visualizations of the four cross-sections.
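For illustration only, the following is a minimal sketch, assuming the structured volume is held as a numpy array of discrete x-y-t samples, of how the three primary cross-section directions reduce to simple array slices; the array shape and indices are hypothetical, not taken from the patent.

```python
import numpy as np

# Hypothetical structured volume of discrete x-y-t data samples.
volume = np.random.default_rng(3).normal(size=(200, 150, 500))

slice_x_const = volume[100, :, :]   # vertical section at constant x
slice_y_const = volume[:, 75, :]    # vertical section at constant y
slice_t_const = volume[:, :, 250]   # time slice at constant t
```

Each such slice is the 2D array of cell properties that would be texture-mapped onto the corresponding cross-section polygon.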


A ribbon section is one attempt to make traditional cross-sectional visual representations more flexible. One way to create a ribbon section is to extrude a line or polyline vertically through the volume, creating a curtain or ribbon, upon which surface the volumetric data from the intersection of the ribbon with the volume is painted. This concept of ribbon sections is depicted in FIG. 4, which is a 3D graph 400 of a subsurface region showing a ribbon section 402 defined by a polyline 404 comprising a first line segment 406 and a second line segment 408. Although ribbon section 402 is less intrusive than the cross-sections shown in FIG. 3, portions of a first horizon 410 and a second horizon 412 are still occluded as long as the ribbon section is displayed.


Another attempt to make traditional cross-sectional visual representations more flexible is to implement a three-dimensional probe within the data volume. This is demonstrated in FIG. 5, where a cube-shaped probe 500 is painted with volumetric data from the intersection of each of the probe's surfaces with the volume. Probe 500 may be moved around within the data volume. However, there are still instances in which horizons 502, 504 may be occluded.


All of the above methods rely on predefined geometric primitives like planes, combinations of planes, polylines, volumes, hexahedrons and others. These primitives are simple to understand, but they rarely match the geometry of a physical object. The above methods sometimes provide editing capabilities, like the ability to edit the polyline or change the orientation of the cross-section, so the user may better match the physical object. However, the editing tasks are time-consuming and very often a perfect match cannot be obtained, e.g., when a curved physical object is examined with a planar cross-section.


U.S. Patent Application Publication No. 2005/0231530 discloses a method for 3D object creation and editing based on 3D volumetric data via 2D drawing tools. In its operation, the user creates a 2D structure in the rendering space. These 2D structures, such as 2D points, 2D lines, etc., are transformed or projected into 3D structures. This method relies on visualization of the 3D volumetric data as well as 2D interactions happening in the same rendering space. As a result, the user's 2D operations are restricted by how the 3D data is visualized in the rendering space. For example, its rendering of volumetric data uses planar slices (also known as cross-sections), and the 3D structures created by the 2D drawing tools will be collocated with these planar slices. To create a non-planar 3D structure, the user must perform digitization on numerous planar slices. For example, creating a cylinder requires drawing circles on a large number of 2D slices intersecting the cylinder. Another example involves creating a curved surface connecting two vertical wells. The method disclosed in the '530 Application requires a user to digitize lines on multiple time slices. What is needed is a method of rendering or displaying data using simple, intuitive editing commands while minimizing occlusion of data of interest.


SUMMARY

In one aspect, a method is disclosed for displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation. At least one two-dimensional (2D) canvas is generated. The 2D canvas corresponds to a plane in the 3D data set. The 2D canvas is shown in a first display window. One or more primitives are created on the 2D canvas. A volumetric region of the 3D volumetric data set corresponding to the one or more primitives is identified. The volumetric region is displayed in a 3D scene. The 3D scene is shown in a second display window.


In another aspect, a system is disclosed for displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation. The system includes a processor and a tangible, machine-readable storage medium that stores machine-readable instructions for execution by the processor. The machine-readable instructions include: code for generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window; code for creating one or more primitives on the 2D canvas; code for identifying a volumetric region of the 3D volumetric data set corresponding to the one or more primitives; and code for displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.


In another aspect, a computer program product is provided having computer executable logic recorded on a tangible, machine readable medium. When executed the computer program product displays selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation. The computer program product includes: code for generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window; code for creating one or more primitives on the 2D canvas; code for identifying a volumetric region of the 3D volumetric data set corresponding to the one or more primitives; and code for displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.


In still another aspect, a method of producing hydrocarbons is disclosed. According to the method, selected portions of a three-dimensional (3D) volumetric data set representing a subsurface hydrocarbon reservoir are displayed. The displaying includes generating at least one two-dimensional (2D) canvas. The 2D canvas corresponds to a plane in the 3D data set. The 2D canvas is shown in a first display window. One or more primitives are created on the 2D canvas. A volumetric region of the 3D volumetric data set corresponding to the one or more primitives is identified. The volumetric region is displayed in a 3D scene, which is shown in a second display window. Hydrocarbons are produced from the subsurface hydrocarbon reservoir using the displayed volumetric region.





BRIEF DESCRIPTION OF THE DRAWINGS

Advantages of the present techniques may become apparent upon reviewing the following detailed description and the accompanying drawings in which:



FIG. 1 is a perspective view of a visualization of volumetric data with an opaque color map according to known principles;



FIG. 2 is a perspective view of a visualization of volumetric data with a semi-transparent color map according to known principles;



FIG. 3 is a perspective view of a visualization of volumetric data including cross-sections or planes according to known principles;



FIG. 4 is a perspective view of a visualization of volumetric data by rendering an arbitrary cross-section according to known principles;



FIG. 5 is a perspective view of a visualization of volumetric data by rendering a probe or volume of interest according to known principles;



FIG. 6A is a display of a volume visualized in two dimensions according to disclosed aspects and methodologies;



FIG. 6B is a display of the volume of FIG. 6A visualized in three dimensions according to disclosed aspects and methodologies;



FIG. 7A is a display of geometric primitives on a 2D canvas according to disclosed aspects and methodologies;



FIG. 7B is a display of volumes visualized based on the geometric primitives of FIG. 7A;



FIG. 8A is a display of geometric primitives on a 2D canvas in which the shape and/or size of one of the geometric primitives is modified according to disclosed aspects and methodologies;



FIG. 8B is a display of volumes visualized based on the geometric primitives of FIG. 8A;



FIG. 9A is a display of geometric primitives according to disclosed aspects and methodologies;



FIG. 9B is a perspective view of a display of a 3D volume visualized based on the geometric primitives of FIG. 9A;



FIG. 10 is a perspective view of a display of a 3D volume visualized based on the geometric primitives of FIG. 9A, in which 3D visualization is performed with a semi-transparent color map according to disclosed aspects and methodologies;



FIG. 11A is a display of geometric primitives according to disclosed aspects and methodologies;



FIG. 11B is a perspective view of a 3D visualization, using a semi-transparent color map, of volumes corresponding to the geometric primitives of FIG. 11A according to disclosed methodologies and techniques;



FIG. 12A is a display of geometric primitives according to disclosed aspects and methodologies;



FIG. 12B is a perspective view of a 3D visualization of volumes corresponding to the geometric primitives of FIG. 12A according to disclosed methodologies and techniques;



FIG. 13A is a display of geometric primitives according to disclosed aspects and methodologies;



FIG. 13B is a perspective view of a 3D visualization of volumes corresponding to the geometric primitives of FIG. 13A according to disclosed methodologies and techniques;



FIG. 14A is a display of geometric primitives corresponding to a drilling operation according to disclosed methodologies and techniques;



FIG. 14B is a perspective view of a 3D visualization of volumes corresponding to the geometric primitives of FIG. 13A according to disclosed methodologies and techniques;



FIGS. 15A and 15B are displays of a freehand drawing and fill operation on a 2D canvas according to disclosed methodologies and techniques;



FIGS. 16A, 16B and 16C are displays of an erase operation on a 2D canvas according to disclosed methodologies and techniques;



FIG. 17 is a block diagram of a computing system;



FIG. 18 is a flowchart of a method according to disclosed methodologies and techniques;



FIG. 19 is a block diagram representing computer code according to disclosed methodologies and techniques;



FIG. 20 is a side elevational view of a hydrocarbon reservoir; and



FIG. 21 is a flowchart of a method according to disclosed methodologies and techniques.





DETAILED DESCRIPTION

In the following detailed description section, specific embodiments are described in connection with preferred embodiments. However, to the extent that the following description is specific to a particular embodiment or a particular use, this is intended to be for exemplary purposes only and simply provides a description of the exemplary embodiments. Accordingly, the present techniques are not limited to the embodiments described herein, but rather include all alternatives, modifications, and equivalents falling within the spirit and scope of the appended claims.


At the outset, and for ease of reference, certain terms used in this application and their meanings as used in this context are set forth. To the extent a term used herein is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in at least one printed publication or issued patent.


As used herein, the term “3D seismic data volume” refers to a 3D data volume of discrete x-y-z or x-y-t data points, where x and y are not necessarily mutually orthogonal horizontal directions, z is the vertical direction, and t is two-way vertical seismic signal travel time. In subsurface models, these discrete data points are often represented by a set of contiguous hexahedrons known as cells or voxels. Each data point, cell, or voxel in a 3D seismic data volume typically has an assigned value (“data sample”) of a specific seismic data attribute such as seismic amplitude, acoustic impedance, or any other seismic data attribute that can be defined on a point-by-point basis.
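As an informal illustration (not part of the patent disclosure), such a volume and its per-voxel data samples might be held as a simple array; the dimensions and attribute below are hypothetical.

```python
import numpy as np

# Hypothetical structured 3D seismic volume: axes are discrete (x, y, t) indices,
# and each element is one data sample of a seismic attribute (here, amplitude).
nx, ny, nt = 200, 150, 500
amplitude = np.random.default_rng(0).normal(size=(nx, ny, nt)).astype(np.float32)

i, j, k = 10, 25, 300                           # one voxel, addressed by its indices
print(f"amplitude at voxel ({i}, {j}, {k}) = {amplitude[i, j, k]:.3f}")
```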


As used herein, the term “cell” refers to a closed volume formed by a collection of faces, or a collection of nodes that implicitly define faces.


As used herein, the term “computer component” refers to a computer-related entity, either hardware, firmware, software, a combination thereof, or software in execution. For example, a computer component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. One or more computer components can reside within a process and/or thread of execution and a computer component can be localized on one computer and/or distributed between two or more computers.


As used herein, the terms “computer-readable medium” or “tangible machine-readable medium” refer to any tangible storage that participates in providing instructions to a processor for execution. Such a medium may take many forms, including, but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Computer-readable media may include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid state medium like a holographic memory, a memory card, or any other memory chip or cartridge, or any other physical medium from which a computer can read. When the computer-readable media is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, exemplary embodiments of the present techniques may be considered to include a tangible storage medium or tangible distribution medium and prior art-recognized equivalents and successor media, in which the software implementations embodying the present techniques are stored.


As used herein, the term “cross-section” refers to a plane that intersects a structured grid or an unstructured grid.


As used herein, “displaying” includes a direct act that causes displaying, as well as any indirect act that facilitates displaying. Indirect acts include providing software to an end user, maintaining a website through which a user is enabled to affect a display, hyperlinking to such a website, or cooperating or partnering with an entity who performs such direct or indirect acts. Thus, a first party may operate alone or in cooperation with a third party vendor to enable the reference signal to be generated on a display device. The display device may include any device suitable for displaying the reference image, such as without limitation a CRT monitor, a LCD monitor, a plasma device, a flat panel device, or printer. The display device may include a device which has been calibrated through the use of any conventional software intended to be used in evaluating, correcting, and/or improving display results (e.g., a color monitor that has been adjusted using monitor calibration software). Rather than (or in addition to) displaying the reference image on a display device, a method, consistent with the invention, may include providing a reference image to a subject. “Providing a reference image” may include creating or distributing the reference image to the subject by physical, telephonic, or electronic delivery, providing access over a network to the reference, or creating or distributing software to the subject configured to run on the subject's workstation or computer including the reference image. In one example, the providing of the reference image could involve enabling the subject to obtain the reference image in hard copy form via a printer. For example, information, software, and/or instructions could be transmitted (e.g., electronically or physically via a data storage device or hard copy) and/or otherwise made available (e.g., via a network) in order to facilitate the subject using a printer to print a hard copy form of reference image. In such an example, the printer may be a printer which has been calibrated through the use of any conventional software intended to be used in evaluating, correcting, and/or improving printing results (e.g., a color printer that has been adjusted using color correction software).


As used herein, the term “horizon” refers to a geologic boundary in subsurface structures that is deemed important by an interpreter. Marking these boundaries is done by interpreters when interpreting seismic volumes by drawing lines on a seismic section. Each line represents the presence of an interpreted surface at that location. An interpretation project typically generates several dozen and sometimes hundreds of horizons. Horizons may be rendered using different colors to stand out in a 3D visualization of data.


As used herein, “hydrocarbon” includes any hydrocarbon substance, including for example one or more of any of the following: oil (often referred to as petroleum), natural gas, gas condensate, tar and bitumen.


As used herein, “hydrocarbon management” or “managing hydrocarbons” includes hydrocarbon extraction, hydrocarbon production, hydrocarbon exploration, identifying potential hydrocarbon resources, identifying well locations, determining well injection and/or extraction rates, identifying reservoir connectivity, acquiring, disposing of and/or abandoning hydrocarbon resources, reviewing prior hydrocarbon management decisions, and any other hydrocarbon-related acts or activities.


As used herein, the term “I,J,K space” refers to an internal coordinate system for a geo-cellular model, having specified integer coordinates for (i,j,k) for consecutive cells. By convention, K represents a vertical coordinate. I,J,K space may be used as a sample space in which each coordinate represents a single sample value without reference to a physical characteristic.


As used herein, the term “3D plane” refers to a plane in three-dimensional (3D) space. This plane is typically defined by a point and a normal vector or by an equation A*x+B*y+C*z+D=0.
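As a hedged illustration (the function name and values are hypothetical, not from the disclosure), the point-and-normal form can be converted to the coefficients of the equation A*x+B*y+C*z+D=0 as follows:

```python
import numpy as np

def plane_from_point_normal(point, normal):
    """Return (A, B, C, D) such that A*x + B*y + C*z + D = 0 describes the plane
    through `point` with normal vector `normal`."""
    A, B, C = normal
    D = -float(np.dot(normal, point))
    return float(A), float(B), float(C), D

# Illustrative example: a horizontal plane at depth z = 1500.
print(plane_from_point_normal((0.0, 0.0, 1500.0), (0.0, 0.0, 1.0)))
# -> (0.0, 0.0, 1.0, -1500.0)
```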


As used herein, the term “structured grid” refers to a matrix of volume data points known as voxels. Both the structured grid and the voxels have regular, defined geometries. Structured grids may be used with seismic data volumes.


As used herein, the term “unstructured grid” refers to a collection of cells with arbitrary geometries. Each cell can have the shape of a prism, hexahedron, or other more complex 3D geometries. When compared to structured grids, unstructured grids can better represent actual data since unstructured grids can contain finer (i.e., smaller) cells in one area with sudden changes in value of a property, and coarser (i.e., larger) cells elsewhere where the value of the property changes more slowly. Finer cells may also be used in areas having more accurate measurements or data certainty (for example, in the vicinity of a well). The flexibility to define cell geometry allows the unstructured grid to represent physical properties better than structured grids. In addition, unstructured grid cells can also better resemble the actual geometries of subsurface layers because cell shape is not restricted to a cube and may be given any orientation. However, all cell geometries need to be stored explicitly, thus an unstructured grid may require a substantial amount of memory. Unstructured grids may be employed in connection with reservoir simulation models. The term “unstructured grid” relates to how data is defined and does not imply that the data itself has no structure. For example, one could represent a seismic model as an unstructured grid with explicitly defined nodes and cells. The result would necessarily be more memory intensive and less efficient to process and visualize than the corresponding structured definition.
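A minimal, purely illustrative sketch of why explicit storage makes unstructured grids more memory-intensive: both the node coordinates and each cell's node list (here a single hexahedral cell with a hypothetical porosity value) must be stored, whereas a structured grid implies cell geometry from its (i, j, k) indices.

```python
import numpy as np

# Node coordinates stored explicitly...
nodes = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0],   # bottom face
    [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0], [0.0, 1.0, 1.0],   # top face
])

# ...and every cell explicitly lists the nodes that form it, together with its
# property values (one hexahedral cell here).
cells = [
    {"nodes": [0, 1, 2, 3, 4, 5, 6, 7], "porosity": 0.18},
]

# A structured grid, by contrast, needs only an origin, spacings, and cell counts;
# each cell's geometry is implied by its (i, j, k) indices rather than stored per cell.
```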


As used herein, the term “voxel” refers to the smallest data point in a 3D volumetric object. Each voxel has a unique set of coordinates and contains one or more data values that represent the properties at that location. Each voxel represents a discrete sampling of a 3D space, similar to the manner in which pixels represent sampling of the 2D space. The location of a voxel can be calculated by knowing the grid origin, unit vectors and the i,j,k indices of the voxel. As voxels are assumed to have similar geometries (such as cube-shaped), the details of the voxel geometries do not need to be stored, and thus structured grids require relatively little memory. However, dense sampling may be needed to capture small features, therefore increasing computer memory usage requirements.
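The location calculation described above can be sketched as follows; the grid origin, unit vectors, and spacings are illustrative assumptions only.

```python
import numpy as np

def voxel_location(origin, u, v, w, i, j, k):
    """World-space location of the voxel with indices (i, j, k), given the grid
    origin and the grid unit (axis) vectors u, v, w."""
    return np.asarray(origin) + i * np.asarray(u) + j * np.asarray(v) + k * np.asarray(w)

# Illustrative grid: 25 m spacing in x and y, 4 m (or 4 ms) spacing vertically.
origin = (451000.0, 6780000.0, 0.0)
u, v, w = (25.0, 0.0, 0.0), (0.0, 25.0, 0.0), (0.0, 0.0, 4.0)
print(voxel_location(origin, u, v, w, i=10, j=20, k=100))   # [ 451250. 6780500.  400.]
```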


Some portions of the detailed description which follows are presented in terms of procedures, steps, logic blocks, processing and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, step, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present application, discussions using terms such as “generating”, “creating”, “identifying”, “displaying”, “defining”, “rendering”, “predicting”, or the like, refer to the action and processes of a computer system, or similar electronic computing device, that transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Example methods may be better appreciated with reference to flow diagrams.


While for purposes of simplicity of explanation, the illustrated methodologies are shown and described as a series of blocks, it is to be appreciated that the methodologies are not limited by the order of the blocks, as some blocks can occur in different orders and/or concurrently with other blocks from that shown and described. Moreover, less than all the illustrated blocks may be required to implement an example methodology. Blocks may be combined or separated into multiple components. Furthermore, additional and/or alternative methodologies can employ additional, not illustrated blocks. While the figures illustrate various serially occurring actions, it is to be appreciated that various actions could occur concurrently, substantially in parallel, and/or at substantially different points in time.


As set forth below, aspects of the disclosed techniques relate to an interactive visualization of selected portions of volumetric data sets. These volumetric data sets are visualized in a three-dimensional (3D) window. In addition to the 3D window, a user may interact using a separate two-dimensional (2D) canvas. This 2D canvas corresponds to a plane in the three-dimensional space represented in the 3D window. The user creates, edits or deletes 2D shapes on the 2D canvas. These shapes could be as simple as a circle, a line segment or a hand-drawn curve. Based on these 2D drawings, a volume is created from the 2D shape and rendered in the 3D window. The portion of the volume intersecting the volumetric data set is identified or visualized in the 3D window.
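A minimal sketch of this workflow follows, assuming (purely for illustration) that the volumetric data set is a structured numpy array indexed (i, j, k) with k vertical and that the 2D canvas maps to the top view: a shape drawn on the canvas is rasterized to a mask, extruded vertically, and intersected with the volume so that only the selected cells remain visible. The function names and array conventions are hypothetical, not taken from the patent.

```python
import numpy as np

def canvas_circle_mask(nx, ny, center, radius):
    """Rasterize a circle drawn on the 2D canvas into a boolean mask."""
    ii, jj = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    return (ii - center[0]) ** 2 + (jj - center[1]) ** 2 <= radius ** 2

def select_region(volume, canvas_mask):
    """Extrude the 2D canvas mask vertically and intersect it with the volume;
    cells outside the extruded region are hidden (set to NaN for display)."""
    region = np.broadcast_to(canvas_mask[:, :, None], volume.shape)
    return np.where(region, volume, np.nan)

# Hypothetical volumetric data set indexed (i, j, k), with k as the vertical axis.
volume = np.random.default_rng(1).normal(size=(100, 80, 60))
mask = canvas_circle_mask(100, 80, center=(50, 40), radius=15)   # circle on the canvas
selected = select_region(volume, mask)                           # cylinder in the 3D scene
print(np.count_nonzero(~np.isnan(selected)), "cells selected for display")
```

Under these assumptions, a circle on the canvas selects a cylindrical region of the volume, which is the behavior illustrated in FIGS. 6A and 6B below.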


In an aspect, the 2D canvas corresponds to the top or map view of the 3D window. Shapes drawn on the 2D canvas are extruded vertically as shown in FIG. 6A, where a circle 602 drawn on the 2D canvas 604 corresponds to a cylinder 606 in a 3D window 610 in FIG. 6B. The portion of the volumetric data set intersected by the outer surface 608 of cylinder 606 is visualized in the 3D window 610. The portion of the volumetric data set outside cylinder 606 is not visualized. Alternatively, the portion of the volumetric data set outside the cylinder is visualized as transparent or semi-transparent. The user may further explore the volumetric data set by interacting with the 2D canvas. For example, the user may add another 2D primitive to the 2D canvas 604, such as an ellipse 702 in FIG. 7A. As shown in FIG. 7B, the visualization of the volumetric data set is updated to reflect the change on the 2D canvas by displaying an elliptical prism 704 in 3D window 610.


Another type of interaction is the editing of the 2D shapes. An example is illustrated in FIG. 8A, where the area enclosed by ellipse 702 is increased from area 702a to area 702b on 2D canvas 604. As shown in FIG. 8B, the volume of the elliptical prism is likewise increased as shown by reference number 802, and a corresponding portion of the volumetric data set is rendered in 3D window 610.


According to methodologies and techniques disclosed herein, a primitive geometric element may be entered on the 2D canvas by freehand drawing. FIG. 9A illustrates the result of a user creating two small ellipses 902, 904 on 2D canvas 604 and connecting them with a freehand-drawn curve 906. A user can select different types of brushes as well as drawing styles for the freehand drawing. The portion 908 of the volumetric data set corresponding to the 2D drawing is rendered in 3D window 610, as shown in FIG. 9B.


The user can select different color maps for the rendering of the volumetric data set. FIGS. 9B and 10 are rendered in the 3D window from the same drawing on the 2D canvas, shown in FIG. 9A. However, the portion of the volumetric data set corresponding to the 2D drawing is rendered using a different color map in each figure: FIG. 9B uses a fully opaque color map and FIG. 10 uses a semi-transparent color map, as shown at 1000.



FIGS. 11A and 11B illustrate other 2D canvas editing capabilities. In FIG. 11A, the user has defined two ellipses 1102, 1104 on a 2D canvas 1100 and a curve 1106 connecting the ellipses. Curve 1106 has been created by defining three points represented as dots 1108, 1110, 1112. The user modifies the shape of the curve by moving the location of the middle point. Dot 1114 represents the new location of the middle point. By moving the middle point to location 1114, the user has changed the position of curve 1106 to the dashed line 1116. The portion 1118 of the volumetric object corresponding to the new shape on the 2D canvas is rendered in 3D window 1120 using a semi-transparent color map.
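The disclosure does not specify the curve type; as one hypothetical illustration, a quadratic Bezier curve controlled by the two endpoint dots and the middle dot reproduces this behavior, and moving the middle control point simply regenerates the curve:

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n=50):
    """Sample a quadratic Bezier curve defined by endpoints p0, p2 and middle
    control point p1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

p0, p2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])   # the two ellipse locations
p1_old = np.array([5.0, 2.0])                          # original middle point
p1_new = np.array([5.0, 6.0])                          # middle point after the user drags it

old_curve = quadratic_bezier(p0, p1_old, p2)
new_curve = quadratic_bezier(p0, p1_new, p2)
# Re-extruding new_curve instead of old_curve updates the rendered 3D portion.
```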


The 2D canvas primitives can be either vector or raster primitives, similar to a generic 2D paint application. The raster primitives can be very easily converted into a 2D texture, but may have sampling or stair-stepping artefacts. A 2D vector primitive does not have these artefacts, and so a diagonal line in 2D would correspond to a perfectly diagonal line or plane in 3D.
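A small illustrative contrast (not from the disclosure) between the two representations: the vector primitive keeps an exact analytic description that can be evaluated at any resolution, while the raster primitive is a pixel mask fixed at the canvas resolution and therefore shows stair-stepping.

```python
import numpy as np

# Vector primitive: the diagonal line y = x, stored analytically.
def on_vector_line(x, y, tol=1e-9):
    return abs(y - x) <= tol              # exact at any query resolution

# Raster primitive: the same line burned into a low-resolution canvas mask.
n = 8
raster = np.zeros((n, n), dtype=bool)
for i in range(n):
    raster[i, i] = True                   # stair-stepped approximation of the diagonal

print(on_vector_line(3.25, 3.25))         # True: no sampling artifacts in the vector form
print(raster.astype(int))                 # the staircase is visible at canvas resolution
```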



FIGS. 12A and 12B illustrate more complex user interactions according to methodologies and techniques. FIG. 12A shows a 2D canvas 1200 upon which a user has generated or drawn several 2D primitives: an oval 1202, two line segments 1204, 1206, and a freehand line 1208. As shown in 3D space 1210 in FIG. 12B, the portion 1220 of the volumetric object corresponding to the generated 2D primitives is rendered in 3D. The user may manipulate some or all of the 2D primitives after the initial creation thereof. As depicted in FIG. 13A, the user has moved oval 1202 and line segment 1204 on 2D canvas 1200, as demonstrated by arrows 1214, 1216 in FIG. 12A. FIG. 13B shows how such movement causes a new rendering 1220 of the portion of the volumetric object in 3D space 1210.


The 2D canvas primitives may also be obtained from 3D geometric objects. For example, a well trajectory is a 3D path of a drilled well from a surface location to a target area of a reservoir. This path may be rendered in three-dimensional space and may also be converted or projected back onto the 2D canvas, where a 2D primitive can be created. The user may then modify this 2D primitive and/or use the primitive as a reference for additional operations on the 2D canvas. FIGS. 14A and 14B illustrate this aspect of displaying subsurface data according to disclosed methodologies and techniques. A 2D canvas 1400 is shown in FIG. 14A, and the corresponding rendering in a 3D window 1402 is shown in FIG. 14B. In both figures, five well trajectories 1404, 1406, 1408, 1410, 1412 originate from a drill center 1414. These trajectories are rendered in 3D window 1402 as lines and are projected back into 2D canvas 1400, where they are also represented as lines. Seismic volume information corresponding to the vertical planes defined by each of the well trajectories is displayed only for a desired depth interval, as shown at 1416, 1418, 1420, 1422, and 1424. The desired depth interval may be limited by a horizon 1426. Seismic data at the depth of horizon 1426 is shown on the 2D canvas as background contours or coloring. A user can control the display by controlling the properties of the lines in 2D. If the user desires to expand or widen the well traverse regions, the only needed operation is to alter the thickness of the lines on the 2D canvas. If a user desires to expand the amount of seismic data displayed in the 3D window, the desired depth interval is modified.
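A hedged sketch of the projection step, assuming the well trajectory is an array of (x, y, z) points and the canvas is the map view: dropping the depth coordinate gives the 2D primitive, and a line thickness (half-width) around it defines the selection corridor. The helper names are hypothetical, and for brevity the corridor test below measures distance to the polyline vertices rather than to the segments themselves.

```python
import numpy as np

def project_to_canvas(trajectory_xyz):
    """Map-view projection of a 3D well trajectory onto the 2D canvas (drop z)."""
    return trajectory_xyz[:, :2]

def within_corridor(points_xy, polyline_xy, half_width):
    """True for canvas points within `half_width` of any polyline vertex."""
    d = np.linalg.norm(points_xy[:, None, :] - polyline_xy[None, :, :], axis=-1)
    return d.min(axis=1) <= half_width

trajectory = np.array([[0.0, 0.0, 0.0], [50.0, 20.0, 800.0], [120.0, 60.0, 1500.0]])
canvas_line = project_to_canvas(trajectory)                   # the 2D primitive on the canvas
query = np.array([[55.0, 22.0], [300.0, 300.0]])
print(within_corridor(query, canvas_line, half_width=10.0))   # [ True False]
```

Widening the line on the canvas then amounts to increasing half_width, which in turn widens the vertical region selected for display in the 3D window.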


These 2D primitives derived from 3D objects may serve as a location reference for additional operations on the 2D canvas. For example, a user studying possible connectivity between wells may draw a simple polyline 1428 connecting two wells 1404, 1406, as shown in FIG. 14A. Polyline 1428 may then be used to render a region of interest 1430 in 3D window 1402.


Various methods of extrusion may be used to create 3D objects from 2D primitives. A user may limit the amount of extrusion by either specifying an amount of extrusion or limiting the extrusion by providing a geometric limit, e.g., a surface, geologic horizon or fault. Alternatively, different types of operations may be applied to create the 3D portion of the volume. For example, the 2D primitive may be grown by a specific distance in two or three dimensions. As another example, the 2D primitive may be rotated in 3D to create the 3D portion. As yet another example, creating the 3D region/portion may involve performing Boolean operations on 3D regions created from multiple 2D canvases.
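These operations can be sketched on boolean region masks (an illustrative assumption, not the patent's implementation), using set operations for the Boolean combinations, binary dilation from scipy for the "grow" operation, and a depth comparison for extrusion limited by a horizon:

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Two extruded regions over the same 3D volumetric data set, as boolean masks.
region_a = np.zeros((60, 60, 40), dtype=bool); region_a[10:30, 10:30, :] = True
region_b = np.zeros((60, 60, 40), dtype=bool); region_b[20:45, 20:45, :20] = True

union        = region_a | region_b            # Boolean union
intersection = region_a & region_b            # Boolean intersection
difference   = region_a & ~region_b           # region A minus region B

# "Grow" a region by a specific distance (here 2 cells in all three dimensions).
grown = binary_dilation(region_a, iterations=2)

# Extrusion with a geometric limit: keep only cells above a horizon depth k_limit(i, j).
k = np.arange(region_a.shape[2])[None, None, :]
k_limit = np.full(region_a.shape[:2], 25)[:, :, None]    # hypothetical flat horizon
limited = region_a & (k < k_limit)
```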



FIGS. 15A and 15B demonstrate another aspect of the disclosed methodologies and techniques. A geometric primitive 1502, rendered in 2D in FIG. 15A, may be changed to a solid 2D object 1504. The solid object 1504, shown again in FIG. 16A, may be the subject of an ‘erase’ operation 1602 (FIG. 16B) in 2D, thereby changing the shape of the object to that shown in FIG. 16C at 1604.



FIG. 17 is a block diagram of a computer system 1700 that may be used to perform any of the methods disclosed herein. A central processing unit (CPU) 1702 is coupled to system bus 1704. The CPU 1702 may be any general-purpose CPU, although other types of architectures of CPU 1702 (or other components of exemplary system 1700) may be used as long as CPU 1702 (and other components of system 1700) supports the inventive operations as described herein. The CPU 1702 may execute the various logical instructions according to disclosed aspects and methodologies. For example, the CPU 1702 may execute machine-level instructions for performing processing according to aspects and methodologies disclosed herein.


The computer system 1700 may also include computer components such as a random access memory (RAM) 1706, which may be SRAM, DRAM, SDRAM, or the like. The computer system 1700 may also include read-only memory (ROM) 1708, which may be PROM, EPROM, EEPROM, or the like. RAM 1706 and ROM 1708 hold user and system data and programs, as is known in the art. The computer system may also include one or more graphics processor units 1714, which may be used for various computational activities. The computer system 1700 may also include an input/output (I/O) adapter 1710, a communications adapter 1722, a user interface adapter 1724, and a display adapter 1718. The I/O adapter 1710, the user interface adapter 1724, and/or communications adapter 1722 may, in certain aspects and techniques, enable a user to interact with computer system 1700 in order to input information.


The I/O adapter 1710 preferably connects a storage device(s) 1712, such as one or more of a hard drive, compact disc (CD) drive, floppy disk drive, tape drive, etc., to the computer system 1700. The storage device(s) may be used when RAM 1706 is insufficient for the memory requirements associated with storing data for operations of embodiments of the present techniques. The data storage of the computer system 1700 may be used for storing information and/or other data used or generated as disclosed herein. The communications adapter 1722 may couple the computer system 1700 to a network (not shown), which may enable information to be input to and/or output from system 1700 via the network (for example, the Internet or other wide-area network, a local-area network, a public or private switched telephony network, a wireless network, any combination of the foregoing). User interface adapter 1724 couples user input devices, such as a keyboard 1728, a pointing device 1726, and the like, to computer system 1700. The display adapter 1718 is driven by the CPU 1702 to control, through a display driver 1716, the display on a display device 1720. Information and/or representations of one or more 2D canvases and one or more 3D windows may be displayed, according to disclosed aspects and methodologies.


The architecture of system 1700 may be varied as desired. For example, any suitable processor-based device may be used, including without limitation personal computers, laptop computers, computer workstations, and multi-processor servers. Moreover, embodiments may be implemented on application specific integrated circuits (ASICs) or very large scale integrated (VLSI) circuits. In fact, persons of ordinary skill in the art may use any number of suitable structures capable of executing logical operations according to the embodiments.



FIG. 18 depicts, in block form, a method 1800 for displaying selected portions of a three-dimensional (3D) volumetric data set according to aspects and methodologies disclosed herein. The volumetric data set may be 3D seismic, a structured reservoir model, an unstructured reservoir model, or a geologic model. At block 1802 at least one two-dimensional (2D) canvas is generated. The 2D canvas corresponds to a plane in the 3D data set. The 2D canvas is shown in a first display window. At block 1804 one or more primitives is created on the 2D canvas. The primitives may include one or more line drawings, point drawings, polygon drawings, raster primitives, and/or vector primitives. Creating the primitives may include brush paintings, fill operations, erase operations, and/or creating a primitive based on a 2D projection from an object in the 3D scene. At block 1806 a volumetric region of the 3D volumetric data set corresponding to the one or more primitives is identified. The volumetric region may be identified by creating a volume by performing an operation on the one or more primitives, and defining the volumetric region as an intersection of the created volume and the 3D volumetric data set. The operation may be extrude, grow, extrude with a geometric limit, or a geometric transformation such as a translation, a scale operation, or a rotation. Alternatively, the volumetric region may be identified based on a Boolean operation of at least two precursor volumetric regions. The volumetric region may be identified based on ray casting operations or virtual fragment operations on graphic processors. At block 1808 the volumetric region is displayed in a 3D scene. The 3D scene is shown in a second display window. The 3D scene may be shown based on the volumetric region. The 3D scene may be transparent where the volumetric region is transparent or opaque where the volumetric region is opaque. The 3D scene may be semi-transparent where the volumetric region is semi-transparent. A user may control the transparency of the 3D scene.
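Purely as an illustration of how blocks 1802 through 1808 relate, the following sketch maps each block to a small function under the same hypothetical assumptions as above (structured numpy volume, top-view canvas, vertical extrusion); the names and signatures are not from the patent.

```python
import numpy as np

def generate_canvas(volume_shape):                      # block 1802
    """Create an empty 2D canvas matching the map-view extent of the volume."""
    return np.zeros(volume_shape[:2], dtype=bool)

def create_primitive(canvas, rows, cols):               # block 1804
    """Paint a rectangular primitive onto the canvas (a simple fill operation)."""
    canvas[rows[0]:rows[1], cols[0]:cols[1]] = True
    return canvas

def identify_region(volume, canvas):                    # block 1806
    """Extrude the canvas vertically and intersect it with the volumetric data set."""
    return np.broadcast_to(canvas[:, :, None], volume.shape)

def display_region(region, alpha_outside=0.0):          # block 1808
    """Per-cell opacity for the 3D scene: opaque inside the region, transparent
    (or semi-transparent) outside, ready to hand to a volume renderer."""
    return np.where(region, 1.0, alpha_outside)

volume = np.random.default_rng(2).normal(size=(100, 80, 60))
canvas = create_primitive(generate_canvas(volume.shape), rows=(20, 40), cols=(10, 50))
opacity = display_region(identify_region(volume, canvas), alpha_outside=0.1)
```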



FIG. 19 shows a representation of machine-readable logic or code 1900 that when executed displays selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation. Code 1900 may be used or executed with a computing system such as computing system 1700. At block 1902 code is provided for generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window. At block 1904 code is provided for creating one or more primitives on the 2D canvas. At block 1906 code is provided for identifying a volumetric region of the 3D volumetric data set corresponding to the one or more primitives. At block 1908 code is provided for displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window. Code effectuating or executing other features of the disclosed aspects and methodologies may be provided as well. This additional code is represented in FIG. 19 as block 1910, and may be placed at any location within code 1900 according to computer code programming techniques.


Aspects disclosed herein may be used to perform hydrocarbon management activities such as extracting hydrocarbons from a subsurface formation, region, or reservoir, which is indicated by reference number 2002 in FIG. 20. A method 2100 of extracting hydrocarbons from subsurface reservoir 2002 is shown in FIG. 21. At block 2102 inputs are received from a numerical model, geologic model, or flow simulation of the subsurface region, where the model or simulation has been run or improved using the methods and aspects disclosed herein. At block 2104 the presence and/or location of hydrocarbons in the subsurface region is predicted. At block 2106 hydrocarbon extraction is conducted to remove hydrocarbons from the subsurface region, which may be accomplished by drilling a well 2004 using oil drilling equipment 2006 (FIG. 20). Other hydrocarbon management activities may be performed according to known principles.


Illustrative, non-exclusive examples of methods and products according to the present disclosure are presented in the following non-enumerated paragraphs. It is within the scope of the present disclosure that an individual step of a method recited herein, including in the following paragraphs, may additionally or alternatively be referred to as a “step for” performing the recited action.

  • A. A method for displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation, comprising:


generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window;


creating one or more primitives on the 2D canvas;


identifying a volumetric region of the 3D volumetric data set corresponding to the one or more primitives; and


displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.

  • A1. The method according to paragraph A, wherein the volumetric data set is one of a 3D seismic, a structured reservoir model, an unstructured reservoir model, and a geologic model.
  • A2. The method according to any of paragraphs A-A1, wherein the one or more primitives includes at least one of a line drawing, a point drawing, and a polygon drawing.
  • A3. The method according to any of paragraphs A-A2, wherein creating one or more primitives includes at least one of a brush painting, a fill operation, and an erase operation.
  • A4. The method according to any of paragraphs A-A3, wherein creating one or more primitives includes creating a primitive based on a 2D projection from an object in the 3D scene.
  • A5. The method according to any of paragraphs A-A4, wherein each of the one or more primitives is a raster primitive.
  • A6. The method according to any of paragraphs A-A5, wherein each of the one or more primitives is a vector primitive.
  • A7. The method according to any of paragraphs A-A6, wherein the volumetric region is identified by creating a volume by performing an operation on the one or more primitives, and defining the volumetric region as an intersection of the created volume and the 3D volumetric data set.
  • A8. The method according to paragraph A7, wherein the operation comprises one of extrude and grow.
  • A9. The method according to paragraph A7, wherein the operation comprises extrude with a geometric limit.
  • A10. The method according to paragraph A7, wherein the operation comprises a geometric transformation.
  • A11. The method according to paragraph A10, wherein the transformation is one of a translation, a scale operation, or a rotation.
  • A12. The method according to any of paragraphs A-A11, wherein the volumetric region is identified based on a Boolean operation of at least two precursor volumetric regions.
  • A13. The method according to any of paragraphs A-A12, wherein the 2D canvas is a first 2D canvas, and further wherein the volumetric region is identified based on a Boolean operation on 3D regions identified by the first 2D canvas and a second 2D canvas.
  • A14. The method according to any of paragraphs A-A13, wherein the volumetric region is identified based on ray casting operations on graphic processors.
  • A15. The method according to any of paragraphs A-A14, wherein the volumetric region is identified based on virtual fragment operations on graphic processors.
  • A16. The method according to any of paragraphs A-A15, wherein the 3D scene is rendered based on the volumetric region.
  • A17. The method according to any of paragraphs A-A16, wherein the 3D scene is transparent where the volumetric region is transparent.
  • A18. The method according to any of paragraphs A-A17, wherein the 3D scene is opaque where the volumetric region is opaque.
  • A19. The method according to any of paragraphs A-A18, wherein the 3D scene is semi-transparent where the volumetric region is semi-transparent.
  • A20. The method according to any of paragraphs A-A19, wherein a user can control transparency of the 3D scene.
  • A21. The method according to any of paragraphs A-A20, further comprising:


predicting at least one of a presence, location, and amount of hydrocarbons in the subsurface formation; and


managing hydrocarbons in the subsurface formation based on said prediction.

  • B. A system for displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation, the system comprising:
    • a processor;
    • a tangible, machine-readable storage medium that stores machine-readable instructions for execution by the processor, wherein the machine-readable instructions include
    • code for generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window,
    • code for creating one or more primitives on the 2D canvas,
    • code for identifying a volumetric region of the 3D volumetric data set corresponding to the one or more primitives, and
    • code for displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.
  • C. A computer program product having computer executable logic recorded on a tangible, machine readable medium, the computer program product when executed displays selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation, the computer program product comprising:
    • code for generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window,
    • code for creating one or more primitives on the 2D canvas,
    • code for identifying a volumetric region of the 3D volumetric data set corresponding to the one or more primitives, and
    • code for displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.
  • D. A method of producing hydrocarbons, comprising:
    • displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface hydrocarbon reservoir, wherein the displaying includes
    • generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window,
    • creating one or more primitives on the 2D canvas,
    • identifying a volumetric region of the 3D volumetric data set corresponding to the one or more primitives, and
    • displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window; and
    • producing hydrocarbons from the subsurface hydrocarbon reservoir using the displayed volumetric region.

Claims
  • 1. A method for displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation, comprising: generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window; creating one or more 2D primitives on the 2D canvas; creating a 3D volume from the one or more 2D primitives created on the 2D canvas; forming a volumetric region, which is a subset of the 3D volumetric data set, from an intersection between the 3D volume created from the one or more 2D primitives and the 3D volumetric data set; and displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.
  • 2. The method of claim 1, wherein the volumetric data set is one of a 3D seismic, a structured reservoir model, an unstructured reservoir model, and a geologic model.
  • 3. The method of claim 1, wherein the one or more 2D primitives includes at least one of a line drawing, a point drawing, and a polygon drawing.
  • 4. The method of claim 1, wherein creating one or more 2D primitives includes at least one of a brush painting, a fill operation, and an erase operation.
  • 5. The method of claim 1, wherein creating one or more 2D primitives includes creating a primitive based on a 2D projection from an object in the 3D scene.
  • 6. The method of claim 1, wherein each of the one or more 2D primitives is a raster primitive.
  • 7. The method of claim 1, wherein each of the one or more 2D primitives is a vector primitive.
  • 8. The method of claim 1, wherein the 2D canvas is a top or map view of the second display window, the second display window is separate from and adjacent to the first display window, the second display window is a 3D window and the first display window is a 2D window.
  • 9. The method of claim 1, wherein the creating the 3D volume comprises performing an extrude operation or a grow operation.
  • 10. The method of claim 9, wherein the extrude operation is performed with a geometric limit.
  • 11. The method of claim 1, wherein the creating the 3D volume comprises performing a geometric transformation.
  • 12. The method of claim 11, wherein the transformation is one of a translation, a scale operation, or a rotation.
  • 13. The method of claim 1, wherein the volumetric region is identified based on a Boolean operation of at least two precursor volumetric regions.
  • 14. The method of claim 1, wherein the 2D canvas is a first 2D canvas, and further wherein the volumetric region is identified based on a Boolean operation on 3D regions identified by the first 2D canvas and a second 2D canvas.
  • 15. The method of claim 1, wherein the volumetric region is identified based on ray casting operations on graphic processors.
  • 16. The method of claim 1, wherein the volumetric region is identified based on virtual fragment operations on graphic processors.
  • 17. The method of claim 1, wherein the 3D scene is shown based on the volumetric region.
  • 18. The method of claim 1, wherein the 3D scene is transparent where the volumetric region is transparent.
  • 19. The method of claim 1, wherein the 3D scene is opaque where the volumetric region is opaque.
  • 20. The method of claim 1, wherein the 3D scene is semi-transparent where the volumetric region is semi-transparent.
  • 21. The method of claim 1, wherein a user can control transparency of the 3D scene.
  • 22. The method of claim 1, further comprising: predicting at least one of a presence, location, and amount of hydrocarbons in the subsurface formation based on the volumetric region; and managing hydrocarbons in the subsurface formation based on said prediction.
  • 23. A system for displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation, the system comprising: a processor; a tangible, machine-readable storage medium that stores machine-readable instructions for execution by the processor, wherein the machine-readable instructions include code for generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window, code for creating one or more 2D primitives on the 2D canvas, code for creating a 3D volume from the one or more 2D primitives created on the 2D canvas, code for forming a volumetric region, which is a subset of the 3D volumetric data set, from an intersection between the 3D volume created from the one or more 2D primitives and the 3D volumetric data set, and code for displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.
  • 24. A computer program product having computer executable logic recorded on a tangible, machine readable non-transitory medium, the computer program product when executed displays selected portions of a three-dimensional (3D) volumetric data set representing a subsurface formation, the computer program product comprising: code for generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window, code for creating one or more 2D primitives on the 2D canvas, code for creating a 3D volume from the one or more 2D primitives created on the 2D canvas, code for forming a volumetric region, which is a subset of the 3D volumetric data set, from an intersection between the 3D volume created from the one or more 2D primitives and the 3D volumetric data set, and code for displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window.
  • 25. A method of producing hydrocarbons, comprising: displaying selected portions of a three-dimensional (3D) volumetric data set representing a subsurface hydrocarbon reservoir, wherein the displaying includes generating at least one two-dimensional (2D) canvas, the 2D canvas corresponding to a plane in the 3D data set, the 2D canvas being shown in a first display window, creating one or more 2D primitives on the 2D canvas, creating a 3D volume from the one or more 2D primitives created on the 2D canvas, forming a volumetric region, which is a subset of the 3D volumetric data set, from an intersection between the 3D volume created from the one or more 2D primitives and the 3D volumetric data set, and displaying the volumetric region in a 3D scene, the 3D scene being shown in a second display window; and producing hydrocarbons from the subsurface hydrocarbon reservoir using the displayed volumetric region.
CROSS-REFERENCE TO RELATED APPLICATION

This application is the National Stage entry under 35 U.S.C. 371 of International Patent Application No. PCT/US2013/035841, filed 9 Apr. 2013 and published as WO 2013/169429, which claims the benefit of U.S. Provisional Patent Application No. 61/644,196, filed May 8, 2012, entitled METHOD OF USING CANVAS BASED CONTROL FOR 3D DATA VOLUME VISUALIZATION, INTERROGATION, ANALYSIS AND PROCESSING, each of which is incorporated by reference herein, in its entirety, for all purposes.

PCT Information
Filing Document: PCT/US2013/035841; Filing Date: Apr. 9, 2013; Country: WO; Kind: 00
Publishing Document: WO 2013/169429; Publishing Date: Nov. 14, 2013; Country: WO; Kind: A
Related Publications (1)
US 2015/0049084 A1; Date: Feb. 2015; Country: US
Provisional Applications (1)
U.S. Provisional Application No. 61/644,196; Date: May 2012; Country: US