Displaying and manipulating seismic sections is one of the core activities of geoscience screening and interpretation workflows. Interaction with seismic sections is typically performed using panning and section player controls, initiated via a conventional human interface device like a mouse or a keyboard. As described herein, various technologies and techniques can facilitate screening, interpretation, etc., of seismic or other data.
One or more computer-readable media including computer-executable instructions to instruct a computing device to format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a first linear motion signal from manipulation of an input device; format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a second linear motion signal from manipulation of an input device where the first linear motion and second linear motion are orthogonal motions; format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a first rotational motion signal from manipulation of an input device; and format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a second rotational motion signal from manipulation of an input device where the first rotational motion and the second rotational motion are clockwise and counter-clockwise motions. Various other apparatuses, systems, methods, etc., are also disclosed.
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
Features and advantages of the described implementations can be more readily understood by reference to the following description taken in conjunction with the accompanying drawings.
The following description includes the best mode presently contemplated for practicing the described implementations. This description is not to be taken in a limiting sense, but rather is made merely for the purpose of describing the general principles of the implementations. The scope of the described implementations should be ascertained with reference to the issued claims.
In the example of
The simulation component 120 may process information to conform to one or more attributes, for example, as specified by the attribute component 130, which may be a library of attributes. Such processing may occur prior to input to the simulation component 120. Alternatively, or in addition, the simulation component 120 may perform operations on input information based on one or more attributes specified by the attribute component 130. As described herein, the simulation component 120 may construct one or more models of the geologic environment 150, which may be relied on to simulate behavior of the geologic environment 150 (e.g., responsive to one or more acts, whether natural or artificial). In the example of
Various technologies and techniques are described herein for analysis and visualization of information. In the example of
As described herein, the management components 110 may include features of a commercially available simulation framework such as the PETREL® seismic to simulation software framework (Schlumberger Limited, Houston, Tex.). The PETREL® framework provides components that allow for optimization of exploration and development operations. The PETREL® framework includes seismic to simulation software components that can output information for use in increasing reservoir performance, for example, by improving asset team productivity. Through use of such a framework, various professionals (e.g., geophysicists, geologists, and reservoir engineers) can develop collaborative workflows and integrate operations to streamline processes.
As described herein, the management components 110 may include features for geology and geological modeling to generate high-resolution geological models of reservoir structure and stratigraphy (e.g., classification and estimation, facies modeling, well correlation, surface imaging, structural and fault analysis, well path design, data analysis, fracture modeling, workflow editing, uncertainty and optimization modeling, petrophysical modeling, etc.). Particular features may allow for performance of rapid 2D and 3D seismic interpretation, optionally for integration with geological and engineering tools (e.g., classification and estimation, well path design, seismic interpretation, seismic attribute analysis, seismic sampling, seismic volume rendering, geobody extraction, domain conversion, etc.). As to reservoir engineering, for a generated model, one or more features may allow for simulation workflow to perform streamline simulation, reduce uncertainty and assist in future well planning (e.g., uncertainty analysis and optimization workflow, well path design, advanced gridding and upscaling, history match analysis, etc.). The management components 110 may include features for drilling workflows including well path design, drilling visualization, and real-time model updates (e.g., via real-time data links).
As described herein, various aspects of the management components 110 may be add-ons or plug-ins (e.g., executable code) that operate according to specifications of a framework environment. For example, a commercially available framework environment marketed as the OCEAN® framework environment (Schlumberger Limited) allows for seamless integration of add-ons (or plug-ins) into a PETREL® framework workflow. The OCEAN® framework environment leverages .NET® tools (Microsoft Corporation, Redmond, Wash.) and offers stable, user-friendly interfaces for efficient development. As described herein, various components may be implemented as add-ons (or plug-ins) that conform to and operate according to specifications of a framework environment (e.g., according to application programming interface (API) specifications, etc.). Various technologies described herein may be optionally implemented as components in an attribute library.
In the field of seismic analysis, aspects of a geologic environment may be defined as attributes. In general, seismic attributes help to condition conventional amplitude seismic data for improved structural interpretation tasks, such as determining the exact location of lithological terminations and helping isolate hidden seismic stratigraphic features of a geologic environment. Attribute analysis can be quite helpful in defining a trap in exploration or in delineating and characterizing a reservoir at the appraisal and development phase. An attribute generation process (e.g., in the PETREL® framework or other framework) may rely on a library of various seismic attributes (e.g., for display and use with seismic interpretation and reservoir characterization workflows). At times, a need or desire may exist for generation of attributes on the fly for rapid analysis. At other times, attribute generation may occur as a background process (e.g., a lower priority thread in a multithreaded computing environment), which can allow one or more foreground processes to continue (e.g., to enable a user to continue using various components).
Attributes can help extract the maximum amount of value from seismic and other data, for example, by providing more detail on subtle lithological variations of a geologic environment (e.g., an environment that includes one or more reservoirs).
The input device 230 may be a so-called “3D” mouse, for example, the 3DCONNEXION® SPACE NAVIGATOR™ device marketed by 3Dconnexion GmbH, Munich, Germany, which provides for manipulation of 3D graphics applications (e.g., cooperatively with a conventional mouse) to perform zooming (e.g., shifting linearly along an axis in a reference plane), panning left/right (e.g., shifting linearly along another axis in the reference plane), panning up/down (e.g., pulling up along the z-axis or pushing down along the z-axis), tilting (e.g., tilting through an angle φ), spinning (e.g., rotating through an angle Θ) and rolling (e.g., rolling through an angle φ). As a use example, one hand may engage the input device 230 to position a 3D graphics application model or navigate a 3D graphics application environment while the other hand simultaneously uses a conventional mouse to select, create or edit. As described herein, other input devices marketed by 3Dconnexion GmbH (e.g., the SpaceExplorer device, etc.) or others may optionally be suitable for input.
The input device 250 may include an emitter 255, a detector 257 and circuitry 259 for processing and output. As shown, the device 250 may allow for tilt input along a z-axis, as well as spinning about the z-axis.
The input device 270 may include a ball 272 such as a roller ball as well as various buttons 274 and 276 for user input. Such a device may include a software driver that can call for rendering of a graphical user interface that allows for user configuration of the buttons.
The input device 290 may be a touch screen that operates in conjunction with one or more graphical user interfaces. A graphical user interface may include a control dial and control arrows or other controls that allow for user input, for example, equivalent to or akin to user input received via a hardware device. As shown with respect to the input device 290, features such as slider bars, etc., may be provided as graphics controls.
In the example scenarios 302, 304, and 306, the data 301 is at least three-dimensional and capable of being sliced in planes (e.g., orthogonal or non-orthogonal sections). A planar window 303 is also shown in the example scenarios 302, 304 and 306. In the scenario 302, the input device 330 may be manipulated to pan or zoom the window 303 of the data 301 for a slice xb. User input via the device 330 may cause formatting of data for viewing (e.g., rendering to a display) according to a response 312-1 or a response 312-2. For example, the response 312-1 translates movement along a radial line of the device 330 from 90 degrees to 270 degrees (or alternatively tilt toward such angles) to speed in the data 301 along the z-axis of the Cartesian coordinate system of the data 301. The response 312-2 translates movement along a radial line of the device 330 from 0 degrees to 180 degrees (or alternatively tilt toward such angles) to speed in the data 301 along the y-axis of the Cartesian coordinate system of the data 301. A user may optionally operate a control that provides for selection of a different coordinate of the data, optionally along a non-orthogonal plane, etc.
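As an illustration of such a response mapping, the following sketch translates a radial deflection of an input device into signed navigation speeds along two data axes, with deflection toward 90/270 degrees driving one axis and deflection toward 0/180 degrees driving the other. The normalized magnitude, the maximum speed, and the specific axis assignments are assumptions for illustration, not details of any particular device driver.

```python
import math

def deflection_to_speed(angle_deg, magnitude, max_speed=10.0):
    """Translate a radial deflection of an input device into signed
    speeds along two data axes (a sketch; the axis mapping and
    scaling are assumed).

    Deflection toward 90/270 degrees drives the z-axis; deflection
    toward 0/180 degrees drives the y-axis. `magnitude` is the
    deflection normalized to [0, 1].
    """
    rad = math.radians(angle_deg)
    z_speed = max_speed * magnitude * math.sin(rad)  # 90 -> +z, 270 -> -z
    y_speed = max_speed * magnitude * math.cos(rad)  # 0 -> +y, 180 -> -y
    return y_speed, z_speed
```

A full deflection toward 90 degrees would then yield maximum speed along the z-axis with no motion along the y-axis, while intermediate angles blend the two.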
In the scenario 304, the input device 330 may be manipulated to move to a different slice xa of the data 301. User input via the device 330 may cause formatting of data for viewing (e.g., rendering to a display) according to a response 314. For example, the response 314 translates rotational movement about a rotational axis of the device 330 (e.g., from 0 degrees to 360 degrees) to speed in the data 301 along the x-axis of the Cartesian coordinate system of the data 301. A user may optionally operate a control that provides for selection of a different coordinate of the data, optionally along a non-orthogonal plane, etc.
In the scenario 306, the input device 330 may be manipulated to move to yet another slice xc of the data 301. User input via the device 330 may cause formatting of data for viewing (e.g., rendering to a display) according to a response 316. For example, the response 316 translates rotational movement about a rotational axis of the device 330 (e.g., shown as from 360 degrees to 0 degrees) to speed in the data 301 along the x-axis of the Cartesian coordinate system of the data 301. A user may optionally operate a control that provides for selection of a different coordinate of the data, optionally along a non-orthogonal plane, etc.
As described herein, an input device may be a jog-and-shuttle wheel or dial device configured to output instructions responsive to user input to, for example, control panning and zooming behavior of seismic sections. An input device may provide for direction and speed of panning and zooming. An input device may provide for section slicing (e.g., increasing or decreasing line numbers), the speed of slicing, and/or the step size by which a section number is increased or decreased. The example scenarios of
As described herein, an input device may include a physical wheel or a graphical control wheel where turning the wheel to the right and left can zoom a section in and out. In such an example, how far the wheel is turned from a center null position can determine the speed of the zooming. Referring to the scenario 302 of
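A minimal sketch of such a wheel-to-zoom mapping follows, assuming a small dead zone around the center null position and a linear gain; the dead-zone width and gain values are illustrative assumptions.

```python
def zoom_speed(wheel_angle_deg, dead_zone_deg=2.0, gain=0.05):
    """Map wheel rotation away from a center null position to a
    zoom rate (a sketch; dead zone and gain are assumptions).

    Positive angles (turned right) zoom in, negative angles
    (turned left) zoom out, and how far the wheel is turned from
    the null position determines the speed of zooming.
    """
    if abs(wheel_angle_deg) <= dead_zone_deg:
        return 0.0  # within the null region: no zooming
    # Speed grows linearly with displacement beyond the dead zone.
    displacement = abs(wheel_angle_deg) - dead_zone_deg
    rate = gain * displacement
    return rate if wheel_angle_deg > 0 else -rate
```

The dead zone keeps a lightly touched wheel from producing drift, mirroring how a physical jog wheel rests at a null position.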
Again, as described in the scenarios 304 and 306 of
In the example scenarios 302, 304 and 306 of
As described herein, one or more computer-readable media can include computer-executable instructions to instruct a computing device to: format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a first linear motion signal from manipulation of an input device; format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a second linear motion signal from manipulation of an input device where the first linear motion and second linear motion can be or are orthogonal motions; format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a first rotational motion signal from manipulation of an input device; and format multidimensional data, with respect to one or more dimensions of a multidimensional coordinate system, responsive to receipt of a second rotational motion signal from manipulation of an input device where the first rotational motion and the second rotational motion can be or are clockwise and counter-clockwise motions. Such one or more computer-readable media can include computer-executable instructions to instruct a computing system to render formatted multidimensional data to a display device and optionally render a graphic to a display device where a characteristic of the graphic depends on a received motion signal. For example, such a graphic may be an arrow and the characteristic may be size, color, etc. With respect to an input device, such a device may be a touch screen or a jog-and-shuttle wheel or other type of device (e.g., optionally with a rotatable wheel).
As described herein, one or more computer-readable media may include computer-executable instructions to instruct a computing system to render a series of views of formatted multidimensional data to a display device at a frame speed dependent upon an extent of motion (e.g., motion due to manipulation of an input device).
In the scenario 402, the input device 430 may be in a null position as associated with a slice xb of the data 401 that includes the feature 405. As described herein, the track module 425 may allow a user to select the feature 405 for tracking in the data 401.
In the scenario 404, the input device 430 may be manipulated by spinning in a clockwise direction to move to a slice xa of the data 401. Where the feature 405 has been selected for tracking, the track module 425 causes the computing platform 420 to track and window the feature 405 for rendering to the display 440 (e.g., by appropriately formatting at least some of the data 401). Further, a graphic 444 may be rendered to the display 440 to indicate direction and speed. For example, the arrow graphic 444 may be sized, colored, etc., such that a user is notified as to how fast navigation is occurring through the data 401 (e.g., larger arrow, darker color or shade indicating faster navigation and smaller arrow, lighter color or shade indicating slower navigation).
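One way to realize such a speed-indicating graphic is to derive the arrow's size and shade directly from the navigation speed, as in the sketch below; the pixel range, shade scale, and sign convention are assumptions for illustration.

```python
def arrow_style(speed, max_speed=10.0):
    """Choose a size and gray shade for a direction arrow so that
    faster navigation yields a larger, darker arrow and slower
    navigation a smaller, lighter one (a sketch; the size range
    and shade scale are assumptions).
    """
    frac = min(abs(speed) / max_speed, 1.0)
    size_px = 16 + int(48 * frac)   # 16 px (slow) up to 64 px (fast)
    shade = int(200 - 180 * frac)   # 200 (light gray) down to 20 (dark)
    direction = "forward" if speed >= 0 else "backward"
    return {"size_px": size_px, "shade": shade, "direction": direction}
```

A renderer would then draw the arrow with these parameters each frame, so the user is continuously notified of how fast navigation through the data is occurring.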
In the scenario 406, the device 430 may be manipulated by spinning in a counter-clockwise direction to move to a slice xc of the data 401. Where the feature 405 has been selected for tracking, the track module 425 causes the computing platform 420 to track and window the feature 405 for rendering to the display 440 (e.g., by appropriately formatting at least some of the data 401). Further, as indicated in the scenario 406, a “flag” control may be selected to flag the rendered information 442 of feature 405 for the slice xc. Flagging may cause data, associated information, etc., to be stored to memory (e.g., consider memory 422 of the computing platform 420). In the scenario 406, a log may indicate that a flag was set such that a user may readily return to the data, associated information, etc., for any of a variety of purposes. For example, where a flag is set for a feature relevant to extraction of a resource from a reservoir, information associated with the feature may become readily available for input to a workflow (see, e.g., the workflow component 144 of
As mentioned, the track module 425 may provide for windowing as a form of formatting. As described herein, such windowing may be dynamic and depend on the type of selection made for tracking. For example, where a geologic structure is selected (e.g., as identified by a property, attribute, etc.), the track module 425 may cause the computing platform 420 to analyze the data 401 for boundaries of the structure. In such an example, the identified boundaries can automatically alter windowing to zoom in or zoom out depending on the scale of the structure in various viewing planes. Accordingly, display space can be optimized for a user to visually inspect or otherwise analyze a structure.
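A simple form of such boundary-driven windowing is to frame the structure's boundary points with a margined bounding box in the current viewing plane, as sketched below; the margin fraction and the point-list representation are assumptions.

```python
def window_for_structure(points, margin=0.1):
    """Compute a 2D viewing window that frames a selected structure,
    given its boundary points in a viewing plane (a sketch; the
    margin fraction is an assumption).

    Returns (x_min, x_max, y_min, y_max) expanded by `margin` of the
    structure's extent so it does not touch the window edge.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    dx = (max(xs) - min(xs)) * margin
    dy = (max(ys) - min(ys)) * margin
    return (min(xs) - dx, max(xs) + dx, min(ys) - dy, max(ys) + dy)
```

Recomputing this window per slice lets the zoom follow the structure's scale as it changes across viewing planes.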
In an example where a property limit or range is selected for tracking (e.g., permeability from x1 to x2), a track module may perform an analysis to identify a relatively contiguous 3D network within data and allow for rendering of the network to a display (e.g., slice-wise rendering while also showing locations in an accompanying 3D perspective view). As described herein, tracking can include display of a close-up view (e.g., planar view) and an expanded view (e.g., 3D perspective view).
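One way such a contiguous-network analysis might be performed is a flood fill over grid cells whose property values fall within the selected range, as in the sketch below; the face-connectivity rule, nested-list volume layout, and seed-based interface are assumptions for illustration.

```python
from collections import deque

def contiguous_region(volume, seed, lo, hi):
    """Identify the contiguous 3D cell network, grown from `seed`,
    whose property values fall within [lo, hi] (a sketch of the kind
    of analysis a track module might perform; face connectivity and
    the volume[i][j][k] layout are assumptions).
    """
    ni, nj, nk = len(volume), len(volume[0]), len(volume[0][0])
    in_range = lambda i, j, k: lo <= volume[i][j][k] <= hi
    if not in_range(*seed):
        return set()
    region, queue = {seed}, deque([seed])
    while queue:
        i, j, k = queue.popleft()
        # Visit the six face-adjacent neighbors of the current cell.
        for di, dj, dk in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (i + di, j + dj, k + dk)
            if (0 <= n[0] < ni and 0 <= n[1] < nj and 0 <= n[2] < nk
                    and n not in region and in_range(*n)):
                region.add(n)
                queue.append(n)
    return region
```

The returned cell set could then drive slice-wise rendering as well as highlighting in an accompanying 3D perspective view.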
As described herein, formatting of data can include various operations whereby discrete data points in a multidimensional coordinate system are averaged, interpolated, etc. Hence, formatting can include averaging, interpolating or other types of data processing. Formatting optionally includes processing to obtain polygonal data describing a feature (e.g., a horizon or other structure). Formatting optionally includes processing data to determine contours for a section. Formatting can include processing data to obtain polygonal contours and transforming the contours into a multidimensional object (e.g., 3D object).
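As one concrete instance of the interpolation mentioned above, the sketch below evaluates a value at fractional coordinates between discrete data points with bilinear interpolation; the row-major list-of-lists grid layout is an assumption.

```python
def bilinear(grid, x, y):
    """Interpolate a value at fractional coordinates (x, y) from a
    2D grid of discrete data points, one of the averaging/
    interpolation operations formatting may involve (a sketch; the
    grid[i][j] row-major layout is an assumption).
    """
    i, j = int(x), int(y)
    fx, fy = x - i, y - j  # fractional offsets within the cell
    v00, v10 = grid[i][j], grid[i + 1][j]
    v01, v11 = grid[i][j + 1], grid[i + 1][j + 1]
    return (v00 * (1 - fx) * (1 - fy) + v10 * fx * (1 - fy)
            + v01 * (1 - fx) * fy + v11 * fx * fy)
```

Analogous trilinear weights extend the same idea to a third dimension for sampling between slices.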
The method 510 of
As described herein, a method can include identifying a three-dimensional feature in seismic data; receiving one or more instructions responsive to manipulation of an input device, the one or more instructions corresponding to a series of two-dimensional views of the three-dimensional feature in the seismic data; tracking the three-dimensional feature in the seismic data based on the one or more received instructions; and outputting information to render each view of the series of two-dimensional views of the three-dimensional feature in the seismic data. Such a method may include formatting information and outputting formatted information (e.g., where the formatting information includes windowing a three-dimensional feature). A method may include outputting information to render a graphic where the graphic has a characteristic dependent on receipt of one or more instructions responsive to manipulation of an input device and where the graphic indicates a speed of rendering of successive two-dimensional views (e.g., a frame rate or equivalent thereof).
In the scenario 602, the input device 630 may be in a null position as associated with a slice xb of the data 601 that includes the feature 605 for a time t0. As described herein, the track module 625 may allow a user to select the feature 605 for tracking in the data 601, which may include information with respect to time.
In the scenario 604, the input device 630 may be manipulated by spinning in a clockwise direction to move forward in time to a time t0+Δt of the data 601. In the scenario 606, the input device 630 may be manipulated by spinning in a counter-clockwise direction to move backward in time to a time t0−Δt of the data 601.
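A sketch of such direction-to-time mapping follows: clockwise (positive) rotation advances time, counter-clockwise (negative) rotation rewinds it, and the result is clamped to the data's time range. The degrees-per-step scaling and time increment are illustrative assumptions.

```python
def navigate_time(t0, rotation_deg, t_min, t_max,
                  degrees_per_step=15.0, dt=1.0):
    """Map a spin of an input device to a new time in the data:
    clockwise (positive degrees) moves forward, counter-clockwise
    (negative degrees) moves backward (a sketch; the scaling
    constants are assumptions).
    """
    steps = rotation_deg / degrees_per_step
    t = t0 + steps * dt
    # Keep the requested time within the available data range.
    return max(t_min, min(t_max, t))
```

Larger spins therefore step further through time, while the clamp prevents navigating past the first or last available state.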
As described herein, a “flag” control may be selected to flag the rendered information 642 of feature 605 for a selected time. Flagging may cause data, associated information, etc., to be stored to memory (e.g., consider memory 622 of the computing platform 620). A log may indicate that a flag was set such that a user may readily return to the data, associated information, etc., for any of a variety of purposes. For example, where a flag is set for a feature relevant to extraction of a resource from a reservoir (e.g., resource pool, network, etc.), information associated with the feature may become readily available for input to a workflow (see, e.g., the workflow component 144 of
The method 710 of
As described herein, one or more computer-readable media can include computer-executable instructions to instruct a computing device to: receive a time instruction for a reservoir model, the instruction responsive to manipulation of an input device wherein a direction of the manipulation determines whether the instruction corresponds to a past time or to a future time; access reservoir model data from a data store responsive to receipt of an instruction that corresponds to a past time or to perform a simulation of a reservoir model and access reservoir model data from the simulation responsive to receipt of an instruction that corresponds to a future time; and render at least some accessed reservoir model data to a display based on receipt of an instruction that corresponds to a past time or receipt of an instruction that corresponds to a future time. One or more computer-readable media can include instructions where a clockwise direction corresponds to a future time and where a counter-clockwise direction corresponds to a past time. One or more computer-readable media can include instructions to render a graphic to a display where a characteristic of the graphic depends on a direction of the manipulation of an input device. While rotational directions are mentioned, linear directions may alternatively be used for time (e.g., consider capabilities that allow user assignment of manipulation features of an input device).
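The past-versus-future dispatch described above can be sketched as a small function that serves past times from stored results and future times by running a simulation; the dict-keyed data store and the `simulate` callable are assumed interfaces, not part of any particular framework.

```python
def reservoir_view(t_requested, t_now, data_store, simulate):
    """Dispatch a time instruction for a reservoir model (a sketch):
    past or current times are served from a data store of prior
    results, future times by invoking a simulation. `data_store` is
    assumed to be a mapping keyed by time and `simulate` a callable.
    """
    if t_requested <= t_now:
        return data_store[t_requested]  # historical / already-computed state
    return simulate(t_requested)        # forward simulation for a future time
```

A caching variant could also store each simulated future state back into the data store so that rewinding and replaying does not repeat work.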
The GUI 815 shows a 3D perspective view of features in a data set along with direction of acceleration of gravity (G). The GUI 815 shows four isolated structures bound by two horizons (H1 and H2).
The GUI 825 allows a user to select or otherwise enter information for use in tracking, formatting, rendering, etc. A user may select a property (or attribute), a property versus time (e.g., change in property over a period of time), an area (e.g., cross-sectional area), a slope (e.g., optionally with respect to gravity), or other metric (e.g., criteria, property, feature, etc.), which may be a custom metric based on one or more values. Further, the GUI 825 includes a record control, which may cause a particular fly through of a data set to be recorded (e.g., stored to memory, whether via storage of images, graphics instructions, settings, etc.).
The GUIs 835 and 845 may be complementary. For example, the GUI 845 may be windowed (e.g., sized for display) based on some feature or characteristic displayed via the GUI 835. In the example of
Referring again to the GUI 825, the “Record” control optionally includes an “On” setting and an “Off” setting. As indicated in the GUI 855, such settings may relate to speed (or a speed feature). Hence, where the speed drops below the “On” setting, recording starts, and where the speed rises above the “Off” setting, recording stops. In such a manner, recording of selected features of interest may occur automatically and readily allow for review by a user.
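Such speed-triggered recording can be sketched as a small state machine with hysteresis: recording turns on when the navigation speed falls below the “On” threshold (the user has slowed down to inspect something) and off when it exceeds the “Off” threshold. The threshold values and frame interface below are illustrative assumptions.

```python
class SpeedTriggeredRecorder:
    """Record frames automatically based on navigation speed (a
    sketch of the Record control's "On"/"Off" speed settings; the
    default thresholds are assumptions).

    Recording starts when speed drops below `on_speed` and stops
    when it rises above `off_speed`; the gap between the two
    provides hysteresis so small fluctuations do not toggle state.
    """
    def __init__(self, on_speed=2.0, off_speed=5.0):
        self.on_speed = on_speed
        self.off_speed = off_speed
        self.recording = False
        self.frames = []

    def update(self, speed, frame):
        if not self.recording and speed < self.on_speed:
            self.recording = True
        elif self.recording and speed > self.off_speed:
            self.recording = False
        if self.recording:
            self.frames.append(frame)
```

Frames captured while recording could be stored to memory as images, graphics instructions, or settings for later review.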
The method 870 of
The method 870 of
In
In
In
In the examples of
An example of a method 1050 is also shown in
The method 1050 of
As described herein, a 2D plane coming out of a seismic section (a 3D volume with a physical analog) may be used to visualize a fourth dimension. In such an example, at every point in the 3D volume, another “axis” is provided and may be rendered as extending out of a selected portion of the 3D volume or in a separate graphical display (e.g., a separate display window). An input device such as a jog wheel can allow a user to input instructions to render information in the fourth dimension (e.g., by moving a line along a slice in the 3D volume) thus effectively updating the view of the 3D volume with fourth dimension information.
As to an example of a fourth dimension, consider frequency response, which may be rendered as a 2D plane of amplitude and frequency where amplitude values for each point at an intersection may be displayed from 0 to the highest frequency. As another example, consider angle as an alternative representation for offset. In general, there are two approaches to look at amplitudes in a prestack data domain: Amplitude Versus Offset (AVO) or Amplitude Versus Angle (AVA). The difference between these two approaches depends on the velocity model, such that gathers will look slightly different. As to offset, the offset direction or angle domain of the seismic prestack data may be rendered (e.g., values from zero offset to maximum offset, or from zero angle to maximum angle).
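As an illustration of the frequency-response case, the sketch below computes the amplitude spectrum of a single trace with a discrete Fourier transform, i.e., the amplitude-versus-frequency values that could be rendered along a fourth “frequency” axis at a point in the 3D volume. The synthetic cosine trace, sample rate, and scaling convention are assumptions for illustration.

```python
import numpy as np

def amplitude_spectrum(trace, sample_rate_hz):
    """Return (frequencies_hz, amplitudes) for one trace (a sketch;
    windowing and scaling conventions are simplified).
    """
    trace = np.asarray(trace, dtype=float)
    amps = np.abs(np.fft.rfft(trace)) / len(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / sample_rate_hz)
    return freqs, amps

# Synthetic trace: a 50 Hz cosine sampled at 1000 Hz for one second.
t = np.arange(0.0, 1.0, 0.001)
freqs, amps = amplitude_spectrum(np.cos(2 * np.pi * 50 * t), 1000.0)
```

For this synthetic trace the spectrum peaks at 50 Hz, which is the kind of display a user could sweep through, point by point, with a jog wheel.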
An example of a method 1150 is also shown in
As described herein, one or more computer-readable media may include computer-executable instructions to instruct a computing system to output information for controlling a process. For example, such instructions may provide for output to a sensing process, an injection process, a drilling process, an extraction process, etc. Accordingly, based on visualization, a user may via a graphical control cause a computing device to issue an instruction that results in a physical response in a field (see, e.g., the environment 150 of
As described herein, components may be distributed, such as in the network system 1210. The network system 1210 includes components 1222-1, 1222-2, 1222-3, . . . 1222-N. For example, the component(s) 1222-1 may include the processor(s) 1202 while the component(s) 1222-3 may include memory accessible by the processor(s) 1202. Further, the component(s) 1222-2 may include an I/O device for display and optionally interaction with a method. The network may be or include the Internet, an intranet, a cellular network, a satellite network, etc.
Although various methods, devices, systems, etc., have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as examples of forms of implementing the claimed methods, devices, systems, etc.