The exemplary embodiments described herein relate generally to the field of geophysical prospecting, and more particularly to the analysis of seismic or other geophysical subsurface imaging data. Specifically, the disclosure describes a method to classify seismic surfaces or patches of seismic surfaces that have been previously obtained.
This section is intended to introduce various aspects of the art, which may be associated with exemplary embodiments of the present invention. This discussion is believed to assist in providing a framework to facilitate a better understanding of particular aspects of the present invention. Accordingly, it should be understood that this section should be read in this light, and not necessarily as admissions of prior art.
Seismic surfaces are horizons that have been tracked through 2D or 3D seismic data and that represent, and generally follow, subterranean reflector surfaces. They generally correspond to boundaries between layers of rock, with everything below a horizon older than everything above it; hence they represent surfaces of equivalent geologic time.
Since the 1970s, geoscientists have used the concepts of seismic stratigraphy to interpret and label the key types of seismic stratigraphic surfaces: sequence boundaries (SBs) and flooding surfaces (FSs). One fundamental concept of seismic stratigraphy is that SBs and FSs divide seismic data into chronological packages, forming the boundaries of genetically related packages of strata called seismic sequences and seismic systems tracts.
Traditionally, surfaces in seismic data have been tracked interactively along a 2D line or volume of seismic data. Computer-based surface picks were initially interpreted using drawing or tracking software. Subsequent innovations allow surfaces to be tracked automatically or semi-automatically through 2D or 3D seismic data nearly instantaneously using software now routinely available in numerous commercially available software products for geophysical interpretation (e.g., Viswanathan, 1996, U.S. Pat. No. 5,570,106; Pedersen, 2002, GB Patent No. 2,375,448; Admasu and Toennies, 2004; James, WO 2007046107). With these methods, interpreted surfaces are based on one or more seed point(s) or seed track(s) provided by the interpreter, with the final interpretation interactively accepted or revised by the interpreter. Options or ambiguities in interpretation, such as which branch to take when a surface splits, are frequently resolved by application of seismic stratigraphic concepts by the seismic interpreter. One component of seismic interpretation, then, is the gradual development of a conceptual geologic or seismic stratigraphic framework model of the region represented by the seismic data. Part of this is implicit or explicit classification or labeling of surfaces as FSs, SBs, or other meaningful geologic or geophysical surface types by the interpreter as a guide to executing the interpretation and subsequent procedures. The interpreter does this based on the seismic reflection geometries and terminations (onlap, downlap, truncation, and toplap), seismic characteristics of the surface itself (amplitude, dip, smoothness or rugosity, continuity, etc.), and seismic facies characteristics of the bounding intervals, following the concepts of seismic stratigraphy.
Judgment and evaluation based on the developing conceptual geologic model is done at several points in the interpretation process, including selection of which surfaces to track, what choices to make when encountering ambiguities, deciding whether to accept or revise a surface, and selecting areas of interest for subsequent analyses, interpretation, or visualization, for example, as potential hydrocarbon reservoirs, source facies, or seal facies.
Further innovations in the interpretation of seismic surfaces now provide methods of automatically picking a dense set of surfaces, also known as "stacks of surfaces" or "global interpretation," in seismic volumes. These methods refer to interpretation of many or all surfaces, or portions of surfaces, in seismic volumes. Geologically motivated mathematical rules or user guidance may be employed at decision points to resolve ambiguities, such as faults, locations where reflectors merge or branch, or overlapping or crossing surfaces. In some cases, sets of surface parts may be the final product. These extend over only portions of seismic volumes, often terminating where further correlation is ambiguous (i.e., the "horizon patches" of Imhof et al., 2009). These sets of surfaces or surface parts can be produced relatively rapidly from 2D lines or 3D volumes of seismic data with little to no user interaction.
Examples of methods for automatically generating “stacks of surfaces” or “stacks of surface patches” that generally follow seismic events such as peaks, troughs, or zero crossings include:
As described above, interactive seismic interpretation is nearly always done using a conceptual geologic model. The model is used to help select which surfaces to track, what choices to make when encountering ambiguities, whether to accept or revise a surface, and selection of areas of interest for subsequent analyses. When automatically generating stacks of surfaces or surface patches, such as occurs when applying the methods cited above, this step has not yet occurred. The output is a set of unclassified surfaces.
Surface Labeling
Other methods of surface clustering or labeling have been developed. For example, U.S. Pat. No. 6,771,800 (“Method of Chrono-Stratigraphic Interpretation of a Seismic Cross Section or Block”) to Keskes et al. (2004) discloses a method to transform seismic data into the depositional or chronostratigraphic domain. They construct virtual reflectors, discretize the seismic section or volume, count the number of virtual reflectors in each pixel or voxel, and renormalize this histogram. By performing this procedure for every trace, they create a section or volume where each horizontal slice approximates a surface indicating a geologic layer deposited at one time. This can be used by an interpreter to determine sedimentation rates, highlighting geologic hiatuses, which are surfaces of non-deposition.
Monsen et al. (“Geologic-process-controlled interpretation based on 3D Wheeler diagram generation,” SEG 2007) extended U.S. Pat. No. 7,248,539 to Borgos. They extract stratigraphic events from the seismic data and categorize them into over/under relationships based on local signal characteristics, deriving a relative order of patches using a topological sort. Flattened surfaces are then positioned in this relative order to allow a user to interpret the surface type by relative age, position, and basinward and landward extents, or through transformation to the depositional Wheeler domain (Wheeler, 1958). Wheeler methods can work in shelf margin depositional environments to determine surface types, but may not work in other settings, such as continental or deepwater. They also do not compute confidence measures.
A method to classify one or more seismic surfaces or surface patches based on measurements from seismic data, including: obtaining, by a computer, a training set including a plurality of previously obtained and labeled seismic surfaces or surface patches and one or more training seismic attributes measured or calculated at, above, and/or below the seismic surfaces; obtaining, by the computer, one or more unclassified seismic surfaces or surface patches and one or more seismic attributes measured or calculated at, above, and/or below the unclassified seismic surfaces; learning, by the computer, a classification model from the previously obtained and labeled seismic surfaces or surface patches and the one or more training seismic attributes; and classifying, by the computer, the unclassified seismic surfaces or surface patches based on the application of the classification model to the unclassified seismic surfaces or surface patches.
The method can further include quantifying a degree of confidence in a classification of the unclassified seismic surfaces or surface patches.
In the method, the classifying can include labeling the unclassified seismic surfaces or surface patches with a label that differentiates between stratigraphic classes.
In the method, the labeling can include differentiating between sequence boundaries and flooding surfaces.
In the method, the classifying can use a relationship between surfaces to further differentiate flooding surfaces into maximum flooding surfaces or transgressive flooding surfaces.
In the method, the learning can include learning the classification model implicitly from the plurality of previously obtained and labeled seismic surfaces or surface patches and the one or more training seismic attributes.
In the method, the one or more seismic attributes can include a single measure of attribute contrast above and below a seismic surface or surface patch of the plurality of previously obtained and labeled seismic surfaces or surface patches in order to collapse stratigraphically diagnostic seismic facies information into a single boundary measure, and the classifying is based on the single measure of attribute contrast.
In the method, the classifying can include eliminating redundant attributes from amongst the one or more seismic attributes measured or calculated at, above, and/or below the unclassified seismic surfaces using a single-link hierarchical dendrogram.
In the method, the classifying can be by hard assignment.
In the method, the classifying can be by soft assignment.
In the method, the classifying can include segmenting the one or more unclassified seismic surfaces or surface patches.
In the method, the classifying can include individually classifying segments of the one or more unclassified seismic surfaces or surface patches.
In the method, the segmenting can result in approximately equal segment sizes.
In the method, the segmenting can include using the one or more seismic attributes to determine segment size, wherein at least one segment has a different size than another segment.
The method can include differentiating between different flooding surfaces of the classification of the unclassified seismic surfaces or surface patches.
The method can include using the classification of the unclassified seismic surfaces or surface patches to manage the production of hydrocarbons.
In the method, the classification model can be learned from an incompletely labeled training dataset.
In the method, the classifying can include classifying surfaces above but not below AVO.
While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein, but on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims. It should also be understood that the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating principles of exemplary embodiments of the present invention. Moreover, certain dimensions may be exaggerated to help visually convey such principles.
Exemplary embodiments are described herein. However, to the extent that the following description is specific to a particular embodiment, this is intended to be for exemplary purposes only and simply provides a description of the exemplary embodiments. Accordingly, the invention is not limited to the specific embodiments described below, but rather, it includes all alternatives, modifications, and equivalents falling within the true spirit and scope of the appended claims.
Traditionally, sequence boundaries and flooding surfaces have been identified by human interpreters based on the seismic character above and/or below seismic reflections. The present technological advancement provides a method for automating the identification of flooding surfaces and sequence boundaries. An exemplary method described herein pertains to automatically classifying one or more seismic surfaces or surface patches which have been obtained from seismic data. These may be classified as flooding surfaces (FSs), sequence boundaries (SBs), or other surfaces of geologic or geophysical importance based on measurements from seismic data, such as surfaces above but not below AVO, and confidence measures may optionally be computed for the classification. The identification of surfaces and segments can be accomplished by referencing a training set of example segments and example surfaces. Different training sets may be used for different environments of deposition (EODs), basins, surveys, and/or different types of seismic data (zero-phase, quadrature, etc.).
An exemplary method can include (1) inputting the seismic surface(s) or surface patch(es), (2) inputting attributes measured from seismic data, (3) computing features at, above, and/or below the surface that characterize the surface and/or bounding facies, potentially using a measure of contrast, (4) classifying the surface, optionally quantifying the likelihood of the surface belonging to each type, and (5) outputting the surface class and quantifying confidence. The surfaces can then be used for data analysis, interpretation, or visualization.
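A minimal sketch of the five steps above, assuming a simple signed attribute-contrast feature and a nearest-neighbor classifier (one of the classifiers discussed later). The data values and the `contrast_features` helper are hypothetical illustrations, not part of the disclosure.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def contrast_features(attrs_above, attrs_below):
    # Step 3: characterize a surface by signed attribute contrast across it.
    return np.asarray(attrs_above) - np.asarray(attrs_below)

# Steps 1-2: labeled training surfaces with attributes sampled above/below.
train_above = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
train_below = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
labels = ["SB", "SB", "FS", "FS"]
X_train = [contrast_features(a, b) for a, b in zip(train_above, train_below)]

# Step 4: classify an unclassified surface; step 5: output class + confidence.
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, labels)
X_new = [contrast_features([0.85, 0.15], [0.15, 0.85])]
surface_class = clf.predict(X_new)[0]
confidence = clf.predict_proba(X_new).max()
```

The confidence here is simply the fraction of nearest neighbors voting for the winning class; richer confidence measures are discussed below.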
The classified surfaces and the optional confidences can be used to explore for or manage hydrocarbons. As used herein, hydrocarbon management includes hydrocarbon extraction, hydrocarbon production, hydrocarbon exploration, identifying potential hydrocarbon resources, identifying well locations, determining well injection and/or extraction rates, identifying reservoir connectivity, acquiring, disposing of and/or abandoning hydrocarbon resources, reviewing prior hydrocarbon management decisions, and any other hydrocarbon-related acts or activities.
Optionally, as shown in
The selected set of surface features can be a subset of the input training surface seismic attributes. As discussed below, a subset may be determined in order to reduce redundancy and improve efficiency. In step 405, segment patterns for each surface type are calculated, and a classification model is learned from the training surface segment patterns.
Note that the set of training surfaces may be incompletely labeled; that is, the user may not have specified classification labels for all of the input training surfaces. This data can then be used to learn the classification model within a semi-supervised or active learning framework, as opposed to the more common supervised learning framework, which requires completely labeled training sets. Semi-supervised learning takes advantage of the inherent structure in labeled and unlabeled training data to learn classifiers. In other words, the structure between labeled and unlabeled data is used to bootstrap and improve on a classifier learned using only the labeled training surfaces. Semi-supervised learning methods include self-training, co-training, and semi-supervised support vector machines; see X. Zhu, “Semi-Supervised Learning Literature Survey”, Computer Sciences Technical Report TR 1530, University of Wisconsin-Madison, 2008, for an overview. Active learning uses a common classifier method but tries to improve the information in the labeled training set by querying the user for the labels of selected unlabeled surfaces. The main goal of active learning is to achieve the same or higher classification accuracy with fewer overall labeled training examples by virtue of the fact that it can choose which training examples are labeled. Burr Settles provides a survey of active learning approaches in “Active Learning Literature Survey”, Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2010. Ultimately, and regardless of the situation, the steps described in
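As a hedged illustration of the semi-supervised idea, a minimal self-training loop might look like the following; the one-dimensional data, the 0.6 confidence threshold, and the use of a K-nearest-neighbor base classifier are all illustrative assumptions, not part of the disclosure.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# A few labeled surfaces and several unlabeled ones (toy 1-D features).
X_labeled = np.array([[0.0], [0.1], [1.0], [1.1]])
y_labeled = ["FS", "FS", "SB", "SB"]
X_unlabeled = np.array([[0.05], [0.95], [0.5]])

threshold = 0.6  # only adopt pseudo-labels the classifier is confident in
for _ in range(3):  # a few self-training rounds
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_labeled, y_labeled)
    if len(X_unlabeled) == 0:
        break
    proba = clf.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) >= threshold
    if not confident.any():
        break
    # Move confidently pseudo-labeled surfaces into the training set.
    pseudo = clf.predict(X_unlabeled[confident])
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = list(y_labeled) + list(pseudo)
    X_unlabeled = X_unlabeled[~confident]
```

Each round grows the labeled set with confident predictions, so later rounds train on more data than the user originally labeled.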
Learning a Classification Model
Soft assignment relates classes of objects with unsharp boundaries in which membership is a matter of degree. Those of ordinary skill are familiar with the application of probabilistic evaluation and fuzzy logic, two common examples of soft assignment methodologies, to classification or assignment problems. Soft assignment is tolerant of imprecision, uncertainty, partial truth, and approximation. Hard assignment, on the contrary, does not account for any imprecision and evaluates classification problems as a binary set (i.e., true/false).
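A toy illustration of the distinction, with made-up class probabilities for a single surface:

```python
# Made-up class probabilities from a soft classifier over {SB, FS}.
probs = {"SB": 0.7, "FS": 0.3}

# Soft assignment: report the full degrees of membership.
soft = probs

# Hard assignment: collapse to a single true/false choice.
hard = max(probs, key=probs.get)
```

The soft result preserves the 0.3 degree of membership in FS, which a hard assignment discards.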
Measurements from the seismic data, such as amplitude, dip, frequency, phase, or polarity, often called seismic attributes, are input to the classification. A seismic attribute is a quantity extracted or derived from seismic data that can be analyzed in order to enhance information that might be more subtle in a traditional seismic image. As illustrated in
Attributes from above and below a surface can also be combined into a single measurement of contrast across a surface (
There are numerous methods to calculate above/below surface contrast. Two methods that could be used are the Euclidean distance,

d(x_A, x_B) = sqrt( Σ_k ( x_A(k) - x_B(k) )² ),

or normalized similarity,

s(x_A, x_B) = ( Σ_k x_A(k) · x_B(k) ) / ( ||x_A|| · ||x_B|| ),

where x_A and x_B are vectors with values extracted from one or more attributes, or computed statistics thereof, within a window or interval above and below, respectively, the surface for which the surface contrast measure is being evaluated; k denotes the kth element of the vector, and ||·|| denotes the Euclidean norm.
The “above/below surface contrast” is a quantification of how much some characteristic(s) differs above versus below the surface. The Euclidean distance provides a measure of the contrast because it quantifies how much the two vectors x_A and x_B differ, element by element (x_A(k) versus x_B(k)).
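A minimal sketch of the two contrast measures, assuming the windowed attribute vectors x_a and x_b have already been extracted from the intervals above and below the surface:

```python
import math

def euclidean_contrast(x_a, x_b):
    """Euclidean distance between the above- and below-surface vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_a, x_b)))

def normalized_similarity(x_a, x_b):
    """One plausible normalization: cosine similarity of the two vectors."""
    dot = sum(a * b for a, b in zip(x_a, x_b))
    norm_a = math.sqrt(sum(a * a for a in x_a))
    norm_b = math.sqrt(sum(b * b for b in x_b))
    return dot / (norm_a * norm_b)

# A surface with strongly contrasting facies above and below.
x_a, x_b = [0.9, 0.1], [0.1, 0.8]
contrast = euclidean_contrast(x_a, x_b)
similarity = normalized_similarity(x_a, x_b)
```

High Euclidean contrast (or low normalized similarity) suggests a stratigraphically significant boundary between distinct facies.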
Because hundreds to thousands of attributes could be extracted from numerous positions and intervals (i.e.,
The next step,
Next, as shown in
Table 1 shows example segment labels for a SB, TFS, and MFS, and the segment patterns for several surfaces.
Next, as shown in
There are a number of methods in the machine learning literature that can be used to reduce the number of attributes (“Computational Methods of Feature Selection”, edited by Huan Liu and Hiroshi Motoda). These include principal component analysis, factor analysis, projection pursuit, decision trees, random forests, and single-link hierarchical dendrograms (see Matlab's linkage and dendrogram functions), which were applied in the example below (see
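A sketch of the dendrogram-based reduction using SciPy in place of the Matlab functions named above; the choice of correlation distance and the 0.1 merge threshold are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
a = rng.normal(size=100)
attrs = np.column_stack([
    a,                                       # attribute 0
    a * 2.0 + 0.01 * rng.normal(size=100),   # attribute 1: redundant with 0
    rng.normal(size=100),                    # attribute 2: independent
])

# Distance between attributes = 1 - correlation (SciPy's "correlation"
# metric); single-link clustering builds the dendrogram.
dist = pdist(attrs.T, metric="correlation")
Z = linkage(dist, method="single")
clusters = fcluster(Z, t=0.1, criterion="distance")

# Keep one representative attribute per cluster of redundant attributes.
keep = [int(np.where(clusters == c)[0][0]) for c in np.unique(clusters)]
```

The two highly correlated attributes merge at a tiny dendrogram height and collapse into one representative, while the independent attribute survives on its own.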
Cross-validation should be part of computing and selecting the surface features (step 404) in order to estimate how accurately the model will perform on unlabeled surfaces and to prune and select features, but it is not required. K-fold cross-validation and leave-one-out cross-validation are two ways this could be done. Segment classes can be eliminated, merged, or increased based on the cross-validation results.
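A brief leave-one-out sketch using scikit-learn; the toy data and the K-nearest-neighbor classifier are stand-ins for the actual surface features and classification model.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy 1-D surface features with well-separated classes.
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y = np.array(["FS", "FS", "FS", "SB", "SB", "SB"])

# Each surface is held out once and predicted from the remaining five.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y,
                         cv=LeaveOneOut())
accuracy = scores.mean()
```

The resulting accuracy estimates how the model would perform on unlabeled surfaces, guiding feature pruning and class merging.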
Next in
The classification model can be learned implicitly from the input labeled training surfaces, as in K-nearest neighbor classifiers (a method that classifies objects based on the closest training examples in the feature space). This can be used to perform step 405. The K-nearest neighbor classifier assigns segment x to a particular class based on a majority vote among the classes of the k nearest training segments to segment x. Learning a classification model implicitly means that no optimization or training takes place per se. An implicitly learned classifier is defined by the set of training data points and an algorithm that uses those data to classify a new datum. Hence, this implicit learning is simply the process of gathering the training data from which classifications are made. For example, a K-nearest neighbor classifier is implicit because there is nothing to learn as such; using the classifier involves only measuring the distance of the datum under evaluation to all of the training data and assigning the majority label of the K nearest neighbors. In contrast, learning, say, a neural network is explicit because the training data are actually used to learn the parameters of the neural net that define the classifier.
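A from-scratch sketch of such an implicitly learned classifier; note that "training" is nothing more than storing the labeled segments, and all of the work happens at classification time. The feature values are illustrative.

```python
from collections import Counter
import math

# "Training" is just storing labeled segment feature vectors.
training = [([0.0, 0.1], "FS"), ([0.1, 0.0], "FS"),
            ([1.0, 0.9], "SB"), ([0.9, 1.0], "SB")]

def knn_classify(x, training, k=3):
    """Majority vote among the k nearest stored training segments."""
    nearest = sorted(training, key=lambda tv: math.dist(x, tv[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

There are no learned parameters anywhere: replacing the stored examples replaces the classifier.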
Table 2 shows example label patterns for a sequence boundary and a flooding surface in a particular dataset. Somewhat different patterns would be recorded for other surfaces.
Classification and Confidence Calculations for Unclassified Surface(s)
Next, as is shown in
Next, as shown in
Next, as shown in
Next, as shown in
Next, as in
Another classification approach is to use a generalization of Bayesian probability theory called the Transferable Belief Model (TBM), which is used to represent and combine measures of belief in evidence bearing on a hypothesis, which in this case can be the hypothesis that a segment belongs to a segment type, or that an unlabeled surface belongs to a surface type (Smets and Kennes, 1994; Smets and Ristic, 2004). This model is more flexible than classic Bayesian probability theory when knowledge is incomplete (missing attributes, segments) and when dealing with uncertainty, ignorance, and conflicting evidence (SBs and FSs can both have conformable segments). The reported confidence is the TBM combined similarity value. Using this method, a proximity value is calculated to compare the seismic attributes from a test segment to the seismic attributes of the training-set segments. This can be done by evaluating the vector representing the test segment seismic attribute and the vectors representing each training-set segment using the dynamic time warping (DTW) algorithm. The set of distance values and their similarity values (1 - distance) from the DTW calculation are degrees of support for the simple support functions for the hypothesis that a feature belongs to a labeled segment. These degrees of support for each attribute are combined into simple, separable support functions representing the degree of belief that a test segment belongs to a labeled segment class, using the TBM. The beliefs that a training segment belongs to each training-segment class are transformed into pignistic probabilities (Smets and Kennes, 1994), and the test segment is labeled with the training-segment class that has the highest pignistic probability value.
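A rough sketch of this matching step: a classic dynamic-programming DTW distance, with similarities combined as separable simple support functions (1 minus the product of the individual doubts). Treating 1 - distance as a similarity in [0, 1] is a simplifying assumption for illustration.

```python
def dtw_distance(s, t):
    """Classic O(len(s) * len(t)) dynamic-programming DTW distance."""
    inf = float("inf")
    D = [[inf] * (len(t) + 1) for _ in range(len(s) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

def combined_support(similarities):
    """Combine degrees of support for one hypothesis as separable
    simple support functions: 1 - prod(1 - s_i)."""
    remaining_doubt = 1.0
    for s in similarities:
        remaining_doubt *= (1.0 - s)
    return 1.0 - remaining_doubt

# Toy attribute traces for a test segment and two labeled training segments.
test_seg = [0.1, 0.2, 0.3]
train_segs = [[0.1, 0.2, 0.3], [0.1, 0.25, 0.3]]
sims = [max(0.0, 1.0 - dtw_distance(test_seg, seg)) for seg in train_segs]
belief = combined_support(sims)  # degree of belief in the segment class
```

Each additional piece of supporting evidence can only increase the combined belief, which matches the behavior of separable support functions in the TBM.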
If the classifier provides a measure of belief or probability, confidence thresholds can be set for segments, and if the confidence level of a segment is below the threshold, the segment can be classified as not otherwise specified (NOS). If the measures are normalized, then the confidence level can be required to be significantly above the “random classification” baseline. (The “random classification” baseline is 1/N, where N is the number of classes; in the case of FS vs. SB classification, it is 0.5.) The user can then specify that the confidence level must be significantly above the baseline in an absolute sense, say greater than 0.7, or in a relative sense, say 20% above the baseline, i.e., 0.6.
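Sketched as code, with the FS-vs-SB baseline of 0.5 and the 20% relative margin mentioned above (the helper and probability values are illustrative):

```python
def label_with_threshold(probs, cutoff):
    """Return the best class, or NOS when confidence is below the cutoff."""
    best = max(probs, key=probs.get)
    return best if probs[best] >= cutoff else "NOS"

baseline = 1.0 / 2          # random-classification baseline for FS vs. SB
cutoff = baseline * 1.2     # 20% above the baseline, i.e., 0.6

confident_label = label_with_threshold({"SB": 0.65, "FS": 0.35}, cutoff)
uncertain_label = label_with_threshold({"SB": 0.55, "FS": 0.45}, cutoff)
```

Segments that clear the cutoff keep their class; those near the random baseline fall into the NOS bucket rather than receiving an unreliable label.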
Next, as shown in
Next, as shown in
Concurrent with the proximity calculation, a conditional probability value is calculated that defines the likelihood that a given training-set segment class is found within a given surface class, SB or FS. This is calculated from the number of times a segment type occurs and the number of times a surface type occurs within the training set. The probability value that a segment belongs to a surface class, SB or FS, is assigned to each labeled test segment. The belief that the test surface belongs to a surface class is calculated, using the Dempster-Shafer combination rule, from the combined set of segment probability values assigned to the test surface. The beliefs that each segment of the test surface belongs to a particular surface class are transformed into pignistic probabilities [Smets and Kennes, 1994]. The pignistic probabilities are used to compute a probability that the unlabeled surface belongs to either the SB or FS surface class. The surface class with the highest probability is selected and assigned to the test surface. The classifier code is easily modifiable to also output the belief that the surface is not in the set of surfaces.
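A toy two-hypothesis version of the Dempster-Shafer combination and pignistic transform described above, over the frame {SB, FS}; the mass values are invented for illustration, and the pignistic transform splits any mass on the full set {SB, FS} (ignorance) equally between the singletons.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: intersect focal sets, renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

def pignistic(mass):
    """Spread each focal set's mass equally over the singletons it contains."""
    bet = {}
    for hyp, v in mass.items():
        for element in hyp:
            bet[element] = bet.get(element, 0.0) + v / len(hyp)
    return bet

SB, FS = frozenset(["SB"]), frozenset(["FS"])
BOTH = SB | FS
m1 = {SB: 0.6, BOTH: 0.4}            # evidence from one segment
m2 = {SB: 0.5, FS: 0.2, BOTH: 0.3}   # evidence from another segment
bet = pignistic(dempster_combine(m1, m2))
surface_class = max(bet, key=bet.get)
```

The conflicting SB-versus-FS evidence (mass 0.12) is discarded by renormalization, and the surviving pignistic probabilities sum to one.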
The following example describes implementing the present technological advancement on a 2D seismic line. Here, the process starts with three labeled sequence boundaries (SBs) and eight labeled flooding surfaces (FSs): four transgressive flooding surfaces (TFSs) and four maximum flooding surfaces (MFSs), all adapted from Abreu et al., 2010 (
In accordance with the above discussion, a training set was established to classify unclassified surfaces. As in
Next, as in
Next, as in
Next, as in
Next, the classifier was tested using “leave-one-out” validation. This method involved sequentially removing each surface, one at a time, from the training set to use as the unclassified surface, having the remaining surfaces form the training set, and evaluating the results. Each iteration followed the same steps used in classifying an unclassified surface, as illustrated in
Next, as in
Next, as in
Computer Implementation
The computer system 2400 may also include computer components such as nontransitory, computer-readable media. Examples of computer-readable media include a random access memory (RAM) 2406, which may be SRAM, DRAM, SDRAM, or the like. The computer system 2400 may also include additional non-transitory, computer-readable media such as a read-only memory (ROM) 2408, which may be PROM, EPROM, EEPROM, or the like. RAM 2406 and ROM 2408 hold user and system data and programs, as is known in the art. The computer system 2400 may also include an input/output (I/O) adapter 2410, a communications adapter 2422, a user interface adapter 2424, and a display adapter 2418.
The I/O adapter 2410 may connect additional non-transitory, computer-readable media such as a storage device(s) 2412, including, for example, a hard drive, a compact disc (CD) drive, a floppy disk drive, a tape drive, and the like to computer system 2400. The storage device(s) may be used when RAM 2406 is insufficient for the memory requirements associated with storing data for operations of the present techniques. The data storage of the computer system 2400 may be used for storing information and/or other data used or generated as disclosed herein. For example, storage device(s) 2412 may be used to store configuration information or additional plug-ins in accordance with the present techniques. Further, user interface adapter 2424 couples user input devices, such as a keyboard 2428, a pointing device 2426 and/or output devices to the computer system 2400. The display adapter 2418 is driven by the CPU 2402 to control the display on a display device 2420 to, for example, present information to the user regarding available plug-ins.
The architecture of system 2400 may be varied as desired. For example, any suitable processor-based device may be used, including without limitation personal computers, laptop computers, computer workstations, and multi-processor servers. Moreover, the present technological advancement may be implemented on application specific integrated circuits (ASICs) or very large scale integrated (VLSI) circuits. In fact, persons of ordinary skill in the art may use any number of suitable hardware structures capable of executing logical operations according to the present technological advancement. The term “processing circuit” encompasses a hardware processor (such as those found in the hardware devices noted above), ASICs, and VLSI circuits. Input data to the computer system 2400 may include various plug-ins and library files. Input data may additionally include configuration information.
The foregoing application is directed to particular example embodiments of the present technological advancement. It will be apparent, however, to one skilled in the art, that many modifications and variations to the embodiments described herein are possible. All such modifications and variations are intended to be within the scope of the present invention, as defined in the appended claims. As will be obvious to the reader who works in the technical field, the present technological advancement is intended to be fully automated, or almost fully automated, using a computer programmed in accordance with the disclosures herein.
The following documents are hereby incorporated by reference in their entirety:
This application claims the benefit of U.S. Provisional Patent Application 62/152,453 filed Apr. 24, 2015 entitled SEISMIC STRATIGRAPHIC SURFACE CLASSIFICATION, the entirety of which is incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
4916615 | Chittimeni | Apr 1990 | A |
4992995 | Favret | Feb 1991 | A |
5047991 | Hsu | Sep 1991 | A |
5265192 | McCormack | Nov 1993 | A |
5274714 | Hutcheson et al. | Dec 1993 | A |
5416750 | Doyen et al. | May 1995 | A |
5444619 | Hoskins et al. | Aug 1995 | A |
5465308 | Hutcherson et al. | Nov 1995 | A |
5539704 | Doyen et al. | Jul 1996 | A |
5586082 | Anderson et al. | Dec 1996 | A |
5677893 | de Hoop et al. | Oct 1997 | A |
5852588 | de Hoop et al. | Dec 1998 | A |
5940777 | Keskes | Aug 1999 | A |
6052650 | Assa et al. | Apr 2000 | A |
6226596 | Gao | May 2001 | B1 |
6236942 | Bush | May 2001 | B1 |
6295504 | Ye et al. | Sep 2001 | B1 |
6363327 | Wallet et al. | Mar 2002 | B1 |
6411903 | Bush | Jun 2002 | B2 |
6438493 | West et al. | Aug 2002 | B1 |
6466923 | Young | Oct 2002 | B1 |
6473696 | Onyia et al. | Oct 2002 | B1 |
6526353 | Wallet et al. | Feb 2003 | B2 |
6560540 | West et al. | May 2003 | B2 |
6574565 | Bush | Jun 2003 | B1 |
6574566 | Grismore et al. | Jun 2003 | B2 |
6618678 | Van Riel | Sep 2003 | B1 |
6625541 | Shenoy et al. | Sep 2003 | B1 |
6725163 | Trappe et al. | Apr 2004 | B1 |
6735526 | Meldahl et al. | May 2004 | B1 |
6751558 | Huffman et al. | Jun 2004 | B2 |
6754380 | Suzuki et al. | Jun 2004 | B1 |
6754589 | Bush | Jun 2004 | B2 |
6757614 | Pepper et al. | Jun 2004 | B2 |
6771800 | Keskes et al. | Aug 2004 | B2 |
6801858 | Nivlet et al. | Oct 2004 | B2 |
6804609 | Brumbaugh | Oct 2004 | B1 |
6847895 | Nivlet et al. | Jan 2005 | B2 |
6882997 | Zhang et al. | Apr 2005 | B1 |
6941228 | Toelle | Sep 2005 | B2 |
6950786 | Sonneland et al. | Sep 2005 | B1 |
6957146 | Taner et al. | Oct 2005 | B1 |
6970397 | Castagna et al. | Nov 2005 | B2 |
6977866 | Huffman et al. | Dec 2005 | B2 |
6988038 | Trappe et al. | Jan 2006 | B2 |
7006085 | Acosta et al. | Feb 2006 | B1 |
7053131 | Ko et al. | May 2006 | B2 |
7092824 | Favret et al. | Aug 2006 | B2 |
7098908 | Acosta et al. | Aug 2006 | B2 |
7162463 | Wentland et al. | Jan 2007 | B1 |
7184991 | Wentland et al. | Feb 2007 | B1 |
7188092 | Wentland et al. | Mar 2007 | B2 |
7203342 | Pedersen | Apr 2007 | B2 |
7206782 | Padgett | Apr 2007 | B1 |
7222023 | Laurenet et al. | May 2007 | B2 |
7243029 | Lichman et al. | Jul 2007 | B2 |
7248258 | Acosta et al. | Jul 2007 | B2 |
7248539 | Borgos et al. | Jul 2007 | B2 |
7266041 | Padgett | Sep 2007 | B1 |
7295706 | Wentland et al. | Nov 2007 | B2 |
7295930 | Dulac et al. | Nov 2007 | B2 |
7308139 | Wentland et al. | Dec 2007 | B2 |
7453766 | Padgett | Nov 2008 | B1 |
7453767 | Padgett | Nov 2008 | B1 |
7463552 | Padgett | Dec 2008 | B1 |
7502026 | Acosta et al. | Mar 2009 | B2 |
7658202 | Wiley et al. | Feb 2010 | B2 |
7697373 | Padgett | Apr 2010 | B1 |
7881501 | Pinnegar et al. | Feb 2011 | B2 |
8010294 | Dorn et al. | Aug 2011 | B2 |
8027517 | Gauthier et al. | Sep 2011 | B2 |
8055026 | Pedersen | Nov 2011 | B2 |
8065088 | Dorn et al. | Nov 2011 | B2 |
8121969 | Chan et al. | Feb 2012 | B2 |
8128030 | Dannenberg | Mar 2012 | B2 |
8213261 | Imhof et al. | Jul 2012 | B2 |
8219322 | Monsen et al. | Jul 2012 | B2 |
8326542 | Chevion et al. | Dec 2012 | B2 |
8346695 | Pepper et al. | Jan 2013 | B2 |
8358561 | Kelly et al. | Jan 2013 | B2 |
8363959 | Boiman et al. | Jan 2013 | B2 |
8380435 | Kumaran et al. | Feb 2013 | B2 |
8385603 | Beucher et al. | Feb 2013 | B2 |
8447524 | Chen et al. | May 2013 | B2 |
8447525 | Pepper et al. | May 2013 | B2 |
8463551 | Aarre | Jun 2013 | B2 |
8515678 | Pepper et al. | Aug 2013 | B2 |
8625885 | Brinson et al. | Jan 2014 | B2 |
8792301 | James | Jul 2014 | B2 |
8803878 | Andersen et al. | Aug 2014 | B2 |
8843353 | Posamentier et al. | Sep 2014 | B2 |
8849640 | Holl et al. | Sep 2014 | B2 |
9122956 | Fink | Sep 2015 | B1 |
20050137274 | Ko et al. | Jun 2005 | A1 |
20050171700 | Dean | Aug 2005 | A1 |
20050288863 | Workman | Dec 2005 | A1 |
20060115145 | Bishop | Jun 2006 | A1 |
20060184488 | Wentland | Aug 2006 | A1 |
20070067040 | Ferree | Mar 2007 | A1 |
20080123469 | Wibaux et al. | May 2008 | A1 |
20080212841 | Gauthier et al. | Sep 2008 | A1 |
20080270033 | Wiley et al. | Oct 2008 | A1 |
20100174489 | Bryant et al. | Jul 2010 | A1 |
20100211363 | Dorn et al. | Aug 2010 | A1 |
20100245347 | Dorn et al. | Sep 2010 | A1 |
20110307178 | Hoekstra | Dec 2011 | A1 |
20120072116 | Dorn et al. | Mar 2012 | A1 |
20120090001 | Yen | Apr 2012 | A1 |
20120117124 | Bruaset et al. | May 2012 | A1 |
20120150447 | Van Hoek et al. | Jun 2012 | A1 |
20120195165 | Vu et al. | Aug 2012 | A1 |
20120197530 | Posamentier et al. | Aug 2012 | A1 |
20120197531 | Posamentier et al. | Aug 2012 | A1 |
20120197532 | Posamentier et al. | Aug 2012 | A1 |
20120197613 | Vu et al. | Aug 2012 | A1 |
20120257796 | Henderson et al. | Oct 2012 | A1 |
20120322037 | Raglin | Dec 2012 | A1 |
20130006591 | Pyrcz et al. | Jan 2013 | A1 |
20130064040 | Imhof et al. | Mar 2013 | A1 |
20130080066 | Al-Dossary et al. | Mar 2013 | A1 |
20130138350 | Thachaparambil et al. | May 2013 | A1 |
20130158877 | Bakke et al. | Jun 2013 | A1 |
20130338927 | Kumaran | Dec 2013 | A1 |
20140081613 | Dinnusse et al. | Mar 2014 | A1 |
20140188769 | Lim et al. | Jul 2014 | A1 |
20150178631 | Thomas et al. | Jun 2015 | A1 |
Number | Date | Country |
---|---|---|
9964896 | Dec 1999 | WO |
2005017564 | Feb 2005 | WO |
2012090001 | Jul 2012 | WO |
2013048798 | Apr 2013 | WO |
Entry |
---|
Borgos, H.G., et al. (2005) “Automated Structural Interpretation Through Classification of Seismic Horizons”, Mathematical Methods and Modelling in Hydrocarbon Exploration and Production, pp. 89-106. |
Carrillat, A., et al. (2008) “Integrated Geological and Geophysical Analysis by Hierarchical Classification: Combining Seismic Stratigraphic and AVO Attributes”, Petroleum Geoscience, vol. 14, No. 4, pp. 339-354. |
Nguema, E.P.N., et al. (2008) "Model-based Classification With Dissimilarities: A Maximum Likelihood Approach", Pattern Analysis and Applications, vol. 11, No. 3-4, pp. 281-298. |
Marroquin, I.D., et al. (2009) "A Visual Data-Mining Methodology for Seismic Facies Analysis: Part 1—Testing and Comparison With Other Unsupervised Clustering Methods", Geophysics, vol. 74, No. 1, pp. 1-11. |
Abreu, V., (1998), “Geologic Evolution of Conjugate Volcanic Passive Margins: Pelotas Basin (Brazil) and Offshore Namibia (Africa); Implication For Global Sea-Level Changes”, Rice University, Dept. of Earth Science, Houston, TX, Ph.D. Thesis, pp. 3-355. |
Admasu, F., et al., (2004), “Automatic Method for Correlating Horizons Across Faults In 3D Seismic Data”, IEEE Conference on Computer Vision and Pattern Recognition, 6 pgs. |
Chopra, S., et al., (2008), “Emerging and Future Trends In Seismic Attributes”, The Leading Edge, pp. 298-317. |
Coward, M.P., et al., (1999), "The Distribution of Petroleum Reserves In Basins of the South Atlantic Margins", In: Cameron, N.R., Bate, R.H. and Clure, V.S. (eds), The Oil and Gas Habitats of the South Atlantic, Geological Society, London, Special Publications, vol. 153, pp. 101-131. |
Davison, I., (1999), "Tectonics and Hydrocarbon Distribution Along the Brazilian South Atlantic Margin", In: Cameron, N.R., Bate, R.H. and Clure, V.S. (eds), The Oil and Gas Habitats of the South Atlantic, Geological Society, London, Special Publications, vol. 153, pp. 133-151. |
deGroot, P., et al., (2012), “Attributes Play Important Role In Seismic Interpretation”, E&P, vol. 85, No. 10, pp. 31, 33-34. |
Hart, B.S., (2013), "Whither Seismic Stratigraphy?", Interpretation, Society of Exploration Geophysicists and American Association of Petroleum Geologists, vol. 1, No. 1, pp. SA3-SA20. |
Monsen, et al., (2007), “Geologic-Process-Controlled Interpretation Based on 3D Wheeler Diagram Generation”, SEG 2007, EAGE 69th Conference & Exhibition—London, UK, Jun. 11-14, 2007, 5 pages. |
Mitchum, R.M., et al., (1977), "Seismic Stratigraphy and Global Changes of Sea Level, Part 6: Stratigraphic Interpretation of Seismic Reflection Patterns In Depositional Sequences", In C.E. Payton, ed., Seismic Stratigraphy—Applications to Hydrocarbon Exploration, Tulsa, OK, American Association of Petroleum Geologists Memoir 26, pp. 63-81. |
Pal, N.R., et al., (2001), “Some Classification Algorithms Integrating Dempster-Shafer Theory of Evidence With The Rank Nearest Neighbor Rules”, IEEE Transactions On Systems, Man and Cybernetics—Part A: Systems and Applications, vol. 31, No. 1, pp. 59-66. |
Number | Date | Country |
---|---|---|
20160313463 A1 | Oct 2016 | US |
Number | Date | Country |
---|---|---|
62152453 | Apr 2015 | US |