Hydrocarbon exploration and production require the generation of substantial amounts of data for measuring or monitoring aspects of downhole operations and for evaluating downhole and formation conditions. Various types of measurements, such as resistivity and gamma ray measurements for evaluating properties of a formation, force measurements for monitoring and evaluating drilling and other operations, and environmental measurements such as temperature, pressure and fluid properties, are generated and need to be delivered to users.
Well log data is commonly exchanged between companies using industry standard file formats. The two most common file formats used to exchange well log data are the Digital Log Interchange Standard (DLIS) (Petrotechnical Open Software Corporation, 1991) and the Log ASCII Standard (LAS) (Canadian Well Logging Society, 1992). This well log data is typically processed internally within a company via various internal file formats. Supporting numerous file formats presents a challenge to users who need to read and write well log data. Methodologies for storing well log data typically require multiple applications to create, read, write, and convert the various file formats received from well log data generators.
A method of processing data includes: receiving by an application acquisition data from at least one data acquisition source, the acquisition data having a first data format, the first data format including the acquisition data and business data related to the first data format; instructing a software component to access the acquisition data; and processing the acquisition data by the software component, wherein processing includes separating the acquisition data from the business data and making the acquisition data available to one or more other components via a common interface that can be implemented by the software component and the other components.
A system for processing data includes: a processor including an application configured to receive acquisition data from at least one data acquisition source, the acquisition data having a first data format, the first data format including the acquisition data and business data related to the first data format; and at least one software component configured to access and process the acquisition data, wherein processing includes separating the acquisition data from the business data and making the acquisition data available to one or more other components via a common interface that can be implemented by the software component and the other components.
The subject matter, which is regarded as the invention, is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings, wherein like elements are numbered alike, in which:
Disclosed are methods and apparatuses for processing data, such as field data generated from downhole drilling, exploration, evaluation and production operations. The method includes providing an application programming interface (API) implemented by components, such as data adapters, that are configured to expose properties and methods while hiding the details of the underlying data format. A data adapter is provided for each data format and is configured to run only when that data format is in use. A respective data adapter extracts the deliverable or acquisition data from an acquisition data source and may store the acquisition data in a generic form (e.g., as streams). The data adapter is responsible for processing and delivering the acquisition data to a user without requiring the application program to know anything about the specific file format being used. Each data adapter or other component implements a virtual data model (VDM) that provides instructions for communicating between components and provides a virtual organization of the acquisition data. For example, the VDM associates acquisition data items (e.g., curves, tables) with various elements or objects and organizes the elements or objects such that components can access and view the data according to a common structure. If the acquisition data is to be delivered in a requested format, a respective data adapter reads the data using the VDM and converts the acquisition data into the delivered format.
The data adapters are responsible for creating an abstract interface that makes no assumption about the data format or its limitations. Applications written to this interface can work equally well with any acquisition data file formats or acquisition databases. Virtualization of such data allows for easily and efficiently delivering data having different formats (e.g., file formats and database formats) and containing different types of information in a generic format that is readily accessible to a user without extensive modification or code changes.
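By way of illustration only, the adapter abstraction may be sketched in C++ as follows; the class and method names here are illustrative placeholders and are not prescribed by any particular embodiment:

// Illustrative sketch: each data adapter hides a concrete file format
// behind a common interface, so applications written to the interface
// never touch format details.
#include <memory>
#include <string>
#include <vector>

class IVirtualDataModel {
public:
    virtual ~IVirtualDataModel() = default;
    // Enumerate the virtual elements (curves, tables, images, text).
    virtual std::vector<std::string> ElementNames() const = 0;
    // Read one element's acquisition data as raw bytes.
    virtual std::vector<char> ReadElement(const std::string& name) = 0;
};

// One adapter per format; only the adapter knows the physical layout.
class DlisAdapter : public IVirtualDataModel { /* DLIS-specific parsing */ };
class LasAdapter  : public IVirtualDataModel { /* LAS-specific parsing */ };

// The application selects an adapter by file type and thereafter uses
// nothing but the common interface.
std::unique_ptr<IVirtualDataModel> OpenProject(const std::string& path);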
In one embodiment, the data acquisition unit 22, the logging instrument 18, and/or other components of the system 10 include devices as necessary to provide for storing and/or processing data collected from the logging instrument 18 and other components of the system 10. Exemplary devices include, without limitation, at least one processor, storage, memory, input devices, output devices and the like.
The system 30 includes a processor 34, such as a microprocessor and/or CPU, in which various software and hardware components reside. A software application 35 is responsible for receiving data from the acquisition unit 22 and providing the data to a requesting entity, such as a user or client computer 32. As described herein, a "component" may refer to a hardware or software component provided for performing one or more functions. For example, a software component may be a software package, application or module, or an object or set of objects for performing a function. Different components may communicate with one another via interfaces. Methods described herein may be performed by the application or one or more delivery components 36 in conjunction with data adapters as described further below.
In the embodiment shown in
The application 35 (and/or component 36) interacts with other components, such as the data adapters 40 and storage 38, via one or more interfaces 42. A software interface is analogous to a hardware interface such as a bus or the pin assignments of an integrated circuit: it is a contract between two software modules (e.g., data adapters) that describes how they agree to communicate with each other. In addition, the interface provides privacy to each module, so that each module is free to implement the interface in any way it deems necessary, without any preconceived expectations from the modules it will be communicating with. Other attributes of the interface include a virtual data model, as well as indications of various functionalities of an associated component (e.g., transactions, indexing, paging, versioning, etc.).
For each type of acquisition data file received, a data adapter 40 is provided that can be launched when an acquisition data file having a specific file format is received or requested. Examples of file formats include the Digital Log Interchange Standard (DLIS) and Log ASCII Standard (LAS) formats. Data files having a specific format include acquisition data (e.g., measurement data or other deliverable data) as well as "business data", i.e., data that governs how the acquisition data is encoded or formatted in a file. Examples of business data include metadata, headers and identifiers.
The data adapter 40 is a component that exposes an interface to a corresponding type of data file, and is loaded when the file type it supports is opened. The data adapter 40 is configured to expose the properties and methods associated with a particular well data file and hide from the application 35 all details of the underlying data format (e.g., the physical structure of the data in the file). Because the data adapter 40 is responsible for this processing, the application 35 need only execute a minimal amount of business logic based on the file type, and from that point on the code is completely generic. An example of a data adapter is an ActiveX control. A specific data adapter (e.g., a DLIS data adapter or a LAS data adapter) is provided for each acquisition file type.
The adapter 40 implements a virtual data model (VDM) interface that hides the details of the data format that the adapter supports. Internally, the adapter 40 can convert the format into a generic secondary format (if necessary) that is fully capable of implementing the VDM interface. As such, the application 35 does not need to know which data format is being used, but instead only needs to understand the VDM interfaces.
Any adapter 40 may be created, as long as it understands the data format being used and can implement the VDM interface to make that data available to other modules or components in the system. The adapters 40 isolate the application 35 or component 36 from “rote functionality” (e.g., indexing, searching, caching, paging, filtering, transactions, etc.) so that various applications do not have to duplicate this effort.
In the first stage 61, a well logging, drilling and/or LWD operation is performed, for example, via the system 10 and the logging instrument 18. For example, resistivity or other measurement data (referred to herein as “acquisition data”) is taken over selected time windows during a duration of the operation to generate raw measurement data.
The acquisition data may include various types of data, such as curves, tables, images and text data. Groups of acquisition data files or data objects (described herein as “projects”) that are received may come from one well or one measurement operation, or come from multiple wells and/or operations.
In the second stage 62, the acquisition data is transmitted to the processor 34.
In the third stage 63, the acquisition data is read by the application 35 using a data adapter 40 that understands the acquisition data format. The data adapters 40 operate as separate components, and thus when a correction is made to one adapter, there is no chance of breaking functionality in another adapter. In the example shown in
A respective data adapter 40 (e.g., an acquisition data adapter, a DLIS adapter, or a LAS adapter) isolates the application 35 and its components from the need to understand each individual data format. Instead, these components acquire the information in these data formats using a generic or common set of interfaces referred to as the Virtual Data Model (VDM) interface, an example of which is shown as API 42.
The VDM provides instructions for communicating between each component and provides a virtual organization of the acquisition data by associating data items (e.g., curves, tables, images and text data) with virtual data elements and organizing the elements. The VDM provides an appearance of a structure of acquisition data that the application sees. For example, the VDM organizes elements based on parameters such as a project, well, measurement type, domain and others.
In one embodiment, the interface is based on a generic programming construct referred to as “streams.” Streams are the most basic element in a project. Every item stored in a project becomes a stream; data items such as curves, tables, images and text data all exist (after processing by the respective data adapter 40) as streams. “Streams” as described herein may refer to a stream of data, characters or bytes. Streams can grow simply by appending data to them, such as would be required by a curve that is receiving new data from a real-time source. Streams may be generated and stored by the data adapter 40 via a separate stream interface such as an IStream interface. Data adapters 40 responsible for reading the streams and converting the streams into a client file format may do so via the stream interface.
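For example, appending newly acquired samples to a growing stream through the standard COM IStream interface may be sketched as follows (the stream pointer would be obtained from the data adapter; error handling is abbreviated):

// Sketch: appending real-time curve samples to a stream via IStream.
#include <windows.h>
#include <objidl.h>   // IStream

HRESULT AppendSamples(IStream* stream, const double* samples, ULONG count)
{
    // Seek to the end of the stream so that new data is appended and
    // the stream simply grows.
    LARGE_INTEGER zero = {};
    HRESULT hr = stream->Seek(zero, STREAM_SEEK_END, nullptr);
    if (FAILED(hr)) return hr;

    // Write the raw sample bytes.
    ULONG written = 0;
    return stream->Write(samples, count * sizeof(double), &written);
}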
In the fourth stage 64, in one embodiment, the data adapter 40 implements the virtual data model. The data model describes various components and storages, as well as acquisition data organization.
For example, as part of the conversion or extraction of the acquisition data from the received format, the data adapter 40 organizes the various data streams generated from a project. In its simplest form, a project is a collection of streams, where each stream is a collection of bytes. In one embodiment, the streams (or elements representing the streams) are collected in "storage objects" that represent data sources. Each storage object also has a collection of "groups", each of which is a collection of streams that share a common property.
An exemplary data model by which streams are organized in the storage 38 is shown in
In one embodiment, each data acquisition item (such as a curve or curve segment) of a project, which has been opened or read by the data adapter 40, is represented by one or more stream objects 44 (also referred to simply as a “stream” 44). Each stream 44 may be stored in a project storage object 46. Streams that have common attributes (e.g., measurement type, well number, domain, etc.) may be stored together (within the storage object 46) in a group object 48 or group 48. Groups are collections of related streams, such as tables that define the presentation on a log, and curves that are sampled together and have the same number of levels. Unrelated streams may be stored together, for example, in a stream collection 50.
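This organization may be sketched with illustrative types as follows (the type names are placeholders for the storage, group, and stream objects described above):

// Illustrative sketch of the storage model: a project storage holds
// groups of related streams plus a collection of unrelated streams.
#include <map>
#include <string>
#include <vector>

struct Stream {                  // one curve, table, image or text item
    std::string name;
    std::vector<char> bytes;     // a stream is simply a collection of bytes
};

struct Group {                   // streams sharing a common property,
    std::string commonProperty;  // e.g., sample rate, well number, domain
    std::vector<Stream> streams;
};

struct ProjectStorage {          // represents one data source
    std::map<std::string, Group> groups;
    std::vector<Stream> streamCollection;   // unrelated streams
};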
The stream interface may be configured to simply access the streams (e.g., an IStream interface), or may also include additional functionality for processing, accessing or analyzing the streams. In the case of the acquisition data being well data, exemplary interfaces used by the adapters 40 to communicate with the various objects include “IWellGroup” (a generic group which supports groups of blobs, images, and other types that do not have their own specialized stream interfaces), “ICurveGroup” (groups of curves), and “ITableGroup” (groups of tables).
Other exemplary interfaces include an “IWellStream” interface that inherits from IStream and provides some convenience methods and properties over the basic IStream interface. An “ICurveStream” interface inherits from IWellStream and provides convenience methods and properties that are appropriate for a curve such as the number of levels, data type, number of columns, seek to level, etc. An “IIndexStream” interface inherits from ICurveStream and provides convenience methods and properties to support the sorting and searching of an index curve (for indexing data by, e.g., depth, time or true vertical depth). An “IWellStreamTable” interface inherits from IWellStream and provides convenience methods and properties for supporting tables. Note that the above specialization examples of IStream are conveniences; ultimately everything could be done with the basic IStream interface.
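The inheritance relationships described above may be sketched as follows; only the interface names are taken from the description, and the individual methods shown are illustrative:

// Sketch of the stream interface hierarchy (method sets abbreviated).
#include <windows.h>
#include <objidl.h>   // IStream

struct IWellStream : public IStream {
    // convenience methods and properties over the basic IStream ...
};

struct ICurveStream : public IWellStream {
    virtual ULONG NumberOfLevels() = 0;        // illustrative
    virtual ULONG NumberOfColumns() = 0;       // illustrative
    virtual HRESULT SeekToLevel(ULONG level) = 0;
};

struct IIndexStream : public ICurveStream {
    // sorting/searching of an index curve (depth, time or TVD) ...
};

struct IWellStreamTable : public IWellStream {
    // convenience methods and properties for tables ...
};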
In the fifth stage 65, upon receiving a request for well data, the application 35 or component 36 instantiates the relevant data adapter, which retrieves streams associated with VDM elements (e.g., groups) corresponding to the type of requested data (e.g., different curves, time frames, domains, etc.). In one embodiment, the requester (e.g., a user) requests data having selected attributes. The relevant data adapter converts the streams in the corresponding group and delivers them in the requested format.
The method 60 can be applied to copy acquisition data from a first format to a second format, e.g., to copy a LAS project to a DLIS project. Although this example is described in conjunction with LAS and DLIS formats, the method may be utilized for any data file formats. In this example, the application 35 has an open LAS project and is requested to deliver acquisition data for that project in DLIS format.
The application 35 launches a DLIS adapter 40 (e.g., an ActiveX control entitled “DLIS.OCX”) and asks the adapter 40 to create a new DLIS file. The DLIS adapter 40 creates a DLIS file and releases control to the application. The application 35 then directs a LAS adapter 40 to enumerate all of its source streams (e.g., step through a catalog and/or read all of the data streams) into the LAS adapter (e.g., into a cache or storage for the adapter). The application also directs the DLIS adapter to create a corresponding store or file for the data streams (e.g., build a cache for the DLIS adapter).
The LAS adapter 40 then copies all of its streams to the corresponding file or cache in the DLIS adapter 40. The copy can be performed, for example, via a “CopyTo” command supported in the IStream interface, where the source stream copies itself to a given destination which is passed in as an IStream parameter. The application 35 asks the DLIS adapter to perform a save to commit the changes to the DLIS file. The DLIS adapter 40 recognizes that the DLIS file is not up-to-date with the cache and writes all of its streams into the DLIS file. The DLIS adapter 40 then returns control to the application. The application 35 asks the DLIS adapter 40 to close the project. From the application's point of view, it is just reading streams from one adapter and writing streams to another adapter.
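The per-stream copy may be sketched as follows using the IStream::CopyTo method mentioned above; the surrounding enumeration of streams is adapter-specific and omitted:

// Sketch: copy one source stream in its entirety to a destination stream.
#include <windows.h>
#include <objidl.h>

HRESULT CopyStream(IStream* source, IStream* destination)
{
    // CopyTo reads from the current seek position, so rewind the source.
    LARGE_INTEGER zero = {};
    HRESULT hr = source->Seek(zero, STREAM_SEEK_SET, nullptr);
    if (FAILED(hr)) return hr;

    // Requesting more bytes than remain copies to the end of the stream.
    ULARGE_INTEGER size;
    size.QuadPart = ~0ULL;
    return source->CopyTo(destination, size, nullptr, nullptr);
}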
Referring to
Exemplary storages are shown in
Two primary project stores 52 are shown in
In one example, the primary store 52 labeled “.dlis” includes data files in a DLIS file format. The column labeled “Cache” shows the size of the secondary store 54 for the .dlis project, and the “Size” column displays the size of each primary store 52. The secondary store 54 includes streams generated by a DLIS data adapter 40 after processing the .dlis project.
The cache 54 for the DLIS file 52 is in the XWDF format and is thus created as a separate file folder. To create the cache 54, a DLIS data adapter 40 names the cache the same as the DLIS file and adds an XWDF extension. When the cache 54 is visible, it is possible to browse into the cache 54 and see all of the individual streams of the project file.
If, however, the application 35 opens a LAS file that already has a secondary store 54, the secondary store 54 will speed up the opening of the LAS file because the text file does not need to be parsed again. For example, the application 35 launches the LAS adapter 40 (e.g., LAS.OCX) and asks the LAS adapter 40 to open the file. The LAS adapter 40 detects that the cache (secondary store 54) is present and does not need to parse the text file, and then returns control to the application 35. All requests are now processed by an adapter manipulating the streams of the secondary cache.
The use of secondary stores enables additional functionality that lets users revert changes to a project without the need to copy files to a backup every time the project is opened. For example, when a typical word processor document is opened, the document is first copied to a backup and all of the editing is done on the backup file. If the user decides the editing has taken a wrong turn, they can simply close the file without saving it, and the original data remains untouched.
Generally, when working with well data projects, which can be large (e.g., two gigabytes), copying a project every time it is opened may be unreasonable. Having a secondary store associated with a project offers a simple solution to this dilemma because the project is not monolithic, but is instead a collection of individual streams. For example, if a user opens a project, deletes a couple of curves, filters a couple of other curves, and then realizes that he is working in the wrong curve group, the user can recover simply by not saving the changes.
In one embodiment, when a project is closed, the data adapter 40 looks for streams in the primary store 52 that are not in the secondary store 54 (this would not happen if the user had committed the changes by doing a save), and the missing secondary streams are reconstructed from the primary store 52. The data adapter 40 also looks for stream modifications by checking for changes in each stream's modification time (again, this would not happen if the user had committed the changes by doing a save), and the modified secondary streams are replaced by the streams in the primary store 52.
Committing the transaction by doing a save reverses the behavior. The data adapter 40 looks for streams in the primary store 52 that are not in the secondary store 54, and then deletes the same streams in the primary store 52. The data adapter 40 also looks for stream modifications in the secondary store by checking for changes in the stream's modification time, and then the modified secondary streams are copied into the primary store 52. This simple form of transactions avoids having to make a backup of the project every time it is opened.
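The close/commit behavior may be sketched as follows; the store and stream types are illustrative stand-ins for the primary store 52 and secondary store 54:

// Sketch of revert-on-close versus commit-on-save (types illustrative).
#include <map>
#include <string>

struct StreamRecord { std::string bytes; long modTime; };
using Store = std::map<std::string, StreamRecord>;   // name -> stream

// Close without save: the secondary store is restored from the primary.
void RevertOnClose(const Store& primary, Store& secondary)
{
    for (const auto& [name, rec] : primary) {
        auto it = secondary.find(name);
        if (it == secondary.end() ||              // stream was deleted, or
            it->second.modTime != rec.modTime)    // stream was modified
            secondary[name] = rec;                // reconstruct/replace it
    }
}

// Save: the primary store is brought in line with the secondary.
void CommitOnSave(Store& primary, const Store& secondary)
{
    for (auto it = primary.begin(); it != primary.end(); ) {
        if (!secondary.count(it->first))
            it = primary.erase(it);               // deleted by the user
        else
            ++it;
    }
    for (const auto& [name, rec] : secondary) {
        auto it = primary.find(name);
        if (it == primary.end() || it->second.modTime != rec.modTime)
            primary[name] = rec;                  // copy modified streams down
    }
}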
The adapters 40, as part of the virtual data model, can group the streams of a project into different groups to allow easy transitioning between multiple indices (e.g., measured depth, date/time, and true vertical depth). The design should not depend on “regulated” curve data (fixed spaced data, or data that has been converted to regularly spaced data in some domain). Grouping the project streams into relevant groups relieves the application 35 of indexing and allows for quick retrieval of data having requested parameters.
One type of group that is useful for indexing and presenting well data is a “curve group.” The concept of a curve group is similar to a DLIS frame; curves in the same group share a common sample rate, have the same number of levels, and are depth/time aligned. In one embodiment, if the model supports versioning, the curves in a group also share the same version.
In one embodiment, each curve group 48 may include one or more “indexing curves” that are used to sort data based on the current “presentation mode”. Exemplary indexing curves allow curves to be presented according to selected modes, such as “Bit Measured Depth”, “Time” and “True Vertical Depth” (TVD). For example, the curves shown in
In one embodiment, the groups 48 are organized and indexed according to the VDM prior to translating into a specific file format. In this way, the original acquisition data is kept intact until a deliverable (acquisition data in a requested format) is generated for a customer, at which point the data is regulated to files having the requested format (e.g., LAS or DLIS files). An advantage of being able to use unregulated data is that the logs are as close to the original data supplied by the acquisition system as is possible.
Dividing the project up into curve groups that share a common sample rate provides the ability to work with unregulated data and greatly reduces the time required to switch between presentation modes. There is an added advantage of being able to do elastic depth shifts without having to resample the data; data points in the index curve are all that needs to be changed.
In one embodiment, data adapters 40 are responsible for indexing, thus isolating the application 35 from rote functionality. As discussed above, indexing allows for quick transitions into multiple domains (e.g., depth, time, and TVD). The indexing functionality of the adapters 40 also provides additional capabilities, such as rapid searching for the level associated with a given index, sequential access to data from a pair of indices, filtering based on direction of data, and filtering out invalid indices (e.g., null data, not a number, or not finite).
In one embodiment, each data adapter 40 is also responsible for supplying sequential data. This is particularly useful when working with unregulated data, i.e., data that has not been converted into regularly spaced data, which would otherwise typically require an application to search for the next level of data being delivered or presented according to some domain and/or filter.
For example, as the user scrolls through a well log, the application 35 needs to find all of the points from just above the top of the screen to just below the bottom of the screen. There can be thousands of levels visible on the screen for each curve being presented, and searching for each of those points would greatly slow the speed at which a presentation could be scrolled. The data adapter 40 addresses this by sorting the streams into an array of "sorted valid" levels, which allows the application 35 to step through the levels sequentially, similar to what happens when working with regulated data. The levels are sorted and may also be "validated" by removing levels that are considered invalid, e.g., null data, data that is not a number or not finite, and data that is part of invalid transactions.
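With such an array in place, locating the visible window reduces to two binary searches, as sketched below (illustrative):

// Sketch: find the [first, last) range of levels whose index values fall
// within the visible depth window, given a sorted, validated index curve.
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

std::pair<std::size_t, std::size_t>
VisibleLevels(const std::vector<double>& sortedValidIndex,
              double topDepth, double bottomDepth)
{
    auto first = std::lower_bound(sortedValidIndex.begin(),
                                  sortedValidIndex.end(), topDepth);
    auto last  = std::upper_bound(first, sortedValidIndex.end(), bottomDepth);
    return { static_cast<std::size_t>(first - sortedValidIndex.begin()),
             static_cast<std::size_t>(last  - sortedValidIndex.begin()) };
}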
The data adapter 40 can thus present and/or deliver data streams in a sequential fashion by organizing or indexing streams according to the domain associated with the data and the types of data. For example, a user would generally not want to see a presentation that includes curves displaying all acquired data and curves in all domains. Unregulated data allows for such presentations; however, a user can specify certain presentation modes. For example, a user can specify a "Presentation Mode" (e.g., depth, time, or TVD) and a "Plot Mode" (e.g., All Data, Drilling Data, or Back Plot Data).
For example, if the Plot Mode is changed from All Data to Drilling Data, curves can be displayed that include only the drilling data and exclude other data. In this case all of the measurements that were made while not drilling have been discarded by the indexer. Having these responsibilities off-loaded to data adapters means that all of the applications that use these adapters do not end up duplicating this effort.
Another functionality of the data adapters 40, in one embodiment, is versioning. A common scenario with well log data is to process the data in some manner, such as depth shifting, while preserving the original data. Without versioning, an application 35 can do one of two things: output the new (processed) data to another project file, or output the new data to the same project using a different name. Embodiments described herein allow all of the data to be kept in the same project file, as renaming curves is generally not desirable (e.g., presentations are linked to a curve's name and will be broken if the curve is renamed). For example, a user can specify versioning, which causes the data adapter 40 to copy the relevant streams to a different version of the streams so that a curve can be edited without modifying the initial data. By supporting versioning, the processed data can keep the original name and reside in the same project file.
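The copy-on-edit behavior may be sketched as follows (the version numbering scheme shown is an assumption for illustration; the description above specifies only that the relevant streams are copied to a different version):

// Sketch: versioning a curve by copying it before it is edited, so the
// original data is preserved under the same name in the same project.
#include <map>
#include <vector>

using Curve = std::vector<double>;
// Version number -> curve data; version 0 is the original acquisition
// data and is never modified. Assumes at least version 0 exists.
using VersionedCurve = std::map<int, Curve>;

Curve& BeginEdit(VersionedCurve& versions)
{
    int latest = versions.rbegin()->first;    // highest existing version
    versions[latest + 1] = versions[latest];  // copy to a new version
    return versions[latest + 1];              // edits go to the copy only
}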
In one embodiment, data adapters 40 are responsible for paging the data into memory using a “least recently used” algorithm. This avoids the need to read an entire curve into memory when trying to render it on a data display (e.g., a log).
For example, the application 35 is responsible for passing paging parameters to a data adapter 40 via an interface that specifies the number of pages in memory at one time and the number of data levels in each page. In one embodiment, these settings are made available in a generic file (e.g., an XML file) so the numbers can be optimized across all of the various data formats.
When the application 35 asks for a level of curve data, the adapter 40 checks to see if the level is in one of the pages, and if not loads the page into memory. If the maximum number of pages is already in memory, the oldest page is removed from memory first.
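A minimal sketch of such a least-recently-used page cache follows; the page-loading call is a placeholder for the adapter's format-specific read:

// Sketch: LRU paging of curve levels into memory (illustrative).
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

class PageCache {
public:
    PageCache(std::size_t maxPages, std::size_t levelsPerPage)
        : maxPages_(maxPages), levelsPerPage_(levelsPerPage) {}

    // Return the page containing 'level', loading it on a miss and
    // evicting the least recently used page when the cache is full.
    const std::vector<double>& PageFor(std::size_t level)
    {
        std::size_t pageNo = level / levelsPerPage_;
        auto it = pages_.find(pageNo);
        if (it != pages_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second.first);  // mark recent
            return it->second.second;
        }
        if (pages_.size() == maxPages_) {     // evict the oldest page
            pages_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(pageNo);
        auto& entry = pages_[pageNo];
        entry.first  = lru_.begin();
        entry.second = LoadPage(pageNo);      // format-specific read
        return entry.second;
    }

private:
    std::vector<double> LoadPage(std::size_t pageNo);  // placeholder
    std::size_t maxPages_, levelsPerPage_;
    std::list<std::size_t> lru_;              // front = most recently used
    std::unordered_map<std::size_t,
        std::pair<std::list<std::size_t>::iterator,
                  std::vector<double>>> pages_;
};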
Referring to
The Knowledge Base Bus 100 serves to keep much of the business logic in knowledge bases, and out of the application code, allowing software to become far more flexible. The embodiment shown in
Each of these knowledge bases may be implemented as a component with a well-known interface. Like the data adapters 40, there may be more than one component that can implement the interface and the one that is used depends on the environment it is used in. For example, if the application 35 is running on a computer with the acquisition system 22, the component loaded may share resources with the acquisition system 22.
The interface from each of the knowledge bases is passed into the data adapters 40 when they are loaded. An example of how they are used would be in the case of a LAS file where there is no knowledge of what the original data types are when parsing the text file. The LAS adapter 40 can use the curve dictionary 106 to create a curve stream using the preferred data type instead of having to default everything to a double precision floating point value.
In one embodiment, the business logic involved in loading the correct knowledge base 104, 106, 108 and distributing the interfaces is incorporated in a single component that exposes one control interface 110. An exemplary interface 110 is called “ICommonControls”, which is seen by other components as the Knowledge Base bus and is the only interface necessary to be passed to other components.
Referring to
Referring to
In one embodiment, components 36 of the processing system are implemented in the data bus architecture as “smart components” because they know what information they require and are smart enough to pull that information off of the buses. Exemplary smart components include delivery/display components such as a log rendering component 116, a scale rendering component 118 and a form rendering component 120.
Smart components are clients of the data bus architecture 114 and represent a role reversal from conventional architecture where the application has to understand the data needs of all of the components it hosts. In a conventional application, each component used by an application exports all of the properties it requires to do its task. The application is responsible for gathering the information and “pushing” it into the component. The problem is that with complex components, such as log rendering components, there can literally be several hundred properties necessary to drive the component. Examples of such properties include single sample curve presentation, waveform presentation, grid presentation, annotation, zonation, fills, raw curve data, and client configuration properties. There can be around 200 properties associated with curve presentation alone. Every application that uses the log component has to know how to gather this information and push it into the component.
In contrast, the data bus architecture 114 described herein only requires that the application 35 instantiate the data buses 100, 102, and the smart components 116, 118, 120 are able to pull the data off of the buses that they need.
In this design, the components still expose interfaces, but instead of pushing data into interfaces via a large set of properties they pick the data off of the appropriate bus. “Dumb components”, or components without the capability to select data from the data buses 100, 102, can be made into smart components by wrapping them in a smart component. This reduces the size of the component interface and focuses it on how the application will control it.
Every smart component that is written understands its data requirements and can be expected to be able to retrieve them from the data buses 100, 102, as opposed to every application that uses the component having to know what every component's data requirements are. This means that applications can be simpler to write and reduces the duplication of effort between applications. Instead of passing in the data in a large set of properties, the application just has to pass in the interface to the bus.
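The role reversal may be sketched as follows; the bus and component types are illustrative:

// Sketch: a "smart component" pulls its own data from the buses rather
// than having the application push hundreds of properties into it.
struct IKnowledgeBaseBus { virtual ~IKnowledgeBaseBus() = default; };
struct IDataBus          { virtual ~IDataBus() = default; };

class LogRenderingComponent {
public:
    // The application only hands over the bus interfaces ...
    void Attach(IKnowledgeBaseBus& kb, IDataBus& data)
    {
        kb_ = &kb;
        data_ = &data;
    }
    // ... and the component gathers its own curve data, presentation
    // properties and configuration from the buses when rendering.
    void Render();
private:
    IKnowledgeBaseBus* kb_ = nullptr;
    IDataBus* data_ = nullptr;
};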
The following code routine example illustrates how the system 30 can isolate the application from data formats, and direct data adapters 40 to copy one data format to another using simple code. A first portion of the code follows:
__uuidof( WellLogData ));
__uuidof( WellLogData ));
As shown in the above portion of the code, the routine begins by creating an instance of the knowledge base interface 110 (CommonControls.OCX) which is responsible for loading the Knowledge Base Bus 100 to support the data model's “data driven” design. The interface 110 is responsible for the business logic necessary to load the proper controls based on where the user is running. Primarily, the interface 110 determines if the acquisition system is available and if not loads a set of controls that do not depend on the acquisition system's services.
The WellLogData.OCX interface 112 is the control responsible for implementing the VDM and loading the correct data adapter 40 to support the given data format. When the source and destination pathnames are passed into the interface's Pathname property, the correct adapter is instantiated.
A second portion of the code follows:
The second portion shows how the code retrieves the source and destination interfaces. The source interface is used to open the source file, and the destination interface is used to create the destination file. With the source file open and the destination file created, the "CopyTo" method copies the source storage to the destination storage and, in the process, converts from one file format to the other.
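Taken together, the routine may be sketched as follows. The smart-pointer types and the Open/Create method names are assumptions made for illustration; only the CommonControls and WellLogData controls, the Pathname property, and the CopyTo method are named in the description above:

// Sketch of the copy routine (pointer types and Open/Create assumed).
// ICommonControlsPtr and IWellLogDataPtr would be the smart pointers
// generated by importing CommonControls.OCX and WellLogData.OCX.
ICommonControlsPtr commonControls;
commonControls.CreateInstance(__uuidof( CommonControls ));  // loads the Knowledge Base Bus

IWellLogDataPtr source;
source.CreateInstance(__uuidof( WellLogData ));
source->Pathname = "project.las";        // instantiates the LAS adapter

IWellLogDataPtr destination;
destination.CreateInstance(__uuidof( WellLogData ));
destination->Pathname = "project.dlis";  // instantiates the DLIS adapter

source->Open();                          // open the source file
destination->Create();                   // create the destination file
source->CopyTo(destination);             // copy and convert the storage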
The data bus architecture described herein can be used to control the creation, collection, modification and presentation of tabular data via tables governed by the knowledge bus. The concept of a table is the basis of relational databases because a table provides an ideal means of relating data: multiple properties (the columns of the table) are related by being on the same row of the table.
The use of tables as described herein illustrates a design principle referred to as “data driven design” where the business logic to create, convert, and present data is controlled by information collected from the Knowledge Base Bus 100.
For example, survey data is stored in tables that are governed by the Data Dictionary 104 and Unit Server 108 knowledge bases, which are used together to modify application behavior at run time without the need to write code or re-compile. In addition to storing customer data in tables, the presentation information used to display the data is also stored in tables.
An exemplary XML code line which controls the “Measured Depth” column of the survey data table 124 follows:
A single class is data driven by the Data Dictionary to control how the data is presented and modified by the user, which allows the application 35 to avoid writing custom code for every table definition. A simple XML change can be used to modify a table. For example, an additional parameter, e.g., a "Casing size" parameter, is added to the survey table 124 by the following XML line:
After adding the above line to the XML, the survey data properties will now include a “Casing Size” column as shown in
Embodiments of tables described herein, as they relate to the data model, have two important characteristics: they are self-describing, i.e., they have an embedded data dictionary, and they are created, accessed, and presented with the support of the Data Dictionary knowledge base 104. There are three interfaces in the data model that relate to tables. An "ICaseTable" interface is provided to access data in tables. An "IWellStreamTable" interface is inherited from the IWellStream interface to provide stream I/O to the tables. An "IDataDictionary" interface (e.g., an ActiveX control entitled "DataDictionary.OCX") is the primary interface of the Data Dictionary knowledge base 104 and is used to create, access, and present tabular data. The Data Dictionary knowledge base 104 component knows the current schema of each table, so when a new table is created, it is created with the latest schema definition. The application code uses the IDataDictionary interface by calling the ISchemaCollection::CreateTable method, which needs only a schema name as its input and returns an ICaseTable interface. If an older project that was created with an earlier schema is opened, the Data Dictionary will ask the ICaseTable interface to convert the data to the new schema. In one embodiment, the store for the Data Dictionary knowledge base 104 is an XML file that is read at run-time.
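For example, creating a survey table through the Data Dictionary may be sketched as follows (the pointer types and schema name are illustrative; the CreateTable call and its ICaseTable return are as described above):

// Sketch: creating a table with the latest schema definition via the
// Data Dictionary (pointer types and schema name are illustrative).
ICaseTablePtr surveyTable =
    schemaCollection->CreateTable("Survey");   // schema name is the only input
// If an older project is opened, the Data Dictionary asks the returned
// ICaseTable interface to convert the data to the new schema.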
More complex relationships can be created by linking tables together. For example, as shown in
An exemplary section of a catalog table 126 is shown in
A curve heading table 130 is also shown in
Generally, some of the teachings herein are reduced to an algorithm that is stored on machine-readable media. The algorithm is implemented by the computer processing system and provides operators with desired output.
In support of the teachings herein, various analysis components may be used, including digital and/or analog systems. The digital and/or analog systems may be included, for example, in the downhole electronics unit 42 or the processing unit 32. The systems may include components such as a processor, analog to digital converter, digital to analog converter, storage media, memory, input, output, communications link (wired, wireless, pulsed mud, optical or other), user interfaces, software programs, signal processors (digital or analog) and other such components (such as resistors, capacitors, inductors and others) to provide for operation and analyses of the apparatus and methods disclosed herein in any of several manners well-appreciated in the art. It is considered that these teachings may be, but need not be, implemented in conjunction with a set of computer executable instructions stored on a computer readable medium, including memory (ROMs, RAMs), optical (CD-ROMs), or magnetic (disks, hard drives), or any other type that when executed causes a computer to implement the method of the present invention. These instructions may provide for equipment operation, control, data collection and analysis and other functions deemed relevant by a system designer, owner, user or other such personnel, in addition to the functions described in this disclosure.
It will be recognized that the various components or technologies may provide certain necessary or beneficial functionality or features. Accordingly, these functions and features as may be needed in support of the appended claims and variations thereof, are recognized as being inherently included as a part of the teachings herein and a part of the invention disclosed.