Various embodiments in accordance with the present invention will be described with reference to the drawings.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form.
Embodiments of the invention relate to structured database systems, and specifically to processing point-in-time queries on such systems. In one embodiment, XML data is organized and stored as subtrees in a database. The subtrees are marked with a “birth timestamp” (similar to a “system change number”) at the time they are created and a “death timestamp” at the time they are marked for deletion. In one embodiment, multiple subtrees created by the same query may share the same birth timestamp. For both birth and death timestamps, their times may be synchronized to a clock time such as Greenwich Mean Time, or to an arbitrary time scale defined, for example, by a counter.
In various embodiments, a point-in-time query is processed by comparing a query timestamp with the birth and/or death timestamps of the subtrees. For example, according to one set of embodiments, the point-in-time query is not allowed to “see” subtrees that have a birth timestamp that is later than the query timestamp. This ensures that the query does not retrieve subtrees that did not exist in the database at the time of the query timestamp. Further, the query is not allowed to “see” subtrees that have a death timestamp earlier than the query timestamp. This ensures that the query does not retrieve subtrees that were marked for deletion at the time of the query timestamp.
Thus, in various embodiments, the birth timestamp prevents queries from accessing new subtrees before they are created (e.g., before an insert operation creating a subtree is transactionally complete), and the death timestamp prevents queries from accessing obsolete subtrees once they have been marked for deletion.
In an embodiment of the present invention, an XML document (or other structured document) is parsed into “subtrees” for efficient handling. An example of an XML document and its decomposition is described in this section, with following sections describing apparatus, methods, structures and the like that might create and store subtrees. Subtree decomposition is explained with reference to a simple example, but it should be understood that such techniques are equally applicable to more complex examples.
In a convention used for the figures of the present application, directed edges are oriented from an initial node that is higher on the page than the edge's terminal node, unless otherwise indicated. Nodes are represented by their labels, often with their delimiters. Thus, the root node in
As used herein, “subtree” refers to a set of nodes with a property that one of the nodes is a root node and all of the other nodes of the set can be reached by following edges in the orientation direction from the root node through zero or more non-root nodes to reach that other node. A subtree might contain one or more overlapping nodes that are also members of other “inner” or “lower” subtrees; nodes beyond a subtree's overlapping nodes are not generally considered to be part of that subtree. The tree of
To simplify the following description and figures, single letter labels will be used, as in
Some nodes may contain one or more attributes, which can be expressed as (name, value) pairs associated with nodes. In graph theory terms, the directed edges come in two flavors, one for a parent-child relationship between two tags or between a tag and its data node, and one for linking a tag with an attribute node representing an attribute of that tag. The latter is referred to herein as an “attribute edge”. Thus, adding an attribute (name, value) pair to an XML file would map to adding an attribute edge, an attribute node, and an attribute value node to a tree representing that XML file. A tag node can have more than one attribute edge (or zero attribute edges). Attribute nodes have exactly one descendant node, a value node, which is a leaf node and a data node, the value of which is the value from the attribute pair.
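By way of illustration only, the following Python sketch models the node kinds and attribute edges just described. The class and function names are hypothetical (they do not correspond to any reference numeral or drawing), and the representation is a minimal in-memory stand-in rather than the subtree data structures described later.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str                       # tag name, attribute name, or data value
    kind: str                        # "element", "attribute", or "value"
    children: List["Node"] = field(default_factory=list)

def add_attribute(tag: Node, name: str, value: str) -> None:
    """Adding a (name, value) attribute pair maps to adding an attribute edge
    and an attribute node, whose single descendant is a value (leaf) node."""
    attr = Node(label="@" + name, kind="attribute")        # "@" marks the attribute edge
    attr.children.append(Node(label=value, kind="value"))  # exactly one value node
    tag.children.append(attr)

# Example: a <citation id="42"> element gains an @id attribute node whose value node is "42".
citation = Node(label="citation", kind="element")
add_attribute(citation, "id", "42")
```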
In the tree diagrams used herein, attribute edges sometimes are distinguished from other edges in that the attribute name is indicated with a preceding “@”.
Note from
The pointer in a link node advantageously does not reference the other link node specifically; instead the pointer advantageously references the subtree in which the other link node can be found.
Navigation from lower link node 106 to upper link node 104 (and vice versa) is nevertheless possible. For instance, the target location of lower link node 106 can be used to obtain a data structure for subtree 100 (an example of such a data structure is described below). The data structure for subtree 100 includes all seven of the nodes shown for subtree 100 in
Using a reference scheme that connects a link node to a target subtree (rather than to a particular node within the target subtree) makes lower link node 106 insensitive to changes in subtree 100. For instance, a new node may be added to subtree 100, causing the storage location of upper link node 104 to change. Lower link node 106 need not be modified; it can still reference subtree 100 and be able to locate upper link node 104. Likewise, upper link node 104 is insensitive to changes in subtree 102 that might affect the location of lower link node 106. This increases the modularity of the subtree structure. Subtree 100 can be modified without affecting link node 106 as long as link node 104 is not deleted. (If link node 104 is deleted, then subtree 102 is likely to be deleted as well.) Similarly, subtree 102 can be modified without affecting link node 104; if subtree 102 is deleted, then link node 104 will likely be deleted as well. Handling subtree updates that affect other subtrees is described in detail in Lindblad IIIA.
It should be noted that this indirect indexing approach is reliable as long as cyclic connections between subtrees are not allowed, i.e., as long as subtree 100 has only one node that connects to subtree 102 and vice versa. Those of ordinary skill in the art will appreciate that non-circularity is an inherent feature of XML and numerous other structured document formats.
Each subtree can be stored as a data structure in a storage area (e.g., in memory or on disk), preferably in a contiguous region of the storage area.
In
As shown in
It should be noted that each link node (such as described above with reference to
In the case where link node block 1212(1) corresponds to link node 106 of
As shown in
Atom data block 1214 is shown in detail in
It will be appreciated that the data structure described herein for storing subtree data is illustrative and that variations and modifications are possible. Different fields and/or field names may be used, and not all of the data shown herein is required. The particular coding schemes (e.g., unary coding, atom coding) described herein need not be used; different coding schemes or unencoded data may be stored. The arrangement of data into blocks may also be modified without restriction, provided that it is possible to determine which nodes are associated with a particular subtree and to navigate hierarchically between subtrees. Further, as described below, subtree data can be found in scratch space, in memory and on disk, and implementation details of the subtree data structure, including the atom data substructure, may vary within the same embodiment, depending on whether an in-scratch, in-memory, or on-disk subtree is being provided.
According to one embodiment of the invention, a computer database management system is provided that parses XML documents into subtree data structures (e.g., similar to the data structure described above), and updates the subtree data structures as document data is updated. The subtree data structures may also be used to respond to queries.
A typical XML handling system according to one embodiment of the present invention is illustrated in
System 1300 also includes parameter storage 1312 that maintains parameters usable to control operation of elements of system 1300 as described below. Parameter storage 1312 can include permanent memory and/or changeable memory; it can also be configured to gather parameters via calls to remote data structures. A user interface 1314 might also be provided so that a human or machine user can access and/or modify parameters stored in parameter storage 1312.
Data loader 1304 includes an XML parser 1316, a stand builder 1318, a scratch storage unit 1320, and interfaces as shown. Scratch storage 1320 is used to hold a “scratch” stand 1321 (also referred to as an “in-scratch stand”) while it is in the process of being built by stand builder 1318. Building of a stand is described below. After scratch stand 1321 is completed (e.g., when scratch storage 1320 is full), it is transferred to database 1308, where it becomes stand 1321′.
System 1300 might comprise dedicated hardware such as a personal computer, a workstation, a server, a mainframe, or similar hardware, or might be implemented in software running on a general purpose computer, either alone or in conjunction with other related or unrelated processes, or some combination thereof. In one example described herein, database 1308 is stored as part of a storage subsystem designed to handle a high level of traffic in documents, queries and retrievals. System 1300 might also include a database manager 1332 to manage database 1308 according to parameters available in parameter storage 1312.
System 1300 reads and stores XML schema data type definitions and maintains a mapping from document elements to their declared types at various points in the processing. System 1300 can also read, parse and print the results of XQuery expressions evaluated across the XML database and XML schema store.
In the architecture described herein, XML database 1308 includes one or more “forests” 1322, where a forest is a data structure against which a query is made. In one embodiment, a forest 1322 encompasses the data of one or more XML input documents. Forest 1322 is a collection of one or more “stands” 1306, wherein each stand is a collection of one or more subtrees (as described above) that is treated as a unit of the database. The contents of a stand in one embodiment are described below. In some embodiments, physical delimitations (e.g., delimiter data) are present to delimit subtrees, stands and forests, while in other embodiments, the delimitations are only logical, such as by having a table of memory addresses and forest/stand/subtree identifiers, and in yet other embodiments, a combination of those approaches might be used.
In one implementation, a forest 1322 contains some number of stands 1306, and all but one of these stands resides in a persistent on-disk data store (shown as database 1308) as compressed read-only data structures. The last stand is an “in-memory” stand (not shown) that is used to re-present subtrees from on-disk stands to system 1300 when appropriate (e.g., during query processing or subtree updates). System 1300 continues to add subtrees to the in-memory stand as long as it remains less than a certain (tunable) size. Once the size limit is reached, system 1300 automatically flushes the in-memory stand out to disk as a new persistent (“on-disk”) stand.
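A minimal sketch, in Python, of the size-limited in-memory stand behavior just described is shown below; the class, the byte-counting policy, and the flush format are assumptions made for illustration and are not intended to describe the actual on-disk stand structure.

```python
class ForestBuffer:
    """One forest: several read-only on-disk stands plus one in-memory stand."""
    def __init__(self, size_limit_bytes: int):
        self.size_limit = size_limit_bytes   # the tunable in-memory stand size
        self.on_disk_stands = []             # persistent, compressed, read-only
        self.in_memory = []                  # the single in-memory stand
        self.in_memory_bytes = 0

    def add_subtree(self, subtree_bytes: bytes) -> None:
        self.in_memory.append(subtree_bytes)
        self.in_memory_bytes += len(subtree_bytes)
        if self.in_memory_bytes >= self.size_limit:
            self.flush()

    def flush(self) -> None:
        # Write the in-memory stand out as a new persistent ("on-disk") stand
        # and start a fresh, empty in-memory stand.
        self.on_disk_stands.append(b"".join(self.in_memory))
        self.in_memory, self.in_memory_bytes = [], 0
```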
Two main data flows into database 1308 are shown. The flow on the right shows XML documents 1302 streaming into the system through a pipeline comprising an XML parser 1316 and a stand builder 1318. These components identify and act upon each subtree as it appears in the input document stream, as described below. The pipeline generates scratch data structures (e.g., a stand 1320) until a size threshold is exceeded, at which point the system automatically flushes the in-memory data structures to disk as a new persistent on-disk stand 1306.
The flow on the left shows processing of queries. A query processor 1310 receives a query (e.g., XQuery query 1340), parses the query, optimizes it to minimize the amount of computation required to evaluate the query, and evaluates it by accessing database 1308. For instance, query processor 1310 advantageously applies a query to a forest 1322 by retrieving a stand 1306 from disk into memory, applying the query to the stand in memory, and aggregating results across the constituent stands of forest 1322; some implementations allow multiple stands to be processed in parallel. Results 1342 are returned to the user. One such query system could be the system described in Lindblad IIA.
Queries to query processor 1310 can come from human users, such as through an interactive query system, or from computer users, such as through a remote call instruction from a running computer program that uses the query results. In one embodiment, queries can be received and responded to using a hypertext transfer protocol (HTTP). It is to be understood that a wide variety of query processors can be used with the subtree-based database described herein. According to one set of embodiments, query processor 1310 is particularly adapted to efficiently process point-in-time queries described in greater detail below.
Processing of input documents will now be described.
Parser 1316 also includes a subtree finder 1406 that allocates nodes identified in the tokenized document to subtrees according to subtree rules 1408 stored in parameter storage 1312. In one embodiment, subtree finder 1406 allocates nodes to subtrees based on a subtree root element indicated by the subtree rules 1408. Thus, an XML document is divided into subtrees from matching subtree nodes down. For example, if an XML document including citations was processed and the subtree root element was set to “citation”, the XML document would be divided into subtrees each having a root node of “citation”. In other cases, the division of subtrees is not strictly by elements, but can be by subtree size or tree depth constraints, or a combination thereof or other criteria.
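A simplified sketch of dividing a document into subtrees by root element label follows, using Python's standard ElementTree parser. It ignores the size, depth, and "subtree parent" rules mentioned elsewhere, assumes matching elements are not nested, and is not intended to represent parser 1316 or subtree finder 1406 themselves.

```python
import xml.etree.ElementTree as ET

def find_subtrees(xml_text: str, subtree_root_label: str):
    """Return the portions of the document at or below each element whose tag
    matches the configured subtree root label (e.g., "citation")."""
    root = ET.fromstring(xml_text)
    # iter() walks the whole tree (including the document root) and yields
    # every element with a matching tag; each one becomes a subtree root.
    return list(root.iter(subtree_root_label))

doc = "<articles><citation>A</citation><citation>B</citation></articles>"
subtrees = find_subtrees(doc, "citation")   # two subtrees, each rooted at <citation>
```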
Each subtree identified by subtree finder 1406 is provided to stand builder 1318, which includes a subtree analyzer 1410, a posting list generator 1412, and a key generator 1414. Subtree analyzer 1410 generates a subtree data structure (e.g., data structure 1200 of
As stand builder 1318 generates the various data structures associated with subtrees, it places them into scratch stand 1320, which acts as a scratch storage unit for building a stand. The scratch storage unit is flushed to disk when it exceeds a certain size threshold, which can be set by a database administrator (e.g., by setting a parameter in parameter storage 1312). In some implementations of data loader 1304, multiple parsers 1316 and/or stand builders 1318 are operated in parallel (e.g., as parallel processes or threads), but preferably each scratch storage unit is only accessible by one thread at a time.
One example of a structure of an XML database used with the present invention is shown in
Forest structure 1504 includes one or more stand structures 1506, each of which contains data related to a number of subtrees, as shown in detail for stand 1506. For example, stand 1506 may be a directory in a disk-based file system, and each of the blocks may be a file. Other implementations are also possible, and the description of “files” herein should be understood as illustrative and not limiting of the invention.
TreeData file 1510 includes the data structure (e.g., data structure 1200 of
ListData file 1514 contains information about the text or other data contained in the subtrees that is useful in processing queries. For example, in one embodiment, ListData file 1514 stores “posting lists” of subtree identifiers for subtrees containing a particular term (e.g., an atom), and ListIndex file 1516 is used to provide more efficient access to particular terms in ListData file 1514. Examples of posting lists and their creation are described in detail in Lindblad IIA, and a detailed description is omitted herein as not being critical to understanding the present invention.
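For illustration only, the sketch below shows the general idea of a posting list, i.e., a mapping from a term to the identifiers of the subtrees containing that term. The whitespace/lowercase tokenization shown is an assumption of this sketch and is not the atom coding or index layout of ListData file 1514.

```python
from collections import defaultdict

def build_posting_lists(subtrees):
    """subtrees: iterable of (subtree_id, text) pairs.
    Returns {term: sorted list of subtree identifiers containing the term}."""
    postings = defaultdict(set)
    for subtree_id, text in subtrees:
        for term in text.lower().split():     # naive tokenization for illustration
            postings[term].add(subtree_id)
    return {term: sorted(ids) for term, ids in postings.items()}

lists = build_posting_lists([(1, "point in time query"), (2, "query time stamp")])
# lists["time"] == [1, 2]; lists["point"] == [1]
```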
Qualities file 1518 provides a fixed-width array indexed by subtree identifier that encodes one or more numeric quality values for each subtree; these quality values can be used for classifying subtrees or XML documents. Numeric quality values are optional features that may be defined by a particular application. For example, if the subtree store contained Internet web pages as XHTML, with the subtree units specified as the <HTML> elements, then the qualities block could encode some combination of the semantic coherence and inbound hyperlink density of each page. Further examples of quality values that could be implemented are described in Lindblad IVA, and a detailed description is omitted herein as not being critical to understanding the present invention.
Timestamps file 1520 provides a fixed-width array indexed by subtree identifier that stores two 64-bit timestamps indicating a creation and deletion time for the subtree. For subtrees that are current, the deletion timestamp may be set to a value (e.g., zero) indicating that the subtree is current. As described below, Timestamps file 1520 can be used to support modification of individual subtrees, as well as storing of archival information. Timestamps file 1520 may be filtered by query processor 1310 to enable historical database queries as described below.
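As one possible illustration of such a fixed-width array, the sketch below reads and writes a (birth, death) pair keyed by subtree identifier, seeking directly to the record for a given subtree. The little-endian layout, the field order, and the use of zero for "current" are assumptions of this sketch rather than a specification of Timestamps file 1520.

```python
import struct

RECORD = struct.Struct("<QQ")   # two unsigned 64-bit timestamps: (birth, death)

def read_timestamps(path: str, subtree_id: int):
    """Fetch the (birth, death) timestamps for a subtree; death == 0 means current."""
    with open(path, "rb") as f:
        f.seek(subtree_id * RECORD.size)       # fixed-width records allow direct seek
        return RECORD.unpack(f.read(RECORD.size))

def mark_deleted(path: str, subtree_id: int, death: int) -> None:
    """Mark a subtree as deleted by overwriting only its death timestamp field."""
    with open(path, "r+b") as f:
        f.seek(subtree_id * RECORD.size + 8)   # skip the 8-byte birth field
        f.write(struct.pack("<Q", death))
```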
The next three files provide selected information from the data structure 1200 for each subtree in a readily-accessible format. More specifically, Ordinals file 1522 provides a fixed-width array indexed by subtree identifier that stores the initial ordinal for each subtree, i.e., the ordinal value stored in block 1202 of the data structure 1200 for that subtree; because the ordinal increments as every node is processed, the ordinals for different subtrees reflect the ordering of the nodes within the original XML document. URI-Keys file 1524 provides a fixed-width array indexed by subtree identifier that stores the URI key for each subtree, i.e., the uri-key value stored in block 1202 of the data structure 1200. Unique-Keys file 1526 provides a fixed-width array indexed by subtree identifier that stores the unique key for each subtree, i.e., the unique-key value stored in block 1202 of the data structure 1200. It should be noted that any of the information in the Ordinals, URI-Keys, and Unique-Keys files could also be obtained, albeit less efficiently, by locating the subtree in the TreeData file 1510 and reading its subtree data structure 1200. Thus, these files are to be understood as auxiliary files for facilitating access to selected, frequently used information about the subtrees. Different files and different combinations of data could also be stored in this manner.
Frequencies file 1528 stores a number of entries related to the frequency of occurrence of selected tokens, which might include all of the tokens in any subtrees in the stand or a subset thereof. In one embodiment, for each selected token, Frequencies file 1528 holds a count of the number of subtrees in which the token occurs.
It will be appreciated that the stand structure described herein is illustrative and that variations and modifications are possible. Implementation as files in a directory is not required; a single structured file or other arrangement might also be used. The particular data described herein is not required, and any other data that can be maintained on a per-subtree basis may also be included. Use of subtree data structure 1200 is not required; as described above, different subtree data structures may also be implemented.
As the stands of a forest are generated, processed and stored, they can be “log-structured”, i.e., each stand can be saved to a file system as a unit that is never edited (other than the timestamps file). To update a subtree, the old subtree is marked as deleted (e.g., by setting its deletion timestamp in Timestamps file 1520) and a new subtree is created. The new subtree with the updated information is constructed in a memory cache as part of an in-memory stand and eventually flushed to disk, so that in general, the new subtree may be in a different stand from the old subtree it replaces. Thus, any insertions, deletions and updates to the forest are processed by writing new or revised subtrees to a new stand. This feature localizes updates, rather than requiring entire documents to be replaced.
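A minimal sketch of this log-structured update discipline follows; in-memory dictionaries stand in for the stands and the Timestamps file, and the class and method names are hypothetical.

```python
class ForestUpdater:
    """Subtree data is never edited in place; an update marks the old subtree
    deleted (via its death timestamp) and appends a brand-new subtree."""
    def __init__(self):
        self.subtrees = {}      # subtree_id -> bytes (write-once)
        self.timestamps = {}    # subtree_id -> [birth, death]; death 0 = current
        self.next_id = 0

    def insert(self, data: bytes, now: int) -> int:
        new_id, self.next_id = self.next_id, self.next_id + 1
        self.subtrees[new_id] = data
        self.timestamps[new_id] = [now, 0]
        return new_id

    def update(self, old_id: int, new_data: bytes, now: int) -> int:
        # The old subtree stays where it is and is merely marked deleted as of
        # `now`; the replacement typically ends up in a different (newer) stand.
        self.timestamps[old_id][1] = now
        return self.insert(new_data, now)
```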
It should be noted that in some instances, updates to a subtree will also affect other subtrees; for instance, if a lower subtree is deleted, the link node in the upper subtree is preferably removed, which would require modifying the upper subtree. Transactional updating procedures that might be implemented to handle such changes while maintaining consistency are described in detail in Lindblad IIIA.
It is to be understood that marking a subtree as deleted does not require that the subtree immediately be removed from the data store. Rather than removing any data, the current time can be entered as a deletion timestamp for the subtree in Timestamps file 1520 of
Stand size is advantageously controlled to provide efficient I/O, e.g., by keeping the TreeData file size of a stand close to the maximum amount of data that can be retrieved in a single I/O operation. As stands are updated, stand size may fluctuate. In some embodiments of the invention, merging of stands is provided to keep stand size optimized. For example, in system 1300 of
In one embodiment, the background merge process can be tuned by two parameters: Merge-min-ratio and Merge-min-size, which can be provided by parameter storage 1312. Merge-min-ratio specifies the minimum allowed ratio between any two on-disk stands; once the ratio is exceeded, system 1300 automatically schedules stands for merging to reduce the maximum size ratio between any two on-disk stands. Merge-min-size limits the minimum size of any single on-disk stand. Stands below this size limit will be automatically scheduled for merging into some larger on-disk stand.
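The sketch below approximates how these two parameters might drive merge scheduling; the precise semantics of merge-min-ratio are not fully specified here, so the ratio test shown (largest stand size divided by each stand's size) is only one plausible reading, offered as an illustration.

```python
def stands_to_merge(stand_sizes, merge_min_ratio: float, merge_min_size: int):
    """stand_sizes: {stand_id: size_in_bytes} for the on-disk stands.
    Returns the set of stand ids to schedule for merging."""
    scheduled = set()
    for stand_id, size in stand_sizes.items():
        if size < merge_min_size:              # too small: merge into a larger stand
            scheduled.add(stand_id)
    if stand_sizes:
        largest = max(stand_sizes.values())
        for stand_id, size in stand_sizes.items():
            if size and largest / size > merge_min_ratio:
                scheduled.add(stand_id)        # reduce the maximum size ratio
    return scheduled
```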
In the embodiment of a stand shown in
As described above, parameters can be provided using parameter storage 1312 to control various aspects of system operation. Parameters that can be provided include rules for identifying tokens and subtrees, rules establishing minimum and/or maximum sizes for on-disk and in-memory stands, parameters for determining whether to merge on-disk stands, and so on.
In one embodiment, some or all of these parameters can be provided using a forest configuration file, which can be defined in accordance with a preestablished XML schema. For example, the forest configuration file can allow a user to designate one or more ‘subtree root’ element labels, with the effect that the data loader, when it encounters an element with a matching label, loads the portion of the document appearing at or below the matching element subdivision as a subtree. The configuration file might also allow for the definition of ‘subtree parent’ element names, with the effect that any elements which are found as immediate children of a subtree parent will be treated as the roots of contiguous subtrees.
More complex rules for identifying subtree root nodes may also be provided via parameter storage 1312, for example, conditional rules that identify subtree root nodes based on a sequence of element labels or tag names. Subtree identification rules need not be specific to tag names, but can specify breaks upon occurrence of other conditions, such as reaching a certain size of subtree or subtree content. Some decomposition rules might be parameterized where parameters are supplied by users and/or administrators (e.g., “break whenever a tag is encountered that matches a label the user specifies,” or more generally, when a user-specified regular expression or other condition occurs). In general, subtree decomposition rules are defined so as to optimize tradeoffs between storage space and processing time, but the particular set of optimum rules for a given implementation will generally depend on the structure, size, and content of the input document(s), as well as on parameters of the system on which the database is to be installed, such as memory limits, file system configurations, and the like.
In various embodiments, each subtree may be associated with one or more timestamps indicating a change in state of the subtree. For example, a subtree may be associated with a “birth” timestamp indicating the time at which the subtree was created in the database. Further, a subtree may be associated with a “death” timestamp indicating the time at which the subtree was marked for deletion (if applicable). As described above, subtrees are not immediately deleted from the database in a physical sense when an update or delete operation occurs; rather, they are merely rendered obsolete as of the date of that operation (e.g., the death timestamp). If a subtree has not yet been marked for deletion (i.e., is currently active), it may not have a death timestamp. Alternatively, the subtree may have a death timestamp with a default value such as zero. In various embodiments, the death timestamp is later than or equal to the birth timestamp. The timestamp portion of the stand data structure is both readable and writable, thus allowing timestamps to be modified.
For any given time value a subtree may be in one of three states: “nascent,” “active,” or “deleted.” A subtree is in the nascent state if it doesn't have a birth timestamp associated with it, or its birth timestamp is later than or equal to the given time value. A subtree is in the active state if its birth timestamp is earlier than the given time value and its death timestamp is later than or equal to the given time value. A subtree is in the deleted state if its death timestamp is earlier than the given time value.
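Expressed as code, the three-state rule above might look like the following sketch, with None standing in for a missing timestamp:

```python
NASCENT, ACTIVE, DELETED = "nascent", "active", "deleted"

def subtree_state(birth, death, t):
    """Classify a subtree relative to a given time value t."""
    if birth is None or birth >= t:        # no birth timestamp, or born at/after t
        return NASCENT
    if death is not None and death < t:    # marked for deletion before t
        return DELETED
    return ACTIVE                          # born before t, not yet deleted as of t
```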
In one set of embodiments, the birth and death timestamps associated with a subtree are stored in one or more data structures that are separate from the subtree. In these embodiments, an index relationship may be maintained between subtree identifiers and birth and death timestamps, which can be used to efficiently identify whether a given subtree is “active” relative to a point in time. Alternatively, the birth and death timestamps associated with a subtree may be stored within the subtree data structure.
In various embodiments, the system includes an update clock that is incremented every time an update is committed. Committing an update includes activating zero or more nascent subtrees and deleting zero or more active subtrees. A nascent subtree is activated by setting the subtree birth timestamp to the current update clock value. An active subtree is deleted by setting the subtree death timestamp to the current update clock value.
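One possible sketch of such an update clock follows. Whether a commit stamps subtrees with the pre-increment or post-increment clock value, and the locking scheme shown, are assumptions of this sketch.

```python
import threading

class UpdateClock:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def commit(self, nascent_ids, deleting_ids, timestamps) -> int:
        """Activate nascent subtrees and delete active ones at the current clock
        value, then increment the clock. timestamps: id -> [birth, death]."""
        with self._lock:
            now = self._value
            for sid in nascent_ids:
                timestamps[sid][0] = now       # set birth timestamp
            for sid in deleting_ids:
                timestamps[sid][1] = now       # set death timestamp
            self._value += 1
            return now

    def read(self) -> int:
        # Sampled once at the start of query evaluation (or of a merge) and
        # then held constant for the entire operation.
        with self._lock:
            return self._value
```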
During query evaluation, the current value of the update clock is determined at the start of query processing and used for the entire evaluation of the query. Since the clock value remains constant throughout the evaluation of the query, the state of the database remains constant throughout the evaluation of the query, even if updates are being performed concurrently.
When the database manager starts performing a merge, it first saves the current value of the update clock, and uses that value of the update clock for the entire duration of the merge. The stand merge process does not include in the output any subtrees deleted with respect to the saved update clock.
Subtree timestamp updates are allowed during the stand merge operation. To propagate any timestamp updates performed during the merge operation, at the very end of the merge operation the database manager briefly locks out subtree timestamp updates and migrates the subtree timestamp updates from the input stands to the output stand.
If a subtree is being created in the database (1604) (e.g., via an update or insert operation), the new subtree is associated with a birth timestamp indicating the time of creation (1606). The birth timestamp may be unique to the subtree being created, or may be shared among multiple subtrees that are created via a single operation. For example, if a single XML query is executed that creates several subtrees, then the new subtrees may be associated with the same birth timestamp. At step 1610, if a set of subtrees is marked for deletion, the subtrees being deleted are associated with a death timestamp (1612). At step 1614, if method 1600 is not finished (e.g., an update is still in process), the method returns to step 1610. Using the birth timestamp and death timestamp, queries may be performed for times on or before the current time, as will be described below.
At step 1702, a query is received. At step 1704, the query timestamp for the query is determined. In various embodiments, the query timestamp may be embedded within the query itself. In other embodiments, the query timestamp may be determined or read from a separate source. A typical point-in-time query will have a query timestamp that is earlier than the time of query execution (e.g., the “current” time). However, in various embodiments the query timestamp may be equal to the time of query execution. In this manner, “current time” queries may be supported using the same logic as point-in-time (i.e., “historical”) queries.
Once the timestamp for the query has been determined, the query is executed to determine an intermediate result list of subtrees responsive to the query (1708). In one set of embodiments, indexes may be used to provide mappings between terms in the query string of the point-in-time query and the subtrees in the database, independent of timestamps. In these cases, the indexes may be used to determine the intermediate result list. The intermediate result list is then filtered to remove subtrees that are not active at the point in time of the query timestamp. As shown, subtrees in the intermediate result list that have a birth timestamp later than the query timestamp, or subtrees that do not have a birth timestamp (e.g., nascent subtrees) are removed (1710). Further, subtrees in the intermediate result that have a death timestamp earlier than the query timestamp are removed (1712).
In various embodiments, the filtering steps 1710 and 1712 are performed after the indexes described above have been fully resolved and an intermediate result list has been determined at 1708. In other embodiments, the filtering steps may occur concurrently with index resolution during the query execution process (which may be at several different points). At step 1714, the final, filtered result list is returned to the query requestor.
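Putting steps 1708 through 1714 together, a simplified sketch of the filtering logic might read as follows; the posting-list intersection and the treatment of a zero death timestamp as "current" follow the conventions above, but this is an illustration rather than the embodiments' actual query processor.

```python
def point_in_time_query(terms, posting_lists, timestamps, query_ts):
    """terms: query terms; posting_lists: term -> subtree ids;
    timestamps: subtree id -> (birth, death); query_ts: the query timestamp."""
    # Step 1708: timestamp-independent indexes yield the intermediate result list.
    candidate_sets = [set(posting_lists.get(t, ())) for t in terms]
    intermediate = set.intersection(*candidate_sets) if candidate_sets else set()

    results = []
    for sid in intermediate:
        birth, death = timestamps.get(sid, (None, None))
        if birth is None or birth > query_ts:             # step 1710: nascent or born later
            continue
        if death not in (None, 0) and death < query_ts:   # step 1712: deleted earlier
            continue
        results.append(sid)
    return sorted(results)                                # step 1714: return to requestor
```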
Note that indexes may be changing in real-time underneath the query (because of other queries making changes to the database), but the use of a query timestamp allows the query to “see” a constant view of the database. The query timestamp filters out any changes that have occurred since the start time of query execution (in the case of a “current time” query) or since the point-in-time specified (in the case of a point-in-time query).
It should be appreciated that the specific steps illustrated in
A database storage reclamation process (e.g., a garbage collection process) may be used to reclaim subtrees having a death timestamp that is dated before a selected timestamp (e.g., the timestamp of the oldest currently active query in the system). The database storage reclamation process may physically delete subtrees, thereby making the deleted subtrees inaccessible to a query. Therefore, by controlling the timestamps used for a database storage reclamation process, a user may control how far in the past historical database queries may be run.
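A hedged sketch of selecting subtrees eligible for reclamation under such a timestamp horizon (for example, the timestamp of the oldest currently active query) is shown below; the data layout is the same simplified mapping used in the earlier sketches.

```python
def reclaimable_subtrees(timestamps, horizon_ts):
    """timestamps: subtree id -> (birth, death); death of None or 0 means current.
    A subtree may be physically reclaimed once its death timestamp predates the
    horizon, since no query at or after the horizon can still need to see it."""
    return [sid for sid, (birth, death) in timestamps.items()
            if death not in (None, 0) and death < horizon_ts]
```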
This detailed description illustrates some embodiments of the invention and variations thereof, but should not be taken as a limitation on the scope of the invention. In this description, structured documents are described, along with their processing, storage and use, with XML being the primary example. However, it should be understood that the invention might find applicability in systems other than XML systems, whether they are later-developed evolutions of XML or entirely different approaches to structuring data. It should also be understood that “XML” is not limited to the current version or versions of XML. An XML file (or XML document) as used herein can be serialized XML or more generally an “infoset.” Generally, XML files are text, but they might be in a highly compressed binary form.
Various features of the present invention may be implemented in software running on one or more general-purpose processors in various computer systems, dedicated special-purpose hardware components, and/or any combination thereof. Computer programs incorporating features of the present invention may be encoded on various computer readable media for storage and/or transmission; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and carrier signals adapted for transmission via wired, optical, and/or wireless networks including the Internet. Computer readable media encoded with the program code may be packaged with a device or provided separately from other devices (e.g., via Internet download).
Thus, although the invention has been described with respect to specific embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.
This application claims the benefit of U.S. Provisional Application No. 60/801,899, filed May 19, 2006 by Lindblad and entitled “POINT-IN-TIME QUERY METHOD AND SYSTEM,” which disclosure is incorporated herein by reference for all purposes. This application is related to the following commonly-owned, co-pending applications: U.S. patent application Ser. No. 10/462,100 (Attorney Docket No. 021512-000110US, entitled “SUBTREE-STRUCTURED XML DATABASE,” hereinafter “Lindblad I-A”); U.S. patent application Ser. No. 10/462,019 (Attorney Docket No. 021512-000210US, entitled “PARENT-CHILD QUERY INDEXING FOR XML DATABASES,” hereinafter “Lindblad II-A”); U.S. patent application Ser. No. 10/462,023 (Attorney Docket No. 021512-000310US, entitled “XML DB TRANSACTIONAL UPDATE SYSTEM,” hereinafter “Lindblad III-A”); and U.S. patent application Ser. No. 10/461,935 (Attorney Docket No. 021512-000410US, entitled “XML DATABASE MIXED STRUCTURAL-TEXTUAL CLASSIFICATION SYSTEM,” hereinafter “Lindblad IV-A”).