The present invention relates to a format for encoding information that is logically organized in a hierarchical structure, such as XML content, for efficient storage and processing. Specifically, the present invention relates to encoding the hierarchically organized information in a format that maintains characteristics of the information, such as the hierarchical structure.
The eXtensible Markup Language (XML) has become the most popular format for exchanging information between applications. XML content is self-descriptive (i.e., it contains tags along with data), but the standard XML serialization format is text-based, including the numbers and dates. This results in a significant increase in the size of XML documents compared to other proprietary formats for capturing the same data. The increased size of XML documents causes overhead costs during transmission, due to limited network bandwidths, as well as slower performance of storage and retrieval operations, due to limited disk I/O bandwidth.
Processing XML data typically requires parsing the tags to access the values. DOMs (Document Object Models) can be used, but they typically require substantial amounts of memory. Thus, the parsing step can be costly and can cause significant application performance degradation.
Further, the values may need to be converted from the textual representation to their native datatype (e.g., integer, float or date) before the values can be processed by the application. Associated type conversion costs also degrade overall application performance.
Approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Throughout this description, numerous references are made to XML documents and associated hierarchies, to provide specific examples of a possible implementation of the broader techniques described herein. However, these techniques are not limited strictly to implementation with XML documents. Rather, these techniques may be implemented in the context of any tag-based, delimiter-based, or text-based hierarchical information that is logically organized as a hierarchy.
Functional Overview of Embodiments
Techniques are described for encoding and processing information that is logically hierarchically structured, such as XML data in XML documents. Typically, XML data is stored as a simple binary representation of the characters that make up the XML data, that is, the XML tags (i.e., elements and attributes) and values. By contrast, with the encoding format described herein, XML data is stored in a compact binary form that maintains all of the features of XML data in a useable form, such as the hierarchical structure underlying the data (e.g., the data model or infoset), the notion of elements and attributes, etc. Hence, data encoded in this format can undergo XML-based processing on-the-fly as it is being received (e.g., streamed) or retrieved (e.g., fetched from a database), as if the data were being processed linearly in its textual character-based form. Significantly, processing of data encoded in this format can begin without having to wait for and decode the entire data set, because processing the data requires interpretation rather than decompression.
This compact binary format significantly reduces the overhead due to XML tags. Hence, the encoded XML is more compact than a binary representation of the corresponding textual characters. This binary format can also be processed more efficiently than the textual format because the data is effectively pre-parsed. In one embodiment, values are stored in their native type formats and, therefore, processing avoids costly type conversions.
Salient features of the encoding format are as follows.
(A) Tokenization of tags: Element names and attribute names are replaced with token IDs. Similarly, namespace URLs and prefixes can also be tokenized.
(B) Array mode optimization: If the same tag is repeated multiple times (which occurs frequently in real world XML), token IDs can also be avoided, leading to further compaction.
(C) Native type encoding: If data type related metadata (e.g., an XML schema) is present, the binary encoding format exploits it in various ways. One way of exploiting the metadata is to store values in their native datatypes, e.g., integers, floats and dates are stored as such and require no unnecessary conversions.
(D) Schema sequential optimization: If structure related metadata (e.g., an XML schema) constrains elements to appear in a specific order, token IDs can be avoided within the encoding. This leads to an optimal encoding of the data with minimal overhead, due to exploitation of the XML data model.
(E) Sectioning of XML: A large XML document can be split into smaller pieces (sections). The binary format for such a document contains references to the various sections which can be retrieved and managed independently. This enables lazy manifestation (on-demand) of XML documents.
(F) Out-of-band communication of token definitions and annotated schemas: A client application can access the token definitions and schemas in an out-of-band fashion. Hence the mappings of token IDs to their names, etc., are completely omitted from the encoded XML data.
As current applications scale up to large volumes of XML data (i.e., large numbers of large XML documents), the performance issues with text-based encoding are exacerbated. The compact binary encoding described herein addresses many of these issues and enables significant improvements in application performance. The encoding format offers combinations of several features that lead to many advantages over the prior text-based XML format, including the following.
Size: The binary encoding of XML is significantly smaller than the original XML document. This leads to improved efficiency during transmission, storage and retrieval operations.
Processing Speed: The binary encoding of XML avoids the need for costly parsing and unnecessary type conversions, thus speeding up application processing.
Operation Codes
In one embodiment, XML data is encoded from its original character-based format into a sequence of operation codes (“opcodes”). Each opcode has a fixed number of operands. Opcodes can be associated with XML elements, attributes, and their associated values. Use of opcodes and operands to represent XML elements and attributes effectively pre-parses the XML data. Consequently, a receiver/consumer of the encoded data need not spend computational resources parsing the data. Furthermore, representing XML tags and values with opcodes and operands reduces the size of the data. This is because an opcode and an operand may each require as little as a byte to encode, rather than several bytes for each corresponding character representation of similar data. Examples of opcodes are described hereafter.
Tokenization of Tags
In one embodiment, XML tags are tokenized. Tokenization of tags (i.e., XML elements and attributes) means that the character (e.g., textual) representation of the tag is replaced with a short token identifier. Hence, a tag that may require several bytes of memory in its character form often requires only one byte to encode as a token (if fewer than 256 tokens are used). For example, an XML element tag <Name> requires four bytes to encode in a simple binary representation of its characters, whereas a corresponding token for that tag (e.g., a token value “1”) requires only one byte to encode in the compact binary format described herein. In one embodiment, namespace URLs and the prefixes associated with them are also tokenized.
In one embodiment, token identifiers can be generated by any encoding system, i.e., generated in a distributed manner. For example, token identifiers can be generated by a database server that manages the XML data repository, by client applications, or by any tier associated with the processing of XML data. The capability of global construction of token identifiers is enabled through use of a global hash algorithm that is an element of the encoding format. Thus, any machine or mechanism that can run the hash algorithm is capable of generating token identifiers for data being encoded, thereby providing an efficient global utilization of resources.
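By way of a non-limiting illustration, the following sketch (in Python) shows how a shared hash function allows any tier to derive the same token identifier for a given tag name independently. The particular global hash algorithm is not specified here; FNV-1a is used purely as an illustrative stand-in.

    def fnv1a_32(data: bytes) -> int:
        # 32-bit FNV-1a hash; any agreed-upon global hash function could be substituted.
        h = 0x811C9DC5
        for b in data:
            h ^= b
            h = (h * 0x01000193) & 0xFFFFFFFF
        return h

    def token_id(tag_name: str) -> int:
        # Any encoder (database server, client application, or middle tier) that runs
        # the same hash derives the same token identifier for the same tag name.
        return fnv1a_32(tag_name.encode("utf-8"))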
A token definition mapping is constructed to map token identifiers to token definitions, i.e., what particular XML element or attribute is represented by a particular token identifier. Token definitions may be stored and transmitted inline in the XML document to which the definitions apply. Alternatively, a global token dictionary may be constructed and stored as global metadata in a database for use with a collection of XML documents stored in the database. For example, a token dictionary may be constructed to define all the tokens used for a particular namespace, where a namespace provides context for and scopes the element and attribute names for XML data associated with the namespace. Thus, a client application can access the token definitions in an out-of-band fashion, and the mappings of token IDs to their names, etc. are completely omitted from the encoded XML data. Furthermore, a global token dictionary provides benefits regarding querying and indexing the collection of XML documents to which the token definitions apply.
An example of the use of opcodes and tokens is based on the following simplified XML fragment:
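    <root>
      <A>123</A>
      <A>345</A>
    </root>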
A corresponding character representation of the fragment, according to an embodiment of the invention, is as follows.
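    STE 1 STE 2 VAL 123 ENDE STE 2 VAL 345 ENDE ENDE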
where: “STE” is a “start of element” opcode, indicating the start of an element;
“1” is a token identifier for the <root> element, “2” is a token identifier for the <A> element;
“VAL” is a “value” opcode, with respective associated values “123” and “345”; and
“ENDE” is an “end of element” opcode, indicating the end of an element.
Note that the opcodes are represented above in a character-based format for the purpose of explanation. However, when encoded according to the compact binary format described, the opcodes can actually be encoded as simple byte values. For non-limiting examples, the “start of element” opcode may be encoded in byte format as {0000 0000}, the “end of element” opcode may be encoded in byte format as {0000 0001}, the “value” opcode may be encoded in byte format as {0000 0010}, and so on.
Array Mode Optimization
In one embodiment, “array mode optimization” is used in encoding XML data. The general purpose of array mode optimization is to avoid repeating, in the encoded data, tokens/operands that are repeated in the un-encoded XML data. Thus, a particular opcode is used to represent “start of element in array mode.” The particular opcode is used to represent that the applicable element or attribute is the same as a previous element or attribute.
Continuing with the foregoing example data, use of array mode optimization could result in the following character representation of the example data:
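    STE 1 STE 2 VAL 123 ENDE STEAM VAL 345 ENDE ENDE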
where “STEAM” is a “start of element in array mode” opcode, indicating that this element is the same as a previous element. Different opcodes can be used for different types of relationships between the current element and the previous element. For example, a particular opcode can be used to indicate that the current element is the same as the previous sibling of the current element, and a different opcode can be used to indicate that the current element is the same as the previous (in document order) element. In the example, the STEAM opcode is used to indicate that the second instance of <A> (with value of 345) is the same as the previous sibling, i.e., the first instance of <A> (with value of 123).
Hence, in the presence of repetitive tags, use of token identifiers in the encoded data, as operands to opcodes, can be avoided altogether. For example, use of the operand “2” is avoided for the second <A> element. Consequently, use of array mode opcodes results in a more compact encoded representation of the data.
Native Type Encoding
In one embodiment, native data type encoding is used in encoding XML data values. The general purpose of native type encoding is to exploit knowledge of the data type of a value to allow for encoding the value in a more compact machine representation than a simple binary representation of the value characters. Use of native type encoding is a metadata-based optimization (e.g., XML schema-based), which relies on data type definitions in an XML schema, or similar metadata, to which an XML document corresponds. Because values are stored according to their native data type, a receiver/consumer of the encoded data need not spend computational resources converting values from character-based representations to native type representations for further processing.
Native data type encoding exploits the availability of inherent compact representations (i.e., valid literals) of certain data types, such as integer, Boolean, float and date data types. The schema specifies the data type for a value, and the data type inherently has a well-known compact representation. For example, a Boolean value of “true” or “false” can simply be encoded in its native data type with a single bit, as a “0” or “1”. For another example, a conventional binary representation of the characters “1”, “2” and “3” of the value “123” requires three bytes to encode, whereas the value “123” encoded in its native type as an integer requires only one byte to encode.
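By way of a non-limiting illustration, the following sketch (in Python) contrasts the character-based and native encodings of the value “123”; the single-byte unsigned-integer layout shown is merely one possible native representation.

    import struct

    text_form = b"123"                    # character form: 0x31 0x32 0x33 (three bytes)
    native_form = struct.pack("B", 123)   # native integer form: 0x7B (one byte)

    assert len(text_form) == 3
    assert len(native_form) == 1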
Schema Sequential Optimization
In one embodiment, “schema sequential optimization” is used in encoding XML data elements and attributes. The general purpose of schema sequential optimization encoding is to exploit knowledge of schema-specified constraints regarding the structure of elements and attributes within a compliant XML document, to avoid the use of some opcodes and operands if possible. Use of schema sequential optimization encoding relies on structural constraints, i.e., the order and cardinality of tags, in an XML schema to which an XML document corresponds. Because the particular order and cardinality of some tags within an XML document may be constrained by the corresponding schema, such constraints can be relied upon when encoding the XML data from the document.
An example of the use of schema sequential optimization is based on the following simplified XML fragment:
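    <root>
      <A>123</A>
      <B>456</B>
    </root>

The element values shown are merely illustrative.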
A corresponding character representation of the fragment, without the use of schema sequential optimization, is as follows.
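    STE 1 STE 2 VAL 123 ENDE STE 3 VAL 456 ENDE ENDE

where “3” is a token identifier for the <B> element.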
However, assume a schema corresponding to the XML data specifies that the root must have only one reference to element “A” followed by only one reference to element “B”. Thus, a corresponding character representation of the fragment, according to an embodiment using schema sequential optimization, is as follows.
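    STE 1 VAL 123 VAL 456 ENDE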
Hence, use of certain opcodes (STE and ENDE) and corresponding operands (“2” and “3”, respectively) is avoided, resulting in a more compact representation of the underlying data. Use of those opcodes and operands is avoidable because it is known that the data must be structured exactly as specified in the schema. Therefore, the encoding scheme can rely on such constraints, i.e., that one instance of element “A” is followed by only one instance of element “B”.
XML schemas that constrain corresponding documents to a certain structure are common; Purchase Order documents are one example. Furthermore, schema sequential optimization is also applicable to structured document sections (XML document sectioning is described in greater detail herein), and need not be applied only at the document level.
Sectioning of XML Data
The content of U.S. patent application Ser. No. 11/083,828, entitled “Method and System for Flexible Sectioning of XML Data in a Database System”, is incorporated by this reference in its entirety for all purposes as if fully disclosed herein. U.S. patent application Ser. No. 11/083,828 describes a mechanism allowing XML documents to be selectively shredded based on user-specified criteria that define how to section the data. In particular, users can specify the criteria for sectioning XML documents using XPath expressions, and can specify the table in which sections matching specified XPath expressions are to be stored. Users can specify criteria for sectioning an XML document that does not have a well-defined schema into relational database tables.
The techniques described in the 11/083,828 reference allow for converting an XML document into a smaller document with section references (e.g., logical pointers) to sections of content that were divided out physically from the original document. Consequently, sections can be fetched on demand rather than fetching the entire document. The techniques described hereafter are applicable to encoding sectioned documents, such as sectioned documents as described in the 11/083,828 reference.
In one embodiment, a particular opcode is used to indicate an occurrence of a section root, referred to as a node reference. In one embodiment, operands for the opcode include (a) a root element path ID, and (b) an order key.
The root element path ID identifies the XML path to the root element of the section. The path ID can be used to identify in which table the section data is stored.
The order key uniquely identifies a section and can be used for lookup of the section data in the table identified by the root element path ID. That is, the order key indicates where the section root node resides within the hierarchical structure of the XML document containing the node. The content of U.S. patent application Ser. No. 10/884,311, entitled “Index for Accessing XML Data”, is incorporated by this reference in its entirety for all purposes as if fully disclosed herein. The 10/884,311 reference describes a mechanism for indexing paths, values and order information in XML documents. The mechanism involves using a set of structures, which collectively constitute an index, for accessing XML data.
As described in the 10/884,311 reference, the order key may be represented using a Dewey-type value. Specifically, the order key of a node is created by appending a value to the order key of the node's immediate parent, where the appended value indicates the position, among the children of the parent node, of that particular child node. For example, assume that a particular node D is the child of a node C, which itself is a child of a node B that is a child of a node A. Assume further that node D has the order key 1.2.4.3. The final “3” in the order key indicates that the node D is the third child of its parent node C. Similarly, the “4” indicates that node C is the fourth child of node B. The “2” indicates that Node B is the second child of node A. The leading 1 indicates that node A is the root node (i.e. has no parent).
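By way of a non-limiting illustration, the following sketch (in Python) constructs the dotted order keys for the foregoing example; an actual implementation may use a byte-comparable binary encoding rather than the textual dotted form shown here.

    def child_order_key(parent_key: str, position: int) -> str:
        # Append the child's 1-based position among its siblings to the parent's order key.
        return f"{parent_key}.{position}" if parent_key else str(position)

    key_a = child_order_key("", 1)        # "1"       : node A is the root
    key_b = child_order_key(key_a, 2)     # "1.2"     : node B is the second child of A
    key_c = child_order_key(key_b, 4)     # "1.2.4"   : node C is the fourth child of B
    key_d = child_order_key(key_c, 3)     # "1.2.4.3" : node D is the third child of C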
In one embodiment, a series of node references are compressed using a particular opcode, referred to herein as a collection reference. A collection reference opcode indicates the presence of a contiguous list of section references and, therefore, refers to a collection of nodes. In one embodiment, operands for the collection reference opcode include (a) a set of one or more path IDs, (b) the order key for the first section referenced by the collection reference opcode, and (c) the order key for the last section referenced by the collection reference opcode. Hence, multiple consecutive section reference opcodes and operands are compressed into a single collection reference opcode with associated operands, which provide the information necessary to fetch the data for the sections referenced by the collection reference.
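By way of a non-limiting illustration, the following sketch (in Python) collapses a contiguous run of section references into a single collection reference carrying the operands described above; the names and structures are purely illustrative.

    from dataclasses import dataclass
    from typing import List, Set

    @dataclass
    class SectionRef:
        path_id: int     # identifies the root element path (and hence the section table)
        order_key: str   # locates the section root within the document hierarchy

    @dataclass
    class CollectionRef:
        path_ids: Set[int]   # set of path IDs covered by the collection
        first_key: str       # order key of the first referenced section
        last_key: str        # order key of the last referenced section

    def compress(refs: List[SectionRef]) -> CollectionRef:
        # Replace consecutive section references with one collection reference.
        return CollectionRef({r.path_id for r in refs},
                             refs[0].order_key,
                             refs[-1].order_key)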
Chunk Encoding of XML Data
In one embodiment, an XML document is encoded in chunks of data. In other words, subsets of data of a specific size (for a non-limiting example, 64 Kb) are encoded, subset by subset. Whether or not to use chunking and the size of the chunks can be negotiated between the data producer and consumer, as part of typical negotiations preceding the actual exchange of data.
In a related embodiment, encoding and transmitting XML data is performed in chunks. That is, a chunk is encoded and then transmitted, the next chunk is encoded and then transmitted, and so on. Thus, in the case of a collision with respect to constructing tokens based on the hash algorithm, the encoder can mark the chunk with an indicator of the collision, send it to the consumer, and continue with encoding the next chunk. For example, in the case of a client application encoding data to be stored in a database, the client can mark the chunk with an indicator of the collision, send it to the database server, and continue with encoding the next chunk.
A Method for Encoding XML Data in Compact Format
At block 102, encoding of a chunk of XML data is started. For example, encoding is started for a subset of data of a specific size, from an XML document or other hierarchically structured data.
At block 104, the XML tags are tokenized. For example, each tag is input into a hash algorithm to generate a token identifier for the tag. Thus, the character (e.g., textual) representation of the tag is replaced with the short token identifier. As described, unique tokens can be generated for a document, or for a namespace, or for a database.
At decision block 106, it is determined whether any tags are repeated. If no tag repeats a previous tag (e.g., a previous sibling tag or a previous tag in document order), then processing is passed to decision block 110. If tags are repeated, then array mode optimization is applied at block 108 to avoid repeating, in the encoded data, tokens/operands that are repeated in the un-encoded data. That is, a particular opcode is used to represent that the applicable element or attribute is the same as a previous element or attribute. Processing is then passed to decision block 110.
At decision block 110, it is determined whether a schema is available that corresponds to the XML document. If there is no schema available, then processing is passed to decision block 118. If there is a schema available, then data values are encoded in their native type format based on the schema, at block 112, to encode the value in a more compact machine representation than a simple binary representation of the value's characters.
At decision block 114, it is determined whether the schema constrains any XML elements to a particular order and cardinality. If the schema does not so constrain any elements, then processing is passed to decision block 118. If the schema does so constrain some elements, then schema sequential optimization is applied at block 116, to exploit knowledge of the schema-specified constraints to avoid the use of some opcodes and operands. Processing passes to decision block 118.
At decision block 118, it is determined whether the XML document is sectioned (sectioning is briefly described herein and in U.S. patent application Ser. No. 11/083,828). If the document is not sectioned, then processing passes to block 122 to go to the next chunk, and then back to block 102 to start processing the next chunk, if there is one. If the document is sectioned, then section reference opcodes and/or collection reference opcodes are used at block 120, to compact the section references while still providing the information necessary to fetch the data for the referenced sections. Processing then passes to block 122 to go to the next chunk, and then to block 102 to start processing the next chunk, if there is one.
Once all of the data is encoded, then processing can stop for that document. As mentioned, the data can be encoded, and transmitted or stored, chunk by chunk. Therefore, there could be a block between block 120 and block 122, at which the encoded chunk is transmitted or stored, prior to or concurrent with the start of processing the next chunk.
Backwards Compatibility in the Presence of Schema Evolution
Techniques are described that enable backwards-compatible schema evolution, in the context of (a) native type encoding and (b) schema sequential optimization. The result is that instances of data that were encoded with a previous version of a schema can still be decoded based on newer versions of schemas.
Native Type Encoding
In one embodiment, to provide backwards compatibility in scenarios in which a data type may be relaxed from one schema version to the next, the data type used to encode a value for an element or attribute is itself encoded into the format. For example, a value of data type “number” is encoded with the opcode “VALNUM”, rather than “VAL”, to indicate that the value is encoded in the “number” data type. Consequently, if the next schema version changes the data type of the value from “number” to “string”, then a data instance based on the previous schema version can still be decoded, because the encoding data type is declared in the encoded data. Hence, existing encoded instances do not require change in order to conform to the new schema. However, if the existing instances are encoded again, then conformance with the new schema is recommended.
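By way of a non-limiting illustration, the following sketch (in Python) shows a decoder that interprets a stored value according to the data type declared by its opcode rather than according to the current schema version; the opcode names follow the example above, and the byte layout is purely illustrative.

    def decode_value(opcode: str, payload: bytes):
        # The encoding data type travels with the value, so an instance encoded under
        # an older schema version (e.g., as a number) remains decodable even after the
        # schema relaxes that element's type (e.g., to a string).
        if opcode == "VALNUM":
            return int.from_bytes(payload, "big")
        if opcode == "VAL":
            return payload.decode("utf-8")
        raise ValueError(f"unknown value opcode: {opcode}")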
Schema Sequential Optimization
If a schema is changed such that the hierarchical structure of a corresponding document may change, then an existing instance using the schema sequential optimization may be corrupt, based on the new structure. That is, the known schema structural constraints that were exploited in application of schema sequential optimization to the existing instance may have been modified or eliminated in the new schema version and, therefore, can no longer be relied upon.
In one embodiment, for scenarios in which a schema is changed such that the hierarchical structure of a corresponding document may change, system level schema annotations are added to the new schema version at the time of versioning. These annotations can be in the form of an XML representation according to a standard XML annotation. These annotations are a mapping of the hierarchical positions of elements according to the structural constraints specified in the original schema. Returning to the schema sequential optimization example, where the schema specifies that the root must have only one reference to element “A” followed by only one reference to element “B”, this specification is annotated to the new schema. Consequently, previous instances of data that rely on a child element “B” following a child element “A” can still be decoded using the new schema version because the new version contains the annotation indicating the old constraint. However, re-encoding the data based on the new schema version would no longer be able to rely on the obsolete constraint specified in the original schema version.
In one embodiment, the annotations are in the form of a “kidList”, which is a list of identifiers (“kidNums”) for child elements of the root. For example, the new schema may be annotated to include “kid 1=A” to indicate that tag <A> is the first child of the root and “kid 2=B” to indicate that tag <B> is the second child of the root, which follows the first child.
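By way of a non-limiting illustration, the following sketch (in Python) shows how a decoder might use the kidList mapping carried in the annotated schema to restore element names for positional (operand-free) entries encoded under the prior schema version; representing the annotation as a dictionary is purely illustrative.

    kid_list = {1: "A", 2: "B"}   # from the annotation: kid 1=A, kid 2=B

    def element_name(kid_num: int) -> str:
        # Positional entries in an existing instance carry no token operand; the
        # annotation restores the mapping from position to element name.
        return kid_list[kid_num]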
Other changes to a schema that still meet the backwards-compatibility goal are also allowed. For example, (a) adding an optional element or attribute in a new schema version, (b) adding new values, and (c) increasing the maxOccurs and maxLength facets are all schema evolutions that are inherently backwards-compatible.
Hardware Overview
Computer system 200 may be coupled via bus 202 to a display 212, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 214, including alphanumeric and other keys, is coupled to bus 202 for communicating information and command selections to processor 204. Another type of user input device is cursor control 216, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 204 and for controlling cursor movement on display 212. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
The invention is related to the use of computer system 200 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 200 in response to processor 204 executing one or more sequences of one or more instructions contained in main memory 206. Such instructions may be read into main memory 206 from another machine-readable medium, such as storage device 210. Execution of the sequences of instructions contained in main memory 206 causes processor 204 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using computer system 200, various machine-readable media are involved, for example, in providing instructions to processor 204 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 210. Volatile media includes dynamic memory, such as main memory 206. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 202.
Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 204 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 200 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 202. Bus 202 carries the data to main memory 206, from which processor 204 retrieves and executes the instructions. The instructions received by main memory 206 may optionally be stored on storage device 210 either before or after execution by processor 204.
Computer system 200 also includes a communication interface 218 coupled to bus 202. Communication interface 218 provides a two-way data communication coupling to a network link 220 that is connected to a local network 222. For example, communication interface 218 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 218 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 218 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 220 typically provides data communication through one or more networks to other data devices. For example, network link 220 may provide a connection through local network 222 to a host computer 224 or to data equipment operated by an Internet Service Provider (ISP) 226. ISP 226 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 228. Local network 222 and Internet 228 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 220 and through communication interface 218, which carry the digital data to and from computer system 200, are exemplary forms of carrier waves transporting the information.
Computer system 200 can send messages and receive data, including program code, through the network(s), network link 220 and communication interface 218. In the Internet example, a server 230 might transmit a requested code for an application program through Internet 228, ISP 226, local network 222 and communication interface 218.
The received code may be executed by processor 204 as it is received, and/or stored in storage device 210, or other non-volatile storage for later execution. In this manner, computer system 200 may obtain application code in the form of a carrier wave.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application claims the benefit of priority to U.S. Provisional Patent Application No. 60/664,003 filed on Mar. 21, 2005, entitled “A Mechanism for Efficient Schema-Based Binary Encoding of XML”, the content of which is incorporated by this reference in its entirety for all purposes as if fully disclosed herein.