The present disclosure relates in general to data compression and databases, and more specifically to methods of compression for use in in-memory databases as well as document databases.
Computers are powerful tools for storing and providing access to vast amounts of information, and databases are a common mechanism for storing information on computer systems while providing easy access to users. Typically, a database is an organized collection of information stored as “records” having “fields” of information (e.g., a restaurant database may have a record for each restaurant in a region, where each record contains fields describing characteristics of the restaurant, such as name, address, and type of cuisine).
Databases often use clusters of computers in order to store and access large amounts of data, which may require a large amount of information storage space. Compression may be used to reduce the amount of storage space necessary to host the information, but it may increase the computational load significantly because many common compression methods require the entire record, or many records, to be decompressed every time the data is accessed.
As such, there is a continuing need for improved methods of storing and retrieving data at high speeds at a large scale.
Disclosed herein are methods for compressing structured or semi-structured data, though it should be appreciated that a variety of suitable compression algorithms may be utilized (i.e., no particular compression algorithm is required). System and method embodiments described herein may apply a combination of suitable data compression processes to each field of a database, such that a compressed database record achieves a compression ratio comparable to commercially accepted ratios, while still allowing decompression to occur only for the records and fields of interest (i.e., only decompressing data records or fields satisfying a database search query). Implementing compression techniques that facilitate selective decompression of records or fields allows for horizontal, record-based storage of the compressed data, while also providing columnar or vertical access to the fields of the data on decompression. This provides the reduced storage benefits of compression, while avoiding much of the compute power and latency associated with decompression when only specific fields are to be decompressed.
Systems and methods described herein may also implement N-gram compression techniques. Conventionally, N-gram compression is restricted either to chains of letters (successive characters of a string) or to chains of words (successive strings in text); it cannot compress chains of letters, individual words, and chains of words within a single implementation. Described herein is the use of N-gram-related compression for columnar compression during record storage, thereby allowing good overall compression while still providing low-latency access to a single record, or to a single field within a record, in response to search queries.
Systems and methods described herein present embodiments of compression techniques as applied to in-memory databases and document databases. However, it should be appreciated that such techniques, and other aspects of the systems and methods, may be applied to data compression more generally.
In one embodiment, a computer-implemented method comprises determining, by a computer, a compression technique to apply to one or more data elements received in a set of data elements, wherein the computer uses a schema to determine the compression technique to apply to each data element based on a data type of the data element; compressing, by the computer, a data element using the compression technique defined by the schema, wherein the compression technique compresses the data element such that the data element can be individually decompressed when returned in response to a search query; storing, by the computer, each compressed data element in a field of a record that stores data of the data type of the data element; associating, by the computer, a field notation in a reference table for each field according to the schema, wherein the field notation identifies the data type of the field; querying, by the computer, the database for a set of one or more data elements satisfying a search query received from a search conductor; and decompressing, by the computer, each of the one or more data elements satisfying the search query using the corresponding compression technique, responsive to identifying the set of one or more data elements satisfying the search query, wherein each data element not satisfying the search query remains compressed.
In another embodiment, a computing system comprises one or more nodes storing one or more collections, each collection comprising a set of one or more records, each record comprising a set of fields storing data; and a compression processor compressing one or more of the fields according to a schema that is associated with a collection.
Numerous other aspects, features and benefits of the present disclosure may be made apparent from the following detailed description.
The present disclosure can be better understood by referring to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. In the figures, reference numerals designate corresponding parts throughout the different views.
As used herein, the following terms have the following definitions:
“Node” refers to a computer hardware configuration suitable for running one or more modules.
“Cluster” refers to a set of one or more nodes.
“Module” refers to a computer software component suitable for carrying out one or more defined tasks.
“Collection” refers to a discrete set of records.
“Record” refers to one or more pieces of information that may be handled as a unit.
“Field” refers to one data element within a record.
“Object” refers to a logical collection of fields within a data record.
“Array” refers to an ordered list of data values within a record.
“Node,” when used in reference to the structure of a record, refers to a field, object, or array within the record.
“Partition” refers to an arbitrarily delimited portion of records of a collection.
“Schema” refers to data describing one or more characteristics of a collection, record, or field.
“Compress” may refer to reducing the amount of electronic data needed to represent a value.
“Dictionary” may refer to any computerized list suitable for use as a value reference.
“Token Table” refers to a table defining one or more simpler values for one or more other more complex values.
“N-gram” refers to N successive integral units of data, which can be characters, words, or groups of words, where N is greater than or equal to 1 (e.g., in the sentence “The quick brown fox jumped over the lazy dog,” “the,” “e,” “he,” and “brown fox” are all valid N-grams).
“N-gram Table” refers to a table defining one or more simpler values for one or more other more complex values.
“Search Conductor” or “S.C.” refers to a module configured to at least run one or more queries on a partition and return the search results to one or more search managers.
“Partitioner” refers to a module configured to at least divide one or more collections into one or more partitions.
“Database” refers to any system including any combination of clusters and modules suitable for storing one or more collections and suitable to process one or more queries.
“Query” refers to a request to retrieve information from one or more suitable partitions or databases.
“Memory” refers to any hardware component suitable for storing information and retrieving said information at a sufficiently high speed.
“Fragment” refers to separating records into smaller records until a desired level of granularity is achieved.
“JSON” refers to the JavaScript Object Notation, a data-interchange format.
“BSON” refers to Binary JSON, a data-interchange format.
“YAML” refers to the coding language “YAML Ain't Markup Language,” a data-interchange format.
“Document” refers to a group of structured or semi-structured information.
“Document Database” refers to a document-oriented database, designed for storing, retrieving, and managing document-oriented information.
The present disclosure describes methods for compressing structured or semi-structured data. In one or more embodiments, one or more collections may include structured or semi-structured data that may include any number of records and any suitable number of fields, where the collections may be described using any suitable schema that may define the data structure and the compression method used for one or more fields.
In one or more embodiments, one or more fields may include information that may have a semantic similarity. In one or more embodiments, the fields may be compressed using one or more methods suitable for compressing the type of data stored in the field, where token tables, N-gram compression, serial day number compression, binary number compression, or any other suitable method may be used.
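As a rough illustration of this schema-driven selection (a minimal sketch; the function names, schema layout, and method labels below are assumptions, not part of the disclosure), a per-field dispatcher might look like the following, with the serial-day-number and n-gram paths sketched later in this description:

```python
def compress_field(field_name, value, schema, token_tables):
    """Compress one field value with the method named in the schema."""
    method = schema[field_name]["compression"]
    if method == "token_table":
        index = token_tables[field_name].get(value)
        if index is not None:
            return ("token", index)
        return ("raw", value.encode("utf-8"))        # fall back when no token exists
    if method == "binary_number":
        return ("number", int(value).to_bytes(4, "big", signed=True))
    return ("raw", str(value).encode("utf-8"))       # default fallback in this sketch

schema = {"LastName": {"compression": "token_table"},
          "ZipCode":  {"compression": "binary_number"}}
token_tables = {"LastName": {"SMITH": 0, "JONES": 1}}

print(compress_field("LastName", "JONES", schema, token_tables))   # ('token', 1)
print(compress_field("ZipCode", "30308", schema, token_tables))    # ('number', b'\x00\x00vd')
```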
In one or more embodiments, one or more records of a collection may include data that may be better compressed after fragmentation, and fragmented data may be stored contiguously in the same partition. In one or more embodiments, fragmented record identifiers may be used to identify the record from which each fragment was derived, so that the system remains aware that the fragments originate from the same original record in the collection. Data that would be duplicated across fragments has only a single representation in the compressed form, with an anchor to which other fragments can refer.
In one or more embodiments, a record may contain an array of data values. Arrays may contain zero or more values. Values may be fields, objects, or other arrays.
In one or more embodiments, one or more data values may be grouped as an object. Objects may contain fields, other objects, or arrays, and may be elements of other objects or arrays. Objects within a record may be compressed further by including an anchor value that refers the system to another object or fragment in the partition with identical values. When a module outputs data to other modules in the system, the module may replace the referring object with the actual object values.
In one or more embodiments, input records may be semi-structured data and may be represented using JSON, BSON, YAML or any other suitable data format.
In one or more embodiments, one or more data fields may be normalized prior to compression.
In one or more embodiments, fields including data with a suitable semantic similarity may be compressed using any suitable token table. When one or more records are added to a field with an associated token table, the system determines whether the data matches previously encountered data in the token table. In one or more embodiments, if the data does not match, the system may use an alternate compression method or may update the token table. In other embodiments, the token table may be updated periodically.
In one or more embodiments, fields including data with a suitable semantic similarity may be compressed using any suitable n-gram table. When one or more records are added to a field with an associated n-gram table, the system determines whether the data matches previously encountered data in the n-gram table. In one or more embodiments, if the data does not match, the system may use an alternate compression method or may update the n-gram table. In other embodiments, the n-gram table may be updated periodically.
In one or more embodiments, the most frequently occurring values may be stored in the lower numbered indices, which may allow for the most frequently used values to be represented with fewer bytes.
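One way to realize this (a minimal sketch, assuming a two-level encoding in which one reserved lead-byte value introduces a two-byte index; the disclosure does not mandate this particular layout) is a variable-length index encoding such as:

```python
def encode_index(index):
    """One byte for indices 0-254; value 255 (an assumed marker) introduces two more bytes."""
    if index < 255:
        return bytes([index])
    return bytes([255]) + (index - 255).to_bytes(2, "big")

def decode_index(buf):
    """Return (index, number of bytes consumed)."""
    if buf[0] < 255:
        return buf[0], 1
    return 255 + int.from_bytes(buf[1:3], "big"), 3

assert decode_index(encode_index(7)) == (7, 1)        # frequent value: one byte
assert decode_index(encode_index(1000)) == (1000, 3)  # rarer value: three bytes
```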
In one or more embodiments, a longer value may be preferred over a shorter value for inclusion in the token table, which may allow for greater compression, since a longer value is replaced by a stored index of the same size as that used for a shorter value.
In one or more embodiments, a longer value may be preferred over a shorter value for inclusion in the n-gram table, which may allow for greater compression, since a longer value is replaced by a stored index of the same size as that used for a shorter value.
In one or more embodiments, records may include zero or more record descriptor bytes, any suitable number of field descriptor bytes, any suitable number of array descriptor bytes, any suitable number of object descriptor bytes, and any suitable number of bytes representing the data associated with the record.
In one or more embodiments, data in a field associated with a token table may use one or more bits to state whether the information stored in the record is compressed using the compression method defined in the schema or whether another compression method, such as n-gram compression, was used.
In one or more embodiments, length or offset data included in the one or more record descriptor bytes, field descriptor bytes, array descriptor bytes, and/or object descriptor bytes may be used to navigate through the compressed data without decompressing the records, arrays, objects, or fields.
In one or more embodiments, one or more of a collection of data records, one or more schema, one or more dictionaries, one or more n-gram tables, and one or more token tables may be stored in a hardware Storage Unit 102 in Compression Apparatus 100. Any data stored in Storage Unit 102, as well as any suitable modules, including Fragmentation Modules, Compression Modules, and Indexing Modules, amongst others, may be loaded into RAM 104 in Compression Apparatus 100. In one or more embodiments, Compression Apparatus 100 may include one or more suitable CPUs 106.
In one or more embodiments, one or more collections may include structured or semi-structured data as shown in Collection Data Table 200. In one or more embodiments, the structured data may contain any number of fields, and the semi-structured data, such as data represented using JSON, BSON, YAML, or any other suitable format, may contain any suitable number of fields, arrays, or objects. Collections may be described using any suitable schema, where a suitable schema may define the data structure and the compression method used for one or more fields in the schema.
In one or more embodiments, one or more fields may include data values that may have a semantic similarity. In one or more embodiments, semantically similar data may include first names, last names, date of birth, and citizenship, amongst others. In one or more embodiments, a compression apparatus may compress one or more fields using one or more methods suitable for compressing the type of data stored in the field, where the compression apparatus may use custom token tables. In one or more embodiments, a compression apparatus may use n-gram compression as a default compression method for any number of fields with data not associated with a desired method of compression.
In one or more embodiments, data in one or more fields of a collection may be better compressed after fragmentation. This situation typically arises where fields have multiple values per record, and a compression apparatus may achieve better matching and scoring by de-normalizing those records into multiple record fragments. Examples of data suitable for fragmentation may include full names, addresses, phone numbers, and emails, amongst others. In one or more embodiments, a compression apparatus may fragment one or more data prior to compression. A compression apparatus may store fragmented data contiguously in the same partition. In one or more embodiments, a compression apparatus may use fragmented record identifiers to identify the record from which each fragment was derived, ensuring the system remains aware that the fragments originate from the same original record in the collection.
In one or more embodiments, a record may contain an array of data values. Arrays may contain zero or more values and array values may have a null value to represent a missing value while preserving the proper order of values.
In one or more embodiments, a compression apparatus may group one or more data fields as an object. Objects may contain other objects and may be elements in an array. A compression apparatus may further compress objects within a record by including a value that refers the system to another object in the partition with identical values. When a module outputs data to other modules in the system, the module may replace the referring object with the actual object values.
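A minimal sketch of this anchor-based deduplication is shown below; the helper names and sample values are hypothetical, and a real implementation would operate on the compressed partition layout rather than Python dictionaries:

```python
def deduplicate_objects(records):
    """Store each distinct object once; later identical objects become anchor references."""
    seen, stored = {}, []
    for rec_id, obj in records:
        key = tuple(sorted(obj.items()))
        if key in seen:
            stored.append((rec_id, {"anchor": seen[key]}))
        else:
            seen[key] = rec_id
            stored.append((rec_id, obj))
    return stored

def resolve(stored):
    """When outputting data, replace each anchor with the actual object values."""
    by_id = dict(stored)
    return [(rid, by_id[obj["anchor"]] if "anchor" in obj else obj) for rid, obj in stored]

records = [("53.1", {"Street": "12 OAK ST", "City": "SPRINGFIELD"}),
           ("53.2", {"Street": "12 OAK ST", "City": "SPRINGFIELD"})]
stored = deduplicate_objects(records)   # second object is stored as {"anchor": "53.1"}
print(resolve(stored))
```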
In one or more embodiments, a compression apparatus may compress one or more data in fields representing numbers using known binary compression methods.
In one or more embodiments, a compression apparatus may compress one or more data in fields representing dates using known Serial Day Number compression algorithms.
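For illustration, a Serial Day Number conversion might look like the following minimal sketch; the epoch used here is an assumption, since the disclosure does not fix one:

```python
from datetime import date, timedelta

EPOCH = date(1900, 1, 1)   # assumed epoch, for illustration only

def date_to_serial_day(iso_date):
    """Store a date field as a single integer: days elapsed since the epoch."""
    return (date.fromisoformat(iso_date) - EPOCH).days

def serial_day_to_date(day_number):
    """Recover the original date from its serial day number."""
    return (EPOCH + timedelta(days=day_number)).isoformat()

assert serial_day_to_date(date_to_serial_day("1980-06-15")) == "1980-06-15"
```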
In one or more embodiments, a compression apparatus may normalize one or more data prior to compression. Data suitable for normalization prior to compression may include street suffixes and prefixes, name suffixes and prefixes, and post/pre directional information (e.g., east, north, and west), amongst others.
In one or more embodiments, a compression apparatus may compress fields including data with a suitable semantic similarity using any suitable token table, where suitable token tables may be similar to Token Table 300.
In one or more embodiments, when one or more records are added to a field with an associated token table, the system determines whether the data matches previously encountered data in the token table. In one or more embodiments, if the data does not match, the system may use an alternate compression method instead of token tables. In one or more other embodiments, if the data does not match, the system may update its token table so as to include the data.
In one or more embodiments, the token table may be updated periodically and stored data may be re-evaluated to determine if compressibility has improved. If the compressibility of one or more data has improved, the system may decompress and re-compress any suitable data.
In one or more embodiments, the most frequently occurring values may be stored in the lower numbered indices, which may allow for the most frequently used values to be represented with fewer bytes.
In one or more embodiments, a longer value may be preferred over a shorter value for inclusion in the token table, which may allow for greater compression, since a longer value is replaced by a stored index of the same size as that used for a shorter value.
In one or more embodiments a special index value may be reserved to indicate that no token data exists for the data value.
In one or more embodiments, a compression apparatus may compress fields including data with a suitable semantic similarity using any suitable n-gram table, where suitable n-gram tables may be similar to N-gram Table 400.
In one or more embodiments, when one or more records are added to a field with an associated n-gram table, the system determines whether the data matches previously encountered data in the n-gram table. In one or more embodiments, if the data does not match, the system may use an alternate compression method instead of n-gram tables. In one or more other embodiments, if the data does not match, the system may update its n-gram table so as to include the data.
In one or more embodiments, the n-gram table may be updated periodically and stored data may be re-evaluated to determine if compressibility has improved. If the compressibility of one or more data has improved, the system may decompress and re-compress any suitable data.
In one or more embodiments, the most frequently occurring values may be stored in the lower numbered indices, which may allow for the most frequently used values to be represented with fewer bytes.
In one or more embodiments a special index value may be reserved to indicate that no n-gram data exists for the data value.
In Record Representation 500, each row value in the record index column may include zero or more record descriptor bytes with information about the record, including the length, offset, or the record's location in memory amongst others. In one or more embodiments, each data node (array, field, or object) present in the record may include zero or more descriptor bytes, where suitable information about the node may be included, including a node identifier, the length of the stored data, and number of elements of the array if applicable. Following the zero or more node descriptor bytes, any suitable number of bytes may represent the data associated with the record. In one or more embodiments, the data may include one or more bits describing the contents of the data including array separation marker bits.
In one or more embodiments, data in a field associated with a token table may use one or more bits to state whether the information stored in the record is represented in a suitable Token Table, or whether another suitable compression method, such as N-gram compression, was used.
In one or more embodiments, a system may use length or offset data included in the one or more record descriptor bytes and/or the one or more node (array, object, or field) descriptor bytes to navigate through the compressed data without decompressing the records or nodes (arrays, objects, or fields).
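A minimal sketch of such navigation is shown below, assuming a simplified layout of [node id][payload length][payload] for each node; the actual descriptor-byte format may differ:

```python
def find_node(buf, wanted_id):
    """Return the still-compressed payload of the wanted node, skipping all others."""
    pos = 0
    while pos < len(buf):
        node_id, length = buf[pos], buf[pos + 1]
        if node_id == wanted_id:
            return buf[pos + 2: pos + 2 + length]   # only this payload need be decompressed
        pos += 2 + length                           # skip the node without decompressing it
    return None

record = bytes([1, 2, 0x10, 0x20,           # node 1: 2 payload bytes
                7, 3, 0xAA, 0xBB, 0xCC])    # node 7: 3 payload bytes
assert find_node(record, 7) == bytes([0xAA, 0xBB, 0xCC])
assert find_node(record, 9) is None
```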
In one or more embodiments, any suitable module in a system may index or compress data, including one or more search conductors or one or more partitioners in a MEMDB system.
In one or more embodiments, a compression apparatus employing one or more compression methods disclosed herein allows data to be compressed at rates similar to other prominent compression methods while allowing data to be decompressed and/or accessed at the node (array, object, or field) level.
In one or more embodiments, a compression apparatus employing one or more compression methods disclosed herein allows the system to skip individual records and nodes (arrays, objects, or fields) when accessing information in the records.
In one or more embodiments, a compression apparatus employing one or more compression methods disclosed herein allows the system to exit decompression of a record early once the target fields are found.
Example #1 illustrates a method for compressing names using a compression apparatus.
In this example, a data set includes a collection including one million full name records with 350 unique first names and 300 unique last names represented. The records were fragmented into a first name field and a last name field.
The individual tokens were then weighted by the product of their frequency and length and ranked from highest to lowest. Tokens with a weight less than a certain threshold were discarded to reduce the token table size.
A token table was then generated for each field by assigning indices so as to maximize the aggregate space savings, where the space savings for an individual token is the product of its frequency and the difference between its length and its stored index length.
The number of entries associated with single-byte indices was varied from 1 to 255 inclusive during the maximization procedure.
The algorithm guarantees that the generated token table is optimal: the highest savings go to the single-byte stored index entries, while subsequent values compress to two or more bytes. Short or infrequent entries may realize no savings and are not included in the token table; these values revert to another compression method, such as n-gram compression.
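A rough sketch of this construction is shown below; the helper names, the sample frequencies, and the assumed two-level index encoding are illustrative and not taken from the example itself:

```python
def build_token_table(freq, min_weight=100):
    """freq maps token -> occurrence count; returns (token -> index, aggregate savings)."""
    # Discard low-weight tokens (weight = frequency x length) and rank by weight.
    tokens = sorted(((t, f) for t, f in freq.items() if f * len(t) >= min_weight),
                    key=lambda tf: tf[1] * len(tf[0]), reverse=True)

    def savings(split):
        # First `split` tokens get 1-byte indices, the rest 2-byte indices; entries that
        # would not save space contribute nothing. Capacity reflects an assumed two-level
        # code: `split` one-byte codes plus (256 - split) * 256 two-byte codes.
        capacity = split + (256 - split) * 256
        return sum(max(0, f * (len(t) - (1 if i < split else 2)))
                   for i, (t, f) in enumerate(tokens[:capacity]))

    best_split = max(range(1, 256), key=savings)    # vary 1-byte entries from 1 to 255
    table = {t: i for i, (t, f) in enumerate(tokens)
             if f * (len(t) - (1 if i < best_split else 2)) > 0}
    return table, savings(best_split)

names = {"WILLIAM": 9000, "ELIZABETH": 4000, "JO": 12000, "Q": 40}
print(build_token_table(names))   # ({'WILLIAM': 0, 'ELIZABETH': 1, 'JO': 2}, 98000)
```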
Example #2 illustrates a method for compressing text using a compression apparatus.
In this example, a large body of text was analyzed for the frequency of n-grams, where an n-gram can represent a successive sequence of characters, words, or groups of words. The text is usually acquired by analyzing a large column of field data, so that columnar compression results are achieved within a field-by-field horizontal compression.
The individual n-grams were then weighted by the product of their frequency and length and ranked from highest to lowest. N-grams with a weight less than a certain threshold were discarded to reduce the n-gram table size.
An n-gram table was then generated for the field by assigning indices so as to maximize the aggregate space savings, where the space savings for an individual n-gram is the product of its frequency and the difference between its length and its stored index length.
The number of entries associated with single-byte indices was varied from 1 to 255 inclusive during the maximization procedure.
The algorithm guarantees that the generated n-gram table is optimal: the highest savings go to the single-byte stored index entries, while subsequent values compress to two or more bytes. Infrequent entries may realize no savings and are not included in the n-gram table; these values revert to some other method of basic storage.
An example of some of the n-grams generated in the table via this method is as follows:
During compression, the field data is compressed from beginning to end using a greedy algorithm that replaces as much data as possible with an indexed value until all of the data is consumed.
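A minimal sketch of this greedy replacement is shown below; the n-gram table contents are made up for illustration:

```python
NGRAMS = {"the ": 0, "quick ": 1, "brown ": 2, "fox": 3}   # made-up table

def ngram_compress(text, table=NGRAMS):
    """Greedily replace the longest matching n-gram at each position with its index."""
    out, pos = [], 0
    while pos < len(text):
        match = max((g for g in table if text.startswith(g, pos)), key=len, default=None)
        if match is None:
            out.append(("lit", text[pos]))   # no table entry covers this character
            pos += 1
        else:
            out.append(("idx", table[match]))
            pos += len(match)
    return out

print(ngram_compress("the quick brown fox"))
# [('idx', 0), ('idx', 1), ('idx', 2), ('idx', 3)]
```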
Example #3 is a method for compressing semi-structured data in JSON documents using a compression apparatus.
In this example, JSON input documents are compressed using the following schema, with token table compression for the Title, FirstName, LastName, NameSuffix, and PhoneType fields, Serial Day Number compression for the DateOfBirth field, and number n-gram compression for the PhoneNumber field:
The input record below requires 266 bytes to be represented in JSON (after removing unnecessary whitespace). After compressing, using the compression methods described in the schema above, the resulting compressed record requires only 44 bytes.
The input record below requires 108 bytes to be represented in JSON (after removing unnecessary whitespace). After compressing, using the compression methods described in the schema above, the resulting compressed record requires only 13 bytes.
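The schema listing and input records themselves are not reproduced here. Purely as a hypothetical illustration of the kind of schema Example #3 describes (the key names and method labels below are assumptions, not the original listing), such a schema might map each field to its compression method:

```python
example_schema = {
    "Title":       {"type": "string", "compression": "token_table"},
    "FirstName":   {"type": "string", "compression": "token_table"},
    "LastName":    {"type": "string", "compression": "token_table"},
    "NameSuffix":  {"type": "string", "compression": "token_table"},
    "DateOfBirth": {"type": "date",   "compression": "serial_day_number"},
    "PhoneType":   {"type": "string", "compression": "token_table"},
    "PhoneNumber": {"type": "string", "compression": "number_ngram"},
}
```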
Example #4 is an example of fragmenting a record. In this example, the 53rd record of a collection includes data for a couple, Bob and Carol Wilson, having a first and second address. In this example, the record is fragmented as shown in the following table.
The record index is maintained to ensure the system remains aware that the fragmented records originate from the same original record in the collection. In this example, the fragmented records further compress the data by including a value that refers the system to the previous record in the partition, i.e., when the system accesses the name of record 53.2, the value refers the system back to the value for the name in record 53.1. When the system in Example #4 outputs data to other modules in the system, even in compressed format, the module replaces the referring values with the actual values.
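The fragmentation table itself is not reproduced here. As a hypothetical sketch of the mechanism (the sample values and helper names are made up), fragment identifiers carry the parent record index, and a repeated value in a later fragment is stored as a reference back to an earlier fragment:

```python
record_53 = {"name": "BOB AND CAROL WILSON",                 # sample values, made up
             "addresses": ["12 OAK ST", "4 ELM AVE"]}

def fragment(record_index, record):
    """Split a multi-address record into fragments .1, .2, ... keeping the parent index."""
    fragments = []
    for i, addr in enumerate(record["addresses"], start=1):
        # The name is stored once, in fragment .1; later fragments refer back to it.
        name = record["name"] if i == 1 else {"ref": f"{record_index}.1"}
        fragments.append({"record": f"{record_index}.{i}", "name": name, "address": addr})
    return fragments

def resolve(fragments):
    """On output, referring values are replaced with the actual values."""
    by_id = {f["record"]: f for f in fragments}
    return [dict(f, name=by_id[f["name"]["ref"]]["name"]) if isinstance(f["name"], dict)
            else f for f in fragments]

print(resolve(fragment(53, record_53)))
```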
Example #5 is an example of compression for archiving semi-structured data. In this example, JSON documents from a document-oriented database such as MongoDB, Cassandra, or CouchDB are compressed using a schema that defines all the desired fields, including the unique identifier of each JSON document. An index is then created that maps the unique identifier to the compressed record. The resulting compressed records and index consume less than 15% of the storage required for the original document-oriented database, and each JSON document, or select fields of a document, can be immediately accessed without decompressing unwanted data.
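A minimal sketch of such an identifier-to-record index is shown below; zlib stands in for the record compression described above, and the single-buffer layout is an assumption for illustration:

```python
import zlib

archive, index = bytearray(), {}   # one archive buffer plus an id -> (offset, length) index

def archive_document(doc_id, compressed_record):
    index[doc_id] = (len(archive), len(compressed_record))
    archive.extend(compressed_record)

def fetch_document(doc_id):
    offset, length = index[doc_id]
    return bytes(archive[offset: offset + length])

archive_document("doc-1", zlib.compress(b'{"FirstName":"BOB"}'))
archive_document("doc-2", zlib.compress(b'{"FirstName":"CAROL"}'))
print(zlib.decompress(fetch_document("doc-2")))   # only doc-2 is read and decompressed
```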
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, GPUs, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein.
When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable or processor-readable storage medium. A non-transitory computer-readable or processor-readable medium includes both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. A non-transitory processor-readable storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
This application is a continuation of U.S. patent application Ser. No. 14/557,900, filed Dec. 2, 2014, issuing May 5, 2015 as U.S. Pat. No. 9,025,892, entitled “DATA RECORD COMPRESSION WITH PROGRESSIVE AND/OR SELECTIVE DECOMPRESSION,” which is a non-provisional patent application claiming the benefit of U.S. Provisional Patent Application Ser. No. 61/910,873, entitled “DATA RECORD COMPRESSION WITH PROGRESSIVE AND/OR SELECTIVE DECOMPRESSION,” filed Dec. 2, 2013. Each of the above-referenced applications is incorporated by reference herein, in its entirety. This application is related to U.S. patent application Ser. No. 14/557,794, filed Dec. 2, 2014, entitled “METHOD FOR DISAMBIGUATING FEATURES IN UNSTRUCTURED TEXT”, and U.S. patent application Ser. No. 14/558,300, filed Dec. 2, 2014, entitled “EVENT DETECTION THROUGH TEXT ANALYSIS USING TRAINED EVENT TEMPLATE MODELS”, and U.S. patent application Ser. No. 14/557,807, filed Dec. 2, 2014, entitled “METHOD FOR FACET SEARCHING AND SEARCH SUGGESTIONS”, and U.S. patent application Ser. No. 14/558,254, filed Dec. 2, 2014, entitled “DESIGN AND IMPLEMENTATION OF CLUSTERED IN-MEMORY DATABASE”, and U.S. patent application Ser. No. 14/557,827, filed Dec. 2, 2014, entitled “REAL-TIME DISTRIBUTED IN MEMORY SEARCH ARCHITECTURE”, and U.S. patent application Ser. No. 14/557,951, filed Dec. 2, 2014, entitled “FAULT TOLERANT ARCHITECTURE FOR DISTRIBUTED COMPUTING SYSTEMS”, and U.S. patent application Ser. No. 14/558,009, filed Dec. 2, 2014, entitled “DEPENDENCY MANAGER FOR DATABASES”, and U.S. patent application Ser. No. 14/558,055, filed Dec. 2, 2014, entitled “PLUGGABLE ARCHITECTURE FOR EMBEDDING ANALYTICS IN CLUSTERED IN-MEMORY DATABASES”, and U.S. patent application Ser. No. 14/558,101, filed Dec. 2, 2014, entitled “NON-EXCLUSIONARY SEARCH WITHIN IN-MEMORY DATABASES”, all of which are incorporated herein by reference in their entirety.