The present disclosure relates in general to in-memory databases, and more specifically to non-exclusionary searching within in-memory databases.
Accessing data from structured and semi-structured sources may be simpler, more accurate, and much faster than accessing non-structured data. When performing a search over structured or semi-structured data by indicating key data fields, it is possible to get very accurate results in a very short time frame, but it is also possible that many records relevant to the query are excluded from the results list. This may happen because the records are stored in collections with different schemata, or because the records have missing or null fields corresponding to some of the fields specified in the query.
Therefore, there is a need for search methods with improved recall capabilities that allow mixing and matching records with different schemata.
Described herein are systems and methods providing a search paradigm that may be implemented for data storage systems, such as an in-memory database system, to provide users the ability to specify both a query algorithm and a detailed scoring and ranking algorithm, such that different algorithms may be applied to the separate aspects of a single search query. Nodes conducting the search query may then find each of the possible candidate records using each of the specified query algorithms (even if some fields are empty or not defined in a particular schema), and then score and rank the candidate records using the specified scoring and ranking algorithms. Conventional systems do not offer the ability to provide separate query and scoring algorithms within a single search query, such that each scoring algorithm may operate on a completely separate set of fields. Systems and methods described herein provide such approaches to reduce the burden of data preparation and to enable re-use of data for purposes not originally intended when the data was loaded.
Systems and methods described herein provide for non-exclusionary searching within clustered in-memory databases. The non-exclusionary search methods may allow the execution of searches where the results may include records in which fields specified in the query are not populated or not defined. The disclosed methods include the application of fuzzy indexing, fuzzy matching, and scoring algorithms, which enable the system to search, score, and compare records with different schemata. This significantly improves the recall of relevant records.
The system architecture of an in-memory database that may support the disclosed non-exclusionary search method may include any suitable combination of modules and clusters, including one or more of a system interface, a search manager, an analytics agent, a search conductor, a partitioner, a collection, a supervisor, and a dependency manager.
The system may score records against the one or more queries, where the system may score the match of one or more available fields of the records and may then determine a score for the overall match of the records. If some fields are missing, a penalty or lower score may be assigned to the records without excluding them. The system may determine whether the score is above a predefined acceptance threshold, where the threshold may be defined in the search query or may be a default value. In further embodiments, fuzzy matching algorithms may compare records temporarily stored in collections with the one or more queries being generated by the system.
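Purely by way of illustration, the following sketch (in Python, with hypothetical field names, weights, and penalty value not taken from this disclosure) shows one way a record might be scored field by field against a query, with a penalty rather than exclusion applied when a field is missing or not populated, and with an acceptance threshold deciding whether the record enters the results list.

    # Illustrative sketch only; field names, weights, and the penalty value are
    # hypothetical and not taken from the disclosure.
    from typing import Optional

    def field_score(query_value: str, record_value: Optional[str]) -> float:
        """Score a single field; a missing field receives a penalty, not exclusion."""
        if record_value is None:            # not populated or not defined
            return 0.25                     # penalized, but the record stays in play
        return 1.0 if query_value.lower() == record_value.lower() else 0.0

    def score_record(query: dict, record: dict, threshold: float = 0.5):
        scores = [field_score(v, record.get(k)) for k, v in query.items()]
        overall = sum(scores) / len(scores)  # overall match across available fields
        return overall if overall >= threshold else None  # None -> below acceptance threshold

    query  = {"FN": "John", "LN": "Smith", "PH": "555-1234-7890"}
    record = {"FN": "John", "LN": "Smith"}   # PH not populated
    print(score_record(query, record))       # 0.75 -> accepted despite the missing field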
Numerous other aspects, features and benefits of the present disclosure may be made apparent from the following detailed description taken together with the drawing figures.
The present disclosure can be better understood by referring to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. In the figures, reference numerals designate corresponding parts throughout the different views.
As used herein, the following terms have the following definitions:
“Database” refers to any system including any combination of clusters and modules suitable for storing one or more collections and suitable to process one or more queries.
“Query” refers to a request to retrieve information from one or more suitable databases.
“Memory” refers to any hardware component suitable for storing information and retrieving said information at a sufficiently high speed.
“Node” refers to a computer hardware configuration suitable for running one or more modules.
“Cluster” refers to a set of one or more nodes.
“Module” refers to a computer software component suitable for carrying out one or more defined tasks.
“Collection” refers to a discrete set of records.
“Record” refers to one or more pieces of information that may be handled as a unit.
“Partition” refers to an arbitrarily delimited portion of records of a collection.
“Search Manager”, or “S.M.”, refers to a module configured to at least receive one or more queries and return one or more search results.
“Analytics Agent”, “Analytics Module”, “A.A.”, or “A.M.”, refers to a module configured to at least receive one or more records, process said one or more records, and return the resulting one or more processed records.
“Search Conductor”, or “S.C.”, refers to a module configured to at least run one or more search queries on a partition and return the search results to one or more search managers.
“Node Manager”, or “N.M.”, refers to a module configured to at least perform one or more commands on a node and communicate with one or more headquarters.
“Supervisor” refers to a module configured to at least communicate with one or more components of a system and determine one or more statuses.
“Heartbeat”, or “HB”, refers to a signal communicating at least one or more statuses to one or more supervisors.
“Partitioner” refers to a module configured to at least divide one or more collections into one or more partitions.
“Dependency Manager”, or “D.M.”, refers to a module configured to at least include one or more dependency trees associated with one or more modules, partitions, or suitable combinations, in a system; to at least receive a request for information relating to any one or more suitable portions of said one or more dependency trees; and to at least return one or more configurations derived from said portions.
“Link on-the-fly module” refers to any linking module that performs data linkage as data is requested from the system rather than as data is added to the system.
“Schema” refers to a characteristic of a collection, partition or database which defines what fields should be in a record.
“Dictionary” refers to a centralized repository of information, which includes details about the fields in an in-memory database (MEMDB), such as meaning, relationships to other data, origin, usage, and format.
“Field” refers to a specific data value in a record.
“Not defined” refers to a field that is not part of a particular schema.
“Not populated” refers to fields that are part of the schema, but have no assigned values.
The present disclosure is here described in detail with reference to embodiments illustrated in the drawings, which form a part hereof. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The illustrative embodiments described in the detailed description are not meant to be limiting of the subject matter presented here.
An in-memory database is a database storing data in records controlled by a database management system (DBMS) configured to store data records in a device's main memory, as opposed to conventional databases and DBMS modules that store data in “disk” memory. Conventional disk storage requires processors (CPUs) to execute read and write commands to a device's hard disk, thus requiring CPUs to execute instructions to locate (i.e., seek) and retrieve the memory location for the data before performing some type of operation with the data at that memory location. In-memory database systems access data that is placed into main memory and then addressed accordingly, thereby reducing the number of instructions performed by the CPUs and eliminating the seek time associated with CPUs seeking data on a hard disk.
In-memory databases may be implemented in a distributed computing architecture, which may be a computing system comprising one or more nodes configured to aggregate the nodes' respective resources (e.g., memory, disks, processors). As disclosed herein, embodiments of a computing system hosting an in-memory database may distribute and store data records of the database among one or more nodes. In some embodiments, these nodes are formed into “clusters” of nodes. In some embodiments, these clusters of nodes store portions, or “collections,” of database information.
The present disclosure relates to methods for non-exclusionary searching within clustered in-memory databases. The disclosed non-exclusionary search methods include the execution of searches where the results may include records where fields specified in the query are not populated or defined. The disclosed methods also include the application of fuzzy matching and scoring algorithms, which enables the system to search, score and compare records from collections with different schemata.
In one or more embodiments, system interface 102 may be configured to feed one or more queries generated outside of the system architecture of MEMDB 100 to one or more search managers in a first cluster including at least a first search manager 104 and up to nth search manager 106. Said one or more search managers in said first cluster may be linked to one or more analytics agents in a second cluster including at least a first analytics agent 108 and up to nth analytics agent 110.
Search managers in said first cluster may be linked to one or more search conductors in a third cluster including at least a first search conductor 112 and up to nth search conductor 114. Search conductors in said third cluster may be linked to one or more partitioners 116, where partitions corresponding to at least a First Collection 118 and up to nth Collection 120 may be stored at one or more moments in time.
One or more nodes, modules, or suitable combination thereof included in the clusters included in MEMDB 100 may be linked to one or more supervisors 122, where said one or more nodes, modules, or suitable combinations in said clusters may be configured to send at least one heartbeat to one or more supervisors 122. Supervisor 122 may be linked to one or more dependency managers 124, where said one or more dependency managers 124 may include one or more dependency trees for one or more modules, partitions, or suitable combinations thereof. Supervisor 122 may additionally be linked to one or more other supervisors 122, where additional supervisors 122 may be linked to said clusters included in the system architecture of MEMDB 100.
The process may start with query received by search manager 202, in which one or more queries generated by an external source are received by one or more search managers. In some embodiments, these queries may be automatically generated by a system interface 102 in response to an interaction with a user. In one or more embodiments, the queries may be represented in a markup language or other suitable language for representing the parameters of a search query, including XML, JavaScript, or HTML. In one or more other embodiments, the queries may be represented in a structured data format, including embodiments where the queries are represented in YAML or JSON. In some embodiments, a query may be represented in a compact or binary format.
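For illustration only, a search query of the kind described above might be represented in JSON roughly as follows; the specific keys and the separation into query and scoring sections are hypothetical and not a required format.

    {
      "collections": ["person_records"],
      "query": {
        "operator": "OR",
        "fields": [
          {"name": "FN", "value": "John",  "fuzzy": true},
          {"name": "LN", "value": "Smith", "fuzzy": true}
        ]
      },
      "scoring": {
        "algorithm": "weighted_average",
        "weights": {"FN": 1.0, "LN": 2.0, "DOB": 1.5},
        "min_score": 0.5
      }
    }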
Afterwards, the received queries may be parsed by the search managers 204. This parsing may allow the system to determine whether field processing is desired 206. In one or more embodiments, the system may determine whether the processing is required using information included in the query. In one or more other embodiments, the one or more search managers may automatically determine which one or more fields are to undergo the desired processing.
If the system determines that field processing for the one or more fields is desired, the one or more search managers may apply one or more suitable processing techniques to the one or more desired fields, during search manager processes fields 208. In one or more embodiments, suitable processing techniques may include address standardization, geographic proximity or boundary analysis, and nickname interpretation, amongst others. In some embodiments, suitable processing techniques may include the extraction of prefixes from strings and the generation of non-literal keys that may later be used to apply fuzzy matching techniques.
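As a purely illustrative example of the prefix-extraction and non-literal-key processing mentioned above, the sketch below derives a prefix key and a simple consonant-skeleton key that could later support fuzzy matching; the disclosure does not prescribe these particular key functions.

    # Hypothetical field-processing helpers; the disclosure does not prescribe
    # these particular key functions.
    def prefix_key(value: str, length: int = 4) -> str:
        """Extract a fixed-length prefix usable as an index key."""
        return value.strip().lower()[:length]

    def nonliteral_key(value: str) -> str:
        """Generate a crude non-literal key: first letter plus consonant skeleton."""
        v = value.strip().lower()
        consonants = [c for c in v[1:] if c.isalpha() and c not in "aeiouy"]
        return (v[:1] + "".join(consonants))[:4]

    print(prefix_key("Jonathan"))     # 'jona'
    print(nonliteral_key("Smith"))    # 'smth'
    print(nonliteral_key("Smyth"))    # 'smth'  -> same key enables fuzzy look-up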
Then, when S.M. constructs search query 210, one or more search managers may construct one or more search conductor queries associated with the one or more queries. In one or more embodiments, the search conductor queries may be constructed so as to be processed as a stack-based search.
Subsequently, S.M. may send search conductor queries to S.C. 212. In some embodiments, one or more search managers may send the one or more search queries to one or more search conductors, where said one or more search conductors may be associated with collections specified in the one or more search queries.
Then, the one or more search conductors may apply any suitable Boolean search operators 214 (e.g., AND, OR, XOR) and index look-ups without excluding records based on the absence of specific fields. The search conductor may execute the user-provided or application-provided Boolean operators and index look-ups. Thus, embodiments may execute user queries implementing fuzzy indexes and ‘OR’ operators, instead of ‘AND’ operators, to obtain a candidate set of records that does not “exclude” potentially good results. Scoring features allow the best (i.e., most relevant) results to score highest and less-relevant records to score lower. In some cases, there are two stages to executing a search query: a search stage, in which Boolean operators, fuzzy indexes, and filters return a candidate set of potential results satisfying the search query; and a scoring stage, which applies one or more user-specified or application-specified scoring methods to score the records in the candidate set, so that the best results score high and poorer or less-relevant results below a given threshold can be excluded, returning only a reasonable result size. This approach may produce a very large candidate set of records to be scored; however, in-memory database systems may be fast enough to handle result sets whose sizes would be impractical for conventional systems. As a result, good results are not missed merely because some fields were empty or some data was noisy or erroneous.
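A minimal sketch of this two-stage approach, under the assumption of a simple inverted index keyed by (field, value) pairs and a caller-supplied scoring function, might look as follows; the data structures shown are hypothetical stand-ins.

    # Illustrative two-stage execution: OR-based candidate gathering, then scoring.
    # The inverted index and the scoring function are hypothetical stand-ins.
    def gather_candidates(query: dict, index: dict) -> set:
        """Stage 1: union (OR) of per-field postings; nothing is excluded yet."""
        candidates = set()
        for field, value in query.items():
            candidates |= index.get((field, value.lower()), set())
        return candidates

    def execute(query: dict, index: dict, records: dict, score, threshold: float = 0.5) -> list:
        """Stage 2: score every candidate; only now are low scorers trimmed."""
        scored = [(score(query, records[rid]), rid) for rid in gather_candidates(query, index)]
        return sorted([(s, rid) for s, rid in scored if s >= threshold], reverse=True)

    index = {("FN", "john"): {1}, ("LN", "smith"): {1, 2}}
    records = {1: {"FN": "John", "LN": "Smith"}, 2: {"LN": "Smith"}}
    sim = lambda q, r: sum(r.get(k, "").lower() == v.lower() for k, v in q.items()) / len(q)
    print(execute({"FN": "John", "LN": "Smith"}, index, records, sim))
    # [(1.0, 1), (0.5, 2)] -> record 2 is kept despite its missing FN field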
As mentioned, the search conductors may apply any suitable search filters 216 while not excluding records based on missing fields. The one or more search conductors may score 218 the resulting answer set of records against the one or more queries, where the search conductors may score the match of one or more fields of the records and may then determine a score for the overall match of each record. The search conductors may be capable of scoring records against one or more queries even where the queries include fields that are omitted from or not included in the records. In some embodiments, a search manager may send a query to a search conductor to be performed on a collection whose schema includes fewer or different fields than those defined in the query. In this case, the query may be reformed to modify those fields which do not conform to the schema of the collection being searched, to indicate that they are present for scoring purposes only. In some cases, the search manager can generate and/or modify the search query; that is, the search manager may build a query plan that may be tailored or adjusted to account for missing fields, or for fields that may not have an index defined in one or more collections.
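One hypothetical way a search manager might reform a query for a collection whose schema defines fewer or different fields is sketched below: query fields the collection cannot be searched on are retained but flagged for scoring purposes only, rather than being used to filter records out. The plan-building details are assumptions, not a prescribed implementation.

    # Hypothetical query reformation; the actual plan-building logic is not specified.
    def reform_query(query_fields: dict, collection_schema: set, indexed: set) -> dict:
        """Split query fields into searchable terms and scoring-only terms."""
        plan = {"search": {}, "score_only": {}}
        for field, value in query_fields.items():
            if field in collection_schema and field in indexed:
                plan["search"][field] = value        # usable for index look-ups
            else:
                plan["score_only"][field] = value    # kept for scoring, never excludes
        return plan

    plan = reform_query({"FN": "John", "LN": "Smith", "DOB": "1965-05-15"},
                        collection_schema={"FN", "LN", "PH"}, indexed={"FN", "LN"})
    print(plan["score_only"])   # {'DOB': '1965-05-15'}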
According to some embodiments, collections with a schema different from that of the query are not excluded; the available fields may be scored against the queries, and a penalty or lower score may be assigned to records with missing fields. The fields in collections across MEMDB 100 may be normalized, and each search conductor may have access to a dictionary of normalized fields to facilitate the score assignment process. Normalization may be performed through any suitable manual or automated process. If the user or application providing the search query defines fields that are normalized across multiple collections, the system may build queries that can be applied across multiple collections, even if each respective collection does not conform to exactly the same schema or storage rules.
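By way of illustration, a dictionary of normalized fields might map collection-specific field names to canonical names so that one query can be applied across collections with differing schemata; the mappings shown are hypothetical.

    # Hypothetical normalization dictionary mapping collection-specific field names
    # to canonical names, so one query can be applied across differing schemata.
    FIELD_DICTIONARY = {
        "collection_1001": {"first_name": "FN", "surname": "LN", "birth_date": "DOB"},
        "collection_8021": {"fname": "FN", "lname": "LN", "phone": "PH"},
    }

    def normalize(record: dict, collection: str) -> dict:
        mapping = FIELD_DICTIONARY[collection]
        return {mapping.get(k, k): v for k, v in record.items()}

    print(normalize({"fname": "John", "lname": "Smyth"}, "collection_8021"))
    # {'FN': 'John', 'LN': 'Smyth'}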
In some embodiments, fuzzy matching techniques may be applied to further broaden the lists of possible relevant results.
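As one example of a fuzzy-matching measure, and only as an assumption since the disclosure does not mandate a particular algorithm, a standard-library similarity ratio could be used to compare field values.

    # One possible fuzzy-matching measure (standard-library similarity ratio);
    # the disclosure does not mandate a particular algorithm.
    from difflib import SequenceMatcher

    def fuzzy_match(a: str, b: str) -> float:
        """Return a similarity in [0, 1]; 1.0 is an exact match."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    print(round(fuzzy_match("Smith", "Smyth"), 2))      # 0.8
    print(round(fuzzy_match("John", "Jonathan"), 2))    # ~0.67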
The system may determine whether the assigned score is above a specified acceptance threshold, where the threshold may be defined in the search query or may be a default value. In one or more embodiments, the default score thresholds may vary according to the one or more fields being scored. If the search conductor determines that the scores are above the desired threshold, the records may be added to a results list. The search conductor may continue to score records until it determines that a record is the last in the current result set. If the search conductor determines that the last record in a partition has been processed, the search conductor may then sort the resulting results list. The search conductor may then return the results list to a search manager.
When the S.M. receives and collates results from the S.C.'s 220, the one or more search conductors return the one or more search results to the one or more search managers, where, in one or more embodiments, said one or more search results may be returned asynchronously. The one or more search managers may then compile the results from the one or more search conductors into one or more results lists.
The system may determine whether analytics processing 222 of the search results compiled by the one or more search managers is desired. In one or more embodiments, the system determines whether the processing is desired using information included in the query. In one or more other embodiments, the one or more search managers may automatically determine which one or more fields are to undergo the desired processing.
If the system determines that analytics processing 222 is desired, one or more analytics agents may process results 224 through the application of one or more suitable processing techniques to the one or more results lists. In one or more embodiments, suitable techniques may include rolling up several records into a more complete record, performing one or more analytics on the results, and determining information about neighboring records, amongst others. In some embodiments, analytics agents may include disambiguation modules, linking modules, link on-the-fly modules, or any other suitable modules and algorithms.
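Purely as an illustration of the roll-up technique mentioned above, the following sketch merges several candidate records into a single, more complete record by preferring populated values; the merge policy shown is hypothetical.

    # Hypothetical roll-up: merge candidate records that refer to the same entity
    # into one more complete record, preferring populated values.
    def roll_up(records: list) -> dict:
        merged = {}
        for record in records:
            for field, value in record.items():
                if value and not merged.get(field):   # keep the first populated value
                    merged[field] = value
        return merged

    print(roll_up([{"FN": "John", "LN": "Smith", "PH": None},
                   {"FN": "John", "LN": "Smith", "DOB": "1965-05-15"}]))
    # {'FN': 'John', 'LN': 'Smith', 'DOB': '1965-05-15'}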
After processing, according to some embodiments, the one or more analytics agents may return one or more processed results lists to the one or more search managers.
A search manager may return search results 226. In some embodiments, the one or more search managers may decompress the one or more results lists and return them to the system that initiated the query. The returned results may be formatted in one of several formats, including XML, JSON, RDF, or any other suitable format.
In Example #1, the disclosed method for non-exclusionary searching is applied. A user defines a query with the following fields: FN (first name): John, LN (last name): Smith, DOB (date of birth): May 15, 1965, and PH (phone number): 555-1234-7890. The system performs the search and, among the relevant results, there are two records with missing fields, from two different collections with different schemata. The first is from collection ‘1001’; in this collection the following fields are defined: FN: John, LN: Smith, PH: - - -, and DOB: May 15, 1965. The second is from collection ‘8021’; in this collection the following fields are defined: FN: John, LN: Smith, PH: 555-1234-7890, and DOB: - - -. Since there is a good match in most fields of both records, neither is excluded; they receive similar final scores and are positioned in the top 10 results for the query.
In Example #2, the disclosed method for non-exclusionary searching is applied. A user defines a query with the following fields: FN (first name): John, LN (last name): Smith, DOB (date of birth): May 15, 1965, and PH (phone number): 555-1234-7890. The system performs the search and, among the relevant results, there are two records with similar but not exactly matching fields, from two different collections with different schemata. The first is from collection ‘1001’; in this collection the following fields are defined: FN: Jonathan, LN: Smith, and PH: 1234-7890. The second is from collection ‘8021’; in this collection the following fields are defined: FN: John, LN: Smyth, PH: 555-1234-7890, and DOB: 1965. Since there is a good match in most fields, both records get a final score that exceeds the score threshold and are positioned in the top 10 results for the query.
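In the spirit of the fuzzy similarity sketch given earlier, the two records of Example #2 might be scored roughly as follows; the per-field similarity values and the equal weighting are illustrative assumptions, not values produced by the disclosed system.

    # Rough illustration for Example #2; the per-field similarities and equal
    # weighting are illustrative assumptions only.
    record_1 = {"FN": 0.67, "LN": 1.0, "PH": 0.77}              # Jonathan / Smith / 1234-7890
    record_2 = {"FN": 1.0,  "LN": 0.8, "PH": 1.0, "DOB": 0.44}  # John / Smyth / full phone / 1965
    for name, fields in (("record 1", record_1), ("record 2", record_2)):
        print(name, round(sum(fields.values()) / len(fields), 2))
    # record 1 0.81
    # record 2 0.81  -> both exceed a 0.5 threshold and remain in the results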
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein.
When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable or processor-readable storage medium. A non-transitory computer-readable or processor-readable media includes both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. A non-transitory processor-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
This non-provisional patent application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/910,867, entitled “Non-Exclusionary Search Within In-Memory Databases,” filed Dec. 2, 2013, which is hereby incorporated in its entirety herein. This application is related to U.S. patent application Ser. No. 14/557,794, entitled “Method for Disambiguating Features in Unstructured Text,” filed Dec. 2, 2014; U.S. patent application Ser. No. 14/558,300, entitled “Event Detection Through Text Analysis Using Trained Event Template Models,” filed Dec. 2, 2014; U.S. patent application Ser. No. 14/557,807, entitled “Method for Facet Searching and Search Suggestions,” filed Dec. 2, 2014; U.S. patent application Ser. No. 14/558,254, entitled “Design and Implementation of Clustered In-Memory Database,” filed Dec. 2, 2014; U.S. patent application Ser. No. 14/557,827, entitled “Real-Time Distributed In Memory Search Architecture,” filed Dec. 2, 2014; U.S. patent application Ser. No. 14/557,951, entitled “Fault Tolerant Architecture for Distributed Computing Systems,” filed Dec. 2, 2014; U.S. patent application Ser. No. 14/558,009, entitled “Dependency Manager for Databases,” filed Dec. 2, 2014; U.S. patent application Ser. No. 14/558,055, entitled “Pluggable Architecture for Embedding Analytics in Clustered In-Memory Databases,” filed Dec. 2, 2014; and U.S. patent application Ser. No. 14/557,900, entitled “Data record compression with progressive and/or selective decompression,” filed Dec. 2, 2014; each of which are incorporated herein by reference in their entirety.