1. Field
This disclosure relates to caching search-related data in a semi-structured database.
2. Description of the Related Art
Databases can store and index data in accordance with a structured data format (e.g., Relational Databases for normalized data queried by Structured Query Language (SQL), etc.), a semi-structured data format (e.g., XMLDBs for Extensible Markup Language (XML) data, RethinkDB for JavaScript Object Notation (JSON) data, etc.) or an unstructured data format (e.g., Key Value Stores for key-value data, ObjectDBs for object data, Solr for free text indexing, etc.). In structured databases, any new data objects to be added are expected to conform to a fixed or predetermined schema (e.g., a new Company data object may be required to be added with Name, Industry and Headquarters values, a new Bibliography data object may be required to be added with Author, Title, Journal and Date values, and so on). By contrast, in unstructured databases, new data objects can be added verbatim, so similar data objects can be added via different formats, which may cause difficulties in establishing semantic relationships between the similar data objects.
Semi-structured databases share some properties with both structured and unstructured databases (e.g., similar data objects can be grouped together as in structured databases, while the various values of the grouped data objects are allowed to differ, which is more similar to unstructured databases). Semi-structured database formats use a document structure that includes a plurality of nodes arranged in a tree hierarchy. The document structure includes any number of data objects that are each mapped to a particular node in the tree hierarchy, whereby the data objects are indexed either by the name of their associated node (i.e., flat-indexing) or by their unique path from a root node of the tree hierarchy to their associated node (i.e., label-path indexing). The manner in which the data objects of the document structure are indexed affects how searches (or queries) are conducted.
An example relates to a method of performing a search within a semi-structured database that is storing a set of documents, each document in the set of documents being organized with a tree-structure that contains a plurality of nodes, the plurality of nodes for each document in the set of documents including a root node and at least one non-root node, each of the plurality of nodes including a set of node-specific data entries. The example method may include detecting a threshold number of search queries for which a given value at a given target node for a given document of the set of documents is returned as a search result, and caching, in a value table stored in a cache memory, the given value in response to the detecting based on a document identifier for the given document and a path identifier that identifies a path between the root node and the given target node.
Another example relates to a method of performing a search within a semi-structured database that is storing a set of documents, each document in the set of documents being organized with a tree-structure that contains a plurality of nodes, the plurality of nodes for each document in the set of documents including a root node and at least one non-root node, each of the plurality of nodes including a set of node-specific data entries. The example method may include detecting a threshold number of search queries that result in values being returned as search results from a given target node for a given document of the set of documents and caching, in a value table stored in a cache memory, values stored at the given target node in response to the detecting based on a document identifier for the given document of the given target node and a path identifier that identifies a path between the root node and the given target node for the given document.
Another example relates to a method of performing a search within a semi-structured database that is storing a set of documents, each document in the set of documents being organized with a tree-structure that contains a plurality of nodes, the plurality of nodes for each document in the set of documents including a root node and at least one non-root node, each of the plurality of nodes including a set of node-specific data entries. The example method may include recording search result heuristics that indicate a degree to which search results are expected from each search query in a set of search queries, receiving a merge query that requests a merger of search results including two or more search queries from the set of search queries, establishing an order in which to perform the two or more search queries during execution of the merge query based on the recorded search result heuristics, executing at least one of the two or more search queries in accordance with the established order and returning one or more merged search results based on the executing.
Another example relates to a server that is configured to perform a search within a semi-structured database that is storing a set of documents, each document in the set of documents being organized with a tree-structure that contains a plurality of nodes, the plurality of nodes for each document in the set of documents including a root node and at least one non-root node, each of the plurality of nodes including a set of node-specific data entries. In an example, the server may include logic configured to detect a threshold number of search queries for which a given value at a given target node for a given document of the set of documents is returned as a search result and logic configured to cache, in a value table stored in a cache memory, the given value in response to the detection based on a document identifier for the given document and a path identifier that identifies a path between the root node and the given target node.
A more complete appreciation of embodiments of the disclosure will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the disclosure, and in which:
Aspects of the disclosure are disclosed in the following description and related drawings directed to specific embodiments of the disclosure. Alternate embodiments may be devised without departing from the scope of the disclosure. Additionally, well-known elements of the disclosure will not be described in detail or will be omitted so as not to obscure the relevant details of the disclosure.
The words “exemplary” and/or “example” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” and/or “example” is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term “embodiments of the disclosure” does not require that all embodiments of the disclosure include the discussed feature, advantage or mode of operation.
Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the disclosure may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, “logic configured to” perform the described action.
A client device, referred to herein as a user equipment (UE), may be mobile or stationary, and may communicate with a wired access network and/or a radio access network (RAN). As used herein, the term “UE” may be referred to interchangeably as an “access terminal” or “AT”, a “wireless device”, a “subscriber device”, a “subscriber terminal”, a “subscriber station”, a “user terminal” or UT, a “mobile terminal”, a “mobile station” and variations thereof. In an embodiment, UEs can communicate with a core network via a RAN, and through the core network the UEs can be connected with external networks such as the Internet. Of course, other mechanisms of connecting to the core network and/or the Internet are also possible for the UEs, such as over wired access networks, WiFi networks (e.g., based on IEEE 802.11, etc.) and so on. UEs can be embodied by any of a number of types of devices including but not limited to cellular telephones, personal digital assistants (PDAs), pagers, laptop computers, desktop computers, PC cards, compact flash devices, external or internal modems, wireless or wireline phones, and so on. A communication link through which UEs can send signals to the RAN is called an uplink channel (e.g., a reverse traffic channel, a reverse control channel, an access channel, etc.). A communication link through which the RAN can send signals to UEs is called a downlink or forward link channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein the term traffic channel (TCH) can refer to either an uplink/reverse or downlink/forward traffic channel.
Referring to
The Internet 175, in some examples, includes a number of routing agents and processing agents (not shown in
Referring to
While internal components of UEs such as UEs 200A and 200B can be embodied with different hardware configurations, a basic high-level UE configuration for internal hardware components is shown as platform 202 in
Accordingly, an embodiment of the disclosure can include a UE (e.g., UE 200A, 200B, etc.) including the ability to perform the functions described herein. As will be appreciated by those skilled in the art, the various logic elements can be embodied in discrete elements, software modules executed on a processor or any combination of software and hardware to achieve the functionality disclosed herein. For example, the ASIC 208, the memory 212, the API 210 and the local database 214 may all be used cooperatively to load, store and execute the various functions disclosed herein and thus the logic to perform these functions may be distributed over various elements. Alternatively, the functionality could be incorporated into one discrete component. Therefore, the features of UEs 200A and 200B in
The wireless communications between UEs 200A and/or 200B and the RAN 120 can be based on different technologies, such as CDMA, W-CDMA, time division multiple access (TDMA), frequency division multiple access (FDMA), Orthogonal Frequency Division Multiplexing (OFDM), GSM, or other protocols that may be used in a wireless communications network or a data communications network. As discussed in the foregoing and known in the art, voice transmission and/or data can be transmitted to the UEs from the RAN using a variety of networks and configurations. Accordingly, the illustrations provided herein are not intended to limit the embodiments of the disclosure and are merely to aid in the description of aspects of embodiments of the disclosure.
Referring to
In a further example, the logic configured to receive and/or transmit information 305 can include sensory or measurement hardware by which the communications device 300 can monitor its local environment (e.g., an accelerometer, a temperature sensor, a light sensor, an antenna for monitoring local RF signals, etc.). The logic configured to receive and/or transmit information 305 can also include software that, when executed, permits the associated hardware of the logic configured to receive and/or transmit information 305 to perform its reception and/or transmission function(s). However, in various implementations, the logic configured to receive and/or transmit information 305 does not correspond to software alone, and the logic configured to receive and/or transmit information 305 relies at least in part upon hardware to achieve its functionality.
The communications device 300 of
The communications device 300 of
The communications device 300 of
The communications device 300 of
Referring to
Generally, unless stated otherwise explicitly, the phrase “logic configured to” as used throughout this disclosure is intended to invoke an embodiment that is at least partially implemented with hardware, and is not intended to map to software-only implementations that are independent of hardware. Also, it will be appreciated that the configured logic or “logic configured to” in the various blocks are not limited to specific logic gates or elements, but generally refer to the ability to perform the functionality described herein (either via hardware or a combination of hardware and software). Thus, the configured logics or “logic configured to” as illustrated in the various blocks are not necessarily implemented as logic gates or logic elements despite sharing the word “logic.” Other interactions or cooperation between the logic in the various blocks will become clear to one of ordinary skill in the art from a review of the embodiments described below in more detail.
The various embodiments may be implemented on any of a variety of commercially available server devices, such as server 400 illustrated in
Databases can store and index data in accordance with a structured data format (e.g., Relational Databases for normalized data queried by Structured Query Language (SQL), etc.), a semi-structured data format (e.g., XMLDBs for Extensible Markup Language (XML) data, RethinkDB for JavaScript Object Notation (JSON) data, etc.) or an unstructured data format (e.g., Key Value Stores for key-value data, ObjectDBs for object data, Solr for free text indexing, etc.). In structured databases, any new data objects to be added are expected to conform to a fixed or predetermined schema (e.g., a new Company data object may be required to be added with “Name”, “Industry” and “Headquarters” values, a new Bibliography data object may be required to be added with “Author”, “Title”, “Journal” and “Date” values, and so on). By contrast, in unstructured databases, new data objects are added verbatim, which permits similar data objects to be added via different formats, which causes difficulties in establishing semantic relationships between the similar data objects.
Examples of structured database entries for a set of data objects may be configured as follows:
whereby “Name”, “Industry” and “Headquarters” are predetermined values that are associated with each “Company”-type data object stored in the structured database, or
whereby “Author”, “Title”, “Journal” and “Date” are predetermined values that are associated with each “Bibliography”-type data object stored in the structured database.
Examples of unstructured database entries for the set of data objects may be configured as follows:
As will be appreciated, the structured and unstructured databases in Tables 1 and 3 and in Tables 2 and 4 store substantially the same information, with the structured database having a rigidly defined value format for the respective class of data object, while the unstructured database does not have defined values associated with data object classes.
Semi-structured databases share some properties with both structured and unstructured databases (e.g., similar data objects can be grouped together as in structured databases, while the various values of the grouped data objects are allowed to differ, which is more similar to unstructured databases). Semi-structured database formats use a document structure that includes a set of one or more documents that each have a plurality of nodes arranged in a tree hierarchy. The plurality of nodes are generally implemented as logical nodes (e.g., the plurality of nodes can reside in a single memory and/or physical device), although it is possible that some of the nodes are deployed on different physical devices (e.g., in a distributed server environment) so as to qualify as both distinct logical and physical nodes. Each document includes any number of data objects that are each mapped to a particular node in the tree hierarchy, whereby the data objects are indexed either by the name of their associated node (i.e., flat-indexing) or by their unique path from a root node of the tree hierarchy to their associated node (i.e., label-path indexing). The manner in which the data objects of the document structure are indexed affects how searches (or queries) are conducted.
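By way of illustration and not limitation, the following Python sketch builds a small in-memory document tree and derives both index forms from it. The node names and values are illustrative stand-ins (e.g., “Alice” is an assumed value) rather than entries taken from any particular figure or table herein.

```python
# Hedged sketch: a document tree and its two index forms (flat vs. label-path).
document = {
    "Patent": {
        "Inventor": {"Name": {"First": "Alice", "Last": "Brown"}},
        "FilingDate": "December 25",
    }
}

def walk(node, path=()):
    """Yield (label_path, value) pairs for every node in the tree."""
    for name, child in node.items():
        current = path + (name,)
        if isinstance(child, dict):
            yield "/" + "/".join(current), None       # interior node, no direct value
            yield from walk(child, current)
        else:
            yield "/" + "/".join(current), child      # leaf node storing a value

flat_index = {}        # node name -> label paths (flat / node-name indexing)
label_path_index = {}  # full label path -> value (label-path indexing)

for label_path, value in walk(document):
    flat_index.setdefault(label_path.rsplit("/", 1)[-1], []).append(label_path)
    label_path_index[label_path] = value

print(flat_index["Last"])                              # ['/Patent/Inventor/Name/Last']
print(label_path_index["/Patent/Inventor/Name/Last"])  # 'Brown'
```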
To put the document depicted in
The document structure of a particular document in a semi-structured database can be indexed in accordance with a flat-indexing protocol or a label-path protocol. For example, in the flat-indexing protocol (sometimes referred to as a “node indexing” protocol) for an XML database, each node is indexed with a document identifier at which the node is located, a start-point and an end-point that identifies the range of the node, and a depth that indicates the node's depth in the tree hierarchy of the document (e.g., in
whereby each number represents a location of the document structure that can be used to define the respective node range, as shown in Table 8 as follows:
Accordingly, the “Inventor” context path 605A of
When a node stores a value, the value itself can have its own index. Accordingly, the value of “Brown” 650A as shown in
The flat-indexing protocol uses a brute-force approach to resolve paths. In an XML-specific example, an XPath query for /Patent/Inventor/Name/Last would require a separate lookup for each node in the path (i.e., “Patent”, “Inventor”, “Name” and “Last”), with the results of each lookup being joined with the results of the other lookups, as follows:
Label-path indexing is described in a publication by Goldman et al. entitled “DataGuides: Enabling Query Formulation and Optimization in Semistructured Databases”. Generally, label-path indexing is an alternative to flat-indexing, whereby the path to the target node is indexed in place of the node identifier of the flat-indexing protocol, as follows:
whereby each number represents a location of the document structure that can be used to define the respective node range, and each letter label (A through I) identifies a context path to a particular node or value, as shown in Table 11 as follows:
Accordingly, with respect to Tables 10-11, the “Inventor” node 605A of
More detailed XML descriptions will now be provided. At the outset, certain XML terminology is defined as follows:
In Table 9 with respect to the flat-indexing protocol, it will be appreciated that the XPath query directed to /Patent/Inventor/Name/Last required four separate lookups, one for each of the nodes “Patent”, “Inventor”, “Name” and “Last”, along with three joins on the respective lookup results. By contrast, a similar XPath query directed to /Patent/Inventor/Name/Last using the label-path indexing depicted in Tables 10-11 would compile to a single lookup(E), based on the path /Patent/Inventor/Name/Last being defined as path “E”.
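As a further non-limiting illustration of the difference in work, the following Python sketch resolves the path /Patent/Inventor/Name/Last against an assumed flat index (one lookup per node name, joined step by step on range containment and depth) and against an assumed label-path index (a single lookup). The index entries, byte ranges and depths shown are hypothetical.

```python
# Hedged sketch: flat-index resolution (lookups plus joins) vs. label-path lookup.
from collections import namedtuple

Entry = namedtuple("Entry", "doc start end depth")   # document id, range, tree depth

flat_index = {
    "Patent":   [Entry(40, 0, 30, 1)],
    "Inventor": [Entry(40, 1, 12, 2)],
    "Name":     [Entry(40, 2, 9, 3)],
    "Last":     [Entry(40, 5, 7, 4)],
}

def contains(parent, child):
    """True if `child` is nested directly under `parent` in the same document."""
    return (parent.doc == child.doc
            and parent.start < child.start and child.end < parent.end
            and child.depth == parent.depth + 1)

def resolve_flat(path):
    """Brute-force resolution: one lookup per node name, one join per path step."""
    names = path.strip("/").split("/")
    candidates = flat_index.get(names[0], [])
    for name in names[1:]:
        candidates = [c for c in flat_index.get(name, [])
                      if any(contains(p, c) for p in candidates)]
    return candidates

# Label-path index: the entire path is the key, so resolution is a single lookup.
label_path_index = {"/Patent/Inventor/Name/Last": [Entry(40, 5, 7, 4)]}

print(resolve_flat("/Patent/Inventor/Name/Last"))      # four lookups, three joins
print(label_path_index["/Patent/Inventor/Name/Last"])  # lookup(E): one lookup, no joins
```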
Generally, the label-path indexing protocol is more efficient for databases with a relatively low number of context paths for a given node name (e.g., less than a threshold such as 100), with the flat-indexing protocol overtaking the label-path indexing protocol in terms of query execution time as the number of context paths increases.
A number of different example XML document structures are depicted below in Table 12 including start and end byte offsets:
whereby each number represents a location of the document structure that can be used to define the respective node range, and each letter label identifies a context path to a particular node or value as depicted in
Next, a flat simple content index for the documents depicted in Table 12 is as follows:
Next, a flat element index for the documents depicted in Table 12 is as follows:
The nodes of semi-structured databases can include various types of data with various associated elements and attributes. Some nodes in particular only contain a relatively small amount of data (or a single value), such as a string, a date, a number, a reference to another node and/or document location in the semi-structured database, and so on. These nodes are referred to herein as “simple” nodes. As shown in
Referring to
Referring to
As will be appreciated from a review of
Further, with respect to block 800 of
Referring to
Referring to
Referring to
At block 920, the semi-structured database server 170 determines whether the previous execution of the search query at block 900 results in a threshold number of search queries being directed to Node Y of Document X returning the Value Z (e.g., similar to 800 of
whereby Node 6 of Document 40 is cached to value “Brown”, Node 14 of Document 40 is cached to value “December 25”, Node 2 of Document 41 is cached to value “1234” and Node Y of Document X (after block 930 of
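By way of illustration and not limitation, a minimal Python sketch of the single-value caching behavior described above follows. The threshold value, the fetch callback and the example key (document 40, path “E”) are assumptions used only to show the shape of the technique, not a definitive implementation.

```python
# Hedged sketch: cache a value once it has been returned a threshold number of
# times for the same (document identifier, path identifier) target.
from collections import Counter

CACHE_THRESHOLD = 3                       # assumed tunable threshold

hit_counts = Counter()                    # (doc_id, path_id, value) -> times returned
value_table = {}                          # (doc_id, path_id) -> cached value

def search(doc_id, path_id, fetch_value):
    """Return the value at (doc_id, path_id), caching it once it becomes 'hot'."""
    key = (doc_id, path_id)
    if key in value_table:                # cache hit: skip the index traversal
        return value_table[key]
    value = fetch_value(doc_id, path_id)  # ordinary (slower) index-based lookup
    hit_counts[(doc_id, path_id, value)] += 1
    if hit_counts[(doc_id, path_id, value)] >= CACHE_THRESHOLD:
        value_table[key] = value          # detection threshold reached: cache the value
    return value

lookup = lambda doc, path: "Brown"        # stand-in for the real index lookup
for _ in range(4):
    search(40, "E", lookup)
print(value_table)                        # {(40, 'E'): 'Brown'} after the third hit
```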
Referring to
While
Referring to
Referring to
As will be appreciated from a review of
Further, with respect to block 1100 of
Referring to
Referring to
At block 1220, the semi-structured database server 170 determines whether the previous execution of the search query at block 1200 results in a threshold number of search queries being directed to Node Y of Document X (e.g., similar to block 1100 of
whereby Node 6 of Document 40 is a “Last Name” node that includes cached values for “Brown”, “Smith”, “Johnson”, “Chang”, and “Morrison”, and Node Y of Document X includes cached values for Z_1 through Z_5 (after block 1230).
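A corresponding minimal sketch of the multi-value variant, in which the full value set of a frequently targeted node is cached rather than a single returned value, follows; again, the threshold and the fetch callback are assumptions.

```python
# Hedged sketch: once a (document, path) node is queried a threshold number of
# times, all values stored at that node are cached together.
from collections import Counter

CACHE_THRESHOLD = 3                       # assumed tunable threshold
query_counts = Counter()                  # (doc_id, path_id) -> queries observed
value_table = {}                          # (doc_id, path_id) -> list of cached values

def search_all(doc_id, path_id, fetch_values):
    """Return all values at (doc_id, path_id), caching the full set once 'hot'."""
    key = (doc_id, path_id)
    if key in value_table:
        return value_table[key]           # serve every cached value for the node
    values = fetch_values(doc_id, path_id)
    query_counts[key] += 1
    if query_counts[key] >= CACHE_THRESHOLD:
        value_table[key] = values         # cache the node's full value set
    return values

last_names = lambda doc, path: ["Brown", "Smith", "Johnson", "Chang", "Morrison"]
for _ in range(3):
    search_all(40, "E", last_names)
print(value_table[(40, "E")])             # all five last names cached together
```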
Referring to
Search queries can be bundled together for execution in a particular order; the bundled queries are collectively referred to as a merge query. For example, if a merge query //a[@b="c"]/d/e results in 10 context entries for the @b="c" node, then the merge query would be performed as:
which can be rewritten as follows:
whereby each “descendant” operation occurs in-order from top to bottom. The “descendant” function depicted in Tables 19-20 represents alternative terminology for referring to a search query. As will be appreciated, the “descendant” and “contains” functions require joins. However, the “merge” function itself is a simple concatenation (or aggregation) of results. Accordingly, the original //a[criteria]/d/e can be solved in two different ways: (Table 19) by joining (“descendant(contains( )”) a concatenated (“merge”) list of results with “a/d/e”; or (Table 20) by concatenating the results from multiple smaller joins. Using the Table 20 execution allows the smaller joins to be reordered so that the joins that are more likely to return data are computed first. Accordingly, Tables 19-20 depict two different ways of executing the same search using either one large join (Table 19) or multiple smaller joins (Table 20).
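By way of illustration and not limitation, the following Python sketch shows the two plan shapes side by side. The contains( ) and descendant( ) operators are stood in for by simple list operations; this is an assumption made only to show that both plans produce the same merged results, while the second plan leaves the smaller joins free to be reordered.

```python
# Hedged sketch: one large join over merged contexts vs. a merge of smaller joins.
contexts = [f"a[{i}]" for i in range(10)]        # the ten context entries for a/@b="c"

def descendant_join(context_entries, rel_path="d/e"):
    """Stand-in for joining context entries with their d/e descendants."""
    return [f"{entry}/{rel_path}" for entry in context_entries]

def merge(result_lists):
    """Union-style merge: a simple concatenation of result lists."""
    return [result for results in result_lists for result in results]

# Table 19 shape (one large join): merge the context entries, then join once.
one_large_join = descendant_join(merge([[entry] for entry in contexts]))

# Table 20 shape (multiple smaller joins): join per context entry, then merge.
many_small_joins = merge([descendant_join([entry]) for entry in contexts])

assert one_large_join == many_small_joins        # both plans yield the same results
# The second plan leaves the smaller joins free to be reordered (or cut short)
# based on recorded heuristics, as discussed below.
```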
If the semi-structured database server 170 determines that the search query execution from block 1310 is the first search query in the merge query to obtain any results at block 1320, the process advances to block 1330. Otherwise, if the semi-structured database server 170 determines the search query execution from block 1310 is not the first search query in the merge query to obtain any results at block 1320, then the semi-structured database server 170 joins the search results obtained at block 1310 for the current search query execution with the search results obtained for earlier search query executions, in block 1325. In an example, a single join may be performed in block 1325 irrespective of how many preceding search queries in the merge query have obtained result(s) (e.g., starting with the third search query in the merge query based on the default order that returns result(s), the result(s) obtained in block 1315 for a new search query may be joined with previous join result(s) obtained in 1325). At block 1330, the semi-structured database server 170 determines whether the current search query is the last search query in the default order for the merge query. If the current search query is not determined to be the last search query in the default order for the merge query at block 1330, the process returns to block 1310 and the next search query in the default order is executed and then evaluated. Otherwise, the current set of search results (which are joined from each search query that produced any results) is returned to the client device that requested the merge query, in block 1335.
For some merge queries, a user may only need a limited number of search results and/or may prioritize quickly-returned partial search results over a more complete (but slower) set of search results. The queries that collectively constitute the merge query can intermingle search queries that produce high numbers of search results with search queries that produce few or even zero search results. Accordingly, performing each search query in an exhaustive manner as in
whereby AVERAGE(44,3) indicates that three previous search queries directed to descendant(contains(a, a/@b(1)=“c”), a/d/e) returned an average of 44 search results, PREVIOUS(52) indicates that the previous search query for descendant(contains(a, a/@b(2)=“c”), a/d/e) returned 52 search results, PREVIOUS(0) indicates that the previous search query for descendant(contains(a, a/@b(3)=“c”), a/d/e) returned 0 search results, AVERAGE(0,7) indicates that the previous seven search queries for descendant(contains(a, a/@b(4)=“c”), a/d/e) each returned 0 search results, N/A indicates that the search result heuristics table does not record any information for the corresponding search query, and so on. It will be appreciated that the search result heuristics table could be simplified (e.g., only include the AVERAGE statistics, only include the PREVIOUS statistics, etc.) or enhanced (e.g., add in statistics such as standard deviation, etc.).
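A minimal Python sketch of one way such heuristics might be recorded follows. The class name, the stored fields and the example counts are assumptions that merely mirror the AVERAGE(mean, samples) and PREVIOUS(count) entries discussed above.

```python
# Hedged sketch: record per-query result counts to support heuristics-based ordering.
class SearchResultHeuristics:
    """Assumed recorder for AVERAGE(mean, samples) / PREVIOUS(count) statistics."""

    def __init__(self):
        self._stats = {}   # query text -> {"total": ..., "samples": ..., "previous": ...}

    def record(self, query, result_count):
        """Update the heuristics after `query` returned `result_count` results."""
        entry = self._stats.setdefault(query, {"total": 0, "samples": 0, "previous": 0})
        entry["total"] += result_count
        entry["samples"] += 1
        entry["previous"] = result_count

    def expected_results(self, query):
        """Expected result count for `query`, or None when nothing is recorded (N/A)."""
        entry = self._stats.get(query)
        return None if entry is None else entry["total"] / entry["samples"]

heuristics = SearchResultHeuristics()
for count in (40, 45, 47):                       # three assumed prior executions
    heuristics.record('descendant(contains(a, a/@b(1)="c"), a/d/e)', count)
print(heuristics.expected_results('descendant(contains(a, a/@b(1)="c"), a/d/e)'))  # 44.0
print(heuristics.expected_results('descendant(contains(a, a/@b(9)="c"), a/d/e)'))  # None (N/A)
```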
Referring to
Referring to
Using the search heuristics table depicted in Table 21 as an example, the heuristics-based order may be established as follows at block 1415:
whereby an order establishing protocol weights the search query rankings by a combination of prior search results achieved (higher numbers preferred) and reliability (e.g., AVERAGE(300,2) is expected to return a high number of search results despite a low sample size of 2, while AVERAGE(43,77) indicates similar search results to AVERAGE(44,3) but has a higher sample size and is deemed more reliable, the N/As are ranked below any search query except for search queries expected to return zero search results, and so on). For intersection merge queries, the aforementioned order is reversed, as follows:
whereby an order establishing protocol weights the search query rankings by a combination of prior search results achieved (higher numbers preferred) and reliability in a manner that is the opposite of that depicted in Table 22, due to the differing objective of the intersection merge query relative to the union merge query.
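By way of illustration, the following Python sketch orders the sub-queries of a merge query from recorded expected result counts, running the expected-largest producers first for a union merge query and reversing that order for an intersection merge query. Treating unrecorded (N/A) sub-queries as ranking just above expected-zero sub-queries, and omitting the reliability weighting described above, are simplifying assumptions.

```python
# Hedged sketch: establish the heuristics-based order for a merge query.
def order_queries(queries, expected_results, merge_kind="union"):
    """Order merge-query sub-queries from recorded heuristics.

    `expected_results` maps each sub-query to its expected result count, or to
    None when no heuristics have been recorded (the N/A case).
    """
    def rank(query):
        expected = expected_results.get(query)
        if expected is None:
            return 0.5      # N/A: below recorded non-zero queries, above zero-result ones
        return expected
    ordered = sorted(queries, key=rank, reverse=True)   # union: most results first
    if merge_kind == "intersection":
        ordered.reverse()                               # intersection: fewest results first
    return ordered

expected = {"q1": 44.0, "q2": 52.0, "q3": 0.0, "q4": None, "q5": 300.0}
print(order_queries(list(expected), expected))                             # q5, q2, q1, q4, q3
print(order_queries(list(expected), expected, merge_kind="intersection"))  # reversed order
```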
At block 1420, the semi-structured database server 170 executes the next search query in the merge query in the heuristics-based order (in this case, the first search query, such as the top-listed search query in Table 22 or Table 23). At block 1425, the semi-structured database server 170 determines if any results were obtained by the search query execution from 1420. If not, the process advances to block 1440. Otherwise, if at least one result is determined to be obtained from the search query execution at block 1420, the semi-structured database server 170 determines whether the search query execution from block 1420 is the first search query in the merge query to obtain any results, in block 1430.
If the semi-structured database server 170 determines that the search query execution from block 1420 is the first search query in the merge query to obtain any results at block 1430, the process advances to block 1440. Otherwise, if the semi-structured database server 170 determines the search query execution from block 1420 is not the first search query in the merge query to obtain any results at block 1430, then the semi-structured database server 170 joins (e.g., via union or intersection depending on the nature of the merge query) the search results obtained at block 1420 for the current search query execution with the search results obtained for earlier search query executions, in block 1435. In an example, a single join may be performed in block 1435 irrespective of how many preceding search queries in the merge query have obtained result(s) (e.g., starting with the third search query in the merge query based on the heuristics-based order that returns result(s), the result(s) obtained in block 1425 for a new search query may be joined with previous join result(s) obtained in 1435). At block 1440, in an example, the semi-structured database server 170 may determine whether any of the one or more search results criteria for triggering an early-exit of the merge query execution have been satisfied. If so, then the current search results (if any) are returned to the requesting client device, in block 1450. Otherwise, the semi-structured database server 170 determines whether the current search query is the last search query in the heuristics-based order for the merge query, in block 1445. If the current search query is not determined to be the last search query in the heuristics-based order for the merge query at block 1445, the process returns to block 1420 and the next search query in the heuristics-based order is executed and then evaluated. Otherwise, the current set of search results (which are joined from each search query that produced any results) is returned to the client device that requested the merge query, in block 1450.
As noted above, when an early-exit of the merge query execution is determined in block 1440, the current search results can be returned to the requesting client device in block 1450. At this point, in an example, the semi-structured database server 170 may stop executing the merge query altogether in order to save resources, in which case no further search results for the merge query will be returned to the requesting client device. In another example, in block 1455, the semi-structured database server 170 may continue executing the merge query after the early-exit search results are returned in block 1450. In block 1455, the process returns to block 1420 and the next search query in the heuristics-based order is executed and then evaluated. In this alternative example, the search results returned to the requesting client device in block 1450 may be refined over time as search results for new search queries in the merge query are obtained in block 1425 and joined with the search result(s) from previous search queries in block 1435. In a further example, the process by which more refined search results are returned to the requesting client device in block 1450 as more search queries in the merge query are executed may continue until the last search query in the heuristics-based order for the merge query is processed and/or until the one or more search results criteria for triggering an early-exit of the merge query execution are no longer satisfied in block 1440.
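A minimal Python sketch of the execution loop with an early-exit criterion follows. The particular criterion shown (a minimum number of merged results) and the callable interfaces are assumptions, since the early-exit criteria and the join implementation are left open above.

```python
# Hedged sketch: execute ordered sub-queries, joining results and exiting early
# once an assumed search-results criterion is satisfied.
def execute_merge_query(ordered_queries, run_query, join, min_results=None):
    """Run sub-queries in the heuristics-based order, joining results as they arrive.

    `run_query(q)` returns a list of results for sub-query q, and `join(a, b)`
    combines two result sets (union or intersection, depending on the merge query).
    """
    merged = None
    for query in ordered_queries:
        results = run_query(query)
        if not results:
            continue                                # nothing to join for this sub-query
        merged = results if merged is None else join(merged, results)
        if min_results is not None and len(merged) >= min_results:
            return merged                           # early exit: return partial results now
    return merged or []

# Illustrative use with a union-style join and an assumed early-exit threshold.
fake_results = {"q5": ["r1", "r2", "r3"], "q2": ["r4", "r5"], "q1": ["r6"]}
union = lambda a, b: list(dict.fromkeys(a + b))     # order-preserving union
print(execute_merge_query(["q5", "q2", "q1"],
                          lambda q: fake_results.get(q, []), union, min_results=5))
```

Continuing past the early return and streaming progressively refined result sets, as described above, would be a straightforward extension of the same loop.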
As noted above, the merge query described with respect to
While the processes are described as being performed by the semi-structured database server 170, as noted above, the semi-structured database server 170 can be implemented as a client device, a network server, an application that is embedded on a client device and/or network server, and so on. Hence, the apparatus that executes the processes in various example embodiments is intended to be interpreted broadly.
Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal (e.g., UE). In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
While the foregoing disclosure shows illustrative embodiments of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the disclosure described herein need not be performed in any particular order. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
The present application for patent claims the benefit of U.S. Provisional Application No. 62/180,994, entitled “CACHING SEARCH-RELATED DATA IN A SEMI-STRUCTURED DATABASE”, filed Jun. 17, 2015, assigned to the assignee hereof, and expressly incorporated herein by reference in its entirety.