Many businesses generate and store data for their business operations. In some instances, businesses offer services to store and analyze data for other businesses. For example, a business may store and analyze search engine marketing data. As another example, a retail business or financial business may store historical information for analysis. The data may be stored on multiple servers, computers or storage devices in multiple locations. In addition, the data may be broken into multiple components and stored in separate locations; for example, configuration data may be stored separately from historical data. Retrieving such data can be time consuming because multiple sources must be accessed to locate the data, retrieve it, and stitch it back together. If the data is also to be filtered in some manner, the more complex the filter criteria, the more computationally intensive the search for the data becomes.
While individual computational processes can be fast, the sheer volume of data to process, combined with filtering on complex criteria, can cause requests for data to require long processing times (e.g., minutes or hours rather than seconds). Thus, there is a need to identify requested data and store that identification so that subsequent requests can be answered with faster lookups.
Various embodiments of methods and systems are presented for caching, at a server, identifiers (IDs) of data objects retrieved from backend data sources in response to queries from clients. In some embodiments, a server receives a query from a client specifying filter criteria. The object identifiers (IDs) for data objects satisfying the query are obtained from one or more object identifier caches. The data objects are retrieved from one or more data sources, and the object identifiers obtained are cached in an object identifier cache. The retrieved data objects are returned to the client in response to the query. If the same query is received again, the cached object IDs for that query can be used to quickly retrieve the data objects from the data sources by direct object ID (e.g., primary key) lookup.
In some embodiments, in response to receiving a query, the server determines whether an object identifier cache matching the query already exists. Determining whether an object identifier (ID) cache already exists may include calculating a query fingerprint identifier for the query based on the filter criteria specified in the query and determining whether any of the existing object identifier caches is indexed by a query fingerprint identifier matching the query fingerprint identifier for the query. In response to determining that an object identifier (ID) cache matching the query already exists, the object IDs are obtained from the existing object identifier cache matching the query.
If an object identifier cache matching the query does not exist, the server performs a normal query of the data sources, using the filter criteria, for the object identifiers of objects matching the filter criteria. The server then caches the object identifiers received from a data source in a new object identifier cache for the query.
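By way of illustration only, the following sketch (in Python) outlines one possible arrangement of this flow. The names used (handle_query, query_ids, fetch_by_ids) are hypothetical placeholders and not part of any particular embodiment; the sketch simply shows object IDs, rather than the data objects themselves, being cached per query.

    id_caches = {}  # normalized filter criteria -> list of object IDs

    def handle_query(filter_criteria, data_source):
        # Use a simple canonical key for the query; a hash-based query
        # fingerprint is described further below.
        key = repr(sorted(filter_criteria.items()))
        object_ids = id_caches.get(key)
        if object_ids is None:
            # No matching ID cache: perform the normal filtered query for
            # the object identifiers of matching objects, then cache them.
            object_ids = data_source.query_ids(filter_criteria)  # hypothetical method
            id_caches[key] = object_ids
        # In either case, retrieve the data objects by direct object ID lookup.
        return data_source.fetch_by_ids(object_ids)              # hypothetical method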
While the invention is described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention. Headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description.
As discussed in more detail below, embodiments provide systems and methods for caching of object identifiers at a server when responding to a client query. In some embodiments, a server receives a query from a client specifying filter criteria. The server may obtain object identifiers (IDs) for data objects satisfying the query from one or more object identifier caches. In some embodiments, the server retrieves data objects from one or more data sources using direct object ID lookups with the object identifiers obtained from the one or more object identifier caches. The server returns the retrieved data objects to the client in response to the query.
In some embodiments, query server 120 includes one or more computers or servers. Query server 120 receives queries for data objects from clients 150. The queries may include filter criteria, for example. The filter criteria may specify values or ranges for various fields of the data objects, including dates and/or sort criteria. In response to receiving a query, query server 120 queries the one or more data sources 110 to determine which object IDs 140 correspond to data 130 matching the filter criteria for that data source. Once query server 120 determines the corresponding object IDs 140 from each of the one or more data sources 110, query server 120 joins the results from each of the one or more data sources 110 to determine the final object IDs 140 that match the filter criteria. Query server 120 caches the object IDs, retrieves the data objects corresponding to the final object IDs 140, and returns the data objects to client 150.
In some embodiments, data sources 110 are one or more computers and/or storage devices configured as a database or data source server. Each data source 110 stores a part of the data 130 corresponding to a particular object ID 140. As discussed above, the data objects may correspond to keywords of search engine marketing campaigns, in some embodiments. For example, one data source may store transactional values for keywords on an SEM campaign managed by an SEM keyword management tool. Values set for various keywords, such as bid amounts, may be stored by the SEM keyword management tool in one data source 110. Data obtained from a search engine pertaining to the keywords of the SEM campaign may be stored in another one of data sources 110, and analytics data from a web analytics tool pertaining to the keywords of the SEM campaign may be stored in yet another one of data sources 110. In other embodiments, other types of data, such as financial transaction data, may be stored in data sources 110.
In one example, data sources 110 may store analytics data for network-based marketing campaigns. For example, a client 150 may send a query requesting data objects that satisfy a set of filter criteria. The filter criteria may be a range for a bid amount (e.g., $0.50<bid amount<$5.00), the number of impressions (e.g., impressions>0), the number of clicks (e.g., clicks<1000) and the cost (e.g., cost>$2.00). The data 130 corresponding to the search criteria is stored in multiple data sources. For example, the bid amount and the number of impressions may be stored in a first data source, the number of clicks in a second data source and the cost in a third data source. In response to receiving a query from a client 150, query server 120 determines the object ID 140 for the data in the data source matching the filter criteria. The query server 120 retrieves the data objects satisfying the filter criteria from data sources 110 and returns the data to the client. As explained in more detail below, query server 120 may employ data object ID caches to facilitate handling of client queries.
For example, query server 120 may receive a query (including filter criteria) from a client, as indicated at 210. As an example, data source 110 may store analytics data for network-based marketing campaigns. A client may request data based on four search criteria. The search criteria may be a range for a bid amount (e.g., $0.50<bid amount<$5.00), the number of impressions (e.g., impressions>0), the number of clicks (e.g., clicks<1000) and the cost (e.g., cost>$2.00). As shown in
As shown at 240, since an ID cache corresponding to the query does not currently exist at server 120, in response to receiving the query, query server 120 uses the filter criteria to query data source 110 for object IDs of data objects having data matching the filter criteria of the query. Query server 120 receives the IDs (e.g., object IDs 140) for result objects, as shown at 250. The IDs for the result objects are stored in an ID cache 230. Just the object IDs are cached, not the corresponding data objects themselves. A given ID cache 230 is created specific to the filter criteria of the query. In response to subsequent queries for the same filter criteria, the ID cache 230 corresponding to the filter criteria can be located to determine the object IDs 140 instead of query server 120 having to query data source 110 using the filter criteria. This will be described in further detail below. Query server 120 retrieves result objects from data source 110 using the object IDs to directly request the objects from data source 110 (e.g., as a primary key lookup), as indicated at 260. Result objects are received at server 120 from data source 110, as indicated at 270, and are then returned to the client, as indicated at 220.
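Purely as a non-limiting illustration, and assuming a relational data source with a hypothetical keywords table, the two accesses indicated at 240/250 and 260/270 might resemble a filtered query for object IDs followed by a direct primary-key lookup (shown here with Python's standard sqlite3 module):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE keywords (object_id INTEGER PRIMARY KEY, bid REAL, impressions INTEGER)")
    conn.executemany("INSERT INTO keywords VALUES (?, ?, ?)",
                     [(1, 0.75, 12), (2, 6.00, 0), (3, 2.50, 40)])

    # 240/250: query the data source for the object IDs of objects whose data
    # matches the filter criteria; only these IDs are stored in the ID cache.
    ids = [row[0] for row in conn.execute(
        "SELECT object_id FROM keywords WHERE bid > ? AND bid < ? AND impressions > ?",
        (0.50, 5.00, 0))]

    # 260/270: retrieve the result objects directly by object ID (primary key).
    placeholders = ",".join("?" * len(ids))
    result_objects = conn.execute(
        "SELECT * FROM keywords WHERE object_id IN (%s)" % placeholders, ids).fetchall()

On a subsequent query with the same filter criteria, only the second, direct-lookup statement would be needed.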
The example described above shows query server 120 using the filter criteria to first query data source 110 for the IDs of objects matching the filter criteria, then using the object IDs to retrieve the actual data objects from data source 110. In other embodiments, query server 120 may query data source 110 for both the object IDs and data objects as part of the same operation. In addition, although the filter criteria example shown in
As indicated in 300, in some embodiments, a query specifying filter criteria is received from the client. The filter criteria (e.g., filter criteria 210 in
Assuming the server does not already have an ID cache corresponding to the filter criteria, as indicated in 310, the data source is queried for IDs (e.g., object IDs 140 in
As indicated in 320, object IDs are cached in an ID cache. The object IDs determined at 310 are stored in an ID cache (e.g., ID cache 230 in
As indicated in 330, data objects are retrieved from the data source by ID look up. The data objects corresponding to the object IDs (e.g., object ID 140 in
In some embodiments, as discussed above, query server 120 receives queries (e.g., filter criteria) 210 from clients (e.g., clients 150 in
As indicated in 500, in some embodiments, a query specifying filter criteria is received. As discussed above, filter criteria is one or more variables. The variables may be a range (e.g., 50<x<500), or a limit (e.g., K>0), or have sort criteria (e.g., sort in increasing values).
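As a non-limiting illustration, such filter criteria might be represented in memory as follows (the field names are hypothetical):

    filter_criteria = {
        "bid_amount":  {"min": 0.50, "max": 5.00},                # range: 0.50 < x < 5.00
        "impressions": {"min": 0},                                # limit: impressions > 0
        "sort":        {"field": "cost", "order": "ascending"},   # sort criteria
    }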
As indicated in 510, in some embodiments, assuming an ID cache already exists in the server for the specified filter criteria, object IDs are retrieved from the ID cache matching the query. If the query has been previously received, the ID cache corresponding to the query (e.g., filter criteria) may exist. The object IDs (e.g., object ID 140 in
As indicated in 520, in some embodiments, data objects are retrieved from the data source by ID lookup. The object IDs determined as indicated in 510 above are used (e.g., by query server 120 in
As indicated in 600, in some embodiments, a query specifying filter criteria is received from the client. As discussed above, the filter criteria (e.g., filter criteria 210 in
As indicated in 610, in some embodiments, the query ID is calculated from a hash of the filter criteria. The filter criteria may be hashed or have some other function applied to create a unique (or statistically unlikely to be repeated) fingerprint of the query. The hash or fingerprint of the filter criteria forms the query ID. The query ID is used to identify an existing ID cache or to index a new ID cache.
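One possible sketch of such a fingerprint, offered only as an illustration, canonicalizes the filter criteria (so that equivalent queries produce the same digest) and hashes the result; the digest is then used as the query ID that indexes the ID cache:

    import hashlib
    import json

    existing_id_caches = {}  # query ID -> ID cache (list of object IDs)

    def query_fingerprint(filter_criteria):
        # Canonicalize so that field ordering does not change the fingerprint.
        canonical = json.dumps(filter_criteria, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    query_id = query_fingerprint({"bid_amount": {"min": 0.50, "max": 5.00},
                                  "impressions": {"min": 0}})
    id_cache = existing_id_caches.get(query_id)  # None if no valid ID cache exists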
In some embodiments, if there is not an existing valid ID cache for a query, as indicated in 620, the data source is queried for IDs of objects matching the filter criteria, as indicated in 630. As discussed above in
As indicated in 640, in some embodiments, a new ID cache (e.g. ID cache 230 in
As indicated in 650, the data objects are retrieved from the data source by object ID lookup. As discussed above, the data objects are stored in one or more data sources and identified by object ID (e.g., object ID 140 in
In some embodiments, if there is an existing valid ID cache for a query, as indicated in 620, the object IDs are retrieved from the ID cache, as indicated in 670. As discussed above, if a query (e.g. filter criteria 210 in
As indicated in 680, in some embodiments, the data objects are retrieved from the data source by ID lookup (e.g., by primary key access). The retrieved object IDs, as indicated in 670, determine the data objects to be retrieved from one or more data sources (e.g., data sources 110 in
As indicated in 700, in some embodiments, information corresponding to modification of a data source is received. In some embodiments, a data source (e.g., data sources 110 in
As indicated in 710, in some embodiments, ID caches affected by the modification are determined. In response to receiving the indication that one or more data sources have been modified, a query server (e.g., query server 120 in
As indicated in 720, in some embodiments, the affected ID caches (e.g., ID caches 230 in
In some embodiments, query server 120 receives queries (e.g., filter criteria) 210 from clients (e.g., clients 150 in
Sub-criteria query ID 820, in some embodiments, identifies or indexes an ID cache 830 populated with the object IDs (e.g., object IDs 140 in
ID cache 830, in some embodiments, stores object IDs (e.g., object IDs 140 in
ID cache joiner 840, in some embodiments, determines the intersection of the object IDs populated in ID caches 830 identified by query ID 820. As discussed above, query ID 820 is calculated from sub-criteria 810. The object IDs in common between ID caches 830a, 830b and 830c identified by query ID 820a, 820b and 820c determine the object IDs that results builder 850 use to look up data in data sources 110. The common object IDs (e.g., object IDs 140 in
Results builder 850, in some embodiments, retrieves the data for the data objects from data sources 110 by object ID lookup. Results builder 850 receives the common object IDs as determined by the intersection of the ID caches matching sub-criteria 810. Results builder 850 retrieves data via object ID lookup in data sources 110. Results builder 850 combines the retrieved data. The results are returned to the client (e.g., result objects 220).
Data sources 110, in some embodiments, are databases or other systems (e.g., servers) configured to store data. The data sources may exist in a distributed system, in some embodiments. The data objects stored in data sources have different portions of their data stored in each data source 110. For example, a particular data source (e.g., data source 1, 110a) may store configuration data. As another example, a particular data source (e.g., data source 2, 110b) may store historical performance data or custom assignments.
As an example, data sources 110 may store transactional and analytics data for network-based marketing campaigns. A client may request data based on four search criteria. The search criteria may be a range for a bid amount (e.g., $0.50<bid amount<$5.00), the number of impressions (e.g., impressions>0), the number of clicks (e.g., clicks<1000) and the cost (e.g., cost>$2.00). The data 130 corresponding to the search criteria is stored in multiple data sources. For example, the bid amount and the number of impressions may be stored in a first data source (e.g., data source 1, 110a), the number of clicks in a second data source (e.g., data source 2, 110b) and the cost in a third data source (e.g., data source 3, 110c). In response to receiving a query from a client 150, query server 120 calculates a respective query ID for each sub-criteria and determines that the ID cache (e.g. ID cache 830) exists for each sub-criteria (e.g. sub-criteria 810). An ID cache joiner (e.g., ID cache joiner 840) receives the object IDs from the ID caches (e.g., ID caches 830) and performs the intersection of the object IDs for each respective ID cache. The common object IDs determined from the intersection of the object IDs from each respective ID cache are used by a results builder (e.g., results builder 850) to look up the data in the respective data source (e.g., data sources 110). The data is combined and returned to the client.
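The following sketch, provided only as an illustration and reusing the hypothetical query_fingerprint above, shows how per-data-source sub-criteria might index their own ID caches, how the cached ID lists might be intersected, and how a results builder might combine the per-source data for the common object IDs. The source names and fetch_by_ids method are assumptions for the example, not elements of any particular embodiment.

    sub_criteria_by_source = {
        "source_1": {"bid_amount": {"min": 0.50, "max": 5.00}, "impressions": {"min": 0}},
        "source_2": {"clicks": {"max": 1000}},
        "source_3": {"cost": {"min": 2.00}},
    }

    def join_and_fetch(sub_criteria_by_source, id_caches, data_sources):
        # Intersect the ID caches indexed by each sub-criteria's query ID.
        common_ids = None
        for sub_criteria in sub_criteria_by_source.values():
            cached_ids = set(id_caches[query_fingerprint(sub_criteria)])
            common_ids = cached_ids if common_ids is None else common_ids & cached_ids
        # Results builder: fetch each source's portion of the common objects
        # by object ID lookup and merge the portions per object.
        results = {}
        for source in data_sources.values():
            for obj in source.fetch_by_ids(sorted(common_ids)):  # hypothetical method
                results.setdefault(obj["object_id"], {}).update(obj)
        return list(results.values())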
Query server 120 queries the ID cache 230 for the object IDs 140 of the data in the data source matching the filter criteria, and retrieves the data from data source 110 via object ID lookup. However, the object IDs satisfying the two search criteria stored in the first data source do not necessarily satisfy all four search criteria. To determine the object IDs that match all four of the search criteria, the results of the query against the first data source are joined with the query results from the second and third data sources. The query server queries the second data source to determine the object IDs for the data corresponding to the number-of-clicks criterion, and queries the third data source to determine the object IDs corresponding to the cost criterion. As with the first data source, the results from the second and third data sources each satisfy only their own portion of the search criteria. The query server joins (e.g., intersects) the results from each of the respective data sources to determine the object IDs that match all of the search criteria described above. The query server uses the joined object ID results to retrieve the data objects from the data sources to present to the client.
As indicated in 900, in some embodiments, a query is received from a client specifying filter criteria. As discussed above, the queries or filter criteria (e.g., filter criteria 210 in
As indicated in 910, in some embodiments, the query is broken down into disjoint sub-criteria per data source. For example, if a given data source stores criteria x and k (e.g., data source 1 (110a) in
As indicated in 920, in some embodiments, the query ID for each disjoint sub-criteria is calculated. As discussed above, a query ID (e.g., query ID 820 in
As indicated in 940, in some embodiments, the intersection of object IDs from the ID caches is determined. As discussed above, ID caches exist for each sub-criteria. Once ID caches matching the sub-criteria are determined, as indicated in 930 above, the intersection of the ID caches is determined (e.g., ID cache joiner 840 in
As indicated in 950, in some embodiments, data objects are retrieved from each data source using object ID look up for object IDs from the intersection of ID caches. As described above, the common object IDs determined in 940 above, are used to look up data in the data sources and retrieve the data objects from each data source.
As indicated in 960, in some embodiments, the results are combined and returned to the client. As described above, components of each data object are stored in one or more data sources (e.g., data sources 110 in
For example, an ID cache 830c has one or more object IDs 140 as determined by the sub-criteria (e.g., sub-criteria 810c in
Table 1090 depicts the sorted intersection of the three ID caches. Object ID 4, object ID 1349, object ID 28 and so on were common to the three ID caches and are ordered according to the order of ID cache 830c on the left-hand side, since ID cache 830c corresponds to the sub-criteria for which the sort was specified. Results ID 1030 indicates the sorted order of the results in table 1090. For example, in ID cache 830c for data source 3, the order is object ID 52, object ID 4, object ID 1349, and so on. Object ID 52 is dropped since object ID 52 does not have a common object ID in ID cache 830a or 830b. However, object ID 4 and object ID 1349 also populate ID caches 830a and 830b. In results table 1090, object ID 4 and object ID 1349 populate the table in the order of sort ID 1020.
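A small, purely illustrative reconstruction of this sorted join (with hypothetical cache contents) follows; the common object IDs are kept in the order of ID cache 830c, which carried the sort, and are assigned sequential result IDs:

    cache_830a = [7, 4, 1349, 28, 91]       # hypothetical contents, data source 1
    cache_830b = [4, 28, 1349, 63]          # hypothetical contents, data source 2
    cache_830c = [52, 4, 1349, 28]          # hypothetical contents, data source 3 (sorted)

    common = set(cache_830a) & set(cache_830b) & set(cache_830c)
    results_table = list(enumerate((oid for oid in cache_830c if oid in common), start=1))
    # results_table == [(1, 4), (2, 1349), (3, 28)]; object ID 52 is dropped
    # because it does not appear in ID caches 830a and 830b.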
Result ID 1030 not only preserves the sort ID order from ID cache 830c, but provides for fast paging through results. For example, table 1090 may have one thousand Results ID/Object ID pairs entered in the table but only the first twenty-five are returned to the client as a first page of results. The client (e.g., client 150 in
As indicated in 1100, in some embodiments, a results page request is received from the client. The requested page may be a next page, or a particular numbered page of results.
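For illustration only, and continuing the hypothetical results_table sketched above, the result ID range for a requested page might be computed as in the steps described next:

    def page_object_ids(results_table, page_number, page_size=25):
        # A page covers a contiguous range of result IDs in the joined table.
        start = (page_number - 1) * page_size + 1   # first result ID on the page
        end = start + page_size - 1                 # last result ID on the page
        return [object_id for result_id, object_id in results_table
                if start <= result_id <= end]

    # The returned object IDs are then used for direct object ID lookup in the
    # data sources; e.g., page 2 with a page size of 25 covers result IDs 26-50.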
As indicated in 1110, in some embodiments, a results ID range for a requested page is determined. In response to receiving a page request from a client (e.g., client 150 in
As indicated in 1120, object IDs from the joined result table are retrieved for the results IDs in the determined range. As discussed above, the object IDs (e.g., object ID 140 in
As indicated in 1130, data objects are retrieved from data sources using object ID look up. With the object IDs (e.g., object ID 140 in
As indicated in 1200, a query specifying filter criteria is received from a client. As discussed above, the filter criteria (e.g., filter criteria 210 in
As indicated in 1220, the results for the filter criteria received from the client are retrieved using the ID caches technique. As discussed above, the ID caches technique creates and stores ID caches for filter criteria received from the client. The filter criteria is used to determine the query IDs for the caches and to determine the object IDs for the data corresponding to the filter criteria in the data sources. The object IDs populate the ID caches and are used to look up data in the data sources when subsequent matching queries are received.
As indicated in 1260, if the initial page of results has not already been returned via the traditional method, then the initial page results are returned to the client, as indicated in 1270. Subsequent page requests use the ID caches technique, as indicated in 1280. As described above in
As indicated in 1260, in some embodiments, if the initial page of results has already been returned via the traditional method, then the initial page of results from the ID caches technique are not returned. As indicated in 1280 (as described above), in some embodiments, subsequent page requests use the ID caches technique.
As indicated in 1210, traditional data source queries using the filter criteria are performed in parallel with the ID caches technique at 1220. In response to receiving the query specifying filter criteria from the client, the filter criteria is used to locate the data objects in the data sources and retrieve the data objects from the data sources.
As indicated in 1230, the results are stitched together from the data source queries. As discussed above, one or more data sources store components of data from data objects. The data is retrieved from the multiple data sources and stitched together into results including only data objects satisfying all the filter criteria.
As indicated in 1240, in some embodiments, if the initial page of results from the traditional query is not ready before the results of the ID cache process described above, then the method ends at 1240 and subsequent data retrievals are performed as described at 1280. If the initial page of results is ready before the ID cache process completes, then the initial page of results is returned to the client, as indicated in 1250. The ID caches technique, as described above, is nevertheless used for subsequent page requests.
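One non-limiting way to sketch this parallel arrangement in Python, with traditional_query and id_cache_query as hypothetical callables that each return a first page of results, is:

    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    def first_page(filter_criteria, traditional_query, id_cache_query):
        pool = ThreadPoolExecutor(max_workers=2)
        futures = {pool.submit(traditional_query, filter_criteria): "traditional",
                   pool.submit(id_cache_query, filter_criteria): "id_caches"}
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        winner = next(iter(done))
        pool.shutdown(wait=False)  # let the slower path finish in the background
        # Return the first page from whichever path finished first; subsequent
        # page requests use the ID caches technique in either case.
        return futures[winner], winner.result()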
In the illustrated embodiment, computer system 1300 includes one or more processors 1310 coupled to a system memory 1320 via an input/output (I/O) interface 1330. Computer system 1300 further includes a network interface 1340 coupled to I/O interface 1330, and one or more input/output devices 1350, such as cursor control device 1360, keyboard 1370, audio device 1390, and display(s) 1380. It is contemplated that some embodiments may be implemented using a single instance of computer system 1300, while in other embodiments multiple such systems, or multiple nodes making up computer system 1300, may be configured to host different portions or instances of embodiments. For example, in one embodiment some elements may be implemented via one or more nodes of computer system 1300 that are distinct from those nodes implementing other elements.
In various embodiments, computer system 1300 may be a uniprocessor system including one processor 1310, or a multiprocessor system including several processors 1310 (e.g., two, four, eight, or another suitable number). Processors 1310 may be any suitable processor capable of executing instructions. For example, in various embodiments, processors 1310 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 1310 may commonly, but not necessarily, implement the same ISA.
System memory 1320 may be configured to store program instructions and/or data accessible by processor 1310. In various embodiments, system memory 1320 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing desired functions, such as those described above for caching object identifiers at a server, are shown stored within system memory 1320 as program instructions 1325 and data storage 1335, respectively. In other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 1320 or computer system 1300. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system 1300 via I/O interface 1330. Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1340. Program instructions may include instructions for implementing the techniques described with respect to
In some embodiments, I/O interface 1330 may be configured to coordinate I/O traffic between processor 1310, system memory 1320, and any peripheral devices in the device, including network interface 1340 or other peripheral interfaces, such as input/output devices 1350. In some embodiments, I/O interface 1330 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 1320) into a format suitable for use by another component (e.g., processor 1310). In some embodiments, I/O interface 1330 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 1330 may be split into two or more separate components. In addition, in some embodiments some or all of the functionality of I/O interface 1330, such as an interface to system memory 1320, may be incorporated directly into processor 1310.
Network interface 1340 may be configured to allow data to be exchanged between computer system 1300 and other devices attached to a network, such as other computer systems, or between nodes of computer system 1300. In various embodiments, network interface 1340 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
Input/output devices 1350 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, multi-touch screens, or any other devices suitable for entering or retrieving data by one or more computer system 1300. Multiple input/output devices 1350 may be present in computer system 1300 or may be distributed on various nodes of computer system 1300. In some embodiments, similar input/output devices may be separate from computer system 1300 and may interact with one or more nodes of computer system 1300 through a wired or wireless connection, such as over network interface 1340.
Memory 1320 may include program instructions 1325, configured to implement embodiments of caching object identifiers at a server as described herein, and data storage 1335, comprising various data accessible by program instructions 1325. In one embodiment, program instructions 1325 may include software elements of a method illustrated in the above Figures. Data storage 1335 may include data that may be used in embodiments described herein. In other embodiments, other or different software elements and/or data may be included.
Those skilled in the art will appreciate that computer system 1300 is merely illustrative and is not intended to limit the scope of the methods and systems for caching object identifiers at a server as described herein. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including computers, network devices, internet appliances, PDAs, wireless phones, pagers, etc. Computer system 1300 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system. In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available.
Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-accessible medium separate from computer system 1300 may be transmitted to computer system 1300 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations. In some embodiments, portions of the techniques described herein may be hosted in a cloud computing or distributed computing infrastructure.
Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible/readable storage medium may include a non-transitory storage media such as magnetic or optical media, (e.g., disk or DVD/CD-ROM), volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link.
Various modifications and changes may be made to the above technique as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended that the invention embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense. While the invention is described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention. Any headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. As used throughout this application, the singular forms “a”, “an” and “the” include plural referents unless the content clearly indicates otherwise. Thus, for example, reference to “an element” includes a combination of two or more elements. Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic computing device. In the context of this specification, therefore, a special purpose computer or a similar special purpose electronic computing device is capable of manipulating or transforming signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special purpose computer or similar special purpose electronic computing device.