The present disclosure, in general, relates to computer devices and methods for storing, indexing and searching data in computer memory. The trend of increasingly large data sets and high-velocity data calls for efficient data storage structures and methods.
Historically, the information technology industry has utilized binary search trees (BSTs) for storing and indexing data. A BST is a node-based binary tree data structure where left subtrees contain only keys that are less than the key in the parent node, and right subtrees contain only keys that are greater than the key in the parent node. A subtree of a tree T is a tree consisting of a node in T and all of its descendants in T. Another type of BST is the T−tree, in which each node contains an ordered array of keys, a left child node, and/or a right child node.
In addition to BSTs, the B−tree, which is a multi-way search tree, has also been used for data storage and indexing. A B−tree keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic time. The B−tree is a generalization of a binary search tree in that a node can have more than two children. Unlike self-balancing binary search trees, the B−tree is optimized for systems that read and write large blocks of data. It is commonly used in databases and file systems.
Hash tables are usually involved in data lookup. Hash tables can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the correct value can be found. Ideally, the hash function assigns each key to a unique bucket, but this ideal situation is rarely achievable in practice (unless the hash keys are fixed; i.e., new entries are never added to the table after it is created). Instead, most hash table designs assume that hash collisions (different keys that are assigned by the hash function to the same bucket) will occur and must be accommodated in some way.
In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches, and sets. Hash tables, however, do not offer ordered search. Further, the cost of resizing a hash table grows as the data size becomes large.
The present disclosure provides, in one embodiment, computer methods and systems for data storage, indexing and search, which incorporate tree structures and hash tables. In accordance with one aspect of the present disclosure, provided is a method of searching a query key in a data storage, comprising (a) accessing, by a computer, a binary tree comprising a plurality of nodes, wherein at least one of the nodes is a root node, each node has no more than two child nodes, a left child node and a right child node, and at least one node that is not the root node has both child nodes, and wherein each node comprises a hash table to store one or more keys, all of which are greater than all keys stored in the left child node, if any, thereof, and are smaller than all keys stored in the right child node, if any, thereof; (b) determining, starting at the root node, whether the query key is between the largest key and the smallest key stored in the node; and (c) if the query key is greater than the largest key in the node, (i) repeating steps (b) and (c) at the right child node thereof, or (ii) terminating the search if no right child node exists, thereby determining that the query key does not exist in the binary tree; if the query key is smaller than the smallest key in the node, (i) repeating steps (b) and (c) at the left child node thereof, or (ii) terminating the search if no left child node exists, thereby determining that the query key does not exist in the binary tree; and if the query key is not greater than the largest key and not smaller than the smallest key in the node, searching the node to (i) find the query key or (ii) determine that the query key does not exist in the binary tree if it is not found in the node.
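The search of steps (a) through (c) can be sketched as follows. This is a minimal illustration only: the node layout and names (`AbaxNode`, `min_key`, `max_key`) are assumptions, and a Python dict stands in for each node's hash table.

```python
class AbaxNode:
    """Illustrative node: a hash table of keys plus min/max key bounds."""
    def __init__(self, keys, left=None, right=None):
        self.table = {k: True for k in keys}  # hash table storing the node's keys
        self.min_key = min(keys)              # smallest key in this node
        self.max_key = max(keys)              # largest key in this node
        self.left = left
        self.right = right

def search(root, query):
    """Steps (b)-(c): return True if query exists in the tree, else False."""
    node = root
    while node is not None:
        if query > node.max_key:       # greater than largest key: go right
            node = node.right
        elif query < node.min_key:     # smaller than smallest key: go left
            node = node.left
        else:                          # bounded by this node: probe its hash table
            return query in node.table
    return False                       # ran off the tree: key does not exist
```

Because each node's key range excludes the ranges of its children, at most one node ever needs to be probed for a given query.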
In some aspects, the hash table in each node comprises a plurality of buckets, each bucket configured to store one or more keys. In some aspects, the keys in each bucket are sorted.
In some aspects, each key is further associated with a forward reference directed at the smallest key of all keys, if any, in the hash table that are greater than the key and a backward reference directed at the largest key of all keys, if any, in the hash table that are smaller than the key.
In some aspects, the binary tree comprises at least four, or five, or six or more layers of nodes. In some aspects, each node comprises a reference directed at each of the parent and child nodes thereof and each of the parent and child nodes thereof comprises a reference directed at said node.
Also provided, in one embodiment, is a method of searching a query key in a data storage, comprising (a) accessing, by a computer, a hash structure in the storage, which hash structure comprises (1) a first array (O[m]) that comprises m keys where the keys are sorted in the first array, (2) a second array (E[m]) of at least the same size (m) as the first array, which second array comprises, at each position (i), hash value of the key (O[i]) located at the same position (i) in the first array with a hash function (hash_f), that is, E[i]=hash_f(O[i]), and (3) a third array (I[n]) having a size (n) that is larger than the size (m) of the first array, wherein the values (E[i]'s) in the second array (E) are non-negative integers, and wherein the third array comprises, at the E[i]th position, the position (i) of the key O[i] in the first array; (b) obtaining the hash value (h) of the query key with the hash function; (c) obtaining the value at position (h) of the third array, I[h]; and (d) locating the query key at position (I[h]) of the first array.
In some aspects, the size (n) of the third array is at least 1.3 times the size (m) of the first array. In some aspects, the data storage comprises a tree having a plurality of nodes and each node comprises at least a hash structure of step (a), and wherein the nodes do not overlap in terms of ranges of keys stored in each hash structure.
In some aspects, the tree is a binary tree. In some aspects, the tree is a B−tree or a B+tree.
Computer systems and non-transitory computer-readable media are also provided with embedded program code to carry out the disclosed methods.
Provided as embodiments of this disclosure are drawings which illustrate by exemplification only, and not limitation, wherein:
It will be recognized that some or all of the figures are schematic representations for exemplification and, hence, that they do not necessarily depict the actual relative sizes or locations of the elements shown. The figures are presented for the purpose of illustrating one or more embodiments with the explicit understanding that they will not be used to limit the scope or the meaning of the claims that follow below.
This disclosure, in one embodiment, provides new types of structures for data storage and key-value lookup. The new data structures exhibit the features of both search trees and hash tables, and can be considered ordered hash tables.
In one embodiment, the present disclosure provides computer methods and systems for storing, indexing and searching data, with a hash table-embedded search tree, where the node keys are organized like an abacus, hence named “Abax tree.”
In some aspects, an Abax tree refers to a binary tree that includes a plurality of nodes, wherein at least one of the nodes is a root node, each node has no more than two child nodes, a left child node and a right child node, and at least one node that is not the root node has both child nodes, and wherein each node comprises a hash table to store one or more keys, all of which are greater than all keys stored in the left child node, if any, thereof, and are smaller than all keys stored in the right child node, if any, thereof.
Such an Abax tree is binary and can be viewed as embedding hash tables into one or more of the binary nodes. An Abax tree, however, can also take other forms, such as that of a B−tree, B+tree, T−tree, multi-way search tree, or any other type of search tree. Like a binary Abax tree, for instance, a B−tree based Abax tree embeds hash tables into one or more nodes of a B−tree, so long as all the keys stored in a hash table are within the range defined by the adjacent or connected nodes.
The keys in each node of an Abax tree, also referred to as an “Abax node,” can be mapped to a plurality of buckets with a hash function. The number of keys stored in one Abax node may be very large (for instance, millions of keys in one node or even more), thus drastically reducing the height of the tree and improving data access speed. Abax nodes are dynamically split and merged locally, requiring no global operation in the tree. Hash collisions are resolved with storage space of controlled size so that search time in a bucket is constant.
Two types of Abax trees are described in detail in the present disclosure. The first, named “Simple Abax Tree” (SAT) or simply an “Abax tree,” is a relatively simple Abax tree structure. The second, named “Composite Abax Tree” (CAT), is a composite tree that is structured similarly to a SAT except that each bucket contains a Simple Abax Tree. Once the structure of a SAT is shown, it will be clear how a CAT is organized.
The disclosure then introduces the ANIANS data structure and related methods, in which a separate indexing structure, the Abax Node Index (ANI), is built and queried for fast access to Abax nodes in an Abax Node Store (ANS).
As a SAT grows larger, the tree may become unbalanced. A SAT is balanced with a tree rotation technique after a new node is added to the SAT. Embodiments of balancing a SAT include the AVL tree balancing method, the red-black tree balancing mechanism, or any other binary tree balancing method. A person skilled in the art will be familiar with the requisite techniques.
In each of the buckets, keys are stored in an Abax Chain (AC), which is a data storage structure that allows fast access to the keys in the bucket. The capacity of an AC is the maximum number of keys that may be stored in the AC. Embodiments of the present disclosure include three types of ACs: the first is simply an ordered array of keys, denoted SAC; the second is a perfect hash chain, denoted HAC; the third is an order-preserving perfect hash chain, denoted OHAC. The maximum length, namely the capacity, of the chain is denoted by D (or d). A bucket is full when the number of keys in the bucket is equal to the capacity D. The value of D may be a fixed number for the whole SAT, or a variable as a function of the height of a node.
In some aspects, the capacity d is large enough to minimize overflow and node splits. For instance, suppose:
Horizontal: number of buckets = B
Vertical in each bucket: number of slots = D
Total number of keys to be stored in all buckets = T
To estimate the probability that any bucket will overflow (number of keys > D in a bucket), let F = fD, where f is the average load factor of all the buckets. Then T = FB = fDB, so we have B = T/(fD).
Assuming T is fixed, the overflow probability P will be a function of (f, D):
When a desired value of f is set (for example, to 0.95), then P will decrease as the value of D increases. This means a bucket with more slots will reduce the probability of overflow and hence the probability of node-splitting. However, if the value of D is too big, insert and delete operations will be negatively affected.
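This trade-off can be checked numerically. The sketch below is an illustrative assumption, not the disclosure's own derivation: it models the number of keys landing in a given bucket as Binomial(T, 1/B), with B chosen as B = T/(fD) per the relation above.

```python
def bucket_overflow_prob(T, B, D):
    """P(a given bucket receives more than D keys), assuming the T keys
    hash independently and uniformly into B buckets (binomial model)."""
    p = 1.0 / B
    pmf = (1.0 - p) ** T            # P(X = 0) for X ~ Binomial(T, p)
    cdf = pmf
    for k in range(D):
        # P(X = k+1) from P(X = k), avoiding huge binomial coefficients
        pmf *= (T - k) / (k + 1) * p / (1.0 - p)
        cdf += pmf
    return 1.0 - cdf

# With load factor f = 0.95 and B = T/(f*D): a larger D yields a lower
# overflow probability, matching the discussion above.
f, T = 0.95, 100_000
p_small_d = bucket_overflow_prob(T, round(T / (f * 32)), 32)
p_large_d = bucket_overflow_prob(T, round(T / (f * 128)), 128)
assert p_large_d < p_small_d
```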
Accordingly, in some aspects, the capacity D is at least 128. In one aspect, the number of buckets in each node is at least 1,280 and the total number of keys in a node is at least 163,840. In another aspect, the capacity D is not greater than 195.
In another embodiment, keys may be inserted into a different bucket, the overflow bucket, in the same node when the original bucket for the keys is full. In one embodiment, we select the bucket that contains the minimum number of keys as the overflow bucket. In this case, we add an overflow pointer in the original bucket so that we can track the overflow keys. This technique increases the average load factor of Abax nodes by a few percent.
The product of the chain capacity d and the number of buckets w is the capacity, denoted by C, of an Abax node. When a greater value of d is used in an Abax tree, insert and search operations tend to slow down, but the Abax nodes are more densely populated and fewer node splits occur. The capacity C of an Abax node may be programmed to vary dynamically. New nodes may be allocated with greater capacity and be rotated from lower levels to higher levels so that more keys can be queried with fewer levels in the tree.
In some embodiments, a SAC includes an array with sequential index from 0 to d−1. Keys are stored in the array with order determined by a user-defined key-comparison function. Inserting or deleting a key in the array may require shifting other keys in the array. Search of a key in the array can be performed by binary search in logarithmic time. Embodiments of SAC also include binary search tree, T−tree, multi-way search tree, or any other type of search tree.
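The array-based SAC can be sketched as follows. The class name and the fixed-capacity handling are illustrative assumptions, and Python's default key comparison stands in for the user-defined key-comparison function; `bisect` provides the logarithmic-time binary search.

```python
from bisect import bisect_left, insort

class SAC:
    """Simple Abax Chain sketch: an ordered array of keys with capacity d."""
    def __init__(self, d):
        self.d = d          # chain capacity
        self.keys = []      # ordered array, indices 0 .. d-1

    def search(self, key):
        """Binary search; returns the index of key in the array, or -1."""
        i = bisect_left(self.keys, key)
        return i if i < len(self.keys) and self.keys[i] == key else -1

    def insert(self, key):
        """Insert in order, shifting greater keys; fails when the chain is full."""
        if len(self.keys) >= self.d:
            return False    # bucket full: overflow handling would be needed
        insort(self.keys, key)
        return True
```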
In some embodiments, a HAC includes a hash table populated by hashing the keys with a perfect hash function (PHF). A perfect hash function for a key set is a hash function that maps distinct elements in the key set to a set of integers with no collisions. The hash output value is an index into an array with capacity d. When the number of keys in the bucket is equal to the capacity of the storage array, d, the PHF essentially becomes a minimal perfect hash function (MPHF). As the number of keys in the bucket increases, it takes longer to find the PHF. One embodiment of finding the PHF and MPHF for keys in each bucket utilizes a hypergraph-based method. A HAC provides good performance for random access but poor performance for range queries.
In some embodiments, an OHAC includes a hash table populated by hashing the keys with an order-preserving perfect hash function (OPPHF). In addition to the properties of a PHF, an OPPHF preserves the order of the input keys, that is, the mapped integers used as indices into the storage array follow the same order as the input keys. When the number of keys in the bucket is equal to the capacity of the storage array, d, the OPPHF becomes an order-preserving minimal perfect hash function (OPMPHF). One embodiment of finding the OPPHF and OPMPHF for keys in each bucket utilizes the CHM method. When a new key is inserted into an OHAC, a new OPPHF is found with the CHM method.
When perfect hash functions are used to hash the keys in a node, information about the hash functions such as hash function type, random number seeds, graph mapping functions, etc. are stored in the header fields of each bucket.
In
A SAT may be either partially ordered (PO) or completely ordered (CO). In a partially ordered SAT (POSAT), all the nodes are orderly arranged. The keys in each bucket are also stored in order if the SAC or OHAC storage structure is selected by the end user for all buckets. However, the complete set of keys across all the buckets is not readily available in order. The keys in a node are merged from all buckets only at query time to generate ordered output. We merge multiple lists of ordered keys by inserting and popping elements from a binary heap, which will be described later in this disclosure. If the keys in each bucket are not ordered, we simply sort all keys in the node with a fast sorting method such as quicksort.

In a completely ordered SAT (COSAT), all keys in a node are kept in order whenever a key is inserted into or deleted from the node. This is achieved by maintaining an ordered linked list of keys across different buckets. When a new key is inserted into a node, we include the new key in the ordered linked list at the appropriate position so that the order is maintained. When a key is deleted from the node, we exclude the key from the ordered linked list. At query time, the linked list is traversed directly to output an ordered list of keys without any extra merging or sorting operation.

If complete order is desired by the end user, then we use a pointer in each element of the Abax chain to point to one of the remaining elements in the chain. The pointer contains a global address in the memory space. Optionally, the pointer may contain a local address that has the scope of a SAT node instead of the whole memory address space. In many cases, the local address uses less memory than a global address. For instance, in a 64-bit computer system, a global address may require at least 6 bytes.
If a local address is used, it may use only 2 bytes for the index of the bucket and 1 byte for the index into the array in a SAC, using a total of only 3 bytes as opposed to the 6 bytes of a global address. One may also use position offsets from the start location of the node to compute local addresses.
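The 3-byte local address layout above can be sketched as follows; the function names and the big-endian byte order are illustrative assumptions.

```python
def pack_local_address(bucket_index, slot_index):
    """Encode a node-local key address in 3 bytes:
    2 bytes for the bucket index, 1 byte for the slot in the SAC array."""
    assert 0 <= bucket_index < (1 << 16) and 0 <= slot_index < (1 << 8)
    return bytes([bucket_index >> 8, bucket_index & 0xFF, slot_index])

def unpack_local_address(addr):
    """Recover (bucket_index, slot_index) from the 3-byte local address."""
    return (addr[0] << 8) | addr[1], addr[2]
```

This halves the per-pointer overhead relative to the 6-byte global address mentioned above.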
The method to search for a key in a SAT is illustrated in
The method for inserting keys into a SAT is illustrated in
If the test result in step S1 is positive (Yes), then we compare the key K with the minimum key in the current node. If K is less than the minimum key and there exists a left child node, then control goes to step S4, otherwise the control goes to step S7. In step S4, the greatest lower node below the current node is found and the maximum key value in the greatest lower node is denoted MaxK. The greatest lower node is illustrated by node 8D in
Step S12 in
If the test result in step S15 is negative, we find a pivot key in the current node as illustrated in step S17. The pivot key is one of the keys stored in the current node and is used to determine which keys to move from the current node to child nodes. In a preferred embodiment of the disclosure, the median of all keys stored in a node is selected as the pivot in the node. The process of finding the median of all keys in the node is expedited by recognizing the fact that all the buckets in the node contain ordered keys so that the median of medians in all buckets may be searched. One embodiment of finding the median of all bucket-medians is the divide-and-conquer method with 5-element blocks.
Another embodiment of finding the median of all keys in the current node is maintaining a median every time a key is inserted into or deleted from the current node, namely a running median. The running median is applicable in a SAT when total order of all keys is selected by the end user. We use a tardy pointer pointing to the running median in the ordered linked list. The tardy pointer is a composite structure which contains an address element storing the address of a key in the node, and a trigger element for moving the tardy pointer. When the tardy pointer points exactly to the median key, the trigger value is zero. When a key greater than the running median is inserted into the node, or when a key less than the running median is deleted from the node, the trigger value in the tardy pointer is incremented by one. If the trigger value is equal to two, then we move the tardy pointer to the next greater key by updating the address element and reset the trigger value to zero. Conversely, when a key less than the running median is inserted into the node, or when a key greater than the running median is deleted from the node, the trigger value in the tardy pointer is decremented by one. If the trigger value is equal to minus two, then we move the tardy pointer back to the previous smaller key and reset the trigger value to zero. If the magnitude of the trigger value is one, the tardy pointer is not moved.
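The tardy-pointer mechanism can be sketched as follows. This is an illustrative assumption in several respects: a sorted Python list stands in for the node's ordered linked list, list indices stand in for key addresses, keys are assumed distinct, and deletion is omitted for brevity.

```python
from bisect import bisect_left

class RunningMedian:
    """Running median maintained by a tardy pointer (sketch)."""
    def __init__(self):
        self.keys = []      # stands in for the node's ordered linked list
        self.pos = -1       # address element of the tardy pointer
        self.trigger = 0    # trigger element

    def median(self):
        return self.keys[self.pos]

    def insert(self, key):
        i = bisect_left(self.keys, key)
        self.keys.insert(i, key)
        if self.pos == -1:              # first key becomes the median
            self.pos = 0
            return
        if i <= self.pos:               # list shifted under the pointer:
            self.pos += 1               # keep pointing at the same key
        if key > self.keys[self.pos]:   # inserted above the running median
            self.trigger += 1
            if self.trigger == 2:       # advance to the next greater key
                self.pos += 1
                self.trigger = 0
        else:                           # inserted below the running median
            self.trigger -= 1
            if self.trigger == -2:      # step back to the previous smaller key
                self.pos -= 1
                self.trigger = 0
```

When the trigger is zero the pointer sits exactly on the median; at magnitude one it lags by at most one position, which is the "tardy" behavior described above.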
Another embodiment of selecting the pivot in a node is simply picking the median in any randomly-chosen bucket. The random median in general is not equal to the exact median of all the keys in the node but is an expedient approach.
Whenever a new key is inserted into a node, we update the lower bound (minimum key) and upper bound (maximum key) header data by comparing them with the new key. If the new key is less than the current lower bound, then the lower bound is substituted by the new key. If the new key is greater than the current upper bound, then the upper bound is substituted by the new key. In addition, the address of the lower bound key or upper bound key is also updated if the substitution happens. The addresses of the lower bound and upper bound keys are needed for maintaining the ordered linked list in a node. One with skill in the art will appreciate that the lower bound and upper bound keys in a node may also need to be updated when keys are deleted from the node. Saving the lower bound key and the upper bound key avoids having to search for them in the node when these keys are used in search or update operations on the node.
Referring to
A segment of the ordered linked list is illustrated in
Insertion of a new key to SAT with ordered linked list can also be carried out readily. Because the keys in the bucket are sorted, one can use binary search for the key that is the successor of the new key. A successor of the new key is the minimum key from all the keys that are greater than the new key in the bucket.
Once the successor is found, shift the successor and all the keys that are greater than the successor down one position (in the direction of greater keys). Finally, the new key can be inserted into the original successor location.
Forward and backward reference links can be updated as follows, without limitation. In method 1, for instance, go to the predecessor of the new key in the same bucket, perform standard insert operation in a linked list starting from the predecessor. With reference to
For another example, at a first step a), go to the predecessor of the new key in the same bucket. Then, at step b), go to the next key of the predecessor by following the forward reference; in the bucket that contains the next key, perform a binary search starting from the position of the next key to find the predecessor of the new key in that bucket. At step c), repeat step b) until the predecessor key of the new key is found. At step d), perform a standard insert operation in a linked list. Also, with reference to
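The standard linked-list insert of step d), splicing a new key between its predecessor and successor via the forward and backward references, can be sketched as follows; the `Key` class and field names are illustrative assumptions.

```python
class Key:
    """A stored key with forward and backward references (illustrative)."""
    def __init__(self, value):
        self.value = value
        self.fwd = None   # forward reference: the successor (next greater key)
        self.bwd = None   # backward reference: the predecessor (next smaller key)

def splice_after(pred, new):
    """Standard doubly linked list insertion of `new` right after `pred`,
    updating all four affected references."""
    new.fwd = pred.fwd
    new.bwd = pred
    if pred.fwd is not None:
        pred.fwd.bwd = new
    pred.fwd = new
```

The two keys involved may live in different buckets; only their references change, so no keys are moved.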
Delete operation is illustrated in
A SAT is balanced with tree rotation whenever a child node is allocated and added into the SAT from operations of either inserting keys or splitting Abax nodes. If the end user does not require an Abax tree to be balanced, then tree rotation is not executed.
An embodiment of ANI is illustrated in
Another embodiment of ANI is illustrated in
Another embodiment of ANI is illustrated in
In a preferred embodiment, the ANI is illustrated in
In an embodiment of Abax Node Store (ANS), the Abax nodes in ANS are linked with pointers to form an ordered linked list, as illustrated with an exemplary structure in
In a preferred embodiment, as illustrated with an exemplary structure in
If Abax tree is incorporated in ANS, inserting a key into ANS follows the same procedure as explained in inserting a key in Abax tree illustrated in
It should be noted that when a node containing a key is mapped from the ANI, the key is always greater than or equal to the lower bound key in the node, but the key may be less than, equal to, or greater than the upper bound key stored in the node. This is true when ANI includes the lower bound keys. If the ANI includes the upper bound keys instead of the lower bound keys, then the key to be inserted is always less than or equal to the upper bound key in the node, but the key may be greater than, equal to, or less than the lower bound key stored in the node. If the ANI uses upper bound keys, then successor search of a key is needed.
A range query is a search for a set of keys that are bound by the low and high end points of a range. Range queries and Abax node splits often require sorting of K ordered lists of keys, known as K-way sorting. The method for K-way sorting is described in the following steps. Step one: create a binary heap of size K. Step two: insert the first key of each ordered list into the binary heap. Step three: pop the head element from the binary heap and insert it into the output list. Step four: from the list from which the head element of the heap was fetched, take the next key in that list and insert it into the heap. Steps three and four are then repeated until all of the ordered lists are consumed. The final output list contains the sorted keys from all the lists.
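The K-way sorting steps above can be sketched with Python's `heapq` as the binary heap; carrying the originating list index in each heap entry is what lets step four refill from the correct list.

```python
import heapq

def k_way_merge(lists):
    """Merge K ordered lists of keys into one sorted output list.
    Heap entries are (key, list index, position); the list index both
    identifies the source list and breaks ties between equal keys."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]  # step two
    heapq.heapify(heap)                                            # step one
    out = []
    while heap:
        key, i, j = heapq.heappop(heap)   # step three: pop the head element
        out.append(key)
        if j + 1 < len(lists[i]):         # step four: next key from list i
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out
```

With K lists and N total keys, this produces the sorted output in O(N log K) time.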
A range query may also require searching for the predecessor and the successor of a key. If the low end key and the high end key of a range both exist in an Abax tree, then we do not need to search for the predecessor and successor. However, if one of them cannot be found with an equality search, then a predecessor or successor search is required. The predecessor of a key is the key that precedes it in an ordered list. The successor of a key is the key that succeeds it in the ordered list. In a range query, we find the successor, denoted by S, of the low end point of the range and the predecessor, denoted by P, of the high end point of the range. Then the keys between S and P in a sorted list are returned as output.
We find the predecessor of key K with the following procedure. Step one: we locate the Abax node N where key K would be stored. A process similar to that of equality search illustrated in
In one embodiment, the present disclosure provides a new hash structure (referred to as a mini-hash table) that facilitates data indexing and search. It is noted that, even though such a mini-hash table can be used as a hash table for the Abax trees disclosed herein, it can also be used independently, or in combination with data structures other than Abax trees.
In some aspects, a mini-hash table includes (1) a first array (O[m]) that includes m keys where the keys are sorted in the first array, (2) a second array (E[m]) of at least the same size (m) as the first array, which second array includes, at each position (i), hash value of the key (O[i]) located at the same position (i) in the first array with a hash function (hash_f), that is, E[i]=hash_f(O[i]), and (3) a third array (I[n]) having a size (n) that is larger than the size (m) of the first array, wherein the values (E[i]'s) in the second array (E) are non-negative integers, and wherein the third array includes, at the E[i]th position, the position (i) of the key O[i] in the first array.
Once a mini-hash table is created, searching the table is quick and straightforward. For instance, one can first obtain the hash value (h) of a query key with the hash function, then obtain the value at position (h) of the third array, I[h], and finally locate the query key at position (I[h]) of the first array.
Such a mini-hash table is illustrated in
The keys in array O are hashed with a second hash function, and the hashed value is the index into array I. The values in array I are the index positions of the keys in array O. For example, key 303 has a hash value of zero under the second hash function. Key 303 has index position 1 in array O. So the value of array I at index position 0 is set to 1. The values in array I may be represented by the following equation:
I[j] = m, where f2(O[m]) = j,
for j = 0, 1, . . . , N−1; m = 0, 1, 2, . . . , M−1; and f2 is the second hash function.
Array E contains the indices into array I for the keys in array O. The values of array E may be represented by the following equation:
E[j] = f2(O[j])
For example, key 953 has index position 4 in array O. By applying the hash function f2(), the hash value of key 953 is 6. Therefore, I[6]=4 and E[4]=6.
Linear probing may be used when a collision occurs in array I. When the size of array I increases, the probability of collision in array I becomes smaller. The ratio N/M should be properly maintained to avoid a high probability of collision and a high rate of probing.
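Building and probing the three arrays can be sketched as follows. The function names are hypothetical, `hash_f(key) % n` stands in for the second hash function f2, and, as an assumption, E records the slot actually used in I after linear probing; empty slots in I hold −1, and n > m guarantees that probing for an absent key terminates.

```python
def build_mini_hash(keys, n, hash_f):
    """Build arrays O (sorted keys), E (slot of each key in I), and
    I (position in O indexed by hash slot), with linear probing in I."""
    O = sorted(keys)
    E, I = [], [-1] * n
    for i, key in enumerate(O):
        h = hash_f(key) % n
        while I[h] != -1:          # linear probing on collision
            h = (h + 1) % n
        I[h] = i                   # I maps the slot to the key's position in O
        E.append(h)                # E remembers which slot the key landed in
    return O, E, I

def lookup(query, O, I, n, hash_f):
    """Probe I until the query's position in O is found or an empty slot
    (-1) is reached; returns the position in O, or -1 if absent."""
    h = hash_f(query) % n
    while I[h] != -1:
        if O[I[h]] == query:
            return I[h]
        h = (h + 1) % n
    return -1
```

A lookup thus reaches the key's position in O directly, without a binary search in the bucket.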
Array I provides direct random access to keys in array O, without resorting to binary search in the bucket. Array E, as an optional data structure, stores the hashed values of keys from the second hash function, avoiding the need to re-hash keys that are shifted in array O during insert or delete operations in the bucket. Arrays I and E use a minimal amount of memory. If the size of array O is small (for example, less than 128), then only one byte of memory may be needed for each element in arrays I and E.
They add a relatively small amount of memory overhead to the entire dataset, especially when the size of each key is substantially greater than one byte. The values in arrays E and I need to be maintained properly during insertion, deletion, and node-split operations.
As noted, the size (n) of the third array (array I), in some aspects, is larger than that of the first and second arrays. In one aspect, the ratio between the sizes of the third and first arrays is at least 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, or 2 fold.
The mini-hash table described here can be incorporated into data structures, such as binary trees, B−trees, B+trees or Abax trees, without limitation. In one aspect, the data structure includes a tree having a plurality of nodes and each node comprises at least a hash structure of step (a), and wherein the nodes do not overlap in terms of ranges of keys stored in each hash structure.
Like conventional hash tables, collisions can happen in a mini-hash table. That is, some keys in O(m) can hash to the same value, leading to a collision. For example, key 1 may hash to number 13, and key 8 may also hash to number 13. In such a case, the disclosure also provides methods to resolve the collision.
For instance, in B−trees and B+trees, keys that arrive later overwrite the previous key(s) that hashed to the same value. As an example, key 8 takes hash value 13, and I[13] would point to key 8. Key 1 then appears to be absent from the hash structure, and binary search is used to locate key 1. Thus a positive answer from the hash structure indicates that a query key really exists; however, a negative answer from the hash structure does not mean the query key does not exist in the array O(m) (due to collision and overwriting). In this case, a regular binary search can be performed to search for the query key or its predecessor/successor. In the case of a binary Abax tree, the collision can be resolved by linear probing, as known in the art.
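The overwrite-and-fall-back behavior described for B−trees and B+trees can be sketched as follows; the function names are illustrative. A positive hit from I is definitive, while a miss falls back to binary search in the sorted array O.

```python
from bisect import bisect_left

def build_overwrite_index(O, n, hash_f):
    """B-tree style variant: on collision the later key simply overwrites
    the earlier entry in I, so I never needs probing."""
    I = [-1] * n
    for i, key in enumerate(O):
        I[hash_f(key) % n] = i
    return I

def find(query, O, I, n, hash_f):
    """Return the position of query in O, or -1 if absent."""
    i = I[hash_f(query) % n]
    if i != -1 and O[i] == query:
        return i                  # positive answer from I is trustworthy
    j = bisect_left(O, query)     # negative answer: verify by binary search
    return j if j < len(O) and O[j] == query else -1
```

The index stays a flat array with no probe chains, at the cost of a logarithmic fallback for keys whose entries were overwritten.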
The methodology described here can be implemented on a computer system or network. A suitable computer system can include at least a processor and memory; optionally, a computer-readable medium that stores computer code for execution by the processor. Once the code is executed, the computer system carries out the described methodology.
In this regard, a “processor” is an electronic circuit that can execute computer programs. Suitable processors are exemplified by but are not limited to central processing units, microprocessors, graphics processing units, physics processing units, digital signal processors, network processors, front end processors, coprocessors, data processors and audio processors. The term “memory” connotes an electrical device that stores data for retrieval. In one aspect, therefore, a suitable memory is a computer unit that preserves data and assists computation. More generally, suitable methods and devices for providing the requisite network data transmission are known.
Also contemplated is a non-transitory computer readable medium that includes executable code for carrying out the described methodology. In certain embodiments, the medium further contains data or databases needed for such methodology.
Embodiments can include program products comprising non-transitory machine-readable storage media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media may be any available media that may be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable storage media may comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store desired program code in the form of machine-executable instructions or data structures and which may be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above also come within the scope of “machine-readable media.” Machine-executable instructions comprise, for example, instructions and data that cause a general purpose computer, special-purpose computer or special-purpose processing machine(s) to perform a certain function or group of functions.
Embodiments of the present disclosure have been described in the general context of method steps which may be implemented in one embodiment by a program product including machine-executable instructions, such as program code, for example in the form of program modules executed by machines in networked environments. Generally, program modules include routines, programs, logics, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Machine-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
As previously indicated, embodiments of the present disclosure may be practiced in a networked environment using logical connections to one or more remote computers having processors. Those skilled in the art will appreciate that such network computing environments may encompass many types of computers, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and so on. Embodiments of the disclosure also may be practiced in distributed and cloud computing environments where tasks are performed by local and remote processing devices that are linked, by hardwired links, by wireless links or by a combination of hardwired or wireless links, through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Although the discussions above may refer to a specific order and composition of method steps, it is understood that the order of these steps may differ from what is described. For example, two or more steps may be performed concurrently or with partial concurrence. Also, some method steps that are performed as discrete steps may be combined, steps being performed as a combined step may be separated into discrete steps, the sequence of certain processes may be reversed or otherwise varied, and the nature or number of discrete processes may be altered or varied. The order or sequence of any element or apparatus may be varied or substituted according to alternative embodiments. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. Such variations will depend on the software and hardware systems chosen and on designer choice. It is understood that all such variations are within the scope of the disclosure. Likewise, software and web implementations of the present disclosure could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various database searching steps, correlation steps, comparison steps and decision steps.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
The disclosures illustratively described herein may suitably be practiced in the absence of any element or elements, limitation or limitations, not specifically disclosed here. For example, the terms “comprising,” “including,” “containing,” etc. shall be read expansively and without limitation. Additionally, the terms and expressions employed here have been used as terms of description and not of limitation; hence, the use of such terms and expressions does not evidence any intention to exclude any equivalents of the features shown and described or of portions thereof. Rather, it is recognized that various modifications are possible within the scope of the disclosure claimed.
By the same token, while the present disclosure has been specifically disclosed by preferred embodiments and optional features, the knowledgeable reader will apprehend modification, improvement and variation of the subject matter embodied here. These modifications, improvements and variations are considered within the scope of the disclosure.
The disclosure has been described broadly and generically here. Each of the narrower species and subgeneric groupings falling within the generic disclosure also forms part of the disclosure. This includes the generic description of the disclosure with a proviso or negative limitation removing any subject matter from the genus, regardless of whether or not the excised material is described specifically.
Where features or aspects of the disclosure are described by reference to a Markush group, the disclosure also is described thereby in terms of any individual member or subgroup of members of the Markush group.
All publications, patent applications, patents, and other references mentioned herein are expressly incorporated by reference in their entirety, to the same extent as if each were incorporated by reference individually. In case of conflict, the present specification, including definitions, will control.
Although the disclosure has been described in conjunction with the above-mentioned embodiments, the foregoing description and examples are intended to illustrate and not limit the scope of the disclosure. Other aspects, advantages and modifications within the scope of the disclosure will be apparent to those skilled in the art to which the disclosure pertains.
This application claims the benefit under 35 U.S.C. §119(e) of U.S. provisional application Ser. No. 61/855,085, filed May 7, 2013, the contents of which are incorporated here by reference in their entirety.