Systems and methods for accessing and updating distributed data

Information

  • Patent Grant
  • 7917474
  • Patent Number
    7,917,474
  • Date Filed
    Friday, October 21, 2005
  • Date Issued
    Tuesday, March 29, 2011
Abstract
Systems and methods are disclosed that provide an indexing data structure. In one embodiment, the indexing data structure is a mirrored index tree in which copies of the nodes of the tree are stored across devices in a distributed system. In one embodiment, nodes that are stored on an offline device are restored, and an offline device that comes back online is merged into the distributed system and given access to the current indexing data structure. In one embodiment, the indexing data structure is traversed to locate and restore nodes that are stored on offline devices of the distributed system.
Description
REFERENCE TO RELATED APPLICATIONS

The present disclosure relates to U.S. patent application Ser. No. 11/255,817, titled “SYSTEMS AND METHODS FOR DISTRIBUTED SYSTEM SCANNING,” U.S. patent application Ser. No. 11/256,410, titled “SYSTEMS AND METHODS FOR PROVIDING VARIABLE PROTECTION,” U.S. patent application Ser. No. 11/255,346, titled “SYSTEMS AND METHODS FOR MANAGING CONCURRENT ACCESS REQUESTS TO A SHARED RESOURCE,” U.S. patent application Ser. No. 11/255,818, titled “SYSTEMS AND METHODS FOR MAINTAINING DISTRIBUTED DATA,” and U.S. patent application Ser. No. 11/256,317, titled “SYSTEMS AND METHODS FOR USING EXCITEMENT VALUES TO PREDICT FUTURE ACCESS TO RESOURCES,” each filed on Oct. 21, 2005 and each hereby incorporated by reference herein in their entirety.


FIELD OF THE INVENTION

The present disclosure generally relates to the field of distributed data management, and more particularly, to systems and methods for maintaining copies of index data.


BACKGROUND

The increase in processing power of computer systems has ushered in a new era in which information is accessed on a constant basis. One response has been to store and maintain data in a distributed manner across multiple nodes or devices. A distributed architecture allows for more flexible configurations with respect to factors such as access speed, bandwidth management, and other performance and reliability parameters. The distributed architecture also allows multiple copies of data to be stored across the system. Accordingly, if one copy of the data is not available, then other copies of the data may be retrieved. One type of data that may be stored across a distributed system is indexing data.


The indexing data is desirably protected in the event that one or more of the devices of the distributed system fail. In addition, when a device fails, the indexing data stored on that device is desirably restored using the copies held by the other devices. Moreover, additional problems occur when one or more of the failed devices come back online and try to reintegrate into the system.


Because of the foregoing challenges and limitations, there is an ongoing need to improve the manner in which indexing data, stored across a distributed system, is managed, especially in the event of device failure.


SUMMARY

Systems and methods are disclosed that provide an indexing data structure. The indexing data structure is stored as nodes across a distributed system and copies of the nodes are also stored across the system. In some embodiments, the systems and methods restore nodes that are stored on an inaccessible portion of the distributed system. In some embodiments, portions of the system that become accessible are merged into the distributed system and given access to the current indexing data structure. In addition, in some embodiments, the indexing data structure is traversed to locate and restore nodes that are stored on inaccessible portions of the distributed system.


One embodiment of the present disclosure relates to an indexing system that includes a plurality of storage devices configured to communicate with each other. The system further includes a set of database records, each record having a distinct index. The system further includes a balanced index tree structure. The balanced index tree structure includes a first and second copy of a set of leaf nodes stored among the plurality of storage devices configured to store the set of database records based on the indexes. The balanced index tree structure further includes a first and second copy of a set of parent nodes of the leaf nodes stored among the plurality of storage devices and configured to store references to the first and second copy of the set of leaf nodes. The balanced index tree structure further includes a first and second copy of a set of grandparent nodes of the leaf nodes stored among the plurality of storage devices, configured to store references to the first and second copy of the parent nodes. The balanced index tree structure further includes a first and second copy of a root node configured to store references to the first and second copy of the grandparent nodes. The set of parent nodes, set of grandparent nodes, and the root node are configured to index the first and second copy of the set of leaf nodes based on the indexes in the form of a balanced tree.


Another embodiment of the present disclosure relates to an indexing system that includes a plurality of storage devices configured to communicate with each other. The system further includes a set of data units. The set of data units includes an index value for each data unit. The system further includes an index data structure. The indexing data structure includes a first and second copy of a set of first nodes stored among the plurality of storage devices. The indexing data structure further includes a first and second copy of a set of second nodes stored among the plurality of storage devices. The first and second copy of the set of second nodes are configured to store the set of data units based on the index values of each data unit. The first and second copy of the set of first nodes are configured to index the first and second copy of the set of second nodes based on the index values of the data units stored in the second nodes.


Yet another embodiment of the present disclosure relates to a method for indexing data in an index tree. The method includes providing an index tree with inner nodes, leaf nodes, redundant copies of the inner nodes, and redundant copies of the leaf nodes. The method further includes receiving a first data with a first index. The method further includes traversing the index tree to select one of the leaf nodes on which to store the first data based at least on the first index. The method further includes storing the first data on the selected leaf node. The method further includes storing the first data on the redundant copy of the selected leaf node. The method further includes traversing the inner nodes and redundant copies of the inner nodes that are parents of the selected leaf node to update metadata related to the inner nodes and the redundant copies of the inner nodes to reflect the stored first data.


Yet another embodiment of the present disclosure relates to a method of modifying nodes stored on a distributed indexed tree. The method includes receiving a target node. The target node and a copy of the target node are stored among a plurality of devices. The method further includes accessing a parent node of the target node. The method further includes determining that the copy of the target node is stored on a failed device of the plurality of devices. The method further includes modifying the target node. The method further includes creating a new copy of the target node. The method further includes storing the new copy of the target node on at least one of the plurality of devices that is not a failed device. The method further includes recursively updating the parent node.


Yet another embodiment of the present disclosure relates to a method of restoring mirrored nodes of a distributed indexed tree. The method includes receiving a parent node. The method further includes, for each child of the parent node, determining that at least one copy of the child is located on a failed drive; retrieving a copy of the child from a non-failed drive; creating a new copy of the child; storing the new copy of the child on a non-failed drive; updating the parent and copies of the parent to reference the new copy of the child; and recursively restoring the child.


Yet another embodiment of the present disclosure relates to a method of merging a first device into a plurality of devices. The method includes providing a first device configured to store a first version value. The method further includes providing a plurality of devices, with each of the plurality of devices being configured to reference at least two copies of a mirrored index data structure and to store a version value. The method further includes receiving the first version value. The method further includes querying the plurality of devices for their corresponding version values. The method further includes determining a highest version value from the version values. The method further includes determining whether the first version value is lower than the highest version value. The method further includes, if the first version value is lower than the highest version value, updating the version value of the first device to the highest version value; and updating the first device to reference the at least two copies of the mirrored index data structure.


Yet another embodiment of the present disclosure relates to a distributed system that includes a plurality of storage units. The system further includes a balanced index tree configured to be organized by index values comprising a root node, a copy of the root node, a plurality of nodes, and a copy of the plurality of nodes. The system further includes a storage module configured to store the root node, the copy of the root node, the plurality of nodes, and the copy of the plurality of nodes stored among the plurality of storage units. The system further includes index tree data stored on each of the plurality of storage units referencing the root node and the copy of the root node.


For purposes of this summary, certain aspects, advantages, and novel features of the invention are described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, for example, those skilled in the art will recognize that the invention may be embodied or carried out in a manner that achieves one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a high-level block diagram of one embodiment of an index tree.



FIG. 2 illustrates a high-level block diagram of one embodiment of an index tree with varying levels of protection.



FIG. 3A illustrates one embodiment of a high-level block diagram of one set of devices A, B, C, and D that are in communication with each other.



FIG. 3B illustrates one embodiment of a high-level block diagram of the set of devices A, B, C, and D of FIG. 3A where Device B has lost communication with the other devices.



FIG. 3C illustrates one embodiment of a high-level block diagram of the set of devices A, B, C, and D of FIG. 3B where Device B has lost communication with the other devices and after a modify operation has taken place.



FIG. 3D illustrates one embodiment of a high-level block diagram of the set of devices A, B, C, and D of FIG. 3C where Device B has rejoined the set of devices.



FIG. 4A illustrates one embodiment of a flow chart of a modify process.



FIG. 4B illustrates an additional embodiment of a flow chart of a modify process.



FIG. 5 illustrates one embodiment of a flow chart of a restore tree process.



FIG. 6 illustrates one embodiment of a flow chart of a restore node process.



FIG. 7 illustrates one embodiment of a flow chart of a merge process.



FIG. 8A illustrates one embodiment of a block diagram of a distributed system.



FIG. 8B illustrates another embodiment of a block diagram of a distributed system.



FIG. 9A illustrates one embodiment of a superblock.



FIG. 9B illustrates one embodiment of an inner node.



FIG. 9C illustrates one embodiment of a leaf node.



FIG. 10 illustrates a high-level block diagram of one embodiment of an index tree used to store database records.



FIG. 11 illustrates one embodiment of a leaf node used to store database records.



FIG. 12 illustrates a high-level block diagram of one embodiment of an index tree used to store addresses of metadata data structures.



FIG. 13 illustrates one embodiment of a leaf node used to store database records.


These and other aspects, advantages, and novel features of the present teachings will become apparent upon reading the following detailed description and upon reference to the accompanying drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention. In the drawings, similar elements may be marked with similar reference numerals.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Systems and methods which represent various embodiments and example applications of the present disclosure will now be described with reference to the drawings.


For purposes of illustration, some embodiments will be described in the context of a distributed index tree, and example environments in which a distributed index tree may be used are also disclosed. The inventors contemplate that the present invention is not limited by the type of environment in which the systems and methods are used, and that the systems and methods may be used in various environments, such as, for example, the Internet, the World Wide Web, a private network for a hospital, a broadcast network for a government agency, an internal network of a corporate enterprise, an intranet, a local area network, a wide area network, and so forth. It is also recognized that in other embodiments, the systems and methods may be implemented as a single module and/or implemented in conjunction with a variety of other modules and the like. Moreover, the specific implementations described herein are set forth in order to illustrate, and not to limit, the invention. The scope of the invention is defined by the appended claims.


I. OVERVIEW

The systems and methods of the present invention provide techniques for indexing data stored with varying protection levels. In one embodiment, the data is stored in a mirrored balanced tree, also referred to as an index tree, which indexes the data and stores it in the tree. Each leaf node represents a sorted group of the indexed data. Accordingly, when a modification is made to one node in the index tree, the same modification is made to other copies of that node. Similarly, when a node is added to the index tree, the appropriate number of copies of the node are created and the parent node that references the new node includes references to each of the copies. Also, when a node is deleted from the index tree, references to all copies of the node are removed from the parent node.


In one embodiment, copies of the nodes of the mirrored index tree are distributed among a set of devices. Because copies of the nodes are stored on different devices, the index tree may tolerate the failure of one or more of the devices. When modifying nodes in the index tree, if the modification encounters a copy of a node that is stored on a device that is unavailable, a new copy of the node is stored on an available device, and references to that node are updated to reflect its new location on the available device. In addition, when a device that was temporarily unavailable becomes available and attempts to rejoin the set of devices, that device is merged into the system and provided with references to the current copy of the index tree. Furthermore, the index tree may also be traversed to detect and restore any nodes that reside on unavailable devices by storing the nodes on available devices, and updating references to the restored nodes to reflect their new locations on the available devices.


II. MIRRORED INDEX TREE
A. General Tree

To better understand the mirrored index tree, background information regarding an index tree is now described. FIG. 1 illustrates an example index tree 100 that includes three pieces of data, Data A 110, Data B 120, and Data C 130. Each of the pieces of data includes an index, namely 01, 08, and 24 respectively. FIG. 1 illustrates how the three pieces of data are stored in the tree. The top level of the tree includes two entries, 10 and 20, also referred to as keys or index entries. In some embodiments, the keys are of a fixed or variable size. In this example, if the data's index is less than or equal to 10, the data is stored off of the first branch of the tree; if the data's index is greater than 10 and less than or equal to 20, then the data is stored off of the second branch of the tree; if the data's index is greater than 20, then the data is stored off of the third branch of the tree. Thus, in this embodiment, a top level node 140, also referred to as a root node, covers all possible indexes. It is recognized that a variety of indexing techniques may be used in which the top level covers other subsets of possible indexes and in which other types of indexes are used (e.g., whole numbers, words, letters, etc.).


In FIG. 1, Data A's index is 01 which is less than or equal to 10 and less than or equal to 04. Thus, Data A is stored off of the first branch of internal node 150 on leaf node 170. Data B's index is 08 which is less than or equal to 10 and greater than 07. Thus, Data B is stored off of the third branch of internal node 150 on leaf node 180. Data C's index is 24 which is greater than 20 and less than or equal to 46. Thus, Data C is stored off of the first branch of internal node 160 on leaf node 190.
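By way of illustration only, the routing rule described above can be expressed as a short Python sketch. The function and variable names below are assumptions made for illustration and are not part of the disclosure; the key values follow the FIG. 1 example.

```python
# Illustrative sketch only; names and the list-based representation are assumed.
# Each node's sorted keys route an index to one branch, as described for FIG. 1.

def route(keys, index):
    """Return the branch number covering `index` for a node with sorted `keys`."""
    for branch, key in enumerate(keys):
        if index <= key:
            return branch
    return len(keys)  # greater than every key: take the last branch

root_keys = [10, 20]        # root node 140
inner_150_keys = [4, 7]     # internal node 150 (first branch of the root)
inner_160_keys = [46]       # internal node 160 (third branch of the root)

assert route(root_keys, 1) == 0 and route(inner_150_keys, 1) == 0    # Data A (01)
assert route(root_keys, 8) == 0 and route(inner_150_keys, 8) == 2    # Data B (08)
assert route(root_keys, 24) == 2 and route(inner_160_keys, 24) == 0  # Data C (24)
```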


Index trees that are well known in the art include, for example, binary trees, B trees, B+ trees, B* trees, AVL trees, and so forth. Moreover, operations for searching, reading, updating, inserting nodes, deleting nodes, and balancing an index tree are well known to those of skill in the art.


B. Mirrored Tree

The systems and methods disclosed herein provide a protected index tree. In one embodiment, the nodes of the index tree are mirrored. One advantage of mirroring the nodes is that if one copy of a node is unavailable, then the other copy of the node may be used instead. In one embodiment, the entire index tree is mirrored the same number of times (e.g., all of the nodes are mirrored two times; all of the nodes are mirrored five times, etc.). In another embodiment, different nodes of the tree may have different levels of mirroring protection. For example, one node may be mirrored two times and another node may be mirrored five times. To maintain the protection level of the index tree, in this embodiment, a node of the index tree is stored using at least the same level of protection as the children that it references. For example, if a leaf node is mirrored two times, then any parent node referencing (e.g., pointing to) that leaf node is also mirrored at least two times.
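Because a node is stored with at least the protection level of the children it references, the mirror count that any node requires can be computed from the leaves upward. The following Python sketch illustrates that propagation; the dictionary representation of a node is an assumption for illustration and is not the disclosed format.

```python
# Illustrative sketch; the node representation is assumed. A parent must be
# mirrored at least as many times as its most-protected child.

def required_mirrors(node):
    """Minimum mirror count this node needs to preserve its subtree's protection."""
    child_needs = [required_mirrors(child) for child in node.get("children", [])]
    return max([node["mirrors"]] + child_needs)

leaf_2x = {"mirrors": 2}
leaf_3x = {"mirrors": 3}                        # e.g., the node holding Data B at 3x
parent = {"mirrors": 2, "children": [leaf_2x, leaf_3x]}

print(required_mirrors(parent))                 # 3: the parent must be at least 3x
```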



FIG. 2 illustrates one embodiment of the index tree of FIG. 1 where the index tree includes different mirroring levels, such that different nodes in the index tree are mirrored a different number of times. For example, Data B 120 is stored with a protection level of 3×. Accordingly, the branches of the index tree 140, 150 that lead to Data B 120 are also protected at a protection level of at least 3×.


C. Distributed Storage of the Mirrored Tree

In one embodiment, copies of a node are stored among a set of devices. For example, in FIG. 2, one copy of the root node 140 may be stored on a Device A, the second copy of the root node 140 may be stored on a Device B, and the third copy of the root node 140 may be stored on a Device C. Similarly, one copy of a leaf node 180 may be stored on Device B, the second copy of the leaf node 180 may be stored on Device C, and the third copy of the leaf node 180 may be stored on Device D. Accordingly, if one of the devices becomes unavailable (e.g., fails, crashes, becomes disconnected, is taken offline, etc.), then additional copies of the node may be retrieved from the other devices. For example, if Device B disconnects from Device A, Device C, and Device D, copies of the root node 140 are still available on Device A and Device C. Similarly, copies of the leaf node 180 are still available on Device C and Device D.


In addition, in some embodiments, references to each copy of the root of the index tree are stored on each device. These references will be referred to as a superblock. FIG. 3A illustrates the set of Devices A, B, C, and D that are in communication with each other. Each device includes a superblock that provides the address of each copy of the root node as well as the version of the index tree referenced by the superblock. In the example above, the root node 140 is stored on Device A, Device B, and Device C. Accordingly, Device A, Device B, and Device C of FIG. 3A each have a copy of the root node 140. In the example above, the leaf node 180 is stored on Device B, Device C, and Device D. Accordingly, Device B, Device C, and Device D of FIG. 3A each have a copy of the leaf node 180. It is recognized that the address may be stored in a variety of formats using, for example, device number, address offsets, cylinder numbers, storage unit numbers, cache memory IDs, and so forth. In FIG. 3A, all four of the superblocks are shown as Version 3. Because they are all the same version, the superblocks, in this example, reference the same index tree.



FIG. 3B illustrates the example of when Device B becomes disconnected from Device A, Device C, and Device D, where copies of the root node 140 are still available on Device A and Device C. Similarly, copies of the leaf node 180 are still available on Device C and Device D.



FIG. 3C illustrates the example of FIG. 3B after a modification has taken place where the modify operation created a new copy of the root node to replace the copy of the root node that is no longer available on Device B. In FIG. 3C, the new copy of the root node is stored on Device D, and the superblocks of the available devices, Device A, Device C, and Device D, have been updated to reflect that copy 2 of the root node is located on Device D (and not Device B). In addition, the versions of the superblocks of Device A, Device C, and Device D have been updated to a new version to reflect that a modification of the superblocks has taken place.



FIG. 3D illustrates the example of FIG. 3A after Device B has come back online and merged back into the set of devices. Device B's superblock has been modified to reflect that copy 2 of the root node is located on Device D and to include the new version of the superblock. In addition, because the copy of the root node on Device B is no longer referenced, it has been removed from Device B. Also, in this example, there were no attempts to modify the leaf node 180 while Device B was offline. Accordingly, a copy of leaf node 180 remains on Device B.


D. Various Embodiments

In some embodiments, the index tree is implemented as a modified B* tree. As is well known by those of ordinary skill in the art, a B* tree is a search tree where every node has between ┌m/2┐ and m children, where m>1 is a fixed integer. Nodes are kept ⅔ full by redistributing the contents to fill two child nodes, then splitting them into three nodes. It may be advantageous to use a B* tree since the height, and hence the number of maximum accesses, can be kept small depending on m. As new nodes are added, the B* tree readjusts to keep the height of the tree below a maximum number. In some embodiments, the B* tree is further configured to have variable-sized records that can be redundantly stored, to split insertion blocks once while they are being filled, and to leave behind a trail of blocks. It will be understood that, although some of the file and logical structures are described in terms of B-trees, various concepts of the present disclosure are not necessarily limited to B-tree applications. Moreover, it is recognized that a variety of data structures known to those of ordinary skill in the art may be used including, for example, other trees, graphs, linked lists, heaps, databases, stacks, and so forth.


Furthermore, in some embodiments, the index tree is protected using other protection schemes besides or in addition to mirroring. While mirroring is discussed herein, it is recognized that a variety of other protection/correction techniques may be used in addition to or instead of mirroring. For example, the nodes of the index tree may be protected using parity protection, for example, for nodes that are distributed among multiple devices. Moreover, the index tree may include nodes that are not mirrored at all.


It also is recognized that the term storage device may refer to a variety of devices including, for example, a smart storage unit, a disk drive, a server, a non-volatile memory device, a volatile memory device, and so forth. Moreover, the storage device may be locally connected and/or remotely connected to one or more other devices. For example, one smart storage unit may include multiple devices. Moreover, a storage device may include multiple memory units including one or more volatile memory units and/or one or more non-volatile memory units.


III. OPERATIONS

Operations for reading, modifying, and restoring a distributed mirrored index tree are set forth below. In addition, an operation for merging in a device that was previously inaccessible is also disclosed.


A. Reading


To read data stored in the distributed mirrored index tree, a read process receives the requested data's index. The read process accesses one copy of the root node (e.g., using one of the references from the superblock), and based on the data's index and the keys in the root node, accesses one copy of the node in the next level of the distributed mirrored index tree. The read process then continues using the data's index and the keys in the nodes of the tree to access one copy of the node in the next level of the distributed mirrored index tree. Once the read process accesses a copy of the leaf node, the read process uses the data's index to retrieve the data corresponding to that index.


Accordingly, if one copy of a node is on a disconnected device, then the read process attempts to access another copy of that node. The read process may be configured to request copies of nodes in a predetermined order based on the devices, to use a round-robin technique based on which device was last used, to use a most-recently-used technique based on devices that were recently used, to use a “distance-based” technique that prefers local devices over remote devices, to use a random technique, and so forth.
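A minimal sketch of this read path follows. The in-memory device, node, and superblock shapes used here are assumptions made for illustration, not the on-disk formats of the disclosure; the sketch traverses from one copy of the root to a leaf, falling back to another copy of a node whenever the first copy's device is unavailable.

```python
# Hedged sketch; the data model below is assumed for illustration only.

class DeviceDown(Exception):
    pass

def fetch(copies, devices):
    """Return the first copy of a node whose device is online; copies is [(device_id, node)]."""
    for device_id, node in copies:
        if devices[device_id]["online"]:
            return node
    raise DeviceDown("no copy of this node is reachable")

def read(superblock, devices, index):
    """Walk from one root copy down to a leaf copy and return the data for `index`."""
    node = fetch(superblock["root_copies"], devices)
    while not node["is_leaf"]:
        branch = next((i for i, key in enumerate(node["keys"]) if index <= key),
                      len(node["keys"]))
        node = fetch(node["children"][branch], devices)   # copies of the chosen child
    return node["data"][index]

# Example: Device B is down, so the read falls through to the copy on Device A.
devices = {"A": {"online": True}, "B": {"online": False}}
leaf = {"is_leaf": True, "data": {8: "Data B"}}
root = {"is_leaf": False, "keys": [10, 20], "children": [[("B", leaf), ("A", leaf)]]}
print(read({"root_copies": [("A", root)]}, devices, 8))
```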


B. Modifying


To modify data stored in the distributed mirrored index tree, a modify process receives a node to be modified, referred to as the target node. The modify process may also receive the modification that is requested (e.g., to update data, to update reference(s) to other nodes, to remove the node, etc.). The modify process traverses the tree to the parent of the target node. The modify process then determines whether all of the copies of the target node are accessible. If so, then the modify process modifies all copies of the target node.


If one of the copies of the target node is not accessible (e.g., stored on a device that is not in communication with the other devices), then the modify process modifies the available copies of the target node, creates a new copy of the target node, stores the new copy on one of the available devices, and calls the modify process using the parent node.


Accordingly, the modify process then traverses the tree to the parent of the parent node, determines whether all copies of the parent node are accessible and if so, modifies all of the copies of the parent node to point to the new copy of the target node. If one of the copies of the parent node is not accessible, then the modify process modifies available copies of the parent node, creates a new copy of the parent node, stores the new copy of the parent node on one of the available devices, and calls the modify process of the parent's parent node (e.g., the grandparent of the target node).


This modify process continues up to the root node if changes to each of the parent nodes are necessary. In one embodiment, the root node acts as a special case since the address of each copy of the root node is stored on each device. If one of the copies of the root node is unavailable, then the modify process modifies available copies of the root node, creates a new copy of the root node, stores the new copy of the root node on one of the available devices, and then determines whether there is a quorum of devices that are available. If not, then the modify process does not update the superblocks to point to the new root node. If so, then the modify process modifies the superblocks to point to the new copy of the root node and updates the version of the superblocks.


The modify process 400 could also include removing nodes, where no changes are made to the target node and no copies of the target node are made. Instead, the modify process 400 recursively updates the parent node of the node to be removed to reflect that the node has been removed. In other embodiments, the modify process 400 could replace the node to be removed with one or more good copies of the node.


One example of a modify process 400 is illustrated in FIG. 4A. Beginning in a start state 410, the modify process 400 proceeds to block 415. In block 415, the modify process 400 receives a node and a requested modification to the node. The node may be identified using a variety of techniques such as, for example, an identifier, a name, a path, and so forth. In addition, the modification may include, for example, modifying data stored in a leaf node, modifying pointers to children nodes, removing the node from the index tree, and so forth. Proceeding to the next block 420, the modify process 400 accesses the node's parent node. In this example, the parent node is the node that references the node, and the parent of the root node is the superblock. Proceeding to block 425, the modify process 400 determines whether all copies of the node are available. For example, a copy of the node would not be available if the copy was stored on a device that is down. If all copies are available, then the modify process 400 modifies all copies of the node with the requested modification 430 and proceeds to an end state 465. If all copies are not available, the modify process 400 modifies all available copies of the node with the requested modification 435, creates and stores a new copy of the node (or more than one copy if more than one copy is not available) 440 on an available device. It is recognized that if none of the copies are available, the modify process 400 may terminate and return an error.


Proceeding to block 445, the modify process 400 determines whether the node is the root node. If the node is not the root node, then the modify process 400 proceeds to block 450; if the node is the root node, then the modify process 400 proceeds to block 455.


In block 450, the modify process 400 recursively calls the modify process to modify the parent node to point to the new copy (or copies) of the node, and proceeds to the end state 465.


In block 455, the modify process 400 determines whether there is a quorum of available devices. In one embodiment, the quorum is a majority of the devices, but it is recognized that in other embodiments, other subsets of the number of devices could be used. If there is not a quorum, then the modify process 400 proceeds to the end state 465. In some embodiments, the modify process 400 may return an error indicating that less than a quorum of the devices are available. If there is a quorum, the modify process 400 proceeds to block 460 and updates the superblocks to point to the new copy (or copies) of the root. In some embodiments, the modify process 400 also updates the superblock to store a new version. It is recognized that in some embodiments, the modify process 400 does not update the superblocks, but sends out commands for each of the devices to update their superblocks and/or to update their versions.
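The recursive flow of FIG. 4A can be summarized in the following Python sketch. The data model, the path argument, and the simple-majority quorum rule are assumptions made for illustration and are not the disclosed implementation; the block numbers in the comments refer to FIG. 4A as described above.

```python
# Simplified sketch of the recursive modify flow of FIG. 4A; representation assumed.

def modify(path, devices, superblocks, change):
    """`path` lists copy-lists from the root down to the target; a copy is (device_id, node)."""
    node_copies = path[-1]
    live = [(d, n) for d, n in node_copies if devices[d]["online"]]
    dead = [(d, n) for d, n in node_copies if not devices[d]["online"]]
    if not live:
        raise RuntimeError("no copy of the node is available")

    for _, node in live:                       # blocks 430/435: apply the modification
        change(node)
    if not dead:
        return                                 # every copy was reachable

    # block 440: re-create each unreachable copy on a spare online device
    used = {d for d, _ in live}
    spares = [d for d in devices if devices[d]["online"] and d not in used]
    updated = live + [(s, dict(live[0][1])) for s in spares[:len(dead)]]

    if len(path) == 1:                         # blocks 455/460: the node is the root
        online = [d for d in devices if devices[d]["online"]]
        if len(online) * 2 > len(devices):     # quorum (here, a simple majority)
            for d in online:
                superblocks[d]["root_copies"] = updated
                superblocks[d]["version"] += 1
    else:                                      # block 450: repoint the parent, recursively
        def repoint(parent):
            parent["children"] = [updated if c is node_copies else c
                                  for c in parent["children"]]
        modify(path[:-1], devices, superblocks, repoint)
```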


It is recognized that other embodiments of a modify process 400 may be used. FIG. 4B illustrates an additional embodiment of a modify process 400 that prevents any updating of the root nodes if there is not a quorum. FIGS. 4A and 4B illustrate various embodiments of the modify process 400.


C. Restoring


To restore data stored in the distributed mirrored index tree, a restore process traverses the distributed mirrored index tree to find copies of nodes that are stored on unavailable devices and to restore those copies.


The restore process begins with a copy of the superblock and determines whether all copies of the root node are available. If so, then the restore process retrieves one copy of the root node and determines whether all copies of each of the root node's children are available. If not, then the restore process determines whether there is a quorum of available devices. If there is not a quorum, the restore process terminates. If there is a quorum, then the restore process creates and stores a new copy of the missing root node on one of the available devices, and updates the superblocks on all of the available devices to reference the newly created copy of the root node and to update the superblocks' version.


Next, the restore process proceeds to the next level of the tree, and determines whether all copies of the root's children nodes are available. If not, then the restore process creates and stores missing copies of the root's children. The restore process then proceeds to restore children of the root's children. The restore process continues this for each level of the tree until all nodes, including the leaf nodes, have been traversed.


1. Restore Tree Process


One example of a restore tree process 500 is illustrated in FIG. 5. Beginning in a start state 510, the restore tree process 500 proceeds to the next block 515. In block 515, the restore tree process 500 obtains a copy of the superblock. Proceeding to the next block 520, the restore tree process 500 determines whether all copies of the root node are available. If so, then the restore tree process 500 proceeds to block 545. If not, then the restore tree process 500 proceeds to block 525.


In block 525, the restore tree process 500 obtains a copy of the root node. It is recognized that if none of the copies are available, the restore tree process 500 may terminate and/or return an error. In block 530, the restore tree process 500 creates and stores a new copy of the root node (or more than one copy if more than one copy is not available). The restore tree process 500 then determines whether there is a quorum of available devices. If there is not a quorum, then the restore tree process 500 proceeds to the end state 550. In some embodiments, the restore tree process 500 may return an error indicating that less than a quorum of the devices is available. If there is a quorum, the restore tree process 500 proceeds to block 540 and updates the superblocks to point to the new copy (or copies) of the root node. In some embodiments, the restore tree process 500 also updates the superblock to store a new version. It is recognized that in some embodiments, the restore tree process 500 does not update the superblocks, but sends out commands for each of the devices to update their superblocks and/or to update their versions. The restore tree process 500 then proceeds to block 545.


In block 545, the restore tree process 500 calls a restore node process 600 to restore the root node. In some embodiments, the restore node process 600 is passed the copies of the root node or references to the copies of the root node.


2. Restore Node Process


One example of a restore node process 600 is illustrated in FIG. 6. Beginning in a start state 610, the restore node process 600 proceeds to the next block 615. In block 615, the restore node process 600 obtains or receives copies of a parent node (or references to the copies). For each child of the parent node 620, 650, the restore node process 600 determines whether all copies of the child node are available. If so, then the restore node process 600 proceeds to the next child 620, 650. If not, then the restore node process 600 proceeds to block 630.


In block 630, the restore node process 600 obtains a copy of the child node. It is recognized that if none of the copies are available, the restore node process 600 may terminate and/or return an error. In block 635, the restore node process 600 creates and stores a new copy of the child node (or more than one copy if more than one copy is not available). Proceeding to the next block 640, the restore node process 600 updates the copies of the parent node to point to the new copy (or copies) of the child node. Proceeding to the next block 645, the restore node process 600 calls a restore process to restore the child node. In some embodiments, the restore process is passed the copies of the child node or references to the copies of the child node. Once the children of the parent node have been traversed and the children nodes have been restored, then the restore node process 600 proceeds to an end state 655.
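The following Python sketch illustrates the restore-node recursion of FIG. 6; the restore tree process of FIG. 5 would seed it with the copies of the (repaired) root node. The node and device representations are assumptions made for illustration, not the disclosed formats.

```python
# Hedged sketch of the restore-node recursion of FIG. 6; representation assumed.

def restore_node(parent_copies, devices):
    """`parent_copies` is a list of (device_id, node); node["children"] holds copy-lists."""
    reference_parent = next(n for d, n in parent_copies if devices[d]["online"])
    for slot, child_copies in enumerate(reference_parent["children"]):
        live = [(d, n) for d, n in child_copies if devices[d]["online"]]
        dead = [(d, n) for d, n in child_copies if not devices[d]["online"]]
        if not live:
            raise RuntimeError("no copy of a child node is available")
        if dead:                                   # blocks 630-640: re-create and repoint
            used = {d for d, _ in live}
            spares = [d for d in devices if devices[d]["online"] and d not in used]
            child_copies = live + [(s, dict(live[0][1])) for s in spares[:len(dead)]]
            for d, parent in parent_copies:        # update every available copy of the parent
                if devices[d]["online"]:
                    parent["children"][slot] = child_copies
        if not live[0][1].get("is_leaf"):          # block 645: recurse into the child
            restore_node(child_copies, devices)
```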


It is recognized that the tree may be traversed in a variety of manners and that in other embodiments the tree may be traversed starting with the leaf nodes and/or the tree may be traversed level by level. FIGS. 5 and 6 are meant only to illustrate example embodiments of a restore process.


D. Merging


The distributed mirrored index tree may also be used to merge in new devices that were temporarily unavailable, but that have now become available. When a device comes back online, the device may need to access the distributed mirrored index tree. However, the device may have invalid references to copies of the root node of the distributed mirrored index tree. For example, one of the copies of the root node may have been stored on the down device, and the root node may have been modified using the modify process above while the device was offline. Accordingly, a new copy of the root node, with the modified data, may have been created and stored on an available device. In addition, the superblocks' references to copies of the root node may have been modified to reference the new copy of the root node instead of the copy that was stored on the down device.


A merge process may be used to compare the version of a device's superblock with the versions of the other devices. If the version is the same, then the device's superblock is current. If the device's version is lower than the versions of the other devices, then the device's superblock is updated to point to the same copies of the root node as the devices with the highest version. In addition, the device's superblock is updated to the highest version.


One example of a merge process 700 is illustrated in FIG. 7. Beginning in a start state 710, the merge process 700 proceeds to block 715. In block 715, the merge process 700 obtains the version of the superblock for the device that is merging into the set of other devices. Proceeding to the next block, 720, the merge process 700 queries the other devices for the versions in their superblocks. Proceeding to the next block 725, the merge process 700 determines the highest version. In other embodiments, the merge process may also determine whether there is a quorum of nodes that have the highest version. If not, then the merge process 700 may return an error.


Proceeding to the next block 730, the merge process 700 determines whether the device's version is less than the highest version. If not, then the merge process 700 proceeds to an end state 750. If so, then the merge process updates the device's superblock to point to the same copies of the root node as pointed to by a superblock with the highest version 735. Proceeding to the next block 740, the merge process 700 updates the superblock's version to the highest version.
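A minimal sketch of this merge flow follows; the superblock representation is an assumption made for illustration and does not reproduce the disclosed format.

```python
# Hedged sketch of the merge flow of FIG. 7; superblock representation assumed.

def merge(device_id, superblocks):
    """superblocks: {device_id: {"version": int, "root_copies": [...]}}."""
    mine = superblocks[device_id]
    others = {d: sb for d, sb in superblocks.items() if d != device_id}
    newest = max(others.values(), key=lambda sb: sb["version"])
    if mine["version"] < newest["version"]:
        mine["root_copies"] = list(newest["root_copies"])  # point at the current root copies
        mine["version"] = newest["version"]                # and catch the version up

superblocks = {
    "A": {"version": 4, "root_copies": [("A", "root copy 1"), ("D", "root copy 2")]},
    "B": {"version": 3, "root_copies": [("A", "root copy 1"), ("B", "root copy 2")]},  # stale
    "C": {"version": 4, "root_copies": [("A", "root copy 1"), ("D", "root copy 2")]},
}
merge("B", superblocks)
print(superblocks["B"])   # now version 4, referencing the copy relocated to Device D
```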


The version may be represented using a variety of techniques such as, for example, an integer, a decimal, a letter, a word, and so forth.



FIG. 7 illustrates one embodiment of a merge process 700 and it is recognized that other embodiments of a merge process 700 may be used.


IV. DISTRIBUTED SYSTEM


FIG. 8A illustrates one embodiment of a distributed system 800 having an index tree management module 820 in communication with a set of devices 810. It is recognized that the index tree management module 820 may be located apart from the set of devices 810 and/or may be located on one or more of the devices 810, as illustrated in FIG. 8B. In other embodiments, the index tree management module 820 may be spread among one or more of the devices 810.


The index tree management module 820 and the devices 810 may communicate using a variety of communication techniques that are well known in the art. Such communication may include local communication, remote communication, wireless communication, wired communication, or a combination thereof.


The exemplary devices include a superblock 812 as well as a set of index tree nodes 814. As illustrated, each device may include a different number of index tree nodes or may include the same number of index tree nodes. The superblock and/or index tree nodes may be stored on disks or other non-volatile memory on the device 810 and/or in RAM or other volatile memory on the device 810. The distributed system 800 is not limited to a particular type of memory. In addition, the distributed system 800 may include devices that do not include any superblocks and/or any index tree nodes.


In some embodiments, the distributed system 800 may be accessible by one or more other systems, modules, and/or users via various types of communication. Such communication may include, for example, the Internet, a private network for a hospital, a broadcast network for a government agency, an internal network of a corporate enterprise, an intranet, a local area network, a wide area network, and so forth. It is recognized that the distributed system 800 may be used in a variety of environments in which data is stored. For example, the distributed system 800 may be used to store records in a database, content data, metadata, user account data, and so forth.


It is also recognized that in some embodiments, the systems and methods may be implemented as a single module and/or implemented in conjunction with a variety of other modules and the like. Moreover, the specific implementations described herein are set forth to illustrate, and not to limit, the present disclosure.


V. SAMPLE INDEX TREE NODES


FIGS. 9A, 9B, and 9C illustrate example embodiments of a superblock 900, an inner node 910, and a leaf node 930. In various embodiments, these nodes can have redundant copies in a manner described herein.


A. Superblock



FIG. 9A illustrates one embodiment of a superblock 900 that can be configured to provide, among others, the functionality of pointing to the copies of the root node for an index tree. In one embodiment, the superblock 900 points to an index tree by pointing to (e.g., storing the device number and address of) copies of the root node. The exemplary superblock 900 includes a header section 902, followed by a listing of pointers 904 to the one or more copies of the root node. The exemplary list of pointers includes baddr1 to baddrN. Thus, the pointer baddr1 points to the first copy of the root node, baddr2 to the second copy of the root node, and so on. In one embodiment, unused pointers are stored as zeroes or NULL values and placed at the end of the listing 904. For example, if the superblock 900 points to two copies of a root node, then the pointers baddr1 and baddr2 would be positioned at the beginning of the listing 904, and the remainder of the listing 904 would be zeroed out.


In other embodiments, the superblock 900 may be configured to point to more than one index tree.


As further shown in FIG. 9A, the header section 902 can include version information that indicates how current the index tree is (e.g., a version number). The header section 902 can also include information about the height of the index trees that are pointed to by the pointers 904. A height of zero indicates that the superblock 900 does not point to any index tree. A height of one indicates that the superblock 900 points directly to copies of leaf blocks (e.g., there are no inner blocks). A height of n>1 indicates that there are n−1 levels of inner blocks. It is recognized that the superblock 900 may include additional and/or other data such as, for example, the name of the index tree(s), the date the superblock 900 was last updated, the number of devices required for a quorum, the date the superblock 900 was created, permission information indicating which devices and/or users have permission to read, write, or delete the superblock 900, and so forth.
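The superblock layout described above might be modeled as in the following Python sketch. The field names, the address format, and the fixed number of root slots are assumptions made for illustration, not the disclosed on-disk layout.

```python
# Hedged sketch of a superblock per FIG. 9A; field names and formats assumed.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Baddr = Tuple[int, int]          # (device number, block address) -- one assumed format

@dataclass
class Superblock:
    version: int                 # how current the referenced index tree is
    height: int                  # 0 = no tree, 1 = roots are leaf blocks, n>1 = n-1 inner levels
    max_roots: int = 8
    root_addrs: List[Optional[Baddr]] = field(default_factory=list)

    def slots(self) -> List[Optional[Baddr]]:
        """Used pointers first, then zero/NULL padding, as described above."""
        used = [addr for addr in self.root_addrs if addr is not None]
        return used + [None] * (self.max_roots - len(used))

sb = Superblock(version=3, height=3, root_addrs=[(1, 0x40), (3, 0x88)])
print(sb.slots())   # two root-copy pointers followed by six empty slots
```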


As set forth above, in one embodiment, a copy of the superblock 900 is stored on each device of the distributed system.


B. Inner Node



FIG. 9B illustrates one embodiment of an inner node 910 that includes a header section 912 followed by a listing of index entries 914 (shown as key1, key2, . . . , keyn) and related offset values 920. The offset values 920 point to pointer entries 918 that relate to the index entries 914. The pointer entries 918 point to leaf nodes or to another level of inner nodes.


Inner nodes 910 provide mappings to values between index entries 914 using pointer entries 918. For example, offset0 points to the address of the node for values less than key1; offset1 points to the address of the node for index entries greater than or equal to key1 and less than key2; offset2 points to the address of the node for index entries greater than or equal to key2 and less than key3; and so forth.


The number of pointer entries for each offset depends on the number of mirrored copies of that node. For example, if a child node is mirrored two times, then any offset pointing to that node will have at least two pointer entries related to that offset. Similarly, if a child node is mirrored three times, then any offset pointing to that node will have at least three pointer entries related to that offset. In the exemplary inner node 910, offset0 points to baddr01, baddr02, and baddr03 signifying that there are three copies of the child node located at baddr01, baddr02, and baddr03; the node is mirrored three times (3×). Similarly, offset1 points to baddr11 and baddr12 signifying that there are two copies of the second child node located at baddr11 and baddr12; that node is mirrored two times (2×). Accordingly, the inner nodes provide information as to where copies of their children nodes are stored.


In one embodiment, the index entries 914 and the offsets 920 are arranged in an increasing order beginning from the top of the inner node 910. The pointer entries 918 corresponding to the offsets 920 are arranged beginning from the bottom of the inner node 910. Thus, a free space 916 can exist between the index entries 914 and the pointer entries 918. Such an arrangement and the free space 916 provide for easy addition of new index entries 914. For example, if keyn+1 is to be added, it can be inserted below the last entry (keyn) of the index entries 914. A corresponding pointer entry can then be inserted above the last entry. The free space 916 accommodates such addition, and the existing index entries and the pointer blocks are not disturbed. This embodiment allows referenced nodes to be protected at different levels, allowing for the addition of multiple pointer entries 918 for each offset 920. In addition, it allows the index tree to be rebalanced such that if additional index entries 914 are needed to balance the tree, they can be added.
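The following Python sketch shows how an inner node's keys and per-copy pointer entries can be used to locate every copy of the child that covers a given index. The dictionary layout is an assumption for illustration only and does not reproduce the fixed-size on-disk block of FIG. 9B.

```python
# Hedged sketch; layout assumed. Each offset's pointer-entry list holds one block
# address per mirrored copy of the referenced child, as described for FIG. 9B.

def child_copies(inner, index):
    """Return every block address holding a copy of the child covering `index`."""
    keys = inner["keys"]
    branch = next((i for i, key in enumerate(keys) if index < key), len(keys))
    return inner["pointers"][branch]

inner_node = {
    "header": {"key_count": 2, "mp": 3, "mp_count": 1},   # max protection 3x; one child at 3x
    "keys": [10, 20],                                     # key1, key2
    "pointers": {
        0: ["baddr01", "baddr02", "baddr03"],   # values <  key1; child mirrored 3x
        1: ["baddr11", "baddr12"],              # key1 <= value < key2; child mirrored 2x
        2: ["baddr21", "baddr22"],              # values >= key2
    },
}
print(child_copies(inner_node, 15))             # ['baddr11', 'baddr12']
```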


As further shown in FIG. 9B, the header 912 can include information similar to that of the superblock 900 discussed above. The header 912 can also indicate the number of index entries 914 (e.g., key_count). The header 912 can also indicate the maximum protection “mp” (the maximum redundancy) for the index entries 914 (and the corresponding pointer entries). The header 912 can also indicate how many (e.g., mp_count) index entries (e.g., child nodes) have the maximum protection. In other embodiments, the header 912 may also include information about the protection level of each of the child nodes in addition to or instead of the maximum protection level. In other embodiments, the header 912 may include information about a subset of the protection levels and counts related to those protection levels. The information about the maximum protection and the count can be used to allow for variable protection in the index tree as disclosed in U.S. patent application entitled “Systems and Methods for Providing Variable Protection in an Indexing System,” filed concurrently herewith, which is hereby incorporated by reference herein in its entirety.


Moreover, it is recognized that the inner nodes 910 may include additional and/or other data, such as, for example, the date the inner node 910 was last updated, the date the inner node 910 was created, permission information indicating which devices and/or users have permission to read, write, or delete the inner node 910, and so forth. It is also recognized that the information discussed above may be stored in the header 912 or in other areas of the inner node 910.


In one embodiment, the inner node 910 as a whole constitutes a fixed amount of data. Thus, the foregoing arrangement of the index entries 914 and the pointer entries 918, in conjunction with the free space 916, allows for addition of new data without altering the existing structure. In one embodiment, the inner node 910 is 8 kB in size. It is recognized, however, that the inner node may be of a variety of sizes.


C. Leaf Node



FIG. 9C illustrates one embodiment of the leaf node 930 having a header 932 and a listing of leaf index entries 934. The leaf index entries 934 (key1, key2 . . . , keyn) have corresponding offsets 940, and are arranged in a manner similar to that of the inner node 910 described above. In one embodiment, the leaf nodes 930 are at the bottom level of the tree, with no lower levels. Thus, the offsets 940 for the leaf index entries 934 point to data 938 for the corresponding index entry 934. The exemplary leaf node includes n index entries 934, where key1 corresponds to offset1, which points to two copies of the data that correspond to key1, where the two copies of the data are stored at data11 and data12. The index entries may correspond to a variety of data. For example, the data 938 may include records in a database, user account information, version data, metadata, addresses to other data, such as metadata data structures for files and directories of the distributed file system, and so forth. For example, offset1 points to the address block holding the two copies of the data (data11 and data12), which may be, for example, two copies of the physical address of a metadata data structure for a file that is distributed within the distributed system.


In one embodiment, the arrangement of the leaf index entries 934 and the data 938, with a free space 936, is similar to that of the inner node 910 described above in reference to FIG. 9B. The header 932 may also include similar information as that of the inner node 910.
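Similarly, a leaf node can be thought of as index entries whose offsets lead to one stored copy of the data per protection level, as in the following illustrative Python sketch (the representation is assumed and is not the disclosed block format).

```python
# Illustrative sketch of a leaf node per FIG. 9C; representation assumed.

leaf_node = {
    "header": {"type": "leaf", "key_count": 2, "mp": 2, "mp_count": 1},
    "entries": {
        "key1": ["data11", "data12"],   # two copies of the data for key1 (2x)
        "key2": ["data21"],             # key2's data stored at a single location (1x)
    },
}

def lookup(leaf, key):
    """Return all stored copies of the data for `key`, or None if it is absent."""
    return leaf["entries"].get(key)

print(lookup(leaf_node, "key1"))        # ['data11', 'data12']
```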


In one embodiment, the leaf block 930 as a whole constitutes a fixed amount of data. In one embodiment, the leaf block 930 is 8 kB in size. It is recognized that the leaf block 930 may be of a variety of sizes.


VI. EXAMPLE ENVIRONMENTS

The following provides example environments in which a distributed mirrored index tree may be used. It is recognized that the systems and methods disclosed herein are not limited to such example environments and that such examples are only meant to illustrate embodiments of the invention.


A. Employee Database System



FIG. 10 illustrates an example distributed mirrored index tree 1000 for storing employee database records, where the records are sorted by last name. For example, the index value for employee Phil Ader is “Ader” and the index value for Jan Saenz is “Saenz.” The exemplary index tree 1000 includes nodes that are mirrored two times.


As an example, if a request is received to modify Kaye Byer's first name to be "Kay" instead of "Kaye," then, following the modify process disclosed herein, the modify process 400 would obtain a copy of node 1020a or 1020b and determine whether both 1040a and 1040b were on live devices. If, for example, 1040b was stored on a failed device, the modify process 400 would make the change to 1040a, copy the modified 1040a to create a new copy of 1040b stored on an available device, and then check to see if 1020a and 1020b were both on live devices. If so, then the modify process 400 would update the pointers in 1020a to point to the new 1040b and update the pointers in 1020b to point to the new 1040b.



FIG. 11 illustrates an example leaf node 1100 that corresponds to node 1040a. The exemplary leaf node 1100 includes a header 1152 noting that the node is a leaf node, the node is version 5, the number of entries is 2, the maximum protection is 1×, and the number of entries at the maximum protection is 2. The entries 1134 include Ader and Byer whose corresponding offsets 1140 point to the respective data values “Ader, Phil” and “Byer, Kay” 1138.


B. Intelligent Distributed File System


As another example, in one embodiment, the systems and methods may be used with an intelligent distributed file system as disclosed in U.S. patent application Ser. No. 10/007,003, entitled “System and Method for Providing a Distributed File System Utilizing Metadata to Track Information About Data Stored Throughout the System,” filed Nov. 9, 2001, which claims priority to Application No. 60/309,803 filed Aug. 3, 2001, which is hereby incorporated by reference herein in its entirety.


In one embodiment, the intelligent distributed file system uses metadata data structures to track and manage detailed information about files and directories in the file system. Metadata for a file may include, for example, an identifier for the file, the locations of or pointers to the file's data blocks, the type of protection for the file or for each block of the file, and the locations of the file's protection blocks (e.g., parity data or mirrored data). Metadata for a directory may include, for example, an identifier for the directory, a listing of the files and subdirectories of the directory, the identifier for each of the files and subdirectories, and the type of protection for each file and subdirectory. In other embodiments, the metadata may also include the location of the directory's protection blocks (e.g., parity data or mirrored data). The metadata data structures are stored in the intelligent distributed file system.


1. Distributed Mirrored Index Trees


In one embodiment, the intelligent distributed file system uses a distributed mirrored index tree to map the identifiers for a file or directory to the actual address of the file's or directory's metadata data structure. Thus, as metadata data structures are moved to different smart storage units or different address locations, only the index tree entries need to be updated. Other metadata data structures that reference that file or directory need not be updated to reflect the new location. Instead, the metadata data structures that reference that file or directory just use the identifier of that file or directory.



FIG. 12 illustrates one embodiment of a distributed mirrored index tree 1200 that stores addresses of metadata data structures, or nodes, that are indexed by integers. The root node 1210 includes two index entries 10 and 20. Accordingly, entries with index values less than 10 are stored off the first branch of the root node 1210, entries with index values greater than or equal to 10 and less than 20 are stored off the second branch of the root node 1210, and entries with index values greater than or equal to 20 are stored off the third branch of root node 1210.


Similarly, inner node 1220 has index values 3 and 7. Accordingly, entries with index values less than 3 are stored off the first branch of the inner node 1220, entries with index values greater than or equal to 3 and less than 7 are stored off the second branch of the inner node 1220, and entries with index values greater than or equal to 7 (and, given the root node's entries, less than 10) are stored off the third branch of inner node 1220.


In addition, leaf node 1250 has index values 1 and 2. Accordingly, entries with index values of 1 or 2 are stored in the leaf node 1250. Similarly, entries with index values of 3, 4, 5, or 6 are stored in the leaf node 1260, and entries with index values of 7, 8, or 9 are stored in the leaf node 1270.
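
For purposes of illustration only, the sketch below walks a small tree shaped like the one in FIG. 12, choosing a branch at each inner node by comparing the requested index value against the node's index entries until a leaf is reached. The dictionary-based node representation is an assumption of this sketch, not the disclosed node format.

    # Sketch of looking up an identifier in a tree like FIG. 12 (illustrative only).
    def descend(node, ident):
        """Return the leaf whose key range covers `ident`."""
        while node["children"]:                       # still at an inner node
            keys = node["keys"]                       # e.g. [10, 20] for root node 1210
            i = 0
            while i < len(keys) and ident >= keys[i]:
                i += 1
            node = node["children"][i]                # branch chosen by key range
        return node

    leaf_1250 = {"keys": [1, 2], "children": [], "entries": {1: "addrA", 2: "addrB"}}
    leaf_1260 = {"keys": [3, 4, 5, 6], "children": [], "entries": {}}
    leaf_1270 = {"keys": [7, 8, 9], "children": [], "entries": {}}
    inner_1220 = {"keys": [3, 7], "children": [leaf_1250, leaf_1260, leaf_1270]}
    root_1210 = {"keys": [10, 20],
                 "children": [inner_1220,
                              {"keys": [], "children": [], "entries": {}},   # subtrees for >= 10
                              {"keys": [], "children": [], "entries": {}}]}  # and >= 20 omitted

    leaf = descend(root_1210, 2)                      # reaches leaf node 1250
    addr = leaf["entries"][2]                         # "addrB"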


The exemplary index tree 1200 also maintains the protection level of the index tree. For example, leaf node 1250 is mirrored two times and root node 1210 is mirrored three times.


2. Example Leaf Node



FIG. 13 illustrates an example leaf node 1300 that corresponds to leaf node 1250. The exemplary leaf node 1300 includes a header 1352 noting that the node is a leaf node, the node is version 2.1, the number of entries is 2, and the maximum protection is 2×. The entries 1334 include 01 and 02, whose corresponding offsets 1340 point to the respective copies of the address entries "addrA" and "addrB" 1338. In this example, "addrA" is the address of the metadata data structure with identifier 01.


Furthermore, as discussed above, FIGS. 11 and 13 illustrate examples of how leaf nodes may be stored. Various configurations of the superblocks, inner nodes, and leaf nodes may be used.


V. CONCLUSION

Although the foregoing disclosure has shown, described, and pointed out the fundamental novel features of the invention as applied to the above-disclosed embodiments, it should be understood that various omissions, substitutions, and changes in the form and details of the devices, systems, and/or methods shown may be made by those skilled in the art without departing from the scope of the invention. Consequently, the scope of the invention should not be limited to the foregoing description, but should be defined by the appended claims.

Claims
  • 1. A computer-implemented method of maintaining protection levels for nodes of a distributed mirrored indexed tree while modifying data stored in the nodes, the method comprising: receiving a request to modify a target node of a mirrored index data structure organized in a hierarchy and stored among a plurality of storage devices, the mirrored index data structure comprising: a root node; at least one copy of the root node, wherein the root node and the copy of the root node are stored on different storage devices; a plurality of nodes beneath the root node in the hierarchy, each of the nodes referencing one or more index nodes or indexed data, the plurality of nodes including the target node; and at least one copy of each node of the plurality of nodes stored on one of the plurality of storage devices, wherein each node of the plurality of nodes and its respective copy are stored on different storage devices; accessing, by a computer processor, a first reference to the target node and a second reference to a copy of the target node, the first reference and the second reference stored on a parent node of the target node, wherein the parent node is the root node or one of the plurality of nodes and is above the target node in the hierarchy, wherein the target node is stored on a first storage device of a plurality of storage devices, the copy of the target node is stored on a second storage device of the plurality of storage devices, the second storage device different from the first storage device; determining, by a computer processor, whether the second storage device storing the copy of the target node is unavailable; accessing, by a computer processor, the target node; modifying, by a computer processor, the target node based on the request; if the second storage device is unavailable, storing, by a computer processor, a copy of the modified target node on a third storage device of the plurality of storage devices, wherein the third storage device is available and is different from the first storage device and the second storage device, and updating, by a computer processor, the second reference; and if the second storage device is available, updating, by a computer processor, the copy of the target node.
  • 2. The computer-implemented method of claim 1 wherein updating the second reference comprises recursively applying, by a computer processor, the method of claim 1 to modify one or more references in one or more ancestor nodes of the target node, the ancestor nodes among the plurality of nodes, above the target node in the hierarchy, and including at least the parent node, one or more copies of the parent node, any grandparent nodes of the target node, and any copies of any grandparent nodes.
  • 3. The computer-implemented method of claim 1 wherein updating the second reference comprises: updating, by a computer processor, the parent node to reference the copy of the modified target node stored on the third storage device and to not reference the copy of the target node stored on the second storage device; and updating, by a computer processor, any copies of the parent node to reference the copy of the modified target node stored on the third storage device and to not reference the copy of the target node stored on the second storage device.
  • 4. The computer-implemented method of claim 3, further comprising: determining, by a computer processor, that the parent node is the root node; determining that a copy of the parent node is stored on an unavailable device; storing a copy of the updated parent node on one of the plurality of storage devices that is available and that is different from the device on which the parent node is stored; determining, by a computer processor, whether there is a quorum of available storage devices among the plurality of storage devices that are available; and if there is a quorum, updating, by a computer processor, data stored on each of the plurality of devices to reference the copy of the updated parent node stored on the available third device and to not reference the copy of the parent node stored on the unavailable storage device.
  • 5. The computer-implemented method of claim 1 wherein updating the second reference comprises recursively applying, by a computer processor, the method of claim 1 to modify one or more references in one or more ancestor nodes of the target node, the ancestor nodes among the plurality of nodes, above the target node in the hierarchy, and including at least the parent node, one or more copies of the parent node, any grandparent nodes of the target node, and any copies of any grandparent nodes, wherein the method of claim 1 is recursively applied, by a computer processor, to the one or more ancestor nodes of the target node until it is determined, by a computer processor, that the one or more ancestor nodes and each copy of the one or more ancestor nodes are stored on available storage devices.
  • 6. The computer-implemented method of claim 1 wherein the distributed mirrored index tree is implemented as at least one of a balanced tree, a hash table, and a linked list.
  • 7. A computer-implemented method of restoring mirrored nodes of a distributed indexed tree, the method comprising: accessing, by a computer processor, a first reference to a child node of a mirrored index data structure organized in a hierarchy and stored among a plurality of drives, the mirrored index data structure comprising: a root node; at least one copy of the root node, wherein the root node and the copy of the root node are stored on different drives; a plurality of nodes beneath the root node in the hierarchy, each of the plurality of nodes referencing one or more index nodes or indexed data, the plurality of nodes including the child node; and at least one copy of each node of the plurality of nodes stored on one of the plurality of drives, wherein each node of the plurality of nodes and its respective copy are stored on different drives; accessing, by a computer processor, a second reference to a copy of the child node, the child node stored on a first drive of a plurality of drives and the copy of the child node stored on a different second drive of the plurality of drives, the first reference and the second reference stored in a parent node of the child node, wherein the parent node is the root node or one of the plurality of nodes and is above the child node in the hierarchy; determining, by a computer processor, whether the second drive on which the copy of the child node is stored is unavailable; and if the second drive is unavailable, storing, by a computer processor, a new copy of the child node on an available third drive of the plurality of drives, the third drive different than the first drive and the second drive, and updating, by a computer processor, the second reference.
  • 8. The computer-implemented method of claim 7 wherein updating the second reference comprises recursively applying, by a computer processor, the method of claim 7 to modify one or more references in one or more ancestor nodes of the child node and to restore ancestor nodes of the child node and copies of the ancestor nodes of the child node which are stored on unavailable drives, the ancestor nodes among the plurality of nodes, above the child node in the hierarchy, and including at least the parent node, one or more copies of the parent node, any grandparent nodes of the child node, and any copies of the grandparent nodes.
  • 9. The computer-implemented method of claim 7 further comprising: accessing a third reference to a grandchild node stored on the plurality of drives and a fourth reference to a copy of the grandchild node stored on the plurality of drives, the grandchild node beneath the child node in the hierarchy and the copy of the grandchild node stored on different drives, the third reference and the fourth reference stored in the child node; determining whether the drive on which the copy of the grandchild node is stored is unavailable; and if the drive on which the copy of the grandchild node is stored is unavailable, storing a new copy of the grandchild node on an available drive of the plurality of drives, the new copy of the grandchild node and the grandchild node stored on different drives, and updating the fourth reference to reference the new copy of the grandchild node stored on the available drive instead of the copy of the grandchild node stored on the unavailable drive.
  • 10. The computer-implemented method of claim 7 wherein the child node is an indexed node in a distributed mirrored index tree, the distributed mirrored index tree implemented as at least one of a balanced tree, a hash table, and a linked list.
  • 11. A distributed system comprising: a plurality of storage units; a balanced, mirrored index tree stored among the plurality of storage units, the balanced, mirrored index tree organized in a hierarchy and comprising: a root node stored on one of the plurality of storage units; a copy of the root node stored on one of the plurality of storage units, wherein the root node and the copy of the root node are stored on different storage units; a plurality of nodes beneath the root node in the hierarchy, each of the nodes stored on one of the plurality of storage units; and a copy of each node of the plurality of nodes stored on one of the plurality of storage units, wherein each node of the plurality of nodes and its respective copy are stored on different storage units; wherein each of the plurality of storage units comprises one or more memory devices, at least one executable software module stored on the one or more memory devices, a processor configured to execute the at least one executable software module, a first index tree reference referencing the root node and a second index tree reference referencing the copy of the root node, the first index tree reference and the second index tree reference stored on the one or more memory devices of each of the plurality of storage units, and wherein the at least one executable software module comprises a modify module configured to: identify a target node of the plurality of nodes to be modified; determine that a copy of the target node is stored on an unavailable storage unit; modify the target node; store a copy of the modified target node on one of the plurality of storage units that is available and that is different from the storage unit that stores the target node; and update a parent node of the target node to reference the copy of the modified target node instead of the copy of the target node, wherein the parent node is the root node or one of the plurality of nodes and is above the target node in the hierarchy.
  • 12. The distributed system of claim 11, wherein the at least one executable software module further comprises a restore module configured to: for each child node of the parent node, determine that one copy of the child node is located on an unavailable storage unit; retrieve the child node from an available storage unit; initiate the storage of a new copy of the child node on an available storage unit that is different from the storage unit on which the child node is stored; initiate the updating of the parent node and copies of the parent node to reference the new copy of the child node instead of the copy of the child node.
  • 13. The distributed system of claim 11, the plurality of storage units further comprising a version value stored on the one or more memory devices, wherein the at least one executable software module further comprises a merge module configured to: receive a first request from a first storage unit to merge into the plurality of storage units, the first storage unit configured to store a first version value; query the plurality of storage units for their corresponding version values; determine a highest version value from the version values and determine which storage unit stores the highest version value; determine whether the first version value is lower than the highest version value; and if the first version value is lower than the highest version value, update the first version value to be the same as the highest version value; update the first index tree reference of the first storage unit to reference the same root node that is referenced by the first index tree reference of the storage unit storing the highest version value; and update the second index tree reference of the first storage unit to reference the same copy of the root node that is referenced by the second index tree reference of the storage unit storing the highest version value.
  • 14. The distributed system of claim 11 wherein the balanced index tree is implemented as at least one of a balanced tree, a hash table, and a linked list.
  • 15. The computer-implemented method of claim 3 wherein updating the second reference further comprises: determining that a copy of the parent node is stored on an unavailable device; storing a copy of the modified parent node on one of the plurality of devices that is available and that is different from the device on which the parent node is stored; accessing a grandparent node of the target node, the grandparent node including references to the parent node and to the copy of the parent node; modifying the grandparent node to reference the copy of the modified parent node instead of the copy of the parent node; accessing a copy of the grandparent node, the copy of the grandparent node stored on a device that is different from the device on which the grandparent node is stored; and modifying the copy of the grandparent node to reference the copy of the modified parent node instead of the copy of the parent node.
  • 16. The computer-implemented method of claim 3 wherein updating the second reference further comprises: determining that a copy of the parent node is stored on an unavailable device; storing a copy of the modified parent node on one of the plurality of devices that is available and that is different from the device on which the parent node is stored; accessing a grandparent node of the target node, the grandparent node including references to the parent node and to the copy of the parent node; modifying the grandparent node to reference the copy of the modified parent node instead of the copy of the parent node; determining that a copy of the grandparent node is stored on an unavailable device; storing a copy of the modified grandparent node on one of the plurality of devices that is available and that is different from the device on which the grandparent node is stored; accessing a root node of the target node, the root node including references to the grandparent node and to the copy of the grandparent node, modifying the root node to reference the copy of the modified grandparent node instead of the copy of the grandparent node; accessing a copy of the root node, the copy of the root node stored on a device that is different from the device on which the root node is stored; and modifying the copy of the root node to reference the copy of the modified grandparent node instead of the copy of the grandparent node.
  • 17. The computer-implemented method of claim 3 wherein updating the second reference further comprises: determining that a copy of the parent node is stored on an unavailable device; storing a copy of the modified parent node on one of the plurality of devices that is available and that is different from the device on which the parent node is stored; accessing a grandparent node of the target node, the grandparent node including references to the parent node and to the copy of the parent node; modifying the grandparent node to reference the copy of the modified parent node instead of the copy of the parent node; determining that a copy of the grandparent node is stored on an unavailable device; storing a copy of the modified grandparent node on one of the plurality of devices that is available and that is different from the device on which the grandparent node is stored; accessing a root node of the target node, the root node including references to the grandparent node and to the copy of the grandparent node, modifying the root node to reference the copy of the modified grandparent node instead of the copy of the grandparent node; determining that a copy of the root node is stored on an unavailable device; and storing a copy of the modified root node on one of the plurality of devices that is available and that is different from the device on which the root node is stored, accessing a superblock node of the target node, the superblock node including references to the root node and to the copy of the root node, determining whether there is a quorum among the plurality of devices that are available; if there is a quorum, updating the superblock to reference the copy of the modified root node instead of the copy of the root node and storing an updated superblock version of the distributed mirrored index tree.
  • 18. The computer-implemented method of claim 17 wherein updating the superblock further comprises instructing other devices in the plurality of devices to reference the copy of the modified root node instead of the copy of the root node and to store the updated superblock version of the distributed mirrored index tree.
  • 19. A computer-readable storage device including a set of instructions that causes a computer to perform the computer-implemented method of claim 1.
  • 20. A computer-readable storage device including a set of instructions that causes a computer to perform the computer-implemented method of claim 7.
  • 21. The distributed system of claim 11, wherein the modify module is further configured to apply the modify module using the parent of the target node as the target node.
  • 22. The distributed system of claim 12, wherein the restore module is further configured to apply the restore module using the parent of the parent node as the parent node.
  • 23. The distributed system of claim 11, wherein the plurality of nodes comprises leaf nodes comprising address data and inner nodes comprising references to leaf nodes or other inner nodes.