1. Field of the Invention
This invention relates to systems and methods for executing commands in distributed systems.
2. Description of the Related Art
In a distributed system, operations are performed by a plurality of network nodes. A typical transaction includes a series of commands that are executed sequentially by the distributed system. For example, a conventional transaction 100 may proceed as follows.
After starting 106 the transaction 100, the computer network executes a first command 110 (shown as “CMD_A”). The first command 110 may be executed on the local node, sent to one or more remote nodes, or both. The computer network may wait for the first command 110 to be completed before continuing with the transaction 100. If, for example, the first command 110 is sent to one or more remote nodes for execution thereon, the local node will wait until it receives a response from each of the remote nodes.
Once the first command 110 is complete, the computer network executes a second command 120 (shown as “CMD_B”). The computer network waits for the second command 120 to be completed before executing a third command 130 (shown as “CMD_C”). Again, the computer network waits for the third command 130 to be completed before executing a fourth command 140 (shown as “CMD_D”). Once the fourth command 140 is completed, the transaction 100 ends 108.
System resources, such as the availability of central processing units to execute the commands 110, 120, 130, 140 or bandwidth to send messages across the computer network, may be underutilized as the computer network waits for each command 110, 120, 130, 140 to execute in turn. For example, one or more of the nodes may be idle or may have extra processing capabilities available that are not used while the computer network waits for other nodes to complete their tasks. This occurs even if the underutilized system resources have sufficient data available to them to perform subsequent operations. For example, if all of the data and resources necessary to execute both the first command 110 and the third command 130 are available at the start 106 of the transaction 100, waiting for the first command 110 and the second command 120 to be completed before executing the third command 130 adds unnecessary delay to the overall transaction 100.
Thus, it is advantageous to use techniques and systems for reducing latency in distributed systems by executing commands as sufficient information and system resources become available. In one embodiment, commands in a transaction include dependency information and an execution engine is configured to execute the commands as the dependencies become satisfied. In addition, or in other embodiments, the commands also include priority information. If sufficient resources are not available to execute two or more commands with satisfied dependencies, the execution engine determines an order for executing the commands based at least in part on the priority information. In one embodiment, time-intensive commands are assigned a higher priority than commands that are expected to take less time to execute.
In one embodiment, a method is provided for performing a transaction in a distributed system. The method may include providing a first command and a second command that define functions to be performed in the transaction, wherein the first command further defines a dependency; holding the first command in a waiting state until the dependency is satisfied; prioritizing the first command and second command; and executing the first command and the second command in an order based at least in part on the prioritization.
In an additional embodiment, a distributed system is provided. The distributed system may include a plurality of nodes configured to participate in a transaction through a computer network, wherein the transaction comprises commands with dependencies; a layout manager module configured to determine in which one of the plurality of nodes to write blocks of data; and an execution manager module configured to process the commands based at least in part on the dependencies.
In another embodiment, a method is provided for processing commands in a distributed system. The method may include defining dependencies for a plurality of commands; setting the plurality of commands in a waiting state; as dependencies are satisfied for particular commands, setting the particular commands in a runnable state; and executing the particular commands in the runnable state as system resources become available.
In a further embodiment, a network is provided. The network may include a plurality of nodes configured to participate in a transaction over the network, wherein the transaction comprises a plurality of commands, wherein at least one of the commands comprises dependency information, and wherein the network executes the at least one command when the dependency information is satisfied.
For purposes of summarizing the invention, certain aspects, advantages and novel features of the invention have been described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, the invention may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.
Systems and methods that embody the various features of the invention will now be described with reference to the following drawings, in which:
Rather than executing commands sequentially, an execution engine, according to one embodiment, processes commands asynchronously as sufficient information and system resources become available. The commands include dependency information that defines relationships among the commands. For example, a first command may include dependency information that specifies that the execution engine is to hold the first command in a waiting state until determining that one or more nodes in a distributed system have successfully executed a second command. Once the dependency is satisfied, the execution engine moves the first command to a runnable state where it can be executed by the nodes as system resources become available.
In a transaction with a plurality of commands executed by nodes in a distributed system, the execution engine increases overlapping use of system resources by moving the commands from the waiting state to the runnable state as dependencies are satisfied. Thus, the nodes can execute multiple commands with satisfied dependencies at the same time. In other words, the nodes do not have to wait to execute commands with satisfied dependencies while other commands are executed by other nodes. This reduces latency and increases the overall speed of the transaction.
In addition, or in other embodiments, the commands also include priority information. If sufficient resources are not available to execute two or more commands with satisfied dependencies, the execution engine determines an order for executing the commands based at least in part on the priority information. In one embodiment, time-intensive commands are assigned a higher priority than commands that are expected to take less time to execute.
In the following description, reference is made to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific embodiments or processes in which the invention may be practiced. Where possible, the same reference numbers are used throughout the drawings to refer to the same or like components. In some instances, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. The present disclosure, however, may be practiced without the specific details or with certain alternative equivalent components and methods to those described herein. In other instances, well-known components and methods have not been described in detail so as not to unnecessarily obscure aspects of the present disclosure.
I. Command Data Structure
The dependency field 220 specifies conditions (also referred to herein as “waiters”) for executing the action defined by the function field 210. For example, the dependency field 220 may specify that certain data should be available before the action defined by the function field 210 is executed. As another example, the dependency field 220 may specify that a node in a distributed system execute one or more other commands to completion before executing the action defined by the function field 210. In other embodiments, the dependency field 220 may store a count of commands (for example, a wait count) upon which the command should wait, as well as a list of other commands that are awaiting completion of this command. As discussed in detail below, an execution engine is configured to move the command 200 from a waiting state to a runnable state as the waiters specified in the dependency field 220 are satisfied. Once in the runnable state, one or more nodes in the distributed system can execute the action defined by the function field 210.
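A simplified sketch of such a command data structure follows; the class and field names here are illustrative and are not drawn from the appendix:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Command:
    """Illustrative command data structure (names are hypothetical)."""
    function: Callable[[], object]   # action to perform (cf. function field 210)
    wait_count: int = 0              # commands this one waits on (cf. dependency field 220)
    waiters: List["Command"] = field(default_factory=list)  # commands awaiting this one
    priority: int = 0                # higher value = higher priority (cf. priority field 230)

    def complete(self) -> None:
        # Notify each waiter that this command finished by
        # decrementing its wait count.
        for w in self.waiters:
            w.wait_count -= 1

    @property
    def runnable(self) -> bool:
        # A command becomes runnable once its wait count reaches zero.
        return self.wait_count == 0

# Wire up a dependency: b waits on a.
a = Command(function=lambda: "A")
b = Command(function=lambda: "B", wait_count=1)
a.waiters.append(b)
a.complete()  # b is now runnable
```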
The priority field 230 specifies the order in which the action defined by the function field 210 is executed in relation to other commands in a transaction. For example, if sufficient nodes, processors within a node, network connections, or other system resources are not available to execute two or more commands with satisfied dependencies, the execution engine determines the order of execution based at least in part on information in the priority field 230.
In one embodiment, the priority field 230 comprises a high priority flag, a medium priority flag, and a low priority flag. Commands having high priority are executed before commands having medium or low priority. Similarly, commands having medium priority are executed before commands having low priority. Commands with the same priority level are executed in the order in which their dependencies are satisfied. If multiple commands with the same priority level are ready to be executed at the same time, the execution engine may use one or more common techniques to select the ordering of the commands (for example, round robin selection, first in first out selection, random selection, and the like). In some embodiments, each command is associated with one priority, but in other embodiments, each command may be associated with more than one priority and/or may have sub-priorities. An artisan will recognize from the disclosure herein that priorities can be specified in other ways including, for example, specifying only two levels of priority, more than three levels of priority, and/or sublevels of priority within one or more levels of priority. Priorities can also be specified dynamically during execution of a transaction by setting conditions in the priority field 230. For example, the priority level may depend at least in part on a result, such as a pass or failure, obtained by executing another command.
In one embodiment, commands expected to take longer to execute than other commands are given a higher priority. Thus, as resources become available, the nodes can execute the lower priority commands while continuing to execute the higher priority commands. This overlap in using system resources reduces latency as commands are executed in parallel.
In addition, or in other embodiments, commands sent from a local node to one or more remote nodes in a network are assigned a higher priority than commands that are executed locally and do not utilize the network. The local node sends higher priority commands to the remote nodes before executing lower priority commands. As the remote nodes execute higher priority commands, the local node can then execute lower priority commands at the same time. This increases utilization of system resources and reduces latency because the remote nodes do not have to wait for the local node to execute the lower priority commands and the local node does not have to wait for the remote nodes to execute the higher priority commands.
A set of sample priorities is described below. It is recognized, however, that a variety of priority levels, and sub-levels, may be used and that priorities may be assigned in a variety of ways.
Including the function field 210, the dependency field 220, and the priority field 230 within the command data structure 200 also allows a distributed system to perform a transaction asynchronously. For example, a local node can send commands to a remote node that determines when and in what order to execute the commands without waiting for further messages from the local node. The remote node makes these determinations based on the information in the dependency field 220 and the priority field 230. Pushing control of command ordering from local nodes to remote nodes reduces the number of messages sent across the network, which further reduces latency.
In other embodiments, the dependencies 220 and the priorities 230 may be stored apart from the function, such as, in a look-up table, a database, or the like. For example, one or more functions may be pre-assigned dependencies 220 and priorities 230 such that once the command 200 is received, the node can look up the corresponding dependencies 220 and/or priorities 230 in the look-up table, database or the like.
II. Exemplary Dependency Graph
Dependency graphs are one way to illustrate relationships between commands in a transaction.
The exemplary dependency graphs have lines between commands to indicate that the execution of one command cannot begin until all commands to which it points have completed. For example, the first command 312 and the third command 316 each point to the start command 310 to indicate that the start command 310 executes before the first command 312 and the third command 316 execute, that is, that the first command and the third command depend on the execution of the start command. As shown, the first command 312 executes to completion before the second command 314 executes. Further, both the second command 314 and the third command 316 execute to completion before the fourth command 318 executes. After the fourth command 318 executes, the end command 320 executes.
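The graph described above can be sketched as a map from each command to the set of commands it waits on; the command names here are illustrative shorthand for the numbered commands:

```python
# Dependency map mirroring the graph above: each command maps to the
# commands that must complete before it may run (names illustrative).
deps = {
    "start": set(),
    "cmd1": {"start"},          # first command 312
    "cmd2": {"cmd1"},           # second command 314
    "cmd3": {"start"},          # third command 316
    "cmd4": {"cmd2", "cmd3"},   # fourth command 318
    "end": {"cmd4"},
}

def runnable(done: set) -> list:
    """Commands whose dependencies are all satisfied and which have not yet run."""
    return sorted(c for c, d in deps.items() if d <= done and c not in done)

# After the start command completes, cmd1 and cmd3 may run in parallel.
ready = runnable({"start"})
```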
Since the third command 316 does not depend on the first command 312 or the second command 314, the system can execute the third command 316 any time system resources are available after executing the start command 310. This may occur, for example, when all of the data necessary to execute the third command 316 is available after the system calls the start command 310. When sufficient resources are available, the system may execute the third command 316 in parallel with the first command 312, the second command 314, or both. Parallel execution increases utilization of system resources and decreases latency.
When sufficient resources are not available, execution order in one embodiment is determined by defining priorities for the commands 312, 314, 316, 318. For example, if system resources are not available to execute the first command 312 and the third command 316 at the same time, the system will execute the command with the highest priority first. If, for example, the first command 312 has a medium priority and the third command 316 has a high priority, then the system will execute the third command 316 before executing the first command 312.
In one embodiment, priorities are based at least in part on increasing local and remote resource overlap in the system. For example, the third command 316 may be given a higher priority than the first command 312 if a local node is configured to send the third command to a remote node for execution while the local node executes the first command 312. Thus, while the local node may not have sufficient resources to execute the first command 312 and the third command 316 at the same time, sending the third command 316 to the remote node before executing the first command 312 allows the commands to be executed in parallel. In addition, or in other embodiments, higher priorities are given to commands that take longer to execute. Starting longer commands before shorter commands allows the shorter commands to execute as system resources become available while the longer commands continue to execute, thereby increasing parallel usage of system resources.
III. Node Operation
The node 410 comprises a layout manager module 412 and an execution manager module 414. As used herein, the word module is a broad term having its ordinary and customary meaning and can also refer to logic embodied in hardware or firmware, or to a collection of software instructions (i.e., a “software module”), possibly having entry and exit points, written in a programming language, such as, for example, C or C++. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules described herein are preferably implemented as software modules, but may be represented in hardware or firmware.
A. Layout Manager Module
The layout manager module 412 is configured to determine where information is located in the distributed system and where processes will be performed for a particular transaction. As described in detail below, in some embodiments the node 410 comprises a smart storage unit in a distributed file system and the layout manager module 412 is configured to determine a layout when writing or restriping blocks of data in the distributed file system. For example, the layout manager module 412 may be configured to determine a new file layout during a restriping process when one or more storage units are added to or removed from the distributed file system such that data may be added to the new storage units or redistributed to other storage units.
In addition, the layout manager module 412 may be configured to determine a new file layout during a restriping process used when the protection scheme of a file is changed. For example, if a file goes from 3+1 parity protection to 4+1 parity protection, the layout manager module 412 determines a new file layout so data can be moved to storage units in the new layout in a manner that meets the new parity protection. In one embodiment, the layout manager module 412 continues to manage the old layout until the new layout is complete to allow users access to the file under the old layout such that the data is protected by the old parity scheme until the new parity scheme is available. In one embodiment, when repairing data, the number of protection groups for a single transaction may be calculated by using the least common multiple of the old protection scheme's parity group size and the new protection scheme's parity group size such that no individual blocks are covered by two different parity protection blocks.
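The protection-group calculation described above reduces to a least-common-multiple computation. A minimal sketch, assuming for illustration that the 3+1 and 4+1 schemes in the example correspond to parity group sizes of 3 and 4:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple of two parity group sizes."""
    return a * b // gcd(a, b)

# Assuming group sizes 3 (old 3+1 layout) and 4 (new 4+1 layout),
# a span of lcm(3, 4) = 12 blocks aligns on both layouts' group
# boundaries, so a repair transaction sized to that span avoids
# covering any block with two different parity protection blocks.
span = lcm(3, 4)
```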
B. Execution Manager Module
The exemplary execution manager module 414 is configured to process the set of commands in the transaction. The execution manager module 414 processes the commands as their dependencies become satisfied and as system resources become available. In some embodiments, the execution manager module 414 processes the commands according to predetermined priorities. The execution manager module 414 allows nodes in the distributed system to execute commands with higher priorities before executing commands with lower priorities as system resources become available.
The execution manager module 414 is also referred to herein as an “execution engine” or “engine.” Exemplary pseudocode according to one embodiment of the invention for executing the engine can be found in the attached Appendix which forms a part of the patent application. It should be recognized, however, that the exemplary pseudocode is not meant to limit the scope of the invention.
The execution manager module 414 initially places a command in the waiting state 510. In a block 512, the execution manager module 414 queries whether the command's dependencies are satisfied. As discussed above, the dependencies may include, for example, a specification that one or more other commands in the transaction execute to completion or return a specified result. As another example, the dependencies may include a specification that one or more other commands in the transaction start executing. If the dependencies are not satisfied, the command remains in the waiting state 510. In other embodiments, the dependencies include a count of commands (for example, a wait count) upon which the command waits. As those commands complete execution, the command's wait count is decremented. Once the command's wait count reaches zero, the command proceeds to the runnable state, or in other embodiments to the running state. In addition, the command may include a list of other commands that are awaiting completion of the command. Once the command has completed execution, a message is sent to the other commands indicating that the command has completed execution, such that the wait counts of the other commands can be decremented.
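The wait-count mechanism described above can be sketched as follows; the state names track the states discussed herein, while the class and variable names are illustrative:

```python
WAITING, RUNNABLE, RUNNING, DONE = "waiting", "runnable", "running", "done"

class Cmd:
    """Illustrative command with wait-count-driven state transitions."""
    def __init__(self, name, wait_count=0):
        self.name = name
        self.wait_count = wait_count
        self.waiters = []  # commands awaiting this command's completion
        # A command with no outstanding dependencies is immediately runnable.
        self.state = WAITING if wait_count else RUNNABLE

    def finish(self):
        """Mark this command done and notify each waiter."""
        self.state = DONE
        for w in self.waiters:
            w.wait_count -= 1
            if w.wait_count == 0:      # all dependencies now satisfied
                w.state = RUNNABLE

a = Cmd("CMD_A")
b = Cmd("CMD_B", wait_count=1)  # B waits on A
a.waiters.append(b)
a.finish()                      # B moves from waiting to runnable
```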
When the dependencies are satisfied, the execution manager module 414 places the command in the runnable state 520. In a block 522, the execution manager module 414 queries whether system resources are available to execute the command. For example, the execution manager module 414 may determine that a processor on a local node is currently executing another command in the transaction and is unavailable to execute the command. Further, the execution manager module 414 may determine that a network connection is unavailable to send the command to a remote node in the network or that the remote node is unavailable to execute the command.
Once system resources become available to execute the command, the execution manager module 414 queries in a block 524 whether the command's priorities have been satisfied. For example, the execution manager module 414 determines whether other transaction commands in the runnable state 520 that also use the available system resources have a higher priority than the command. If the command has the highest priority, or if the command has been in the runnable state 520 longer than other transaction commands with the same priority, the execution manager module 414 determines that the command's priorities are satisfied.
In one embodiment, the command's priorities are based on factors such as the system resources used by the command, the amount of time expected to execute the command as compared to other transaction commands, whether the command is to be executed by a local node or a remote node, a user's or programmer's preference, combinations of the foregoing, or the like. In one embodiment, priority rules specify that a user, a programmer, the execution manager module 414, or a combination of the foregoing assign a high level of priority to commands executed on remote nodes and commands expected to execute slower than other transaction commands. As noted above, the execution manager module 414 may select among commands with the same priority by using standard selection techniques such as, for example, round robin, first in first out, random, and the like.
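One way to sketch priority-ordered selection among runnable commands, with first-in-first-out ordering among commands of equal priority, is a priority queue; the command names and priority values are illustrative:

```python
import heapq
import itertools

_counter = itertools.count()  # FIFO tie-breaker among equal priorities

def enqueue(q, priority, name):
    # heapq is a min-heap, so the priority is negated: higher-priority
    # commands pop first; the counter breaks ties in favor of the
    # command that has been runnable the longest.
    heapq.heappush(q, (-priority, next(_counter), name))

def pick(q):
    """Select the next runnable command to execute."""
    return heapq.heappop(q)[2]

q = []
enqueue(q, 1, "local_write")   # lower priority: executes locally
enqueue(q, 3, "remote_alloc")  # higher priority: sent to a remote node
enqueue(q, 3, "remote_write")  # same priority, became runnable later
order = [pick(q), pick(q), pick(q)]
```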
When the system resources are available and the priorities are satisfied, the execution manager module 414 places the command in the running state 530. In a block 532, the system resources such as one or more nodes in the distributed system execute the command. In a block 534, the execution manager module 414 queries whether system resources have completed execution of the command. Depending on the procedure called by the command, a local node executes the procedure, sends the procedure to a remote node to be executed, or both. For example, the procedure may call for space to be allocated for writing data in a remote node. The local node sends the command comprising the procedure call, dependency information and priority information to the remote node where the procedure call is executed.
For some commands, the remote node sends a response back to the local node when the procedure call has been completed. Other commands are sent asynchronously wherein the remote node does not send a response back to the execution manager module 414 when the procedure call has been completed. For asynchronous commands, the execution manager module 414 determines that the command is complete after the command has been sent to the remote node or once a predetermined amount of time elapses after the message has been sent to the remote node.
Once the execution manager module 414 determines that the command has been executed to completion, the execution manager module 414 places the command in the done state 540. Once the commands in the transaction reach the done state 540, the process 500 ends. By moving the transaction commands through the waiting state 510, the runnable state 520, the running state 530, and the done state 540 as dependencies and priorities are satisfied, the execution manager module 414 increases the overlapping of system resource usage and reduces latency.
IV. Distributed File System Example
In one embodiment, an execution engine is used in a distributed file system as described in U.S. patent application Ser. No. 10/007,003, filed Nov. 9, 2001 and issued as U.S. Pat. No. 7,685,126 on Mar. 23, 2010, which claims priority to Application No. 60/309,803, filed Aug. 3, 2001, and U.S. patent application Ser. No. 10/714,326, filed Nov. 14, 2003, which claims priority to Application No. 60/426,464, filed Nov. 14, 2002, all of which are hereby incorporated herein by reference in their entirety. For example, the execution engine may be used in an intelligent distributed file system that enables the storing of file data among a set of smart storage units which are accessed as a single file system and utilizes a metadata data structure to track and manage detailed information about each file, including, for example, the device and block locations of the file's data blocks, to permit different levels of replication and/or redundancy within a single file system, to facilitate the change of redundancy parameters, to provide high-level protection for metadata and to replicate and move data in real-time. In addition, the execution engine may be configured to write data blocks or restripe files distributed among a set of smart storage units in the distributed file system wherein data is protected and recoverable even if a system failure occurs during the restriping process.
High-level exemplary transactions are provided below including a write transaction, a mirror transaction, mirror recovery transaction, a parity write transaction, and a restripe transaction. An artisan will recognize from the disclosure herein that many other transactions are possible. The attached Appendix, which forms a part of the patent application, provides a list of exemplary commands and pseudocode according to one embodiment of the invention. It should be recognized, however, that the exemplary commands and pseudocode are not meant to limit the scope of the invention.
A. Write Transaction
The write transaction 600 includes a get data command 604, an allocate command 606, a write command 608 and a set block address command 610. The get data command 604 creates a temporary buffer and stores the specified data block therein. The allocate command 606 allocates space for the specified data block in a memory location in the node determined by the layout procedure 602. Since the layout procedure 602 determines the specified data block that will be stored in the node, the get data command 604 and the allocate command 606 depend on the layout procedure 602 and will not execute until the layout procedure 602 completes execution.
In other embodiments, the layout command may be a start command and the determination of where to store data may be done in conjunction with other commands such as the allocate command. In some embodiments, the layout command or the allocate command determines the specific memory address in which to store the data. In other embodiments, the specific memory address is determined in real time by the node. The write command 608 depends on both the get data command 604 and the allocate command 606. Once the system executes the get data command 604 and the allocate command 606, the node specified by the layout procedure 602 executes the write command 608, which writes the specified data block stored in the temporary buffer to the allocated memory location. The set block address command 610 depends on the allocate command 606. Once the system executes the allocate command 606, the set block address command 610 stores an address corresponding to the allocated memory location in a metadata data structure or an inode describing the file that corresponds to the specified data block. Once the system executes the write command 608 and the set block address command 610, the write transaction 600 ends with a commit protocol 612 wherein participating nodes agree on the write transaction's 600 final outcome by either committing or aborting the write transaction 600. It is recognized that the set block address command 610 may be different depending on the allocations. For example, there could be one set block address command 610 corresponding to each allocation, one set block address command 610 for data and one for error correction data, and a different set block address command 610 for different nodes. In addition, if different nodes respond to transaction starts at different times, separate set block address commands 610 may be used for different destinations.
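The dependency relationships of the write transaction 600 can be sketched as a dependency map; the command names are illustrative shorthand for the numbered commands above:

```python
# Dependency wiring for the write transaction: each command maps to the
# commands that must complete before it may run (names illustrative).
write_deps = {
    "layout": set(),
    "get_data": {"layout"},                     # get data command 604
    "allocate": {"layout"},                     # allocate command 606
    "write": {"get_data", "allocate"},          # write command 608
    "set_block_address": {"allocate"},          # set block address command 610
    "commit": {"write", "set_block_address"},   # commit protocol 612
}

def ready(done: set) -> list:
    """Commands whose dependencies are all satisfied and which have not yet run."""
    return sorted(c for c, d in write_deps.items() if d <= done and c not in done)

# Once the layout procedure completes, get_data and allocate may run in
# parallel; set_block_address needs only allocate, not get_data or write.
step1 = ready({"layout"})
step2 = ready({"layout", "allocate"})
```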
By increasing use of system resources, the write transaction 600 reduces the amount of time required to store the data block in the distributed file system. Rather than executing the commands 604, 606, 608, 610 serially, the distributed file system executes the commands 604, 606, 608, 610 as system resources and usable data becomes available. For example, when system resources are available, the system executes the get data command 604 and the allocate command 606 in parallel. By executing commands with satisfied dependencies while other commands are also executing, the system decreases latency.
When sufficient system resources are not available, the system executes the commands according to predetermined priorities. If, for example, the get data command 604 takes longer to execute than the allocate command 606, the get data command 604 may be assigned a higher priority than the allocate command 606 such that the system starts the get data command 604 before the allocate command 606. Then, as system resources become available, the system executes the allocate command 606. Depending on when system resources become available in relation to starting the get data command 604, the allocate command 606 may end before the get data command 604, which would also allow the system to execute the set block address command 610 in parallel with the get data command 604 and/or the write command 608. Thus, assigning relative priorities to the commands 604, 606, 608, 610 increases resource usage and decreases latency.
B. Mirror Transaction
The mirror transaction 700 begins with a layout procedure 702 that specifies a particular data block and determines the first node and the second node where copies of the specified data block will be written. The mirror transaction 700 includes a get data command 704, an allocate first node command 706, an allocate second node command 708, a write to first node command 710, a write to second node command 712 and a set block address command 714.
The get data command 704, the allocate first node command 706 and the allocate second node command 708 depend on information provided by the layout procedure 702 such as the identity of the specified data block and the identities of the first node and the second node. The get data command 704 creates a temporary buffer and stores the specified data block therein. The allocate first node command 706 allocates space in the first node for the specified data block. The allocate second node command 708 allocates space in the second node for the specified data block.
The write to first node command 710 writes the data block stored by the get data command 704 to a memory location in the first node allocated by the allocate first node command 706. Thus, the write to first node command 710 depends on information from the get data command 704 and the allocate first node command 706. Similarly, the write to second node command 712 writes the data block stored by the get data command 704 to a memory location in the second node allocated by the allocate second node command 708. Thus, the write to second node command 712 depends on information from the get data command 704 and the allocate second node command 708. Because the same data is being stored on two nodes, only one get data command is needed.
The set block address command 714 stores an address corresponding to the memory location in the first node and an address corresponding to the memory location in the second node to an inode describing a file corresponding to the data block. Thus, the set block address command 714 depends on information from the allocate first node command 706 and the allocate second node command 708.
After the system executes the write to first node command 710, the write to second node command 712, and the set block address command 714, the mirror transaction 700 ends with a commit protocol 716. In the commit protocol 716, the first node and the second node agree to commit to the mirror transaction 700 or to abort the mirror transaction 700 to maintain atomicity.
The mirror transaction 700 increases system resource usage and decreases latency by executing commands in parallel. For example, the system can execute the get data command 704, the allocate first node command 706, and the allocate second node command 708 in parallel when sufficient system resources are available. Similarly, the system can execute the write to first node command 710 and the write to second node command 712 in parallel. An artisan will recognize that the system may also execute other commands in parallel including, for example, executing the set block address command 714 in parallel with the write to first node command 710, the write to second node command 712, or both. Thus, the amount of time required to write a mirrored data block in a distributed file system is reduced.
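The mirror transaction's dependency structure can be sketched with ordinary thread-pool futures. The node functions below are hypothetical stand-ins for the real node protocol, used only to show which commands may overlap:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the node protocol; each returns when done.
def get_data(block_id):        return f"data:{block_id}"
def allocate(node):            return f"{node}:addr"
def write(node, addr, data):   return (node, addr, data)
def set_block_address(*addrs): return list(addrs)

def mirror_transaction(block_id, node1, node2):
    with ThreadPoolExecutor() as pool:
        # The three commands that depend only on the layout start in parallel.
        data = pool.submit(get_data, block_id)
        a1 = pool.submit(allocate, node1)
        a2 = pool.submit(allocate, node2)
        # Each write starts once its own inputs are ready; the two writes
        # then run in parallel with each other.
        w1 = pool.submit(write, node1, a1.result(), data.result())
        w2 = pool.submit(write, node2, a2.result(), data.result())
        addrs = set_block_address(a1.result(), a2.result())
        w1.result(); w2.result()   # a commit protocol would follow here
    return addrs
```

Note that only one get data call feeds both writes, matching the observation above that a single get data command suffices when the same data is mirrored to two nodes.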
C. Mirror Recovery Transaction
In this example, the second node fails and the information stored therein is lost. Thus, the second data block D2 and the first mirror data block M1 are lost. Since copies were made, a user can continue to access all of the information. In other words, the first data block D1 and the second mirror data block M2 comprise copies of the lost information. However, to maintain the mirrored protection scheme, the system copies the first data block D1 to the fourth node as a new data block Q1 and copies the second mirror data block M2 to the first node as a new data block Q2. An artisan will recognize from the disclosure herein that the new data blocks Q1, Q2 can be copied to other nodes as long as copies of information are not stored on the same node as the information itself.
Once the system completes the layout procedure 902, the system can execute a read D1 command 904, an allocate Q1 command 906, a read M2 command 910 and an allocate Q2 command 912. The read D1 command 904 reads the first data block D1 from the first node and stores it in a temporary buffer in the first node. In other embodiments, read commands may store the data blocks in their correct location in the cache hierarchy. Later, the data may be flushed so as not to pollute the cache, or may be left in the cache. The allocate Q1 command 906 allocates space in a memory location in the fourth node where the new data block Q1 will be stored. The read M2 command 910 reads the second mirror data M2 from the third node and stores it in a temporary buffer in the third node. The allocate Q2 command 912 allocates space in a memory location in the first node where the new data block Q2 will be stored.
After executing the read D1 command 904 and the allocate Q1 command 906, the system executes a write Q1 command 918. The write Q1 command 918 writes the copy of the first data block D1 (i.e., the information read by the read D1 command 904 and stored in the temporary buffer in the first node) to the memory location in the fourth node allocated by the allocate Q1 command 906. In one embodiment, the system executes a transfer command (not shown) to move the copied first data block D1 from the temporary buffer or cache location in the first node to a temporary buffer or cache location in the fourth node before writing the copy to the memory location in the fourth node as Q1. In other embodiments, the system may include a cache for remote data and a cache for local data. When data is moved from a remote location to a local location, the data may be moved into the local cache.
After executing the read M2 command 910 and the allocate Q2 command 912, the system executes a write Q2 command 920. The write Q2 command 920 writes the copy of the second mirror data block M2 (i.e., the information read by the read M2 command 910) to the memory location in the first node allocated by the allocate Q2 command 912. As discussed above, in one embodiment, the system executes a transfer command (not shown) to move the copied second mirror data block M2 from the temporary buffer or cache location in the third node to a temporary buffer or cache location in the first node before writing the copy to the memory location in the first node as Q2.
After executing the allocate Q1 command 906 and the allocate Q2 command 912, the system executes a set block addresses command 922. The set block addresses command 922 stores an address corresponding to the allocated memory location in the fourth node and an address corresponding to the allocated memory location in the first node to a metadata data structure or an inode describing the file.
After executing the write Q1 command 918, the write Q2 command 920, and the set block addresses command 922, the mirror recovery transaction 900 ends with a commit protocol 930. In the commit protocol 930, the first node and the fourth node agree to commit to the mirror recovery transaction 900 or to abort the mirror recovery transaction 900 to maintain atomicity.
If sufficient system resources are available, the system can execute the read D1 command 904, the allocate Q1 command 906, the read M2 command 910, and the allocate Q2 command 912 in parallel. Other commands such as the write Q1 command 918, the write Q2 command 920, and the set block addresses command 922 can also be executed in parallel. Thus, system resource usage is increased and delay that would be caused by sequential execution is reduced.
D. Parity Write Transaction
For illustrative purposes, the data blocks D1, D2, D3 are written to different nodes and (as discussed below) correspond to the same block of parity information. However, data blocks in some embodiments are stored contiguously on the same node to reduce the amount of time it takes to complete a write transaction. For example, a file comprising thirty-two data blocks may be written using a 2+1 parity scheme by writing the first sixteen data blocks to a first memory device and the next sixteen data blocks to a second memory device. Then, sixteen blocks of parity information can be written to a third memory device. Each block of parity information corresponds to two data blocks, one written on the first memory device and the other written on the second memory device. For example, the first data block stored on the first memory device and the seventeenth data block stored on the second memory device may be XORed to create a parity block stored on the third memory device.
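The XOR relationship between paired data blocks and their parity block, as in the 2+1 scheme just described, can be sketched as follows. The two-byte blocks are illustrative values chosen for the example:

```python
def parity_block(*blocks):
    """XOR corresponding bytes of equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# One parity block for the first block on the first device and the
# seventeenth block on the second device (two-byte blocks, illustrative):
d1, d17 = b"\x0f\x0f", b"\x55\x55"
p = parity_block(d1, d17)               # 0x0f ^ 0x55 == 0x5a
# XOR with the parity block recovers either data block from the other:
assert parity_block(p, d17) == d1
```

Because XOR is its own inverse, the same operation both generates the parity block and, given the parity block and one surviving data block, reconstructs the other.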
Returning to
The parity write transaction 1100 includes a get D1 command 1110, an allocate D1 command 1112, and a write D1 command 1114. Once the system executes the layout procedure 1102, the system executes the get D1 command 1110 to retrieve the first data block D1 from the buffer 1000, and the allocate D1 command 1112 to allocate space in a memory location in the first node for the first data block D1. After the system executes the get D1 command 1110 and the allocate D1 command 1112, the system executes the write D1 command 1114 to write the first data block D1 to the memory location in the first node.
The parity write transaction 1100 also includes a get D2 command 1120, an allocate D2 command 1122, and a write D2 command 1124. Once the system executes the layout command 1102, the system executes the get D2 command 1120 to retrieve the second data block D2 from the buffer 1000, and the allocate D2 command 1122 to allocate space in a memory location in the second node for the second data block D2. After the system executes the get D2 command 1120 and the allocate D2 command 1122, the system executes the write D2 command 1124 to write the second data block D2 to the memory location in the second node.
The parity write transaction 1100 also includes a get D3 command 1130, an allocate D3 command 1132, and a write D3 command 1134. Once the system executes the layout command 1102, the system executes the get D3 command 1130 to retrieve the third data block D3 from the buffer 1000, and the allocate D3 command 1132 to allocate space in a memory location in the third node for the third data block D3. After the system executes the get D3 command 1130 and the allocate D3 command 1132, the system executes the write D3 command 1134 to write the third data block D3 to the memory location in the third node.
The parity write transaction 1100 further includes a generate parity command 1140, an allocate P command 1142 and a write P command 1144. After the system executes the get D1 command 1110, the get D2 command 1120 and the get D3 command 1130, the system executes the generate parity command 1140. The generate parity command 1140 generates the parity data P, creates a temporary buffer and stores the parity data P therein. As discussed above, in one embodiment the generate parity command 1140 generates the parity data P by performing an XOR operation on the first data block D1, the second data block D2, and the third data block D3.
Once the layout command 1102 is complete, the system executes the allocate P command 1142 to allocate space in a memory location in the fourth node for the parity data P. After executing the generate parity command 1140 and the allocate P command 1142, the system executes the write P command 1144 to write the parity data P to the memory location in the fourth node.
Once the allocate D1 command 1112, the allocate D2 command 1122, the allocate D3 command 1132, and the allocate P command 1142 execute to completion, the system executes a set block addresses command 1150. The set block addresses command 1150 stores addresses corresponding to the memory locations allocated in the first node, the second node, the third node, and the fourth node to a metadata data structure or an inode describing the file corresponding to the data blocks D1, D2, D3.
After the write D1 command 1114, the write D2 command 1124, the write D3 command 1134, the write P command 1144, and the set block addresses command 1150 execute to completion, the parity write transaction 1100 ends with a commit protocol 1160. In the commit protocol 1160, the first node, second node, third node, and fourth node agree to commit or abort the parity write transaction 1100 to maintain atomicity. As with the other examples discussed above, the parity write transaction 1100 increases system resource overlap and reduces latency by executing a plurality of commands in parallel. For example, the first node, the second node, the third node, the fourth node, or a combination of the foregoing can each be executing commands at the same time rather than waiting while one command is executed at a time.
E. Restripe Transaction
The 3+1 parity scheme includes a first 3+1 parity group 1210 and a second 3+1 parity group 1212. The first 3+1 parity group 1210 includes a first data block D1 stored on a first node (i.e., “Node 1”), a second data block D2 stored in a second node (i.e., “Node 2”), a third data block D3 stored in a third node (i.e., “Node 3”), and first parity data P1 stored in a fourth node (i.e., “Node 4”). In one embodiment, the first parity data P1 is generated by performing an XOR operation on the first data block D1, the second data block D2, and the third data block D3.
The second 3+1 parity group 1212 includes a fourth data block D4 stored on the second node, a fifth data block D5 stored on a fifth node (i.e., “Node 5”), a sixth data block D6 stored on the fourth node, and a second parity data P2 stored on the first node. In one embodiment, the second parity data P2 is generated by performing an XOR operation on the fourth data block D4, the fifth data block D5, and the sixth data block D6.
In this example, the second node fails resulting in the loss of the second data block D2 and the fourth data block D4. Upon detecting failure of the second node, the system recovers the second data block D2 by performing an XOR operation on the first data block D1, the third data block D3, and the first parity data P1. Similarly, the system recovers the fourth data block D4 by performing an XOR operation on the fifth data block D5, the sixth data block D6, and the second parity data P2. Since the first 3+1 parity group 1210 and the second 3+1 parity group 1212 both used the failed second node, the system converts from a 3+1 parity scheme to a 2+1 parity scheme to help preserve the ability to recover from node failure.
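The XOR-based recovery just described can be sketched directly. The one-byte blocks below are illustrative values; the point is that XORing the surviving members of a 3+1 group reproduces the lost block:

```python
def xor_blocks(*blocks):
    """XOR equal-length blocks byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# 3+1 group: P1 = D1 ^ D2 ^ D3 (one-byte blocks, illustrative).
d1, d2, d3 = b"\x01", b"\x02", b"\x04"
p1 = xor_blocks(d1, d2, d3)
# If the node holding D2 fails, XORing the survivors recovers it,
# since D1 ^ D3 ^ (D1 ^ D2 ^ D3) == D2.
recovered_d2 = xor_blocks(d1, d3, p1)
assert recovered_d2 == d2
```

The same identity recovers D4 from D5, D6, and P2 in the second parity group.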
The 2+1 parity scheme includes a first 2+1 parity group 1220, a second 2+1 parity group 1222, and a third 2+1 parity group 1224. The first 2+1 parity group 1220 includes the first data block D1 stored on the first node, the recovered second data block D2 stored on the third node, and third parity data P3 stored on the fourth node. The system generates the third parity data P3 by performing an XOR operation on the first data block D1 and the second data block D2.
The second 2+1 parity group 1222 includes the third data block D3 stored on the third node, the recovered fourth data block D4 stored on the fifth node, and fourth parity data P4 stored on the first node. The system generates the fourth parity data P4 by performing an XOR operation on the third data block D3 and the fourth data block D4. The third 2+1 parity group 1224 includes the fifth data block D5 stored on the fifth node, the sixth data block D6 stored on the sixth node, and fifth parity data P5 stored on the third node. The system generates the fifth parity data P5 by performing an XOR operation on the fifth data block D5 and the sixth data block D6.
Once the system performs the layout procedure 1302, the system creates the first 2+1 parity group 1220, the second 2+1 parity group 1222, and the third 2+1 parity group 1224.
1. Generating the First 2+1 Parity Group
To create the first 2+1 parity group 1220, the restripe transaction 1300 reconstructs the second data block D2 and generates the third parity data P3. The restripe transaction 1300 includes a read D1 command 1310 that reads the first data block D1 from the first node, a read P1 command 1312 that reads the first parity data P1 from the fourth node, and a read D3 command 1314 that reads the third data block D3 from the third node.
The restripe transaction 1300 includes a reconstruct D2 command 1316 that reconstructs the second data block D2 that was lost when the second node failed. After the system executes the read D1 command 1310, the read P1 command 1312, and the read D3 command 1314, the reconstruct D2 command 1316 performs an XOR operation on the first data block D1, the third data block D3, and the first parity data P1 to reconstruct the second data block D2. The reconstruct D2 command 1316 stores the reconstructed second data block D2 in a temporary buffer. If the restripe transaction 1300 were keeping the previous parity (for example, keeping the 3+1 parity scheme), after the second data block D2 had been reconstructed, the second data block D2 could be stored in a new location without having to recalculate any new parity. In this example, however, the restripe transaction 1300 recovers from a failed node and also performs a conversion from the 3+1 parity scheme to the 2+1 parity scheme; thus, new parity data is generated.
The restripe transaction 1300 also includes an allocate D2 command 1318. The allocate D2 command 1318 allocates space in the third node for the second data block D2. After executing the allocate D2 command 1318 and the reconstruct D2 command 1316, the system executes a write D2 command 1320 that writes the reconstructed second data block D2 in the allocated space in the third node.
After the reconstruct D2 command 1316 executes, the system also executes a generate P3 command 1322 that creates the third parity data P3 by performing an XOR operation on the first data block D1 and the recovered second data block D2. The restripe transaction 1300 includes an allocate P3 command 1324 that allocates space in the fourth node for the third parity data P3. Once the generate P3 command 1322 and the allocate P3 command 1324 are complete, the system executes a write P3 command 1326 that writes the third parity data to the fourth node.
2. Generating the Second 2+1 Parity Group
To create the second 2+1 parity group 1222, the restripe transaction 1300 reconstructs the fourth data block D4 and generates the fourth parity data P4. The restripe transaction 1300 includes a read P2 command 1330 that reads the second parity data P2 from the first node, a read D5 command 1332 that reads the fifth data block D5 from the fifth node, and a read D6 command 1334 that reads the sixth data block D6 from the sixth node.
The restripe transaction 1300 includes a reconstruct D4 command 1336. After the system executes the read P2 command 1330, the read D5 command 1332, and the read D6 command 1334, the reconstruct D4 command 1336 performs an XOR operation on the second parity data P2, the fifth data block D5, and the sixth data block D6 to reconstruct the fourth data block D4. The reconstruct D4 command 1336 stores the reconstructed fourth data block D4 in a temporary buffer.
The restripe transaction 1300 also includes an allocate D4 command 1338. The allocate D4 command 1338 allocates space in the fifth node for the fourth data block D4. Once the reconstruct D4 command 1336 and the allocate D4 command 1338 are complete, the system executes a write D4 command 1340 that writes the reconstructed fourth data block D4 in the allocated space in the fifth node.
After the read D3 command 1314 and the reconstruct D4 command 1336 execute, the system also executes a generate P4 command 1342 that creates the fourth parity data P4 by performing an XOR operation on the third data block D3 and the recovered fourth data block D4. The restripe transaction 1300 includes an allocate P4 command 1344 that allocates space in the first node for the fourth parity data P4. Once the generate P4 command 1342 and the allocate P4 command 1344 are complete, the system executes a write P4 command 1346 that writes the fourth parity data P4 to the first node.
3. Generating the Third 2+1 Parity Group
To create the third 2+1 parity group 1224, the restripe transaction 1300 computes the fifth parity data P5 corresponding to the fifth data block D5 and the sixth data block D6. The restripe transaction 1300 includes an allocate P5 command 1350, a generate P5 command 1352, and a write P5 command 1354. The allocate P5 command 1350 allocates space in the third node for the fifth parity block P5.
Once the read D5 command 1332 and the read D6 command 1334 are complete, the system executes the generate P5 command 1352. The generate P5 command 1352 creates the fifth parity data P5 by performing an XOR operation on the fifth data block D5 and the sixth data block D6. After executing the allocate P5 command 1350 and the generate P5 command 1352, the system executes a write P5 command 1354 that writes the fifth parity data P5 to the space allocated in the third node.
4. Ending the Restripe Transaction
After executing the allocate D2 command 1318, the allocate P3 command 1324, the allocate D4 command 1338, the allocate P4 command 1344, and the allocate P5 command 1350, the system executes a set block addresses command 1370. The set block addresses command 1370 stores addresses corresponding to the memory locations allocated in the first node, the third node, the fourth node, and the fifth node during the restripe transaction 1300. The addresses are stored in a metadata data structure or an inode describing the file corresponding to the data blocks D1, D2, D3, D4, D5, D6.
After the write D2 command 1320, the write P3 command 1326, the write D4 command 1340, the write P4 command 1346, the write P5 command 1354, and the set block addresses command 1370 execute, the restripe transaction 1300 ends with a commit protocol 1380. In the commit protocol 1380, the first node, third node, fourth node, and fifth node agree to commit or abort the restripe transaction 1300 to maintain atomicity.
As with the other examples discussed above, the restripe transaction 1300 increases system resource overlap and reduces latency by executing a plurality of commands in parallel. For example, the first node, the third node, the fourth node, the fifth node, or a combination of the foregoing can each be executing commands at the same time rather than waiting while the distributed file system executes one command at a time.
It is noted that the example transactions were provided to illustrate the invention and that other transactions, commands, dependencies and/or priorities may be used.
While certain embodiments of the inventions have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
This Appendix forms a part of the patent application entitled “DISTRIBUTED SYSTEM WITH ASYNCHRONOUS EXECUTION SYSTEMS AND METHODS.”
This Appendix includes a list of exemplary commands and pseudocode for an execution engine that reduces latency in a distributed file system by executing commands as sufficient information and system resources become available. It should be recognized, however, that the list of exemplary commands and pseudocode is not meant to limit the scope of the invention, but only to provide details for a specific embodiment. This Appendix includes the Appendices incorporated by reference above from U.S. Provisional Application No. 60/623,846, filed Oct. 29, 2004 entitled “Distributed System with Asynchronous Execution Systems and Methods,” and U.S. Provisional Application No. 60/628,527, filed Nov. 15, 2004 entitled “Distributed System with Asynchronous Execution Systems and Methods,” which are hereby incorporated by reference herein in their entirety.
Exemplary Commands
Each command is a (verb, waiters, priority) tuple. Some exemplary verbs are listed below:
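One possible realization of the (verb, waiters, priority) tuple can be sketched as a minimal Python structure. The field layout follows the tuple described above; the dependency counter, the `complete` helper, and all specific names are assumptions made for this illustration:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(order=True)
class Command:
    priority: int                                  # lower value runs first
    verb: Callable = field(compare=False)          # the operation itself
    waiters: List["Command"] = field(compare=False, default_factory=list)
    unmet: int = field(compare=False, default=0)   # unsatisfied dependencies

def complete(cmd, ready):
    """On completion, release each waiter whose dependencies are all met."""
    for w in cmd.waiters:
        w.unmet -= 1
        if w.unmet == 0:
            ready.append(w)

# A write that waits on a get-data command and an allocate command:
write = Command(0, verb=lambda: "write", unmet=2)
get_data = Command(0, verb=lambda: "get", waiters=[write])
allocate = Command(1, verb=lambda: "alloc", waiters=[write])
ready = []
complete(get_data, ready)   # write still has one unmet dependency
complete(allocate, ready)   # now write becomes runnable
assert ready == [write]
```

Ordering compares only the priority field, so runnable commands can be kept in a priority queue and dispatched as resources permit.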
The present application claims priority benefit under 35 U.S.C. §119(e) from U.S. Provisional Application No. 60/623,846, filed Oct. 29, 2004 entitled “Distributed System with Asynchronous Execution Systems and Methods,” and U.S. Provisional Application No. 60/628,527, filed Nov. 15, 2004 entitled “Distributed System with Asynchronous Execution Systems and Methods.” The present application hereby incorporates by reference herein both of the foregoing applications in their entirety. The present application relates to U.S. application Ser. No. 11/262,306, titled “Non-Blocking Commit Protocol Systems and Methods,” filed on even date herewith, which claims priority to U.S. Provisional Application No. 60/623,843, filed Oct. 29, 2004 entitled “Non-Blocking Commit Protocol Systems and Method;” and U.S. application Ser. No. 11/262,314, titled “Message Batching with Checkpoints Systems and Methods”, filed on even date herewith, which claims priority to U.S. Provisional Application No. 60/623,848, filed Oct. 29, 2004 entitled “Message Batching with Checkpoints Systems and Methods,” and U.S. Provisional Application No. 60/628,528, filed Nov. 15, 2004 entitled “Message Batching with Checkpoints Systems and Methods.” The present application hereby incorporates by reference herein all of the foregoing applications in their entirety.
20020038436 | Suzuki | Mar 2002 | A1 |
20020049778 | Bell et al. | Apr 2002 | A1 |
20020055940 | Elkan | May 2002 | A1 |
20020072974 | Pugliese et al. | Jun 2002 | A1 |
20020075870 | de Azevedo et al. | Jun 2002 | A1 |
20020078161 | Cheng | Jun 2002 | A1 |
20020078180 | Miyazawa | Jun 2002 | A1 |
20020083078 | Pardon et al. | Jun 2002 | A1 |
20020083118 | Sim | Jun 2002 | A1 |
20020087366 | Collier et al. | Jul 2002 | A1 |
20020095438 | Rising et al. | Jul 2002 | A1 |
20020107877 | Whiting et al. | Aug 2002 | A1 |
20020124137 | Ulrich et al. | Sep 2002 | A1 |
20020138559 | Ulrich et al. | Sep 2002 | A1 |
20020156840 | Ulrich et al. | Oct 2002 | A1 |
20020156891 | Ulrich et al. | Oct 2002 | A1 |
20020156973 | Ulrich et al. | Oct 2002 | A1 |
20020156974 | Ulrich et al. | Oct 2002 | A1 |
20020156975 | Ulrich et al. | Oct 2002 | A1 |
20020158900 | Hsieh et al. | Oct 2002 | A1 |
20020161846 | Ulrich et al. | Oct 2002 | A1 |
20020161850 | Ulrich et al. | Oct 2002 | A1 |
20020161973 | Ulrich et al. | Oct 2002 | A1 |
20020163889 | Yemini et al. | Nov 2002 | A1 |
20020165942 | Ulrich et al. | Nov 2002 | A1 |
20020166026 | Ulrich et al. | Nov 2002 | A1 |
20020166079 | Ulrich et al. | Nov 2002 | A1 |
20020169827 | Ulrich et al. | Nov 2002 | A1 |
20020170036 | Cobb et al. | Nov 2002 | A1 |
20020174295 | Ulrich et al. | Nov 2002 | A1 |
20020174296 | Ulrich et al. | Nov 2002 | A1 |
20020178162 | Ulrich et al. | Nov 2002 | A1 |
20020191311 | Ulrich et al. | Dec 2002 | A1 |
20020194523 | Ulrich et al. | Dec 2002 | A1 |
20020194526 | Ulrich et al. | Dec 2002 | A1 |
20020198864 | Ostermann et al. | Dec 2002 | A1 |
20030005159 | Kumhyr | Jan 2003 | A1 |
20030009511 | Giotta et al. | Jan 2003 | A1 |
20030014391 | Evans et al. | Jan 2003 | A1 |
20030033308 | Patel et al. | Feb 2003 | A1 |
20030061491 | Jaskiewicz et al. | Mar 2003 | A1 |
20030109253 | Fenton et al. | Jun 2003 | A1 |
20030120863 | Lee et al. | Jun 2003 | A1 |
20030125852 | Schade et al. | Jul 2003 | A1 |
20030126522 | English et al. | Jul 2003 | A1 |
20030131860 | Ashcraft et al. | Jul 2003 | A1 |
20030135514 | Patel et al. | Jul 2003 | A1 |
20030149750 | Franzenburg | Aug 2003 | A1 |
20030158873 | Sawdon et al. | Aug 2003 | A1 |
20030161302 | Zimmermann et al. | Aug 2003 | A1 |
20030163726 | Kidd | Aug 2003 | A1 |
20030172149 | Edsall et al. | Sep 2003 | A1 |
20030177308 | Lewalski-Brechter | Sep 2003 | A1 |
20030182312 | Chen et al. | Sep 2003 | A1 |
20030182325 | Manley et al. | Sep 2003 | A1 |
20030233385 | Srinivasa et al. | Dec 2003 | A1 |
20040003053 | Williams | Jan 2004 | A1 |
20040024731 | Cabrera et al. | Feb 2004 | A1 |
20040024963 | Talagala et al. | Feb 2004 | A1 |
20040078680 | Hu et al. | Apr 2004 | A1 |
20040078812 | Calvert | Apr 2004 | A1 |
20040117802 | Green | Jun 2004 | A1 |
20040133670 | Kaminsky et al. | Jul 2004 | A1 |
20040143647 | Cherkasova | Jul 2004 | A1 |
20040153479 | Mikesell et al. | Aug 2004 | A1 |
20040158549 | Matena et al. | Aug 2004 | A1 |
20040174798 | Riguidel et al. | Sep 2004 | A1 |
20040189682 | Troyansky et al. | Sep 2004 | A1 |
20040199734 | Rajamani et al. | Oct 2004 | A1 |
20040199812 | Earl et al. | Oct 2004 | A1 |
20040205141 | Goland | Oct 2004 | A1 |
20040230748 | Ohba | Nov 2004 | A1 |
20040240444 | Matthews et al. | Dec 2004 | A1 |
20040260673 | Hitz et al. | Dec 2004 | A1 |
20040267747 | Choi et al. | Dec 2004 | A1 |
20050010592 | Guthrie | Jan 2005 | A1 |
20050033778 | Price | Feb 2005 | A1 |
20050044197 | Lai | Feb 2005 | A1 |
20050066095 | Mullick et al. | Mar 2005 | A1 |
20050114402 | Guthrie | May 2005 | A1 |
20050114609 | Shorb | May 2005 | A1 |
20050125456 | Hara et al. | Jun 2005 | A1 |
20050131860 | Livshits | Jun 2005 | A1 |
20050131990 | Jewell | Jun 2005 | A1 |
20050138195 | Bono | Jun 2005 | A1 |
20050138252 | Gwilt | Jun 2005 | A1 |
20050171960 | Lomet | Aug 2005 | A1 |
20050171962 | Martin et al. | Aug 2005 | A1 |
20050187889 | Yasoshima | Aug 2005 | A1 |
20050188052 | Ewanchuk et al. | Aug 2005 | A1 |
20050192993 | Messinger | Sep 2005 | A1 |
20050289169 | Adya et al. | Dec 2005 | A1 |
20050289188 | Nettleton et al. | Dec 2005 | A1 |
20060004760 | Clift et al. | Jan 2006 | A1 |
20060041894 | Cheng | Feb 2006 | A1 |
20060047713 | Gornshtein et al. | Mar 2006 | A1 |
20060047925 | Perry | Mar 2006 | A1 |
20060053263 | Prahlad et al. | Mar 2006 | A1 |
20060059467 | Wong | Mar 2006 | A1 |
20060074922 | Nishimura | Apr 2006 | A1 |
20060083177 | Iyer et al. | Apr 2006 | A1 |
20060095438 | Fachan et al. | May 2006 | A1 |
20060101062 | Godman et al. | May 2006 | A1 |
20060129584 | Hoang et al. | Jun 2006 | A1 |
20060129631 | Na et al. | Jun 2006 | A1 |
20060129983 | Feng | Jun 2006 | A1 |
20060155831 | Chandrasekaran | Jul 2006 | A1 |
20060206536 | Sawdon et al. | Sep 2006 | A1 |
20060230411 | Richter et al. | Oct 2006 | A1 |
20060277432 | Patel | Dec 2006 | A1 |
20060288161 | Cavallo | Dec 2006 | A1 |
20060294589 | Achanta et al. | Dec 2006 | A1 |
20070038887 | Witte et al. | Feb 2007 | A1 |
20070091790 | Passey et al. | Apr 2007 | A1 |
20070094269 | Mikesell et al. | Apr 2007 | A1 |
20070094277 | Fachan et al. | Apr 2007 | A1 |
20070094310 | Passey et al. | Apr 2007 | A1 |
20070094431 | Fachan | Apr 2007 | A1 |
20070094449 | Allison et al. | Apr 2007 | A1 |
20070094452 | Fachan | Apr 2007 | A1 |
20070124337 | Flam | May 2007 | A1 |
20070168351 | Fachan | Jul 2007 | A1 |
20070171919 | Godman et al. | Jul 2007 | A1 |
20070192254 | Hinkle | Aug 2007 | A1 |
20070195810 | Fachan | Aug 2007 | A1 |
20070233684 | Verma et al. | Oct 2007 | A1 |
20070233710 | Passey et al. | Oct 2007 | A1 |
20070244877 | Kempka | Oct 2007 | A1 |
20070255765 | Robinson | Nov 2007 | A1 |
20080005145 | Worrall | Jan 2008 | A1 |
20080010507 | Vingralek | Jan 2008 | A1 |
20080021907 | Patel et al. | Jan 2008 | A1 |
20080031238 | Harmelin et al. | Feb 2008 | A1 |
20080034004 | Cisler et al. | Feb 2008 | A1 |
20080044016 | Henzinger | Feb 2008 | A1 |
20080046432 | Anderson et al. | Feb 2008 | A1 |
20080046443 | Fachan et al. | Feb 2008 | A1 |
20080046444 | Fachan et al. | Feb 2008 | A1 |
20080046445 | Passey et al. | Feb 2008 | A1 |
20080046475 | Anderson et al. | Feb 2008 | A1 |
20080046476 | Anderson et al. | Feb 2008 | A1 |
20080046667 | Fachan et al. | Feb 2008 | A1 |
20080059541 | Fachan et al. | Mar 2008 | A1 |
20080059734 | Mizuno | Mar 2008 | A1 |
20080126365 | Fachan et al. | May 2008 | A1 |
20080151724 | Anderson et al. | Jun 2008 | A1 |
20080154978 | Lemar et al. | Jun 2008 | A1 |
20080155191 | Anderson et al. | Jun 2008 | A1 |
20080168304 | Flynn et al. | Jul 2008 | A1 |
20080168458 | Fachan et al. | Jul 2008 | A1 |
20080243773 | Patel et al. | Oct 2008 | A1 |
20080256103 | Fachan et al. | Oct 2008 | A1 |
20080256537 | Fachan et al. | Oct 2008 | A1 |
20080256545 | Fachan et al. | Oct 2008 | A1 |
20080294611 | Anglin et al. | Nov 2008 | A1 |
20090055399 | Lu et al. | Feb 2009 | A1 |
20090055604 | Lemar et al. | Feb 2009 | A1 |
20090055607 | Schack et al. | Feb 2009 | A1 |
20090125563 | Wong et al. | May 2009 | A1 |
20090210880 | Fachan et al. | Aug 2009 | A1 |
20090248756 | Akidau et al. | Oct 2009 | A1 |
20090248765 | Akidau et al. | Oct 2009 | A1 |
20090248975 | Daud et al. | Oct 2009 | A1 |
20090249013 | Daud et al. | Oct 2009 | A1 |
20090252066 | Passey et al. | Oct 2009 | A1 |
20090327218 | Passey et al. | Dec 2009 | A1 |
20100011011 | Lemar et al. | Jan 2010 | A1 |
20100016155 | Fachan | Jan 2010 | A1 |
20100122057 | Strumpen et al. | May 2010 | A1 |
20100161556 | Anderson et al. | Jun 2010 | A1 |
20100161557 | Anderson et al. | Jun 2010 | A1 |
20100185592 | Kryger | Jul 2010 | A1 |
20100223235 | Fachan | Sep 2010 | A1 |
20100235413 | Patel | Sep 2010 | A1 |
20100241632 | Lemar et al. | Sep 2010 | A1 |
20100306786 | Passey | Dec 2010 | A1 |
20110016353 | Mikesell | Jan 2011 | A1 |
20110022790 | Fachan | Jan 2011 | A1 |
20110035412 | Fachan | Feb 2011 | A1 |
20110044209 | Fachan | Feb 2011 | A1 |
20110060779 | Lemar et al. | Mar 2011 | A1 |
20110087635 | Fachan | Apr 2011 | A1 |
20110087928 | Daud et al. | Apr 2011 | A1 |
Number | Date | Country |
---|---|---|
0774723 | May 1997 | EP |
1421520 | May 2004 | EP |
1563411 | Aug 2005 | EP |
2284735 | Feb 2011 | EP |
2299375 | Mar 2011 | EP |
04096841 | Mar 1992 | JP |
2006-506741 | Jun 2004 | JP |
4464279 | May 2010 | JP |
4504677 | Jul 2010 | JP |
WO 9429796 | Dec 1994 | WO |
WO 0057315 | Sep 2000 | WO |
WO 0114991 | Mar 2001 | WO |
WO 0133829 | May 2001 | WO |
WO 02061737 | Aug 2002 | WO |
WO 03012699 | Feb 2003 | WO |
WO 2004046971 | Jun 2004 | WO |
WO 2008021527 | Feb 2008 | WO |
WO 2008021528 | Feb 2008 | WO |
WO 2008127947 | Oct 2008 | WO |
Number | Date | Country | |
---|---|---|
20060101062 A1 | May 2006 | US |
Number | Date | Country | |
---|---|---|
60623846 | Oct 2004 | US |
60628527 | Nov 2004 | US |