Systems and methods for managing unavailable storage devices

Information

  • Patent Grant
    8286029
  • Patent Number
    8,286,029
  • Date Filed
    Thursday, December 21, 2006
  • Date Issued
    Tuesday, October 9, 2012
Abstract
In some embodiments, storage devices, such as a storage drive or a storage node, in an array of storage devices may be reintroduced into the array of storage devices after a period of temporary unavailability without fully rebuilding the entire previously unavailable storage device.
Description
LIMITED COPYRIGHT AUTHORIZATION

A portion of the disclosure of this patent document includes material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


FIELD OF THE INVENTION

This invention relates generally to storage devices, and more specifically to managing storage devices in a computer system.


BACKGROUND

In recent years, the amount of data stored digitally on computer storage devices has increased dramatically. To accommodate increasing data storage needs, larger capacity storage devices have been developed. Typically, these storage devices are single magnetic storage disks. Unfortunately, multiple concurrent access requests to a single storage drive can slow data reads and writes on a single-drive system. One response to this problem has been to connect a plurality of storage devices to form a storage node. On storage nodes, data may be distributed over several storage disks. For example, a read operation for a file distributed over several storage drives may be faster than for a file located on a single drive because a distributed system permits parallel read requests for smaller portions of the file. Another response has been to connect a plurality of storage nodes to form a storage system of even larger capacity, referred to as a “cluster.”


One problem associated with distributed systems is drive failure and data loss. Though read and write access times tend to decrease as the number of storage devices in a system increases, the chances of a storage device failure also increase as the number of storage devices increases. Thus, a distributed system is vulnerable to both temporary and permanent unavailability of storage devices.


When a storage device, for example, either a storage drive or a storage node, becomes unavailable, storage systems have to remove the storage device from the system and fully reconstruct it. As storage devices become increasingly larger, the amount of time required to fully reconstruct an unavailable storage device increases correspondingly, which affects response time and further exacerbates the risk of permanent data loss due to multiple device failures.


SUMMARY OF THE INVENTION

Because of the foregoing challenges and limitations, there is a need to provide a system that manages a set of storage devices even if one or more of the storage devices becomes unavailable.


In one embodiment, a method for managing unavailable storage devices comprises detecting that a troubled storage device is unavailable, wherein a data set is stored on the troubled storage device; responding to a read or write request for at least a portion of the data set while the troubled storage device is unavailable; and detecting that the troubled storage device is available and providing access to the data set stored on the troubled storage device without full reconstruction of the troubled storage device.


In another embodiment, a storage system for managing unavailable storage devices comprises a first storage device configured to respond to a read or write request for at least a portion of a data set after the first storage device returns from an unavailable state without full reconstruction of the first storage device. In one embodiment, the storage system further comprises at least one operational storage device configured to store a representation of at least a portion of the data set and provide access to the representation of at least a portion of the data set if the first storage device is unavailable.


In a further embodiment, a storage system for managing storage devices comprises a plurality of storage devices configured to store data distributed among at least two of the plurality of storage devices. In one embodiment, the storage system is further configured such that if one or more of the plurality of storage devices becomes unavailable and then becomes available again, the data is available after the one or more of the plurality of storage devices becomes available again.


For purposes of this summary, certain aspects, advantages, and novel features of the invention are described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment of the invention. Thus, for example, those skilled in the art will recognize that the invention may be embodied or carried out in a manner that achieves one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates one embodiment of a storage device.



FIGS. 2A, 2B, 2C, 2D, and 2E illustrate one embodiment of an example scenario where one of a set of drives goes down and then returns.



FIGS. 3A, 3B, and 3C illustrate one embodiment of an example scenario of a write journal when a drive goes down and then returns.



FIG. 4 illustrates one embodiment of a flowchart of operations for a read.



FIG. 5 illustrates one embodiment of a flowchart of operations for a write.



FIG. 6 illustrates one embodiment of a flowchart of operations for a journal flush.



FIG. 7 illustrates one embodiment of connections of storage nodes in one embodiment of a distributed file system.



FIG. 8A illustrates one embodiment of data stored in storage nodes in one embodiment of a distributed system.



FIG. 8B illustrates one embodiment of data stored in storage nodes in one embodiment of a distributed system wherein two storage drives are unavailable.



FIG. 9 illustrates one embodiment of a map data structure for storing locations of file data.



FIG. 10 illustrates one embodiment of a map data structure for storing data regarding file metadata.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Systems and methods which represent exemplary embodiments of the invention will now be described with reference to the drawings. Variations to the systems and methods which represent other embodiments will also be described. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the systems and methods described herein.


I. Overview


In one embodiment, the storage system provides access to data stored on a set of storage devices even when one of the storage devices is unavailable. While the storage device is unavailable, the storage system reconstructs the requested data and stores data targeted for the unavailable drive in a new location. Even after a period of unavailability, the storage device does not have to be fully reconstructed and replaced, but can return to the storage system once it becomes available. Thus, in such embodiments, access to data on the storage devices continues without significant interruption.


As used herein, the term “storage device” generally refers to a device configured to store data, including, for example, a storage drive, such as a single hard drive in an array of hard drives or in a storage node, or a storage node in an array of storage nodes, where each of the storage nodes may comprise multiple hard drives.


In one embodiment, a user or client device communicates with a storage system comprising one or more storage devices. In one embodiment, sets of data stored on the storage system (generically referred to herein as “data sets” or “files”) are striped, or distributed, across two or more of the storage devices, such as across two or more storage drives or two or more storage nodes. In one embodiment, files are divided into stripes of two or more data blocks and striping involves storing data blocks of a file on two or more storage devices. For example, if a file comprises two data blocks, a first data block of the file may be stored on a first storage device and a second data block of the file may be stored on a second storage device. A map data structure stores information on where the data is stored.
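

For purposes of illustration, a simplified C sketch of one possible form of such a map entry is set forth below. The struct layout, field names, and fixed-size array are illustrative assumptions only and do not correspond to any particular embodiment; an actual embodiment may use a more elaborate structure, such as the b-tree map described with respect to FIG. 9.

#include <stdio.h>

/* Hypothetical map entry: records where one block of a data set is stored. */
struct block_location {
    int block_index;   /* index of the block within the data set */
    int device_id;     /* storage device (drive or node) holding the block */
    int offset;        /* location of the block on that device */
};

int main(void)
{
    /* A two-block file striped across two devices, as in the example above. */
    struct block_location map[] = {
        { 0, 0, 0 },   /* first data block on the first storage device */
        { 1, 1, 0 },   /* second data block on the second storage device */
    };

    for (int i = 0; i < 2; i++)
        printf("block %d -> device %d, offset %d\n",
               map[i].block_index, map[i].device_id, map[i].offset);
    return 0;
}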


In addition to storing the data blocks of files on the storage devices, some embodiments may also store data protection data associated with the data. One example of data protection data is parity data; however, there are many other types of data protection data, as discussed in further detail below. Those of ordinary skill in the art will recognize that parity data can be used to reconstruct portions of data that have been corrupted or are otherwise unavailable. In one embodiment, parity data is calculated by XORing two or more bits of data for which parity protection is desired. For example, if four data bits store the values 0110, the parity bit is equal to 0 XOR 1 XOR 1 XOR 0. Thus, the parity bit is 0. This parity bit may then be stored on a storage device and, if any one of the data bits later becomes lost or unavailable, the lost or unavailable bit can be reconstructed by XORing the remaining bits with the parity bit. With reference to the above-noted data block 0110, if one of the bits is unavailable (01X0), then the missing bit can be reconstructed using the logical equation 0 (parity bit) XOR 0 XOR 1 XOR 0, which determines that the unavailable bit is 1. In other embodiments, other parity, error correction, accuracy, or data protection schemes may be used. The map data structure also stores information on where the data protection data is stored.
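

For purposes of illustration, the same XOR arithmetic applied at the block level may be sketched in C as follows. The block size, block contents, and function name are illustrative assumptions only; the sketch computes a parity block as the XOR of two data blocks and then rebuilds one of the data blocks from the parity block and the surviving data block.

#include <stdio.h>
#include <stddef.h>

#define BLOCK_SIZE 4   /* illustrative block size */

/* XOR-accumulate one block into an accumulator block. */
static void xor_into(unsigned char *acc, const unsigned char *block)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        acc[i] ^= block[i];
}

int main(void)
{
    unsigned char d0[BLOCK_SIZE] = { 0, 1, 1, 0 };   /* the 0110 example, one bit per byte */
    unsigned char d1[BLOCK_SIZE] = { 1, 0, 0, 1 };
    unsigned char parity[BLOCK_SIZE] = { 0 };

    /* Compute parity = d0 XOR d1 and store it on a separate device. */
    xor_into(parity, d0);
    xor_into(parity, d1);

    /* Suppose d1 becomes unavailable: rebuild it as parity XOR d0. */
    unsigned char rebuilt[BLOCK_SIZE] = { 0 };
    xor_into(rebuilt, parity);
    xor_into(rebuilt, d0);

    for (size_t i = 0; i < BLOCK_SIZE; i++)
        printf("%u", (unsigned)rebuilt[i]);          /* prints 1001, matching the lost d1 */
    printf("\n");
    return 0;
}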


In one embodiment, if one of the storage devices is unavailable, the storage system may use the data protection data to reconstruct the missing data. In addition, the storage system may use the map data structure to track the current locations of write data intended for an unavailable storage device but stored on another storage device.


II. Storage System



FIG. 1 illustrates one embodiment of a storage node 100 used to store data on a set of storage devices. In the embodiment of FIG. 1, the storage node 100 comprises multiple storage devices 130, 140, 150, 160 that are each coupled to a bus 170. An input/output interface 120 is coupled to the bus 170 and is configured to receive and transmit data to and from the storage node 100. The storage node 100 further comprises a controller 110 that is coupled to the bus 170 so that the controller is in communication with other components in the storage node 100. In one embodiment, the controller 110 manages the operations of the devices 130, 140, 150, 160 as read and write requests are received, such as, for example, from a user.


A. Storage Devices


In the exemplary storage node 100, each of the storage devices 130, 140, 150, 160 comprises a hard drive. However, it is recognized that the storage devices 130, 140, 150, 160 may include one or more drives, nodes, disks, clusters, objects, drive partitions, virtual volumes, volumes, drive slices, containers, and so forth. Moreover, the storage devices may be implemented using a variety of products that are well known in the art, such as, for example, ATA100 devices, SCSI devices, and so forth. In addition, the storage devices may all be the same size or may be of two or more different sizes.


B. Request Module


In one embodiment, the storage node 100 also includes a request module 180 for handling requests to read data from the storage devices 130, 140, 150, 160 as well as requests to write data to the storage devices 130, 140, 150, 160. The storage node 100 may also include other modules, such as a reconstruction module for starting the reconstruction of one or more unavailable and/or failed storage devices 130, 140, 150, 160. The storage node 100 may also include a restriper module that scans an unavailable storage device, identifies data stored on the unavailable storage device, and begins moving the data to one or more available storage devices. The storage node 100 may also include a collector module that frees data that is no longer referenced due to writes while a drive was unavailable.


In general, the word module, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, C or C++. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Moreover, although in some embodiments a module may be separately compiled, in other embodiments a module may represent a subset of instructions of a separately compiled program, and may not have an interface available to other logical program units.


C. Group Protocol Module


In some embodiments, the storage node 100 also includes a group protocol module 195. The group protocol module 195 maintains information regarding the storage devices that are available to the storage system for read and/or write access. In one embodiment, the group protocol module 195 communicates with the storage devices 130, 140, 150, 160 and indicates, for example, the current operational state (for example, available, unavailable, up, down, dead) and/or how much space is available on each device. In one embodiment, the group protocol module 195 comprises information regarding the availability of devices on other storage nodes. In one embodiment, when a device becomes unavailable, the group protocol module 195 notifies other storage nodes in the storage system 200. Similarly, when a previously unavailable device becomes available again, the group protocol module 195 communicates this information to the other nodes in the storage system.


D. Journal


In some embodiments, the storage node 100 also includes a journal 190, which may comprise one or more memory devices, such as NVRAM, flash ROM, or EEPROM, and/or a hard drive. The journal 190 is configured to store data that is intended to be stored on a device, and may or may not store other data. In an advantageous embodiment, the journal 190 is persistent such that it does not lose data when power to the storage node 100 is lost or interrupted. Thus, in the event of failure of the node 100 and/or one or more of the storage devices 130, 140, 150, 160, recovery actions can be taken when power is regained or the storage node 100 reboots to ensure that transactions that were in progress prior to the failure are either completed or aborted. If an unavailable device does not return to service, for example, because it is permanently unavailable, the information stored in the journal 190 may be transferred to other devices in the storage node 100 or, alternatively, to storage devices in other storage nodes.


In some embodiments, the journal 190 is implemented as a non-linear journal. Embodiments of a non-linear journal suitable for storing write data are disclosed in U.S. patent application Ser. No. 11/506,597, entitled “Systems And Methods For Providing Nonlinear Journaling,” U.S. patent application Ser. No. 11/507,073, entitled “Systems And Methods For Providing Nonlinear Journaling,”, U.S. patent application Ser. No. 11/507,070, entitled “Systems And Methods For Providing Nonlinear Journaling,” and Ser. No. 11/507,076, entitled “Systems And Methods For Allowing Incremental Journaling,” all filed on Aug. 8, 2006, and all of which are hereby incorporated herein by reference in their entirety.


It is also recognized that in some embodiments, the storage system is implemented without using a journal. In such embodiments, the data may be synchronously written to disk during the write, and/or the data may be written, for example, to a persistent write-back cache that saves the data until the storage device becomes available.


E. System Information


The storage node 100 may run on a variety of computer systems such as, for example, a computer, a server, a smart storage unit, and so forth. In one embodiment, the computer may be a general purpose computer using one or more microprocessors, such as, for example, an Intel® Pentium® processor, an Intel® Pentium® II processor, an Intel® Pentium® Pro processor, an Intel® Pentium® IV processor, an Intel® Pentium® D processor, an Intel® Core™ processor, an xx86 processor, an 8051 processor, a MIPS processor, a Power PC processor, a SPARC processor, an Alpha processor, and so forth. The computer may run a variety of operating systems that perform standard operating system functions such as, for example, opening, reading, writing, and closing a file. It is recognized that other operating systems may be used, such as, for example, Microsoft® Windows® 3.X, Microsoft® Windows 98, Microsoft® Windows® 2000, Microsoft® Windows® NT, Microsoft® Windows® CE, Microsoft® Windows® ME, Microsoft® Windows® XP, Palm Pilot OS, Apple® MacOS®, Disk Operating System (DOS), UNIX, IRIX, Solaris, SunOS, FreeBSD, Linux®, or IBM® OS/2® operating systems.


F. Files


As used herein, a file is a collection of data stored in one logical unit that is associated with one or more filenames. For example, the filename “test.txt” may be associated with a file that comprises data representing text characters. The data blocks of the file may be stored at sequential locations on a storage device or, alternatively, portions of the data blocks may be fragmented such that the data blocks are not in one sequential portion on the storage device. In an embodiment where file striping is used, such as in a RAID 5 storage system, for example, data blocks of a file may be stored on multiple storage devices. For example, in a RAID 5 system, data blocks are interleaved across multiple storage devices within an array of storage devices. The stripe width is the size of the data block stored on a single device before moving on to the next device in the device array. On the last device in the device array, redundancy information is stored, rather than data blocks of the file. The redundancy information in RAID 5 is the parity of the previous interleaved data blocks. The process repeats for other data blocks of the file, except that the device that includes the parity data rotates from device to device in the array for each stripe. It is recognized that a variety of striping techniques may be used.
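

For purposes of illustration, one common way to rotate the parity device is sketched below in C. The number of drives and the left-to-right rotation are illustrative assumptions; actual RAID 5 layouts vary from implementation to implementation.

#include <stdio.h>

#define NUM_DRIVES 4   /* illustrative 3+1 array: three data blocks plus parity per stripe */

int main(void)
{
    /* For each stripe, the parity block moves to a different drive. */
    for (int stripe = 0; stripe < 8; stripe++) {
        int parity_drive = NUM_DRIVES - 1 - (stripe % NUM_DRIVES);
        printf("stripe %d: parity on drive %d, data on the remaining %d drives\n",
               stripe, parity_drive, NUM_DRIVES - 1);
    }
    return 0;
}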


G. Data Protection


In some embodiments the storage system may utilize one or more types of data protection. For example, the storage system may implement one or more error correcting codes. These codes include a code “in which each data signal conforms to specific rules of construction so that departures from this construction in the received signal can generally be automatically detected and corrected. It is used in computer data storage, for example in dynamic RAM, and in data transmission.” (http://en.wikipedia.org/wiki/Error_correcting_code). Examples of error correction code include, but are not limited to, Hamming code, Reed-Solomon code, Reed-Muller code, Binary Golay code, convolutional code, and turbo code. In some embodiments, the simplest error correcting codes can correct single-bit errors and detect double-bit errors, and other codes can detect or correct multi-bit errors.


In addition, the error correction code may include forward error correction, erasure code, fountain code, parity protection, and so forth. “Forward error correction (FEC) is a system of error control for data transmission, whereby the sender adds redundant data to its messages, which allows the receiver to detect and correct errors (within some bound) without the need to ask the sender for additional data.” (http://en.wikipedia.org/wiki/forward_error_correction). Fountain codes, also known as rateless erasure codes, are “a class of erasure codes with the property that a potentially limitless sequence of encoding symbols can be generated from a given set of source symbols such that the original source symbols can be recovered from any subset of the encoding symbols of size equal to or only slightly larger than the number of source symbols.” (http://en.wikipedia.org/wiki/Fountain_code). “An erasure code transforms a message of n blocks into a message with >n blocks such that the original message can be recovered from a subset of those blocks,” and the “fraction of the blocks required is called the rate, denoted r.” (http://en.wikipedia.org/wiki/Erasure_code). “Optimal erasure codes produce n/r blocks where any n blocks is sufficient to recover the original message.” (http://en.wikipedia.org/wiki/Erasure_code). “Unfortunately optimal codes are costly (in terms of memory usage, CPU time or both) when n is large, and so near optimal erasure codes are often used,” and “[t]hese require (1+ε)n blocks to recover the message. Reducing ε can be done at the cost of CPU time.” (http://en.wikipedia.org/wiki/Erasure_code).


The data protection may include other error correction methods, such as, for example, Network Appliance's RAID double parity methods, which include storing data in horizontal rows, calculating parity for the data in each row, and storing the parity on a separate row parity disk, along with other double parity methods, diagonal parity methods, and so forth.


In another embodiment, odd parity may be used such that an additional NOT logical operator is applied after XORing data bits in order to determine the unavailable bit. Those of skill in the art will appreciate that there are other parity schemes that may be used in striping data and recovering lost data in a storage system. Any suitable scheme may be used in conjunction with the systems and methods described herein.


III. Example Scenario of a Down Drive


For purposes of illustration, an example scenario of a set of drives will be discussed wherein one of the drives becomes unavailable while the storage system is receiving read and write requests. This example scenario is just one of many possible scenarios and is meant only to illustrate some embodiments of the storage system.


A. Data Map



FIG. 2A illustrates an example scenario where one of a set of drives goes down and then returns to the storage system. The storage system includes five drives, Drive 0, Drive 1, Drive 2, Drive 3, and Drive 4. The storage system stores a set of data d0, d1, d2, d3, d4, and d5 wherein the data is protected using different types of parity protection. Data d0, d1, and d2 are protected using 3+1 parity protection, where p0(d0−d2) is the related parity data. Data d3 and d4 are protected using 2+2 parity protection, where p0(d3−d4) and p1(d3−d4) are the related parity data. Data d5 is protected using 2× mirroring or 1+1 parity, where p0(d5) is the related parity data. The storage system also includes a map data structure that stores the locations of the data and the parity data. As set forth in the map and as shown in the drives, d0 is stored on Drive 0 at location 0, d1 is stored on Drive 1 at location 0, d2 is stored on Drive 2 at location 3, d3 is stored on Drive 0 at location 1, d4 is stored on Drive 1 at location 1, d5 is stored on Drive 2 at location 2, p0(d0−d2) is stored on Drive 3 at location 0, p0(d3−d4) is stored on Drive 3 at location 3, p1(d3−d4) is stored on Drive 2 at location 1, and p0(d5) is stored on Drive 3 at location 2.


In FIG. 2B, Drive 1 becomes unavailable, such as, for example, because the connection to Drive 1 is unplugged. If the storage system receives a read request for d1, then the storage system will read d0 from Drive 0, d2 from Drive 2 and p0(d0−d2) from Drive 3 and then reconstruct d1 and return d1.


In FIG. 2C, Drive 1 becomes available, such as, for example, the connection to Drive 1 is plugged back in. The storage system is the same as before Drive 1 became unavailable. Moreover, even though Drive 1 became unavailable, Drive 1 did not have to be removed from the storage system and fully recreated. Instead, once it became available, it was integrated back into the storage system and made available.


In FIG. 2D, Drive 1 becomes unavailable, and the storage system receives a write request for d0, d1, and d2. The storage system determines whether all of the data locations for d0, d1, d2, and their corresponding parity data p0(d0−d2) are available. Because Drive 1 is not available for d1, the storage system decides to store d1 on Drive 4 at location 0, which maintains the data protection by not having d1 on the same drive as the other data or parity data. Then the storage system updates the map so that the location for d1 is Drive 4, location 0, as shown in the map for FIG. 2D. The storage system then writes the data blocks d0 to Drive 0, location 0, d1 to Drive 4, location 0, and d2 to Drive 2, location 3; computes the parity data p0(d0−d2); and stores the parity data p0(d0−d2) on Drive 3, location 0.
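

For purposes of illustration, the relocation decision described above may be sketched in C as follows. The drive count, state arrays, and helper name are illustrative assumptions; the point is simply to pick an available drive that does not already hold a block of the same protection group, so that no two blocks of the group share a drive.

#include <stdbool.h>
#include <stdio.h>

#define NUM_DRIVES 5

/* Hypothetical drive state: Drive 1 is unavailable, as in FIG. 2D. */
static const bool drive_up[NUM_DRIVES]  = { true, false, true, true, true };
/* Hypothetical stripe membership: Drives 0-3 already hold a block of this stripe. */
static const bool in_stripe[NUM_DRIVES] = { true, true, true, true, false };

/* Pick a new home for a block whose target drive is unavailable; returns -1 if none. */
static int pick_alternate_drive(void)
{
    for (int d = 0; d < NUM_DRIVES; d++)
        if (drive_up[d] && !in_stripe[d])
            return d;   /* keeps protection: no two stripe blocks share a drive */
    return -1;
}

int main(void)
{
    int target = pick_alternate_drive();
    if (target >= 0)
        printf("relocate d1 to Drive %d and update the map\n", target);   /* Drive 4 here */
    else
        printf("no drive can hold the block without losing protection\n");
    return 0;
}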


In FIG. 2E, Drive 1 becomes available, and the data that was moved from Drive 1 while it was unavailable remains stored on the newly assigned drive and is appropriately referenced in the map. In addition, data that was not written while Drive 1 was unavailable was not moved; it remains on Drive 1 and is now accessible on Drive 1, such as, for example, d4. Again, even though Drive 1 became unavailable, Drive 1 did not have to be removed from the storage system and fully recreated. Instead, once it became available, it was integrated back into the storage system and made available.


It is recognized that in some embodiments, after the storage system recognizes that a drive is unavailable, the storage system may begin to move the data from the unavailable drive to another drive so that in case the drive becomes permanently unavailable the migration process has already begun, but if the drive becomes available, the data that has not been moved remains on the now available drive. It is also recognized that the example scenario of FIGS. 2A, 2B, 2C, 2D, and 2E are meant only to illustrate embodiments of a storage system and not to limit the scope of the invention.


B. Journal


In some embodiments, the storage system includes a journal for storing write transactions for the drives. In some circumstances, the actual writes to the disks of the drives do not occur right away. Accordingly, after a write request is processed, the data is stored in the journal until the journal is flushed. When the journal is flushed, it writes the data to the available disks. However, if a drive is not available, the data can remain in the journal until the drive becomes available, at which time it is written to the drive's disk, or until the drive becomes permanently unavailable, at which point the data is removed from the journal. In other systems, once a drive is marked as unavailable, all data stored in the journal for that drive is deleted and the drive is recreated, even if the drive was only down for a very short time period and is fully functional when it returns and becomes available.
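

For purposes of illustration, the journal behavior described above may be sketched in C as follows. The entry layout and state names are illustrative assumptions; each journaled block is tagged with its target drive, and a flush either writes it, keeps it, or discards it depending on that drive's state.

#include <stdio.h>

enum drive_state { DRIVE_UP, DRIVE_DOWN, DRIVE_DEAD };

/* Hypothetical journal entry: a pending write and the drive it is destined for. */
struct journal_entry {
    int  target_drive;
    int  location;
    char data[16];
};

/* Decide what a flush does with one entry, given the state of its target drive. */
static const char *flush_action(enum drive_state s)
{
    switch (s) {
    case DRIVE_UP:   return "write to disk, then discard from the journal";
    case DRIVE_DOWN: return "keep in the journal until the drive returns";
    case DRIVE_DEAD: return "discard (the drive will be reconstructed elsewhere)";
    }
    return "unknown";
}

int main(void)
{
    struct journal_entry d4 = { 1, 1, "new d4 value" };

    /* While Drive 1 is DOWN the entry survives each flush; once UP it is written out. */
    printf("Drive %d DOWN: %s\n", d4.target_drive, flush_action(DRIVE_DOWN));
    printf("Drive %d UP:   %s\n", d4.target_drive, flush_action(DRIVE_UP));
    return 0;
}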



FIGS. 3A, 3B, and 3C illustrate one embodiment of an example scenario of a write journal when a drive becomes unavailable and then becomes available. In FIG. 3A, all of the drives are available, so their status is set to UP. The storage system then receives a request to write d4 on Drive 1 at location 1 with a new data value. The storage system stores d4 in the journal, associating it with Drive 1, and waits for the journal to be flushed. In FIG. 3B, Drive 1 becomes unavailable and its status is set to DOWN. The journal is flushed, but because Drive 1 is DOWN, d4 is kept in the journal. In FIG. 3C, Drive 1 becomes available and its status is set to UP. When the journal is flushed, d4 is written to Drive 1 and removed from the journal.


Again, even though Drive 1 became unavailable, the data destined for Drive 1 did not have to be deleted from the journal. Instead, once Drive 1 became available, it was integrated back into the system and the data was properly stored on the disk of Drive 1.


It is recognized that the journal can be implemented in many different ways and that the storage system may not include a journal as set forth above. This example scenario is meant only to illustrate some embodiments of a storage system and not to limit the scope of the invention.


IV. Read Request



FIG. 4 illustrates one embodiment of a flowchart of operations for processing a read request. Beginning at a start state 410, the read request process 400 proceeds to the next state and receives a read request 420. The read request 420 may be for one or more blocks of data. The read request process 400 then determines whether all data blocks are available 430. If all data blocks are available, the read request process 400 reads the data 440. If all data blocks are not available, then the read request process 400 reads the available data and if possible reconstructs the missing data using the data protection data 450. Next, the read request process 400 returns the data blocks (from the read and/or the reconstruction) or an error message if the read and/or the reconstruction failed 460 and proceeds to an end state 470.


In one embodiment, the reconstruction may fail if, for example, there is not enough data protection data to reconstruct the unavailable data blocks, such as, for example, if the parity is 4+1 and two of the data blocks are unavailable.
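

For purposes of illustration, the read path described above may be sketched in C as follows, assuming a simple single-parity stripe. The stripe width, block size, and data values are illustrative assumptions; if more than one block of the stripe is unavailable, the sketch reports a reconstruction failure, mirroring the 4+1 example above.

#include <stdbool.h>
#include <stdio.h>

#define STRIPE_WIDTH 4   /* illustrative 3+1 stripe: three data blocks plus one parity block */
#define BLOCK_SIZE   8

/* Returns 0 on success, -1 if too many blocks are unavailable to reconstruct. */
static int read_stripe(unsigned char blocks[STRIPE_WIDTH][BLOCK_SIZE],
                       const bool available[STRIPE_WIDTH])
{
    int missing = -1;
    for (int b = 0; b < STRIPE_WIDTH; b++) {
        if (!available[b]) {
            if (missing >= 0)
                return -1;        /* two or more blocks gone: reconstruction fails */
            missing = b;
        }
    }
    if (missing < 0)
        return 0;                 /* everything was readable directly */

    /* Rebuild the single missing block as the XOR of all the other blocks. */
    for (int i = 0; i < BLOCK_SIZE; i++) {
        unsigned char v = 0;
        for (int b = 0; b < STRIPE_WIDTH; b++)
            if (b != missing)
                v ^= blocks[b][i];
        blocks[missing][i] = v;
    }
    return 0;
}

int main(void)
{
    unsigned char blocks[STRIPE_WIDTH][BLOCK_SIZE] = {
        { 1, 1, 1, 1, 1, 1, 1, 1 },           /* d0 */
        { 9, 9, 9, 9, 9, 9, 9, 9 },           /* d1 */
        { 0 },                                /* d2 unavailable; its true contents were all 7s */
        { 15, 15, 15, 15, 15, 15, 15, 15 },   /* parity: 1 XOR 9 XOR 7 = 15 for every byte */
    };
    const bool available[STRIPE_WIDTH] = { true, true, false, true };

    if (read_stripe(blocks, available) == 0)
        printf("reconstructed byte of d2: %u\n", (unsigned)blocks[2][0]);   /* prints 7 */
    else
        printf("too many unavailable blocks to reconstruct\n");
    return 0;
}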


While FIG. 4 illustrates one embodiment of processing a read request, it is recognized that a variety of embodiments may be used. For example, the read request process 400 may read the available data and then determine whether all data blocks are available. Moreover, depending on the embodiment, certain of the blocks described in the figure above may be removed, others may be added, and the sequence may be altered.


V. Write Request



FIG. 5 illustrates one embodiment of a flowchart of operations for performing a write request. Beginning at a start state 510, the write request process 500 proceeds to the next state and receives a write request 520. Proceeding to the next state, the write request process 500 determines whether the devices on which the data blocks and parity blocks are to be stored are available 530. The write request process 500 may, for example, check the map data structure entries for each of the data blocks and parity blocks to determine the devices on which they will be stored, whereas in other embodiments, the drives on which they will be stored are provided to the write request process 500. Moreover, to determine whether a device is available, the write request process 500 may check the group management protocol data that indicates the states of the devices. If one or more of the devices are not available, the write request process 500 determines new locations for the data and/or parity blocks 540 and updates the metadata to correspond to the new locations 550. Next, the write request process 500 writes the data blocks to the appropriate devices 560, writes the parity data 570, and proceeds to an end state 580.


In one embodiment, the write request process 500 may fail and/or return an error if, for example, there is not enough room to store the data and/or parity blocks on other devices, such as, for example, if the parity is 4+1, there are six drives and two of the drives are unavailable. In such a scenario, the write request process 500 may return an error and/or may store the data in any available space, but return a message that some of the data is stored without the requested data protection.


While FIG. 5 illustrates one embodiment of processing a write request, it is recognized that a variety of embodiments may be used. For example, the write request process 500 may compute the data protection data or the data protection data may be received by the write request process 500. Moreover, depending on the embodiment, certain of the blocks described in the figure above may be removed, others may be added, and the sequence may be altered.


As discussed in detail below, the storage system may be implemented as part of a distributed file system. In one embodiment of a distributed file system, the write request also checks to see whether all copies of the metadata storing the locations of the data are stored on available nodes. One embodiment of pseudocode for implementing a write request process is as follows:


Write() {
  If (not all inodes available) {
    Re-allocate missing inodes on available drives
    Write new copies of the inode
    Update lin tree to point to new inodes
  }
  If (not all data and parity blocks available) {
    Re-allocate missing data and parity blocks on available drives
    Update file metatree to point to new blocks
  }
  For all data blocks b {
    Write_block_to_journal(b)
  }
  For all parity blocks b {
    Write_block_to_journal(b)
  }
}










In one embodiment, the inodes store metadata about files, and the LIN tree stores location data for the inodes. While the above pseudocode represents one embodiment of an implementation of a write process for one embodiment of a distributed file system, it is recognized that the write process may be implemented in a variety of ways and is not limited to the exemplary embodiment above.


VI. Journal Flush


As noted above, in some embodiments, when a write request is processed, the data to be stored on a disk is held in a journal until the write to the disk has occurred. FIG. 6 illustrates one embodiment of a flowchart of operations for a journal flush. Beginning at a start state 610, the journal flush process 600 proceeds to the next state and, for all devices d 620, determines whether the device is UP, DOWN, or DEAD 630. If the device is UP, the journal flush process 600 flushes the blocks for that device to the device's disk 640. If the device is DOWN, the journal flush process 600 leaves the blocks for that device in the journal 650. If the device is DEAD, the journal flush process 600 discards the blocks in the journal for that device 660. Once the devices d have been reviewed 670, the journal flush process 600 proceeds to an end state 680.


While FIG. 6 illustrates one embodiment of flushing the journal, it is recognized that a variety of embodiments may be used. For example, the flush journal process 600 may review more than one device d at a time. In addition, if a device is DEAD, the flush journal process 600 may send the blocks to a process that is handling the reconstruction of the DEAD drive. Moreover, depending on the embodiment, certain of the blocks described in the figure above may be removed, others may be added, and the sequence may be altered.


One embodiment of pseudocode for implementing a journal flush process is as follows:


Flush_journal() {
  For all drives d {
    If (d is down) {
      Leave blocks in the journal
    } else if (d is up) {
      Flush blocks to disk
      When disk returns success, discard blocks from the journal
    } else if (d is dead) {
      Discard blocks in the journal
    }
  }
}










While the above pseudocode represents one embodiment of an implementation of a journal flush process, it is recognized that the journal flush process may be implemented in a variety of ways and is not limited to the exemplary embodiment above.


VII. Distributed System Embodiments


For purposes of illustration, some embodiments will now be described in the context of a distributed system such as, for example a distributed file system. Embodiments of a distributed file system suitable for accommodating reverse lookup requests are disclosed in U.S. patent application Ser. No. 10/007,003, entitled, “Systems And Methods For Providing A Distributed File System Utilizing Metadata To Track Information About Data Stored Throughout The System,” filed Nov. 9, 2001 which claims priority to Application No. 60/309,803, entitled “Systems And Methods For Providing A Distributed File System Utilizing Metadata To Track Information About Data Stored Throughout The System,” filed Aug. 3, 2001, U.S. Pat. No. 7,156,524 entitled “Systems and Methods for Providing A Distributed File System Incorporating a Virtual Hot Spare,” filed Oct. 25, 2002, and U.S. patent application Ser. No. 10/714,326 entitled “Systems And Methods For Restriping Files In A Distributed File System,” filed Nov. 14, 2003, which claims priority to Application No. 60/426,464, entitled “Systems And Methods For Restriping Files In A Distributed File System,” filed Nov. 14, 2002, all of which are hereby incorporated herein by reference in their entirety.


In one embodiment of a distributed file system, metadata structures, also referred to as inodes, are used to represent and manipulate the files and directories within the system. An inode is a data structure that describes a file or directory and may be stored in a variety of locations including on a storage device.


A directory, similar to a file, is a collection of data stored in one unit under a directory name. A directory, however, is a specialized collection of data regarding elements in a file system. In one embodiment, a file system is organized in a tree-like structure. Directories are organized like the branches of trees. Directories may begin with a root directory and/or may include other branching directories. Files resemble the leaves or the fruit of the tree. Although in the illustrated embodiment an inode represents either a file or a directory, in other embodiments, an inode may include metadata for other elements in a distributed file system, in other distributed systems, in other file systems, or other systems. In some embodiments files do not branch, while in other embodiments files may branch.



FIG. 7 illustrates an exemplary distributed system 700 comprising storage nodes 710, 720, 730, 740 and users 750, 760 that are in data communication via a communication medium 770. The communication medium 770 may comprise one or more wired and/or wireless networks of any type, such as SANs, LANs, WANs, MANs, and/or the Internet. In other embodiments, the distributed system 700 may be comprised of hard-wired connections between the storage nodes 710, 720, 730, 740, or any combination of communication types known to one of ordinary skill in the art.


In the embodiment of FIG. 7, the users 750, 760 may request data via any of the storage nodes 710, 720, 730, 740 via the communication medium 770. The users 750, 760 may comprise a personal computer, a mainframe terminal, PDA, cell phone, laptop, a client application, or any device that accesses a storage device in order to read and/or write data.



FIG. 8A illustrates a storage system 700 wherein data is stored on each of four storage nodes 710, 720, 730, 740, where each of the storage nodes comprises multiple storage devices, such as multiple hard drives. For example, storage node 710 comprises hard drives 802, 804, 806, and 808. While the example embodiment shows the same number of devices for each node, in other embodiments, each node could have a different number of drives.



FIG. 8B illustrates one embodiment of data stored on storage drives and storage nodes in one embodiment of a distributed system wherein two storage drives are unavailable. If the storage system determines that device 816 is unavailable, one or more of the data blocks on the unavailable device 816 can be moved to other devices that have available storage space, such as, for example, if the distributed system 800 receives a write request to write data on the unavailable device 816. In one embodiment, the distributed system 800 is a distributed file system where the metadata inodes and the data sets are files.


A. Embodiments of Mapping Structures



FIG. 9 illustrates one embodiment of a map structure 900 used to store location data about data sets stored on one or more storage devices. The map structure 900 stores the location of the data blocks of the data set and data protection blocks of the data set. For example, nodes 960 store data indicating the location of the first stripe of data from FIG. 2E. The first stripe of data includes d0, d1, d2, and p0(d0−d2). In this embodiment, the data is indexed in a b-tree map structure 900 using the offset into the data set of the first block in the stripe. It is recognized, however, that a variety of map structures 900 may be used and/or different map structures 900 may be used for data protection and data. Node 970 stores the locations of the second stripe of data. The second stripe of data includes d3, d4, p0(d3−d4), and p1(d3−d4). Node 980 stores the locations of the third stripe of data. The third stripe of data includes d5 and p0(d5). The leaf nodes 960, 970, 980, 990, 992, 994 store data indicating the locations of the data stripes. The leaf nodes 960, 970, 980, 990, 992, 994 are associated with parent nodes 930, 940, 950. The parent nodes 930, 940, 950 are associated with a root node 920. In the exemplary map structure 900, all copies of the superblocks related to the data set reference the root node 920 of the map structure 900.


In one embodiment, when a read or write request is received by the storage system, the map structure 900 is traversed in order to find the location of the requested data. For example, as indicated in leaf node 980, the data block d5 is stored on Drive 2, location 2, and the related parity data, p0, is stored on Drive 3, location 2. Thus, when the storage system receives a request for the data, the map structure 900 is traversed beginning at superblock 910, continuing to root node 920 and to node 940, and ending at node 980, where the location data for d5 is located. More particularly, the node 940 comprises an entry, 6, which may be referred to as a key. If the requested offset is less than 6, the location data is stored off of the first branch of the node, for example, node 980; if the requested offset is greater than or equal to 6, then the location data is stored off of the second branch of node 940. A similar process is performed in order to traverse from one of nodes 920 or 940 to one of the leaf nodes.
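

For purposes of illustration, the traversal described above may be sketched in C as follows. The node layout (a single key per internal node, as in the example) and the field names are illustrative assumptions rather than an actual on-disk format.

#include <stdio.h>
#include <stddef.h>

/* Simplified map node: internal nodes hold one key and two children;
 * leaf nodes hold the (drive, location) pair for a block of the data set. */
struct map_node {
    int is_leaf;
    int key;                        /* offset boundary, used by internal nodes */
    const struct map_node *lt, *ge; /* children for offset < key and offset >= key */
    int drive, location;            /* valid only for leaf nodes */
};

/* Walk down from the root until a leaf yields the block's location. */
static const struct map_node *lookup(const struct map_node *n, int offset)
{
    while (!n->is_leaf)
        n = (offset < n->key) ? n->lt : n->ge;
    return n;
}

int main(void)
{
    /* Leaf holding d5's location (Drive 2, location 2), as in the example. */
    const struct map_node leaf_d5 = { 1, 0, NULL, NULL, 2, 2 };
    const struct map_node leaf_hi = { 1, 0, NULL, NULL, 3, 2 };
    const struct map_node node940 = { 0, 6, &leaf_d5, &leaf_hi, 0, 0 };

    const struct map_node *hit = lookup(&node940, 5);   /* offset below the key 6 */
    printf("found on Drive %d, location %d\n", hit->drive, hit->location);
    return 0;
}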


If the storage device storing the data for d5 is unavailable, the data blocks stored on the unavailable storage device may be migrated to another storage device. When this occurs, the map structure 900 is updated to indicate the new location of the data blocks in order to allow the data blocks to be accessed. In addition, if the device storing the data for node 980b, for example, is unavailable, a copy of node 980b is made and stored on an available device, and the same applies to nodes 940 and 920. Systems and methods for traversing the map structure to determine whether the nodes are available are disclosed in U.S. patent application Ser. No. 11/262,308 and U.S. Provisional Application Nos. 60/623,846 and 60/628,527, referenced below.


In one embodiment, the map structure 900 is a file map structure that stores the locations of the file data and the parity data of a file. The superblocks are the inodes for the file.



FIG. 10 illustrates one embodiment of a map structure 1000 used to store data on a distributed file system. More particularly, the map structure 1000 illustrates nodes that may be used in an index tree that maps the locations of inodes on a distributed file system using the unique identifier of the inodes, also referred to as a LIN tree. For example, metadata nodes 1035, 1040, 1050, and 1055 store data indicating the location of the file index, or inode, corresponding to the particular .txt files noted in the Figure. As illustrated in FIG. 10, the leaf nodes 1035, 1040 are associated with a parent node 1030 and the leaf nodes 1050, 1055 are associated with a parent node 1045. Each of the parent nodes 1030, 1045 is associated with a root node 1025. In the exemplary map structure 1000, four superblocks 1005, 1010, 1015, 1020 are illustrated, where each superblock may be stored on a different node in a storage system. The superblocks each include references to each copy of the root node 1025 that may be stored on multiple devices. In one embodiment, multiple copies of each node are stored on various devices of a distributed storage system. U.S. patent application Ser. No. 11/255,818, entitled “Systems and Methods for Maintaining Distributed Data,” filed Oct. 21, 2005, which is hereby incorporated by reference in its entirety, describes additional exemplary methods of mapping data and directory information in a file system.


In one embodiment, in operation, when a read or write request is received by the storage system, the index structure is traversed in order to find the metadata node for the requested file. For example, as indicated in leaf node 1035, the file “K_file.txt” has an index of 8. Thus, when the storage system receives a request for the file associated with an index of 8, the map structure 1000 is traversed, beginning at a superblock 1005, 1010, 1015, 1020, continuing to node 1025, then continuing to node 1030, and ending at node 1035, where the metadata node for the file associated with index 8 is located. More particularly, the node 1025 comprises an entry, 20, which may be referred to as a key. If the requested file's index is less than or equal to 20, the file's inode location is stored off of the first branch of the node, for example, node 1030; if the requested file's index is greater than 20, then the file's inode location is stored off of the second branch of the tree, for example, node 1045. A similar process is performed in order to traverse from one of nodes 1030 or 1045 to one of the leaf nodes comprising the location of the file's inode.


Similar to the discussion above, if any of the nodes, including parent nodes, root nodes and superblocks, are stored on an unavailable device, references to the nodes on the unavailable devices should be updated to point to the new location of the index data previously stored on the unavailable nodes.


The embodiment of FIG. 10 illustrates a scenario in which the device storing leaf node 1035a, node 1030a, and one of the copies of the inode is unavailable. Thus, when one of the inode files for the file “K_file.txt” is moved to another device, metadata nodes 1035a and 1035b are updated to reflect the new location of the inode file. The system may then determine that one of the metadata files, for example, node 1035a, is stored on an unavailable device, and so metadata node 1035b is copied to become new node 1035a and new node 1035a is stored on an extant device. The system then updates the nodes 1030a and 1030b to reference the newly stored node 1035a. The system may then determine that node 1030a is stored on an unavailable device, and so node 1030b is copied to become new node 1030a, and new node 1030a is stored on an extant device. The system then updates nodes 1025a and 1025b to reference the newly stored node 1030a. Because nodes 1025a and 1025b are on available devices, no additional updating is needed. Accordingly, nodes 1035a, 1030a, 1030b, 1025a, and 1025b are updated (as indicated by the dotted lines).


In one embodiment, more than one copy of each index and leaf node is stored in the distributed file system so that if one of the devices fails, the index data will still be available. In one embodiment, the distributed file system uses a process that restores copies of the index and leaf nodes of the map data structures 900, 1000 if one of the copies is stored on an unavailable device.


As used herein, data structures are collections of associated data elements, such as a group or set of variables or parameters. In one embodiment a structure may be implemented as a C-language “struct.” One skilled in the art will appreciate that many suitable data structures may be used.


Embodiments of systems and methods for restoring metadata and data that is stored on nodes or drives that are unavailable and for updating the map data structure are disclosed in U.S. patent application Ser. No. 11/255,337, entitled “Systems And Methods For Accessing And Updating Distributed Data,” filed on Oct. 21, 2005, U.S. patent application Ser. No. 11/262,308, entitled “Distributed System With Asynchronous Execution Systems And Methods,” filed on Oct. 28, 2005, which claims priority to U.S. Provisional Appl. No. 60/623,846, entitled “Distributed System With Asynchronous Execution Systems And Methods,” filed on Oct. 29, 2004, and U.S. Provisional Appl. No. 60/628,527, entitled “Distributed System With Asynchronous Execution Systems And Methods,” filed on Nov. 15, 2004, and U.S. patent application Ser. No. 10/714,326, entitled “Systems and Methods for Restriping Files In A Distributed System,” filed on Nov. 14, 2003, which claims priority to U.S. Provisional Appl. No. 60/426,464, entitled “Systems and Methods for Restriping Files In A Distributed System,” filed on Nov. 14, 2002, all of which are hereby incorporated herein by reference in their entirety.


B. Group Management Protocol


In some embodiments, a group management protocol (“GMP”) is used to maintain a view of the nodes and/or drives available to the distributed file system. The GMP communicates which storage devices, for example, storage nodes and storage drives, are available to the storage system, their current operational state (for example, available, unavailable, up, down, dead), and how much space is available on each device. The GMP sends a notification when a storage device is unavailable, when it becomes available again, and/or when it becomes permanently unavailable. The storage system uses information from the GMP to determine which storage devices are available for reading and writing after receiving a read or write request.


One embodiment of a set of pseudocode for a GMP is set forth as follows:


If (receive an error from the drive on a write) {
  Send notice that drive is about to go down
  Mark drive as down on the participant side
  Execute a GMP transaction to inform the rest of the cluster the drive is down
    Broadcast that we want to bring drive down (GMP prepare message)
    Receive an OK from all nodes (GMP prepared message)
    Broadcast that they should take the drive down (GMP commit message)
    Each initiator updates their map that the drive is down
}









While the above pseudocode represents one embodiment of an implementation of a GMP, it is recognized that the GMP may be implemented in a variety of ways and is not limited to the exemplary embodiment above. Moreover the GMP may be used in conjunction with other protocols for coordinating activities among multiple nodes and/or systems. Embodiments of a protocol for coordinating activities among nodes are disclosed in U.S. patent application Ser. No. 11/262,306, entitled “Non-Blocking Commit Protocol Systems And Methods,” filed Oct. 28, 2005, which claims priority to U.S. Provisional Appl. No. 60/623,843, entitled “Non-Blocking Commit Protocol Systems And Methods,” filed Oct. 29, 2004, and U.S. patent application Ser. No. 11/449,153, entitled “Non-Blocking Commit Protocol Systems And Methods,” filed Jun. 8, 2006, all of which are hereby incorporated herein by reference in their entirety.


Although some of the figures and descriptions relate to an embodiment of the invention wherein the environment is that of a distributed file system, the present invention is not limited by the type of environment in which the systems and methods are used, and the systems and methods may be used in other environments, such as, for example, other file systems, other distributed systems, non-distributed systems, the Internet, the World Wide Web, a private network for a hospital, a broadcast network for a government agency, an internal network of a corporate enterprise, an intranet, a local area network, a wide area network, a wired network, a wireless network, a system area network, and so forth. It is also recognized that in other embodiments, the systems and methods described herein may be implemented as a single module and/or implemented in conjunction with a variety of other modules and the like.


VIII. Other Embodiments


While certain embodiments of the invention have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the present invention. The above-mentioned alternatives are examples of other embodiments, and they do not limit the scope of the invention. It is recognized that a variety of data structures with various fields and data sets may be used. In addition, other embodiments of the flow charts may be used.


It is also recognized that the term “remote” may include data, objects, devices, components, and/or modules not stored locally, whether or not they are accessible via the local bus, as well as data stored locally that is “virtually remote.” Thus, remote data may include a device which is physically stored in the same room and connected to the user's device via a network. In other situations, a remote device may also be located in a separate geographic area, such as, for example, in a different location, country, and so forth.


Moreover, while the description details certain embodiments of the invention, it will be appreciated that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.

Claims
  • 1. A storage system for managing unavailable storage devices comprising: a plurality of storage devices comprising at least a first storage device and a second storage device; one or more computer processors, each of the one or more computer processors in electronic communication with one or more of the plurality of storage devices; and at least one executable software module executed by the one or more computer processors, wherein the executable software module is configured to reconstruct data on an unavailable device on a read request, wherein the read request does not result in the executable software module copying the reconstructed data to an alternate available device, and wherein the executable software module is configured to: store a set of data on the plurality of storage devices, wherein the set of data comprises a plurality of blocks, and wherein storing the set of data comprises storing at least a first block and a second block on the first storage device; determine that the first storage device is unavailable; receive a read request corresponding to the first block on the first storage device; in response to the read request, reconstruct the first block using one or more other blocks in the plurality of blocks, wherein the read request does not trigger modifying an association between the first block and the first storage device in metadata based on a future availability of the first storage device after a period of temporary unavailability; return the reconstructed data block; receive a write request comprising a request to write an updated block corresponding to the second block; in response to the write request, store the updated block on a second storage device in the plurality of storage devices, the second storage device different from the first storage device, and modify an association between the second block and the first storage device in metadata to reflect an association between the second block and the second storage device; determine that the first storage device is available after a period of unavailability; and in response to determining that the first storage device is available, free the second block on the first storage device.
  • 2. The storage system of claim 1, wherein the storage system is a file system.
  • 3. The storage system of claim 2, wherein the file system is a distributed file system.
  • 4. The storage system of claim 1, wherein the at least one executable software module is further configured to, in response to the write request, store the updated block in a journal.
  • 5. The storage system of claim 4, wherein the at least one executable software module is further configured to flush the updated block from the journal to the first storage device when the first storage device is available.
  • 6. The storage system of claim 1, wherein the set of data comprises a plurality of blocks of data and at least one of the first block and the second block comprises a parity block.
  • 7. The storage system of claim 1, wherein the set of data comprises mirrored data, and the first block comprises a mirror of a third block in the set of data.
  • 8. The storage system of claim 1, wherein storing the set of data on the plurality of storage devices comprises storing the set of data on a subset of storage devices of the plurality of storage devices, and wherein the subset of storage devices comprises less than all the storage devices in the plurality of storage devices.
  • 9. The storage system of claim 8, wherein storing the set of data on the subset of storage devices of the plurality of storage devices comprises storing at least one block of data on each storage device in the subset of storage devices.
  • 10. The storage system of claim 9, wherein the second storage device in the plurality of storage devices is a member of the subset of storage devices.
  • 11. The storage system of claim 9, wherein the second storage device in the plurality of storage devices is not a member of the subset of storage devices.
  • 12. The storage system of claim 1, wherein the at least one executable software module is further configured to store the set of data on the plurality of storage devices based on a parity protection plan.
  • 13. The storage system of claim 12, wherein storing the updated block on the second storage device in the plurality of storage devices satisfies the parity protection plan.
  • 14. The storage system of claim 12, wherein storing the updated block on the second storage device in the plurality of storage devices does not satisfy the parity protection plan.
  • 15. The storage system of claim 1, wherein at least one executable software module is further configured to store a second set of data on the plurality of storage devices, wherein the second set of data comprises a second plurality of blocks, and wherein storing the second set of data comprises storing at least a third block on the first storage device and a fourth block on the second storage device.
  • 16. The storage system of claim 1, wherein the set of data comprises a second set of metadata.
  • 17. The storage system of claim 16, wherein the second set of metadata comprises at least a mirrored copy of the first set of metadata.
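The following is a minimal, illustrative sketch of the general behavior recited in claim 1: read-time reconstruction that neither copies the rebuilt block to an alternate device nor remaps it in metadata, write redirection to an available device with a metadata remap, and freeing only the superseded block when the previously unavailable device returns. It is not the patented implementation: simple XOR parity stands in for whatever protection plan a real system would use, the one-block-per-device layout is a simplification, and every name here (StripeStore, block_map, device_returned, and so forth) is hypothetical.

    # Illustrative sketch only -- not the patented implementation.
    class StripeStore:
        def __init__(self, num_devices):
            self.devices = [dict() for _ in range(num_devices)]  # per-device block storage
            self.available = [True] * num_devices                # availability flag per device
            self.block_map = {}                                   # metadata: block id -> device index
            self.stripe = []                                      # ids of all blocks in the stripe
            self.stale = {}                                       # device index -> block ids superseded while it was away

        def store(self, data_blocks):
            """Store the data blocks plus one XOR parity block, one block per device."""
            parity = bytes(len(data_blocks[0]))
            for block in data_blocks:
                parity = bytes(a ^ b for a, b in zip(parity, block))
            for dev, block in enumerate(data_blocks + [parity]):
                self.devices[dev][dev] = block        # block id == device index in this toy layout
                self.block_map[dev] = dev
                self.stripe.append(dev)

        def read(self, block_id):
            dev = self.block_map[block_id]
            if self.available[dev]:
                return self.devices[dev][block_id]
            # Unavailable device: rebuild just this block by XORing the other blocks
            # of the stripe.  The reconstructed data is returned, but it is neither
            # copied to an alternate device nor remapped in block_map.
            out = None
            for other_id in self.stripe:
                if self.block_map[other_id] == dev:
                    continue
                data = self.read(other_id)
                out = data if out is None else bytes(a ^ b for a, b in zip(out, data))
            return out

        def write(self, block_id, new_data):
            dev = self.block_map[block_id]
            if self.available[dev]:
                self.devices[dev][block_id] = new_data   # (parity maintenance omitted for brevity)
                return
            # Redirect the write to an available device and update the metadata so the
            # block is now associated with the alternate device.
            alt = next(i for i, ok in enumerate(self.available) if ok)
            self.devices[alt][block_id] = new_data
            self.block_map[block_id] = alt
            self.stale.setdefault(dev, set()).add(block_id)

        def device_returned(self, dev):
            # The device is back after temporary unavailability: free only the stale
            # copies of blocks whose writes were redirected while it was away.  No
            # full rebuild of the returning device is performed.
            self.available[dev] = True
            for block_id in self.stale.pop(dev, set()):
                self.devices[dev].pop(block_id, None)

As a hypothetical usage under the same assumptions: after store([b'aa', b'bb', b'cc']) on a four-device StripeStore, marking device 0 unavailable and calling read(0) returns the reconstructed bytes without altering block_map; write(0, b'zz') lands on an alternate device and remaps block 0; device_returned(0) then frees only the superseded copy on device 0, which is why no full reconstruction of the returning device is needed.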
US Referenced Citations (450)
Number Name Date Kind
4608688 Hansen et al. Aug 1986 A
4780796 Fukuda et al. Oct 1988 A
5163131 Row et al. Nov 1992 A
5181162 Smith et al. Jan 1993 A
5212784 Sparks May 1993 A
5230047 Frey et al. Jul 1993 A
5251206 Calvignac et al. Oct 1993 A
5258984 Menon et al. Nov 1993 A
5329626 Klein et al. Jul 1994 A
5359594 Gould et al. Oct 1994 A
5403639 Belsan et al. Apr 1995 A
5423046 Nunnelley et al. Jun 1995 A
5459871 Van Den Berg Oct 1995 A
5481699 Saether Jan 1996 A
5548724 Akizawa et al. Aug 1996 A
5548795 Au Aug 1996 A
5568629 Gentry et al. Oct 1996 A
5596709 Bond et al. Jan 1997 A
5606669 Bertin et al. Feb 1997 A
5612865 Dasgupta Mar 1997 A
5649200 Leblang et al. Jul 1997 A
5657439 Jones et al. Aug 1997 A
5668943 Attanasio et al. Sep 1997 A
5680621 Korenshtein Oct 1997 A
5694593 Baclawski Dec 1997 A
5696895 Hemphill et al. Dec 1997 A
5734826 Olnowich et al. Mar 1998 A
5754756 Watanabe et al. May 1998 A
5761659 Bertoni Jun 1998 A
5774643 Lubbers et al. Jun 1998 A
5799305 Bortvedt et al. Aug 1998 A
5805578 Stirpe et al. Sep 1998 A
5805900 Fagen et al. Sep 1998 A
5806065 Lomet Sep 1998 A
5822790 Mehrotra Oct 1998 A
5832200 Yoda Nov 1998 A
5862312 Mann Jan 1999 A
5870563 Roper et al. Feb 1999 A
5878410 Zbikowski et al. Mar 1999 A
5878414 Hsiao et al. Mar 1999 A
5884046 Antonov Mar 1999 A
5884098 Mason, Jr. Mar 1999 A
5884303 Brown Mar 1999 A
5890147 Peltonen et al. Mar 1999 A
5917998 Cabrera et al. Jun 1999 A
5933834 Aichelen Aug 1999 A
5943690 Dorricott et al. Aug 1999 A
5963963 Schmuck et al. Oct 1999 A
5966707 Van Huben et al. Oct 1999 A
5983232 Zhang Nov 1999 A
5996089 Mann Nov 1999 A
6000007 Leung et al. Dec 1999 A
6014669 Slaughter et al. Jan 2000 A
6021414 Fuller Feb 2000 A
6029168 Frey Feb 2000 A
6038570 Hitz et al. Mar 2000 A
6044367 Wolff Mar 2000 A
6052759 Stallmo et al. Apr 2000 A
6055543 Christensen et al. Apr 2000 A
6055564 Phaal Apr 2000 A
6070172 Lowe May 2000 A
6081833 Okamoto et al. Jun 2000 A
6081883 Popelka et al. Jun 2000 A
6108759 Orcutt et al. Aug 2000 A
6117181 Dearth et al. Sep 2000 A
6122754 Litwin et al. Sep 2000 A
6136176 Wheeler et al. Oct 2000 A
6138126 Hitz et al. Oct 2000 A
6154854 Stallmo Nov 2000 A
6169972 Kono et al. Jan 2001 B1
6173374 Heil et al. Jan 2001 B1
6202085 Benson et al. Mar 2001 B1
6209059 Ofer et al. Mar 2001 B1
6219693 Napolitano et al. Apr 2001 B1
6226377 Donaghue, Jr. May 2001 B1
6247108 Long Jun 2001 B1
6279007 Uppala Aug 2001 B1
6321345 Mann Nov 2001 B1
6334168 Islam et al. Dec 2001 B1
6334966 Hahn et al. Jan 2002 B1
6353823 Kumar Mar 2002 B1
6384626 Tsai et al. May 2002 B2
6385626 Tamer et al. May 2002 B1
6393483 Latif et al. May 2002 B1
6397311 Capps May 2002 B1
6405219 Saether et al. Jun 2002 B2
6408313 Campbell et al. Jun 2002 B1
6415259 Wolfinger et al. Jul 2002 B1
6421781 Fox et al. Jul 2002 B1
6434574 Day et al. Aug 2002 B1
6449730 Mann Sep 2002 B2
6453389 Weinberger et al. Sep 2002 B1
6457139 D'Errico et al. Sep 2002 B1
6463442 Bent et al. Oct 2002 B1
6496842 Lyness Dec 2002 B1
6499091 Bergsten Dec 2002 B1
6502172 Chang Dec 2002 B2
6502174 Beardsley et al. Dec 2002 B1
6523130 Hickman et al. Feb 2003 B1
6526478 Kirby Feb 2003 B1
6546443 Kakivaya et al. Apr 2003 B1
6549513 Chao et al. Apr 2003 B1
6557114 Mann Apr 2003 B2
6567894 Hsu et al. May 2003 B1
6567926 Mann May 2003 B2
6571244 Larson May 2003 B1
6571349 Mann May 2003 B1
6574745 Mann Jun 2003 B2
6594655 Tal et al. Jul 2003 B2
6594660 Berkowitz et al. Jul 2003 B1
6594744 Humlicek et al. Jul 2003 B1
6598174 Parks et al. Jul 2003 B1
6618798 Burton et al. Sep 2003 B1
6631411 Welter et al. Oct 2003 B1
6658554 Moshovos et al. Dec 2003 B1
6662184 Friedberg Dec 2003 B1
6668304 Satran et al. Dec 2003 B1
6671686 Pardon et al. Dec 2003 B2
6671704 Gondi et al. Dec 2003 B1
6671772 Cousins Dec 2003 B1
6687805 Cochran Feb 2004 B1
6725392 Frey et al. Apr 2004 B1
6732125 Autrey et al. May 2004 B1
6742020 Dimitroff et al. May 2004 B1
6748429 Talluri et al. Jun 2004 B1
6801949 Bruck et al. Oct 2004 B1
6848029 Coldewey Jan 2005 B2
6856591 Ma et al. Feb 2005 B1
6871295 Ulrich et al. Mar 2005 B2
6895482 Blackmon et al. May 2005 B1
6895534 Wong et al. May 2005 B2
6907011 Miller et al. Jun 2005 B1
6907520 Parady Jun 2005 B2
6917942 Burns et al. Jul 2005 B1
6920494 Heitman et al. Jul 2005 B2
6922696 Lincoln et al. Jul 2005 B1
6922708 Sedlar Jul 2005 B1
6934878 Massa et al. Aug 2005 B2
6940966 Lee Sep 2005 B2
6954435 Billhartz et al. Oct 2005 B2
6990604 Binger Jan 2006 B2
6990611 Busser Jan 2006 B2
7007044 Rafert et al. Feb 2006 B1
7007097 Huffman et al. Feb 2006 B1
7010622 Bauer et al. Mar 2006 B1
7017003 Murotani et al. Mar 2006 B2
7043485 Manley et al. May 2006 B2
7043567 Trantham May 2006 B2
7058639 Chatterjee et al. Jun 2006 B1
7069320 Chang et al. Jun 2006 B1
7103597 McGoveran Sep 2006 B2
7111305 Solter et al. Sep 2006 B2
7113938 Highleyman et al. Sep 2006 B2
7124264 Yamashita Oct 2006 B2
7146524 Patel et al. Dec 2006 B2
7152182 Ji et al. Dec 2006 B2
7165192 Cadieux et al. Jan 2007 B1
7177295 Sholander et al. Feb 2007 B1
7181746 Perycz et al. Feb 2007 B2
7184421 Liu et al. Feb 2007 B1
7194487 Kekre et al. Mar 2007 B1
7206805 McLaughlin, Jr. Apr 2007 B1
7225204 Manley et al. May 2007 B2
7228299 Harmer et al. Jun 2007 B1
7240235 Lewalski-Brechter Jul 2007 B2
7249118 Sandler et al. Jul 2007 B2
7257257 Anderson et al. Aug 2007 B2
7290056 McLaughlin, Jr. Oct 2007 B1
7313614 Considine et al. Dec 2007 B2
7318134 Oliveira et al. Jan 2008 B1
7346346 Fachan Mar 2008 B2
7346720 Fachan Mar 2008 B2
7370064 Yousefi'zadeh May 2008 B2
7373426 Jinmei et al. May 2008 B2
7386610 Vekiarides Jun 2008 B1
7386675 Fachan Jun 2008 B2
7386697 Case et al. Jun 2008 B1
7389379 Goel et al. Jun 2008 B1
7440966 Adkins et al. Oct 2008 B2
7451341 Okaki et al. Nov 2008 B2
7502801 Sawdon et al. Mar 2009 B2
7509448 Fachan et al. Mar 2009 B2
7509524 Patel et al. Mar 2009 B2
7533298 Smith et al. May 2009 B2
7536588 Hafner et al. May 2009 B2
7546354 Fan et al. Jun 2009 B1
7546412 Ahmad et al. Jun 2009 B2
7551572 Passey et al. Jun 2009 B2
7558910 Alverson et al. Jul 2009 B2
7571348 Deguchi et al. Aug 2009 B2
7577258 Wiseman et al. Aug 2009 B2
7577667 Hinshaw et al. Aug 2009 B2
7590652 Passey et al. Sep 2009 B2
7593938 Lemar et al. Sep 2009 B2
7617289 Srinivasan et al. Nov 2009 B2
7631066 Schatz et al. Dec 2009 B1
7639818 Fujimoto et al. Dec 2009 B2
7665123 Szor et al. Feb 2010 B1
7665136 Szor et al. Feb 2010 B1
7676691 Fachan et al. Mar 2010 B2
7680836 Anderson et al. Mar 2010 B2
7680842 Anderson et al. Mar 2010 B2
7685126 Patel et al. Mar 2010 B2
7685162 Heider et al. Mar 2010 B2
7689597 Bingham et al. Mar 2010 B1
7707193 Zayas et al. Apr 2010 B2
7716262 Pallapotu May 2010 B2
7734603 McManis Jun 2010 B1
7739288 Lemar et al. Jun 2010 B2
7743033 Patel et al. Jun 2010 B2
7752226 Harmer et al. Jul 2010 B1
7752402 Fachan et al. Jul 2010 B2
7756898 Passey et al. Jul 2010 B2
7779048 Fachan et al. Aug 2010 B2
7783666 Zhuge et al. Aug 2010 B1
7788303 Mikesell et al. Aug 2010 B2
7797283 Fachan et al. Sep 2010 B2
7797323 Eshghi et al. Sep 2010 B1
7822932 Fachan et al. Oct 2010 B2
7840536 Ahal et al. Nov 2010 B1
7844617 Lemar et al. Nov 2010 B2
7848261 Fachan Dec 2010 B2
7870345 Issaquah et al. Jan 2011 B2
7882068 Schack et al. Feb 2011 B2
7882071 Fachan et al. Feb 2011 B2
7899800 Fachan et al. Mar 2011 B2
7900015 Fachan et al. Mar 2011 B2
7917474 Passey et al. Mar 2011 B2
7937421 Mikesell et al. May 2011 B2
7949636 Akidau et al. May 2011 B2
7949692 Lemar et al. May 2011 B2
7953704 Anderson et al. May 2011 B2
7953709 Akidau et al. May 2011 B2
7962779 Patel et al. Jun 2011 B2
7966289 Lu et al. Jun 2011 B2
7971021 Daud et al. Jun 2011 B2
7984324 Daud et al. Jul 2011 B2
8005865 Passey et al. Aug 2011 B2
8010493 Anderson et al. Aug 2011 B2
8015156 Anderson et al. Sep 2011 B2
8015216 Fachan et al. Sep 2011 B2
8027984 Passey et al. Sep 2011 B2
8051425 Godman et al. Nov 2011 B2
8054765 Passey et al. Nov 2011 B2
8055711 Fachan et al. Nov 2011 B2
8060521 Lemar et al. Nov 2011 B2
8082379 Fachan et al. Dec 2011 B2
8112395 Patel et al. Feb 2012 B2
8176013 Passey et al. May 2012 B2
20010042224 Stanfill et al. Nov 2001 A1
20010047451 Noble et al. Nov 2001 A1
20010056492 Bressoud et al. Dec 2001 A1
20020002661 Blumenau et al. Jan 2002 A1
20020010696 Izumi Jan 2002 A1
20020029200 Dulin et al. Mar 2002 A1
20020035668 Nakano et al. Mar 2002 A1
20020038436 Suzuki Mar 2002 A1
20020049778 Bell et al. Apr 2002 A1
20020055940 Elkan May 2002 A1
20020072974 Pugliese et al. Jun 2002 A1
20020075870 de Azevedo et al. Jun 2002 A1
20020078161 Cheng Jun 2002 A1
20020078180 Miyazawa Jun 2002 A1
20020083078 Pardon et al. Jun 2002 A1
20020083118 Sim Jun 2002 A1
20020087366 Collier et al. Jul 2002 A1
20020095438 Rising et al. Jul 2002 A1
20020107877 Whiting et al. Aug 2002 A1
20020124137 Ulrich et al. Sep 2002 A1
20020138559 Ulrich et al. Sep 2002 A1
20020156840 Ulrich et al. Oct 2002 A1
20020156891 Ulrich et al. Oct 2002 A1
20020156973 Ulrich et al. Oct 2002 A1
20020156974 Ulrich et al. Oct 2002 A1
20020156975 Staub et al. Oct 2002 A1
20020158900 Hsieh et al. Oct 2002 A1
20020161846 Ulrich et al. Oct 2002 A1
20020161850 Ulrich et al. Oct 2002 A1
20020161973 Ulrich et al. Oct 2002 A1
20020163889 Yemini et al. Nov 2002 A1
20020165942 Ulrich et al. Nov 2002 A1
20020166026 Ulrich et al. Nov 2002 A1
20020166079 Ulrich et al. Nov 2002 A1
20020169827 Ulrich et al. Nov 2002 A1
20020170036 Cobb et al. Nov 2002 A1
20020174295 Ulrich et al. Nov 2002 A1
20020174296 Ulrich et al. Nov 2002 A1
20020178162 Ulrich et al. Nov 2002 A1
20020191311 Ulrich et al. Dec 2002 A1
20020194523 Ulrich et al. Dec 2002 A1
20020194526 Ulrich et al. Dec 2002 A1
20020198864 Ostermann et al. Dec 2002 A1
20030005159 Kumhyr Jan 2003 A1
20030009511 Giotta et al. Jan 2003 A1
20030014391 Evans et al. Jan 2003 A1
20030033308 Patel et al. Feb 2003 A1
20030061491 Jaskiewicz et al. Mar 2003 A1
20030109253 Fenton et al. Jun 2003 A1
20030120863 Lee et al. Jun 2003 A1
20030125852 Schade et al. Jul 2003 A1
20030126522 English et al. Jul 2003 A1
20030131860 Ashcraft et al. Jul 2003 A1
20030135514 Patel et al. Jul 2003 A1
20030149750 Franzenburg Aug 2003 A1
20030158861 Sawdon et al. Aug 2003 A1
20030158873 Sawdon et al. Aug 2003 A1
20030161302 Zimmermann et al. Aug 2003 A1
20030163726 Kidd Aug 2003 A1
20030172149 Edsall et al. Sep 2003 A1
20030177308 Lewalski-Brechter Sep 2003 A1
20030182312 Chen et al. Sep 2003 A1
20030182325 Manley et al. Sep 2003 A1
20030233385 Srinivasa et al. Dec 2003 A1
20030237019 Kleiman et al. Dec 2003 A1
20040003053 Williams Jan 2004 A1
20040024731 Cabrera et al. Feb 2004 A1
20040024963 Talagala et al. Feb 2004 A1
20040078680 Hu et al. Apr 2004 A1
20040078812 Calvert Apr 2004 A1
20040117802 Green Jun 2004 A1
20040133670 Kaminksky et al. Jul 2004 A1
20040143647 Cherkasova Jul 2004 A1
20040153479 Mikesell et al. Aug 2004 A1
20040158549 Matena et al. Aug 2004 A1
20040174798 Riguidel et al. Sep 2004 A1
20040189682 Troyansky et al. Sep 2004 A1
20040199734 Rajamani et al. Oct 2004 A1
20040199812 Earl et al. Oct 2004 A1
20040205141 Goland Oct 2004 A1
20040230748 Ohba Nov 2004 A1
20040240444 Matthews et al. Dec 2004 A1
20040260673 Hitz et al. Dec 2004 A1
20040267747 Choi et al. Dec 2004 A1
20050010592 Guthrie Jan 2005 A1
20050033778 Price Feb 2005 A1
20050044197 Lai Feb 2005 A1
20050066095 Mullick et al. Mar 2005 A1
20050114402 Guthrie May 2005 A1
20050114609 Shorb May 2005 A1
20050125456 Hara et al. Jun 2005 A1
20050131860 Livshits Jun 2005 A1
20050131990 Jewell Jun 2005 A1
20050138195 Bono Jun 2005 A1
20050138252 Gwilt Jun 2005 A1
20050171960 Lomet Aug 2005 A1
20050171962 Martin et al. Aug 2005 A1
20050187889 Yasoshima Aug 2005 A1
20050188052 Ewanchuk et al. Aug 2005 A1
20050192993 Messinger Sep 2005 A1
20050193389 Murphy et al. Sep 2005 A1
20050289169 Adya et al. Dec 2005 A1
20050289188 Nettleton et al. Dec 2005 A1
20060004760 Clift et al. Jan 2006 A1
20060041894 Cheng Feb 2006 A1
20060047713 Gornshtein et al. Mar 2006 A1
20060047925 Perry Mar 2006 A1
20060053263 Prahlad et al. Mar 2006 A1
20060059467 Wong Mar 2006 A1
20060074922 Nishimura Apr 2006 A1
20060083177 Iyer et al. Apr 2006 A1
20060095438 Fachan et al. May 2006 A1
20060101062 Godman et al. May 2006 A1
20060123211 Derk et al. Jun 2006 A1
20060129584 Hoang et al. Jun 2006 A1
20060129631 Na et al. Jun 2006 A1
20060129983 Feng Jun 2006 A1
20060161920 An et al. Jul 2006 A1
20060206536 Sawdon et al. Sep 2006 A1
20060230411 Richter et al. Oct 2006 A1
20060277432 Patel et al. Dec 2006 A1
20060288161 Cavallo Dec 2006 A1
20060294589 Achanta et al. Dec 2006 A1
20070038887 Witte et al. Feb 2007 A1
20070091790 Passey et al. Apr 2007 A1
20070094269 Mikesell et al. Apr 2007 A1
20070094277 Fachan et al. Apr 2007 A1
20070094310 Passey et al. Apr 2007 A1
20070094431 Fachan Apr 2007 A1
20070094449 Allison et al. Apr 2007 A1
20070094452 Fachan Apr 2007 A1
20070124337 Flam May 2007 A1
20070168351 Fachan Jul 2007 A1
20070171919 Godman et al. Jul 2007 A1
20070192254 Hinkle Aug 2007 A1
20070195810 Fachan Aug 2007 A1
20070198518 Luchangco et al. Aug 2007 A1
20070233684 Verma et al. Oct 2007 A1
20070233710 Passey et al. Oct 2007 A1
20070244877 Kempka Oct 2007 A1
20070255765 Robinson Nov 2007 A1
20070255921 Gole et al. Nov 2007 A1
20070288490 Longshaw Dec 2007 A1
20080005145 Worrall Jan 2008 A1
20080010507 Vingralek Jan 2008 A1
20080021907 Patel et al. Jan 2008 A1
20080031238 Harmelin et al. Feb 2008 A1
20080034004 Cisler et al. Feb 2008 A1
20080044016 Henzinger Feb 2008 A1
20080046432 Anderson et al. Feb 2008 A1
20080046443 Fachan et al. Feb 2008 A1
20080046444 Fachan et al. Feb 2008 A1
20080046445 Passey et al. Feb 2008 A1
20080046475 Anderson et al. Feb 2008 A1
20080046476 Anderson et al. Feb 2008 A1
20080046667 Fachan et al. Feb 2008 A1
20080059541 Fachan et al. Mar 2008 A1
20080059734 Mizuno Mar 2008 A1
20080126365 Fachan et al. May 2008 A1
20080151724 Anderson et al. Jun 2008 A1
20080154978 Lemar et al. Jun 2008 A1
20080155191 Anderson Jun 2008 A1
20080168209 Davison Jul 2008 A1
20080168304 Flynn et al. Jul 2008 A1
20080168458 Fachan et al. Jul 2008 A1
20080243773 Patel et al. Oct 2008 A1
20080256103 Fachan et al. Oct 2008 A1
20080256537 Fachan et al. Oct 2008 A1
20080256545 Fachan et al. Oct 2008 A1
20080263549 Walker Oct 2008 A1
20080294611 Anglin et al. Nov 2008 A1
20090055399 Lu et al. Feb 2009 A1
20090055604 Lemar et al. Feb 2009 A1
20090055607 Schack et al. Feb 2009 A1
20090125563 Wong et al. May 2009 A1
20090210880 Fachan et al. Aug 2009 A1
20090248756 Akidau et al. Oct 2009 A1
20090248765 Akidau et al. Oct 2009 A1
20090248975 Daud et al. Oct 2009 A1
20090249013 Daud et al. Oct 2009 A1
20090252066 Passey et al. Oct 2009 A1
20090327218 Passey et al. Dec 2009 A1
20100016155 Fachan Jan 2010 A1
20100016353 Mikesell Jan 2010 A1
20100122057 Strumpen et al. May 2010 A1
20100161556 Anderson et al. Jun 2010 A1
20100161557 Anderson et al. Jun 2010 A1
20100185592 Kryger Jul 2010 A1
20100223235 Fachan Sep 2010 A1
20100235413 Patel Sep 2010 A1
20100241632 Lemar et al. Sep 2010 A1
20100306786 Passey Dec 2010 A1
20110022790 Fachan Jan 2011 A1
20110035412 Fachan Feb 2011 A1
20110044209 Fachan Feb 2011 A1
20110060779 Lemar et al. Mar 2011 A1
20110087635 Fachan Apr 2011 A1
20110113211 Fachan et al. May 2011 A1
20110119234 Schack et al. May 2011 A1
20110145195 Passey et al. Jun 2011 A1
20110153569 Fachan et al. Jun 2011 A1
Foreign Referenced Citations (22)
Number Date Country
0774723 May 1997 EP
1421520 May 2004 EP
1563411 Aug 2005 EP
2284735 Feb 2011 EP
2299375 Mar 2011 EP
04096841 Mar 1992 JP
2000-047831 Feb 2000 JP
2000-099282 Apr 2000 JP
2002-091804 Mar 2002 JP
2006-506741 Jun 2004 JP
4464279 May 2010 JP
4504677 Jul 2010 JP
WO 9429796 Dec 1994 WO
WO 0057315 Sep 2000 WO
WO 0114991 Mar 2001 WO
WO 0133829 May 2001 WO
WO 02061737 Aug 2002 WO
WO 03012699 Feb 2003 WO
WO 2004046971 Jun 2004 WO
WO 2008021527 Feb 2008 WO
WO 2008021528 Feb 2008 WO
WO 2008127947 Oct 2008 WO
Related Publications (1)
Number Date Country
20080151724 A1 Jun 2008 US