The present invention relates generally to disk drive systems and methods, and more particularly to disk drive systems and methods having dynamic block architecture RAID Device Management, Reallocation, and Restriping for optimizing RAID Device layout when changes to RAID parameters or the disk configuration occur.
Existing disk drive systems have been designed in such a way that a Virtual Volume is distributed (or mapped) across the physical disks in a manner that is determined at volume creation time and remains static throughout the lifetime of the Virtual Volume. That is, the disk drive systems statically allocate data based on the specific location and size of the virtual volume of data storage space. Should the Virtual Volume prove inadequate for the desired data storage purposes, existing systems require the creation of a new Virtual Volume and the concomitant copying of previously stored data from the old Virtual Volume to the new in order to change volume characteristics. This procedure is time-consuming and expensive, since it requires duplicate physical disk drive space.
These prior art disk drive systems need to know, monitor, and control the exact location and size of the Virtual Volume of data storage space in order to store data. In addition, the systems often need larger data storage space, whereby more RAID Devices are added. As a result, freed data storage space goes unused, and extra data storage devices, e.g., RAID Devices, are acquired in advance for storing, reading/writing, and/or recovering data in the system. Additional RAID Devices are expensive and are not required until extra data storage space is actually needed.
Therefore, there is a need for improved disk drive systems and methods, and more particularly a need for efficient, dynamic RAID space and time management systems. There is a further need for improved disk drive systems and methods for allowing RAID management, reallocation, and restriping to occur without loss of server or host data access or compromised resiliency.
The present invention, in one embodiment, is a method of RAID Restriping in a disk drive system. The method includes selecting an initial RAID device for migration based on at least one score, creating an alternate RAID device, moving data stored at the initial RAID device to the alternate RAID device, and removing the initial RAID device. The scores may include an initial score, a replacement score, and an overlay score. Furthermore, the method may be performed automatically by the system or manually, such as by a system administrator. The method may be performed periodically, continuously, after every RAID device migration, upon addition of disk drives, and/or before removal of disk drives.
The present invention, in another embodiment, is a disk drive system having a RAID subsystem and a disk manager. The disk manager is configured to automatically calculate a score for each RAID device of the RAID subsystem, select a RAID device from the subsystem based on the relative scores of the RAID devices, create an alternate RAID device, move a portion of the data stored at the selected RAID device to the alternate RAID device, and remove the selected RAID device.
The present invention, in yet another embodiment, is a disk drive system including means for selecting a RAID device for migration based on at least one score calculated for each RAID device, means for creating at least one alternate RAID device, means for moving data stored at the selected RAID device to the at least one alternate RAID device, and means for removing the selected RAID device.
While multiple embodiments are disclosed, still other embodiments of the present invention will become apparent to those skilled in the art from the following detailed description, which shows and describes illustrative embodiments of the invention. As will be realized, the invention is capable of modifications in various obvious aspects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not restrictive.
While the specification concludes with claims particularly pointing out and distinctly claiming the subject matter that is regarded as forming the present invention, it is believed that the invention will be better understood from the following description taken in conjunction with the accompanying Figures, in which:
Various embodiments of the present invention relate generally to disk drive systems and methods, and more particularly to disk drive systems and methods which implement one or more Virtual Volumes spread across one or more RAID Devices, which in turn are constructed upon a set of disk drives. RAID Device Management, Reallocation, and Restriping (“Restriping”) provides a system and method for changing the various properties associated with a Virtual Volume such as size, data protection level, relative cost, access speed, etc. This system and method may be initiated by administration action or automatically when changes to the disk configuration occur.
The various embodiments of the present disclosure provide improved disk drive systems having a dynamic block architecture RAID Device Restriping that may optimize RAID Device layout when changes to RAID parameters or disk configuration occur. In one embodiment, the layout of RAID Devices may be primarily rebalanced when disks are added to the system. By rebalancing, virtualization performance may be improved within the system by using the maximum available disk configuration. Restriping also may provide the capability to migrate data away from a group of disks, allowing those disks to be removed from the system without loss of uptime or data protection. Further, Restriping may provide the capability to change RAID parameters giving the user the ability to tune the performance and/or storage capacity even after the data has been written. Restriping additionally may provide an improved disk drive system and method for allowing Restriping to occur without loss of server or host data access or compromised resiliency.
Various embodiments described herein improve on existing disk drive systems in multiple ways. In one embodiment, the mapping between a Virtual Volume and the physical disk drive space may be mutable on a fine scale. In another embodiment, previously stored data may be migrated automatically in small units, and the appropriate mappings may be updated without the need for an entire duplication of physical resources. In a further embodiment, portions of a Virtual Volume which are already mapped to appropriate resources need not be migrated, reducing the time needed for reconfiguration of a Volume. In yet another embodiment, the storage system can automatically reconfigure entire groups of Virtual Volumes in parallel. Additionally, the storage system may automatically reconfigure Virtual Volumes when changes to the physical resources occur. Other advantages over prior disk drive systems will be recognized by those skilled in the art and are not limited to those listed.
Furthermore, Restriping and disk categorization may be powerful tools for administrative control of the storage system. Disk drives which, for example, are found to be from a defective manufacturing lot, may be recategorized so that migration away from these disk drives occurs. Similarly, a set of drives may be held in a “reserve” category, and later recategorized to become part of a larger in-use group. Restriping to widen the RAID Devices may gradually incorporate these additional reserve units. It is noted that several benefits may be recognized by the embodiments described herein, and the previous list of examples is not exhaustive and not limiting.
For the purposes of describing the various embodiments herein, a "Volume" may include an externally accessible container for storing computer data. In one embodiment, a container may be presented via the interconnect protocol as a contiguous array of blocks. In a further embodiment, each block may have a fixed size, traditionally 512 bytes, although other block sizes may be used, such as 256 bytes, 1,024 bytes, etc. Typically, supported operations performed on data at any given location may include 'write' (store) and 'read' (retrieve), although other operations, such as 'verify', may also be supported. The interconnect protocol used to access Volumes may be the same as that used to access disk drives. Thus, in some embodiments, a Volume may appear and function generally identically to a disk drive. Volumes traditionally may be implemented as partitions of a disk drive or simple concatenations of disk drives within an array.
A “Virtual Volume,” as used herein, may include an externally accessible container for storing data which is constructed from a variety of hardware and software resources and generally may mimic the behavior of a traditional Volume. In particular, a system containing a disk drive array may present multiple Virtual Volumes which utilize non-intersecting portions of the disk array. In this type of system, the storage resources of the individual disk drives may be aggregated in an array, and subsequently partitioned into individual Volumes for use by external computers. In some embodiments, the external computers may be servers, hosts, etc.
A “RAID Device,” as used herein, may include an aggregation of disk partitions which provides concatenation and resiliency to disk drive failure. The RAID algorithms for concatenation and resiliency are well known and include such RAID levels as RAID 0, RAID 1, RAID 0+1, RAID 5, RAID 10, etc. In a given disk array, multiple RAID Devices may reside on any given set of disks. Each of these RAID Devices may employ a different RAID level, have different parameters, such as stripe size, may be spread across the individual disk drives in a different order, may occupy a different subset of the disk drives, etc. A RAID Device may be an internally accessible Virtual Volume. It may provide a contiguous array of data storage locations of a fixed size. The particular RAID parameters determine the mapping between RAID Device addresses and the data storage addresses on the disk drives. In the present disclosure, systems and methods for constructing and modifying externally accessible Virtual Volumes from RAID Devices are described that provide the improved functionality.
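As an illustration of how RAID parameters determine this mapping, a minimal Python sketch of plain RAID 0 striping follows. The function name and parameters are illustrative, not from the disclosure; parity levels such as RAID 5 add rotated parity extents not shown here.

```python
# Minimal sketch of RAID 0 address mapping: stripe size and drive count
# fully determine where each RAID Device block lands on the disk drives.

def raid0_map(lba, stripe_size, drive_count):
    """Map a RAID Device block address to (drive index, block on that drive)."""
    stripe = lba // stripe_size          # which stripe unit holds this address
    drive = stripe % drive_count         # stripe units rotate across the drives
    row = stripe // drive_count          # completed stripe rows before this one
    return drive, row * stripe_size + (lba % stripe_size)

# Example: 64-block stripes on 5 drives; block 1000 is in stripe 15,
# which lands on drive 0 at block 232 of that drive's partition.
print(raid0_map(1000, 64, 5))  # -> (0, 232)
```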
Virtual Volume Construction
A storage system which utilizes the present disclosure may initially construct a set of RAID Devices having various characteristics on a disk array. The RAID Devices may be logically divided into units referred to herein as "pages," which may be many blocks in size. A typical page size may be 4,096 blocks, although in principle any page size from 1 block onwards could be used; in practice, page sizes are generally a power-of-two number of blocks. These pages may be managed by Virtual Volume management software. Initially, all the pages from each RAID Device may be marked as free. Pages may be dynamically allocated to Virtual Volumes on an as-needed basis. That is, pages may be allocated when it is determined that a given address is first written. Addresses that are read before being written can be given a default data value. The Virtual Volume management software may maintain the mapping between Virtual Volume addresses and pages within the RAID Devices. It is noted that a given Virtual Volume may be constructed of pages from multiple RAID Devices, which may further have differing properties.
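A rough sketch of this allocate-on-write behavior follows; the class and method names (PagePool, VirtualVolume) are hypothetical, not from the disclosure.

```python
# Hypothetical sketch of allocate-on-write page management.

DEFAULT_DATA = b"\x00" * 512  # returned for addresses read before any write

class PagePool:
    """Tracks the free pages contributed by one or more RAID Devices."""
    def __init__(self):
        self.free_pages = []          # (raid_device_id, page_index) tuples

    def add_raid_device(self, device_id, page_count):
        # Initially, all pages from a new RAID Device are marked as free.
        self.free_pages.extend((device_id, i) for i in range(page_count))

    def allocate(self):
        if not self.free_pages:
            raise RuntimeError("page pool exhausted")
        return self.free_pages.pop()

    def release(self, page):
        self.free_pages.append(page)

class VirtualVolume:
    """Maps Virtual Volume addresses to RAID Device pages on first write."""
    def __init__(self, pool):
        self.pool = pool
        self.mapping = {}             # volume page address -> pool page
        self.storage = {}             # stand-in for on-disk page contents

    def write(self, address, data):
        if address not in self.mapping:       # allocate on first write
            self.mapping[address] = self.pool.allocate()
        self.storage[self.mapping[address]] = data

    def read(self, address):
        page = self.mapping.get(address)
        return self.storage[page] if page is not None else DEFAULT_DATA
```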
Extending the size of a Virtual Volume constructed in this manner may be accomplished by increasing the range of addresses presented to the server. The address-to-page mapping may continue with the same allocate-on-write strategy in both the previously available and extended address ranges.
The performance and resiliency properties of a given Virtual Volume may be determined in large part by the aggregate behavior of the pages allocated to that Virtual Volume. The pages inherit their properties from the RAID Device and physical disk drives on which they are constructed. Thus, in one embodiment, page migration between RAID Devices may occur in order to modify properties of a Virtual Volume, other than size. "Migration," as used herein, may include allocating a new page, copying the previously written data from the old page to the new, updating the Virtual Volume mapping, and marking the old page as free. Traditionally, it may not be possible to convert the RAID Device properties (i.e., remap to a new RAID level, stripe size, etc.) and simultaneously leave the data in place.
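Continuing the hypothetical sketch above, the four migration steps just described might look like the following.

```python
# Hypothetical page migration, continuing the sketch above: allocate a new
# page, copy the previously written data, update the Virtual Volume
# mapping, and mark the old page as free.

def migrate_page(volume, address, target_pool, source_pool):
    old_page = volume.mapping[address]
    new_page = target_pool.allocate()                        # allocate new page
    volume.storage[new_page] = volume.storage.pop(old_page)  # copy old data
    volume.mapping[address] = new_page                       # update mapping
    source_pool.release(old_page)                            # free old page
```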
There are several independent parameters which may be modified to produce different Virtual Volume properties. Several of the scenarios are outlined in detail herein. However, the scenarios described in detail herein are exemplary of various embodiments of the present disclosure and are not limiting. The present disclosure, in some embodiments, may include simultaneous modification of any or all of these parameters.
RAID Parameter Modification
For purposes of illustration, a disk array 100 containing five disks 102, 104, 106, 108, 110 is shown in
When the migration is complete, RAID Device A 112 may be deleted, leaving the example RAID configuration shown in
The exemplary RAID reconfiguration from that of
Adding Disk Drives
Another embodiment having a disk array 200 containing five disks 202, 204, 206, 208, 210 is illustrated in
In this sequence, the wider RAID Device C 220 may be created and data from RAID Device A 216 may be migrated to RAID Device C 220. RAID Device A 216 may then be deleted, and RAID Device D 222 may be created. RAID Device D 222 may be used to relocate the data previously contained in RAID Device B 218.
In doing so, the only extra space needed on the original disk drives may be that required to create RAID Device C 220. In one embodiment of the example illustration, in the case wherein no other RAID parameter changes, each extent of RAID Device C 220 may be 5/7 the size of the extents used in constructing RAID Device A 216, since RAID Device C is spread among 7 drives (the 5 initial drives plus the 2 additional drives) rather than 5.
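A small worked illustration of this relationship follows, assuming, as the example above does, that no other RAID parameter changes and that all drives carry data; parity levels would alter the ratio.

```python
# The same RAID Device capacity spread across more drives requires
# proportionally smaller per-drive extents.

def widened_extent_size(old_extent_size, old_drive_count, new_drive_count):
    return old_extent_size * old_drive_count / new_drive_count

# Widening from 5 drives to 7 (5 initial + 2 added): extents are 5/7 as large.
print(widened_extent_size(70, 5, 7))  # -> 50.0
```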
It is noted that the process may be entirely reversible and can be used to remove one or more disk drives from a system, such as, for example, if it was desired that disks 212 and 214 be removed from the example configuration of
The previous example of one embodiment described with reference to
The strategy for reconfiguring the system shown in
In one embodiment, a data progression process may manage the movement of data between the initial RAID Device and the temporary RAID Device(s), or in other cases, new permanent RAID Device(s). In further embodiments, Restriping may attempt to use the same RAID level, if available. In other embodiments, Restriping may move the data to a different RAID level.
The size of a temporary RAID Device may depend on the initial RAID Device size and available space within a page pool. The size of the temporary RAID Device may provide sufficient space, such that when the initial RAID Device is deleted, the page pool may continue to operate normally and not allocate more space. The page pool may allocate more space at a configured threshold based on the size of the page pool.
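A hedged sketch of such a sizing check follows; the threshold policy, the specific accounting, and all names are assumptions for illustration.

```python
# Hypothetical check that a proposed temporary RAID Device is large enough:
# after migration and deletion of the initial device, the page pool should
# stay above the free-space threshold at which it would allocate more space.

def temp_device_is_sufficient(pool_total_pages, pool_free_pages,
                              initial_device_pages, temp_device_pages,
                              alloc_threshold=0.10):  # assumed 10% threshold
    # Migrated data consumes exactly as many temporary pages as it frees on
    # the initial device, so only the two device sizes change the totals.
    new_total = pool_total_pages - initial_device_pages + temp_device_pages
    new_free = pool_free_pages - initial_device_pages + temp_device_pages
    return new_free > alloc_threshold * new_total
```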
Once the data has been migrated away from RAID Device C 320, it can be deleted, providing space for a new RAID Device spanning all of the disk drives, e.g., RAID Device X 326. Deleting RAID Device C 320 may return the disk space it consumed to the free space on the disks. At this point, a disk manager may combine adjacent free space allocations into a single larger allocation to reduce fragmentation. Deleting a RAID Device may create free space across a larger number of disks than was previously available. A RAID Device with a higher Score can be created from this free space slice.
After the initial RAID Device C 320 is deleted, Restriping may create a replacement RAID Device X 326, as shown in
By judiciously limiting the size of the initial RAID Devices, e.g., RAID Devices A 316, B 318, and C 320, it may be possible to create RAID Device X 326 such that it can hold all the data from RAID Devices B 318 & E 324, for example, allowing the process to continue until the final configuration is achieved in
If a temporary RAID Device or temporary RAID Devices, e.g., RAID Devices D 322 and E 324, were created and marked as temporary, the RAID Devices may be marked for removal, as shown in
In one embodiment of Restriping, removal of the temporary RAID Devices may use a subset of the steps used for migration or removal of the initial RAID Device, such as the movement of data and deletion of the temporary RAID Devices.
In one embodiment, if the Score of a temporary RAID Device exceeds the Score of the initial RAID Device, the temporary RAID Device may be considered a permanent RAID Device. That is, it may not be automatically deleted as a part of the process to move a RAID Device. In further embodiments, the temporary RAID Device may be kept only if it has a sufficiently higher Score than the initial RAID Device.
Restriping may involve a number of further steps to remove an original low-scoring RAID Device and replace it with a new higher-scoring RAID Device. For example, Restriping may account for the possibility that the disks in the system are full, and have no space for another RAID Device. Restriping may trim excess space before attempting to restripe a RAID Device. Trimming excess space may free up additional disk space and increase the success rate of Restriping.
In some embodiments, Restriping may reach a deadlock. For example, the size of the temporary space may consume a portion of the space needed to move the initial RAID Device. If it becomes impossible to remove a RAID Device because all pages cannot be freed, the RAID Device may be marked as failed, and Restriping may move on to the next RAID Device that can or should be migrated.
With reference to
In addition to identifying RAID Devices for migration or removal, as shown in
In some embodiments, Restriping may limit the movements of RAID Devices. For example, to avoid thrashing the system, Restriping may not need to absolutely maximize the Score of a RAID Device. Restriping may also mark failed RAID Devices so as not to retry them.
Restriping may recognize new disks, create new RAID Devices which utilize the additional space, and move the data accordingly. After the process is complete, user data and free space may be distributed across all of the disk drives, including the initial disks and the additional disks. It is noted that Restriping may replace RAID Devices rather than extend them. It is appreciated that the positioning of free space and user allocations on any given disk may be arbitrary, and the arrangements shown in
Selection of RAID Device for Restriping
In one embodiment, as previously discussed, Restriping may handle:
In some embodiments, including embodiments having larger, more complicated systems, it may not be obvious which set of migration operations should be used in order to obtain the desired final configuration, or whether it is possible to get from the initial configuration to the desired final configuration within the existing resources. In one embodiment, a scoring and optimization technique may be used to select the particular RAID Device for removal and replacement. The scoring function, in an exemplary embodiment, may employ one or more of the following properties:
In another embodiment, Restriping may be divided into three components, such as scoring, examining, and moving. RAID Device scoring may be used to determine the quality of a given RAID Device based on requested parameters and disk space available. In one embodiment, scoring may generate three values. Restriping may provide a Score for an initial RAID Device and the scores of two possible alternative RAID Devices, referred to herein as the Replacement and Overlay Scores. Details of each score for one embodiment are described below:
With respect to the Replacement and Overlay Scores, the user accessible blocks for the RAID Device may remain the same as the number of disks changes. The three scores may provide the input parameters to develop a strategy for migrating from lower to higher scoring RAID Devices. In a particular embodiment, if the Replacement Score is higher than the initial Score, a straightforward migration like that described in
In one embodiment, factors used to determine the Scores may include one or more of the following:
Table 1 illustrates an example embodiment of scoring factors that may be used. As illustrated in Table 1, the variables may include Disks In Class, Disks In Folder, RAID Level, RAID Repeat Factor, RAID Extent Size, and RAID Drives in Stripe. Disks In Class, as used in the example scoring factors, may be determined by the equation:
(DisksInClass−3*DisksOutOfClass)*DisksInClassConstant
where DisksInClass may be the number of disks used by the RAID Device that are of the proper class, DisksOutOfClass may be the number of disks used by the RAID Device that are not of the proper class, and DisksInClassConstant may be a multiplicative constant value. Disk classes may include, but are not limited to, 15K FC, 10K FC, SATA, etc. For example, if a RAID Device was supposed to use 10K FC disks, but included two SATA disks, the value for DisksOutOfClass would be two. Disks In Folder, as used in the example scoring factors, may be determined by the equation:
(DisksInFolder−3*DisksOutOfFolder)*DisksInFolderConstant
where DisksInFolder may be the number of disks used by the RAID Device that are in the proper folder of disks, DisksOutOfFolder may be the number of disks used by the RAID Device that are not in the proper folder of disks, and DisksInFolderConstant may be a multiplicative constant value. Disk folders may organize which disks can be used by RAID Devices. Disks may be moved into, and out of, folder objects at any time to change their usage. RAID Level, as used in the example scoring factors, may be zero if the RAID Device uses an undesired RAID level. RAID Repeat Factor, RAID Extent Size, and RAID Drives in Stripe may each contribute a computed score divided by a factor of two. It is recognized that Table 1 illustrates one embodiment of example scoring factors and one embodiment of how the scoring factors are calculated and used. The example illustrated in Table 1 is for illustration purposes only and is not limiting. Any scoring factors, or group of scoring factors, may be used with the various embodiments disclosed herein. Furthermore, the scoring factors, or group of scoring factors, may be calculated or used in any suitable manner.
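A sketch of how these factors might combine into a Score is shown below. The class and folder terms follow the equations above; the constants, the value contributed by a desired RAID level, and the exact sub-scores for repeat factor, extent size, and drives in stripe are illustrative assumptions.

```python
# Hypothetical combination of the example scoring factors from Table 1.

DISKS_IN_CLASS_CONSTANT = 10    # illustrative constants
DISKS_IN_FOLDER_CONSTANT = 10
RAID_LEVEL_VALUE = 100          # assumed contribution for a desired level

def score_raid_device(disks_in_class, disks_out_of_class,
                      disks_in_folder, disks_out_of_folder,
                      raid_level_desired,
                      repeat_factor_score, extent_size_score,
                      drives_in_stripe_score):
    score = (disks_in_class - 3 * disks_out_of_class) * DISKS_IN_CLASS_CONSTANT
    score += (disks_in_folder - 3 * disks_out_of_folder) * DISKS_IN_FOLDER_CONSTANT
    score += RAID_LEVEL_VALUE if raid_level_desired else 0   # zero if undesired
    # Remaining factors each contribute a computed score divided by two.
    score += (repeat_factor_score + extent_size_score
              + drives_in_stripe_score) / 2
    return score
```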
In a further embodiment, Restriping may examine the Scores of the RAID Devices to determine which, if any, RAID Devices may be moved. Restriping may move RAID Devices with a score that is lower than either the Replacement or Overlay Scores. That is, in one embodiment, if the Replacement and/or Overlay Score is greater than the initial RAID Device Score, the RAID Device may be a candidate to move. In other embodiments, the initial RAID Devices may be selected for migration by any other means, including situations wherein the initial RAID Device Score is higher than the Replacement and Overlay Scores or by manual selection by a user, etc. Restriping may also determine that no RAID Devices should be moved. In a further embodiment, Restriping may pick a single RAID Device from the available RAID Devices to migrate.
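A sketch of this examine step follows; the dataclass and the largest-gain tie-break (anticipating the selection criterion discussed below) are illustrative choices, not from the disclosure.

```python
# Hypothetical examine step: candidates are non-failed RAID Devices whose
# Replacement or Overlay Score exceeds their current Score; a single device
# (here, the one with the largest potential gain) is picked, or None.

from dataclasses import dataclass

@dataclass
class DeviceScores:
    name: str
    score: float
    replacement_score: float
    overlay_score: float
    failed: bool = False

def pick_move_candidate(devices):
    candidates = [d for d in devices
                  if not d.failed
                  and max(d.replacement_score, d.overlay_score) > d.score]
    if not candidates:
        return None     # Restriping may determine no RAID Device should move
    return max(candidates,
               key=lambda d: max(d.replacement_score, d.overlay_score) - d.score)
```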
If Restriping identifies a RAID Device to move, migration of the RAID Device may occur. In one embodiment, migration may include determining necessary temporary space, movement of data from the RAID Device, cleanup of the initial RAID Device, and elimination of the temporary space. In another embodiment, a dynamic block architecture page pool may use the RAID Devices and handle the movement of data from lower scoring to higher scoring RAID Devices.
In another embodiment, Restriping may further reevaluate the scores of all RAID Devices after every RAID Device migration since the reallocation of disk space may change the Scores of other RAID Devices. In a further embodiment, the scores of all the RAID Devices may be periodically computed. In some embodiments, Restriping may continually compute the Scores of the RAID Devices. In yet another embodiment, the largest gain in score may be used to select a RAID Device for removal and replacement. A hysteresis mechanism may be used to prevent the process from becoming cyclic.
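One possible hysteresis rule is sketched below; the margin value and the rule itself are assumptions. A device is re-migrated only when its best alternative clears the current Score by a fixed margin, so small score shifts after each migration cannot cause cycles.

```python
# Hypothetical hysteresis check used when rescoring after each migration.

HYSTERESIS_MARGIN = 5   # illustrative margin

def worth_moving(score, replacement_score, overlay_score):
    return max(replacement_score, overlay_score) > score + HYSTERESIS_MARGIN
```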
RAID Device scoring may also handle different-sized disk drives.
From the above description and drawings, it will be understood by those of ordinary skill in the art that the particular embodiments shown and described are for purposes of illustration only and are not intended to limit the scope of the present invention. Those of ordinary skill in the art will recognize that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. References to details of particular embodiments are not intended to limit the scope of the invention.
Although the present invention has been described with reference to preferred embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
This application is a continuation of U.S. patent application Ser. No. 13/022,074, filed Feb. 7, 2011, which is a continuation of U.S. patent application Ser. No. 11/753,364, filed May 24, 2007, which claims priority to U.S. provisional patent application Ser. No. 60/808,045, filed May 24, 2006, each of which is incorporated herein by reference in its entirety.