The disclosure generally relates to the field of storage systems, and more particularly to zoned storage systems.
Consumers and businesses are both storing increasing amounts of data with third party service providers. Whether the third party service provider offers storage alone as a service or as part of another service (e.g., image editing and sharing), the data is stored on storage remote from the client (i.e., the consumer or business) and managed, at least partly, by the third party service provider. This increasing demand for cloud storage has been accompanied by, at the least, resistance to increased price per gigabyte, if not a demand for less expensive storage devices. Accordingly, storage technology has increased the areal density of storage devices at the cost of device reliability rather than at the cost of increased price. For instance, storage devices designed with shingled magnetic recording (SMR) technology increase areal density by increasing the number of tracks on a disk by overlapping the tracks.
Increasing the number of tracks on a disk increases the areal density of a hard disk drive without requiring new read/write heads. Using the same read/write head technology avoids increased prices. But reliability is decreased because more tracks are squeezed onto a disk by overlapping the tracks. To overlap tracks, SMR storage devices are designed without guard spaces between tracks. Without the guard spaces, writes impact overlapping tracks and a disk is more sensitive to various errors (e.g., seek errors, wandering writes, vibrations, etc.).
Embodiments of the disclosure may be better understood by referencing the accompanying drawings.
The term “disk” is commonly used to refer to a disk drive or storage device. This description uses the term “disk” to refer to one or more platters that are presented with a single identifier (e.g., drive identifier).
An SMR storage device presents sequences of sectors through multiple cylinders (i.e., tracks) as zones. Generally, an SMR disk does not allow random writes in a zone, although an SMR disk can have some zones configured for random writing. An SMR storage device initially writes into a zone at the beginning of the zone. To continue writing, the SMR storage device continues writing from where writing previously ended. This point (i.e., physical sector) at which a previous write ended is identified with a write pointer. As the SMR disk writes sequentially through a zone, the write pointer advances. Writes advance the write pointer through a zone until the write pointer is reset to the beginning of the zone or until the write pointer reaches the end of the zone. If a disk has more than one sequential zone, the zones can be written independently of each other. The number of zones and size of each zone can vary depending upon the manufacturer, and manufacturers can organize zones with guard bands between them.
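The zone and write-pointer behavior described above can be summarized with a short sketch. The following Python model is illustrative only (the class, names, and sizes are assumptions rather than an actual SMR device interface); it shows sequential writes advancing a write pointer until the pointer is reset or reaches the end of the zone.

```python
# Minimal model of a sequential-write zone; names and sizes are hypothetical.
class Zone:
    def __init__(self, start_sector: int, num_sectors: int):
        self.start = start_sector                    # first sector of the zone
        self.end = start_sector + num_sectors        # one past the last sector
        self.write_pointer = start_sector            # next sector that can be written

    def write(self, num_sectors: int) -> int:
        """Append num_sectors at the write pointer; return the first sector written."""
        if self.write_pointer + num_sectors > self.end:
            raise ValueError("write would exceed the end of the zone")
        first = self.write_pointer
        self.write_pointer += num_sectors            # sequential writes advance the pointer
        return first

    def reset(self) -> None:
        """Reset the write pointer to the beginning of the zone."""
        self.write_pointer = self.start


zone = Zone(start_sector=0, num_sectors=524288)      # e.g., 256 MiB of 512-byte sectors
zone.write(8)                                        # writing starts at the beginning of the zone
zone.write(16)                                       # continues from where the previous write ended
zone.reset()                                         # the pointer returns to the start of the zone
```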
A user/application level unit of data written to storage is referred to as an “object” herein. The write of a large object is referred to as “ingesting” the object since it often involves multiple write operations. When ingesting an object, a storage system can write parts of the object across different SMR disks for data durability and access efficiency, somewhat similar to striping data across a redundant array of independent disks (RAID). However, a RAID array uses a substantially smaller number of disks with static disk assignments, which results in data being clustered onto the disks of the RAID array. To meet larger scale demands while still providing data durability and access efficiency (“I/O efficiency”), a storage system can be designed to write an object across zones of a set of zones (“zone set”). Each zone of a zone set is contributed by an independently accessible storage medium (e.g., disks on different spindles). This avoids spindle contention, which allows for I/O efficiency when servicing client requests or when reconstructing data. To create a zone set, the storage system arbitrarily selects disks to contribute a zone for membership in the zone set. This results in a fairly even distribution of zone sets throughout the storage system. Otherwise, selection could be unintentionally biased toward a subset of disks in the storage system (e.g., the first n disks) and data would cluster on those disks, which increases the impact of disk failures on the storage system. Although disk selection for zone set membership is arbitrary, the arbitrary selection can be from a pool of disks that satisfy one or more criteria (e.g., health or activity based criteria). In addition, weights can be assigned to disks to influence the arbitrary selection. Although manipulating the arbitrary selection with weights or by reducing the pool of disks reduces the arbitrariness, it still distributes zone sets fairly evenly while accounting for client demand and/or disk health.
During an initialization and/or discovery phase, the storage system monitor 103 collects information about the disks of the storage system 101. The discovery phase can be part of initialization or may be triggered by a disk failure or disk install. When a disk fails or a disk is installed, the storage system monitor 103 updates the information. At stage A1, the storage system monitor 103 creates or updates system disks information 105. The system disks information 105 is depicted with a first column of disk identifiers and a second column of disk attributes based on the information collected from the storage devices of the storage system 101. The disk identifiers can be generated by the zone set manager 109 or the storage system monitor 103, or can be an external identifier (e.g., a manufacturer-specified globally unique identifier (GUID)). The disk attributes include a number of zones, information about the zones (e.g., starting location, size of zones), and status of the disk (e.g., offline, available). The disk attributes can also indicate additional information about the disks, such as disk capacity, sector size, health history, activity, etc. The information collected by the storage system monitor 103 at least describes each currently operational storage device of the storage system 101, and may also describe former storage devices or storage devices not currently accessible.
At stage A2, the zone set manager 109 captures available zone information in bit arrays 107. The zone set manager 109 accesses the system disks information 105, and creates/updates the available zones bit arrays 107. The first column of the bit arrays 107 in
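Although the description of the bit arrays 107 is cut short above, their use in the later stages suggests one plausible layout. The sketch below is a hypothetical illustration (the field names and structures are assumptions): one bit array per zone identifier, derived from the system disks information, with a set bit marking a zone as no longer available.

```python
# Hypothetical layout for the system disks information 105 and the available
# zones bit arrays 107; field names and values are illustrative assumptions.
system_disks_info = {
    "DSK3": {"num_zones": 4, "status": "available"},
    "DSK5": {"num_zones": 4, "status": "available"},
    "DSK7": {"num_zones": 4, "status": "offline"},   # hypothetical offline disk
    # ... one entry per storage device in the storage system 101
}

def build_available_zone_bit_arrays(disks_info, num_zones):
    """One array per zone identifier; 0 = zone still available, 1 = unavailable."""
    bit_arrays = {zone: {disk: 0 for disk in disks_info} for zone in range(num_zones)}
    for disk, attrs in disks_info.items():
        if attrs["status"] != "available":
            for zone in range(num_zones):
                bit_arrays[zone][disk] = 1           # all zones of an offline disk are unavailable
    return bit_arrays

available_zones = build_available_zone_bit_arrays(system_disks_info, num_zones=4)
```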
At stage A3, the zone set manager 109 sets parameters of the random number generator (RNG) 111. The zone set manager 109 can set parameters of the RNG 111 to randomly (or pseudo-randomly) generate numbers that can map to the storage devices. For example, the zone set manager 109 sets parameters of the RNG 111 to generate numbers between 0 and 16 in correspondence with the number of storage devices in the storage system 101.
At stage B1, the zone set manager 109 obtains disk selection values from the RNG 111 to create a zone set ZS3. The zone set manager 109 requests a number of values from the RNG 111 sufficient for width of a zone set. In this illustration, zone set width is 5 zones. The zone set manager 109 obtains the disk selection values {3,5,12,14,15} from the RNG 111. Embodiments can select more values than the zone set width from the RNG 111 in anticipation that some values may correspond to disks that are unavailable or otherwise unsuitable for use in the constructed zone set. Embodiments can also obtain values from the RNG 111 on an as-needed basis.
At stage B2, the zone set manager 109 accesses the available zones bit arrays 107 to identify disks with identifiers that correspond to the disk selection values from the RNG 111. For this illustration, the values 3, 5, 12, 14, and 15 correspond to DSK3, DSK5, DSK12, DSK14, and DSK15, respectively. Correspondence between the randomly generated values and the disk identifiers can vary. For instance, the RNG 111 could generate values from a larger range and a modulo operation could be used to resolve each value to a disk identifier. This example illustration presumes that all disks corresponding to the disk selection values are available. As will be discussed in the flowcharts, a storage system can perform other operations to determine availability of a disk and/or zone. According to the available zones bit arrays 107, the first zone (e.g., zone 0) of the disks DSK3, DSK12, and DSK15 is not available. As shown in zone set information 113, zone 0 of each of DSK3, DSK12, and DSK15 is already a member of zone set ZS0. DSK14 is the only disk with zone 0 available according to the zone set information 113. Since there are insufficient available zone 0s to create zone set ZS3, the zone set manager 109 creates zone set ZS3 from zone 1 of the disks DSK3, DSK5, DSK12, DSK14, and DSK15, and updates the available zones bit arrays 107 accordingly. In this example illustration, setting a bit to 1 in the bit arrays 107 indicates that a zone is no longer available. The available zones bit arrays 107 can also have a field to indicate disk status explicitly. Or the available zones bit arrays 107 can mark all zones of a disk as unavailable if the disk is unavailable. For simplicity, this example presumes that the constructed zone set will be comprised of similarly identified zones from all contributing disks, such as zone 1 from each disk in the zone set. Embodiments can select an arbitrary zone from each contributing disk, and are not limited to all zones in a zone set having the same zone identifier in their respective contributing disks.
At stage B3, the zone set manager 109 updates the zone set information 113 to indicate membership of zone set ZS3. The zone set manager 109 adds an entry into the zone set information 113 that identifies ZS3, zone 1, and each disk that contributes zone 1 to zone set ZS3.
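Stages B1 through B3 can be illustrated with a short sketch that mirrors this example: disk selection values {3, 5, 12, 14, 15}, zone 0 unavailable on most of the selected disks, and zone set ZS3 created from zone 1. The structures and names below are assumptions chosen to match the description, not an actual implementation.

```python
disks = [f"DSK{i}" for i in range(16)]           # assumes 16 disks, DSK0..DSK15

# Available zones bit arrays 107: 1 means the zone is no longer available on that disk.
available = {zone: {d: 0 for d in disks} for zone in range(4)}
for d in ("DSK3", "DSK5", "DSK12", "DSK15"):
    available[0][d] = 1                          # zone 0 already consumed by earlier zone sets

zone_set_info = {}                               # zone set information 113

# Stage B1: disk selection values obtained from the RNG 111.
selection_values = [3, 5, 12, 14, 15]
members = [f"DSK{v}" for v in selection_values]

# Stage B2: find the lowest zone identifier available on every selected disk.
zone = next(z for z in sorted(available)
            if all(available[z][d] == 0 for d in members))
for d in members:
    available[zone][d] = 1                       # mark the contributed zones unavailable

# Stage B3: record membership of zone set ZS3.
zone_set_info["ZS3"] = {"zone": zone, "members": members}
print(zone_set_info["ZS3"])                      # zone 1 of DSK3, DSK5, DSK12, DSK14, DSK15
```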
The zone set information 113 shows 2 closed zone sets and 2 open zone sets with the creation of ZS3. The zone set manager 109 creates zone sets on-demand after creating an initial, configurable number of zone sets, or creates no initial zone sets at all. The zone set manager 109 can create a zone set for writing and wait to create another until the current zone set is full or has insufficient remaining capacity. Creating zone sets on-demand allows the disk information used for selection to be current as of the time of writing into the zone set. Choosing disks for zone sets such that many groups of zone sets can be formed in which the zone sets within a group share no disks allows the zone sets of such a group to be accessed simultaneously, in parallel, or in overlapping fashion without contention for their underlying disks.
The above example illustration uses a relatively small number of disks. Statistics for a larger storage system, for example 48 disks, more clearly illustrate the effectiveness of arbitrary disk selection. For the example 48 disk system, data is encoded with Reed-Solomon coding. The Reed-Solomon coding yields 18 erasure coded data fragments that allow the data to be reconstructed as long as 16 fragments are available. Accordingly, zone set width for the data is 18. The storage system will write each of the 18 data fragments into a different zone of the zone set. If 2 zone sets are open, each with a width of 18, then 36 of the 48 disks have contributed a zone to one of the 2 zone sets. If the system suffers a 3 disk failure, then there is a ~4.7% probability ((18/48)*(17/47)*(16/46)) that all three failures will fall within a given zone set; losing 3 of that zone set's 18 fragments leaves only 15, fewer than the 16 needed for reconstruction. With these 2 zone sets created, a triple disk failure in the 48 disk system causes data loss with only a ~9.4% probability (2*4.7%); equivalently, there is a ~90.6% probability (100−(2*4.7)) that the triple failure does not impact either zone set.
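The quoted percentages can be checked with a few lines of arithmetic, assuming the three failed disks are chosen uniformly at random without replacement:

```python
# 48 disks, zone set width 18, 2 open zone sets on disjoint disks,
# and a uniformly random triple disk failure.
p_same_set = (18 / 48) * (17 / 47) * (16 / 46)       # all 3 failures land in one given zone set
p_data_loss = 2 * p_same_set                          # either of the 2 disjoint zone sets
print(f"P(3 failures hit one given zone set) = {p_same_set:.3%}")    # ~4.7%
print(f"P(data loss across both zone sets)   = {p_data_loss:.3%}")   # ~9.4%
print(f"P(no data loss)                      = {1 - p_data_loss:.3%}")  # ~90.6%
```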
At block 201, a zone set manager determines a zone X available across a sufficient number of disks W, and identifies those disks. For this example, sufficient disks W corresponds to zone set width. In an example with zone 0 reserved and a protection scheme that yields 18 data fragments, the zone set manager initially determines whether the storage system has 18 disks with zone X available based on the status information obtained from the storage devices.
At block 207, the zone set manager obtains W disk selection values from a RNG or pseudo-RNG (hereinafter RNG will be used to refer to both a RNG and a pseudo-RNG). The zone set manager either requests W values from the RNG or makes W separate requests, each for a single random value. The zone set manager discards repeat values until all W values are unique with respect to the set of disk selection values.
At block 209, the zone set manager determines whether any of the disk selection values do not correspond to one of the identified disks. The zone set manager determines whether each of the disk selection values from the RNG corresponds to one of the identified disks that has zone X available. If a disk selection value does not correspond to one of the identified disks, then a different random disk selection value should be obtained at block 211. Otherwise, control flows to block 217.
At block 211, the zone set manager replaces the one or more disk selection values that did not correspond to any of the identified disks. The zone set manager uses the RNG again to obtain the values and discards any repeat of the values being replaced. Control flows from block 211 back to block 209.
At block 217, the zone set manager updates data that indicates availability of zone X in the storage system. In particular, the zone set manager updates availability data to indicate that zone X on the disks corresponding to the disk selection values is no longer available. If bit arrays are used, the zone set manager accesses the array corresponding to zone X and updates each entry corresponding to the disk selection values to indicate the zones are no longer available.
At block 219, the zone set manager updates zone set data to reflect membership of the created zone set N. The zone set manager updates the zone set data to identify zone set N, to indicate zone X, and to indicate the contributing disks which correspond to the disk selection values.
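One way the flow of blocks 207 through 219 might look in code is sketched below. It is a hedged illustration, not the claimed implementation: the obtain/replace loop of blocks 207-211 is folded into a single drawing loop, and the availability data is assumed to be the per-zone bit arrays described earlier.

```python
import random

def select_member_disks(identified, width, num_disks, rng=random):
    """Sketch of blocks 207-211: obtain unique disk selection values from the RNG and
    replace any value that does not correspond to an identified disk.
    `identified` maps a disk selection value to a disk identifier with zone X available;
    block 201 is assumed to have ensured it holds at least `width` entries."""
    selection, rejected = set(), set()
    while len(selection) < width:
        value = rng.randrange(num_disks)
        if value in selection or value in rejected:
            continue                              # discard repeats
        if value not in identified:
            rejected.add(value)                   # blocks 209/211: replace this value
            continue
        selection.add(value)
    return [identified[v] for v in sorted(selection)]

def record_zone_set(zone_sets, availability, name, zone_x, member_disks):
    """Sketch of blocks 217-219: mark zone X unavailable on the member disks and
    record membership of the created zone set."""
    for disk in member_disks:
        availability[zone_x][disk] = 1            # 1 = zone no longer available
    zone_sets[name] = {"zone": zone_x, "members": member_disks}
```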
At block 301, a zone set manager determines a current zone value X. The storage system can store a current zone value X to track progress through zones as zone sets are created and used. To avoid imbalanced use of the outer zones of disks, zone set creation can advance through all non-reserved, available zones. Using the example of a 48 disk storage system with an 18 zone set width, two zone sets could be created per non-reserved zone identifier. Two zone sets from zone 1 will occupy 36 disks. The remaining 12 disks are insufficient for the zone set width, so zone set creation will advance to zone 2. Thus, when the available zones for a zone identifier become insufficient for the zone set width, the value X is updated.
At block 303, the zone set manager determines whether at least W disks have zone X available. The zone set manager reads data that indicates availability of zones per disk to make the determination. For instance, the zone set manager can read a bit array for zone X and count each entry with a bit that indicates the zone is available on the corresponding disk. If fewer than W disks have zone X available, then control flows to block 305. Otherwise, control flows to block 307.
At block 305, the zone set manager updates the current zone value X. The zone set manager can increment X and then validate the result against the size of the storage system. The zone set manager can simply do a relative comparison to determine whether incremented X is greater than the total number of available disks. Or the zone set manager can perform a modulo operation on X with the total number of available disks, and set X to the result of the modulo operation. Control returns to block 303 from block 305.
At block 307, the zone set manager identifies the disks with the available zone X. The zone set manager has determined that there are sufficient disks (e.g., at least W) that can contribute zone X to the zone set N. Using the bit arrays example, the zone set manager traverses the bit array for zone X and resolves each entry with the “available” bit set to a disk identifier. The zone set manager can generate a list of these disk identifiers (or the bit array indices) for later comparison against the disk selection values.
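Blocks 301 through 307 amount to advancing the current zone value X until enough disks have zone X available. A minimal sketch, assuming the per-zone availability structure used in the earlier sketches (zone identifiers 0 through num_zones−1), might look like:

```python
def find_zone_and_disks(availability, width, start_zone=0):
    """Sketch of blocks 301-307: advance the current zone value X until at least
    W disks have zone X available, then return X and those disks.
    `availability` maps a zone identifier to {disk_id: bit}, where bit 0 = available."""
    num_zones = len(availability)
    x = start_zone
    for _ in range(num_zones):                        # at most one pass over all zones
        disks = [d for d, bit in availability[x].items() if bit == 0]
        if len(disks) >= width:                       # block 303: enough disks have zone X
            return x, disks                           # block 307: identify those disks
        x = (x + 1) % num_zones                       # block 305: update the current zone value
    raise RuntimeError("no zone is available on at least W disks")
```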
Although arbitrary selection should distribute zone sets approximately evenly across disks of a storage system, other factors may warrant some influence over the selection. For instance, the storage devices may be of different ages and/or different manufacturers. Thus, health and/or reliability of the storage devices can vary. To account for a storage system of heterogeneous devices and possibly a sequence of disk selection values that cluster, a zone set manager can apply one or more eligibility criteria to available disks and arbitrarily select from available, eligible disks.
At block 401, a zone set manager determines a current zone value X. The storage system can store a current zone value X to track progress through zones as zone sets are created and used.
At block 403, the zone set manager determines whether at least W disks have zone X available. The zone set manager reads data that indicates availability of zones per disk to make the determination. For instance, the zone set manager can read a bit array for zone X and count each entry with a bit that indicates the zone is available on the corresponding disk. If fewer than W disks have zone X available, then control flows to block 405. Otherwise, control flows to block 407.
At block 405, the zone set manager updates the current zone value X. The zone set manager can increment X and then validate the result against the size of the storage system. The zone set manager can do a relative comparison to determine whether incremented X is greater than the total number of available disks. Or the zone set manager can perform a modulo operation on X with the total number of available disks, and set X to the result of the modulo operation. Control returns to block 403 from block 405.
At block 407, the zone set manager obtains information related to the eligibility criterion for the disks determined to have zone X available. An eligibility criterion can relate to performance, health, and/or activity. For instance, a zone set manager can set a performance criterion for one or more zone sets that are bound to a quality of service requirement. The zone set manager can obtain performance information about the storage devices from the attribute information collected from the storage devices. For a health related criterion example, the zone set manager can apply a criterion based on a specific aspect of health (e.g., read error rate, reallocated sectors count, spin retry count) or a quantification of overall health. The zone set manager can obtain current health information from a Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) system. For an activity related criterion example, the zone set manager can apply a criterion that disqualifies a disk from selection based on number of input/output (I/O) accesses within a preceding time period to avoid disrupting storage devices busy servicing client requests. The zone set manager can obtain I/O information from the storage devices.
At block 408, the zone set manager determines whether at least W of the disks with zone X available satisfy the eligibility criterion. The zone set manager can traverse a bit array for zone X and evaluate the eligibility information for each disk corresponding to an entry in the zone X bit array that indicates “available.” The zone set manager can create a list of the entries corresponding to disks determined to be eligible. If fewer than W of the disks with zone X available satisfy the eligibility criterion, then control flows to block 405. Otherwise, control flows to block 409. The zone set manager can maintain a retry counter that indicates the number of times fewer than W eligible disks with zone X available were found. Repeated failure to identify sufficient disks that satisfy the eligibility criterion may indicate a system issue, so exceeding a retry threshold can cause a notification to be generated.
At block 409, the zone set manager identifies the eligible disks with the available zone X. The zone set manager has determined that there are sufficient disks (e.g., at least W) that are eligible and that can contribute zone X to the zone set N. The zone set manager can return the already generated list of entry indices that correspond to eligible disks, or resolve the entry indices to the disk identifiers and return the disk identifiers.
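Blocks 407 through 409 can be sketched as a filter over the disks that have zone X available. The thresholds and the health/activity fields below (read error rate, recent I/O count) are illustrative assumptions, not prescribed criteria:

```python
def eligible_disks(candidate_disks, disk_info, width,
                   max_read_error_rate=0.01, max_recent_io=1000):
    """Sketch of blocks 407-409: filter the disks with zone X available by
    health- and activity-based eligibility criteria. The thresholds and the
    disk_info fields (read_error_rate, recent_io) are hypothetical."""
    eligible = [d for d in candidate_disks
                if disk_info[d]["read_error_rate"] <= max_read_error_rate
                and disk_info[d]["recent_io"] <= max_recent_io]
    if len(eligible) < width:          # block 408: insufficient eligible disks
        return None                    # caller advances zone X (block 405)
    return eligible                    # block 409: identified eligible disks
```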
In addition to (or instead of) arbitrarily selecting from eligible disks, a storage system in accordance with this disclosure can use weights to preferentially select from the already arbitrarily selected disks. Similar to the eligibility criterion, the weights can relate to disk performance information, disk health information, and/or disk I/O access information.
At block 501, a zone set manager determines a zone X available across a sufficient number of disks W, and identifies those disks. In this example, the sufficient number W is a value greater than the width of the zone set.
At block 503, the zone set manager assigns weights to the identified disks based on disk information. The zone set manager determines which disk attributes affect weighting. Configuration data can specify the weighting system. The zone set manager obtains information about the disk attributes that affect weighting for each identified disk and calculates the weight to assign to the identified disk.
At block 505, the zone set manager obtains W disk selection values from a RNG. The zone set manager either requests W values from the RNG or makes W separate requests, each for a single random value. The zone set manager discards repeat values until all W values are unique with respect to the set of disk selection values. The sufficient number W is greater than the zone set width (S) to allow for preferential selection based on weights.
At block 507, the zone set manager determines whether any of the disk selection values do not correspond to one of the identified disks. The zone set manager determines whether each of the disk selection values from the RNG corresponds to one of the identified disks that has zone X available. If a disk selection value does not correspond to one of the identified disks, then a different random disk selection value should be obtained at block 511. Otherwise, control flows to block 513.
At block 511, the zone set manager replaces the one or more disk selection values that did not correspond to any of the identified disks. The zone set manager uses the RNG again to obtain the values and discards any repeat of the values being replaced. Control flows from block 511 back to block 507.
At block 513, the zone set manager selects S of the disks corresponding to the disk selection values based on the assigned weights. Assuming that a greater weight corresponds to greater preference, the zone set manager selects the S disks with the greatest assigned weights. In some implementations, a lower weight can instead correspond to greater preference.
At block 517, the zone set manager updates data that indicates availability of zones in the storage system. The zone set manager updates this zone availability data to indicate that zone X on each of the selected S disks is no longer available. If bit arrays are used, the zone set manager accesses the array corresponding to zone X and updates each entry corresponding to the S selected disks to indicate the zones are no longer available.
At block 519, the zone set manager updates zone set data to reflect membership of the created zone set N. The zone set manager updates the zone set data to identify zone set N, to indicate zone X, and to indicate the contributing disks which correspond to the S selected disks.
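The weighted variant of blocks 503 through 513 can be sketched as oversampling W candidate disks arbitrarily and then keeping the S most preferred by weight. The sketch below abstracts the draw-and-replace loop of blocks 505-511 into a single random sample and assumes that greater weight means greater preference:

```python
import random

def weighted_zone_set_members(identified, weights, zone_set_width, oversample, rng=random):
    """Sketch of blocks 503-513: arbitrarily pick W > S candidate disks, then keep
    the S disks with the greatest assigned weights. `identified` is the list of
    disks with zone X available; `weights` maps a disk to its assigned weight."""
    w = min(oversample, len(identified))              # W candidates, with W > zone set width S
    candidates = rng.sample(identified, w)            # arbitrary selection (blocks 505-511)
    candidates.sort(key=lambda disk: weights[disk], reverse=True)
    return candidates[:zone_set_width]                # block 513: S most-preferred disks
```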
Variations
Embodiments can use an RNG differently. Although only
The previously discussed weights and disk eligibility criteria accounted for disks with different health and different I/O activity, which may be due to heterogeneity of the storage system (e.g., different ages, different manufacturers, etc.). Embodiments can prioritize which attributes are weighted and/or which eligibility criteria are applied. Embodiments can also set conditions for application of weights and/or an eligibility criterion. Prioritization and/or conditions may be related to service requirements (QoS, service level objectives, etc.). A zone set being created may be assigned a service requirement based on a requesting customer, the type of object being written, etc. For a zone set that may be assigned a higher class of service, weights and/or an eligibility criterion that relate to disk throughput may be prioritized to influence zone set membership.
The above examples use a same zone across disks to create a zone set, but this is not necessary. A zone set can be created with different zones. The zone set membership metadata would identify each disk and the particular zone contributed from each disk. Embodiments can construct zone sets with differently identified zones, can use zones that were skipped over, and can use differently identified zones to “patch” zone sets in which a member zone fails or otherwise becomes unavailable. Embodiments can generally use a same zone across disks, and then specify a condition for forming a zone set with different zones (i.e., different zone identifiers). A zone set manager can periodically scan the available zones data and determine whether “holes” exist. A hole can be one or more zones on a disk that have been skipped over. To illustrate, disk DSK37 may have zones 1-5 available even though the zone set manager is currently creating a zone set with zone 15 across disks. The zone set manager may fill in these holes by using these zones to create zone sets when insufficient disks are available to create a zone set. For instance, the zone set manager may only have 17 eligible disks with zone 15 available for a zone set having a width of 18. The zone set manager can elect to use zone 1 from disk DSK37 for the zone set as long as none of the other 17 disks are DSK37. For patching, embodiments can select a zone regardless of zone identifier to replace a failed/unavailable zone in a zone set.
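The “hole filling” variation can be sketched as a scan of the availability data for a skipped-over zone on a disk that is not already contributing to the zone set being created. The structures mirror the earlier sketches and are assumptions:

```python
def find_hole(availability, members_so_far):
    """Sketch of the hole-filling variation: scan the availability data for a
    skipped-over zone on a disk that is not already a member of the zone set
    being created (e.g., zone 1 of DSK37 when building a zone-15 zone set).
    `availability` maps a zone identifier to {disk_id: bit}, bit 0 = available."""
    for zone in sorted(availability):
        for disk, bit in availability[zone].items():
            if bit == 0 and disk not in members_so_far:
                return disk, zone          # this zone can patch or complete the zone set
    return None                            # no hole available
```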
The flowcharts are provided to aid in understanding the illustrations and are not to be used to limit scope of the claims. The flowcharts depict example operations that can vary within the scope of the claims. Additional operations may be performed; fewer operations may be performed; the operations may be performed in parallel; and the operations may be performed in a different order. For example, the operations could obtain the disk selection values and then determine whether corresponding disks have a particular zone available. As another example, the operations of
The examples refer to a zone set manager as performing many of the operations. “Zone set manager” is a construct to encompass the program code executable or interpretable to perform the operations. Program code to implement the functionality can be referred to by a different name. Implementations will vary by programming language(s), programmer preferences, targeted platforms, etc. In addition, the program code may be implemented as a single module/function/method, multiple modules/functions/methods, a single library file, multiple libraries, kernel code of a storage operating system, an extension to a storage operating system, subsystem, etc. The program code can be instantiated as an administrative process(es) (e.g., instantiated for an administrator console or interface), an operating system process, and/or instantiated as a background process(es). The program code would be deployed on a node (e.g., server, virtual machine, blade, server processor, etc.) that is connected to storage devices and that manages reads and writes targeting the storage devices. For example, a storage rack can comprise multiple arrays of storage devices connected to processing hardware (i.e., a node) via network connections (e.g., Ethernet cables) and via on-board connections (e.g., small computer system interface (SCSI) cables).
As will be appreciated, aspects of the disclosure may be embodied as a system, method or program code/instructions stored in one or more machine-readable media. Accordingly, aspects may take the form of hardware, software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” The functionality presented as individual modules/units in the example illustrations can be organized differently in accordance with any one of platform (operating system and/or hardware), application ecosystem, interfaces, programmer preferences, programming language, administrator preferences, etc.
Any combination of one or more machine-readable medium(s) may be utilized. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable storage medium may be, for example, but not limited to, a system, apparatus, or device, that employs any one of or combination of electronic, magnetic, optical, electromagnetic, infrared, or semiconductor technology to store program code. More specific examples (a non-exhaustive list) of the machine-readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a machine-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. A machine-readable storage medium is not a machine-readable signal medium. A machine-readable storage medium does not include transitory, propagating signals.
A machine-readable signal medium may include a propagated data signal with machine-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A machine-readable signal medium may be any machine-readable medium that is not a machine-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a machine-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as the Java® programming language, C++ or the like; a dynamic programming language such as Python; a scripting language such as the Perl programming language or PowerShell script language; and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on a stand-alone machine, may execute in a distributed manner across multiple machines, and may execute on one machine while providing results and/or accepting input on another machine.
The program code/instructions may also be stored in a machine-readable medium that can direct a machine to function in a particular manner, such that the instructions stored in the machine-readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
While the aspects of the disclosure are described with reference to various implementations and exploitations, it will be understood that these aspects are illustrative and that the scope of the claims is not limited to them. In general, techniques for zone set creation as described herein may be implemented with facilities consistent with any hardware system or hardware systems. Many variations, modifications, additions, and improvements are possible.
Plural instances may be provided for components, operations or structures described herein as a single instance. Finally, boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the disclosure. In general, structures and functionality presented as separate components in the example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure.
Unless otherwise stated, use of the phrase “at least one of” before a list conjoined by “and” is non-exclusive in the claims. For instance, X comprises at least one of A, B, and C can be infringed by {A}, {B}, {C}, {A, B}, etc., and does not require all of the items of the list to be X. In addition, this language also encompasses an X that includes another item, such as {A, B, P}.