Electronic system with declustered data protection by parity based on reliability and method of operation thereof

Information

  • Patent Grant
  • Patent Number
    10,365,836
  • Date Filed
    Tuesday, January 27, 2015
  • Date Issued
    Tuesday, July 30, 2019
Abstract
An apparatus includes: an adaptive declustered RAID array configured of data storage devices (DSDs), the DSDs comprise data chunks allocated as data, a local parity, or a global parity; and a controller configured to generate a reliability indicator reflective of a reliability status of at least a portion of the adaptive declustered RAID array for reallocating the data chunks by dynamically increasing or decreasing the data chunks allocated as the local parity, the global parity, or a combination thereof.
Description
TECHNICAL FIELD

An embodiment relates generally to an electronic system, and more particularly to a system with parity coverage of declustered data based on reliability.


BACKGROUND

Modern consumer, corporate, and industrial users of electronic systems require storage of and access to quantities of information from databases, financial accounts, libraries, catalogs, medical records, retail transactions, electronic mail, calendars, contacts, computations, or any combination thereof. The electronic systems, such as computers, servers, cloud servers, or computer complexes, are key to the day-to-day global, social, and economic operations of modern life. It is vital that the quantities of information are protected from loss and provided to the user with reliable and fast access.


To increase and maintain data reliability, a storage system of the electronic systems is used to improve storage reliability and data throughput performance and to prevent the loss of the data. Costs and maximum storage utilization of the storage system are also areas of concern for the users/owners of the electronic systems. Economic cycles and system expenditure costs for the users/owners increase or decrease as usage, global market, and economic conditions rise or fall, respectively, so there are growing demands for the storage system to be adjustable to accommodate these cyclic changes.


The users/owners are constantly looking for ways to balance their costs and storage utilization while improving and maintaining the reliability, availability, and performance of data access. Research and development in existing technologies can take a myriad of different directions and often result in trade-offs or metric sacrifices. One way to simultaneously increase performance and reduce cost and data loss is to provide an electronic system product with higher capacity for storage, greater system reliability, increased performance, and improved data protection.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A and 1B show operational hardware diagrams of an electronic system according to various embodiments.



FIG. 2 shows the Data Storage Devices having chunks allocated for the Data Chunks and the Spare Data Chunks to form a first Adaptive Declustered RAID Array according to various embodiments.



FIG. 3 shows multiple layered reliability monitoring and triggering schemes of the electronic system according to various embodiments.



FIG. 4 shows a control flow for continuous self-monitored processing of reliability factors to detect, trigger, and reconfigure the Adaptive Declustered RAID Arrays according to various embodiments.



FIG. 5 shows Multiple Attribute Reliability and Metadata Interfaces of the electronic system coupling the controller with the Data Storage Devices of the Storage Unit according to various embodiments.



FIGS. 6A and 6B show flow charts of record updating by the controller using an optional sampling frequency feature and record updating by the Data Storage Devices according to various embodiments.



FIGS. 7A and 7B show flow charts of record updating by the controller and record updating by the Data Storage Devices using an optional sampling frequency feature according to various embodiments.



FIG. 8 shows an example of a graph depicting failure characteristic trends automatically monitored by the controller and the Storage Unit to dynamically maintain a predetermined reliability range of the Adaptive Declustered RAID Arrays according to various embodiments.





DETAILED DESCRIPTION

A need still remains for an electronic system with data loss prevention, increased availability, and improved performance of data access based on system reliability, cost, and storage utilization of a storage system. In view of the ever-increasing commercial competitive pressures, along with growing consumer/industry expectations and the diminishing opportunities for meaningful product differentiation in the marketplace, it is increasingly critical that answers be found to these problems. Additionally, the need to reduce cost, to improve efficiency, reliability, and performance, and to meet competitive pressures adds an even greater urgency to the critical necessity for finding answers to these problems.


Solutions to these problems have been long sought after, but prior developments have not taught or suggested any solutions and, thus, solutions to these problems have long eluded those skilled in the art.


Certain embodiments have other steps or elements in addition to or in place of those mentioned in this application. The steps or elements will become apparent to those skilled in the art from a reading of the following detailed description when taken with reference to the accompanying drawings.


The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the embodiments. It is to be understood that other embodiments would be evident based on the present disclosure, and that system, process, or mechanical changes may be made without departing from the scope of an embodiment.


In the following description, numerous specific details are given to provide a thorough understanding of the embodiments. However, it will be apparent that the embodiments can be practiced without these specific details. In order to avoid obscuring an embodiment, some well-known circuits, system configurations, and process steps are not disclosed in detail.


The drawings showing embodiments of the system are semi-diagrammatic, and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing figures. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the figures is arbitrary for the most part. Generally, an embodiment can be operated in any orientation.


Referring now to FIGS. 1A and 1B, therein are shown operational hardware diagrams of an electronic system 100 according to various embodiments. The electronic system 100 can represent an apparatus for the various embodiments. An embodiment of the electronic system 100 is depicted in FIGS. 1A and 1B and includes a controller 102 coupled to a Storage Unit (SU) 104. The controller 102 can include a processor, a Central Processing Unit (CPU), a User Interface (UI) 105, peripheral input/output (PIO) interfaces, and a shared memory.


The CPU, customized based on the specific application of the electronic system 100, can include internal cache, an arithmetic logic unit (ALU), a digital signal processor (DSP), a firmware controller, interrupt handlers, electro-mechanical servo controllers, or network processors, as examples. The UI 105 can include an interface for keyboards, touch screens, magnetic readers, radio frequency identifiers (RFID), or optical readers, as examples.


The PIO interfaces can include fixed or pre-programmed interfaces for devices such as printers, monitors, audio/visual devices, telecom devices, user defined sensors, electro-mechanical devices, or networking devices, as examples. The shared memory can include one or a combination of non-volatile or volatile memory, such as a memory device that retains data after power external to the device is removed or a memory device that loses data after power external to the device is removed, respectively.


The Storage Unit 104 has more than one of the Data Storage Devices (DSDs) 106, which are non-volatile. The DSDs 106 can be random access data storage devices, sequential access data storage devices, or any combination thereof used to store or send user information accessed by the controller 102.


Examples of the sequential access data storage devices can include Tape Drive Devices (TDD) or Optical Storage Devices (OSD). Examples of the random access data storage devices can include Hard Disk Drives (HDD), Solid State Drives (SSD), Solid State Hybrid Drives (SSHD), Bubble Memory Devices (BMD), or Molecular Memory Devices (MMD).


In various embodiments, the Storage Unit 104 can include the DSDs 106 with memory. The memory can be any combination of volatile memory or non-volatile memory for access performance, buffering, or for power loss data retention, respectively.


In various embodiments, the Storage Unit 104 can include an integer number, N, of the DSDs 106, such as DSD1 to DSDN, for example. The DSDs 106 can have different levels of storage efficiency due to overhead. Examples of overhead can include unused storage space fragmentations, space reserved by the controller 102, space reserved by the manufacturer, space reserved by the users of the electronic system 100, space reserved for the Storage Unit 104, or any combination thereof.


Total storage capacity of each of the DSDs 106 does not need to be the same as that of the other DSDs 106. For example, a first DSD can have a capacity of 2 terabytes and a second DSD can have a capacity of 3 terabytes. Each of the DSDs 106 can include a number of Data Chunks (DC) 108 that are individually allocated as a portion of the user information, which can include data, local parity, or global parity.


The terms allocate, allocated, allocation, and allocating define and indicate a dedicated use or purpose of a specified element. For example, the DC 108 allocated as data indicates that the DC 108 is to be used only for data, until reassigned for another use or purpose. The terms reallocate, reallocated, reallocation, and reallocating define and indicate a change from one dedicated use or purpose of a specified element to a different dedicated use or purpose of the specified element. For example, the DC 108 currently allocated as data can be reallocated as local parity, to change the dedicated use or purpose from only data to only local parity.


The local parity is used to protect and correct the data. The global parity is used to protect the data, the local parity, different global parity, or any combination thereof. Each of the DSDs 106 can also include a number of Spare Data Chunks (SDC) 112. Any of the SDC 112 can be allocated or reallocated into the DC 108, and any of the DC 108 can be allocated or reallocated into the SDC 112. Each of the SDC 112 and each of the DC 108 have identical storage capacities.
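
For illustration only, the relationship between data, local parity, and global parity can be sketched with simple XOR parity. This is a minimal sketch, not part of the described embodiments: the chunk contents and helper names are hypothetical, and real systems would likely use a stronger erasure code than plain XOR.

```
# Minimal sketch of local vs. global parity using simple XOR parity.
# Chunk contents and names are hypothetical; a real embodiment would
# likely use a stronger erasure code (e.g., Reed-Solomon).

def xor_chunks(chunks):
    """XOR a list of equal-sized byte chunks together."""
    result = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            result[i] ^= byte
    return bytes(result)

data_a = b"\x01\x02\x03\x04"   # DC allocated as data
data_b = b"\x10\x20\x30\x40"   # DC allocated as data

# Local parity protects specific data chunks.
local_parity = xor_chunks([data_a, data_b])

# A global parity can cover the data chunks and the local parity together.
global_parity = xor_chunks([data_a, data_b, local_parity])

# Losing data_b is recoverable from the surviving chunk and the local parity.
recovered_b = xor_chunks([data_a, local_parity])
assert recovered_b == data_b
```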


The DC 108 can be allocated for information and can be individually further allocated as data, local parity, or global parity, based on reliability metrics, performance expectations, product requirements, and costs. In an embodiment, at least a first data chunk 108 can be allocated as data, a second data chunk 108 can be allocated as a local parity, a third data chunk 108 can be allocated as a global parity, and a fourth data chunk 108 can be allocated as one of the SDC 112, for example.


In another embodiment, one or more of the DC 108 can be reallocated as one or more of the SDC 112. It is understood that any of the DC 108 and the SDC 112 can be allocated or reallocated. In various embodiments, at least one of the DC 108 allocated as data can be reallocated as local parity. Also as an example, at least one of the DC 108 allocated as local parity can be reallocated as global parity.


Further as an example, at least one of the DC 108 allocated as global parity can be reallocated as data. Yet further as an example, at least one of the DC 108 allocated as data can be reallocated as global parity. Yet further as an example, at least one of the DC 108 allocated as global parity can be reallocated as local parity. Yet further as an example, at least one of the DC 108 allocated as local parity can be reallocated as data.


In various embodiments, the DC 108 of at least 2 of the DSDs 106 can be used to form any number of Adaptive Declustered (AD) RAID Arrays 114. A Data Storage Device Reliability Manager Unit (DSDRMU) 116 of the Storage Unit 104 is used to extract, record, compile, analyze, and report status of the DSDs 106, the DC 108, the SDC 112, or a combination thereof to the controller 102. The DSDRMU 116 supports configuring the DSDs 106, allocation and reallocation of the DC 108 and the SDC 112, configuring portions of the AD RAID Arrays 114, or any combination thereof.


Although FIG. 1A depicts the electronic system 100 having the controller 102 and the SU 104 as discrete units, it is understood that the controller 102, the SU 104, and functional blocks within either the controller 102 or the SU 104 can be repartitioned and/or regrouped differently. For example, the DSDRMU 116 could be integrated into the controller 102 while the DSDs 106 are located in a separate physical location such as a different city or country. In another example, the controller 102 and the SU 104 could be physically regrouped into a Host system and the DSDs 106 attached to a backplane board within the Host system.


It is also understood that the DSDRMU 116 can be consolidated or distributed in one or more of the DSDs 106. In an embodiment, the DSDRMU 116 can be integrated into one of the DSDs 106, as an example. In another embodiment, all or portions of the DSDRMU 116 could be distributed between 2 or more of the DSDs 106, as another example. The DSDRMU 116 can operate independently of, or in line with, data read/write operations to the user information by the electronic system 100.


In various embodiments, the controller 102 can include a Declustered Array (DA) Controller 118 and a Reliability Manager Unit (RMU) 122. The DA Controller 118 can be dedicated to manage, execute, monitor, and correct the reading and writing of the user information from and to the Storage Unit 104. The RMU 122 supervises the DSDRMU 116.


The RMU 122 can extract, record, compile, analyze, and report reliability status/information, or a combination thereof, of the electronic system 100, the DA Controller 118, and each of the AD RAID Arrays 114, and can maintain a duplicate copy of all or part of the information compiled by the DSDRMU 116. The information compiled by the DSDRMU 116 can include reliability status/information, months in service, and performance information of the DSDs 106, the DC 108, and the SDC 112.


Circuitry indicating Reliability Indicators (RI) 124 and Reliability Status (RS) 126 in the RMU 122 can be used to compile and consolidate reliability information, performance information, and predictive failure conditions of the DC 108 and the SDC 112. The RMU 122 determines if and which of the AD RAID Arrays 114 require allocation, reallocation, or any combination thereof by evaluating the RI 124, the RS 126, or a combination thereof. The RMU 122 can also perform the evaluation with additional information from the DA Controller 118 or based on input to the controller 102 from users or a higher-level external system, such as through the PIO interfaces or the UI 105. For example, the UI 105 can be used to send reliability directives to the controller 102 for dynamically increasing or decreasing the data chunks 108 reallocated to local parity, global parity, or any combination thereof.


In various embodiments, the RMU 122 or the controller 102 can dynamically increase or decrease the data chunks 108 allocated as local parity, global parity, or a combination thereof without assists from the DA Controller 118. This increase or decrease can be concurrent with active and pending read/write operations of the user information in the Storage Unit 104. The dynamic increase or decrease of the data chunks 108 is defined as an ability to increase or decrease the data chunks 108 when a condition is detected. In an embodiment, the increase or decrease of the data chunks 108 when a condition is detected is performed without authorization, intervention, or assistance from any other elements not specifically indicated.


In an embodiment, the RMU 122 or the controller 102 can dynamically increase or decrease the data chunks 108 when needed and, in certain cases, without any intervention from other elements. In another embodiment, the DSDRMU 116 and the RMU 122 can determine if and which of the DC 108 and SDC 112 need to be increased or decreased, and in certain cases, without any intervention from other elements.


The flow diagram of FIG. 1B shows a method of operating the electronic system 100 in an embodiment. Block 132 shows configuring an Adaptive Declustered RAID Array of DSDs, the DSDs comprising data chunks allocated for data, a local parity, or a global parity. Block 134 shows generating a reliability indicator reflective of a reliability status of at least a portion of the Adaptive Declustered RAID Array. Block 136 shows reallocating the data chunks by dynamically increasing or decreasing the data chunks allocated as the local parity, the global parity, or a combination thereof, wherein the method is performed by a controller.
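
For illustration only, the three blocks of FIG. 1B can be sketched as controller logic along the following lines. This is a minimal sketch; the class, method, and threshold names are hypothetical and not part of the described embodiments.

```
# Minimal sketch of the FIG. 1B method blocks; class, method, and
# threshold names are hypothetical and illustrative only.

class Controller:
    def configure_array(self, dsds):
        # Block 132: form an Adaptive Declustered RAID Array from DSDs whose
        # data chunks are allocated as data, local parity, or global parity.
        self.array = {dsd: {"data": [], "local_parity": [], "global_parity": []}
                      for dsd in dsds}

    def generate_reliability_indicator(self):
        # Block 134: produce an RI reflective of the reliability status of at
        # least a portion of the array (placeholder per-DSD values here).
        statuses = [1.0 for _ in self.array]
        return sum(statuses) / len(statuses)

    def reallocate(self, ri, low=0.5, high=0.9):
        # Block 136: dynamically raise or lower parity coverage based on RI.
        if ri < low:
            pass   # upgrade: allocate more chunks as local/global parity
        elif ri > high:
            pass   # downgrade: reclaim parity chunks as spares

controller = Controller()
controller.configure_array(["DSD1", "DSD2", "DSD3"])
controller.reallocate(controller.generate_reliability_indicator())
```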


It has been discovered that embodiments provide flexible and dynamic protection for improved reliability by increasing or decreasing the usage of the DC 108, the SDC 112, or a combination thereof as the local parity, the global parity, or a combination thereof. The Adaptive Declustered RAID Arrays 114, including the DSDs 106 and the DSD reliability manager unit 116 along with the reliability manager unit 122 with the RI 124 and the RS 126, provide greater reliability, availability, and throughput than other declustered RAID array systems by preemptively allocating and reallocating the DC 108 and the SDC 112 before read/write operations are impacted, for uninterrupted data access and availability.


It has been discovered that the embodiments provide greater reliability, availability, and throughput than other declustered RAID array systems by preemptively allocating and reallocating the DC 108 and the SDC 112 concurrent to active and pending read/write operations. The preemptive allocating, reallocating, or a combination thereof can be accomplished with the Adaptive Declustered RAID Arrays 114 with the DC 108 and the SDC 112, the DSD reliability manager unit 116, and the reliability manager unit 122 with the RI 124 and the RS 126.


It has been discovered that embodiments, such as those with the Adaptive Declustered RAID Array 114, provide economical, budget-constrained usage of different models of the DSDs 106 having a variety of storage efficiencies without compromising system reliability, due to self-throttling reliability features provided by dynamic allocation or reallocation of the DSDs 106.


It has been discovered that embodiments with the controller 102, as an example, including the RMU 122, the RI 124, and the RS 126 can be simultaneously connected to different ones of the Storage Unit 104, each having one of the DSDRMU 116. The controller 102 and the Storage Unit 104 can be geographically separated from one another while providing a robust, secure, reliable Adaptive Declustered RAID Array 114 immune from environmental/geological hazards such as earthquakes, floods, subsidence, or similar naturally occurring hazards.


It has been discovered that embodiments provide self-monitoring and analysis of reliability information by the RMU 122 from the RI 124 and the RS 126 to trigger a change in allocation or reallocation of local parity and global parity of the DC 108. This allocation or reallocation can increase or decrease the error protection based on wear, age, or infant mortality rates.


It has been discovered that embodiments can provide the controller 102 and the Storage Unit 104 to autonomously allocate and reallocate the SDC 112 and the DC 108 before failure occurs to minimize or eliminate the performance and access failures that normally occur during a rebuild of all or portions of the DSDs 106 associated with other declustered RAID array systems. The autonomous allocation and reallocation provide a lower cost and higher performing RAID system than the other declustered RAID array systems and other traditional non-clustered RAID array systems.


Referring now to FIG. 2, therein is shown the DSDs 106 having chunks 202 allocated for the DC 108 and the SDC 112 of FIG. 1A to form a first AD RAID Array 114 according to various embodiments. In an embodiment, each of the DSDs 106 can be divided into the chunks 202 having an identical size and identified by cN, where N equals a whole number used to identify any one of the chunks 202 corresponding to each of the DSDs 106. For example, c1 represents a first chunk, c2 represents a second chunk, c3 represents a third chunk, etc., of any one of the DSDs 106.


Although the chunks 202 of each of the DSDs 106 are identified and shown in consistent logical positions, it is understood that each of the chunks 202 can be physically formed from one or more different physical locations. For example, the chunks 202 can be located and identified on different disks, cylinders, sectors, zones, tracks, or any combination thereof, within any one of the DSDs 106.


Various embodiments can include a number of the DSDs 106. As an example, FIG. 2 depicts a 1st DSD 106, a 2nd DSD 106, a 3rd DSD 106, a 4th DSD 106, a 5th DSD 106, a 6th DSD 106, a 7th DSD 106, an 8th DSD 106, a 9th DSD 106, a 10th DSD 106, an 11th DSD 106, a 12th DSD 106, a 13th DSD 106, and a 14th DSD 106.


The controller 102 of FIG. 1A and the DSDRMU 116 of FIG. 1A can initially create the chunks 202, allocating some as the SDC 112 and some as the DC 108 further allocated as data, local parity, or global parity, to form the first configuration of the AD RAID Array 114. The allocations of the SDC 112 and the DC 108 can be based on reliability target goals, life cycles, or any combination thereof.


The reliability target goals are provided by users of the electronic system 100 of FIGS. 1A and 1B and can be based on user applications or requirements, such as data criticality, data availability, and client storage cost tradeoffs. Additional details related to the life cycle will be provided below in FIG. 8 and in other embodiments.


As an example, the chunks 202, c1 and c3, of all of the DSDs 106 can initially be allocated by the RMU 122 of FIG. 1A as a first SDC 112 and a second SDC 112, respectively. All of the chunks 202, except for c1 and c3 of all of the DSDs 106, can initially be allocated as the DC 108 by the RMU 122.


C4 of the 1st DSD can be allocated as a first local parity to protect c2 of the 5th DSD and c5 of the 6th DSD. C2 of the 2nd DSD can be allocated as a second local parity to protect c5 of the 7th DSD and the 8th DSD. C6 of the 3rd DSD can be allocated as a third local parity to protect c5 of the 9th DSD and the 10th DSD. C9 of the 6th DSD can be allocated as a fourth local parity to protect c5 of the 11th DSD and the 12th DSD.


C7 of the 14th DSD can be allocated as a first global parity to protect all of the local parities and the data. C8 of the 4th DSD can be allocated as a second global parity to exclusively protect all of the local parities.
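
For illustration only, the initial layout described above can be represented as a chunk map. This is a minimal sketch; the dictionary structure and field names are hypothetical, not part of the described embodiments.

```
# Minimal sketch of the initial FIG. 2 allocation as a chunk map.
# Keys are (dsd_number, chunk_id); the structure is hypothetical.
chunk_map = {}

# c1 and c3 of all 14 DSDs start as spare data chunks (SDC).
for dsd in range(1, 15):
    chunk_map[(dsd, "c1")] = {"role": "spare"}
    chunk_map[(dsd, "c3")] = {"role": "spare"}

# Local parities, each protecting specific data chunks.
chunk_map[(1, "c4")] = {"role": "local_parity", "protects": [(5, "c2"), (6, "c5")]}
chunk_map[(2, "c2")] = {"role": "local_parity", "protects": [(7, "c5"), (8, "c5")]}
chunk_map[(3, "c6")] = {"role": "local_parity", "protects": [(9, "c5"), (10, "c5")]}
chunk_map[(6, "c9")] = {"role": "local_parity", "protects": [(11, "c5"), (12, "c5")]}

# Global parities: one over all local parities and data, one over parities only.
chunk_map[(14, "c7")] = {"role": "global_parity", "protects": "data_and_local_parities"}
chunk_map[(4, "c8")] = {"role": "global_parity", "protects": "local_parities_only"}

# A reallocation is then a map update; e.g., the SDC at c3 of the 5th DSD
# becomes a fifth local parity protecting c5 of the 10th DSD:
chunk_map[(5, "c3")] = {"role": "local_parity", "protects": [(10, "c5")]}
```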


In various embodiments, the RMU 122 of FIG. 1A analyzes one or more of the RS 126 of FIG. 1A generated by the RMU 122, received from the DSDRMU 116, or any combination thereof. The RMU 122 generates the RI 124 of FIG. 1A and determines whether a reallocation or allocation of at least one of the chunks 202 is necessary.


In an embodiment, SDC can be reallocated as another DC for additional local parity when the RMU 122 determines that targeted reliability metrics are not being met. For example, when the RMU 122 has determined from the RI 124 and/or the RS 126 that c5 of the 10th DSD, allocated as data, is not meeting targeted reliability metrics, the RMU 122 can add a fifth local parity to protect the data of c5 of the 10th DSD. The fifth local parity can be formed, for example, by reallocating the second SDC 112, represented by c3 of the 5th DSD, into another of the DC 108 allocated as local parity to protect c5 of the 10th DSD and thus restore the targeted reliability metrics.


In an embodiment, certain DC can be defective and replaced by reallocating other DC/SDC. For example, the RMU 122 can replace a defective c5 of the 10th DSD. The RMU 122 can reallocate the fifth local parity, which is represented by c3 of the 5th DSD, as a replacement for c5 of the 10th DSD.


In an embodiment, certain DC can be slated for migration and replaced by other DC/SDC. For example, the RMU 122 can reallocate the first SDC 112, which is represented by c1 of the 12th DSD, as another DC 108 for migration of c5 of the 10th DSD. In an embodiment, the RMU 122 can reallocate the first SDC 112, which is represented by c1 of the 12th DSD, as another DC 108 as a data replacement of the c5 of the 10th DSD.


In an embodiment, SDC can be allocated as additional global parity DC. For example, the RMU 122 can reallocate the second SDC 112, which is represented by c3 of the 5th DSD, as a third global parity to protect all of the DC 108 allocated as data, which are shown as c2 of the 5th DSD, c5 of the 6th DSD, the 7th DSD, the 8th DSD, the 9th DSD, the 10th DSD, the 11th DSD, and the 12th DSD.


In various embodiments, global/local parity DC can be allocated as additional SDC when the RMU 122 determines that targeted reliability metrics are being exceeded. For example, when the RMU 122 has determined from the RI 124 and/or the RS 126 that the first AD RAID Array 114 is exceeding targeted reliability metrics, the RMU 122 can reallocate the second global parity, which can be represented by the DC 108 of c8 of the 4th DSD, as another of the SDC 112.


In an embodiment, global parity DC can be allocated as local parity DC, and vice versa. For example, the RMU 122 can reallocate the second global parity, which can be represented by the DC 108 of c8 of the 4th DSD, as a local parity to protect a second AD RAID Array 114 formed with the DC 108 of c8 of the 10th DSD, the 11th DSD, the 12th DSD, the 13th DSD, and the 14th DSD allocated as data.


In an embodiment, the RMU 122 can reallocate the second global parity, which can be represented by the DC 108 of c8 of the 4th DSD 106, as a global parity to protect the DC 108 allocated as data and local parity of the second AD RAID Array 114.


It is understood that there can be any number of the DSDs 106 with any number of the DC 108 and the SDC 112 to form one of the AD RAID Arrays 114. In an embodiment, the AD RAID Arrays 114 can be formed from a mixture of different types of DSDs, such as a combination of HDDs, SSDs, and MMDs, as an example.


In an embodiment, the AD RAID Arrays 114 can be formed from only one DSD 106 with a combined total of some number of the DC 108 and the SDC 112, for an extremely small form factor, high performance, and limited product lifespan application. Some examples of this application can include medical emergencies, rescue emergencies, military usage, and other similar critical, low cost, short lifespan product applications.


In various embodiments, the DC 108 with a decreasing RI 124 can be distinguished as either retired, discarded, or dead. When the RI 124 of the DC 108 is less than a high threshold (Th) 150 of FIG. 1A with the data in the DC 108, the DC 108 can still be readable/writable, with one retry and/or with a few relocated sectors and downgraded access performance. In this example, the DC 108 can be marked as retired and will need some low-priority background data migrations in order to move the data to another of the DC 108. Parts of the DC 108 marked as retired can be reorganized into a different one of the DC 108 or as a new one of the DC 108 with a recalculated RI 124.


If the RI 124 is below a low threshold (T1) 153 of FIG. 1A, the DC 108 has high chance of failure, indicated by multiple retries, with many relocated sectors, accesses requiring the error correction of Low Density Parity Check (LDPC), or any combination thereof.


In this example, the DC 108 are more likely to be replaced and discarded unless there is an access failure to one or more of the DC 108, which results in the DC 108 with the access failure being distinguished as dead. If one of the DSDs 106 has numerous retired/discarded chunks, such as exceeding a certain threshold, then that one of the DSDs 106 is likely to be replaced, to minimize the further impact to system availability that would occur if replacement were deferred to a later time.
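
For illustration only, the retired/discarded/dead distinction can be sketched as a classification of a chunk's RI against the high threshold (Th) 150 and the low threshold (T1) 153. This is a minimal sketch; the threshold values and the replacement limit are hypothetical.

```
# Minimal sketch of chunk-health classification against the high threshold
# (Th) 150 and low threshold (T1) 153; the numeric values are illustrative.
TH_HIGH = 0.80   # below this: mark retired, schedule low-priority migration
T1_LOW = 0.40    # below this: likely discarded (or dead on access failure)

def classify_chunk(ri, access_failed=False):
    if access_failed:
        return "dead"       # access failure distinguishes the chunk as dead
    if ri < T1_LOW:
        return "discard"    # high failure chance: retries, relocations, LDPC
    if ri < TH_HIGH:
        return "retired"    # still readable/writable; migrate data off it
    return "healthy"

def dsd_needs_replacement(chunk_states, limit=8):
    # Replace a DSD once its retired/discarded chunk count exceeds a
    # certain threshold (the limit here is a hypothetical value).
    bad = sum(1 for s in chunk_states if s in ("retired", "discard"))
    return bad > limit
```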


It has been discovered that embodiments can provide the controller 102 with the RMU 122, the RI 124, and the RS 126 along with the DSDRMU 116 to allocate, reallocate, or any combination thereof, chunks 202 of the AD RAID Array 114. This allocation or reallocation can be used to upgrade or downgrade reliability and performance of the electronic system 100 while reducing storage overhead and improving storage efficiency, automatically and without any assistance or intervention from outside of the controller 102. This allocation and reallocation can also result in little or no impact to other resources such as circuitry, hardware, software, or any combination thereof, of the electronic system 100 to provide an extremely low cost product.


It has been discovered that embodiments can provide the controller 102 with the RMU 122, the RI 124, and the RS 126 along with the DSDRMU 116 to detect the reliability and performance of chunks 202 of the AD RAID Array 114. This fast detection allows for the AD RAID Array 114 to analyze, determine, and if necessary, execute the allocation, reallocation, or any combination thereof of the chunks 202 to prevent data loss automatically and without any assistance or intervention from outside of the controller 102. Further, the AD RAID Array 114 provides an intelligent storage efficient self-adaptive RAID system based on reliability metrics and performance.


Referring now to FIG. 3, therein is shown multiple layered reliability monitoring and triggering schemes of the electronic system 100 according to various embodiments. In various embodiments, the DSDRMU 116 and the RMU 122 provide multiple ways to trigger and initialize a reconfiguration procedure of the AD RAID Array 114 of FIG. 1A.


In an embodiment, one way to trigger and initialize the reconfiguration procedure is with a DSD Reliability Monitor (DSDRM) 302 of the DSDRMU 116, coupled to the DSDs 106. The DSDRM 302 can monitor reliability attributes, such as Burst Error Rates (BER) and Error Margins (EM), of the DC 108 of FIG. 1A and the DSDs 106.


A DSD Reliability Control (DSDRC) 304 of the DSDRMU 116 can periodically evaluate the reliability attributes collected by the DSDRM 302 by analyzing and comparing the reliability attributes with one or more predetermined thresholds. Any of the reliability attributes exceeding the predetermined thresholds can be sent to a DSD Reliability Reporter (DSDRR) 306. As an example, the reliability attributes can include identifying information, such as the affected DC 108 and DSDs 106.


The DSDRR 306 can queue up and send the identifying information with the reliability attributes exceeding the predetermined thresholds to a DSD Reliability Interface (DSDRI) 308. The DSDRI 308 manages a Reliability Communication Interface (RCI) 310, such as a bidirectional interface separate from any customer data interface, for reliability communications between the DSDRMU 116 and the RMU 122. The RCI 310 can be used to transmit the identifying information with the reliability attributes to a Central Reliability Monitoring Unit (CRMU) 312 of the RMU 122.
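
For illustration only, the DSDRM 302 to DSDRC 304 to DSDRR 306 reporting path can be sketched as follows. This is a minimal sketch; the attribute names, threshold values, and comparison directions are assumptions, not part of the described embodiments.

```
# Minimal sketch of the DSDRC 304 periodic evaluation; the attribute names,
# threshold values, and comparison directions are hypothetical.
THRESHOLDS = {"ber": 1e-4, "error_margin": 0.10}   # predetermined thresholds

def dsdrc_evaluate(samples):
    # Compare monitored attributes against the predetermined thresholds and
    # collect exceeders, with identifying information, for the DSDRR 306.
    exceeders = []
    for s in samples:   # each sample identifies its DSD and chunk
        if s["ber"] > THRESHOLDS["ber"] or s["error_margin"] < THRESHOLDS["error_margin"]:
            exceeders.append(s)
    return exceeders    # DSDRR 306 queues these for the DSDRI 308 / RCI 310

samples = [
    {"dsd": 3, "chunk": "c5", "ber": 2e-4, "error_margin": 0.25},
    {"dsd": 7, "chunk": "c2", "ber": 5e-6, "error_margin": 0.30},
]
assert dsdrc_evaluate(samples) == [samples[0]]
```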


The CRMU 312 can create the RI 124 and forward the identifying information to a Reliability Control Unit (RCU) 314. The RCU 314 has the capability to access and analyze reliability attributes and status of the SU 104, the DSDs 106, the DC 108, the SDC 112 of FIG. 1A, all of the AD RAID Arrays 114, the RI 124, and any other remote or off-site SU 104. The RCU 314 can generate an Overall Reliability Index (ORI) 316 of the AD RAID Array 114 and other factors such as system workload and performance information associated with the DC 108 and the DSDs 106.


Based on the ORI 316, the other factors, the RS 126 for some of the DC 108, or any combination thereof, the RCU 314 can trigger a reconfiguration of one or more of the AD RAID Arrays 114, one or more of the DC 108, one or more of the SDC 112, or any combination thereof by notifying an Array Reconfiguration Unit (ARU) 318. The other factors can also include user requirements, application requirements, operating costs, or any combination thereof.


The ARU 318 generates configuration information to the CRMU 312, the DSDRI 308, or any combination thereof, to indicate specific reconfiguration operations, when to reconfigure, and where to reconfigure. Examples of where to reconfigure can include one or more of the AD RAID Arrays 114, one or more of the SDC 112, one or more of the DC 108, or any combination thereof.


In another embodiment, the controller 102, a user requirement, or an operator of the electronic system 100 can also trigger and initialize the reconfiguration. For example, a special or unique business need can arise requiring that the ORI 316, the RI 124 of one or more of the DC 108, the RS 126 of one or more of the DC 108, one or more of the AD RAID Arrays 114, or any combination thereof needs to be reconfigured.


In yet another embodiment, the controller 102, the user requirement, or the operator of the electronic system 100 can adapt, adjust, and change the RCU 314 behavior as an indirect way to trigger and initialize the reconfiguration. For example, the controller 102, the user requirement, or the operator of the electronic system 100 can temporarily or permanently change the ORI 316, the RS 126, one or more of the predetermined thresholds, or any combination thereof. Temporary or permanent adjustments to one or more of the thresholds are referred to as creating an adaptive threshold.


In various embodiments, the RMU 122 and the DSDRMU 116 can apply a neural network reliability warning scheme (NNRWS) 140 of the controller 102 of FIG. 1A and the SU 104 with reliability metrics to trigger a reconfiguration process based on a reliability warning signal from the DSDs 106. In various embodiments requiring high availability, high performance, and accurate and early preemptive indication to reconfigure the AD RAID Array 114, the controller 102 and the DSDs 106 can apply advanced prediction algorithms (APA) 142 to trigger the reconfiguration process.


In an embodiment, the APA 142 provides superior failure prediction ratios over simple threshold-based algorithms, such as self-monitoring of reliability factors compared against thresholds to signal a warning of near-failure to the controller 102. The APA 142 can be performed by the DSDs 106 and the controller 102. Examples of the APA 142 include neural networks, fuzzy logic models, Bayesian Analytical Approaches (BAA), Support Vector Machine (SVM) learning algorithms, or any combination thereof.


In an embodiment, the controller 102 can apply multi-layered back-propagation neural networks. An example of the three-layered back-propagation neural network can include a fast learning algorithm network first layer to predict a reliability index of a chunk, such as the RI 124 of each of the DC 108 of the AD RAID Array 114.


A second layer of a convolutional neural network analyzes the reliability index of each of the other DC 108 of the AD RAID Array 114. A third layer enables the RCU 314 to calculate a Reliability Index of RAID (RIR) 332, such as Mean Time To Data Loss (MTTDL) or Mean Time To Failure (MTTF), using a layered fuzzy neural network, as examples.


The RIR 332 can be recalculated on pre-determined periodic intervals or on-demand using the RI 124. The RIR 332 can be used to determine if a reconfiguration is required to preempt a probable failure of the AD RAID Array 114. It is understood that the neural network can have any number of layers of the back-propagation. A six-layer back-propagation neural network can be used to further improve the predictive accuracy of pending failure, as an example.
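
For illustration only, a periodic recalculation of the RIR 332 from per-chunk RI 124 values could look like the following. This is a minimal sketch; the aggregation formula is an assumption, since the embodiments describe MTTDL/MTTF and neural network layers rather than a fixed formula, and the numeric values are illustrative.

```
# Minimal sketch of a periodic RIR 332 recalculation from per-chunk RI 124
# values. The weakest-link blend below is an assumed stand-in formula; the
# embodiments cite MTTDL/MTTF and neural networks, not this expression.

def compute_rir(chunk_ris, weight_min=0.7):
    # Blend the worst chunk RI with the array-wide average.
    worst = min(chunk_ris)
    mean = sum(chunk_ris) / len(chunk_ris)
    return weight_min * worst + (1.0 - weight_min) * mean

ris = [0.95, 0.97, 0.88, 0.91]   # illustrative per-chunk reliability indices
rir = compute_rir(ris)           # ~0.894 for these values
needs_reconfig = rir < 0.90      # compare against a predefined threshold
```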


In another embodiment, the controller 102 can initially apply the simple threshold method and three-layer back-propagation neural network to compile, learn, and validate the preferred choice of triggering a reconfiguration. As an example, this triggering can be based on the expectations, usages, and applications for a specific one of the electronic system 100 and to fine tune the triggering analysis and accuracy.


In various embodiments, the RMU 122 can execute the reconfiguration using various different options. The RMU 122 can increase reliability, decrease reliability, prevent a change in reliability from occurring, or any combination thereof.


In an embodiment, at least one of the AD RAID Arrays 114 can be reconfigured to increase reliability by reallocating one or more of the DC 108 or the SDC 112. This reallocation can be to add at least one more local parity, add at least one more global parity, swap to migrate at least one of the DC 108 with another of the DC 108 having a High Reliability Index (HRI) 320, or any combination thereof.


In an embodiment, the HRI 320 is defined to be a reliability rating assigned to specific ones of the DC 108, the SDC 112, the DSDs 106, the AD RAID Arrays 114, or any combination thereof. The HRI 320 is used to indicate superior products in areas of durability, total operating life, burn-in screening, use of military specification components, stress-test screening, physical protection from environmental/geographical hazards, physical protection by secured/limited access locations, or any combination thereof.


In another embodiment, at least one of the DC 108 can be allocated as a local parity or a global parity with a Low Reliability Index (LRI) 322. In this example, this DC 108 can be reallocated as another of the SDC 112 or the DC 108 to decrease excessive reliability or create more available unused resources, respectively, for the AD RAID Array 114 or a different one of the AD RAID Array 114. For example, the available unused resources can be reallocated to add at least one more local parity, add at least one more global parity, or migrate/swap out and replace at least one of the DC 108 having the LRI 322.


The LRI 322 is defined to be a reliability rating assigned to specific ones of the DC 108, the SDC 112, the DSDs 106, the AD RAID Arrays 114, or any combination thereof. The LRI 322 can be used to indicate average or below average products in areas of durability, total operating life, average or high infant mortality, low/commodity priced components, having attributes opposite to attributes rated having the HRI 320, or any combination thereof.


In yet another embodiment, the controller 102, the user requirement, or the operator of the electronic system 100 can adjust and change the behavior of the RCU 314 as an indirect way to increase or decrease the reliability. For example, the controller 102, the user requirement, or the operator of the electronic system 100, can temporarily, permanently, or any combination thereof, change the Overall Reliability Index ORI 316, the RS 126, one or more of the predetermined thresholds, or any combination thereof, resulting in the reliability increasing or decreasing.


For example, in another embodiment, a newly created AD RAID Array 114 can be initialized having the DC 108 allocated as 10 data, 2 local parity, and 1 global parity representing a virtual RAID grouping of 10+2+1. After a certain time period, if reliability based on the predicted error rate of some of the DC 108 increased or the criticalness of some data is increased, the AD RAID Array 114 can be reconfigured to have 10 data, 2 local parity, and 2 global parity representing a new virtual RAID grouping of 10+2+2.


Continuing with the example, sometime later, the predicted error rate can be decreased because there has been no error event in the virtual grouping. The RMU 122 can reallocate to reclaim one of the 2 global parities and allocate the one of the 2 global parities into one of the SDC 112. So, the RAID grouping is restored back to the original initialized virtual RAID grouping of 10+2+1.
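
For illustration only, the 10+2+1 example can be expressed as a small state change on the virtual RAID grouping. This is a minimal sketch; the field and function names are hypothetical.

```
# Minimal sketch of the 10+2+1 virtual RAID grouping example; the field and
# function names are hypothetical.
grouping = {"data": 10, "local_parity": 2, "global_parity": 1, "spares": 3}

def add_global_parity(g):
    # Upgrade: reallocate a spare as an additional global parity (10+2+2).
    g["spares"] -= 1
    g["global_parity"] += 1

def reclaim_global_parity(g):
    # Downgrade: reclaim a global parity back into the spare pool (10+2+1).
    g["global_parity"] -= 1
    g["spares"] += 1

add_global_parity(grouping)       # predicted error rate rose: now 10+2+2
reclaim_global_parity(grouping)   # no error events later: back to 10+2+1
assert grouping["global_parity"] == 1 and grouping["spares"] == 3
```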


In another embodiment, the newly created AD RAID Array 114, initialized to have a virtual RAID grouping of 10+2+1, can be reconfigured to a higher reliability safety level, such as a virtual RAID grouping of 10+2+3, based on the unknown behavior and uncertainty of the newly purchased DSDs 106. After reliability baseline operations have been established, the RAID grouping can be reconfigured down to the original initialized virtual RAID grouping of 10+2+1.


It is understood that functional components shown, such as the RMU 122, the DSDRMU 116, or any combination thereof, can be partitioned differently. In an embodiment, the entire RMU 122 and parts of the DSDRMU 116 could be integrated together, as an example. In another embodiment, the DSDRI 308 of the DSDRMU 116 could be integrated into the RMU 122, as an example.


It has been discovered that embodiments provide a more reliable, accurate, and early assessment of pending problems to trigger a reconfiguration process before a failure occurs, in comparison to simple threshold-based algorithms. These improvements are provided with the use of the APA 142 by the controller 102, the DSDs 106, or any combination thereof to self-monitor, detect, and report a near-failure condition of the DSDs 106, the DC 108, or the AD RAID Array 114.


The RIR 332 can be recalculated on pre-determined periodic intervals or on-demand. The RIR 332 can determine if a reconfiguration is required to preempt a probable failure of the AD RAID Array 114. This reconfiguration results in a fast and efficient method to determine and preempt the probable failure to provide superior data availability of the AD RAID Array 114 compared to other clustered or non-clustered RAID array systems.


Referring now to FIG. 4, therein is shown a control flow for continuous self-monitored processing of reliability factors to detect, trigger, and reconfigure the AD RAID Array 114 of FIG. 1A according to various embodiments. An oval 402, labeled as virtual RAID initialized and operational, represents enablement of a self-monitoring process to monitor a virtual RAID, such as the AD RAID Array 114, by the controller 102 of FIG. 1A.


A diamond-box 404, labeled with the phrase starting with "compare current calculated RIR with previous RIR," describes an action to compare consecutive RIRs and determine whether the RIR difference is greater than a predetermined threshold, δ. As an example, the diamond-box 404 represents the controller 102 self-monitoring and periodically sampling the reliability factors of the AD RAID Array 114 over time. The sampling can retrieve the reliability information of the DC 108 of FIG. 1A from the DSDRMU 116 of FIG. 3 using the RCI 310 of FIG. 3.


As an example, the controller 102 calculates the overall reliability factors used to generate the ORI 316 of FIG. 3 and subsequently calculates the RIR 332. A currently calculated RIR 332 is compared against a previously calculated RIR, e.g., one that immediately preceded the current RIR. If the difference between the current RIR 332 and the previously calculated RIR 332 is greater than the predetermined threshold δ (represented by the Y path leading out of the diamond-box 404), then the controller 102 begins to determine if and what type of a reconfiguration is to be performed. Otherwise, the controller 102 can continue to calculate and compare pairs of the RIR 332 (current vs. previous) with the predetermined threshold δ, as represented by the N path leading out of the diamond-box 404.


A dashed-box 406 contains a reconfiguration process used by the controller 102 to determine if and what type of the reconfiguration is to be performed. The reconfiguration process 406 includes a downgrade type 407 and an upgrade type 409.


In the downgrade type 407, the RIR 332 is compared against a first predefined reliability threshold, θ1, to determine if the AD RAID Array 114 has a configured reliability that is much more reliable than needed. In the upgrade type 409, the RIR 332 is compared to a second predefined reliability threshold, θ2, to determine if the AD RAID Array 114 has a configured reliability that is in need of a more reliable configuration.


The RIR 332 greater than θ1 indicates that all of the DC 108 and the DSDs 106 of the AD RAID Array 114 exceed the storage reliability needed for the electronic system 100 of FIG. 1A. This RIR 332 condition can indicate that the resources, such as the DSDs 106, the DC 108, or a combination thereof, are not being efficiently used and parity protection should be downgraded to free up DC for data storage use, e.g., for use by another AD RAID Array 114 as previously described in FIG. 2.


The RIR 332 less than θ2 can indicate that the AD RAID Array 114 is in need of a reliability upgrade to meet the requirements of the electronic system 100, and parity protection may need to be upgraded (and/or other reliability enhancing measures taken, such as data refresh or migration) as previously described in FIG. 2. The RIR 332 equal to or greater than θ2 and equal to or less than θ1 indicates that the storage reliability is currently within the requirement range of the electronic system 100 and should continue to be monitored in the diamond-box 404, with no upgrade or downgrade reconfiguration action needed.
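
For illustration only, the flow of the diamond-box 404 and the dashed-box 406 can be sketched as a single monitoring step. This is a minimal sketch; the values of δ, θ1, and θ2 are illustrative and the helper names are hypothetical.

```
# Minimal sketch of the FIG. 4 self-monitoring loop; delta and the theta
# thresholds are illustrative values, and the helper names are hypothetical.
DELTA = 0.05     # predetermined threshold on the change in RIR
THETA_1 = 0.98   # above this, the array is more reliable than needed
THETA_2 = 0.90   # below this, the array needs a reliability upgrade

def monitor_step(current_rir, previous_rir):
    # One pass of diamond-box 404 and dashed-box 406.
    if abs(current_rir - previous_rir) <= DELTA:
        return "keep_monitoring"              # N path out of box 404
    if current_rir > THETA_1:
        return "evaluate_downgrade_rules"     # downgrade type 407
    if current_rir < THETA_2:
        return "evaluate_upgrade_rules"       # upgrade type 409
    return "keep_monitoring"                  # within the acceptable range

assert monitor_step(0.99, 0.92) == "evaluate_downgrade_rules"
assert monitor_step(0.85, 0.93) == "evaluate_upgrade_rules"
assert monitor_step(0.94, 0.93) == "keep_monitoring"
```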


As indicated in the dashed-box 406, a decision to downgrade or upgrade is determined by evaluation of downgrade reconfiguration rules 410 or evaluation of upgrade reconfiguration rules 411, respectively. The upgrade and the downgrade rules can be based on rules used by the controller 102, the SU 104 of FIG. 1A, users/owners of the electronic system 100, or any combination thereof.


The upgrade and the downgrade rules can include the rules applied in and with the analysis and computations from a system user or an operator of the electronic system 100, the DSDRMU 116, the RMU 122, the RI 124, the RS 126, the BER, the EM, the APA 142, the multi-layered back-propagation neural networks, the MTTDL, the MTTF, the RIR 332, the HRI 320, the LRI 322, the ORI 316, the predetermined threshold δ, and the predefined reliability thresholds θ1 and θ2, as examples.


A dotted-box 414 shows a layer of decision boxes used to make a final determination of whether the downgrade or upgrade needs to be performed immediately or deferred. The decisions can be based on criticality of the data, customer requirements, applications being performed, time of use, or any combination thereof.


In an embodiment, a decision to defer the reconfiguration can be a result of the AD RAID Array 114 having a light workload, or of an unavailability of the RCI 310 to process the reconfiguration of the AD RAID Array 114 due to priority communications performing reconfigurations of a different AD RAID Array 114. The light workload can be attributed to being offline, being idle, or being phased out. If no deferral is decided, then the reconfiguration proceeds in box 415. In either case, whether the reconfiguration takes place or is deferred, the process returns to box 404 where monitoring continues.


In another embodiment, a decision to immediately reconfigure could be necessary to free up some of the clusters by reallocating some of the DC 108 as the SDC 112 for use in another one of the AD RAID Array 114. In another embodiment, a decision to immediately reconfigure could be to replace one of the DC 108 having critical information approaching failure with another DC 108 or one of the SDC 112. In an embodiment, the decision to defer or immediately reconfigure can be directed with the use of the adaptive thresholds described in FIG. 3.


It is understood that whether a reconfiguration is triggered based on a current RIR 332 value, a user requirement, or any combination thereof, the actual trigger rule can range from a simple threshold method, with or without neural networks, to intelligent formulations using heuristic methodologies. The reconfiguration start time depends on many factors such as the data criticalness, system workload, performance, or any combination thereof.


In various embodiments using other types of storage media, such as SSDs, the concepts, processes, and methods described for the electronic system 100 in FIG. 1A, FIG. 1B, FIG. 2, FIG. 3, and FIG. 4 can also apply. For example, each of the SSDs records a wearing value that can be used by the controller 102 to determine indicators similar to the RI 124, the RS 126, and the ORI 316 for reliability metrics of the SSD data chunks.


It has been discovered that the architecture and use of the predetermined threshold, δ, the predefined reliability threshold, θ1, the predefined reliability threshold, θ2, or any combination thereof by the DSDRMU 116 of the AD RAID Array 114 provides a method to quickly modify the reliability and performance of the AD RAID Array 114 by simply modifying any combination of the predetermined threshold, δ, the predefined reliability threshold, θ1, or the predefined reliability threshold, θ2, without the need for long, complex calculations and analysis.


Referring now to FIG. 5, therein is shown Multiple Attribute Reliability and Metadata Interfaces (MARMI) 502 of the electronic system 100 of FIG. 1A coupling the controller 102 of FIG. 1A with the DSDs 106 of FIG. 1A of the SU 104 of FIG. 1A according to various embodiments. In an embodiment, the MARMI 502 can include Chunk Reliability Attributes (CRA) 504 forming one or more Chunk Groups (CG) 506, one or more Chunk Servers (CS) 510, a Metadata Master (MM) 514, optional Shadow Masters (SM) 516, and the RMU 122 of FIG. 1A.


The Chunk Reliability Attributes (CRA) 504, shown in a dashed-box, can include Data Error Rates (DER), the EM, the BER, recoverable errors, non-recoverable errors, storage performance and efficiency ratings, the RI 124 of FIG. 1A and the RS 126 of FIG. 1A of each of the DC 108 of FIG. 1A and the SDC 112 of FIG. 1A, or any combination thereof of the DSDs 106 of FIG. 1A.


The CRA 504, with the reliability attributes of the chunks 202 of FIG. 2, can be used to calculate the RIR 332, previously described in FIGS. 3 and 4. The CRA 504 can be entries in memory from circuitry, software, or any combination thereof. The CRA 504 can be organized in any number of the CG 506. For example, a first CG 506 can have 1 to P entries, a second CG 506 can have 1 to Q entries, and a third CG 506 can have 1 to R entries of the CRA 504, where P, Q, and R can be any integer value. The MM 514 provides the chunk mapping information for any of the AD RAID Arrays 114 of FIG. 1A to the RMU 122 using a Chunk Map Interface (CMI) 518.


Each of the CG 506 is directly connected to one of many of the CS 510, such as a CS 1 to CS S, where S can be any integer value. The RMU 122 can communicate directly with the CS 510 to obtain the CRA 504 using a Direct Chunk Access Interface (DCAI) 522 and shunt interfaces 524, shown with dotted-lines. The CRA 504 can be processed by each of the CS 510 and viewed by the MM 514, using Metadata Master To Chunk Server Interfaces (MMTCSI) 526, as another type of metadata and can be stored as metadata in the MM 514 of the controller 102 for use by the RMU 122 through the CMI 518.


In an embodiment, the RCI 310 of FIG. 3 can be used to connect the CS 510 with the CG 506. The optional Shadow Masters (SM) 516, numbered from 1 to T, where T is any integer number, can be included in the controller 102, directly connected to the MM 514, to augment and off-load management and mapping tasks of the MM 514 by reducing the number of the CS 510 managed by the MM 514. The CS 510 also off-load the MM 514 by managing, pre-processing, and channeling the CRA 504 to separate MMTCSI 526 for simplified mapping by the MM 514.


The MMTCSI 526 send trigger warning/alarm signals, from the DSDs 106 to the controller 102, that are associated with one or more of the CRA 504 when one or more of the CRA 504 does not satisfy the predefined reliability thresholds, predefined requirements, predefined constant values, predetermined thresholds, or any combination thereof.


It is understood that functional components shown, such as the RMU 122, for example, can be integrated differently. For example, in a small scaled electronic system 100, the functionality of RMU 122 could be integrated with one or more of the CS 510 and eliminate or absorb some functions of the MM 514.


In various embodiments, the DCAI 522, the CMI 518, the shunt interface, the MMTCSI 526, and the RCI 310 can operate with a common connection and communication infrastructure, such as an interface with Vendor Specific Commands (VSC) or other specific Application Programming Interface (API), as examples. The CRA 504 can be stored in the controller 102, the SU 104, or any combination thereof.


In an embodiment, the CRA 504 can be also classified as a type of metadata, hence, the CRA 504 can be directly stored or reside in the MM 514. In an embodiment, the CS 510 reduces the workload of MM 514 by distributing the CRA 504 information to the CS 510 for further distribution to the MM 514 and the RMU 122. The MMTCSI 526 enables the RMU 122 to directly monitor the RIR 332 of FIG. 3 and calculate, evaluate, and adjust the RIR 332 under different situations, for example.


In various embodiments, the MARMI 502 can include the fuzzy logic, the neural networks, the SVM, or other advanced prediction or learning algorithms to predict and compensate for changes, as described in FIG. 3. The MARMI 502 can also provide the reliability-related raw data/attributes, such as the EM (error margin) or the BER (bit error rate) associated with the DC 108 and/or the DSDs 106 to the controller 102 when requested, such as for user review, validation of reliability target metrics, or periodic performance monitoring of the electronic system 100.


In various embodiments, the MARMI 502 can store chunk size information used to create the chunks 202 in a normal operating mode, an advanced operating mode, or any combination thereof. In the normal operating mode, the DSDs 106 can only store and process a limited amount of historical reliability data information without the intervention from the controller 102. In the normal operating mode, the DSDs 106 can store information for each head, such as one record per data zone per head or even one record per head, resulting in storing 240 records or 4 records, respectively, assuming there are 4 heads and 60 data zones, as an example, for creating the chunks 202.


In the advanced operating mode, the MARMI 502 can provide the chunk size information to the DSDs 106. The DSDs 106 will try to allocate continuous tracks within one surface of a disk into a single one of the DC 108. The DSDs 106 can decide the granularity at which to store the records based on some parameters of the DSDs 106, such as the size of the local cache, the external cache, or a combination thereof, based on the configuration of the MARMI 502.


For example, one of the DSDs 106 can be a 4 TB drive with a 64 MB chunk size and 65536 chunks, and can select a granularity of 16 to group approximately 16 contiguous chunks on the same disk surface into one record, so that only 4096 records are required. The MARMI 502 can be used to configure the granularity or grain size based on predetermined or user settings.
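
For illustration, the arithmetic in this example can be verified directly (a minimal sketch; TB and MB are treated as binary units to match the chunk count given in the text):

```
# Worked arithmetic for the 4 TB drive example (values from the text).
drive_bytes = 4 * 2**40    # 4 TB (binary units assumed)
chunk_bytes = 64 * 2**20   # 64 MB chunk size
chunks = drive_bytes // chunk_bytes
assert chunks == 65536

granularity = 16           # ~16 contiguous chunks grouped into one record
records = chunks // granularity
assert records == 4096
```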


As an example, the advanced operating mode provides much more chunk formatting efficiency and storage utilization accuracy than the normal operating mode, but requires additional memory space, background capability, and computational power provided by the MARMI 502. The MARMI 502 can also store the records in a dedicated space such as a cache within the controller 102, a cache of the CPU of FIG. 1A, the shared memory of FIG. 1A, or any combination thereof. The additional memory space can be used to store other attributes such as a chunk access count, access temperature, workload/utilization history, or any combination thereof. Each of the records can have an associated time-stamp or valid period (time-window). When any of the records become invalid, the MARMI 502 must issue a request to update the value.


In the normal operating mode, during initialization of the DSDs 106, the DSDs 106 choose a few places on each disk surface from which to gather data, such as once per data zone per head, for example. In the advanced operating mode, more records are required, and the DSDs 106 can initially use the data zone's value or the head's value to avoid too many background accesses before accessing the additional memory space.


There can be two types of record accesses/updates. The first type of record access/update is passive access and occurs when one of the DSDs 106 is accessing a region having both user data and the record, such as a user, customer, or application reading/writing user data on tracks that also include the record near or adjacent to the user data.


The second type of record access/update is active access and occurs by either explicit access or automatically triggered access of the record by the MARMI 502. In some embodiments, the MARMI 502 can explicitly access specific locations on tracks and access or update the record as needed, as an example.


In other embodiments, the MARMI 502 can schedule periodic accesses, at a predetermined sampling frequency, to specific locations on the tracks to access or update the record as needed, as another example. Configuration reliability downgrades, such as a reallocation of one or more of the DC 108 to another of the SDC 112, are usually performed using the active access type of record update unless there is a need for deep reformatting/reconfiguration, in which case the record is initialized again.


The MARMI 502 can monitor, online, the reliability attributes of the AD RAID Arrays 114 and dynamically reconfigure any of the AD RAID Arrays 114 without interrupting service or operation of the SU 104, in order to maintain the MTTDL at an acceptable level across the life cycles of the DSDs 106 in the electronic system 100. This is further described in FIG. 8, below.


The RI 124 of FIG. 1A can be updated in an active mode, a passive mode, or any combination thereof. In the passive mode, once an access happens to one of the DC 108, the reliability information corresponding to that one of the DC 108 can be obtained from the DSD 106 by having the controller 102 read the reliability raw data of the one of the DC 108 and calculate the RI 124 from that raw data. In the active mode, a sampling frequency is set, and the RI 124 is recalculated and updated at least once per sampling time window.
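A minimal sketch of the two update modes follows; the `read_raw_reliability` accessor, the RI mapping, and the window length are all hypothetical.

```python
import time

SAMPLING_WINDOW_S = 60.0     # assumed sampling time window
_last_active_update = 0.0

def compute_ri(raw):
    # Placeholder mapping from raw reliability data (e.g., BER, error
    # margin) to a scalar indicator; the real mapping is implementation-
    # specific and not given by the text.
    return 1.0 - raw.get("ber", 0.0)

def passive_update(ri_table, dsd, chunk_id):
    """Passive mode: piggyback on an access that touches one of the DC 108."""
    ri_table[chunk_id] = compute_ri(dsd.read_raw_reliability(chunk_id))

def active_update_due(now=None):
    """Active mode: recalculate at least once per sampling time window."""
    now = time.monotonic() if now is None else now
    return now - _last_active_update >= SAMPLING_WINDOW_S
```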


In various embodiments, records such as the RI 124 can be updated when a user IO command from the controller 102 enters the DSDs 106: the DSDs 106 can obtain the corresponding reliability information of the command's access regions, such as the DC 108, at the same time and then feed it back to the controller 102. At other times, when there is no IO command in the controller 102, the controller 102 can independently trigger a reliability request to a drive, if necessary.


The independent triggering of a reliability request by the controller 102 is usually a background task and can be submitted independently of IO activity, or in line between the user IO commands, by assigning/adjusting a task priority level of the reliability request according to different activities/situations. The system can also set a sampling period for updating the record on either the controller 102 side or the DSDs 106 side. If there is a user IO command within the sampling period, the record is updated along with it; otherwise, a background command should be issued to get the record.
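As a minimal sketch of this scheduling rule — the queue, priority values, and command tag below are assumptions, not part of the embodiments — the update can ride along with user IO within a sampling period and otherwise be queued as a background request:

```python
import queue

HIGH, LOW = 0, 1      # assumed priority levels (lower value = higher priority)
tasks = queue.PriorityQueue()

def on_sampling_period_end(user_io_seen):
    """End of a sampling period: decide how the record gets refreshed."""
    if user_io_seen:
        return  # record was already refreshed alongside the user IO
    # No user IO this period: queue a background reliability request.
    tasks.put((LOW, "read-reliability-record"))

def submit_inline_reliability_request():
    """Elevated priority when the update must run between user IO commands."""
    tasks.put((HIGH, "read-reliability-record"))
```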


The process to initialize a record on the controller 102 side includes setting the MARMI 502 operating mode to either the normal operating mode or the advanced operating mode in the user's configuration settings, setting the MARMI 502 record access type to either active access or passive access, and obtaining reliability raw data for each of the DC 108 from the DSDs 106.


The process to initialize a record on the DSDs 106 side includes setting the MARMI 502 operating mode to either the normal operating mode or the advanced operating mode in the controller configuration settings. This process to initialize the record in the DSDs 106 can also include determining a record number from the controller 102 and the DSDs 106 configuration, setting the record access type to either active access or passive access, and reading/writing particular places in the DSDs 106 to obtain and record reliability-related raw data.
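A minimal sketch of the two initialization sequences, assuming hypothetical configuration keys and DSD accessors throughout:

```python
def init_controller_records(user_config, dsds):
    """Controller-side record initialization, following the steps above."""
    mode = user_config.get("operating_mode", "normal")     # or "advanced"
    access = user_config.get("record_access", "passive")   # or "active"
    raw = {dc: dsd.read_raw_reliability(dc)
           for dsd in dsds for dc in dsd.data_chunks}
    return {"mode": mode, "access": access, "raw": raw}

def init_dsd_records(dsd, controller_config):
    """DSD-side record initialization, following the steps above."""
    mode = controller_config.get("operating_mode", "normal")
    access = controller_config.get("record_access", "passive")
    # Record count derived from the controller and DSD configuration.
    count = dsd.chunk_count // controller_config.get("granularity", 16)
    # Read designated places on the media to seed reliability raw data.
    return {"mode": mode, "access": access,
            "seeds": [dsd.sample_location(i) for i in range(count)]}
```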


It has been discovered that the combination of the DCAI 522 and the shunt interfaces 524 interconnecting the CS 510 with one another provides fast communication paths between each of the CS 510 to minimize access latency between any one of the CG 506 and any one of the CS 510. The fast communication paths result in fast, early, preemptive detection of reliability issues with an AD RAID Array 114 before failures occur. The early preemptive detection can result in fast and expedient actions such as allocation, reallocation, or any combination thereof of any of the DC 108 associated with the AD RAID Array 114.


It has been discovered that the MARMI 502 provides the RMU 122 with the ability to quickly access the RIR 332 calculations, the chunk mapping of the CRA 504, and the chunk mapping of the AD RAID Arrays 114 to adjust for different reliability situations. Adjusting for the different reliability situations enables optimum configuration, reconfiguration, allocation, reallocation, or any combination thereof for any of the AD RAID Arrays 114, providing extremely efficient adjustments to the AD RAID Arrays 114.


It has been discovered that the additional memory space, background analysis capability, and computational power of the MARMI 502 enable the advanced operating mode to substantially increase the chunk formatting efficiency and storage utilization of the DSDs 106 compared with the normal operating mode and other declustered RAID arrays.


It has been discovered that the MARMI 502, formed of the RMU 122, the MM 514, the CS 510, and the CG 506 of the CRA 504, provides the controller 102 with the capability to monitor the reliability attributes of the DSDs 106 and reconfigure the AD RAID Arrays 114 without interrupting RAID service, maintaining an acceptable MTTDL, system performance, and system reliability.


It has been discovered that the DCAI 522, the CMI 518, the MMTCSI 526, and the shunt interfaces 524 can be similar or identical to the RCI 310 to provide a common connection and communication infrastructure using a single set of VSCs or APIs. The common connection and communication infrastructure results in cost savings, savings in circuitry, and a substantially reduced time to market due to shortened development.


It has been discovered that the reliability index of RAID (RIR), including the MTTDL and the MTTF, can be quickly calculated from the CRA 504 to enable fast trade-off analysis using the APA 142 of the RMU 122 and therefore faster preemptive actions such as configuration upgrades, downgrades, or any combination thereof.
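For orientation, the following standard textbook approximation (an illustration, not necessarily the RIR formula of the embodiments) relates the MTTDL of an array of $N$ devices tolerating a single failure to the per-device mean time to failure (MTTF) and mean time to repair (MTTR):

$$\mathrm{MTTDL} \approx \frac{\mathrm{MTTF}^2}{N\,(N-1)\,\mathrm{MTTR}}$$

Because every term in a closed form of this kind can be drawn from the per-chunk records of the CRA 504, it can be re-evaluated cheaply whenever the records change, which is what makes fast trade-off analysis possible.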


Referring now to FIGS. 6A and 6B, therein are shown flow charts of record updating by the controller 102 of FIG. 1A using an optional sampling frequency feature and record updating by the DSDs 106 of FIG. 1A according to various embodiments.



FIG. 6A shows a flow chart of an optional sampling frequency feature to actively update records, according to various embodiments. As an example, shown are the process/decision blocks within the controller 102 that can be individually performed, processed, and executed with firmware, hardware, or any combination thereof.


Shown is a block 602, labeled as initialize controller record updater, representing initiation procedures that occur on power-up or reset. As an example, block 602 can include actions to perform a power-up initiation procedure of the controller 102 of FIG. 1A, including initialization of the RMU 122 of FIG. 3. In this example, the initialization can include initializing default options, parameters, initial records, and settings including the process to initialize the record in the controller, described in various embodiments.


Block 604, labeled as sampling frequency set, can follow block 602 and can represent a decision launching block to determine whether the optional sampling feature to update records is enabled or disabled, before branching to a path for No or a path for Yes. The optional sampling feature can be enabled/disabled during the initiation procedures of block 602, or changed at any time by the controller 102 or user/operator intervention.


In block 604, if the optional sampling feature is determined to be disabled, then the path labeled N is taken; otherwise, the path labeled Y is taken. Path Y of block 604 leads to block 608, labeled as IO command within sampling, representing the enablement of the optional sampling feature. Block 608 can be an action to wait and monitor for either an IO command or a sample trigger event without any IO command. The sample trigger event indicates that records should be updated to ensure that the records are current, and not stale from extended periods without IO commands.


When an IO command is detected in block 608, path Y is taken to block 610, labeled as update records. Block 610 represents the updating of records with IO data associated with the IO command. The IO data can, for example, include physical attributes and associated information such as reliability, status, performance, or exception information. After the records are updated, block 610 can branch back to block 604 (branch back not shown for simplicity of presentation) and the process flow can be resumed/restarted at block 604.


Returning back to block 608, if the sample trigger event is detected without any IO command in block 608, then path N is taken to block 612, labeled as trigger a background command to the DSD 106 to get the data. Block 612 represents the autonomous generation of at least one background command that includes the reading of the IO data from one or more of the DSDs 106 before branching to the process/decision block 610. Records are updated in block 610, before a branch back to block 604 occurs to resume/restart the process flow.


Returning back to block 604, if the optional sampling feature is determined to be disabled in block 604, then path N to block 606 is taken irrespective of any sample trigger event. Block 606, labeled as any user IO command, can represent a decision to evaluate/determine whether the IO command is a user IO command directed to at least one of the DSDs 106 or a non-user IO command. If a user IO command is detected in block 606, then path Y is taken to block 610 to update the records. After the records are updated, block 610 branches back to block 604 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 606, if a non-user command is detected in block 606, then path N is taken to block 614, labeled as controller triggered command. Block 614 represents a decision process to determine if the non-user command is a controller command or a non-controller command. If the command is determined to be a controller command in block 614, then path Y is taken to block 612 to first autonomously generate the background command to read the IO data, followed by a branch to block 610 to update the records, and subsequently branch back to block 604 to resume/restart the process flow.


Returning back to block 614, if the non-user command is determined to not be a controller command in block 614, then path N to block 616 is taken. Block 616, labeled as no update, represents a decision to not update any of the records and to branch back to block 604 to resume/restart the process flow.
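Read as pseudocode, FIG. 6A reduces to a small event loop. The sketch below is a non-normative Python rendering; the event kinds, the `background_read` accessor, and the stubbed helpers are assumptions:

```python
def init_controller_record_updater():
    pass  # block 602: power-up initialization, elided in this sketch

def update_records(data):
    pass  # block 610: record update, elided in this sketch

def controller_record_updater(events, sampling_enabled, dsd):
    """Non-normative rendering of FIG. 6A; event kinds are assumed names."""
    init_controller_record_updater()                     # block 602
    for event in events:                                 # implicit loop to 604
        if sampling_enabled:                             # block 604, path Y
            if event.kind == "io":                       # block 608, path Y
                update_records(event.data)               # block 610
            elif event.kind == "sample_trigger":         # block 608, path N
                update_records(dsd.background_read())    # blocks 612 -> 610
        else:                                            # block 604, path N
            if event.kind == "user_io":                  # block 606, path Y
                update_records(event.data)               # block 610
            elif event.kind == "controller_cmd":         # block 614, path Y
                update_records(dsd.background_read())    # blocks 612 -> 610
            # any other command: block 616, no update
```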



FIG. 6B shows a flow chart of record updating by the DSDs 106 without an optional sampling frequency feature, according to various embodiments. As an example, shown are the process/decision blocks within the DSDs 106 that can be individually performed, processed, and executed with firmware, hardware, or any combination thereof.


Shown is a block 622, labeled as initialize DSD record updater, representing initiation procedures that occur on power-up or reset. As an example, block 622 can include a power-up initiation procedure of each of the DSDs 106 and initialization of the DSDRMU 116 of FIG. 3.


Block 624, labeled as any user IO command, follows block 622 after initiation procedures have completed and represents a decision launching action to wait until either a user IO command or a non-user command is detected. If a user IO command is detected in block 624, path Y is taken to block 626 to update records of the DSDs 106. After the records have been updated in block 626, the controller 102 is notified that records of the DSDs 106 have been updated before block 626 branches back to block 624 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 624, if a non-user command is detected in block 624, then path N to block 628 is taken. Block 628, labeled as any controller triggered command, is a decision block to determine if the non-user command is a controller command or a non-controller command. If it is determined in block 628 that the non-user command is a controller command, then path Y is taken to block 626.


The controller command can be a background command from block 612 of FIG. 6A, for example. In block 626 the controller command is executed and the records of the DSDs 106 are updated before the controller 102 is notified that the records have been updated. After the controller 102 is notified that the update has completed, block 626 branches back to block 624 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 628, if the non-user command is a non-controller command, then path N is taken to block 630, labeled as no update. Block 630 represents a decision block to not update any of the records in the DSDs 106 and to branch back to block 624 to resume/restart the process flow.


It has been discovered that the RMU 122 and the controller 102 with the optional sampling frequency feature, combined with the DSDRMU 116, optimize the triggering and decision processing of record updating by forming a self-managed, two-leveled, parallel architecture that results in an autonomous, fast, efficient, and low overhead record updating management process.


Referring now to FIGS. 7A and 7B, therein are shown flow charts of record updating by the controller 102 of FIG. 1A without an optional sampling frequency feature and record updating by the DSDs 106 of FIG. 1A using an optional sampling frequency feature according to various embodiments.



FIG. 7A shows a flow chart of record updating by the controller 102 without an optional sampling frequency feature, according to various embodiments. As an example, shown are the process/decision blocks within the controller 102 that can be individually performed, processed, and executed with firmware, hardware, or any combination thereof.


Shown is a block 702, labeled as initialize controller record updater, representing initiation procedures that occur on power-up or reset. As an example, block 702 can include actions to perform a power-up initiation procedure of the controller 102 of FIG. 1A, including initialization of the RMU 122 of FIG. 3. In this example, the initialization can include initializing default options, parameters, initial records, and settings including the process to initialize the record in the controller, described in various embodiments.


After the initiation procedure has completed, block 706 is entered. Block 706, labeled as any user IO command, is a decision launching action to wait for either any user IO command or a non-user command. If a user IO command is detected in block 706, then path Y is taken to block 710, labeled as update records. In block 710, the records are updated before branching back to block 706 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 706, if a non-user command is detected in block 706, path N is taken to block 714, labeled as controller triggered command. In block 714 a decision process is used to determine if the non-user command is either a controller triggered command or a non-controller triggered command. If the non-user command is determined in block 714 to be a controller triggered command, path Y is taken to block 716, labeled as trigger a background command to drive to get the data.


Block 716 represents the autonomous generation of at least one background command that includes the reading of the IO data from one or more of the DSDs 106 before branching to block 710 to update the records, and subsequently branching back to block 706 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 714, if the non-user command is determined not to be a controller command in block 714, path N to block 718 is taken. Block 718, labeled as no update, represents a decision to not update any of the records and to branch back to block 706 to resume/restart the process flow.



FIG. 7B shows a flow chart of record updating by the DSDs 106 with an optional sampling frequency feature, according to various embodiments. Shown are the process/decision blocks within the DSDs 106 that can be individually performed, processed, and executed with firmware, hardware, or any combination thereof.


Shown is a block 720, labeled as initialize DSD record updater, representing initiation procedures that occur on power-up or reset. As an example, block 720 can include a power-up initiation procedure of the each of the DSDs 106 and initialization of the DSDRMU 116 of FIG. 3.


Block 722, labeled as sampling frequency set, follows block 720 and represents a decision launching action to determine if the optional sampling feature to update records is enabled or disabled and subsequently branches to path N or path Y. The optional sampling feature can be enabled/disabled during the initiation procedures of block 720, or changed at any time by the controller 102 or user/operator intervention.


In block 722, if the optional sampling feature is determined to be disabled, then path N is taken; otherwise, path Y is taken. Path Y of block 722 leads to block 726, labeled as IO command within sampling, representing the enablement of the optional sampling feature. Block 726 involves waiting and monitoring for either an IO command or a sample trigger event without any IO command. The sample trigger event indicates that records should be updated to ensure that the records are current, and not stale from extended periods without IO commands.


When an IO command is detected in block 726, path Y is taken to block 728, labeled as update records. Block 728 represents the process of updating the records of the DSDs 106, notifying the controller 102 that the records have been updated, and branching back to block 722 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 726, if the sample trigger event is detected without any IO command in block 726, path N is taken to block 730, labeled as issue a background command. Block 730 represents the autonomous generation of at least one background command that includes the reading of the IO data from one or more of the DSDs 106, followed by a branch to block 732, labeled as execute the background command to get the data.


Block 732 represents executing the background command from block 730 to gather the data before branching to block 728. Block 728 updates the records of the DSDs 106, notifies the controller 102 that the records have been updated, and subsequently branches back to block 722 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 722, if the optional sampling feature is determined to be disabled in block 722, then path N is taken to block 724 irrespective of any sample trigger event. Block 724, labeled as any user IO command, is a decision that checks for either a user IO command or a non-user command. If a user IO command is detected in block 724, path Y is taken to block 728 to update the records of the DSDs 106 and notify the controller 102 that records have been updated before branching back to block 722 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 724, if block 724 does not detect a user IO command, then path N to block 734 is taken to evaluate the non-user command. In block 734, labeled as any controller triggered command, if the non-user command is determined to be a controller command, then path Y is taken to block 732, labeled as execute the background command to get the data. Block 732 represents the execution of a controller command, similar to the background command in block 730. After execution of the controller command, block 732 branches to block 728. In block 728, the records are updated and the controller 102 is notified that the records have been updated before branching back to block 722 (branch back not shown for simplicity of presentation) to resume/restart the process flow.


Returning back to block 734, if the non-user command is determined to not be a controller command in block 734, path N is taken to block 736. Block 736, labeled as no update, represents a decision to not update any of the records and to branch back to block 722 to resume/restart the process flow.
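For comparison, a non-normative rendering of the DSD-side loop of FIG. 7B, again with assumed event kinds and accessors; note the extra step of notifying the controller 102 after each update:

```python
def update_and_notify(controller, data):
    controller.record_updated(data)   # block 728: update, then notify 102

def dsd_record_updater(events, sampling_enabled, dsd, controller):
    """Non-normative rendering of FIG. 7B; event kinds are assumed names."""
    for event in events:                                  # loop back to 722
        if sampling_enabled:                              # block 722, path Y
            if event.kind == "io":                        # block 726, path Y
                update_and_notify(controller, event.data)         # block 728
            elif event.kind == "sample_trigger":          # block 726, path N
                cmd = dsd.issue_background_command()      # block 730
                update_and_notify(controller, dsd.execute(cmd))   # 732 -> 728
        else:                                             # block 722, path N
            if event.kind == "user_io":                   # block 724, path Y
                update_and_notify(controller, event.data)         # block 728
            elif event.kind == "controller_cmd":          # block 734, path Y
                update_and_notify(controller, dsd.execute(event)) # 732 -> 728
            # any other command: block 736, no update
```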


It has been discovered that the combination of the RMU 122 and the controller 102 with the DSDRMU 116 and an optional sampling frequency feature to update records of the DSDs 106 optimizes the triggering and decision processing of record updating by forming a self-managed, two-leveled, parallel architecture that results in an autonomous, fast, efficient, and low overhead record updating management process.


Referring now to FIG. 8, therein is shown an example of a graph depicting failure characteristic trends automatically monitored by the controller 102 and the SU 104 to dynamically maintain a predetermined reliability range of the Adaptive Declustered RAID Arrays 114 according to various embodiments. The graph of FIG. 8 is an example of failures that are detected, evaluated, and trigger automatic reconfigurations based on reliability targets. The graph does not represent all possible failure characteristics and behavior of the DSDs. The approaches, methods, features, and processes described below may be used independently of one another or may be combined in various ways for any failure, and are not limited to or by the graph of FIG. 8. The graph shows a y-axis representing a Number of Device Failures (NDF) 802 of storage drives, such as the DSDs 106 of FIG. 1A, as an example.


An x-axis represents an Operating Lifetime Year (OLY) 804 corresponding to an NDF 802 value on the y-axis for a RAID storage system. An origin 806, at the intersection of the x-axis and the y-axis, represents the first day of the first month of operation of the RAID storage system with zero failures. Early Life Failures (ELF) 808, Random Failures (RF) 810, and Wear Out Failures (WOF) 812 are the three major storage device life stages identified in the graph.
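The three stages trace the familiar bathtub curve. As an illustration (not the model of the embodiments), a Weibull hazard rate captures all three stages with a single shape parameter $\beta$ and scale parameter $\eta$:

$$h(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}$$

A shape $\beta < 1$ yields the decreasing failure rate of the ELF 808 stage, $\beta \approx 1$ the roughly constant rate of the RF 810 stage, and $\beta > 1$ the rising rate of the WOF 812 stage.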


The ELF 808 stage occurs in the initial years of life of the RAID storage systems and can depend on system class. For example, a consumer system class can have higher failures than a military system class storage system, which is often built of military specification components and stress tested with accelerated early life stress tests prior to use by the users.


The controller 102 of FIG. 1A with the RMU 122 of FIG. 1A continually monitors the RI 124 of FIG. 1A and the RS 126 of FIG. 1A. The controller 102 detects changes such as decreasing reliabilities of the DSDs 106 due to imminent or sudden failures. The controller 102 performs preemptive actions that include reconfigurations, allocations, and reallocations of the DSDs 106 using the numerous methods and techniques described earlier in FIGS. 1A and 1B through FIGS. 7A and 7B.
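The compare-and-react behavior just described (and recited in claim 1, below) can be summarized in a few lines; the threshold value and the array methods are hypothetical:

```python
RI_DELTA_THRESHOLD = 0.05   # assumed predetermined threshold

def preemptive_step(prev_ri, curr_ri, array):
    """Compare successive reliability indicators taken per the sampling
    frequency; on a large enough change, evaluate upgrade/downgrade rules
    and reallocate parity chunks (all array methods are hypothetical)."""
    if abs(curr_ri - prev_ri) <= RI_DELTA_THRESHOLD:
        return
    if curr_ri < prev_ri:
        # Reliability falling: allocate more local/global parity chunks.
        array.increase_parity()
    else:
        # Reliability recovered: release parity chunks back to spares.
        array.decrease_parity()
```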


The RF 810 stage can span from 70% to 90% of the entire lifespan of RAID storage systems, depending on the system class. Since the failures are few and far between, the neural network based monitoring and the APA 142 described in FIG. 3 provide a detection granularity that is very effective for fast preemptive detections and responses during the RF 810 stage. The controller 102 can perform reconfigurations, allocations, reallocations, or any combination thereof in response to the small, subtle changes in reliability characteristic of storage devices in the RF 810 stage.


The detection granularity is also especially important because the failure rate of a disk drive can result from a higher bit error rate after extended usage, head wear, media grown defects, and other similar conditions, resulting in a reduction in MTTF or unstable swings of the actual MTTF measured by customers. The controller 102 and the SU 104 use numerous methods and techniques to address the RF 810 stage, described earlier in FIGS. 1A and 1B through FIGS. 7A and 7B, and especially in FIGS. 3-5.


The WOF 812 stage is a crucial period when failures of the storage devices can be abrupt or sudden. The controller 102 and the SU 104 maintain the reliability and availability of the user data throughout the WOF 812 stage by providing continued monitoring of multiple metrics with fast detection and preemptive action. Fast detection and preemptive action are vital to maintaining continued storage availability and data integrity. The controller 102 addresses the WOF 812 stage with fast detection and preemptive action as previously described in FIGS. 1A and 1B through FIGS. 7A and 7B.


It has been discovered that the Adaptive Declustered RAID Array 114 can effectively compensate for the ELF 808, the RF 810, and the WOF 812 stages of the DSDs 106 by maximizing the reliability, availability, and performance throughout the entire lifespan of the DSDs 106, providing the product users with a Return On Investment (ROI) of the Adaptive Declustered RAID Array 114 higher than the ROI of other declustered or non-declustered RAID array products.


Any suitable control circuitry may be employed to implement the flow diagrams in the above embodiments, such as any suitable integrated circuit or circuits. For example, the control circuitry may be implemented within a read channel integrated circuit, or in a component separate from the read channel, such as a disk controller, or certain operations described above may be performed by a read channel and others by a disk controller. In one embodiment, the read channel and disk controller are implemented as separate integrated circuits, and in an alternative embodiment they are fabricated into a single integrated circuit or system on a chip (SOC). In addition, the control circuitry may include a suitable preamp circuit implemented as a separate integrated circuit, integrated into the read channel or disk controller circuit, or integrated into a SOC. In addition, any of the above described modules and components may be implemented in firmware, software, hardware, or any combination thereof.


In one embodiment, the control circuitry comprises a microprocessor executing instructions, the instructions being operable to cause the microprocessor to perform the flow diagrams described herein. The instructions may be stored in any computer-readable medium. In one embodiment, they may be stored on a non-volatile semiconductor memory external to the microprocessor, or integrated with the microprocessor in a SOC. In another embodiment, the instructions are stored on the disk and read into a volatile semiconductor memory when the disk drive is powered on. In yet another embodiment, the control circuitry comprises suitable logic circuitry, such as state machine circuitry.


The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain method, event or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described tasks or events may be performed in an order other than that specifically disclosed, or multiple tasks or events may be combined in a single block or state. The example tasks or events may be performed in serial, in parallel, or in some other manner. Tasks or events may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.


While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module, or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the embodiments disclosed herein.


The resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization. Another important aspect of various embodiments is that they valuably support and service the historical trend of reducing costs, simplifying systems, and increasing performance.


These and other valuable aspects of the various embodiments consequently further the state of the technology to at least the next level.


While the various embodiments have been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, the embodiments are intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.

Claims
  • 1. An apparatus, comprising: an adaptive declustered RAID array comprising data storage devices (DSDs), the DSDs comprising data chunks, each of the data chunks allocated as a corresponding one of data, a local parity, and a global parity, the local parity to protect the data, the global parity to protect the local parity; and processing circuitry configured to: generate, at a first time according to a sampling frequency, a first reliability indicator, the first reliability indicator indicating a first reliability status of at least a portion of the adaptive declustered RAID array; generate, at a second time after the first time according to the sampling frequency, a second reliability indicator, the second reliability indicator indicating a second reliability status of the at least the portion of the adaptive declustered RAID array; compare the first reliability indicator and the second reliability indicator; determine whether a difference between the first reliability indicator and the second reliability indicator is greater than a predetermined threshold; and in response to determining that the difference is greater than the predetermined threshold: evaluate upgrade and downgrade configuration rules to determine whether to upgrade or to downgrade the adaptive declustered RAID array; and reallocate the data chunks, by dynamically increasing or decreasing the data chunks allocated as the local parity, the global parity, or a combination thereof, according to the evaluation of the upgrade and downgrade configuration rules.
  • 2. The apparatus of claim 1, further comprising: DSD reliability managing circuitry in one of the DSDs; and a reliability communication interface connected between the processing circuitry and the DSD reliability managing circuitry to provide a dedicated interface for managing reliability of the adaptive declustered RAID array.
  • 3. The apparatus of claim 1, wherein reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating, as a spare data chunk, at least one of the data chunks allocated as one of the local parity and the global parity based on the second reliability indicator.
  • 4. The apparatus of claim 1, wherein reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating, as the global parity, one of the data chunks allocated as the local parity, based on the second reliability indicator.
  • 5. The apparatus of claim 1, wherein reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating, as the local parity, one of the data chunks allocated as the global parity, based on the second reliability indicator.
  • 6. The apparatus of claim 1, further comprising central reliability monitoring circuitry configured to dynamically evaluate the second reliability indicator.
  • 7. The apparatus of claim 1, wherein reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating the data chunks based on a neural network reliability warning scheme of reliability metrics.
  • 8. The apparatus of claim 1, wherein reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating the data chunks based on Bayesian analytical approaches.
  • 9. The apparatus of claim 1, wherein the data chunks further comprise a spare data chunk.
  • 10. The apparatus of claim 9, wherein reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating the spare data chunk as the local parity based on the second reliability indicator.
  • 11. The apparatus of claim 9, wherein reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating the spare data chunk as the global parity based on the second reliability indicator.
  • 12. The apparatus of claim 9, wherein: reallocating the data chunks according to the evaluation of the upgrade and downgrade configuration rules comprises reallocating the spare data chunk for data migration of user data from one of the data chunks to the spare data chunk, based on the second reliability indicator.
  • 13. The apparatus of claim 1, wherein the processing circuitry is further configured to generate the second reliability indicator based on early life failure, random failure, and wear out failure life cycles of the DSDs.
  • 14. The apparatus of claim 1, further comprising a user interface connected to the processing circuitry for sending reliability directives to the processing circuitry for dynamically increasing or decreasing a number of the data chunks.
  • 15. The apparatus of claim 1, wherein the second reliability indicator is indicative of mean time to data loss or mean time to failure of the at least the portion of the adaptive declustered RAID array at the second time.
  • 16. A method of operating an apparatus, the method comprising: configuring an adaptive declustered RAID array comprising data storage devices (DSDs), the DSDs comprising data chunks, each of the data chunks allocated as a corresponding one of data, a local parity, and a global parity, the local parity to protect the data, the global parity to protect the local parity; generating, by processing circuitry, at a first time according to a sampling frequency, a first reliability indicator, the first reliability indicator indicating a first reliability status of at least a portion of the adaptive declustered RAID array; generating, by the processing circuitry, at a second time after the first time according to the sampling frequency, a second reliability indicator, the second reliability indicator indicating a second reliability status of the at least the portion of the adaptive declustered RAID array; determining, by the processing circuitry, a difference between the first reliability indicator and the second reliability indicator; and reallocating, by the processing circuitry, the data chunks, by dynamically increasing or decreasing the data chunks allocated as the local parity, the global parity, or a combination thereof, according to the difference.
  • 17. The method of claim 16, further comprising: connecting a reliability communication interface between a control circuit and DSD reliability managing circuitry in one of the DSDs to provide a dedicated interface for managing reliability of the adaptive declustered RAID array.
  • 18. The method of claim 16, wherein reallocating the data chunks includes reallocating at least one of the data chunks, which is allocated as one of the local parity and the global parity, to be a spare data chunk, based on the second reliability indicator.
  • 19. The method of claim 16, wherein reallocating the data chunks includes reallocating a data chunk which is allocated as the local parity, to be the global parity, based on the second reliability indicator.
  • 20. The method of claim 16, wherein reallocating the data chunks includes reallocating a data chunk, which is allocated as the global parity to be the local parity, based on the second reliability indicator.
  • 21. The method of claim 16, further comprising dynamically evaluating the second reliability indicator by central reliability monitoring circuitry.
  • 22. The method of claim 16, wherein reallocating the data chunks is based on a neural network reliability warning scheme of reliability metrics.
  • 23. The method of claim 16, wherein reallocating the data chunks is based on Bayesian analytical approaches.
  • 24. The method of claim 16, wherein configuring the adaptive declustered RAID array includes configuring a spare data chunk.
  • 25. The method of claim 24, further comprising reallocating the spare data chunk as the local parity based on the second reliability indicator.
  • 26. The method of claim 24, further comprising reallocating the spare data chunk as the global parity based on the second reliability indicator.
  • 27. The method of claim 24, further comprising: allocating the spare data chunk for data migration of user data from one of the data chunks to the spare data chunk, based on the second reliability indicator.
  • 28. The method of claim 16, wherein reallocating the data chunks includes reallocating the data chunks based on early life failure, random failure, and wear out failure life cycles of the DSDs.
  • 29. The method of claim 16, further comprising connecting a user interface to the processing circuitry for sending reliability directives to a controller for dynamically increasing or decreasing a number of the data chunks.
  • 30. A non-transitory computer readable medium including stored thereon instructions to be executed by processing circuitry, the instructions when executed by the processing circuitry cause the processing circuitry to: configure an adaptive declustered RAID array comprising data storage devices (DSDs), the DSDs comprising data chunks, each of the data chunks allocated as a corresponding one of data, a local parity, and a global parity, the local parity to protect the data, the global parity to protect the local parity; generate, at a first time according to a sampling frequency, a first reliability indicator, the first reliability indicator indicating a first reliability status of at least a portion of the adaptive declustered RAID array; generate, at a second time after the first time according to the sampling frequency, a second reliability indicator, the second reliability indicator indicating a second reliability status of the at least the portion of the adaptive declustered RAID array; determine a difference between the first reliability indicator and the second reliability indicator; and reallocate the data chunks, by dynamically increasing or decreasing the data chunks allocated as the local parity, the global parity, or a combination thereof, according to the difference.
  • 31. A controller, comprising: means for configuring an adaptive declustered RAID array comprising data storage devices (DSDs), the DSDs comprising data chunks, each of the data chunks allocated as a corresponding one of data, a local parity, and a global parity, the local parity to protect the data, the global parity to protect the local parity; means for generating a first reliability indicator, the first reliability indicator indicating a first reliability status of at least a portion of the adaptive declustered RAID array; means for generating a second reliability indicator, the second reliability indicator indicating a second reliability status of the at least the portion of the adaptive declustered RAID array; means for determining a difference between the first reliability indicator and the second reliability indicator; and means for reallocating the data chunks, by dynamically increasing or decreasing the data chunks allocated as the local parity, the global parity, or a combination thereof, according to the difference.