The present invention relates to a method of, and apparatus for, version mirroring. In particular, the present invention relates to a method of, and apparatus for, version mirroring using the T10 protocol.
Data integrity is a core requirement for a reliable storage system. The ability to prevent and, if necessary, identify and correct data errors and corruptions is essential for operation of storage systems ranging from a simple hard disk drive up to large mainframe storage arrays.
A typical hard disk drive comprises a number of addressable units, known as sectors. A sector is the smallest externally addressable portion of a hard disk drive. Each sector typically comprises 512 bytes of usable data. However, recent developments under the general term “advanced format” enable support of sector sizes of up to 4 KB. When data is written to a hard disk drive, it is usually written as a block of data, which comprises a plurality of contiguous sectors.
A hard disk drive is an electro-mechanical device which may be prone to errors and/or damage. Therefore, it is important to detect and correct errors which occur on the hard disk drive during use. Commonly, hard disk drives set aside a portion of the available storage in each sector for the storage of error correcting codes (ECCs). This data is also known as protection information. The ECC can be used to detect corrupted or damaged data and, in many cases, such errors are recoverable through use of the ECC. However, for many cases such as enterprise storage architectures, the risks of such errors occurring are required to be reduced further.
One approach to improve the reliability of a hard disk drive storage system is to employ redundant arrays of inexpensive disks (RAID). Indeed, RAID arrays are the primary storage architecture for large, networked computer storage systems.
The RAID architecture was first disclosed in “A Case for Redundant Arrays of Inexpensive Disks (RAID)”, Patterson, Gibson, and Katz (University of California, Berkeley). RAID architecture combines multiple small, inexpensive disk drives into an array of disk drives that yields performance exceeding that of a single large drive.
There are a number of different RAID architectures, designated as RAID-1 through RAID-6. Each architecture offers disk fault-tolerance and offers different trade-offs in terms of features and performance. In addition to the different architectures, a non-redundant array of disk drives is referred to as a RAID-0 array. RAID controllers provide data integrity through redundant data mechanisms, high speed through streamlined algorithms, and accessibility to stored data for users and administrators.
RAID architecture provides data redundancy in two basic forms: mirroring (RAID 1) and parity (RAID 3, 4, 5 and 6). The implementation of mirroring in RAID 1 architectures involves creating an identical image of the data on a primary disk on a secondary disk. The contents of the primary and secondary disks in the array are identical. RAID 1 architecture requires at least two drives and has increased reliability when compared to a single disk. Since each disk contains a complete copy of the data, and can be independently addressed, reliability increases as a power of the number of independent mirrored disks, i.e. in a two disk arrangement, reliability is increased by a factor of four.
RAID 3, 4, 5, or 6 architectures generally utilise three or more disks of identical capacity. In these architectures, two or more of the disks are utilised for reading/writing of data and one or more of the disks store parity information. Data interleaving across the disks is usually in the form of data “striping” in which the data to be stored is broken down into blocks called “stripe units”. The “stripe units” are then distributed across the disks.
Therefore, should one of the disks in a RAID group fail or become corrupted, the missing data can be recreated from the data on the other disks. The data may be reconstructed through the use of the redundant “stripe units” stored on the remaining disks. However, RAID architectures utilising parity configurations need to generate and write parity information during a write operation. This may reduce the performance of the system.
However, even in a multiply redundant system such as a RAID array, certain types of errors and corruptions cannot be detected or reported by the RAID hardware and associated controllers.
A number of errors and corruptions fall into this category. One such error is a misdirected write. This is a situation where a block of data which is supposed to be written to a first location is actually written to a second, incorrect, location. In this case, the system will not return a disk error because there has not, technically, been any corruption or hard drive error. However, on a data integrity level, the data at the second location has been overwritten and lost, and old data is still present at the first location. These errors remain undetected by the RAID system.
A misdirected read can also cause corruptions. A misdirected read is where data intended to be read from a first location is actually read from a second location. In this situation, parity corruption can occur due to read-modify-write (RMW) operations. Consequently, missing drive data may be rebuilt incorrectly.
Another data corruption which can occur is a torn write. This situation occurs where only a part of a block of data intended to be written to a particular location is actually written. Therefore, the data location comprises part of the new data and part of the old data. Such a corruption is, again, not detected by the RAID system.
Additionally, data is not always protected by ECC or CRC (cyclic redundancy check) systems. Therefore, such data can become corrupted when being passed from hardware such as the memory and central processing unit (CPU), via hardware adapters and RAID controllers. Again, such an error will not be flagged by the RAID system.
When silent data corruption has occurred in a RAID system, a further problem of parity pollution may occur. This is when parity information is calculated from (unknowingly) corrupt data. In this case, the parity cannot be used to correct the corruption and restore the original, non-corrupt, data.
Certain RAID systems (for example, RAID 6 systems) can be configured to detect and correct such errors. However, in order to do this, a full stripe read is required for each sub stripe access. This requires significant system resources and time. In addition, certain storage protocols exist which are able to address such issues at least in part. For example, block checksums can be utilised which are able to detect torn writes.
One further corruption that can occur is a lost write. This occurs when firmware returns a success code to indicate successful completion of a write, but does not actually carry out the write process. Such an error cannot be detected using standard block checksums because the disk block retains the data and checksum written on a previous occasion, and so the data and checksum remain consistent with each other.
One approach to the problem of lost writes is what is known as version mirroring. Version mirroring is where each data block which belongs to a RAID stripe contains a version number. The version number is changed with every write to the block, and a parity block is updated to include a list of version numbers of all blocks that are protected thereby.
In use, whenever a data block is read, its version number is compared to the corresponding version number stored in the parity block. If a mismatch occurs, the newer block will have a higher version number and can be used to reconstruct the other data block. This approach is outlined theoretically in “Parity Lost and Parity Regained”, A. Krioukov et al, FAST '08.
However, to date, no suitable method of reliably employing such a technique in a RAID array has been proposed. Therefore, to date, known storage systems suffer from a technical problem that certain data corruptions cannot be detected reliably using methods which can be implemented on existing storage systems.
According to a first aspect of the present invention, there is provided a method of writing data to a data sector of a storage device, the data sector having at least one parity sector associated therewith, each sector being configured to comprise a data field and a data integrity field, the data integrity field comprising a guard field, an application field and a reference field, the method comprising: providing data to be written to an intended sector; generating, for said intended sector, version information for said sector; writing said data to the data field of the data sector; writing said version information to the application field of the data sector; generating a version vector based on said version information for said data sector; and writing said version vector to the application field of the parity sector.
In one embodiment, the method further comprises writing said data in units of blocks, wherein each block comprises a plurality of sectors.
In one embodiment, each sector within a given block is allocated the same version number.
In one embodiment, the version number of a sector is changed each time said sector is written to.
In one embodiment, the version number is incremented each time the sector is written to.
In one embodiment, the version number is changed randomly each time the sector is written to.
In one embodiment, the version number is changed randomly for all blocks.
In one embodiment, the version number is changed randomly independently for each block or group of blocks.
In one embodiment, the version number comprises both an incremented portion and a randomly selected portion.
In one embodiment, the method further comprises: writing said data in units of stripe units, each stripe unit comprising a plurality of blocks.
In one embodiment, said intended sector comprises part of a stripe unit and the method comprises, after said step of providing: reading version information from stripe units associated with said stripe unit; reading the version vector associated with said stripe units; determining whether a mismatch has occurred between the version information and the version vector and, if a mismatch has occurred, correcting said data.
In one embodiment, said version vector comprises the version information for the or each data sector.
In one embodiment, said version vector comprises a reduction function of said version information.
According to a second aspect of the present invention, there is provided a method of reading data from a sector of a storage device, the data sector having at least one parity sector associated therewith, each sector being configured to comprise a data field and a data integrity field, the data integrity field comprising a guard field, an application field and a reference field, the method comprising: executing a read request for reading of data from a data sector; reading version information from the application field of said data sector; reading a version vector from the application field of a parity sector associated with said data sector; determining whether a mismatch has occurred between the version information and the version vector and, if a mismatch has occurred, correcting said data; and reading the data from said data sector.
According to a third aspect of the present invention, there is provided a controller operable to write data to a data sector of a storage device, the data sector having at least one parity sector associated therewith, each sector being configured to comprise a data field and a data integrity field, the data integrity field comprising a guard field, an application field and a reference field, the controller being operable to provide data to be written to an intended sector, generate, for said intended sector, version information for said sector, to write said data to the data field of the data sector, to write said version information to the application field of the data sector, to generate a version vector based on said version information for said data sector; and to write said version vector to the application field of the parity sector.
According to a fourth aspect of the present invention, there is provided a controller operable to read data from a sector of a storage device, the data sector having at least one parity sector associated therewith, each sector being configured to comprise a data field and a data integrity field, the data integrity field comprising a guard field, an application field and a reference field, the controller being operable to execute a read request for reading of data from a data sector, to read version information from the application field of said data sector, to read a version vector from the application field of a parity sector associated with said data sector, to determine whether a mismatch has occurred between the version information and the version vector and, if a mismatch has occurred, to correct said data and to read the data from said data sector.
According to a fifth aspect of the invention, there is provided a data storage apparatus comprising at least one storage device and the controller of the third or fourth aspects.
According to a sixth aspect of the present invention, there is provided a computer program product executable by a programmable processing apparatus, comprising one or more software portions for performing the steps of the first and/or second aspects.
According to a seventh aspect of the present invention, there is provided a computer usable storage medium having a computer program product according to the sixth aspect stored thereon.
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings, in which:
The networked storage resource 10 comprises a plurality of hosts 12. The hosts 12 are representative of any computer systems or terminals that are operable to communicate over a network. Any number of hosts 12 may be provided; N hosts 12 are shown in
The hosts 12 are connected to a first communication network 14 which couples the hosts 12 to a plurality of RAID controllers 16. The communication network 14 may take any suitable form, and may comprise any form of electronic network that uses a communication protocol; for example, a local network such as a LAN or Ethernet, or any other suitable network such as a mobile network or the internet.
The RAID controllers 16 are connected through device ports (not shown) to a second communication network 18, which is also connected to a plurality of storage devices 20. The RAID controllers 16 may comprise any storage controller devices that process commands from the hosts 12 and, based on those commands, control the storage devices 20. RAID architecture combines a multiplicity of small, inexpensive disk drives into an array of disk drives that yields performance that can exceed that of a single large drive. This arrangement enables high speed access because different parts of a file can be read from different devices simultaneously, improving access speed and bandwidth. Additionally, each storage device 20 comprising a RAID array of devices appears to the hosts 12 as a single logical storage unit (LSU) or drive.
The operation of the RAID controllers 16 may be set at the Application Programming Interface (API) level. Typically, Original Equipment Manufacturers (OEMs) provide RAID networks to end users for network storage. OEMs generally customise a RAID network and tune the network performance through an API.
Any number of RAID controllers 16 may be provided, and N RAID controllers 16 (where N is an integer) are shown in
The second communication network 18 may comprise any suitable type of storage controller network which is able to connect the RAID controllers 16 to the storage devices 20. The second communication network 18 may take the form of, for example, a SCSI network, an iSCSI network or fibre channel.
The storage devices 20 may take any suitable form; for example, tape drives, disk drives, non-volatile memory, or solid state devices. Although most RAID architectures use hard disk drives as the main storage devices, it will be clear to the person skilled in the art that the embodiments described herein apply to any type of suitable storage device. More than one drive may form a storage device 20; for example, a RAID array of drives may form a single storage device 20. The skilled person will be readily aware that the above features of the present embodiment could be implemented in a variety of suitable configurations and arrangements.
The RAID controllers 16 and storage devices 20 also provide data redundancy. The RAID controllers 16 provide data integrity through a built-in redundancy which includes data mirroring. The RAID controllers 16 are arranged such that, should one of the drives in a group forming a RAID array fail or become corrupted, the missing data can be recreated from the data on the other drives. The data may be reconstructed through the use of data mirroring. In the case of a disk rebuild operation, this data is written to a new replacement drive that is designated by the respective RAID controller 16.
The host 102 is connected to the RAID controller 104 through a communication network 110 such as an Ethernet and the RAID controller 104 is, in turn, connected to the storage devices 106a-j via a storage network 112 such as an iSCSI network.
The host 102 comprises a general purpose computer (PC) which is operated by a user and which has access to the storage resource 100.
The RAID controller 104 comprises a software application layer 116, an operating system 118 and RAID controller hardware 120. The software application layer 116 comprises software applications including the algorithms and logic necessary for the initialisation and run-time operation of the RAID controller 104. The software application layer 116 includes software functional blocks such as a system manager for fault management, task scheduling and power management. The software application layer 116 also receives commands from the host 102 (e.g., assigning new volumes, read/write commands) and executes those commands. Commands that cannot be processed (because of lack of space available, for example) are returned as error messages to the user of the host 102.
The operating system 118 utilises an industry-standard software platform such as, for example, Linux, upon which the software applications forming part of the software application layer 116 can run. The operating system 118 comprises a file system 118a which enables the RAID controller 104 to store and transfer files and interprets the data stored on the primary and secondary drives into, for example, files and directories for use by the operating system 118. This may comprise a Linux-based system such as LUSTRE.
The RAID controller hardware 120 is the physical processor platform of the RAID controller 104 that executes the software applications in the software application layer 116. The RAID controller hardware 120 comprises a microprocessor, memory 122, and all other electronic devices necessary for RAID control of the storage devices 106a-j.
The storage devices 106a-j forming the RAID array 108 are shown in more detail in
As shown in
The size of a stripe unit can be selected based upon a number of criteria, depending upon the demands placed upon the RAID array 108, e.g. workload patterns or application specific criteria. Common stripe unit sizes generally range from 16 K up to 256 K. In this example, 128 K stripe units are used. The size of each stripe A, B is then determined by the size of each stripe unit in the stripe multiplied by the number of non-parity data storage devices in the array (which, in this example, is eight). In this case, if 128 K stripe units are used, each RAID stripe would comprise 8 data stripe units (plus 2 parity stripe units) and each RAID stripe A, B would be 1 MB wide.
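The geometry just described can be expressed as a short worked example. The sketch below is in C with illustrative names, and simply reproduces the figures of the present example (128 K stripe units across eight non-parity devices).

```c
#include <stdio.h>

/* Worked example: 128 KB stripe units across 8 non-parity data devices give
 * a RAID stripe 1 MB wide (plus the 2 parity stripe units of RAID 6). */
int main(void)
{
    const unsigned stripe_unit_kb = 128;  /* chosen stripe unit size        */
    const unsigned data_devices   = 8;    /* non-parity devices per stripe  */

    unsigned stripe_kb = stripe_unit_kb * data_devices;
    printf("stripe width: %u KB (%u MB)\n", stripe_kb, stripe_kb / 1024);
    return 0;
}
```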
However, the stripe size is not material to the present invention and the present example is given as a possible implementation only. Alternative arrangements may be used. Any number of drives or stripe unit sizes may be used.
The following embodiment of the invention may be utilised with the above RAID arrangement. In the following description, for brevity, a single storage device 106a will be referred to. However, the embodiment of the invention is equally applicable to other arrangements; for example, the storage device 106a may be a logical drive, or may be a single hard disk drive.
Storage on a storage device 106a-j comprises a plurality of sectors (also known as logical blocks). A sector is the smallest unit of storage on the storage device 106a-j. A stripe unit will typically comprise a plurality of sectors.
As set out above, the term “storage device” in the context of the following description may refer to a logical drive which is formed on the RAID array 108. In this case, a sector refers to a portion of the logical drive created on the RAID array 108. The following embodiment of the present invention is applicable to any of the above described arrangements.
The term “sector” used herein, whilst described in an embodiment with particular reference to 520 byte sector sizes, is generally applicable to any sector sizes within the scope of the present invention. For example, some modern storage devices comprise 4 KB data sectors and a 64 byte data integrity field. Therefore, the term “sector” is merely intended to indicate a portion of the storage availability on a storage device within the defined storage protocols and is not intended to be limited to any of the disclosed examples. Additionally, sector may be used to refer to a portion of a logical drive, i.e. a virtual drive created from a plurality of physical hard drives linked together in a RAID configuration.
In this embodiment, the storage device 106 is formatted such that each sector 200 comprises 520 bytes (4160 bits) in accordance with the American National Standards Institute's (ANSI) T10-DIF (Data Integrity Field) specification format. The T10-DIF format specifies data to be written in blocks or sectors of 520 bytes. The 8 additional bytes in the data integrity field provide additional protection information (PI), some of which comprises a checksum that is stored on the storage device together with the data. The data integrity field is checked on every read and/or write of each sector. This enables detection and identification of data corruption or errors. However, the standard PI used with the T10-DIF format is unable to detect lost or torn writes.
ANSI T10-DIF provides three types of data protection: logical block guard for comparing the actual data written to disk, a logical block application tag and a logical block reference tag to ensure writing to the correct virtual block. The logical block application tag is not reserved for a specific purpose.
A further extension to the T10-DIF format is the T10 DIX (Data Integrity Extension) format, which adds 8 bytes of extension information so that PI can potentially be piped from the client-side application directly to the storage device.
As set out above, the data field 202, in this embodiment, is 512 bytes (4096 bits) long and the data integrity field 204 is 8 bytes (64 bits) long. The data field 202 comprises user data 206 to be stored on the storage device 106a-j. This data may take any suitable form and, as described with reference to
In a T10-based storage device, the data integrity field 204 comprises a guard (GRD) field 208, an application (APP) field 210 and a reference (REF) field 212.
The GRD field 208 comprises 16 bits of ECC, CRC or parity data for verification by the T10-configured hardware. In other words, sector checksums are included in the GRD field in accordance with the T10 standard. The format of the guard tag between initiator and target is specified as a CRC using a well-defined polynomial. The guard tag type is required to be a per-request property, not a global setting.
The REF field 212 comprises 32 bits of location information that enables the T10 hardware to prevent misdirected writes. In other words, the physical identity for the address of each sector is included in the REF field of that sector in accordance with the T10 standard.
The APP field 210 comprises 16 bits reserved for application specific data. In practice, this field is rarely used. However, the present invention contemplates, for the first time, that the APP field 210 can be utilised to improve data integrity.
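By way of illustration, the 520-byte sector layout described above might be represented as in the following sketch. It assumes a simple packed representation; the structure and field names are illustrative rather than taken from any particular implementation, and on-disk byte ordering of the tags is not addressed.

```c
#include <stdint.h>

/* Illustrative sketch of a 520-byte T10-DIF formatted sector: 512 bytes of
 * user data (the data field 202) followed by the 8-byte data integrity
 * field 204 comprising the GRD, APP and REF fields. */
#define SECTOR_DATA_BYTES 512u

struct t10_dif_tuple {
    uint16_t grd;  /* guard tag: CRC over the 512 data bytes (GRD field 208) */
    uint16_t app;  /* application tag: used for version mirroring (APP 210)  */
    uint32_t ref;  /* reference tag: target LBA information (REF field 212)  */
};

struct t10_sector {
    uint8_t              data[SECTOR_DATA_BYTES]; /* user data              */
    struct t10_dif_tuple dif;                     /* data integrity field   */
};
```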
In the present invention, the available bits in the APP field 210 are used to provide version mirroring to improve data integrity. In general, version mirroring is where each block (comprising, in this embodiment, 8 sectors) belonging to a RAID stripe contains a version number. The version number is the same for each sector within a block. A parity block is also provided which comprises a copy of the version number for each block associated with that parity block.
The version number is modified in some way with every write to the sector. The parity sector is also updated to include a list of version numbers of all sectors that are protected thereby. Whenever a sector is read, the parity sector is checked. If an error is detected in the version numbers, then a torn write may have occurred and the data error can be reconstructed using the parity sectors and the uncorrupted sectors in the stripe.
A schematic arrangement of version mirroring is shown in
Further, in practice, each stripe unit will comprise a plurality of Linux blocks (each of 4 KB), each of which comprises 8 sectors as described with reference to
As shown, in each case the GRD field 208 comprises CRC checksum data for each sector. In addition, the REF field 212 (not shown in
In this embodiment, the APP fields 210 of each of the blocks A, B, C comprise version information, and the version information is the same for each sector comprising the blocks A, B, C. In this case, each block A, B, C has only been written a single time and so the version number is 0 in each case. As a result, the parity sector P comprises a parity version vector comprising each of the version numbers of A, B and C, or (0, 0, 0). In general, the parity data in the APP field 210 of the parity sector will comprise some reduction function or convolution of the version numbers for A, B and C due to the limited data space available to store complex parity data.
It is to be appreciated that
However, in practice, each stripe unit comprises (in this embodiment) 32 Linux blocks and 256 T10 sectors.
Each sector has the same version number which enables, amongst other things, detection of torn writes within a block. Torn writes occur when only some sectors within a block are written, and cannot be detected by conventional block checksums as used under systems such as Lustre. Torn writes also cannot be detected using conventional T10 checksums.
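A minimal sketch of the stripe-level check described above is given below. For clarity it assumes that the parity block's version vector holds a full copy of every protected block's version (rather than a reduction function, as discussed above), and all names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Version-mirroring consistency check across a stripe: each data block
 * carries a version number (held in the APP field of its sectors) and the
 * parity block's APP field mirrors a copy of every protected block's
 * version. */
#define DATA_BLOCKS_PER_STRIPE 3   /* blocks A, B and C in the example above */

struct stripe_versions {
    uint16_t block_version[DATA_BLOCKS_PER_STRIPE];  /* from each data block */
    uint16_t parity_vector[DATA_BLOCKS_PER_STRIPE];  /* mirrored in parity   */
};

/* Returns the index of the first block whose version disagrees with the
 * parity version vector, or -1 if the stripe is consistent.  A mismatch
 * indicates a lost or torn write and triggers reconstruction. */
static int find_version_mismatch(const struct stripe_versions *s)
{
    for (size_t i = 0; i < DATA_BLOCKS_PER_STRIPE; i++) {
        if (s->block_version[i] != s->parity_vector[i])
            return (int)i;
    }
    return -1;
}
```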
An example of the operation of version mirroring will be described with reference to
Consider now the situation where a subsequent write to A (to generate A′) is processed, starting from the configuration in
At this point, a version mismatch will be identified. However, the lost write, C′, can be reconstructed from P. Then the parity data P′(A′BC′) can be correctly calculated. A′ can then be written, and P′ can be written to the parity block.
As shown above, the use of version numbers in the APP field 210 of the data integrity field 204 of the sectors 200 comprising a block can be utilised to detect lost writes and reconstruct the data.
Torn writes (i.e. where not all of the sectors in a block are written to) cannot be directly detected by conventional T10 sector checksums. As noted, a Linux block comprises 4096 bytes of data instead of 512 bytes as is the case for a T10 sector. Therefore, sector checksums alone (i.e. the data in the GRD field 208) do not provide protection against torn writes within a block (where not all the sectors are written). However, the use of version numbers enables such errors to be detected. By assigning the same version number to each of the eight sectors forming a 4 KB block, a torn write can be detected by a discrepancy in the version numbering within a block.
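The intra-block check might be sketched as below, assuming eight 512-byte sectors per 4 KB block, each carrying the block's version number in its APP field; the names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define SECTORS_PER_BLOCK 8   /* eight 512-byte sectors per 4 KB Linux block */

/* Every sector of a block is written with the same version number, so a torn
 * write (only some sectors of the block updated) appears as a discrepancy in
 * the APP-field versions within the block. */
static bool block_versions_consistent(const uint16_t app_tags[SECTORS_PER_BLOCK])
{
    for (size_t i = 1; i < SECTORS_PER_BLOCK; i++) {
        if (app_tags[i] != app_tags[0])
            return false;   /* torn write detected within the block */
    }
    return true;
}
```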
In summary, by storing version mirroring information in the T10-DIF APP (application) field 210 of each sector 200 comprising a Linux block and forming part of a stripe unit, and a copy of the version information from each block (the “version vector”) in the APP field of the parity block, silent data corruption can be reliably detected.
This “vertical” redundancy of information, combined with the “horizontal” redundancy of version mirroring using the parity block, provides complete protection against torn writes in place of block checksums, but requires reading of at least the APP field of the parity blocks, to validate the versions of the data blocks.
The version information used in the APP field 210 of each sector 200 may be generated in a number of ways. A number of alternative approaches are listed below. However, this list is intended to be non-exhaustive and non-limiting and the skilled person would be readily aware of other approaches that may be used.
In a first arrangement (option 1), the version numbering starts at 1 for each stripe unit and increases by 1 for every rewrite of the stripe unit. The parity version vector stored in the APP field 210 of the parity block must represent the versions of each stripe unit. Therefore, for a stripe of 8 data stripe units sharing the 16-bit APP field, there would be 2 bits per version in the parity version vector.
This arrangement is sufficient to detect torn writes in general, because 4 torn writes would need to be missed in a row to result in overrun. However, the version vector or the stripe unit version must be read at each full or partial stripe write in order to know the value to increment.
As a further benefit, the “reconstruct-write” approach used by RAID 6 requires re-reading all of the sectors to recompute the parity data. Therefore, the old versions are obtained at no additional cost in the case of a partial write.
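A sketch of this incremental arrangement (option 1) is given below, assuming 8 data stripe units sharing the 16-bit APP field of the parity sector (i.e. 2 bits per version); the helper names are illustrative.

```c
#include <stdint.h>

/* Option 1: 2-bit incremental versions, 8 of them packed into the 16-bit
 * APP field of the parity sector. */
#define VERSION_BITS  2u
#define VERSION_MASK  0x3u      /* versions wrap modulo 4 */

/* Extract the 2-bit version of stripe unit `unit` from the parity vector. */
static uint16_t vector_get(uint16_t vector, unsigned unit)
{
    return (uint16_t)((vector >> (unit * VERSION_BITS)) & VERSION_MASK);
}

/* Write a 2-bit version for stripe unit `unit` back into the parity vector. */
static uint16_t vector_set(uint16_t vector, unsigned unit, uint16_t version)
{
    vector &= (uint16_t)~(VERSION_MASK << (unit * VERSION_BITS));
    return (uint16_t)(vector | ((version & VERSION_MASK) << (unit * VERSION_BITS)));
}

/* On each rewrite the old version is read (from the stripe unit or from the
 * vector) and incremented modulo 4. */
static uint16_t next_version(uint16_t old_version)
{
    return (uint16_t)((old_version + 1u) & VERSION_MASK);
}
```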
In a second arrangement (option 2), the same random version number is applied to each block and to the corresponding parity block. If the version number for any sector within a block differs, then it represents a torn write.
This approach has the advantage that, for a full stripe write, no reading of an existing version is required. Instead, a new random number is generated. For partial stripe write, however, all versions must be updated, turning all partial-stripe writes into full-stripe writes.
This approach avoids the requirement to obtain the old version number, as is required in option 1. However, with only 2 bits per random number, there remains a 25% chance of missing a torn write.
An alternative would be to utilise the APP field 210 in both of the parity blocks (blocks P and Q) to obtain 4 bits per stripe unit. However, this would require reading both parity blocks on every read.
A third approach (option 3) takes elements of both of the above examples. By way of example, 14 high bits could be utilised for a random number and 2 low bits for a per-sector incremental count. The 2 low bits are mirrored in the version vector. The version vector must still be read for partial-stripe writes (to learn the old version, which is free with RAID 6 reconstruct-writes), but for a full stripe rewrite a new random number is chosen for the high bits. This approach would eliminate the need to read anything for a full-stripe write.
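A sketch of this combined scheme is shown below. It assumes that a full-stripe write simply restarts the 2-bit counter at zero (so that nothing at all need be read), and rand() stands in for whatever random source a controller would actually use; all names are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>

/* Option 3: 16-bit APP version composed of a 14-bit random part (high bits)
 * and a 2-bit incremental counter (low bits) that is mirrored in the parity
 * version vector. */
#define COUNTER_MASK 0x3u
#define RANDOM_SHIFT 2u

static uint16_t make_hybrid_version(int full_stripe_write, uint16_t old_version)
{
    if (full_stripe_write) {
        /* Nothing need be read: choose fresh random high bits and (an
         * assumption of this sketch) restart the low-bit counter at zero. */
        return (uint16_t)((rand() & 0x3FFF) << RANDOM_SHIFT);
    }
    /* Partial-stripe write: keep the random high bits and increment the
     * mirrored 2-bit counter learned from the old version / version vector. */
    uint16_t counter = (uint16_t)((old_version + 1u) & COUNTER_MASK);
    return (uint16_t)((old_version & (uint16_t)~COUNTER_MASK) | counter);
}
```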
Whilst the above examples have been given to illustrate operation of the present invention, other approaches may be used. For example, the version number may be based on the RPC request number or on a timestamp. These variations and alternatives are intended to fall within the scope of the present invention.
The operation of a method according to the present invention will now be described with reference to
The steps of writing a full stripe of data to the RAID array will be discussed with reference to
At step 300, the host 102 generates a write request for a specific volume (e.g. storage device 106a) to which it has been assigned access rights. The request is sent via communication network 110 to the host ports (not shown) of the RAID controller 104. The write command is then stored in a local cache (not shown) forming part of the RAID controller hardware 120 of the RAID controller 104.
The RAID controller 104 is programmed to respond to any commands that request write access to the storage device 106a. The RAID controller 104 processes the write request from the host 102 and determines the target address of the stripe to which the data is to be written.
The method proceeds to step 302.
The RAID 6 controller 104 utilises a Reed-Solomon code to generate the parity information P and Q. The method proceeds to step 304.
The APP field 210 of each sector 200 of the stripe units is assigned a version number in accordance with the options outlined above. The version number may be, for example, 0 for a new stripe write.
The method then proceeds to step 306.
Step 306: Generate Version Vector
At step 306, the version vector for storage in the APP field 210 of the sectors of the parity block is generated from the version information of the blocks which that parity block protects.
Step 308: Write User Data to Sector
At step 308, the data 206 is written to the data area 202 of the respective sector 200. This includes writing the version information generated in step 304 to the APP field 210 of the respective sector 200.
The method then proceeds to step 310.
The parity information generated in step 302 is then written to the data fields 202 of the parity blocks P and Q. In addition, the version vector for each respective parity block is also written to the APP field 210 of the parity sector.
The method then proceeds to step 312.
At step 312, the writing of the data 202 together with parity information is complete. The method may then proceed back to step 300 for further stripes or may terminate.
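The full-stripe write flow of steps 300 to 312 may be summarised by the sketch below. XOR parity stands in for the RAID 6 Reed-Solomon P and Q calculation, a single version value per block stands in for the per-sector APP fields, and the 2-bit reduction of option 1 is assumed; the whole sketch is illustrative only.

```c
#include <stdint.h>
#include <string.h>

#define DATA_BLOCKS 8
#define BLOCK_BYTES 4096

struct data_block {
    uint8_t  data[BLOCK_BYTES];
    uint16_t version;            /* mirrored into every sector's APP field */
};

struct parity_block {
    uint8_t  data[BLOCK_BYTES];
    uint16_t version_vector;     /* reduction of the data-block versions   */
};

/* Illustrative reduction function: 2 bits per data block packed into the
 * 16-bit APP field of the parity sector (option 1 above). */
static uint16_t reduce_versions(const struct data_block blocks[DATA_BLOCKS])
{
    uint16_t vector = 0;
    for (int i = 0; i < DATA_BLOCKS; i++)
        vector = (uint16_t)(vector | ((blocks[i].version & 0x3u) << (2 * i)));
    return vector;
}

static void full_stripe_write(struct data_block blocks[DATA_BLOCKS],
                              struct parity_block *parity,
                              uint16_t new_version)
{
    /* Step 302: generate parity (XOR placeholder for Reed-Solomon P and Q). */
    memset(parity->data, 0, BLOCK_BYTES);
    for (int i = 0; i < DATA_BLOCKS; i++)
        for (int b = 0; b < BLOCK_BYTES; b++)
            parity->data[b] ^= blocks[i].data[b];

    /* Step 304: assign the version number to every block in the stripe.    */
    for (int i = 0; i < DATA_BLOCKS; i++)
        blocks[i].version = new_version;

    /* Step 306: generate the version vector for the parity block.          */
    parity->version_vector = reduce_versions(blocks);

    /* Steps 308 to 310: data, versions, parity and version vector would now
     * be written to the storage devices.                                    */
}
```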
The steps of a partial stripe write of data to the RAID array will be discussed with reference to
At step 300, the host 102 generates a write request for a specific volume (e.g. storage device 106a) to which it has been assigned access rights. The request is sent via communication network 110 to the host ports (not shown) of the RAID controller 104. The write command is then stored in a local cache (not shown) forming part of the RAID controller hardware 120 of the RAID controller 104.
The RAID controller 104 is programmed to respond to any commands that request write access to the storage device 106a. The RAID controller 104 processes the write request from the host 102 and determines the target address of the stripe to which the data is to be written.
The method proceeds to step 402.
Step 402: Read Data from Corresponding Blocks
Prior to writing the data specified in the write command in step 400, the RAID controller 104 reads the data from the blocks of the other stripe units in the stripe to which the write command has been assigned in preparation for construction of an updated parity block.
The method proceeds to step 404.
In step 404, before the data is written, the parity block is read to verify the version vector and the version numbers in the existing data.
At step 406, the version vector in the parity block is compared with the version information in the APP fields 210 of the data blocks. If a mismatch is detected, then the method proceeds to step 408. If no mismatch is detected, then the method proceeds to step 412.
If a mismatch is detected in step 406, the incorrect data can be reconstructed from the existing fault-free data and the parity. If, for example, a lost write has occurred, then the parity block will comprise a higher version number than the data block. The data in that data block can then be reconstructed.
Alternatively, if a data block has a higher version number than a parity block, an error in the parity block may have occurred. The parity block can then be reconstructed for the other data blocks.
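The decision just described, namely which element to rebuild when a mismatch is found, might be sketched as follows. The "higher version wins" rule is taken from the description above, wrap-around of small version counters is ignored, and the names are illustrative.

```c
#include <stdint.h>

/* Which element of the stripe to reconstruct when the version held in a data
 * block disagrees with the corresponding entry in the parity version vector.
 * A newer (higher) version in the parity vector indicates a lost write to the
 * data block; a newer version in the data block indicates a stale parity
 * block.  Wrap-around of small version counters is not handled here. */
enum rebuild_target { REBUILD_NONE, REBUILD_DATA_BLOCK, REBUILD_PARITY_BLOCK };

static enum rebuild_target choose_rebuild(uint16_t block_version,
                                          uint16_t parity_vector_version)
{
    if (block_version == parity_vector_version)
        return REBUILD_NONE;              /* versions agree                  */
    if (parity_vector_version > block_version)
        return REBUILD_DATA_BLOCK;        /* lost write to the data block    */
    return REBUILD_PARITY_BLOCK;          /* parity block is out of date     */
}
```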
Once the data has been reconstructed, the method proceeds to step 410.
The RAID 6 controller 104 utilises a Reed-Solomon code to generate the parity information P and Q from the new data to be written and the data read in step 402. The method proceeds to step 412.
The version number of the newly-written blocks is updated by updating the version information stored in the APP field 210 of the sectors 200 associated with the data blocks.
The updated version numbers of the newly-written blocks are then used to calculate an updated version vector in the APP field 210 of the parity blocks associated with the data blocks which have been modified. This may comprise a reduction function of the version information in the version vector to enable the version vector to be stored in the available space in the APP fields 210 of the parity sectors 200.
The method then proceeds to step 416.
The data to be written to the respective blocks is then written to the drive, including writing the updated version information to the APP field 210 of the respective sectors 200. The method then proceeds to step 418.
The parity information generated in step 410 is then written to the data fields 202 of the parity blocks P and Q. The updated version vector generated in step 414 is also written to the APP field 210 of the respective parity sector at this step. The method then proceeds to step 420.
At step 420, the updating of the data 202 together with parity information is complete. The method may then proceed back to step 400 for further stripes or may terminate.
At step 500, the host 102 generates a read request for the RAID array 108 to which it has been assigned access rights. The request is sent via the communication network 110 to the host ports (not shown) of the RAID controller 104. The read command is then stored in a local cache (not shown) forming part of the RAID controller hardware 120 of the RAID controller 104.
The RAID controller 104 is programmed to respond to any commands that request read access to the RAID array 108. The RAID controller 104 processes the read request from the host 102 and determines the sector(s) of the storage devices 106a-106j in which the data is stored. The method then proceeds to step 502.
Step 502: Read Version Information
Prior to reading the data specified in the read command in step 500, the RAID controller 104 reads the version information from the APP fields 210 of the data blocks.
The method proceeds to step 504.
In step 504, before the data is read, the parity block is read to verify the version vector and the version numbers in the existing data.
Step 506: Mismatch Detected?
At step 506, the version vector in the parity block is compared with the version information in the APP fields 210 of the data blocks. If a mismatch is detected, then the method proceeds to step 508. If no mismatch is detected, then the method proceeds to step 512.
If a mismatch is detected in step 506, the incorrect data can be reconstructed from the existing fault-free data and the parity. If, for example, a lost write has occurred, then the parity block will comprise a higher version number than the data block. The data in that data block can then be reconstructed.
Alternatively, if a data block has a higher version number than a parity block, an error in the parity block may have occurred. The parity block can then be reconstructed for the other data blocks.
Once the data has been reconstructed, the method proceeds to step 510.
Step 510: Read Data
The data can now be read as required. The method proceeds to step 512.
At step 512, the reading of the data 202 is complete. The method may then proceed back to step 500 for further data reads or may terminate.
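The read flow of steps 500 to 512 may be summarised by the sketch below. Reconstruction itself is only indicated and, for brevity, the sketch always rebuilds the data block rather than choosing between data and parity as described above; all names are illustrative.

```c
#include <stdint.h>

#define DATA_BLOCKS 8
#define BLOCK_BYTES 4096

/* Simplified view of a stripe for the read flow: one version per data block
 * (mirrored in its sectors' APP fields) and one entry per block in the
 * parity version vector. */
struct stripe {
    uint8_t  data[DATA_BLOCKS][BLOCK_BYTES];
    uint16_t block_version[DATA_BLOCKS];
    uint16_t parity_vector[DATA_BLOCKS];
};

/* Placeholder for reconstruction from parity and the fault-free blocks. */
static void reconstruct_block(struct stripe *s, int idx)
{
    (void)s;
    (void)idx;   /* RAID 6 rebuild omitted from this sketch */
}

/* Steps 500 to 512: read the version information and the version vector,
 * correct any mismatching block, then return the requested data. */
static const uint8_t *verified_read(struct stripe *s, int block_idx)
{
    for (int i = 0; i < DATA_BLOCKS; i++) {          /* steps 502 to 506 */
        if (s->block_version[i] != s->parity_vector[i])
            reconstruct_block(s, i);                 /* step 508         */
    }
    return s->data[block_idx];                       /* step 510         */
}
```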
Variations of the above embodiments will be apparent to the skilled person. The precise configuration of hardware and software components may differ and still fall within the scope of the present invention.
For example, the present invention has been described with reference to controllers implemented in hardware. However, the controllers and/or the invention may be implemented in software, for example running on a dedicated core in a multi-core system.
Additionally, whilst the present embodiment relates to arrangements operating predominantly in off-host firmware or software (e.g. on the RAID controller 104), an on-host arrangement could be used.
Further, alternative ECC methods could be used. The skilled person would be readily aware of variations which fall within the scope of the appended claims.
Embodiments of the present invention have been described with particular reference to the examples illustrated. While specific examples are shown in the drawings and are herein described in detail, it should be understood, however, that the drawings and detailed description are not intended to limit the invention to the particular form disclosed. It will be appreciated that variations and modifications may be made to the examples described within the scope of the present invention.