The present invention relates to disk drive storage and control of disk drive storage. More particularly, the present invention relates to multi-actuator disk drives and control of multi-actuator disk drives.
Disk drive manufacturers are in the process of developing and releasing multi-actuator Hard Disk Drives (HDDs). These drives have the potential to multiply the bandwidth and input/output operations per second (IOPS) ratings of a conventional single-actuator HDD by a factor equal to the number of actuators included in the HDD.
HDD actuators are the electromechanical arms that move the read/write heads to the relevant locations within the HDD. Until recently, all the read/write heads in the HDD have been attached to a single actuator. The single actuator has the capability to “seek” to any Logical Block Address (LBA) on the HDD. The conventional HDD presents a single Logical Unit Number (LUN), LUN 0, to a host initiator. Multi-actuator HDDs divide the total LBA range of the device into roughly equal portions, one for each actuator. Each actuator can only access its “portion” of the LBA range. Each actuator represents a separate LUN within the HDD.
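The division of the LBA range among actuators described above can be illustrated with a short sketch. This is a hypothetical Python rendering for illustration only; the function name, the even split, and the example capacity are assumptions, not part of any drive's actual firmware.

```python
# Hypothetical sketch: a multi-actuator HDD divides its total LBA range
# into roughly equal portions, one per actuator, and presents each
# portion as its own LUN. The even split and total_lbas value are
# illustrative assumptions.
def lun_for_lba(lba, total_lbas, num_actuators):
    portion = total_lbas // num_actuators        # LBAs owned by each actuator
    return min(lba // portion, num_actuators - 1)

# A dual actuator drive with 1000 LBAs: the lower half is LUN 0,
# the upper half is LUN 1.
assert lun_for_lba(0, 1000, 2) == 0
assert lun_for_lba(499, 1000, 2) == 0
assert lun_for_lba(500, 1000, 2) == 1
```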
The first examples of multi-actuator HDDs will be dual actuator. Dual actuator hard disk drives (DAHDDs) use two independent stacked actuators with each actuator addressing a separate disk of the DAHDD, each disk having a storage capacity of approximately one-half of the total HDD LBA space. These actuators will be represented as LUN 0 and LUN 1 of the HDD. As additional actuators may be added in the future, this addressing scheme is likely to continue.
A storage controller allows multiple disk drives to be aggregated into RAID (Redundant Array of Independent Disks) arrays. The RAID arrays (or portions thereof) are in turn presented through the host port of the storage controller as virtual logical units with their own logical unit numbers. Many RAID array configurations provide some level of fault tolerance allowing one or more logical units within the array to fail while maintaining stored data via various mathematical redundancy methods.
In accordance with an aspect of the invention, a method for operating a storage controller to aggregate a group of multi-actuator disk drives in a multi-actuator disk drive system includes receiving in the storage controller a data stream including data to be written to the group of multi-actuator disk drives, performing RAID mapping across the group of multi-actuator disk drives on the data stream to a preselected RAID level, organizing the data stream into at least one data stream, the number of data streams selected to implement the preselected RAID level, creating at least one parity data stream, the number of parity data streams selected to implement the preselected RAID level, organizing each of the at least one data stream and each of the at least one parity data stream into blocks of data, dividing each of the at least one data stream and each of the at least one parity data stream into groups of blocks of data assigned to a logical unit representing an individual actuator of an individual multi-actuator disk drive of the group of multi-actuator disk drives, blocks of data from each of the at least one data stream assigned to the logical unit representing a different multi-actuator disk drive from that assigned to each of the at least one parity data stream, sequential ones of the groups of blocks of data of each of the at least one data stream and the at least one parity data stream assigned substantially equally to logical units representing different actuators in each individual multi-actuator disk drive, providing each group of blocks of data to a target port in the storage controller associated with the multi-actuator disk drive to which it has been assigned, and sending each group of blocks of data from the target port in the storage controller.
In accordance with an aspect of the invention, dividing each of the at least one data stream and each of the at least one parity data stream into groups of blocks of data includes dividing each of the at least one data stream and each of the at least one parity data stream such that each of the groups of blocks of data has an equal number of data block regions.
In accordance with an aspect of the invention, creating at least one parity data stream includes creating a parity data stream that is a function of data from more than one data stream.
In accordance with an aspect of the invention, organizing each of the at least one data stream and each of the at least one parity data stream into blocks of data includes organizing each of the at least one data stream and each of the at least one parity data stream into up to chunk size groups of blocks of data.
In accordance with an aspect of the invention, the chunk size is chosen to be less than one half of an average write request size.
In accordance with an aspect of the invention, the method further includes issuing read commands to read groups of blocks of data from the group of multi-actuator disk drives, receiving groups of blocks of data at the storage controller, each group of blocks of data associated with the logical unit representing the individual actuator in the individual multi-actuator disk drive, each group of blocks of data associated with the individual multi-actuator disk drive being received at the respective assigned target port, separately interleaving the sequential ones of the groups of blocks of data received in each assigned target port from all of the individual actuators in the associated multi-actuator disk drive, and performing RAID mapping to assemble the separately interleaved groups of blocks of data from each multi-actuator disk drive into a single data stream in an order implementing mapping of the preselected RAID level.
In accordance with an aspect of the invention, receiving groups of blocks of data in the storage controller includes receiving chunk size groups of blocks of data at the respective assigned target port in the storage controller.
In accordance with an aspect of the invention, assembling the separately interleaved groups of blocks of data from each multi-actuator disk drive into the single data stream in an order implementing mapping of the preselected RAID level includes identifying first groups of blocks of data representing a data stream and identifying second groups of blocks of data representing a parity data stream, and assembling the first groups of blocks of data representing the data stream into the single data stream.
In accordance with an aspect of the invention, a storage controller to control a group of multi-actuator disk drives in a multi-actuator disk drive system, the storage controller comprising a host port and target ports, is configured to receive at the host port a data stream including data to be written to the group of multi-actuator disk drives, perform RAID mapping across the group of multi-actuator disk drives on the data stream to a preselected RAID level, organize the data stream into at least one data stream, the number of data streams selected to implement the preselected RAID level, create at least one parity data stream, the number of parity data streams selected to implement the preselected RAID level, organize each of the at least one data stream and the at least one parity data stream into blocks of data, divide each of the at least one data stream and each of the at least one parity data stream into groups of blocks of data assigned to a logical unit representing an individual actuator of an individual multi-actuator disk drive of the group of multi-actuator disk drives, blocks of data from each of the at least one data stream assigned to the logical unit representing a different multi-actuator disk drive from that assigned to each of the at least one parity data stream, sequential ones of the groups of blocks of data of each of the at least one data stream and the at least one parity data stream assigned substantially equally to logical units representing different actuators in each individual multi-actuator disk drive, provide each group of blocks of data to a respective one of the target ports in the storage controller associated with the multi-actuator disk drive to which it has been assigned, and send each group of blocks of data from the target port in the storage controller.
In accordance with an aspect of the invention, the division of each of the at least one data stream and each of the at least one parity data stream into groups of blocks of data includes dividing each of the at least one data stream and each of the at least one parity data stream such that each of the groups of blocks of data has an equal number of data block regions.
In accordance with an aspect of the invention, the storage controller is configured to create at least one parity data stream by creating a parity data stream that is a function of data from more than one data stream.
In accordance with an aspect of the invention, the organization of each of the at least one data stream and each of the at least one parity data stream into blocks of data includes organizing each of the at least one data stream and each of the at least one parity data stream into up to chunk size groups of blocks of data.
In accordance with an aspect of the invention, the storage controller is configured to choose the chunk size to be less than one half of an average write request size.
In accordance with an aspect of the invention, the storage controller is further configured to receive groups of blocks of data at the storage controller, each group of blocks of data associated with the logical unit representing the individual actuator in the individual multi-actuator disk drive, each group of blocks of data associated with the individual multi-actuator disk drive being received at the respective assigned target port, separately interleave the sequential ones of the groups of blocks of data received in each assigned target port from all of the individual actuators in the associated multi-actuator disk drive, and perform RAID mapping to assemble the separately interleaved groups of blocks of data from each multi-actuator disk drive into a single data stream in an order implementing mapping of the preselected RAID level.
In accordance with an aspect of the invention, the storage controller is configured to receive chunk size groups of blocks of data at the respective assigned target port in the storage controller.
In accordance with an aspect of the invention, the storage controller is configured to perform RAID mapping to assemble the separately interleaved groups of blocks of data from each multi-actuator disk drive into the single data stream in the order implementing mapping of the preselected RAID level by identifying first groups of blocks of data representing a data stream and identifying second groups of blocks of data representing a parity data stream, and assembling the first groups of blocks of data representing the data stream into the single data stream.
a method for operating a storage controller to write data to a group of multi-actuator disk drives in a multi-actuator disk drive system includes receiving in the storage controller a data stream including data to be written to the group of multi-actuator disk drives, performing RAID mapping across the group of multi-actuator disk drives on the data stream to a preselected RAID level, organizing the data stream into at least one data stream, the number of data streams selected to implement the preselected RAID level, creating at least one parity data stream, the number of parity data streams selected to implement the preselected RAID level, organizing each data stream and parity data stream into blocks of data, dividing each data stream and each parity data stream into groups of blocks of data assigned to a logical unit representing an individual multi-actuator disk drive and an actuator in the individual multi-actuator disk drive, blocks of data from a data stream and parity stream assigned to a logical unit representing a different multi-actuator disk drive, sequential ones of the groups of blocks of data assigned substantially equally to logical units representing actuators in each individual multi-actuator disk drive, providing each group of blocks of data to a target port in the storage controller associated with the logical unit to which it has been assigned, and sending each group of blocks of data from the target port in the storage controller.
In accordance with an aspect of the invention, dividing each data stream and each parity data stream provided to each multi-actuator disk drive into groups of blocks of data includes dividing each data stream such that each of the groups of blocks of data has an equal number of data block regions.
In accordance with an aspect of the invention, creating at least one parity data stream includes creating a parity data stream that is a function of data from a pair of data streams.
In accordance with an aspect of the invention, organizing each data stream and parity data stream into blocks of data includes organizing each data stream and parity data stream into up to chunk size groups of blocks of data.
In accordance with an aspect of the invention, the chunk size is chosen to be less than one half of an average write request size.
In accordance with an aspect of the invention, a method for operating a storage controller to process data read from a group of multi-actuator disk drives in a multi-actuator disk drive system configured to a preselected RAID level, includes receiving groups of blocks of data at a target port in the storage controller, each group of blocks of data associated with a logical unit representing a different multi-actuator disk drive and an individual actuator on that drive, sequential ones of the groups of blocks of data received substantially equally from logical units representing individual actuators in each individual multi-actuator disk drive, and assembling the groups of blocks of data into a single data stream in an order implementing mapping of the preselected RAID level.
In accordance with an aspect of the invention, receiving groups of blocks of data at a target port in the storage controller includes receiving chunk size groups of blocks of data at a target port in the storage controller.
In accordance with an aspect of the invention, assembling the groups of blocks of data into a single data stream in an order implementing mapping of the preselected RAID level includes identifying first groups of blocks of data representing a data stream and identifying second groups of blocks of data representing a parity data stream, and assembling the first groups of blocks of data representing the data stream into the single data stream.
In accordance with an aspect of the invention, a storage controller to control a group of multi-actuator disk drives in a multi-actuator disk drive system, the storage controller configured to receive in the storage controller a data stream including data to be written to the group of multi-actuator disk drives, perform RAID mapping across the group of multi-actuator disk drives on the data stream to a preselected RAID level, organize the data stream into at least one data stream, the number of data streams selected to implement the preselected RAID level, create at least one parity data stream, the number of parity data streams selected to implement the preselected RAID level, organize each data stream and parity data stream into blocks of data, divide each data stream and each parity data stream into groups of blocks of data assigned to a logical unit representing an individual multi-actuator disk drive and an actuator in the individual multi-actuator disk drive, blocks of data from a data stream and parity stream assigned to a logical unit representing a different multi-actuator disk drive, sequential ones of the groups of blocks of data assigned substantially equally to logical units representing actuators in each individual multi-actuator disk drive, provide each group of blocks of data to a target port in the storage controller associated with the logical unit to which it has been assigned, and send each group of blocks of data from the target port in the storage controller.
In accordance with an aspect of the invention, the storage controller is configured to divide each data stream and each parity data stream provided to each multi-actuator disk drive into groups of blocks of data by dividing each data stream such that each of the groups of blocks of data has an equal number of data block regions.
In accordance with an aspect of the invention, the storage controller is configured to create at least one parity data stream by creating a parity data stream that is a function of data from a pair of data streams.
In accordance with an aspect of the invention, the storage controller is configured to organize each data stream and parity data stream into blocks of data by organizing each data stream and parity data stream into up to chunk size groups of blocks of data.
In accordance with an aspect of the invention, the storage controller is configured to choose the chunk size to be less than one half of an average write request size.
In accordance with an aspect of the invention, the storage controller is further configured to receive groups of blocks of data at a target port in the storage controller, each group of blocks of data associated with a logical unit representing a different multi-actuator disk drive and an individual actuator on that drive, sequential ones of the groups of blocks of data received substantially equally from logical units representing individual actuators in each individual multi-actuator disk drive, and assemble the groups of blocks of data into a single data stream in an order implementing mapping of the preselected RAID level.
In accordance with an aspect of the invention, the storage controller is configured to receive chunk size groups of blocks of data at a target port in the storage controller.
In accordance with an aspect of the invention, the storage controller is configured to assemble the groups of blocks of data into a single data stream in an order implementing mapping of the preselected RAID level by identifying first groups of blocks of data representing a data stream and identifying second groups of blocks of data representing a parity data stream, and assembling the first groups of blocks of data representing the data stream into the single data stream.
The invention will be explained in more detail in the following with reference to embodiments and to the drawing in which are shown:
Persons of ordinary skill in the art will realize that the following description is illustrative only and not in any way limiting. Other embodiments will readily suggest themselves to such skilled persons.
RAID arrays are configured by a user external to the storage controller through a user interface. The user, through the user interface, must create a RAID array from a list of available logical units (or logical unit numbers) that the attached multi-actuator HDDs present to the storage controller. Commonly-used RAID levels known to persons of ordinary skill in the art are RAID 0, RAID 1, RAID 5, and RAID 6. RAID 0 simply divides the data among multiple HDDs and does not provide for data recovery in the event of a drive failure, while the other RAID level schemes provide different levels of data recovery as is known in the art. The present invention relates to all RAID arrays that provide data recovery.
RAID 1 is a logical configuration made from two different logical units (LUs). RAID 1 is a mirroring scheme wherein each of the two LUs is a copy of the other. Writes sent to the RAID 1 array are duplicated to each LU. Reads to a RAID 1 array may be sent to either LU. A RAID 1 array can tolerate one logical unit failure. If one of the LU components fails, all reads and writes may be directed to the remaining LU.
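The RAID 1 behavior described above can be sketched in a few lines. This is an illustrative Python model, not controller code: the two dictionaries are hypothetical stand-ins for the two logical units, and the function names are invented for the example.

```python
# Minimal illustration of RAID 1 mirroring: writes are duplicated to
# both LUs; reads may be served by either LU, and if one LU fails all
# reads are directed to the survivor. The dicts are stand-ins for LUs.
lu0, lu1 = {}, {}

def raid1_write(lba, data):
    lu0[lba] = data          # write duplicated to each LU
    lu1[lba] = data

def raid1_read(lba, failed=None):
    source = lu1 if failed == 0 else lu0   # survivor serves the read
    return source[lba]

raid1_write(7, b"payload")
# The array tolerates one LU failure: either copy returns the data.
assert raid1_read(7) == b"payload"
assert raid1_read(7, failed=0) == b"payload"
```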
Referring now to
The target ports 34 in the storage controller 12 direct data into, and out of, the storage controller 12 across connection 36 from the storage controller 12 to, and from, the DAHDDs 14 and 16. Each target port as this term is used herein is a PHY (physical connection) between the storage controller 12 and an individual disk drive unit 14 or 16.
The DAHDDs 14 and 16 each include two actuators each driving at least one platter. As shown in
As shown in
Referring now to
This is not an acceptable mapping because it does not support the RAID 1 fault tolerance requirement of operation in the face of a single LUN failure. Both LUNs of each RAID 1 array in this case are dependent on the same electromechanical mechanism (either DAHDD 0 14 or DAHDD 1 16), which creates a single point of failure. If either DAHDD 0 14 or DAHDD 1 16 fails, a RAID array will fail.
Referring now to
As with the RAID 1 array depicted in
This is an acceptable mapping because it does support the RAID 1 fault tolerance requirement of operation in the face of a single LUN failure. Either DAHDD 0 14 or DAHDD 1 16 may fail, thus removing one LUN from each of the RAID 1 arrays. The surviving DAHDD will still contain one LUN for each of the two RAID 1 arrays.
The configuration shown in
The RAID configurations shown in
Referring now to
The total storage capacity of each of DAHDD 0 and DAHDD 1 in the RAID 1 array depicted in
The configuration of two RAID 1 arrays created on four LUNs in accordance with the aspect of the present invention in
Referring now to
In
The present invention is easily extended to a larger number of virtual disk drives as shown by the additional virtual drives VD2H through VDnH shown, respectively, at reference numerals 84-2 through 84-n.
VD0H and VD1H are each connected to a different DAHDD through the storage controller target ports 34. The DAHDD chunking layer shown at reference numerals 86-0 and 86-1 respectively manages the data distribution of the virtual disk drives VD0H and VD1H to the DAHDDs. The data distribution to the four storage targets presented by the two DAHDDs is represented by D0AH, D0BH, D1AH and D1BH at reference numerals 88-0, 90-0, 88-1, and 90-1, respectively. DAHDD chunking layers 86-0 and 86-1 divide the data streams through VD0H and VD1H equally so that both actuators of each DAHDD can operate simultaneously to achieve optimal performance of the DAHDDs. The additional dashed-line portions of the DAHDD chunking layer shown at reference numerals 86-2 through 86-n, and the additional data distribution to possible additional storage targets presented by the additional DAHDDs, represented by D2AH, D2BH through DnAH and DnBH, will be appreciated by persons of ordinary skill in the art and are shown for use in embodiments that employ more than two DAHDDs.
Data streams D0:L0, D0:L1, D1:L0, D1:L1, D2:L0, D2:L1, Dn:L0, Dn:L1 are shown at reference numerals 92, 94, 96, 98, 100, 102, 104, and 106, respectively, being provided to the target ports 34 of the storage controller 12.
The read process is the reverse of the write process as shown by the bi-directional arrows in
Referring now to
A chunk is defined as a contiguous number of blocks. Sequential chunks are shown in a column at the right side of
The number of blocks in a chunk is arbitrary but all chunks are the same size. An optimal chunk size is preferably chosen to be less than one half of an average I/O request (i.e., WRITE request and READ request) size. This assures that more than half of the I/O requests would utilize both actuators. In the illustrative non-limiting example of
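The rationale above, that a request larger than one chunk always engages both actuators, can be checked with a short sketch. This is an illustrative Python calculation assuming the document's round-robin mapping of chunk c to actuator (c mod number-of-actuators); the function name is invented for the example.

```python
# Sketch of the chunk-to-actuator layout: chunk c of the virtual drive
# maps to actuator (c mod num_actuators), so any request spanning more
# than one chunk touches both actuators of a dual actuator drive.
blocks_in_chunk = 8
num_actuators = 2

def chunks_touched(start_lba, n_blocks):
    first = start_lba // blocks_in_chunk
    last = (start_lba + n_blocks - 1) // blocks_in_chunk
    return list(range(first, last + 1))

# A 12-block request starting at LBA 43 spans chunks 5 and 6,
# which map to actuators 1 and 0 respectively.
actuators = {c % num_actuators for c in chunks_touched(43, 12)}
print(sorted(actuators))   # [0, 1] — both actuators participate
```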
Using the VD0H as an example, the position of blocks within a chunk may start or end at a block position that is not coincident with the edge of a chunk as shown in
The result of this configuration is that any read or write request for any number of blocks greater than the chunk size from the RAID 1 mapping layer (VD0H and VD1H) will fit into more than one chunk. This allows DAHDD chunking to divide the workload into chunks by using a chunking overlay as shown in
VD0H is a virtual representation of the chunked data stream to, or from, the dual actuator disk drive 0 as shown in
The notable properties of the read request are:
Request Type=read
VD0H Starting LBA=block 43 as seen in the third row of the right-hand portion of
VD0H Request Blocks=59 blocks (total number of blocks indicated in bold typeface).
As depicted in
If the transfer is a write request, the method proceeds to reference numeral 128 where a write request is created. RAID 1 fault tolerance requires that the write be executed to both virtual disk drives (VD0H and VD1H) presented by the DAHDD chunking layer. In this case the DAHDD chunking algorithm of the DAHDD chunking layer will be called twice, once for each RAID 1 write request.
If the transfer is a read request, the method proceeds to reference numeral 130 where a read request is created. In RAID 1, there is an exact copy of the requested data on both virtual disk drives (VD0H and VD1H) presented by the DAHDD chunking layer. In this case a separate algorithm chooses to which virtual disk the request is to be sent. Persons skilled in the art will appreciate that the choice is rather arbitrary since both VD0H and VD1H contain the same data. Ultimately the read request is sent to the DAHDD chunking algorithm.
The method then proceeds to reference numeral 132, either from reference numeral 120 or from reference numeral 130, where the total transfer size (request_blocks) is recorded in blocks_left. The DAHDD chunking algorithm will send as many individual transfer requests to the actuators of the DAHDD as necessary until the entire transfer size is satisfied. Blocks_left will be decremented for each transfer until blocks_left is equal to 0 at which point the algorithm ends.
The method then proceeds to reference numeral 134, where the DAHDD chunking algorithm applies the chunking overlay to the virtual disk request as illustrated in
As shown in
The method then proceeds to reference numeral 136, where the starting chunk number (c) is then calculated using integer division. Any fractional remainder is omitted. In particular:
c=request_LBA/blocks_in_chunk
In the example, c=43/8=5
The method then proceeds to reference numeral 138, where the actuator number for the first disk transfer can then be calculated using modulo arithmetic. The actuator number is the whole number remainder of the starting chunk number (c) divided by the number of actuators, i.e.:
actuator=c MOD num_actuators
In the example, actuator=5 MOD 2=1
The method then proceeds to reference numeral 140, where xfr_LBA is determined. To start the request at the correct block number on the actuator, the starting LBA relative to VD0H must be transposed to a starting LBA relative to the actuator (xfr_LBA). This can be done in three steps or fewer. All division operations are integer division and fractional remainders are omitted.
chunk_LBA=(c/num_actuators)×blocks_in_chunk
In the example, chunk_LBA=(5/2)×8=16
offset=VD0H request_LBA MOD blocks_in_chunk
In the example, offset=43 MOD 8=3
xfr_LBA=chunk_LBA+offset
In the example, xfr_LBA=16+3=19
The method then proceeds to reference numeral 142, where the xfr_size and last_xfr_size are determined. The transfer size (xfr_size) for the first disk transfer must be calculated because the first transfer may begin on a block (LBA) that is not aligned to a chunk boundary. The offset calculated in the previous step is used to determine the transfer size.
xfr_size=blocks_in_chunk−offset
In the example, xfr_size=8-3=5
The variable last_xfr_size is used when the xfr_LBA needs to be adjusted when advancing to the next row of chunks.
last_xfr_size=xfr_size
In the example, last transfer size=5
The parameters required for the first transfer are now known. The parameters are used to create a new disk transfer request at reference numeral 144.
actuator=1
blocks_left=59
xfr_type=read
xfr_LBA=19
xfr_size=5
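The first-transfer parameters derived above (reference numerals 136 through 142) can be checked with a short sketch. This is an illustrative Python rendering of the text's integer arithmetic, using the variable names from the text; it is not part of the disclosed controller.

```python
# Check of the first-transfer parameters for the illustrative request:
# VD0H starting LBA 43, 8-block chunks, 2 actuators.
request_lba = 43
blocks_in_chunk = 8
num_actuators = 2

c = request_lba // blocks_in_chunk                    # starting chunk number
actuator = c % num_actuators                          # actuator for first transfer
chunk_lba = (c // num_actuators) * blocks_in_chunk    # chunk start on the actuator
offset = request_lba % blocks_in_chunk                # offset within the chunk
xfr_lba = chunk_lba + offset                          # starting LBA on the actuator
xfr_size = blocks_in_chunk - offset                   # first transfer size

print(c, actuator, xfr_lba, xfr_size)                 # 5 1 19 5
```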
The method then proceeds to reference numeral 146, where the disk transfer request is sent to a separate algorithm to be dispatched to the DAHDD. The dispatch algorithm is common for the storage controller and used for all types of disk drives. Once the transfer has been dispatched, the algorithm continues without waiting for the transfer to complete. The transfer will complete at some time in the future and the completion will be handled by a different common transfer completion algorithm.
After each transfer is dispatched, parameters are updated for the next transfer.
The method proceeds to reference numeral 148 where the blocks_left parameter is decremented by the xfr_size.
The method proceeds to reference numeral 150 where it is determined whether blocks_left is equal to zero.
If blocks_left is equal to zero, the algorithm is complete and ends at reference numeral 152.
If blocks_left is non-zero, the method proceeds to reference numeral 154, where the xfr_size for the next transfer is calculated. The new transfer size will be the minimum of blocks_left or blocks_in_chunk.
xfr_size=MIN(blocks_left,blocks_in_chunk)
The actuator number is then incremented by one at reference numeral 156. This advances the algorithm to compute a transfer for the next chunk number.
actuator=actuator+1
The method proceeds to reference numeral 158 where the new value of actuator is evaluated. If the actuator number becomes more than the number of actuators presented by the multi-actuator disk drive, some additional parameters need to be updated. This is referred to as wrap-around. The new value of actuator is evaluated for wrap-around with the following logic.
IF actuator MOD num_actuators is equal to zero
If the condition evaluates to false, the method proceeds to reference numeral 160 where the parameter last_xfr_size is set equal to xfr_size and the method then proceeds back to reference numeral 144 where a new transfer based on the currently updated transfer parameters is created.
If the condition evaluates to true, the method continues with additional transfer parameter modifications.
The method proceeds to reference numeral 162 where the process must advance to the next row of the chunk mapping shown in
actuator=0
Next, at reference numeral 164, the xfr_LBA is advanced by the number of blocks in the last transfer (last_xfr_size). All transfers to the DAHDD, other than the first transfer of any request, must begin at a block offset aligned with a chunk boundary. The xfr_LBA is always a logical block address relative to an actuator.
xfr_LBA=xfr_LBA+last_xfr_size
Next, at reference numeral 160, the variable last_xfr_size is set to equal xfr_size.
last_xfr_size=xfr_size
The method then proceeds back to reference numeral 144 where a new transfer based on the currently updated transfer parameters is created.
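The complete chunking loop described at reference numerals 132 through 164 can be sketched as follows. This is a hypothetical Python rendering for illustration only: variable names follow the text (blocks_left, xfr_LBA, xfr_size, last_xfr_size), the dispatch step at reference numeral 146 is modeled as appending to a list, and the first transfer size is capped at blocks_left (an assumption covering requests smaller than one chunk, which the text's first-transfer formula does not address).

```python
# Illustrative sketch of the DAHDD chunking algorithm described above.
# Dispatch is modeled as appending a dict to a list rather than issuing
# real disk commands.
def dahdd_chunking(request_lba, request_blocks, xfr_type,
                   blocks_in_chunk=8, num_actuators=2):
    transfers = []
    blocks_left = request_blocks                          # numeral 132
    c = request_lba // blocks_in_chunk                    # starting chunk (136)
    actuator = c % num_actuators                          # first actuator (138)
    chunk_lba = (c // num_actuators) * blocks_in_chunk    # transpose LBA (140)
    offset = request_lba % blocks_in_chunk
    xfr_lba = chunk_lba + offset
    # First transfer size (142); capped at blocks_left (assumption).
    xfr_size = min(blocks_in_chunk - offset, blocks_left)
    last_xfr_size = xfr_size
    while True:
        transfers.append({"actuator": actuator, "type": xfr_type,
                          "lba": xfr_lba, "size": xfr_size})   # create/dispatch (144, 146)
        blocks_left -= xfr_size                           # numeral 148
        if blocks_left == 0:                              # numerals 150, 152
            return transfers
        xfr_size = min(blocks_left, blocks_in_chunk)      # numeral 154
        actuator += 1                                     # numeral 156
        if actuator % num_actuators == 0:                 # wrap-around test (158)
            actuator = 0                                  # numeral 162
            xfr_lba += last_xfr_size                      # next row of chunks (164)
        last_xfr_size = xfr_size                          # numeral 160

# The worked example from the text: read, VD0H LBA 43, 59 blocks.
transfers = dahdd_chunking(43, 59, "read")
print(transfers[0])   # {'actuator': 1, 'type': 'read', 'lba': 19, 'size': 5}
```

Tracing the example through the loop reproduces the text's values: eight transfers, beginning on actuator 1 at LBA 19 for 5 blocks and ending on actuator 0 at LBA 48 for 6 blocks, totaling 59 blocks.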
Referring now to
At reference numeral 174, a data stream including groups of blocks of data to be written to the group of multi-actuator disk drives is received in the controller.
At reference numeral 176, RAID mapping is performed across the group of multi-actuator disk drives on the data stream to a preselected RAID level.
At reference numeral 178, the data stream is organized into at least one data stream, the number of data streams selected to implement the preselected RAID level.
At reference numeral 180, at least one parity data stream is created as a function of one or more data streams, the number of parity data streams selected to implement the preselected RAID level. In particular, for some RAID levels a parity data stream is created that is a function of data from more than one data stream. As will be appreciated by persons of ordinary skill in the art, the number of parity data streams that are created will be selected to implement a preselected RAID level. Such skilled persons will appreciate that the parity data streams are a function of the data stream. In RAID 1, the function is identity; in higher RAID levels the function may be a mathematical or logic function. In higher RAID levels the number of parity data streams may increase to more than one. For example, in RAID 5 there is one parity data stream and in RAID 6 there are two parity data streams. The creation of parity in different levels of RAID systems is well known in the art and need not be repeated here.
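For the RAID 5 case mentioned above, where a parity data stream is a function of data from more than one data stream, the usual function is bytewise XOR. The sketch below is an illustrative Python example of that well-known construction; the function name is invented for the example.

```python
# Illustrative parity as a function of more than one data stream:
# bytewise XOR across corresponding data blocks (the RAID 5 case).
# Any single lost block can be rebuilt by XOR-ing the survivors.
def xor_parity(*blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1 = b"\x0f\xf0", b"\x33\x33"
p = xor_parity(d0, d1)          # parity block for the stripe
assert xor_parity(d1, p) == d0  # rebuild d0 from the survivors
```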
At reference numeral 182, each data stream and parity data stream is organized into blocks of data.
At reference numeral 184, each data stream and each parity data stream is divided into groups of blocks of data assigned to a logical unit representing an individual multi-actuator disk drive and an actuator in the individual multi-actuator disk drive, blocks of data from a data stream and parity stream are each assigned to a logical unit representing a different multi-actuator disk drive. Sequential ones of the groups of blocks of data are assigned substantially equally to logical units representing actuators in each individual multi-actuator disk drive.
At reference numeral 186, each group of blocks of data is provided to a target port in the storage controller associated with the logical unit to which it has been assigned.
At reference numeral 188, each group of blocks of data is sent from the target port in the storage controller towards the respective multi-actuator disk drive. The method ends at reference numeral 190.
Referring now to
At reference numeral 204, read commands are issued to read groups of blocks of data from all of the multi-actuator disk drives.
At reference numeral 206, groups of blocks of data are received in the storage controller, each group of blocks of data associated with the logical unit representing the individual actuator in the individual multi-actuator disk drive, the groups of blocks of data received at a target port associated with the particular multi-actuator disk drive, sequential ones of the groups of blocks of data received substantially equally from logical units representing individual actuators in each individual multi-actuator disk drive. Blocks can be received in chunk size groups.
At reference numeral 208, the sequential ones of the groups of blocks of data received in each assigned target port from all of the actuators in each multi-actuator disk drive are separately interleaved, i.e., data received from the LUNs of each multi-actuator disk drive are interleaved without being interleaved with data received from any of the other multi-actuator disk drives.
At reference numeral 210, RAID mapping is performed to assemble the separately interleaved groups of blocks of data from each multi-actuator disk drive into a single data stream in an order implementing mapping of the preselected RAID level. The method ends at reference numeral 212.
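The separate interleaving at reference numeral 208 can be sketched as follows. This is an illustrative Python model: the chunk labels and function name are invented for the example, and real controller code would operate on buffers rather than strings.

```python
# Hypothetical sketch of read-side reassembly: chunk-size groups received
# from each actuator's LUN of one DAHDD are interleaved in round-robin
# chunk order to rebuild that drive's stream, without mixing in data
# from any other drive.
def interleave_actuator_streams(per_actuator_chunks):
    """per_actuator_chunks: one list of chunks per actuator of one drive."""
    stream = []
    for row in zip(*per_actuator_chunks):   # one chunk from each actuator per row
        stream.extend(row)
    return stream

a0 = ["c0", "c2", "c4"]   # chunks held by actuator 0 (even chunk numbers)
a1 = ["c1", "c3", "c5"]   # chunks held by actuator 1 (odd chunk numbers)
print(interleave_actuator_streams([a0, a1]))
```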
Persons of ordinary skill in the art will appreciate that the method shown in
While embodiments and applications of this invention have been shown and described, it would be apparent to those skilled in the art that many more modifications than mentioned above are possible without departing from the inventive concepts herein. The invention, therefore, is not to be restricted except in the spirit of the appended claims.
Number | Date | Country
---|---|---
62925042 | Oct 2019 | US