The present invention is directed to computer data storage systems. In particular, the present invention is directed to methods and apparatuses for efficiently destaging cache write data from a storage controller to storage devices of a striped volume.
In data storage systems, data is often stored with redundancy to protect against component failures that would otherwise result in loss of data. Such data redundancy can be provided by simple data mirroring or by parity-based techniques. Conventional Redundant Array of Inexpensive Disks (RAID) stripe configurations effectively group capacity from all but one of the disk drives in a striped volume and write the parity (XOR) of that capacity on the remaining storage device (or across multiple storage devices). When there is a failure, the data located on the failed storage device is reconstructed using data from the remaining storage devices. The reconstruction process generally requires a series of data reads and data writes involving the surviving storage devices.
When data is updated by a data write from a host computer to the storage system, the redundancy data (parity) must also be updated atomically on the striped volume to maintain consistency of data and parity for data reconstruction or recovery as needed. The parity update process is fairly straightforward for full stripe writes in a controller write cache memory. The portion of data in any one stripe of one storage device is called a strip or chunk. Parity is calculated as the XOR of all data chunks in the same stripe. Therefore, if all data chunks for the same stripe are already in the write cache (as would be the case for a full stripe write), all that is needed is to XOR all of the chunks in the same stripe together in order to obtain the parity chunk, and write the data chunks and new parity chunk to the striped volume.
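For illustration, the full stripe parity computation can be sketched in a few lines of C. This is a minimal sketch, assuming chunk buffers already resident in the write cache; the function and parameter names are hypothetical, not from the specification:

```c
#include <stddef.h>
#include <stdint.h>

/* Parity of a full stripe: the XOR of all data chunks in the stripe.
 * Chunk count and size are hypothetical parameters. */
void compute_parity(const uint8_t *const data_chunks[], size_t num_chunks,
                    size_t chunk_bytes, uint8_t *parity_out)
{
    for (size_t i = 0; i < chunk_bytes; i++) {
        uint8_t p = 0;
        for (size_t c = 0; c < num_chunks; c++)
            p ^= data_chunks[c][i];   /* XOR byte i across every chunk */
        parity_out[i] = p;
    }
}
```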
Although full stripe writes are fairly straightforward and can be efficiently destaged from a cache memory in normal operation, partial stripe writes are more complicated. Partial stripe writes are writes in which fewer than all data chunks in a stripe have new (dirty) data. The dirty data chunks are stored in the write cache, while the unchanged (old) data chunks reside on the striped volume. To destage a partial stripe, it is necessary to read the old chunks from the striped volume, XOR the dirty chunks with the old chunks to obtain the new parity chunk, and write the dirty chunks and the new parity chunk to the striped volume; the number of reads and writes required depends on the number of storage devices in the striped volume. Because of this relatively high number of reads and writes, destaging partial stripe writes must be carefully planned in order not to significantly degrade performance of the storage system.
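The partial stripe sequence can likewise be sketched, here assuming four data chunks per stripe and in-memory stubs in place of real device I/O (read_chunk, write_chunk, and the fixed sizes are illustrative assumptions):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum { CHUNK_BYTES = 8, DATA_CHUNKS = 4 };   /* illustrative sizes */

/* Stand-ins for device I/O; a real controller would issue reads and
 * writes to the storage devices. Slot DATA_CHUNKS holds the parity. */
static uint8_t volume[DATA_CHUNKS + 1][CHUNK_BYTES];
static void read_chunk(size_t c, uint8_t *buf)        { memcpy(buf, volume[c], CHUNK_BYTES); }
static void write_chunk(size_t c, const uint8_t *buf) { memcpy(volume[c], buf, CHUNK_BYTES); }

/* Destage a partial stripe: read the unchanged (old) chunks, XOR all
 * data chunks to form the new parity, then write dirty data + parity. */
static void destage_partial_stripe(uint8_t chunks[DATA_CHUNKS][CHUNK_BYTES],
                                   const int dirty[DATA_CHUNKS])
{
    uint8_t parity[CHUNK_BYTES] = {0};

    for (size_t c = 0; c < DATA_CHUNKS; c++) {
        if (!dirty[c])
            read_chunk(c, chunks[c]);      /* fetch old data chunk */
        for (size_t i = 0; i < CHUNK_BYTES; i++)
            parity[i] ^= chunks[c][i];     /* accumulate XOR parity */
    }
    for (size_t c = 0; c < DATA_CHUNKS; c++)
        if (dirty[c])
            write_chunk(c, chunks[c]);     /* write dirty data chunks */
    write_chunk(DATA_CHUNKS, parity);      /* write new parity chunk */
}
```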
The present invention is directed to solving disadvantages of the prior art. In accordance with embodiments of the present invention, a method for destaging data from a memory of a storage controller to a striped volume is provided. The method includes determining, by a processor of the storage controller, if a stripe should be destaged from a write cache of the storage controller to the striped volume. If a stripe should be destaged, the method includes destaging, by the storage controller, a partial stripe from the write cache if a full stripe write percentage is less than a full stripe write affinity value and destaging, by the storage controller, a full stripe from the write cache if the full stripe write percentage is greater than the full stripe write affinity value. The full stripe write percentage includes a full stripe count divided by the sum of the full stripe count and a partial stripe count. The full stripe count is the number of stripes of the striped volume in the write cache where all chunks of a stripe are dirty. The partial stripe count is the number of stripes of the striped volume in the write cache where at least one chunk but less than all chunks of the stripe are dirty.
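A minimal sketch of this destage decision, with hypothetical struct and function names; only the comparison itself follows the summary above:

```c
#include <stdbool.h>

/* Hypothetical per-volume stripe counts; field comments use the
 * definitions given in the summary above. */
typedef struct {
    unsigned full_stripe_count;     /* stripes with every chunk dirty    */
    unsigned partial_stripe_count;  /* stripes with some, not all, dirty */
} stripe_counts;

/* Returns true when a full stripe should be destaged, false when a
 * partial stripe should be destaged instead. */
static bool destage_full_stripe(const stripe_counts *sc,
                                double full_stripe_write_affinity_pct)
{
    unsigned total = sc->full_stripe_count + sc->partial_stripe_count;
    if (total == 0)
        return false;   /* assumed: nothing dirty, nothing to destage */
    double fsw_pct = 100.0 * sc->full_stripe_count / total;
    return fsw_pct > full_stripe_write_affinity_pct;
}
```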
In accordance with other embodiments of the present invention, a storage controller for efficiently destaging data to a striped volume coupled to the storage controller is provided. The storage controller includes a processor and a memory, coupled to the processor. The memory includes a write cache for temporarily storing write data specified in a data write command, a full stripe write percentage, and a full stripe write affinity value. The processor determines if a stripe should be destaged from the write cache to the striped volume, wherein, if the processor determines that a stripe should be destaged, the storage controller destages a partial stripe from the write cache if the full stripe write percentage is less than the full stripe write affinity value and destages a full stripe from the write cache if the full stripe write percentage is greater than the full stripe write affinity value. The full stripe write percentage includes a full stripe count divided by the sum of the full stripe count and a partial stripe count, and the full stripe count is the number of stripes of the striped volume in the write cache where all chunks of a stripe are dirty. The partial stripe count is the number of stripes of the striped volume in the write cache where at least one chunk but less than all chunks of the stripe are dirty.
In accordance with still other embodiments of the present invention, a system for efficiently destaging data is provided. The system includes a host computer for generating data write commands, a storage system coupled to the host computer, and a striped volume coupled to the storage controller. The striped volume includes a plurality of storage devices configured as a parity-based RAID volume. The storage system includes a storage controller, which includes a processor and a memory, coupled to the processor. The memory includes a write cache for temporarily storing write data specified in the data write commands, a full stripe write percentage, and a full stripe write affinity value. The processor determines if a stripe should be destaged from the write cache to the striped volume. If the processor determines that a stripe should be destaged, the storage controller destages a partial stripe from the write cache if the full stripe write percentage is less than the full stripe write affinity value and destages a full stripe from the write cache if the full stripe write percentage is greater than the full stripe write affinity value. The full stripe write percentage includes a full stripe count divided by the sum of the full stripe count and a partial stripe count. The full stripe count is the number of stripes of the striped volume in the write cache where all chunks of a stripe are dirty, and the partial stripe count is the number of stripes of the striped volume in the write cache where at least one chunk but less than all chunks of the stripe are dirty.
An advantage of the present invention is that it improves write performance to a striped volume by efficiently destaging write data from a storage controller write cache. Without an efficient means to destage stripes to a striped volume, one of two outcomes is likely. A storage controller may over-aggressively copy write data from the write cache to the striped volume, resulting in under-utilization of the write cache and little benefit to write caching in general. Alternatively, a storage controller may under-aggressively copy write data from the write cache to the striped volume, resulting in a full write cache. When the write cache is full, the storage controller must either delay new writes until space is available in the write cache, or else handle new writes in a write-through mode. Both results reduce performance.
Another advantage of the present invention is that it is able to maintain a given amount of free space in a write cache of a storage controller by managing a write cache watermark. A useful amount of free space will therefore be generally available in the write cache, increasing the likelihood that new writes will benefit from write caching performance improvements.
Additional features and advantages of embodiments of the present invention will become more readily apparent from the following description, particularly when taken together with the accompanying drawings.
FIG. 1a is a block diagram illustrating components of a first non host-based electronic data storage system in accordance with embodiments of the present invention.
FIG. 1b is a block diagram illustrating components of a second non host-based electronic data storage system in accordance with embodiments of the present invention.
FIG. 1c is a block diagram illustrating components of a host-based electronic data storage system in accordance with embodiments of the present invention.
FIG. 3a is a block diagram illustrating components of a single storage device striped volume in accordance with embodiments of the present invention.
FIG. 3b is a block diagram illustrating components of a multiple storage device striped volume in accordance with embodiments of the present invention.
FIG. 5a is a diagram illustrating partial stripe write destage penalties for a three-drive RAID 5 striped volume in accordance with embodiments of the present invention.
FIG. 5b is a diagram illustrating partial stripe write destage penalties for a five-drive RAID 5 striped volume in accordance with embodiments of the present invention.
FIG. 5c is a diagram illustrating full stripe write destage penalties for a five-drive RAID 5 striped volume in accordance with embodiments of the present invention.
FIG. 8a is a flowchart illustrating a write cache allocation process in accordance with embodiments of the present invention.
FIG. 8b is a flowchart illustrating a write cache memory release process in accordance with embodiments of the present invention.
FIG. 11a is a flowchart illustrating a calculation of full stripe write percentage in accordance with embodiments of the present invention.
FIG. 11b is a diagram illustrating an exemplary full stripe write percentage calculation in accordance with embodiments of the present invention.
The present inventors have observed various performance problems in certain I/O workloads from host computers to storage controllers. In particular, storage controller write caches may be difficult to manage efficiently. Although it is generally straightforward to determine when to destage full stripe writes from a write cache to a striped volume, the same cannot be said for partial stripe writes. Partial stripe writes present unique problems when a striped volume is a parity-based RAID volume. For a parity-based RAID volume, it is necessary to re-create the parity chunk for the stripe corresponding to either a partial stripe write or a full stripe write. The parity chunk is recomputed as the XOR of all the data chunks in the same stripe. For full stripe writes, all of the data chunks are already in the data cache. Therefore, updating the parity chunk simply involves XORing all of the data chunks in the data cache, and writing the resultant parity chunk to the parity chunk location in the striped volume. For partial stripe writes, less than all of the data chunks are already in the data cache. Therefore, all of the data chunks for the partial stripe that are not already in the data cache must be read from the striped volume. It is these additional reads from the striped volume that make the update for the parity chunk corresponding to a partial stripe write slower than the update for the parity chunk corresponding to a full stripe write.
Referring now to FIG. 1a, a block diagram illustrating components of a first non host-based electronic data storage system in accordance with embodiments of the present invention is shown.
Host computer 104 interfaces with one or more storage controllers 108, although only a single storage controller 108 is illustrated for clarity. In one embodiment, storage controller 108 is a RAID controller. In another embodiment, storage controller 108 is a storage appliance such as a provisioning, virtualization, replication, or backup appliance. Storage controller 108 transfers data to and from storage devices 116a, 116b in storage subsystem 124, over storage device bus 120. Storage device bus 120 is any suitable storage bus or group of buses for transferring data directly between storage controller 108 and storage devices 116, including SCSI, Fibre Channel, SAS, SATA, or SSA.
Storage subsystem 124 in one embodiment contains twelve storage devices 116. In other embodiments, storage subsystem 124 may contain fewer or more than twelve storage devices 116. Storage devices 116 include various types of devices, including hard disk drives, solid state drives, optical drives, and tape drives. Within a specific storage device type, there may be several sub-categories of storage devices, organized according to performance. For example, hard disk drives may be organized according to cache size, drive RPM (5,400, 7,200, 10,000, and 15,000, for example), queue depth, random transfer rate, or sequential transfer rate.
Referring now to FIG. 1b, a block diagram illustrating components of a second non host-based electronic data storage system in accordance with embodiments of the present invention is shown.
Referring now to FIG. 1c, a block diagram illustrating components of a host-based electronic data storage system in accordance with embodiments of the present invention is shown.
Referring now to FIG. 2, a block diagram illustrating components of a storage controller 108 in accordance with embodiments of the present invention is shown.
CPU 204 is coupled to storage controller memory 212. Storage controller memory 212 generally includes both non-volatile memory and volatile memory. The memory 212 stores firmware 228 which includes program instructions that CPU 204 fetches and executes, including program instructions for the processes of the present invention. Examples of non-volatile memory include, but are not limited to, flash memory, SD, EPROM, EEPROM, hard disks, and NOVRAM. Volatile memory stores various data structures and in some embodiments contains a read cache 232, a write cache 236, or both. In other embodiments, the read cache 232, the write cache 236, or both, may be stored in non-volatile memory. Examples of volatile memory include, but are not limited to, DDR RAM, DDR2 RAM, DDR3 RAM, and other forms of temporary memory.
The write cache 236 of memory 212 includes a number of cache elements (CEs) 293, 294, 295, . . . 296. CEs store write data from host computers 104, and are organized within chunks and stripes as illustrated in FIG. 6.
Memory 212 further includes stored parameters for a dirty count 240, a free count 244, an outstanding count 248, a partial stripe count 252, a full stripe count 256, a full stripe write percentage 260, a cache low watermark 272, a last destaged Logical Block Address (LBA) 274, and a full stripe write affinity value 264. Each of these parameters is discussed in more detail in the following diagrams and flowcharts. Although the remainder of the discussion assumes only a single striped volume, it should be understood that the present invention supports any number of striped volumes, with the parameters shown repeated for each striped volume.
Memory 212 may also contain one or more data containers 276, 282, 288. Data containers 276, 282, 288 store information related to data in write cache 236. Data container 0 276 includes data container 0 status 278 and data container 0 CE count 280. Data container 1 282 includes data container 1 status 284 and data container 1 CE count 286. Data container 2 288 includes data container 2 status 290 and data container 2 CE count 292. Data containers 276, 282, 288 are created when write data is placed in write cache 236, and are described in more detail with respect to FIGS. 6 and 9.
Storage controller 108 may have one host interface 216a, or multiple host interfaces 216a. Storage controller 108 has one or more storage device interfaces 216b, which transfer data across one or more storage device buses 120 between storage controller 108 and one or more storage devices 116. CPU 204 generates target device I/O requests to storage device interface 216b. In various embodiments, the storage device interface 216b includes one or more protocol controllers, and one or more expanders.
In a preferred embodiment, storage controller 108 includes a bridge device 208, which interconnects CPU 204 with host interface(s) 216a, storage device interface(s) 216b, memory 212, and management controller 220. Bridge device 208 includes bus interfaces and buffering to process commands and data throughout storage controller 108, as well as memory and power control functions for memory 212. In a preferred embodiment, bridge device 208 also includes logical computation facilities for performing XOR operations for parity-related RAID striped volumes.
In some embodiments, storage controller 108 includes a management controller 220. CPU 204 reports status changes and errors to the management controller 220, which communicates status changes for storage controller 108 and errors to one or more users or administrators over management bus or network 224. Management controller 220 also receives commands from one or more users or system administrators over management bus or network 224. Management bus or network 224 is any bus or network capable of transmitting and receiving data from a remote computer, and includes Ethernet, RS-232, Fibre Channel, ATM, SAS, SCSI, Infiniband, or any other communication medium. Such a communication medium may be either cabled or wireless. In some storage controllers 108, status changes and errors are reported to a user or administrator through host interface 216a over host bus or network 112. This may either be in addition to, or in lieu of, management controller 220 and management bus or network 224.
It should be understood that storage controller 108 may be functionally organized in countless different functional organizations and architectures without diverting from the scope or operation of the present invention.
Referring now to FIG. 3a, a block diagram illustrating components of a single storage device striped volume 300 in accordance with embodiments of the present invention is shown.
A single storage device 116 may be a striped volume 300. Storage device 116 may be a hard disk drive, optical drive, tape drive, solid state device, or any other form of mass data storage device. A striped volume 300 is a logical volume comprising two or more evenly sized stripes. The portion of a stripe on one storage device 116 is a chunk.
FIG. 3a illustrates a striped volume 300 having four stripes: stripe N 304, stripe N+1 308, stripe N+2 312, and stripe N+3 316. Stripe N 304 has chunk A 320, stripe N+1 308 has chunk B 324, stripe N+2 312 has chunk C 328, and stripe N+3 316 has chunk D 332. Although four stripes are shown in FIG. 3a, it should be understood that a striped volume 300 may have fewer or more stripes.
Referring now to FIG. 3b, a block diagram illustrating components of a multiple storage device striped volume 334 in accordance with embodiments of the present invention is shown.
Multiple storage devices 116, or a portion of multiple storage devices 116, may be a striped volume 334.
Referring now to FIG. 4, a block diagram illustrating a write cache 236 of a storage controller 108 in accordance with embodiments of the present invention is shown.
Write cache 236 is part of memory 212 of storage controller 108. Write cache 236 receives host data writes 404 from host computers 104 over host bus or network 112, and stores the write data in write cache 236 as dirty data. Dirty data is host write data 404 stored in the write cache 236 that has not yet been written to storage devices 116. Host data writes 404 are stored in the dirty portion of cache 416, awaiting conditions that will transfer storage device writes 408 from the dirty portion of cache 416 to striped volume 300, 334. Storage device writes 408 are either partial stripe writes or full stripe writes.
A cache low watermark 272 is maintained by the storage controller 108 to determine when partial or full stripes should be destaged from the write cache 236 to the striped volume 300, 334. The specific value selected for the cache low watermark 272 depends on many factors, including the size of the write cache 236, the rate at which host data writes 404 are received, the processing speed of storage controller 108 and CPU 204, and the expected usage of storage controller 108. It is desirable to set the cache low watermark 272 at a level such that, when an expected maximum rate of host data writes 404 is received, the CPU 204 and storage controller 108 are able to destage storage device writes 408 quickly enough to keep the empty portion of cache 412 nonzero. In one embodiment, the cache low watermark 272 is at 50% of the capacity of write cache 236. In another embodiment, the cache low watermark 272 is at 80% of the capacity of write cache 236. However, in other embodiments, the cache low watermark 272 may be set at a level other than 50% or 80% of the capacity of write cache 236.
As host data writes 404 are received and written into write cache 236, the dirty portion of cache 416 expands accordingly, as long as sufficient space to store the new host data writes 404 is present in write cache 236. At the same time dirty portion of cache 416 expands, empty portion of cache 412 contracts. Similarly, as storage device writes 408 transfer data from the write cache 236 to the striped volume 300, 334, the dirty portion of cache 416 contracts and empty portion of cache 412 expands accordingly. Storage controller 108 maintains a cache full percentage 420, which tracks the current size of the dirty portion of cache 416.
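As a sketch of this occupancy accounting, assuming the write cache is tracked at cache element (CE) granularity (the struct and helpers are illustrative, with field comments keyed to the reference numerals above):

```c
/* Illustrative occupancy tracking for write cache 236. Reference
 * numerals in comments match the stored parameters of memory 212. */
typedef struct {
    unsigned total_ces;          /* capacity of the write cache, in CEs */
    unsigned dirty_count;        /* dirty CEs awaiting destage (240)    */
    unsigned low_watermark_pct;  /* cache low watermark 272, e.g. 50    */
} write_cache_state;

/* Cache full percentage 420: current size of the dirty portion 416. */
static unsigned cache_full_pct(const write_cache_state *wc)
{
    return 100u * wc->dirty_count / wc->total_ces;
}

/* Destage when the dirty portion has reached the low watermark 272. */
static int should_destage(const write_cache_state *wc)
{
    return cache_full_pct(wc) >= wc->low_watermark_pct;
}
```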
Referring now to FIG. 5a, a diagram illustrating partial stripe write destage penalties for a three-drive RAID 5 striped volume 334 in accordance with embodiments of the present invention is shown.
Three-drive RAID 5 striped volume 334 has three storage devices 116, identified as storage devices 116a, 116b, and 116c. A given stripe has chunks X 504a, Y 508, and Z 512a. Assume chunk Z 512a is the parity chunk for the given stripe; chunk X 504a and chunk Y 508 are therefore data chunks. Write cache 236 includes chunk X′ 504b, which is in the dirty portion of cache 416. The given stripe is a partial stripe, since only a single data chunk (chunk X′ 504b) is in write cache 236 and no new data for chunk Y 508 is present.
If the process of the present invention requires destage of the given partial stripe to striped volume 334, it will be necessary to recompute a new parity chunk Z′ 512b using new data chunk X′ 504b and old data chunk Y 508. Therefore, storage controller 108 initially reads chunk Y 508 from storage device 116b. Next, storage controller 108 XORs chunk Y 508 with chunk X′ 504b to obtain new parity chunk Z′ 512b. After the new parity has been calculated, storage controller 108 writes chunk X′ 504b to the chunk X 504a location, and chunk Z′ 512b to the chunk Z 512a location. Therefore, for a three-drive RAID 5 striped volume 334, one read and two write operations are required to destage a partial stripe write.
Referring now to FIG. 5b, a diagram illustrating partial stripe write destage penalties for a five-drive RAID 5 striped volume 334 in accordance with embodiments of the present invention is shown.
Five-drive RAID 5 striped volume 334 includes five storage devices 116, identified as storage devices 116a, 116b, 116c, 116d, and 116e. For a given stripe, the chunks involved are chunk V 524, chunk W 528a, chunk X 532a, chunk Y 536, and chunk Z 540a. Assume the parity chunk for the given stripe is chunk Z 540.
The given stripe in write cache 236 includes new data chunk W′ 528b and new data chunk X′ 532b. Since there is no new data corresponding to chunk V 524 and chunk Y 536, this is a partial stripe write. In order to destage the partial stripe from write cache 236, it is first necessary to read chunk V 524 and chunk Y 536. Next, storage controller 108 XORs chunk V 524, chunk W′ 528b, chunk X′ 532b, and chunk Y 536 to obtain new parity chunk Z′ 540b. Once the new parity chunk Z′ 540b has been calculated, storage controller 108 writes new data chunk W′ 528b to the chunk W 528a location, new data chunk X′ 532b to the chunk X 532a location, and new parity chunk Z′ 540b to the chunk Z 540a location.
As can be seen from FIGS. 5a and 5b, two reads and three writes are required to destage this partial stripe write, compared with one read and two writes in the three-drive case of FIG. 5a.
Referring now to FIG. 5c, a diagram illustrating full stripe write destage penalties for a five-drive RAID 5 striped volume 334 in accordance with embodiments of the present invention is shown.
Full stripe writes are writes in which all chunks for the given stripe have new data (dirty data) in write cache 236. Five-drive RAID 5 striped volume 334 includes five storage devices 116, identified as storage devices 116a, 116b, 116c, 116d, and 116e. For a given stripe, the chunks involved are chunk V 556a, chunk W 560a, chunk X 564a, chunk Y 568a, and chunk Z 572a. Assume the parity chunk for the given stripe is chunk Z 572.
The given stripe in write cache 236 includes new data chunk V′ 556b, new data chunk W′ 560b, new data chunk X′ 564b, and new data chunk Y′ 568b. In order to destage the full stripe from write cache 236, it is first necessary for storage controller 108 to XOR chunk V′ 556b, chunk W′ 560b, chunk X′ 564b, and chunk Y′ 568b to obtain new parity chunk Z′ 572b. Once the new parity chunk Z′ 572b has been calculated, storage controller 108 writes new data chunk V′ 556b to the chunk V 556a location, new data chunk W′ 560b to the chunk W 560a location, new data chunk X′ 564b to the chunk X 564a location, new data chunk Y′ 568b to the chunk Y 568a location, and new parity chunk Z′ 572b to the chunk Z 572a location. It can be seen that full stripe writes are more efficient than partial stripe writes, since no reads of old data from striped volume 334 are required in order to generate the new parity chunk Z′ 572b.
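The read and write counts of FIGS. 5a through 5c generalize for an n-drive RAID 5 volume destaged by the method described above. The following sketch derives the counts from the examples; the formula itself is not stated in the specification:

```c
#include <assert.h>

typedef struct { unsigned reads, writes; } destage_cost;

/* Destage cost for one stripe of an n-drive RAID 5 volume using the
 * method above: read every unchanged data chunk, XOR, then write the
 * dirty chunks plus the new parity chunk. A stripe has n-1 data chunks. */
static destage_cost stripe_destage_cost(unsigned drives, unsigned dirty_chunks)
{
    unsigned data_chunks = drives - 1;
    assert(drives >= 3 && dirty_chunks >= 1 && dirty_chunks <= data_chunks);
    destage_cost c;
    c.reads  = data_chunks - dirty_chunks;  /* old data chunks to fetch */
    c.writes = dirty_chunks + 1;            /* dirty data + new parity  */
    return c;
}

/* stripe_destage_cost(3, 1) -> 1 read, 2 writes  (FIG. 5a)
 * stripe_destage_cost(5, 2) -> 2 reads, 3 writes (FIG. 5b)
 * stripe_destage_cost(5, 4) -> 0 reads, 5 writes (FIG. 5c, full stripe) */
```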
Referring now to FIG. 6, a diagram illustrating stripes 604 and data containers 612 for an exemplary striped volume 334 in accordance with embodiments of the present invention is shown.
The portion of the stripe 604 allocated to a single storage device 116 is a chunk. In the example of FIG. 6, each chunk includes a number of cache elements (CEs), each of which is either an empty CE 616 or a dirty CE 620.
In the example of FIG. 6, write data has been received for ten of the stripes 604, and ten data containers 612 have accordingly been assigned.
Data containers 612 are data structures assigned on a stripe basis as new write data is received. Data containers 612 include a status 278, 284, 290 having one of three values: unknown, partial, or full. When a data container 612 is initially created, the data container 612 has a status 278, 284, 290 of unknown. When new write data is associated with a data container 612, and one or more empty CEs 616 remain, the data container 612 has a status 278, 284, 290 of partial. When new write data is associated with the data container 612, and all CEs of the data container 612 are dirty CEs 620, the data container 612 has a status 278, 284, 290 of full. In the preferred embodiment, data containers 612 are assigned sequentially. However, in other embodiments, data containers 612 may be assigned in any order as long as no two stripes have the same data container 612 number.
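A data container 612 might be rendered as a small structure such as the hypothetical sketch below; the field comments refer to the reference numerals above:

```c
/* Hypothetical rendering of a data container 612: one per stripe 604
 * holding dirty CEs 620, with the three status values named above. */
typedef enum { DC_UNKNOWN, DC_PARTIAL, DC_FULL } dc_status;

typedef struct {
    unsigned  stripe;    /* stripe 604 this container tracks          */
    unsigned  ce_count;  /* dirty CE count (280, 286, 292, ...)       */
    dc_status status;    /* unknown on creation, then partial or full */
} data_container;

/* Attach one newly dirty CE and update the status; ces_per_stripe is
 * an assumed parameter giving the total CEs a stripe can hold. */
static void dc_attach_ce(data_container *dc, unsigned ces_per_stripe)
{
    dc->ce_count++;
    dc->status = (dc->ce_count == ces_per_stripe) ? DC_FULL : DC_PARTIAL;
}
```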
Stripe 3 was the first stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 0.
Stripe 9 was the second stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 1.
Stripe 0 was the third stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 2.
Stripe 5 was the fourth stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 3.
Stripe 13 was the fifth stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 4.
Stripe 10 was the sixth stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 5.
Stripe 4 was the seventh stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 6.
Stripe 12 was the eighth stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 7.
Stripe 6 was the ninth stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 8.
Stripe 11 was the tenth stripe 604 to receive write data in the striped volume 334 of FIG. 6, and was therefore assigned data container 9.
Only 10 data containers 612 have been assigned since only 10 stripes 604 have dirty CEs 620 in write cache 236. Stripes 1, 2, 7, and 8 do not have data containers 612 assigned since all CEs in those stripes are empty CEs 616.
Referring now to FIG. 7, a flowchart illustrating a data write process in accordance with embodiments of the present invention is shown. Flow begins at block 704.
At block 704, the storage controller 108 receives a data write command to a striped volume 300, 334 from a host computer 104. Flow proceeds to block 708.
At block 708, storage controller 108 resets a free running timer. The free running timer measures time beginning with the receipt of a new I/O write command, and in some cases initiates a destage of a stripe in write cache 236 to the striped volume 300, 334. In some embodiments, the free running timer is implemented in firmware 228 executed by CPU 204. In other embodiments, the free running timer is implemented in hardware within the storage controller 108. Flow proceeds to decision block 712.
At decision block 712, the storage controller 108 determines if there is sufficient space in the write cache 236 to store data of the data write command. If there is not sufficient space in the write cache 236 to store write data of the data write command, then flow proceeds to block 716. If there is sufficient space in the write cache 236 to store write data of the data write command, then flow proceeds to block 720.
At block 716, the storage controller 108 waits for write cache 236 space to become available to store the write data of the data write command. Once write cache 236 space becomes available, flow proceeds to block 720.
At block 720, the storage controller 108 allocates the write cache 236. The allocation process for the write cache 236 is illustrated in the flowchart of FIG. 8a. Flow proceeds to block 724.
At block 724, the storage controller 108 requests write command data 404 from the host computer 104 that provided the data write command. Flow proceeds to block 728.
At block 728, the host computer 104 that provided the data write command transfers write command data 404 to the storage controller 108 across host bus or network 112. Flow proceeds to block 732.
At block 732, the storage controller 108 stores the write command data 404 in the allocated write cache 236. At this point, the write data 404 of the data write command is stored in the write cache 236. Flow proceeds to block 736.
At block 736, the storage controller 108 provides a command completion notification to the host computer 104 that provided the data write command. When the host computer 104 receives the command completion, the host computer 104 treats the data write command as a completed command. Flow proceeds to block 740.
At block 740, the storage controller 108 releases the write cache 236. Releasing the write cache 236 is described in more detail with respect to FIG. 8b. Flow proceeds to block 744.
At block 744, the storage controller 108 deletes the data write command to the striped volume 300, 334. Flow proceeds to block 704 to wait for a next data write command.
Referring now to FIG. 8a, a flowchart illustrating a write cache allocation process in accordance with embodiments of the present invention is shown. Flow begins at block 804.
At block 804, the storage controller 108 allocates cache elements (CEs) 293, 294, 295, . . . 296 in the write cache 236 to store write command data 404. Flow proceeds to block 808.
At block 808, the storage controller 108 decrements the free count 244 by the number of CEs allocated to write command data 404. The free count 244 is the number of free CEs 293, 294, 295, . . . 296 in the write cache 236, and newly added write data 404 from the data write command will reduce the free count 244 accordingly. Flow proceeds to block 812.
At block 812, the storage controller 108 increments the outstanding count 248 by the number of CEs 293, 294, 295, . . . 296 allocated to write command data 404. Flow proceeds to block 724 of FIG. 7.
Referring now to FIG. 8b, a flowchart illustrating a write cache memory release process in accordance with embodiments of the present invention is shown. Flow begins at block 820.
At block 820, the storage controller 108 updates the dirty count 240 by the number of cache elements (CEs) 293, 294, 295, . . . 296 allocated to store write command data 404. Flow proceeds to block 824.
At block 824, the storage controller 108 updates the partial stripe count 252 and full stripe count 256 based on the number of dirty cache elements (CEs) 620 in each stripe 604 of the striped volume 300, 334. Flow proceeds to block 828.
At block 828, the storage controller 108 decrements the outstanding count 248 by the number of cache elements (CEs) 620 allocated to storage device writes 408. Flow proceeds to block 832.
At block 832, the storage controller 108 updates the stripe map for the striped volume 300, 334. Flow proceeds to block 744 of FIG. 7.
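Gathering the counter updates of FIGS. 8a and 8b into one illustrative sketch (the struct is hypothetical; comments cite the corresponding blocks):

```c
/* Illustrative counter bookkeeping; fields mirror the stored
 * parameters of memory 212. */
typedef struct {
    unsigned free_count;        /* 244: free CEs in write cache 236   */
    unsigned outstanding_count; /* 248: CEs allocated, data in flight */
    unsigned dirty_count;       /* 240: CEs holding dirty write data  */
} cache_counters;

/* FIG. 8a (blocks 804-812): allocate CEs for a new data write command. */
static void allocate_ces(cache_counters *cc, unsigned ces)
{
    cc->free_count        -= ces;   /* block 808 */
    cc->outstanding_count += ces;   /* block 812 */
}

/* FIG. 8b (blocks 820-828): release CEs once write data is in cache. */
static void release_ces(cache_counters *cc, unsigned ces)
{
    cc->dirty_count       += ces;   /* block 820: data is now dirty */
    cc->outstanding_count -= ces;   /* block 828 */
}
```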
Referring now to FIG. 9, a flowchart illustrating a data container update process in accordance with embodiments of the present invention is shown. Flow begins at block 904.
At block 904, the storage controller 108 identifies a first dirty cache element (CE) 620 in the striped volume 300, 334. In one embodiment, the storage controller 108 searches for dirty cache elements 620 beginning with a first storage device 116a and first stripe 604. In another embodiment, the storage controller 108 searches for dirty cache elements 620 beginning with the last storage device 116d and last stripe 604. In other embodiments, the storage controller 108 searches for dirty cache elements 620 based on some other ordering method. Flow proceeds to decision block 908.
At decision block 908, the storage controller 108 determines if there is an existing data container 612 for the dirty cache element (CE) 620. If the storage controller 108 determines there is not an existing data container 612 for the dirty cache element (CE) 620, then flow proceeds to block 912. If the storage controller 108 determines there is an existing data container 612 for the dirty cache element (CE) 620, then flow proceeds to block 916.
At block 912, the storage controller 108 creates a new data container 612 with the status of “unknown”, and assigns the next available data container number to the data container 612. New data containers 612 are assigned when a new dirty cache element (CE) 620 is found for a stripe 604 not previously represented in write cache 236. Flow proceeds to block 920.
At block 916, the storage controller 108 identifies the data container 612 including the dirty cache element (CE) 620. In this case, a data container 612 already exists for the stripe 604 in the write cache 236 containing the dirty cache element (CE) 620. Flow proceeds to block 920.
At block 920, the storage controller 108 attaches the dirty cache element (CE) 620 to the data container 612. Flow proceeds to block 924.
At block 924, the storage controller 108 updates the cache element (CE) count 280, 286, 292 in the data container 612. The cache element (CE) count 280, 286, 292 is the number of dirty cache elements 620 in the data container 612. Flow proceeds to block 928.
At block 928, the storage controller 108 updates the partial stripe count 252 or the full stripe count 256 if the stripe 604 including the cache element (CE) transitions to a partial stripe or a full stripe, respectively. For a new data container 612, the partial stripe count 252 is incremented. For an existing data container 612, the full stripe count 256 is incremented if all cache elements in the stripe corresponding to the existing data container 612 are dirty cache elements 620. Correspondingly, the partial stripe count 252 is decremented whenever the full stripe count 256 is incremented. Flow proceeds to block 932.
At block 932, the storage controller 108 updates the data container status 278, 284, 290 if the stripe 604 including the cache element (CE) transitions to either a partial stripe or a full stripe, respectively. For a new data container 612, the data container status 278, 284, 290 is “partial”. For an existing data container 612 the data container status 278, 284, 290 is “full” if all cache elements in the stripe corresponding to the existing data container 612 are dirty cache elements 620. Flow proceeds to decision block 936.
At decision block 936, the storage controller 108 determines if there are more dirty cache elements (CEs) 620 in the write cache 236. If the storage controller 108 determines there are not more dirty cache elements (CEs) 620 in the write cache 236, then flow ends. If the storage controller 108 determines there are more dirty cache elements (CEs) in the write cache 236, then flow proceeds to block 940.
At block 940, the storage controller 108 identifies a next dirty cache element (CE) 620 in the write cache 236. In one embodiment, the next dirty cache element (CE) 620 is the next sequential dirty cache element (CE) 620 in the write cache 236. Flow proceeds to decision block 908.
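One way to sketch the FIG. 9 scan, assuming a fixed-size stripe map with per-stripe dirty flags (all sizes and names are illustrative):

```c
enum { MAX_STRIPES = 16, CES_PER_STRIPE = 12 };  /* illustrative sizes */

typedef struct {
    int      assigned;   /* has a data container been created? */
    unsigned ce_count;   /* dirty CEs attached to the container */
} container;

/* Walk every dirty CE, creating or updating one container per stripe
 * and maintaining the partial and full stripe counts. */
static void scan_dirty_ces(const int dirty[MAX_STRIPES][CES_PER_STRIPE],
                           container dc[MAX_STRIPES],
                           unsigned *partial_count, unsigned *full_count)
{
    for (unsigned s = 0; s < MAX_STRIPES; s++) {
        for (unsigned e = 0; e < CES_PER_STRIPE; e++) {
            if (!dirty[s][e])
                continue;
            if (!dc[s].assigned) {       /* block 912: new container */
                dc[s].assigned = 1;
                dc[s].ce_count = 0;
                (*partial_count)++;      /* block 928: new => partial */
            }
            dc[s].ce_count++;            /* blocks 920-924 */
            if (dc[s].ce_count == CES_PER_STRIPE) {
                (*full_count)++;         /* stripe transitioned to full */
                (*partial_count)--;
            }
        }
    }
}
```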
Referring now to FIG. 10, a flowchart illustrating a stripe destage process in accordance with embodiments of the present invention is shown. Flow begins at block 1004.
At block 1004, the free running timer of the storage controller 108 times out. The free running timer started counting at block 708 of FIG. 7, or when it was most recently reset at block 1056. Flow proceeds to block 1008.
At block 1008, the storage controller 108 determines if the dirty count 240 for the striped volume 300, 334 is at least as high as the cache low watermark 272. This is the normal destage condition when the storage controller 108 is actively processing write requests 404 from host computers 104. In another embodiment, the storage controller 108 determines if the dirty count 240 for the striped volume 300, 334 is higher than the cache low watermark 272. Flow proceeds to block 1012.
At block 1012, the storage controller 108 determines the next logical block address (LBA) containing a dirty cache element (CE) 620 following the last destaged LBA 274. The last destaged LBA 274 is updated in block 1052, following a previous stripe destage operation. Flow proceeds to block 1016.
At block 1016, the storage controller 108 calculates the full stripe write percentage 260 for the striped volume 300, 334. The full stripe write percentage 260 calculation process is illustrated in FIG. 11a. Flow proceeds to decision block 1020.
At decision block 1020, the storage controller 108 determines if the full stripe write percentage 260 is greater than a full stripe write affinity value 264. In another embodiment, the storage controller 108 determines if the full stripe write percentage 260 is greater than or equal to the full stripe write affinity value 264. The full stripe write affinity value 264 determines the likelihood that a full stripe will be destaged from the write cache 236. This value is best determined by empirical testing, and depends on the frequency and locality of reference of host data writes, the size of the write cache 236, and the time required to destage partial or full stripes from the write cache 236. In one embodiment, the full stripe write affinity value 264 is 50%. In another embodiment, the full stripe write affinity value 264 is 60%. However, in other embodiments, the full stripe write affinity value 264 is different from either 50% or 60%. If the full stripe write percentage 260 is greater than the full stripe write affinity value 264, then flow proceeds to block 1024. If not, then flow proceeds to block 1036.
At block 1024, the storage controller 108 identifies the next full stripe write in the write cache 236 for the striped volume 300, 334. The next full stripe write in the write cache 236 is identified by sequentially searching data container status 278, 284, 290 of data containers 276, 282, 288, respectively. Data containers 276, 282, 288 have a status of either ‘unknown’, ‘partial’ or ‘full’. Flow proceeds to block 1028.
At block 1028, the storage controller 108 destages the identified next full stripe write from block 1024 to storage devices 116 of the striped volume 300, 334. Destaging includes copying the identified full stripe write from the write cache 236 to the striped volume 300, 334. Flow proceeds to block 1032.
At block 1032, the storage controller 108 decrements the full stripe count 256 for the striped volume 300, 334, since a full stripe write was destaged in block 1028. Flow proceeds to block 1048.
At block 1036, the storage controller 108 identifies the next data container in the write cache 236 for the striped volume 300, 334. The next data container in the write cache 236 is identified by sequentially searching data container numbers 276, 282, 288. Flow proceeds to block 1040.
At block 1040, the storage controller 108 destages the stripe corresponding to the identified data container 276, 282, 288 from block 1036 to storage devices 116 of the striped volume 300, 334. Destaging includes copying the identified dirty cache elements 620 from the write cache 236 to the striped volume 300, 334. Flow proceeds to block 1044.
At block 1044, the storage controller 108 decrements the partial stripe count 252 for the striped volume 300, 334, if a partial stripe write was destaged in block 1040. Alternatively, the storage controller 108 decrements the full stripe count 256 for the striped volume 300, 334, if a full stripe write was destaged in block 1040. Flow proceeds to block 1048.
At block 1048, the storage controller 108 decrements the dirty count 240 and increments the free count 244 by the number of dirty cache elements (CEs) 620 destaged. If a partial stripe write was destaged, the number of dirty cache elements (CEs) 620 destaged is the number of dirty cache elements (CEs) 620 in the identified partial stripe of block 1036. If a full stripe write was destaged, the number of dirty cache elements (CEs) 620 destaged is the number of dirty cache elements (CEs) 620 in the identified full stripe of block 1024. Flow proceeds to block 1052.
At block 1052, the storage controller 108 updates the last destaged LBA 274 to reflect the LBA of the destaged partial stripe write in block 1040 or full stripe write in block 1028. This step saves the LBA of the last destaged stripe so that a next LBA can be calculated in block 1012. Flow proceeds to block 1056.
At block 1056, the storage controller 108 resets the free-running timer to begin counting again. This step resumes the timer in order to destage a stripe 604 if the timer times out, in block 1004. Flow ends at block 1056.
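Pulling the FIG. 10 flow together, a hedged sketch of one destage pass; the find_* and destage_stripe helpers are stubs standing in for the searches and copies of blocks 1024 through 1040:

```c
typedef struct {
    unsigned dirty_count, low_watermark;  /* in CEs (240, 272)              */
    unsigned partial_count, full_count;   /* 252, 256                       */
    double   affinity_pct;                /* full stripe write affinity 264 */
} volume_state;

/* Stubs standing in for the searches and copies of blocks 1024-1040. */
static int  find_next_full_stripe(void) { return 0; }
static int  find_next_container(void)   { return 0; }
static void destage_stripe(int s)       { (void)s; }

/* One pass through the destage flow, run when the timer expires (1004). */
static void destage_tick(volume_state *v)
{
    if (v->dirty_count < v->low_watermark)          /* block 1008 */
        return;
    unsigned total = v->partial_count + v->full_count;
    double fsw = total ? 100.0 * v->full_count / total : 0.0; /* block 1016 */
    if (fsw > v->affinity_pct) {                    /* block 1020 */
        destage_stripe(find_next_full_stripe());    /* blocks 1024-1028 */
        v->full_count--;                            /* block 1032 */
    } else {
        destage_stripe(find_next_container());      /* blocks 1036-1040 */
        v->partial_count--;                         /* block 1044, partial case */
    }
    /* Blocks 1048-1056: update dirty/free counts and the last destaged
     * LBA 274, then reset the free running timer (omitted here). */
}
```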
Referring now to FIG. 11a, a flowchart illustrating a calculation of full stripe write percentage 260 in accordance with embodiments of the present invention is shown. Flow begins at block 1104.
At block 1104, the storage controller 108 obtains a partial stripe count 252 and a full stripe count 256 from memory 212. The partial stripe count 252 is the current number of partial stripes in the write cache 236, and the full stripe count 256 is the current number of full stripes in the write cache 236. Flow proceeds to block 1108.
At block 1108, the storage controller 108 divides the full stripe count 256 by the sum of the partial stripe count 252 and the full stripe count 256, and multiplies by 100 to obtain the full stripe write percentage 260. Flow proceeds to decision block 1020.
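For example, with a hypothetical partial stripe count 252 of 6 and a full stripe count 256 of 4, the full stripe write percentage 260 is 4/(6+4)×100=40%. With the 50% full stripe write affinity value 264 of one embodiment, this would cause a partial stripe to be destaged at decision block 1020.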
Referring now to FIG. 11b, a diagram illustrating an exemplary full stripe write percentage 260 calculation in accordance with embodiments of the present invention is shown.
The full stripe write percentage 260 is equal to the full stripe count 256 divided by the sum of the partial stripe count 252 and the full stripe count 256, multiplied by 100. The example of FIG. 11b applies this calculation to the partial stripe count 252 and full stripe count 256 of an exemplary write cache 236.
Finally, those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the appended claims.