Claims
- 1. A processor-readable medium comprising processor-executable instructions configured for executing a method comprising:
pre-fetching a sub-page of first data from a mass storage media to a read cache; detecting a first read cache hit upon the sub-page in the read cache; and pre-fetching a page of second data to the read cache in response to the detecting of the first read cache hit.
- 2. A processor-readable medium as recited in claim 1, wherein the pre-fetching a sub-page comprises fetching a remainder of a page of data.
- 3. A processor-readable medium as recited in claim 1, further comprising:
detecting a sequential workload, wherein the pre-fetching a sub-page of first data is in response to detecting the sequential workload.
- 4. A processor-readable medium as recited in claim 3, wherein the sequential workload comprises a small transfer length sequential workload.
- 5. A processor-readable medium as recited in claim 1, further comprising:
detecting a second read cache hit upon the page of second data in the read cache; and pre-fetching third data in response to the detecting of the second read cache hit.
- 6. A processor-readable medium as recited in claim 1, wherein the detecting the first read cache hit comprises:
retrieving at least a portion of the first data in the read cache.
- 7. A processor-readable medium as recited in claim 6, wherein the portion of the first data includes a head portion of the first data.
- 8. A processor-readable medium as recited in claim 6, wherein the portion of the first data includes a tail portion of the first data.
- 9. A processor-readable medium as recited in claim 1, wherein the second data is logically sequential to the first data.
- 10. A processor-readable medium as recited in claim 1, wherein the second data comprises less than a page of data.
- 11. A processor-readable medium as recited in claim 5, wherein the third data comprises less than a page of data.
- 12. A processor-readable medium as recited in claim 1, wherein the amount of the second data is adjustable.
- 13. A processor-readable medium as recited in claim 1, wherein the amount of the second data is manually adjustable.
- 14. A processor-readable medium as recited in claim 1, wherein the amount of the second data is automatically adjustable.
- 15. A processor-readable medium as recited in claim 1, wherein the amount of the second data is dynamically adjustable in response to variations in an operational parameter.
- 16. A processor-readable medium as recited in claim 15, wherein the operational parameter comprises at least one of:
a disk drive data retrieval latency; a workload type; a tape drive data retrieval latency; and a read cache hit size corresponding to the read cache hit.
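Claims 1-16 describe a hit-triggered pre-fetch loop: a sub-page of first data is primed into the read cache, and a read cache hit on that sub-page triggers a pre-fetch of a logically sequential page of second data. The following is a minimal sketch of that loop, not the claimed implementation; the names (`ReadCache`, `prefetch`, `read`) and the sizes (`PAGE`, `SUB_PAGE`) are illustrative assumptions, not taken from the claims.

```python
PAGE = 4096          # assumed page size in bytes (illustrative)
SUB_PAGE = 1024      # assumed sub-page size, a fraction of a page (illustrative)

class ReadCache:
    """Toy read cache mapping a starting address to cached data."""
    def __init__(self):
        self.lines = {}

    def insert(self, addr, data):
        self.lines[addr] = data

    def hit(self, addr):
        return addr in self.lines

def prefetch(storage, cache, addr, length):
    """Copy `length` bytes starting at `addr` from mass storage into the cache."""
    cache.insert(addr, storage[addr:addr + length])

def read(storage, cache, addr, next_amount=PAGE):
    """Service a host read; on a read cache hit, trigger a pre-fetch of
    logically sequential second data (claims 1 and 9)."""
    if cache.hit(addr):
        # Hit on the primed sub-page: pre-fetch a page of second data
        # that logically follows the first data.
        prefetch(storage, cache, addr + SUB_PAGE, next_amount)
        return cache.lines[addr]
    # Cache miss: fetch directly from mass storage.
    return storage[addr:addr + SUB_PAGE]
```

The `next_amount` parameter stands in for the adjustable amount of second data recited in claims 12-15; a real controller might tune it against retrieval latency or workload type per claim 16.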
- 17. A method comprising:
detecting a small transfer length sequential workload; initiating a first pre-fetch operation to pre-fetch a sub-page into a read cache in response to the sequential workload; detecting a first read cache hit on the sub-page in the read cache; and initiating a second pre-fetch operation in response to the detecting of the first read cache hit.
- 18. A method as recited in claim 17, wherein the detecting a sequential workload comprises:
receiving one or more read requests to read data at addresses within a host address space.
- 19. A method as recited in claim 17, wherein the initiating a first pre-fetch operation comprises:
pre-fetching first data from a mass storage media and storing the first data into a read cache.
- 20. A method as recited in claim 19, wherein the initiating of the second pre-fetch operation comprises:
pre-fetching second data from the mass storage media and storing the second data into the read cache, wherein the second data is logically sequential to the first data.
- 21. A method as recited in claim 20, further comprising:
detecting a second read cache hit corresponding to the second data; and in response to detecting the second read cache hit, initiating a third pre-fetch operation for third data that is logically sequential to the second data.
- 22. A method as recited in claim 17, further comprising:
detecting a read cache hit on a predetermined portion of the sub-page; and transitioning to page-sized read cache pre-fetching in response to the detecting of the read cache hit on the predetermined portion of the sub-page.
- 23. A method as recited in claim 22, wherein the predetermined portion is a tail portion.
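Claims 17-18 recite detecting a small transfer length sequential workload from incoming read requests, which then primes the read cache. One simple way to sketch such detection, with the run-length threshold (`SEQ_THRESHOLD`) and the "small transfer" cutoff (`SMALL_TRANSFER`) as assumed parameters not specified by the claims:

```python
SEQ_THRESHOLD = 3       # consecutive sequential reads before declaring a workload (assumed)
SMALL_TRANSFER = 512    # assumed "small transfer length" cutoff in bytes

class WorkloadDetector:
    """Flags a small transfer length sequential workload after a run of
    small, address-contiguous read requests (claims 17-18, sketch only)."""
    def __init__(self):
        self.last_end = None    # end address of the previous read request
        self.run = 0            # current length of the sequential run

    def observe(self, addr, length):
        """Observe one read request; return True once a small transfer
        length sequential workload has been detected."""
        sequential = self.last_end is not None and addr == self.last_end
        # A large transfer or a non-contiguous address resets the run.
        self.run = self.run + 1 if sequential and length <= SMALL_TRANSFER else 0
        self.last_end = addr + length
        return self.run >= SEQ_THRESHOLD
```

Once the detector fires, the first pre-fetch operation of claim 17 primes the read cache with a sub-page, and the tail-hit condition of claims 22-23 governs the transition to page-sized pre-fetching.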
- 24. A method comprising:
priming a read cache with initial data; and triggering a read pre-fetch operation in response to a read cache hit upon at least a portion of the initial data in the read cache.
- 25. A method as recited in claim 24, further comprising detecting a sequential workload.
- 26. A method as recited in claim 24, further comprising detecting a small transfer length sequential workload.
- 27. A method as recited in claim 26, further comprising:
pre-fetching a sub-page of data; and storing the pre-fetched sub-page of data in the read cache.
- 28. A method as recited in claim 27, further comprising:
detecting a read cache hit upon the sub-page of data; and transitioning to page pre-fetching wherein subsequent pre-fetch operations pre-fetch at least a page of data.
- 29. A method as recited in claim 26, wherein the read pre-fetch operation comprises transferring second data from a memory into the read cache, and further comprising:
triggering a second read pre-fetch operation in response to a second read cache hit upon either the initial data in the read cache or the second data in the read cache.
- 30. A method as recited in claim 29 wherein the memory is a mass storage medium.
- 31. A method as recited in claim 29 wherein the memory is random access memory.
- 32. A method as recited in claim 26 wherein the portion of the initial data is a head portion.
- 33. A method as recited in claim 26 wherein the portion of the initial data is a tail portion.
- 34. A method as recited in claim 26 wherein the pre-fetch operation comprises pre-fetching a page of data.
- 35. A method as recited in claim 26 wherein the pre-fetch operation comprises pre-fetching less than a page of data.
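Claims 32-33 distinguish hits on the head portion versus the tail portion of the primed data. A small helper illustrating that classification; the function name and the portion fraction `frac` are assumptions for illustration, as the claims do not define the portion sizes.

```python
def hit_portion(hit_addr, sub_addr, sub_len, frac=0.25):
    """Classify a read cache hit as landing in the head, tail, or middle
    of a primed sub-page (claims 32-33, sketch only).

    `frac` is the assumed fraction of the sub-page treated as head/tail.
    """
    offset = hit_addr - sub_addr
    if 0 <= offset < sub_len * frac:
        return "head"
    if sub_len * (1 - frac) <= offset < sub_len:
        return "tail"
    return "middle"
```

A tail hit is a natural trigger point: it suggests the host has nearly consumed the primed data, so the next pre-fetch (a page or less, per claims 34-35) can be issued before the host runs past the cached range.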
- 36. A storage device comprising:
a priming module operable to trigger an initial pre-fetch of initial data from a mass storage medium into a read cache; and a trigger module in operable communication with the priming module, operable to trigger a subsequent pre-fetch of subsequent data from the mass storage medium into the read cache in response to a read cache hit upon the initial data in the read cache.
- 37. A storage device as recited in claim 36, wherein the priming module is further operable to detect a sequential workload and trigger the initial pre-fetch operation in response to the detection of the sequential workload.
- 38. A storage device as recited in claim 37, wherein the sequential workload is a small transfer length sequential workload.
- 39. A storage device as recited in claim 36, wherein the subsequent pre-fetch comprises pre-fetching less than a page of data.
- 40. A storage device as recited in claim 36, further comprising:
a cache interface in operable communication with the trigger module and the priming module, the cache interface operable to pre-fetch data into and read data from the read cache; and an input/output processor in operable communication with the priming module, the trigger module, and the cache interface, the input/output processor operable to communicate read requests to the trigger module and the priming module.
- 41. A system comprising:
a first memory; a microprocessor in operable communication with the first memory; a read cache in operable communication with the microprocessor; and means for triggering a read cache pre-fetch operation in response to a read cache hit.
- 42. A system as recited in claim 41, wherein the means for triggering comprises:
a priming module in operable communication with the read cache, operable to transfer initial data to the read cache; and a trigger module in operable communication with the read cache, operable to transfer subsequent data to the read cache in response to the read cache hit.
- 43. A system as recited in claim 41, wherein the first memory is a disk drive.
- 44. A system as recited in claim 41, wherein the first memory is a Redundant Array of Independent Disks (RAID).
- 45. A system as recited in claim 44, wherein the RAID comprises at least one of:
magnetic disks; tapes; optical disks; or solid state disks.
- 46. A method of triggering a read cache pre-fetch comprising:
detecting a small transfer length sequential workload; pre-fetching a sub-page of data into a read cache in response to detecting the small transfer length sequential workload; detecting a read cache hit at a predetermined location in the sub-page of data in the read cache; and transitioning to page-size pre-fetching in response to detecting the read cache hit at the predetermined location in the sub-page.
- 47. A method as recited in claim 46, wherein the pre-fetching a sub-page of data into a read cache comprises:
pre-fetching data up to a next page boundary in a host address space.
- 48. A method as recited in claim 46, wherein the predetermined location includes a head portion of the sub-page in the read cache.
- 49. A method as recited in claim 46 further comprising:
pre-fetching a page of data into the read cache in response to detecting the read cache hit at the predetermined location in the sub-page of data in the read cache.
- 50. A method as recited in claim 46 further comprising:
pre-fetching a plurality of pages of data into the read cache in response to detecting the read cache hit at the predetermined location in the sub-page of data in the read cache.
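Claim 47 defines the primed sub-page as the data from the requested address up to the next page boundary in the host address space, i.e., the remainder of the current page. The boundary arithmetic can be sketched as follows, assuming an illustrative 4 KiB page size (the claims do not fix a page size):

```python
PAGE = 4096     # assumed page size in the host address space (illustrative)

def sub_page_prefetch_range(addr):
    """Per claim 47: return the (start, length) of a sub-page pre-fetch
    covering `addr` up to the next page boundary in the host address space."""
    next_boundary = (addr // PAGE + 1) * PAGE
    return addr, next_boundary - addr
```

Aligning the sub-page to the page boundary this way means that, after the transition of claim 46, the subsequent page-size (or multi-page, per claim 50) pre-fetches start cleanly on page boundaries.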
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application contains subject matter related to the following co-pending applications: “Method of Cache Collision Avoidance in the Presence of a Periodic Cache Aging Algorithm,” identified by HP Docket Number 200208024-1; “Method of Adaptive Read Cache Pre-Fetching to Increase Host Read Throughput,” identified by HP Docket Number 200207351-1; “Method of Adaptive Cache Partitioning to Increase Host I/O Performance,” identified by HP Docket Number 200207897-1; and “Method of Detecting Sequential Workloads to Increase Host Read Throughput,” identified by HP Docket Number 100204483-1. The foregoing applications are assigned to the same assignee as this application, were filed on even date herewith, and are incorporated herein by reference.