Claims
- 1. A processor-readable medium comprising processor-executable instructions configured for executing a method comprising:
receiving a request for data associated with a first address; and determining whether to pre-fetch data based at least in part on data in a cache memory corresponding to a second address, the second address being related to the first address.
- 2. A processor-readable medium as recited in claim 1, wherein the determining whether to pre-fetch data comprises:
determining whether the cache memory has data associated with the second address related to the first address; and if the cache memory has data associated with the second address, initiating a read pre-fetch operation wherein data associated with a third address related to the first address and second address is pre-fetched.
- 3. A processor-readable medium as recited in claim 1, wherein the second address is in a page that is sequential to a page having the first address.
- 4. A processor-readable medium as recited in claim 3, wherein the second address is an integer number of pages different from the first address.
- 5. A processor-readable medium as recited in claim 1, wherein the determining whether to pre-fetch data comprises:
subtracting a memory offset from the first address.
- 6. A processor-readable medium as recited in claim 5, wherein the memory offset is an integer multiple of a page of data.
- 7. A processor-readable medium as recited in claim 6, wherein the page of data has a page size, wherein the method further comprises:
dynamically adjusting the page size.
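The pre-fetch trigger of claims 1-7 can be sketched as follows. This is a minimal illustration, not the claimed implementation: the dictionary cache, the `InMemoryStorage` stub, and the fixed one-page offset are all assumptions made for the example.

```python
PAGE_SIZE = 4096  # hypothetical fixed page size; claims 7 and 12 allow it to be tuned

class InMemoryStorage:
    """Stand-in for the mass storage medium (illustration only)."""
    def read(self, address: int) -> str:
        return f"data@{address}"

def should_prefetch(cache: dict, first_address: int) -> bool:
    """Claim 5: subtract a memory offset (here one page) from the first
    address and check whether the cache holds data for that second address."""
    second_address = first_address - PAGE_SIZE
    return second_address in cache

def handle_read(cache: dict, storage, first_address: int) -> None:
    """Claim 2: if the second address is cached, initiate a read pre-fetch
    for a third address related to the first and second addresses."""
    if should_prefetch(cache, first_address):
        third_address = first_address + PAGE_SIZE
        cache[third_address] = storage.read(third_address)
```

The point of checking behind the requested address is that a cached predecessor page is evidence of a sequential stream already in progress, so fetching ahead is likely to pay off.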
- 8. A method of detecting a sequential workload in a storage device comprising:
receiving a first address; determining whether data associated with a number of addresses sequential to the first address has been cached; and if the data associated with the number of addresses has been cached, caching data associated with a second address sequentially related to the first address.
- 9. A method as recited in claim 8, further comprising:
dynamically adjusting the number of addresses.
- 10. A method as recited in claim 8, wherein each of the number of addresses is an integral number of pages offset from the first address, wherein a page has a size, the method further comprising:
dynamically adjusting the size of the page.
- 11. A method as recited in claim 8 further comprising:
generating an operational parameter value characteristic of system performance; and adjusting the number of addresses based on the operational parameter value.
- 12. A method as recited in claim 10, wherein the dynamically adjusting comprises:
determining a cache hit rate; and adjusting the size of the page based at least in part on the cache hit rate.
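One way to realize the dynamic page-size adjustment of claims 10-12 is a simple feedback rule driven by the measured hit rate. The thresholds, bounds, and doubling policy below are illustrative assumptions, not taken from the claims.

```python
def adjust_page_size(page_size: int, hits: int, lookups: int,
                     low: float = 0.3, high: float = 0.8,
                     min_size: int = 512, max_size: int = 65536) -> int:
    """Claim 12 sketch: adjust the page size based on the cache hit rate.
    A high hit rate suggests a sequential streak, so larger pages amortize
    seeks; a low hit rate suggests a random workload, so shrink pages."""
    if lookups == 0:
        return page_size          # no data yet; leave the size unchanged
    hit_rate = hits / lookups
    if hit_rate > high:
        return min(page_size * 2, max_size)
    if hit_rate < low:
        return max(page_size // 2, min_size)
    return page_size
```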
- 13. A method as recited in claim 8, wherein the determining whether data associated with a number of addresses comprises:
generating the number of cache indices, each of the cache indices associated with one of the number of addresses.
- 14. A method as recited in claim 13, wherein the generating the number of cache indices comprises:
generating the number of hash keys, each hash key associated with one of the number of addresses; hashing on the number of hash keys using a hash table; and determining whether a cache index exists in the hash table associated with each of the number of hash keys.
- 15. A method as recited in claim 13, wherein the generating the number of cache indices comprises:
accessing a skip list to generate the cache index.
- 16. A method as recited in claim 13, wherein the generating the number of cache indices comprises:
accessing a balanced tree structure to generate the cache index.
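Claims 13-16 map each candidate address to a cache index through a lookup structure. In the sketch below a plain dictionary stands in for the hash table of claim 14; a skip list or balanced tree (claims 15-16) would serve the same lookup role. Function names and the page-offset scheme are assumptions for illustration.

```python
def addresses_to_check(first_address: int, count: int, page_size: int) -> list:
    """Claims 10 and 13: generate `count` addresses, each an integral
    number of pages behind the first address."""
    return [first_address - i * page_size for i in range(1, count + 1)]

def sequential_detected(index_map: dict, first_address: int,
                        count: int, page_size: int) -> bool:
    """Claim 14 sketch: look each candidate address up in the mapping
    structure and report a sequential workload only when every one has a
    live cache index."""
    for addr in addresses_to_check(first_address, count, page_size):
        if addr not in index_map:  # no cache index exists for this address
            return False
    return True
```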
- 17. A storage device comprising:
a mass storage medium; a cache memory in operable communication with the mass storage medium; an input/output module operable to receive a request with a first address; and an address analysis module in operable communication with the input/output module and operable to detect a sequential host workload based on data in the cache memory.
- 18. A storage device as recited in claim 17, wherein the address analysis module is further operable to determine whether the cache memory has data associated with one or more addresses sequentially related to the first address.
- 19. A storage device as recited in claim 18, wherein the address analysis module is further operable to receive the first address and generate the one or more sequentially related addresses and generate a cache index associated with each of the one or more sequentially related addresses, each cache index referring to a location in the cache memory.
- 20. A storage device as recited in claim 18, further comprising a mapping module in operable communication with the address analysis module, wherein the mapping module is operable to receive a host address and generate a corresponding index into the cache memory.
- 21. A storage device as recited in claim 20, wherein the mapping module comprises:
a hash table; and a hashing module in operable communication with the hash table, operable to generate one or more hash keys, each of the one or more hash keys associated with the one or more sequentially related addresses, use the hash table to hash on each of the one or more hash keys, and determine whether the hash table has a cache index for each of the one or more hash keys.
- 22. A storage device as recited in claim 20, wherein the mapping module comprises a skip list.
- 23. A storage device as recited in claim 20, wherein the mapping module comprises a balanced tree structure.
- 24. A storage device comprising:
a mass storage medium; a cache in operable communication with the mass storage medium and having a write cache portion, wherein data in the write cache portion may be de-staged to the mass storage medium; an input/output module operable to receive a request having a first address; and means for detecting a sequential workload based at least in part on data in the cache.
- 25. A storage device as recited in claim 24, wherein the means for detecting comprises:
an address analysis module operable to determine whether the cache has data associated with one or more addresses that are sequentially related to the first address.
- 26. A storage device as recited in claim 24, wherein the storage device is an array of storage devices.
- 27. A storage device as recited in claim 26, wherein the array of storage devices comprises mass storage media comprising at least one of:
magnetic disks; tapes; optical disks; or solid state disks.
- 28. A storage device as recited in claim 24, wherein the means for detecting a sequential workload comprises a data structure having an address entry corresponding to a cache index, whereby the cache index is used to identify data in the cache that is sequentially related to the first address.
- 29. A method for detecting a sequential host workload in a storage device comprising:
receiving a request for data associated with a first address; identifying cached data that is sequential to the first address; and detecting a sequential host workload in response to the identifying cached data that is sequential to the first address.
- 30. A method as recited in claim 29 further comprising:
fetching data from a mass storage medium at one or more addresses related to the first address.
- 31. A method as recited in claim 29 further comprising:
fetching data from a mass storage medium at one or more addresses sequential to the first address; and storing the fetched data in a cache.
- 32. A method as recited in claim 29 further comprising:
fetching data from a mass storage medium at one or more addresses sequential to the first address; storing the fetched data in a cache; and subsequently satisfying a request for data using the stored data.
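The method of claims 29-32 can be drawn together end to end. This is a sketch under stated assumptions: the class structure, the `depth` of required cached neighbors, and the one-page-ahead fetch are all hypothetical choices, not the claimed design.

```python
class CountingStorage:
    """Stand-in for a mass storage medium that counts physical reads."""
    def __init__(self):
        self.reads = 0
    def read(self, address: int) -> str:
        self.reads += 1
        return f"data@{address}"

class SequentialDetector:
    """Claims 29-32 sketch: identify cached data sequential to a requested
    address, treat that as a sequential host workload, pre-fetch the next
    address, and satisfy later requests from the cache."""
    def __init__(self, storage, page_size: int = 4096, depth: int = 2):
        self.storage = storage
        self.page_size = page_size
        self.depth = depth    # cached neighbors required to declare sequential
        self.cache = {}

    def read(self, address: int) -> str:
        if address in self.cache:               # claim 32: serve from cache
            return self.cache[address]
        data = self.storage.read(address)
        self.cache[address] = data
        prior = [address - i * self.page_size for i in range(1, self.depth + 1)]
        if all(a in self.cache for a in prior): # claim 29: sequential detected
            nxt = address + self.page_size      # claim 31: fetch and store
            self.cache[nxt] = self.storage.read(nxt)
        return data
```

After three sequential page reads, the fourth page is already cached, so the host request completes without touching the storage medium.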
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application contains subject matter related to the following co-pending applications: “Method of Cache Collision Avoidance in the Presence of a Periodic Cache Aging Algorithm,” identified by HP Docket Number 200208024-1; “Method of Adaptive Read Cache Pre-Fetching to Increase Host Read Throughput,” identified by HP Docket Number 200207351-1; “Method of Adaptive Cache Partitioning to Increase Host I/O Performance,” identified by HP Docket Number 200207897-1; and “Method of Triggering Read Cache Pre-Fetch to Increase Host Read Throughput,” identified by HP Docket Number 200207344-1. The foregoing applications are incorporated by reference herein, assigned to the same assignee as this application, and filed on even date herewith.