Claims
- 1. An apparatus, comprising:
a target module arranged to communicate with a client requesting access to a storage;
a host module arranged to communicate with the storage;
a cache arranged to selectively store data flowing through the apparatus; and
a control module arranged to determine whether to store data flowing between the client and the storage in the cache and to pipe the data directly between the host module and the target module if the data is not to be cached.
- 2. The apparatus of claim 1, wherein at least one of the modules comprises a software component.
- 3. The apparatus of claim 1, wherein at least one of the modules comprises a hardware module.
- 4. The apparatus of claim 1, wherein the client is a process or component residing on the apparatus.
- 5. The apparatus of claim 1, wherein the client is a process or component residing on another apparatus.
- 6. The apparatus of claim 1, wherein the storage spans a portion or all of one or more physical storage media.
- 7. The apparatus of claim 6, wherein the physical storage media resides on the apparatus.
- 8. The apparatus of claim 6, wherein the physical storage media resides on one or more devices other than the apparatus.
- 9. The apparatus of claim 1, wherein the control module uses cache statistics to determine whether to store the data in the cache, the cache statistics including hit/miss ratio, clean/dirty ratio, remaining clean lines, rate of cache fill, and data access frequency.
- 10. The apparatus of claim 1, wherein the cache mirrors at least a portion of a cache on a redundant apparatus.
- 11. The apparatus of claim 1, wherein the control module is further arranged to access policies including preferred data transfer sizes, mappings of caches to storage or storages, time data can remain in a cache before being flushed, cache line sizes, and cache sizes.
- 12. The apparatus of claim 1, wherein the control module is further arranged to determine a pattern of the data and wherein determining whether to store the data is based on the pattern, a state of the cache, and a policy.
- 13. The apparatus of claim 1, wherein the control module comprises:
a flush manager arranged to manage the pace and details of flushing the cache;
a system statistics manager arranged to collect information associated with the data flowing between the client and the storage, the information including cache hits, misses, and utilization, how much of the data is stored in the cache, how much of the data is piped directly between the host module and the target module, sizes of reads and writes associated with the data, and how sequential the data is.
- 14. The apparatus of claim 13, wherein the control module further comprises:
a fail-over manager arranged to communicate with a redundant apparatus to find a path between the client and the storage when the apparatus loses one or more paths to the storage or client or to have the redundant apparatus take over duties of the apparatus should the fail-over manager partially or completely fail; and
an auto-configuration manager arranged to configure the cache, determine whether a redundant apparatus exists, and configure connections to the client and storage including configuring the target and host modules.
- 15. A computer-readable medium having computer-executable instructions, comprising:
receiving a request to access a storage, the request associated with data to be stored on or retrieved from the storage;
determining whether to cache the data in a cache;
if the data is to be cached, caching the data in the cache; and
if the data is not to be cached, bypassing the cache.
- 16. The computer-readable medium of claim 15, wherein determining whether to cache the data in a cache comprises determining whether a failure is pending and if so, bypassing the cache.
- 17. The computer-readable medium of claim 15, wherein the storage is associated with a policy that indicates that data associated with the storage should not be cached and wherein determining whether to cache the data in the cache applies the policy.
- 18. The computer-readable medium of claim 15, wherein determining whether to cache the data in a cache comprises determining that the request is a request to store data on the storage and further comprising caching the data together with other data associated with other requests until a selectable size of accumulated data has been cached and then writing the accumulated data to the storage.
- 19. The computer-readable medium of claim 15, wherein determining whether to cache the data comprises applying a policy that has different effects depending on whether the request is to store data on or retrieve data from the storage.
- 20. The computer-readable medium of claim 15, wherein a policy indicates that there should be no write-back caching and wherein determining whether to cache the data in a cache applies the policy, such that the cache is bypassed on requests to write to the storage.
- 21. The computer-readable medium of claim 15, wherein determining whether to cache the data comprises applying a policy that indicates that the cache should be bypassed for reads or writes exceeding a certain size.
- 22. The computer-readable medium of claim 15, wherein determining whether to cache the data is based on a stress associated with the cache.
- 23. The computer-readable medium of claim 22, wherein the cache includes a number of dirty lines and the stress is calculated based on whether the number of dirty lines in the cache exceeds a threshold, whether a flush is required to cache the data, and whether flushes from the cache are keeping up with writes to the cache.
- 24. The computer-readable medium of claim 15, wherein determining whether to cache the data is based on whether the cache is recovering after a failure.
- 25. The computer-readable medium of claim 15, further comprising storing the cache on a local non-volatile memory in anticipation of a system failure.
- 26. The computer-readable medium of claim 15, further comprising collecting information including hit/miss performance, dirty lines in the cache, stress conditions of the cache, space available in the cache to store any data, utilization of the cache, sizes of reads and writes, how sequential or non-sequential requested data is, information associated with an operating system on a system upon which the cache resides, and information associated with hardware of the system upon which the cache resides.
- 27. The computer-readable medium of claim 26, wherein the cache has an allocated number of clean and dirty lines and further comprising dynamically adjusting the number of clean and dirty lines based on a ratio of reads versus writes to the storage.
- 28. The computer-readable medium of claim 15, further comprising flushing the cache when a bandwidth utilized to the storage is below a threshold.
- 29. A computer-readable medium having computer-executable components, comprising:
a target component arranged to receive a request to access a storage;
a host component arranged to communicate with the storage;
a control component arranged to determine whether to store data associated with the request in a cache and to pipe the data directly between the host component and the target component if the data is not to be cached.
- 30. An apparatus, comprising:
a target module arranged to communicate with a client requesting access to a storage;
a host module arranged to communicate with the storage;
a cache arranged to selectively store data flowing through the apparatus; and
means for determining whether to store data flowing between the client and the storage in the cache and to pipe the data directly between the host module and the target module if the data is not to be cached.
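The cache-bypass decision recited in claims 15 and 20 through 23 can be illustrated with a minimal sketch. This is not the patented implementation; all class names, fields, and thresholds below are hypothetical, chosen only to show how a policy (no write-back caching, size limits) and a stress test (dirty-line threshold, required flushes, flush backlog) could combine into a single cache-or-bypass decision.

```python
# Hypothetical sketch of a cache-or-bypass decision in the spirit of
# claims 15 and 20-23. Names and thresholds are illustrative, not from
# the patent text.

from dataclasses import dataclass


@dataclass
class CacheState:
    dirty_lines: int        # current number of dirty lines
    dirty_threshold: int    # dirty-line limit before the cache is "stressed"
    flush_backlog: int      # writes waiting because flushes lag behind
    needs_flush: bool       # a flush is required before new data can be cached


@dataclass
class Policy:
    write_back: bool        # False => bypass the cache on writes (cf. claim 20)
    max_cached_io: int      # bypass reads/writes larger than this (cf. claim 21)


def is_stressed(state: CacheState) -> bool:
    """Stress in the sense of claim 23: too many dirty lines, a flush
    required to admit the data, or flushes not keeping up with writes."""
    return (state.dirty_lines > state.dirty_threshold
            or state.needs_flush
            or state.flush_backlog > 0)


def should_cache(is_write: bool, size: int,
                 state: CacheState, policy: Policy) -> bool:
    """Return True to cache the request, False to pipe it directly
    between the host and target modules."""
    if is_write and not policy.write_back:
        return False        # policy forbids write-back caching
    if size > policy.max_cached_io:
        return False        # large I/O bypasses the cache
    if is_stressed(state):
        return False        # under stress, pipe data straight through
    return True


state = CacheState(dirty_lines=10, dirty_threshold=64,
                   flush_backlog=0, needs_flush=False)
policy = Policy(write_back=True, max_cached_io=1 << 20)

print(should_cache(is_write=True, size=4096, state=state, policy=policy))     # True
print(should_cache(is_write=False, size=2 << 20, state=state, policy=policy))  # False
```

The key design point the claims emphasize is that bypassing is an explicit outcome, not a cache miss: when the decision is False, the data is piped directly between the host and target modules and never occupies cache lines.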
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of U.S. Provisional Application No. 60/431,531, filed Dec. 9, 2002, entitled METHOD AND APPARATUS FOR DATA-AWARE DATA FLOW MANAGEMENT IN NETWORKED STORAGE SYSTEMS, which application is incorporated herein in its entirety.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60431531 | Dec 2002 | US |