Claims
- 1. A method for reconfiguring a cache comprising the steps of:
creating a cache with one or more stacks of cache entries; receiving a new request; tracking a workload comprising a stream of requests including said new request; and reconfiguring said cache based on said tracking of said workload.
- 2. The method as recited in claim 1, wherein said step of reconfiguring comprises a step of determining whether said new request in said workload resulted in a cache hit or a cache miss.
- 3. The method as recited in claim 2, wherein if said cache hit occurred then the method further comprises the step of:
updating a frequency count associated with a cache entry requested in a first stack.
- 4. The method as recited in claim 3 further comprising the step of:
determining whether said updated frequency count increases in number to a frequency count associated with a next higher level stack.
- 5. The method as recited in claim 4, wherein if said updated frequency count increases in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said next higher level stack, wherein said first stack is reduced in size by one entry.
- 6. The method as recited in claim 4, wherein if said updated frequency count does not increase in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said first stack.
- 7. The method as recited in claim 2, wherein if said cache miss occurred then the method further comprises the step of:
tracking one or more stack positions in each particular stack of said one or more stacks during a particular period of time for cache hits.
- 8. The method as recited in claim 7 further comprising the step of:
counting a number of said cache hits in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
- 9. The method as recited in claim 8 further comprising the step of:
adding said number of said cache hits counted in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
- 10. The method as recited in claim 9, wherein said particular period of time is comprised of one or more windows of time.
- 11. The method as recited in claim 10, wherein each of said one or more windows of said particular period of time is assigned a particular weight.
- 12. The method as recited in claim 11, wherein said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time is adjusted according to said weight assigned for each of said one or more windows of time.
- 13. The method as recited in claim 9 further comprising the step of:
comparing said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks with one another.
- 14. The method as recited in claim 13, wherein a first stack of said one or more stacks associated with a lowest number of hit counts in said one or more stack positions tracked is decreased in size by one entry.
- 15. The method as recited in claim 14, wherein said first stack of said one or more stacks associated with said lowest number of hit counts in said one or more stack positions tracked is decreased in size by evicting a cache entry in a least recently used stack position in said first stack.
- 16. The method as recited in claim 15, wherein one of said one or more stacks is increased in size by one entry to store information associated with a cache miss, wherein said information associated with said cache miss is stored in a most recently used stack position in said one of said one or more stacks.
- 17. The method as recited in claim 16, wherein said one of said one or more stacks is a lowest level stack of said one or more stacks.
- 18. The method as recited in claim 1 further comprising the step of:
determining whether there are more new requests.
- 19. The method as recited in claim 1, wherein said stream of requests forming said workload comprises requests to access a disk.
- 20. The method as recited in claim 1, wherein said stream of requests forming said workload is issued from one or more clients in a network system to a network server, wherein said cache is associated with said network server.
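For readers tracing the hit-handling steps of claims 2 through 6, the following is a minimal sketch, not the patented implementation, of how an entry's frequency count could promote it from one stack to the next higher level stack. The number of stacks, the FREQ_THRESHOLDS values, and all names are assumptions introduced solely for illustration.

```python
from collections import OrderedDict

# Illustrative frequency thresholds: an entry belongs in the highest stack whose
# threshold its frequency count has reached (these values are assumptions).
FREQ_THRESHOLDS = [1, 4, 16]                         # stack 0, stack 1, stack 2
stacks = [OrderedDict() for _ in FREQ_THRESHOLDS]    # key -> frequency count, MRU at the end


def on_hit(level, key):
    """Handle a cache hit on `key` found in stack `level` (claims 3-6)."""
    freq = stacks[level].pop(key) + 1                # update the entry's frequency count
    nxt = level + 1
    if nxt < len(stacks) and freq >= FREQ_THRESHOLDS[nxt]:
        # The count reached the next higher level's threshold: store the entry at the
        # MRU position of the next higher stack; the originating stack shrinks by one.
        stacks[nxt][key] = freq
    else:
        # Otherwise the entry is re-inserted at the MRU position of its current stack.
        stacks[level][key] = freq


# Tiny demonstration: repeated hits promote a block from stack 0 to stack 1.
stacks[0]["block-A"] = 1
for _ in range(3):
    on_hit(next(i for i, s in enumerate(stacks) if "block-A" in s), "block-A")
print(["block-A" in s for s in stacks])   # [False, True, False]
```

The OrderedDict per stack keeps recency order for free: re-inserting a key moves it to the MRU end, which matches the "most recently used stack position" language of the claims.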
- 21. A computer program product having a computer readable medium having computer program logic recorded thereon for reconfiguring a cache, comprising:
programming operable for creating a cache with one or more stacks of cache entries; programming operable for receiving a new request; programming operable for tracking a workload comprising a stream of requests including said new request; and programming operable for reconfiguring said cache based on said tracking of said workload.
- 22. The computer program product as recited in claim 21, wherein instructions for performing said step of reconfiguring comprise instructions for performing a determination as to whether said new request in said workload resulted in a cache hit or a cache miss.
- 23. The computer program product as recited in claim 22, wherein if said cache hit occurred then the computer program product further comprises:
programming operable for updating a frequency count associated with a cache entry requested in a first stack.
- 24. The computer program product as recited in claim 23 further comprises:
programming operable for determining whether said updated frequency count increases in number to a frequency count associated with a next higher level stack.
- 25. The computer program product as recited in claim 24, wherein if said updated frequency count increases in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said next higher level stack, wherein said first stack is reduced in size by one entry.
- 26. The computer program product as recited in claim 24, wherein if said updated frequency count does not increase in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said first stack.
- 27. The computer program product as recited in claim 22, wherein if said cache miss occurred then the computer program product further comprises:
programming operable for tracking one or more stack positions in each particular stack of said one or more stacks during a particular period of time for cache hits.
- 28. The computer program product as recited in claim 27 further comprises:
programming operable for counting a number of said cache hits in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
- 29. The computer program product as recited in claim 28 further comprises:
programming operable for adding said number of said cache hits counted in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
- 30. The computer program product as recited in claim 29, wherein said particular period of time is comprised of one or more windows of time.
- 31. The computer program product as recited in claim 30, wherein each of said one or more windows of said particular period of time is assigned a particular weight.
- 32. The computer program product as recited in claim 31, wherein said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time is adjusted according to said weight assigned for each of said one or more windows of time.
- 33. The computer program product as recited in claim 29 further comprises:
programming operable for comparing said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks with one another.
- 34. The computer program product as recited in claim 33, wherein a first stack of said one or more stacks associated with a lowest number of hit counts in said one or more stack positions tracked is decreased in size by one entry.
- 35. The computer program product as recited in claim 34, wherein said first stack of said one or more stacks associated with said lowest number of hit counts in said one or more stack positions tracked is decreased in size by evicting a cache entry in a least recently used stack position in said first stack.
- 36. The computer program product as recited in claim 35, wherein one of said one or more stacks is increased in size by one entry to store information associated with a cache miss, wherein said information associated with said cache miss is stored in a most recently used stack position in said one of said one or more stacks.
- 37. The computer program product as recited in claim 36, wherein said one of said one or more stacks is a lowest level stack of said one or more stacks.
- 38. The computer program product as recited in claim 21 further comprises:
programming operable for determining whether there are more new requests.
- 39. The computer program product as recited in claim 21, wherein said stream of requests forming said workload comprises requests to access a disk.
- 40. The computer program product as recited in claim 21, wherein said stream of requests forming said workload is issued from one or more clients in a network system to a network server, wherein said cache is associated with said network server.
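Claims 7 through 17 (and the mirroring claims 27 through 37) describe the miss path: hits are counted at tracked stack positions over a period of time, the per-stack totals are compared, the stack with the fewest hits gives up its least recently used entry, and the lowest level stack grows to hold the missed item at its most recently used position. The sketch below is one hypothetical reading of those steps; the per-stack granularity of `hits_per_position`, the sample data, and all names are assumptions.

```python
from collections import OrderedDict

# Hypothetical state for three stacks: each maps key -> data, MRU at the end, plus a
# parallel table of hit counts observed at each tracked stack position during the
# current observation period (the tracking granularity here is an assumption).
stacks = [OrderedDict(a=1, b=2), OrderedDict(c=3, d=4), OrderedDict(e=5)]
hits_per_position = [[0, 1], [3, 2], [4]]            # hits counted per tracked position


def on_miss(key, value):
    """Handle a cache miss (claims 7-9 and 13-17): shrink the stack whose tracked
    positions received the fewest hits, then store the new entry at the MRU position
    of the lowest level stack."""
    totals = [sum(counts) for counts in hits_per_position]      # add per-position counts
    coldest = min((i for i, s in enumerate(stacks) if s), key=lambda i: totals[i])
    stacks[coldest].popitem(last=False)              # evict that stack's LRU entry
    stacks[0][key] = value                           # lowest level stack grows by one entry


on_miss("f", 6)
print([list(s) for s in stacks])   # [['b', 'f'], ['c', 'd'], ['e']]: 'a' evicted, 'f' at MRU of stack 0
```

In this reading, total cache size stays constant on a miss: one slot moves from the coldest stack to the lowest level stack, which is where the reconfiguration of claim 1 actually happens.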
- 41. A system comprising:
one or more clients; a server coupled to said one or more clients, wherein said server comprises:
a processor; a memory unit operable for storing a computer program operable for reconfiguring a cache; and a bus system coupling the processor to the memory, wherein the computer program is operable for performing the following programming steps:
creating a cache with one or more stacks of cache entries; receiving a new request; tracking a workload comprising a stream of requests including said new request; and reconfiguring said cache based on said tracking of said workload.
- 42. The system as recited in claim 41, wherein said step of reconfiguring comprises a step of determining whether said new request in said workload resulted in a cache hit or a cache miss.
- 43. The system as recited in claim 42, wherein if said cache hit occurred then the computer program is further operable to perform the programming step:
updating a frequency count associated with a cache entry requested in a first stack.
- 44. The system as recited in claim 43, wherein the computer program is further operable to perform the programming step:
determining whether said updated frequency count increases in number to a frequency count associated with a next higher level stack.
- 45. The system as recited in claim 44, wherein if said updated frequency count increases in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said next higher level stack, wherein said first stack is reduced in size by one entry.
- 46. The system as recited in claim 44, wherein if said updated frequency count does not increase in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said first stack.
- 47. The system as recited in claim 42, wherein if said cache miss occurred then the computer program is further operable to perform the programming step:
tracking one or more stack positions in each particular stack of said one or more stacks during a particular period of time for cache hits.
- 48. The system as recited in claim 47, wherein the computer program is further operable to perform the programming step:
counting a number of said cache hits in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
- 49. The system as recited in claim 48, wherein the computer program is further operable to perform the programming step:
adding said number of said cache hits counted in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
- 50. The system as recited in claim 49, wherein said particular period of time is comprised of one or more windows of time.
- 51. The system as recited in claim 50, wherein each of said one or more windows of said particular period of time is assigned a particular weight.
- 52. The system as recited in claim 51, wherein said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time is adjusted according to said weight assigned for each of said one or more windows of time.
- 53. The system as recited in claim 49, wherein the computer program is further operable to perform the programming step:
comparing said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks with one another.
- 54. The system as recited in claim 53, wherein a first stack of said one or more stacks associated with a lowest number of hit counts in said one or more stack positions tracked is decreased in size by one entry.
- 55. The system as recited in claim 54, wherein said first stack of said one or more stacks associated with said lowest number of hit counts in said one or more stack positions tracked is decreased in size by evicting a cache entry in a least recently used stack position in said first stack.
- 56. The system as recited in claim 55, wherein one of said one or more stacks is increased in size by one entry to store information associated with a cache miss, wherein said information associated with said cache miss is stored in a most recently used stack position in said one of said one or more stacks.
- 57. The system as recited in claim 56, wherein said one of said one or more stacks is a lowest level stack of said one or more stacks.
- 58. The system as recited in claim 41, wherein the computer program is further operable to perform the programming step:
determining whether there are more new requests.
- 59. The system as recited in claim 41, wherein said stream of requests forming said workload comprises requests to access a disk.
- 60. The system as recited in claim 41, wherein said stream of requests forming said workload is issued from said one or more clients to said server.
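Claims 10 through 12, 30 through 32, and 50 through 52 weight the hit counts from each window of the observation period before they are added and compared across stacks. The short calculation below illustrates that adjustment; the window weights, sample counts, and function name are assumptions made for the example.

```python
def weighted_hit_total(window_counts, window_weights):
    """Adjust per-window hit counts by each window's weight and add them up.

    window_counts: per-window lists of hits at the tracked positions of one stack.
    window_weights: one weight per window (assumed here to favour recent windows)."""
    return sum(w * sum(hits) for w, hits in zip(window_weights, window_counts))


# Example with two stacks observed over three windows; the most recent window is last
# and carries the largest weight.
weights = [0.25, 0.5, 1.0]
stack_a_windows = [[3, 1], [2, 0], [1, 0]]
stack_b_windows = [[0, 1], [4, 2], [5, 3]]

totals = {"A": weighted_hit_total(stack_a_windows, weights),
          "B": weighted_hit_total(stack_b_windows, weights)}
print(totals)                       # {'A': 3.0, 'B': 11.25}
print(min(totals, key=totals.get))  # 'A' would be the stack decreased in size
```

Weighting the windows lets recent behavior dominate the comparison, so a stack that was hot early in the period but has gone cold can still be the one selected for shrinking.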
CROSS REFERENCE TO RELATED APPLICATION
[0001] The present invention is related to the following U.S. patent application which is incorporated herein by reference:
[0002] Ser. No. ______ (Attorney Docket No. RSP920010001U.S.1) entitled “Designing a Cache Using a Canonical LRU-LFU Array” filed ______.