Claims
- 1. A method for increasing performance of a computing system, the method comprising:
analyzing activity in the system; and optimizing a hit ratio in at least one non-L1 cache based on analyzed activity in the system.
- 2. The method of claim 1, wherein analyzing activity in the system comprises monitoring system activity and determining knowledge associated with the monitored system activity.
- 3. The method of claim 2, wherein the determined knowledge associated with the monitored system activity further comprises recognizing patterns of monitored system activity.
- 4. The method of claim 3, wherein monitoring system activity further comprises prefetching data based on the patterns of monitored system activity in the non-L1 cache.
- 5. The method of claim 4, wherein storing data based on the patterns of monitored system activity further comprises preventing the data stored in the non-L1 cache based on the patterns of monitored system activity from being cast out.
- 6. The method of claim 5, further comprising cleaning dirty cache lines during idle time and/or prior to being accessed by external devices.
- 7. The method of claim 5, wherein preventing data stored in the non-L1 cache based on the patterns of monitored system activity from being cast out further comprises setting bits in cache tags corresponding to cache lines containing the data.
- 8. The method of claim 2, wherein monitoring system activity further comprises monitoring cache lines fetched and cache lines cast out throughout a cache hierarchy.
- 9. The method of claim 2, wherein the determining knowledge associated with the monitored system activity further comprises:
recognizing patterns of data requests; and based on the patterns recognized, determining cache lines to fix into the non-L1 cache.
- 10. The method of claim 9, further comprising, after determining cache lines to fix into the non-L1 cache, setting registers to fix the cache lines therein.
- 11. The method of claim 2, wherein the determining knowledge associated with the monitored system activity further comprises:
recognizing patterns of data requests; and based on the patterns recognized, determining cache lines to prefetch into the non-L1 cache.
- 12. The method of claim 2, wherein the determining knowledge associated with the monitored system activity further comprises:
recognizing patterns of data requests; determining an importance of a cache line based upon a recognized pattern of requests for the cache line; and using the importance to determine prefetching, cache line cast out, or cache line invalidation.
- 13. The method of claim 2, wherein the determining knowledge associated with the monitored system activity further comprises:
identifying data having a predetermined characteristic; invalidating the data having the predetermined characteristic; and replacing invalidated data with prefetch data.
- 14. The method of claim 2, further comprising:
storing a memory map of a cache hierarchy in a cache; and accessing the memory map to decrease a response time in locating requested data.
- 15. The method of claim 2, wherein the determining knowledge associated with the monitored system activity further comprises:
using timers and applying timestamps to data requests; and using the timestamps in an algorithm to determine when stored data is to be cast out.
- 16. The method of claim 2, wherein the determining knowledge associated with the monitored system activity comprises determining knowledge concerning a program code being processed.
- 17. The method of claim 2, wherein the determining knowledge associated with the monitored system activity comprises determining knowledge of data accesses throughout a cache hierarchy.
- 18. A computing system, comprising:
at least one processor; memory comprising a cache hierarchy containing at least one non-L1 cache; and a caching assistant, the caching assistant comprising:
a separate processor core, wherein the separate processor core analyzes activity in the system; and a memory, wherein the caching assistant increases performance of the at least one processor by optimizing a hit ratio in the at least one non-L1 cache based on the analyzed system activity.
- 19. A caching assistant, comprising:
a cache controller, the cache controller analyzing activity in a computing system and accessing data from a cache hierarchy; and a dedicated cache, wherein the cache controller increases performance of a processor by optimizing a hit ratio in at least one non-L1 cache based on the analyzed system activity.
- 20. The caching assistant of claim 19, wherein the cache controller anticipates future requests and the cache controller is disposed logically and physically proximal to the processor.
- 21. The caching assistant of claim 20, wherein the processor, the cache controller and the dedicated cache are embedded in a processing system.
- 22. The caching assistant of claim 21, wherein the cache controller monitors system activity and determines knowledge associated with the monitored system activity.
- 23. The caching assistant of claim 22, wherein the cache controller determines knowledge associated with the monitored system activity by recognizing patterns of monitored system activity.
- 24. The caching assistant of claim 23, wherein the cache controller stores data based on the patterns of monitored system activity in at least one non-L1 cache embedded in the processing system.
- 25. The caching assistant of claim 24, wherein the cache controller prevents data stored based on the patterns of monitored system activity from being cast out.
- 26. The caching assistant of claim 25, wherein the cache controller cleans dirty cache lines during idle time and/or prior to being accessed by external devices.
- 27. The caching assistant of claim 19, wherein the cache controller monitors system activity by monitoring cache lines fetched and cache lines cast out throughout the cache hierarchy.
- 28. The caching assistant of claim 27, wherein the cache controller determines knowledge by recognizing patterns of monitored system activity and based on the patterns of monitored system activity recognized, determines cache lines to prefetch into a non-L1 cache and cache lines to fix into a non-L1 cache.
- 29. The caching assistant of claim 28, wherein the cache controller determines cache lines to prefetch into the non-L1 cache, determines cache lines to fix into the non-L1 cache, sets registers in the non-L1 cache to fix prefetched cache lines therein, and sets registers in the non-L1 cache to fix cache lines determined to be important therein.
- 30. The caching assistant of claim 19, wherein the cache controller recognizes patterns of monitored system activity, determines an importance of a cache line based upon a recognized pattern of requests for the cache line, and uses the importance to determine prefetching, cache line cast out, or cache line invalidation.
- 31. The caching assistant of claim 19, wherein the cache controller identifies data having a predetermined characteristic, invalidates the data having the predetermined characteristic, and replaces invalidated data with prefetch data.
- 32. The caching assistant of claim 19, wherein the cache controller stores a memory map of the cache hierarchy in a cache and accesses the memory map to decrease a response time in locating requested data.
- 33. The caching assistant of claim 19, wherein the cache controller uses timers and applies timestamps to data requests and uses the timestamps in an algorithm to determine when stored data is to be cast out.
- 34. The caching assistant of claim 19, wherein the cache controller monitors system activity and determines knowledge concerning a program code being processed.
- 35. The caching assistant of claim 19, wherein the cache controller monitors system activity and determines knowledge of data accesses throughout the cache hierarchy.
- 36. A processing system comprising:
means for analyzing activity in the system; and means for optimizing a hit ratio in at least one non-L1 cache based on analyzed activity in the system.
- 37. An article of manufacture comprising a program storage medium readable by a computer, the medium tangibly embodying one or more programs of instructions executable by the computer to perform a method for increasing processor performance in a computing system, the method comprising:
analyzing activity in the system; and optimizing a hit ratio in at least one non-L1 cache based on analyzed activity in the system.
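For illustration only (not part of the claims), the core techniques recited above — monitoring system activity, recognizing a pattern of data requests (claims 2, 3, 9, 11), prefetching predicted cache lines and fixing them in a non-L1 cache via a bit in the line's state (claims 4, 7, 10), and using timestamps to select which unfixed line to cast out (claims 5, 15) — can be sketched as a toy model. All class, method, and field names here are illustrative assumptions, and the single-stride pattern recognizer stands in for whatever recognition logic an implementation would use.

```python
class CachingAssistant:
    """Toy model of the claimed caching assistant for a non-L1 cache:
    it monitors requests, recognizes a constant-stride pattern, prefetches
    the predicted next line, "fixes" (pins) pattern lines via a bit kept
    with the line, and casts out the oldest unfixed line by timestamp.
    Names and parameters are illustrative, not taken from the claims."""

    def __init__(self, capacity=4, history=3):
        self.capacity = capacity
        self.history = history   # accesses examined for pattern recognition
        self.cache = {}          # addr -> {"pinned": bool, "ts": int}
        self.trace = []          # monitored system activity
        self.clock = 0           # logical timer used for timestamps

    def _detect_stride(self):
        """Recognize a pattern of data requests (claims 3, 9, 11)."""
        if len(self.trace) < self.history:
            return None
        tail = self.trace[-self.history:]
        strides = {b - a for a, b in zip(tail, tail[1:])}
        return strides.pop() if len(strides) == 1 else None

    def _install(self, addr, pinned=False):
        """Place a line in the cache, casting out by timestamp if full."""
        if addr in self.cache:
            self.cache[addr]["pinned"] |= pinned  # fixing bit is sticky
            return
        if len(self.cache) >= self.capacity:
            # cast out the oldest line that is not fixed (claims 5, 15)
            victims = [a for a, l in self.cache.items() if not l["pinned"]]
            if not victims:
                return  # every line is fixed; skip installation
            del self.cache[min(victims, key=lambda a: self.cache[a]["ts"])]
        self.cache[addr] = {"pinned": pinned, "ts": self.clock}

    def access(self, addr):
        """Handle one request; return True on a hit in the non-L1 cache."""
        self.clock += 1                 # apply a timestamp (claim 15)
        self.trace.append(addr)         # monitor system activity (claim 2)
        hit = addr in self.cache
        self._install(addr)
        stride = self._detect_stride()
        if stride:                      # prefetch and fix the predicted line
            self._install(addr + stride, pinned=True)
        return hit
```

Walking a stride-8 sequence through this sketch shows the claimed effect: after three requests establish the pattern, the fourth request hits because the assistant prefetched and fixed the predicted line ahead of time.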
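Similarly, the "memory map of the cache hierarchy" of claims 14 and 32 can be pictured as a small directory the assistant maintains alongside the caches: one lookup tells it which level holds a line, rather than probing each level in turn. This sketch is an assumption about one plausible realization; the level names and method names are invented for illustration.

```python
class HierarchyMap:
    """Toy memory map of a cache hierarchy (claims 14, 32): records which
    level currently holds each line so a request can be steered directly
    to that level, decreasing the response time in locating the data."""

    def __init__(self):
        self.location = {}  # addr -> hierarchy level holding the line

    def record_fill(self, addr, level):
        # update the map whenever a line is filled into a level
        self.location[addr] = level

    def record_castout(self, addr):
        # remove the entry when the line leaves the hierarchy
        self.location.pop(addr, None)

    def locate(self, addr):
        # one map lookup instead of probing L2, then L3, then memory
        return self.location.get(addr, "memory")
```

A line filled into L2 is found with a single lookup; once it is cast out, the map falls back to reporting main memory.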
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is related to the following co-pending and commonly-assigned U.S. Patent Application, which is hereby incorporated herein by reference in its entirety: “METHOD AND APPARATUS PROVIDING NON LEVEL ONE INFORMATION CACHING USING PREFETCH TO INCREASE A HIT RATIO” to Walls et al., having U.S. patent application Ser. No. xx/xxxxxx.