Partial way hint line replacement algorithm for a snoop filter

Information

  • Patent Application
  • Publication Number
    20070233966
  • Date Filed
    December 14, 2006
  • Date Published
    October 04, 2007
Abstract
In an embodiment, a method is provided. The method of this embodiment includes receiving a request for data from a processor of a plurality of processors, determining a cache entry location based, at least in part, on the request, storing the data in a cache corresponding to the processor at the cache entry location, and, if there is a cache miss, storing a coherency record corresponding to the data in a snoop filter in accordance with one of the following: at the cache entry location of a corresponding affinity in the snoop filter if the cache entry location is found in the corresponding affinity, or at a derived cache entry location of the corresponding affinity if the cache entry location is not found in the corresponding affinity.
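The placement policy summarized above can be sketched in a few lines. This is an illustrative model only, not the patent's implementation: the names (`place_record`, `affinity`, `way_hint`) are invented for the example, the affinity is modeled as a sets-by-ways grid of tags, and the derived location uses the hinted way modulo the affinity's way count, one of the derivations the claims suggest (a randomly selected way is another).

```python
import random

def place_record(affinity, tag, set_idx, way_hint, rng=random):
    """Place a coherency record in an affinity (a list of sets,
    each a list holding one tag, or None, per way).

    If the cache's hinted way exists in the affinity, use it directly;
    otherwise derive a way from the hint. Returns (way_used, evicted_tag);
    a non-None evicted_tag would trigger back-invalidation.
    """
    ways = affinity[set_idx]
    num_ways = len(ways)
    if way_hint < num_ways:
        way = way_hint                # hinted location exists in the affinity
    else:
        way = way_hint % num_ways     # derived location (modulo-style derivation)
        # alternative derivation: way = rng.randrange(num_ways)  # random way
    evicted = ways[way]               # current occupant, if any, is displaced
    ways[way] = tag
    return way, evicted
```

For example, with a 4-way affinity, a hint of way 6 from an 8-way cache lands at way 6 mod 4 = 2, while a hint of way 1 is used as-is.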
Description

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that different references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean at least one.



FIG. 1 is a diagram of one embodiment of a system including a way hint snoop filter.

FIG. 2 is a diagram of one embodiment of a way hint snoop filter.

FIG. 3A is a diagram of one embodiment of an affinity in a way hint snoop filter.

FIG. 3B is a diagram of one embodiment of a cache entry in the way hint snoop filter.

FIG. 4 is a flow chart of one embodiment of a process for cache management based on way hints.

FIG. 5A is a diagram of one example of a cache management process.

FIG. 5B is a diagram of one example of a cache management process.

FIG. 5C is a diagram of one example of a cache management process.

FIG. 5D is a diagram of one example of a cache management process.

FIG. 6 is a diagram of an example of a cache management process in accordance with another embodiment.

FIG. 7 is a flow chart illustrating a method in accordance with another embodiment as illustrated in FIG. 6.

FIGS. 8A-8D are diagrams illustrating a line placement process in accordance with one embodiment.


Claims
  • 1. A method comprising: receiving a request for data from a processor of a plurality of processors; determining a cache entry location based, at least in part, on the request; storing the data in a cache corresponding to the processor at the cache entry location; and storing a coherency record corresponding to the data in a snoop filter in accordance with one of the following, if there is a cache miss: at the cache entry location of a corresponding affinity in the snoop filter if the cache entry location is found in the corresponding affinity; and at a derived cache entry location of the corresponding affinity if the cache entry location is not found in the corresponding affinity.
  • 2. The method of claim 1, wherein the cache entry location comprises a set and a way.
  • 3. The method of claim 2, wherein said storing the coherency record at a derived cache entry location of the affinity corresponding to the cache comprises storing the coherency record at a randomly selected way of the set in the affinity.
  • 4. The method of claim 2, wherein said storing the coherency record at a derived cache entry location of the affinity corresponding to the cache comprises calculating a way number.
  • 5. The method of claim 4, wherein said calculating a way number comprises calculating a way number based, at least in part, on the way number, and a number of ways of the affinity.
  • 6. The method of claim 1, additionally comprising, if the cache entry location is occupied by other data: evicting the other data; and sending a back invalidation message to each of the plurality of processors having a cache that includes the other data.
  • 7. The method of claim 6, wherein said evicting the other data comprises storing the other data in a back invalidation buffer.
  • 8. An apparatus comprising: a snoop filter operable to: receive a request for data from a processor of the plurality of processors; determine a cache entry location based, at least in part, on the request; store the data in a cache corresponding to the processor at the cache entry location; and store a coherency record corresponding to the data in the snoop filter in accordance with one of the following, if there is a cache miss: at the cache entry location of a corresponding affinity in the snoop filter if the cache entry location is found in the corresponding affinity; and at a derived cache entry location of the corresponding affinity if the cache entry location is not found in the corresponding affinity.
  • 9. The apparatus of claim 8, wherein the cache entry location comprises a set and a way.
  • 10. The apparatus of claim 9, wherein the snoop filter stores the coherency record at a derived cache entry location of the affinity corresponding to the cache by storing the coherency record at a randomly selected way of the set in the affinity.
  • 11. The apparatus of claim 9, wherein the snoop filter stores the coherency record at a derived cache entry location of the affinity corresponding to the cache by calculating a way number.
  • 12. The apparatus of claim 11, wherein said calculating a way number comprises calculating a way number based, at least in part, on the way number, and a number of ways of the affinity.
  • 13. The apparatus of claim 8, the snoop filter additionally operable to perform the following if the cache entry location is occupied by other data: evict the other data; and send a back invalidation message to each of the plurality of processors having a cache that includes the other data.
  • 14. A system comprising: an SRAM (static random access memory); a plurality of processors coupled to the SRAM; and a chipset coupled to the plurality of processors, the chipset including a snoop filter operable to access data from the SRAM and to: receive a request for data from a processor of the plurality of processors; determine a cache entry location based, at least in part, on the request; store the data in a cache corresponding to the processor at the cache entry location; and store a coherency record corresponding to the data in the snoop filter in accordance with one of the following, if there is a cache miss: at the cache entry location of a corresponding affinity in the snoop filter if the cache entry location is found in the corresponding affinity; and at a derived cache entry location of the corresponding affinity if the cache entry location is not found in the corresponding affinity.
  • 15. The system of claim 14, wherein the cache entry location comprises a set and a way.
  • 16. The system of claim 15, wherein the snoop filter stores the coherency record at a derived cache entry location of the affinity corresponding to the cache by calculating a way number.
  • 17. The system of claim 15, the snoop filter additionally operable to perform the following if the cache entry location is occupied by other data: evict the other data; and send a back invalidation message to each of the plurality of processors having a cache that includes the other data.
  • 18. An article of manufacture having stored thereon instructions, the instructions, when executed by a machine, resulting in the following: receiving a request for data from a processor of a plurality of processors; determining a cache entry location based, at least in part, on the request; storing the data in a cache corresponding to the processor at the cache entry location; and storing a coherency record corresponding to the data in a snoop filter in accordance with one of the following, if there is a cache miss: at the cache entry location of a corresponding affinity in the snoop filter if the cache entry location is found in the corresponding affinity; and at a derived cache entry location of the corresponding affinity if the cache entry location is not found in the corresponding affinity.
  • 19. The article of claim 18, wherein the cache entry location comprises a set and a way.
  • 20. The article of claim 19, wherein said instructions that result in storing the coherency record at a derived cache entry location of the affinity corresponding to the cache comprise instructions that result in calculating a way number.
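Claims 6 and 7 describe what happens when the chosen entry is already occupied: the displaced line is evicted, held in a back invalidation buffer, and a back-invalidation message is sent to every processor whose cache includes it. A minimal sketch of that step follows; all names here (`evict_with_back_invalidate`, `presence`, `send`) are invented for illustration, with the sharing state modeled as a mapping from line tag to the set of processor ids caching it.

```python
from collections import deque

def evict_with_back_invalidate(tag, presence, buffer, send):
    """Evict the line identified by 'tag', per claims 6-7 (sketch).

    presence: dict mapping line tag -> set of processor ids caching it.
    buffer:   deque serving as the back invalidation buffer (claim 7).
    send:     callable(processor_id, tag) delivering a back-invalidation
              message (claim 6).
    """
    buffer.append(tag)                    # hold the evicted line while messages drain
    for cpu in sorted(presence.get(tag, ())):
        send(cpu, tag)                    # notify each processor caching the line
```

In a real chipset the buffer would let the snoop filter reuse the entry immediately while the invalidations propagate; here it is just a queue.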
Continuation in Parts (1)
         Number    Date      Country
Parent   11395123  Mar 2006  US
Child    11639118            US