Claims
- 1. A data processing system comprising:
a plurality of processors each having a respective cache capable of storing a plurality of cached lines; and a directory for keeping track of states of the cached lines; wherein, upon a new request for a cache line, an algorithm uses said states of the cached lines to allocate a cache directory entry for the requested cache line.
- 2. The system of claim 1 wherein the algorithm uses an entry having a “shared” state before using an entry having a “dirty” state.
- 3. The system of claim 2 wherein the algorithm chooses the least-recently-used entry.
- 4. The system of claim 1 wherein the algorithm uses a directory entry not currently in use.
- 5. The system of claim 1 wherein the algorithm chooses a directory entry representing a cached line that is valid in at least one of said processors.
- 6. The system of claim 1 wherein the algorithm chooses a directory entry representing a cached line that is dirty in one of said processors.
- 7. The system of claim 1 wherein, if the algorithm determines that all directory entries represent memory lines that are in transitional states, then the algorithm retries the request.
- 8. The system of claim 1 wherein said algorithm invalidates the cached line represented by said allocated cache directory entry.
- 9. A method for selecting a directory entry among a plurality of directory entries having state information, comprising the steps of:
using said state information to select said directory entry; and allowing a re-request of said directory entry if said plurality of directory entries represent cached lines in transitional states.
- 10. The method of claim 9 further comprising the step of selecting an entry having an “invalid” state if such an entry exists.
- 11. The method of claim 10 further comprising the step of selecting an entry having a “shared” state if such an entry exists.
- 12. The method of claim 11 wherein the step of selecting a shared entry uses a least-recently-used algorithm.
- 13. The method of claim 12 further comprising the step of selecting an entry having a dirty state if such an entry exists.
- 14. The method of claim 13 wherein the step of selecting a dirty entry uses a least-recently-used algorithm.
- 15. The method of claim 14 further comprising the step of invalidating the cached line represented by said selected directory entry.
- 16. A method for maintaining cache coherence for use in a data processing system including a plurality of processor nodes, each node having at least one processor with a cache, comprising the ordered steps of:
selecting one AVAILABLE entry from among the cache directory entries that are not being used, if said AVAILABLE entry is available; selecting one SHARED entry from among the cache directory entries representing a cached line shared by the at least one processor, if said SHARED entry is available; and selecting one DIRTY entry from among the cache directory entries representing a cached line which is dirty at one of the at least one processor, if said DIRTY entry is available.
- 17. The method of claim 16 wherein the step of selecting one SHARED entry uses a least-recently-used algorithm.
- 18. The method of claim 17 wherein the step of selecting one DIRTY entry uses said least-recently-used algorithm.
- 19. A cache coherence unit for use in a data processing system including multiple processor nodes, each node having at least one processor with an associated cache, comprising:
a bus interface for transferring data between said cache and a memory; a directory for storing state information about a plurality of cached lines stored in said cache; and a coherence controller coupled to said bus interface for maintaining cache coherence.
- 20. The cache coherence unit of claim 19 wherein said coherence controller comprises:
means for reading state information from said directory; and means for updating said state information in said directory.
- 21. The cache coherence unit of claim 19 further comprising means for using said state information to find a directory entry.
- 22. The cache coherence unit of claim 21 wherein said means for using uses a least-recently-used algorithm.
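The entry-selection policy recited across claims 7-18 can be sketched as follows. This is an illustrative model only, not the patented implementation: the `State` enumeration, `DirectoryEntry` structure, `lru_age` field, and function names are assumptions introduced for the sketch. It shows the claimed preference order (an unused entry first, then the least-recently-used shared entry, then the least-recently-used dirty entry), the retry condition when all entries are in transitional states, and invalidation of the selected entry's cached line.

```python
from enum import Enum, auto

class State(Enum):
    INVALID = auto()       # entry not currently in use (claims 4, 10)
    SHARED = auto()        # line valid/shared in at least one cache (claims 2, 5, 11)
    DIRTY = auto()         # line dirty in one processor's cache (claims 6, 13)
    TRANSITIONAL = auto()  # entry mid-transaction; not eligible for replacement

class DirectoryEntry:
    """Hypothetical directory entry: a coherence state plus an LRU age."""
    def __init__(self, state, lru_age=0):
        self.state = state
        self.lru_age = lru_age  # larger value = less recently used

def select_entry(entries):
    """Select a directory entry to allocate for a new cache-line request.

    Preference order: INVALID, then SHARED, then DIRTY; within the SHARED
    and DIRTY classes the least-recently-used entry is chosen (claims
    12, 14). Returns None when every entry is transitional, signalling
    that the request should be retried (claims 7 and 9).
    """
    for wanted in (State.INVALID, State.SHARED, State.DIRTY):
        candidates = [e for e in entries if e.state == wanted]
        if candidates:
            # Least-recently-used among the eligible class.
            return max(candidates, key=lambda e: e.lru_age)
    return None  # all entries transitional -> caller retries the request

def allocate(entries):
    """Allocate a directory entry, invalidating the victim's cached line."""
    victim = select_entry(entries)
    if victim is None:
        return None  # retry later
    # Claims 8 and 15: invalidate the cached line represented by the
    # selected entry before reusing it for the new request.
    victim.state = State.INVALID
    return victim
```

Under this sketch, a shared entry is always preferred over a dirty one regardless of recency, which avoids the write-back a dirty victim would require; the LRU tie-break applies only within a single state class.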
CROSS-REFERENCE TO CO-PENDING APPLICATIONS
[0001] This application claims the benefit of U.S. provisional application No. 60/084,795, filed on May 8, 1998.
[0002] This application is related to co-pending U.S. patent application Ser. No. 09/003,721, entitled “Cache Coherence Unit with Integrated Message Passing and Memory Protection for a Distributed, Shared Memory Multiprocessor System,” filed on Jan. 7, 1998; co-pending U.S. patent application Ser. No. 09/003,771, entitled “Memory Protection Mechanism for a Distributed Shared Memory Multiprocessor with Integrated Message Passing Support,” filed on Jan. 7, 1998; co-pending U.S. patent application Ser. No. 09/041,568, entitled “Cache Coherence Unit for Interconnecting Multiprocessor Nodes Having Pipelined Snoopy Protocol,” filed on Mar. 12, 1998; co-pending U.S. patent application Ser. No. 09/281,714, entitled “Split Sparse Directory for a Distributed Shared Memory Multiprocessor System,” filed on Mar. 30, 1999; co-pending U.S. patent application Ser. No. 09/285,316, entitled “Computer Architecture for Preventing Deadlock in Network Communications,” filed on Apr. 2, 1999; and co-pending U.S. patent application Ser. No. 09/287,650, entitled “Credit-Based Message Protocol for Over-Run Protection in a Multi-Processor Computer System,” filed on Apr. 7, 1999; all of which are hereby incorporated by reference.