Claims
- 1. A novel architecture for set associative cache, comprising:
  a set associative cache having a plurality of ways wherein the ways are segmented into a plurality of banks and wherein a first way has a fast access time;
  access control logic which manages access to the cache and is coupled to said plurality of ways;
  a plurality of muxes coupled to said first way in each of said banks and coupled to said access control logic; and
  wherein the access control logic controls the mux in a bank to remap any defective way in a bank to the first way in that same bank.
- 2. The architecture of claim 1 wherein said first way has a faster access time because it has a physically shorter path to said access control logic.
- 3. The architecture of claim 1 further comprising self test logic coupled to said access control logic to test the cache for defects.
- 4. The architecture of claim 3 wherein said self test logic tests the cache for defects on power up.
- 5. The architecture of claim 3 wherein said self test logic stores the location of defects in a status register.
- 6. The architecture of claim 5 wherein said access control logic reads the location of defects in the cache from the status register to determine proper control of said muxes.
- 7. The architecture of claim 1 wherein said set associative cache has a data array having a plurality of ways wherein the ways are segmented into a plurality of banks and wherein a first way has a faster access time.
- 8. The architecture of claim 1 comprising a plurality of ways having a fast access time and a plurality of muxes coupled to said plurality of ways in each of said banks and coupled to said access control logic.
- 9. The architecture of claim 8 wherein the access control logic controls the plurality of muxes in a bank to remap any defective way in a bank to a different way in that same bank.
- 10. The architecture of claim 1 wherein the access time of said first way (t1) is sufficiently fast such that the added time of the mux (tmux) will not add any latency.
- 11. The architecture of claim 10 wherein the access time of said first way (t1) added to the time of the mux (tmux) is less than or equal to the access time of the slowest way (tn).
- 12. The architecture of claim 10 wherein the access time of said first way (t1) added to the time of the mux (tmux) is less than or equal to a system clock cycle (tclk).
- 13. A microprocessor die, comprising:
  self test logic which tests the die for defects;
  a set associative cache having a plurality of ways wherein the ways are segmented into a plurality of banks;
  access control logic which manages access to the cache coupled to said self test logic and coupled to said plurality of ways in said cache;
  a first way in said cache which has a physically shorter path to said access control logic;
  a plurality of muxes coupled to said first way in each of said plurality of banks and coupled to said access control logic; and
  wherein the access control logic controls the mux in a bank to remap any defective way in a bank to the first way in that same bank.
- 14. The microprocessor die of claim 13 comprising a plurality of ways having a physically shorter path to said access control logic and a plurality of muxes coupled to said plurality of ways in each of said banks and coupled to said access control logic.
- 15. The microprocessor die of claim 14 wherein the access control logic controls the plurality of muxes in a bank to remap any defective way in a bank to a different way in that same bank.
- 16. The microprocessor die of claim 13 wherein the access time of said first way (t1) is sufficiently fast such that the added time of the mux (tmux) will not add any latency to the microprocessor.
- 17. The microprocessor die of claim 13 wherein the access time of said first way (t1) added to the time of the mux (tmux) is less than or equal to the access time of the slowest way (tn).
- 18. The microprocessor die of claim 13 wherein the access time of said first way (t1) added to the time of the mux (tmux) is less than or equal to a system clock cycle (tclk).
- 19. A method of absorbing defects in a set associative cache, comprising:
  providing a set associative cache with a plurality of ways wherein the ways are segmented into a plurality of banks and wherein a first way has a fast access time;
  providing a plurality of muxes coupled to said first way in each of said banks; and
  using the mux in a bank to remap any defective way in a bank to the first way in that same bank.
- 20. The method of claim 19 further comprising the step of testing for errors in the cache.
- 21. The method of claim 19 further comprising the step of disabling a way in a bank when that way is defective.
- 22. The method of claim 19 comprising a plurality of ways having a fast access time and a plurality of muxes coupled to said plurality of ways in each of said banks.
- 23. The method of claim 22 wherein the plurality of muxes in a bank are used to remap any defective way in a bank to a different way in that same bank.
- 24. A computer system, comprising:
  a power supply;
  a microprocessor comprising:
    a set associative cache having a plurality of ways wherein the ways are segmented into a plurality of banks;
    access control logic which manages access to the cache coupled to said plurality of ways in said cache;
    a first way in said cache which has a physically shorter path to said access control logic;
    a plurality of muxes coupled to said first way in each of said plurality of banks and coupled to said access control logic; and
    wherein the access control logic can control the mux in a bank to remap any defective way in a bank to the first way in that same bank.
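
The independent claims above (1, 13, 19, and 24) all recite the same mechanism: a per-bank mux, steered by the access control logic using defect locations recorded by the self test logic in a status register, remaps a defective way onto that bank's fast first way. Below is a minimal behavioral sketch of that remapping decision in C; the bank and way counts, the bit-per-way status encoding, and all function names are illustrative assumptions, not taken from the claims or specification.

```c
/*
 * Behavioral sketch (illustrative only) of the defect-remapping scheme in
 * claims 1, 13, 19 and 24: a set associative cache whose ways are split
 * into banks, where way 0 of each bank (the "first way", with the shortest
 * path to the access control logic) can stand in for any defective way in
 * that bank via a per-bank mux.  Sizes and names are assumptions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BANKS 4   /* assumed bank count */
#define NUM_WAYS  8   /* assumed way count  */

/* Status register written by the self test logic (claims 3-6): one bit per
 * way per bank, set when the power-up self test finds a defect.           */
static uint8_t defect_status[NUM_BANKS];

/* Self test logic records a defective way in its bank's status bits. */
static void self_test_mark_defect(int bank, int way)
{
    defect_status[bank] |= (uint8_t)(1u << way);
}

/* Access control logic: given the bank and the way selected for an access,
 * decide which physical way the per-bank mux should drive.  A defective
 * way is remapped to way 0, the fast first way of the same bank.          */
static int access_control_select_way(int bank, int way)
{
    bool defective = (defect_status[bank] >> way) & 1u;
    return defective ? 0 : way;   /* mux control: remap to the first way */
}

int main(void)
{
    /* Power-up self test (claim 4) finds a bad way in bank 2. */
    self_test_mark_defect(2, 5);

    /* An access to way 5 of bank 2 is steered to way 0; an access to a
     * healthy way passes through unchanged.                              */
    printf("bank 2, way 5 -> way %d\n", access_control_select_way(2, 5));
    printf("bank 1, way 3 -> way %d\n", access_control_select_way(1, 3));
    return 0;
}
```

Running the sketch shows the access that targets the defective way of bank 2 being steered to way 0 of the same bank, while accesses to healthy ways are passed through unchanged.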
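Claims 10-12 and 16-18 bound when this remapping is latency-free. Using the claims' own notation, with t1 the access time of the fast first way, tmux the added mux delay, tn the access time of the slowest way, and tclk the system clock cycle, the conditions can be written as:

```latex
% Latency-hiding conditions recited in claims 11/17 and 12/18, respectively.
t_1 + t_{\mathrm{mux}} \le t_n    \qquad \text{(claims 11, 17)}
\\
t_1 + t_{\mathrm{mux}} \le t_{\mathrm{clk}} \qquad \text{(claims 12, 18)}
```

Either bound ensures that routing an access through the remapping mux does not lengthen the cache's critical path beyond what the slowest way or the clock period already allows.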
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application relates to the following commonly assigned co-pending applications entitled:
[0002] “Scan Wheel—An Apparatus For Interfacing A High Speed Scan-Path With A Slow Speed Tester,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-23700;
“Rotary Rule And Coherence Dependence Priority Rule,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-27300;
“Speculative Scalable Directory Based Cache Coherence Protocol,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-27400;
[0003] “Scalable Efficient IO Port Protocol,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-27500;
“Efficient Translation Buffer Miss Processing For Applications Using Large Pages In Systems With A Large Range Of Page Sizes By Eliminating Page Table Level,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-27600;
“Fault Containment And Error Recovery Techniques In A Scalable Multiprocessor,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-27700;
“Speculative Directory Writes In A Directory Based CC-Non Uniform Memory Access Protocol,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-27800;
“Special Encoding Of Known Bad Data,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-27900;
“Broadcast Invalidate Scheme,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-28000;
“Mechanism To Keep All Pages Open In A DRAM Memory System,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-28100;
“Programmable DRAM Address Mapping Mechanism,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-28200;
“Mechanism To Enforce Memory Read/Write Fairness, Avoid Tristate Bus Conflicts, And Maximize Memory Bandwidth,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-29200;
“An Efficient Address Interleaving With Simultaneous Multiple Locality Options,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-29300;
Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-29400;
“A Method For Reducing Directory Writes And Latency In A High Performance, Directory-Based, Coherency Protocol,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-29600;
“Mechanism To Reorder Memory Read And Write Transactions For Reduced Latency And Increased Bandwidth,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-30800;
“Look-Ahead Mechanism To Minimize And Manage Bank Conflicts In A Computer Memory System,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-30900;
“Resource Allocation Scheme That Ensures Forward Progress, Maximizes Utilization Of Available Buffers And Guarantees Minimum Request Rate,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-31000;
“Input Data Recovery Scheme,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-31100;
“Fast Lane Prefetching,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-31200;
“Mechanism For Synchronizing Multiple Skewed Source-Synchronous Data Channels With Automatic Initialization Feature,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-31300;
“A Mechanism To Control The Allocation Of An N-Source Shared Buffer,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-31400; and
“Chaining Directory Reads And Writes To Reduce DRAM Bandwidth In A Directory Based CC-NUMA Protocol,” Serial No. ______, filed Aug. 31, 2000, Attorney Docket No. 1662-31500, all of which are incorporated by reference herein.
Continuations (1)

|        | Number   | Date     | Country |
| ------ | -------- | -------- | ------- |
| Parent | 09651948 | Aug 2000 | US      |
| Child  | 10690137 | Oct 2003 | US      |