Claims
- 1. A microprocessor, coupled to a bus for transferring data from a memory to the microprocessor, the bus operating at a first clock frequency, and the microprocessor operating at a second clock frequency, wherein the second clock frequency is N times the first clock frequency, the microprocessor comprising:
a cache memory, for generating requests to read data from the memory on the bus, said requests comprising a plurality of access types;
control logic, coupled to said cache memory, for receiving and accumulating said requests from said cache memory for approximately N cycles of the second clock frequency, and prioritizing said accumulated requests according to said plurality of access types; and
a bus interface unit, coupled to said control logic, for receiving from said control logic after said approximately N cycles of the second clock frequency a highest priority one of said prioritized requests.
- 2. The microprocessor of claim 1, wherein said bus interface unit is configured to issue said highest priority request on the bus in response to reception thereof.
- 3. The microprocessor of claim 1, wherein said plurality of access types includes a blocking access type request, wherein said control logic prioritizes blocking access type requests as highest priority of said access types.
- 4. The microprocessor of claim 3, wherein said blocking access type request is associated with a functional unit in the microprocessor requiring data specified by said blocking access type request in order to proceed.
- 5. The microprocessor of claim 3, wherein said blocking access type request comprises a request generated by an operation causing a pipeline stall in the microprocessor while waiting for data associated with said blocking access type request.
- 6. The microprocessor of claim 3, wherein said plurality of access types includes a non-blocking page table walk access type request, wherein said control logic prioritizes non-blocking page table walk access type requests after said blocking access type requests.
- 7. The microprocessor of claim 6, wherein said non-blocking page table walk access type request comprises a request to read page table data from the memory on the bus due to a translation lookaside buffer miss.
- 8. The microprocessor of claim 6, wherein said plurality of access types includes a non-blocking store allocation access type request, wherein said control logic prioritizes non-blocking store allocation access type requests after said blocking access type requests and said non-blocking page table walk access type requests.
- 9. The microprocessor of claim 8, wherein said non-blocking store allocation access type request comprises a request to read a cache line from the memory on the bus due to a store miss in said cache memory to a write-back memory region.
- 10. The microprocessor of claim 6, wherein said plurality of access types includes a prefetch access type request, wherein said control logic prioritizes prefetch access type requests after said blocking access type requests and said non-blocking page table walk access type requests.
- 11. The microprocessor of claim 10, wherein said prefetch access type request comprises a request to read a cache line specified by a prefetch instruction.
- 12. The microprocessor of claim 10, wherein said prefetch access type request comprises a request to read a cache line speculatively generated by the microprocessor.
- 13. The microprocessor of claim 1, wherein N comprises an integer greater than one.
- 14. The microprocessor of claim 1, wherein N comprises a fraction greater than two.
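The accumulate-then-prioritize scheme recited in claims 1-12 can be illustrated with a minimal behavioral sketch. The class and field names below (`ControlLogic`, `Req`, `PRIORITY`) are hypothetical illustration choices, not terms from the claims; the priority tiers follow the ordering recited in claims 3-12, with store allocations and prefetches sharing a tier per claim 26.

```python
from collections import namedtuple

# Hypothetical request record for illustration; the claims do not name fields.
Req = namedtuple("Req", "kind tag")

# Priority tiers implied by claims 3-12 (lower value = higher priority);
# store allocations and prefetches share a tier, per claim 26's ordering.
PRIORITY = {"blocking": 0, "page_table_walk": 1, "store_allocate": 2, "prefetch": 2}

class ControlLogic:
    """Behavioral sketch of claim 1's control logic: accumulate fill requests
    for N core-clock cycles, then hand the highest-priority one to the bus
    interface unit on the core cycle just prior to the next bus-clock edge."""

    def __init__(self, n):
        self.n = n        # core-clock to bus-clock frequency ratio (claim 13)
        self.queue = []   # requests stored in the order received (claim 15)

    def core_tick(self, cycle, new_requests):
        self.queue.extend(new_requests)        # accumulate at core frequency
        if (cycle + 1) % self.n == 0 and self.queue:
            # Stable sort: requests in the same tier keep arrival order.
            self.queue.sort(key=lambda r: PRIORITY[r.kind])
            return self.queue.pop(0)           # issue on the bus
        return None
```

For example, with N = 4, a prefetch received on core cycle 0 is deferred when a blocking request arrives on cycle 2: on cycle 3 (just before the bus edge), the blocking request is issued first.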
- 15. A data cache, in a microprocessor coupled to a system memory by a bus, the microprocessor core logic operating according to a core clock, the bus operating according to a bus clock, the data cache comprising:
a request queue, for storing a plurality of requests to fill a cache line from the system memory on the bus, said requests comprising a plurality of types;
request accumulation logic, coupled to said request queue, for storing said plurality of requests into said request queue in an order received during core clock cycles;
prioritization logic, coupled to said request queue, for prioritizing said plurality of requests based on said plurality of types during a core clock cycle just prior to a next bus clock cycle; and
bus request issue logic, coupled to said request queue, for removing from said request queue a highest priority of said plurality of requests prioritized by said prioritization logic for issuance on the bus.
- 16. The data cache of claim 15, wherein the core clock frequency is a multiple of the bus clock frequency.
- 17. The data cache of claim 15, wherein said prioritization logic prioritizes requests causing a pipeline stall in the microprocessor as highest priority of said plurality of requests.
- 18. The data cache of claim 17, wherein said prioritization logic prioritizes requests not causing a pipeline stall in the microprocessor as lower priority of said plurality of requests than said requests causing a pipeline stall.
- 19. The data cache of claim 18, wherein said requests not causing a pipeline stall in the microprocessor comprise requests to fill a cache line from the system memory due to a translation lookaside buffer miss.
- 20. The data cache of claim 19, wherein said requests not causing a pipeline stall in the microprocessor comprise requests to fill a cache line from the system memory due to a store miss to a write-back cacheable region.
- 21. The data cache of claim 20, wherein said prioritization logic prioritizes requests to fill a cache line from the system memory due to a store miss to a write-back cacheable region lower than said requests to fill a cache line from the system memory due to a translation lookaside buffer miss.
- 22. The data cache of claim 19, wherein said requests not causing a pipeline stall in the microprocessor comprise requests to fill a cache line from the system memory due to a prefetch operation.
- 23. The data cache of claim 22, wherein said prioritization logic prioritizes requests to fill a cache line from the system memory due to a prefetch operation lower than said requests to fill a cache line from the system memory due to a translation lookaside buffer miss.
- 24. A microprocessor, comprising:
a bus clock input, for receiving a bus clock signal, said bus clock signal having a first frequency for controlling operation of a bus coupling the microprocessor to a system memory;
a core clock signal, having a second frequency, for controlling operation of core logic in the microprocessor, said second frequency being a multiple of said first frequency;
a data cache, coupled to receive said core clock signal, for generating requests to read a cache line on said bus, said requests each having a request type; and
control logic, coupled to said data cache, for accumulating said requests at said second frequency, for prioritizing said accumulated requests based on said request type at said first frequency, and for issuing a highest priority one of said requests on said bus after said prioritizing.
- 25. The microprocessor of claim 24, wherein said request type includes at least two of the following request types: a blocking request type, a non-blocking page table data request type, a non-blocking store allocation request type, and a non-blocking prefetch request type.
- 26. The microprocessor of claim 25, wherein said blocking request type is highest priority, said non-blocking page table data request type is next highest priority, and said non-blocking store allocation request type and said non-blocking prefetch request type are next highest priority.
- 27. A method for a microprocessor to transfer cache lines from a system memory on a bus coupling the microprocessor and system memory, the bus operating at a bus clock frequency and the microprocessor core logic operating at a core clock frequency, the method comprising:
determining during a core clock cycle whether said core clock cycle is occurring just prior to a next bus clock cycle, wherein the core clock frequency is a multiple of the bus clock frequency;
prioritizing during said core clock cycle a plurality of bus requests accumulated during previous core clock cycles according to request type, if said determining is true; and
issuing during said next bus clock cycle a highest priority one of said plurality of bus requests on the bus after said prioritizing.
- 28. The method of claim 27, wherein said plurality of bus requests comprise requests to transfer a cache line of data from the system memory to a data cache in the microprocessor on the bus.
- 29. The method of claim 27, wherein said prioritizing comprises prioritizing ones of said plurality of bus requests of a blocking request type at highest priority.
- 30. The method of claim 29, wherein said blocking request type comprises a request causing a pipeline stall in the microprocessor.
- 31. The method of claim 29, wherein said prioritizing comprises prioritizing ones of said plurality of bus requests of a non-blocking request type at lower priority than said blocking request type.
- 32. The method of claim 31, wherein said non-blocking request type comprises a request not causing a pipeline stall in the microprocessor.
- 33. The method of claim 32, wherein said non-blocking request type comprises a request to transfer page table data from the system memory to the microprocessor on the bus.
- 34. The method of claim 33, wherein said non-blocking request type comprises a request to transfer a cache line associated with a store miss in a data cache of the microprocessor from the system memory to said data cache on the bus.
- 35. The method of claim 34, wherein said prioritizing comprises prioritizing said request to transfer page table data at higher priority than said request to transfer said cache line associated with said store miss.
- 36. The method of claim 33, wherein said non-blocking request type comprises a request to prefetch a cache line from the system memory to a data cache in the microprocessor on the bus.
- 37. The method of claim 36, wherein said prioritizing comprises prioritizing said request to transfer page table data at higher priority than said request to prefetch said cache line.
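The determining step of method claim 27 reduces to checking whether the current core cycle is the last one in a bus-clock period. The sketch below assumes an integer core-to-bus frequency ratio and that core cycle 0 coincides with a bus-clock edge; both assumptions, the function names, and the numeric priority encoding are illustrative, not claim limitations (claims 29-37 recite blocking first, then page table data, then store-miss fills and prefetches, without ordering the last two relative to each other).

```python
def just_before_bus_edge(core_cycle, ratio):
    """True on the last core cycle of each bus-clock period, i.e. claim 27's
    'core clock cycle occurring just prior to a next bus clock cycle'.
    Assumes an integer core-to-bus ratio and that core cycle 0 coincides
    with a bus-clock edge (assumptions for illustration)."""
    return core_cycle % ratio == ratio - 1

# Hypothetical numeric encoding of the ordering in claims 29-37; store-miss
# fills and prefetches are not relatively ordered by those claims, so the
# values 2 and 3 here are an arbitrary illustrative choice.
ORDER = {"blocking": 0, "page_table": 1, "store_miss_fill": 2, "prefetch": 3}

def select_highest(accumulated):
    """Pick the request to issue on the next bus cycle (claim 27's last step).
    Requests are (type, tag) tuples for illustration."""
    return min(accumulated, key=lambda req: ORDER[req[0]])
```

With a 4:1 ratio, core cycles 3, 7, 11, ... are the prioritization cycles; among accumulated non-blocking requests, a page table read is selected ahead of store-miss fills and prefetches, per claims 35 and 37.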
Parent Case Info
[0001] This application claims priority based on U.S. Provisional Application, Serial No. 60/345,458, filed Oct. 23, 2001, entitled CONTINUOUS FILL PRIORITIZATION.
Provisional Applications (1)
| Number   | Date     | Country |
| -------- | -------- | ------- |
| 60345458 | Oct 2001 | US      |