Claims
- 1. In a processing system comprising a processor, a first cache, a second cache and a main memory, a method for pre-fetching data into said first cache comprising the steps of:
- detecting in said first cache a cache access event for a Line M;
- searching said second cache for said Line M in response to said cache access event;
- if said Line M is found in said second cache, transferring said Line M from said second cache to said first cache;
- if said Line M is not found in said second cache, waiting until unresolved branch instructions in a Line M-1 are resolved before fetching said Line M from said main memory;
- searching said first cache for a Line M+1; and
- if said Line M+1 is not found in said first cache, searching said second cache for said Line M+1.
- 2. The method as set forth in claim 1 wherein said cache access event is a cache miss.
- 3. The method as set forth in claim 1 wherein said cache access event is a cache hit.
- 4. The method as set forth in claim 1 including the further step of, if said Line M+1 is not found in said second cache, waiting until unresolved branch instructions in said Line M are resolved before fetching Line M+1 from said main memory.
- 5. The method as set forth in claim 1 including the further step of, if said Line M+1 is found in said second cache, determining whether said Line M+1 resides in a separate logical block of memory from said Line M.
- 6. The method as set forth in claim 5 including the further step of, if said Line M+1 does not reside in said separate logical block, transferring said Line M+1 from said second cache to said first cache.
- 7. The method as set forth in claim 5 including the further step of, if said Line M+1 does reside in said separate logical block, waiting until unresolved branch instructions in said Line M are resolved before transferring said Line M+1 from said second cache to said first cache.
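The method of claims 1-7 can be illustrated as a short sketch. This is not the patented implementation (which is hardware); all names here (`l1`, `l2`, `mem`, `branches_resolved`, `BLOCK_LINES`) are illustrative assumptions, with caches and memory modeled as dictionaries keyed by line number.

```python
BLOCK_LINES = 64  # assumed lines per logical block; illustrative only


def prefetch(m, l1, l2, mem, branches_resolved):
    """Sketch of the claim-1 flow: fetch Line M, then pre-fetch Line M+1."""
    # A cache access event for Line M has been detected in the first
    # cache (a miss per claim 2 or a hit per claim 3); search the
    # second cache for Line M.
    if m in l2:
        l1[m] = l2[m]                      # transfer L2 -> L1
    elif branches_resolved(m - 1):         # wait on Line M-1 branches
        l1[m] = mem[m]                     # then fetch from main memory
    # Search the first cache for Line M+1, then the second cache.
    if m + 1 not in l1:
        if m + 1 in l2:
            # Claims 5-7: check whether Line M+1 resides in a separate
            # logical block of memory from Line M.
            same_block = m // BLOCK_LINES == (m + 1) // BLOCK_LINES
            if same_block or branches_resolved(m):
                l1[m + 1] = l2[m + 1]      # transfer L2 -> L1
        elif branches_resolved(m):         # claim 4: wait on Line M branches
            l1[m + 1] = mem[m + 1]         # then fetch from main memory
```

Holding the main-memory fetch until prior-line branches resolve avoids committing memory bandwidth to a line the program may never reach.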
- 8. In a processing system comprising a processor, a first cache, a second cache and a main memory, a method for pre-fetching data into said first cache comprising the steps of:
- detecting in said first cache a cache access event for a Line M;
- searching said second cache for a Line M+1 in response to said cache access event;
- if said Line M+1 is not found in said second cache, waiting until unresolved branch instructions in said Line M are resolved before fetching said Line M+1 from said main memory;
- if said Line M+1 is found in said second cache, determining whether said Line M+1 resides in a separate logical block of memory from said Line M; and
- if said Line M+1 does not reside in said separate logical block, transferring said Line M+1 from said second cache to said first cache.
- 9. The method as set forth in claim 8 wherein said cache access event is a cache miss.
- 10. The method as set forth in claim 8 wherein said cache access event is a cache hit.
- 11. The method as set forth in claim 8 including the further step of, if said Line M+1 does reside in said separate logical block, waiting until unresolved branch instructions in said Line M are resolved before transferring said Line M+1 from said second cache to said first cache.
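Claims 8-11 describe a narrower variant that goes straight to the second cache for the sequential line. Again a minimal illustrative sketch, not the hardware implementation; `PAGE_LINES` and the dictionary-based caches are assumptions of this example.

```python
PAGE_LINES = 64  # assumed lines per logical block; illustrative only


def prefetch_next(m, l1, l2, mem, branches_resolved):
    """Sketch of the claim-8 flow: pre-fetch Line M+1 on an access to Line M."""
    # A cache access event for Line M has been detected in the first
    # cache; search the second cache for Line M+1.
    if m + 1 not in l2:
        # Wait until unresolved branches in Line M are resolved before
        # fetching Line M+1 from main memory.
        if branches_resolved(m):
            l1[m + 1] = mem[m + 1]
        return
    # Line M+1 is in the second cache: does it reside in a separate
    # logical block of memory from Line M?
    separate = m // PAGE_LINES != (m + 1) // PAGE_LINES
    # Claim 11: across a block boundary, wait on Line M's branches
    # before transferring; within the same block, transfer at once.
    if not separate or branches_resolved(m):
        l1[m + 1] = l2[m + 1]
```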
- 12. A processing system comprising:
- a processor;
- a first cache;
- a second cache;
- a main memory;
- means for detecting in said first cache a cache access event for a first data;
- means responsive to said cache access event for determining if a second data sequential to said first data is present in said second cache;
- means responsive to a determination that said second data is not present in said second cache for waiting until unresolved branch instructions in said first data are resolved before fetching said second data from said main memory;
- means responsive to a determination that said second data is present in said second cache for determining if said second data resides in a separate logical block of memory from said first data; and
- means responsive to a determination that said second data does not reside in said separate logical block for transferring said second data from said second cache to said first cache.
- 13. The processing system as set forth in claim 12 wherein said cache access event is a cache miss.
- 14. The processing system as set forth in claim 12 wherein said cache access event is a cache hit.
- 15. The processing system as set forth in claim 12 further comprising means responsive to a determination that said second data does reside in said separate logical block for waiting until unresolved branch instructions in said first data are resolved before transferring said second data from said second cache to said first cache.
Parent Case Info
CROSS-REFERENCE TO RELATED APPLICATIONS
This application for patent is related to the following applications for patent assigned to a common assignee.
INSTRUCTION PRE-FETCHING FOR MULTIPLE DATA PATHS, U.S. patent application Ser. No. 08/540,374, filed Dec. 6, 1995;
PRE-FETCHING DATA FROM MEMORY ACROSS PAGE BOUNDARIES, U.S. patent application Ser. No. 08/529,470, filed Sep. 18, 1995;
PROGRESSIVE DATA CACHE, U.S. patent application Ser. No. 08/519,031, filed Aug. 24, 1995;
MODIFIED L1/L2 CACHE INCLUSION FOR AGGRESSIVE PREFETCH, U.S. patent application Ser. No. 08/518,348, filed Aug. 25, 1995;
STREAM FILTER, U.S. patent application Ser. No. 08/519,032, filed Aug. 24, 1995; and
CACHE DIRECTORY FIELD FOR INCLUSION, U.S. patent application Ser. No. 08/516,347, filed Aug. 8, 1995.
These applications for patent are hereby incorporated by reference in the present disclosure as if fully set forth herein.
US Referenced Citations (10)
Non-Patent Literature Citations (4)
Ryan, Challenges Pentium: The Cyrix architecture brings more of the benefits of superpipelining and superscalar execution to 80×86 programs without requiring recompilation, BYTE, vol. 19, No. 1, p. 83, Jan. 1994.
Case, The Primer, Windows Sources, vol. 3, No. 5, p. 144(5), May 1995.
Pomerene, Reducing cache misses in a branch history table machine, IBM Technical Disclosure Bulletin, vol. 23, No. 2, p. 853, Jul. 1980.
Wilson, CompCon 95 sees battle of the CPUs, Electronic Engineering Times, No. 839, p(2), Mar. 13, 1995.