Claims
- 1. In a stack based computing system utilizing a stack cache and a data cache, a method to handle stack cache misses, said method comprising:
    requesting a first data word from said stack cache in a stack cache fetch stage of an instruction pipeline in the stack based computing system;
    detecting a stack cache miss;
    requesting said first data word from the data cache;
    pushing a result computed using said first data word onto the stack;
    comparing a memory address of said first data word with a memory address of a second data word in a data cache fetch stage;
    retrieving said second data word in place of said first data word if said memory address of said first data word matches said memory address of said second data word;
    comparing said memory address of said first data word with a memory address of a third data word in an execution stage of said instruction pipeline; and
    retrieving said third data word in place of said first data word if said memory address of said first data word matches said memory address of said third data word.
- 2. The method of claim 1, further comprising:
    comparing said memory address of said first data word with a memory address of a fourth data word in a write stage of said instruction pipeline; and
    retrieving said fourth data word in place of said first data word if said memory address of said first data word matches said memory address of said fourth data word.
- 3. An instruction pipeline for stack based computing system utilizing a stack cache and a data cache, said instruction pipeline comprising:
    a stack cache fetch stage coupled to retrieve data from said stack cache;
    a data cache fetch stage coupled to retrieve data from said data cache;
    a feedback path from said data cache fetch stage to said stack cache fetch stage; and
    an execution stage coupled to said comparator and said feedback path.
- 4. The instruction pipeline of claim 3, wherein said instruction pipeline further comprises a write stage coupled to said comparator and said feedback path.
- 5. The instruction pipeline of claim 4,
    wherein said comparator is configured to compare a memory address from said stack cache fetch stage with a memory address from said data cache fetch stage, a memory address from said execution stage, and a memory address from said write stage; and
    wherein said feedback path is configurable to transfer a data word from said data cache fetch stage, said write stage, and said execution stage to said stack cache fetch stage.
- 6. The instruction pipeline of claim 5, wherein said feedback path comprises a multiplexer having:
    a first plurality of input terminals coupled to said execution stage;
    a second plurality of input terminals coupled to said data cache fetch stage;
    a third plurality of input terminals coupled to said write stage;
    a plurality of configuration terminals coupled to said comparator; and
    a plurality of output terminals coupled to said stack cache fetch stage.
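The miss-handling scheme of the claims above can be illustrated with a minimal software sketch: on a stack cache miss, the missed address is compared against the addresses held in the later pipeline stages (data cache fetch, execution, and write), and a matching in-flight word is forwarded back over the feedback path instead of the possibly stale value in the data cache. All names here (`Stage`, `forward_on_miss`) are illustrative, not from the patent, and the sketch models the hardware behavior in software only.

```python
from dataclasses import dataclass


@dataclass
class Stage:
    """An in-flight pipeline stage holding a pending address/data pair (illustrative)."""
    addr: int
    data: int


def forward_on_miss(addr, later_stages, data_cache):
    # Compare the missed address against each later stage in pipeline
    # order (data cache fetch, then execution, then write); the first
    # match holds the youngest in-flight value and is forwarded back
    # over the feedback path to the stack cache fetch stage.
    for stage in later_stages:
        if stage.addr == addr:
            return stage.data
    # No in-flight match: fall back to the data cache.
    return data_cache[addr]


# Example: a miss on address 0x20 is satisfied by the execution
# stage's in-flight word (222) rather than the stale data cache
# value (999); a miss on 0x40 falls through to the data cache.
stages = [Stage(0x10, 111), Stage(0x20, 222), Stage(0x30, 333)]
cache = {0x20: 999, 0x40: 444}
print(forward_on_miss(0x20, stages, cache))  # forwarded: 222
print(forward_on_miss(0x40, stages, cache))  # from data cache: 444
```

In the claimed hardware this priority selection is what the multiplexer of claim 6 performs: the comparator drives the configuration terminals, selecting among the execution-stage, data-cache-fetch-stage, and write-stage inputs.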
CROSS-REFERENCE TO RELATED APPLICATIONS
This application relates to the co-pending application Ser. No. 09/064,807, filed Apr. 22, 1998, “SUPERSCALAR STACK BASED COMPUTING SYSTEM”, by Koppala, et al., owned by the assignee of this application and incorporated herein by reference.
This application relates to the co-pending application Ser. No. 09/064,642, filed Apr. 22, 1998, “REISSUE LOGIC FOR HANDLING TRAPS IN A MULTIISSUE STACK BASED COMPUTING SYSTEM”, by Koppala, et al., now U.S. Pat. No. 6,108,768, owned by the assignee of this application and incorporated herein by reference.
This application relates to the co-pending application Ser. No. 09/064,680, filed Apr. 22, 1998, “LENGTH DECODER FOR VARIABLE LENGTH DATA”, by Koppala, et al., now U.S. Pat. No. 6,170,050, owned by the assignee of this application and incorporated herein by reference.
US Referenced Citations (44)
Non-Patent Literature Citations (6)
Philip Burnley, “CPU architecture for realtime VME systems,” Microprocessors & Microsystems, Butterworth & Co. Ltd. (London, Great Britain), (Apr. 12, 1988), pp. 153-158.
Timothy J. Stanley, Robert G. Wedig, “A Performance Analysis of Automatically Managed Top of Stack Buffers,” 14th Annual Int'l. Symposium on Computer Architecture, The Computer Society of the IEEE (Pittsburgh, Pennsylvania), (Jun. 2, 1987), pp. 272-281.
Russell R. Atkinson, Edward M. McCreight, “The Dragon Processor,” Xerox Palo Alto Research Center, The Computer Society of the IEEE, (Oct. 1987), pp. 65-69.
“Up pops a 32bit stack microprocessor,” Electronic Engineering, (Jun. 1989), p. 79.
Lanfranco Lopriore, “Line fetch/prefetch in a stack cache memory,” Microprocessors and Microsystems, Butterworth-Heinemann Ltd., vol. 17 (No. 9), (Nov. 1993).
Microsoft Press Computer Dictionary, 2nd Ed., p. 279, 1994. |