The present invention relates to a recording scheme for instruction segments in a processor core in which instructions from instruction segments may be cached in reverse program order.
Conventionally, front-end processing 110 may build instruction segments from stored program instructions to reduce the latency of instruction decoding and to increase front-end bandwidth. Instruction segments are sequences of dynamically executed instructions that are assembled into logical units. The instructions may have been drawn from non-contiguous regions of an external memory space but, once assembled into the instruction segment, they appear in program order. The instruction segment may include instructions or uops (micro-instructions).
A trace is perhaps the most common type of instruction segment. Typically, a trace may begin with an instruction of any type. Traces have a single-entry, multiple-exit architecture: instruction flow starts at the first instruction but may exit the trace at multiple points, depending on predictions made at branch instructions embedded within the trace. The trace may end when one of a number of predetermined end conditions occurs, such as a trace size limit, the occurrence of a maximum number of conditional branches or the occurrence of an indirect branch or a return instruction. Traces typically are indexed by the address of the first instruction therein.
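The end conditions above can be sketched as a simple trace-construction loop. This is an illustrative sketch only: the instruction fields (`ip`, `kind`) and the limits `TRACE_SIZE_LIMIT` and `MAX_COND_BRANCHES` are assumptions for the example, not values taken from the specification.

```python
# Illustrative sketch of trace construction; field names and limits are
# assumptions, not taken from the specification.
TRACE_SIZE_LIMIT = 8        # assumed maximum instructions per trace
MAX_COND_BRANCHES = 2       # assumed conditional-branch limit

def build_trace(instructions, start):
    """Collect instructions into a trace until an end condition occurs.

    A trace has a single entry (its first instruction) and may exit at
    any conditional branch embedded within it.
    """
    trace = []
    cond_branches = 0
    for instr in instructions[start:]:
        trace.append(instr)
        kind = instr["kind"]
        if kind == "cond_branch":
            cond_branches += 1
            if cond_branches == MAX_COND_BRANCHES:
                break               # conditional-branch limit reached
        elif kind in ("indirect_branch", "return"):
            break                   # unconditional end conditions
        if len(trace) == TRACE_SIZE_LIMIT:
            break                   # size limit reached
    return trace

program = [
    {"ip": 0, "kind": "alu"},
    {"ip": 1, "kind": "cond_branch"},
    {"ip": 2, "kind": "alu"},
    {"ip": 3, "kind": "return"},
    {"ip": 4, "kind": "alu"},
]
trace = build_trace(program, 0)
# The trace ends at the return instruction at ip 3.
```

The terminating instruction itself is kept in the trace, matching the convention that a trace ends *on* (not before) the ending branch or return.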
Other instruction segments are known. The inventors have proposed an instruction segment, which they call an “extended block,” that has a different architecture from the trace. The extended block has a multiple-entry, single-exit architecture. Instruction flow may start at any point within an extended block but, once it enters the extended block, instruction flow must progress to the terminal instruction of the extended block. The extended block may terminate on a conditional branch, a return instruction or a size limit. The extended block may be indexed by the address of the last instruction therein.
A “basic block” is another example of an instruction segment. It is perhaps the simplest type of instruction segment available. The basic block may terminate on the occurrence of any kind of branch instruction, including an unconditional branch. The basic block may be characterized by a single-entry, single-exit architecture. Typically, the basic block is indexed by the address of the first instruction therein.
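The three segment types differ chiefly in their entry/exit shape and in which instruction supplies the index key. A minimal sketch of the indexing convention (the function and its arguments are illustrative assumptions):

```python
# Minimal sketch of segment index keys; all names are illustrative.
def index_key(segment_type, instruction_ips):
    """Return the IP used to look up a cached segment.

    Traces and basic blocks are indexed by their first instruction;
    extended blocks are indexed by their last (terminal) instruction.
    """
    if segment_type in ("trace", "basic_block"):
        return instruction_ips[0]
    if segment_type == "extended_block":
        return instruction_ips[-1]
    raise ValueError("unknown segment type")

assert index_key("trace", [10, 11, 12]) == 10
assert index_key("extended_block", [10, 11, 12]) == 12
```

Indexing an extended block by its terminal instruction is what later allows the segment to grow at its front without changing its cache index.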
Regardless of the type of instruction segment used in a processor 110, the instruction segment typically is cached for later use. Reduced latency is achieved when program flow returns to the instruction segment because the instruction segment may store instructions already assembled in program order. The instructions in the cached instruction segment may be furnished to the execution stage 120 faster than they could be furnished from different locations in an ordinary instruction cache.
While the use of instruction segments reduces execution latency, the segments tend to exhibit a high degree of redundancy. A segment cache may store copies of a single instruction in multiple instruction segments, thereby wasting space in the cache. The inventors propose to reduce this redundancy by merging one or more segments into a larger, aggregate segment or by extending one instruction segment to include instructions from another instruction segment with overlapping instructions. However, extension of segments is a non-trivial task, for several reasons.
First, instructions typically are cached in program order. To extend an instruction segment at its beginning would require previously stored instructions to be shifted downward through the cache to make room for the new instructions. The instructions may be shifted by varying amounts, depending upon the number of new instructions to be added. This serial shift may consume a great deal of time, which may impair the effectiveness of the front-end stage 110.
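The shift problem can be illustrated with a toy model of a program-order cache line. The function and data below are assumptions for illustration; in hardware the "shift" would be a serial movement of every stored entry, not a list concatenation.

```python
# Sketch of the prepend problem: extending a program-order cache line at
# the front displaces every previously stored instruction. Illustrative only.
def extend_front_program_order(line, new_instrs):
    """Prepend new instructions; every old entry moves down by len(new_instrs)."""
    return new_instrs + line   # conceptually a serial shift in hardware

line = ["I3", "I4", "I5"]
extended = extend_front_program_order(line, ["I1", "I2"])
# Every old instruction now occupies a different slot: I3 moved from
# slot 0 to slot 2, so any stored pointer to "slot 0 of this line" is stale.
```

Because the offsets of all previously stored instructions change, any pointer into the line (such as a mapping from another segment) is invalidated by the extension.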
Additionally, the extension may destroy previously established relationships among the instruction segments. Instruction segments not only are cached, but they also are indexed by the front-end stage 110 to identify relationships among themselves. For example, program flow previously may have exited a first segment and arrived at a second segment. A mapping from the first instruction segment to the second instruction segment may be stored by the front-end stage 110 in addition to the instruction segments themselves. Oftentimes, the mappings simply are pointers from one instruction segment to the first instruction in a second instruction segment.
Extension of instruction segments, however, may cause new instructions to be added to the beginning of a segment. In such a case, an old pointer to the segment must be updated to circumvent the newly added instructions. Otherwise, if the old mapping were used, the front-end stage 110 would furnish an incorrect set of instructions to the execution stage 120, and the processor 100 would execute the wrong instructions.
Accordingly, there is a need in the art for a front-end processing system that permits instruction segments to be extended dynamically without disruption to previously stored mappings among the instruction segments.
Embodiments of the present invention provide a recording scheme for instruction segments that stores the instructions in reverse program order. By storing the instructions in reverse program order, it becomes easier to extend an instruction segment to include additional instructions. The instruction segments may be extended without re-indexing tag arrays or the pointers that associate instruction segments with one another.
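The reverse-order scheme can be sketched as follows. The `SegmentLine` class and its fields are illustrative assumptions, not the claimed implementation: the point is that with the terminal instruction in slot 0, extending the segment at its program-order front becomes an append at the tail of the stored line, so neither the index key nor the offsets of existing instructions change.

```python
# Sketch of the reverse-order recording scheme; class and field names
# are assumptions for illustration.
class SegmentLine:
    """Cache line holding an instruction segment in reverse program order.

    Slot 0 holds the terminal instruction, so the segment's index (the
    IP of the terminal instruction) and the offsets of all previously
    stored instructions never change when the segment is extended.
    """
    def __init__(self, instrs_program_order):
        self.slots = list(reversed(instrs_program_order))

    @property
    def index_ip(self):
        return self.slots[0]["ip"]   # terminal instruction's IP

    def extend_front(self, new_instrs_program_order):
        # Earlier instructions are appended at the tail; nothing shifts.
        self.slots.extend(reversed(new_instrs_program_order))

    def program_order(self):
        return list(reversed(self.slots))

seg = SegmentLine([{"ip": 4}, {"ip": 5}, {"ip": 6}])
before = seg.index_ip
seg.extend_front([{"ip": 2}, {"ip": 3}])
# index_ip is still 6; program order now runs over ips 2, 3, 4, 5, 6.
```

Contrast this with the program-order case: there, the same extension would move every stored instruction and stale every pointer into the line.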
According to an embodiment, an ISS 220 may include a fill unit 260, a segment branch prediction unit (or “segment BPU”) 270 and a segment cache 280. The fill unit 260 may build the instruction segments. The segment cache 280 may store the instruction segments. The segment BPU 270 may predict which instruction segments, if any, are likely to be executed and may cause the segment cache 280 to furnish any predicted segment to the execution unit. The segment BPU 270 may store masks associated with each of the instruction segments stored by the segment cache 280, indexed by the IP of the terminal instruction of the instruction segments.
The ISS 220 may receive decoded instructions from the instruction cache 210. The ISS 220 also may pass decoded instructions to the execution unit (not shown). A selector 290 may select which front-end source, either the instruction cache 210 or the ISS 220, will supply instructions to the execution unit. In an embodiment, the segment cache 280 may control the selector 290.
According to an embodiment, a hit/miss indication from the segment cache 280 may control the selector 290.
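The selection path described above can be sketched in a few lines. The component names follow the text (segment cache, ISS, instruction cache); their interfaces here are assumptions for illustration.

```python
# Sketch of the front-end selection path; interfaces are illustrative.
class SegmentCache:
    def __init__(self):
        self.lines = {}            # terminal IP -> instruction segment

    def lookup(self, ip):
        """Return (hit, segment); the hit/miss bit drives the selector."""
        seg = self.lines.get(ip)
        return (seg is not None), seg

def front_end_fetch(ip, segment_cache, instruction_cache):
    hit, segment = segment_cache.lookup(ip)
    if hit:
        return "ISS", segment                        # selector picks the ISS
    return "icache", instruction_cache.get(ip)       # selector picks the instruction cache

cache = SegmentCache()
cache.lines[6] = ["i4", "i5", "i6"]
src, payload = front_end_fetch(6, cache, {6: ["i6"]})
src2, payload2 = front_end_fetch(9, cache, {9: ["i9"]})
# A hit at ip 6 selects the ISS; a miss at ip 9 falls back to the instruction cache.
```

The hit/miss indication doing double duty as the selector control is the simple embodiment described above: no separate prediction is needed to steer the mux.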
During execution, a first segment may begin when program flow advances to location IP1 (as by, for example, a conditional branch). Instructions may be retrieved from the instruction cache 210 until the program flow reaches a conditional branch instruction. Assume that the conditional branch is taken, causing program flow to advance to location IP6. In an extended block system, for example, the conditional branch would cause the instruction segment to terminate and a new segment to be created starting at location IP6. The first instruction segment may be stored in a line of the segment cache (say, 310.2 of
Program flow may advance from location IP6 to the return instruction at location IP4. The return instruction would terminate a second instruction segment 420, causing the ISS (
Assume that program flow advances to the instruction at location IP3 at some later time. Instructions may be retrieved from the instruction cache (
Returning to
The recording scheme of the present invention permits instruction segments to be merged without requiring corresponding manipulation of the mappings stored in the segment BPU 270. Continuing with the example provided in
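The merge just described can be sketched in the same reverse-order model. The data layout below is an illustrative assumption: each line is a list of IPs in reverse program order, and a merge appends the earlier segment's instructions (reversed) to the tail of the later one.

```python
# Sketch of merging two overlapping segments stored in reverse order;
# the data layout is an illustrative assumption.
def merge_reverse_order(base_slots, earlier_instrs_program_order):
    """Extend a reverse-order segment with earlier instructions.

    base_slots[0] remains the terminal instruction, so any segment BPU
    mapping keyed by the terminal IP is untouched by the merge.
    """
    return base_slots + list(reversed(earlier_instrs_program_order))

base = [6, 5, 4]          # ips in reverse program order; terminal ip is 6
merged = merge_reverse_order(base, [2, 3])
# merged == [6, 5, 4, 3, 2]; the index key (ip 6 at slot 0) is unchanged.
```

Because slot 0 and the index key never move, mappings stored in the segment BPU 270 remain valid across the merge with no re-indexing.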
Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.