This invention relates to computer processor systems, and more particularly to exploiting the differences in execution sequence predictability that exist for various kinds of branch instructions in pipelined processors.
In pipelined computer architectures, a branch delay instruction is a program instruction that immediately follows a conditional branch instruction and can be executed regardless of the outcome of the branch condition. The location of such an instruction in the pipeline is called a branch delay slot. Branch delay slots are used to improve performance, such as in MIPS, PA-RISC and SPARC types of RISC architectures, and in the μPD77230 and TMS320C3x types of DSP architectures.
A pipelined processor architecture provides its optimum performance if the pipeline is kept full of useful instructions at all times. But program flow is not always straight-line, and so the next instruction in sequence is not necessarily the next one to execute, because of conditional branches and jumps. Branch delay slots are a side-effect of pipelined architectures, because a conditional branch cannot be resolved until the instruction has worked its way through the fetch, decode, and execute stages of the pipeline.
A simple, but wasteful way to deal with the uncertainties of conditional branches and jumps is to insert and execute no-operation (NOP) instructions after every conditional branch instruction until a new branch target address can be computed and loaded into the program counter. Each such branch delay slot fills one instruction cycle period.
More sophisticated designs try to execute useful program instructions in the branch delay slots, instructions which are independent of the branch instruction. Such optimization can be instilled by the compiler/scheduler at compile time; during execute time, the number of delay-slot instructions that get executed is fixed. If the hardware supports it, the instructions are placed into the branch delay slots of the instruction stream. However, special handling is needed to correctly manage instruction breakpoints and debug stepping within branch delay slots.
The number of branch delay slots to be filled is dictated by the number of pipeline stages in each particular implementation, any register forwarding, the stage in the pipeline in which the branch conditions are computed, whether a branch target buffer (BTB) is used, etc. The performance penalties caused by conditional branch instructions and jumps have been alleviated somewhat in the prior art by using branch prediction techniques and speculative execution.
So branch or jump delay slots are conventional techniques used to keep a processor's pipeline full of operations all the time. The outcome of a jump operation may only be known in the execute pipeline stage (EX), and the outcome is required in the instruction fetch pipeline stage (IF). Consider the following code in Table-I.
The jump operation (JMP) here is a “conditional jump with immediate target” type. Whether the condition is satisfied is determined during execute time by the contents of the memory address pointed to by register r6. If zero, the jump is taken; otherwise it is not taken. So at compile time, the compiler/scheduler cannot know which branch will be taken in the future. The target address is encoded as an immediate value as part of the operation word, as fetched in the IF stage, and represents the jump operation “JMP r6&lt;immediate target address&gt;”.
In a multi-stage processor pipeline, a complication arises as to which operation should be fetched after a jump operation has been fetched, decoded, and executed. It could be the first operation of the fall-through path, or the first operation at the target address. The answer is only known when the jump operation makes it most of the way down the pipeline, is actually executed in the pipeline's execute stage, and the condition is evaluated.
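As a minimal sketch of this uncertainty, the following Python model steps a simple 3-stage pipeline; only the stage names are taken from the text, while the instruction names and the helper function are hypothetical. It shows that when a JMP reaches EX, the two following operations are already in flight in ID and IF, fetched before the jump outcome could possibly be known.

```python
# Minimal IF/ID/EX pipeline trace (hypothetical instruction names).
def trace(program):
    stages = {"IF": None, "ID": None, "EX": None}
    log = []
    pc = 0
    for _ in range(len(program) + 2):
        stages["EX"] = stages["ID"]   # advance the pipeline one cycle
        stages["ID"] = stages["IF"]
        stages["IF"] = program[pc] if pc < len(program) else None
        pc += 1
        log.append(dict(stages))
    return log

log = trace(["SUB", "JMP", "SLL", "ADD"])
# The cycle in which JMP is finally evaluated in EX:
jmp_cycle = next(i for i, s in enumerate(log) if s["EX"] == "JMP")
# At that cycle, SLL and ADD already occupy ID and IF.
print(log[jmp_cycle])  # {'IF': 'ADD', 'ID': 'SLL', 'EX': 'JMP'}
```

The two trailing operations are the delay-slot candidates: they execute (or must be squashed) regardless of the jump outcome.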
There are different types of jump operations besides the conditional jump with immediate target type just discussed. Table-II lists a few others, and
Predictions can be made as to what operation to fetch after a jump operation. Rather than waiting for the condition of the jump operation to be evaluated, its outcome can often be predicted and used to direct the instruction fetch (IF) pipeline stage. A conditional jump with register-based target type also requires that the branch target be predicted, since it is not encoded as part of the operation word. If a prediction is wrong, repair logic is required to recover from each mis-prediction.
Such prediction logic can add significant hardware area and complexity to the processor design.
Branch/jump delay slots have been used in processor designs to allow the IF stage to fetch operations placed just after a jump operation for execution independent of a conditional jump. The compiler/scheduler has to re-organize the code to put such instructions in the branch/jump delay slots. Such operations can safely be executed, as they are independent of the outcome of the jump operation. Useful work is done during the period of target address uncertainty. The character of the processor pipeline has to be known to the compiler/scheduler. In a 3-stage pipeline, a jump in the EX stage cannot affect the fetching of the current operation, but only the fetching of a next operation after that.
In Table-III, the shift-left-logical (SLL) operation has been moved into the JMP operation's delay slot, assuming one delay slot. Its execution is independent of the JMP because r23, r21, and r24 neither affect nor are affected by r6. The code sequence of Table-III, re-organized by the compiler/scheduler, is the functional equivalent of the original code sequence of Table-I.
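The independence test applied here can be sketched as a register-overlap check. This is a simplified model assuming only register dependences matter (a real compiler/scheduler must also consider memory and control dependences); the function name and set-based encoding are illustrative, not part of any actual toolchain.

```python
# Hypothetical register-dependence test: an operation may be moved into
# a jump's delay slot only if it neither reads nor writes any register
# the jump depends on (here, the condition register r6).
def independent(op_reads, op_writes, jmp_reads):
    touched = set(op_reads) | set(op_writes)
    return touched.isdisjoint(jmp_reads)

# SLL reads r21, r24 and writes r23, versus a JMP conditioned on r6:
print(independent({"r21", "r24"}, {"r23"}, {"r6"}))  # True: movable
```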
Over the years, the number of processor pipeline stages being introduced in new products has steadily increased. Such has been accompanied by higher clock frequencies. The prediction schemes too are getting more complex, in order to improve prediction accuracy. But as a consequence, each mis-prediction becomes more expensive. The number of jump delay slots has to be increased to account for the pipeline distance between where in the execute stage (EX) the outcome of the JMP will be known, and where in the instruction fetch stage (IF) the result has to land. In an 8-stage pipeline, such as in
If an exemplary 8-stage pipeline requires five jump/branch delay slots, the compiler/scheduler for it needs to find five operations that are independent of the jump outcome so they can be moved to the JMP operation's delay slots. Such is not always possible. Consider the original code sequence in Table-IV.
Here, there are five operations that precede the JMP. But the ADD calculates the jump condition in r6 from r2 and r23, and so the ADD depends on the outcome of the SUB, e.g., r2 is the result calculated by SUB. The SUB and ADD operations are therefore not candidates that can be repositioned after the JMP. This leaves only the MUL, AND, and SLL operations as viable options for relocation into the delay slots following the JMP. So only three of the five operations listed here can be used, filling fewer than all of the delay slots with useful operations. If no more can be found, two of the delay slots will have to be filled with useless NOP's. E.g., as in Table-V.
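The slot-filling arithmetic of the Table-IV/Table-V scenario can be sketched as follows; `fill_slots` is a hypothetical helper, not part of any actual toolchain.

```python
# Filling five delay slots from the ops preceding the JMP: SUB and ADD
# feed the condition in r6, so only MUL, AND and SLL are movable; the
# remaining slots are padded with NOPs.
def fill_slots(movable_ops, n_slots):
    slots = movable_ops[:n_slots]
    slots += ["NOP"] * (n_slots - len(slots))
    return slots

print(fill_slots(["MUL", "AND", "SLL"], 5))
# ['MUL', 'AND', 'SLL', 'NOP', 'NOP']
```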
A shortage of operations that are independent of the JMP operation necessitates the inclusion by the compiler of useless NOP operations and increases the size of the assembler and machine code. Such is current practice in the state-of-the-art. For example, statically scheduled processors like the Texas Instruments TMS320C6x and the NXP/Philips Semiconductors TriMedia processors use a fixed number of delay slots for each jump operation.
In an example embodiment, a compiler/scheduler for a pipelined processor sorts out all the jump/branch instructions into types, such as conditional immediate, conditional register-based, non-conditional immediate, and non-conditional register-based. It assumes that the target addresses for each type will be resolved during run time at different stages in the instruction fetch, instruction decode, register file, and instruction execute stages. Different numbers of branch delay slots are assigned to each jump/branch instruction according to how soon the target address can be resolved. The compiler/scheduler then fills these branch delay slots with as many useful instructions as are available, and that can be executed without regard to the branch taken in the associated jump. The hardware construction of the pipelined processor is such that the reloading of the pipeline during a branch fits the respective number of delay slots known by the compiler/scheduler.
An advantage of the present invention is that significant processor performance improvements can be achieved by the compiler/scheduler.
Another advantage of the present invention is that a pipelined processor is provided in which infringement can be readily detected.
A still further advantage of the present invention is that a compiler/scheduler is provided that can accommodate any kind or size of pipelined computer architecture.
The above summary of the present invention is not intended to represent each disclosed embodiment, or every aspect, of the present invention. Other aspects and example embodiments are provided in the figures and the detailed description that follows.
The invention may be more completely understood in consideration of the following detailed description of various embodiments of the invention in connection with the accompanying drawings, in which:
While the invention is amenable to various modifications and alternative forms, specifics thereof have been shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
Processor embodiments of the present invention assign different numbers of delay slots to each different type of jump operation. Since the number of delay slots is kept to the minimum needed by each type of jump operation, the compiler/scheduler need not insert as many useless NOP's as would otherwise be the case. Such thereby effectively improves processor performance.
The NXP/Philips TM3260 TriMedia processor is a five-issue, very long instruction word (VLIW) processor. It supports a 4-Gbyte, 32-bit address space, and has a 32-bit datapath. The processor has one hundred twenty-eight 32-bit general-purpose registers, r0, . . . , r127, organized in a unified register-file structure. Register r0 always contains the integer value “0”; register r1 always contains the integer value “1”. The TM3260 issues one VLIW instruction every cycle. Each instruction may include as many as five operations. Each of the operations may be guarded, e.g., their execution can be made conditional based on the value of the least significant bit of the operation's guard register. Such allows the compiler/scheduler to do aggressive speculation/predication in order to exploit parallelism in the source code, and thereby gain better processor performance.
Referring now to
Taking into account the availability of condition and jump target information: “conditional jump with immediate target” JMP operations can be performed in the EX1 stage, with 5 delay slots; “unconditional jump with immediate target” JMP operations in the ID1 stage, with 2 delay slots; “conditional jump with register based target” JMP operations in the EX1 stage, with 5 delay slots; and “unconditional jump with register based target” JMP operations in the RF stage, with 4 delay slots.
This is a significant improvement compared to always having five delay slots, allowing the compiler/scheduler to generate more efficient code.
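The per-type lookup implied above can be sketched as a small table, using the stage names and slot counts from the text; the dictionary structure and function name are illustrative only.

```python
# Per-type delay-slot counts for the exemplary 8-stage pipeline: the
# stage that resolves the condition/target determines how many delay
# slots the compiler/scheduler must insert.
DELAY_SLOTS = {
    ("conditional",   "immediate"): ("EX1", 5),
    ("unconditional", "immediate"): ("ID1", 2),
    ("conditional",   "register"):  ("EX1", 5),
    ("unconditional", "register"):  ("RF",  4),
}

def slots_for(cond, target):
    stage, n = DELAY_SLOTS[(cond, target)]
    return n

print(slots_for("unconditional", "immediate"))  # 2
```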
If the JMP was a register-based non-conditional type, then a step 314 inserts four delay slots in the assembler string. If the JMP was conditional, it will take more time to evaluate during execute time. A step 316 inspects the operation for immediate or register-based type. In this example, both types of conditional JMP's will require the insertion of five delay slots by a step 318. Other applications may make distinctions on how many delay slots to insert based on whether the JMP conditional type is immediate or register-based.
A step 320 scavenges for instructions proximate to the JMP that could be in the pipeline during, and executed after, the JMP because they are independent of the JMP. If so, the source code is reorganized accordingly. If not enough can be found, then NOP's are used to fill in the balance, e.g., to fill the 2, 4, or 5 delay slots inserted by steps 312, 314, and 318. A step 322 determines if more instructions in the source code need evaluation, and if so control returns to step 304. Otherwise, a step 324 assembles the reorganized code.
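The flow of steps 312 through 320 can be sketched as follows. This is a simplified model; the function and its argument encoding are hypothetical, and the scavenging of independent operations (step 320) is represented by a pre-computed list.

```python
# Sketch of the scheduling pass described by steps 312-320: classify
# the JMP, pick the per-type number of delay slots, then fill them with
# independent operations, padding any shortfall with NOPs.
def schedule_jump(is_conditional, is_register_based, independent_ops):
    if not is_conditional:
        n = 4 if is_register_based else 2   # steps 314 / 312
    else:
        n = 5                               # step 318, both cond. types
    slots = independent_ops[:n]             # step 320: scavenged ops
    slots += ["NOP"] * (n - len(slots))     # step 320: NOP padding
    return slots

print(schedule_jump(False, False, []))            # ['NOP', 'NOP']
print(schedule_jump(True, True, ["MUL", "AND"]))  # 5 slots, 3 NOPs
```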
Consider, for example, an NXP/Philips VLIW TriMedia processor for audio-video processing. An instruction fetch (IF) stage 420 in a processor pipeline fetches the operations 414 and operands 416 and passes them on to an instruction decode (ID) stage 422. While the ID stage 422 is decoding the instruction, the IF stage is fetching the next one from program 412. The ID stage 422 will detect any JMP instructions and be able to classify them according to how many delay slots are needed to resolve the target address. If such target addresses cannot be resolved by the ID stage 422, they are passed on in the next cycle to be executed by execution (EX) stage 424. If the ID stage 422 was able to resolve the target address at that point in the processor pipeline, a program counter 426 is loaded and the IF stage 420 updated. Otherwise, the EX stage 424 will update the program counter 426 in a next cycle or two.
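The ID-stage early-resolution decision described here can be sketched as follows. This is a simplified model of the 3-stage view above (for it, register-based targets are treated as deferring to EX); the function name and flags are hypothetical.

```python
# Which stage can load the program counter: an unconditional jump with
# an immediate target is fully decodable in ID, so the PC is updated
# early; anything needing condition evaluation or a register read is
# deferred to EX.
def resolving_stage(is_conditional, target_is_immediate):
    if is_conditional:
        return "EX"   # condition must be evaluated first
    if target_is_immediate:
        return "ID"   # target decoded directly from the operation word
    return "EX"       # register-based target read later

print(resolving_stage(False, True))  # ID
```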
While the present invention has been described with reference to several particular example embodiments, those skilled in the art will recognize that many changes may be made thereto without departing from the spirit and scope of the present invention, which is set forth in the following claims.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/IB2007/055014 | 12/11/2007 | WO | 00 | 6/10/2009 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2008/072178 | 6/19/2008 | WO | A |
Number | Date | Country | |
---|---|---|---|
20100050164 A1 | Feb 2010 | US |
Number | Date | Country | |
---|---|---|---|
60874530 | Dec 2006 | US |