This invention relates to the processing of instructions. In particular, it relates to code optimization when processing instructions in a microprocessor.
Broadly, the function of a compiler is to compile a source program written in a high level language into a target program for a given instruction set architecture (ISA), which is understood by a machine in which the compiled program is executed.
In order to increase computational throughput, a compiler may perform transformations in order to optimize the speed at which the compiled program can be executed.
The output of the compiler, i.e., the compiled code, will be referred to hereinafter as “macroinstructions.” This is in contrast to microinstructions, which are the machine implementation-specific internal representations of instructions for a given ISA. Generally, these microinstructions are not visible to a compiler. A given macroinstruction may map to several microinstructions, each of which is machine implementation-specific.
Since a particular microinstruction will typically execute correctly only on a machine that understands it, a natural limit on how much optimization a compiler does is imposed by the requirement that, in general, the macroinstructions produced by a compiler should be able to execute on all machines that support a given ISA, regardless of which microinstructions correspond to those macroinstructions.
If the microinstructions corresponding to each macroinstruction in an ISA are known, a compiler may be able to optimize the code even further by producing machine implementation-specific microinstructions.
However, in such a case, because the microinstructions are machine implementation-specific, they will no longer execute on other machines that share the same ISA but have different microinstructions corresponding to the macroinstructions in the ISA.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the invention.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
In producing the macroinstructions 14, the compiler 10 usually performs one or more code optimizations which allow the macroinstructions 14 to execute faster on the target machine.
In general, the macroinstructions 14 comprise complex instructions which are converted into simple instructions that are then executed on the target machine. These simple instructions are known as microinstructions. Microinstructions are highly ISA implementation-specific. Thus, a given instruction written for one ISA translates into different microinstructions on different machine implementations of the ISA.
Macroinstructions 14 include complex instructions, in contrast to microinstructions, which are simple. If the compiler 10 converts the source program 12 into microinstructions, then these microinstructions execute more efficiently or rapidly than the macroinstructions 14 would. This is because microinstructions are directly executable, whereas macroinstructions have to be converted to microinstructions prior to execution. However, since microinstructions are highly machine implementation-specific, microinstructions for one machine implementation of an ISA may not be able to execute on a different machine implementation of that ISA. This is undesirable since a general goal of all compiled programs is that they should execute on all machine implementations that support a given ISA.
Thus, compilers, in general, stop short of optimizing code to the level of introducing machine implementation-specific microinstructions into a compiled program.
According to aspects of embodiments of the present invention, an intermediate code format is produced between the macroinstructions 14 and the machine implementation-specific microinstructions. In one embodiment, the intermediate code format includes a hybrid of macroinstructions and microinstructions. During execution of the intermediate code, if a machine implementation understands the microinstructions, then the microinstructions are executed; otherwise, the macroinstructions are executed. Since the intermediate code format of an embodiment of the present invention includes macroinstructions, the code is able to execute on all machine implementations for a given ISA. One advantage of the techniques disclosed below is that they provide a code format that includes microinstructions, which may be executed more rapidly or efficiently on a target machine that understands them, while at the same time including macroinstructions, which may be executed by a machine that does not understand the microinstructions.
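By way of illustration only, the following sketch models this fetch-time decision in Python. The HybridRegion type, its fields, and the supports_microcode flag are hypothetical names invented for the example, not part of any ISA described herein.

```python
# Minimal sketch of the hybrid-code dispatch idea described above.
# All names here (HybridRegion, supports_microcode) are illustrative,
# not part of any real ISA or toolchain.

from dataclasses import dataclass
from typing import List


@dataclass
class HybridRegion:
    macroinstructions: List[str]   # portable ISA-level encoding
    microinstructions: List[str]   # machine-implementation-specific encoding


def execute_region(region: HybridRegion, supports_microcode: bool) -> List[str]:
    """Return the instruction sequence a machine would actually run."""
    if supports_microcode and region.microinstructions:
        # Implementation understands the embedded microinstructions:
        # run the faster, machine-specific encoding.
        return region.microinstructions
    # Fall back to the portable macroinstructions.
    return region.macroinstructions


region = HybridRegion(macroinstructions=["add r1, r2, r3"],
                      microinstructions=["uadd t1, t2", "umov r1, t1"])
print(execute_region(region, supports_microcode=True))   # microinstructions
print(execute_region(region, supports_microcode=False))  # macroinstructions
```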
In another embodiment, the compiler 10′ produces binary code which includes ISA instructions (macroinstructions) as well as an alternative representation of the microinstructions.
During program execution, it may turn out that there is a high probability that the basic blocks 22 to 28 are executed; in other words, the branches between basic blocks 22, 24, 26, and 28 are actually taken.
However, the basic blocks 22 to 28 may reside on four separate cache lines, as indicated in the accompanying drawings.
Since basic blocks 22 to 28 have a high probability of being executed, an alternative representation of the blocks may pack these blocks together to define basic blocks 22′ to 28′, as illustrated in the accompanying drawings.
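The space saving from such packing can be illustrated with a small worked example. In the following sketch, the block sizes, addresses, and 64-byte cache line are assumed values chosen for illustration:

```python
# Illustrative sketch of packing hot basic blocks into contiguous
# cache lines. Sizes and addresses are made up for the example.

CACHE_LINE = 64  # bytes, assumed

# (name, start_address, size_in_bytes): each block starts on its own line
hot_blocks = [("bb22", 0x000, 24), ("bb24", 0x040, 16),
              ("bb26", 0x080, 20), ("bb28", 0x0C0, 12)]

def lines_used(blocks):
    """Count the distinct cache lines touched by the given layout."""
    lines = set()
    for _, start, size in blocks:
        for addr in range(start, start + size):
            lines.add(addr // CACHE_LINE)
    return len(lines)

def pack(blocks, base=0x1000):
    """Lay the blocks out back to back starting at `base`."""
    packed, addr = [], base
    for name, _, size in blocks:
        packed.append((name + "'", addr, size))
        addr += size
    return packed

print("original:", lines_used(hot_blocks), "cache lines")       # 4
print("packed:  ", lines_used(pack(hot_blocks)), "cache lines")  # 2
```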
For ease of reference, the alternative representation code 22′ to 28′ will be referred to as “mesocode.” In some embodiments, the mesocode is encapsulated by the boundary markers designated by reference numerals 30 and 32, as seen in the accompanying drawings.
Execution of the mesocode is triggered whenever a trigger is encountered in the original code. Thus, aspects of embodiments of the present invention involve embedding a trigger in the original code, e.g., the trigger 34 shown in the accompanying drawings.
In other embodiments, an explicit trigger is not encoded in the original code, since the start boundary marker 30 may be used as a trigger.
The boundary markers 30, 32, and the trigger 34 may be in the format of the ISA for which the code was compiled.
In one embodiment, the boundary markers 30, 32, and the trigger 34 are defined using unused templates for a given ISA, e.g., the Itanium ISA. To achieve this, the mesocoded region may be bounded by instruction syllables or micro-ops that are not used by any other ISA templates. The mesocoded regions may be kept separate as appendices to the original code and are thus unobtrusive to the original code. In another embodiment, the mesocode may redundantly express frequently executed portions of the original code, encoded in a different, more efficient format.
Explicitly Parallel Instruction Computing (EPIC) ISAs, including the Itanium ISA, use template-carrying bundles as atomic units that are fetched and executed. Templates make it possible to determine the types of instructions in a bundle well before the instructions are decoded. Individual instructions inside a bundle act more like micro-ops and will be referred to as such to avoid confusion. Stop bits are used to express parallelism (for instructions between stop bits) and data dependency (for instructions across stop bits). The Itanium ISA also includes predication and static branch hints at the micro-op level, which, in conjunction with the stop bits and templates, could be used to express program behavior at a granularity beyond the traditional basic block level.
The problem with forcing micro-ops into fixed-issue templates is that no-ops (NOPs) are introduced into the code when no usable instructions can be found to fill out the rest of a template. These NOPs dilute code density and degrade cache and pipeline utilization by taking up valuable space and pipeline resources that could be filled with useful instructions.
The effective fetch bandwidth is reduced due to the effects of these wasteful instructions. Predication can have the same effect, in that instructions that are predicated false at runtime effectively become NOPs in the dynamic code stream, which occupy these resources and degrade the instructions per cycle (IPC). Another problem with using fixed-issue templates is that branch targets are required to be bundle-aligned. This can introduce cache line fragmentation when the cache line is bigger than a bundle. When a taken branch or a branch target is not aligned to the cache line, the rest of the cache line is wasted, which reduces effective usage of the fetch bandwidth. These problems of code density dilution may be solved by the introduction of a mesocoded region in the compiled code, which in one embodiment may represent compacted code with the wasteful NOPs and predicated-false instructions removed.
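The density loss caused by NOP padding can be illustrated with a simplified model. The sketch below packs micro-ops into three-slot bundles and pads at stop boundaries; it deliberately ignores the real Itanium template type rules, so the numbers are illustrative only:

```python
# Simplified sketch of NOP dilution with fixed-issue bundles.
# Real Itanium templates also constrain which unit types may share
# a bundle; here we only model the 3-slot bundle size and the rule
# that a bundle is padded and closed at a "stop" (dependence break).

BUNDLE_SLOTS = 3

def bundle(ops):
    """Greedily pack ops into 3-slot bundles, padding with NOPs at stops."""
    bundles, current = [], []
    for op in ops:
        if op == "stop":
            # Dependence boundary: pad the open bundle and close it.
            if current:
                current += ["nop"] * (BUNDLE_SLOTS - len(current))
                bundles.append(current)
                current = []
            continue
        current.append(op)
        if len(current) == BUNDLE_SLOTS:
            bundles.append(current)
            current = []
    if current:
        current += ["nop"] * (BUNDLE_SLOTS - len(current))
        bundles.append(current)
    return bundles

ops = ["add", "ld", "stop", "cmp", "stop", "br"]
packed = bundle(ops)
useful = sum(op != "nop" for b in packed for op in b)
total = len(packed) * BUNDLE_SLOTS
print(packed)
print(f"code density: {useful}/{total} slots useful")  # 4/9 here
```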
User I/O devices 62 are coupled to the bus 54 and are operative to communicate information in appropriately structured form to and from the other parts of the computer 50. The user I/O devices 62 may include a keyboard, mouse, card reader, magnetic or paper tape, magnetic disk, optical disk, or other available input devices, including another computer.
A mass storage device 64 is coupled to the bus 54 and may be implemented using one or more magnetic hard disks, magnetic tapes, CD-ROMs, large banks of random access memory, or the like. A wide variety of random access and read-only memory technologies are available and are equivalent for purposes of the present invention. The mass storage 64 may include computer programs and data stored therein. Some or all of the mass storage 64 may be configured to be incorporated as part of the memory system 58.
In a typical computer system 50, the processor 52, the I/O device 56, the memory system 58, and the mass storage device 64 are coupled to the bus 54, formed on a printed circuit board, and integrated into a single housing. However, the particular components chosen to be integrated into a single housing are based upon market and design choices. Accordingly, it is expressly understood that fewer or more devices may be incorporated within the housing suggested by dashed line 68.
A display device 70 is used to display messages, data, a graphical or command line user interface, or other communications with a user. The display 70 may be implemented, for example, by a Cathode Ray Tube (CRT) monitor, a Liquid Crystal Display (LCD), or any available equivalent. A communication interface 72 provides communications capability to other devices.
Referring now to the accompanying drawings, for ease of description a single pipeline is assumed in what follows. The pipeline stages 102 to 110 are shown in those drawings.
As noted above, the mesocoded regions may include machine implementation-specific microinstructions, alternative non-microcode encodings, e.g., of frequently executed code, and the like. In another embodiment, the mesocoded region may include instructions of a different ISA definition. For example, in one embodiment the mesocoded region may include instructions in the format of the ISA of a co-processor or an accelerator unit. In this embodiment, when the decoder of the decode stage 104 detects the mesocoded region, it automatically routes the mesocoded instructions to the co-processor/accelerator unit, as illustrated in the accompanying drawings.
In some cases, the mesocoded regions may include other types of coding, e.g., byte code for a Java Virtual Machine. In this case, in the error detection stage 108, an exception is thrown to a software handler 112, which then processes the byte code, as illustrated in the accompanying drawings.
According to a further aspect of one embodiment of the present invention, a program is characterized in terms of streams that comprise the basic blocks to be encoded as mesocode. Each basic block includes a sequence of instructions that starts at a taken-branch target and ends at a taken-branch instruction. In one embodiment, characterizing a program in terms of streams involves three general operations. The first operation partitions a global instruction execution trace into smaller, local instruction execution traces and determines the stream boundaries within each local instruction execution trace. The second operation creates a local dictionary of the unique streams seen during program execution in each local instruction trace and correlates the unique streams back to the global instruction execution trace. Finally, the third operation creates a global stream dictionary that is valid for all portions of the global instruction trace and re-labels the local instruction execution traces to reflect entries in the global stream dictionary.
Effectively, this methodology transforms traces of dynamic instructions into streams of basic blocks. In one embodiment, all unique streams have entries in the global dictionary and each unique stream is mapped to a unique symbol. Through frequency and coverage analysis (coverage is defined as the size of a stream times the frequency with which the stream is executed), all entries in the dictionary are ranked in order of priority.
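As a small worked example of the coverage metric, the following sketch ranks hypothetical dictionary entries by size times frequency; the stream names and numbers are illustrative only:

```python
# Sketch of ranking dictionary entries by coverage, where
# coverage = (stream length in instructions) * (execution frequency).

streams = {
    "S1": {"length": 12, "freq": 900},
    "S2": {"length": 40, "freq": 150},
    "S3": {"length": 6,  "freq": 2000},
}

for name, s in streams.items():
    s["coverage"] = s["length"] * s["freq"]

ranked = sorted(streams.items(), key=lambda kv: kv[1]["coverage"], reverse=True)
for name, s in ranked:
    print(name, s["coverage"])
# S3 (12000) > S1 (10800) > S2 (6000)
```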
In one embodiment, a software tool such as an instruction-accurate simulator is used to execute the program and to provide details of each instruction that was executed. It is possible to classify each instruction according to a type. For example, in one embodiment, the following information about instruction types is collected by the software tool (a classification sketch in code follows the list):
predicate true—taken branch;
predicate true—not taken branch;
predicate false—taken branch;
predicate false—not taken branch;
load instructions; and
store instructions.
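A minimal sketch of such a classification pass follows; the record format (opcode, is_branch, pred, and taken fields) is an assumption made for illustration:

```python
# Minimal sketch of classifying executed instructions by type, as an
# instruction-accurate simulator tool might. The record format is assumed.

from collections import Counter

def classify(rec):
    """Map one executed-instruction record to one of the listed types."""
    if rec["is_branch"]:
        pred = "predicate true" if rec["pred"] else "predicate false"
        taken = "taken branch" if rec["taken"] else "not taken branch"
        return f"{pred} - {taken}"
    if rec["opcode"] == "ld":
        return "load instruction"
    if rec["opcode"] == "st":
        return "store instruction"
    return "other"

trace = [
    {"opcode": "br", "is_branch": True, "pred": True, "taken": True},
    {"opcode": "ld", "is_branch": False, "pred": True, "taken": False},
    {"opcode": "st", "is_branch": False, "pred": True, "taken": False},
]
print(Counter(classify(r) for r in trace))
```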
The software tool may be used to concurrently determine the stream boundaries, which, as noted above, end on taken branches and begin at a branch target. Each stream has associated with it a start instruction pointer, an end instruction pointer, unique instruction counts, the length in instructions, and a profile of how many instructions of each type were executed. The ordering of the streams corresponds to the program (global) instruction execution trace.
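The boundary rule just stated (begin at a taken-branch target, end at a taken branch) can be sketched as follows; the trace format of (ip, taken-branch flag) pairs is assumed:

```python
# Sketch of splitting an instruction trace into streams: a stream
# begins at a taken-branch target and ends at a taken branch.

def split_streams(trace):
    """Return (start_ip, end_ip, length) for each stream in the trace."""
    streams, start, count = [], None, 0
    for ip, taken_branch in trace:
        if start is None:
            start = ip              # first instruction after a taken branch
        count += 1
        if taken_branch:            # a taken branch ends the current stream
            streams.append((start, ip, count))
            start, count = None, 0
    return streams

trace = [(0x100, False), (0x104, False), (0x108, True),   # stream 1
         (0x200, False), (0x204, True)]                    # stream 2
for start, end, n in split_streams(trace):
    print(hex(start), hex(end), n)
```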
In one embodiment, because the above-described instruction-level analysis is time consuming, the program trace is divided into a number of smaller chunks or local traces, each comprising a fixed number of instructions. Thereafter, each of the local traces is analyzed in parallel. This approach requires a final merging step, as described below. One advantage of dividing the program into local traces for parallel analysis is that additional computing resources may be applied concurrently, improving the efficiency of the analysis.
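One possible realization of this chunk-and-parallelize approach is sketched below; the chunk size, the worker pool, and the placeholder analyze function are all assumptions for illustration:

```python
# Sketch of dividing a long trace into fixed-size local traces and
# analyzing them in parallel. Chunk size and worker count are assumed.

from multiprocessing import Pool

CHUNK = 1_000_000  # instructions per local trace, assumed

def analyze(local_trace):
    """Placeholder for the per-chunk stream analysis described above."""
    return len(local_trace)  # e.g., would return a local stream dictionary

def split(trace, chunk=CHUNK):
    return [trace[i:i + chunk] for i in range(0, len(trace), chunk)]

if __name__ == "__main__":
    trace = list(range(3_500_000))        # stand-in for an instruction trace
    with Pool() as pool:
        results = pool.map(analyze, split(trace))
    print(results)  # per-chunk results, merged in a later step
```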
Once the analysis for each local trace is completed, the next operation involves grouping identical streams together and sorting them by their exit instruction pointer counts. Duplicate streams are removed and the frequencies of the remaining streams are updated. The resulting list contains only unique streams, and metrics about the streams such as the execution frequency of each stream. A unique identifier/symbol is associated with each stream. This operation is performed at a local trace level as described above and the result is a local stream dictionary that is then used to convert the raw local instruction trace to a stream trace. Thereafter, several merging operations are required to create a single global stream dictionary for the entire program. In one embodiment, each merging step takes two local stream dictionaries and removes duplicate streams, while keeping and updating the frequencies of the stream that occurred earliest in time. Additional sorting operations may be performed to identify streams, for example, with the highest frequency or coverage.
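The local-dictionary construction and pairwise merging just described might look as follows in outline; the stream keys and frequencies are illustrative:

```python
# Sketch of building local stream dictionaries and merging them into
# a global one, accumulating the frequencies of duplicate streams.

def build_local_dict(stream_trace):
    """Map each unique stream key to its local execution frequency."""
    d = {}
    for key in stream_trace:
        d[key] = d.get(key, 0) + 1
    return d

def merge(dict_a, dict_b):
    """Merge two local dictionaries; dict_a is the earlier-in-time one."""
    merged = dict(dict_a)
    for key, freq in dict_b.items():
        merged[key] = merged.get(key, 0) + freq
    return merged

local1 = build_local_dict(["A", "B", "A"])
local2 = build_local_dict(["B", "C"])
global_dict = merge(local1, local2)
print(global_dict)  # {'A': 2, 'B': 2, 'C': 1}
```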
Once the global stream dictionary 128 is created, a remapping phase is performed to re-label the stream-indexed local trace 124 with the unique symbols from the global dictionary 128. The remapping phase may be performed in parallel once the global dictionary 128 is created. The remapping process is illustrated in the accompanying drawings.
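Continuing the previous sketch, the remapping phase could be outlined as follows; the symbol assignment here is an illustrative choice:

```python
# Sketch of the remapping phase: assign each unique stream a global
# symbol, then re-label the stream-indexed local traces with it.

global_dict = {"A": 2, "B": 2, "C": 1}  # from the merge sketched above
symbol_of = {key: i for i, key in enumerate(global_dict)}  # stream -> symbol

local_traces = [["A", "B", "A"], ["B", "C"]]
relabeled = [[symbol_of[key] for key in t] for t in local_traces]
print(relabeled)  # [[0, 1, 0], [1, 2]]
```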
In one embodiment, once the streams have been identified, high confidence or “hot” streams are identified; these hot streams are the frequently executed ones. The process of identifying hot streams is illustrated in the accompanying drawings.
Once the hot streams and their children have been identified, a second scan of the execution trace is performed in order to construct a control flow graph (CFG) of program execution using only the hot and high confidence streams as nodes. All other streams are lumped together into a common sink. Low confidence edges and their associated nodes are pruned from the CFG based on a pruning criterion. In one embodiment, the pruning criterion is a frequency percentage defined as the execution frequency of an edge divided by the sum over all other out-edges from the same source node. The frequency percentage defines a strict percentage cut-off such that all edges below a percentage threshold and all edges leading to the common sink are removed. In one embodiment, a second pruning criterion examines the variance in frequency percentage across all the edges. The variance is the difference between each edge and the edge with the maximum frequency percentage. A given threshold is set for the cases with one and two edges and scaled down linearly if there are more edges. Edges falling above the threshold or leading to or from the common sink are discarded. This process of constructing the control flow graph is illustrated in the accompanying drawings.
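The frequency-percentage criterion can be illustrated concretely. In the sketch below, the edge counts, the 10% threshold, and the common-sink label are assumed values, and only the first pruning criterion is modeled:

```python
# Sketch of the frequency-percentage pruning criterion: an edge's
# percentage is its frequency divided by the total out-edge frequency
# of its source node; edges below the cut-off or touching the common
# sink are dropped.

from collections import defaultdict

edges = {  # (src, dst) -> execution frequency
    ("S1", "S2"): 90, ("S1", "S3"): 8, ("S1", "SINK"): 2,
    ("S2", "S1"): 50, ("S2", "SINK"): 50,
}
THRESHOLD = 0.10  # 10% cut-off, assumed

out_total = defaultdict(int)
for (src, _), freq in edges.items():
    out_total[src] += freq

pruned = {
    e: f for e, f in edges.items()
    if "SINK" not in e and f / out_total[e[0]] >= THRESHOLD
}
print(pruned)  # keeps (S1, S2) at 90% and (S2, S1) at 50%
```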
In one embodiment, the pruned CFG 158 is scanned (see block 160 in the corresponding flow chart) in order to extract the streams to be encoded as mesocode.
In one embodiment, the techniques for characterizing a program in terms of streams of basic blocks may be implemented in software.
The operations shown in the flow charts described above may thus be performed in software.
The characterization of a program in terms of streams as described above may also be performed in hardware. Thus, embodiments of the invention include hardware structures within a processor to identify streams of basic blocks during program execution.
The processor 252 further includes a stream predictor 268, whose function will be explained in greater detail below. As can be seen, the processor 252 includes a register file 270, and during execution of an instruction in the execution stage 260, values are written to and read from the register file 270. As discussed above, the check/error detect stage 262 detects whether the correct instruction was executed in the execution stage 260, and only if the correct instruction was executed is the processor state allowed to change in the write-back stage 264.
The processor 252 further includes a cache memory hierarchy comprising a level one instruction cache 272, a level one data cache 274, a level two cache 276, and a level three cache 278. The level two cache 276 is connected to the level three cache 278 via a cache bus 280. The system 250 also includes a memory 282 which is connected via a system bus 284 to the processor 252.
Based on information received from the error detect stage 262, the stream predictor 268 constructs a stream dictionary, such as the stream dictionary 300 illustrated in the accompanying drawings.
In order to create the stream dictionaries 300, 302, the stream predictor 268 performs the operations shown in the corresponding flow chart.
In order to use the stream dictionary to predict which streams are likely to be taken, a confidence must be associated with the ip of each next stream. The higher the confidence, the more likely the next stream is to be taken. This confidence information may be integrated into the stream dictionary. Alternatively, a separate stream predictor table may be created, such as the table 400 shown in the accompanying drawings.
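A possible shape for such a predictor table, in the spirit of the table 400, is sketched below; the field names and the minimum-confidence threshold are assumptions for illustration:

```python
# Sketch of a stream predictor table: each entry maps a stream's
# start ip to its end ip, the predicted next-stream ip, and a
# confidence counter. The field layout is assumed.

table = {
    0x100: {"end_ip": 0x108, "next_ip": 0x200, "confidence": 3},
    0x200: {"end_ip": 0x204, "next_ip": 0x100, "confidence": 1},
}

def predict(ip, min_confidence=2):
    """Predict the next stream's start ip, if confidence is high enough."""
    entry = table.get(ip)
    if entry and entry["confidence"] >= min_confidence:
        return entry["next_ip"]
    return None  # fall back to the ordinary branch predictor

print(hex(predict(0x100) or 0))  # 0x200: confident prediction
print(predict(0x200))            # None: confidence too low
```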
In use, the fetch/prefetch stage 256 submits the address of a branch instruction to the stream predictor 268, as well as to the branch predictor 266, for a look-up. The stream predictor 268 uses the input ip to predict the ip of a stream, and the operations it performs in making this prediction proceed as per the corresponding flow chart.
In order to maintain the accuracy of the prediction, after the write-back stage 264, the stream prediction table 400 needs to be updated based on information about what instructions were actually executed.
Referring to the corresponding flow chart: if at block 502 it is determined that the retired instruction is a branch instruction, then at block 514 the stream predictor 268 determines whether the processor is operating in normal mode. If the processor is operating in normal mode, then at block 516 the prediction associated with the retired instruction is checked. If the prediction is correct, then at block 518 the confidence for that prediction is increased; otherwise, at block 520 the confidence for that prediction is decreased. If at block 514 it is determined that the processor is operating in stream mode, then at block 512 the stream predictor table 400 is searched to determine whether the ip of the branch matches an end ip of a stream. If there is a match, then at block 524 the confidence for the matched stream is updated. Otherwise, at block 526, a determination is made as to whether the branch was taken. If the branch was taken, then a new stream entry is created at block 528. At block 530, the mode of the processor is set to stream mode, and at block 532 the confidence for the new stream is updated.
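The update flow just described can be outlined in software, though the hardware details are not specified here; the entry fields, the retired-instruction record, and the mapping of flow chart blocks to statements are all interpretive assumptions:

```python
# Interpretive sketch of the confidence-update flow described above.
# The table format follows the predictor sketch earlier and is assumed.

def update(table, retired, mode):
    """Update predictor state for one retired instruction; return the mode."""
    if not retired["is_branch"]:                 # block 502: not a branch
        return mode
    if mode == "normal":                         # block 514
        entry = table.get(retired["predicted_for"])
        if entry is not None:                    # block 516: check prediction
            if retired["target"] == entry["next_ip"]:
                entry["confidence"] += 1         # block 518: correct
            else:
                entry["confidence"] = max(0, entry["confidence"] - 1)  # 520
        return mode
    for entry in table.values():                 # search for a matching end ip
        if retired["ip"] == entry["end_ip"]:
            entry["confidence"] += 1             # block 524: matched stream
            return mode
    if retired["taken"]:                         # block 526: branch taken?
        table[retired["target"]] = {             # block 528: new stream entry
            "end_ip": None,                      # end of new stream not yet known
            "next_ip": None,
            "confidence": 1,                     # block 532: set confidence
        }
        mode = "stream"                          # block 530: enter stream mode
    return mode

table = {0x100: {"end_ip": 0x108, "next_ip": 0x200, "confidence": 2}}
mode = update(table, {"is_branch": True, "predicted_for": 0x100,
                      "target": 0x200, "ip": 0x108, "taken": True}, "normal")
print(table[0x100]["confidence"], mode)  # 3 normal
```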
Although the present invention has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes can be made to these embodiments without departing from the broader spirit of the invention as set forth in the claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.