This invention relates to computer processing systems, and particularly to controlling entry insertion into a Branch Target Buffer via a queue structure which also serves to synchronize asynchronous branch prediction with instruction decode, thereby overcoming start-up latency effects in a computer processing system.
A basic pipeline microarchitecture of a microprocessor processes one instruction at a time. The basic dataflow for an instruction follows the steps of: instruction fetch, decode, address computation, data read, execute, and write back. Each stage within a pipeline or pipe occurs in order, and hence a given stage cannot progress unless the stage in front of it is progressing. To achieve the highest performance for such a base design, one instruction enters the pipeline every cycle. Whenever the pipeline has to be delayed or cleared, latency is added, which in turn can be observed in the performance with which a microprocessor carries out a task. While many complexities can be added onto such a pipe design, this sets the groundwork for the branch prediction theory related to the stated invention.
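The pipeline behavior described above can be modeled, by way of illustration only, with a minimal C sketch; the stage names follow the dataflow listed above, and the model itself is an assumption for illustration rather than part of the specification.

    #include <stdbool.h>

    /* Illustrative model of the basic six-stage pipe described above.
     * A stage can progress only when the stage in front of it is
     * progressing, so a stall in any stage ripples backward. */
    enum stage { FETCH, DECODE, ADDR_COMP, DATA_READ, EXECUTE, WRITE_BACK,
                 NUM_STAGES };

    /* One cycle: if stage s+1 is held, stage s must hold as well. */
    void propagate_stalls(bool advancing[NUM_STAGES])
    {
        for (int s = NUM_STAGES - 2; s >= 0; s--)
            if (!advancing[s + 1])
                advancing[s] = false;
    }

In such a model, clearing the pipe corresponds to invalidating every stage at once, which is the latency source the remainder of this description addresses.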
There are many dependencies between instructions which prevent the optimal case of a new instruction entering the pipe every cycle. These dependencies add latency to the pipe. One category of latency contribution deals with branches. When a branch is decoded, it can either be “taken” or “not taken.” A branch is an instruction which can either fall through to the next sequential instruction, that is, be “not taken,” or branch off to another instruction address, that is, be “taken,” and carry out execution on a different sequence of code.
At decode time, the branch is detected and must wait to be resolved in order to know the proper direction in which the instruction stream is to proceed. Waiting potentially multiple pipeline stages for the branch to resolve adds latency into the pipeline. To overcome this latency, the direction of the branch can be predicted such that the pipe begins decoding down either the “taken” or the “not taken” path. At branch resolution time, the guessed direction is compared to the actual direction the branch took. If the actual direction and the guessed direction are the same, then the latency of waiting for the branch to resolve has been removed from the pipeline. If the actual and predicted directions miscompare, then decoding proceeded down the improper path: all instructions behind the improperly guessed branch must be flushed out of the pipe, and the pipe must be restarted at the correct instruction address to begin decoding the actual path of the given branch.
Because of the controls involved with flushing the pipe and starting over, there is a penalty associated with an improper guess, and more latency is added into the pipe than if the machine had simply waited for the branch to resolve before decoding further. With a proportionally higher rate of correctly guessed paths, the latency removed from the pipe by guessing the correct direction outweighs the latency added to the pipe by guessing the direction incorrectly.
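This trade-off can be made concrete with illustrative arithmetic; the cycle counts below are assumptions chosen only for the example.

    /* Expected per-branch cost of predicting versus waiting.
     * resolve_cycles: stall paid when always waiting for resolution.
     * flush_penalty:  cost of flushing and restarting on a wrong guess. */
    double cost_waiting(double resolve_cycles)
    {
        return resolve_cycles;
    }

    double cost_predicting(double p_correct, double flush_penalty)
    {
        /* A correct guess adds no latency; a wrong guess pays the flush. */
        return (1.0 - p_correct) * flush_penalty;
    }

For example, with an assumed 5-cycle resolution stall and 12-cycle flush penalty, predicting wins whenever (1 - p) * 12 < 5, that is, whenever the prediction accuracy p exceeds roughly 58%.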
To improve the accuracy of branch direction guesses, a branch history table (BHT) can be implemented which allows the direction of a branch to be guessed based on the direction the branch previously went. If the branch is always taken, as is the case for a subroutine return, then the branch will always be guessed as taken. IF/THEN/ELSE structures are more complex in their behavior: a branch may be always taken, sometimes taken and sometimes not taken, or always not taken. The implementation of the dynamic branch predictor determines how well the BHT predicts the direction of such branches.
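By way of illustration, a minimal C sketch of one common BHT implementation follows: a table of two-bit saturating counters indexed by low-order instruction address bits. The counter width and table size are assumptions; the description above requires only that direction be guessed from past behavior.

    #include <stdbool.h>
    #include <stdint.h>

    #define BHT_ENTRIES 1024

    /* 0,1 = predict not taken; 2,3 = predict taken. */
    static uint8_t bht[BHT_ENTRIES];

    static unsigned bht_index(uint64_t iaddr) { return iaddr % BHT_ENTRIES; }

    bool bht_predict_taken(uint64_t iaddr)
    {
        return bht[bht_index(iaddr)] >= 2;
    }

    void bht_update(uint64_t iaddr, bool taken)
    {
        uint8_t *ctr = &bht[bht_index(iaddr)];
        if (taken && *ctr < 3)  (*ctr)++;  /* saturate at strongly taken */
        if (!taken && *ctr > 0) (*ctr)--;  /* saturate at strongly not taken */
    }

The two-bit counters tolerate a single anomalous direction, so an always-taken branch such as a subroutine return remains predicted taken, while alternating IF/THEN/ELSE behavior degrades accuracy as noted above.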
When a branch is guessed taken, the target of the branch is to be decoded. The target is acquired by making a fetch request to the instruction cache for the target address of the given branch. This fetch involves minimal latency if the target address is found in the first level of cache. If there is not a hit in the first level of cache, then the fetch continues through the memory and storage hierarchy of the machine until the instruction text for the target of the branch is acquired. Therefore, any taken branch detected at decode carries at least a minimal latency that is added to the time the pipeline takes to process the given instruction, and upon missing in the first level of the hierarchy, the penalty grows the further up the hierarchy the fetch request must progress before a hit occurs. To hide part or all of the latency associated with fetching a branch target, a branch target buffer (BTB) can work in parallel with a BHT.
Given the address currently being decoded from, the BTB can search forward from this point for the next instruction address which contains a branch. Along with the instruction addresses of branches, the target of each branch is also stored with its BTB entry. With the target stored, the target address can be fetched before the branch is ever decoded. By fetching the target address ahead of decode, the latency associated with a cache miss can be reduced to, at most, the time between the fetch request and the decode of the branch's target.
In designing a BTB, the number of branches that can be stored in it is part of the equation that determines how beneficial the structure is. In general, a BTB is indexed by part of an instruction address within the processor, and tag bits are stored in the BTB such that the tag bits must match the remaining address bits of concern that were not used for the indexing. To improve the efficiency of the BTB, it can be created with an associativity greater than one, so that multiple branch/target pairs can be stored for a given index into the array. To determine which is the correct entry, if there is one at all, the tag bits are used to select at most one entry from the multiple entries stored for a given index.
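By way of illustration, the indexed, tagged, set-associative structure just described can be sketched in C as follows; the set count, associativity, and address-bit split are assumptions chosen for the example.

    #include <stdbool.h>
    #include <stdint.h>

    #define BTB_SETS 512
    #define BTB_WAYS 4   /* associativity greater than one */

    struct btb_entry {
        bool     valid;
        uint64_t tag;     /* address bits of concern not used for indexing */
        uint64_t target;  /* target address stored with the branch */
    };

    static struct btb_entry btb[BTB_SETS][BTB_WAYS];

    static unsigned btb_index(uint64_t iaddr) { return (iaddr >> 2) % BTB_SETS; }
    static uint64_t btb_tag(uint64_t iaddr)   { return (iaddr >> 2) / BTB_SETS; }

    /* The tag bits select at most one entry from the indexed set. */
    bool btb_lookup(uint64_t iaddr, uint64_t *target)
    {
        struct btb_entry *set = btb[btb_index(iaddr)];
        for (int w = 0; w < BTB_WAYS; w++) {
            if (set[w].valid && set[w].tag == btb_tag(iaddr)) {
                *target = set[w].target;
                return true;
            }
        }
        return false;
    }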
When a branch is detected at decode time that was not found ahead of time by the asynchronous BTB/BHT function, the branch is termed a surprise branch. A surprise branch is any branch which was not found by the dynamic branch prediction logic ahead of decode, that is, was not found in the BTB/BHT. There are two reasons a branch is not found in the BTB/BHT. First, if a branch was never installed in the BTB/BHT, it cannot be found because it is nowhere to be found. Second, the branch resides in the BTB/BHT, but not enough processing time was available for the search to find the branch before it was decoded. In general, branch prediction search algorithms can have a high throughput; however, the latency required to start a search can be long compared to the rate at which instructions start down the pipeline and to the time frame in which an instruction decodes.
Whenever a surprise branch is detected at decode time, then upon knowing the target and direction of the branch at a later time, the branch can be written into the BTB and the BHT. Once the entry is written into the tables, it can ideally be found the next time a search is in the area of the stated branch.
In the case that a branch resides in the BTB/BHT but latency effects prevent the branch from being found in time, the branch is treated like a surprise branch, as it is no different from a branch which is not in the table. Upon determining the target and direction of the branch, it will be written into the table. A standard method of entering a branch into the table is to place it in the column (associativity) that was least recently used, thereby keeping the most recently accessed branches in the tables. A read of the columns prior to the write is not performed to check for duplicates, because the number of reads that would have to be performed in addition to normal operation would cause additional latency delays; these would further hinder branches from being found in time to be predicted, and hence would increase the number of surprise branches in a series of code. Increasing the number of surprise branches decreases performance. To work around these latency issues, a recent entry queue has been designed to keep track of the recent entries into the BTB. Through this queue, additional reads from the BTB are not required. Furthermore, such a queue is orders of magnitude smaller than a duplicate array or an additional read port on the given array; the area spent on a second full-size array or an additional read port can be better spent elsewhere for higher performance gains. By adding a small recent entry queue, the area is kept modest while the performance delta between a queue and an additional read port is minimal, if not in the queue's favor.
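By way of illustration, a minimal C sketch of the recent entry queue and its write-blocking behavior follows. The queue depth and the helper btb_install(), standing in for the normal least-recently-used column write, are assumptions for the example.

    #include <stdbool.h>
    #include <stdint.h>

    #define REQ_DEPTH 4   /* small, e.g. comparable to the BTB associativity */

    struct req_entry {
        bool     valid;
        uint64_t branch_addr;
        uint64_t target;
    };

    static struct req_entry req[REQ_DEPTH];
    static unsigned req_head;   /* next FIFO slot to overwrite */

    static bool req_contains(uint64_t branch_addr, uint64_t target)
    {
        for (int i = 0; i < REQ_DEPTH; i++)
            if (req[i].valid && req[i].branch_addr == branch_addr &&
                req[i].target == target)
                return true;
        return false;
    }

    /* Called when a surprise branch resolves and is to be installed. */
    void btb_write_filtered(uint64_t branch_addr, uint64_t target,
                            void (*btb_install)(uint64_t, uint64_t))
    {
        if (req_contains(branch_addr, target))
            return;                        /* duplicate: block the BTB write */
        btb_install(branch_addr, target);  /* normal LRU-column install */
        req[req_head] = (struct req_entry){ true, branch_addr, target };
        req_head = (req_head + 1) % REQ_DEPTH;  /* FIFO replacement */
    }

Because only the small queue is searched before a write, no additional BTB read port or duplicate array is required, which is the area saving noted above.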
One problem encountered with the BTB is that multiple instantiations of a branch entry can be written into the branch target buffer (BTB) at a high frequency based on code looping patterns. This hinders BTB performance by replacing valid entries with duplicate entries of another branch. Thus, a clear need exists for a way to prevent multiple instantiations of a branch entry within the branch target buffer.
The shortcomings of the prior art are overcome and additional advantages are provided through the provision of a recent entry queue that tracks the most recent branch/target data stored into a branch target buffer (BTB). Through the use of a BTB recent entry queue, any new entry that is to be written into the BTB is first checked against the recent entry queue. If the data to be written into the BTB is valid in the recent entry queue, then the data is already contained in the BTB, and the BTB write for the given entry is blocked. If the write were not blocked, the duplicate would most likely replace other valid data when the BTB has an associativity greater than one.
As noted above, in pipelined processors of the prior art, multiple instantiations of a branch entry could be written into a branch target buffer (BTB) at a high frequency based on code looping patterns. This reduces performance by replacing valid entries with duplicate entries of another branch, and has the effect of causing the BTB to behave as a one-way associative design regardless of the amount of designed associativity.
According to the method, system, and program product described herein, by keeping track of closely associated duplicates about to become entries, the monitoring structure is extended not only to block BTB writes but additionally to notify instruction decode when such an operation is to take place. The majority of duplicate entries are initiated by an instruction loop in which the first branch was not predicted in time because of branch prediction start-up latency, which in turn causes each additional iteration to not be predicted in time. By being able to predict one iteration of the loop, the BTB is able to get ahead and therefore potentially predict all future iterations of the branch point. By delaying the pipeline through blocking the decoding operation, the branch prediction logic is able to get ahead, thereby allowing the pipeline to run at the efficiency it is capable of. The result is higher performance, which can be observed externally as an application completing a task in a shorter time span.
Beyond blocking repetitive data from being written into the BTB, the recent entry queue also serves the purposes of being able to delay decode and to perform an accelerated lookup. Whenever a branch is repeatedly taken and each iteration is not predicted by the BTB, the recent entry queue provides the data necessary to detect this, such that decode can be delayed, thereby allowing the branch prediction logic to catch up to the decode pipeline of the machine.
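By way of illustration, the decode-delay decision can be sketched as follows; the stall bound and the helper functions are assumed names for the example, not defined interfaces of the invention.

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_STALL_CYCLES 8   /* assumed fixed bound on the decode delay */

    extern bool recent_queue_hit(uint64_t branch_addr);  /* assumed helper */
    extern bool btb_prediction_ready(void);              /* assumed helper */

    /* If a taken branch the BTB failed to predict matches a recent queue
     * entry, the branch is looping faster than the search can find it, so
     * decode is held to let branch prediction get ahead. Each loop
     * iteration models one stalled cycle. */
    unsigned decide_decode_stall(uint64_t branch_addr, bool predicted)
    {
        if (predicted || !recent_queue_hit(branch_addr))
            return 0;                  /* no loop detected: decode proceeds */
        unsigned stalled = 0;
        while (stalled < MAX_STALL_CYCLES && !btb_prediction_ready())
            stalled++;                 /* hold decode this cycle */
        return stalled;
    }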
Finally, because the queue is a very small structure, it has minimal latency compared to the time it takes to look up an entry in the array; therefore, branch detection via the branch prediction logic can occur in fewer cycles when using the recent entry queue rather than the BTB. Given such a pattern, particular tight-loop prediction scenarios that hinder the BTB can be overcome via the recent entry queue, thereby making these loops predictable. Predicting such loops successfully removes latency from the pipeline, improving the overall performance of the machine. These performance improvements are noticed through the reduced amount of time required for a computer program to complete the task at hand.
The method of operating a computer, the program product for operating a computer, and the computer having a pipelined processor with a branch target buffer (BTB) achieve these ends by creating a recent entry queue in parallel with the branch target buffer (BTB). The recent entry queue comprises a set of branch target buffer (BTB) entries, which are organized as a FIFO queue, and preferably a FIFO queue that is fully associative for reading.
In carrying out the described method, an entry to be written into the BTB is compared against the valid entries within the recent entry queue, and an entry matching an entry within the recent entry queue is blocked from being written into the BTB. When an entry is written into the BTB, it is also written into the recent entry queue.
A further step involves searching the BTB for a next predicted branch and evaluating the recent entry queue while the BTB is being indexed. The recent entry queue maintains a depth up to the associativity of the BTB, whereby while the BTB is indexed, the recent entry queue positions are input to comparison logic. The recent entry queue depth is searched for a matching branch in parallel with searching the BTB output, where the hit detect logic supports the associativity of the BTB. In searching the BTB for the next predicted branch, the search strategy uses the recent entry queue as a subset of the BTB, and preferably fast-indexes recently encountered branches. A further aspect of the invention includes searching the complete recent entry queue to block duplicate BTB writes.
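By way of illustration, the parallel search can be sketched as follows. In hardware the queue comparators and the BTB array read operate concurrently; in this software model the queue is consulted first because, being a small fully associative structure, it completes ahead of the array read. The helper names are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    extern bool btb_lookup(uint64_t iaddr, uint64_t *target);          /* array path */
    extern bool recent_queue_lookup(uint64_t iaddr, uint64_t *target); /* fast path */

    /* Search for the next predicted branch at or after search_addr: the
     * recent entry queue acts as a fast-indexed subset of the BTB. */
    bool predict_next_branch(uint64_t search_addr, uint64_t *target)
    {
        if (recent_queue_lookup(search_addr, target))
            return true;               /* recently written branch found early */
        return btb_lookup(search_addr, target);
    }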
A further aspect of the invention comprises searching the recent entry queue to detect looping branches. This may be done by comparing the branch to determine whether it was recently written into the queue. A further operation includes determining whether the branch is backwards branching, whereby a looping branch is detected. This includes first detecting a looping branch that is not predicted, and thereafter delaying decode. The decode is delayed for a fixed number of cycles, or until the BTB predicts a branch.
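By way of illustration, looping-branch detection can be sketched as follows; recent_queue_hit() is an assumed helper as in the sketches above.

    #include <stdbool.h>
    #include <stdint.h>

    extern bool recent_queue_hit(uint64_t branch_addr);  /* assumed helper */

    /* A branch recently written into the queue whose target lies at or
     * before the branch address (backwards branching) is taken to be a
     * looping branch. */
    bool is_looping_branch(uint64_t branch_addr, uint64_t target)
    {
        return target <= branch_addr && recent_queue_hit(branch_addr);
    }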
A further aspect of the described method, system, and program product is staging writes to the BTB in the recent entry queue, including delaying a write and placing it in the recent entry queue, and also detecting a predicted branch while its BTB write is temporarily staged in the recent entry queue.
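By way of illustration, staging a write can be sketched as follows; the structure and helper names are assumptions. The staged entry remains searchable in the queue, so the branch can be predicted even while its BTB array write is still pending.

    #include <stdbool.h>
    #include <stdint.h>

    struct staged_write {
        bool     pending;      /* awaiting a free BTB write opportunity */
        uint64_t branch_addr;
        uint64_t target;
    };

    extern void btb_install(uint64_t branch_addr, uint64_t target);  /* assumed */

    /* Drain one staged entry when the BTB write port is free. */
    void drain_staged_write(struct staged_write *w, bool write_port_free)
    {
        if (w->pending && write_port_free) {
            btb_install(w->branch_addr, w->target);
            w->pending = false;
        }
    }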
System and computer program products corresponding to the above-summarized methods are also described and claimed herein.
Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with advantages and features, refer to the description and to the drawings.
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
The present invention is directed to a method and apparatus for implementing a recent entry queue which complements a branch target buffer (BTB) 100, as shown generally in the accompanying figures.
When a branch is not predicted, it is a surprise branch 710.
The capabilities of the present invention can be implemented in software, firmware, hardware or some combination thereof.
As one example, one or more aspects of the present invention can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media. The media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention. The article of manufacture can be included as a part of a computer system or sold separately.
Additionally, at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the present invention can be provided.
The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
While the preferred embodiment to the invention has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.