The present disclosure relates in general to the field of computer development, and more specifically, to branch prediction.
An instruction sequence of a computer program may include various branch instructions. A branch instruction is an instruction in a computer program that may cause a computer to begin executing a different instruction sequence and thus deviate from its default behavior of executing instructions in order. As an example, two-way branching may be implemented with a conditional jump instruction. A branch predictor may guess whether a branch will be taken.
A branch predictor may improve the flow in the instruction pipeline. In the absence of branch prediction, a processor would have to wait until the branch instruction (e.g., conditional jump instruction) has passed the execute stage before the next instruction could enter the fetch stage. The branch predictor attempts to avoid this wait by determining whether the branch (e.g., conditional jump) is more likely to be taken or not taken. The instruction at the most likely branch (e.g., either the next instruction in the sequence or a different instruction) is then fetched and one or more instructions starting at the predicted instruction are speculatively executed. If the processor later detects that the guess was wrong, the pipeline is flushed (resulting in the speculatively executed instructions being discarded) and the pipeline starts over with the correct instruction.
The branch predictor may keep a record (i.e., history) of whether branches are taken or not taken. When the branch predictor encounters a branch instruction that has already been seen several times, it can base the prediction on the history. The branch predictor may, for example, recognize that a branch is taken more often than not, that it is taken every other time, that it is taken every fourth time, or other suitable pattern.
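By way of illustration only, the following sketch (in C++, with hypothetical names) models the simplest such history-based mechanism, a per-branch two-bit saturating counter that learns whether a branch is taken more often than not; periodic patterns such as "taken every other time" require the richer history mechanisms discussed below.

```cpp
#include <cstdint>
#include <unordered_map>

// Minimal sketch (names hypothetical) of a per-branch two-bit saturating
// counter: states 0-1 predict not-taken, states 2-3 predict taken, so the
// predictor learns whether a branch is taken more often than not.
class TwoBitPredictor {
public:
    bool predict(uint64_t branch_ip) const {
        auto it = counters_.find(branch_ip);
        return it != counters_.end() && it->second >= 2;
    }
    // Called once the actual outcome of the branch is known.
    void update(uint64_t branch_ip, bool taken) {
        uint8_t &c = counters_[branch_ip];   // new branches start at 0
        if (taken) { if (c < 3) ++c; }       // saturate at "strongly taken"
        else       { if (c > 0) --c; }       // saturate at "strongly not-taken"
    }
private:
    std::unordered_map<uint64_t, uint8_t> counters_;
};
```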
Like reference numbers and designations in the various drawings indicate like elements.
Although the drawings depict particular computer systems, the concepts of various embodiments are applicable to any suitable integrated circuits and other logic devices. Examples of devices in which teachings of the present disclosure may be used include desktop computer systems, server computer systems, storage systems, handheld devices, tablets, other thin notebooks, systems on a chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, digital cameras, media players, personal digital assistants (PDAs), and handheld PCs. Embedded applications may include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Various embodiments of the present disclosure may be used in any suitable computing environment, such as a personal computing device, a server, a mainframe, a cloud computing service provider infrastructure, a datacenter, a communications service provider infrastructure (e.g., one or more portions of an Evolved Packet Core), or other environment comprising a group of computing devices.
The branch predictor 100 may be included within or may be in communication with any type of processor operable to execute program instructions, including a general purpose microprocessor, special purpose processor, microcontroller, coprocessor, a graphics processor, accelerator, field programmable gate array (FPGA), or other type of processor (e.g., any processor described herein).
Primary branch predictor 102 may utilize any suitable branch prediction scheme(s). For example, primary branch predictor 102 may utilize one or more of an always untaken, always taken, local branch history, global branch history, one-level (e.g., Decode History Table (DHT), Branch History Table (BHT), combination DHT-BHT), two-level (e.g., Correlation Based Prediction, Two-Level Adaptive Prediction such as gshare), skewed (e.g., gskew), Tagged Geometric History Length (TAGE), or other suitable branch prediction scheme (some of the schemes listed above may utilize one or more of the other listed schemes). In a particular embodiment, branch prediction logic 112 provides branch predictions for branch instructions based on one or more branch history vectors 110 (to be described in more detail below).
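As a concrete, simplified model of one such scheme, the following sketch illustrates gshare-style two-level prediction; the table size, initialization, and hashing are illustrative assumptions rather than a description of any particular implementation.

```cpp
#include <cstdint>
#include <vector>

// Simplified gshare-style two-level predictor: the global history register
// is XORed with the branch address to index a table of two-bit saturating
// counters, so the same branch can be predicted differently in different
// global-history contexts. Table size and initialization are illustrative.
class GsharePredictor {
public:
    explicit GsharePredictor(unsigned index_bits = 14)
        : index_bits_(index_bits), table_(size_t{1} << index_bits, 1) {}

    bool predict(uint64_t branch_ip) const { return table_[index(branch_ip)] >= 2; }

    void update(uint64_t branch_ip, bool taken) {
        uint8_t &c = table_[index(branch_ip)];
        if (taken) { if (c < 3) ++c; } else { if (c > 0) --c; }
        history_ = (history_ << 1) | (taken ? 1 : 0);  // shift in the outcome
    }
private:
    size_t index(uint64_t ip) const {
        const uint64_t mask = (uint64_t{1} << index_bits_) - 1;
        return static_cast<size_t>((ip ^ history_) & mask);
    }
    unsigned index_bits_;
    std::vector<uint8_t> table_;   // two-bit counters, initialized weakly not-taken
    uint64_t history_ = 0;         // shared (global) branch outcome history
};
```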
Secondary branch predictor 104 may utilize any suitable branch prediction scheme(s). For example, secondary branch predictor 104 may utilize any of the branch prediction schemes mentioned above, a secondary branch prediction scheme utilized in TAGE-statistical corrector (SC) (e.g., TAGE-SC-L) or TAGE-Inner Most Loop Iteration counter (IMLI) (both of these designs use TAGE as a primary branch predictor and secondary predictors for branches that are not correlated with global history, to guard loops, or to guard inner-most loop iterations), or any secondary branch prediction scheme described herein (e.g., with respect to any of the embodiments described below).
Branch prediction accuracy is one of the key determinants of performance for a superscalar processor. In order to extract increased parallelism from a workload and complete the workload faster, the processor pipeline may be widened and/or deepened to accommodate more in-flight instructions. As the pipeline increases in size, the penalty for mispredicting the program's control flow also increases. The increased accuracy of branch predictors helps to mitigate the impact of branch misprediction penalties on large processor pipelines.
In various embodiments, the primary branch predictor 102 may be used for the majority of branch predictions while the secondary branch predictor 104 is used for certain types of branch predictions (e.g., branch instructions for which the secondary branch predictor is more likely to accurately predict the branch that should be taken). A careful selection of the branch instructions for which the prediction of the secondary branch predictor 104 is used may increase the overall accuracy of the branch prediction unit 100 and decrease the resources consumed by the associated processor. In one example, branch predictor selection logic 106 may wait until it has been demonstrated that the primary branch predictor 102 is unable to correctly predict for a particular branch instruction before using the secondary branch predictor 104 to override the primary branch predictor.
Among the branch instructions that are mispredicted, certain individual branch instructions are encountered more often and contribute disproportionately more mispredictions than the other branches. Various embodiments of the present disclosure identify branch instructions that are frequently mispredicted by the primary branch predictor 102 (e.g., those branch instructions that have disproportionately high misprediction rates relative to other branch instructions). The identified branch instructions are then predicted by secondary branch predictor 104, which may be customized to more accurately predict such branch instructions.
Workloads that are representative of typical processor usage scenarios may exhibit the following characteristics: 1) a small fraction of the unique branch instructions (e.g., less than 10%) may contribute a disproportionately large share (e.g., more than 90%) of all mispredictions (branch instructions that frequently mispredict are referred to herein as the highest misprediction count (HMC) branch instructions), 2) there is usually a >90% overlap between the HMC branch instructions and the branch instructions that account for 90% of the misprediction-related pipeline stalls, and 3) the mispredictions for each HMC branch instruction are distributed in time (i.e., they generally do not occur in immediate succession). The second characteristic supports the conclusion that reducing HMC branch mispredictions will greatly benefit overall performance. The first and third characteristics form the basis for various embodiments of the present disclosure that identify the HMC branch instructions and utilize a secondary branch predictor to predict the direction to be taken by these branch instructions.
In various embodiments of the present disclosure, the number of times a particular branch instruction has been mispredicted is tracked with a saturating counter. Once a threshold number of mispredictions is reached (e.g., when the counter for a branch instruction reaches saturation), the branch instruction is identified as an HMC branch instruction and the secondary branch predictor 104 is used to predict the branch for the HMC branch instruction.
Various embodiments may provide advantages over branch confidence mechanisms and over the selectors used in tournament-style branch predictors. Branch confidence mechanisms examine the actual outcomes of branch instructions and compare them with the corresponding predictions to determine whether to increase or decrease the confidence of the predictions. However, with branch confidence mechanisms, predictor entries are still allocated for frequently mispredicted branch instructions, while various embodiments of the present disclosure avoid this unnecessary allocation. Branch selectors in tournament-style predictors also use the actual outcomes of branch instructions to determine which predictor to use at prediction time. Similar to branch confidence mechanisms, tournament selectors observe all branches and update each predictor. Various embodiments of the present disclosure do not utilize the primary branch predictor 102 for branches that are frequently mispredicted, thus reducing the amount of resources consumed.
In the embodiment depicted, branch predictor selection logic 106 includes a misprediction count array 130 that includes branch instruction pointers 132 and usefulness counts (Ucounts) 134. Although an array is depicted, any suitable data structure may be used. An entry of the misprediction count array 130 includes a pointer (e.g., address or other identifier of a branch instruction) and a corresponding Ucount. A Ucount may be an indication of a number of times that a corresponding branch instruction has been mispredicted. The size of the array 130 determines the number of HMC branch instructions that may be tracked at any point in time. During operation, the array 130 may include the HMC branch instructions.
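One hypothetical layout for such an array is sketched below; the 256-entry, two-way set-associative sizing anticipates an example given later in this section, and the hash is an illustrative assumption.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical layout of the misprediction count array 130: each entry pairs
// a branch instruction pointer with a saturating usefulness count (Ucount).
struct MispredictEntry {
    uint64_t branch_ip = 0;   // pointer/identifier of the tracked branch
    uint8_t  ucount    = 0;   // saturating count of observed mispredictions
    bool     valid     = false;
};

constexpr std::size_t kEntries = 256;            // total entries
constexpr std::size_t kWays    = 2;              // entries per set
constexpr std::size_t kSets    = kEntries / kWays;

using MispredictionCountArray =
    std::array<std::array<MispredictEntry, kWays>, kSets>;

// A simple hash of the branch IP selects the set for that branch.
inline std::size_t set_index(uint64_t branch_ip) {
    return static_cast<std::size_t>((branch_ip ^ (branch_ip >> 16)) % kSets);
}
```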
At 202, a branch instruction is executed. This may include making a prediction by the primary branch predictor 102 about the next instruction to execute and allowing the predicted instruction to (at least partially) move through the processor pipeline while the condition associated with the branch instruction is determined. At 204, a determination is made as to whether the branch instruction was mispredicted. If the branch instruction was not mispredicted, the flow ends and the array 130 is not updated.
If the branch was mispredicted, the flow moves to 206, where it is determined whether the branch instruction hits in the misprediction count array 130. This may be determined in any suitable manner. For example, a hit may occur when a pointer to (e.g., address or other identifier associated with) the branch instruction is stored in the array 130. In various embodiments, whether a hit occurs may be determined in any other suitable manner (e.g., based at least in part on an inference based on a structure of the array and/or values stored in the array). In general, a branch instruction may hit in the misprediction count array 130 when a Ucount for the branch instruction is stored in the array 130. If the branch instruction hits in the misprediction count array 130, the Ucount for the branch instruction is incremented at 208 in response to the determination that the branch instruction was mispredicted. In other embodiments, the Ucount may be adjusted in any suitable manner (e.g., reset, decremented, or otherwise adjusted based on the method used by the array 130 to track the number of mispredictions).
If the branch instruction does not hit in the misprediction count array 130 at 206, an entry of the misprediction count array 130 is identified at 210. The identified array entry is to be used to store a Ucount for the mispredicted branch instruction. If the entry is currently occupied by another branch instruction, then that branch instruction and associated Ucount are evicted in favor of the mispredicted branch instruction.
The array entry may be identified in any suitable manner. In one example, the misprediction count array 130 may be set associative to speed up the update process. For example, each branch instruction pointer may map to a respective subset of one or more entries of the misprediction count array 130. Any suitable size may be used for the array 130 and the sets thereof. In one embodiment, the array 130 includes 256 entries and each set includes two entries. In another embodiment, the array 130 includes 256 entries and each set includes four entries. In an embodiment, the branch instruction pointer (or a portion thereof) is hashed and the resulting value is used to index into the set that is associated with the branch instruction. When a branch instruction is mispredicted, the set associated with the branch instruction is identified and an entry of the set is selected to store the Ucount of the mispredicted branch instruction.
In various embodiments, the identification of an array entry is performed by selecting the entry with the lowest Ucount of the examined entries (which could be all of the entries of the array 130 or the entries of one or more particular subsets of the array). Thus, an entry that has seen few mispredictions relative to the other examined entries may be selected for eviction. In a particular embodiment, an entry with a Ucount that is equal to an initialized value of a Ucount (which may be any suitable value and is zero in a particular embodiment) is selected. In a particular embodiment, when a Ucount of an entry is incremented and no entries of that set (or the entire array) have a Ucount value equal to the initial value of the Ucount, then all entries in that set (or the entire array) are decremented to ensure that at least one entry has a Ucount that is equal to the initial Ucount value. In another embodiment, all entries of a set or the entire array 130 may be decremented periodically.
In a particular embodiment, the identification of the array entry is performed by selecting the least recently used entry of the examined entries. For example, each entry may also include one or more bits indicating the age of the entry. In a particular embodiment, the age of an entry may be reset each time the corresponding branch instruction is mispredicted. In an embodiment, the ages of the entries of array 130 may be periodically incremented.
In a particular embodiment, when a new branch instruction is inserted into the array 130, the age may be set to the oldest age and/or the Ucount may be set to the lowest Ucount, making the branch instruction vulnerable to replacement. Because the vast majority of branches (e.g., 90%) do not mispredict often, this strategy protects branches that have mispredicted multiple times, or more recently, from eviction.
At 212, the instruction pointer field (or other identification field) of the identified array entry is set with the address (or other identifier) of the mispredicted branch instruction (or the entry may otherwise be modified to include any suitable representation of the mispredicted branch instruction). At 214, the Ucount of the entry is initialized. As various examples, the initialized Ucount value may be set to zero, one, or another suitable value.
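The update flow of 202-214 may be sketched as follows (a simplified software model; the set-associative organization, victim choice, and saturation value are illustrative assumptions):

```cpp
#include <cstdint>
#include <vector>

// Sketch of the update flow of 202-214, assuming a set-associative array:
// on a misprediction, increment the Ucount on a hit; on a miss, evict the
// way with the lowest Ucount and initialize a fresh entry.
struct Entry { uint64_t ip = 0; uint8_t ucount = 0; bool valid = false; };

void on_branch_resolved(std::vector<std::vector<Entry>> &sets,
                        uint64_t branch_ip, bool mispredicted) {
    if (!mispredicted) return;                    // 204: array left untouched
    auto &set = sets[branch_ip % sets.size()];    // hashed set selection
    for (Entry &e : set) {                        // 206: check for a hit
        if (e.valid && e.ip == branch_ip) {
            if (e.ucount < 127) ++e.ucount;       // 208: saturating increment
            return;
        }
    }
    Entry *victim = nullptr;                      // 210: identify an entry
    for (Entry &e : set) {
        if (!e.valid) { victim = &e; break; }     // a free slot wins outright
        if (victim == nullptr || e.ucount < victim->ucount) victim = &e;
    }
    victim->ip     = branch_ip;                   // 212: install the new branch
    victim->ucount = 0;                           // 214: initialize the Ucount
    victim->valid  = true;
}
```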
In various embodiments, the array 130 may be queried for each branch instruction in an instruction stream to determine which branch predictor to use. At 302, a branch instruction is fetched. At 304, it is determined whether the branch instruction hits in the misprediction count array 130 (e.g., using any of the methods described above or other suitable methods). If the instruction does not hit in array 130, then the primary branch predictor makes the prediction for the branch instruction at 306.
If the branch instruction hits in the array 130, a determination is made at 308 as to whether the corresponding Ucount is greater than a threshold. If the Ucount is not above the threshold, then the branch instruction is determined to not be an HMC branch instruction and the primary branch predictor makes the prediction at 306. If the Ucount is greater than the threshold, then the branch instruction is considered to be an HMC branch instruction and the secondary branch predictor 104 is used to make the prediction at 310. In a particular embodiment, if a Ucount is saturated, then the Ucount is considered to be above the threshold. For example, when a two-bit counter is used to update the Ucounts, the Ucounts may saturate at a value of three. The Ucounts may saturate at any suitable value (and the threshold value may be set accordingly) such as 3, 7, 15, 31, 63, 127, or another suitable value. In general, a lower saturation value may result in quicker identification of the HMC branches, though even at a high saturation value (e.g., 127), the array 130 may still be able to identify most if not all of the HMC branches.
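The selection flow of 302-310 may be sketched as follows (a simplified model in which the array lookup and the two predictors are abstracted as callbacks with hypothetical signatures; a saturated Ucount is treated as above the threshold):

```cpp
#include <cstdint>
#include <functional>

// Sketch of the selection flow of 302-310. lookup_ucount returns true on a
// hit in the misprediction count array and fills `ucount`; a Ucount at or
// above the threshold (e.g., saturation) selects the secondary predictor.
bool select_and_predict(
        uint64_t branch_ip,
        const std::function<bool(uint64_t, uint8_t &)> &lookup_ucount,
        const std::function<bool(uint64_t)> &primary_predict,
        const std::function<bool(uint64_t)> &secondary_predict,
        uint8_t ucount_threshold = 3) {
    uint8_t ucount = 0;
    if (lookup_ucount(branch_ip, ucount) && ucount >= ucount_threshold)
        return secondary_predict(branch_ip);   // 310: HMC branch -> secondary
    return primary_predict(branch_ip);         // 306: everything else -> primary
}
```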
As described above, any suitable type of secondary branch predictor may be used in various embodiments of the present disclosure. Particular embodiments may utilize a secondary branch predictor that is adapted to improve the prediction accuracy of branch instructions associated with program loops. In particular embodiments, misprediction count array 130 may be used to determine whether this type of secondary branch predictor is to be used to predict particular branch instructions. In other embodiments, any suitable branch predictor selection schemes may be used to determine whether this type of secondary branch predictor is to be used.
In various embodiments, any of the branch predictors (e.g., 102 and/or 104) described herein may utilize any suitable branch history vectors (e.g., history vectors 110 and 120). For various encountered branch instructions of a program, a branch history vector may include information about the state of the program when a branch instruction was encountered and whether a branch was taken or not (e.g., whether the program jumped to an instruction or whether the next instruction was executed). A branch predictor may utilize one or more global and/or local history vectors. A local history vector may include separate history buffers that each correspond to a branch instruction (e.g., a conditional jump instruction). For example, the local history vector may include entries for the previous N instances (where N may be any suitable integer) in which a particular branch instruction was encountered, where each entry includes an indication of the direction the branch instruction took. A global history vector generally does not keep a separate history record for each branch instruction (e.g., conditional jump), but instead keeps a shared history of all branch instructions. Thus, a global history may include a representation of the last X branch instructions that were encountered before the current branch instruction. The advantage of a shared history is that any correlation between different branch instructions is included in the predictions, but the history may be diluted by irrelevant information if the branch instructions are uncorrelated or if there are many other branch instructions in between the branch instruction being predicted and the correlated branch instruction.
History vectors may include any suitable information associated with previously encountered branch instructions, such as an address of the branch instruction, the branch instruction's target (e.g., an address of the branch specified by the branch instruction to be jumped to if the branch is taken), an indication of the direction the branch instruction took (e.g., a bit value indicating whether the branch was taken), any other suitable information, or any suitable representations thereof.
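The distinction between global and local histories may be illustrated with the following minimal sketch, in which each history is modeled as a shift register of taken/not-taken bits (a simplification of the richer per-entry information described above):

```cpp
#include <cstdint>
#include <unordered_map>

// Minimal sketch contrasting the two history flavors: one shared global
// shift register versus a per-branch local history buffer. Each bit records
// whether a branch was taken (1) or not taken (0).
struct BranchHistories {
    uint64_t global = 0;                            // last 64 outcomes, all branches
    std::unordered_map<uint64_t, uint64_t> local;   // last 64 outcomes per branch IP

    void record(uint64_t branch_ip, bool taken) {
        const uint64_t bit = taken ? 1 : 0;
        global = (global << 1) | bit;               // shared history (global)
        uint64_t &h = local[branch_ip];             // separate buffer (local)
        h = (h << 1) | bit;
    }
};
```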
The code illustrated in
In
In programs such as those illustrated in
In a particular embodiment, secondary branch predictor 104 utilizes a frozen global history vector to provide branch predictions for branch instructions associated with loops. A frozen history vector may include a snapshot of a branch history vector at a particular time (e.g., at the time a loop is entered and/or detected). A frozen history vector may exclude any branch instructions encountered after the history was snapshotted. A secondary branch predictor that utilizes a frozen global history vector may be referred to herein as a frozen history predictor (FHP). Thus, in some embodiments, secondary branch predictor 104 comprises an FHP.
In the embodiment depicted, secondary branch predictor 104 (which may be an FHP) comprises one or more branch history vectors 120, branch prediction logic 122 which is able to make predictions for branch instructions based on information included in a frozen history and iteration tracker (FHIT) table 124 and one or more exit iteration count observation (EICO) tables 126, and table update logic 128 which is able to update FHIT table 124 and EICO tables 126 based on branch instruction predictions and outcomes and history vector(s) 120.
An FHP may preserve global correlations that existed between branch instructions prior to entering a loop and use that information to make a branch prediction at each iteration of the loop. The FHP learns and predicts branches causing or influencing loop exits (e.g., loop branches, break conditions, etc.). The FHP may be used as an adjunct predictor to a primary branch predictor 102 such as TAGE or other suitable primary branch predictor (e.g., a global history based predictor). In various embodiments, the FHP does not predict every encountered branch instruction (e.g., the primary branch predictor 102 will predict in cases where correlations exist with branch instructions in the same or previous iterations of the loop, which the FHP does not target).
In a particular embodiment, only the most mispredicted branch instructions are predicted by the FHP while the primary branch predictor 102 predicts the remainder of the branches. Thus, as an example, a misprediction count array 130 may be used by branch predictor selection logic 106 to determine whether the FHP should be used to track and/or predict a particular branch instruction (a branch instruction may be tracked by the FHP even if the FHP is not used to predict the branch instruction). In other embodiments, selection logic 106 may use any suitable method for determining whether the FHP or the primary branch predictor 102 is used to predict a branch.
In various embodiments, a global history vector used by the primary branch predictor 102 continues to be updated with branch decisions after a loop is entered and a frozen history (i.e., snapshot) is generated. Once the loop is entered, a loop iteration count may be tracked and an indication of the iteration count at which the loop is exited (or when the outcome of the branch instruction changes even if the loop is not exited) is stored. The indication of the exit iteration may be used by the FHP to provide a prediction for a loop branch instruction (which may, at least in some embodiments, be a backward branch instruction) or a forward branch instruction associated with the loop (e.g., to provide a prediction for the branch instruction the next time the loop is entered).
The frozen history preserves the prior loop branch context which enables the FHP to find correlations in this region. The iteration count differentiates each dynamic instance of a branch instruction as it is called across successive iterations of the loop. In combination, the frozen history and iteration count also provide an accurate program context in which the to-be-predicted branch appears. When the loop exit is resolved, the FHP captures the correlation of the snapshotted global history with the iteration count at which the loop exited.
The frozen history and the iteration count at the time of a prediction for a branch instruction are used by the FHP to compare against the detected correlation to make the prediction. These two elements not only provide a very accurate picture of the program context in which the current branch instruction is encountered, but also improve the ability of the FHP to capture correlations with branch instructions that were encountered prior to the enclosing loop invocation, because branch instructions within the enclosing loop do not modify the snapshotted history and thus will not erase this information.
In various embodiments, iteration counts for each loop branch instruction and forward branch instruction may be tracked, since it may not be known in advance whether the branch instruction governs a loop exit. In various embodiments, all branch instructions may be tracked or a subset of the branch instructions may be tracked. For example, if a correlation between the snapshotted global history and exit iteration count exists, then the branch instruction may continue to be tracked and/or predicted by the FHP, but if a sufficient correlation is not seen then the FHP may stop tracking and/or making predictions on that branch. As another example, iteration counts for a branch instruction may be tracked based on a success rate of the primary branch predictor 102. For example, if the primary branch predictor 102 mispredicts a branch instruction, then an iteration count for the branch may be tracked. As another example, if the primary branch predictor 102 mispredicts a branch instruction at a frequency that is higher than a threshold (e.g., 10% or 50%), the iteration count for the branch instruction may be tracked.
Because it may be difficult to associate all branch instructions with their enclosing loop in the hardware pipeline, the FHP may attempt to identify the branch instructions that govern loop exits (i.e., the exit of the enclosing loop is usually caused by this branch instruction taking a particular direction). For backward branches (i.e., where the target instruction pointer of the branch instruction is less than the branch instruction pointer), the FHP assumes the not-taken direction is the loop exit outcome. Conversely, for forward branches (i.e., where the target instruction pointer is greater than the branch instruction pointer), the FHP assumes the taken direction results in a loop exit outcome. Successive encounters of the branch instruction where a loop exit outcome is not observed are assumed to be additional loop iterations.
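This heuristic reduces to a single comparison, sketched below (names hypothetical):

```cpp
#include <cstdint>

// Sketch of the loop-exit heuristic described above: for a backward branch
// (target below the branch), the not-taken direction is assumed to be the
// loop exit; for a forward branch (target above the branch), the taken
// direction is assumed to be the exit.
inline bool is_loop_exit_outcome(uint64_t branch_ip, uint64_t target_ip,
                                 bool taken) {
    const bool backward = target_ip < branch_ip;
    return backward ? !taken : taken;
}
```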
Various categories of branch instructions that control the loop exit outcome and are thus targeted by the FHP include: loop exit branches (e.g., 400A and 400B), including both backward and forward branches; forward branches causing loop breaks, where the break causes most of the exits from the loop (e.g., 400C); and forward branches that heavily influence the loop exit branch, where the loop exit branch is strongly correlated with another forward branch in the loop body (e.g., 400D).
In the embodiment depicted, each entry in the FHIT table 124 includes an instruction pointer field 508, an in loop field 510, a frozen history field 512, a current loop iteration field 514, a dominant loop exit iteration count field 516, and an override primary field 518. The instruction pointer field 508 is to store an instruction pointer (e.g., address or other identifier) of a branch instruction, the in loop field 510 is to store an indication of whether the branch instruction is currently in a loop (e.g., being executed within a loop), the frozen history field 512 is to store a copy (or other representation) of a frozen history vector (i.e., a snapshot of the global history vector taken at the first iteration of a loop), the current loop iteration field 514 is to store a count of the loop iterations observed since the loop was entered, the dominant loop exit iteration count field 516 is to store an indication of the iteration number at which the most exit outcomes have been observed for the particular branch instruction and frozen history, and the override primary field 518 includes an indication of whether the primary branch predictor 102 or the FHP should be used to predict the outcome of the branch (e.g., this value may indicate whether the dominant loop exit iteration count is good enough to use for the branch prediction).
The instruction pointer (IP) of a branch instruction is used to index into the FHIT table 124. The FHIT table 124 is updated at the prediction stage of the pipeline. The FHIT table 124 tracks the loop iteration count for an IP by incrementing the iteration count every time the IP is encountered and an exit outcome is not predicted by the branch prediction unit 100. A predicted exit outcome resets the iteration count. In the case of a misprediction of the branch instruction, the correct iteration count is restored based on the real outcome of that instance of the branch instruction. At the entry of a new loop for the IP (e.g., when the IP is first encountered or next encountered after an exit outcome), the global history is snapshotted and captured in the frozen history field 512 in the FHIT table 124.
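A hypothetical rendering of an FHIT entry and the prediction-stage bookkeeping described above is sketched below (field semantics follow 508-518; the reset and snapshot points are simplified):

```cpp
#include <cstdint>

// Hypothetical rendering of an FHIT entry (fields 508-518) and the
// prediction-stage bookkeeping: the iteration count grows while no exit
// outcome is predicted and resets on a predicted exit; on loop entry the
// global history is snapshotted into the frozen history.
struct FhitEntry {
    uint64_t branch_ip = 0;            // field 508: instruction pointer
    bool     in_loop = false;          // field 510: currently inside a loop
    uint64_t frozen_history = 0;       // field 512: global history snapshot
    uint32_t cur_iteration = 0;        // field 514: iterations since loop entry
    uint32_t dominant_exit = 0;        // field 516: most-observed exit iteration
    bool     override_primary = false; // field 518: use the FHP for this branch
};

void on_prediction(FhitEntry &e, uint64_t global_history, bool predicted_exit) {
    if (!e.in_loop) {                  // loop entry: take the snapshot
        e.frozen_history = global_history;
        e.in_loop = true;
        e.cur_iteration = 0;
    }
    if (predicted_exit) {              // a predicted exit ends the loop
        e.in_loop = false;
        e.cur_iteration = 0;
    } else {
        ++e.cur_iteration;             // another predicted iteration
    }
}
```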
When a loop is entered (i.e., when the branch instruction is encountered for the first time or when the branch instruction is encountered after a loop exit is detected), the frozen history is captured, and at each iteration the dominant loop exit iteration count is compared against the current loop iteration value. Since the history may be frozen at the end of the first iteration of the loop, the snapshot of the history may include the first iteration of the loop.
The secondary branch predictor 104 may also maintain one or more EICO tables 126. An EICO table 126 includes a field that stores a unique value based on the IP of the branch instruction and a corresponding frozen history (FH). For example, an EICO table 126 may include an IP and FH hash field 520. In a particular embodiment, logic 524 implements a hash function that accepts an IP and FH as inputs and outputs a hash value (that is more compact than the combination of the IP and the FH) that may be stored in the IP and FH hash field 520 of an entry or used to index into an EICO table 126 if an entry for the IP and FH combination already exists. In a particular embodiment, the EICO table(s) only track IPs included in the FHIT.
An EICO table 126 may also include one or more fields 522A-N that are each to store an exit count (also referred to herein as a loop exit iteration count), which corresponds to the value of the current loop iteration value when an exit outcome was observed, and a number of times that particular exit count has been observed for the particular branch instruction and frozen history combination. Field 522A may include a first exit count and a number of times that exit count has been observed for a first IP and FH combination, field 522B may include a second exit count and a number of times that exit count has been observed for the same IP and FH combination, and so on. As one example, for the same IP and FH combination, field 522A may indicate that an exit outcome was observed twenty times on iteration number seven, field 522B may indicate that an exit outcome was observed ninety-three times on iteration number ten, etc.
In various embodiments, secondary branch predictor 104 may maintain multiple EICO tables 126 that each utilize a different length of frozen history (e.g., 10 branches, 20 branches, 50 branches, etc.). For example, the entries of EICO table 126A may include IP and FH hash values formed by hashing IPs with corresponding frozen histories that are 10 branches in length, the entries of EICO table 126B may include IP and FH hash values formed by hashing IPs with corresponding frozen histories that are 20 branches in length, and so on. In a particular embodiment, one of the EICO tables corresponds to a frozen history length that is equal to the history length of a global history vector utilized by primary branch predictor 102, though in another embodiment the frozen history lengths used by the EICO tables may all be different from the length of the global history vector. The various frozen histories corresponding to the EICO tables 126 may be subsets of the copy of the frozen history stored in the frozen history field 512. For example, the first 10 branches of the frozen history may be used in combination with the IP to index into a first EICO table, the first 20 branches of the frozen history may be used in combination with the IP to index into a second EICO table, the first 50 branches of the frozen history may be used in combination with the IP to index into a third EICO table, etc. (although various lengths are used in this example, in other embodiments, EICO tables 126 may correspond to any suitable frozen history lengths). Thus, various subsets of the frozen history stored in the frozen history field 512 and/or the entire frozen history may each correspond to a particular EICO table 126.
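The indexing of multiple EICO tables may be sketched as follows (the history lengths match the example above; the mixing hash is an illustrative assumption rather than the disclosed logic 524):

```cpp
#include <array>
#include <cstdint>

// Sketch of indexing multiple EICO tables with different frozen-history
// lengths (10/20/50 branches). Each table is keyed by a hash of the branch
// IP and a prefix of the frozen history.
constexpr std::array<unsigned, 3> kFrozenHistoryLengths = {10, 20, 50};

inline uint64_t eico_hash(uint64_t branch_ip, uint64_t frozen_history,
                          unsigned history_length) {
    const uint64_t mask = (history_length >= 64)
                              ? ~uint64_t{0}
                              : ((uint64_t{1} << history_length) - 1);
    const uint64_t prefix = frozen_history & mask;    // keep only the prefix
    return branch_ip ^ (prefix * 0x9E3779B97F4A7C15ull);
}
```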
An EICO table 126 may be updated after either the execute stage of the pipeline or at the instruction retirement stage. In case an exit outcome is seen for a branch instruction at the time the EICOs are updated, the iteration count at which the exit is seen (i.e., the exit count) is stored in a field 522 corresponding to the IP and the frozen history associated with this exit (or the number of observations for the exit count is incremented if the exit count is already stored). The iteration count and frozen history may be read from the FHIT during prediction of this branch IP instance by the FHP and passed through the pipeline for this purpose. If a large majority of exits seen for a particular IP and frozen history combination are seen with the same exit iteration count, a strong correlation exists between this frozen history and the exit count (for this IP) and the FHP may be used to predict this branch.
At the time of loop entry during a pipeline prediction stage, when the frozen history is captured and written to the FHIT table 124, the EICO tables are also accessed. If a table entry contains a strongly correlating exit count (e.g., an exit is occurring at a particular iteration a majority of the time for the particular branch IP and frozen history), then this exit count may be written as the dominant loop exit iteration count in the FHIT table 124. Among the different entries that correspond to the particular branch IP and respective frozen histories in the EICO tables, the exit count with the strongest correlation is chosen (e.g., the exit count that is observed at the highest rate among the observed exit counts for the particular IP and FH combination). As an alternative, the exit count with the strongest correlation from the EICO table corresponding to the longest frozen history may be used (since this EICO table is generally the most accurate). As the IP is encountered in successive loop iterations, if the iteration count matches this stored exit iteration count (i.e., the value in field 516), an exit outcome is predicted by the FHP; otherwise, the branch direction opposite to the exit outcome is predicted by the FHP. If none of the EICO tables contained a strongly correlating exit iteration count, the FHIT entry may be marked as incapable of making a prediction for the current loop (e.g., the override primary field 518 may be set to false) and the prediction from the primary branch predictor 102 will be used.
When an EICO table 126 is full, an entry of the EICO table may be evicted in favor of a new IP and FH combination. The EICO tables corresponding to longer frozen histories may have a higher thrashing rate because there is more variability in the frozen histories as their lengths increase. Accordingly, the probability of hitting such tables is lower than the probability of hitting in the EICO tables corresponding to shorter frozen histories, but these tables are generally more accurate due to the increased frozen history length.
If the branch IP hits in the FHIT table 124, but a determination is made at 606 that the override primary field of the entry for the branch IP is set to false, the primary branch predictor is used to predict the branch at 608. It is then determined at 610 whether the primary branch predictor predicted an exit outcome (e.g., an exit outcome may be detected when the branch prediction was “not taken” if the branch instruction is a backward branch instruction or “taken” if the branch instruction is a forward branch instruction). If it is determined at 610 that an exit outcome was not predicted, the current loop iteration value of the corresponding entry of the FHIT table 124 is incremented at 612 and the flow ends. If it is determined at 610 that an exit outcome was predicted, then at 614 the in loop field 510 of the entry of the FHIT table is reset and the current loop iteration value of the entry is set to 0 (since a prediction was made at 610 that the loop has been exited).
At 606, if it is determined that the override primary field is set, then a determination is made at 616 as to whether the dominant loop exit iteration count is equal to the current loop iteration value. If these values are not equal, then the FHP predicts at 618 that the loop is not exited. For a backward branch instruction, this may result in a prediction that the backward branch is to be taken. Conversely, for a forward branch instruction, this may result in a prediction that the forward branch is not to be taken. At 612, the current loop iteration value is incremented and the flow ends.
At 616, if it is determined that the dominant loop exit iteration count is equal to the current loop iteration value, then the FHP predicts that the loop is exited at 620. For a backward branch, this may result in a prediction that the backward branch is not to be taken. Conversely for a forward branch, this may result in a prediction that the forward branch is to be taken. At 614, the in loop value is reset and the current loop iteration is set to 0.
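The prediction flow of 606-620 may be sketched as follows (a simplified model; when the FHP declines to predict, the caller falls back to the primary branch predictor per 608):

```cpp
#include <cstdint>
#include <optional>

// Sketch of the prediction flow of 606-620 for a branch that hits in the
// FHIT. Returns a taken/not-taken direction when the FHP may override the
// primary predictor; returns nullopt when the primary predictor should be
// used instead (608). Structure names are hypothetical.
struct Fhit {
    uint32_t cur_iteration = 0;
    uint32_t dominant_exit = 0;
    bool override_primary = false;
    bool in_loop = false;
};

std::optional<bool> fhp_predict(Fhit &e, bool backward_branch) {
    if (!e.override_primary) return std::nullopt;  // 606 -> 608: use primary
    const bool exit_now =
        (e.cur_iteration == e.dominant_exit);      // 616: iteration match?
    if (exit_now) {                                // 620, 614: predict exit
        e.in_loop = false;
        e.cur_iteration = 0;
    } else {
        ++e.cur_iteration;                         // 618, 612: one more pass
    }
    // An exit means not-taken for a backward branch, taken for a forward one.
    return backward_branch ? !exit_now : exit_now;
}
```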
If at least one matching entry exists in at least one EICO table at 708, the leading exit iteration count (e.g., the exit count in fields 522 with the highest number of observations for the particular IP and frozen history) from each EICO table is obtained at 712 and the dominant loop exit iteration count is selected at 714. In a particular embodiment, a dominance value is calculated for each leading exit iteration count obtained at 712. As one example, the dominance value may be a dominance percentage calculated by dividing the number of times that the exit iteration count was observed by the total number of exit outcomes observed (e.g., the sum of the numbers of observations in each field 522 of the matching entry of the EICO table). In another embodiment, the dominant loop exit iteration count may simply be the leading exit iteration count from the matching entry in the EICO table that corresponds to the longest frozen history length. The dominant loop exit iteration count may be selected in any other suitable manner based on the exit counts and the numbers of times they were observed.
At 716, it is determined whether the dominance percentage (i.e., the frequency with which the dominant loop exit iteration count was observed at loop exit of the branch instruction relative to the other iteration counts that coincided with loop exits) exceeds a particular percentage. Any suitable percentage may be specified as the threshold percentage for comparison against the dominance percentage. In a particular embodiment, the threshold percentage could be based on the rate at which the primary branch predictor has mispredicted the branch (e.g., if the primary branch predictor mispredicts the branch 50% of the time, the threshold percentage may be set to 50% such that the FHP is used when the FHP is expected to correctly predict the branch more than 50% of the time). In another embodiment, the threshold percentage may be set to any suitable value, such as 60%, 75%, or another suitable value. If the dominance percentage is not greater than the threshold percentage at 716, then the override primary value may be set to 0 (i.e., false) in the FHIT table at 710 (so that the primary branch predictor 102 will be used to predict the branch) and the flow ends. If the dominance percentage is greater than the threshold percentage at 716, then the override primary value is set to 1 (i.e., true) in the FHIT table at 718, the dominant loop exit iteration count is stored in the FHIT table at 720, and the flow ends.
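The selection of 712-720 may be sketched as follows (a simplified model in which each matching EICO entry is a list of exit-count/observation pairs; the 75% default threshold is illustrative):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Sketch of 712-720: take the leading exit count from each matching EICO
// entry, keep the strongest, and commit it only if its dominance percentage
// clears the threshold; otherwise the primary predictor is kept (710).
struct ExitObservation { uint32_t exit_iteration; uint32_t times_seen; };
struct DominantExit    { uint32_t exit_iteration; double dominance; };

std::optional<DominantExit> pick_dominant_exit(
        const std::vector<std::vector<ExitObservation>> &matching_entries,
        double threshold = 0.75) {
    std::optional<DominantExit> best;
    for (const auto &entry : matching_entries) {     // one per EICO table
        uint32_t total = 0, lead_seen = 0, lead_iter = 0;
        for (const auto &obs : entry) {
            total += obs.times_seen;
            if (obs.times_seen > lead_seen) {        // 712: leading exit count
                lead_seen = obs.times_seen;
                lead_iter = obs.exit_iteration;
            }
        }
        if (total == 0) continue;
        const double dominance =
            static_cast<double>(lead_seen) / total;  // share of all exits seen
        if (!best || dominance > best->dominance)    // 714: strongest wins
            best = DominantExit{lead_iter, dominance};
    }
    if (best && best->dominance > threshold) return best;  // 716 -> 718/720
    return std::nullopt;                             // 710: keep the primary
}
```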
At 802, a determination is made as to whether the branch IP hit (in the FHIT table) at the time of prediction. If it did not, the flow ends. If the branch IP did hit, a determination is made at 804 as to whether the outcome of the branch was a loop exit. If the outcome indicates there was no loop exit, the flow finishes. If the outcome indicates there was a loop exit, then the exit iteration count is set equal to the current iteration count at 806 and the number of times the particular exit iteration count has been observed is incremented for each entry corresponding to the branch IP and corresponding frozen history in each EICO table at 808. If an entry does not exist in an EICO table for the branch IP and corresponding frozen history, then an entry may be created at 808 and the number of times the particular exit iteration count has been observed is initialized (e.g., to zero or one).
In various embodiments, this flow may involve sending the various frozen histories (i.e., the frozen histories of various lengths) and the current iteration count from the front end of a processor to the back end (e.g., execution engine) of the processor so that when the actual outcome is resolved at the back end of the processor the entries of the EICO table can be updated properly.
In various embodiments, this flow may involve storing an indication of the state of the FHIT table such that the FHIT table may be restored if there is a misprediction. In a particular embodiment, branch prediction unit 100 stores a record of the changes to the FHIT table that have occurred since the branch instruction was originally predicted in order to save space (though in other embodiments, the original state of the entire FHIT table may be stored).
The flows described in
The figures below detail exemplary architectures and systems to implement embodiments of the above. For example, any of the processors described below may include any of the branch predictors described herein. In some embodiments, one or more hardware components and/or instructions described above are emulated as detailed below, or implemented as software modules.
Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.
In
The front end unit 1030 includes a branch prediction unit 100 coupled to an instruction cache unit 1034, which is coupled to an instruction translation lookaside buffer (TLB) 1036, which is coupled to an instruction fetch unit 1038, which is coupled to a decode unit 1040. In various embodiments, at least a portion of the logic of the branch prediction unit 100 (e.g., logic that modifies the FHIT table upon branch resolution) may be located in the execution engine unit 1050. The decode unit 1040 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1040 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1090 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1040 or otherwise within the front end unit 1030). The decode unit 1040 is coupled to a rename/allocator unit 1052 in the execution engine unit 1050.
The execution engine unit 1050 includes the rename/allocator unit 1052 coupled to a retirement unit 1054 and a set of one or more scheduler unit(s) 1056. The scheduler unit(s) 1056 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1056 is coupled to the physical register file(s) unit(s) 1058. Each of the physical register file(s) units 1058 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1058 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1058 is overlapped by the retirement unit 1054 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1054 and the physical register file(s) unit(s) 1058 are coupled to the execution cluster(s) 1060. The execution cluster(s) 1060 includes a set of one or more execution units 1062 and a set of one or more memory access units 1064. The execution units 1062 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1056, physical register file(s) unit(s) 1058, and execution cluster(s) 1060 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster—and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1064). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.
The set of memory access units 1064 is coupled to the memory unit 1070, which includes a data TLB unit 1072 coupled to a data cache unit 1074 coupled to a level 2 (L2) cache unit 1076. In one exemplary embodiment, the memory access units 1064 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1072 in the memory unit 1070. The instruction cache unit 1034 is further coupled to a level 2 (L2) cache unit 1076 in the memory unit 1070. The L2 cache unit 1076 is coupled to one or more other levels of cache and eventually to a main memory.
By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1000 as follows: 1) the instruction fetch 1038 performs the fetch and length decoding stages 1002 and 1004; 2) the decode unit 1040 performs the decode stage 1006; 3) the rename/allocator unit 1052 performs the allocation stage 1008 and renaming stage 1010; 4) the scheduler unit(s) 1056 performs the schedule stage 1012; 5) the physical register file(s) unit(s) 1058 and the memory unit 1070 perform the register read/memory read stage 1014; the execution cluster 1060 performs the execute stage 1016; 6) the memory unit 1070 and the physical register file(s) unit(s) 1058 perform the write back/memory write stage 1018; 7) various units may be involved in the exception handling stage 1022; and 8) the retirement unit 1054 and the physical register file(s) unit(s) 1058 perform the commit stage 1024.
The core 1090 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif.; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, Calif.), including the instruction(s) described herein. In one embodiment, the core 1090 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.
It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).
While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1034/1074 and a shared L2 cache unit 1076, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.
The local subset of the L2 cache 1104 is part of a global L2 cache that is divided into separate local subsets (in some embodiments one per processor core). Each processor core has a direct access path to its own local subset of the L2 cache 1104. Data read by a processor core is stored in its L2 cache subset 1104 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1104 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. In a particular embodiment, each ring data-path is 1012-bits wide per direction.
Thus, different implementations of the processor 1200 may include: 1) a CPU with the special purpose logic 1208 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 1202A-N being a large number of general purpose in-order cores. Thus, the processor 1200 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression and/or decompression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (e.g., including 30 or more cores), embedded processor, or other fixed or configurable logic that performs logical operations. The processor may be implemented on one or more chips. The processor 1200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.
In various embodiments, a processor may include any number of processing elements that may be symmetric or asymmetric. In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.
A core may refer to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. A hardware thread may refer to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.
The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1206, and external memory (not shown) coupled to the set of integrated memory controller units 1214. The set of shared cache units 1206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1212 interconnects the special purpose logic (e.g., integrated graphics logic) 1208, the set of shared cache units 1206, and the system agent unit 1210/integrated memory controller unit(s) 1214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1206 and cores 1202A-N.
In some embodiments, one or more of the cores 1202A-N are capable of multi-threading. The system agent 1210 includes those components coordinating and operating cores 1202A-N. The system agent unit 1210 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1202A-N and the special purpose logic 1208. The display unit is for driving one or more externally connected displays.
The cores 1202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.
The optional nature of additional processors 1315 is denoted in
The memory 1340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), other suitable memory, or any combination thereof. The memory 1340 may store any suitable data, such as data used by processors 1310, 1315 to provide the functionality of computer system 1300. For example, data associated with programs that are executed or files accessed by processors 1310, 1315 may be stored in memory 1340. In various embodiments, memory 1340 may store data and/or sequences of instructions that are used or executed by processors 1310, 1315.
In at least one embodiment, the controller hub 1320 communicates with the processor(s) 1310, 1315 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1395.
In one embodiment, the coprocessor 1345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression and/or decompression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1320 may include an integrated graphics accelerator.
There can be a variety of differences between the physical resources 1310, 1315 in terms of a spectrum of metrics of merit including architectural characteristics, microarchitectural characteristics, thermal characteristics, power consumption characteristics, and the like.
In one embodiment, the processor 1310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1345. Accordingly, the processor 1310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1345. Coprocessor(s) 1345 accept and execute the received coprocessor instructions.
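By way of illustration only, the following C sketch shows one way a host processor's dispatch logic might recognize coprocessor instructions in an instruction stream and issue them over a coprocessor interconnect, as described above. All names (is_coprocessor_insn, coprocessor_issue, and so on) and the opcode encoding are hypothetical; the embodiments do not prescribe any particular encoding or bus protocol.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical opcode-class check: in this sketch, coprocessor
     * instructions are distinguished by a reserved high bit. */
    static bool is_coprocessor_insn(uint32_t insn)
    {
        return (insn & 0x80000000u) != 0;
    }

    /* Stubs standing in for the coprocessor interconnect and the host
     * execution pipeline; neither is prescribed by the disclosure. */
    static void coprocessor_issue(uint32_t insn) { (void)insn; }
    static void execute_locally(uint32_t insn)   { (void)insn; }

    /* Dispatch loop: general instructions execute on the host processor,
     * while recognized coprocessor instructions are issued on the
     * coprocessor bus for the attached coprocessor to execute. */
    static void dispatch(const uint32_t *stream, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (is_coprocessor_insn(stream[i]))
                coprocessor_issue(stream[i]);
            else
                execute_locally(stream[i]);
        }
    }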
Processors 1470 and 1480 are shown including integrated memory controller (IMC) units 1472 and 1482, respectively. Processor 1470 also includes as part of its bus controller units point-to-point (P-P) interfaces 1476 and 1478; similarly, second processor 1480 includes P-P interfaces 1486 and 1488. Processors 1470, 1480 may exchange information via a point-to-point (P-P) interface 1450 using P-P interface circuits 1478, 1488. As shown in
Processors 1470, 1480 may each exchange information with a chipset 1490 via individual P-P interfaces 1452, 1454 using point-to-point interface circuits 1476, 1494, 1486, 1498. Chipset 1490 may optionally exchange information with the coprocessor 1438 via a high-performance interface 1439. In one embodiment, the coprocessor 1438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression and/or decompression engine, graphics processor, GPGPU, embedded processor, or the like.
A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.
Chipset 1490 may be coupled to a first bus 1416 via an interface 1496. In one embodiment, first bus 1416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.
As shown in
In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.
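As a purely illustrative example, an instruction converter of the table-driven variety might map each source opcode to a short sequence of target opcodes, in the spirit of static binary translation. The following C sketch assumes single-byte opcodes and hypothetical names (translation_t, xlat_table, convert); a real converter must also handle operands, control flow, and exceptions.

    #include <stddef.h>
    #include <stdint.h>

    /* Up to four target opcodes per source opcode; purely illustrative. */
    typedef struct {
        uint8_t target_ops[4];
        size_t  count;
    } translation_t;

    /* Hypothetical mapping from single-byte source opcodes to target
     * opcodes; entry 0x02 shows a one-to-many translation. Unmapped
     * opcodes have count zero and emit nothing in this sketch. */
    static const translation_t xlat_table[256] = {
        [0x01] = { { 0x90 }, 1 },
        [0x02] = { { 0x10, 0x11 }, 2 },
    };

    /* Convert n source opcodes into target opcodes; returns the number
     * of target bytes emitted (dst must hold at least 4 * n bytes). */
    static size_t convert(const uint8_t *src, size_t n, uint8_t *dst)
    {
        size_t out = 0;
        for (size_t i = 0; i < n; i++) {
            const translation_t *t = &xlat_table[src[i]];
            for (size_t j = 0; j < t->count; j++)
                dst[out++] = t->target_ops[j];
        }
        return out;
    }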
A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language (HDL) or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In some implementations, such data may be stored in a database file format such as Graphic Data System II (GDS II), Open Artwork System Interchange Standard (OASIS), or similar format.
In some implementations, software based hardware models, and HDL and other functional description language objects can include register transfer language (RTL) files, among other examples. Such objects can be machine-parsable such that a design tool can accept the HDL object (or model), parse the HDL object for attributes of the described hardware, and determine a physical circuit and/or on-chip layout from the object. The output of the design tool can be used to manufacture the physical device. For instance, a design tool can determine configurations of various hardware and/or firmware elements from the HDL object, such as bus widths, registers (including sizes and types), memory blocks, physical link paths, and fabric topologies, among other attributes that would be implemented in order to realize the system modeled in the HDL object. Design tools can include tools for determining the topology and fabric configurations of systems on chip (SoC) and other hardware devices. In some instances, the HDL object can be used as the basis for developing models and design files that can be used by manufacturing equipment to manufacture the described hardware. Indeed, an HDL object itself can be provided as an input to manufacturing system software to cause the manufacture of the described hardware.
In any representation of the design, the data representing the design may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage device, such as a disc, may be the machine readable medium that stores information transmitted via an optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.
In various embodiments, a medium storing a representation of the design may be provided to a manufacturing system (e.g., a semiconductor manufacturing system capable of manufacturing an integrated circuit and/or related components). The design representation may instruct the system to manufacture a device capable of performing any combination of the functions described above. For example, the design representation may instruct the system regarding which components to manufacture, how the components should be coupled together, where the components should be placed on the device, and/or regarding other suitable specifications regarding the device to be manufactured.
Thus, one or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, often referred to as “IP cores,” may be stored on a non-transitory tangible machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that manufacture the logic or processor.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code, such as code 1430 illustrated in
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In various embodiments, the language may be a compiled or interpreted language.
The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable (or otherwise accessible) by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; and other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media that may receive information therefrom.
Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include, but is not limited to, any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), such as floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROM), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
Logic may be used to implement any of the functionality of the various components such as branch prediction unit 100, primary branch predictor 102, secondary branch predictor 104, branch prediction logic 112, branch prediction logic 122, FHIT table 124, EICO tables 126, table update logic 128, misprediction count array 130, any other component described herein, or any subcomponent of any of these components. “Logic” may refer to hardware, firmware, software and/or combinations of each to perform one or more functions. As an example, logic may include hardware, such as a microcontroller or processor, associated with a non-transitory medium to store code adapted to be executed by the microcontroller or processor. Therefore, reference to logic, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of logic refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term logic (in this example) may refer to the combination of the hardware and the non-transitory medium. In various embodiments, logic may include a microprocessor or other processing element operable to execute software instructions, discrete logic such as an application specific integrated circuit (ASIC), a programmed logic device such as a field programmable gate array (FPGA), a memory device containing instructions, combinations of logic devices (e.g., as would be found on a printed circuit board), or other suitable hardware and/or software. Logic may include one or more gates or other circuit components, which may be implemented by, e.g., transistors. In some embodiments, logic may also be fully embodied as software. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on a non-transitory computer readable storage medium. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. Often, logic boundaries that are illustrated as separate in practice vary and potentially overlap. For example, first and second logic may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware.
Use of the phrase ‘to’ or ‘configured to,’ in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still ‘configured to’ perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate ‘configured to’ provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term ‘configured to’ does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.
Furthermore, use of the phrases ‘capable of/to’ and/or ‘operable to,’ in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of ‘to,’ ‘capable to,’ or ‘operable to,’ in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.
A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represent binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as the binary value 1010 or the hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.
Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.
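As a purely illustrative example, the following C snippet encodes the equivalent value representations and the reset/set convention described above; the assignment of the high logical value to the default state is arbitrary, per the example.

    #include <assert.h>

    int main(void)
    {
        /* The decimal number ten and its hexadecimal representation A;
         * in binary the same value is 1010. */
        int dec = 10;
        int hex = 0xA;
        assert(dec == hex);

        /* Default ("reset") state as the high logical value and updated
         * ("set") state as the low logical value, per the example. */
        int state = 1;  /* reset: default/initial state */
        state = 0;      /* set: updated state */
        return state;
    }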
In at least one embodiment, a processor comprises a branch predictor to generate, in association with a program loop, a frozen history vector comprising a snapshot of a branch history vector; track a current iteration of the program loop; and provide a prediction for a branch instruction associated with the program loop, the prediction based on the frozen history vector and the current iteration of the program loop.
In an embodiment, the branch predictor is to maintain a data structure with a plurality of entries, an entry of the data structure to comprise an instruction pointer of the branch instruction, the current iteration of the program loop, and a dominant loop exit iteration count associated with the program loop. In an embodiment, the branch predictor is to predict a direction for the branch instruction based on whether the current iteration equals the dominant loop exit iteration count. In an embodiment, the branch predictor is to generate a plurality of frozen history vectors in association with the program loop, wherein each frozen history vector has a different length; and store at least one loop exit iteration count in association with each frozen history vector. In an embodiment, the branch predictor is to track the number of times a loop exit outcome coincides with a loop iteration count associated with the program loop. In an embodiment, the branch predictor is to detect an entry into a program loop based on whether a branch of the branch instruction is taken or not taken. In an embodiment, the branch predictor is to maintain a data structure associated with an instruction pointer of the branch instruction and the frozen history vector, wherein the data structure is to store, for a plurality of loop exit iteration counts, the number of times a loop exit outcome was detected for each loop exit iteration count. In an embodiment, the branch predictor is a secondary branch predictor that is to provide the prediction for the branch instruction in response to a determination that a primary branch predictor is not to provide a prediction for the branch instruction. In an embodiment, the processor is to determine that the secondary branch predictor is to provide the prediction based at least in part on an indication of the number of times that the primary branch predictor has mispredicted the branch instruction. In an embodiment, the processor is to determine that the secondary branch predictor is to provide the prediction based at least in part on a saturation of a misprediction counter value associated with the branch instruction.
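By way of illustration only, the following C sketch models the scheme described above: on loop entry the branch history vector is frozen, a per-branch entry tracks the current iteration and the dominant loop exit iteration count, the secondary predictor takes over once a saturating misprediction counter for the primary predictor saturates, and the loop branch is predicted taken on every iteration except the one matching the dominant exit count. All names and field widths (loop_entry_t, MISPREDICT_SAT, and so on) are hypothetical and not mandated by the embodiments.

    #include <stdbool.h>
    #include <stdint.h>

    #define MISPREDICT_SAT 15u  /* assumed 4-bit saturating counter */

    /* One entry of the (hypothetical) prediction data structure. */
    typedef struct {
        uint64_t branch_ip;      /* instruction pointer of the loop branch */
        uint64_t frozen_history; /* snapshot of the branch history vector  */
        uint32_t current_iter;   /* iteration count of the in-flight loop  */
        uint32_t dominant_exit;  /* iteration count at which the loop most
                                    often exits under this frozen history  */
        uint8_t  mispredicts;    /* saturating count of primary-predictor
                                    mispredictions for this branch         */
    } loop_entry_t;

    /* The secondary predictor provides the prediction only once the
     * primary predictor's misprediction counter has saturated. */
    static bool use_secondary(const loop_entry_t *e)
    {
        return e->mispredicts >= MISPREDICT_SAT;
    }

    /* On detecting entry into the loop (e.g., the first taken instance
     * of the branch), freeze the history vector and reset the iteration. */
    static void on_loop_entry(loop_entry_t *e, uint64_t history_vector)
    {
        e->frozen_history = history_vector;
        e->current_iter = 0;
    }

    /* Predict the loop branch taken (stay in the loop) on every
     * iteration except the one matching the dominant exit count. */
    static bool predict_loop_branch(const loop_entry_t *e)
    {
        return e->current_iter != e->dominant_exit;
    }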
In at least one embodiment, a method comprises generating, in association with a program loop, a frozen history vector comprising a snapshot of a branch history vector; tracking a current iteration of the program loop; and providing a prediction for a branch instruction associated with the program loop, the prediction based on the frozen history vector and the current iteration of the program loop.
In an embodiment, the method further comprises maintaining a data structure with a plurality of entries, an entry of the data structure to comprise an instruction pointer of the branch instruction, the current iteration of the program loop, and a dominant loop exit iteration count associated with the program loop. In an embodiment, the method further comprises predicting a direction for the branch instruction based on whether the current iteration equals the dominant loop exit iteration count. In an embodiment, the method further comprises generating a plurality of frozen history vectors in association with the program loop, wherein each frozen history vector has a different length; and storing at least one loop exit iteration count in association with each frozen history vector. In an embodiment, the method further comprises tracking the number of times a loop exit outcome coincides with a loop iteration count associated with the program loop. In an embodiment, the method further comprises detecting an entry into a program loop based on whether a branch of the branch instruction is taken or not taken. In an embodiment, the method further comprises maintaining a data structure associated with an instruction pointer of the branch instruction and the frozen history vector, wherein the data structure is to store, for a plurality of loop exit iteration counts, the number of times a loop exit outcome was detected for each loop exit iteration count. In an embodiment, the method further comprises providing, by a secondary branch predictor, the prediction for the branch instruction in response to a determination that a primary branch predictor is not to provide a prediction for the branch instruction. In an embodiment, the method further comprises determining that the secondary branch predictor is to provide the prediction based at least in part on an indication of the number of times that the primary branch predictor has mispredicted the branch instruction. In an embodiment, the method further comprises determining that the secondary branch predictor is to provide the prediction based at least in part on a saturation of a misprediction counter value associated with the branch instruction.
In at least one embodiment, a system comprises means for generating, in association with a program loop, a frozen history vector comprising a snapshot of a branch history vector; means for tracking a current iteration of the program loop; and means for providing a prediction for a branch instruction associated with the program loop, the prediction based on the frozen history vector and the current iteration of the program loop.
In an embodiment, the system further comprises means for maintaining a data structure with a plurality of entries, an entry of the data structure to comprise an instruction pointer of the branch instruction, the current iteration of the program loop, and a dominant loop exit iteration count associated with the program loop. In an embodiment, the system further comprises means for predicting a direction for the branch instruction based on whether the current iteration equals the dominant loop exit iteration count. In an embodiment, the system further comprises means for generating a plurality of frozen history vectors in association with the program loop, wherein each frozen history vector has a different length; and means for storing at least one loop exit iteration count in association with each frozen history vector. In an embodiment, the system further comprises means for tracking the number of times a loop exit outcome coincides with a loop iteration count associated with the program loop.
In at least one embodiment, a non-transitory machine readable storage medium has instructions stored thereon, the instructions when executed by a machine to cause the machine to generate, in association with a program loop, a frozen history vector comprising a snapshot of a branch history vector; track a current iteration of the program loop; and provide a prediction for a branch instruction associated with the program loop, the prediction based on the frozen history vector and the current iteration of the program loop.
In an embodiment, the instructions when executed to further cause the machine to maintain a data structure with a plurality of entries, an entry of the data structure to comprise an instruction pointer of the branch instruction, the current iteration of the program loop, and a dominant loop exit iteration count associated with the program loop. In an embodiment, the instructions when executed to further cause the machine to predict a direction for the branch instruction based on whether the current iteration equals the dominant loop exit iteration count. In an embodiment, the instructions when executed to further cause the machine to generate a plurality of frozen history vectors in association with the program loop, wherein each frozen history vector has a different length; and store at least one loop exit iteration count in association with each frozen history vector. In an embodiment, the instructions when executed to further cause the machine to track the number of times a loop exit outcome coincides with a loop iteration count associated with the program loop.
In at least one embodiment, a system comprises a processor comprising a primary branch predictor to provide predictions for a plurality of branch instructions; and a secondary branch predictor to generate, in association with a program loop, a frozen history vector comprising a snapshot of a branch history vector; track a current iteration of the program loop; and provide a prediction for a branch instruction associated with the program loop, the prediction based on the frozen history vector and the current iteration of the program loop.
In an embodiment, the processor further comprises a memory to store a data structure comprising a plurality of entries, an entry of the data structure to comprise an instruction pointer of the branch instruction, the current iteration of the program loop, and a dominant loop exit iteration count associated with the program loop. In an embodiment, the secondary branch predictor is to predict a direction for the branch instruction based on whether the current iteration equals the dominant loop exit iteration count. In an embodiment, the secondary branch predictor is to generate a plurality of frozen history vectors in association with the program loop, wherein each frozen history vector has a different length; and store at least one loop exit iteration count in association with each frozen history vector. In an embodiment, the secondary branch predictor is to track the number of times a loop exit outcome coincides with a loop iteration count associated with the program loop.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of “embodiment” and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.