The technology of this disclosure relates to processing of instructions for execution in a microprocessor (“processor”), and, in particular, to prefetching instructions in a processor.
Processor-based devices perform computational tasks for a wide variety of applications. A conventional processor-based device includes a processor, often referred to as a central processing unit (CPU), that executes computer program instructions to perform data-based operations and generate a result. The result may then be stored using a memory, provided as an output to an input/output (“I/O”) device, or made available (i.e., communicated) as an input value to another instruction executed by the processor, as non-limiting examples.
To increase processor performance, a processor may employ a technique known as instruction pipelining, whereby the throughput of computer program instructions being executed may be increased by dividing the processing of each instruction into a series of steps which are then executed within an execution pipeline that is composed of multiple stages. Optimal processor performance may be achieved if the processor can fetch the proper instructions from memory quickly enough to allow all stages in the execution pipeline to process instructions concurrently and sequentially as the instructions are ordered in the execution pipeline.
The speed with which a processor can fetch the instructions may be limited by the memory access latency of the processor. “Memory access latency” refers to an interval between the time the processor initiates a memory access request for data (i.e., to fetch an instruction for execution), and the time the processor actually receives the requested data. Memory access latency may negatively affect processor performance if the time interval is large enough that the processor is forced to stall further execution of instructions while waiting for the memory access request to be fulfilled. The effects of memory access latency may be minimized through the use of cache memory, also referred to simply as “cache,” which is a memory device that has a smaller capacity than system memory, but that can be accessed faster by a processor due to the type of memory used and/or the physical location of the cache relative to the processor. The cache can be used to reduce memory access latency by storing copies of data retrieved from frequently accessed memory locations in the system memory or from another, higher-level cache (i.e., a cache further from the processor).
Modern processor-based devices employ a memory hierarchy that includes system memory along with multiple levels of cache memory located between the system memory and the processor. Levels of cache memory that are closer to the processor (i.e., lower-level caches) have faster access times and smaller storage capacities, while levels of cache memory that are further from the processor have slower access times and larger storage capacities. When a memory access request is received from the processor, the first level cache (i.e., the smallest, fastest cache that is located closest to the processor) is queried to see if the requested data is stored therein. If not, the memory access request is forwarded to the next higher cache level in the memory hierarchy (and possibly to the system memory), which may result in increased memory access latency.
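By way of illustration only, the level-by-level query described above can be modeled in software. The following simplified sketch is not part of the disclosed hardware; all names, latency values, and capacities are illustrative assumptions chosen to show how a miss at one level forwards the request to the next level and accumulates latency.

```python
class CacheLevel:
    """Illustrative model of one level in a cache hierarchy."""
    def __init__(self, name, latency_cycles, capacity):
        self.name = name
        self.latency = latency_cycles    # access time for this level
        self.capacity = capacity         # smaller for lower (closer) levels
        self.lines = set()               # addresses of resident cache lines

    def lookup(self, line_addr):
        return line_addr in self.lines

def access(hierarchy, system_memory_latency, line_addr):
    """Query each cache level in order, closest first; on a miss at every
    level, forward the request to system memory. Returns the total access
    latency and the name of the level that serviced the request."""
    total = 0
    for level in hierarchy:
        total += level.latency
        if level.lookup(line_addr):
            return total, level.name
    return total + system_memory_latency, "system memory"

# Lower-level caches are smaller and faster; higher levels larger and slower.
l1 = CacheLevel("L1", latency_cycles=4, capacity=512)
l2 = CacheLevel("L2", latency_cycles=14, capacity=8192)
l1.lines.add(0x40)
lat_hit, src_hit = access([l1, l2], system_memory_latency=200, line_addr=0x40)
lat_miss, src_miss = access([l1, l2], system_memory_latency=200, line_addr=0x80)
```

As the model shows, a request that misses every cache level pays the latency of each level queried plus the system memory latency, which is why misses in lower-level caches increase memory access latency.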
The memory hierarchy of the processor may include an instruction cache memory, in which frequently executed instructions may be stored and subsequently retrieved for execution. However, because processors generally fetch instructions in program order, a request to fetch an instruction that results in a miss in the instruction cache may cause a stall in the processor's execution pipeline until the instruction can be retrieved from the processor's memory subsystem.
Aspects disclosed herein include performing storage-free instruction cache hit prediction in a processor. In one exemplary aspect, the processor includes a branch predictor circuit that comprises an instruction cache hit prediction circuit. The instruction cache hit prediction circuit is configured to detect that an access by the branch predictor circuit to a branch target buffer (BTB) for an instruction results in a miss on the BTB (e.g., a miss on each BTB level of a plurality of BTB levels of the BTB, as a non-limiting example). In response, the instruction cache hit prediction circuit generates an instruction cache prefetch request for the instruction, and transmits the instruction cache prefetch request to a prefetcher circuit of the processor, which may then perform a prefetch into an instruction cache memory (e.g., from a Level 2 (L2) cache memory of the processor). By leveraging the larger instruction history provided by the BTB relative to the instruction cache memory, the instruction cache hit prediction circuit can predict hits and misses on the instruction cache memory and preemptively issue prefetch requests in response to predicted misses, without requiring the tracking or storage of additional instruction metadata.
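The basic prediction mechanism described above can be summarized in a brief software model. This sketch is purely illustrative and not part of the disclosed hardware; the function name, the representation of BTB levels as sets of instruction addresses, and the prefetch queue are all assumptions made for exposition.

```python
def predict_and_prefetch(btb_levels, prefetch_queue, instr_addr):
    """Illustrative model: because the BTB covers a larger instruction
    history than the instruction cache, a miss on every BTB level suggests
    the instruction is unlikely to reside in the instruction cache, so an
    instruction cache prefetch request is generated. Returns True when a
    prefetch request was issued."""
    if all(instr_addr not in level for level in btb_levels):
        prefetch_queue.append(instr_addr)  # instruction cache prefetch request
        return True
    return False

# Two illustrative BTB levels, each holding addresses of known instructions.
btb = [{0x100}, {0x200}]
requests = []
issued = predict_and_prefetch(btb, requests, 0x300)   # miss on both levels
skipped = predict_and_prefetch(btb, requests, 0x100)  # hit on a BTB level
```

Note that no additional per-instruction metadata is stored: the prediction is derived entirely from the state the BTB already maintains, which is the "storage-free" property.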
In some aspects, the BTB may provide an instruction cache hit counter and an instruction cache miss counter for each BTB level of the plurality of BTB levels of the BTB. The instruction cache hit prediction circuit in such aspects may be configured to detect whether an access by the branch predictor circuit to a BTB level for an instruction in the instruction stream results in a hit on the instruction cache memory. If so, the instruction cache hit prediction circuit increments the instruction cache hit counter for that BTB level, and otherwise increments the instruction cache miss counter for that BTB level. The instruction cache hit prediction circuit subsequently determines a ratio of a value of the instruction cache hit counter for the BTB level to a value of the instruction cache miss counter for the BTB level. In the case of a miss on the BTB level, if the ratio exceeds a miss rate threshold, the instruction cache hit prediction circuit generates an instruction cache prefetch request for the instruction, and transmits the instruction cache prefetch request to the prefetcher circuit. In some aspects, the instruction cache hit prediction circuit may subsequently reset the instruction cache hit counter and the instruction cache miss counter for the BTB level (e.g., after expiration of a predefined time interval or after execution of a predefined number of instructions, as non-limiting examples).
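The per-BTB-level counter scheme above can be sketched as follows. This is a simplified software model, not the disclosed circuit; the class and function names and the threshold value are illustrative assumptions, and the ratio test mirrors the comparison described in this aspect (hit counter value to miss counter value, compared against the miss rate threshold).

```python
class BtbLevelCounters:
    """Illustrative per-BTB-level instruction cache hit/miss counters."""
    def __init__(self):
        self.icache_hits = 0
        self.icache_misses = 0

    def reset(self):
        # e.g., after a predefined time interval or instruction count
        self.icache_hits = 0
        self.icache_misses = 0

def record_access(counters, icache_hit):
    """On an access to this BTB level, record whether the corresponding
    instruction cache access hit or missed."""
    if icache_hit:
        counters.icache_hits += 1
    else:
        counters.icache_misses += 1

def should_prefetch_on_btb_miss(counters, miss_rate_threshold):
    """On a miss on this BTB level, compare the ratio of the hit counter
    to the miss counter against the threshold to decide whether to
    generate an instruction cache prefetch request."""
    if counters.icache_misses == 0:
        return False  # no observed misses; ratio undefined
    ratio = counters.icache_hits / counters.icache_misses
    return ratio > miss_rate_threshold

ctr = BtbLevelCounters()
for _ in range(9):
    record_access(ctr, icache_hit=True)
record_access(ctr, icache_hit=False)   # ratio is now 9 hits to 1 miss
prefetch = should_prefetch_on_btb_miss(ctr, miss_rate_threshold=2.0)
```

In a hardware realization these would be small saturating counters per BTB level rather than unbounded integers, and the division would typically be replaced by a shift-and-compare; the model above is only meant to convey the decision logic.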
In this regard, in another exemplary aspect, a processor for performing storage-free instruction cache hit prediction is disclosed. The processor comprises an instruction processing circuit configured to process an instruction stream comprising a plurality of instructions, a prefetcher circuit, and a branch predictor circuit comprising a BTB and an instruction cache hit prediction circuit. The instruction cache hit prediction circuit is configured to detect that a first access by the branch predictor circuit to the BTB for a first instruction in the instruction stream results in a miss on the BTB. The instruction cache hit prediction circuit is further configured to, responsive to detecting that the first access by the branch predictor circuit to the BTB for the first instruction in the instruction stream results in the miss on the BTB, generate a first instruction cache prefetch request for the first instruction. The instruction cache hit prediction circuit is also configured to transmit the first instruction cache prefetch request to the prefetcher circuit.
In another exemplary aspect, a method for performing storage-free instruction cache hit prediction is disclosed. The method comprises detecting, by an instruction cache hit prediction circuit of a processor, that a first access by a branch predictor circuit of the processor to a BTB for a first instruction in an instruction stream results in a miss on the BTB. The method further comprises, responsive to detecting that the first access by the branch predictor circuit to the BTB for the first instruction in the instruction stream results in the miss on the BTB, generating, by the instruction cache hit prediction circuit, a first instruction cache prefetch request for the first instruction. The method also comprises transmitting, by the instruction cache hit prediction circuit, the first instruction cache prefetch request to a prefetcher circuit of the processor.
In another exemplary aspect, a non-transitory computer-readable medium is disclosed. The non-transitory computer-readable medium stores computer-executable instructions that, when executed, cause a processor to detect that a first access by a branch predictor circuit of the processor to a BTB for a first instruction in an instruction stream results in a miss on the BTB. The computer-executable instructions further cause the processor to, responsive to detecting that the first access by the branch predictor circuit to the BTB for the first instruction in the instruction stream results in the miss on the BTB, generate a first instruction cache prefetch request for the first instruction. The computer-executable instructions also cause the processor to transmit the first instruction cache prefetch request to a prefetcher circuit of the processor.
Those skilled in the art will appreciate the scope of the present disclosure and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.
The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.
In this regard,
The fetch circuit 110 in the example of
With continuing reference to
The instruction processing circuit 104 in the processor 102 in
Also, in the instruction processing circuit 104, a scheduler circuit (captioned as “SCHED CIRCUIT” in
With continuing reference to
To decouple branch prediction from instruction retrieval operations, the branch predictor circuit 128 of
The instruction processing circuit 104 further includes a prefetcher circuit 138 that is configured to fetch instructions or data from a higher-level cache memory such as the L2 cache memory 114 (or from the instruction memory 108) and place them into a lower-level cache memory such as the instruction cache memory 112 before the instructions or data are actually requested by the processor. In some aspects, the prefetcher circuit 138 tracks memory access patterns to identify correlations between a current memory access request and previous memory access requests or processor activities. Once the prefetcher circuit 138 correlates a previously accessed memory address (i.e., the "trigger") with a memory address being currently accessed (i.e., the "target"), subsequent occurrences of memory access requests to the trigger address will cause the prefetcher circuit 138 to retrieve the data stored at the target memory address. The prefetcher circuit 138 thus can reduce the memory access latency that would otherwise result from a miss on a lower-level cache memory.
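The trigger/target correlation behavior described above can be illustrated with a minimal software model. This sketch is not the disclosed prefetcher circuit 138; the single-entry-per-trigger table and the learning rule (correlate each address with the address accessed immediately before it) are simplifying assumptions made for exposition.

```python
class CorrelationPrefetcher:
    """Illustrative correlating prefetcher: learns trigger -> target pairs
    from the observed access stream and issues a prefetch for the target
    whenever its trigger address is accessed again."""
    def __init__(self):
        self.table = {}        # trigger address -> correlated target address
        self.last_addr = None  # most recently accessed address

    def on_access(self, addr):
        prefetches = []
        if addr in self.table:
            # This address is a known trigger; prefetch its target.
            prefetches.append(self.table[addr])
        if self.last_addr is not None:
            # Learn: the previous access is a trigger for this one.
            self.table[self.last_addr] = addr
        self.last_addr = addr
        return prefetches

pf = CorrelationPrefetcher()
pf.on_access(0x10)                 # nothing known yet
pf.on_access(0x20)                 # learns 0x10 -> 0x20
hits = pf.on_access(0x10)          # trigger recognized; prefetch 0x20
```

A real prefetcher would bound the table size, tag entries, and handle cache-line granularity; the model only conveys how recognizing a trigger address leads to retrieving the data at the correlated target address before it is requested.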
As noted above, the instruction cache memory 112 stores frequently executed instructions for subsequent retrieval and execution. However, because the processor 102 generally fetches the instructions 106 in program order, a request to fetch an instruction that results in a miss in the instruction cache memory 112 may cause a stall in the processor 102 until the instruction can be retrieved from the instruction memory 108. Accordingly, in this regard, the branch predictor circuit 128 provides an instruction cache hit prediction circuit (captioned as “INSTR CACHE HIT PREDICTION CIRCUIT” in
In exemplary operation, the instruction cache hit prediction circuit 140 detects that an access by the branch predictor circuit 128 to the BTB 134 for an instruction (not shown) results in a miss on the BTB 134 (e.g., a miss on each BTB level of the plurality of BTB levels 136(0)-136(B), as a non-limiting example). In response, the instruction cache hit prediction circuit 140 generates an instruction cache prefetch request (captioned as “INSTR CACHE PREFETCH REQUEST” in
To illustrate exemplary elements of and operations performed by the instruction cache hit prediction circuit 140 of
Some aspects may provide that the instruction cache hit prediction circuit 140, in addition to or as an alternative to generating the instruction cache prefetch request 142 in response to detecting a miss on the BTB 134, may determine whether to trigger a prefetch based on a miss rate on the instruction cache memory 112 of
In the case of a miss on the BTB level 136(0), the instruction cache hit prediction circuit 140 may subsequently determine a ratio 214 of a value of the instruction cache hit counter 208(0) for the BTB level 136(0) to a value of the instruction cache miss counter 210(0) for the BTB level 136(0). If the ratio 214 exceeds a miss rate threshold 216, the instruction cache hit prediction circuit 140 generates an instruction cache prefetch request (captioned as “INSTR CACHE PREFETCH REQUEST” in
To illustrate exemplary operations performed by the instruction cache hit prediction circuit 140 of
In response to detecting that the first access 206 by the branch predictor circuit 128 to the BTB 134 for the first instruction 202 in the instruction stream 200 results in the miss on the BTB 134, the instruction cache hit prediction circuit 140 performs a series of operations (block 306). The instruction cache hit prediction circuit 140 generates a first instruction cache prefetch request (e.g., the instruction cache prefetch request 142 of
In response to detecting the second access 212, the instruction cache hit prediction circuit 140 performs a series of operations (block 404). The instruction cache hit prediction circuit 140 determines whether the second access 212 by the branch predictor circuit 128 to the BTB level 136(0) for the second instruction 204 in the instruction stream 200 results in a hit on the instruction cache memory 112 (block 406). If so, the instruction cache hit prediction circuit 140 increments an instruction cache hit counter (e.g., the instruction cache hit counter 208(0) of
Referring now to
In this example, the processor 502 represents one or more general-purpose processing circuits, such as a microprocessor, central processing unit, or the like. The processor 502 is configured to execute processing logic in instructions for performing the operations and steps discussed herein. In this example, the processor 502 includes an instruction cache 508 for temporary, fast-access storage of instructions accessible by the instruction processing circuit 504. Fetched or prefetched instructions from a memory, such as from the system memory 510 over a system bus 512, are stored in the instruction cache 508. The instruction processing circuit 504 is configured to process instructions fetched into the instruction cache 508 for execution.
The processor 502 and the system memory 510 are coupled to the system bus 512 and can intercouple peripheral devices included in the processor-based system 500. As is well known, the processor 502 communicates with these other devices by exchanging address, control, and data information over the system bus 512. For example, the processor 502 can communicate bus transaction requests to a memory controller 514 in the system memory 510 as an example of a slave device. Although not illustrated in
Other devices can be connected to the system bus 512. As illustrated in
The processor-based system 500 in
While the computer-readable medium 532 is shown in an exemplary embodiment to be a single medium, the term "computer-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that stores the one or more sets of instructions. The term "computer-readable medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the processing device and that causes the processing device to perform any one or more of the methodologies of the embodiments disclosed herein. The term "computer-readable medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
The embodiments disclosed herein include various steps. The steps of the embodiments disclosed herein may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware and software.
The embodiments disclosed herein may be provided as a computer program product, or software, that may include a machine-readable medium (or computer-readable medium) having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the embodiments disclosed herein. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes: a machine-readable storage medium (e.g., ROM, random access memory (“RAM”), a magnetic disk storage medium, an optical storage medium, flash memory, etc.); and the like.
Unless specifically stated otherwise and as apparent from the previous discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "determining," "displaying," or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatuses to perform the required method steps. The required structure for a variety of these systems will appear from the description above. In addition, the embodiments described herein are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the embodiments as described herein.
Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the embodiments disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The components of the systems described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends on the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Furthermore, a controller may be a processor. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The embodiments disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in RAM, flash memory, ROM, Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.
It is also noted that the operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined. Those of skill in the art will also understand that information and signals may be represented using any of a variety of technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips, which may be referenced throughout the above description, may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps, or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order, it is in no way intended that any particular order be inferred.
It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the spirit or scope of the invention. Since modifications, combinations, sub-combinations and variations of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and their equivalents.