Efficient Multiple-Table Reference Prediction Mechanism

Abstract
A method and an apparatus for enabling a prefetch engine to detect and support hardware prefetching of streams with different access patterns in received accesses. Multiple (simple) history tables are provided within (or associated with) the prefetch engine. Each of the multiple tables is utilized to detect different access patterns. The tables are indexed by different parts of the address and are accessed in a preset order to reduce the interference between different patterns. When an address does not fit the patterns of a first table, the address is passed to the next table to be checked for a match of different patterns. In this manner, different patterns may be detected at different tables within a single prefetch engine.
Description

BRIEF DESCRIPTION OF THE DRAWINGS

The invention itself, as well as a preferred mode of use, further objects, and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:



FIG. 1 is a block diagram of a data processing system designed with prefetch engines, within which embodiments of the present invention may be practiced;



FIG. 2 is a block diagram depicting internal components of the prefetch engine in accordance with one embodiment of the present invention;



FIG. 3 is a logic flow diagram illustrating the processing of stream patterns through a prefetch engine with two history tables in accordance with one embodiment of the invention;



FIG. 4 illustrates a stride access detection scheme utilizing the multiple-table algorithm of stride pattern detection in accordance with one embodiment of the present invention; and



FIG. 5 is a graphical chart showing the execution speed comparison between conventional stream pattern detection with a single history table and the prefetching techniques utilizing multiple history tables on an application called PTRANS, in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT

The present invention provides a method and an apparatus for enabling a prefetch engine to detect and support hardware prefetching with different patterns, such as both unit and non-unit stride streams, in both virtual and physical address accesses. Multiple (simple) history tables are provided within (or associated with) the prefetch engine. Each of the multiple tables is utilized to detect different access patterns. The tables are indexed by different parts of the physical address and are accessed in a preset order to reduce the interference between different patterns. When a detected stream does not fit the patterns of a first table, the stream is passed to the second and subsequent tables to be checked against their respective patterns, until a match is found or the last table in the sequence has been checked. In this manner, multiple different patterns may be detected utilizing the multiple tables.


Referring now to the drawings and in particular to FIG. 1, there is depicted a block diagram of an example data processing system, within which the various features of the invention may be implemented, in accordance with one embodiment of the present invention. Data processing system 100 comprises at least one central processing unit (CPU) 102 which is connected via a series of buses/channels (not specifically shown) to a memory hierarchy that includes multiple levels of caching: L1 cache 104, L2 cache 106 and memory 108. CPU 102 includes various execution units, registers, buffers, and other functional units, which are all formed by integrated circuitry. In one embodiment of the present invention, CPU 102 is one of the PowerPC™ lines of microprocessors, which operates according to reduced instruction set computing (RISC) techniques. CPU 102 communicates with each of the above devices within the memory hierarchy by various means, including a bus or a direct channel.


Also illustrated within CPU 102, L1 cache 104, and L2 cache 106 of FIG. 1 are respective prefetch engines (PE) 103, 105, 107, within which logic is provided to enable the various features of the invention, as described in detail below. Each PE 103/105/107 predicts future data references and issues prefetch requests for the predicted future data references. As shown, PE 103 may be included within (or associated with) the CPU, and in one embodiment, the CPU's PE 103 may issue prefetch requests to all levels of caches. Alternatively, in another embodiment, different PEs 105/107 may be included within (or associated with) each individual cache, issuing prefetch requests for only the associated cache. This embodiment applies to all three options illustrated by FIG. 1.


As utilized herein, the terms prefetch/prefetching refer to the method by which data that is stored in one memory location of the memory hierarchy (i.e., lower level caches 106 or memory 108) is transferred to a higher level memory location that is closer to the CPU (i.e., yields lower access latency), before the data is actually needed/demanded by the processor. More specifically, prefetching as described hereinafter refers to the early retrieval of data from one of the lower level caches/memory to a higher level cache or the CPU before the CPU 102 issues a demand for the specific data being returned. Lower level caches may comprise additional levels, which would then be sequentially numbered, e.g., L3, L4. In addition to the illustrated memory hierarchy, data processing system 100 may also comprise additional storage devices that form a part of the memory hierarchy from the perspective of CPU 102. The storage device may be one or more electronic storage media, such as a floppy disk, hard drive, CD-ROM, or digital versatile disk (DVD). The storage device may also be the cache, memory, and storage media of another CPU in a multiprocessor system.


Those skilled in the art will further appreciate that there are other components that might be utilized in conjunction with those shown in the block diagram of FIG. 1; for example, cache controller(s) and a memory controller may be utilized as interfaces between lower level caches 104/106 and memory device 108 and CPU 102, respectively. While a particular configuration of data processing system 100 is illustrated and described, it is understood that other configurations may be possible, utilizing similarly featured components within a processor to achieve the same functional results, and it is contemplated that all such configurations fall within the scope of the present invention.


Also, while an illustrative embodiment of the present invention has been, and will continue to be, described in the context of a fully functional data processing system, those skilled in the art will appreciate that software aspects of an illustrative embodiment of the present invention are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the present invention applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution.


Turning now to FIG. 2, there is illustrated a block diagram of a PE (e.g., PE 103/105/107), which is configured with operating logic to enable the features of the invention, according to the illustrative embodiment of the present invention. In the depicted embodiment, PE 103 comprises two main parts, namely reference prediction unit 201 and prefetch request issue unit 211. FIG. 2 shows a conventional arrangement with one reference prediction table 207 and one active streams table 217. Within the reference prediction table 207 are one or more entries of historical data 205 of previous references. Within the active streams table 217 are one or more entries of active streams information 215.


Also illustrated within reference prediction table 207 is reference/stream prediction logic 203, which is utilized to predict future references based on issued references. Further, within active streams table 217 is prefetch request issue logic 213, which is utilized to send out prefetch requests at appropriate times. These two logic components are the primary functional components of PE 103. Additional logic and/or tables may be added to the reference/stream prediction logic 203 in order to enable the functionality of the multiple history table implementation described herein.


In conventional systems, the reference/stream prediction logic utilizes one table to store a certain number of previous references in each entry and initiates an active stream in the issue logic if some pattern is detected. Different types of applications exhibit different access patterns, and in conventional designs, a few different kinds of tables have been proposed to detect different types of access patterns. However, no single table is able to work efficiently for all access patterns.


Unlike conventional prefetch engines, PE 103 and reference/stream prediction logic 203 are configured with additional logic. Additionally, the present invention provides additional history tables that are ordered for sequential access, with simpler patterns checked at the first table and more complicated patterns checked at subsequent tables in the sequence. The functionality provided by the invention enables the data prefetch mechanisms within PE 103 to simultaneously support multiple patterns with very little additional hardware or cost to the overall system.



FIG. 3 provides a logic flow diagram of a two-table stride prediction algorithm, according to one embodiment. Two history tables are provided: a first “simple pattern” table 303, which is utilized to detect unit and small, non-unit stride streams; and a second “large stride” table 309, which is utilized to detect large, non-unit stride streams. In the described embodiment, both tables 303 and 309 are organized as set-associative caches and indexed by certain bits of physical address 301.


Within the illustration, physical address 301 is depicted having a “Least Significant Bit” (LSB) and a “Most Significant Bit” (MSB). The illustration thus assumes a big endian machine, although the features of the invention are fully applicable to a little endian machine as well. Within physical address 301 are index bits, namely idx-l and idx-s, which respectively represent the index for the large stride table and the index for the simple pattern table. The large stride table is indexed by the bits within “idx-l”, which includes more significant bits than “idx-s”. In an embodiment having more than two history tables, additional indices are provided to access the specific tables.
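The role of the two index fields can be sketched in code. The following Python fragment is a minimal sketch; the bit positions (12 and 20, matching the example given below) and the 8-bit index width are illustrative assumptions, not values taken from the figure:

```python
def extract_indices(addr: int, s_shift: int = 12, l_shift: int = 20, width: int = 8):
    """Extract idx-s (simple-pattern index) and idx-l (large-stride index)
    from a physical address.  The shifts and 8-bit index width here are
    illustrative assumptions, not values taken from the figure."""
    mask = (1 << width) - 1
    idx_s = (addr >> s_shift) & mask  # selects the simple pattern table entry
    idx_l = (addr >> l_shift) & mask  # selects the large stride table entry
    return idx_s, idx_l

# Addresses one 4 KB region apart differ in idx-s but share idx-l;
# addresses one 1 MB region apart differ in idx-l instead.
print(extract_indices(0x0))       # (0, 0)
print(extract_indices(0x1000))    # (1, 0)
print(extract_indices(0x100000))  # (0, 1)
```

Because idx-l sits at more significant bit positions than idx-s, the two tables observe the same access at two different granularities, which is what allows each table to specialize in one class of pattern.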


The values of “idx-s” and “idx-l” determine how large a region is checked for access patterns. Using different bits enables the prefetch engine to look into small regions for unit and small non-unit strides and to look into large regions for large strides. Use of different bits also avoids interference among simple patterns, and between simple patterns and large strides. This avoidance of interference then enables the PE to successfully detect both simple pattern and large stride streams.


As an example, assuming that “idx-s” begins at the 12th least significant bit and “idx-l” begins at the 20th least significant bit, the logic within the prefetch engine checks each 4-kilobyte region for simple patterns and each 1-megabyte region for large strides. With this configuration, the logic of the PE will successfully detect the address sequence 0x0, 0x80, 0x1000, 0x1080, 0x2000, 0x2080, . . . as two streams, (1) 0x0, 0x1000, 0x2000, . . . and (2) 0x80, 0x1080, 0x2080, . . . , with a stride of 0x1000 for both streams. In contrast, a one-table mechanism will not be able to detect either of the two streams. The two-table prefetch engine will also detect the address sequence 0x0, 0x20000, 0x1, 0x30000, 0x2, 0x40000, . . . as two streams, (1) 0x0, 0x1, 0x2, . . . and (2) 0x20000, 0x30000, 0x40000, . . . , with strides of 0x1 and 0x10000 respectively, while a conventional one-table prefetch engine can detect only one stream, 0x0, 0x1, 0x2, . . .
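The first example sequence can be checked with a short simulation. In this sketch a simplified detector (two matching consecutive deltas confirm a stride) is applied to the raw interleaved sequence and then per 4 KB page offset; the grouping key `addr & 0xFFF` is an illustrative stand-in for the per-region tracking described above, not the exact hardware indexing:

```python
def detect_stride(addresses):
    """Minimal detector in the spirit of FIG. 4: confirm a stride once two
    consecutive address deltas match; return None otherwise."""
    prev, stride = None, None
    for addr in addresses:
        if prev is not None:
            delta = addr - prev
            if stride is not None and delta == stride:
                return stride  # two matching deltas: stream confirmed
            stride = delta
        prev = addr
    return None

seq = [0x0, 0x80, 0x1000, 0x1080, 0x2000, 0x2080]

# Applied to the raw interleaved sequence, the deltas alternate
# (0x80, 0xF80, 0x80, ...) and no stride is ever confirmed.
assert detect_stride(seq) is None

# Grouping by the 4 KB page offset (an illustrative stand-in for tracking
# one table entry per small region) separates the two streams, and each
# shows the expected stride of 0x1000.
groups = {}
for addr in seq:
    groups.setdefault(addr & 0xFFF, []).append(addr)
print({hex(off): hex(detect_stride(a)) for off, a in groups.items()})
# {'0x0': '0x1000', '0x80': '0x1000'}
```

This mirrors the text: a single detector over the interleaved accesses never confirms a stride, while the separated streams both yield 0x1000.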


According to the illustrated logic of FIG. 3, when the prefetch engine receives an address, the logic of the PE utilizes the bits within “idx-s” to index into the simple pattern table 303. If the stream detection logic detects (at block 305) that the physical address is part of a stream within the simple pattern table 303, a prefetch stream is initiated, as shown at block 313. Also, when the stream is detected within the simple pattern table 303, the select input to the large stride selection logic 307 prevents the address from being forwarded to the large stride table 309. However, if the logic does not detect a stream within the simple pattern table 303, the address is forwarded to the large stride table 309. If a stream is detected at the large stride table (at block 311), a prefetch stream is initiated, as shown at block 313.


Thus, when an address is received, the first table (indexed by less significant bits) in the ordered sequence of tables is accessed first. If the prefetch logic does not detect a stride within the first table, the address is passed/forwarded to the second table (indexed by more significant bits), and the prefetch logic checks the second table to detect a stride, which, if present, would be larger. Thus, only accesses that do not belong to the first table are ever passed to the second table, and so on, until the last table in the ordered sequence of tables is checked. The sequential checking of tables substantially eliminates the likelihood of interference between different patterns and also enables each table to be a simple yet efficient design.
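A toy model of the ordered, two-table lookup may help illustrate this flow. The `StrideTable` class below is a deliberately reduced sketch: it tracks only one previous address and one candidate stride per region, whereas the tables described above are set-associative and may hold several previous references per entry. For that reason the second example sequence from the text is extended by two extra accesses so that both streams can be confirmed by this simplified model; the table shifts (12 and 20) follow the 4 KB / 1 MB example:

```python
class StrideTable:
    """Toy history table: one previous address and one candidate stride
    per region (real tables may store several previous references)."""
    def __init__(self, shift):
        self.shift = shift  # region size = 2**shift bytes
        self.entries = {}   # region index -> (previous address, stride)

    def check(self, addr):
        """Return a confirmed stride for this address, or None."""
        idx = addr >> self.shift
        prev, stride = self.entries.get(idx, (None, None))
        new_stride = addr - prev if prev is not None else None
        self.entries[idx] = (addr, new_stride)
        if new_stride is not None and new_stride == stride:
            return new_stride  # two matching deltas: stream detected
        return None

def lookup(tables, addr):
    """Preset-order access: stop at the first table that detects a stream,
    so confirmed accesses are never forwarded to later tables."""
    for table in tables:
        stride = table.check(addr)
        if stride is not None:
            return stride
    return None

tables = [StrideTable(shift=12), StrideTable(shift=20)]  # simple, then large
seq = [0x0, 0x20000, 0x1, 0x30000, 0x2, 0x40000, 0x3, 0x50000]
hits = [(hex(a), s) for a in seq if (s := lookup(tables, a)) is not None]
print(hits)  # [('0x2', 1), ('0x3', 1), ('0x50000', 65536)]
```

Note how the sequential order also illustrates the interference argument: once the unit-stride stream is confirmed in the first table, its addresses stop reaching (and polluting) the large stride table, which can then confirm the 0x10000 stride.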



FIG. 4 illustrates an example scheme by which a stride may be detected, according to one embodiment of the invention. A first row 402 indicates the format of a register comprising three entries: the first entry describes the state of the register, the second tracks the previous address, and the last indicates the stride between the previous address and subsequent addresses. The register begins, at 404, in the invalid (INVALID) state, which is the condition prior to receiving any input or on start up/power on of the device. After receiving an address A, the state entry is changed to the initial (INIT) state within register 406, and address A is filled in as the previous address. When a next address, A+X, is received, the PE's logic computes the stride as the “current address minus the previous address”, i.e., “A+X−A=X”. The logic then fills in A+X as the previous address, changes the state to intermediate (INTER), and records X as the stride, all reflected within register 408. Finally, a third address, A+2X, is received, and the PE's logic again computes the stride as the “new address minus the previous address”, i.e., “(A+2X)−(A+X)=X”. The new address value and calculated stride are recorded within register 410. The PE logic then compares the stride within register 410 with that within register 408, and when the two strides match, the logic determines that a stream is detected. As a result, the PE logic initiates a stream with a stride of X and changes the state of register 410 to final (FINAL).
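The state progression described above can be expressed as a small state machine. The sketch below mirrors the INVALID → INIT → INTER → FINAL transitions of FIG. 4; the Python structure and the concrete base address and stride are illustrative, since the figure describes hardware registers:

```python
INVALID, INIT, INTER, FINAL = "INVALID", "INIT", "INTER", "FINAL"

class StrideRegister:
    """Sketch of the FIG. 4 register: (state, previous address, stride)."""
    def __init__(self):
        self.state, self.prev, self.stride = INVALID, None, None  # register 404

    def observe(self, addr):
        if self.state == INVALID:
            self.state, self.prev = INIT, addr       # register 406: record A
        elif self.state == INIT:
            self.stride = addr - self.prev           # e.g. (A+X) - A = X
            self.state, self.prev = INTER, addr      # register 408
        else:
            new_stride = addr - self.prev            # (A+2X) - (A+X) = X
            if new_stride == self.stride:
                self.state = FINAL                   # strides match: stream detected
            else:
                self.stride = new_stride             # no match: keep searching
            self.prev = addr                         # register 410
        return self.state

r = StrideRegister()
A, X = 0x1000, 0x40  # hypothetical base address and stride
states = [r.observe(a) for a in (A, A + X, A + 2 * X)]
print(states)    # ['INIT', 'INTER', 'FINAL']
print(r.stride)  # 64 (= 0x40, the detected stride X)
```

Three addresses with a consistent delta are thus sufficient to reach FINAL and initiate the prefetch stream.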



FIG. 5 illustrates a chart displaying the resulting effects on PTRANS, a matrix transpose program, when conducting a prefetch operation utilizing the mechanisms of the present invention as described above. Four plots are illustrated, each plotted against normalized speedup on the Y axis. These plots indicate that, compared to a one-table prefetch, the two-table prefetch implementation of the present invention provided almost a two-thirds reduction in execution time. Compared to the one-table implementation enhanced with software stride functionality, the hardware-implemented methods of the present invention also yielded improved characteristics. The “no prefetch” plot represents a baseline configuration that is not utilized in most conventional systems.


It is important to note that although the present invention has been described in the context of a data processing system, those skilled in the art will appreciate that the mechanisms of the present invention are capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of signal bearing media utilized to actually carry out the distribution. Examples of signal bearing media include, without limitation, recordable type media such as floppy disks or compact discs and transmission type media such as analog or digital communications links.


While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims
  • 1. A data processing system comprising: a processor; a memory hierarchy coupled to the processor and having at least one lower level storage device; a prefetch engine (PE) associated with the processor and which includes: a plurality of history tables, each of which is utilized to track a set of different patterns for received requests; and logic for enabling simultaneous tracking of multiple different patterns for accessing the storage device via the plurality of history tables.
  • 2. The data processing system of claim 1, wherein: said plurality of history tables are ordered for sequential access based on a check of simple patterns at a first table and another set of more complicated patterns at subsequent tables in the sequence, wherein each table is accessed via an associated index found within an address, such that said PE simultaneously supports multiple patterns; and said logic further comprises logic for: parsing a received address for a first index corresponding to the first history table, wherein an index for each of the plurality of history tables can be provided by a different set of bits or the same set of bits within the address, and multiple indices are provided within the address, each corresponding to one of the plurality of history tables; and enabling ordered access to said plurality of history tables utilizing said indices, wherein said plurality of history tables are accessed in a preset order to substantially eliminate interference between the different patterns.
  • 3. The data processing system of claim 2, said logic for enabling ordered access further comprising logic for: determining at a first history table among the plurality of history tables whether a received address fits a first pattern of the first history table; and when the address fits the first pattern of the first history table, identifying the address as belonging to a stream having that first pattern and initiating a prefetch stream.
  • 4. The data processing system of claim 1, wherein when an address does not fit the first pattern of the first history table, said logic further comprises logic for: forwarding the address to a next sequential history table within the preset order; checking the second history table for a match of its second pattern to the address; and when no match of the second pattern is found at the second sequential table, iteratively forwarding the address to the next sequential table(s) to be checked against the pattern(s) of the next sequential table(s) until a last table is reached within the preset order.
  • 5. The data processing system of claim 4, wherein: said first history table and each subsequent history table is identified by the corresponding index within the plurality of indices; when the address does not fit the pattern of a current history table to which the current index corresponds, parsing the address for a next index corresponding to the next history table in the preset order; and when the address does not fit the pattern of the next history table, sequentially parsing said address for subsequent indices corresponding to subsequent history tables supported within the PE until either a last history table is checked or a match of the pattern is found for the address.
  • 6. The data processing system of claim 1, wherein each PE includes a reference prediction unit and a prefetch request issue unit, said reference prediction unit further comprising: at least one reference prediction table having associated therewith a reference/stream prediction logic, which predicts future references based on issued references; and at least one active streams table, each table providing one or more entries for active streams, wherein each of said at least one active streams table includes prefetch request issue logic, which sends out prefetch requests at calculated times; wherein when the stream detection logic detects that the address is part of a stream within the first table, a prefetch stream is initiated and the address is not forwarded to the second table.
  • 7. The data processing system of claim 1, wherein: said logic within said PE comprises a two-table stride prediction algorithm, wherein a first “simple-pattern” history table is utilized to detect simple streams and a second “complicated-pattern” history table is utilized to detect other hard-to-detect streams; wherein said first table and said second table are organized as set-associative caches and indexed by specific bits of an address within a received request, wherein a respective value of each set of index bits determines how large a region to check for access patterns, and wherein using different bits enables the PE to look into small regions for simple patterns, such as unit and small non-unit strides, and into large regions for other strides, such as large strides, and to avoid interference among small strides, and between small strides and large strides.
  • 8. The data processing system of claim 1, wherein: each set of index bits corresponds to a specific stride history table, and when said processor executes according to a big endian addressing scheme/configuration and said system includes two history tables, a large stride history table is indexed by more significant bits within the address while a smaller stride history table is indexed by less significant bits within the address; and when the PE receives an address, said logic further comprises logic for: first selecting a set of bits providing a smallest value index to index into a corresponding table with a smallest stride; accessing the corresponding table having the smallest stride via the smallest value index; and when the prefetch logic does not detect a stride within the corresponding table, forwarding the address to the second table with a next highest value index and initiating a check of the second table to detect a larger stride, wherein a stride detected at the second table is larger than another stride detected at the first table.
  • 9. The data processing system of claim 1, wherein said logic further comprises logic for: automatically determining a stride pattern associated with a received prefetch request having an address.
  • 10. The data processing system of claim 9, wherein said logic for automatically determining a stride pattern comprises: a stride evaluation component, said component having a plurality of registers, each having at least three updateable entries utilized for analyzing a single stride pattern, wherein a first entry provides a current state of the corresponding register, a second entry tracks the previous address received, and a third entry is utilized to store a stride pattern/length determined as the difference between the previous address and a subsequent address; logic for storing a first address received as the previous address within a first register; logic for computing a first stride length by subtracting the previous address from a subsequently received second address and recording the first stride length within the third entry of the second register; logic, when a third address is received, for computing a second stride length by subtracting the second address from the third address; logic for comparing the second stride length with the first stride length; and when the second stride length matches the first stride length, signaling detection of a presence of a stream and initiating a stream prefetch with a stride pattern equivalent to one of the second and the first stride lengths.
  • 11. A method for enabling a prefetch engine (PE) to detect and support hardware prefetching with streams of different patterns in accesses, said method comprising: providing a plurality of history tables associated with the PE, wherein each of the plurality of history tables is utilized to detect different access patterns; providing a separate index for each of the plurality of history tables within the address of a prefetch request, wherein each said separate index is provided by a different set of bits of the address; dynamically determining a stride pattern of a received prefetch request via ordered access to one or more of said plurality of history tables, wherein said plurality of history tables are accessed in a preset sequential order to reduce interference between different access patterns; and enabling simultaneous tracking of multiple different stride patterns for accessing the storage device via the plurality of history tables.
  • 12. The method of claim 11, wherein said dynamically determining a stride pattern comprises: accessing a first history table among the plurality of history tables to determine whether the address fits the stride pattern of the first history table; when the address fits the stride pattern of the first history table, identifying the address as belonging to a stream and initiating a corresponding prefetch stream; and when the address does not fit the stride pattern of the first history table: forwarding the address to a next sequential history table within the preset order; checking the next sequential table for a match of its stride pattern to the address; and when the stride pattern of the next sequential table does not match the address, iteratively forwarding the address to the next sequential table(s) to be checked against the stride pattern of each table in sequence until a last table is reached within the preset order.
  • 13. The method of claim 11, wherein the index for each of the plurality of history tables is derived from sets of bits within the address, said method further comprising: parsing a received address for a first index corresponding to the first history table; when the address does not fit the pattern of the first history table to which the first index corresponds, parsing the address for a second index corresponding to the second history table; and when the address does not fit the pattern of the second history table, sequentially parsing said address for subsequent indices corresponding to subsequent history tables supported within the PE until either a last history table is checked or a match of the pattern is found for the address.
  • 14. The method of claim 11, wherein: said plurality of history tables are ordered for sequential access based on a check of unit and small stride patterns at a first table and larger stride patterns at subsequent tables in the sequence, wherein each table is accessed via an associated index found within a prefetch address, such that said PE simultaneously supports multiple patterns; and said method further comprises: parsing a received address for a first index corresponding to the first history table, wherein an index for each of the plurality of history tables is provided by a different set of bits within the address, and multiple indices are provided within the address, each corresponding to one of the plurality of history tables; and enabling ordered access to said plurality of history tables utilizing said indices, wherein said plurality of history tables are accessed in a preset order to substantially eliminate interference between the multiple different stride patterns.
  • 15. The method of claim 11, wherein said first history table and each subsequent history table is identified by the corresponding index within the plurality of indices, said method for enabling ordered access further comprises: determining at a first history table among the plurality of history tables whether a received address fits a first stride pattern of the first history table; and when the address fits the first stride pattern of the first history table, identifying the address as belonging to a stream having that first stride pattern and initiating a prefetch stream; when the address does not fit the pattern of a current history table to which the current index corresponds, such that the address does not fit the first stride pattern of the first history table, parsing the address for a next index corresponding to the next history table in the preset order; and forwarding the address to a next sequential history table within the preset order; checking the second history table for a match of its second stride pattern to the address; and when no match of the second stride pattern is found at the second sequential table, iteratively forwarding the address to the next sequential table(s) to be checked against the stride pattern(s) of the next sequential table(s) until a last table is reached within the preset order; wherein, when the address does not fit the pattern of the next history table, sequentially parsing said address for subsequent indices corresponding to subsequent history tables supported within the PE until either a last history table is checked or a match of the stride pattern is found for the address, wherein when a stream detection logic detects that the physical address is part of a stream within the small stride table, a prefetch stream is initiated and the address is not forwarded to the large stride table.
  • 16. The method of claim 11, wherein: said logic within said PE comprises a two-table stride prediction algorithm, wherein a first “small stride” history table is utilized to detect unit and small, non-unit stride streams and a second “large stride” history table is utilized to detect large, non-unit stride streams; wherein said first table and said second table are organized as set-associative caches and indexed by specific bits of an address within a prefetch request, wherein a respective value of each set of index bits determines how large a region to check for access patterns, and wherein using different bits enables the PE to look into small regions for unit and small non-unit strides and into large regions for large strides, and to avoid interference among small strides, and between small strides and large strides.
  • 17. The method of claim 11, wherein: each set of index bits corresponds to a specific stride history table, and when said processor executes according to a big endian addressing scheme/configuration and said system includes two history tables, a large stride history table is indexed by more significant bits within the address while a smaller stride history table is indexed by less significant bits within the address; and when the PE receives an address, said method further comprises: first selecting a set of bits providing a smallest value index to index into a corresponding table with a smallest stride; accessing the corresponding table having the smallest stride via the smallest value index; and when the prefetch logic does not detect a stride within the corresponding table, forwarding the address to the second table with a next highest value index and initiating a check of the second table to detect a larger stride, wherein a stride detected at the second table is larger than another stride detected at the first table.
  • 18. The method of claim 11, further comprising: automatically determining a stride pattern associated with a received prefetch request having an address, wherein said automatically determining a stride pattern utilizes a stride evaluation component, said component having a plurality of registers, each having at least three updatable entries utilized for analyzing a single stride pattern, wherein a first entry provides a current state of the corresponding register, a second entry tracks the previous address received, and a third entry is utilized to store a stride pattern/length determined as the difference between the previous address and a subsequent address; storing a first address received as the previous address within a first register; computing a first stride length by subtracting the previous address from a subsequently received second address and recording the first stride length within the third entry of the second register; when a third address is received, computing a second stride length by subtracting the second address from the third address; comparing the second stride length with the first stride length; and when the second stride length matches the first stride length, signaling detection of a presence of a stream and initiating a stream prefetch with a stride pattern equivalent to one of the second and the first stride lengths.
  • 19. A computer program product comprising a computer readable medium and program code on the computer readable medium that when executed provides the functions of claim 18.
  • 20. A computer program product comprising a computer readable medium and program code on the computer readable medium that when executed provides the functions of claim 11.
GOVERNMENT RIGHTS

This invention was made with Government support under Agreement No. NBCH30390004 with the United States Defense Advanced Research Projects Agency (DARPA). The U.S. Government has certain rights to this invention.