The present invention is related to the following commonly-owned, co-pending United States Patent Applications filed on even date herewith, the entire contents and disclosure of each of which is expressly incorporated by reference herein as if fully set forth herein. U.S. patent application Ser. No. 11/768,777, for “A SHARED PERFORMANCE MONITOR IN A MULTIPROCESSOR SYSTEM”; U.S. patent application Ser. No. 11/768,645, for “OPTIMIZED COLLECTIVES USING A DMA ON A PARALLEL COMPUTER”; U.S. patent application Ser. No. 11/768,781, for “DMA SHARED BYTE COUNTERS IN A PARALLEL COMPUTER”; U.S. patent application Ser. No. 11/768,784, for “MULTIPLE NODE REMOTE MESSAGING”; U.S. patent application Ser. No. 11/768,532, for “PROGRAMMABLE PARTITIONING FOR HIGH-PERFORMANCE COHERENCE DOMAINS IN A MULTIPROCESSOR SYSTEM”; U.S. patent application Ser. No. 11/768,857, for “METHOD AND APPARATUS FOR SINGLE-STEPPING COHERENCE EVENTS IN A MULTIPROCESSOR SYSTEM UNDER SOFTWARE CONTROL”; U.S. patent application Ser. No. 11/768,547, for “INSERTION OF COHERENCE EVENTS INTO A MULTIPROCESSOR COHERENCE PROTOCOL”; U.S. patent application Ser. No. 11/768,791, for “METHOD AND APPARATUS TO DEBUG AN INTEGRATED CIRCUIT CHIP VIA SYNCHRONOUS CLOCK STOP AND SCAN”; U.S. patent application Ser. No. 11/768,795, for “DMA ENGINE FOR REPEATING COMMUNICATION PATTERNS”; U.S. patent application Ser. No. 11/768,799, for “METHOD AND APPARATUS FOR A CHOOSE-TWO MULTI-QUEUE ARBITER”; U.S. patent application Ser. No. 11/768,800, for “METHOD AND APPARATUS FOR EFFICIENTLY TRACKING QUEUE ENTRIES RELATIVE TO A TIMESTAMP”; U.S. patent application Ser. No. 11/768,572, for “BAD DATA PACKET CAPTURE DEVICE”; U.S. patent application Ser. No. 11/768,593, for “EXTENDED WRITE COMBINING USING A WRITE CONTINUATION HINT FLAG”; U.S. patent application Ser. No. 11/768,805, for “A SYSTEM AND METHOD FOR PROGRAMMABLE BANK SELECTION FOR BANKED MEMORY SUBSYSTEMS”; U.S. patent application Ser. No. 11/768,905, for “AN ULTRASCALABLE PETAFLOP PARALLEL SUPERCOMPUTER”; U.S. patent application Ser. No. 11/768,810, for “SDRAM DDR DATA EYE MONITOR METHOD AND APPARATUS”; U.S. patent application Ser. No. 11/768,812, for “A CONFIGURABLE MEMORY SYSTEM AND METHOD FOR PROVIDING ATOMIC COUNTING OPERATIONS IN A MEMORY DEVICE”; U.S. patent application Ser. No. 11/768,559, for “ERROR CORRECTING CODE WITH CHIP KILL CAPABILITY AND POWER SAVING ENHANCEMENT”; U.S. patent application Ser. No. 11/768,552, for “STATIC POWER REDUCTION FOR MIDPOINT-TERMINATED BUSSES”; U.S. patent application Ser. No. 11/768,527, for “COMBINED GROUP ECC PROTECTION AND SUBGROUP PARITY PROTECTION”; U.S. patent application Ser. No. 11/768,669, for “A MECHANISM TO SUPPORT GENERIC COLLECTIVE COMMUNICATION ACROSS A VARIETY OF PROGRAMMING MODELS”; U.S. patent application Ser. No. 11/768,813, for “MESSAGE PASSING WITH A LIMITED NUMBER OF DMA BYTE COUNTERS”; U.S. patent application Ser. No. 11/768,619, for “ASYNCRONOUS BROADCAST FOR ORDERED DELIVERY BETWEEN COMPUTE NODES IN A PARALLEL COMPUTING SYSTEM WHERE PACKET HEADER SPACE IS LIMITED”; U.S. patent application Ser. No. 11/768,682, for “HARDWARE PACKET PACING USING A DMA IN A PARALLEL COMPUTER”; and U.S. patent application Ser. No. 11/768,752, for “POWER THROTTLING OF COLLECTIONS OF COMPUTING ELEMENTS”.
The present disclosure generally relates to microprocessors and to multiprocessor architectures and, more particularly, to architectures with caches implementing prefetching.
A prefetch unit is usually placed between caches at different levels of the hierarchy in order to hide the access latency of the slower cache. A prefetcher identifies data streams and speculatively fetches the next data line before it is requested by the processor. The prefetcher stores the prefetched data in a buffer whose capacity is at a premium. This type of prefetching is useful only if future accesses can be predicted successfully.
A processor can continually request data from sequential addresses, in which case the request pattern is said to come from a ‘single’ stream. On the other hand, if the pattern of addresses requested by the processor is “Addr ‘A’, Addr ‘B’, Addr ‘A+1’, Addr ‘B+1’ . . . ” and so on, then the request pattern is said to come from ‘multiple’ streams, in this case two: streams A and B.
A prefetcher incorporated between an L1 and an L2 cache typically fetches ‘b’ bytes from the L2 cache in response to an L1 cache line miss from the processor. If the processor consumes ‘x’ bytes per cycle on average, and if the latency to fetch from the L2 cache is ‘k’ cycles, then the prefetcher should have a depth of ‘x’ times ‘k’ bytes so that the processor continually hits in the prefetch buffer.
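As a worked illustration of this sizing rule, consider the following sketch; the consumption rate and latency values are hypothetical, chosen only to make the arithmetic concrete, and are not taken from the disclosure.

```cpp
#include <cstdio>

// Sketch of the sizing rule above: if the core consumes x bytes per cycle
// and an L2 fetch takes k cycles, the prefetcher must stay x*k bytes ahead
// for the core to keep hitting in the prefetch buffer. Values are illustrative.
int main() {
    const int bytes_per_cycle = 8;   // x: average consumption rate (assumed)
    const int l2_latency      = 24;  // k: L2 fetch latency in cycles (assumed)
    const int depth_bytes     = bytes_per_cycle * l2_latency;
    std::printf("required prefetch depth: %d bytes\n", depth_bytes);  // 192
    return 0;
}
```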
In a single stream request pattern, it is likely that the latency to get the data from the lower level memory exceeds the time needed by a processor to consume a data line. If a processor is fetching data belonging to a single data stream only, a prefetch engine that does not prefetch to the required depth may not be able to keep up. Thus, the depth of prefetch should be high if the request pattern is from a single stream. On the other hand, the depth of prefetch can be reduced if the requests are from multiple streams.
If a prefetcher does not distinguish between single and multiple streams and always fetches the worst-case prefetch depth, it cannot sustain as many streams as a prefetcher that adaptively chooses the prefetch depth. Therefore, what is desirable is a method and apparatus for prefetching streams of varying prefetch depth.
Method, apparatus and system for prefetching streams of varying prefetch depth are provided. The method in one aspect may comprise monitoring a plurality of load requests from a processing unit for data in a prefetch buffer, determining an access pattern associated with the plurality of load requests, adjusting a prefetch depth according to the access pattern, and prefetching data of the prefetch depth to the prefetch buffer.
In another aspect a method of prefetching streams of varying prefetch depth may comprise receiving a load request from a processing unit, determining whether the load request follows a sequential access pattern of memory locations, for the load request that follows a sequential access pattern, increasing or maintaining a prefetch depth to have two or more additional cache lines adjacent to a cache line of the load request prefetched to a prefetch buffer, for the load request that does not follow a sequential access pattern, decreasing the prefetch depth if a previous load request was following a sequential access pattern, and prefetching data of the prefetch depth to the prefetch buffer.
An apparatus for prefetching streams of varying prefetch depth may comprise a prefetch buffer operable to store prefetched data, a mode control logic operable to monitor a plurality of load requests from a processing unit for data in the prefetch buffer and to determine an access mode associated with the plurality of load requests, and a prefetch engine operable to adjust a prefetch depth according to the access mode to prefetch data of the prefetch depth to the prefetch buffer.
A system for prefetching streams of varying prefetch depth may comprise means for storing prefetched data, means for monitoring a plurality of load requests from a processing unit for data in the prefetch buffer and determining an access mode associated with the plurality of load requests, and means for adjusting a prefetch depth according to the access mode and prefetching data of the prefetch depth to the prefetch buffer.
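As a minimal sketch of the depth-adjustment rule common to these aspects (the function name and exact step sizes are illustrative, not prescribed by the summary):

```cpp
// Hedged sketch of the depth rule above: a sequential access grows or holds
// the depth at two or more lines; a non-sequential access that follows a
// sequential run shrinks it. Names and step sizes are illustrative.
int adjust_prefetch_depth(int depth, bool sequential, bool was_sequential) {
    if (sequential)     return depth < 2 ? 2 : depth;      // >= two lines ahead
    if (was_sequential) return depth > 1 ? depth - 1 : 1;  // back off by one
    return depth;                                          // unchanged
}
```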
Further features as well as the structure and operation of various embodiments are described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements.
The method and apparatus in one embodiment of the present disclosure adaptively choose the prefetch depth. Briefly, prefetch depth refers to the amount or size of data that is prefetched from memory. Adaptively choosing the prefetch depth allows the prefetcher to sustain the maximum number of multiple streams while still achieving high prefetch buffer hit rates and hiding the memory latency if the access pattern reverts to a single stream.
A prefetch algorithm of the present disclosure in one embodiment dynamically adapts its prefetch depth to the pattern of memory requests issued by a processor. Depending on that pattern, a prefetch engine determines the depth of prefetching it has to perform to provide data to the processor without stalling. Dynamic adaptation in one embodiment is accomplished by keeping track of memory request patterns and of whether the prefetched data is used by the processor. Depending on the request pattern, the prefetch engine switches between multiple operation modes to prefetch one or more data lines ahead of time for optimal performance.
In one embodiment of the present disclosure, a buffer referred to as a history buffer is kept and used to store the history of memory addresses that a processor or core accessed and missed in the prefetch buffer. Requests from the processor initially miss in the empty prefetch buffer, and those missed addresses are stored in the history buffer in one embodiment of the present disclosure. Once a hit occurs in the history buffer, an active stream is identified and brought into the prefetch buffer. Subsequent hits on the prefetch buffer trigger a prefetch. A state machine continually tracks the requests from the core, as well as the result of matching the request address against the entries in the prefetch buffer, and detects whether the stream is single or multiple. If the access pattern reveals a single stream, the depth of prefetch is increased. If the access pattern changes to that of multiple streams, the depth of prefetch is decreased. This happens dynamically and may be completely transparent to the software.
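The following is a minimal software sketch, in C++ with invented names, of the history-buffer flow just described; the actual mechanism is hardware, and this model only illustrates how a missed address is promoted into an active stream.

```cpp
#include <cstdint>
#include <unordered_set>

// Hedged model of the history-buffer flow (all names are ours): a load that
// misses in the prefetch buffer is looked up in the history buffer; a hit
// there identifies a new active stream, while a miss records the address
// (and its successor) so a later sequential access is recognized.
struct StreamDetector {
    std::unordered_set<int64_t> history;         // lines of prior misses
    std::unordered_set<int64_t> prefetch_lines;  // lines held in the buffer

    void on_prefetch_buffer_miss(int64_t line) {
        if (history.count(line)) {
            prefetch_lines.insert(line);  // new active stream: bring line in
            // Subsequent hits on this line trigger the "n+1" prefetch.
        } else {
            history.insert(line);         // remember the miss...
            history.insert(line + 1);     // ...and its sequential successor
        }
    }
};
```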
In one embodiment described below, a prefetcher is assumed to exist between the L1 and the L2 caches. In this example, a prefetch always fetches an L2 cache line (which is a multiple of the L1 cache line size). A request for address “A” belonging to L2 cache line “n” triggers a prefetch of the next sequential L2 cache line (henceforth called the “n+1” prefetch). In the case of single streams, the depth of prefetch is increased to two L2 lines (the “n+1” as well as the “n+2” prefetch).
It will be understood by those skilled in the art that other embodiments are also possible without departing from the scope of this invention. In one embodiment, a prefetcher of the present disclosure may prefetch data from the L2 cache and store it in the prefetch buffers. In another embodiment, the prefetcher may prefetch data from the L2 cache and store it into the L1 caches. In yet another embodiment, the prefetcher may prefetch data from the main memory and store it in the L2 cache, in the L1 cache, or in the prefetch buffers.
In one embodiment, a state machine continually tracks memory requests to determine whether single or multiple streams are requested, and increases or decreases the depth of prefetching accordingly. In another embodiment, the state machine tracks memory requests to determine whether single or multiple memory streams are requested, and switches between a prefetch algorithm associated with a single stream and a prefetch algorithm associated with multiple memory streams.
At step 106, address “A” is checked against a history of previous patterns of fetches. If there is a match, then execution proceeds to step 108. Otherwise, execution proceeds to step 110. At step 108, address “A” is considered to be part of a new stream. This address is inserted into the prefetch buffer, and the L2 cache line “n” containing the data for address “A” is fetched. The data requested by the core is delivered. This line is marked so that if any future hit on this line occurs, the next sequential L2 line (“n+1”) will be prefetched. Marking, for example, may be done by setting a flag, “allow prefetch on hit to line ‘n’”, to true. Execution returns to step 102, where the prefetcher waits for the next load miss from the core.
If, at step 106, address “A” was a miss in the history buffer, address “A” and the next sequential address (belonging to L2 line “n+1”) are inserted into the history buffer at step 110. This is done so that a future request that hits in the history buffer establishes a new stream. The data for address “A” is fetched and delivered to the core. Execution returns to step 102.
Referring to
At step 118, a prefetch for L2 cache line “n+1” is issued. This line is marked so that if any hit on it occurs in the future, the next sequential L2 line will be prefetched. Marking, for example, may be done by setting a flag, “allow prefetch on hit to line ‘n+1’”, to true. Execution proceeds to step 124.
At step 116, the prefetcher examines whether either of the following conditions is true: a) single_stream_mode, or b) allow prefetch for the next sequential L2 cache line. The setting of the single stream mode condition is explained further below. If at least one of the conditions is true, then execution proceeds to step 120. Otherwise, execution proceeds to step 124.
At step 120, the prefetcher checks whether L2 cache line “n+2” is already present in the prefetch buffer. If yes, then execution proceeds to step 124. Otherwise, execution proceeds to step 122. At step 122, a prefetch for L2 cache line “n+2” is issued. This line is marked so that if any hit on it occurs in the future, the next sequential L2 line will be prefetched. Marking, for example, may be done by setting a flag, “allow prefetch on hit to line ‘n+2’”, to true. Execution proceeds to step 124. At step 124, the condition that allowed the prefetch of line “n+1” on a hit to line “n” is reset. Execution then returns to step 102.
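Steps 116 through 124 can be summarized in the following hedged sketch; the flag names are invented, and condition b) of step 116 is read here as the flag on the line that was hit, which is one plausible interpretation of the description.

```cpp
#include <cstdint>
#include <unordered_map>
#include <unordered_set>

// Model of the hit path (steps 116-124). Each buffered line carries a flag
// allowing a prefetch of its next sequential line on a future hit.
struct HitPath {
    std::unordered_set<int64_t> in_buffer;         // lines in prefetch buffer
    std::unordered_map<int64_t, bool> allow_next;  // per-line prefetch flag
    bool single_stream_mode = false;

    void issue_prefetch(int64_t line) {
        in_buffer.insert(line);    // fetch the line into the prefetch buffer
        allow_next[line] = true;   // mark it: a future hit may fetch line+1
    }

    void on_hit(int64_t n) {                              // hit on L2 line "n"
        if (!in_buffer.count(n + 1)) {
            issue_prefetch(n + 1);                        // step 118
        } else if (single_stream_mode || allow_next[n]) { // step 116
            if (!in_buffer.count(n + 2))                  // step 120
                issue_prefetch(n + 2);                    // step 122
        }
        allow_next[n] = false;     // step 124: reset the flag on line "n"
    }
};
```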
The above-described method detects whether the memory requests from a core follow a single stream pattern or a multiple stream pattern. In the case of a single stream pattern, the method prefetches at least two more cache lines following the requested cache line. If it is determined that the data requests do not follow a single stream pattern, the method prefetches one cache line following the requested cache line. The prefetch depth is thus adaptive to the type of stream pattern associated with the memory requests from the core and is dynamically adjusted, increased or decreased, based on the nature of those requests.
Once in the CHECK state 204, if hits occur to line “n” or if there are no load requests from the core, the state machine remains in the CHECK state 204. If a load request occurs for the latest prefetched line, that is, line “n+1”, then the state machine determines that the accesses form a single stream and transitions to the SINGLE STREAM state 206. If the load request is not to the latest prefetched line, then the state machine transitions back to the IDLE state.
In the SINGLE STREAM state 206, the single_stream_mode bit is set. Recall that in single stream mode two additional lines are prefetched; thus, two additional lines after line “n” are prefetched. If no load requests occur, the state machine remains in the SINGLE STREAM state 206. If an “n+1” prefetch occurs, then a change to a different stream has occurred, and hence the state machine moves to the CHECK state 204 and the single_stream_mode bit is reset. Otherwise, on the next load request, the state machine transitions back to the IDLE state.
It should be noted that setting the single_stream_mode bit triggers the “n+2” prefetch. The flag associated with each line, which specifies that a prefetch of the next sequential cache line is allowed, is reset after a hit on that line occurs and after a next sequential cache line has been prefetched if applicable, for example, if the single_stream_mode bit is set or the flag to allow prefetch of the next sequential cache line is set. On a hit on line “n+1”, the presence of line “n+2” without resetting the flag associated with line “n+1” sustains prefetching at a depth of two cache lines. Thus, for example, if there is a hit on line “n+1” and line “n+2” was already fetched, but the flag associated with line “n+1” was not reset, the next line, “n+3”, would be prefetched. Table 1 illustrates an example load request pattern, the actions taken, and the state transitions. In Table 1, when a line is fetched, the flag associated with that line, specifying that a prefetch of the next sequential cache line is allowed, is set.
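A hedged C++ sketch of these transitions follows; the state and event names are ours, and the IDLE state's entry transition is inferred from the surrounding description rather than quoted from it.

```cpp
// Sketch of the mode-control state machine (names and encoding are ours).
enum class State { IDLE, CHECK, SINGLE_STREAM };
enum class Event {
    NONE,           // no load request from the core this cycle
    HIT_N,          // hit on line "n" of an active stream
    HIT_LATEST,     // load of the latest prefetched line (line "n+1")
    NEW_STREAM_N1,  // an "n+1" prefetch fired, i.e., another stream started
    OTHER_LOAD      // any other load request
};

struct ModeControl {
    State state = State::IDLE;
    bool single_stream_mode = false;   // set only in SINGLE STREAM

    void step(Event e) {
        switch (state) {
        case State::IDLE:              // inferred: a buffer hit arms CHECK
            if (e == Event::HIT_N) state = State::CHECK;
            break;
        case State::CHECK:
            if (e == Event::NONE || e == Event::HIT_N) break;  // stay
            if (e == Event::HIT_LATEST) {       // single stream detected
                state = State::SINGLE_STREAM;
                single_stream_mode = true;      // enables the "n+2" prefetch
            } else {
                state = State::IDLE;
            }
            break;
        case State::SINGLE_STREAM:
            if (e == Event::NONE) break;        // stay while no loads occur
            single_stream_mode = false;
            state = (e == Event::NEW_STREAM_N1) ? State::CHECK : State::IDLE;
            break;
        }
    }
};
```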
The state machine of
The modes of operation (e.g., prefetch +1, prefetch +2, etc.) change dynamically depending on the memory access pattern and, in another embodiment, according to hits in the prefetch buffer. In yet another embodiment, the processor can issue a “hint” to the prefetch engine to change the mode of prefetching. The “hint” can be implemented by adding a control register mapped into the memory address space, to which the processor has write access. The prefetch mode may then be determined by the command written to that register.
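From the software side, such a hint could look like the following sketch; the register address and command encoding are entirely hypothetical, since the disclosure does not specify an address map.

```cpp
#include <cstdint>

// Illustrative only: kPrefetchCtrlAddr and the command values are invented.
// The processor writes a command to a memory-mapped control register; the
// prefetch engine decodes it and switches its prefetch mode accordingly.
constexpr std::uintptr_t kPrefetchCtrlAddr = 0xFFFF0040;  // hypothetical MMIO
enum PrefetchHint : std::uint32_t { PREFETCH_PLUS_1 = 1, PREFETCH_PLUS_2 = 2 };

inline void write_prefetch_hint(PrefetchHint cmd) {
    // On real hardware this store reaches the control register; on a host
    // without such a register it would simply fault.
    auto* reg = reinterpret_cast<volatile std::uint32_t*>(kPrefetchCtrlAddr);
    *reg = cmd;
}
```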
In yet another embodiment, the prefetch engine can handle bidirectional streams, that is, both ascending and descending streams. For instance, the mode control state machine may have the states Idle, +1, +2, −1, and −2. Dynamic switching between the states, and prefetching of one or more data lines for ascending or descending streams, may be implemented. The transitions to the −1 and −2 states are similar to the transitions to the +1 and +2 states, except that the stream follows a descending pattern (Addr “A”, Addr “A−1”, Addr “A−2”, and so on).
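The direction handling can be sketched as below; the five state names follow the text, while the next-line arithmetic is our inference of how a descending stream would be served.

```cpp
#include <cstdint>

// Sketch of bidirectional prefetch: the state encodes direction and depth;
// only the sign of the stride differs for descending streams. State names
// follow the text; the rest is illustrative.
enum class StreamState { Idle, Plus1, Plus2, Minus1, Minus2 };

inline int depth(StreamState s) {            // lines to keep prefetched ahead
    switch (s) {
    case StreamState::Plus1: case StreamState::Minus1: return 1;
    case StreamState::Plus2: case StreamState::Minus2: return 2;
    default:                                            return 0;
    }
}

inline std::int64_t stride(StreamState s) {  // +1 ascending, -1 descending
    return (s == StreamState::Minus1 || s == StreamState::Minus2) ? -1 : +1;
}

inline std::int64_t next_line(StreamState s, std::int64_t n, int i) {
    return n + stride(s) * (i + 1);          // i-th line ahead, either direction
}
```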
Prefetch engine 304 keeps track of memory requests issued by a processor. It determines whether a prefetch request for the next data line from the lower memory level should be issued. Mode control 306 keeps track of whether a processor is accessing a single data stream or multiple data streams, for instance, by means of a state machine shown in
The components shown in the prefetch cache 302, for instance, the mode control 306, the prefetch engine 304, and the line buffers 308, may be implemented using registers, circuits, and hardware logic elements.
While the above examples show the prefetch unit between the L1 and L2 caches, it should be understood that the method and apparatus of the present disclosure may be applied and/or used in prefetching data between any levels of the memory hierarchy. For example, the system and method of the present disclosure may apply to fetching data between the L3 and L2 caches, between the L3 cache and other memory subsystems, or between any other levels of the memory hierarchy.
The embodiments described above are illustrative examples and it should not be construed that the present invention is limited to these particular embodiments. Thus, various changes and modifications may be effected by one skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.
This invention was made with Government support under Contract No. B554331 awarded by the Department of Energy. The Government has certain rights in this invention.