This application is related to U.S. patent application Ser. No. 10/643,744, entitled “Multistream Processing System and Method”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,577, entitled “System and Method for Processing Memory Transfers”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,586, entitled “Decoupled Scalar/Vector Computer Architecture System and Method (as amended)”, filed on even date herewith (now U.S. Pat. No. 7,334,110 issued Feb. 19, 2008); to U.S. patent application Ser. No. 10/643,585, entitled “Latency Tolerant Distributed Shared Memory Multiprocessor Computer”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,754, entitled “Relaxed Memory Consistency Model”, filed on even date herewith; to U.S. patent application Ser. No. 10/643,758 entitled “Remote Translation Mechanism for a Multinode System”, filed on even date herewith; and to U.S. patent application Ser. No. 10/643,741, entitled “Multistream Processing Memory-And Barrier-Synchronization Method and Apparatus”, filed on even date herewith (now U.S. Pat. No. 7,437,521, issued Oct. 14, 2008), each of which is incorporated herein by reference.
The present invention is related to multiprocessor computers, and more particularly to a system and method for decoupling a write address from write data.
As processors run at faster speeds, memory latency on accesses to memory looms as a large problem. Commercially available microprocessors have addressed this problem by decoupling memory access from manipulation of the data used in that memory reference. For instance, it is common to decouple memory references from execution based on those references and to decouple address computation of a memory reference from the memory reference itself. In addition, scalar processors already decouple their write addresses and data internally. Write addresses are held in a “write buffer” until the data is ready, and in the meantime, read requests are checked against the saved write addresses to ensure ordering.
With the increasing pervasiveness of multiprocessor systems, it would be beneficial to extend the decoupling of write addresses and write data across more than one processor, or across more than one functional unit within a processor. What is needed is a system and method of synchronizing separate write requests and write data across multiple processors or multiple functional units within a microprocessor which maintains memory ordering without collapsing the decoupling of the write address and the write data.
a illustrates a multiprocessor computer system according to the present invention;
b illustrates another example of a multiprocessor computer system according to the present invention;
a illustrates a method of decoupling store address and data in a multiprocessor system according to one example embodiment of the present invention;
b illustrates a method of decoupling store address and data in a multiprocessor system according to another example embodiment of the present invention;
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
A multiprocessor computer system 10 is shown in
Not all processors 12 have to be the same. A multiprocessor computer system 10 having different types of processors connected to a shared memory 16 is shown in
In the example shown, scalar processing unit 12 and vector processing unit 14 are connected to memory 16 across an interconnect network 18. In one embodiment, vector processing unit 14 includes a vector execution unit 20 connected to a vector load/store unit 22. Vector load/store unit 22 handles memory transfers between vector processing unit 14 and memory 16.
The vector and scalar units in vector processing computer 10 are decoupled, meaning that scalar unit 12 can run ahead of vector unit 14, resolving control flow and doing address arithmetic. In addition, in one embodiment, computer 10 includes load buffers. Load buffers allow hardware renaming of load register targets, so that multiple loads to the same architectural register may be in flight simultaneously. By pairing vector/scalar unit decoupling with load buffers, the hardware can dynamically unroll loops and get loads started for multiple iterations. This can be done without using extra architectural registers or instruction cache space (as is done with software unrolling and/or software pipelining). These methods of decoupling are discussed in patent application Ser. No. 10/643,585 entitled “Decoupled Vector Architecture”, filed on even date herewith, the description of which is incorporated herein by reference.
In one embodiment, both scalar processing unit 12 and vector processing unit 14 employ memory/execution decoupling. Scalar and vector loads are issued as soon as possible after dispatch. Instructions that depend upon load values are dispatched to queues, where they await the arrival of the load data. Store addresses are computed early (in program order, interleaved with the loads), and their addresses saved for later use.
Methods of memory/execution decoupling are discussed as well in patent application Ser. No. 10/643,585, entitled “Decoupled Vector Architecture”, filed on even date herewith, the description of which is incorporated herein by reference.
In one embodiment, each scalar processing unit 12 is capable of decoding and dispatching one vector instruction (and accompanying scalar operand) per cycle. Instructions are sent in order to the vector processing units 14, and any necessary scalar operands are sent later after the vector instructions have flowed through the scalar unit's integer or floating point pipeline and read the specified registers. Vector instructions are not sent speculatively; that is, the flow control and any previous trap conditions are resolved before sending the instructions to vector processing unit 14.
The vector processing unit renames loads only (into the load buffers). Vector operations are queued, awaiting operand availability, and issue in order. No vector operation is issued until all previous vector memory operations are known to have completed without trapping (and as stated above, vector instructions are not even dispatched to the vector unit until all previous scalar instructions are past the trap point). Therefore, vector operations can modify architectural state when they execute; they never have to be rolled back, as do the scalar instructions.
In one embodiment, scalar processing unit 12 is designed to allow it to communicate with vector load/store unit 22 and vector execution unit 20 asynchronously. This is accomplished by having scalar operand and vector instruction queues between the scalar and vector units. Scalar and vector instructions are dispatched to certain instruction queues depending on the instruction type. Pure scalar instructions are just dispatched to the scalar queues where they are executed out of order. Vector instructions that require scalar operands are dispatched to both vector and scalar instruction queues. These instructions are executed in the scalar unit. They place scalar operands required for vector execution in the scalar operand queues that are between the scalar and vector units. This allows scalar address calculations that are required for vector execution to complete independently of vector execution.
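This dispatch scheme can be modeled with a short sketch. The class and field names below are invented for illustration, with Python queues standing in for the hardware instruction and operand queues; it is a toy model, not the patent's implementation.

```python
from collections import deque

class Dispatcher:
    """Toy model: pure scalar instructions go to the scalar queue only;
    vector instructions go to the vector queue, and those needing scalar
    operands also pass through the scalar unit, which places the computed
    operand in the scalar operand queue between the two units."""

    def __init__(self):
        self.scalar_q = deque()
        self.vector_q = deque()
        self.operand_q = deque()    # sits between the scalar and vector units

    def dispatch(self, instr):
        if instr["kind"] == "vector":
            self.vector_q.append(instr)
            if instr.get("needs_scalar"):
                self.scalar_q.append(instr)   # scalar unit computes the operand
        else:
            self.scalar_q.append(instr)       # pure scalar instruction

    def scalar_step(self):
        """Scalar unit executes one queued instruction in order."""
        instr = self.scalar_q.popleft()
        if instr["kind"] == "vector":
            # e.g. an address calculation required for vector execution,
            # completed independently of the vector unit
            self.operand_q.append((instr["name"], "operand"))
        return instr
```

The point of the model is that `operand_q` decouples the two units: the scalar unit can run ahead, filling the queue, while the vector unit drains it at its own pace.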
The vector processing unit is designed to allow vector load/store instructions to execute decoupled from vector execute unit 20. The vector load/store unit 22 issues and executes vector memory references when it has received the instruction and memory operands from scalar processing unit 12. Vector load/store unit 22 executes independently from vector execute unit 20 and uses load buffers in vector execute unit 20 as a staging area for memory load data. Vector execute unit 20 issues vector memory and vector operations from instructions that it receives from scalar processing unit 12.
When vector execution unit 20 issues a memory load instruction, it pulls the load data from the load buffers that were loaded by vector load/store unit 22. This allows vector execution unit 20 to operate without stalls due to having to wait for load data to return from main memory 16.
A method for reducing delays when synchronizing the memory references of multiple processors (such as processors 12 and 14) will be discussed next. The method is useful when a processor is performing writes that, due to default memory ordering rules or an explicit synchronization operation, are supposed to be ordered before subsequent references by another processor.
It is often the case that the address for a write operation is known many clocks (perhaps 100 or more) before the data for the write operation is available. In this case, if another processor's memory references must be ordered after the first processor's writes, then a conventional system may require waiting until the data is produced and the write is performed before allowing the other processor's references to proceed.
It is desirable to split the write operations up into two parts—a write address request and a write data request—and send each out to memory system 16 separately. One embodiment of such a method is shown in
As the subsequent requests by other processors are processed by the memory system, they are checked at 54 against the stored write addresses. If, at 56, there is no match, then the subsequent requests can be serviced immediately at 60. If, however, there is a match at 56, control moves to 58, where the requests are held in the memory system until the write data arrives, and then serviced.
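The memory-side matching logic described above can be sketched in a few lines. This is a hedged illustration of the idea rather than the patent's hardware; the names (`MemoryController`, `write_address`, `write_data`, `read`) are invented for clarity.

```python
class MemoryController:
    """Toy memory system: a read that matches a pending write address is
    held until the write data arrives; all other reads are serviced
    immediately."""

    def __init__(self):
        self.mem = {}        # address -> value
        self.pending = {}    # stored write address -> readers held on it

    def write_address(self, addr):
        # Part one of the decoupled write: the address arrives early
        # and is stored while the memory waits for the data.
        self.pending.setdefault(addr, [])

    def write_data(self, addr, value):
        # Part two: the data arrives; perform the write, then service
        # any reads that were held on this address.
        self.mem[addr] = value
        released = self.pending.pop(addr, [])
        return [(reader, value) for reader in released]

    def read(self, addr, reader):
        if addr in self.pending:              # matches a stored write address
            self.pending[addr].append(reader)
            return None                       # held until the data arrives
        return self.mem.get(addr)             # no match: serviced immediately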
Not all stores have to be ordered with other memory references. In many cases, the compiler knows that there is no possible data dependence between a particular store reference and subsequent references. In those cases, the references simply proceed, the hardware is free to schedule them as it sees fit, and the two references may get reordered.
Where, however, the compiler thinks that there may be a dependence, computer system 10 must make sure that a store followed by a load, or a load followed by a store, gets ordered correctly. In one embodiment, each processor 12 and 14 includes an instruction for coordinating references between processors 12 and 14. One such synchronization system is described in patent application Ser. No. 10/643,744, entitled “Multistream Processing System and Method”, filed on even date herewith, the description of which is incorporated herein by reference.
In one embodiment, computer system 10 takes the store address and runs it past the other processor's data cache to invalidate any matching entries. This forces the other processor to go to memory 16 on any subsequent reference to that address.
Processor 12 then sends the store addresses out to memory 16 and saves the addresses in memory 16. Then, when another processor 12 (or 14) executes a load that would have hit out of the data cache, it will miss because that line has been invalidated. It goes to memory 16 and gets matched against the stored store addresses. If the reference from the other processor does not match one of the store addresses stored in memory 16, it simply reads its corresponding data from memory. If it does, however, match one of the store addresses stored in memory 16, it waits until the data associated with that store address is written. Memory 16 then reads the data and returns it to the processor that requested it.
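As a rough sketch of the invalidation step (the line size and function names are assumptions, not from the patent), running a store address past another processor's data cache and forcing its next reference to that line out to memory might look like:

```python
LINE_SIZE = 64  # assumed cache line size

def invalidate_matching(dcache, store_addr):
    """Run a store address past a data cache, dropping any matching line.
    dcache maps line-aligned addresses to whole-line values."""
    line = store_addr & ~(LINE_SIZE - 1)
    dcache.pop(line, None)

def load(dcache, mem, addr):
    """Whole-line-granularity load: a miss goes to memory and refills."""
    line = addr & ~(LINE_SIZE - 1)
    if line not in dcache:
        dcache[line] = mem.get(line)   # invalidated line: must go to memory
    return dcache[line]
```

After `invalidate_matching`, the other processor's load can no longer return a stale cached value; it is forced to memory, where it can be checked against the saved store addresses.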
The method of the present invention therefore minimizes the delay waiting for the write data in the case where there is an actual conflict, and avoids the delay entirely in the case where there is not.
As an example, consider the case where processor A performs a write X, then processors A and B perform a synchronization operation that guarantees memory ordering, and then processor B performs a read Y. The method of the present invention will cause processor A to send the address for write X out to the memory system as soon as it is known, even though the data for X may not be produced for a considerable time.
Then, after synchronizing, processor B can send its read Y out to the memory system. If X and Y do not match, the memory system can return a value for Y even before the data for X has been produced. The synchronization event, therefore, did not require processor B to wait for processor A's write to complete before performing its read.
If, however, read Y did match the address of write X, then read Y would be stalled in the memory system until the data for write X arrived, at which time read Y could be serviced.
In one example embodiment, even though the write data and write address are sent at different times, they are received in instruction order at memory 16. In such an embodiment, there is no need to send an identifier associating an address with its associated data. Instead, the association is implied by the ordering. Such an example embodiment is illustrated in
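Because addresses and data arrive in the same instruction order, the pairing can be done with a simple queue. A minimal sketch, assuming an in-order channel (the names are invented):

```python
from collections import deque

class OrderedWritePipe:
    """Pairs write addresses with write data purely by arrival order;
    no identifier tag is ever transmitted."""

    def __init__(self):
        self.addrs = deque()   # addresses, in instruction order
        self.writes = []       # completed (address, data) pairs

    def put_address(self, addr):
        self.addrs.append(addr)

    def put_data(self, data):
        # Data arrives in the same instruction order, so the oldest
        # unmatched address is, by construction, this data's address.
        addr = self.addrs.popleft()
        self.writes.append((addr, data))
```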
In one embodiment, memory 16 includes a store address buffer 19 for storing write addresses while the memory waits for the associated write data.
The method according to the present invention requires that the participating processors share a memory system. In one embodiment, the processors share a cache, such as is done in chip-level multiprocessors (e.g., the IBM Power 4). In one such embodiment, store address buffer 19 is located within the cache.
In the embodiment shown in
The method for reducing delays when synchronizing the memory references of multiple processors can be extended as well to multiple units within a single processor (such as the vector and scalar units of a vector processor).
A computer 10 having a processor 28 connected across an interconnect network 18 to a memory 16 is shown in
In the embodiment shown in
For instance, in one embodiment, four processors 28 and four caches 24 are configured as a Multi-Streaming Processor (MSP) 30. An example of such an embodiment is shown in
In one embodiment, signaling between processor 28 and cache 24 runs at 400 Mb/s on processor-cache connection 32. Each processor to cache connection 32 shown in
In the embodiment shown in
In some systems, a load needed to produce store data could potentially be blocked behind a store dependent on that data. Processors 28 in such systems must therefore make sure that loads whose values may be needed to produce store data cannot become blocked in the memory system behind stores dependent on that data. In one embodiment of system 10, processing units within processor 28 operate decoupled from each other. It is therefore possible, for instance, for a scalar load and a vector store to occur out of order. In such cases, the processor must ensure that load requests which occur earlier (in program order) are sent out before store address requests that may depend upon the earlier load results. In one embodiment, therefore, issuing a write request includes ensuring that all vector and scalar loads from shared memory for that processor have been sent to shared memory prior to issuing the write request.
In one embodiment, the method according to the present invention is used for vector write operations, and provides ordering between the vector unit 14 and the scalar unit 12 of the same processor 28, as well as between the vector unit of one processor 28 and both the vector and scalar units of other processors 28.
Write addresses could be held by the memory system in several different formats. In one embodiment, a write address being tracked alters the cache state of a cache line in a shared cache within a processor 28. For example, a cache line may be changed to a “WaitForData” state. This indicates that a line contained in the cache is in a transient state in which it is waiting for write data, and is therefore inaccessible for access by other functional units.
In another embodiment, a write address being tracked alters the cache state of a cache line in cache 24. For example, a cache line may be changed to a “WaitForData” state. This indicates that a line contained in cache 24 is in a transient state in which it is waiting for write data, and is therefore inaccessible for access by other processors 28.
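A minimal sketch of this cache-state approach follows. The state names beyond “WaitForData” and the helper functions are assumptions made for illustration:

```python
from enum import Enum

class LineState(Enum):
    VALID = 1
    WAIT_FOR_DATA = 2   # transient: write address seen, data pending

class CacheLine:
    def __init__(self, value=None):
        self.state = LineState.VALID
        self.value = value

def track_write_address(line):
    # Tracking the write address is just a state change on the line.
    line.state = LineState.WAIT_FOR_DATA

def complete_write(line, data):
    # The write data arrives; the line becomes accessible again.
    line.value = data
    line.state = LineState.VALID

def can_access(line):
    # Other units/processors are blocked while the line awaits data.
    return line.state is not LineState.WAIT_FOR_DATA
```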
In another embodiment, write addresses to be tracked are encoded in a structure which does not save their full address. In order to save storage space, the write addresses simply cause bits to be set in a bit vector that is indexed by a subset of the bits in the write address. Subsequent references check for conflicts in this blocked line bit vector using the same subset of address bits, and may suffer from false matches. For example, a write address from one processor to address X may cause a subsequent read from another processor to address Y to block, if X and Y share some common bits.
In an alternate embodiment of such an approach, a write address being tracked is also saved in a structure that holds the entire address for each entry. Subsequent references which detect a conflict with an entry in the blocked line bit vector access the structure to obtain the whole write address. In this embodiment, only true matches will be blocked.
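The two tracking structures can be sketched as follows. The 6-bit index width is an arbitrary assumption for the example; real hardware would size the bit vector differently.

```python
INDEX_BITS = 6  # assumed: index the bit vector with the low address bits

def index(addr):
    return addr & ((1 << INDEX_BITS) - 1)

class BitVectorTracker:
    """Compact tracking: one bit per index. May block on false matches
    when two addresses share the same low-order bits."""

    def __init__(self):
        self.bits = 0

    def track(self, write_addr):
        self.bits |= 1 << index(write_addr)

    def conflicts(self, addr):
        return bool(self.bits & (1 << index(addr)))

class FullAddressTracker(BitVectorTracker):
    """Augments the bit vector with a side table of full addresses,
    so only true matches are blocked."""

    def __init__(self):
        super().__init__()
        self.full = set()

    def track(self, write_addr):
        super().track(write_addr)
        self.full.add(write_addr)

    def conflicts(self, addr):
        # The bit vector filters first; the full table resolves
        # any false match.
        return super().conflicts(addr) and addr in self.full
```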
This invention can be used with multiple types of synchronization, including locks, barriers, or even default memory ordering rules. Any time a set of memory references on one processor is supposed to be ordered before memory references on another processor, the system can simply ensure that write address requests of the first processor are ordered with respect to the other references, rather than wait for the complete writes, and the write addresses can provide the ordering guarantees via the matching logic in the memory system.
The method according to the present invention reduces latency for multiprocessor synchronization events, by allowing processors to synchronize with other processors before waiting for their pending write requests to complete. They can synchronize with other processors as soon as their previous write request addresses have been sent to the memory system to establish ordering.
In the above discussion, the term “computer” is defined to include any digital or analog data processing unit. Examples include any personal computer, workstation, set top box, mainframe, server, supercomputer, laptop or personal digital assistant capable of embodying the inventions described herein.
Examples of articles comprising computer readable media are floppy disks, hard drives, CD-ROM or DVD media or any other read-write or read-only memory device.
Portions of the above description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiment shown. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.
Number | Name | Date | Kind |
---|---|---|---
4412303 | Barnes et al. | Oct 1983 | A |
4414624 | Summer, Jr. et al. | Nov 1983 | A |
4868818 | Madan et al. | Sep 1989 | A |
4888679 | Fossum et al. | Dec 1989 | A |
4989131 | Stone | Jan 1991 | A |
5012409 | Fletcher et al. | Apr 1991 | A |
5036459 | den Haan et al. | Jul 1991 | A |
5161156 | Baum et al. | Nov 1992 | A |
5175733 | Nugent | Dec 1992 | A |
5197130 | Chen et al. | Mar 1993 | A |
5218676 | Ben-ayed et al. | Jun 1993 | A |
5247635 | Kamiya | Sep 1993 | A |
5247639 | Yamahata | Sep 1993 | A |
5247691 | Sakai | Sep 1993 | A |
5276899 | Neches | Jan 1994 | A |
5341482 | Cutler et al. | Aug 1994 | A |
5347450 | Nugent | Sep 1994 | A |
5365228 | Childs et al. | Nov 1994 | A |
5375223 | Meyers et al. | Dec 1994 | A |
5418916 | Hall et al. | May 1995 | A |
5430850 | Papadopoulos et al. | Jul 1995 | A |
5430884 | Beard et al. | Jul 1995 | A |
5530933 | Frink et al. | Jun 1996 | A |
5560029 | Papadopoulos et al. | Sep 1996 | A |
5606696 | Ackerman et al. | Feb 1997 | A |
5613114 | Anderson et al. | Mar 1997 | A |
5640524 | Beard et al. | Jun 1997 | A |
5649141 | Yamazaki | Jul 1997 | A |
5684977 | Van Loo et al. | Nov 1997 | A |
5717895 | Leedom et al. | Feb 1998 | A |
5765009 | Ishizaka | Jun 1998 | A |
5781775 | Ueno | Jul 1998 | A |
5787494 | DeLano et al. | Jul 1998 | A |
5796980 | Bowles | Aug 1998 | A |
5812844 | Jones et al. | Sep 1998 | A |
5835951 | McMahan | Nov 1998 | A |
5897664 | Nesheim et al. | Apr 1999 | A |
5978830 | Nakaya et al. | Nov 1999 | A |
5987571 | Shibata et al. | Nov 1999 | A |
5995752 | Chao et al. | Nov 1999 | A |
6003123 | Carter et al. | Dec 1999 | A |
6014728 | Baror | Jan 2000 | A |
6047323 | Krause | Apr 2000 | A |
6088701 | Whaley et al. | Jul 2000 | A |
6101590 | Hansen | Aug 2000 | A |
6161208 | Dutton et al. | Dec 2000 | A |
6247169 | DeLong | Jun 2001 | B1 |
6269390 | Boland | Jul 2001 | B1 |
6269391 | Gillespie | Jul 2001 | B1 |
6317819 | Morton | Nov 2001 | B1 |
6336168 | Frederick, Jr. et al. | Jan 2002 | B1 |
6339813 | Smith et al. | Jan 2002 | B1 |
6356983 | Parks | Mar 2002 | B1 |
6385715 | Merchant et al. | May 2002 | B1 |
6389449 | Nemirovsky et al. | May 2002 | B1 |
6393536 | Hughes et al. | May 2002 | B1 |
6430649 | Chaudhry et al. | Aug 2002 | B1 |
6490671 | Frank et al. | Dec 2002 | B1 |
6496902 | Faanes et al. | Dec 2002 | B1 |
6496925 | Kota et al. | Dec 2002 | B1 |
6519685 | Chang | Feb 2003 | B1 |
6553486 | Ansari | Apr 2003 | B1 |
6591345 | Seznec | Jul 2003 | B1 |
6615322 | Arimilli et al. | Sep 2003 | B2 |
6665774 | Faanes et al. | Dec 2003 | B2 |
6684305 | Deneau | Jan 2004 | B1 |
6782468 | Nakazato | Aug 2004 | B1 |
6922766 | Scott | Jul 2005 | B2 |
6925547 | Scott et al. | Aug 2005 | B2 |
6952827 | Alverson et al. | Oct 2005 | B1 |
6976155 | Drysdale et al. | Dec 2005 | B2 |
7028143 | Barlow et al. | Apr 2006 | B2 |
7089557 | Lee | Aug 2006 | B2 |
7103631 | van der Veen | Sep 2006 | B1 |
7111296 | Wolrich et al. | Sep 2006 | B2 |
7137117 | Ginsberg | Nov 2006 | B2 |
7143412 | Koenen | Nov 2006 | B2 |
7162713 | Pennello | Jan 2007 | B2 |
7191444 | Alverson et al. | Mar 2007 | B2 |
7334110 | Faanes et al. | Feb 2008 | B1 |
7366873 | Kohn | Apr 2008 | B1 |
7421565 | Kohn | Sep 2008 | B1 |
7437521 | Scott et al. | Oct 2008 | B1 |
7519771 | Faanes et al. | Apr 2009 | B1 |
20020078122 | Joy et al. | Jun 2002 | A1 |
20020091747 | Rehg et al. | Jul 2002 | A1 |
20020116600 | Smith et al. | Aug 2002 | A1 |
20030018875 | Henry et al. | Jan 2003 | A1 |
20030097531 | Arimilli et al. | May 2003 | A1 |
20030167383 | Gupta et al. | Sep 2003 | A1 |
20030196035 | Akkary | Oct 2003 | A1 |
20040064816 | Alverson et al. | Apr 2004 | A1 |
20040162949 | Scott et al. | Aug 2004 | A1 |
20050044339 | Sheets | Feb 2005 | A1 |
20050044340 | Sheets et al. | Feb 2005 | A1 |
20050125801 | King | Jun 2005 | A1 |
20070283127 | Kohn et al. | Dec 2007 | A1 |
Number | Date | Country |
---|---|---
0353819 | Feb 1990 | EP |
0475282 | Sep 1990 | EP |
0473452 | Mar 1992 | EP |
0475282 | Mar 1992 | EP |
0501524 | Sep 1992 | EP |
0570729 | Nov 1993 | EP |
WO-8701750 | Mar 1987 | WO |
WO-8808652 | Nov 1988 | WO |
WO-9516236 | Jun 1995 | WO
WO-2005020088 | Mar 2005 | WO |
Number | Date | Country
---|---|---
20050044128 A1 | Feb 2005 | US |