The present invention relates generally to a symmetrical multiprocessor system and, more particularly, to the address arbitration scheme used across multiple chips in the system.
This application is related to application Ser. No. 11/121,121, filed May 5, 2005, for Retry Cancellation Mechanism to Enhance System Performance, which is incorporated by reference herein.
In a symmetrical multiprocessing system, there are three main components: the processing units with their cache; the input/output (I/O) devices with their direct memory access (DMA) engines; and the distributed system memory. The processing units execute instructions. The I/O devices handle the physical transmission of data to and from memory using the DMA engine. The processing units also control the I/O devices by issuing commands from an instruction stream. The distributed system memory stores data for use by these other components. As the number of processing units and system memory size increases, the processing systems need to be housed in separate chips.
The separate chips need to be able to communicate with each other in order to transfer data between all the components in the system. Also, in order to keep the processing units' caches coherent, each device in the system needs to see each command issued. The processing units' caches keep copies of data from system memory in order to give the processing units fast access to that data. The coherent architecture allows a cache to hold a shared copy of data (the data is unmodified and the same as in system memory) or an exclusive copy of data so the processing unit can update it (the data in the cache is the most up-to-date version). In order to keep each processing unit's cache valid, each command in the system has to be seen by every device so that out-of-date copies of data can be invalidated and not used for future processing. Eventually, the modified copy of the data is written back to system memory and the entire process can start over again.
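A minimal sketch of this cache-line behavior is shown below, assuming a simple Invalid/Shared/Exclusive state model; the state names and data structures are illustrative only and are not taken from the invention.

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative cache-line states: Shared matches system memory, Exclusive
// means this cache holds the most up-to-date copy and may modify it.
enum class LineState { Invalid, Shared, Exclusive };

struct Cache {
    std::unordered_map<uint64_t, LineState> lines;   // line address -> state

    // Snooping a command that requests exclusive ownership of 'addr' forces
    // this cache to invalidate its now out-of-date copy.
    void snoop_exclusive_request(uint64_t addr) {
        auto it = lines.find(addr);
        if (it != lines.end()) it->second = LineState::Invalid;
    }

    // Gaining exclusive access lets the local processing unit update the
    // line; the modified data is eventually written back to system memory.
    void gain_exclusive(uint64_t addr) { lines[addr] = LineState::Exclusive; }
};
```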
In order to simplify the design of the various components, all commands are sent to an address concentrator, which ensures that no two commands to the same address are allowed in the system at the same time. If two commands to the same address were allowed in the system at the same time, the various components would have to keep track of each address they had acknowledged and compare it against each new address to see whether they were already in the middle of a transfer for that address. If they were in the middle of a transfer, they would have to retry the second command so it could complete after the current transfer completed. Also, if two or more processing units were trying to get exclusive access to a cache line, they could “fight” for ownership and reduce system performance. By having the address concentrator ensure that no two commands to the same address are active at the same time, the logic needed in each system component is reduced.
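The serialization an address concentrator provides can be sketched as below, assuming it simply tracks which addresses currently have a command in flight; the Accept/Retry names and class interface are assumptions for illustration.

```cpp
#include <cstdint>
#include <unordered_set>

enum class AcDecision { Accept, Retry };

class AddressConcentrator {
    std::unordered_set<uint64_t> active;   // addresses with a command in flight
public:
    // A second command to an address that is already active is retried so it
    // completes only after the current transfer has finished.
    AcDecision issue(uint64_t addr) {
        if (active.count(addr) != 0) return AcDecision::Retry;
        active.insert(addr);
        return AcDecision::Accept;
    }

    // Called once the transfer for 'addr' has completed.
    void complete(uint64_t addr) { active.erase(addr); }
};
```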
Current systems implement the address concentrator as either a separate chip in the system, as seen in
The separate chip case of
In
The single address concentrator in one of the processing chip's case of
In
The present invention retains the good qualities of a single address concentrator (AC), without any extra chips or wires, by distributing the AC function among the various chips, making use of the fact that each chip in the system has a copy of the AC function therein. Using the distributed address concentrator function, each chip will handle approximately one-fourth of the command traffic, and the average latency of servicing the commands will be approximately the same across each chip in the system.
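One possible way to assign roughly one-fourth of the address load to each of the four chips is sketched below; the even, contiguous split and the address-space size are assumptions for illustration, since the invention only requires that each chip be assigned an address range.

```cpp
#include <cstdint>

constexpr int kNumChips = 4;
constexpr uint64_t kAddressSpace = 1ULL << 42;          // assumed 4 TB real address space
constexpr uint64_t kSlice = kAddressSpace / kNumChips;  // one-fourth per chip

// Index (0..3) of the chip whose AC function is assigned 'addr'.
int ac_owner(uint64_t addr) {
    return static_cast<int>(addr / kSlice);
}
```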
Referring now to
In the present invention, when a processor issues a command, the local address concentrator checks to see if the address is part of its assigned address range. If so, then this chip's AC will assume the address concentrator function for this command, and forward the command to the rest of the system. If not, then the command will be forwarded to the next chip in the system and that AC function will do its address range compare. This process continues until the appropriate AC address range is found and then that AC logic will perform the address concentration function for that command.
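The range check and ring forwarding described above can be sketched as follows, assuming each chip knows its own assigned range and the next chip in the ring; the structure and function names are hypothetical.

```cpp
#include <cstdint>

struct AddressRange { uint64_t lo, hi; };   // inclusive bounds of a chip's assigned range

struct Chip {
    int id;
    AddressRange assigned;
    Chip* next;                             // next chip in the ring

    bool owns(uint64_t addr) const {
        return addr >= assigned.lo && addr <= assigned.hi;
    }
};

// Walk the ring, starting at the chip where the command was issued, until the
// chip whose assigned range contains 'addr' is found; that chip's AC performs
// the address concentration for this command.
Chip* find_ac_chip(Chip* origin, uint64_t addr) {
    Chip* c = origin;
    do {
        if (c->owns(addr)) return c;
        c = c->next;                        // forward the command to the next chip
    } while (c != origin);
    return nullptr;                         // address not covered by any range
}
```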
Whichever chip assumes the address concentrator function will start the command process of sending the reflected command 22, gathering the partial responses 24, and finally building the combined response 26. After the command phase is completed, the data movement can proceed. Since each chip is assigned one-fourth of the system addresses, each chip's AC logic will handle approximately one-fourth of the total address load, and the average delay of service will be consistent across the four chips.
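The combined-response step can be pictured as below; the response encoding is assumed for illustration, since the invention does not define specific response values.

```cpp
#include <vector>

enum class PartialResponse { Ack, Retry };
enum class CombinedResponse { Go, Retry };

// The AC that owns the command gathers one partial response from every chip
// and builds the combined response; any single retry forces the command to
// be retried system-wide.
CombinedResponse combine(const std::vector<PartialResponse>& partials) {
    for (PartialResponse p : partials)
        if (p == PartialResponse::Retry) return CombinedResponse::Retry;
    return CombinedResponse::Go;
}
```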
Chip 10d sends this reflected command to each of its internal units, and the reflected command 22 is also passed to chip 10a. Chip 10a sends the command internally and then passes the reflected command 22 to chip 10b, the originating chip. Chip 10b sends the command internally, but does not need to pass it on to chip 10c, because chip 10c is the AC function chip for this command. Each chip forwards its partial response 24 around the ring until it arrives at chip 10c, the AC function chip for this command. At this point, with the addition of the input from chip 10c, all chips have contributed their partial responses. Thus, the AC function on chip 10c generates the combined response 26, which is then relayed to each of chips 10d, 10a, and 10b. Now that each device in the system has seen the command and the combined response, the data movement can proceed, just as in the prior art described above. Four such address flows can be sent through the system simultaneously, with the AC function on each chip servicing approximately one-fourth of the commands in the system. Additional flags are sent with the commands to indicate which chip is the AC function for a given command, and each chip in the system understands how to forward the commands and responses for a given AC assignment.
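The ring traversal in the example above (AC function on chip 10c) can be sketched as follows, assuming the chips are numbered 0 through 3 around the ring and every message carries a flag naming the AC chip for the command; all names and the message handling are illustrative.

```cpp
#include <cstdio>

constexpr int kNumChips = 4;                       // e.g. 0=10a, 1=10b, 2=10c, 3=10d
int next_in_ring(int chip) { return (chip + 1) % kNumChips; }

// The AC chip snoops the reflected command internally, then the command
// travels the ring; the chip just before the AC chip does not forward it.
void reflect_command(int ac_chip) {
    int c = ac_chip;
    while (true) {
        std::printf("chip %d snoops reflected command (AC flag = chip %d)\n",
                    c, ac_chip);
        c = next_in_ring(c);
        if (c == ac_chip) break;                   // ring complete; nothing is sent back to the AC
    }
}

// Partial responses are merged hop by hop around the ring; the AC chip adds
// its own input last and then relays the combined response to every chip.
bool gather_retry(int ac_chip, const bool retry_needed[kNumChips]) {
    bool any_retry = false;
    int c = next_in_ring(ac_chip);
    while (true) {
        any_retry = any_retry || retry_needed[c];
        if (c == ac_chip) break;                   // AC's own input ends the gather
        c = next_in_ring(c);
    }
    return any_retry;                              // true => combined response is a retry
}
```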
Thus, with a proper balancing of responsibilities, the address concentrator function is distributed across the system and the desired functions are accomplished without the limitations and drawbacks of the prior art as described above.
While the invention has been described in combination with embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing teachings. Accordingly, the invention is intended to embrace all such alternatives, modifications and variations as fall within the spirit and scope of the appended claims.