Improvements in semiconductor processing technology have resulted in gains in computer processor performance. Not only has semiconductor feature size been reduced to allow higher component density on a die, but decreases in semiconductor defects have also made larger die sizes more cost effective. This has made the integration of multiple processors and multiple levels of cache hierarchy on a single integrated chip possible.
Processor cycle time and memory access time are two important performance measures that together contribute to overall processor performance. Processor clock frequency has been improving at a rate faster than memory access time, so processor performance is increasingly limited by relatively long memory access times. With greater interest by processor engineers in this ever-widening gap between processor cycle time and memory access time, many different cache organizations for multiprocessor systems have been proposed. Typically today, each processor core on a multiprocessor chip has its own first-level cache. The first-level cache is the level of the cache hierarchy most closely coupled to a processing unit of the processor, and is typically the fastest and/or smallest cache level coupled to the processor. Depending on the size of the individual processor and the amount of cache required, a level-two cache may be integrated on-chip or located off-chip. The level-two cache is coupled to the level-one cache and is often shared by more than one processor in multiprocessor systems.
Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
The invention can be implemented in numerous ways, including as a process, an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium, or a computer network wherein program instructions are sent over optical or electronic communication links. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. A component such as a processor or a memory described as being configured to perform a task includes both a general component that is temporarily configured to perform the task at a given time and a specific component that is manufactured to perform the task. In general, the order of the steps of disclosed processes may be altered within the scope of the invention.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
A multi-cluster chip is disclosed. In some embodiments, the chip includes multiple clusters connected together through a cluster communication network on a single die. Any data passed between the processors, caches, memory, or any other chip component can be communicated through the cluster communication network. Each cluster contains multiple processors, each with at least one private cache level (e.g., an L1 cache or an L0 cache). Data in a private cache level may not be accessed directly by other processors. In some embodiments, only addresses are private in the cache level. Each processor may have multiple levels of cache hierarchy. In some embodiments, each cluster is associated with one or more shared cache levels (e.g., an L2 cache or an L3 cache). Data in a shared cache level may be accessed by more than one processor associated with the cluster. At least one shared cache level is coupled to at least one private cache level of each processor associated with the cluster. By dividing the processors into clusters that share a common cache level, the set associativity and bandwidth required of the common cache level can be optimized efficiently. This organization is illustrated by the sketch below.
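Purely as an illustration of the organization just described, and not as a description of any claimed embodiment, the following C sketch models a chip made of clusters, each containing several processors with a private cache level and one shared cluster cache coupled to the cluster communication network. All counts, sizes, and identifiers (PROCESSORS_PER_CLUSTER, network_port_id, etc.) are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#define PROCESSORS_PER_CLUSTER 4   /* hypothetical counts chosen for illustration */
#define CLUSTERS_PER_CHIP      4
#define LINES_PER_CACHE        16

/* One cache line; real tag and line sizes are design-specific. */
typedef struct {
    uint64_t tag;
    int      valid;
    uint8_t  data[64];
} cache_line_t;

typedef struct {
    cache_line_t lines[LINES_PER_CACHE];
} cache_level_t;

/* A processor with one private cache level (e.g., an L1) that other
 * processors do not access directly. */
typedef struct {
    cache_level_t private_l1;
} processor_t;

/* A cluster: several processors coupled to one shared cluster cache
 * (e.g., an L2), plus a port onto the on-die cluster communication network. */
typedef struct {
    processor_t   procs[PROCESSORS_PER_CLUSTER];
    cache_level_t shared_l2;
    int           network_port_id;
} cluster_t;

/* The chip: multiple clusters connected by the cluster communication network. */
typedef struct {
    cluster_t clusters[CLUSTERS_PER_CHIP];
} chip_t;

int main(void)
{
    static chip_t chip;                       /* static: the model can be large */
    chip.clusters[0].network_port_id = 0;
    printf("model size: %zu bytes\n", sizeof chip);
    return 0;
}
```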
To obtain full performance of the clusters, high-bandwidth performance characteristics are desired in the cluster communication network. A common method for implementing a connection between functional units is a shared bus (e.g., as shown at 106 of FIG. 1).
Any data that needs to be transferred among the clusters on the chip is communicated through the cluster communication network. Two example categories of communication between the clusters are coherency communication and message passing. In coherency communication, a coherency protocol may be used to keep data related to the same memory location residing in the cluster caches of two or more clusters consistent. For example, many clusters can be reading and using data from the same memory location. If one cluster modifies data that is to be maintained coherent, the other clusters need to be informed about the modification. The cluster that requests to modify the data may send out an “invalidate” command to the other clusters using the data before modifying it. When the other clusters receive the “invalidate” command, they need to re-obtain the modified data from the cluster that modified it before the data can be used again.
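The invalidate flow described above can be sketched in a few lines of C. This is a deliberately simplified model that tracks a single coherent line per cluster; broadcast_invalidate(), read_line(), and the data layout are hypothetical and do not correspond to any specified interconnect or protocol.

```c
#include <stdio.h>
#include <stdint.h>

#define NUM_CLUSTERS 4

typedef struct {
    uint64_t addr;
    int      valid;   /* nonzero: this cluster holds a usable copy of the line */
} coherent_line_t;

static coherent_line_t copies[NUM_CLUSTERS];  /* one tracked line per cluster */

/* Step 1: the writing cluster sends "invalidate" to every other cluster
 * holding the line before it modifies the coherent data. */
static void broadcast_invalidate(int writer, uint64_t addr)
{
    for (int c = 0; c < NUM_CLUSTERS; c++) {
        if (c != writer && copies[c].valid && copies[c].addr == addr) {
            copies[c].valid = 0;   /* the receiver drops its stale copy */
            printf("cluster %d: copy of %#llx invalidated by cluster %d\n",
                   c, (unsigned long long)addr, writer);
        }
    }
}

/* Step 2: a cluster whose copy was invalidated must re-obtain the modified
 * data from the writing cluster before using it again. */
static void read_line(int reader, int writer, uint64_t addr)
{
    if (!copies[reader].valid) {
        copies[reader] = copies[writer];  /* re-fetch over the network */
        printf("cluster %d: re-fetched %#llx from cluster %d\n",
               reader, (unsigned long long)addr, writer);
    }
}

int main(void)
{
    uint64_t addr = 0x1000;
    for (int c = 0; c < NUM_CLUSTERS; c++)
        copies[c] = (coherent_line_t){ .addr = addr, .valid = 1 };

    broadcast_invalidate(0, addr);  /* cluster 0 wants to modify the line */
    read_line(2, 0, addr);          /* cluster 2 must re-fetch before reading */
    return 0;
}
```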
In some embodiments, at least one cache level in cluster caches 210 and 212 is kept coherent. Coherency may be maintained on at least a portion of the data in the private cache level of each processor and/or on at least a portion of the data in the cluster cache. When coherency is maintained only on the private cache level, coherency data traffic may be communicated between the private caches of different clusters. In some embodiments, a cluster cache may be inclusive of at least one private cache level of the processors associated with the cluster cache. For example, any data cached in a private level of a processor is also cached in a cache level associated with the cluster cache. When the cluster cache is inclusive of the private cache level, only cluster cache coherency data traffic needs to be sent between the clusters. This may reduce cluster-to-cluster coherency traffic bandwidth and/or overall coherency traffic bandwidth compared to when the cluster cache is not inclusive of the private cache level. Coherency traffic may then grow in proportion to the number of clusters in a system rather than the number of processors in the system. Coherency of data in the private cache level of the processors may be maintained by the respective cluster cache coupled to that private cache level. In some embodiments, a cluster cache is inclusive only of data that is maintained coherent in the private cache level of the processors. For example, instruction data in the private cache level is not maintained coherent and is not included in the cluster-level cache. In some embodiments, at least some data maintained coherent in the private cache level is not cached in the cluster cache.
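The inclusion property can also be illustrated with a short sketch. The assumption here is a strictly inclusive cluster cache, which is only one of the embodiments described above; the structures and helpers (fill_private(), external_invalidate()) are hypothetical. The point it shows is that an invalidate arriving from another cluster only needs to probe the cluster cache, which is why inter-cluster coherency traffic can scale with the number of clusters rather than the number of processors.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PROCS_PER_CLUSTER 4

typedef struct {
    uint64_t addr;
    bool     valid;
} line_t;

typedef struct {
    line_t private_l1[PROCS_PER_CLUSTER];  /* one tracked line per processor */
    line_t cluster_l2;                     /* shared, inclusive of the L1s */
} cluster_t;

/* Filling a private L1 also installs the line in the cluster cache, so the
 * cluster cache can answer external coherency probes for all its processors. */
static void fill_private(cluster_t *cl, int proc, uint64_t addr)
{
    cl->private_l1[proc] = (line_t){ addr, true };
    cl->cluster_l2       = (line_t){ addr, true };
}

/* An invalidate arriving from another cluster is checked against the cluster
 * cache only; it fans out to the private L1s only on a hit. */
static void external_invalidate(cluster_t *cl, uint64_t addr)
{
    if (!(cl->cluster_l2.valid && cl->cluster_l2.addr == addr)) {
        printf("miss in cluster cache: inclusion guarantees no private copy of %#llx\n",
               (unsigned long long)addr);
        return;
    }
    cl->cluster_l2.valid = false;
    for (int p = 0; p < PROCS_PER_CLUSTER; p++)
        if (cl->private_l1[p].valid && cl->private_l1[p].addr == addr)
            cl->private_l1[p].valid = false;
    printf("invalidated %#llx in cluster cache and private copies\n",
           (unsigned long long)addr);
}

int main(void)
{
    cluster_t cl = {0};
    fill_private(&cl, 1, 0x2000);     /* processor 1 caches a line */
    external_invalidate(&cl, 0x2000); /* one probe serves the whole cluster */
    return 0;
}
```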
In message passing, messages are sent by one cluster to one or more other clusters. The messages may contain any data to be shared between the clusters. In some cases, the messages to be passed are specified by the programmer. The cluster caches of the communicating clusters do not necessarily have to be coherent, as the sketch below illustrates.
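The following is a minimal sketch of message passing between clusters, assuming a hypothetical mailbox-style interface; send_message(), the destination bitmask, and the message format are illustrative only and not a defined interface of the cluster communication network.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_CLUSTERS 4
#define MSG_BYTES    64

typedef struct {
    int     src;                 /* sending cluster */
    uint8_t payload[MSG_BYTES];  /* any data the program chooses to share */
} message_t;

static message_t mailbox[NUM_CLUSTERS];  /* one pending message per cluster */

/* Deliver one message to each destination whose bit is set in dest_mask.
 * No coherency is implied: the caches of the clusters involved need not
 * be coherent for message passing to work. */
static void send_message(int src, uint32_t dest_mask, const void *data, size_t len)
{
    for (int c = 0; c < NUM_CLUSTERS; c++) {
        if (dest_mask & (1u << c)) {
            mailbox[c].src = src;
            memcpy(mailbox[c].payload, data, len < MSG_BYTES ? len : MSG_BYTES);
            printf("cluster %d -> cluster %d: %zu bytes\n", src, c, len);
        }
    }
}

int main(void)
{
    const char note[] = "programmer-specified message";
    send_message(0, 0x6, note, sizeof note);  /* cluster 0 to clusters 1 and 2 */
    return 0;
}
```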
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.