This application is related to the following U.S. utility patent application, which is entirely incorporated herein by reference: U.S. patent application Ser. No. 11/300,705, entitled “SWITCHING METHOD AND SYSTEM FOR MULTIPLE GPU SUPPORT,” filed on Dec. 15, 2005.
The present disclosure relates to graphics processing and, more particularly, to a method and system for supporting multiple graphics processor units by converting one link to multiple links.
Current computer applications are more graphically intensive and demand more graphics processing power than their predecessors. Applications such as games typically involve complex, highly detailed graphics renderings that require a substantial amount of ongoing computation. Computer configurations have likewise changed to match consumer demands for increased graphics capabilities in computing applications such as games.
As computers, particularly personal computers, have been programmed to handle increasingly demanding entertainment and multimedia applications, such as high-definition video and the latest 3-D games, increasing demands have been placed on system bandwidth. To meet these changing requirements, methods have arisen to deliver the bandwidth needed by current bandwidth-hungry applications, as well as to provide additional headroom, or bandwidth, for future generations of applications.
This increase in bandwidth has been realized in recent years in the bus system of the computer's motherboard. A bus comprises conductors that are hardwired onto the printed circuit board of the computer's motherboard. A bus is typically split into two channels: one that transfers data and one that manages where the data is to be transferred. This internal bus system is designed to transmit data between the processor and memory and any device connected to the computer.
One such bus system is the PCI bus, which was designed to connect I/O (input/output) devices to the computer. The PCI bus accomplished this connection by linking such devices to a south bridge chip over a 32-bit bus running at 33 MHz.
The PCI bus was designed to operate at 33 MHz and, with its 32-bit width, is therefore able to transfer 133 MB/s, which is recognized as its total bandwidth. While this bandwidth was sufficient for early applications that utilized the PCI bus, more recently released applications have suffered in performance due to this relatively narrow bandwidth.
More recently, an interface known as AGP (Accelerated Graphics Port) was introduced for 3-D graphics applications. Graphics cards coupled to computers via an AGP 8× link realized bandwidths of approximately 2.1 GB/s, a substantial increase over the PCI bus described above.
Even more recently, a new type of bus has emerged with even higher bandwidth than both the PCI and AGP standards. This new standard, known as PCI Express, typically operates at 2.5 Gb/s per lane, or 250 MB/s per lane in each direction, thereby providing a total bandwidth of 10 GB/s in a 20-lane configuration. PCI Express (which may be abbreviated herein as "PCIe") architecture is a serial interconnect technology configured to keep pace with processor and memory advances. As stated above, signaling rates in the 2.5 GHz range may be realized using only 0.8 volts.
At least one advantage of the PCI Express architecture is its flexibility, which enables scaling of speeds. By combining links to form multiple lanes, PCIe can support ×1, ×2, ×4, ×8, ×12, ×16, and ×32 lane widths. Nevertheless, in many desktop applications, motherboards may be populated with a number of ×1 lanes and/or one or even two ×16 lanes for PCIe-compatible graphics cards.
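For illustration only, the following minimal Python sketch (not part of the disclosure) computes the per-direction and aggregate bandwidths implied by the 250 MB/s-per-lane figure cited above for the standard PCIe 1.x lane widths; the constant and function names are hypothetical.

```python
# Nonlimiting sketch: PCIe 1.x bandwidth scaling by lane width,
# assuming 250 MB/s per lane in each direction, as cited above.

PER_LANE_MB_S = 250  # MB/s, one direction, per lane (hypothetical constant)

def pcie_bandwidth(lanes: int) -> tuple[int, int]:
    """Return (per-direction, aggregate) bandwidth in MB/s for an xN link."""
    one_way = lanes * PER_LANE_MB_S
    return one_way, 2 * one_way

for width in (1, 2, 4, 8, 12, 16, 32):
    one_way, total = pcie_bandwidth(width)
    print(f"x{width:<2}: {one_way} MB/s per direction, {total} MB/s aggregate")

# An x16 link yields 4000 MB/s (~4 GB/s) per direction, matching the
# figure given below for the 1x16 graphics link, and a 20-lane
# configuration yields 10 GB/s aggregate, matching the figure above.
```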
As a nonlimiting example, one or more peripheral devices 22a-22d may be coupled to north bridge chip 14 via an individual pair of point-to-point data lanes, which may be configured as ×1 communication paths 24a-24d, as described above. Likewise, a south bridge chip 16, as known in the art, may be coupled by one or more PCIe lanes 26a and 26b to peripheral devices 28a and 28b, respectively.
A graphics processing device 30 (which may hereinafter be referred to as GPU 30) may be coupled to the north bridge chip 14 via a PCIe 1×16 link 32, which essentially may be characterized as 16×1 PCIe links, as described above. Under this configuration, the 1×16 PCIe link 32 may be configured with a bandwidth of approximately 4 GB/s.
Even with the advent of PCIe communication paths and other high bandwidth links, graphics applications have still reached limits at times due to the processing capabilities of the processors on devices such as GPU 30.
Thus, in one nonlimiting application, GPU 30 and an additional GPU 36 should be configured to operate in harmony with each other.
Thus, there is a heretofore-unaddressed need to overcome the deficiencies and shortcomings described above.
This disclosure describes a system and method for supporting multiple graphics processing units (GPUs), which may be positioned on one or multiple graphics cards coupled to a motherboard. The system and method disclosed herein comprise a first path coupled to a north bridge device (or a root complex device) and a first GPU, which may include a portion of the first GPU's total communication lanes. As a nonlimiting example, the first path may be coupled to connection points 0-7 of the first GPU (in a 16-lane configuration) and to connection points 0-7 of the north bridge device.
A second path may be coupled to the north bridge device and a second GPU and may include a portion of the second GPU's total communication lanes. As a nonlimiting example, the second path may be coupled to connection points 0-7 of the second GPU and connection points 8-15 of the north bridge device.
A third communication path may be coupled between the first and second GPUs, either directly or through one or more switches that can be configured for single or multiple GPU operations. In one nonlimiting example, the third path may be coupled to connection points 8-15 on each of the first and second GPUs. However, the third communication path may include some or all of the remaining communication lanes for the first and second GPUs. As a nonlimiting example, the first and second GPUs may each utilize an 8-lane PCI Express communication path with the north bridge device and an 8-lane PCI Express communication path with each other.
If the second GPU is not utilized, as a nonlimiting example, switches on the graphics cards or the motherboard may be controlled so that connection points 8-15 of the first GPU are coupled to connection points 8-15 of the north bridge device. In this nonlimiting example, the one or more switches may include one or more multiplexing and/or demultiplexing devices.
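The lane assignments summarized above may be visualized with the following minimal Python sketch; the device names and data layout are hypothetical illustrations, not part of the disclosed hardware.

```python
# Nonlimiting sketch of the disclosed lane mapping for a 16-lane north
# bridge ("nb") and two 16-lane GPUs. All identifiers are illustrative.

def lane_map(dual_gpu: bool) -> dict[str, tuple[str, range]]:
    """Map north bridge connection points to (device, device lanes)."""
    if dual_gpu:
        return {
            "nb[0:8]":  ("gpu0", range(0, 8)),  # first path
            "nb[8:16]": ("gpu1", range(0, 8)),  # second path
            # third path: gpu0 lanes 8-15 <-> gpu1 lanes 8-15 (not via nb)
        }
    # Second GPU unused: switches couple nb points 8-15 to the first
    # GPU's lanes 8-15, giving that GPU a full x16 link.
    return {
        "nb[0:8]":  ("gpu0", range(0, 8)),
        "nb[8:16]": ("gpu0", range(8, 16)),
    }

print(lane_map(dual_gpu=True))
print(lane_map(dual_gpu=False))
```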
Other systems, methods, features, and advantages of the present disclosure will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the disclosure, and be protected by the accompanying claims.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure.
As described above, configuring multiple graphics processors presents a difficult set of problems involving inter-GPU traffic and the coordination of graphics processing operations so that the multiple graphics processors operate in harmony.
In this nonlimiting example, GPUs 30 and 36 are coupled to north bridge chip 14 via two 8-lane PCIe interfaces 33 and 38, respectively, as described above. More specifically, GPU 30 may be coupled to north bridge chip 14 via 8-lane PCIe interface 33 at a first link interface, denoted by reference numeral 51, and GPU 36 may be coupled via 8-lane PCIe interface 38 at a first link interface denoted by reference numeral 49.
An additional PCIe interface 48 may be coupled between second link interfaces 53 and 55 of GPUs 30 and 36, respectively. In this way, GPUs 30 and 36 may communicate with each other via this second PCIe interface 48 without involving north bridge chip 14, system memory, or other components of computer 45. In this configuration, inter-GPU traffic realizes low latencies compared to the configurations described above. In addition, 16 lanes of PCIe bandwidth are utilized between GPUs 30 and 36 and north bridge chip 14 via PCIe interfaces 33 and 38. In this nonlimiting example, PCIe interface 48 is configured with 8 PCIe lanes, or at ×8. However, one of ordinary skill in the art would know that this interface linking GPUs 30 and 36 could be scaled to one or more different lane configurations, thereby adjusting the bandwidth between GPUs 30 and 36.
One implementation is a graphics card format carrying dual GPUs, as depicted in the accompanying drawings.
As described above, 8 PCIe lanes are used by each of the first and second GPUs 30 and 36 for communication with north bridge chip 14.
In similar fashion, the second GPU 36 communicates with north bridge chip 14 via lanes 0-7 of interface 65. More specifically, the first 8 PCIe lanes of interface 65 (numbered as lanes 0-7) are coupled to connection points 8-15 of connector 71. Thus, data communicated between the second GPU 36 and north bridge chip 14 is routed through lanes 0-7 of interface 65, through connection points 8-15 of connector 71, and across 8-lane PCIe interface 38 to north bridge chip 14.
In this nonlimiting example, inter-GPU communication takes place on the graphics card 60 between lanes 8-15 of interfaces 62 and 65, respectively.
In this nonlimiting example, graphics card 60 may also include a reference clock input that is coupled to north bridge chip 14 so that a clock buffer 73 coordinates processing of each of GPUs 30 and 36. However, one or more other clocking configurations may work as well.
In this nonlimiting example, communications, which may include data, commands, and other related instructions, may be routed through lanes 0-7 of interface 79 to PCIe slot 77, as represented by communication path 83. Communication path 83 may be further relayed to the primary PCIe link 51 of GPU 30 via communication path 85. More specifically, PCIe lanes 0-7 of primary PCIe link 51 may receive the logical communication 85. Likewise, return traffic may be routed through lanes 0-7 of primary PCIe link 51 to PCIe slot 77 via logical communication path 92 and further on to interface 79 via logical communication path 94, which may be configured on a printed circuit board. These communication paths occur on lanes 0-7 and therefore constitute an 8-lane PCIe link between north bridge chip 14 and GPU 30.
In communicating with GPU 36, north bridge chip 14 routes communications through interface 81 via communication path 88 (on a printed circuit board) over lanes 0-7 to PCIe slot 77. GPU 36 receives this communication from PCIe slot 77 via communication path 89, which is coupled to the receiving lanes 0-7 of primary PCIe link 49. For communications that GPU 36 sends back to north bridge chip 14, primary PCIe link 49 routes such communications over lanes 0-7, as shown by communication path 96, to PCIe slot 77. Interface 81 receives the communication from GPU 36 via communication path 98 on receiving lanes 0-7. In this way, as described above, GPU 36 has an 8-lane PCIe link with north bridge chip 14.
Each of GPUs 30 and 36 includes a secondary link 53 and 55, respectively, for inter-GPU communication. More specifically, an ×8 PCIe link 101 may be established between GPUs 30 and 36 at links 53 and 55, respectively. Lanes 8-15 of each of the secondary links 53 and 55 are utilized for this communication path 101. Thus, GPUs 30 and 36 are able to communicate with each other to maintain processing harmony of graphics-related operations. Stated another way, inter-GPU communication, at least in this nonlimiting example, is not routed through PCIe slot 77 and north bridge chip 14, but is instead maintained on graphics card 60.
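As an illustrative aid, the following hypothetical Python sketch summarizes how the 16 lanes of PCIe slot 77 and the on-card path 101 are used by graphics card 60, per the description above; all identifiers are illustrative.

```python
# Nonlimiting summary of lane usage on dual-GPU graphics card 60: the
# single x16 slot 77 is split into two x8 host links, while inter-GPU
# path 101 remains on the card. Names are illustrative.

CARD_60 = {
    "slot77.lanes[0:8]":  "gpu30 primary link 51, lanes 0-7 (x8 to north bridge)",
    "slot77.lanes[8:16]": "gpu36 primary link 49, lanes 0-7 (x8 to north bridge)",
    "on-card path 101":   "gpu30 link 53 lanes 8-15 <-> gpu36 link 55 lanes 8-15",
}

for route, description in CARD_60.items():
    print(f"{route}: {description}")
```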
It should further be understood that north bridge chip 14 may alternatively be configured as a root complex device, as described above.
Because graphics card 60 with its dual GPUs 30 and 36 utilizes a single ×16 lane PCIe slot 77, existing SLI-configured motherboards may be set to one ×16 mode and thereby utilize the dual processing engines with no further changes.
As an alternate embodiment, the multiple GPU configuration may be implemented wherein each of GPU 30 and 36 are located on separate graphics cards.
Similarly, graphics card 108 with GPU 36 is coupled to PCIe slot 112, which also has 16 PCIe lanes. One of ordinary skill in the art would understand that each of PCIe slots 110 and 112 is coupled to a motherboard and further coupled to a north bridge chip 14, as similarly described above.
Each of graphics cards 106 and 108 may be configured to communicate with north bridge chip 14 and also with each other for inter-GPU traffic in this configuration.
Since GPUs 30 and 36 are on separate cards 106 and 108, inter-GPU traffic cannot, in this nonlimiting example, take place on a single card. Thus, PCIe lanes 8-15 on each of cards 106 and 108 are used for inter-GPU traffic.
Graphics card 108 communicates in a similar fashion to graphics card 106. More specifically, interface 81 on north bridge chip 14 uses the transmission paths of lanes 0-7 to create a communication path 132 that is coupled to PCIe slot 112. Communication path 134 is received at primary PCIe link interface 49 on graphics card 108 in receive lanes 0-7.
Return communications are transmitted on transmission lanes 0-7 from primary PCIe link interface 49 back to PCIe slot 112 and are thereafter forwarded to interface 81 and received on lanes 0-7. Stated another way, communication path 138 is routed from PCIe slot 112 to the receiving lanes 0-7 of interface 81 of north bridge chip 14. In this way, each of graphics cards 106 and 108 maintains an individual 8-lane PCIe link with north bridge chip 14. However, inter-GPU communication does not take place on a single card, as the separate GPUs 30 and 36 are on different cards in this nonlimiting example. Therefore, inter-GPU communication takes place via PCIe slots 110 and 112 on the motherboard to which the GPU cards are coupled.
In this nonlimiting example, graphics cards 106 and 108 each have a secondary PCIe link 53 and 55, respectively, that corresponds to lanes 8-15 of the card's 16 total communication lanes. More specifically, lanes 8-15 coupled to secondary link 53 on graphics card 106 enable communications to be received and transmitted through PCIe slot 110, to which graphics card 106 is coupled. Such communications are routed on the motherboard to PCIe slot 112 and thereafter to communication lanes 8-15 of the secondary PCIe link 55 on graphics card 108. Therefore, even though this implementation utilizes two separate 16-lane PCIe slots, 8 of the 16 lanes in each slot are essentially coupled together to enable inter-GPU communication.
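The two-card variant may be summarized in the same hypothetical style; the pairing of interface 79 with graphics card 106 follows the earlier single-card description and is an assumption here, as this passage names only interface 81 explicitly.

```python
# Nonlimiting summary of the two-card configuration: each x16 slot
# carries an x8 host link on lanes 0-7, and lanes 8-15 of the two
# slots are coupled together on the motherboard for inter-GPU traffic.

TWO_CARD = {
    "slot110.lanes[0:8]":  "card 106 / gpu30 primary link 51 <-> nb interface 79",
    "slot112.lanes[0:8]":  "card 108 / gpu36 primary link 49 <-> nb interface 81",
    "slot110.lanes[8:16]": "coupled on motherboard to slot112.lanes[8:16] "
                           "(secondary links 53 <-> 55)",
}

for route, description in TWO_CARD.items():
    print(f"{route}: {description}")
```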
As described above, north bridge chip 14 may be configured with 16 lanes dedicated to graphics communications.
Configuration 150 depicts a nonlimiting example in which one or more multiplexing and demultiplexing devices couple lanes 8-15 of GPU 30 with lanes 8-15 of north bridge chip 14 when the second GPU 36 is not utilized.
More specifically, GPU 30 may transmit outputs on lanes 8-15 to demultiplexer 157, which may be coupled to an input of multiplexer 159, which in turn may be switched to the receiving lanes 8-15 of north bridge chip 14. For return communications, north bridge chip 14 may transmit on lanes 8-15 to demultiplexer 154, which itself may be coupled to multiplexer 152. Multiplexer 152 may be switched such that it couples the output of demultiplexer 154 with the receiving lanes 8-15 of GPU 30.
In configuration 160, the multiplexing and demultiplexing devices may be reconfigured for dual-GPU operation. More specifically, while the transmission and receiving lanes 0-7 of GPU 30 may remain unchanged from the configuration described above, lanes 8-15 of GPU 30 may be rerouted to carry inter-GPU traffic with GPU 36.
Inter-GPU traffic transmitted from GPU 36 over lanes 8-15 may be forwarded to multiplexer 152 and on to receiving lanes 8-15 of GPU 30. Similarly, inter-GPU traffic communicated on transmission lanes 8-15 from GPU 30 may be forwarded to demultiplexer 157 and on to receiving lanes 8-15 of GPU 36. As a result, north bridge chip 14 maintains two ×8 PCIe links, one with each of GPUs 30 and 36, in this configuration 160.
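The single-GPU and dual-GPU routings of configurations 150 and 160 may be modeled by the following hypothetical Python sketch; the element numerals follow the description above, while the function and key names are illustrative assumptions.

```python
# Nonlimiting model of configurations 150 and 160. Keys are transmit
# lane groups; values are the destinations selected by the
# demultiplexer/multiplexer pairs named in the comments.

def route_upper_lanes(gpu36_active: bool) -> dict[str, str]:
    """Select destinations for lanes 8-15 (and GPU 36's host lanes)."""
    if not gpu36_active:
        # Configuration 150: GPU 30 keeps a full x16 link to the north bridge.
        return {
            "gpu30.tx[8:16]": "nb.rx[8:16]",    # via demux 157 -> mux 159
            "nb.tx[8:16]":    "gpu30.rx[8:16]", # via demux 154 -> mux 152
        }
    # Configuration 160: lanes 8-15 carry inter-GPU traffic, and GPU 36
    # reaches the north bridge over the north bridge's lanes 8-15.
    return {
        "gpu30.tx[8:16]": "gpu36.rx[8:16]",  # via demux 157
        "gpu36.tx[8:16]": "gpu30.rx[8:16]",  # via mux 152
        "gpu36.tx[0:8]":  "nb.rx[8:16]",     # second path, per the summary above
        "nb.tx[8:16]":    "gpu36.rx[0:8]",
    }

print(route_upper_lanes(gpu36_active=False))
print(route_upper_lanes(gpu36_active=True))
```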
As described above, the coupling of lanes may also be implemented with switches configured on a per-4-lane basis. In a single-GPU mode, switches 172 and 174 may be configured such that transmissions on lanes 8-11 of GPU 30 are routed to the receiving lanes 8-11 of north bridge chip 14.
Conversely, switches 182 and 184 may be similarly configured such that transmissions from north bridge chip 14 on lanes 8-11 may be routed to receiving lanes 8-11 of GPU 30, which is the first graphics engine on graphics card 60. The same switching configuration is set for lanes 12-15 of the first GPU 30: switches 177 and 179 may be configured to couple transmissions on lanes 12-15 from GPU 30 to the receiving lanes 12-15 of north bridge chip 14.
Likewise, transmissions from lanes 12-15 of north bridge chip 14 may be coupled via switches 186 and 188 to receiving lanes 12-15 of GPU 30. Consequently, if only GPU 30 is utilized for a particular application, such that GPU 36 is disabled or otherwise maintained in an idle state, the switches described above may be set so that GPU 30 communicates with north bridge chip 14 over all 16 lanes.
However, if graphics card 60 activates GPU 36, then the switches described above may be configured so as to route communications from GPU 36 to north bridge chip 14 and also to provide for inter-GPU traffic between each of GPUs 30 and 36.
In this nonlimiting example wherein GPU 36 is activated, transmissions on lanes 0-3 of GPU 36 may be coupled to receiving lanes 8-11 of north bridge chip 14 via switch 174. That means, therefore, that switch 172 toggles the output of lanes 8-11 of GPU 30 to the receiving lanes 8-11 of GPU 36, thereby providing four lanes of inter-GPU communication.
Likewise, transmissions on lanes 4-7 of GPU 36 may be output via switch 179 to receiving input lanes 12-15 of north bridge chip 14. In this situation, switch 177 therefore routes transmissions on lanes 12-15 of GPU 30 to lanes 12-15 of GPU 36.
Switch 182 may also be reconfigured such that transmissions from lanes 8-11 of north bridge chip 14 are coupled to receiving lanes 0-3 of GPU 36, which is the second GPU engine on graphics card 60 in this nonlimiting example. This change, therefore, means that switch 184 couples the transmission output of GPU 36 on lanes 8-11 to the receiving input lanes 8-11 of GPU 30, thereby providing four more lanes of inter-GPU communication.
Finally, switch 186 may be toggled such that transmissions on lanes 12-15 of north bridge chip 14 are coupled to the receiving lanes 4-7 of GPU 36. This change also results in switch 188 coupling transmissions on lanes 12-15 of GPU 36 with the receiving lanes 12-15 of GPU 30, which is the first GPU engine of graphics card 60. In this second configuration, each of GPUs 30 and 36 has eight PCIe lanes of communication with north bridge chip 14, as well as eight PCIe lanes of inter-GPU traffic between the GPUs on graphics card 60.
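The per-4-lane switch settings described in the preceding paragraphs may be tabulated by the following hypothetical Python sketch; the route strings are illustrative, with the steering switch numerals from the text noted in comments.

```python
# Nonlimiting tabulation of the 4-lane-granularity switching described
# above. Routes are end-to-end; the switch steering each route is
# noted in the comment. Layout is illustrative only.

def lane_routes(dual_gpu: bool) -> list[str]:
    """Return the routes of lanes 8-15 for single- or dual-GPU mode."""
    if not dual_gpu:
        return [
            "gpu30.tx[8:12]  -> nb.rx[8:12]",      # switches 172/174
            "gpu30.tx[12:16] -> nb.rx[12:16]",     # switches 177/179
            "nb.tx[8:12]     -> gpu30.rx[8:12]",   # switches 182/184
            "nb.tx[12:16]    -> gpu30.rx[12:16]",  # switches 186/188
        ]
    return [
        "gpu36.tx[0:4]   -> nb.rx[8:12]",      # switch 174
        "gpu30.tx[8:12]  -> gpu36.rx[8:12]",   # switch 172
        "gpu36.tx[4:8]   -> nb.rx[12:16]",     # switch 179
        "gpu30.tx[12:16] -> gpu36.rx[12:16]",  # switch 177
        "nb.tx[8:12]     -> gpu36.rx[0:4]",    # switch 182
        "gpu36.tx[8:12]  -> gpu30.rx[8:12]",   # switch 184
        "nb.tx[12:16]    -> gpu36.rx[4:8]",    # switch 186
        "gpu36.tx[12:16] -> gpu30.rx[12:16]",  # switch 188
    ]

for route in lane_routes(dual_gpu=True):
    print(route)
```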
For this reason, diagram 190 depicts a nonlimiting example in which the multiplexing and demultiplexing approach described above is applied across two separate graphics cards.
In this nonlimiting example, graphics cards 106 and 108 may be essentially identical and/or otherwise similar cards in configuration, each having one multiplexer and one demultiplexer, as described above. As also described above, an interconnect may be used to bridge the communication of 8 PCIe lanes between graphics cards 106 and 108. As a nonlimiting example, a bridge may be physically placed on coupling connectors on the top portion of each card so that an electrical communication path is established.
In this configuration, transmissions on lanes 0-7 from GPU 36 on graphics card 108 may be coupled via multiplexer 201 to the receiving lanes 8-15 of north bridge chip 14. Transmissions from lanes 8-15 of GPU 30 may be demultiplexed by demultiplexer 192 and coupled to the input of multiplexer 196 on graphics card 108 such that the output of multiplexer 196 is coupled to the input lanes 8-15 of GPU 36. In this nonlimiting example, the output from demultiplexer 192 communicates over the printed circuit board bridge to an input of multiplexer 196.
Continuing with this nonlimiting example, transmissions on lanes 8-15 from north bridge chip 14 may be coupled to the receiving lanes 0-7 of GPU 36 on graphics card 108 via multiplexer 203 logically located at north bridge 14. Also, inter-GPU traffic originated from GPU 36 on lanes 8-15 may be routed by demultiplexer 198 across the printed circuit board bridge to multiplexer 194 on graphics card 106. The output of multiplexer 194 may thereafter route the communication to the receiving lanes 8-15 of GPU 30. In this configuration, therefore, a motherboard configured for SLI mode may still be configured to utilize multiple graphics cards according to this methodology.
In each of the configurations described above, wherein a single or multiple GPU configuration may be implemented, the initialization sequence may vary according to whether the GPUs are on a single card or multiple cards and whether a single card has one or more GPUs attached thereto. Nonlimiting initialization sequences are described below.
In this nonlimiting example, the process starts at starting point 209, which denotes the fixed multiple-GPU mode. In step 212, the system BIOS is set to 2×8 mode, which means that two groups of 8 PCIe lanes are set aside for communication, one group for each of GPUs 30 and 36. In step 215, each of GPUs 30 and 36 starts link configuration and defaults to a 16-lane switch setting. However, in step 216, the first link of each GPU settles to an 8-lane configuration. More specifically, the primary PCIe interfaces 51 and 49 of GPUs 30 and 36, respectively, settle to an 8-lane configuration.
In step 234, the second GPU (GPU 36) has its primary PCIe link 49 settle to an 8-lane PCIe configuration, in similar fashion to step 229. Thereafter, each GPU's secondary link (link 53 of GPU 30 and link 55 of GPU 36) settles to an 8-lane PCIe configuration for inter-GPU traffic.
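The fixed multiple-GPU initialization sequence may be outlined, under illustrative assumptions, by the following Python sketch; the step numerals follow the text, while the logging model itself is hypothetical.

```python
# Nonlimiting outline of the fixed multiple-GPU initialization
# sequence described above (starting point 209).

def init_fixed_dual_gpu() -> list[str]:
    """Return an ordered log of the initialization steps."""
    log = ["step 212: system BIOS set to 2x8 mode (8 lanes per GPU)"]
    for gpu in ("GPU 30", "GPU 36"):
        log.append(f"step 215: {gpu} starts link configuration, defaults to x16")
    log.append("step 216: primary links 51 and 49 settle to x8")
    log.append("secondary links 53 and 55 settle to x8 for inter-GPU traffic")
    return log

for entry in init_fixed_dual_gpu():
    print(entry)
```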
A third sequence of GPU initialization is depicted in diagram 240.
Starting point 242 of diagram 240 addresses the situation wherein multiple cards are interfaced with a motherboard such that the motherboard is configured for switching between the cards, as described above.
One of ordinary skill in the art would know that the features described herein may be implemented in configurations involving more than two GPUs. As a nonlimiting example, this disclosure may be extended to three or even four cooperating GPUs that may either be on a single card, as described above, multiple cards, or perhaps even a combination, which may also include a GPU on a motherboard.
In one nonlimiting example, this alternative embodiment may be configured to support four GPUs operating in concert in similar fashion as described above. In this nonlimiting example, 16 PCIe lanes may still be implemented but in a revised configuration as discussed above so as to accommodate all GPUs. Thus, each of the four GPUs in this nonlimiting example could be coupled to the north bridge chip 14 via 4 PCIe lanes each.
As described above, these four connection paths between the four GPUs and the north bridge chip 14 consume 16 PCIe lanes at the north bridge chip 14. However, 12 free PCIe lanes for each GPU remain for communication with the other three GPUs. Thus, for GPU1 284, PCIe lanes 4-7 may be coupled via link 302 to PCIe lanes 4-7 of GPU2 285; PCIe lanes 8-11 may be coupled via link 304 to PCIe lanes 4-7 of GPU3 286; and PCIe lanes 12-15 may be coupled via link 306 to PCIe lanes 4-7 of GPU4 287.
For GPU2 285, as stated above, PCIe lanes 0-3 may be coupled via link 293 to north bridge chip 14, and communication with GPU1 284 may occur via link 302 on GPU2's PCIe lanes 4-7. Similarly, PCIe lanes 8-11 may be coupled via link 312 to PCIe lanes 8-11 of GPU3 286. Finally, PCIe lanes 12-15 of GPU2 285 may be coupled via link 314 to PCIe lanes 8-11 of GPU4 287. Thus, all 16 PCIe lanes of GPU2 285 are utilized in this nonlimiting example.
For GPU3 286, PCIe lanes 0-3, as stated above, may be coupled via link 295 to north bridge chip 14. As already mentioned, GPU3's PCIe lanes 4-7 may be coupled via link 304 to PCIe lanes 8-11 of GPU1 284, and GPU3's PCIe lanes 8-11 may be coupled via link 312 to PCIe lanes 8-11 of GPU2 285. Thus, the final four lanes of GPU3 286, which are PCIe lanes 12-15, are coupled via link 322 to PCIe lanes 12-15 of GPU4 287.
All communication paths for GPU4 287 are identified above; however, for clarification, the connections may be configured as follows: PCIe lanes 0-3 via link 297 to north bridge chip 14; PCIe lanes 4-7 via link 306 to GPU1 284; PCIe lanes 8-11 via link 314 to GPU2 285; and PCIe lanes 12-15 via link 322 to GPU3 286. Thus, all 16 PCIe lanes on each of the four GPUs in this nonlimiting example are utilized.
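As a nonlimiting cross-check, the following hypothetical Python sketch encodes the four-GPU topology recited above and verifies that all 16 lanes of each GPU are used exactly once; GPU1's north bridge link is listed without a numeral because none is recited above.

```python
# Nonlimiting encoding of the four-GPU topology described above. Each
# entry is (lane range, peer); link numerals follow the text, and
# "nb" denotes the north bridge chip 14.

TOPOLOGY = {
    "GPU1 284": [((0, 4), "nb"), ((4, 8), "GPU2 via link 302"),
                 ((8, 12), "GPU3 via link 304"), ((12, 16), "GPU4 via link 306")],
    "GPU2 285": [((0, 4), "nb via link 293"), ((4, 8), "GPU1 via link 302"),
                 ((8, 12), "GPU3 via link 312"), ((12, 16), "GPU4 via link 314")],
    "GPU3 286": [((0, 4), "nb via link 295"), ((4, 8), "GPU1 via link 304"),
                 ((8, 12), "GPU2 via link 312"), ((12, 16), "GPU4 via link 322")],
    "GPU4 287": [((0, 4), "nb via link 297"), ((4, 8), "GPU1 via link 306"),
                 ((8, 12), "GPU2 via link 314"), ((12, 16), "GPU3 via link 322")],
}

def check_full_utilization() -> None:
    """Verify that every GPU's 16 lanes are covered exactly once."""
    for gpu, segments in TOPOLOGY.items():
        covered = sorted(lane for (lo, hi), _ in segments for lane in range(lo, hi))
        assert covered == list(range(16)), f"{gpu}: lane gap or overlap"
    print("all 16 PCIe lanes on each of the four GPUs are utilized")

check_full_utilization()
```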
One of ordinary skill in the art would know from this alternative embodiment that different numbers of GPUs can be utilized according to this disclosure. Thus, this disclosure is not limited to two GPUs, and one of ordinary skill would understand that topologies for connecting more than two GPUs may vary.
The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Obvious modifications or variations are possible in light of the above teachings. As a nonlimiting example, instead of the PCIe bus, other communication formats and protocols could be utilized in similar fashion as described above. The embodiments discussed, however, were chosen and described to illustrate the principles disclosed herein and their practical application, thereby enabling one of ordinary skill in the art to utilize the disclosure in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the disclosure as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly and legally entitled.