The present application relates generally to an improved data processing apparatus and method and more specifically to mechanisms for a liquid cooling system for stackable modules in an energy-efficient computing system.
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems. The HPC term is most commonly associated with computing used for scientific research. A related term, high-performance technical computing (HPTC), generally refers to the engineering applications of cluster-based computing (such as computational fluid dynamics and the building and testing of virtual prototypes). Recently, HPC has come to be applied to business uses of cluster-based supercomputers, such as data-intensive commercial analytics applications and transaction processing.
However, many HPC systems are hindered by limits on power consumption, space, cooling, and adaptability. That is, HPC systems are composed of thousands of components that occupy considerable space, require considerable cooling, consume massive amounts of power, and are not readily deployable.
In one illustrative embodiment, a processing module is provided that comprises a set of processing module sides. In the illustrative embodiment, each processing module side comprises: a circuit board, a plurality of connectors coupled to the circuit board, and a plurality of processing nodes coupled to the circuit board. In the illustrative embodiment, each processing module side in the set of processing module sides couples to another processing module side using at least one connector in the plurality of connectors such that when all of the set of processing module sides are coupled together a modular processing module is formed. In the illustrative embodiment, the modular processing module comprises: an exterior connection to a power source and a communication system and a plurality of cold plates coupled to the plurality of processing nodes. In the illustrative embodiment, liquid coolant is circulated through the plurality of cold plates in a closed loop by at least one pump through a plurality of tubes and through at least one heat exchanger of a plurality of heat exchangers. In the illustrative embodiment, the at least one heat exchanger is coupled to an exterior portion of the processing module and the at least one heat exchanger cools the liquid coolant using air surrounding the processing module.
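As a purely illustrative sketch (not part of the claimed embodiments), the structural relationships recited above, namely sides built from circuit boards, connectors that couple sides to one another, and processing nodes mated to cold plates, might be modeled as follows; all class and field names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ProcessingNode:
    node_id: int
    cold_plate_attached: bool = True      # each node is mated to a cold plate

@dataclass
class ProcessingModuleSide:
    board_id: int
    connectors: Set[int]                  # connector IDs on this circuit board
    nodes: List[ProcessingNode] = field(default_factory=list)

@dataclass
class ProcessingModule:
    sides: List[ProcessingModuleSide]
    pumps: int = 1                        # at least one pump drives the closed loop
    skin_heat_exchangers: int = 1         # mounted on the module exterior

    def fully_coupled(self) -> bool:
        """True when every side shares at least one connector with another side."""
        for i, side in enumerate(self.sides):
            others = {c for j, s in enumerate(self.sides) if j != i
                      for c in s.connectors}
            if not others & side.connectors:
                return False
        return True

# Three sides, each sharing a connector with a neighbor, form a coupled module.
sides = [ProcessingModuleSide(b, cs, [ProcessingNode(n) for n in range(4)])
         for b, cs in [(0, {1, 2}), (1, {2, 3}), (2, {3, 1})]]
print(ProcessingModule(sides).fully_coupled())  # True
```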
These and other features and advantages of the present invention will be described in, or will become apparent to those of ordinary skill in the art in view of, the following detailed description of the example embodiments of the present invention.
The invention, as well as a preferred mode of use and further objectives and advantages thereof, will best be understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying drawings, wherein:
The illustrative embodiments provide a ubiquitous high-performance computing (UHPC) system that packages the thousands of components of a high-performance computing (HPC) system into building-block modules that may be coupled together to form a space-optimized and energy-efficient product. The illustrative embodiments also provide for various heatsink designs that enable elegant assembly and in-place maintenance of the heatsink and the module, while maintaining a large effective heat exchange area and high pressure for efficient cooling. The illustrative embodiments also provide for an alternative to air cooling using a liquid cooling system with coolant/air heat exchanging enabled by skin heat exchangers mounted on either the interior or the exterior surface of the UHPC system.
Thus, the illustrative embodiments may be utilized in many different types of data processing environments including a distributed data processing environment, a single data processing device, or the like. In order to provide a context for the description of the specific elements and functionality of the illustrative embodiments,
With reference now to the figures and in particular with reference to
With reference now to the figures,
In the depicted example, ubiquitous high-performance computing (UHPC) server 104 and server 106 are connected to network 102 along with storage unit 108. In addition, clients 110, 112, and 114 are also connected to network 102. These clients 110, 112, and 114 may be, for example, personal computers, network computers, or the like. In the depicted example, UHPC server 104 provides data, such as boot files, operating system images, and applications to the clients 110, 112, and 114. Clients 110, 112, and 114 are clients to UHPC server 104 in the depicted example. Distributed data processing system 100 may include additional servers, clients, and other devices not shown.
In the depicted example, distributed data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages. Of course, the distributed data processing system 100 may also be implemented to include a number of different types of networks, such as for example, an intranet, a local area network (LAN), a wide area network (WAN), or the like. As stated above,
With reference now to
In data processing system 200, ubiquitous high-performance computing (UHPC) server 202 is connected to network 206 along with storage unit 208 and client 204. UHPC server 202 may further comprise one or more of compute modules 210, storage modules 212, and input/output (I/O) modules 214 using interconnect 216. Data processing system 200 may include additional servers, clients, storage devices, and network connects not shown. As with network 102 of
Additionally, stacked memory chips 323 provide processor memory at each MCM 314. Each of multi-chip modules 314 may be identical in its hardware configuration but configured by firmware during system initialization to support varying system topologies and functions, such as enablement of master and slave functions or connectivity between various combinations of multiple nodes in a scalable multi-node symmetric multi-processor system.
Within a particular multi-chip module 314 there may be found processor unit 320 that may comprise one or more processor cores. Processor node 300 may have one or more oscillators 324 routed to each chip found on processor node 300. Connections between oscillators 324 and functional units extend throughout the board and chips but are not shown in
Those of ordinary skill in the art will appreciate that the hardware in
Again, the illustrative embodiments provide a ubiquitous high-performance computing (UHPC) system that packages the thousands of components of a high-performance computing (HPC) system into building-block modules that may be coupled together to form a space-optimized and energy-efficient product. In a first embodiment, a modular processing device is composed of a plurality of identical printed circuit boards and processing nodes housed in identical processor packages referred to as processing modules. Each processing module comprises memory, processing layers, and connectivity to power, other processing nodes, storage, input/output (I/O), or the like. The connectivity may be provided through wire, wireless communication links, or fiber optic cables and/or interconnects. In the processing module, various heatsink designs remove heat from the components on each processing node. In addition to the processing module, storage and I/O modules are also provided in similarly formed modules. The storage and I/O modules may be composed of a plurality of printed circuit boards mounting solid state storage devices and/or optical interconnects. The physical design of the modules offers advantages in communication bandwidth, cooling, and manufacturing costs.
In other embodiments, the heatsink designs are a composite design comprised of two components: a per-processing-node cooling component and a common core component. Each heatsink component is mounted directly on one or more processing nodes. Since air flow tends to follow the path of least resistance, one heatsink design fills a majority of the air space so that the flow of air passes between the fins of the heatsink, increasing the heat exchange surface area. In another heatsink design, the sizing of the heatsink allows the removal of the heatsink and the processing node from the processing module while the three other heatsinks remain in place. To enable this type of heatsink design, an empty space is left in the center of the module. To avoid losing beneficial air flow over the heatsinks to this path of least resistance, a core is inserted into the empty area of the module to fill the air gap, increasing the heat exchange surface area of the heatsinks. The core may be either a solid core that air flows around, increasing the air pressure on the board-mounted heatsinks, or another heatsink that increases the heat exchange surface area. Since the core is removable, it is still possible to perform in-place maintenance tasks on the module without disassembling the module.
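The thermal intuition behind filling the bypass gap can be framed with the standard convective heat transfer relation (a textbook approximation, not a formula recited by the illustrative embodiments):

```latex
Q = h \, A_{\mathrm{fin}} \, \left( T_{\mathrm{fin}} - T_{\mathrm{air}} \right)
```

where $Q$ is the heat removed, $A_{\mathrm{fin}}$ is the fin surface area actually washed by the air stream, and the convection coefficient $h$ grows with local air velocity. Blocking the open center forces the flow between the fins, which raises both the effective $A_{\mathrm{fin}}$ and $h$, so heat removal improves on both counts.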
In another embodiment, the modules are combined to create a scalable, space-optimized, and energy-efficient ubiquitous high-performance computing (UHPC) system. The UHPC system reduces communication cost, reduces cooling cost, provides reliable operation, and facilitates maintainability. The UHPC system does so by using a modular design, where processing nodes are built as modules and assembled as a hexahedron (non-regular cube) according to the computing needs of the end-user. This arrangement results in a reduced distance for the communication links, which allows an all-to-all solution.
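A quick way to see the communication-distance benefit of a near-cubic assembly is to compare the longest point-to-point link needed for all-to-all connectivity in a linear arrangement versus a cube of the same module count. The sketch below uses hypothetical unit spacing and a 64-module example:

```python
import itertools

def max_pairwise_distance(positions):
    """Longest straight-line link required for all-to-all communication."""
    return max(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for p, q in itertools.combinations(positions, 2)
    )

n = 64  # hypothetical module count

# 64 modules in a single row versus a 4x4x4 hexahedral stack, unit spacing
row = [(i, 0, 0) for i in range(n)]
cube = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]

print(max_pairwise_distance(row))   # 63.0
print(max_pairwise_distance(cube))  # ~5.2 (square root of 27)
```

The worst-case link shrinks by an order of magnitude, which is what makes direct all-to-all links practical.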
In still another embodiment, the processing, storage, and/or I/O modules are constructed such that the modules are liquid tight and are then liquid cooled in order to increase heat dissipation. Using liquid cooling provides for more modules to be placed in a UHPC system. In order to cool the liquid flowing through the modules, heat exchangers are coupled to the external surfaces of a UHPC system. Pumping the module coolant between the modules and the heat exchangers circulates the module coolant through the heat exchange elements. Using the external surface of the UHPC system allows heat to be dissipated using ambient air.
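The pumping requirement for such a closed loop follows from the steady-state energy balance $Q = \dot{m} \, c_p \, \Delta T$. The sketch below, with hypothetical numbers and a water-like coolant assumed, estimates the mass flow a pump must sustain for a given module heat load:

```python
def required_flow_rate(heat_load_w: float, delta_t_k: float,
                       cp_j_per_kg_k: float = 4186.0) -> float:
    """Mass flow (kg/s) needed for the coolant to absorb heat_load_w watts
    while warming by delta_t_k kelvin: Q = m_dot * c_p * dT."""
    return heat_load_w / (cp_j_per_kg_k * delta_t_k)

# Hypothetical: a 10 kW module, coolant allowed to warm by 10 K
m_dot = required_flow_rate(10_000.0, 10.0)
print(f"{m_dot:.3f} kg/s")  # ~0.239 kg/s, roughly 14 L/min for water
```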
While the following embodiments are described with relation to a module of cubical design, the illustrative embodiments are not limited to only a cubical design. That is, other three-dimensional geometric configurations may also be used, such as a rectangular box, without departing from the spirit and scope of the present invention.
However, in an alternative embodiment (not shown), in order to provide faster access to processor nodes 508 during maintenance, heatsink 502 may be of a width and depth that covers only one portion of processor nodes 508, which would require another heatsink 502 to cover the other portion of processor nodes 508, such that the majority of the air space within the middle of the processing module is still filled and the flow of air passes between the fins of the heatsinks, increasing the heat exchange surface area. For example, one smaller-depth heatsink may cover two processor nodes while a similar smaller-depth heatsink covers two other processor nodes. While the illustrative embodiments show four processor nodes 508 on processing module side 510, the illustrative embodiments recognize that more or fewer processing nodes may be implemented, in which case the width and depth of heatsink 502 would change while its height, in conjunction with the other heatsinks 502, still fills a majority of the air space within the middle of the processing module so that the flow of air passes between the fins of the heatsinks, increasing the heat exchange surface area.
However, in an alternative embodiment (not shown), in order to provide faster access to processor nodes 608 during maintenance, heatsink 602 may be of a width and depth that covers only one portion of processor nodes 608, which would require another heatsink 602 to cover the other portion of processor nodes 608. For example, one smaller-depth heatsink may cover two processor nodes while a similar smaller-depth heatsink covers two other processor nodes. While the illustrative embodiments show four processor nodes 608 on processing module side 610, the illustrative embodiments recognize that more or fewer processing nodes may be implemented, in which case the width and depth of heatsink 602 would change accordingly. Heatsink 602 is constructed in a manner such that, within processing module 600 with four of heatsink 602, the sizing of the heatsinks allows the removal of heatsink 602 and circuit board 603 from processing module side 610 while the three other heatsinks remain in place, as is shown in
If core 620 is to provide additional heat exchange, then core 620 may be comprised of multiple core sections 622 as shown in overhead view 624. Core sections 622 may be attached by various methods so that, once inserted in the empty area between heatsinks 602 in processing module 600, core 620 may expand to maintain thermal conduction to each of heatsinks 602. In one embodiment, core sections 622 may be coupled using expansion mechanisms, such that when a plurality of retention fasteners, such as latches, snap rings, pins, or the like, are released at the top and bottom of core 620, expansion mechanisms 630 expand, forcing core sections 622 apart and onto heatsinks 602 as shown in expansion view 626. When core 620 is to be removed, a user may use the plurality of latches, snap rings, pins, or the like, to pull core sections 622 back together and away from heatsinks 602 so that core 620 may easily be removed. In another embodiment, core sections 622 may be coupled using retention mechanisms 632, such that when expansion rod 634 is inserted into the center of core 620, retention mechanisms 632 are forced to expand, which forces core sections 622 apart and onto heatsinks 602 as shown in expansion view 628. When the expansion rod is removed from the center of core 620, retention mechanisms 632 pull core sections 622 back together and away from heatsinks 602 so that core 620 may easily be removed.
The use of core 620 allows maintenance to be performed on processing module 600 without disassembling processing module 600. In order to increase heat conductivity between heatsinks 602 and core sections 622, the edges of core 620 that will come in contact with heatsinks 602 may be coated with a thermally conductive paste prior to being inserted into the empty area between heatsinks 602 in processing module 600.
Air inlet 706 may be a compartment that has a solid bottom with open sides and top. Each of the sides of air inlet 706 may be constructed such that access panels provide for the insertion and replacement of air filters. Air would flow through the air filters in the sides of air inlet 706, up through the top of air inlet 706, and through module compartments 712 to air mixing plenum 708. The top of air inlet 706 may be constructed in a way that the top section provides knockouts in the dimensions of module compartments 712, so that a user may remove only those knockouts for the columns of module compartments 712 that are populated in frame 704. Using knockouts in air inlet 706 allows the user to cool only those areas of frame 704 that are occupied by modules 702. Further, in the event a knockout is erroneously removed, or if modules 702 are removed such that a column of module compartments 712 no longer has any occupying modules 702, filler plates may be provided to replace the knockout.
Additionally, while the exemplary embodiment illustrates four fans 710, the illustrative embodiments recognize that more or fewer fans may be used without departing from the spirit and scope of the invention. Further, while fans 710 are shown on top of the UHPC system, fans 710 may be placed anywhere in the UHPC system such that air is pushed or pulled through the UHPC system. For example, fans 710 may be located below the air inlet or between the air inlet and the module compartments.
For each of edges 821 that connect air vertex 823 to the 3D mesh, a capacity is assigned that is equal to the number of modules with which the heat exchanger is allowed to be connected. For the remainder of edges 821, the capacities are set to the number of tubes, such as exit tubes 808, return tubes 810, coupling tubes 814, and extension tube 815 of
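The capacity assignment described above sets up a classic maximum-flow problem: if the flow from the air vertex to a common sink saturates one unit per module, every module can be connected to some heat exchanger within the tube limits. The following is a minimal sketch of that check using a hand-rolled Edmonds-Karp search; the network, node names, and capacities are hypothetical stand-ins for edges 821 and air vertex 823:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths."""
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)  # ensure reverse edges
    flow = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:          # BFS for an augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink                           # walk back to recover the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                            # push flow, update residuals
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical network: the air vertex feeds two skin heat exchangers, each
# allowed to serve two modules; exchanger-to-module edges carry one tube pair
# each; every module drains one unit to a common sink.
capacity = {
    'air': {'hx1': 2, 'hx2': 2},
    'hx1': {'m1': 1, 'm2': 1},
    'hx2': {'m2': 1, 'm3': 1},
    'm1': {'sink': 1}, 'm2': {'sink': 1}, 'm3': {'sink': 1},
}
print(max_flow(capacity, 'air', 'sink'))  # 3: all three modules can be cooled
```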
Thus, the illustrative embodiments provide a ubiquitous high-performance computing (UHPC) system that packages the thousands of components of a high-performance computing (HPC) system into building-block modules that may be coupled together to form a space-optimized and energy-efficient product. The illustrative embodiments also provide for various heatsink designs that enable elegant assembly and in-place maintenance of the heatsink and the module, while maintaining a large effective heat exchange area and high pressure for efficient cooling. The illustrative embodiments also provide for an alternative to air cooling using a liquid cooling system with coolant/air heat exchanging enabled by skin heat exchangers mounted on either the interior or the exterior surface of the UHPC system.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
This application is a divisional of application Ser. No. 12/788,863, filed May 27, 2010 now U.S. Pat. No. 8,174,826, status allowed.
Number | Name | Date | Kind |
---|---|---|---|
3070729 | Heidler | Dec 1962 | A |
3277346 | McAdam et al. | Oct 1966 | A |
3648113 | Rathjen et al. | Mar 1972 | A |
3754596 | Ward | Aug 1973 | A |
3904933 | Davis | Sep 1975 | A |
4122508 | Rumbaugh | Oct 1978 | A |
4186422 | Laermer | Jan 1980 | A |
4302793 | Rohner | Nov 1981 | A |
4841355 | Parks | Jun 1989 | A |
5053856 | Davidson | Oct 1991 | A |
5063475 | Balan | Nov 1991 | A |
5150279 | Collins et al. | Sep 1992 | A |
5181167 | Davidson et al. | Jan 1993 | A |
5251097 | Simmons et al. | Oct 1993 | A |
5270571 | Parks et al. | Dec 1993 | A |
5289694 | Nordin | Mar 1994 | A |
5424919 | Hielbronner | Jun 1995 | A |
5497288 | Otis et al. | Mar 1996 | A |
5546274 | Davidson | Aug 1996 | A |
5568361 | Ward et al. | Oct 1996 | A |
5586004 | Green et al. | Dec 1996 | A |
5691885 | Ward et al. | Nov 1997 | A |
5903432 | McMahon | May 1999 | A |
5930154 | Thalhammer-Reyero | Jul 1999 | A |
6008530 | Kano | Dec 1999 | A |
6157533 | Sallam et al. | Dec 2000 | A |
6172874 | Bartilson | Jan 2001 | B1 |
6198628 | Smith | Mar 2001 | B1 |
6229704 | Hoss et al. | May 2001 | B1 |
6308394 | Boe | Oct 2001 | B1 |
6557357 | Spinazzola et al. | May 2003 | B2 |
6618254 | Ives | Sep 2003 | B2 |
6628520 | Patel et al. | Sep 2003 | B2 |
6778389 | Glovatsky et al. | Aug 2004 | B1 |
6828675 | Memory et al. | Dec 2004 | B2 |
6896612 | Novotny | May 2005 | B1 |
6986066 | Morrow et al. | Jan 2006 | B2 |
7009530 | Zigdon et al. | Mar 2006 | B2 |
7038553 | Garner et al. | May 2006 | B2 |
7106590 | Chu et al. | Sep 2006 | B2 |
7123477 | Coglitore et al. | Oct 2006 | B2 |
7269018 | Bolich et al. | Sep 2007 | B1 |
7301772 | Tilton et al. | Nov 2007 | B2 |
7312987 | Konshak | Dec 2007 | B1 |
7352576 | McClure | Apr 2008 | B2 |
7378745 | Hayashi et al. | May 2008 | B2 |
7403392 | Attlesey et al. | Jul 2008 | B2 |
7411785 | Doll | Aug 2008 | B2 |
7414845 | Attlesey et al. | Aug 2008 | B2 |
7427809 | Salmon | Sep 2008 | B2 |
7444205 | Desmond | Oct 2008 | B2 |
7457118 | French et al. | Nov 2008 | B1 |
7511960 | Hillis et al. | Mar 2009 | B2 |
7518871 | Campbell et al. | Apr 2009 | B2 |
7534167 | Day | May 2009 | B2 |
7539020 | Chow et al. | May 2009 | B2 |
7548170 | Griffel et al. | Jun 2009 | B1 |
7586747 | Salmon | Sep 2009 | B2 |
7599761 | Vinson et al. | Oct 2009 | B2 |
7609518 | Hopton et al. | Oct 2009 | B2 |
7630795 | Campbell et al. | Dec 2009 | B2 |
7657347 | Campbell et al. | Feb 2010 | B2 |
7660116 | Claassen et al. | Feb 2010 | B2 |
7738251 | Clidaras et al. | Jun 2010 | B2 |
7835151 | Olesen | Nov 2010 | B2 |
7864527 | Whitted | Jan 2011 | B1 |
8004836 | Kauranen et al. | Aug 2011 | B2 |
8096136 | Zheng et al. | Jan 2012 | B2 |
8174826 | El-Essawy et al. | May 2012 | B2 |
8179674 | Carter et al. | May 2012 | B2 |
8279597 | El-Essawy et al. | Oct 2012 | B2 |
8547692 | El-Essawy et al. | Oct 2013 | B2 |
20050061541 | Belady | Mar 2005 | A1 |
20050286226 | Ishii et al. | Dec 2005 | A1 |
20070095507 | Henderson et al. | May 2007 | A1 |
20070121295 | Campbell et al. | May 2007 | A1 |
20070217157 | Shabany et al. | Sep 2007 | A1 |
20070235167 | Brewer et al. | Oct 2007 | A1 |
20070256815 | Conway et al. | Nov 2007 | A1 |
20080253085 | Soffer | Oct 2008 | A1 |
20090284923 | Rytka et al. | Nov 2009 | A1 |
20100109137 | Sasaki et al. | May 2010 | A1 |
20110292594 | Carter et al. | Dec 2011 | A1 |
20110292595 | El-Essawy et al. | Dec 2011 | A1 |
20110292596 | El-Essawy et al. | Dec 2011 | A1 |
20110292597 | Carter et al. | Dec 2011 | A1 |
Number | Date | Country |
---|---|---|
1566087 | Aug 2005 | EP |
2274738 | Aug 1994 | GB
WO2010019517 | Feb 2010 | WO |
Entry |
---|
Interview Summary mailed Jun. 19, 2013 for U.S. Appl. No. 13/596,828; 4 pages. |
Notice of Allowance mailed May 24, 2013 for U.S. Appl. No. 13/596,828; 15 pages. |
U.S. Appl. No. 12/788,863. |
U.S. Appl. No. 12/788,925. |
U.S. Appl. No. 12/789,583. |
U.S. Appl. No. 12/789,617. |
U.S. Appl. No. 13/435,811. |
“IBM System p 570 and System Storage DS4800 Oracle Optimized Warehouse”, IBM/Oracle, 2009-2010, 12 pages. |
“Oracle Solaris and Sun SPARC Systems-Integrated and Optimized for Enterprise Computing”, An Oracle White paper, Jun. 2010, 35 pages. |
“Rugged Computer Boards and Systems for Harsh, Mobile and Mission-Critical Environments”, http://docs.google.com/viewer?a=v&q=cache:hXSDxxK6ZMkJ:www.prosoft.ru/cms/f/432160.pdf+rack+mounted+air+cooled+convection+stack+computer+memory+module+assembly&hl=en&gl=in&pid=bl&srcid=ADGEESghFg9KMvzmcTvmnYWf_SJViA9wR0N0bTZA_JTVOKFOpLIBdzUvH-cpZnujNEYiKnh, Printed Sep. 30, 2010, 12 pages. |
“Sun Storagetek SL8500 Modular Library System”, Oracle, 2008/2010, 6 pages. |
Alkalai, Leon, “Advanced Packaging Technologies used in NASA's 3D Space Flight”, International Conference & Exhibition on High Density Packaging & MCMs, Apr. 7, 1999, 6 pages. |
Culhane, Candy et al., “Marquise—An Embedded High Performance Computer Demonstration”, http://docs.google.com/viewer?a=v&q=cache:GAzkbRUVuFcJ:citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.30.6885%26rep%3Drep1%26type%3Dpdf+rack+mounted+air+cooled+convection+stack+computer+memory+module+assembly&hl=en&gl=in&pid=bl&srcid=ADGEESgTLgP1g6b, Printed Sep. 30, 2010, 15 pages. |
Ghaffarian, Reza, “3D Chip Scale Package (CSP)”, http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/17367/1/99-0814.pdf, Printed on Sep. 29, 2010, 31 pages. |
Honer, Ken, “Silent Air-Cooling Technology: A Solution to the Cooling Quandary?”, http://www2.electronicproducts.com/Silent_air-cooling_technology_a_solution_to_the_cooling_quandary-article-fapo_Tessera_may2010.html.aspx, May 2010, 5 pages. |
Hu, Jin et al., “Completing High-quality Global Routes”, Proc. Int'l. Symp. on Physical Design (ISPD), Mar. 2010, 7 pages. |
Moffitt, Michael D. et al., “The Coming of Age of (Academic) Global Routing”, ISPD 2008, Apr. 13-16, 2008, 8 pages. |
Patterson, Michael K. et al., “The State of Data Center Cooling, a Review of Current Air and Liquid Cooling Solutions”, Intel, White Paper, Digital Enterprise Group, Mar. 2008, 12 pages. |
Roy, J.A. et al., “High-performance Routing at the Nanometer Scale”, IEEE Trans. on Computer-Aided Design 27 (6), Jun. 2008, pp. 1066-1077. |
Straznicky, Ivan et al., “Taking the Heat: Strategies for Cooling SBCs in Commercial and Military Environments”, Reprinted from Embedded Computing Design, Jan. 2005, 4 pages. |
Interview Summary mailed May 16, 2012 for U.S. Appl. No. 12/788,925; 3 pages. |
Notice of Allowance mailed May 25, 2012 for U.S. Appl. No. 12/788,925; 12 pages. |
Response to Office Action filed with the USPTO on May 15, 2012 for U.S. Appl. No. 12/788,925; 8 pages. |
Response to Restriction Requirement filed Jul. 17, 2012 for U.S. Appl. No. 12/789,617; 8 pages. |
Restriction Requirement mailed Jun. 29, 2012 for U.S. Appl. No. 12/789,617; 6 pages. |
Notice of Allowance mailed Aug. 16, 2012 for U.S. Appl. No. 12/789,617; 16 pages. |
U.S. Appl. No. 13/596,828, 1 page. |
Number | Date | Country
---|---|---
20120188719 A1 | Jul 2012 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 12788863 | May 2010 | US
Child | 13435998 | | US