The present disclosure relates generally to communications networks, and more particularly, to power, data, management, and cooling integration in a communications network.
In conventional communications systems, installation of network devices in an equipment rack is often complex due to the use of individual cables to provide power, data, and other utilities. Network devices may have both their data connectivity and power needs met over a single combined function cable through the use of PoE (Power over Ethernet) or Universal Serial Bus (USB). However, conventional PoE systems have limited power capacity, which may be inadequate for many classes of devices. Also, if the power is increased, traditional cooling methods may be inadequate for high powered devices.
Corresponding reference characters indicate corresponding parts throughout the several views of the drawings.
Overview
In one embodiment, a system generally comprises a central hub comprising a power source, a data switch, a coolant distribution system, and a management module, a plurality of network devices located within an interconnect domain of the central hub, and at least one combined cable connecting the central hub to the network devices and comprising a power conductor, a data link, a coolant tube, and a management communications link contained within an outer cable jacket.
In one or more embodiments, the central hub and network devices are rack mounted devices.
In one or more embodiments, the combined cable connects to a back of the network devices with the network devices inserted into a front of the rack.
In one or more embodiments, the combined cable comprises a plurality of combined cables, each of the combined cables connecting the central hub to one of the network devices.
In one or more embodiments, the combined cable comprises multi-tap connections to each of the network devices.
In one or more embodiments, the central hub and the network devices form a passive optical network over the optical fiber.
In one or more embodiments, the system further comprises a redundant central hub connected to the network devices with at least one backup combined cable.
In one or more embodiments, the power source is operable to provide at least 1000 watts of pulse power.
In one or more embodiments, the data link comprises a pair of optical fibers operable to deliver at least 100 Gb/s to each of the network devices.
In one or more embodiments, the central hub comprises a reserve power supply operable to supply power to the network devices for a specified period of time.
In one or more embodiments, the coolant distribution system comprises a chilled reserve coolant tank.
In one or more embodiments, the management communications link comprises a single pair of wires for Single Pair Ethernet (SPE) management communications.
In one or more embodiments, the management communications link defines a management overlay network.
In one or more embodiments, the central hub forms a storage overlay network with the network devices over the combined cable.
In one or more embodiments, the combined cable further comprises a cable identifier light emitting diode located within the combined cable or a connector coupled to the combined cable for use in identifying the combined cable or a status of the combined cable.
In one or more embodiments, the central hub operates as a Top of Rack (ToR) switch and the network devices comprise servers.
In another embodiment, an apparatus generally comprises a power source, a data switch, a coolant distribution system, a management module, at least one port for connection to a combined cable comprising a power conductor, a data link, a coolant tube, and a management communications link contained within an outer cable jacket, and a control processor for control of interactions between power, data, and cooling delivered on the combined cable to a plurality of network devices. The power source, data switch, coolant distribution system, management module, and control processor are contained within a chassis.
In yet another embodiment, a method generally comprises inserting a central hub into a rack, the central hub comprising a power source, a data switch, a coolant distribution system, and a management module contained within a chassis, connecting a combined cable comprising a power conductor, a data link, a coolant tube, and a management communications link contained within an outer cable jacket to the central hub, inserting a network device into the rack and connecting the network device to the combined cable, and providing power, data, cooling, and management to the network device from the central hub over the combined cable.
Further understanding of the features and advantages of the embodiments described herein may be realized by reference to the remaining portions of the specification and the attached drawings.
The following description is presented to enable one of ordinary skill in the art to make and use the embodiments. Descriptions of specific embodiments and applications are provided only as examples, and various modifications will be readily apparent to those skilled in the art. The general principles described herein may be applied to other applications without departing from the scope of the embodiments. Thus, the embodiments are not to be limited to those shown, but are to be accorded the widest scope consistent with the principles and features described herein. For purpose of clarity, details relating to technical material that is known in the technical fields related to the embodiments have not been described in detail.
Installation of servers, routers, storage engines, accelerators, fog nodes, IoT (Internet of Things) devices, gateways, and similar network devices is often complex. The hardware is typically secured to its mounting position, and then power, data, and out of band management cables are separately connected. These cables contribute significantly to system complexity and cost, and often increase failure modes of the system. In one example, an equipment rack with forty 1 RU (Rack Unit) servers may have hundreds of discrete cables that need to be purchased, installed, and maintained.
In conventional Power over Ethernet (PoE) systems used to simultaneously transmit power and data communications, power is delivered over the same twisted pair cable used for data. The maximum power delivery capacity of standard PoE is approximately 100 Watts (W), but many classes of powered devices would benefit from power delivery of 1000 W or more. The data capability is also limited to the bandwidth of the twisted pair, which is typically 10 Gb/s (Gigabit per second) or less. While use of PoE as a single cable interconnect in large scale and distributed computing systems would simplify installation and maintenance and reduce cable congestion, conventional PoE systems may not scale to the power requirements (e.g., about 1000 W) or the interconnect bandwidth requirements (e.g., over 40 Gb/s per server), nor do they provide the needed cooling.
For high-powered devices, especially those with high thermal density packaging or total dissipation over a few hundred watts, traditional convection cooling methods may be inadequate. Forced air convection with fans typically becomes impractical once the volumetric power density exceeds about 150 W per liter. Next generation servers (e.g., with eight or more high power CPU (Central Processing Unit), GPU (Graphics Processing Unit), and/or TPU (Tensor Processing Unit) chips) would benefit from power dissipation capabilities on the order of 1000 W per 1 RU package. Routers supporting dozens of 100 Gb/s or greater links have similar power requirements. This power density is very difficult to cool using fans and may result in air cooling systems that are so loud that they exceed OSHA (Occupational Safety and Health Administration) acoustic noise limits. Research is being conducted into replacing forced air cooling with pumped liquid coolant, which is an important trend in future data center designs. However, use of a separate set of tubes to deliver liquid coolant further increases the complexity of cable systems.
Out of band management and storage networking are also key capabilities in rack level server installations. One or more overlay networks (beyond the mainstream Ethernet interconnect) are often provided to each server to establish a side channel for management traffic, alarm monitoring, connection to storage disk farms, and the like. However, these overlay networks increase system costs and complexity.
The embodiments described herein provide interconnect technology to simultaneously address the above noted issues. One or more embodiments provide a highly efficient, compact, cost-effective way to interconnect network devices such as servers, routers, storage engines, or similar devices in a rack (e.g., cabinet, server rack, or other frame or enclosure for supporting network devices) with central data, management, power, and cooling resources. In one or more embodiments, a combined cable provides data, power, cooling, and management. For example, a combined cable may carry optical fiber delivered data, management (e.g., traffic management, alarm monitoring, connection to storage disk farms, or other management or storage overlay network functions), power (e.g., pulse power, power ≥100 W, power ≥1000 W), and cooling (e.g., liquid, gas, or multi-phase coolant) from a central hub to a large number of network devices (e.g., servers, routers, storage engines, fog nodes, IoT devices, or similar network devices) within the central hub's interconnect domain. In one or more embodiments, the management capabilities associated with the combined cable and hub implement interaction modes between the data interconnect, power, cooling, and management overlay capabilities of the infrastructure. As described in detail below, a central hub configured to provide power, data, cooling, and management may include a hub control processor, data switch (switch, router, switch/router), power distribution system, management module (e.g., providing physical or virtual management function), and coolant distribution system. In one or more embodiments, the central hub may also provide short-term power and coolant backup capability. The combined cable and unified central hub communications system described herein may greatly improve efficiency, reduce complexity of installation and maintenance, and reduce cost of high density and distributed computing systems, while facilitating tighter coupling between systems.
The embodiments described herein operate in the context of a data communications network including multiple network devices. The network may include any number of network devices in communication via any number of nodes (e.g., routers, switches, gateways, controllers, access points, or other network devices), which facilitate passage of data within the network. The network devices may communicate over or be in communication with one or more networks (e.g., local area network (LAN), metropolitan area network (MAN), wide area network (WAN), virtual private network (VPN) (e.g., Ethernet virtual private network (EVPN), layer 2 virtual private network (L2VPN)), virtual local area network (VLAN), wireless network, enterprise network, corporate network, data center, Internet of Things (IoT), optical network, Internet, intranet, fog network, or any other network). The network may include any number of communications systems (e.g., server farms, distributed computation environments (industrial computing, edge computers, fog nodes), data center racks, or other communications systems with a centralized interconnect domain) comprising a central hub operable to deliver data, power, management networking, and cooling over a combined cable to a plurality of network devices, as described herein.
Referring now to the drawings, and first to
As shown in the example of
In the example shown in
The network devices 12 may include, for example, servers, routers, or storage engines located in a rack or cabinet or IoT devices or fog nodes located in a distributed computational environment (e.g., industrial computing, edge, fog) in which the combined cables provide data, power, management, and cooling to distributed endpoints within the central hub's interconnect domain. In one or more embodiments, the network devices 12 may operate at power levels greater than 100 W (e.g., 1000 W or any other power level). The network devices 12 may also be in communication with one or more other devices (e.g., fog node, IoT device, sensor, and the like) and may deliver power to equipment using PoE or USB. For example, one or more of the network devices 12 may deliver power using PoE to electronic components such as IP (Internet Protocol) cameras, VoIP (Voice over IP) phones, video cameras, point-of-sale devices, security access control devices, residential devices, building automation devices, industrial automation, factory equipment, lights (building lights, streetlights), traffic signals, and many other electrical components and devices.
In one or more embodiments, a PON (Passive Optical Network) (e.g., 10G PON) may use multiple taps over the optical fibers with a multi-tap configuration of the power (e.g., pulse power) and cooling systems. For example, 10G of PON communications bandwidth may be split between a small community of servers. PON may provide, for example, dynamic bandwidth on demand for a cluster of servers 12 in the same cabinet sharing one combined cable 14 and may also be valuable in situations where client devices are widely distributed (e.g., series of street-corner fog nodes down a linear shared cable or a series of Wi-Fi or Li-Fi APs (Access Points) down a long corridor). The multi-tap power may start by sourcing, for example, 4000 W or more at the central hub 10 to the cable 14, with each server 12 tapping off the power line until the power is diminished. The servers 12 may also communicate with one another (e.g., through management data links in the combined cable 14) and dynamically reallocate their usage of cooling, power, and bandwidth based on need or requested loading.
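By way of illustration only, the following Python sketch shows one way servers sharing a multi-tap combined cable might proportionally scale back their power and PON bandwidth requests when total demand exceeds what the central hub sources onto the cable. The function and variable names are hypothetical, and the proportional policy is merely one example of the dynamic reallocation described above.

```python
# Hypothetical sketch: servers sharing one multi-tap combined cable divide
# the hub's fixed power and PON bandwidth budgets in proportion to demand.

def allocate(budget, requests):
    """Split 'budget' across requesters in proportion to requested amounts,
    never giving any requester more than it asked for."""
    total = sum(requests.values())
    if total <= budget:
        return dict(requests)  # every request can be met in full
    scale = budget / total
    return {name: req * scale for name, req in requests.items()}

# Example: 4000 W sourced onto the cable, 10 Gb/s of PON bandwidth shared.
power_req = {"server1": 1200, "server2": 1800, "server3": 1500}   # watts
bw_req = {"server1": 6.0, "server2": 2.0, "server3": 4.0}         # Gb/s

print(allocate(4000, power_req))   # requests scaled back by ~0.89x
print(allocate(10.0, bw_req))      # requests scaled back by ~0.83x
```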
The system may be used, for example, to create a cost effective means of creating a server farm within a rack or set of racks with a minimum amount of cabling. Maintenance is simplified since a tap may easily be removed and reattached with no disruption to the other servers 12 on the cable 14. The multi-tap variant (
It is to be understood that the network devices and topologies shown in
In the example shown in
The power distribution module 20 provides power to a power supply module 23 at the remote device 12 over conductors 26. The main switch/router 21 at the central hub 10 is in communication with a network interface 24 at the remote device 12 via data link (e.g., optical fibers, data wires) 27. The management module 32 provides management functions and may be used, for example, in management and storage overlay networking. It is to be understood that the term management module as used herein may refer to a physical or virtual management function. For example, the management module may comprise one or more smaller data switches that may be integrated into the central hub 10 to supplement the main data switch 21 or provide virtualized management of traffic on the primary data switch 21.
The coolant distribution system 22 at the central hub 10 forms a cooling loop with coolant tubes 28 and one or more heat sinks 25 at the network device 12. The hub control processor 30 may provide control logic for the cooling loop and power and data transport functions of the combined cable 14. The hub control processor 30 may also provide control information to the management switch 32 for management of the network device 12 or a management or storage overlay. In one or more embodiments, the central hub 10 may also include a coolant backup store (e.g., chilled reserve coolant tank) 31 and a short term power source (e.g., reserve battery) 36, as described in detail below.
The cable 14 comprises power conductors 26 (e.g., heavy stranded wires for pulsed power), management communications link 35 (e.g., one or more wire pairs for transmission of Ethernet data (e.g., Single Pair Ethernet (SPE)) or fiber delivered management or storage overlay networks), data link 27 for transmission of data (e.g., at least one optical fiber in each direction for conventional systems or at least one optical fiber for bidirectional fiber systems, metallic main data interconnects (conductors, wires)), coolant tubes 28 (at least one in each direction for liquid systems, or at least one for compressed air systems), and a protective outer cable jacket 33. These components, along with one or more additional components that may be used to isolate selected elements from each other, manage thermal conductivity between the elements, or provide protection and strength, are contained within the outer cable jacket 33 of the single combined cable 14.
In the example shown in
The conductors 26 may comprise heavy power conductors capable of delivering, for example, several kilowatts of power to each endpoint 12. In one example, pulse power may be used, in which short pulses of high voltage energy are transmitted on the cable 14 and reception is acknowledged by the endpoint 12. The system may include one or more safety features for higher power operation (e.g., insulation, process for power/cable compatibility confirmation, control circuit check for open/short, or thermal sensor). In one embodiment, the pulse power may comprise low voltage fault detection between high voltage power pulses. Fault sensing may include, for example, line-to-line fault detection with low voltage sensing of the cable or powered device and line-to-ground fault detection with midpoint grounding. Touch-safe fault protection may also be provided through cable and connector designs that are touch-safe even with high voltage applied. The power safety features provide for safe system operation and installation and removal (disconnect) of components.
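For purposes of illustration only, a minimal Python sketch of this pulse power sequencing is shown below; the driver interface, timing values, and fault thresholds are assumed placeholders rather than parameters of any particular embodiment.

```python
import time

# Hypothetical sketch of a pulse power source: high-voltage energy pulses are
# interleaved with low-voltage probe intervals used for fault detection, and
# the next pulse is sent only if the endpoint acknowledged the previous one.

PULSE_MS, PROBE_MS = 8, 4          # assumed timing, not from the disclosure

def line_fault(probe_current_a, probe_voltage_v):
    # Low-voltage sensing between pulses: an open line draws almost no current,
    # while a line-to-line or line-to-ground short draws far too much.
    return probe_current_a < 0.001 or probe_current_a > 0.5 or probe_voltage_v < 1.0

def run_pulse_power(driver):
    # 'driver' is an assumed abstraction of the hub's power electronics.
    while True:
        driver.apply_low_voltage_probe()
        time.sleep(PROBE_MS / 1000)
        if line_fault(driver.read_current(), driver.read_voltage()):
            driver.shut_down()             # fail safe: keep high voltage off
            raise RuntimeError("cable or endpoint fault detected")
        driver.apply_high_voltage_pulse()
        time.sleep(PULSE_MS / 1000)
        if not driver.endpoint_acknowledged():
            driver.shut_down()             # endpoint must acknowledge each pulse
            raise RuntimeError("no acknowledgement from endpoint")
```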
An optional overlay management network may be configured as one or more extra conductors 35 in the cable 14. In one or more embodiments, the overlay management network may use SPE to reduce cabling complexity. If Fibre Channel (FC) is needed for storage and use of converged Ethernet over the main fiber optical links is not possible or desired, additional FC strands may be provided. These overlay and additional storage networks may be broken out as logical interfaces on the servers themselves.
The optical fibers 27 may be operable to deliver, for example, 400 Gb/s or more (or other data rates, including rates between 10 Gb/s and 100 Gb/s) to each endpoint 12.
The coolant distribution system 22 at the central hub 10 maintains a source of low-temperature coolant that is sent through distribution plumbing (such as a manifold), through the connector 29a, and down the cable's coolant supply line 28 to the remote device 12. The connector 29b on the remote device 12 is coupled to the cable 14, and the supply coolant is routed through elements inside the device such as heat sinks 25 and heat exchangers that remove heat (described further below with respect to
In an alternate embodiment, only a single coolant tube is provided within the cable 14 and high pressure air (e.g., supplied by a central compressor with an intercooler) is used as the coolant. When the air enters the remote device 12, it is allowed to expand and/or impinge directly on heat dissipating elements inside the device. Cooling may be accomplished by forced convection via the mass flow of the air and additional temperature reduction may be provided via a Joule-Thomson effect as the high pressure air expands to atmospheric pressure. Once the air has completed its cooling tasks, it can be exhausted to the atmosphere outside the remote device 12 via a series of check valves and mufflers (not shown).
In one or more embodiments, the coolant tubes 28 support the flow of liquid coolant or other fluid capable of cooling a thermal load. The coolant may comprise, for example, water, antifreeze, liquid or gaseous refrigerants, or mixed-phase coolants (partially changing from liquid to gas along the loop). The central hub 10 may also include one or more support systems to filter the coolant, supply fresh coolant, adjust anti-corrosion chemicals, bleed air from the loops, or fill and drain loops as needed for installation and maintenance of the cables 14. In one example, approximately 25 liters per minute of 25 degree C. water-based coolant may be provided to cool a 40 kW communications system contained within a rack. It is to be understood that this is only an example and other cooling rates or temperatures may be used to cool various loads. The cooling loops from all of the remote devices 12 may be isolated from one another or intermixed through a manifold and a large central heat exchanger for overall system thermal efficiency.
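The 40 kW example above can be checked with the sensible-heat relation Q = m·c_p·ΔT, as in the short Python calculation below, which assumes water-like coolant properties; the flow and load values are taken from the example.

```python
# Sanity check of the example above: temperature rise of ~25 L/min of
# water-based coolant absorbing a 40 kW rack load (Q = m_dot * cp * dT).

flow_l_per_min = 25.0
density = 1.0            # kg per liter, assumed water-like coolant
cp = 4186.0              # J/(kg*K), specific heat of water
load_w = 40_000.0        # total rack dissipation

m_dot = flow_l_per_min * density / 60.0   # ~0.42 kg/s
delta_t = load_w / (m_dot * cp)           # ~23 C rise through the rack

print(f"mass flow : {m_dot:.2f} kg/s")
print(f"temp rise : {delta_t:.1f} C")     # 25 C supply returns near 48 C
```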
As previously noted, various sensors may monitor aggregate and individual branch coolant temperatures, pressures, and flow rate quantities at strategic points around the coolant loop (coolant distribution system 22, coolant tubes 28, heat sinks 25). Other sensors may monitor the current and voltage of the power delivery system at either end of power conductors 26. One or more valves may be used to control the amount of cooling delivered to the remote device 12 based upon its instantaneous needs. For example, the hub control processor 30 may control coolant distribution based on thermal and power sensors.
The hub control processor 30 may implement algorithms to provide various integrated management functions. For example, pulse power techniques may utilize continuous feedback from the receiving endpoint to close a feedback loop and maintain safe high power connectivity. Since the data and management networks are included in the same cable 14 and their routing/switching capability is included in the same chassis as the power hub function, the hub processor 30 can coordinate the two systems to efficiently interact. Combining power and cooling also provides advantages. Pulse power can precisely measure and regulate the instantaneous power delivery to each endpoint. If the central hub's coolant distribution system has valves to adjust the coolant flow down each combined cable, the hub control processor can perform closed-loop control over the coolant network to match the supplied power. Location of the data router in the same hub allows the power and cooling systems to monitor and quickly respond to changes in the computation loads as evidenced by changes in network traffic. Integration of the management networks into the same cable 14 and central hub 10 also opens up possibilities for closer monitoring and faster response to abnormal conditions in the data, power, or cooling networks, thereby enhancing the efficiency and safety of the entire data center.
As previously noted, the coolant distribution system 22 may interact with the data and power elements in the central hub 10 through the hub control processor 30. For example, each branch may drive a distinct combined cable to an individual server and have its own coolant metering function, which may include a network of valves or small pumps within the hub's coolant manifold assembly. Since the central hub 10 knows the instantaneous power draw of each server from its power system telemetry, the coolant flow down each branch can react to the cooling load required much faster, potentially eliminating the instabilities caused by thermal inertia, sensing lags, or delays in changing flow rates. Control algorithms at the hub control processor 30 may combine the operational states of the power, data, and cooling systems to optimize the operation and efficiency of the connected servers in both normal and emergency modes.
All utilities (power, data, cooling, management) provided by the combined cable 14 may interact with the hub control processor 30 to keep the system safe and efficient. In one or more embodiments, a distributed control system comprising components located on the central hub's control processor 30 and on the remote device's manager processor 34 may communicate over the management Ethernet conductors 35 in the combined cable 14. Sensors at the central hub 10 and remote device 12 may be used by the hub control processor 30 to monitor temperature, pressure, or flow. Servo valves or variable speed pumps may be used to ensure that the rate of coolant flow matches the requirements of the remote thermal load. Temperature, pressure, and flow sensors may be used to measure coolant characteristics at multiple stages of the cooling loop (e.g., at the inlet of the central hub 10 and inlet of the remote device 12) and a subset of these sensors may also be strategically placed at outlets and intermediate points. The remote device 12 may include, for example, temperature sensors to monitor die temperatures of critical semiconductors, temperatures of critical components (e.g., optical modules, disk drives), or the air temperature inside a device's sealed enclosure. If the system detects additional power flow in power conductors 26 (e.g., due to a sudden load increase in the CPU at remote device 12), the hub control processor 30 may proactively increase coolant flow in anticipation of an impending increase in heat sink temperature, even before the temperature sensors register it. The hub control processor 30 may also monitor the remote device's internal temperatures and adjust the coolant flow to maintain a set point temperature. This feedback system ensures that the correct coolant flow is always present. Too much coolant flow will waste energy, while too little coolant flow will cause critical components in the remote device 12 to overheat.
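By way of example only, the following Python sketch illustrates one possible form of this combined control: a feedforward term driven by the measured power draw plus a feedback term that trims coolant flow toward a temperature set point. The class name, gains, and limits are assumed for illustration and are not taken from any particular embodiment.

```python
# Illustrative per-branch coolant control: a feedforward term tracks measured
# power draw (heat appears before temperatures move) and a feedback term trims
# flow to hold the remote device near its set point. Gains are assumed values.

class BranchCoolantController:
    def __init__(self, setpoint_c=55.0, ff_lpm_per_kw=1.5, kp_lpm_per_c=0.1):
        self.setpoint_c = setpoint_c            # target device temperature
        self.ff = ff_lpm_per_kw                 # assumed feedforward gain
        self.kp = kp_lpm_per_c                  # assumed proportional gain
        self.min_lpm, self.max_lpm = 0.5, 10.0  # branch flow limits

    def flow_command(self, power_kw, device_temp_c):
        feedforward = self.ff * power_kw                         # react to load immediately
        feedback = self.kp * (device_temp_c - self.setpoint_c)   # trim on temperature
        return max(self.min_lpm, min(self.max_lpm, feedforward + feedback))

# A sudden CPU load step is seen on the power conductors before the heat sink
# warms up, so the commanded flow rises ahead of the temperature sensors.
ctrl = BranchCoolantController()
print(ctrl.flow_command(power_kw=0.4, device_temp_c=50.0))  # light load
print(ctrl.flow_command(power_kw=1.0, device_temp_c=50.0))  # load step, same temperature
```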
The central hub 10 may also include support for power and cooling resiliency. For example, a UPS (Uninterruptible Power Supply) function may bridge the interval between the moment of an AC grid failure and the availability of stable power from a backup generator. As shown in
As shown in
Pre-chilling of the reserve coolant in the tank 31 allows a limited volume of coolant that can be stored in a reasonably sized hub tank to go further in emergency cooling situations. For example, if the design temperature of liquid heat sinks in a server is 55 degrees C. and the coolant is stored at 30 degrees C. ambient, a certain run time may be supported based upon flow, dissipation, etc., with the 25 degrees C. increase through the servers. By keeping the reserve coolant below ambient (e.g., 5 degrees C.), a 50 degrees C. temperature rise may be used, doubling the cooling run time of the small reserve tank 31. There may also be different control modes implied for situations where the primary coolant supply lines run dry or run too hot. The reserve coolant may be metered to dilute the main coolant supply to cool it down in some cases (e.g., chiller plant coolant too hot) or isolated and recirculated to the loads in other cases (e.g., chiller plant flow failure).
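The doubling described above follows from a simple energy balance: the heat a reserve tank can absorb scales with the allowable temperature rise through the servers. The short Python calculation below illustrates this with an assumed 100 liter tank and 10 kW emergency load; these values are examples only and are not taken from the disclosure.

```python
# Illustrative run-time of a chilled reserve coolant tank: the stored energy
# capacity scales with the allowable temperature rise through the servers.
# Tank volume and emergency load are assumed example values only.

cp = 4186.0                 # J/(kg*K), water-like coolant
tank_kg = 100.0             # assumed 100 L reserve tank
load_w = 10_000.0           # assumed emergency heat load

def run_time_min(coolant_start_c, heat_sink_limit_c):
    allowable_rise = heat_sink_limit_c - coolant_start_c
    return tank_kg * cp * allowable_rise / load_w / 60.0

print(run_time_min(30.0, 55.0))   # stored at ambient: 25 C rise -> ~17 min
print(run_time_min(5.0, 55.0))    # pre-chilled:       50 C rise -> ~35 min (2x)
```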
In one or more embodiments, the reserve coolant tank 31 may be sized to have similar run-time under the expected load as the reserve battery 36. In one example, the run-time of the reserve battery 36 and reserve coolant tank 31 may be 5-10 minutes, which may be adequate to ride through many short-term utility interruptions and maintenance actions to the data center's power and cooling plant. If an interruption is expected to last longer than the supported run time, the reserve stores provide sufficient time to allow the servers 12 to save their states and perform an orderly shutdown before running out of power or dangerously overheating.
In one or more embodiments, a cable identifier may be provided since there may be many cables 14 homing on the central hub 10, making it difficult for a technician to identify the particular cable that needs to be worked on. In one example, an identification capability may be integrated into the cable 14, connector 29a, connector 29b, or any combination thereof. The identifier element may cause the selected cable or connector to glow in order to identify the cable and may comprise, for example, an element (fiber) 37 in the cable 14 or an LED 38 in one or both of the connectors 29a, 29b that may be illuminated in easily identifiable colors or blink patterns to quickly indicate a fault, such as power failure, loss of coolant flow/pressure, network error, etc. In one embodiment, the optical fiber 37 may be integrated along the length of the cable and the LED 38 provided within the central hub connector 29a to illuminate the cable. In another embodiment, a small LED and a driver circuit are integrated into the connectors 29a, 29b on both ends of the combined cable 14, the driver circuit receiving control messages and illuminating the LED with the selected color, blink pattern, or both. The entire length of the cable 14 may be illuminated through the use of “leaky” fiber, appropriate cable jacket material, and optical termination, for example.
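For illustration only, a minimal Python sketch of such connector-resident LED driver behavior is given below; the message names, color and blink mapping, and hardware abstraction are assumed placeholders.

```python
import time

# Hypothetical sketch of a connector-resident LED driver: the central hub
# sends a small control message over the management channel and the driver
# sets a color and blink pattern used to identify the cable or flag a fault.

STATUS_PATTERNS = {                      # assumed mapping, not from the disclosure
    "identify":      ("blue",   (0.5, 0.5)),   # slow blink while a tech locates the cable
    "power_fault":   ("red",    (0.1, 0.1)),   # fast blink
    "coolant_fault": ("yellow", (0.2, 0.8)),
    "network_error": ("purple", (1.0, 0.0)),   # solid
}

class ConnectorLedDriver:
    def __init__(self, led):
        self.led = led                   # assumed hardware abstraction

    def handle_message(self, status, cycles=5):
        color, (on_s, off_s) = STATUS_PATTERNS[status]
        for _ in range(cycles):
            self.led.set(color)
            time.sleep(on_s)
            if off_s:
                self.led.off()
                time.sleep(off_s)
```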
The cable 14 may comprise various configurations of power conductors 26, optical fibers 27, management data wires (overlay networking link) 35, and coolant tubes 28 contained within the outer jacket 33 of the cable 14. The coolant tubes 28 may have various cross-sectional shapes and arrangements, which may yield more space and thermally efficient cables. Supply and return tube wall material thermal conductivity may be adjusted to optimize overall system cooling. The cable 14 may also be configured to prevent heat loss through supply-return tube-tube conduction, external environment conduction, coolant tube-power conductor thermal conduction, or any combination of these or other conditions. For example, a thermal isolation material may be located between coolant tubes 28 to prevent heat flow between hot coolant return and cold coolant supply tubes. The thermal isolation material may also be placed between the coolant tubes 28 and the outer jacket 33. In another embodiment, one or both coolant tubes 28 may be provided with a low thermal impedance path to the outside. Thermal paths may also be provided between the power conductors 26 and one of the coolant tubes 28 to use some of the cooling power of the loop to keep the power conductors 26 in the cables 14 cool.
In one or more embodiments, the cable's jacket 33 may include two small sense conductors (not shown) for use in identifying a leak in the cooling system. If a coolant tube develops a leak, the coolant within the jacket 33 causes a signal to be passed between these conductors, and a device such as a TDR (Time-Domain Reflectometer) at the central hub 10 may be used to locate the exact position of the cable fault, thereby facilitating repair.
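Locating the fault from the TDR measurement is a time-of-flight calculation, as sketched below in Python; the velocity factor is an assumed typical value for an insulated wire pair within the jacket.

```python
# Locating a coolant leak from a TDR measurement on the sense conductors:
# distance = (signal velocity * round-trip time) / 2.

C = 299_792_458.0          # speed of light, m/s

def fault_distance_m(round_trip_ns, velocity_factor=0.66):
    # velocity_factor is an assumed typical value for an insulated wire pair
    return velocity_factor * C * (round_trip_ns * 1e-9) / 2.0

print(f"{fault_distance_m(30.0):.2f} m from the hub")   # ~2.97 m for a 30 ns echo
```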
In order to prevent coolant leakage when the cable 14 is uncoupled from the central hub 10 or remote device 12, the coolant lines 28 and connectors 29a, 29b preferably include valves (not shown) that automatically shut off flow into and out of the cable, and into and out of the device or hub. In one or more embodiments, the connector 29a, 29b may be configured to allow connection sequencing and feedback to occur. For example, electrical connections may not be made until a verified sealed coolant loop is established. The cable connectors 29a, 29b may also include visual or tactile evidence of whether a line is pressurized, thereby reducing the possibility of user installation or maintenance errors. The connectors 29a, 29b are preferably configured to mate and de-mate (couple, uncouple) easily by hand or robotic manipulator. The connectors 29a, 29b may also comprise quick disconnects for blind mating of the connector to a port at the central hub 10 or network device 12 as it is inserted into a rack, as described below with respect to
In one or more embodiments, a redundant central hub (not shown) may provide backup or additional power, bandwidth, cooling, or management as needed in the network. For example, each heat sink 25 (or heat exchanger) at the network device 12 may comprise two isolated fluid channels, each linked to one of the redundant central hubs. If the coolant flow stops from one hub, the other hub may supply enough coolant (e.g., throttled up by the hub control processor 30) to keep the critical components operational. Isolation is essential to prevent loss of pressure incidents in one fluid loop from also affecting the pressure in the redundant loop. Both the primary and backup hub may also be used simultaneously to provide power to an equipment power circuit to provide higher power capabilities. Similarly, redundant data fibers may provide higher network bandwidth, and redundant coolant loops may provide higher cooling capacity. The hub control processor 30 may manage failures and revert the data, power, and cooling to lower levels if necessary.
As previously described, discrete data, power, management, and cooling interconnects typically found in data center racks are replaced with combined cable interconnects that provide all of these functions to greatly simplify installation, maintenance, and repair. The centralized hub 40 combines ToR switch/router functions, control, power distribution, cooling distribution, and management into a single integrated package, which minimizes rack space used by support functions. In this example, the central hub 40 is located at a top of the rack 43 and replaces a ToR switch. An optional redundant hub 44 may also be located on the rack 43, as described below. It is to be understood that the central hub 40 and redundant central hub 44 (if included) may be located in any position on the rack (e.g., top, bottom, or any other slot). In the example shown in
As previously described with respect to
Fault tolerance may be a concern for critical devices. If redundancy is needed, the backup hub 44 may be provided, with one or more of the servers 42 interfacing with two of the combined cables 49 (one connected to each hub). Each cable 49 may home on an independent hub 40, 44, with each hub providing data, power, cooling, and management. Redundant connections for power, data, cooling, and management may be provided to protect against failure of the central hub 40, its data connections to the Internet, primary power supplies, cooling system, or management module.
It is to be understood that the terms front, rear, or back, as used herein are relative terms based on the orientation of the rack 43 and network components 40, 42, 44 and should not be construed as limiting the arrangement or orientation of the components within the rack 43. In one or more examples, the rack 43 may be positioned next to a wall or another rack and may have limited accessibility to either a front or back opening. Thus, the cable connections (interfaces, ports) 46, 47 for coupling the combined cable 49 to the central hub 40, redundant hub 44, or servers 42 may also be located on a back panel, as described below with respect to
As shown in
Power, data, and cooling interfaces 55 at the central hub 50 and redundant hub 54 may be located on the front (face plate) or back of the hub.
It is to be understood that the systems shown in
The network device 60 may include any number of processors 62 (e.g., single or multi-processor computing device or system). The processor 62 may receive instructions from a software application or module, which causes the processor to perform functions of one or more embodiments described herein. The processor 62 may also operate one or more components of the management system 63, cooling system 65, or data system 66.
Memory 64 may be a volatile memory or non-volatile storage, which stores various applications, operating systems, modules, and data for execution and use by the processor 62. For example, components of the management system 63, control logic for cooling components 65, or other parts of the control system (e.g., code, logic, or firmware) may be stored in the memory 64. The network device 60 may include any number of memory components, which may also form part of a storage overlay.
Logic may be encoded in one or more tangible media for execution by the processor 62. For example, the processor 62 may execute codes stored in a computer-readable medium such as memory 64. The computer-readable medium may be, for example, electronic (e.g., RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable read-only memory)), magnetic, optical (e.g., CD, DVD), electromagnetic, semiconductor technology, or any other suitable medium. In one example, the computer-readable medium comprises a non-transitory computer-readable medium. Logic may be used to perform one or more functions described below with respect to the flowchart of
The interfaces 66 may comprise any number of interfaces (e.g., power, data, and fluid connectors, line cards, ports, combined connectors 29a, 29b for connecting to cable 14 in
It is to be understood that the network device 60 shown in
The power detection module 72 may detect power, energize the optical components 71, and return a status message to the power source. A return message may be provided via state changes on the power wires, over the optical channel, or over the Ethernet management channel. In one embodiment, the power is not enabled by the power enable/disable module 74 until the optical transceiver and the source have determined that the device is properly connected and the network device is ready to be powered. In one embodiment, the device 70 is configured to calculate available power and prevent the cabling system from being energized when it should not be powered (e.g., during cooling failure).
The power monitor and control device 73 continuously monitors power delivery to ensure that the system can support the needed power delivery and that no safety limits (voltage, current) are exceeded. The power monitor and control device 73 may also monitor optical signaling and disable power if there is a lack of optical transitions or management communication with the power source. Temperature, pressure, or flow sensors 80, 87 may also provide input to the power monitor and control module 73 so that power may be disabled if the temperature at the device 70 exceeds a specified limit.
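A minimal Python sketch of this enable-and-monitor logic is given below for illustration; the limit values and the device abstraction (`dev`) with its sensor and control methods are assumed placeholders rather than elements of any particular embodiment.

```python
# Hypothetical sketch of power enable/monitor logic at the powered device:
# power is enabled only after the connection handshake completes, and it is
# dropped if electrical limits, temperature, or the optical/management channel
# indicate a problem. Sensor and control interfaces are assumed abstractions.

MAX_VOLTS, MAX_AMPS, MAX_TEMP_C = 400.0, 5.0, 70.0   # assumed safety limits

def may_enable(dev):
    # Enable gating: transceiver handshake done, source has confirmed
    # compatibility, and cooling is available.
    return dev.optical_link_up() and dev.source_confirmed() and dev.cooling_ok()

def monitor_step(dev):
    ok = (dev.voltage() <= MAX_VOLTS and
          dev.current() <= MAX_AMPS and
          dev.temperature_c() <= MAX_TEMP_C and
          dev.optical_transitions_seen() and
          dev.management_channel_alive())
    if not ok:
        dev.disable_power()        # fail safe on any violation
    return ok
```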
Cooling is supplied to the device 70 via cooling (coolant) tubes in a cooling loop 78, which provides cooling to the powered equipment through a cooling tap (heat sink, heat exchanger) 76, 79 and returns warm (hot) coolant to the central hub. The network device 70 may also include a number of components for use in managing the cooling. The cooling loop 78 within the network device 70 may include any number of sensors 80, 87 for monitoring aggregate and individual branch temperature, pressure, and flow rate at strategic points around the loop (e.g., entering and leaving the device, at critical component locations). The sensor 87 may be used, for example, to check that the remote device 70 receives approximately the same amount of coolant as supplied by the central hub to help detect leaks or blockage in the combined cable 84, and confirm that the temperature and pressure are within specified limits.
Distribution plumbing routes the coolant in the cooling loop 78 to various thermal control elements within the network device 70 to actively regulate cooling through the individual flow paths. For example, a distribution manifold 75 may be included in the network device 70 to route the coolant to the cooling tap 76 and heat exchanger 79. If the manifold has multiple outputs, each may be equipped with a valve 82 (manual or servo controlled) to regulate the individual flow paths. Thermal control elements may include liquid cooled heatsinks, heat pipes, or other devices directly attached to the hottest components (e.g., CPUs, GPUs, TPUs, power supplies, optical components, etc.) to directly remove their heat. The network device 70 may also include channels in cold plates or in walls of the device's enclosure to cool anything they contact. Air to liquid heat exchangers, which may be augmented by a small internal fan, may be provided to circulate and cool the air inside a sealed box. Once the coolant passes through these elements and removes the device's heat, it may pass through additional temperature, pressure, or flow sensors, through another manifold to recombine the flows, and out to the coolant return tube. In the example shown in
The distribution manifold 75 may comprise any number of individual manifolds (e.g., supply and return manifolds) to provide any number of cooling branches directed to one or more components within the network device 70. Also, the cooling loop 78 may include any number of pumps 81 or valves 82 to control flow in each branch of the cooling loop. This flow may be set by an active feedback loop that senses the temperature of a critical thermal load (e.g., die temperature of a high power semiconductor), and continuously adjusts the flow in the loop that serves the heat sink or heat exchanger 79. The pump 81 and valve 82 may be controlled by the management system/controller 77 and operate based on control logic received from the central hub 10 over the management communications channel in response to monitoring at the network device 70.
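By way of example, the following Python sketch shows a simple proportional-integral loop that could position valve 82 based on a sensed die temperature; the set point, gains, limits, and sampling period are assumed illustration values.

```python
# Minimal per-branch feedback loop: a PI controller on semiconductor die
# temperature positions valve 82 (0 = closed, 1 = fully open). Gains, limits,
# and the sampling period are assumed values, not taken from the disclosure.

class BranchFlowLoop:
    def __init__(self, setpoint_c=85.0, kp=0.05, ki=0.01, dt_s=1.0):
        self.setpoint_c, self.kp, self.ki, self.dt = setpoint_c, kp, ki, dt_s
        self.integral = 0.0

    def update(self, die_temp_c):
        error = die_temp_c - self.setpoint_c        # positive when too hot
        self.integral = max(-20.0, min(20.0, self.integral + error * self.dt))
        valve = self.kp * error + self.ki * self.integral
        return max(0.0, min(1.0, valve))            # clamp to valve travel

loop = BranchFlowLoop()
for temp in (80.0, 88.0, 92.0):                     # rising die temperature
    print(f"die {temp:.0f} C -> valve {loop.update(temp):.2f}")  # valve opens further
```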
It is to be understood that the network device 70 shown in
It is to be understood that the process shown in
Although the method and apparatus have been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations made to the embodiments without departing from the scope of the embodiments. Accordingly, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.