The present disclosure relates generally to communications networks, and more particularly, to an interface module for transmitting and receiving power, data, and cooling in a communications network.
Network devices such as computer peripherals, network access points, and IoT (Internet of Things) devices may have both their data connectivity and power needs met over a single combined function cable such as PoE (Power over Ethernet). In conventional PoE systems, power is delivered over the same cables that carry the data, at distances from a few meters to about one hundred meters. When a greater distance is needed or fiber optic cables are used, power is typically supplied through a local power source such as a nearby wall outlet due to limitations on capacity, reach, and cable loss in conventional PoE. Today's PoE systems also have limited power capacity, which may be inadequate for many classes of devices. If the available power over combined function cables is increased, cooling may also need to be delivered to the high-powered remote devices.
Corresponding reference characters indicate corresponding parts throughout the several views of the drawings.
Overview
In one embodiment, an apparatus generally comprises an interface module for coupling a cable delivering combined power, data, and cooling to a network device, the interface module comprising an electrical interface for receiving power for powering the network device, an optical transceiver for receiving optical communications signals, a fluid interface for receiving coolant, and sensors for monitoring the power and cooling and providing information to a central network device delivering the combined power, data, and cooling.
In another embodiment, an apparatus generally comprises an interface module for coupling a cable delivering combined power, data, and cooling to power sourcing equipment, the interface module comprising an electrical interface for delivering power for powering a remote network device, an optical interface for delivering optical communications signals to the remote network device, a fluid interface for delivering coolant to the remote network device, and a control system for receiving power and cooling information from the remote network device and controlling delivery of the power and cooling.
In another embodiment, an interface module generally comprises a first interface for coupling with a cable connector of a cable comprising an electrical wire for carrying power, an optical fiber for carrying data, and a cooling tube for carrying coolant, a second interface for coupling with a network device, power contacts for transferring power between the cable and the network device at the first interface, a cooling path for cooling components in the interface module, and sensors for monitoring power and cooling at the interface module. Monitoring information is provided to a control system for controlling power, data, and cooling at the interface module when coupled to the cable and the network device.
Further understanding of the features and advantages of the embodiments described herein may be realized by reference to the remaining portions of the specification and the attached drawings.
The following description is presented to enable one of ordinary skill in the art to make and use the embodiments. Descriptions of specific embodiments and applications are provided only as examples, and various modifications will be readily apparent to those skilled in the art. The general principles described herein may be applied to other applications without departing from the scope of the embodiments. Thus, the embodiments are not to be limited to those shown, but are to be accorded the widest scope consistent with the principles and features described herein. For purpose of clarity, details relating to technical material that is known in the technical fields related to the embodiments have not been described in detail.
In conventional Power over Ethernet (PoE) systems used to simultaneously transmit power and data communications, power is delivered over the same twisted pair cable used for data. These systems are limited in range from a few meters to about 100 meters. The maximum power delivery capacity of standard PoE is approximately 100 Watts, but many classes of powered devices would benefit from power delivery of 1000 Watts or more. In conventional systems, when a greater distance is needed, fiber optic cabling is used to deliver data, and when higher power delivery ratings are needed, power is supplied to a remote device through a local power source.
As previously noted, it is desirable to increase the power available over multi-function cables to hundreds and even thousands of watts. This capability may enable many new choices in network deployments where major devices such as workgroup routers, multi-socket servers, large displays, wireless access points, fog nodes, or other devices are operated over multi-function cables. This capability would greatly decrease installation complexity and improve the total cost of ownership of a much wider set of devices that have their power and data connectivity needs met from a central hub.
Beyond the data and power supply capabilities noted above, there is also a need for cooling. For high-powered devices, especially those with high thermal density packaging or total dissipation over a few hundred Watts, traditional convection cooling methods may be inadequate. This is particularly apparent where special cooling challenges are present, such as with a device that is sealed and cannot rely on drawing outside air (e.g., all-season outdoor packaging), a hermetically sealed device (e.g., used in food processing or explosive environments), a device for which fan noise is a problem (e.g., office or residential environments), or any combination of the above along with extreme ambient temperature environments. In these situations, complex and expensive specialized air cooling systems are often used.
In order to overcome the above issues, PoE may be augmented to allow it to carry higher data rates, higher power delivery, and integrated thermal management cooling combined into a single cable, as described, for example, in U.S. patent application Ser. No. 15/910,203 (“Combined Power, Data, and Cooling Delivery in a Communications Network”), filed Mar. 2, 2018, which is incorporated herein by reference in its entirety. These connections may be point-to-point, such as from a central hub to one or more remote devices (e.g., full hub and spoke layout). In another example, a single combined function cable may be run most of the way to a cluster of powered devices and then split, as described, for example, in U.S. patent application Ser. No. 15/918,972 (“Splitting of Combined Delivery Power, Data, and Cooling in a Communications Network”), filed Mar. 12, 2018, which is incorporated herein by reference in its entirety.
In addition to cables that deliver the power, data, and cooling, and control systems operable to control that delivery, what is needed is an interface module at the network device to deliver the combined power, data, and cooling from the PSE (Power Sourcing Equipment) and to receive the power, data, and cooling at the PD (Powered Device).
The embodiments described herein provide an interface module incorporating wires for power, optical fibers for data, and coolant paths (pipes, tubes) for cooling, for use in delivery of power, data, and cooling from a PSE (Power Sourcing Equipment) or receiving power, data, and cooling at a PD (Powered Device). In one or more embodiments, an optical transceiver module may be configured to deliver (or receive) power and cooling along with the optical data. The interface module may include one or more sensors, monitors, valves, or controllers for use in monitoring and controlling the power, data, and cooling.
Referring now to the drawings, and first to
The network is configured to provide power (e.g., power greater than 100 Watts), data (e.g., optical data), and cooling (liquid, gas, or multi-phase coolant) from a central network device 10 to a plurality of remote network devices 12 (e.g., switches, routers, servers, access points, computer peripherals, IoT devices, fog nodes, or other electronic components and devices). Signals may be exchanged among communications equipment and power transmitted from power sourcing equipment (PSE) (e.g., central hub 10) to powered devices (PDs) (e.g., remote communications devices 12).
In one or more embodiments, a system, referred to herein as a PoE (Power over Ethernet)+Fiber+Cooling (PoE+F+C) system, provides high power energy delivery, fiber delivered data, and cooling within a single cable. As described in detail below, the PoE+F+C system delivers combined power, data, and cooling to a network (e.g., switch/router system) configured to receive power, data, and cooling over a cabling system comprising optical fibers, electrical wires (e.g., copper wires), and coolant tubes connected to the network devices 10, 12 through an interface module 13. The PoE+F+C system may include a control system that receives input from sensors located throughout the system for detecting and managing faults or dangerous conditions and controlling delivery of power, data, and cooling.
The PoE+F+C system (power, data, and cooling system) allows high power devices to be located in remote locations, extreme temperature environments, or noise sensitive environments, with their cooling requirements met through the same cable that carries data and power. The use of a single cable for all interconnect features needed by a remote device greatly simplifies installation and ongoing operation of the network and network devices.
The network may include any number or arrangement of network devices (e.g., switches, access points, routers, or other devices operable to route (switch, forward) data communications). The remote devices 12 may be located at distances greater than 100 meters (e.g., 1 km, 10 km, or any other distance) from the central hub 10, and/or operate at greater power levels than 100 Watts (e.g., 250 Watts, 1000 Watts, or any other power level). The remote devices 12 may also be in communication with one or more other devices (e.g., fog node, IoT device, sensor, and the like). In one or more embodiments, a redundant central hub (not shown) may provide backup or additional power, bandwidth, or cooling, as needed in the network. In this case, the remote network device 12 would include another interface module 13 for connection with another cable 14 delivering power, data, and cooling from the redundant central hub.
As previously noted, the network may also include one or more splitting devices (not shown) to allow the network to go beyond point-to-point topologies and build passive stars, busses, tapers, multi-layer trees, etc. In this case, a single long PoE+F+C cable would run to a conveniently located intermediary splitter device (e.g., passive splitter) servicing a cluster of physically close endpoint devices (remote network devices, remote communications devices). One or more control systems for the power, data, and cooling may interact between the central hub 10, the remote devices 12, and their interface modules 13 to ensure that each device receives its fair share of each resource.
In the example shown in
The central hub 10 may be operable to provide high capacity power from an internal power system (e.g., PSU 15 capable of delivering 5 kW, 100 kW, or more, and driving the plurality of devices 12, each in the 100-3000 W range). The PSU 15 may provide, for example, PoE, pulsed power, DC power, or AC power. The central hub 10 (PSE (Power Sourcing Equipment)) is operable to receive power from a source external to the communications network and transmit the power, along with data and cooling, to the remote network devices (PDs (Powered Devices)) 12. The central hub 10 may comprise, for example, a router, convergence device, access device, or any other suitable network device operable to deliver power, data, and cooling. As described in detail below, the central hub 10 provides control logic for the cooling loop, as well as the power and data transport functions of the combined cable 14. Additional components and functions of the central hub 10 are described below with respect to
Cables 14 extending from the central hub 10 to the remote network devices 12 are configured to transmit power, data, and cooling in a single cable (combined cable, multi-function cable, multi-use cable, hybrid cable). The cables 14 may be formed from any material suitable to carry electrical power, data (e.g., copper, fiber), and coolant (liquid, gas, or multi-phase) and may carry any number of electrical wires, optical fibers, and cooling tubes in any arrangement.
The interface module 13 (also referred to herein as an optical transceiver, optical module, data/power/cooling interface module, or PoE+F+C interface module) couples the network devices 10, 12 to the cables 14 for delivery of the combined power, data, and cooling. In one or more embodiments, the interface module 13 comprises an optical transceiver modified to incorporate power and coolant components to deliver power and cooling through the optical transceiver. For example, the interface module 13 may comprise an optical transceiver modified along with a connector system to incorporate electrical (copper) wires to deliver power through the optical transceiver and coolant lines to deliver cooling from the central hub 10 to the remote network devices 12 for use by the remote network devices. The interface module 13 allows power to be delivered to the remote network devices 12 in locations where standard power is not available and provides cooling for use in cooling higher power devices (e.g., greater than 100 W). As described below, the interface module 13 may be configured to tap some of the energy and make intelligent decisions so that the power source 10 knows when it is safe to increase power on the wires without damaging the system or endangering an operator. Details of the interface module 13 in accordance with one embodiment are described below with respect to
Internet of Things (IoT) applications such as remote sensors/actuators and fog computing may also take advantage of the greater reach and power delivery capacity of this system. For example, one or more of the network devices 12 may deliver power using PoE or USB to electronic components such as IP (Internet Protocol) cameras, VoIP (Voice over IP) phones, video cameras, point-of-sale devices, security access control devices, residential devices, building automation devices, industrial automation, factory equipment, lights (building lights, streetlights), traffic signals, and many other electrical components and devices. With an extended reach (e.g., one to ten km), all power to communications equipment throughout a building or across a neighborhood may be delivered from one source, along with the communications link for the equipment, thereby providing a user with complete control of the location of communications equipment without the 100 m limitation of traditional PoE.
In one embodiment, one or more of the network devices 12 may comprise dual-role power ports that may be selectively configurable to operate as a PSE (Power Source Equipment) port to provide power to a connected device or as a PD (Powered Device) port to sink power from the connected device, and enable the reversal of energy flow under system control, as described in U.S. Pat. No. 9,531,551 (“Dynamically Configurable Power-Over-Ethernet Apparatus and Method”, issued Dec. 27, 2016), for example. The dual-role power ports may be PoE or PoE+F ports, enabling them to negotiate their selection of either PoE or higher power PoE+F in order to match the configuration of the ports on line cards 16 with the corresponding ports on each remote network device 12, for example.
In one or more embodiments, there is no need for additional electrical wiring for the communications network and all of the network devices operate using the power provided by the PoE+F+C system. In other embodiments, in addition to the remote communications devices 12 configured to receive power, data, and cooling from the central hub 10, the network may also include conventional network devices that only process and transmit data. These network devices receive electrical power from a local power source such as a wall outlet. Similarly, one or more of the network devices may eliminate the data interface and only interconnect power (e.g., moving the data interconnection to wireless networks). Also, one or more devices may be configured to receive only power and data, or only power and cooling, for example.
It is to be understood that the network devices and topology shown in
As described in detail below, the interface module 13a, 13b comprises an electrical interface for delivering or receiving power for powering the network device 12, an optical transceiver for transmitting or receiving data comprising optical communications signals, and a fluid interface for delivering or receiving cooling. The interface module 13a, 13b may include one or more sensors 17a, 17b for monitoring power and cooling and providing monitoring information to a control system operable to control delivery of the power, data, and cooling in the PoE+F+C system. In the example shown in
The central hub 10 includes a power distribution module 20 for receiving power from a power grid, network interface 21 for receiving data from and transmitting data to a network (e.g., Internet), and a heat exchanger 22 for fluid communication with a cooling plant. The power distribution module 20 provides power to a power supply module 23 at the remote device 12. The network interface 21 at the central hub 10 is in communication with network interface 24 at the remote device 12. The heat exchanger 22 at the central hub 10 forms a cooling loop with one or more heat sinks 25 at the remote device 12. The central hub 10 may provide control logic for the cooling loop, as well as the power and data transport functions of the combined cable 14, as described below. One or more of the components shown at the central hub 10 and remote device 12 (e.g., sensors 17a, 17b, valve 17c, network interface 24, heat sink 25) may be located within the interface module, as described below with respect to
In the example shown in
The central hub 10 maintains a source of low-temperature coolant that is sent through distribution plumbing (such as a manifold), through the interface module 13a and connector 29a, and down the coolant supply line 28a of the cable 14 to the remote device 12. The connector 29b at the other end of the cable 14 is coupled to the interface module 13b, and the supply coolant is routed through elements inside the device 12, such as heat sinks 25 and heat exchangers, that remove heat. The warmed coolant may be aggregated through a return manifold and returned to the central hub 10 through the device's interface module 13b, connector 29b, and the return tube 28b in the cable 14. The cable 14 returns the coolant to the central hub 10 via connector 29a and interface module 13a, where the return coolant passes through the heat exchanger 22 to remove the heat from the cooling loop to an external cooling plant, and the cycle repeats.
The heat exchanger 22 may be a liquid-liquid heat exchanger, with the heat transferred to chilled water or a cooling tower circuit, for example. The heat exchanger 22 may also be a liquid-air heat exchanger, with fans provided to expel the waste heat to the atmosphere. The hot coolant returning from the cable 14 may be monitored by sensor 17a for temperature, pressure, and flow. Once the coolant has released its heat, it may pass back through a pump 19 and sensor 17a, and then be sent back out to the cooling loop. One or more variable-speed pumps 19 may be provided at the central hub 10 or remote device 12 to circulate the fluid around the cooling loop. The coolant may comprise, for example, water, antifreeze, liquid or gaseous refrigerants, or mixed-phase coolants (partially changing from liquid to gas along the loop).
In an alternate embodiment, only a single coolant tube is provided within the cable 14 and high pressure air (e.g., supplied by a central compressor with an intercooler) is used as the coolant. When the air enters the remote device 12, it is allowed to expand and/or impinge directly on heat dissipating elements inside the device. Cooling may be accomplished by forced convection via mass flow of the air and additional temperature reduction may be provided via a Joule-Thomson effect as the high pressure air expands to atmospheric pressure. Once the air has completed its cooling tasks, it can be exhausted to the atmosphere outside the remote device 12 via a series of check valves and mufflers (not shown).
In cold environments the coolant may be supplied above ambient temperature to warm the remote device 12. This can be valuable where remote devices 12 are located in cold climates or in cold parts of industrial plants, and the devices have cold-sensitive components such as optics or disk drives. This may be more energy efficient than providing electric heaters at each device, as is used in conventional systems.
The cooling loops from all of the remote devices 12 may be isolated from one another or be intermixed through a manifold and a large central heat exchanger for overall system thermal efficiency. The central hub 10 may also include one or more support systems to filter the coolant, supply fresh coolant, adjust anti-corrosion chemicals, bleed air from the loops, or fill and drain loops as needed for installation and maintenance of the cables 14 and remote devices 12.
The interface modules 13a, 13b are configured to interface with the cable connectors 29a, 29b at the central hub 10 and remote device 12 for transmitting and receiving power, data, and cooling. In one embodiment, the connectors 29a, 29b carry power, fiber, and coolant in the same connector body. The connectors 29a, 29b are preferably configured to mate and de-mate (couple, uncouple) easily by hand or robotic manipulator. In order to prevent coolant leakage when the cable 14 is uncoupled from the central hub 10 or remote device 12, the connectors 29a, 29b and interface modules 13a, 13b preferably include valves (e.g., quick disconnects) (not shown) that automatically shut off flow into and out of the cable, and into and out of the network device. In one or more embodiments, the interface module 13a, 13b may be configured to allow connection sequencing and feedback to occur. For example, electrical connections may not be made until a verified sealed coolant loop is established. The cable connectors 29a, 29b may also include visual or tactile evidence of whether a line is pressurized, thereby reducing the possibility of user installation or maintenance errors.
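For purposes of illustration only, the sketch below shows one way such connection sequencing could be gated in software, withholding the electrical connection until the coolant loop reports a verified seal. The class, method names, and pressure threshold are all invented for this example and are not taken from the disclosure.

```python
# Hypothetical connection sequencing: verify the coolant seal first, then power.
from dataclasses import dataclass

@dataclass
class CoolantLoop:
    supply_pressure_kpa: float
    return_pressure_kpa: float

    def seal_verified(self, max_loss_kpa=5.0):
        # A sealed loop should show little supply-to-return pressure loss.
        return (self.supply_pressure_kpa - self.return_pressure_kpa) <= max_loss_kpa

def sequence_connection(loop):
    if not loop.seal_verified():
        return "hold: coolant loop not sealed, electrical contacts stay open"
    return "proceed: close electrical contacts, then bring up data link"

print(sequence_connection(CoolantLoop(300.0, 297.5)))
```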
In one or more embodiments, a distributed control system comprising components located on the central hub's controller and on the remote device's processor may communicate over the fiber links 27 in the combined cable 14. One or more components of the control system may be located within the interface module 13a, 13b. For example, one or more sensors 17a, 17b, or valves 17c may be located within the interface module 13a, 13b, as described below with respect to
Monitoring information from power sensors 17b (e.g., current, voltage) or data usage (e.g., bandwidth, buffer/queue size) may also be used by the control system in managing cooling at the remote device 12. The control system may also use the monitoring information to allocate power and data.
As described in detail below with respect to
Machine learning may also be used within the control system to compensate for the potentially long response time between a change in coolant flow rate and the resulting change in the remote device's temperatures. The output of a control algorithm may be used to adjust the pumps 19 to move the correct volume of coolant to the device 12, and may also be used to adjust valves 17c within the remote device to direct different portions of the coolant to different internal heat sinks to properly balance the use of coolant among a plurality of thermal loads, as sketched below.
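As one illustration of this flow balancing, the following minimal Python sketch pairs a PI (proportional-integral) term that sets total pump flow from a temperature error with a proportional split of that flow across heat sinks. The function names, gains, and units are assumptions for illustration, not taken from the disclosure.

```python
# Hypothetical sketch: PI control of pump flow plus proportional valve split.
def pump_flow(temp_c, setpoint_c, integral, kp=0.05, ki=0.01, dt=0.1):
    error = temp_c - setpoint_c            # positive error -> device too warm
    integral += error * dt
    flow_lpm = max(0.0, kp * error + ki * integral)  # clamp: no negative flow
    return flow_lpm, integral

def valve_split(loads_w):
    """Fraction of total flow routed to each heat sink, by thermal load."""
    total = sum(loads_w) or 1.0
    return [w / total for w in loads_w]

flow, i = pump_flow(temp_c=62.0, setpoint_c=55.0, integral=0.0)
print(flow, valve_split([150.0, 75.0, 25.0]))
```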
The control system may also include one or more safety features. For example, the control system may instantly stop the coolant flow and begin a purge cycle if the coolant flow leaving the central hub 10 does not closely match the flow received at the remote devices 12, which may indicate a leak in the system. The control system may also shut down a remote device if an internal temperature exceeds a predetermined high limit or open relief valves if pressure limits in the coolant loop are exceeded. The system may also predictively detect problems in the cooling system such as a pressure rise caused by a kink in the cable 14, reduction in thermal transfer caused by corrosion of heat sinks 25, or impending bearing failures in the pump 19, before they become serious. The cable's jacket may also include two small sense conductors for use in identifying a leak in the cooling system. If a coolant tube develops a leak, the coolant within the jacket causes a signal to be passed between these conductors, and a device such as a TDR (Time-Domain Reflectometer) at the central hub 10 may be used to locate the exact position of the cable fault, thereby facilitating repair.
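A hedged sketch of these interlocks follows, assuming invented threshold values and function names (the disclosure does not specify them): a supply/return flow mismatch beyond tolerance triggers a stop-and-purge, over-temperature shuts the device down, and over-pressure opens the relief valves.

```python
# Hypothetical safety interlocks: leak detection by supply/return flow
# mismatch, over-temperature shutdown, and over-pressure relief.
def safety_check(flow_out_lpm, flow_in_lpm, temp_c, pressure_kpa,
                 max_temp_c=85.0, max_pressure_kpa=500.0, leak_tolerance=0.05):
    actions = []
    if abs(flow_out_lpm - flow_in_lpm) > leak_tolerance * flow_out_lpm:
        actions += ["stop coolant flow", "begin purge cycle"]   # probable leak
    if temp_c > max_temp_c:
        actions.append("shut down remote device")
    if pressure_kpa > max_pressure_kpa:
        actions.append("open relief valves")
    return actions or ["normal operation"]

print(safety_check(flow_out_lpm=2.00, flow_in_lpm=1.80,
                   temp_c=70.0, pressure_kpa=480.0))
```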
All three utilities (power, data, cooling) provided by the combined cable 14 may interact with the control system to keep the system safe and efficient. For example, sensor 17b located in the power supply 23 of the remote device 12 may be used to notify the central hub 10 when it is safe to increase power on the wires to the remote device without damaging the system or endangering an operator.
In one or more embodiments, the interface module 13b at the remote network device 12 may use a small amount of power at startup to communicate its power, data, and cooling requirements. The powered device 12 may then configure itself accordingly for full power operation. In one example, power type, safety operation of the module, data rates, and cooling capabilities are negotiated between the central hub 10 and network device 12 through data communications signals on optical fiber 27. The interface module 13b communicates any operational fault, including the loss of data, back to the central hub 10. Such a fault may result in power immediately being turned off. Full power supply may not be reestablished until the powered device is able to communicate back in low power mode that higher power may be safely applied.
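The negotiation might look like the following sketch, in which the module advertises its requirements over the optical link while running on a small startup power budget. The dictionary keys and capacity figures are hypothetical, and this is not the negotiated protocol itself.

```python
# Hypothetical low-power startup negotiation (illustrative only).
def negotiate(requirements, hub_capacity):
    # Startup runs on a small maintenance power budget; full power is
    # withheld until every advertised requirement fits hub capacity.
    for key in ("power_w", "data_gbps", "cooling_w"):
        if requirements[key] > hub_capacity[key]:
            return f"stay in low-power mode: {key} request exceeds capacity"
    return "full power enabled"

print(negotiate({"power_w": 800, "data_gbps": 40, "cooling_w": 750},
                {"power_w": 1000, "data_gbps": 100, "cooling_w": 900}))
```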
Initial system modeling and characterization may be used to provide expected power, flow properties, and thermal performance operating envelopes, which may provide an initial configuration for new devices and a reference for setting system warning and shut-down limits. This initial characteristic envelope may be improved and fine-tuned over time heuristically through machine learning and other techniques. If the system detects additional power flow in power conductors 26 (e.g., due to a sudden increase in CPU (Central Processing Unit) load in remote device 12), the control system may proactively increase coolant flow in anticipation of an impending increase in heat sink 25 temperature, even before the temperature sensors 17a register it. This interlock between the various sensors and control systems helps to improve the overall responsivity and stability of the complete system.
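This feed-forward behavior can be sketched as a flow boost proportional to the observed step in electrical power; the gain and baseline values below are illustrative assumptions only.

```python
# Hypothetical feed-forward term: pre-boost coolant flow on a power step,
# before the heat sink temperature has had time to rise.
def feedforward_flow(baseline_flow_lpm, power_w, prev_power_w,
                     gain_lpm_per_kw=0.5):
    delta_kw = max(0.0, (power_w - prev_power_w) / 1000.0)
    return baseline_flow_lpm + gain_lpm_per_kw * delta_kw

print(feedforward_flow(baseline_flow_lpm=1.5, power_w=1200.0, prev_power_w=400.0))
```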
In one or more embodiments, the central hub 10 may utilize control algorithms that know what proportion of bandwidth and power is being used by each of the remote devices 12, and use this data to predict each device's energy and cooling needs. This may be used to ensure that the cooling and power capabilities remain in balance with each remote device's needs and are fairly allocated across the network. As previously noted, machine learning techniques may be employed to automatically establish system characteristic response times, thereby improving power and cooling control loops heuristically over time.
In one or more embodiments, the central hub 10 may periodically (e.g., at least tens of times per second or any other suitable interval) receive multiple sensor readings associated with all of the remote devices 12. These readings may include, for example, current and voltage measurements at both the hub 10 and remote devices 12 for the power, transmit and receive queue sizes at both central hub 10 and remote device 12 for the data channel, and temperature, pressure, and flow readings at both ends of the coolant distribution tubes 28a, 28b. The controller may perform detailed control loop calculations to determine set-points (settings) for the various control actuators (pumps, valves, power control device (timeslot allocation), bandwidth controller (bandwidth allocation)) in the system. These calculations may be assisted through the use of artificial intelligence or machine learning techniques, as previously described. The calculations preferably take into account the many interactions between data, power, and cooling for each of the remote devices, and also the complex interactions and potential instabilities between devices sharing a loop or between multiple devices and loops sharing central hub 10. The results of the calculations may be used to actuate control devices in the distribution system operable to recalculate an interleave pattern for power packets, recalculate a passive optical network timeslot allocation, or modify the coolant pump 19 and valve 17c setting for one or more of the remote devices 12. The data channel 27 may be used to provide closed-loop communication paths between the sensors, central control algorithms, and actuators.
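The following skeleton illustrates the shape of this periodic cycle (read sensors, compute setpoints, drive actuators), with placeholder callables standing in for the hub's real sensor sources and control math; none of the names or values comes from the disclosure.

```python
# Hypothetical control-cycle skeleton, run tens of times per second.
import time

def control_cycle(read_sensors, compute_setpoints, actuators, hz=20.0, cycles=3):
    period = 1.0 / hz
    for _ in range(cycles):
        readings = read_sensors()          # V/I, queue sizes, temp/pressure/flow
        for name, value in compute_setpoints(readings).items():
            actuators[name](value)         # pumps, valves, timeslots, bandwidth
        time.sleep(period)

# Toy wiring showing the loop's shape only:
control_cycle(
    read_sensors=lambda: {"temp_c": 60.0},
    compute_setpoints=lambda r: {"pump_lpm": 1.8 + 0.02 * (r["temp_c"] - 55.0)},
    actuators={"pump_lpm": lambda v: print(f"pump set to {v:.2f} L/min")},
)
```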
As previously noted, the cable 14 may comprise various configurations of power conductors, optical fiber, and coolant tubes. These components, along with one or more additional components that may be used to isolate selected elements from each other, manage thermal conductivity between the elements, provide thermal paths, or provide protection and strength, are contained within an outer jacket of the cable. The coolant tubes may have various cross-sectional shapes and arrangements, which may yield more space and thermally efficient cables. Supply and return tube wall material thermal conductivity may be adjusted to optimize overall system cooling.
The cable 14 may also be configured to prevent heat loss through supply-return tube-tube conduction, external environment conduction, coolant tube-power conduction, or any combination of these or other conditions. For example, a thermal isolation material may be located between coolant tubes to prevent heat loss. The thermal isolation material may also be placed between the coolant tubes and the outer jacket. In another embodiment, one or both coolant tubes may be provided with a low thermal impedance path to the outside. Thermal paths may also be provided between the power conductors and one of the coolant tubes to use some of the cooling power of the loop to keep the power conductors in the cables cool.
In one or more embodiments, in order to reduce fluid frictional effects, tube interiors may be treated with hydrophobic coatings and the coolant may include surfactants. Also, the supply and return coolant tubes 28a, 28b may be composed of materials having different conductive properties so that the complete cable assembly may be thermally tuned to enhance system performance. It is to be understood that the configuration, arrangement, and number of power wires 26, optical fibers 27, coolant tubes 28a, 28b, and insulation regions, conduction regions, sense conductors, shields, coatings, or layers described herein are only examples and that other configurations or arrangements may be used without departing from the scope of the embodiments.
The network device 30 may include any number of processors 32 (e.g., single or multi-processor computing device or system), which may communicate with a forwarding engine or packet forwarder operable to process a packet or packet header. The processor 32 may receive instructions from a software application or module, which causes the processor to perform functions of one or more embodiments described herein. The processor 32 may also operate one or more components of the control system 33. The control system (controller) 33 may comprise components (modules, code, software, logic) located at the central hub 10 and the remote device 12, and interconnected through the combined cable 14 (
Memory 34 may be a volatile memory or non-volatile storage, which stores various applications, operating systems, modules, and data for execution and use by the processor 32. For example, components of the interface module 38, control logic for cooling components 35, or other parts of the control system 33 (e.g., code, logic, or firmware, etc.) may be stored in the memory 34. The network device 30 may include any number of memory components.
Logic may be encoded in one or more tangible media for execution by the processor 32. For example, the processor 32 may execute code stored in a computer-readable medium such as memory 34. The computer-readable medium may be, for example, electronic (e.g., RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable read-only memory)), magnetic, optical (e.g., CD, DVD), electromagnetic, semiconductor technology, or any other suitable medium. In one example, the computer-readable medium comprises a non-transitory computer-readable medium. Logic may be used to perform functions such as power level negotiations, safety subsystems, or thermal control, as described herein. The network device 30 may include any number of processors 32.
The interfaces 36 may comprise any number of interfaces (e.g., power, data, and fluid connectors, line cards, ports, combined power, data, and cooling connectors) for receiving power, data, and cooling, or transmitting power, data, and cooling to other devices. A network interface may be configured to transmit or receive data using a variety of different communications protocols and may include mechanical, electrical, and signaling circuitry for communicating data over physical links coupled to the network. One or more of the interfaces 36 may be configured for PoE+F+C, PoE+F, PoE, PoF (Power over Fiber), or similar operation. As described below, one or more interfaces 36 may be incorporated into the interface module 38 or communicate therewith.
The PoE+F+C interface module 38 may comprise hardware or software for use in power detection, power monitor and control, or power enable/disable, as described below. The interface module 38 may further comprise one or more of the processor or memory components, or interfaces. For example, the interface module 38 may comprise an electrical interface for delivering power from the PSE or receiving power at the PD, an optical interface for receiving or transmitting optical communications signals comprising data and control signals, and a fluid interface for receiving and delivering coolant.
In one or more embodiments, the interface module 38 comprises a PoE+F+C optical module (e.g., optical transceiver module configured for receiving (or delivering) power from power supply 37, data to or from network interface 36, and receiving (or delivering) cooling at cooling components 35), as previously described. Details of an interface module 38 in accordance with one embodiment are described below with respect to
It is to be understood that the network device 30 shown in
The network device 40 includes optical/electrical components 41 and power components, including power detection modules 42a, 42b, power monitor and control modules 43, and power enable/disable modules 44. Although PoE and pulse power are described in conjunction with detection elements 42a, 42b, it should be understood that other power delivery schemes including AC, DC, and USB may be supported with similar elements. The power components may be isolated from the optical components 41 via an isolation component (e.g., isolation material or element), which electromagnetically isolates the power circuit from the optical components to prevent interference with operation of the optics. In the example shown in
The power monitor and control modules 43 continuously monitor power delivery to ensure that the system can support the needed power delivery and that no safety limits (voltage, current) are exceeded. The power monitor and control modules 43 may also monitor optical signaling and disable power if there is a lack of optical transitions or communication with the power source. Temperature, pressure, or flow sensors 47, 50 may also provide input to the power monitor and control modules 43 so that power may be disabled if the temperature at the network device 40 exceeds a specified limit. The power monitor and control function may sense the voltage and current flow and report these readings to the central control function. As previously described, the network device 40 may use a small amount of power at startup to communicate its power, data, and cooling requirements. The network device 40 may then be configured for full power operation (e.g., at high power enable/disable module 44). If a fault is detected, full power supply may not be established until the network device communicates in low power mode that high power can be safely applied. A sketch of this gating logic follows.
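The sketch below combines the electrical, optical, and thermal conditions described above into a single enable/disable decision; the limit values are illustrative assumptions, as the actual limits would be implementation specific.

```python
# Hypothetical power-gating check for the monitor-and-control function.
def power_enabled(voltage_v, current_a, optical_active, temp_c,
                  max_v=380.0, max_a=5.0, max_temp_c=85.0):
    if voltage_v > max_v or current_a > max_a:
        return False        # electrical safety limit exceeded
    if not optical_active:
        return False        # no optical transitions -> drop power
    if temp_c > max_temp_c:
        return False        # over-temperature at the device
    return True

print(power_enabled(350.0, 2.1, optical_active=True, temp_c=60.0))
```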
Cooling is supplied to the network device 40 via cooling (coolant) tubes in a cooling (coolant) loop 48, which provides cooling to the powered equipment through a cooling tap (heat sink, heat exchanger) 46, 53 and returns warm (hot) coolant to the central hub. The network device 40 may also include a number of components for use in managing the cooling. The cooling loop 48 within the network device 40 may include any number of sensors 47, 50 for monitoring aggregate and individual branch temperature, pressure, and flow rate at strategic points around the loop (e.g., entering and leaving the device, at critical component locations). The sensor 47 may be used, for example, to check that the remote device 40 receives approximately the same amount of coolant as supplied by the central hub to help detect leaks or blockage in the cable, and confirm that the temperature and pressure are within specified limits.
Distribution plumbing routes the coolant in the cooling loop 48 to various thermal control elements within the network device 40 to actively regulate cooling through the individual flow paths. For example, a distribution manifold 51 may be included in the network device 40 to route the coolant to the cooling tap 46 and heat exchanger 53. If the manifold 51 has multiple outputs, each may be equipped with a valve 52 (manual or servo controlled) to regulate the individual flow paths. For simplification,
The distribution manifold 51 may comprise any number of individual manifolds (e.g., supply and return manifolds) to provide any number of cooling branches directed to one or more components within the network device 40. Also, the cooling loop 48 may include any number of pumps 49 or valves 52 to control flow in each branch of the cooling loop. This flow may be set by an active feedback loop that senses the temperature of a critical thermal load (e.g., die temperature of a high power semiconductor), and continuously adjusts the flow in the loop that serves the heat sink or heat exchanger 53. The pump 49 and valve 52 may be controlled by the control system and operate based on control logic received from the central hub in response to monitoring at the network device 40.
One or more of the components shown in
It is to be understood that the network device 40 shown in
In one embodiment, the interface module 55 includes an optical transceiver (optical module, optical device, optics module, transceiver, silicon photonics optical transceiver) configured to source or receive power and data, as described in U.S. patent application Ser. No. 15/707,976 (“Power Delivery Through an Optical System”, filed Sep. 18, 2017), incorporated herein by reference in its entirety. As described below, the optical transceiver module is further modified to deliver and receive cooling. The transceiver module operates as an engine that bidirectionally converts optical signals to electrical signals, or more generally as an interface between the network element and the copper wire or optical fiber. In one or more embodiments, the optical transceiver may be a pluggable transceiver module in any form factor (e.g., SFP (Small Form-Factor Pluggable), QSFP (Quad Small Form-Factor Pluggable), CFP (C Form-Factor Pluggable), and the like), and may support data rates up to 400 Gbps, for example. Hosts for these pluggable optical modules include line cards 16 on the central hub 10 or network devices 12 (
The interface module (optical transceiver) 55 may also be configured for operation with AOC (Active Optical Cable) and form factors used in UWB (Ultra-Wideband) applications, including, for example, Ultra HDMI (High-Definition Multimedia Interface), serial high bandwidth cables (e.g., Thunderbolt), and other form factors. Also, the optical module 55 may be configured for operation in point-to-multipoint or multipoint-to-point topology. For example, QSFP may break out to SFP+. One or more embodiments may be configured to allow for load shifting. In one or more embodiments, the interface module 55 comprises a silicon photonics optical transceiver modified to source power or receive power, and deliver or receive cooling.
Referring now to
The interface module 64 comprises a first interface for coupling with the cable connector 65 and a second interface for coupling with the network device 67 through PCB (printed circuit board) 68, which may be located on a line card at the network device. In the example shown in
As previously noted, the interface module 64 may comprise a modified version of an optical transceiver module and optical module cage (e.g., modified SFP+ optical transceiver). In one or more embodiments, the interface module 64 is configured to fit within a standard optical module cage and footprint. In one embodiment, a first portion 77a of a housing (optical/power module) of the interface module 64 may generally correspond to a standard optical module cage, and a second portion 77b of the housing (cooling module) has a similar form factor to the optical module cage and may be situated immediately above or adjacent to the first portion of the housing on the circuit board 68.
The cooling is preferably maintained in a separate portion of the housing from the power and data. For example, as shown in
The power contacts (e.g., pulse power contacts) 74 are provided to integrate high power energy distribution with the fiber optical signals. For example, heavy power conductors 70 may terminate on the optical ferrule contacts 82, which in turn deliver energy from the PCB through the cage, and onto the module for the central hub 10 (
In one or more embodiments, contact points 74 for power within the interface module 64 may be configured as described in U.S. patent application Ser. No. 15/707,976, referenced above. For example, the contact points 74 may comprise metalized barrels around optical ferrules 82 of standard optical connectors (e.g., LC type). Contacts associated with the interface module 64 may connect these barrels to the optical module cage, which in turn connects to the circuit board 68. Heavy copper wires 70 within the combined cable 66 may be field terminated to the other end of the connector interface using similar operations to terminating the fibers, for example.
Two large manifolds 83 may be used at the central hub to supply chilled coolant to the plurality of interface modules 64 and return warmed coolant from the interface modules. The coolant supply and return channels 76 are tapped into the manifolds 83 for each of the interface modules. In one embodiment, the coolant flows through a pair of motorized ball valves 78 (or other suitable valves) to precisely control the flow in each combined cable 66 (or to the network device 67). In this example, the ball valves 78 are activated by a motor 80 and quadrant worm gear 81 that is operable to adjust a position of the valve from fully open (e.g., zero degree (horizontal as viewed in
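Assuming the zero-to-ninety-degree valve travel described above, a toy mapping from a desired flow fraction to valve angle might look like the sketch below. The linear flow model is a deliberate simplification for illustration, since real ball valves have a markedly nonlinear flow curve.

```python
# Hypothetical ball-valve positioning: 0 degrees = fully open, 90 = closed.
def valve_angle_deg(flow_fraction):
    """flow_fraction: 1.0 = full flow (0 deg), 0.0 = shut off (90 deg)."""
    f = min(1.0, max(0.0, flow_fraction))
    return 90.0 * (1.0 - f)        # naive linear model for illustration

for f in (1.0, 0.5, 0.0):
    print(f, valve_angle_deg(f))
```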
As shown in
The interface module 64 at the remote network device may also be configured to access cooling pipes 76 delivering cooling to various parts of the powered device 67, including using cooling inside the module for specific TOSA (Transmitter Optical Subassembly), ROSA (Receiver Optical Subassembly), laser, or other component cooling requirements, as well as the ability to apply cooling to the module cage assembly for full module cooling as needed. Similar to the host module, cooling may be integrated within the optical module cage (housing) to allow for advanced cooling of the internal components of the optical module.
Although the method and apparatus have been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations made to the embodiments without departing from the scope of the embodiments. Accordingly, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.