Combined power, data, and cooling delivery in a communications network

Information

  • Patent Grant
  • Patent Number
    11,093,012
  • Date Filed
    Friday, March 2, 2018
  • Date Issued
    Tuesday, August 17, 2021
Abstract
In one embodiment, a method includes delivering power, data, and cooling from a central network device to a plurality of remote communications devices over cables connecting the central network device to the remote communications devices, each of the cables carrying said power, data, and cooling, and receiving, at the central network device, power and thermal data from the remote communications devices based on monitoring of power and cooling at the remote communications devices. The remote communications devices are powered by the power and cooled by the cooling delivered from the central network device. An apparatus is also disclosed herein.
Description
TECHNICAL FIELD

The present disclosure relates generally to communications networks, and more particularly, to power, data, and cooling delivery in a communications network.


BACKGROUND

Network devices such as computer peripherals, network access points, and IoT (Internet of Things) devices may have both their data connectivity and power needs met over a single combined function cable. Examples of technologies that provide this function are USB (Universal Serial Bus) and PoE (Power over Ethernet). In conventional PoE systems, power is delivered over the same cables used for data, at ranges from a few meters to about one hundred meters. When a greater distance is needed or fiber optic cables are used, power is typically supplied through a local power source such as a wall outlet due to limitations in capacity, reach, and cable loss in conventional PoE. Today's PoE systems also have limited power capacity, which may be inadequate for many classes of devices. If the available power over combined function cables is increased, traditional convection cooling methods may be inadequate for high powered devices.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an example of a network in which embodiments described herein may be implemented.



FIG. 2 illustrates the network of FIG. 1 with a redundant central hub.



FIG. 3 illustrates an example of power, data, and cooling delivery from a central hub to a remote device in the network of FIG. 1.



FIG. 4 depicts an example of a network device useful in implementing embodiments described herein.



FIG. 5 is a block diagram illustrating power and cooling monitoring and control at the remote device, in accordance with one embodiment.



FIG. 6A is a cross-sectional view of a composite cable, in accordance with one embodiment.



FIG. 6B is a cross-sectional view of a composite cable, in accordance with another embodiment.



FIG. 6C is a cross-sectional view of a composite cable, in accordance with yet another embodiment.



FIG. 7 is a flowchart illustrating an overview of a process for combined power, data, and cooling delivery in a communications network, in accordance with one embodiment.





Corresponding reference characters indicate corresponding parts throughout the several views of the drawings.


DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview


In one embodiment, a method generally comprises delivering power, data, and cooling from a central network device to a plurality of remote communications devices over cables connecting the central network device to the remote communications devices, each of the cables carrying said power, data, and cooling, and receiving, at the central network device, power and thermal data from the remote communications devices based on monitoring of power and cooling at the remote communications devices. The remote communications devices are powered by the power and cooled by the cooling delivered from the central network device.


In another embodiment, an apparatus generally comprises a connector for connecting the apparatus to a cable delivering power, data, and cooling to the apparatus, the connector comprising an optical interface for receiving optical communications signals, an electrical interface for receiving power for powering the apparatus, and a fluid interface for receiving coolant. The apparatus further comprises a cooling loop for cooling electrical components of the apparatus with the coolant and a monitoring system for monitoring the cooling loop and providing feedback to a central network device delivering the power, data, and cooling to the apparatus over the cable.


In yet another embodiment, an apparatus generally comprises a connector for connecting the apparatus to a cable delivering power, data, and cooling to a plurality of remote communications devices, the connector comprising an optical interface for delivering optical communications signals, an electrical interface for delivering power for powering the remote communications devices, and a fluid interface for delivering cooling to the remote communications devices. The apparatus further comprises a control system for modifying delivery of the cooling to the remote communications devices based on feedback received from the remote communications devices.


Further understanding of the features and advantages of the embodiments described herein may be realized by reference to the remaining portions of the specification and the attached drawings.


Example Embodiments


The following description is presented to enable one of ordinary skill in the art to make and use the embodiments. Descriptions of specific embodiments and applications are provided only as examples, and various modifications will be readily apparent to those skilled in the art. The general principles described herein may be applied to other applications without departing from the scope of the embodiments. Thus, the embodiments are not to be limited to those shown, but are to be accorded the widest scope consistent with the principles and features described herein. For purposes of clarity, details relating to technical material that is known in the technical fields related to the embodiments have not been described in detail.


In conventional Power over Ethernet (PoE) systems used to simultaneously transmit power and data communications, power is delivered over the same twisted pair cable used for data. These systems are limited in range, from a few meters to about 100 meters. The maximum power delivery capacity of standard PoE is approximately 100 Watts, but many classes of powered devices would benefit from power delivery of 1000 Watts or more. In conventional systems, when a greater distance is needed, fiber optic cabling is used to deliver data, and when higher power ratings are needed, power is supplied to the remote device through a local power source.


As previously noted, it is desirable to increase the power available over multi-function cables to hundreds and even thousands of watts. This capability may enable many new choices in network deployments where major devices such as workgroup routers, multi-socket servers, large displays, wireless access points, or fog nodes are operated over multi-function cables. This capability would greatly decrease installation complexity and improve the total cost of ownership of a much wider set of devices that have their power and data connectivity needs met from a central hub.


Beyond the data and power supply capabilities noted above, there is also a need for cooling. For high-powered devices, especially those with high thermal density packaging or total dissipation over a few hundred Watts, traditional convection cooling methods may be inadequate. This is particularly apparent where special cooling challenges are present, such as with a device that is sealed and cannot rely on drawing outside air (e.g., all-season outdoor packaging), a hermetically sealed device (e.g., used in food processing or explosive environments), a device where fan noise is a problem (e.g., office or residential environments), or any combination of the above along with extreme ambient temperature environments. In these situations, complex and expensive specialized cooling systems are often used.


The embodiments described herein provide cooling capability along with data and power, thereby significantly enhancing the functionality of multi-function cables. In one or more embodiments, a cable system, referred to herein as PoE+Fiber+Cooling (PoE+F+C), provides high power energy delivery, fiber delivered data, and cooling within a single cable. The PoE+F+C system allows high power devices to be located in remote locations, extreme temperature environments, or noise sensitive environments, with their cooling requirements met through the same cable that carries data and power. As described in detail below, coolant flows through the cable carrying the power and data to remote communications devices to provide a single multi-use cable that serves all of the functions that a high power node would need, including cooling. This use of a single cable for all interconnect functions required by a remote device can greatly simplify installation and ongoing operation of the device.


Referring now to the drawings, and first to FIG. 1, an example of a network in which embodiments described herein may be implemented is shown. For simplification, only a small number of nodes are shown. The embodiments operate in the context of a data communications network including multiple network devices. The network may include any number of network devices in communication via any number of nodes (e.g., routers, switches, gateways, controllers, access points, or other network devices), which facilitate passage of data within the network. The network devices may communicate over or be in communication with one or more networks (e.g., local area network (LAN), metropolitan area network (MAN), wide area network (WAN), virtual private network (VPN) (e.g., Ethernet virtual private network (EVPN), layer 2 virtual private network (L2VPN)), virtual local area network (VLAN), wireless network, enterprise network, corporate network, data center, Internet of Things (IoT), optical network, Internet, intranet, or any other network).


The network is configured to provide power (e.g., power greater than 100 Watts), data (e.g., optical data), and cooling from a central network device 10 to a plurality of remote network devices 12 (e.g., switches, routers, servers, access points, computer peripherals, Internet of Things (IoT) devices, fog nodes, or other electronic components and devices). Signals may be exchanged among communications equipment and power transmitted from power sourcing equipment (e.g., central hub 10) to powered devices (e.g., remote communications devices 12). As described in detail below, the PoE+F+C system delivers power, data, and cooling to a network (e.g., switch/router system) configured to receive data, power, and cooling over a cabling system comprising optical fibers, electrical wires (e.g., copper wires), and coolant tubes.


As shown in the example of FIG. 1, the PoE+F+C system comprises the central hub 10 in communication with the remote devices 12 via a plurality of cables 14, each cable configured for delivering power, data, and cooling. The central hub 10 may be in communication with any number of remote devices 12. For example, the central hub 10 may serve anywhere from a few remote devices 12 to hundreds of remote devices (or any number in between). The remote devices 12 may also be in communication with one or more other devices (e.g., fog node, IoT device, sensor, and the like). The network may include any number or arrangement of network communications devices (e.g., switches, access points, routers, or other devices operable to route (switch, forward) data communications). The remote devices 12 may be located at distances greater than 100 meters (e.g., 1 km, 10 km, or any other distance), and/or operate at greater power levels than 100 Watts (e.g., 250 Watts, 1000 Watts, or any other power level). In one or more embodiments, there is no need for additional electrical wiring for the communications network and all of the network communications devices operate using the power provided by the PoE+F+C system.


One or more network devices may also deliver power to equipment using PoE. For example, one or more of the network devices 12 may deliver power using PoE to electronic components such as IP (Internet Protocol) cameras, VoIP (Voice over IP) phones, video cameras, point-of-sale devices, security access control devices, residential devices, building automation devices, industrial automation, factory equipment, lights (building lights, streetlights), traffic signals, and many other electrical components and devices.


In the example shown in FIG. 1, the central hub 10 comprises a power supply unit (PSU) (power distribution module) 15 for receiving power (e.g., building power from a power grid, renewable energy source, generator or battery), a network interface (e.g., fabric, line cards) 16 for receiving data from or transmitting data to a network (e.g., Internet), and a heat exchanger 18 in fluid communication with a cooling plant.


The central hub 10 may be operable to provide high capacity power from an internal power system (e.g., a PSU providing 5 kW or more (e.g., 10 kW, 12 kW, 14 kW, 16 kW), or a PSU providing 100 W or more (e.g., 500 W, 1 kW) of useable power, or any other suitable power capacity). The PSU 15 may provide, for example, PoE, pulsed power, DC power, or AC power. The central hub 10 (PSE (Power Sourcing Equipment)) is operable to receive power from a source external to the communications network and transmit the power, along with data and cooling, over the cables 14 in the communications network to the remote network devices (PDs (Powered Devices)) 12. The central hub 10 may comprise, for example, a router, convergence device, or any other suitable network device operable to deliver power, data, and cooling. Additional components and functions of the central hub 10 are described below with respect to FIG. 3.


Cables 14 extending from the central hub 10 to the remote communications devices 12 are configured to transmit power, data, and cooling in a single cable (combined cable, multi-function cable, multi-use cable, hybrid cable). The cables 14 may be formed from any material suitable to carry electrical power, data (copper, fiber), and coolant (liquid, gas, or multi-phase) and may carry any number of electrical wires, optical fibers, and cooling tubes in any arrangement. Examples of cable configurations are shown in FIGS. 6A, 6B, 6C, and described below.


In one embodiment, power and data are received at an optical transceiver (optical module, optical device, optics module, transceiver, silicon photonics optical transceiver) configured to source or receive power, as described in U.S. patent application Ser. No. 15/707,976 (“Power Delivery Through an Optical System”, filed Sep. 18, 2017), incorporated herein by reference in its entirety. The transceiver module operates as an engine that bidirectionally converts optical signals to electrical signals or, more generally, as an interface between the network element and the copper wire or optical fiber. In one or more embodiments, the optical transceiver may be a pluggable transceiver module in any form factor (e.g., SFP (Small Form-Factor Pluggable), QSFP (Quad Small Form-Factor Pluggable), CFP (C Form-Factor Pluggable), and the like), and may support data rates up to 400 Gbps, for example. Hosts for these pluggable optical modules include line cards on the central hub 10 or network devices 12. The host may include a printed circuit board (PCB) and electronic components and circuits operable to interface telecommunications lines in a telecommunications network. The host may be configured to perform one or more operations and receive any number or type of pluggable transceiver modules configured for transmitting and receiving signals.


The optical transceiver may also be configured for operation with AOC (Active Optical Cable) and form factors used in UWB (Ultra-Wideband) applications, including, for example, Ultra HDMI (High-Definition Multimedia Interface), serial high bandwidth cables (e.g., Thunderbolt), and other form factors. Also, it may be noted that the optical transceivers may be configured for operation in point-to-multipoint or multipoint-to-point topology. For example, a QSFP may break out to SFP+. One or more embodiments may be configured to allow for load shifting.


In one embodiment, one or more network devices may comprise dual-role power ports that may be selectively configurable to operate as a PSE (Power Source Equipment) port to provide power to a connected device or as a PD (Powered Device) port to sink power from the connected device, and enable the reversal of energy flow under system control, as described in U.S. Pat. No. 9,531,551 (“Dynamically Configurable Power-Over-Ethernet Apparatus and Method”, issued Dec. 27, 2016), for example. The dual-role power ports may be PoE or PoE+F ports, for example, enabling them to negotiate their selection of, for example, either PoE or higher power PoE+F in order to match the configurations of the ports on line cards 16 with the corresponding ports on each remote network device 12.


In addition to the remote communications devices 12 configured to receive power, data, and cooling from the central hub 10, the network may also include conventional network devices that only process and transmit data. These network devices receive electrical power from a local power source such as a wall outlet. Similarly, one or more network devices may eliminate the data interface, and only interconnect power (e.g., moving data interconnection to wireless networks). Also, one or more devices may be configured to receive only power and data, or only power and cooling, for example.



FIG. 2 illustrates an example of a redundant PoE+F+C system. Fault tolerance is a concern for critical remote devices. Redundant connections for power and data are needed to protect against the failure of a central hub, its data connections to the Internet, or primary power supplies. If the coolant flow stops, or the supplied coolant is too hot, a remote device's high power components could exceed their safe operating temperature in just a few seconds. The network shown in the example of FIG. 2 provides backup power, data, and cooling in case of failure of the central hub 10a or any single cable. Critical remote network devices 12 may have two combined cables 14a, 14b serving them, as shown in FIG. 2. Each cable 14a, 14b may home on an independent central hub 10a, 10b, with each central hub providing data, power, and cooling. In very critical applications, cables 14a and 14b may be routed using different physical paths to each remote network device 12, so mechanical damage at one point along the cable route will not interrupt the data, power, or coolant to the remote device.


In one embodiment, each heat sink or heat exchanger at the remote device 12 (shown in FIG. 3 and described below) comprises two isolated fluid channels, each linked to one of the redundant central hubs 10a, 10b. If the coolant flow stops from one hub, the other hub may supply enough coolant (e.g., throttled up by a control system described below) to keep the critical components operational. Isolation is essential to prevent a loss-of-pressure incident in one fluid loop from also affecting the pressure in the redundant loop.
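
To make the failover behavior concrete, the sketch below shows one way the throttling logic might look. It is a minimal illustration with assumed flow thresholds and function names; the disclosure does not specify an algorithm.

```python
# Hypothetical sketch of the redundant-loop failover described above.
# The flow threshold and the even split are illustrative assumptions,
# not values taken from the disclosure.

LOW_FLOW_LPM = 0.1  # below this, a loop is treated as failed (assumed)

def balance_redundant_loops(flow_a_lpm: float, flow_b_lpm: float,
                            required_lpm: float) -> dict:
    """Return target flow set points for the two isolated fluid channels."""
    a_ok = flow_a_lpm > LOW_FLOW_LPM
    b_ok = flow_b_lpm > LOW_FLOW_LPM
    if a_ok and b_ok:
        # Normal operation: split the thermal load across both hubs.
        return {"hub_a": required_lpm / 2, "hub_b": required_lpm / 2}
    if a_ok:
        # Hub B lost: throttle hub A up to carry the full load.
        return {"hub_a": required_lpm, "hub_b": 0.0}
    if b_ok:
        return {"hub_a": 0.0, "hub_b": required_lpm}
    # Both loops lost: shed load before components overheat.
    return {"hub_a": 0.0, "hub_b": 0.0, "shutdown": True}
```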


The cable's jacket may include two small sense conductors for use in identifying a leak in the cooling system. If a coolant tube develops a leak, the coolant within the jacket causes a signal to be passed between these conductors, and a device such as a TDR (Time-Domain Reflectometer) at the central hub 10a, 10b may be used to locate the exact position of the cable fault, thereby facilitating repair.
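
The TDR fault location follows from the round-trip time of the reflection: distance = (velocity factor × c × t) / 2. A minimal sketch, assuming a typical velocity factor of about 0.7 for an insulated copper pair:

```python
# Locating a coolant leak from a TDR measurement on the sense conductors.
# The 0.7 velocity factor is a typical value for insulated copper pairs
# (an assumption; the actual factor depends on the cable dielectric).

C_M_PER_S = 299_792_458  # speed of light in vacuum

def fault_distance_m(round_trip_s: float, velocity_factor: float = 0.7) -> float:
    """Distance to the cable fault from the reflection's round-trip time."""
    return velocity_factor * C_M_PER_S * round_trip_s / 2

# Example: a reflection returning after 10 microseconds places the fault
# roughly 1 km down the cable.
print(f"{fault_distance_m(10e-6):.0f} m")  # ~1049 m
```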


In one or more embodiments, the central hubs 10a, 10b may provide additional power, bandwidth, or cooling as needed in the network. Both circuits 14a, 14b may be used simultaneously to feed an equipment power circuit, providing higher power capability. Similarly, redundant data fibers may provide higher network bandwidth, and redundant coolant loops may provide higher cooling capacity. The control systems (described below) manage failures and revert the data, power, and cooling to lower levels if necessary. In another example, redundant central hubs 10a, 10b may form a dual-star topology.


It is to be understood that the network devices and topologies shown in FIGS. 1 and 2, and described above are only examples and the embodiments described herein may be implemented in networks comprising different network topologies or a different number, type, or arrangement of network devices, without departing from the scope of the embodiments. For example, the network may comprise any number or type of network communications devices that facilitate passage of data over the network (e.g., routers, switches, gateways, controllers), network elements that operate as endpoints or hosts (e.g., servers, virtual machines, clients), and any number of network sites or domains in communication with any number of networks. Thus, network nodes may be used in any suitable network topology, which may include any number of servers, virtual machines, switches, routers, or other nodes interconnected to form a large and complex network, which may include cloud or fog computing. For example, the PoE+F+C system may be used in a fog node deployment in which computation, networking, and storage are moved from the cloud to locations much closer to IoT sensors and actuators. The fog nodes may provide power to PoE devices such as streetlights, traffic signals, 5G cells, access points, base stations, video cameras, or any other electronic device serving a smart building, smart city, or any other deployment. Multiple branching topologies (not shown) may be supported, where, for example, a central hub provides PoE+F+C cables to a plurality of intermediate hubs, which divide the power, data, and cooling capabilities to further PoE+F+C cables that serve the remote network devices.



FIG. 3 schematically illustrates the cable 14 transmitting power, data, and cooling from the central hub 10 to the remote device 12, in accordance with one embodiment. In this example, the central hub 10 includes a power distribution module 30 for receiving power from a power grid, network interface 31 for receiving data from and transmitting data to a network (e.g., Internet), and a heat exchanger 32 for fluid communication with a cooling plant. The power distribution module 30 provides power to a power supply module 33 at the remote device 12. The network interface 31 at the central hub 10 is in communication with the network interface 34 at the remote device 12. The heat exchanger 32 at the central hub 10 forms a cooling loop with one or more heat sinks 35 at the remote device 12. The central hub 10 may provide control logic for the cooling loop, as well as the power and data transport functions of the combined cable 14, as described below.


In the example shown in FIG. 3, the cable 14 includes two power lines (conductors) 36, two data lines (optical fibers) 37, and two coolant tubes (supply 38a and return 38b) coupled to connectors 39a and 39b located at the central hub 10 and remote device 12, respectively. The closed coolant loop is established through the two coolant tubes 38a, 38b that share the same combined cable jacket with the fibers 37 that provide bidirectional data connectivity to the network and conductors 36 that provide power from the power grid.


In one or more embodiments, various sensors 28a monitor aggregate and individual branch coolant temperatures, pressures, and flow rates at strategic points around the loop. Other sensors 28b monitor the current and voltage of the power delivery system at either end of power conductors 36. One or more valves may be used to control the amount of cooling delivered to the remote device 12 based upon its instantaneous needs, as described below. The coolant may comprise, for example, water, antifreeze, liquid or gaseous refrigerants, or mixed-phase coolants (partially changing from liquid to gas along the loop).


The central hub 10 maintains a source of low-temperature coolant that is sent through distribution plumbing (such as a manifold), through the connector 39a, and down the coolant supply line 38a of the cable 14 to the remote device 12. The connector 39b on the remote device 12 is coupled to the cable 14, and the supply coolant is routed through elements inside the device such as heat sinks 35 and heat exchangers that remove heat (described further below with respect to FIG. 5). The warmed coolant may be aggregated through a return manifold and returned to the central hub 10 out of the device's connector 39b and through the return tube 38b in the cable 14. The cable 14 returns the coolant to the central hub 10, where the return coolant passes through the heat exchanger 32 to remove the heat from the coolant loop to an external cooling plant, and the cycle repeats. The heat exchanger 32 may be a liquid-liquid heat exchanger, with the heat transferred to chilled water or a cooling tower circuit, for example. The heat exchanger 32 may also be a liquid-air heat exchanger, with fans provided to expel the waste heat to the atmosphere. The hot coolant returning from the cable 14 may be monitored by sensor 28a for temperature, pressure, and flow. Once the coolant has released its heat, it may pass back through a pump 29 and sensor 28a, and then be sent back out to the cooling loop. One or more variable-speed pumps 29 may be provided at the central hub 10 or remote device 12 to circulate the fluid around the cooling loop.
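
The heat-removal capacity of such a loop follows from Q = ṁ·cp·ΔT. The sketch below checks the sizing for a water loop; the flow rate and supply-to-return temperature rise are illustrative assumptions, not figures from the disclosure.

```python
# Back-of-the-envelope check of the loop's heat-removal capacity,
# Q = m_dot * c_p * dT. Water properties are standard; the flow rate and
# temperature rise are illustrative assumptions.

CP_WATER_J_PER_KG_K = 4186.0   # specific heat of water
RHO_WATER_KG_PER_L = 1.0       # density of water

def heat_removed_w(flow_lpm: float, delta_t_k: float) -> float:
    """Heat carried away by a water loop at the given flow and temperature rise."""
    m_dot_kg_per_s = flow_lpm * RHO_WATER_KG_PER_L / 60.0
    return m_dot_kg_per_s * CP_WATER_J_PER_KG_K * delta_t_k

# Example: 2 L/min with a 10 K supply-to-return rise absorbs ~1.4 kW,
# comfortably covering a 1000 W remote device.
print(f"{heat_removed_w(2.0, 10.0):.0f} W")  # ~1395 W
```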


In an alternate embodiment, only a single coolant tube is provided within the cable 14, and high pressure air (e.g., supplied by a central compressor with an intercooler) is used as the coolant. When the air enters the remote device 12, it is allowed to expand and/or impinge directly on heat dissipating elements inside the device. Cooling may be accomplished by forced convection via the mass flow of the air and additional temperature reduction may be provided via a Joule-Thomson effect as the high pressure air expands to atmospheric pressure. Once the air has completed its cooling tasks, it can be exhausted to the atmosphere outside the remote device 12 via a series of check valves and mufflers (not shown).
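
The Joule-Thomson contribution can be estimated from ΔT ≈ μJT·ΔP. A rough sketch, assuming a textbook coefficient of about 0.2 K/bar for air near room temperature (the real coefficient varies with temperature and pressure):

```python
# Rough estimate of the extra temperature drop from Joule-Thomson expansion
# in the single-tube compressed-air variant. mu_JT ~ 0.2 K/bar is a typical
# textbook value for air near room temperature (an assumption).

MU_JT_K_PER_BAR = 0.2

def jt_temperature_drop_k(supply_bar: float, ambient_bar: float = 1.0) -> float:
    """Approximate cooling from expanding high-pressure air to atmosphere."""
    return MU_JT_K_PER_BAR * (supply_bar - ambient_bar)

# Example: expanding 10 bar supply air to atmosphere adds roughly a 2 K drop
# on top of the forced-convection cooling from the mass flow itself.
print(f"{jt_temperature_drop_k(10.0):.1f} K")  # ~1.8 K
```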


In cold environments the coolant may be supplied above ambient temperature to warm the remote device 12. This can be valuable where remote devices 12 are located in cold climates or in cold parts of industrial plants, and the devices have cold-sensitive components such as optics or disk drives. This may be more energy efficient than providing electric heaters at each device, as is used in conventional systems.


The cooling loops from all of the remote devices 12 may be isolated from one another or be intermixed through a manifold and a large central heat exchanger for overall system thermal efficiency. The central hub 10 may also include one or more support systems to filter the coolant, supply fresh coolant, adjust anti-corrosion chemicals, bleed air from the loops, or fill and drain loops as needed for installation and maintenance of cables 14 and remote devices 12.


The connectors 39a and 39b at the central hub 10 and remote device 12 are configured to mate with the cable 14 for transmitting and receiving power, data, and cooling. In one embodiment, the connectors 39a, 39b carry power, fiber, and coolant in the same connector body. The connectors 39a, 39b are preferably configured to mate and de-mate (couple, uncouple) easily by hand or robotic manipulator.


In order to prevent coolant leakage when the cable 14 is uncoupled from the central hub 10 or remote device 12, the coolant lines 38a, 38b and connectors 39a, 39b preferably include valves (not shown) that automatically shut off flow into and out of the cable, and into and out of the device or hub. In one or more embodiments, the connector 39a, 39b may be configured to allow connection sequencing and feedback to occur. For example, electrical connections may not be made until a verified sealed coolant loop is established. The cable connectors 39a, 39b may also include visual or tactile evidence of whether a line is pressurized, thereby reducing the possibility of user installation or maintenance errors.


In one or more embodiments, a distributed control system comprising components located on the central hub's controller and on the remote device's processor may communicate over the fiber links 37 in the combined cable 14. The sensors 28a at the central hub 10 and remote device 12 may be used in the control system to monitor temperature, pressure, or flow. Servo valves or variable speed pumps 29 may be used to ensure the rate of coolant flow matches requirements of the remote thermal load. As previously described, temperature, pressure, and flow sensors 28a may be used to measure coolant characteristics at multiple stages of the cooling loop (e.g., at the inlet of the central hub 10 and inlet of the remote device 12) and a subset of these sensors may also be strategically placed at outlets and intermediate points. The remote device 12 may include, for example, temperature sensors to monitor die temperatures of critical semiconductors, temperatures of critical components (e.g., optical modules, disk drives), or the air temperature inside a device's sealed enclosure. The control system may monitor the remote device's internal temperatures and adjust the coolant flow to maintain a set point temperature. This feedback system ensures the correct coolant flow is always present. Too much coolant flow will waste energy, while too little coolant flow will cause critical components in the remote device 12 to overheat.
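
A minimal sketch of such a set-point feedback loop is shown below as a PI controller that trims the coolant flow toward a die-temperature set point. The gains, flow limits, and update rate are illustrative assumptions; the disclosure does not prescribe a specific control law.

```python
# Minimal PI sketch of the set-point feedback described above. Gains,
# clamps, and the 1 s update interval are illustrative assumptions.

class CoolantFlowController:
    def __init__(self, setpoint_c: float, kp: float = 0.05, ki: float = 0.01,
                 min_lpm: float = 0.2, max_lpm: float = 5.0):
        self.setpoint_c = setpoint_c
        self.kp, self.ki = kp, ki
        self.min_lpm, self.max_lpm = min_lpm, max_lpm
        self.integral = 0.0

    def update(self, die_temp_c: float, dt_s: float = 1.0) -> float:
        """Return a new flow set point from the latest temperature sample."""
        error = die_temp_c - self.setpoint_c   # positive when too hot
        self.integral += error * dt_s
        flow = self.kp * error + self.ki * self.integral
        # Clamp: too much flow wastes pumping energy, too little overheats.
        return max(self.min_lpm, min(self.max_lpm, flow))

controller = CoolantFlowController(setpoint_c=75.0)
print(controller.update(die_temp_c=82.0))  # hotter than set point -> more flow
```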


Machine learning may also be used within the control system to compensate for the potentially long response time between a change in coolant flow rate and the remote device's temperature response. The output of a control algorithm may be used to adjust the pumps 29 to move the correct volume of coolant to the device 12, and may also be used to adjust valves within the remote device to direct different portions of the coolant to different internal heat sinks to properly balance the use of coolant among a plurality of thermal loads.


The control system may also include one or more safety features. For example, the control system may instantly stop the coolant flow and begin a purge cycle if the coolant flow leaving the central hub 10 does not closely match the flow received at the remote devices 12, which may indicate a leak in the system. The control system may also shut down a remote device if an internal temperature exceeds a predetermined high limit or open relief valves if pressure limits in the coolant loop are exceeded. The system may also predictively detect problems in the cooling system such as a pressure rise caused by a kink in the cable 14, reduction in thermal transfer caused by corrosion of heat sinks 35, or impending bearing failures in pump 29, before they become serious.
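
The flow-conservation check might look like the following sketch, where a mismatch between supplied and received flow triggers the purge. The 5% tolerance is an assumed value for illustration.

```python
# Sketch of the flow-conservation safety check described above: if the flow
# leaving the hub does not closely match the flow arriving at the remote
# device, treat it as a leak, stop the flow, and purge. The tolerance is
# an illustrative assumption.

LEAK_TOLERANCE = 0.05  # fractional mismatch treated as a leak (assumed)

def check_for_leak(hub_out_lpm: float, device_in_lpm: float) -> bool:
    """True if supplied and received flow disagree by more than the tolerance."""
    if hub_out_lpm <= 0.0:
        return device_in_lpm > 0.0  # flow with no supply is itself a fault
    mismatch = abs(hub_out_lpm - device_in_lpm) / hub_out_lpm
    return mismatch > LEAK_TOLERANCE

if check_for_leak(hub_out_lpm=2.00, device_in_lpm=1.72):
    print("Leak suspected: stopping coolant flow and starting purge cycle")
```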


All three utilities (power, data, cooling) provided by the combined cable 14 may interact with the control system to keep the system safe and efficient. For example, sensors 28b may be located in the power distribution module 30 of the central hub and power supply 33 of the remote device 12. Initial system modeling and characterization may be used to provide expected power, flow properties, and thermal performance operating envelopes, which may provide an initial configuration for new devices and a reference for setting system warning and shut-down limits. This initial characteristic envelope may be improved and fine-tuned over time heuristically through machine learning and other techniques. If the system detects additional power flow in power conductors 36 (e.g., due to a sudden CPU load increase in the remote device 12), the control system may proactively increase coolant flow in anticipation of an impending increase in heat sink 35 temperature, even before the temperature sensors register it. This interlock between the various sensors and control systems helps to improve the overall responsiveness and stability of the complete system.
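
One way to express this feed-forward interlock is sketched below: the measured electrical load sets an anticipatory coolant flow target before any temperature sensor responds. The flow-per-kilowatt constant is an illustrative assumption consistent with the Q = ṁ·cp·ΔT sizing above.

```python
# Feed-forward sketch of the power/cooling interlock described above: a step
# increase in measured electrical load raises the coolant flow target before
# the heat sinks warm up. The proportionality constant is an assumption.

LPM_PER_KW = 1.4  # assumed flow needed per kW of dissipation (10 K water rise)

def feedforward_flow_lpm(measured_power_w: float, baseline_lpm: float = 0.2) -> float:
    """Flow target anticipated from the instantaneous power drawn by the device."""
    return baseline_lpm + LPM_PER_KW * (measured_power_w / 1000.0)

# A CPU load spike from 300 W to 900 W immediately raises the target flow,
# before any temperature sensor registers the change.
print(feedforward_flow_lpm(300.0), "->", feedforward_flow_lpm(900.0))
```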



FIG. 4 illustrates an example of a network device 40 (e.g., central hub 10, remote device 12 in FIG. 3) that may be used to implement the embodiments described herein. In one embodiment, the network device 40 is a programmable machine that may be implemented in hardware, software, or any combination thereof. The network device 40 includes one or more processors 42, control system 43, memory 44, cooling components (pumps, valves, sensors) 45, and interfaces (electrical, optical, fluid) 46. In one or more embodiments, the network device 40 may include a PoE+F optical module 48 (e.g., an optical module configured to receive both power from power supply 47 and data).


The network device 40 may include any number of processors 42 (e.g., single or multi-processor computing device or system), which may communicate with a forwarding engine or packet forwarder operable to process a packet or packet header. The processor 42 may receive instructions from a software application or module, which causes the processor to perform functions of one or more embodiments described herein. The processor 42 may also operate one or more components of the control system 43. The control system (controller) 43 may comprise components (modules, code, software, logic) located at the central hub 10 and remote device 12, and interconnected through the combined cable 14 (FIGS. 1 and 4). The cooling components 45 may include any number of sensors and actuators within the cooling loop to provide input to the control system 43 and react to its commands.


Memory 44 may be a volatile memory or non-volatile storage, which stores various applications, operating systems, modules, and data for execution and use by the processor 42. For example, components of the optical module 48, control logic for cooling components 45, or other parts of the control system 43 (e.g., code, logic, or firmware, etc.) may be stored in the memory 44. The network device 40 may include any number of memory components.


Logic may be encoded in one or more tangible media for execution by the processor 42. For example, the processor 42 may execute code stored in a computer-readable medium such as memory 44. The computer-readable medium may be, for example, electronic (e.g., RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable read-only memory)), magnetic, optical (e.g., CD, DVD), electromagnetic, semiconductor technology, or any other suitable medium. In one example, the computer-readable medium comprises a non-transitory computer-readable medium. Logic may be used to perform one or more functions described below with respect to the flowchart of FIG. 7 or other functions such as power level negotiations, safety subsystems, or thermal control, as described herein. The network device 40 may include any number of processors 42.


The interfaces 46 may comprise any number of interfaces (e.g., power, data, and fluid connectors, line cards, ports, combined connectors 39a, 39b for connecting to cable 14 in FIG. 3) for receiving data, power, and cooling, or transmitting data, power, and cooling to other devices. A network interface may be configured to transmit or receive data using a variety of different communications protocols and may include mechanical, electrical, and signaling circuitry for communicating data over physical links coupled to the network or wireless interfaces. One or more of the interfaces 46 may be configured for PoE+F+C, PoE+F, PoE, PoF, or similar operation.


The optical module 48 may comprise hardware or software for use in power detection, power monitor and control, or power enable/disable, as described below. The optical module 48 may further comprise one or more of the processor or memory components, or an interface for receiving power and optical data from the cable at a fiber connector, delivering power and signal data to the network device, or transmitting control signals to the power source, for example. Power may be supplied to the optical module by the power supply 47 and the optical module (e.g., PoE+F optical module) 48 may provide power to the rest of the components at the network device 40.


It is to be understood that the network device 40 shown in FIG. 4 and described above is only an example and that different configurations of network devices may be used. For example, the network device 40 may further include any suitable combination of hardware, software, algorithms, processors, devices, components, or elements operable to facilitate the capabilities described herein.



FIG. 5 is a block diagram illustrating PoE+F+C components at a remote device 50, in accordance with one embodiment. The system components provide for communication with the power source (e.g., network device 10 in FIG. 1) during power up of the powered device and may also provide fault protection and detection. The network device 50 includes optical/electrical components 51 for receiving optical data and converting it to electrical signals (or converting electrical signals to optical data) and power components including power detection module 52, power monitor and control unit 53, and power enable/disable module 54. The power components 52, 53, 54 may be isolated from the optical components 51 via an isolation component (e.g., isolation material or element), which electromagnetically isolates the power circuit from the optical components to prevent interference with operation of the optics.


The power detection module 52 may detect power, energize the optical components 51, and return a status message to the power source. A return message may be provided via state changes on the power wires or over the optical channel. In one embodiment, the power is not enabled by the power enable/disable module 54 until the optical transceiver and the source have determined that the device is properly connected and the network device to be powered is ready to be powered. In one embodiment, the device 50 is configured to calculate available power and prevent the cabling system from being energized when it should not be powered (e.g., during cooling failure). The power detection module 52 may also be operable to detect the type of power applied to the device 50, determine if PoE or pulsed power is a more efficient power delivery method, and then use the selected power delivery mode once the power is enabled. Additional modes may support other power+data standards (e.g., USB (Universal Serial Bus)).
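
A hypothetical sketch of this sequencing and mode selection follows. The ordering (verify the coolant loop, then enable power) comes from the text above; the 100 W crossover between PoE and pulsed power is an assumed placeholder, since the efficiency comparison is left open.

```python
# Hypothetical sketch of the power-up sequencing and mode selection described
# above. The selection rule between PoE and pulsed power is an assumed
# placeholder, not a rule taken from the disclosure.

def negotiate_power(coolant_loop_ok: bool, requested_power_w: float) -> str:
    """Return the power delivery mode to enable, or refuse to energize."""
    if not coolant_loop_ok:
        return "disabled"          # never energize during a cooling failure
    if requested_power_w <= 100.0:
        return "poe"               # standard PoE covers low-power loads
    return "pulsed"                # assumed more efficient at high power

print(negotiate_power(coolant_loop_ok=True, requested_power_w=950.0))   # pulsed
print(negotiate_power(coolant_loop_ok=False, requested_power_w=950.0))  # disabled
```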


The power monitor and control device 53 continuously monitors power delivery to ensure that the system can support the needed power delivery, and that no safety limits (voltage, current) are exceeded. The power monitor and control device 53 may also monitor optical signaling and disable power if there is a lack of optical transitions or communication with the power source. Temperature, pressure, or flow sensors 57, 60 may also provide input to the power monitor and control module 53 so that power may be disabled if the temperature at the device 50 exceeds a specified limit.


Cooling is supplied to the device 50 via cooling (coolant) tubes in a cooling (coolant) loop 58, which provides cooling to the powered equipment through a cooling tap (heat sink, heat exchanger) 56, 59 and returns warm (hot) coolant to the central hub. The network device 50 may also include a number of components for use in managing the cooling. The cooling loop 58 within the network device 50 may include any number of sensors 57, 60 for monitoring aggregate and individual branch temperature, pressure, and flow rate at strategic points around the loop (e.g., entering and leaving the device, at critical component locations). The sensor 57 may be used, for example, to check that the remote device 50 receives approximately the same amount of coolant as supplied by the central hub to help detect leaks or blockage in the cable, and confirm that the temperature and pressure are within specified limits.


Distribution plumbing routes the coolant in the cooling loop 58 to various thermal control elements within the network device 50 to actively regulate cooling through the individual flow paths. For example, a distribution manifold 55 may be included in the network device 50 to route the coolant to the cooling tap 56 and heat exchanger 59. If the manifold has multiple outputs, each may be equipped with a valve 62 (manual or servo controlled) to regulate the individual flow paths. Thermal control elements may include liquid cooled heatsinks, heat pipes, or other devices directly attached to the hottest components (CPUs (Central Processing Units), GPUs (Graphic Processing Units), power supplies, optical components, etc.) to directly remove their heat. The network device 50 may also include channels in cold plates or in walls of the device's enclosure to cool anything they contact. Air-to-liquid heat exchangers, which may be augmented by a small internal fan, may be provided to cool the air inside a sealed box. Once the coolant passes through these elements and removes the device's heat, it may pass through additional temperature, pressure, or flow sensors, through another manifold, and out to the coolant return tube. In the example shown in FIG. 5, the cooling system includes a pump 61 operable to help drive the coolant around the cooling loop 58 or back to the central hub.


The distribution manifold 55 may comprise any number of individual manifolds (e.g., supply and return manifolds) to provide any number of cooling branches directed to one or more components within the network device 50. Also, the cooling loop 58 may include any number of pumps 61 or valves 62 to control flow in each branch of the cooling loop. This flow may be set by an active feedback loop that senses the temperature of a critical thermal load (e.g., die temperature of a high power semiconductor), and continuously adjusts the flow in the loop that serves the heat sink or heat exchanger 59. The pump 61 and valve 62 may be controlled by the control system and operate based on control logic received from the central hub in response to monitoring at the network device 50.
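
Per-branch balancing through the manifold valves might be sketched as below, weighting each branch's share of the total flow by how far its monitored component sits above its set point. The weighting scheme is an illustrative assumption, not a method specified in the disclosure.

```python
# Sketch of per-branch balancing through the manifold valves: each branch's
# share of the total flow is weighted by its excess temperature above set
# point. The weighting scheme is an illustrative assumption.

def balance_branches(total_lpm: float, temps_c: dict, setpoints_c: dict) -> dict:
    """Split total coolant flow across manifold branches by thermal demand."""
    # Excess temperature above set point drives each branch's demand;
    # a small floor keeps every branch wetted.
    demand = {k: max(temps_c[k] - setpoints_c[k], 0.1) for k in temps_c}
    total_demand = sum(demand.values())
    return {k: total_lpm * d / total_demand for k, d in demand.items()}

flows = balance_branches(
    total_lpm=2.0,
    temps_c={"cpu": 83.0, "optics": 61.0, "psu": 55.0},
    setpoints_c={"cpu": 75.0, "optics": 60.0, "psu": 55.0},
)
print(flows)  # hottest branch (cpu) receives most of the flow
```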


It is to be understood that the network device 50 shown in FIG. 5 is only an example and that the network device may include different components or arrangement of components, without departing from the scope of the embodiments. For example, the cooling system may include any number of pumps, manifolds, valves, heat sinks, heat exchangers, or sensors located in various locations within the coolant loop or arranged to cool various elements or portions of the device. Also, the network device 50 may include any number of power sensors or control modules operable to communicate with the control system at the central hub to optimize power delivery and cooling at the network device.



FIGS. 6A, 6B, and 6C illustrate three examples of multi-function cables 14 that may be used to carry utilities (power, data, and cooling) between the central hub 10 and the remote device 12 as shown in FIGS. 1, 2, and 3. The cable may be a few kilometers long or any other suitable length.


In the examples shown in FIGS. 6A, 6B, and 6C, the cable comprises optical fibers 65 for data (at least one in each direction for conventional systems, or at least one for bi-directional fiber systems), power conductors 66 (one for each polarity) (e.g., heavy stranded wires for pulsed power), coolant tubes 67 (at least one in each direction for liquid systems, or at least one for compressed air systems), and a protective shield 68. These components, along with one or more additional components that may be used to isolate selected elements from each other, manage thermal conductivity between the elements, or provide protection and strength, are contained within an outer jacket 64 of the cable.


The components may have various cross-sectional shapes and arrangements, as shown in FIGS. 6A-6C. For example, the coolant tubes 67 may be cylindrical in shape as shown in FIGS. 6A and 6C or have a semicircular cross-section, as shown in FIG. 6B. The coolant tubes 67 may also have more complex shaped cross-sections (e.g., “C” or “D” shape), which may yield more space-efficient and thermally efficient cables. The complex shaped coolant tube profiles may also include rounded corners to reduce flow head pressure loss. Supply and return tube wall material thermal conductivity may be adjusted to optimize overall system cooling.


The cable may be configured to prevent heat loss through supply-return tube-tube conduction, external environment conduction, coolant tube-power wire conduction, or any combination of these or other conditions, as described below.


Over a long cable, an unwelcome counter-flow heat exchange may be created as the coolant supply tube receives heat via internal conduction in the cable from the hotter coolant return tube, which tends to equalize the two temperatures along the length of the cable (referred to as supply-return tube-tube conduction). For example, the supply coolant may be so preheated by the return coolant flowing in the opposite direction that it is much less effective in cooling the remote device. In one embodiment, a thermal isolation material 69 located between the two coolant tubes 67 may be used to prevent undesirable heat conduction, as shown in FIGS. 6A, 6B, and 6C. The insulation material 69 may be, for example, a foamed elastomer or any other suitable material.


External cable temperatures may influence thermal energy flow into and out of the cable, potentially reducing system cooling effectiveness. Placement of the thermal isolator material 69 between the coolant tubes and the outer jacket 64 as shown in FIG. 6A may be used to control this flow. However, in some cases, it may be desired to deliberately provide one or both coolant tubes 67 with a low thermal impedance path to the outside, as shown in FIG. 6B. Regions 70 replace the thermal insulation with a thermally conductive material. This may be useful, for example, in buried or undersea cables where a linear ground coupled heat exchanger is created. Heat from the device is transferred by the circulating fluid to the ground, and reduced mechanical cooling is needed at the central hub.


A third mode of heat transfer that may be controlled by the design of the cable is between the power conductors and the coolant tubes. The cross-sectional size of the power conductors is preferably minimized to reduce volume, weight, and cost of copper and improve flexibility of the cable. However, smaller conductors have higher resistance, and I²R losses will heat the length of the cable (potentially hundreds of Watts in systems that deliver kilowatt levels of power over multi-kilometer distances). By providing thermally conductive paths inside the cable between the power conductors 66 and coolant tube 67, as depicted by regions 71 in FIG. 6C, some of the cooling power of the loop may be used to keep the power conductors in the cables cool. In this example, the conductive thermal paths 71 extend between the return coolant tube 67 and power conductors 66. The selective use of insulation and thermally conductive materials may be used to control conduction within the cable. Additionally, reflective materials and coatings (e.g., aluminized Mylar) may be applied to control radiative heat transfer modes, as shown by layer 72 in FIG. 6C.
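
A worked check of the I²R figure cited above, using the standard resistivity of copper; the conductor gauge, distance, and delivery voltage are illustrative assumptions:

```python
# Worked check of the I^2*R heating mentioned above. Copper resistivity is
# standard; the gauge, distance, and delivery voltage are assumptions.

RHO_CU_OHM_M = 1.72e-8  # resistivity of copper

def cable_loss_w(power_w: float, volts: float, length_m: float,
                 wire_area_mm2: float) -> float:
    """I^2*R loss in a two-conductor loop delivering power_w at volts."""
    current_a = power_w / volts
    loop_resistance = RHO_CU_OHM_M * (2 * length_m) / (wire_area_mm2 * 1e-6)
    return current_a ** 2 * loop_resistance

# Example: 1 kW delivered at 380 V over 1 km of 1.5 mm^2 conductors
# dissipates roughly 160 W along the cable, consistent with the
# "potentially hundreds of Watts" figure above.
print(f"{cable_loss_w(1000.0, 380.0, 1000.0, 1.5):.0f} W")  # ~159 W
```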


In one or more embodiments, in order to reduce fluid frictional effects, tube interiors may be treated with hydrophobic coatings and the coolant may include surfactants. Also, the supply and return coolant tubes 67 may be composed of materials having different conductive properties so that the complete cable assembly may be thermally tuned to enhance system performance.


It is to be understood that the configuration, arrangement, and number and size of power wires, fibers, coolant tubes, and insulation regions, shields, coatings, or layers shown in FIGS. 6A-6C are only examples and that other configurations may be used without departing from the scope of the embodiments.



FIG. 7 is a flowchart illustrating an overview of a process for delivering combined power, data, and cooling in a communications network, in accordance with one embodiment. At step 74, power, data, and cooling are delivered in the combined cable 14 from central network device 10 to a plurality of remote communications devices 12 (FIGS. 1 and 7). The central network device 10 receives power and thermal data from the remote devices over the cable, based on monitoring of power and cooling at the remote devices (step 76). The central network device 10 adjusts delivery of power and cooling as needed at the remote devices (step 78). The remote communications devices are powered by the power and cooled by the cooling delivered by the central network device, thereby eliminating the need for a separate power supply or external cooling.
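
Read as code, the loop of FIG. 7 might be sketched as follows from the central hub's point of view. The device interface and helper names are hypothetical stand-ins for the control system described above.

```python
# Hypothetical sketch of the FIG. 7 control loop at the central hub
# (steps 74, 76, 78). The RemoteDevice interface is an assumed stand-in;
# the disclosure does not define an API.

import time

def hub_control_loop(remote_devices, interval_s: float = 1.0):
    while True:
        for device in remote_devices:
            # Step 74: power, data, and cooling flow continuously over the cable.
            # Step 76: receive power and thermal telemetry from the device.
            telemetry = device.read_power_thermal_data()
            # Step 78: adjust power and cooling delivery as needed.
            device.adjust_power_delivery(telemetry)
            device.adjust_cooling_delivery(telemetry)
        time.sleep(interval_s)
```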


It is to be understood that the process shown in FIG. 7 is only an example of a process for delivering combined power, data, and cooling, and that steps may be added, removed, combined, or modified, without departing from the scope of the embodiments.


Although the method and apparatus have been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations made to the embodiments without departing from the scope of the embodiments. Accordingly, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.

Claims
  • 1. A method comprising: delivering power, data, and cooling from a central network device to a plurality of remote communications devices over cables connecting the central network device to the remote communications devices, each of the cables carrying said power, data, and cooling; and receiving at the central network device, power and thermal data from the remote communications devices based on monitoring of power and cooling at the remote communications devices; wherein the remote communications devices are continuously powered by said power and cooled by said cooling delivered from the central network device over the cables throughout operation of the remote communications devices; and wherein the central network device comprises a router in communication with the remote communications devices over a communications network comprising the cables for delivering power, data, and cooling to each of the remote communications devices, wherein at least one of the remote communications devices comprises a network switch.
  • 2. The method of claim 1 further comprising adjusting delivery of said cooling to at least one of the remote communications devices based on said thermal data from the remote communications devices.
  • 3. The method of claim 1 further comprising adjusting delivery of said cooling to at least one of the remote communications devices based on said power data from the remote communications devices.
  • 4. The method of claim 1 further comprising monitoring temperature, pressure, and flow of a coolant loop delivering said cooling to the remote communications devices.
  • 5. The method of claim 1 further comprising identifying a coolant leak based on flow data received from the remote communications devices and stopping delivery of said cooling to at least one of the remote communications devices.
  • 6. The method of claim 1 wherein the central network device is located at least 1 km from each of said plurality of remote communications devices and wherein said power comprises a power output of at least 100 Watts.
  • 7. The method of claim 1 wherein said power comprises pulsed power.
  • 8. The method of claim 1 further comprising adjusting delivery of the power, data, and cooling to the remote communications devices based on the power and thermal data received from the remote communications devices.
  • 9. The method of claim 8 further comprising utilizing machine learning to adjust said delivery of the power, data, and cooling to the remote communications devices.
  • 10. A method comprising: delivering power, data, and cooling from a central network device to a plurality of remote communications devices over cables connecting the central network device to the remote communications devices, each of the cables carrying said power, data, and cooling; receiving at the central network device, power and thermal data from the remote communications devices based on monitoring of power and cooling at the remote communications devices; and adjusting delivery of said cooling to at least one of the remote communications devices to compensate for response time between changes in delivery of said cooling and said thermal data based on machine learning; wherein the remote communications devices are powered by said power and cooled by said cooling delivered from the central network device.
  • 11. The method of claim 10 further comprising identifying a coolant leak based on flow data received from the remote communications devices and stopping delivery of said cooling to at least one of the remote communications devices.
  • 12. The method of claim 10 wherein said power comprises pulsed power.
  • 13. The method of claim 10 further comprising determining a type of power applied to one of the remote communications devices, determining if Power over Ethernet or pulsed power is a more efficient power delivery method, and selecting a power delivery mode for the remote communications device.
  • 14. A network device comprising: a network switch comprising at least one line card or fabric card; a processor in communication with a forwarding engine at the network switch and operable to process a packet; a connector for connecting the network switch to a cable delivering power, data, and cooling to the network switch, the connector comprising: an optical interface for receiving optical communications signals; an electrical interface for receiving power for powering the network switch; a fluid interface for receiving coolant; a cooling loop for cooling electrical components of the network switch with the coolant; and a monitoring system for continuously monitoring the cooling loop and providing feedback to a central network device delivering said power, data, and cooling to the network switch over the cable during operation of the network switch; wherein the central network device comprises a router in communication with the network switch over a communications network comprising the cable.
  • 15. The apparatus of claim 14 further comprising a second connector for receiving said power, data, and cooling from a redundant central network device.
  • 16. The apparatus of claim 14 wherein the cable comprises optical fibers, power conductors, coolant tubes, and a thermal isolation material between the coolant tubes contained within an outer cable jacket.
  • 17. The apparatus of claim 14 wherein the cable comprises a thermal path between power conductors and a coolant tube and through an outer jacket of the cable.
  • 18. The apparatus of claim 14 wherein the processor is in communication with a controller at the central network device over optical fiber in the cable, the processor and the controller defining a distributed control system for controlling cooling at the apparatus.
  • 19. The apparatus of claim 14 further comprising a manifold comprising at least one valve operable to direct said coolant to different portions of the apparatus based on input from a controller at the central network device.
  • 20. The apparatus of claim 14 wherein the monitoring system comprises temperature, pressure, and flow sensors located in the cooling loop and wherein the fluid interface comprises a coolant supply interface and a coolant return interface.
  • 21. The apparatus of claim 14 wherein the monitoring system is further configured to monitor power at the apparatus and provide feedback to the central network device.
  • 22. The network device of claim 14 wherein the power comprises pulsed power.
  • 23. An apparatus comprising: a connector for connecting the apparatus to a cable delivering power, data, and cooling to a plurality of remote communications devices, the connector comprising: an optical interface for delivering optical communications signals; an electrical interface for delivering power for powering the remote communications devices; and a fluid interface for delivering cooling to the remote communications devices; and a control system for modifying delivery of said cooling to the remote communications devices based on feedback received from the remote communications devices; wherein the control system utilizes machine learning to modify delivery of said cooling.
  • 24. The apparatus of claim 23 further comprising at least one servo valve and at least one pump for controlling delivery of said cooling based on said feedback.
  • 25. The apparatus of claim 23 wherein the control system is configured to modify delivery of said power based on said feedback.
  • 26. The apparatus of claim 23 wherein the control system is operable to identify a coolant leak based on flow data received from the remote communications devices and stop delivery of said cooling to at least one of the remote communications devices.
  • 27. The apparatus of claim 23 wherein the control system is operable to select a power delivery mode for each of the remote communications devices, wherein the power delivery mode comprises Power over Ethernet or pulsed power.
  • 28. The apparatus of claim 23 wherein the power comprises pulsed power.
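Claims 13 and 27 recite determining whether Power over Ethernet or pulsed power is the more efficient delivery method and selecting a power delivery mode accordingly. The minimal sketch below (Python) illustrates one way such a selection could be made; the voltage levels, the 90 W PoE source ceiling, the simple DC loss model, and the 5% preference margin are illustrative assumptions, not values taken from the patent.

```python
def delivery_efficiency(load_w: float, cable_ohms: float, volts: float) -> float:
    """Fraction of source power that reaches the load in a simple DC model."""
    current = load_w / volts             # I = P_load / V_load
    loss_w = current ** 2 * cable_ohms   # I^2 * R dissipated in the cable
    return load_w / (load_w + loss_w)

def select_power_mode(load_w: float, cable_ohms: float) -> str:
    """Pick "poe" or "pulsed", per the selection step of claims 13 and 27."""
    POE_VOLTS, POE_MAX_W = 54.0, 90.0   # 802.3bt Type 4 source-side limit
    PULSED_VOLTS = 380.0                # assumed pulsed-power line voltage
    if load_w <= POE_MAX_W:
        poe_eff = delivery_efficiency(load_w, cable_ohms, POE_VOLTS)
        pulsed_eff = delivery_efficiency(load_w, cable_ohms, PULSED_VOLTS)
        # Prefer the simpler PoE mode unless pulsed power is clearly better.
        if poe_eff >= pulsed_eff - 0.05:
            return "poe"
    return "pulsed"

print(select_power_mode(load_w=60.0, cable_ohms=1.0))    # short run -> "poe"
print(select_power_mode(load_w=300.0, cable_ohms=12.5))  # heavy load -> "pulsed"
```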
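Claims 14, 20, and 21 describe a monitoring system with temperature, pressure, and flow sensors in the cooling loop that also monitors power and feeds the results back to the central network device. A minimal sketch of such a telemetry report follows; the sensor names, value ranges, and JSON report format are invented for illustration, and read_sensor() stands in for a real sensor driver.

```python
import json
import random
import time

def read_sensor(name: str) -> float:
    """Stand-in for a real sensor driver; returns a plausible random value."""
    ranges = {
        "inlet_temp_c": (20.0, 30.0),
        "outlet_temp_c": (30.0, 45.0),
        "pressure_kpa": (150.0, 250.0),
        "flow_lpm": (1.5, 2.5),
        "power_w": (60.0, 90.0),
    }
    lo, hi = ranges[name]
    return random.uniform(lo, hi)

def telemetry_report() -> bytes:
    """Assemble one power-and-thermal report for the central network device."""
    report = {name: round(read_sensor(name), 2) for name in
              ("inlet_temp_c", "outlet_temp_c", "pressure_kpa",
               "flow_lpm", "power_w")}
    report["ts"] = time.time()
    return json.dumps(report).encode()

if __name__ == "__main__":
    for _ in range(3):      # emit three sample reports, one per second
        print(telemetry_report())
        time.sleep(1.0)
```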
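Claim 23 recites a control system that uses machine learning to modify cooling delivery based on feedback from the remote devices. The claim does not specify a model, so the sketch below assumes one simple instantiation: a least-squares regression that learns a coolant-flow setpoint from reported power draw and inlet temperature. The training samples and resulting coefficients are illustrative only.

```python
import numpy as np

# Historical telemetry: columns = (power_w, inlet_temp_c), target = flow_lpm.
X = np.array([[60, 22], [75, 24], [90, 27], [110, 30]], dtype=float)
y = np.array([1.2, 1.6, 2.1, 2.8])

# Fit flow ~= a*power + b*temp + c via least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def flow_setpoint(power_w: float, inlet_temp_c: float) -> float:
    """Predict how much coolant flow a remote device currently needs."""
    a, b, c = coeffs
    return max(0.0, a * power_w + b * inlet_temp_c + c)

print(round(flow_setpoint(100.0, 28.0), 2))  # predicted liters per minute
```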
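Claim 26 recites identifying a coolant leak based on flow data and stopping cooling delivery to at least one remote device. One plausible reading, sketched below, compares supply-side and return-side flow per cable branch and closes that branch's shutoff valve when the difference exceeds a tolerance; the ShutoffValve class, the 0.2 L/min tolerance, and the device identifier are assumptions made for illustration.

```python
class ShutoffValve:
    """Stand-in for a servo-controlled shutoff valve on one cable branch."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.open = True

    def close(self) -> None:
        self.open = False
        print(f"coolant delivery stopped for {self.device_id}")

def leak_suspected(supply_lpm: float, return_lpm: float,
                   tolerance_lpm: float = 0.2) -> bool:
    """Flag a leak when return flow falls measurably below supply flow."""
    return (supply_lpm - return_lpm) > tolerance_lpm

def handle_flow_report(valves: dict, device_id: str,
                       supply_lpm: float, return_lpm: float) -> None:
    if leak_suspected(supply_lpm, return_lpm):
        # Isolate only the affected branch; other remote devices stay cooled.
        valves[device_id].close()

valves = {"ap-42": ShutoffValve("ap-42")}
handle_flow_report(valves, "ap-42", supply_lpm=2.0, return_lpm=1.5)
```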
US Referenced Citations (226)
Number Name Date Kind
3335324 Buckeridge Aug 1967 A
3962529 Kubo Jun 1976 A
4811187 Nakajima Mar 1989 A
4997388 Dale Mar 1991 A
5652893 Ben-Meir Jul 1997 A
5723848 Bilenko Mar 1998 A
6008631 Johari Dec 1999 A
6220955 Posa Apr 2001 B1
6259745 Chan Jul 2001 B1
6636538 Stephens Oct 2003 B1
6685364 Brezina Feb 2004 B1
6784790 Lester Aug 2004 B1
6826368 Koren Nov 2004 B1
6855881 Khoshnood Feb 2005 B2
6860004 Hirano Mar 2005 B2
7325150 Lehr Jan 2008 B2
7420355 Liu Sep 2008 B2
7490996 Sommer Feb 2009 B2
7492059 Peker Feb 2009 B2
7509505 Randall Feb 2009 B2
7566987 Black et al. Jul 2009 B2
7583703 Bowser Sep 2009 B2
7589435 Metsker Sep 2009 B2
7593747 Karam Sep 2009 B1
7603570 Schindler Oct 2009 B2
7616465 Vinciarelli Nov 2009 B1
7813646 Furey Oct 2010 B2
7835389 Yu Nov 2010 B2
7854634 Filipon Dec 2010 B2
7881072 DiBene Feb 2011 B2
7915761 Jones Mar 2011 B1
7921307 Karam Apr 2011 B2
7924579 Arduini Apr 2011 B2
7940787 Karam May 2011 B2
7973538 Karam Jul 2011 B2
8020043 Karam Sep 2011 B2
8037324 Hussain Oct 2011 B2
8081589 Gilbrech Dec 2011 B1
8184525 Karam May 2012 B2
8276397 Carlson et al. Oct 2012 B1
8279883 Diab Oct 2012 B2
8310089 Schindler Nov 2012 B2
8319627 Chan Nov 2012 B2
8345439 Goergen Jan 2013 B1
8350538 Cuk Jan 2013 B2
8358893 Sanderson Jan 2013 B1
8386820 Diab Feb 2013 B2
8638008 Baldwin et al. Jan 2014 B2
8700923 Fung Apr 2014 B2
8712324 Corbridge Apr 2014 B2
8750710 Hirt Jun 2014 B1
8768528 Millar et al. Jul 2014 B2
8781637 Eaves Jul 2014 B2
8787775 Earnshaw Jul 2014 B2
8829917 Lo et al. Sep 2014 B1
8836228 Xu Sep 2014 B2
8842430 Hellriegel et al. Sep 2014 B2
8849471 Daniel Sep 2014 B2
8966747 Vinciarelli Mar 2015 B2
9019895 Li Apr 2015 B2
9024473 Huff May 2015 B2
9184795 Eaves Nov 2015 B2
9189036 Ghoshal Nov 2015 B2
9189043 Vorenkamp Nov 2015 B2
9273906 Goth et al. Mar 2016 B2
9319101 Lontka Apr 2016 B2
9321362 Woo Apr 2016 B2
9373963 Kuznetsov Jun 2016 B2
9419436 Eaves Aug 2016 B2
9484771 Braylovskiy Nov 2016 B2
9510479 Vos Nov 2016 B2
9531551 Balasubramanian Dec 2016 B2
9590811 Hunter, Jr. Mar 2017 B2
9618714 Murray Apr 2017 B2
9640998 Dawson et al. May 2017 B2
9665148 Hamdi May 2017 B2
9693244 Maruhashi et al. Jun 2017 B2
9734940 McNutt Aug 2017 B1
9853689 Eaves Dec 2017 B2
9874930 Vavilala Jan 2018 B2
9882656 Sipes et al. Jan 2018 B2
9893521 Lowe Feb 2018 B2
9948198 Imai Apr 2018 B2
9979370 Xu May 2018 B2
9985600 Xu May 2018 B2
10007628 Pitigoi-Aron Jun 2018 B2
10028417 Schmidtke Jul 2018 B2
10128764 Vinciarelli Nov 2018 B1
10248178 Brooks Apr 2019 B2
10263526 Sandusky et al. Apr 2019 B2
10281513 Goergen May 2019 B1
10407995 Moeny Sep 2019 B2
10439432 Eckhardt Oct 2019 B2
10541543 Eaves Jan 2020 B2
10541758 Goergen Jan 2020 B2
10631443 Byers Apr 2020 B2
10735105 Goergen et al. Aug 2020 B2
20010024373 Cuk Sep 2001 A1
20020126967 Panak Sep 2002 A1
20040000816 Khoshnood Jan 2004 A1
20040033076 Song Feb 2004 A1
20040043651 Bain Mar 2004 A1
20040073703 Boucher Apr 2004 A1
20040264214 Xu Dec 2004 A1
20050197018 Lord Sep 2005 A1
20050268120 Schindler Dec 2005 A1
20060202109 Delcher Sep 2006 A1
20060209875 Lum Sep 2006 A1
20070103168 Batten May 2007 A1
20070143508 Linnman Jun 2007 A1
20070236853 Crawley Oct 2007 A1
20070263675 Lum Nov 2007 A1
20070284941 Robbins Dec 2007 A1
20070284946 Robbins Dec 2007 A1
20070288125 Quaratiello Dec 2007 A1
20070288771 Robbins Dec 2007 A1
20080054720 Lum Mar 2008 A1
20080198635 Hussain Aug 2008 A1
20080229120 Diab Sep 2008 A1
20080310067 Diab Dec 2008 A1
20090027033 Diab Jan 2009 A1
20100077239 Diab Mar 2010 A1
20100117808 Karam May 2010 A1
20100171602 Kabbara Jul 2010 A1
20100190384 Lanni Jul 2010 A1
20100237846 Vetteth Sep 2010 A1
20100290190 Chester Nov 2010 A1
20110004773 Hussain Jan 2011 A1
20110007664 Diab Jan 2011 A1
20110290497 Stenevik Jan 2011 A1
20110057612 Taguchi Mar 2011 A1
20110083824 Rogers Apr 2011 A1
20110228578 Serpa Sep 2011 A1
20110266867 Schindler Nov 2011 A1
20120043935 Dyer Feb 2012 A1
20120064745 Ottliczky Mar 2012 A1
20120170927 Huang Jul 2012 A1
20120201089 Barth et al. Aug 2012 A1
20120231654 Conrad Sep 2012 A1
20120287984 Lee Nov 2012 A1
20120317426 Hunter, Jr. Dec 2012 A1
20120319468 Schneider Dec 2012 A1
20130077923 Peeters Weem et al. Mar 2013 A1
20130079633 Peeters Weem Mar 2013 A1
20130103220 Eaves Apr 2013 A1
20130249292 Blackwell, Jr. Sep 2013 A1
20130272721 van Veen Oct 2013 A1
20130329344 Tucker Dec 2013 A1
20140111180 Vladan Apr 2014 A1
20140126151 Campbell May 2014 A1
20140129850 Paul May 2014 A1
20140258742 Chien Sep 2014 A1
20140258813 Lusted Sep 2014 A1
20140265550 Milligan Sep 2014 A1
20140372773 Heath Dec 2014 A1
20150078740 Sipes, Jr. Mar 2015 A1
20150106539 Leinonen Apr 2015 A1
20150115741 Dawson Apr 2015 A1
20150207317 Radermacher Jul 2015 A1
20150215001 Eaves Jul 2015 A1
20150215131 Paul Jul 2015 A1
20150333918 White, III Nov 2015 A1
20150340818 Scherer Nov 2015 A1
20150365003 Sadwick Dec 2015 A1
20160018252 Hanson Jan 2016 A1
20160020911 Sipes, Jr. Jan 2016 A1
20160053596 Rey Feb 2016 A1
20160064938 Balasubramanian Mar 2016 A1
20160111877 Eaves Apr 2016 A1
20160118784 Saxena Apr 2016 A1
20160120059 Shedd Apr 2016 A1
20160133355 Glew May 2016 A1
20160134331 Eaves May 2016 A1
20160142217 Gardner et al. May 2016 A1
20160188427 Chandrashekar Jun 2016 A1
20160197600 Kuznetsov Jul 2016 A1
20160365967 Tu Jul 2016 A1
20160241148 Kizilyalli Aug 2016 A1
20160262288 Chainer et al. Sep 2016 A1
20160273722 Crenshaw Sep 2016 A1
20160294500 Chawgo Oct 2016 A1
20160294568 Chawgo et al. Oct 2016 A1
20160308683 Pischl Oct 2016 A1
20160352535 Hiscock Dec 2016 A1
20170041152 Sheffield Feb 2017 A1
20170041153 Picard Feb 2017 A1
20170054296 Daniel Feb 2017 A1
20170110871 Foster Apr 2017 A1
20170123466 Carnevale May 2017 A1
20170146260 Ribbich May 2017 A1
20170155517 Cao Jun 2017 A1
20170164525 Chapel Jun 2017 A1
20170155518 Yang Jul 2017 A1
20170214236 Eaves Jul 2017 A1
20170229886 Eaves Aug 2017 A1
20170234738 Ross Aug 2017 A1
20170244318 Giuliano Aug 2017 A1
20170248976 Moller Aug 2017 A1
20170294966 Jia Oct 2017 A1
20170325320 Wendt Nov 2017 A1
20180024964 Mao Jan 2018 A1
20180053313 Smith Feb 2018 A1
20180054083 Hick Feb 2018 A1
20180060269 Kessler Mar 2018 A1
20180088648 Otani Mar 2018 A1
20180098201 Torello Apr 2018 A1
20180102604 Keith Apr 2018 A1
20180123360 Eaves May 2018 A1
20180159430 Albert Jun 2018 A1
20180188712 MacKay Jul 2018 A1
20180191513 Hess Jul 2018 A1
20180254624 Son Sep 2018 A1
20180313886 Mlyniec Nov 2018 A1
20180340840 Bullock Nov 2018 A1
20190064890 Donachy Feb 2019 A1
20190126764 Fuhrer May 2019 A1
20190267804 Matan Aug 2019 A1
20190277899 Goergen Sep 2019 A1
20190277900 Goergen Sep 2019 A1
20190278347 Goergen Sep 2019 A1
20190280895 Mather Sep 2019 A1
20190304630 Goergen Oct 2019 A1
20190312751 Goergen Oct 2019 A1
20190342011 Goergen Oct 2019 A1
20190363493 Sironi Nov 2019 A1
20200044751 Goergen Feb 2020 A1
Foreign Referenced Citations (20)
Number Date Country
1209880 Jul 2005 CN
201689347 Dec 2010 CN
204836199 Dec 2015 CN
205544597 Aug 2016 CN
104081237 Oct 2016 CN
103490907 Dec 2017 CN
104412541 May 2019 CN
1936861 Jun 2008 EP
2120443 Nov 2009 EP
2257009 Dec 2010 EP
2432134 Mar 2012 EP
2693688 Feb 2014 EP
WO199316407 Aug 1993 WO
WO2006127916 Nov 2006 WO
WO2010053542 May 2010 WO
WO2017054030 Apr 2017 WO
WO2017167926 Oct 2017 WO
WO2018017544 Jan 2018 WO
WO2019023731 Feb 2019 WO
WO2019212759 Nov 2019 WO
Non-Patent Literature Citations (30)
https://www.fischerconnectors.com/us/en/products/fiberoptic.
http://www.strantech.com/products/tfoca-genx-hybrid-2x2-fiber-optic-copper-connector/.
http://www.qpcfiber.com/product/connectors/e-link-hybrid-connector/.
https://www.lumentum.com/sites/default/files/technical-library-items/poweroverfiber-tn-pv-ae_0.pdf.
“Network Remote Power Using Packet Energy Transfer”, Eaves et al., www.voltserver.com, Sep. 2012.
Product Overview, “Pluribus VirtualWire Solution”, Pluribus Networks, PN-PO-VWS-05818, https://www.pluribusnetworks.com/assets/Pluribus-VirtualWire-PO-50918.pdf, May 2018, 5 pages.
Implementation Guide, "Virtual Chassis Technology Best Practices", Juniper Networks, 8010018-009-EN, Jan. 2016, https://www.juniper.net/us/en/local/pdf/implementation-guides/8010018-en.pdf, 29 pages.
Yencheck, Thermal Modeling of Portable Power Cables, 1993.
Zhang, Machine Learning-Based Temperature Prediction for Runtime Thermal Management across System Components, Mar. 2016.
Data Center Power Equipment Thermal Guidelines and Best Practices.
Dynamic Thermal Rating of Substation Terminal Equipment by Rambabu Adapa, 2004.
Chen, "Real-Time Temperature Estimation for Power MOSFETs Considering Thermal Aging Effects", IEEE Transactions on Device and Materials Reliability, vol. 14, No. 1, Mar. 2014.
Jingquan Chen et al: “Buck-boost PWM converters having two independently controlled switches”, 32nd Annual IEEE Power Electronics Specialists Conference. PESC 2001. Conference Proceedings, Vancouver, Canada, Jun. 17-21, 2001; [Annual Power Electronics Specialists Conference], New York, NY: IEEE, US, vol. 2,Jun. 17, 2001 (Jun. 17, 2001), pp. 736-741, XP010559317, DOI: 10.1109/PESC.2001.954206, ISBN 978-0-7803-7067-8 paragraph [SectionII]; figure 3.
Cheng K W E et al: "Constant Frequency, Two-Stage Quasiresonant Convertor", IEE Proceedings B. Electrical Power Applications, vol. 139, No. 3, May 1, 1992 (May 1, 1992), pp. 227-237, XP000292493, the whole document.
Petition for Post Grant Review of U.S. Pat. No. 10,735,105 [Public] with Exhibits, filed Feb. 16, 2021, PGR 2021-00055.
Petition for Post Grant Review of U.S. Pat. No. 10,735,105 [Public] with Exhibits, filed Feb. 16, 2021, PGR 2021-00056.
Eaves, S. S., Network Remote Powering Using Packet Energy Transfer, Proceedings of IEEE International Conference on Telecommunications Energy (INTELEC) 2012, Scottsdale, AZ, Sep. 30-Oct. 4, 2012 (IEEE 2012) (EavesIEEE).
Edelstein S., Updated 2016 Tesla Model S also gets new 75-kWh battery option, (Jun. 19, 2016), archived Jun. 19, 2016 by Internet Archive Wayback machine at https://web.archive.org/web/20160619001148/https://www.greencarreports.com/news/1103782_updated-2016-tesla-model-s-also-gets-new-75-kwh-battery-option ("Edelstein").
NFPA 70 National Electrical Code, 2017 Edition (NEC).
International Standard IEC 62368-1 Edition 2.0 (2014), ISBN 978-2-8322-1405-3 (“IEC-62368”).
International Standard IEC/TS 60479-1 Edition 4.0 (2005), ISBN 2-8318-8096-3 (“IEC-60479”).
International Standard IEC 60950-1 Edition 2.2 (2013), ISBN 978-2-8322-0820-5 (“IEC-60950”).
International Standard IEC 60947-1 Edition 5.0 (2014), ISBN 978-2-8322-1798-6 (“IEC-60947”).
Tanenbaum, A. S., Computer Networks, Third Edition (1996) (“Tanenbaum”).
Stallings, W., Data and Computer Communications, Fourth Edition (1994) ("Stallings").
Alexander, C. K., Fundamentals of Electric Circuits, Indian Edition (2013) (“Alexander”).
Hall, S. H., High-Speed Digital System Design, A Handbook of Interconnect Theory and Design Practices (2000) (“Hall”).
Sedra, A. S., Microelectronic Circuits, Seventh Edition (2014) (“Sedra”).
Lathi, B. P., Modern Digital and Analog Communication Systems, Fourth Edition (2009) ("Lathi").
Understanding 802.3at PoE Plus Standard Increases Available Power (Jun. 2011) (“Microsemi”).
Related Publications (1)
Number Date Country
20190272011 A1 Sep 2019 US