The present disclosure relates generally to communications networks, and more particularly, to splitting combined delivery power, data, and cooling in a communications network.
Network devices such as computer peripherals, network access points, and IoT (Internet of Things) devices may have both their data connectivity and power needs met over a single combined function cable such as PoE (Power over Ethernet). In conventional PoE systems, power is delivered over the same cables that carry the data, over distances ranging from a few meters to about one hundred meters. When a greater distance is needed or fiber optic cables are used, power is typically supplied through a local power source such as a nearby wall outlet due to limitations with capacity, reach, and cable loss in conventional PoE. Today's PoE systems also have limited power capacity, which may be inadequate for many classes of devices. If the available power over combined function cables is increased, cooling may also need to be delivered to the high-powered remote devices. Use of point-to-point architectures for combined function cables may result in complex and expensive cable systems.
Corresponding reference characters indicate corresponding parts throughout the several views of the drawings.
Overview
In one embodiment, a method generally comprises delivering power, data, and cooling on a cable from a central network device to a splitter device for splitting and transmitting the power, data, and cooling to a plurality of remote communications devices over a plurality of cables, each of the cables carrying the power, data, and cooling, receiving at the central network device, monitoring information from the remote communications devices on the cable, processing the monitoring information, and allocating the power, data, and cooling to each of the remote communications devices based on the monitoring information.
In another embodiment, a method generally comprises receiving at a communications device, power, data, and cooling from a splitter device receiving the power, data, and cooling on a combined cable from a central network device and splitting the power, data, and cooling among a plurality of communications devices, monitoring the power, data, and cooling at the communications device, transmitting monitoring information to the central network device through the splitter device and on the combined cable, and modifying at least one of power, data, and cooling settings in response to a control system message from the central network device allocating the power, data, and cooling to the communications devices.
In another embodiment, a system generally comprises a central network device comprising a connector for connection to a cable delivering power, data, and cooling to a splitter device for splitting the power, data, and cooling for delivery to a plurality of remote communications devices over a plurality of cables, each of the cables carrying the power, data, and cooling, the remote communications devices comprising sensors for monitoring the power, data, and cooling, and a control system for receiving power, data, and cooling information for the remote communications devices and allocating the power, data, and cooling to the remote communications devices.
Further understanding of the features and advantages of the embodiments described herein may be realized by reference to the remaining portions of the specification and the attached drawings.
The following description is presented to enable one of ordinary skill in the art to make and use the embodiments. Descriptions of specific embodiments and applications are provided only as examples, and various modifications will be readily apparent to those skilled in the art. The general principles described herein may be applied to other applications without departing from the scope of the embodiments. Thus, the embodiments are not to be limited to those shown, but are to be accorded the widest scope consistent with the principles and features described herein. For purpose of clarity, details relating to technical material that is known in the technical fields related to the embodiments have not been described in detail.
In conventional Power over Ethernet (PoE) systems used to simultaneously transmit power and data communications, power is delivered over the same twisted pair cable used for data. These systems are limited in range to a few meters to about 100 meters. The maximum power delivery capacity of standard PoE is approximately 100 Watts, but many classes of powered devices would benefit from power delivery of 1000 Watts or more. In conventional systems, when a larger distance is needed, fiber optic cabling is used to deliver data and when larger power delivery ratings are needed, power is supplied to a remote device through a local power source.
As previously noted, it is desirable to increase the power available over multi-function cables to hundreds and even thousands of watts. This capability may enable many new choices in network deployments where major devices such as workgroup routers, multi-socket servers, large displays, wireless access points, fog nodes, or other devices are operated over multi-function cables. This capability would greatly decrease installation complexity and improve the total cost of ownership of a much wider set of devices that have their power and data connectivity needs met from a central hub.
Beyond the data and power supply capabilities noted above, there is also a need for cooling. For high-powered devices, especially those with high thermal density packaging or total dissipation over a few hundred Watts, traditional convection cooling methods may be inadequate. This is particularly apparent where special cooling challenges are present, such as with a device that is sealed and cannot rely on drawing outside air (e.g., all-season outdoor packaging), a hermetically sealed device (e.g., used in food processing or explosive environments), or where fan noise is a problem (e.g., office or residential environments), or any combination of the above along with extreme ambient temperature environments. In these situations, complex and expensive specialized cooling systems are often used.
In order to overcome the above issues, PoE may be augmented to allow it to carry higher data rates, higher power delivery, and integrated thermal management cooling combined into a single cable, as described, for example, in U.S. patent application Ser. No. 15/910,203 (“Combined Power, Data, and Cooling Delivery in a Communications Network”), filed Mar. 2, 2018, which is incorporated herein by reference in its entirety. These connections may be point-to-point, such as from a central hub to one or more remote devices (e.g., full hub and spoke layout). However, there may be topologies in which it is difficult, not convenient, or inefficient to run power, data, and cooling cables from every remote device all the way to the central hub. For example, use of point-to-point architectures for combined function cables may result in complex and expensive cable systems due to the long, largely parallel cables that may be routed along similar paths to serve clusters of remote devices. If a single combined function cable could be run most of the way to this cluster, and then split, significant savings could be realized.
The embodiments described herein provide for splitting of power, data, and cooling delivered over a combined cable. In one or more embodiments, a single cable carries power, data, and cooling from a central hub to a remote splitting device, which directs a share of all three services to a plurality of endpoint (remote) devices that utilize the services. This allows for use of a single long combined cable from the central hub to an intermediary location for subsequent splitting of the combined power, data, and cooling and delivery to multiple remote devices with short combined cable runs. As described below, the central hub may deliver power, data, and cooling over combined cables to a plurality of intermediate hubs, which divide the power, data, and cooling capabilities for delivery on combined cables in communication with the remote communications devices. The total length and cost of the cable needed to serve a number of remote devices can be minimized through optimal location of the distribution splitter physically near a cluster of remote devices. The embodiments allow a combined cable delivery network to go beyond a point-to-point topology and form passive stars, busses, tapers, multi-layer trees, and the like.
The splitting of combined delivery power, data, and cooling may be particularly beneficial if the remote devices are clustered in relatively high-density groupings served by a comparatively long cable distance back to a central hub. For example, the splitting of services may be beneficial when there are many IoT sensors in close proximity to each other but far away from the central hub, in data centers where a rack full of devices may be run over a shared cable hundreds of meters from the central infrastructure, residential or smart ceiling applications, IoT and server networks such as Top of Rack (ToR) devices, manholes, ceiling junction boxes, roadside cabinets, multi-unit apartment buildings, or any other application in which it is advantageous to have short cable runs from an intermediary device to clustered remote devices. The remote branching topology may greatly reduce large expenses in purchasing, installing, and maintaining long individual cables to each device. As an additional benefit, it is helpful if the splitting of the data, power, and cooling is performed passively (i.e., not requiring active elements such as data routers, power switching, or active flow regulating components that complicate the intermediary split point) since the splitter may be located in inaccessible, environmentally hostile, or mechanically constrained places.
In one or more embodiments, a cable system, referred to herein as PoE+Fiber+Cooling (PoE+F+C), provides high power energy delivery, fiber delivered data, and cooling within a single cable. The PoE+F+C system allows high power devices to be located in remote locations, extreme temperature environments, or noise sensitive environments, with their cooling requirements met through the same cable that carries data and power. The use of a single cable for all interconnect features needed by a remote device greatly simplifies installation and ongoing operation of the device.
Referring now to the drawings, and first to
The network is configured to provide power (e.g., power greater than 100 Watts), data (e.g., optical data), and cooling from a central network device 10 to a plurality of remote network devices 12 (e.g., switches, routers, servers, access points, computer peripherals, IoT devices, fog nodes, or other electronic components and devices) through one or more splitter devices 13. Signals may be exchanged among communications equipment and power transmitted from power sourcing equipment (PSE) (e.g., central hub 10) to powered devices (PDs) (e.g., remote communications devices 12). The PoE+F+C system delivers power, data, and cooling through one or more splitter devices 13, to a network (e.g., switch/router system) configured to receive data, power, and cooling over a cabling system comprising optical fibers, electrical wires (e.g., copper wires), and coolant tubes. The splitter 13 allows the network to go beyond point-to-point topologies and build passive stars, busses, tapers, multi-layer trees, etc. A single long PoE+F+C cable 14 runs to a conveniently located intermediary splitter device 13 servicing a cluster of physically close endpoint devices (remote network devices, remote communications devices) 12. As described in detail below, control systems for the power, data, and cooling interact between the central hub 10 and the remote devices 12 to ensure that each device receives its fair share of each resource and that faults or dangerous conditions are detected and managed.
As shown in the example of
The network may include any number or arrangement of network communications devices (e.g., switches, access points, routers, or other devices operable to route (switch, forward) data communications). The remote devices 12 may be located at distances greater than 100 meters (e.g., 1 km, 10 km, or any other distance) from the central hub 10, and/or operate at greater power levels than 100 Watts (e.g., 250 Watts, 1000 Watts, or any other power level). The remote devices 12 may also be in communication with one or more other devices (e.g., fog node, IoT device, sensor, and the like), as described below.
In one or more embodiments, a redundant central hub (not shown) may provide backup or additional power, bandwidth, or cooling, as needed in the network. Additional combined cables 14 would run from the redundant central hub to one or more of the splitter devices 13.
In the example shown in
The central hub 10 may be operable to provide high capacity power from an internal power system (e.g., PSU 15 capable of delivering 5 kW, 100 kW, or more, and driving the plurality of devices 12 in the 100-3000 W range). The PSU 15 may provide, for example, PoE, pulsed power, DC power, or AC power. The central hub 10 (PSE (Power Sourcing Equipment)) is operable to receive power from a source external to the communications network and transmit the power, along with data and cooling, to the remote network devices (PDs (Powered Devices)) 12 through the splitters 13. The central hub 10 may comprise, for example, a router, convergence device, or any other suitable network device operable to deliver power, data, and cooling. As described in detail below, the central hub 10 provides control logic for the cooling loop, as well as the power and data transport functions of the combined cable 14, 17. Additional components and functions of the central hub 10 are described below with respect to
The splitter device 13 is operable to split the optical energy N ways, split the power N ways, and split and recombine coolant flows N ways, thereby splitting and directing a portion of the data, power, and cooling (thermal management) capabilities supplied by the main cable 14 from the central hub 10, enabling the power, data, and cooling to be shared by a number of the remote devices 12. The splitter 13 may be configured to provide any suitable split ratio (e.g., 2:1 up to about 32:1). If the network contains multiple splitters 13 as shown in
In one or more embodiments, the splitter 13 is a passive device, requiring no active electronics, routers, valves, or computer control. In an alternate embodiment, more advanced splitting scenarios may place some intelligence and active control elements in the intermediary splitter site. For example, the splitter may be active with respect to optical data and passive with respect to power and cooling.
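As an illustration of how the split ratios described above compound, the following sketch (not from the disclosure; all names and numbers are illustrative assumptions) estimates the share of power, optical bandwidth, and coolant flow reaching a single endpoint through cascaded, equal, lossless N-way splits.

```python
# Illustrative sketch only (not from the disclosure): estimating the per-branch
# share of power, optical bandwidth, and coolant flow reaching one endpoint
# through cascaded passive splitters, assuming equal, lossless N-way splits.

def branch_share(trunk_capacity: float, split_ratios: list[int]) -> float:
    """Capacity reaching a single endpoint after each splitter stage."""
    share = trunk_capacity
    for n in split_ratios:   # e.g., [4, 2]: a 4:1 splitter feeding a 2:1 splitter
        share /= n
    return share

# Hypothetical trunk cable: 2000 W, 40 Gbps of shared optical bandwidth, 8 L/min coolant.
stages = [4, 2]
print(branch_share(2000.0, stages))  # ~250 W available per endpoint
print(branch_share(40.0, stages))    # ~5 Gbps of shared bandwidth per endpoint
print(branch_share(8.0, stages))     # ~1 L/min of coolant per endpoint
```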
As previously noted, cables 14 extending from the central hub 10 to the splitter devices 13 and cables 17 extending from the splitter devices to the remote network devices 12 are configured to transmit power, data, and cooling in a single cable (combined cable, multi-function cable, multi-use cable, hybrid cable). The cables 14, 17 may be formed from any material suitable to carry electrical power, data (copper, fiber), and coolant (liquid, gas, or multi-phase) and may carry any number of electrical wires, optical fibers, and cooling tubes in any arrangement.
In the example shown in
One or more of the remote network devices 12 may also deliver power to equipment using PoE. For example, one or more of the network devices 12 may deliver power using PoE to electronic components such as IP (Internet Protocol) cameras, VoIP (Voice over IP) phones, video cameras, point-of-sale devices, security access control devices, residential devices, building automation devices, industrial automation, factory equipment, lights (building lights, streetlights), traffic signals, and many other electrical components and devices.
In one embodiment, one or more of the network devices 12 may comprise dual-role power ports that may be selectively configurable to operate as a PSE (Power Source Equipment) port to provide power to a connected device or as a PD (Powered Device) port to sink power from the connected device, and enable the reversal of energy flow under system control, as described in U.S. Pat. No. 9,531,551 (“Dynamically Configurable Power-Over-Ethernet Apparatus and Method”, issued Dec. 27, 2016), for example. The dual-role power ports may be PoE or PoE+F ports, enabling them to negotiate their selection of either PoE or higher power PoE+F in order to match the configuration of the ports on line cards 16 with the corresponding ports on each remote network device 12, for example.
In one or more embodiments, there is no need for additional electrical wiring for the communications network and all of the network communications devices operate using the power provided by the PoE+F+C system. In other embodiments, in addition to the remote communications devices 12 configured to receive power, data, and cooling from the central hub 10, the network may also include one or more network devices comprising conventional network devices that only process and transmit data. These network devices receive electrical power from a local power source such as a wall outlet. Similarly, one or more network devices may eliminate the data interface, and only interconnect power (e.g., moving data interconnection to wireless networks). Also, one or more devices may be configured to receive only power and data, or only power and cooling, for example.
It is to be understood that the network devices and topology shown in
The long combined cable 14 originates at the central hub 10 that provides utility services for the entire network of data connectivity, power distribution, and cooling. The splitter (e.g., passive intermediary distribution splitter) 13 is located near a physical center of a cluster of the remote devices 12a, 12b, 12c and comprises three splitting elements (depicted by circles (or pairs of circles) in
As shown in
In the example shown in
In this example, a bidirectional optical system is utilized with one wavelength of light going downstream and a different wavelength of light going upstream, thereby reducing the fiber count in the cable from two to one (optical fiber 27 in
In one or more embodiments, sensors 31a monitor the current and voltage of the power delivery system at either end of the power conductors 26. As described below, this information may be used by the control system to adjust power or coolant delivery to one or more of the remote devices 12a, 12b, 12c.
The system further includes sensors 31b for measuring critical die temperatures, coolant temperatures, pressures, and flows within the cooling loop (e.g., at the central hub 10 and in each remote device 12a, 12b, 12c). In one or more embodiments, the sensors 31b monitor aggregate and individual branch coolant temperatures, pressures, and flow rate quantities at strategic points around the loop. In the example shown in
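For illustration only, the following sketch groups the electrical and cooling quantities that the sensors 31a, 31b measure into a single monitoring record of the sort a remote device might report back to the control system; the field names, units, and the simple heat-balance estimate are assumptions, not part of the disclosure.

```python
# Illustrative sketch: a monitoring record a remote device might report to the
# central hub over the data channel. Field names are assumptions; they simply
# group the quantities measured by power sensors 31a and cooling sensors 31b.
from dataclasses import dataclass

@dataclass
class RemoteDeviceTelemetry:
    device_id: str
    voltage_v: float            # power sensor 31a: voltage at the device end
    current_a: float            # power sensor 31a: current draw
    coolant_in_temp_c: float    # cooling sensor 31b: supply-side temperature
    coolant_out_temp_c: float   # cooling sensor 31b: return-side temperature
    coolant_flow_lpm: float     # measured branch flow rate
    coolant_pressure_kpa: float
    die_temp_c: float           # hottest monitored component temperature

    @property
    def power_w(self) -> float:
        return self.voltage_v * self.current_a

    @property
    def heat_removed_w(self) -> float:
        # Q = m_dot * c_p * dT for a water-like coolant (~4186 J/(kg*K), ~1 kg/L)
        m_dot_kg_s = self.coolant_flow_lpm / 60.0
        return m_dot_kg_s * 4186.0 * (self.coolant_out_temp_c - self.coolant_in_temp_c)
```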
The central hub 10 with heat exchanger 22 maintains a source of low-temperature coolant that is sent through distribution plumbing (such as a manifold), through the connector 29, and down the cable's coolant supply line 28a to the remote devices 12a, 12b, 12c. The coolant may comprise, for example, water, antifreeze, liquid or gaseous refrigerants, or mixed-phase coolants (partially changing from liquid to gas along the loop).
In an alternative embodiment, the heat exchanger may simply be a distribution manifold (if the same physical coolant is used in the cooling plant as is transported in tubes 28a, 28b). A heat exchanger is needed if isolation is required or if there is a liquid-to-gas interface.
The connectors 29 at the remote devices 12a, 12b, 12c are coupled to the cables 17a, 17b, or 17c, respectively, and the supply coolant is routed through elements inside the device such as heat sinks (heat exchangers, cooling taps, heat pipes) 25 that remove heat. The warmed coolant may be aggregated through a return manifold within the device 12 and returned to the central hub 10 from the device's connector 29 and through the return coolant tube 28b in the cable 17a, 17b, 17c, the fluid manifold in the splitter 13, and the cable 14. The cable 14 returns the coolant to the central hub 10, where the return coolant passes through the heat exchanger 22 to remove the heat from the coolant loop to an external cooling plant, and the cycle repeats. The heat exchanger 22 may be a liquid-to-liquid heat exchanger, with the heat transferred to chilled water or a cooling tower circuit, for example. The heat exchanger 22 may also be a liquid-to-air heat exchanger, with fans provided to expel the waste heat to the atmosphere. The hot coolant returning from the cable 14 may be monitored by sensor 31b for temperature, pressure, and flow. Once the coolant has released its heat, it may pass back through a pump 19 and sensor, and is then sent back out on the cooling loop. One or more variable-speed pumps 19 may be provided at the central hub 10 (or remote devices 12a, 12b, 12c) to circulate the fluid around the cooling loop.
In an alternate embodiment, only a single coolant tube is provided within the cables 14, 17a, 17b, 17c and high pressure air (e.g., supplied by a central compressor with an intercooler) is used as the coolant. When the air enters the remote device, it is allowed to expand and/or impinge directly on heat dissipating elements inside the device. Cooling may be accomplished by forced convection via the mass flow of the air and additional temperature reduction may be provided via a Joule-Thomson effect as the high pressure air expands to atmospheric pressure. Once the air has completed its cooling tasks, it can be exhausted to the atmosphere outside the remote device via a series of check valves and mufflers (not shown).
In cold environments the coolant may be supplied above ambient temperature to warm the remote devices 12a, 12b, 12c. This may be valuable where the remote devices 12a, 12b, 12c are located in cold climates or in cold parts of industrial plants, and the devices have cold-sensitive components such as optics or disk drives. This may be more energy efficient than providing electric heaters at each device, as is used in conventional systems.
The central hub 10 may also include one or more support systems to filter the coolant, supply fresh coolant, adjust anti-corrosion chemicals, bleed air from the loops, or fill and drain loops as needed for installation and maintenance of cables 14, 17a, 17b, 17c and remote devices 12a, 12b, 12c.
The connectors 29 at the central hub 10 and remote devices 12a, 12b, 12c (and similar connectors optionally equipped at the splitter 13) are configured to mate with the cables 14, 17a, 17b, 17c for transmitting and receiving combined power, data, and cooling. In one embodiment, the connectors 29 carry power, fiber, and coolant in the same connector body. The connectors 29 are preferably configured to mate and de-mate (couple, uncouple) easily by hand or robotic manipulator.
In order to prevent coolant leakage when the cables 14, 17a, 17b, 17c are uncoupled from the central hub 10 or remote devices 12a, 12b, 12c, the coolant lines 28a, 28b and connectors 29 preferably include valves (not shown) that automatically shut off flow into and out of the cable, and into and out of the device or hub. In one or more embodiments, the connector 29 may be configured to allow connection sequencing and feedback to occur. For example, electrical connections may not be made until a verified sealed coolant loop is established. The cable connectors 29 may also include visual or tactile evidence of whether a line is pressurized, thereby reducing the possibility of user installation or maintenance errors.
In one or more embodiments, a distributed control system comprising components located on the central hub's controller and on the remote device's processor may communicate over the fiber link 27 in the combined cables 14, 17a, 17b, 17c. Control systems for all three utilities interact between the remote devices 12a, 12b, 12c and the central hub 10 to ensure that each remote device receives its fair share of power, data, and cooling. For example, the cooling loop sensors 31b at the central hub 10 and remote devices 12a, 12b, 12c may be used in the control system to monitor temperature, pressure, flow, or any combination thereof. The servo valves 39 or variable speed pump 19 may be used to ensure that the rate of coolant flow matches the requirements of the remote thermal loads. Monitoring information from power sensors 31a (e.g., current, voltage) or data usage (e.g., bandwidth, buffer/queue size) may also be used by the control system in managing cooling at the remote devices 12a, 12b, 12c. The control system also uses the monitoring information to allocate power and data, as described in detail below.
Machine learning may be used within the control system to compensate for the potentially long response time between a change in coolant flow rate and the resulting change in the remote devices' temperatures. The output of a control algorithm may be used to adjust the pumps 19 to move the correct volume of coolant to the devices 12a, 12b, 12c, and may also be used to adjust coolant valve settings within the remote devices to control the split ratio of coolant between remote devices 12a, 12b, 12c, and to direct different portions of the coolant to different internal heat sinks within each device to properly balance the use of coolant among a plurality of thermal loads.
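The following is a minimal sketch of one way such a control step could be organized: per-device temperature error is turned into a total pump flow target and per-device valve split ratios, with simple exponential smoothing standing in for the lag compensation that could be learned over time. All names, gains, and set points are illustrative assumptions, not values from the disclosure.

```python
# Minimal sketch (illustrative assumptions only) of one control-loop step at the
# central hub: per-device temperature error is converted into a total pump flow
# target and per-device valve split ratios. Exponential smoothing damps changes
# because the thermal response of the remote devices is slow.

def control_step(telemetry, setpoint_c=70.0, gain_lpm_per_c=0.05,
                 smoothed=None, alpha=0.3):
    """telemetry: dict of device_id -> (die_temp_c, current_flow_lpm)."""
    demands = {}
    for dev, (die_temp_c, flow_lpm) in telemetry.items():
        error_c = die_temp_c - setpoint_c                  # positive = running hot
        target = max(0.1, flow_lpm + gain_lpm_per_c * error_c)
        if smoothed and dev in smoothed:                   # damp step-to-step changes
            target = alpha * target + (1 - alpha) * smoothed[dev]
        demands[dev] = target
    total = sum(demands.values())
    pump_flow_lpm = total                                  # the pump moves the aggregate volume
    valve_split = {dev: d / total for dev, d in demands.items()}
    return pump_flow_lpm, valve_split, demands             # 'demands' is carried to the next step

# Example: a device running hot receives a larger valve share on the next step.
flow, split, state = control_step({"12a": (65.0, 1.0), "12b": (82.0, 1.0), "12c": (68.0, 1.0)})
```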
The control system may also include one or more safety features. In one or more embodiments, the control system may be operable to monitor for abnormal or emergency conditions among power, data, or cooling, and react by adjusting power, data, or cooling to respond to the condition. For example, the control system may instantly stop the coolant flow and begin a purge cycle if the coolant flow leaving the central hub 10 does not closely match the flow received at the remote devices 12a, 12b, 12c, or the flow returned to the hub, which may indicate a leak in the system. The control system may also shut down one or more of the remote devices 12a, 12b, 12c if an internal temperature exceeds a predetermined high limit or open relief valves if pressure limits in the coolant loop are exceeded. The control system may also use its sensors 31b and machine learning algorithms to predictively detect problems in the cooling system, such as a pressure rise caused by a kink in the cables 14, 17a, 17b, 17c, reduction in thermal transfer caused by corrosion of heat sinks, or impending bearing failure in pump 19, before they become serious.
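A minimal sketch of these checks is shown below; the thresholds, tolerances, and action names are assumptions chosen for the example rather than values from the disclosure.

```python
# Illustrative sketch of the kinds of safety checks described above. Thresholds
# and action names are assumptions for the example only.

def safety_check(hub_supply_flow, sum_device_flows, hub_return_flow,
                 device_temps_c, loop_pressure_kpa,
                 flow_tolerance=0.05, temp_limit_c=95.0, pressure_limit_kpa=500.0):
    actions = []
    # A mismatch between flow leaving the hub, flow received at the devices, and
    # flow returned to the hub may indicate a leak: stop flow and purge.
    if (abs(hub_supply_flow - sum_device_flows) > flow_tolerance * hub_supply_flow
            or abs(hub_supply_flow - hub_return_flow) > flow_tolerance * hub_supply_flow):
        actions.append("stop_coolant_flow_and_purge")
    # Over-temperature at a remote device: shut that device down.
    for dev, temp_c in device_temps_c.items():
        if temp_c > temp_limit_c:
            actions.append(f"shutdown:{dev}")
    # Loop over-pressure: open relief valves.
    if loop_pressure_kpa > pressure_limit_kpa:
        actions.append("open_relief_valves")
    return actions
```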
All three utilities (power, data, cooling) provided by the combined cables 14, 17a, 17b, 17c may interact with the control system to keep the system safe and efficient. For example, the power sensors 31a located in the power distribution module 20 of the central hub and power supply 23 of the remote devices 12a, 12b, 12c may provide input to the control system for use in modifying cooling delivery or power allocation. Initial system modeling and characterization may be used to provide expected power, flow properties, and thermal performance operating envelopes, which may provide an initial configuration for new devices and a reference for setting system warning and shut-down limits. This initial characteristic envelope may be improved and fine-tuned over time heuristically through machine learning and other techniques. For example, if the system detects additional power flow in power conductors 26 (e.g., due to a sudden load increase in the CPU (Central Processing Unit) in one of the remote devices 12a, 12b, 12c), the control system may proactively increase coolant flow in anticipation of an impending increase in heat sink temperature even before the temperature sensors 31b register it. This interlock between the various sensors 31a, 31b, control systems, and actuators such as pump 19 and valves 39 helps to improve the overall responsiveness and stability of the complete system.
In one or more embodiments, the central hub 10 may utilize control algorithms that know what proportion of bandwidth and power is being used by each of the remote devices 12a, 12b, 12c, and use this data to predict each device's energy and cooling needs. This may be used to ensure that the cooling and power capabilities remain in balance with each remote device's needs and are fairly allocated across the network. As previously noted, machine learning techniques may be employed to automatically establish system characteristic response times, thereby improving power and cooling control loops heuristically over time.
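As a rough illustration of fair allocation across the network, the sketch below divides a shared power or cooling budget in proportion to each device's recent usage, subject to a per-device cap; the budget, usage figures, and caps are assumptions chosen for the example.

```python
# Illustrative sketch only: dividing a shared power or cooling budget in
# proportion to each device's recent usage, with a per-device cap.

def allocate(budget: float, usage: dict, cap: float) -> dict:
    """usage: device_id -> recent demand; returns device_id -> allocated amount."""
    total = sum(usage.values()) or 1.0
    return {dev: min(cap, budget * u / total) for dev, u in usage.items()}

power_w = allocate(budget=1500.0, usage={"12a": 200, "12b": 600, "12c": 300}, cap=1000.0)
coolant_lpm = allocate(budget=6.0, usage={"12a": 200, "12b": 600, "12c": 300}, cap=4.0)
```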
Additional details of splitting, monitoring, and controlling (managing, allocating) the power, data, and cooling and the control system are described further below with respect to
As previously noted, the cables 14, 17a, 17b, 17c may comprise various configurations of power conductors, optical fiber, and coolant tubes. These components, along with one or more additional components that may be used to isolate selected elements from each other, manage thermal conductivity between the elements, provide thermal paths, or provide protection and strength, are contained within an outer jacket of the cable. The coolant tubes may have various cross-sectional shapes and arrangements, which may yield more space and thermally efficient cables. Supply and return tube wall material thermal conductivity may be adjusted to optimize overall system cooling.
The cable may also be configured to prevent heat loss through supply-return tube-tube conduction, external environment conduction, coolant tube-power conduction, or any combination of these or other conditions. For example, a thermal isolation material may be located between coolant tubes to prevent heat loss. The thermal isolation material may also be placed between the coolant tubes and the outer jacket. In another embodiment, one or both coolant tubes may be provided with a low thermal impedance path to the outside. Thermal paths may also be provided between the power conductors and one of the coolant tubes to use some of the cooling power of the loop to keep the power conductors in the cables cool.
In one or more embodiments, in order to reduce fluid frictional effects, tube interiors may be treated with hydrophobic coatings and the coolant may include surfactants. Also, the supply and return coolant tubes may be composed of materials having different conductive properties so that the complete cable assembly may be thermally tuned to enhance system performance. It is to be understood that the configuration, arrangement, and number of power wires, optical fibers, coolant tubes, and insulation regions, shields, coatings, or layers described herein are only examples and that other configurations or arrangements may be used without departing from the scope of the embodiments.
The network device 30 may include any number of processors 32 (e.g., single or multi-processor computing device or system), which may communicate with a forwarding engine or packet forwarder operable to process a packet or packet header. The processor 32 may receive instructions from a software application or module, which causes the processor to perform functions of one or more embodiments described herein. The processor 32 may also operate one or more components of the control system 33. The control system (controller) 33 may comprise components (modules, code, software, logic) located at the central hub 10 and the remote device 12, and interconnected through the combined cable 14, 17 (
Memory 34 may be a volatile memory or non-volatile storage, which stores various applications, operating systems, modules, and data for execution and use by the processor 32. For example, components of the optical module 38, control logic for cooling components 35, or other parts of the control system 33 (e.g., code, logic, or firmware, etc.) may be stored in the memory 34. The network device 30 may include any number of memory components.
Logic may be encoded in one or more tangible media for execution by the processor 32. For example, the processor 32 may execute code stored in a computer-readable medium such as memory 34. The computer-readable medium may be, for example, electronic (e.g., RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable read-only memory)), magnetic, optical (e.g., CD, DVD), electromagnetic, semiconductor technology, or any other suitable medium. In one example, the computer-readable medium comprises a non-transitory computer-readable medium. Logic may be used to perform one or more functions described below with respect to the flowcharts of
The interfaces 36 may comprise any number of interfaces (e.g., power, data, and fluid connectors, line cards, ports, combined power, data, and cooling connectors) for receiving power, data, and cooling, or transmitting power, data, and cooling to other devices. A network interface may be configured to transmit or receive data using a variety of different communications protocols and may include mechanical, electrical, and signaling circuitry for communicating data over physical links coupled to the network or wireless interfaces. One or more of the interfaces 36 may be configured for PoE+F+C, PoE+F, PoE, PoF (Power over Fiber), or similar operation.
The optical module 38 may comprise hardware or software for use in power detection, power monitor and control, or power enable/disable, as described below. The optical module 38 may further comprise one or more of the processor or memory components, or an interface for receiving power and optical data from the cable at a fiber connector, for delivering power and signal data to the network device, or for transmitting control signals to the power source, for example. Power may be supplied to the optical module by the power supply 37, and the optical module (e.g., PoE+F, PoE+F+C optical module) 38 may provide power to the rest of the components at the network device 30.
In one embodiment, the optical module 38 comprises an optical transceiver (optical module, optical device, optics module, transceiver, silicon photonics optical transceiver) configured to source or receive power and data, as described in U.S. patent application Ser. No. 15/707,976 (“Power Delivery Through an Optical System”, filed Sep. 18, 2017), incorporated herein by reference in its entirety. The transceiver modules operate as an engine that bidirectionally converts optical signals to electrical signals or in general as an interface to the network element copper wire or optical fiber. In one or more embodiments, the optical transceiver may be a pluggable transceiver module in any form factor (e.g., SFP (Small Form-Factor Pluggable), QSFP (Quad Small Form-Factor Pluggable), CFP (C Form-Factor Pluggable), and the like), and may support data rates up to 400 Gbps, for example. Hosts for these pluggable optical modules include line cards on the central hub 10 or network devices 12 (
The optical transceiver may also be configured for operation with AOC (Active Optical Cable) and form factors used in UWB (Ultra-Wideband) applications, including, for example, Ultra HDMI (High-Definition Multimedia Interface), serial high bandwidth cables (e.g., Thunderbolt), and other form factors. Also, it may be noted that the optical transceivers may be configured for operation in point-to-multipoint or multipoint-to-point topology. For example, QSFP may break out to SFP+. One or more embodiments may be configured to allow for load shifting.
It is to be understood that the network device 30 shown in
The remote network device 42 includes optical/electrical components 49 for receiving optical data and converting it to electrical signals (or converting electrical signals to optical data) and power components including power detection module 46, power monitor and control unit 47, and power enable/disable module 48. The power components 46, 47, 48 may be isolated from the optical components 49 via an isolation component (e.g., isolation material or element), which electromagnetically isolates the power circuit from the optical components to prevent interference with operation of the optics.
In one or more embodiments, the electrical distribution system 44 comprises a pulsed power system set up with an interleave pattern, where each packet of energy 41 is directed to a different remote device, repeating after N packets. Each of the remote devices 42 receives all power packets from the combined cable, but only draws energy from the specific packets as needed and negotiated with a central energy manager (control system 45 at central hub 40), and would appear as a suitably high impedance load for all other packets. The remote devices 42 that need more energy than others have more power timeslots allocated to them in the interleave frame. As the remote device's power demands increase, its local energy reserves (e.g., hold-up capacitor in its power supply 23 (
In one embodiment, the system is integrated with an SMPS (Switched-Mode Power Supply) in a first stage power converter/isolator/pre-regulator in each remote device 42. If the remote device 42 needs more or less energy, it notifies the central hub 40 via the data network (power message 51), and the interleave pattern is dynamically tailored as needed.
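One possible way to apportion the interleave frame, shown purely as an illustrative assumption rather than the disclosed protocol, is to assign timeslots in proportion to each device's negotiated power and spread each device's slots across the frame:

```python
# Minimal sketch (assumptions, not the disclosed protocol): assigning the N
# timeslots of a pulsed-power interleave frame so devices that negotiated more
# power receive more slots. A real system would renegotiate as power messages arrive.

def build_interleave_frame(requests_w, frame_slots=16):
    """requests_w: dict device_id -> negotiated power; returns one device_id per slot."""
    total = sum(requests_w.values())
    # Largest-remainder apportionment of slots to devices.
    ideal = {dev: frame_slots * w / total for dev, w in requests_w.items()}
    slots = {dev: int(x) for dev, x in ideal.items()}
    leftover = frame_slots - sum(slots.values())
    for dev, _ in sorted(ideal.items(), key=lambda kv: kv[1] - int(kv[1]), reverse=True)[:leftover]:
        slots[dev] += 1
    # Spread each device's slots across the frame rather than bunching them.
    frame, counters = [], dict(slots)
    while len(frame) < frame_slots:
        for dev in requests_w:
            if counters[dev] > 0:
                frame.append(dev)
                counters[dev] -= 1
    return frame

frame = build_interleave_frame({"42a": 300.0, "42b": 900.0, "42c": 300.0})
```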
The power detection module 46 may detect power, energize the optical components 49, and return a status message (packet 56 on optical channel) to the central hub 40. In one embodiment, power is not enabled by the power enable/disable module 48 until the optical transceiver and the source have determined that the device is properly connected and the remote device 42 to be powered is ready to be powered. In one embodiment, the remote device 42 is configured to calculate available power and prevent the cabling system from being energized when it should not be powered (e.g., during a cooling failure). The power detection module 46 may also be operable to detect the type of power applied to the remote device 42, determine if PoE or pulsed power is a more efficient power delivery method, and then use the selected power delivery mode. Additional modes may support other power+data standards (e.g., USB (Universal Serial Bus)).
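The sequencing idea can be sketched as a simple state progression in which power is not enabled until detection, optics bring-up, and readiness checks (including cooling) have all succeeded; the states and function names below are assumptions for illustration only.

```python
# Illustrative sketch (not the disclosed implementation) of power-up sequencing:
# power is enabled only after detection, optics bring-up, and a readiness exchange.
from enum import Enum, auto

class PowerState(Enum):
    DETECT = auto()
    OPTICS_UP = auto()
    WAIT_READY = auto()
    ENABLED = auto()
    FAULT = auto()

def power_up_sequence(detect_power, energize_optics, send_status, source_ready, cooling_ok):
    """Each argument is a callable returning True/False; returns the final state."""
    if not detect_power():
        return PowerState.FAULT
    if not energize_optics():
        return PowerState.FAULT
    send_status()                       # status message back to the central hub
    if not (source_ready() and cooling_ok()):
        return PowerState.WAIT_READY    # stay unpowered, e.g., during a cooling failure
    return PowerState.ENABLED
```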
The power monitor and control module 47 continuously monitors power delivery to ensure that the system can support the needed power delivery, and no safety limits (e.g., voltage, current, ground fault current, arc flash) are exceeded. The power monitor and control device 47 may also monitor optical signaling and disable power if there is a lack of optical transitions or communication with the power source. Temperature, pressure, or flow sensors (described below with respect to
As the workload on a specific device 52 changes, its transmit buffers (e.g., at queue 59) feeding the upstream data channel, and the downstream buffers on the central hub 50 will fill and empty. A central controller 55 in the central network device 50 monitors the buffers for all remote devices 52, and the network adjusts rapidly by allocating more or less bandwidth by dedicating more or fewer timeslots on the network to each remote device 52. In one embodiment, a MAC (Media Access Control) protocol dynamically allocates portions of downstream bandwidth between the remote devices 52 and manages the timing of the upstream packets so that they interleave without interference. As shown in
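For illustration, the sketch below derives per-device downstream bandwidth shares from buffer occupancy, giving fuller queues a larger share of the next scheduling interval while guaranteeing every device a small floor; the names and the floor value are assumptions, not the disclosed MAC protocol.

```python
# Illustrative sketch only: the central controller could derive per-device
# downstream bandwidth shares from buffer occupancy for the next interval.

def bandwidth_shares(queue_bytes, min_share=0.05):
    """queue_bytes: device_id -> bytes buffered; returns device_id -> share of link."""
    # Guarantee every device a small floor so idle devices can still signal demand.
    n = len(queue_bytes)
    floor_total = min_share * n
    total = sum(queue_bytes.values())
    shares = {}
    for dev, q in queue_bytes.items():
        dynamic = (1.0 - floor_total) * (q / total) if total else (1.0 - floor_total) / n
        shares[dev] = min_share + dynamic
    return shares

shares = bandwidth_shares({"52a": 10_000, "52b": 250_000, "52c": 40_000})
```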
The coolant loop 68 comprises a continuous loop of fluid from the central hub 60, through the splitter 63 and the remote devices 62, and back through splitter 63 to the central hub. In this example, the passive distribution splitter 63 comprises two fluid manifolds 63a, 63b for coolant supply and return, respectively. As described above, the distribution splitter 63 splits and recombines coolant flows (e.g., using 1:N and N:1 fluid distribution manifolds). If the system uses compressed air as a coolant, which is exhausted to the atmosphere at each remote device 62, only the supply manifold 63a is used.
Cooling is supplied to the device 62 via cooling (coolant) tubes in the cooling (coolant) loop 68, which provide cooling to the powered equipment through a heat exchanger (cooling tap, heat sink) 69 and returns warm (hot) coolant to the central hub 60. A heat exchanger 67 at the central hub 60 forms the cooling loop 68 with one or more heat exchangers 69 at the remote device 62. For the cooling flows, there may be one or more valves (e.g., servo valve) 70 at the coolant input to each remote device 62. As described below, the control system may adjust coolant valve settings to adjust the coolant flow at one or more of the remote devices.
Distribution plumbing routes the coolant in the cooling loop 68 to various thermal control elements within the network device 62 to actively regulate cooling through the individual flow paths. The remote device 62 may also include any number of distribution manifolds (not shown) with any number of outputs to route the coolant to one or more heat exchangers. If the manifold has multiple outputs, each may be equipped with a valve 70 to regulate the individual flow paths (e.g., adjust coolant valve settings). The distribution manifold may comprise any number of individual manifolds (e.g., supply and return manifolds) to provide any number of cooling branches directed to one or more components within the remote device 62.
Thermal control elements may include liquid cooled heatsinks, heat pipes, or other devices directly attached to the hottest components (e.g., CPUs (Central Processing Units), GPUs (Graphic Processing Units), power supplies, optical components, etc.) to directly remove their heat. The remote device 62 may also include channels in cold plates or in walls of the device's enclosure to cool anything they contact. Air to liquid heat exchangers, which may be augmented by a small internal fan, may be provided to cool the air inside a sealed box. Once the coolant passes through these elements and removes the device's heat, it may pass through additional temperature, pressure, or flow sensors, through another manifold, and out to the coolant return tube.
The coolant loop 68 at the remote device 62 may also include one or more pumps (not shown) to help drive the coolant around the cooling loop or back to the central hub 60, or valves 70 to control flow in one or more branches of the cooling loop. The pump and valve 70 may be controlled by the control system 66 and operate based on control logic (message 72) received from the central hub 60 in response to monitoring at the remote device 62. The flow may be set by an active feedback loop that senses the temperature of a critical thermal load (e.g., the die temperature of a high power semiconductor) and continuously adjusts the flow in the loop that serves its heat exchanger 69.
The cooling loop 68 within the remote device 62 may include any number of sensors 71 for monitoring aggregate and individual branch temperature, pressure, and flow rate at strategic points around the loop (e.g., entering and leaving the device, at critical component locations). The remote device 62 may include, for example, temperature sensors to monitor die temperatures of critical semiconductors, temperatures of critical components (e.g., optical modules, disk drives), coolant temperatures, or the air temperature inside a device's sealed enclosure. The sensors 71 may also be used to check that the remote devices 62 receive approximately the same amount of coolant as supplied by the central hub 60 to help detect leaks or blockage in the cable, and confirm that the temperature and pressure are within specified limits. If, for example, a remote device's main CPU is running too hot, a message may be transmitted through the data channel requesting more coolant flow for the device 62. If the remote device 62 is cooler than required, a message to reduce coolant flow may be sent to economize on the total cooling used in the network. The control system may adjust the coolant flow to maintain a set point temperature. This feedback system ensures that the correct coolant flow is always present. Too much coolant flow wastes energy, while too little coolant flow may cause critical components in the remote devices 62 to overheat and prematurely fail.
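A minimal sketch of the remote-device side of this feedback, with an assumed message format, set point, and deadband (none of which are specified in the disclosure), might look like the following:

```python
# Minimal sketch of the device-side feedback: compare the monitored die temperature
# against a set point with a small deadband and emit a request to raise or lower
# coolant flow. Message format and thresholds are illustrative assumptions.

def coolant_request(device_id, die_temp_c, setpoint_c=70.0, deadband_c=3.0):
    if die_temp_c > setpoint_c + deadband_c:
        return {"device": device_id, "request": "increase_coolant_flow",
                "error_c": die_temp_c - setpoint_c}
    if die_temp_c < setpoint_c - deadband_c:
        return {"device": device_id, "request": "decrease_coolant_flow",
                "error_c": die_temp_c - setpoint_c}
    return None  # within the deadband: no message, economizing on control traffic

msg = coolant_request("62", die_temp_c=78.5)  # -> request for more coolant flow
```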
As shown in the example of
It is to be understood that the network devices and control systems shown in
As described above with respect to
It is to be understood that the processes shown in
Although the method and apparatus have been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations made to the embodiments without departing from the scope of the embodiments. Accordingly, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
The present application is a continuation of U.S. patent application Ser. No. 15/918,972, entitled SPLITTING OF COMBINED DELIVERY POWER, DATA, AND COOLING IN A COMMUNICATIONS NETWORK, filed Mar. 12, 2018, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3335324 | Buckeridge | Aug 1967 | A |
3962529 | Kubo | Jun 1976 | A |
4811187 | Nakajima | Mar 1989 | A |
4997388 | Dale | Mar 1991 | A |
5652893 | Ben-Meir | Jul 1997 | A |
6008631 | Johari | Dec 1999 | A |
6220955 | Posa | Apr 2001 | B1 |
6259745 | Chan | Jul 2001 | B1 |
6636538 | Stephens | Oct 2003 | B1 |
6685364 | Brezina | Feb 2004 | B1 |
6784790 | Lester | Aug 2004 | B1 |
6826368 | Koren | Nov 2004 | B1 |
6855881 | Khoshnood | Feb 2005 | B2 |
6860004 | Hirano | Mar 2005 | B2 |
7325150 | Lehr | Jan 2008 | B2 |
7420355 | Liu | Sep 2008 | B2 |
7490996 | Sommer | Feb 2009 | B2 |
7492059 | Peker | Feb 2009 | B2 |
7509505 | Randall | Mar 2009 | B2 |
7566987 | Black et al. | Jul 2009 | B2 |
7583703 | Bowser | Sep 2009 | B2 |
7589435 | Metsker | Sep 2009 | B2 |
7593747 | Karam | Sep 2009 | B1 |
7603570 | Schindler | Oct 2009 | B2 |
7616465 | Vinciarelli | Nov 2009 | B1 |
7813646 | Furey | Oct 2010 | B2 |
7835389 | Yu | Nov 2010 | B2 |
7854634 | Filipon | Dec 2010 | B2 |
7881072 | DiBene | Feb 2011 | B2 |
7915761 | Jones | Mar 2011 | B1 |
7921307 | Karam | Apr 2011 | B2 |
7924579 | Arduini | Apr 2011 | B2 |
7940787 | Karam | May 2011 | B2 |
7973538 | Karam | Jul 2011 | B2 |
8020043 | Karam | Sep 2011 | B2 |
8037324 | Hussain | Oct 2011 | B2 |
8081589 | Gilbrech | Dec 2011 | B1 |
8184525 | Karam | May 2012 | B2 |
8276397 | Carlson | Oct 2012 | B1 |
8279883 | Diab | Oct 2012 | B2 |
8310089 | Schindler | Nov 2012 | B2 |
8319627 | Chan | Nov 2012 | B2 |
8345439 | Goergen | Jan 2013 | B1 |
8350538 | Cuk | Jan 2013 | B2 |
8358893 | Sanderson | Jan 2013 | B1 |
8386820 | Diab | Feb 2013 | B2 |
8638008 | Baldwin et al. | Jan 2014 | B2 |
8700923 | Fung | Apr 2014 | B2 |
8712324 | Corbridge | Apr 2014 | B2 |
8750710 | Hirt | Jun 2014 | B1 |
8768528 | Millar et al. | Jul 2014 | B2 |
8781637 | Eaves | Jul 2014 | B2 |
8787775 | Earnshaw | Jul 2014 | B2 |
8829917 | Lo | Sep 2014 | B1 |
8836228 | Xu | Sep 2014 | B2 |
8842430 | Hellriegel | Sep 2014 | B2 |
8849471 | Daniel | Sep 2014 | B2 |
8966747 | Vinciarelli | Mar 2015 | B2 |
9019895 | Li | Apr 2015 | B2 |
9024473 | Huff | May 2015 | B2 |
9184795 | Eaves | Nov 2015 | B2 |
9189036 | Ghoshal | Nov 2015 | B2 |
9189043 | Vorenkamp | Nov 2015 | B2 |
9273906 | Goth | Mar 2016 | B2 |
9319101 | Lontka | Apr 2016 | B2 |
9321362 | Woo | Apr 2016 | B2 |
9373963 | Kuznelsov | Jun 2016 | B2 |
9419436 | Eaves | Aug 2016 | B2 |
9484771 | Braylovskiy | Nov 2016 | B2 |
9510479 | Vos | Nov 2016 | B2 |
9531551 | Balasubramanian | Dec 2016 | B2 |
9590811 | Hunter, Jr. | Mar 2017 | B2 |
9618714 | Murray | Apr 2017 | B2 |
9640998 | Dawson | May 2017 | B2 |
9665148 | Hamdi | May 2017 | B2 |
9693244 | Maruhashi | Jun 2017 | B2 |
9734940 | McNutt | Aug 2017 | B1 |
9853689 | Eaves | Dec 2017 | B2 |
9874930 | Vavilala | Jan 2018 | B2 |
9882656 | Sipes, Jr. | Jan 2018 | B2 |
9893521 | Lowe | Feb 2018 | B2 |
9948198 | Imai | Apr 2018 | B2 |
9979370 | Xu | May 2018 | B2 |
9985600 | Xu | May 2018 | B2 |
10007628 | Pitigoi-Aron | Jun 2018 | B2 |
10028417 | Schmidtke | Jul 2018 | B2 |
10128764 | Vinciarelli | Nov 2018 | B1 |
10248178 | Brooks | Apr 2019 | B2 |
10263526 | Sandusky et al. | Apr 2019 | B2 |
10407995 | Moeny | Sep 2019 | B2 |
10439432 | Eckhardt | Oct 2019 | B2 |
10541543 | Eaves | Jan 2020 | B2 |
10735105 | Goergen et al. | Aug 2020 | B2 |
20010024373 | Cuk | Sep 2001 | A1 |
20020126967 | Panak | Sep 2002 | A1 |
20040000816 | Khoshnood | Jan 2004 | A1 |
20040033076 | Song | Feb 2004 | A1 |
20040043651 | Bain | Mar 2004 | A1 |
20040073703 | Boucher | Apr 2004 | A1 |
20040264214 | Xu | Dec 2004 | A1 |
20050044431 | Lang | Feb 2005 | A1 |
20050197018 | Lord | Sep 2005 | A1 |
20050268120 | Schindler | Dec 2005 | A1 |
20060202109 | Delcher | Sep 2006 | A1 |
20060209875 | Lum | Sep 2006 | A1 |
20070103168 | Batten | May 2007 | A1 |
20070143508 | Linnman | Jun 2007 | A1 |
20070173202 | Binder | Jul 2007 | A1 |
20070236853 | Crawley | Oct 2007 | A1 |
20070263675 | Lum | Nov 2007 | A1 |
20070284946 | Robbins | Dec 2007 | A1 |
20070288125 | Quaratiello | Dec 2007 | A1 |
20080054720 | Lum | Mar 2008 | A1 |
20080198635 | Hussain | Aug 2008 | A1 |
20080229120 | Diab | Sep 2008 | A1 |
20080310067 | Diab | Dec 2008 | A1 |
20090027033 | Diab | Jan 2009 | A1 |
20090132679 | Binder | May 2009 | A1 |
20100077239 | Diab | Mar 2010 | A1 |
20100117808 | Karam | May 2010 | A1 |
20100171602 | Kabbara | Jul 2010 | A1 |
20100190384 | Lanni | Jul 2010 | A1 |
20100237846 | Vetteth | Sep 2010 | A1 |
20100290190 | Chester | Nov 2010 | A1 |
20110004773 | Hussain | Jan 2011 | A1 |
20110007664 | Diab | Jan 2011 | A1 |
20110057612 | Taguchi | Mar 2011 | A1 |
20110083824 | Rogers | Apr 2011 | A1 |
20110228578 | Serpa | Sep 2011 | A1 |
20110266867 | Schindler | Dec 2011 | A1 |
20110290497 | Stenevik | Dec 2011 | A1 |
20120043935 | Dyer | Feb 2012 | A1 |
20120064745 | Ottliczky | Mar 2012 | A1 |
20120170927 | Huang | Jul 2012 | A1 |
20120201089 | Barth | Aug 2012 | A1 |
20120231654 | Conrad | Sep 2012 | A1 |
20120287984 | Lee | Nov 2012 | A1 |
20120317426 | Hunter, Jr. | Dec 2012 | A1 |
20120319468 | Schneider | Dec 2012 | A1 |
20130077923 | Weem | Mar 2013 | A1 |
20130079633 | Weem | Mar 2013 | A1 |
20130103220 | Eaves | Apr 2013 | A1 |
20130249292 | Blackwell, Jr. | Sep 2013 | A1 |
20130272721 | Veen | Oct 2013 | A1 |
20130329344 | Tucker | Dec 2013 | A1 |
20140111180 | Vladan | Apr 2014 | A1 |
20140126151 | Campbell | May 2014 | A1 |
20140129850 | Paul | May 2014 | A1 |
20140258742 | Chien | Sep 2014 | A1 |
20140258813 | Lusted | Sep 2014 | A1 |
20140265550 | Milligan | Sep 2014 | A1 |
20140355204 | Gusat | Dec 2014 | A1 |
20140372773 | Heath | Dec 2014 | A1 |
20150078740 | Sipes, Jr. | Mar 2015 | A1 |
20150106539 | Leinonen | Apr 2015 | A1 |
20150115741 | Dawson | Apr 2015 | A1 |
20150207317 | Radermacher | Jul 2015 | A1 |
20150215001 | Eaves | Jul 2015 | A1 |
20150215131 | Paul | Jul 2015 | A1 |
20150333918 | White, III | Nov 2015 | A1 |
20150340818 | Scherer | Nov 2015 | A1 |
20150375695 | Grimm | Dec 2015 | A1 |
20160018252 | Hanson | Jan 2016 | A1 |
20160020911 | Sipes, Jr. | Jan 2016 | A1 |
20160064938 | Balasubramanian | Mar 2016 | A1 |
20160111877 | Eaves | Apr 2016 | A1 |
20160118784 | Saxena | Apr 2016 | A1 |
20160133355 | Glew | May 2016 | A1 |
20160134331 | Eaves | May 2016 | A1 |
20160142217 | Gardner | May 2016 | A1 |
20160188427 | Chandrashekar | Jun 2016 | A1 |
20160197600 | Kuznetsov | Jul 2016 | A1 |
20160365967 | Tu | Jul 2016 | A1 |
20160241148 | Kizilyalli | Aug 2016 | A1 |
20160262288 | Chainer | Sep 2016 | A1 |
20160273722 | Crenshaw | Sep 2016 | A1 |
20160294500 | Chawgo | Oct 2016 | A1 |
20160294568 | Chawgo et al. | Oct 2016 | A1 |
20160308683 | Pischl | Oct 2016 | A1 |
20160352535 | Hiscock | Dec 2016 | A1 |
20170041152 | Sheffield | Feb 2017 | A1 |
20170041153 | Picard | Feb 2017 | A1 |
20170054296 | Daniel | Feb 2017 | A1 |
20170110871 | Foster | Apr 2017 | A1 |
20170123466 | Carnevale | May 2017 | A1 |
20170146260 | Ribbich | May 2017 | A1 |
20170155517 | Cao | Jun 2017 | A1 |
20170155518 | Yang | Jun 2017 | A1 |
20170164525 | Chapel | Jun 2017 | A1 |
20170214236 | Eaves | Jul 2017 | A1 |
20170229886 | Eaves | Aug 2017 | A1 |
20170234738 | Ross | Aug 2017 | A1 |
20170244318 | Giuliano | Aug 2017 | A1 |
20170248976 | Moller | Aug 2017 | A1 |
20170294966 | Jia | Oct 2017 | A1 |
20170325320 | Wendt | Nov 2017 | A1 |
20180024964 | Mao | Jan 2018 | A1 |
20180053313 | Smith | Feb 2018 | A1 |
20180054083 | Hick | Feb 2018 | A1 |
20180060269 | Kessler | Mar 2018 | A1 |
20180088648 | Otani | Mar 2018 | A1 |
20180098201 | Torello | Apr 2018 | A1 |
20180102604 | Keith | Apr 2018 | A1 |
20180123360 | Eaves | May 2018 | A1 |
20180159430 | Albert | Jun 2018 | A1 |
20180188712 | MacKay | Jul 2018 | A1 |
20180191513 | Hess | Jul 2018 | A1 |
20180254624 | Son | Sep 2018 | A1 |
20180313886 | Mlyniec | Nov 2018 | A1 |
20190126764 | Fuhrer | May 2019 | A1 |
20190267804 | Matan | Aug 2019 | A1 |
20190280895 | Mather | Sep 2019 | A1 |
Number | Date | Country |
---|---|---|
1209880 | Jul 2005 | CN |
201689347 | Dec 2010 | CN |
204836199 | Dec 2015 | CN |
205544597 | Aug 2016 | CN |
104081237 | Oct 2016 | CN |
104412541 | May 2019 | CN |
1936861 | Jun 2008 | EP |
2120443 | Nov 2009 | EP |
2257009 | Jan 2010 | EP |
2693688 | Feb 2014 | EP |
WO199316407 | Aug 1993 | WO |
WO2010053542 | May 2010 | WO |
WO2017054030 | Apr 2017 | WO |
WO2017167926 | Oct 2017 | WO |
WO2018017544 | Jan 2018 | WO |
WO2019023731 | Feb 2019 | WO |
Entry |
---|
https://www.fischerconnectors.com/US/en/products/fiberoptic. |
http://www.strantech.com/products/tfoca-genx-hybrid-2x2-fiber-optic-copper-connector/. |
http://www.qpcfiber.com/product/connectors/e-link-hybrid-connector/. |
https://www.lumentum.com/sites/default/files/technical-library-items/poweroverfiber-tn-pv-ae_0.pdf. |
“Network Remote Power Using Packet Energy Transfer”, Eaves et al., www.voltserver.com, Sep. 2012. |
Product Overview, “Pluribus VirtualWire Solution”, Pluribus Networks, PN-PO-VWS-05818, https://www.pluribusnetworks.com/assets/Pluribus-VirtualWire-PO-50918.pdf, May 2018, 5 pages. |
Implementation Guide, “Virtual Chassis Technology Best Practices”, Juniper Networks, 8010018-009-EN, Jan. 2016, https://www.juniper.net/US/en/local/pdf/implementation-guides/8010018-en.pdf, 29 pages. |
Yencheck, Thermal Modeling of Portable Power Cables, 1993. |
Zhang, Machine Learning-Based Temperature Prediction for Runtime Thermal Management across System Components, Mar. 2016. |
Data Center Power Equipment Thermal Guidelines and Best Practices. |
Dynamic Thermal Rating of Substation Terminal Equipment by Rambabu Adapa, 2004. |
Chen, “Real-Time Temperature Estimation for Power MOSFETs Considering Thermal Aging Effects”, IEEE Transactions on Device and Materials Reliability, vol. 14, No. 1, Mar. 2014. |
Jingquan Chen et al.: “Buck-boost PWM converters having two independently controlled switches”, 32nd Annual IEEE Power Electronics Specialists Conference. PESC 2001. Conference Proceedings, Vancouver, Canada, Jun. 17-21, 2001; [Annual Power Electronics Specialists Conference], New York, NY: IEEE, US, vol. 2, Jun. 17, 2001 (Jun. 17, 2001), pp. 736-741, XP010559317, DOI: 10.1109/PESC.2001.954206, ISBN 978-0-7803-7067-8, paragraph [Section II]; figure 3. |
Cheng K W E et al.: “Constant Frequency, Two-Stage Quasiresonant Convertor”, IEE Proceedings B. Electrical Power Applications, 1271980 1, vol. 139, No. 3, May 1, 1992 (May 1, 1992), pp. 227-237, XP000292493, the whole document. |
Petition for Post Grant Review of U.S. Pat. No. 10,735,105 [Public], filed Feb. 16, 2021, PGR 2021-00055. |
Petition for Post Grant Review of U.S. Pat. No. 10,735,105 [Public], filed Feb. 16, 2021, PGR 2021-00056. |
Eaves, S. S., Network Remote Powering Using Packet Energy Transfer, Proceedings of IEEE International Conference on Telecommunications Energy (INTELEC) 2012, Scottsdale, AZ, Sep. 30-Oct. 4, 2012 (IEEE 2012) (EavesIEEE). |
Edelstein S., Updated 2016 Tesla Model S also gets new 75-kWh battery option, (Jun. 19, 2016), archived Jun. 19, 2016 by Internet Archive Wayback machine at https://web.archive.org/web/20160619001148/https://www.greencarreports.com/news/1103782_updated-2016-tesla-model-s-also-gets-new-75-kwh-battery-option (“Edelstein”). |
NFPA 70 National Electrical Code, 2017 Edition (“NEC”). |
International Standard IEC 62368-1 Edition 2.0 (2014), ISBN 978-2-8322-1405-3 (“IEC-62368”). |
International Standard IEC/TS 60479-1 Edition 4.0 (2005), ISBN 2-8318-8096-3 (“IEC-60479”). |
International Standard IEC 60950-1 Edition 2.2 (2013), ISBN 978-2-8322-0820-5 (“IEC-60950”). |
International Standard IEC 60947-1 Edition 5.0 (2014), ISBN 978-2-8322-1798-6 (“IEC-60947”). |
Tanenbaum, A. S., Computer Networks, Third Edition (1996) (“Tanenbaum”). |
Stallings, W., Data and Computer Communications, Fourth Edition (1994) (“Stallings”). |
Alexander, C. K., Fundamentals of Electric Circuits, Indian Edition (2013) (“Alexander”). |
Hall, S. H., High-Speed Digital System Design, A Handbook of Interconnect Theory and Design Practices (2000) (“Hall”). |
Sedra, A. S., Microelectronic Circuits, Seventh Edition (2014) (“Sedra”). |
Lathi, B. P., Modern Digital and Analog Communication Systems, Fourth Edition (2009) (“Lathi”). |
Understanding 802.3at PoE Plus Standard Increases Available Power (Jun. 2011) (“Microsemi”). |
Number | Date | Country | |
---|---|---|---|
20200221601 A1 | Jul 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15918972 | Mar 2018 | US |
Child | 16819431 | US |