This disclosure generally relates to information handling systems, and more particularly relates to power and thermal management of disaggregated server subsystems.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option is an information handling system. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes. Because technology and information handling needs and requirements may vary between different applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software resources that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
A disaggregated information handling system may include processing sleds and an abstraction layer module. The abstraction layer module may discover processing elements of the processing sleds, assign an availability score to each of the processing elements, receive an allocation request for an allocation of at least one of the processing elements, and allocate a first one of the processing elements based upon the first processing element having a highest availability score.
It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the Figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements. Embodiments incorporating teachings of the present disclosure are shown and described with respect to the drawings presented herein, in which:
The use of the same reference symbols in different drawings indicates similar or identical items.
The following description in combination with the Figures is provided to assist in understanding the teachings disclosed herein. The following discussion will focus on specific implementations and embodiments of the teachings. This focus is provided to assist in describing the teachings, and should not be interpreted as a limitation on the scope or applicability of the teachings. However, other teachings can certainly be used in this application. The teachings can also be used in other applications, and with several different types of architectures, such as distributed computing architectures, client/server architectures, or middleware server architectures and associated resources.
In a particular case, sleds 110, 115, 120, 125, 130, 135, 140, and 145 each represent a removable chassis-mounted device. Here, a common chassis can be rack mountable in a standard 19-inch rack, and can provide common functionality for the installed sleds, such as power supplies, cooling fans, management, I/O, storage, and the like. Here further, each installed sled can represent a dedicated computing capability. For example, CPU sleds 110 and 115 can include a large number of processors, with enough associated memory, storage, and I/O to support the processors, but without as much memory, storage, and I/O as would normally be associated with processors in a server system. Likewise, GPU sleds 120 and 125 may include a large number of GPUs and a modest general-purpose CPU sufficient to manage the GPUs, and with sufficient I/O capacity to handle the I/O needs of the GPUs. Further, memory sleds 130 and 135 may include large arrays of memory devices, such as Dual In-Line Memory Modules (DIMMs), again with sufficient processing and I/O to manage the memory devices. Finally, I/O sleds 140 and 145 may include large I/O capacity for storage arrays and network interfaces. Other types of sleds, such as dedicated storage sleds, network interface sleds, Field Programmable Gate Array (FPGA) sleds, Digital Signal Processing (DSP) sleds, and the like, may be included in disaggregated information handling system 100, as needed or desired. In another case, one or more installed sleds can represent a general-purpose blade server with a more balanced mix of CPUs, co-processors such as GPUs, FPGAs, or DSPs, memory devices, and I/O capacity. Here, the various types of processing elements of a sled, while resident on a common removable chassis-mounted device, may be logically separated into distinct processing modules based upon the type of processing elements included thereon: CPUs, co-processors, memory devices, and I/O devices, as needed or desired.
It may be understood that one or more of sleds 110, 115, 120, 125, 130, 135, 140, and 145 may represent an information handling system on its own, except that the information handling system in this case is provided to facilitate the use of each sled as a particular processing capacity. For example, GPU sleds 120 and 125 may include one or more general purpose processors or CPUs, memory devices, storage devices, I/O devices, and the like. However, such additional processing elements are provided to facilitate the function of the GPU sleds in providing GPU co-processing capabilities. For example, a CPU, memory, storage, and I/O may be provided to facilitate the receipt of processing tasks and the communication of the finished results of the processing tasks within disaggregated information handling system 100.
SAL 160 represents a system orchestrator that presents the processing elements of sleds 110, 115, 120, 125, 130, 135, 140, and 145 as separate but virtualizable mix-and-match capabilities, and that allocates the processing elements as needed to virtual machines 170 and 172. For example, a particular virtual machine may be instantiated to provide a host for a workflow that has a fairly steady processing demand in terms of processing threads needed (CPU sleds 110 and 115), memory capacity (memory sleds 130 and 135), and storage and I/O capacity (I/O sleds 140 and 145). Another virtual machine may be instantiated to provide a host for a workflow that has a heavy demand for GPU processing (GPU sleds 120 and 125). A third virtual machine may be instantiated to provide a host for a workflow that has varying demands for processing power, GPU processing, memory, and I/O. Here, SAL 160 can operate to allocate the processing elements of sleds 110, 115, 120, 125, 130, 135, 140, and 145 to meet the needs of the virtual machines as the virtual machines are instantiated. As such, SAL 160 operates to dispatch and monitor workloads to the remote resources of sleds 110, 115, 120, 125, 130, 135, 140, and 145 as needed or desired. The details of allocating resources and dispatching and monitoring workloads in a disaggregated information handling system by a Server Abstraction Layer are known in the art and will not be further described herein except as needed to illustrate the present embodiments. SAL 160 may be implemented as hardware, software, firmware, or the like, as needed or desired. In particular, SAL 160 may represent a particular information handling system instantiated within disaggregated information handling system 100, or may be implemented utilizing a set of the processing elements of the disaggregated information handling system. In another case, SAL 160 may represent a module for managing disaggregated information handling system 100 that is included in operating system 180, or within a system Basic Input/Output System or Unified Extensible Firmware Interface (BIOS/UEFI) of the disaggregated information handling system.
Broadly, the disaggregation of processing elements as described herein is referred to under the moniker of "XaaS:" CPU-as-a-Service (CPUaaS), GPU-as-a-Service (GPUaaS), Memory-as-a-Service, I/O-as-a-Service, and the like. The ability to effectively disaggregate the processing elements of disaggregated information handling system 100, and to provide XaaS functionality, is facilitated by the emergence of various high-speed open-standard data communication standards for communications between processor/compute nodes, co-processor nodes, memory arrays, storage arrays, network interfaces, and the like. Examples of such communication standards include the Gen-Z Consortium standard, the Open Coherent Accelerator Processor Interface (OpenCAPI) standard, the Open Memory Interface (OMI) standard, the Compute Express Link (CXL) standard, or the like. As illustrated herein, the disaggregated information handling systems of the present embodiments are shown as linking the various sleds via Gen-Z links, but this is not necessarily so, and other high-speed communication links may be utilized in connection with the present embodiments, as needed or desired. It will be further understood that the division of processing capacities as shown and described herein is for illustrative purposes, and is not meant to limit the scope of the teachings herein. For example, other divisions may be contemplated as needed or desired, such as where various sleds may incorporate CPU and memory capacities, or where other types of co-processors, such as FPGAs or DSPs, are utilized in place of, or in addition to, the illustrated GPU sleds.
In a particular embodiment, one or more of sleds 210, 215, 220, 225, 230, 235, 240, and 245 represent an information handling system on its own, except that the information handling system in this case is provided to facilitate the use of each sled as a particular processing capacity. For example, GPU sleds 220 and 225 may include one or more general purpose processors or CPUs, memory devices, storage devices, I/O devices, and the like. However, such additional processing elements are provided to facilitate the function of the GPU sleds in providing GPU co-processing capabilities. For example, a CPU, memory, storage, and I/O may be provided to facilitate the receipt of processing tasks and the communication of the finished results of the processing tasks within disaggregated information handling system 200.
Sleds 110, 115, 120, 125, 130, 135, 140, and 145 each include an associated baseboard management controller (BMC) 112, 117, 122, 127, 132, 137, 142, and 147, respectively. BMCs 112, 117, 122, 127, 132, 137, 142, and 147 are connected to system management engine 190 by a management network. BMCs 112, 117, 122, 127, 132, 137, 142, and 147 each represent one or more processing devices, such as a dedicated BMC System-on-a-Chip (SoC) device, one or more associated memory devices, one or more network interface devices, a complex programmable logic device (CPLD), and the like, that operate together to provide a management environment for disaggregated information handling system 100. In particular, each BMC 112, 117, 122, 127, 132, 137, 142, and 147 is connected to various components of the associated sled 110, 115, 120, 125, 130, 135, 140, and 145 via various internal communication interfaces, such as a Low Pin Count (LPC) interface, an Inter-Integrated-Circuit (I2C) interface, a Peripheral Component Interconnect-Express (PCIe) interface, or the like, to provide an out-of-band (OOB) mechanism to retrieve information related to the operation of the associated sled, to provide BIOS/UEFI or system firmware updates, and to manage non-processing components of the sleds, such as system cooling fans and power supplies. An example of BMCs 112, 117, 122, 127, 132, 137, 142, and 147 may include a commercially available BMC product or other device that operates in accordance with an Intelligent Platform Management Initiative (IPMI) specification, a Web Services Management (WSMan) interface, a Redfish Application Programming Interface (API), another Distributed Management Task Force (DMTF) standard, or another management standard, and can include an Integrated Dell Remote Access Controller (iDRAC), an Embedded Controller (EC), or the like. In a particular case where sleds 110, 115, 120, 125, 130, 135, 140, and 145 each represent a removable chassis-mounted device, BMCs 112, 117, 122, 127, 132, 137, 142, and 147 may represent dedicated BMCs, one for each sled. Here, the common chassis may include a Chassis Management Controller (CMC) that is connected to the BMC of each sled in the chassis, and that provides a central point of communication for managing the functions of the chassis and of the sleds within the chassis.
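For illustration only, the following sketch shows one way a management engine might pull per-sled power and thermal telemetry from a BMC over a Redfish interface. The resource paths follow the DMTF Redfish schema, but the BMC address, credentials, and chassis identifier are hypothetical placeholders, and this is a minimal sketch rather than the management interface of any particular product.

```python
# Illustrative sketch only: polling per-sled power/thermal telemetry from a BMC
# over Redfish. Host, credentials, and chassis ID are hypothetical placeholders.
import requests

BMC_HOST = "https://bmc-sled-110.example.local"   # hypothetical BMC address
AUTH = ("admin", "password")                      # placeholder credentials

def get_sled_telemetry(chassis_id="System.Embedded.1"):
    """Return (inlet temperature in degrees C, total power draw in W) for one sled."""
    thermal = requests.get(
        f"{BMC_HOST}/redfish/v1/Chassis/{chassis_id}/Thermal",
        auth=AUTH, verify=False).json()
    power = requests.get(
        f"{BMC_HOST}/redfish/v1/Chassis/{chassis_id}/Power",
        auth=AUTH, verify=False).json()
    inlet_temp = thermal["Temperatures"][0]["ReadingCelsius"]
    total_watts = power["PowerControl"][0]["PowerConsumedWatts"]
    return inlet_temp, total_watts

if __name__ == "__main__":
    temp_c, watts = get_sled_telemetry()
    print(f"Sled inlet temperature: {temp_c} C, power draw: {watts} W")
```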
In the prior art solutions for providing XaaS, such as in disaggregated information handling system 100, the management, monitoring, and maintenance of power, thermal, and acoustic properties of sleds 110, 115, 120, 125, 130, 135, 140, and 145 is typically done on a per-sled basis. Here, for example, system management engine 190 operates to aggregate thermal information for sleds 110, 115, 120, 125, 130, 135, 140, and 145 individually, or, at most, upon a per-chassis basis. Moreover, in large datacenters that may provide XaaS capabilities, sleds from different manufacturers may be employed, each manufacturer providing different tools, techniques, and algorithms for the management of their respective power, thermal, and acoustic functions. However, because management of sleds 110, 115, 120, 125, 130, 135, 140, and 145 by system management engine 190 is on a per-sled or per-chassis basis, power or thermal issues on a particular sled may necessitate degrading the performance of all workloads operating on that particular sled, or, in a worst case, may necessitate the complete shutdown of the processing elements of the particular sled. In such extreme cases, the shutting down of a particular sled may necessitate the un-mapping and remapping of the computing resources for each particular workload, and the associated migration and re-instantiation of the associated virtual machines. Such migration and re-instantiation of virtual machines typically results in unacceptable performance degradation within the datacenter. Issues that may result in degraded performance may include: power efficiency degradations resulting from increased fan power consumption, the operation of power supply units (PSUs) at less efficient points on the associated PSU efficiency curve, or the like; power delivery related performance degradation resulting from PSU faults, power grid faults due to over-subscribed configurations, or the like; thermal related performance degradation resulting from operations at higher than supported ambient temperatures, fan faults, configurations that exceed fan-only thermal management parameters, exhaust temperature limitations, or the like; and datacenter related performance degradation due to user-defined power caps assigned to the sleds or chassis, or the like.
DISAL 260 represents the processing elements of sleds 210, 215, 220, 225, 230, 235, 240, and 245 as separate but virtualizable mix-and-match processing elements that can be allocated as needed to virtual machines 270 and 272. For example, a particular virtual machine may be instantiated to provide a host for a workflow that has a fairly steady processing demand in terms of processing threads needed (CPU sleds 210 and 215), memory capacity (memory sleds 230 and 235), and storage and I/O capacity (I/O sleds 240 and 245). Another virtual machine may be instantiated to provide a host for a workflow that has a heavy demand for GPU processing (GPU sleds 220 and 225). A third virtual machine may be instantiated to provide a host for a workflow that has varying demands for processing power, GPU processing, memory, and I/O. Here, DISAL 260 can operate to allocate the processing elements of sleds 210, 215, 220, 225, 230, 235, 240, and 245 to meet the needs of the virtual machines as the virtual machines are instantiated. As such, DISAL 260 operates to dispatch and monitor workloads to the remote resources of sleds 210, 215, 220, 225, 230, 235, 240, and 245 as needed or desired. In a particular embodiment, DSSAL 250 and DISAL 260 are implemented as hardware, software, firmware, or the like, as needed or desired. In particular, DSSAL 250 and DISAL 260 may represent a particular information handling system instantiated within disaggregated information handling system 200, or may be implemented utilizing a set of the processing elements of the disaggregated information handling system. In another embodiment, DSSAL 250 and DISAL 260 each represent a module for managing disaggregated information handling system 200 that is included in operating system 280, or within a system BIOS/UEFI of the disaggregated information handling system.
Sleds 210, 215, 220, 225, 230, 235, 240, and 245 each include an associated baseboard management controller (BMC) 212, 217, 222, 227, 232, 237, 242, and 247, respectively. BMCs 212, 217, 222, 227, 232, 237, 242, and 247 are similar to BMCs 112, 117, 122, 127, 132, 137, 142, and 147 in their operations with respect to their associated sleds 210, 215, 220, 225, 230, 235, 240, and 245. That is, in terms of a particular BMC's operability to monitor, manage, and maintain the associated sled, BMCs 212, 217, 222, 227, 232, 237, 242, and 247 operate similarly to BMCs 112, 117, 122, 127, 132, 137, 142, and 147, as described above. However, where BMCs 112, 117, 122, 127, 132, 137, 142, and 147 are each connected to system management engine 190 via a management network, BMCs 212, 217, 222, 227, 232, 237, 242, and 247 are each connected to system management engine 290 via DSSAL 250. In particular, DSSAL 250 includes a CPU subsystem abstraction layer 252 that is connected to BMCs 212 and 217, a GPU subsystem abstraction layer 254 that is connected to BMCs 222 and 227, a memory subsystem abstraction layer 256 that is connected to BMCs 232 and 237, and an I/O subsystem abstraction layer 258 that is connected to BMCs 242 and 247. Here, the common chassis may include a CMC that is connected to the BMC of each sled in the chassis, and that provides a central point of communication for managing the functions of the chassis and of the sleds within the chassis. Here, CPU subsystem abstraction layer 252, GPU subsystem abstraction layer 254, memory subsystem abstraction layer 256, and I/O subsystem abstraction layer 258 represent abstraction layers not necessarily for the allocation of the resources of sleds 210, 215, 220, 225, 230, 235, 240, and 245 to virtual machines 270 and 272, as provided by DISAL 260, but more particularly represent abstraction layers for the aggregate computing functions, and for the maintenance, management, and monitoring of the resources of the sleds.
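The topology just described can be summarized with a brief sketch mapping each subsystem abstraction layer to the BMCs it fronts. The identifiers are taken from the figure references in the text; the dictionary itself is a hypothetical representation used only to make the structure concrete.

```python
# Illustrative sketch of the DSSAL topology described above: each subsystem
# abstraction layer fronts the BMCs of the sleds of its type. Identifiers
# follow the figure references; the structure is a hypothetical representation.
DSSAL_250 = {
    "cpu_subsystem_abstraction_layer_252":    ["BMC_212", "BMC_217"],
    "gpu_subsystem_abstraction_layer_254":    ["BMC_222", "BMC_227"],
    "memory_subsystem_abstraction_layer_256": ["BMC_232", "BMC_237"],
    "io_subsystem_abstraction_layer_258":     ["BMC_242", "BMC_247"],
}
```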
In particular, the management, monitoring, and maintenance of power, thermal, and acoustic properties of sleds 210, 215, 220, 225, 230, 235, 240, and 245 is done on an aggregate basis. For example, CPU subsystem abstraction layer 252 presents the CPUs included on CPU sleds 210 and 215 as an aggregate processing capacity, and can allocate and deallocate particular CPUs to the use of virtual machines 270 and 272 without having to halt or migrate the virtual machines when the allocation of CPUs changes. In particular, CPU subsystem abstraction layer 252 maintains knowledge of the operating functions and features of CPU sleds 210 and 215, and so can store machine state for a particular CPU and capture instructions provided by a virtual machine. The abstraction layer can thus halt the flow of instructions to a particular CPU, save the machine state for that CPU, initialize a different CPU on a different sled with the saved machine state, and provide the halted instruction flow to the new CPU, and this can be performed seamlessly, without interrupting the associated virtual machine, DISAL 260, or operating system 280. Similarly, GPU subsystem abstraction layer 254 presents the GPUs of GPU sleds 220 and 225 as an aggregated GPU capacity, and can allocate and deallocate particular GPUs to the use of virtual machines 270 and 272 without having to halt or migrate the virtual machines when the allocation of GPUs changes. Likewise, memory subsystem abstraction layer 256 presents the memory devices of memory sleds 230 and 235 as an aggregated memory capacity, and I/O subsystem abstraction layer 258 presents the I/O devices of I/O sleds 240 and 245 as aggregated I/O capacities. In this way, when DISAL 260, virtual machines 270 and 272, and operating system 280 determine that a resource is needed, DISAL 260 provides an aggregated demand to DSSAL 250, and the DSSAL manages the allocation seamlessly, and without further management or instruction from the DISAL.
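A minimal sketch of the CPU reallocation sequence just described follows; the helper calls (halt_instruction_stream, save_machine_state, and so on) are hypothetical stand-ins for the sled and BMC primitives described in the text, and the sketch is intended only to make the ordering of the steps concrete.

```python
# Illustrative sketch of the CPU reallocation sequence performed by the CPU
# subsystem abstraction layer. All helper calls are hypothetical stand-ins for
# the sled/BMC primitives described in the text.

def reallocate_cpu(abstraction_layer, old_cpu, new_cpu):
    """Move a workload from old_cpu to new_cpu without interrupting the VM."""
    # 1. Stop feeding instructions to the current CPU and buffer the stream.
    pending = abstraction_layer.halt_instruction_stream(old_cpu)

    # 2. Capture the architectural state (registers, program counter, etc.).
    state = abstraction_layer.save_machine_state(old_cpu)

    # 3. Initialize a CPU on a different sled with the saved state.
    abstraction_layer.restore_machine_state(new_cpu, state)

    # 4. Redirect the buffered instruction stream to the new CPU and resume.
    abstraction_layer.redirect_instruction_stream(pending, new_cpu)
    abstraction_layer.release(old_cpu)
```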
Here, where the various sleds employ different tools, techniques, and algorithms for the management of their respective power, thermal, and acoustic functions, the subsystem abstraction layers are provided with the specific knowledge of those functions, thereby freeing DISAL 260 and operating system 280 from having to maintain specific knowledge to manage the power, thermal, and acoustic functions of the sleds. As noted above, DSSAL 250 and subsystem abstraction layers 252, 254, 256, and 258 can operate at the behest of one or more of DISAL 260, virtual machines 270 and 272, and operating system 280, as needed to meet the processing demands of the instantiated virtual machines.
In addition, DSSAL 250 and subsystem abstraction layers 252, 254, 256, and 258 can operate at the behest of system management engine 290 to manage the power, thermal, and acoustic functions of sleds 210, 215, 220, 225, 230, 235, 240, and 245. In particular, system management engine 290 operates through DSSAL 250 and subsystem abstraction layers 252, 254, 256, and 258 to actively manage, maintain, and monitor the power, thermal, and acoustic properties of sleds 210, 215, 220, 225, 230, 235, 240, and 245. For example, system management engine 290 can determine from memory subsystem abstraction layer 256 that sled 230 is consuming excess power, is running too hot, is running too loud, is having other environmental or auxiliary problems, or the like, and the system management engine can direct the memory subsystem abstraction layer to reallocate memory from memory sled 230 to memory sled 235. Here, memory subsystem abstraction layer 256 operates to transfer the data stored on a memory device of sled 230 to a memory device of sled 235, and then to remap memory access requests from the initial memory device to the new memory device. Here, not only is the hosted environment, represented by DISAL 260, virtual machines 270 and 272, and operating system 280, unaware of any change in the configuration, but system management engine 290 gains the ability to manage the processing resources of disaggregated information handling system 200 in order to optimize resource utilization and to manage power, thermal, acoustic, and other environmental or auxiliary characteristics of the information handling system as an integrated whole, as opposed to the case with disaggregated information handling system 100, where management is provided on a per-sled basis.
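The memory rebalancing example above can be sketched as follows. The thresholds and all object methods are hypothetical stand-ins chosen only to show the transfer-then-remap ordering; they are not the interface of any particular memory subsystem abstraction layer.

```python
# Illustrative sketch: the system management engine asks the memory subsystem
# abstraction layer to vacate an overheating or over-budget memory sled.
# All methods and thresholds are hypothetical stand-ins for the behavior
# described in the text.

TEMP_LIMIT_C = 45      # assumed thermal threshold for this example
POWER_CAP_W = 900      # assumed per-sled power cap for this example

def rebalance_memory(mem_abstraction, source_sled, target_sled):
    telemetry = mem_abstraction.get_telemetry(source_sled)
    if telemetry.temperature_c > TEMP_LIMIT_C or telemetry.power_w > POWER_CAP_W:
        for region in mem_abstraction.allocated_regions(source_sled):
            # Copy the data to a memory device on the healthy sled...
            new_region = mem_abstraction.copy_region(region, target_sled)
            # ...then remap accesses so the hosted environment never sees the move.
            mem_abstraction.remap(region, new_region)
```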
In a particular embodiment, DSSAL 250 operates to allocate the resources of disaggregated information handling system 200 on a per-node basis, where each resource is treated as a particular type of processing node. Here, each node is given a score for a variety of parameters, including node power efficiency, node performance, and node availability. In a particular embodiment, each node is assigned a score as provided in Equation 1:
A=xNPE+yNPF+zNA     Equation 1
where A is the availability score, NPE is the node power efficiency, NPF is the node performance level, NA is the node availability, and x, y, and z are weighting factors. The node parameters NPE, NPF, and NA are each ascribed a score, for example from 1 to 10, where "1" is a least optimal state for the node for the particular parameter, and "10" is a most optimal state for the node for the particular parameter. The weighting factors x, y, and z are set to equal "1," indicating that each parameter is given an equal weight. When DISAL 260 requests a resource of a particular type, DSSAL 250 selects the nodes of the particular type with the highest scores. In a particular embodiment, the weighting factors x, y, and z are configurable, such that a user of disaggregated information handling system 200 may set the weighting factors to be greater than or less than "1," as needed or desired.
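A short sketch of the Equation 1 scoring follows. The default weights of 1 and the 1-10 parameter scale follow the text; the example node names and scores are chosen here purely for illustration.

```python
# Illustrative sketch of the Equation 1 availability score. Parameter scores
# are assumed to already be on the 1-10 scale described above; node names and
# values are chosen for this example only.

def availability_score(n_pe, n_pf, n_a, x=1.0, y=1.0, z=1.0):
    """A = x*NPE + y*NPF + z*NA, with configurable weights (default 1)."""
    return x * n_pe + y * n_pf + z * n_a

# Example: three CPU nodes scored on power efficiency, performance, availability.
cpu_nodes = {
    "cpu-0": availability_score(n_pe=8, n_pf=9, n_a=10),   # 27
    "cpu-1": availability_score(n_pe=6, n_pf=9, n_a=7),    # 22
    "cpu-2": availability_score(n_pe=9, n_pf=5, n_a=9),    # 23
}

# The DSSAL would satisfy a CPU request with the highest-scoring node.
best = max(cpu_nodes, key=cpu_nodes.get)
print(best, cpu_nodes[best])   # cpu-0 27.0
```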
In a particular embodiment, the node power efficiency, NPE, is measured as a Power Usage Effectiveness (PUE) for the sled that includes the particular node. That is, it may be difficult to determine a power efficiency on a per-node basis, but the PUE for each sled can be utilized as an assumed power efficiency for the nodes within each sled. Here, the PUE can be calculated as:
PUE=PS/PC Equation 2
where PS is the total sled power level and PC is the compute component power level. The total sled power level, PS, includes fans, voltage regulators, power supply efficiency losses, board power distribution losses, and the like. The compute power level, PC, includes processor power, memory power, I/O power, storage power, and the like. Here, a BMC in each sled operates to monitor the various power levels within the sled and to calculate the PUE for the sled in accordance with Equation 2. For example, a BMC may receive indications as to a PSU efficiency, such as by evaluating a PSU load percentage against a stored PSU efficiency curve, and may determine whether an increase in the load will result in the PSU operating less efficiently. The BMC may also receive indications as to fan power levels, such as a measured power, a Pulse-Width Modulation (PWM) level at which the fans are being operated, and the like.
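The Equation 2 calculation can be illustrated with a brief sketch of the arithmetic a BMC might perform. The component readings and the mapping of the resulting PUE onto the 1-10 NPE scale are assumptions made for this example only.

```python
# Illustrative sketch of the Equation 2 PUE calculation for one sled. The
# readings and the PUE-to-score mapping are assumptions for this example.

def sled_pue(compute_w, fan_w, vr_loss_w, psu_loss_w, distribution_loss_w):
    """PUE = PS / PC, where PS is total sled power and PC is compute power."""
    total_sled_w = compute_w + fan_w + vr_loss_w + psu_loss_w + distribution_loss_w
    return total_sled_w / compute_w

def pue_to_npe_score(pue):
    """Map a PUE value onto the 1-10 NPE scale (1.0 is ideal; assumed mapping)."""
    return max(1, min(10, round(11 - 10 * (pue - 1.0) / 0.5)))

pue = sled_pue(compute_w=700, fan_w=60, vr_loss_w=25, psu_loss_w=40,
               distribution_loss_w=15)
print(round(pue, 3), pue_to_npe_score(pue))   # PUE 1.2 maps to an NPE score of 7
```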
The node performance level, NPF, can be determined by the BMC within a particular sled based upon the operating frequencies of the various processing elements of the sled, such as CPUs, memory devices, I/O devices, and the like, and based upon a determination that an increase in the load on the sled may result in the activation of various power and thermal control loops, warning or error status indications for power or thermal throttling, and the like. The node availability, NA, may be determined based upon warning or error status indications that power, fan, memory, I/O, or other redundancy has been lost, indications that the source power for the sled or chassis is considered dirty, resulting in frequent drop-outs of the PSU, and the like.
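For illustration, the NPF and NA parameters might be derived from BMC status indications along the following lines; the specific deduction amounts are assumptions chosen only to show the general shape of the calculation.

```python
# Illustrative sketch: deriving the NPF and NA scores of Equation 1 from the
# status indications a BMC reports. The penalty values are assumptions for
# this example only.

def node_performance_score(freq_mhz, max_freq_mhz, thermal_throttle_warning,
                           power_throttle_warning):
    score = round(10 * freq_mhz / max_freq_mhz)          # operating frequency
    if thermal_throttle_warning:
        score -= 2                                       # assumed penalty
    if power_throttle_warning:
        score -= 2                                       # assumed penalty
    return max(1, min(10, score))

def node_availability_score(psu_redundancy_lost, fan_redundancy_lost,
                            dirty_source_power):
    score = 10
    score -= 4 if psu_redundancy_lost else 0             # assumed penalties
    score -= 3 if fan_redundancy_lost else 0
    score -= 3 if dirty_source_power else 0
    return max(1, min(10, score))

print(node_performance_score(2800, 3500, thermal_throttle_warning=True,
                             power_throttle_warning=False))   # 6
print(node_availability_score(False, True, False))            # 7
```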
A decision is made as to whether or not the selected sled is the last sled in the disaggregated information handling system in decision block 416. If not, the "NO" branch of decision block 416 is taken and the method returns to block 404 where a next sled of the disaggregated information handling system is selected. If the selected sled is the last sled in the disaggregated information handling system, the "YES" branch of decision block 416 is taken, and a DISAL of the disaggregated information handling system requests resources to ascribe to a virtual machine in block 418. The DSSAL allocates one or more nodes to satisfy the resource request based upon each node's availability score in block 420, and the method ends in block 422.
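The method as a whole can be sketched as a loop that scores the nodes of every sled and then satisfies a DISAL request with the highest-scoring nodes. The sled and node objects and their fields are hypothetical, and availability_score refers to the Equation 1 helper from the earlier sketch.

```python
# Illustrative sketch of the method described above: walk every sled, score its
# nodes, then satisfy a DISAL resource request with the highest-scoring nodes.
# The sled/node objects and their fields are hypothetical; availability_score
# is the Equation 1 helper from the earlier sketch.

def score_all_nodes(sleds):
    scores = {}
    for sled in sleds:                       # blocks 404-416: loop over the sleds
        for node in sled.nodes:
            scores[node.node_id] = availability_score(node.n_pe, node.n_pf, node.n_a)
    return scores

def allocate(scores, nodes_by_id, resource_type, count):
    """Blocks 418-420: give the DISAL the best-scoring nodes of one type."""
    candidates = [nid for nid in scores
                  if nodes_by_id[nid].resource_type == resource_type]
    candidates.sort(key=scores.get, reverse=True)
    return candidates[:count]
```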
Information handling system 500 can include devices or modules that embody one or more of the devices or modules described below, and operates to perform one or more of the methods described below. Information handling system 500 includes processors 502 and 504, an input/output (I/O) interface 510, memories 520 and 525, a graphics interface 530, a basic input/output system/unified extensible firmware interface (BIOS/UEFI) module 540, a disk controller 550, a hard disk drive (HDD) 554, an optical disk drive (ODD) 556, a disk emulator 560 connected to an external solid state drive (SSD) 564, an I/O bridge 570, one or more add-on resources 574, a trusted platform module (TPM) 576, a network interface 580, a management device 590, and a power supply 595. Processors 502 and 504, I/O interface 510, memories 520 and 525, graphics interface 530, BIOS/UEFI module 540, disk controller 550, HDD 554, ODD 556, disk emulator 560, SSD 564, I/O bridge 570, add-on resources 574, TPM 576, and network interface 580 operate together to provide a host environment of information handling system 500 that operates to provide the data processing functionality of the information handling system. The host environment operates to execute machine-executable code, including platform BIOS/UEFI code, device firmware, operating system code, applications, programs, and the like, to perform the data processing tasks associated with information handling system 500.
In the host environment, processor 502 is connected to I/O interface 510 via processor interface 506, and processor 504 is connected to the I/O interface via processor interface 508. Memory 520 is connected to processor 502 via a memory interface 522. Memory 525 is connected to processor 504 via a memory interface 527. Graphics interface 530 is connected to I/O interface 510 via a graphics interface 532, and provides a video display output 536 to a video display 534. In a particular embodiment, information handling system 500 includes separate memories that are dedicated to each of processors 502 and 504 via separate memory interfaces. An example of memories 520 and 525 includes random access memory (RAM) such as static RAM (SRAM), dynamic RAM (DRAM), non-volatile RAM (NV-RAM), or the like, read only memory (ROM), another type of memory, or a combination thereof.
BIOS/UEFI module 540, disk controller 550, and I/O bridge 570 are connected to I/O interface 510 via an I/O channel 512. An example of I/O channel 512 includes a Peripheral Component Interconnect (PCI) interface, a PCI-Extended (PCI-X) interface, a high speed PCI-Express (PCIe) interface, another industry standard or proprietary communication interface, or a combination thereof. I/O interface 510 can also include one or more other I/O interfaces, including an Industry Standard Architecture (ISA) interface, a Small Computer Serial Interface (SCSI) interface, an Inter-Integrated Circuit (I2C) interface, a System Packet Interface (SPI), a Universal Serial Bus (USB), another interface, or a combination thereof. BIOS/UEFI module 540 includes BIOS/UEFI code operable to detect resources within information handling system 500, to provide drivers for the resources, to initialize the resources, and to access the resources.
Disk controller 550 includes a disk interface 552 that connects the disk controller to HDD 554, to ODD 556, and to disk emulator 560. An example of disk interface 552 includes an Integrated Drive Electronics (IDE) interface, an Advanced Technology Attachment (ATA) interface such as a parallel ATA (PATA) interface or a serial ATA (SATA) interface, a SCSI interface, a USB interface, a proprietary interface, or a combination thereof. Disk emulator 560 permits SSD 564 to be connected to information handling system 500 via an external interface 562. An example of external interface 562 includes a USB interface, an IEEE 1394 (Firewire) interface, a proprietary interface, or a combination thereof. Alternatively, solid-state drive 564 can be disposed within information handling system 500.
I/O bridge 570 includes a peripheral interface 572 that connects the I/O bridge to add-on resource 574, to TPM 576, and to network interface 580. Peripheral interface 572 can be the same type of interface as I/O channel 512, or can be a different type of interface. As such, I/O bridge 570 extends the capacity of I/O channel 512 when peripheral interface 572 and the I/O channel are of the same type, and the I/O bridge translates information from a format suitable to the I/O channel to a format suitable to the peripheral channel 572 when they are of a different type. Add-on resource 574 can include a data storage system, an additional graphics interface, a network interface card (NIC), a sound/video processing card, another add-on resource, or a combination thereof. Add-on resource 574 can be on a main circuit board, on a separate circuit board or add-in card disposed within information handling system 500, a device that is external to the information handling system, or a combination thereof.
Network interface 580 represents a NIC disposed within information handling system 500, on a main circuit board of the information handling system, integrated onto another component such as I/O interface 510, in another suitable location, or a combination thereof. Network interface device 580 includes network channels 582 and 584 that provide interfaces to devices that are external to information handling system 500. In a particular embodiment, network channels 582 and 584 are of a different type than peripheral channel 572 and network interface 580 translates information from a format suitable to the peripheral channel to a format suitable to external devices. An example of network channels 582 and 584 includes InfiniBand channels, Fibre Channel channels, Gigabit Ethernet channels, proprietary channel architectures, or a combination thereof. Network channels 582 and 584 can be connected to external network resources (not illustrated). The network resource can include another information handling system, a data storage system, another network, a grid management system, another suitable resource, or a combination thereof.
Management device 590 represents one or more processing devices, such as a dedicated baseboard management controller (BMC) System-on-a-Chip (SoC) device, one or more associated memory devices, one or more network interface devices, a complex programmable logic device (CPLD), and the like, that operate together to provide the management environment for information handling system 500. In particular, management device 590 is connected to various components of the host environment via various internal communication interfaces, such as a Low Pin Count (LPC) interface, an Inter-Integrated-Circuit (I2C) interface, a PCIe interface, or the like, to provide an out-of-band (OOB) mechanism to retrieve information related to the operation of the host environment, to provide BIOS/UEFI or system firmware updates, and to manage non-processing components of information handling system 500, such as system cooling fans and power supplies. Management device 590 can include a network connection to an external management system, and the management device can communicate with the management system to report status information for information handling system 500, to receive BIOS/UEFI or system firmware updates, or to perform other tasks for managing and controlling the operation of information handling system 500. Management device 590 can operate off of a separate power plane from the components of the host environment so that the management device receives power to manage information handling system 500 when the information handling system is otherwise shut down. An example of management device 590 includes a commercially available BMC product or other device that operates in accordance with an Intelligent Platform Management Initiative (IPMI) specification, a Web Services Management (WSMan) interface, a Redfish Application Programming Interface (API), another Distributed Management Task Force (DMTF) standard, or another management standard, and can include an Integrated Dell Remote Access Controller (iDRAC), an Embedded Controller (EC), or the like. Management device 590 may further include associated memory devices, logic devices, security devices, or the like, as needed or desired.
Although only a few exemplary embodiments have been described in detail herein, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the embodiments of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the embodiments of the present disclosure as defined in the following claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures.
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover any and all such modifications, enhancements, and other embodiments that fall within the scope of the present invention. Thus, to the maximum extent allowed by law, the scope of the present invention is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.