This disclosure generally relates to information handling systems, and more particularly relates to a system and method for extending system uptime while running on backup power.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option is an information handling system. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes. Because technology and information handling needs and requirements can vary between different applications, information handling systems can also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information can be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems can include a variety of hardware and software components that can be configured to process, store, and communicate information and can include one or more computer systems, data storage systems, and networking systems.
An information handling system, such as a server, can include an uninterruptible power supply (UPS) to store backup power for the server if the server loses alternating current (AC) power or direct current (DC) power. The length of time that the server can operate on the backup power from the UPS depends on the power storage capacity of the UPS.
It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the Figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements. Embodiments incorporating teachings of the present disclosure are shown and described with respect to the drawings presented herein, in which:
The use of the same reference symbols in different drawings indicates similar or identical items.
The following description in combination with the Figures is provided to assist in understanding the teachings disclosed herein. The following discussion will focus on specific implementations and embodiments of the teachings. This focus is provided to assist in describing the teachings and should not be interpreted as a limitation on the scope or applicability of the teachings. However, other teachings can certainly be utilized in this application.
The server rack chassis 100 includes servers 102, 104, and 106, and uninterruptible power supplies (UPSs) 108. Each of the servers 102, 104, and 106 can include a controller 110, each of which is in communication with the UPSs 108. In an embodiment, both of the UPSs 108 can combine to provide backup power to the servers 102, 104, and 106. In another embodiment, one of the UPSs 108 can provide primary backup power to the servers 102, 104, and 106, and the other UPS can be a reserve UPS that provides power to the servers only if the primary UPS fails. Each of the controllers 110 can be any type of controller, such as an integrated Dell Remote Access Controller (iDRAC), an interface card that can provide out-of-band management between the server 102, 104, or 106 and a remote user. The controllers 110 can each have a processor, a memory, a battery, a network connection, and access to a server chassis bus. The controller 110 can provide different functions for the server 102, 104, or 106, such as power management, virtual media access, and remote console capabilities, all of which can be available to the remote user through a graphical user interface (GUI) on a web browser. Thus, the remote user can configure the servers 102, 104, and 106 of the server rack chassis 100 as if the remote user were at the local console.
A user, either remote or local to the server rack chassis 100, can access the controller 110 of each of the servers 102, 104, and 106 via the GUI to configure power settings for the servers in situations when AC power or DC power is lost to the server rack chassis 100. For example, the user can set a desired uptime for each individual server, a percentage of the power available from the UPS allocated to each server, whether a power limit or cap for the servers is fixed over the desired uptime as shown by waveform 302 of FIG. 3, whether the power limit decreases over the desired uptime, or the like.
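For concreteness, these per-server settings can be pictured as a small configuration record. The Python sketch below is illustrative only; the PowerSettings class and its field names are hypothetical stand-ins, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PowerSettings:
    """Hypothetical per-server backup-power settings entered through the GUI."""
    desired_uptime_s: int      # how long the server should stay up on UPS power
    ups_allocation_pct: float  # fraction of the UPS reserve allocated to this server
    fixed_limit: bool          # True: cap held constant; False: cap decreases over time

# Example: a server gets 40% of the UPS reserve for a one-hour uptime with a fixed cap.
server_102 = PowerSettings(desired_uptime_s=3600, ups_allocation_pct=0.40,
                           fixed_limit=True)
```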
When the AC power is lost, the UPS 108 can send a power loss notification signal to each of the controllers 110 of the servers 102, 104, and 106. The power loss notification signal can be any type of communication signal, such as a Simple Network Management Protocol (SNMP) signal or trap, or the like. When the controller 110 receives the power loss notification signal, the controller can send a reserve power query to the UPS 108 to request a reserve power capacity of the UPS. After receiving the reserve power capacity of the UPS 108, the controller 110 can compute the power limit for the server 102 based on the reserve power capacity, on whether the power limit is fixed or decreasing, on the percentage of the reserve power capacity allocated to the server, and on the desired uptime for the server. The power limit can be a maximum amount of power that the server 102 is allowed to use during a given time period.
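Read one way, the fixed-limit computation spreads the server's share of the reserve energy evenly over the desired uptime. The sketch below assumes the UPS reports its reserve in watt-hours; the function name, signature, and units are assumptions for illustration, not taken from the disclosure.

```python
def compute_power_limit(reserve_capacity_wh: float, allocation_pct: float,
                        uptime_s: int) -> float:
    """Fixed power limit in watts: the server's energy share of the UPS
    reserve spread evenly over the desired uptime (assumed interpretation)."""
    energy_budget_wh = reserve_capacity_wh * allocation_pct
    return energy_budget_wh / (uptime_s / 3600.0)

# Example: 1000 Wh reserve, 40% allocation, one-hour desired uptime -> 400 W cap.
print(compute_power_limit(1000.0, 0.40, 3600))  # 400.0
```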
After setting the power limit, the controller 110 can communicate with a host processor of the server 102 and cut back the host processor to enforce the power limit in the server. The host processor can then disable particular components in the server 102 so that the server can operate at the power limit determined by the controller 110. The controller 110 can dynamically adjust the power limit for the server at fixed intervals over the desired uptime for the server 102. For example, the controller 110 can poll the UPS 108 at the fixed intervals to receive a current reserve power capacity of the UPS. When the controller 110 has received the current reserve power capacity, the controller can calculate and set a new power limit for the server 102. If the current reserve power capacity of the UPS 108 is higher than expected because the servers 102, 104, and 106 did not use all of the allocated power for a given time period, the new power limit for the server 102 can be higher than the original power limit. The controller 110 can continue to re-poll the UPS 108 at the fixed intervals to receive the current reserve power capacity of the UPS, and can re-calculate and set the new power limit for the server based on the current reserve power capacity until the AC power is restored to the server rack chassis 100.
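The dynamic adjustment reads naturally as a poll-and-recompute loop. In this sketch the four callables stand in for the controller's UPS query, power-status check, and capping interfaces, all hypothetical; compute_power_limit is the helper sketched above. Recomputing against the remaining uptime is an assumed interpretation; the disclosure only states that the limit is recalculated from the current reserve capacity.

```python
import time

POLL_INTERVAL_S = 60  # fixed polling interval; the value is illustrative

def run_backup_power_loop(query_reserve_wh, power_restored,
                          apply_limit, clear_limit, settings):
    """Poll the UPS at fixed intervals and recompute the cap from the
    current reserve, so allocation left unused in one interval raises
    the limit computed for later intervals."""
    remaining_s = settings.desired_uptime_s
    while not power_restored() and remaining_s > 0:
        limit_w = compute_power_limit(query_reserve_wh(),
                                      settings.ups_allocation_pct, remaining_s)
        apply_limit(limit_w)           # controller cuts back the host processor
        time.sleep(POLL_INTERVAL_S)
        remaining_s -= POLL_INTERVAL_S
    clear_limit()                      # primary power restored: remove the cap
```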
In another embodiment, the controllers 110 can communicate with each other to further adjust the power limits for the servers 102, 104, and 106. For example, the controller 110 of server 102 can communicate with the controller of server 104, and determine that server 104 is operating below an expected capacity and that server 102 is operating above an expected capacity. In this situation, the controller 110 of server 102 can decrease the percentage of the current reserve power capacity of the UPS 108 allocated to server 102, and the controller of server 104 can increase the percentage of the current reserve power capacity allocated to server 104. Thus, the new power limit for the server 102 can decrease, and the new power limit for the server 104 can increase.
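The peer exchange in this example can be sketched as a small pairwise transfer of allocation between two controllers; the step size and the pairwise form are assumptions for illustration, not taken from the disclosure.

```python
def rebalance_pair(over_settings, under_settings, step_pct=0.05):
    """Mirror of the example above: the server operating above its expected
    capacity (server 102) gives up a slice of its UPS allocation, and the
    server operating below its expected capacity (server 104) gains it."""
    shift = min(step_pct, over_settings.ups_allocation_pct)
    over_settings.ups_allocation_pct -= shift   # server 102: new limit decreases
    under_settings.ups_allocation_pct += shift  # server 104: new limit increases
```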
The user can access the chassis management controller (CMC) 212 via a GUI to configure power settings for the servers 202, 204, and 206 when the AC power is lost to the blade server chassis 200. For example, the user can set in the CMC 212 a desired uptime for each of the servers 202, 204, and 206, a percentage of the power available from the UPS allocated to each server, whether a power limit for the servers is fixed over the desired uptime, whether the power limit for the servers decreases over the desired uptime, or the like.
When the AC power is lost, the UPS 208 can send the power loss notification signal to the CMC 212, which in response can send the reserve power query to the UPS 208 to request the reserve power capacity of the UPS. After receiving the reserve power capacity of the UPS 208, the CMC 212 can compute the power limit for each of the servers 202, 204, and 206 based on the reserve power capacity, on whether the power limit is fixed or decreasing, on the percentage of the reserve power capacity allocated to each server, and on the desired uptime for the servers.
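Because the CMC computes every server's limit from one reserve reading, the per-server calculation is the earlier fixed-limit sketch applied once per allocation. A hypothetical helper, with illustrative names:

```python
def cmc_compute_limits(reserve_wh, settings_by_server):
    """One reserve reading -> one power limit per server, each derived from
    that server's own allocation percentage and desired uptime."""
    return {name: compute_power_limit(reserve_wh, s.ups_allocation_pct,
                                      s.desired_uptime_s)
            for name, s in settings_by_server.items()}

# Example: limits = cmc_compute_limits(1000.0, {"server_202": server_102})
```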
Based on the reserve power capacity of the UPS 208, the CMC 212 can also determine to shut down one of the servers 202, 204, and 206, to prevent any server that is not currently powered on from powering on, or the like. If the CMC 212 determines that server 202 needs to be powered down, the CMC can send a power-down signal to the controller 210 of the server 202. Similarly, if the CMC 212 determines that server 204 should not be powered on, the CMC can send a do-not-power-on signal to, or reject a power-on request from, the controller 210 of the server 204. The CMC 212 can send the power limit for each of the servers 202, 204, and 206 to the controller 210 of the respective server.
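The shutdown and power-on gating decisions can be sketched as a budget walk over the servers. The priority ordering, per-server reserve threshold, and method names below are assumptions for illustration, not part of the disclosure.

```python
def cmc_gate_power(reserve_wh, servers, min_reserve_wh=100.0):
    """Walk the servers in priority order against the remaining reserve:
    running servers that no longer fit the budget get a power-down signal,
    and servers that are off have their power-on requests rejected."""
    budget = reserve_wh
    for server in sorted(servers, key=lambda s: s.priority):
        if server.powered_on:
            if budget >= min_reserve_wh:
                budget -= min_reserve_wh
            else:
                server.send_power_down()      # CMC -> controller power-down signal
        else:
            server.reject_power_on = True     # refuse power-on while on backup
```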
When the controller 210 of each of the servers 202, 204, and 206 has received the power limit for the server, the controller can enforce the power limit via the host processor of the server. For example, the controller 210 can communicate with the host processor of the server 202, and cut back the host processor to enforce the power limit in the server. The host processor can then disable particular components in the server so that the server can operate at the power limit determined by the CMC 212.
The CMC 212 can also dynamically adjust the power limit for each of the servers 202, 204, and 206 over the desired uptime for the blade server chassis 200. For example, the CMC 212 can poll the UPS 208 at fixed intervals to receive the current reserve power capacity of the UPS. When the CMC 212 has received the current reserve power capacity, the CMC can calculate and set the new power limit for each of the servers 202, 204, and 206. If the current reserve power capacity of the UPS 208 is higher than expected because the servers 202, 204, and 206 did not use all of the allocated power for a given time period, the new power limit for each of the servers can be higher than the original power limit. The CMC 212 can continue to re-poll the UPS 208 at the fixed intervals to receive the current reserve power capacity of the UPS, and can re-calculate and set the new power limit for each of the servers 202, 204, and 206 based on the current reserve power capacity until the AC power is restored to the blade server chassis 200.
At block 510, a reserve power capacity of the UPS is received at the controller. The power limit for the server is calculated based on the reserve power capacity of the UPS and on the desired server uptime at block 512. At block 514, the power limit of the server is enforced by the controller. The power limit can be enforced by the controller by reducing the power consumption of the host processor, memory, and other components, or even by shutting down unused components like redundant network adapters. At block 516, a determination is made whether the primary power has been restored. If the primary power has not been restored, the flow repeats as stated above at block 508. If the primary power has been restored, the power limit for the server is cleared at block 518.
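The enforcement at block 514 can be pictured as escalating actions applied until the measured draw fits under the cap. The method names here are hypothetical controller interfaces, and the ordering from least to most disruptive is an assumption.

```python
def enforce_limit(server, limit_w):
    """Escalate from the cheapest to the most disruptive action until the
    server's measured draw fits under the cap set at block 512."""
    actions = (server.throttle_cpu,            # cut back the host processor
               server.throttle_memory,         # reduce memory power consumption
               server.disable_redundant_nics)  # shut down unused redundant adapters
    for act in actions:
        if server.measured_draw_w() <= limit_w:
            break
        act()
```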
At block 610, a reserve power capacity of the UPS is received at the CMC. The power limit for the server is calculated based on the reserve power capacity of the UPS and on the desired system uptime at block 612. At block 614, the power limit for the server is sent to a controller of the server. At block 616, the power limit on the server is enforced by the controller. The power limit can be enforced by the controller by reducing the power consumption of the host processor, memory, and other components, or even shutting down unused components like redundant network adapters. At block 618, a determination is made whether the primary power has been restored. If the primary power has not been restored, the flow repeats as stated above at block 608. If the primary power has been restored, the power limit for the server is cleared at block 620.
According to one aspect, the chipset 710 can be referred to as a memory hub or a memory controller. For example, the chipset 710 can include an Accelerated Hub Architecture (AHA) that uses a dedicated bus to transfer data between the first physical processor 702 and the nth physical processor 706. For example, the chipset 710, including an AHA-enabled chipset, can include a memory controller hub and an input/output (I/O) controller hub. As a memory controller hub, the chipset 710 can function to provide access to the first physical processor 702 using the first host bus 704 and to the nth physical processor 706 using the second host bus 708. The chipset 710 can also provide a memory interface for accessing memory 712 using a memory bus 714. In a particular embodiment, the buses 704, 708, and 714 can be individual buses or part of the same bus. The chipset 710 can also provide bus control and can handle transfers between the buses 704, 708, and 714.
According to another aspect, the chipset 710 can be generally considered an application specific chipset that provides connectivity to various buses, and integrates other system functions. For example, the chipset 710 can be provided using an Intel® Hub Architecture (IHA) chipset that can also include two parts, a Graphics and AGP Memory Controller Hub (GMCH) and an I/O Controller Hub (ICH). For example, an Intel 820E, an 815E chipset, or any combination thereof, available from the Intel Corporation of Santa Clara, Calif., can provide at least a portion of the chipset 710. The chipset 710 can also be packaged as an application specific integrated circuit (ASIC).
The information handling system 700 can also include a video graphics interface 722 that can be coupled to the chipset 710 using a third host bus 724. In one form, the video graphics interface 722 can be an Accelerated Graphics Port (AGP) interface to display content within a video display unit 726. Other graphics interfaces may also be used. The video graphics interface 722 can provide a video display output 728 to the video display unit 726. The video display unit 726 can include one or more types of video displays such as a flat panel display (FPD) or other type of display device.
The information handling system 700 can also include an I/O interface 730 that can be connected via an I/O bus 720 to the chipset 710. The I/O interface 730 and I/O bus 720 can include industry standard buses or proprietary buses and respective interfaces or controllers. For example, the I/O bus 720 can also include a Peripheral Component Interconnect (PCI) bus or a high speed PCI-Express bus. In one embodiment, a PCI bus can be operated at approximately 66 MHz and a PCI-Express bus can be operated at approximately 128 MHz. PCI buses and PCI-Express buses can be provided to comply with industry standards for connecting and communicating between various PCI-enabled hardware devices. Other buses can also be provided in association with, or independent of, the I/O bus 720 including, but not limited to, industry standard buses or proprietary buses, such as an Industry Standard Architecture (ISA) bus, a Small Computer Serial Interface (SCSI) bus, an Inter-Integrated Circuit (I2C) bus, a System Packet Interface (SPI) bus, or a Universal Serial Bus (USB).
In an alternate embodiment, the chipset 710 can be a chipset employing a Northbridge/Southbridge chipset configuration (not illustrated). For example, a Northbridge portion of the chipset 710 can communicate with the first physical processor 702 and can control interaction with the memory 712, the I/O bus 720 that can be operable as a PCI bus, and activities for the video graphics interface 722. The Northbridge portion can also communicate with the first physical processor 702 using the first host bus 704 and with the nth physical processor 706 using the second host bus 708. The chipset 710 can also include a Southbridge portion (not illustrated) that can handle the I/O functions of the chipset 710. The Southbridge portion can manage the basic forms of I/O such as Universal Serial Bus (USB), serial I/O, audio outputs, Integrated Drive Electronics (IDE), and ISA I/O for the information handling system 700.
The information handling system 700 can further include a disk controller 732 coupled to the I/O bus 720, and connecting one or more internal disk drives such as a hard disk drive (HDD) 734 and an optical disk drive (ODD) 736 such as a Read/Write Compact Disk (R/W CD), a Read/Write Digital Video Disk (R/W DVD), a Read/Write mini-Digital Video Disk (R/W mini-DVD), or other type of optical disk drive.
Although only a few exemplary embodiments have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the embodiments of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the embodiments of the present disclosure as defined in the following claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures.