Demand based power allocation

Information

  • Patent Grant
  • Patent Number
    8,713,334
  • Date Filed
    Monday, February 4, 2013
  • Date Issued
    Tuesday, April 29, 2014
Abstract
A demand based power re-allocation system includes one or more subsystems to assign a power allocation level to a plurality of servers, wherein the power allocation level is assigned by priority of the server. The system may throttle power for one or more of the plurality of servers approaching the power allocation level, wherein throttling includes limiting performance of a processor, and track server power throttling for the plurality of servers. The system compares power throttling for a first server with power throttling for remaining servers in the plurality of servers and adjusts throttling of the plurality of servers, wherein throttled servers receive excess power from unthrottled servers.
Description
BACKGROUND

The present disclosure relates generally to information handling systems, and more particularly to a demand based power allocation for multiple information handling systems.


As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option is an information handling system (IHS). An IHS generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes. Because technology and information handling needs and requirements may vary between different applications, IHSs may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in IHSs allow for IHSs to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, IHSs may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.


A server IHS is generally understood as an IHS dedicated to running a server application. A server application is a program or a set of instructions that accepts network connections to service requests from other IHSs by sending back responses to the requesting IHSs. Examples of server applications include mail servers, file servers, proxy servers, and others. A server is simply an IHS that provides services or resources to other IHSs.


Blade servers are generally understood as self-contained IHS servers designed for high density computing using a minimum of extra components. While a standard rack-mount server IHS may function with (at least) only a power cord and network cable, blade servers may have many components removed for space, power, and other considerations, while still having the functional components to be considered an IHS. A blade enclosure to hold multiple blade servers may provide services such as power, cooling, networking, interconnects, and management. Together, the blade servers and the blade enclosure form the blade system.


A problem with server systems is that electrical power may be withheld/throttled from one server and provided to another server, forcing the throttled server to run below maximum performance. In particular, there is no server power re-balancing mechanism in a server chassis after the initial power allocation to blades, such as blade servers. As a result, a subset of blades may end up continuously throttled (i.e., continue to run at much lower performance) while another subset of blades has a power surplus (based on current load).


Accordingly, it would be desirable to provide an improved demand based power allocation/reallocation absent the disadvantages discussed above.


SUMMARY

According to one embodiment, a demand based power re-allocation system includes one or more subsystems to assign a power allocation level to a plurality of servers, wherein the power allocation level is assigned by priority of the server. The system may throttle power for one or more of the plurality of servers approaching the power allocation level, wherein throttling includes limiting performance of a processor, and track server power throttling for the plurality of servers. The system compares power throttling for a first server with power throttling for remaining servers in the plurality of servers and adjusts throttling of the plurality of servers, wherein throttled servers receive excess power from unthrottled servers.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an embodiment of an information handling system (IHS).



FIG. 2 illustrates an embodiment of a method for throttling/unthrottling servers and related communication between a chassis management controller and a remote access controller.



FIG. 3 illustrates an embodiment of a server power allocation and reclaimable power.



FIG. 4 illustrates an embodiment of an active chassis management controller and server throttle profile data before power re-balancing.



FIG. 5 illustrates an embodiment of a server power re-balancing and wattage pool.



FIG. 6 illustrates an embodiment of a server throttle profile after power re-balancing.





DETAILED DESCRIPTION

For purposes of this disclosure, an IHS 100 includes any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an IHS 100 may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The IHS 100 may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, read only memory (ROM), and/or other types of nonvolatile memory. Additional components of the IHS 100 may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The IHS 100 may also include one or more buses operable to transmit communications between the various hardware components.



FIG. 1 is a block diagram of one IHS 100. The IHS 100 includes a processor 102 such as an Intel Pentium™ series processor or any other processor available. A memory I/O hub chipset 104 (comprising one or more integrated circuits) connects to processor 102 over a front-side bus 106. Memory I/O hub 104 provides the processor 102 with access to a variety of resources. Main memory 108 connects to memory I/O hub 104 over a memory or data bus. A graphics processor 110 also connects to memory I/O hub 104, allowing the graphics processor to communicate, e.g., with processor 102 and main memory 108. Graphics processor 110, in turn, provides display signals to a display device 112.


Other resources can also be coupled to the system through the memory I/O hub 104 using a data bus, including an optical drive 114 or other removable-media drive, one or more hard disk drives 116, one or more network interfaces 118, one or more Universal Serial Bus (USB) ports 120, and a super I/O controller 122 to provide access to user input devices 124, etc. The IHS 100 may also include a solid state drive (SSD) 126 in place of, or in addition to, main memory 108, the optical drive 114, and/or a hard disk drive 116. It is understood that any or all of the drive devices 114, 116, and 126 may be located locally with the IHS 100, located remotely from the IHS 100, and/or they may be virtual with respect to the IHS 100.


Not all IHSs 100 include each of the components shown in FIG. 1, and other components not shown may exist. Furthermore, some components shown as separate may exist in an integrated package or be integrated in a common integrated circuit with other components, for example, the processor 102 and the memory I/O hub 104 can be combined together. As can be appreciated, many systems are expandable, and include or can include a variety of components, including redundant or parallel resources.


The present disclosure relates to a system to perform server power re-balancing in a set of a plurality of servers, such as a server chassis 130 including a plurality of blade servers. It should be readily understood by a person having ordinary skill in the art that a server may be substantially similar to the IHS 100; for simplicity, the terms server, blade, and blade server are used interchangeably throughout this application in place of IHS 100. An embodiment of the present disclosure may prevent some servers in the set from being continuously run at lower performance levels while other servers in the set, at the same power priority level, never fully utilize the power allocated to them. In this light, the present disclosure describes a mechanism to perform incremental and continuous server power reallocations from servers not using all allocated power to servers needing more power. In an embodiment, the power reallocation may be performed by tracking the length and frequency of periods that servers spend in a throttled power state.


Servers may be throttled during run-time using current monitor chipsets embedded in the server. Servers in a server chassis 130 may automatically throttle and unthrottle as a server's measured power consumption reaches the server's internal warning threshold, which may be set by the chassis management controller (CMC). A CMC is a controller for controlling operations for a server rack/chassis 130. In an embodiment, a server management controller, such as a remote access controller (RAC), sends a notification to the CMC as the server enters and exits power throttle states. These notifications may be designed to allow the CMC to be aware of the server's status for informational purposes.
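The enter/exit notification flow described above can be sketched as follows. This is a minimal illustration only, not the patent's implementation; the class and method names (`ChassisManagementController`, `on_throttle_start`, `on_throttle_end`) are hypothetical.

```python
import time


class ChassisManagementController:
    """Minimal sketch of a CMC recording throttle enter/exit notifications per server."""

    def __init__(self):
        self.throttle_start = {}  # server_id -> timestamp of the open throttle period
        self.throttled_time = {}  # server_id -> accumulated throttled seconds
        self.event_count = {}     # server_id -> number of throttle periods observed

    def on_throttle_start(self, server_id, timestamp=None):
        # RAC notifies the CMC that the server entered a throttled state:
        # the CMC starts a timer for this server.
        self.throttle_start[server_id] = (
            timestamp if timestamp is not None else time.monotonic()
        )
        self.event_count[server_id] = self.event_count.get(server_id, 0) + 1

    def on_throttle_end(self, server_id, timestamp=None):
        # RAC notifies the CMC that the server exited the throttled state:
        # the CMC stops the timer and accumulates the throttled period.
        end = timestamp if timestamp is not None else time.monotonic()
        start = self.throttle_start.pop(server_id, None)
        if start is not None:
            self.throttled_time[server_id] = (
                self.throttled_time.get(server_id, 0.0) + (end - start)
            )
```

For example, a throttle period reported as starting at t=0 s and ending at t=5 s accumulates 5 seconds of throttled time for that blade.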



FIG. 2 illustrates an embodiment of a method for throttling/unthrottling servers and related communication between a CMC and a RAC. The RAC sends reports of server events to the CMC upon actions such as a server throttle initiation action at block 1. The server throttle initiation action may be performed by a server's current monitor. In an embodiment, communication between the RAC and the CMC may be facilitated using an intelligent platform management interface (IPMI); however, other specifications may be used for this communication. The CMC receives the notification of a blade throttle initiation from the RAC and starts a timer to track throttle period and frequency at block 2. The RAC sends reports again upon a server throttle termination action at block 3. Similarly, the server throttle termination action may be performed by the server's current monitor. The CMC receives the notification of a blade throttle termination from the RAC and stops the timer to track throttle period and frequency at block 4. The CMC runs a rolling sampling window to track the server throttle/unthrottle events for each of a plurality of servers in a server chassis 130 (e.g., the blade servers in a blade server rack). The CMC may then use the throttle and unthrottle event information to track how long each blade was throttled during this window, the frequency of throttling/unthrottling for each server, the amount of excess power available (watts), and a variety of other data. This throttling information (e.g., percent of throttling time vs. unthrottling time) is compared against the information for all servers in the chassis 130 at block 5. Servers that were not throttled may contribute power (wattage) to an excess power/wattage pool 132, while servers that have been throttled may be provided or otherwise allowed to use an allocation of power (wattage) from the pool 132. For example, at block 6, the RAC on each throttled server receives micro-allocations (e.g., in watts) from the CMC, retrieved from the wattage pool 132.
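The rolling sampling window the CMC runs might be modeled as below. This is a sketch under assumptions: the `ThrottleWindow` class, its default window length, and the returned statistics (fraction of window throttled, event count) are illustrative, not specified by the patent.

```python
from collections import deque


class ThrottleWindow:
    """Rolling sampling window of (start, end) throttle intervals for one server.

    The CMC can derive from it the fraction of the window the server spent
    throttled and how many throttle events occurred, per the description of
    blocks 2, 4, and 5.
    """

    def __init__(self, window_seconds=300.0):  # e.g., a 5-minute window (illustrative)
        self.window = window_seconds
        self.intervals = deque()  # (start, end) pairs, oldest first

    def record(self, start, end):
        # Called when a throttle period closes (block 4 stops the timer).
        self.intervals.append((start, end))

    def stats(self, now):
        window_start = now - self.window
        # Drop intervals that ended before the current window opened.
        while self.intervals and self.intervals[0][1] <= window_start:
            self.intervals.popleft()
        # Clip remaining intervals to the window and sum the throttled time.
        throttled = sum(
            min(end, now) - max(start, window_start)
            for start, end in self.intervals
        )
        return throttled / self.window, len(self.intervals)
```

With a 100-second window, a server throttled from t=0–25 s and t=50–75 s would report 50% throttled time across two events at t=100 s.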


A power priority setting, as well as a percentage of time throttled in relation to the other servers, may be used to factor how much a server contributes to or withdraws from the excess power pool 132. In an embodiment, the CMC constantly calculates these reallocations/micro-reallocations while the server chassis 130 is in the "on" state. Over time, the heavily throttled servers end throttling as their power consumption level falls below the server's predetermined current monitor threshold based on the power allocated to each server. In an embodiment, instructions for power allocation may be contained within CMC firmware, within RAC firmware, or within other media.
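One plausible way to combine the priority setting and relative throttle time into a per-cycle pool contribution or withdrawal is sketched below. The linear priority scaling and the `step_watts` parameter are assumptions for illustration; the patent does not specify a formula.

```python
def pool_adjustment(percent_throttled, chassis_mean_throttled, priority,
                    step_watts=1.0):
    """Watts a server withdraws from (positive) or contributes to (negative)
    the excess wattage pool in one re-allocation cycle.

    Assumptions (not from the patent text): priority 1 is highest, and the
    per-cycle step is scaled inversely by the priority number, so higher
    priority servers move in larger steps.
    """
    weight = step_watts / priority
    if percent_throttled > chassis_mean_throttled:
        return weight    # throttled more than peers: withdraw watts
    elif percent_throttled < chassis_mean_throttled:
        return -weight   # throttled less than peers: contribute watts
    return 0.0
</antml>```

For instance, a priority-1 server throttled 80% of the window against a chassis mean of 30% would withdraw a full step, while a priority-2 server that was never throttled would contribute half a step.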


The demand based power allocation may also allow lower priority servers to receive full power allocations if the higher priority servers become idle for a predetermined period of time. When the higher priority servers become loaded, they may quickly recover their rightful power allocations in the chassis 130, such as with instructions from the CMC and/or the RAC.



FIGS. 3-6 illustrate an embodiment of the method of FIG. 2 using blade servers. The server/blade numbers, percentages, priorities, chassis 130 slots, and the like are provided only as examples; other values may be used with the method of FIG. 2. FIG. 3 illustrates an embodiment of a server power allocation and reclaimable power. As such, FIG. 3 illustrates an example of blade power allocation where Blades 1, 5, 14 and 15 are allocated a minimum amount of power and therefore end up with heavy throttling. Reclaimable power (shown in cross-hatch) is the amount of power that may be reclaimed from the blades. Note that Blades 1, 5, 14 and 15 have no reclaimable power (e.g., these blades were allocated the minimum power that was needed).



FIG. 4 illustrates an embodiment of an active chassis management controller and server throttle profile data before power re-balancing. FIG. 4 also illustrates an active CMC tracking heavily throttled blades by tracking throttle and unthrottle event messages as they are received from each respective blade's RAC. The active CMC identifies Blades 1, 5, 14 and 15 as heavily throttled blades based on throttle period and frequency. In an embodiment, both the number of watts and the frequency of the window may be adjusted indirectly by a user/administrator of the blades. In other words, the user/administrator may specify how aggressively the reallocation method will behave. This reallocation specification request may be used by the method of FIG. 2 behind the scenes by altering the sample window as well as the number of watts taken from or given to a server. For the sake of discussion, numbers such as 1 watt or 5 minutes are used; however, the actual values may be chosen after analysis and user/administrator feedback for "more aggressive" or "less aggressive" reallocation behavior.



FIG. 5 illustrates an embodiment of server power re-balancing and the wattage pool 132, where the method of FIG. 2 is used for blade power re-balancing. Values such as watts, priority, etc. are used for illustration, and any values may be used in the present system. Excess power from the wattage pool 132 may be incrementally allocated to throttled blades by reclaiming power from unthrottled blades. Block arrows pointing upwards indicate the incremental value (e.g., W watts) being granted over units of time, while block arrows pointing downwards indicate the incremental value (e.g., W watts) being reclaimed by the CMC or RAC on unthrottled blades to perform blade power re-balancing. The variable W may be assigned any value and may be user configurable.
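The incremental W-watt reclaim/grant cycle of FIG. 5 might look like the following sketch. The function name, data shapes, and the policy of granting throttled blades in name order are hypothetical simplifications; a real CMC would also apply the priority rules discussed above.

```python
def rebalance_step(allocations, throttled, w=1.0, pool=0.0):
    """One incremental re-balancing pass.

    `allocations` maps blade -> allocated watts; `throttled` is the set of
    blades currently throttled. W watts are reclaimed from each unthrottled
    blade into the wattage pool, then granted W at a time to throttled blades
    while the pool lasts. Returns the new allocations and remaining pool.
    """
    new_alloc = dict(allocations)
    for blade in new_alloc:
        if blade not in throttled:
            new_alloc[blade] -= w   # downward arrow: reclaim W watts
            pool += w
    for blade in sorted(throttled):
        if pool >= w:
            new_alloc[blade] += w   # upward arrow: grant W watts
            pool -= w
    return new_alloc, pool
```

Running the pass repeatedly approximates the continuous micro-reallocation behavior: each cycle shifts a small, configurable W from blades with surplus toward blades in need, leaving any remainder in the pool.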



FIG. 6 illustrates an embodiment of a server throttle profile after power re-balancing. Thus, FIG. 6 demonstrates an effect of blade power re-balancing where the excess power available from the wattage pool 132 is re-allocated to the blades most in need of power (e.g., blade 1 at P3, blade 5 at P3, and blade 14 at P3). After re-allocation, only blade 15 (lowest priority of P4) remains throttled. However, if the wattage pool 132 had had enough excess power available, blade 15 could also have been provided extra power, eliminating the need for throttling blade 15.


Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the embodiments may be employed without a corresponding use of other features. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.

Claims
  • 1. A power allocation system, comprising: a plurality of information handling systems (IHSs) including a high priority IHS and a low priority IHS, wherein the high priority IHS has a higher priority than the low priority IHS; and a power allocation controller coupled to each of the plurality of IHSs, wherein the power allocation controller is operable to: determine that the high priority IHS is being throttled; determine that at least one of the plurality of IHSs is not being throttled and, in response, reclaim power from the at least one of the plurality of IHSs that is not being throttled; and allocate the reclaimed power to the high priority IHS.
  • 2. The system of claim 1, wherein the power allocation controller is operable to: determine that the allocating the reclaimed power to the high priority IHS has resulted in the high priority IHS not being throttled; and allocate a remaining portion of the reclaimed power to the low priority IHS.
  • 3. The system of claim 1, wherein the power allocation controller is operable to: determine that the low priority IHS is being throttled; determine that the high priority IHS is not using all of a total allocated power that is allocated to the high priority IHS; and reclaim a first power amount of the total allocated power from the high priority IHS and allocate the first power amount to the low priority IHS.
  • 4. The system of claim 3, wherein the power allocation controller is operable to: determine that the high priority IHS needs the first power amount that was allocated to the low priority IHS; and reclaim the first power amount that was allocated to the lower priority IHS and allocate the first power amount back to the high priority IHS.
  • 5. The system of claim 1, wherein each of the plurality of IHSs includes a priority and at least two of the plurality of IHSs are not being throttled, and wherein power is reclaimed from the at least two of the plurality of IHSs based on the priority of each of the at least two of the plurality of IHSs.
  • 6. The system of claim 1, wherein at least one of the plurality of IHSs that is not being throttled and from which power is reclaimed includes at least one IHS that has the same priority as the high priority IHS.
  • 7. The system of claim 1, further comprising: a chassis, wherein the plurality of IHSs are a plurality of servers housed in the chassis.
  • 8. A power allocation system, comprising: a first subset of a plurality of IHSs; a second subset of the plurality of IHSs, wherein the first subset of the plurality of IHSs have higher priorities than the second subset of the plurality of IHSs; and a power allocation controller coupled to each of the plurality of IHSs, wherein the power allocation controller is operable to: determine that the first subset of the plurality of IHSs are being throttled; determine that at least one of the plurality of IHSs is not being throttled and, in response, reclaim power from the at least one of the plurality of IHSs that is not being throttled; and allocate the reclaimed power to the first subset of the plurality of IHSs.
  • 9. The system of claim 8, wherein the power allocation controller is operable to: determine that the allocating the reclaimed power to the first subset of the plurality of IHSs has resulted in the first subset of the plurality of IHSs not being throttled; and allocate a remaining portion of the reclaimed power to the second subset of the plurality of IHSs.
  • 10. The system of claim 8, wherein the power allocation controller is operable to: determine that the second subset of the plurality of IHSs are being throttled; determine that the first subset of the plurality of IHSs are not using all of a total allocated power that is allocated to the first subset of the plurality of IHSs; and reclaim a first power amount of the total allocated power from the first subset of the plurality of IHSs and allocate the first power amount to the second subset of the plurality of IHSs.
  • 11. The system of claim 10, wherein the power allocation controller is operable to: determine that the first subset of the plurality of IHSs need the first power amount that was allocated to the second subset of the plurality of IHSs; and reclaim the first power amount that was allocated to the second subset of the plurality of IHSs and allocate the first power amount back to the first subset of the plurality of IHSs.
  • 12. The system of claim 8, wherein each of the plurality of IHSs includes a priority and at least two of the plurality of IHSs are not being throttled, and wherein power is reclaimed from the at least two of the plurality of IHSs based on the priority of each of the at least two of the plurality of IHSs.
  • 13. The system of claim 8, wherein at least one of the plurality of IHSs that is not being throttled and from which power is reclaimed includes at least one IHS that has the same priority as an IHS in the first subset of the plurality of IHSs.
  • 14. The system of claim 8, further comprising: a chassis, wherein the plurality of IHSs are a plurality of servers housed in the chassis.
  • 15. A method for re-allocating power, comprising: determining, by a power allocation controller that is coupled to a plurality of IHSs, that a high priority IHS of the plurality of IHSs is being throttled; determining, by the power allocation controller, that at least one of the plurality of IHSs is not being throttled and, in response, reclaiming power from the at least one of the plurality of IHSs that is not being throttled; and allocating, by the power allocation controller, the reclaimed power to the high priority IHS.
  • 16. The method of claim 15, further comprising: determining, by the power allocation controller, that the allocating the reclaimed power to the high priority IHS has resulted in the high priority IHS not being throttled; and allocating, by the power allocation controller, a remaining portion of the reclaimed power to a low priority IHS of the plurality of IHSs.
  • 17. The method of claim 15, further comprising: determining, by the power allocation controller, that a low priority IHS of the plurality of IHSs is being throttled; determining, by the power allocation controller, that the high priority IHS is not using all of a total allocated power that is allocated to the high priority IHS; and reclaiming, by the power allocation controller, a first power amount of the total allocated power from the high priority IHS and allocating the first power amount to the low priority IHS.
  • 18. The method of claim 17, further comprising: determining, by the power allocation controller, that the high priority IHS needs the first power amount that was allocated to the low priority IHS; and reclaiming, by the power allocation controller, the first power amount that was allocated to the lower priority IHS and allocating the first power amount back to the high priority IHS.
  • 19. The method of claim 15, wherein each of the plurality of IHSs includes a priority and at least two of the plurality of IHSs are not being throttled, and wherein power is reclaimed by the power allocation controller from the at least two of the plurality of IHSs based on the priority of each of the at least two of the plurality of IHSs.
  • 20. The method of claim 15, wherein at least one of the plurality of IHSs that is not being throttled and from which power is reclaimed includes at least one IHS that has the same priority as the high priority IHS.
CROSS REFERENCE TO RELATED APPLICATIONS

The present application claims priority to and is a continuation of co-owned U.S. patent application Ser. No. 13/160,999, filed Jun. 15, 2011, now U.S. Pat. No. 8,381,000, the disclosure of which is incorporated herein by reference. The present application is related to U.S. Utility Application Ser. No. 12/135,320, filed on Jun. 9, 2008, now U.S. Pat. No. 8,006,112; U.S. Utility Application Ser. No. 12/135,323, filed on Jun. 9, 2008, now U.S. Pat. No. 8,051,316; and U.S. Utility Application Ser. No. 12/143,522, filed on Jun. 20, 2008, now U.S. Pat. No. 8,032,768, the disclosures of which are assigned to the assignee of record in the present application and incorporated herein by reference in their entirety.

US Referenced Citations (48)
Number Name Date Kind
5842027 Oprescu et al. Nov 1998 A
6289212 Stein et al. Sep 2001 B1
6542176 Camis Apr 2003 B1
6618811 Berthaud et al. Sep 2003 B1
6785827 Layton et al. Aug 2004 B2
6826758 Chew et al. Nov 2004 B1
6865585 Dussud Mar 2005 B1
6961944 Chew et al. Nov 2005 B2
7051215 Zimmer et al. May 2006 B2
7197433 Patel et al. Mar 2007 B2
7308449 Fairweather Dec 2007 B2
7702931 Goodrum et al. Apr 2010 B2
7739548 Goodrum et al. Jun 2010 B2
7757107 Goodrum et al. Jul 2010 B2
7840824 Baba et al. Nov 2010 B2
7984311 Brumley et al. Jul 2011 B2
7996690 Shetty et al. Aug 2011 B2
8006112 Munjal et al. Aug 2011 B2
8341300 Karamcheti et al. Dec 2012 B1
8381000 Brumley et al. Feb 2013 B2
20020067763 Suzuki et al. Jun 2002 A1
20030065958 Hansen et al. Apr 2003 A1
20040163001 Bodas Aug 2004 A1
20040255171 Zimmer et al. Dec 2004 A1
20050172158 McClendon et al. Aug 2005 A1
20060136754 Liu et al. Jun 2006 A1
20060242462 Kasprzak et al. Oct 2006 A1
20070110360 Stanford May 2007 A1
20070118771 Bolan et al. May 2007 A1
20070180280 Bolan et al. Aug 2007 A1
20070234090 Merkin et al. Oct 2007 A1
20070245162 Loffink et al. Oct 2007 A1
20070260896 Brundridge et al. Nov 2007 A1
20070260897 Cochran et al. Nov 2007 A1
20080077817 Brundridge et al. Mar 2008 A1
20080222435 Bolan et al. Sep 2008 A1
20090019202 Shetty et al. Jan 2009 A1
20090113221 Holle et al. Apr 2009 A1
20090132842 Brey et al. May 2009 A1
20090193276 Shetty et al. Jul 2009 A1
20090307512 Munjal et al. Dec 2009 A1
20090307514 Roberts et al. Dec 2009 A1
20090319808 Brundridge et al. Dec 2009 A1
20100038963 Shetty et al. Feb 2010 A1
20110001358 Conroy et al. Jan 2011 A1
20110060932 Conroy et al. Mar 2011 A1
20130103967 Conroy et al. Apr 2013 A1
20130103968 Conroy et al. Apr 2013 A1
Related Publications (1)
Number Date Country
20130145185 A1 Jun 2013 US
Continuations (2)
Number Date Country
Parent 13160999 Jun 2011 US
Child 13758308 US
Parent 12188502 Aug 2008 US
Child 13160999 US