The present invention relates to power management in computer operating systems.
The following listed references are expressly incorporated by reference herein. Throughout the specification, these references are referred to by citing to the numbers in the brackets [#].
Exhibits A and B attached to the present application are also expressly incorporated herein by reference. Exhibit A is an article entitled “A Homogeneous Architecture for Power Policy Integration in Operating Systems”. Exhibit B is an article entitled “Workload Adaptation with Energy Accounting in a Multi-Process Environment”.
Reducing energy consumption is an important issue in modern computers. A significant volume of research has concentrated on operating-system directed power management (OSPM). One primary focus of previous research has been the development of OSPM policies. An OSPM policy is an algorithm that chooses when to change a component's power states and which power states to use. Existing studies on power management make an implicit assumption: only one policy can be used to save power.
The present invention provides a plurality of OSPM policies that are eligible to be selected to manage a hardware component, such as an I/O device. The illustrated power management system then automatically selects the best policy from a power management standpoint and activates the selected policy for a particular component.
New policies may be added using the architecture of an illustrated embodiment described herein. The system and method compares the plurality of eligible policies to determine which policy can save more power for a current request pattern of a particular component. The eligible policy with the lowest average power value based on the current request pattern of the particular component is selected to manage the component and then automatically activated. The previously active policy is deactivated for the particular component.
The system and method of the present invention permits OSPM policies to be added, compared, and selected while a system is running without rebooting the system. Therefore, the present system and method allows easier implementation and comparison of policies. In the illustrated embodiment, the available policies are compared simultaneously so repeatable workloads are unnecessary.
Another approach to reducing energy consumption in computers is the use of dynamic power management (DPM). DPM has been extensively studied in recent years. One approach for DPM is to adjust workloads, such as clustering or eliminating requests, to trade off energy consumption and quality of service. Previous studies focus on single processes. However, when multiple concurrently running processes are considered, workload adjustment must be determined based on the interleaving of the processes' requests. When multiple processes share the same hardware component, adjusting one process may not save energy.
In another illustrated embodiment of the present invention, energy responsibility is assigned to individual processes based on how they affect power management. The assignment is used to estimate potential energy reduction by adjusting the processes. An illustrated embodiment uses the estimation to guide runtime adaptation of workload behavior. Results from experiments are included to demonstrate that the illustrated embodiment saves energy and improves energy efficiency.
The above mentioned and other features of this invention, and the manner of attaining them, will become more apparent and the invention itself will be better understood by reference to the following description of illustrated embodiments of the invention.
A significant volume of research has concentrated on operating-system directed power management (OSPM). The primary focus of previous research has been the development of OSPM policies. Under different conditions, one policy may outperform another and vice versa. Hence, it is difficult, or even impossible, to design the “best” policy for all computers. In the system and method of the present invention, the best policies are selected at run-time without user or administrator intervention. Policies are illustratively compared simultaneously and improved iteratively without rebooting the system. In the system and method of the present invention, the energy savings of several policies are improved by up to 41 percent.
Operating systems (OSs) manage resources, including processor time, memory space, and disk accesses. Due to the growing popularity of portable systems that require long battery life, energy has become a crucial resource for OSs to manage [1]-[4]. Power management is also important in high-performance servers because performance improvements are limited by excessive heat [5], [7]. Finding better policies has been the main focus of OSPM research in recent years [8]. A policy is an algorithm that chooses when to change a component's power states and which power states to use.
Existing studies on power management assume that only one policy can be used to save power and focus on finding the best policies for unique request patterns. Although some policies allow their parameters to be adjusted at run-time [9], [10], the algorithms remain the same. Previous studies demonstrate that significantly different policies may be needed to achieve better power savings in different scenarios. Most studies evaluate their policies using a single type of hardware component. For example, hard disks and CD-ROM drives are both block devices, but their workload behaviors are different.
The embodiments disclosed below are not intended to be exhaustive or to limit the invention to the precise forms disclosed in the following detailed description. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings.
Automatic Run-Time Selection of Power Policies for Operating Systems
In one illustrated embodiment, homogeneous requirements are established for all OSPM policies so they can be easily integrated into the OS and selected at run-time. This homogeneous architecture is described herein as the Homogeneous Architecture for Power Policy Integration (HAPPI). In the illustrated embodiment, HAPPI currently supports power policies for disk, DVD-ROM, and network devices but can easily be extended to support other I/O devices.
Each component or device has a set of OSPM policies that are capable of managing the device. A policy is said to be “eligible” to manage a device if it is in the device's policy set. A policy becomes eligible when it is loaded into the OS and is no longer eligible when it is removed from the OS. The policy is considered “active” if it is selected to manage the power states of a specific device by HAPPI. Each device is assigned only one active policy at any time. However, a policy may be active on multiple devices at the same time by creating an instance of the policy for each device. When a policy is activated, it obtains exclusive control of the device's power state. The policy is responsible for determining when the device should be shut down and requesting state changes. An active policy may update its predictions and request device state changes on each device access or a periodic timer interrupt. The set always includes a “null policy” that keeps the device in the highest power state.
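The eligible/active relationship described above may be sketched as a simple per-device policy set. The following Python sketch is purely illustrative; the class and method names (PolicySet, load, unload, activate) are hypothetical and are not taken from the HAPPI implementation, which resides in the Linux kernel.

```python
# Illustrative sketch only: a per-device policy set in which policies become
# eligible when loaded, lose eligibility when removed, and at most one policy
# is active on the device at any time. All names are hypothetical.

class PolicySet:
    """Tracks the eligible policies for one device and its single active policy."""

    def __init__(self):
        self.eligible = []          # policies currently loaded into the OS
        self.active = None          # at most one active policy per device

    def load(self, policy):
        """A policy becomes eligible when it is loaded into the OS."""
        if policy not in self.eligible:
            self.eligible.append(policy)

    def unload(self, policy):
        """A policy is no longer eligible once removed from the OS."""
        self.eligible.remove(policy)
        if self.active is policy:
            self.active = None      # the device loses its active policy

    def activate(self, policy):
        """Give an eligible policy exclusive control of the device's power state."""
        assert policy in self.eligible
        self.active = policy
```

A policy may be active on multiple devices at once simply by placing it in several devices' policy sets, mirroring the per-device instances described above.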
As discussed above, the illustrated embodiment permits OSPM policies to be added, compared, and selected while a system is running without rebooting the system. Therefore, the best eligible policy is selected to manage the particular device and then automatically activated to reduce power consumption. Any previously active policy is deactivated for the particular device.
Details of the policy selection process are described in Exhibit A. To accomplish this policy selection at run-time, each policy includes an estimation function (also called an “estimator”) to provide a quantitative measure of the policy's ability to control a device. An estimator accepts a list of recent device accesses from HAPPI. The length of this list is determined experimentally. For the illustrated version of HAPPI, the list contains eight accesses, with disk and DVD accesses closer than 1 s and network accesses closer than 100 ms merged together into a single access. The accesses are merged because the Linux kernel issues several accesses in rapid succession, although they would be serviced as a continuous request from the device.
The estimator determines what decision would have been made after each access if the policy had been controlling that device during the trace. The specific decision is entirely dependent upon the policy and not influenced by HAPPI. The energy consumption and access latency for this decision are added to a total. Once all accesses have been handled, the estimator determines how much energy would have been consumed between the last access and the current time and adds this amount to the total energy. The total energy consumption and device access latency constitute the “estimate.” This value is returned to the evaluator to determine the best policy for the current workload.
To compute energy consumption, the illustrated embodiment uses a state-based model. The amount of time in a state is multiplied by the power to compute the amount of energy consumed in that state. The state transition energy is added for each power state transition. To compute access latency, the illustrated embodiment uses the amount of time required to awaken the device from a sleeping state if the device was asleep before the access occurred. If the device was awake, the illustrated embodiment does not add latency because the latency is insignificant compared to the amount of time required to awaken the device.
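The state-based estimate described above may be sketched as follows. The two-state device parameters and the timeout-style shutdown decision are assumed for illustration only; an actual estimator replays whatever decisions its own policy would have made.

```python
# Sketch of the state-based energy and latency model, under assumed
# two-state device parameters (all numbers below are hypothetical).
P_ACTIVE, P_SLEEP = 2.0, 0.1        # watts consumed in each power state
E_TRANSITION = 1.5                  # joules per state transition
T_WAKEUP = 2.0                      # seconds to awaken the sleeping device

def estimate(idle_periods, timeout):
    """Replay idle periods under a timeout policy; return (energy, latency).

    Time in a state is multiplied by that state's power; transition energy
    is added per transition; latency is added only when an access finds
    the device asleep, as in the model described above.
    """
    energy = latency = 0.0
    for idle in idle_periods:
        if idle > timeout:
            energy += timeout * P_ACTIVE            # active until timeout fires
            energy += (idle - timeout) * P_SLEEP    # asleep for the remainder
            energy += 2 * E_TRANSITION              # shutdown + wakeup transitions
            latency += T_WAKEUP                     # the next access must wait
        else:
            energy += idle * P_ACTIVE               # device never slept: no latency
    return energy, latency
```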
As discussed above, the present system and method provide automatic policy selection. Instead of choosing one policy in advance, a group of policies are eligible at run-time, and one is selected in response to the changing request patterns. This is especially beneficial for a general-purpose system, such as a laptop computer, where usage patterns can vary dramatically when the user executes different programs.
The system and method of the present invention utilizes automatic policy selection to help designers improve policies and select the proper policy for a given application. Several fundamental challenges arise for automatic policy selection. First, a group of policies must be eligible to be selected. A Homogeneous Architecture for Power Policy Integration (HAPPI) is illustratively used as the framework upon which new policies can be easily added without modifying the OS kernel or rebooting the system. When power management is conducted by existing OSs, changing a policy requires rebooting the system [12]. Second, eligible policies must be compared to predict which policy can save the most energy for the current request pattern. Third, the best eligible policy must be selected to manage a hardware component and the previous policy must stop managing the same component.
In the system and method of the present invention, new policies can be added and selected without rebooting the system. This allows researchers to implement policies in a commodity OS, namely Linux. Several studies [13], [14] have demonstrated that simulations suffer from poor accuracy and long runtimes. The present invention simplifies the implementation of policies and compares these policies simultaneously, considering multiple processes, nondeterminism, actual OS behavior, and real hardware. Simultaneous comparison is important because repeatable workloads are difficult to produce [14]. Furthermore, experiments may be run in real-time rather than long-running, detailed simulations.
Dynamic Power Management
Most users are familiar with power management for block access devices, such as hard disks. Users can set the timeout values in Windows' Control Panel or using Linux's hdparm command. This is the most widely-used “timeout policy.” Karlin et al. [15] propose a 2-competitive timeout algorithm, where the timeout value is the break-even time of the hardware device. The break-even time is defined as the amount of time a device must be shut down to save energy. Douglis et al. [9] suggest an adaptive timeout scheme to reduce the performance penalties for state transitions while providing energy savings. Hwang and Wu [16] use exponential averages to predict idleness and shut down the device immediately after an access when the predicted idleness exceeds the break-even time. Several studies focus on stochastic optimization using Markov models [10], [17]-[19] and generalized stochastic Petri Nets [20]. However, OS behaviors, such as deferred work, cause these policies to mispredict consistently. We will describe in Section V how HAPPI may be used to improve predictions for these policies.
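The break-even time used as the timeout value by the 2-competitive algorithm of Karlin et al. [15] may be computed as sketched below; the device parameters in the usage note are hypothetical.

```python
# Hedged sketch: the break-even time is the shortest sleep duration for
# which sleeping saves energy over staying active. Sleeping for t seconds
# costs p_sleep * t plus the (shutdown + wakeup) transition energy, while
# staying active costs p_active * t; equating the two and solving gives t.
def break_even_time(p_active, p_sleep, e_transition):
    """Return the break-even time in seconds for a two-state device."""
    return e_transition / (p_active - p_sleep)
```

For example, a hypothetical device drawing 2.0 W active and 0.1 W asleep, with 3.8 J of total transition energy, breaks even after 2.0 s of sleep.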
Operating System-Directed Power Management
The Advanced Configuration and Power Interface (ACPI) specification [21] defines a platform-independent interface for power management. ACPI describes the power consumption of devices and provides a mechanism to change the power states. However, ACPI requires an operating system-directed power manager to implement policies. Microsoft Windows' OnNow API [22] uses ACPI to allow individual devices' power states to be controlled by the device driver, which presumably implements a single policy as discussed herein above. OnNow provides a mechanism to set the timeout values and the device state after timeout, but policies cannot be changed without rebooting. Linux handles power management similarly using ACPI [23] but requires user-space applications with administrative privilege, such as hdparm, to modify timeouts and policies. In the present system and method, policies manage power states above the driver level because a significant number of policies require cooperation between devices [7], [24]-[28] that cannot be achieved at the device driver level. An embodiment of the system and method of the present invention implements policies above the device driver level so that single- and multi-device policies may be implemented. Therefore, complex policies may be implemented without rebooting the system and may operate on multiple devices simultaneously.
The present system and method dynamically selects a single policy from a set of policies for each device without rebooting the system, allowing experiments of new policies without disrupting system availability. This is particularly useful in high-performance servers. The present system provides a simple, modular interface that simplifies policy implementation and experimentation, allowing OS designers and policy designers to work independently. That is, policy designers can experiment with different policies without modifying the core OS, and power management is modular enough that it can be removed without impacting OS designers.
This design specifies homogeneous requirements for all policies so they can be easily integrated into the OS and selected at run-time. Homogeneous requirements are necessary to allow significantly different policies to be compared by the OS. This architecture is referred to as the Homogeneous Architecture for Power Policy Integration (HAPPI). HAPPI is currently capable of supporting power policies for disk, CD-ROM, and network devices but can easily be extended to support other I/O devices. To implement a policy in HAPPI, the policy designer may provide: 1) A function that predicts idleness and controls a device's power state, and 2) A function that accepts a trace of device accesses, determines the actions the control function would take, and returns the energy consumption and access delay from the actions.
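The two functions a policy designer provides may be sketched as an abstract interface; the method names control and estimate are assumptions, since the specification names only the required capabilities, and the null policy's power figure is hypothetical.

```python
from abc import ABC, abstractmethod

# Sketch of the homogeneous policy interface: every policy supplies a
# control function and an estimator function. Method names are assumed.
class Policy(ABC):
    @abstractmethod
    def control(self, device):
        """Predict idleness and change the device's power state."""

    @abstractmethod
    def estimate(self, trace):
        """Replay `trace` of access times; return (energy, delay) for this
        policy's hypothetical actions over that trace."""

class NullPolicy(Policy):
    """Keeps the device in the highest power state for best performance."""
    def control(self, device):
        device["state"] = "active"          # never shuts the device down
    def estimate(self, trace):
        duration = trace[-1] - trace[0] if trace else 0.0
        return duration * 2.0, 0.0          # hypothetical 2 W active power, no delay
```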
Each device 26 has a set of policies that are capable of managing the device. A policy is said to be eligible to manage a device if the policy is in the device's policy set. A policy illustratively becomes eligible when it is loaded into the OS as a kernel module and is no longer eligible when it is removed from the OS. The policy is active if it is selected to manage the power states of a specific device by HAPPI. Each device 26 is assigned only one active policy 22 at any time. However, a policy may be active on multiple devices 26 at the same time by creating data structures for each device within the policy and multiplexing HAPPI function calls. When a policy is activated, it obtains exclusive control of the device's power state. The policy is responsible for predicting idleness, determining when the device should be shut down, and requesting state changes. An active policy 22 may update its predictions and request device state changes on each device access or after a specified timeout.
Measuring Device Accesses
Policies monitor device accesses to predict idleness and determine when to change power states. We refer to the data required by policies to make decisions as measurements. One such measurement is a trace of recent accesses. Policies use access traces to make idleness predictions. Whenever the device is accessed, the present invention captures the size and the time of the access. An access trace is a measurement, but not all measurements are traces. More advanced policies may require additional measurements, such as a probability distribution of accesses. The present system and method also records the energy and the delay for each device. Energy is accumulated periodically and after each state transition. The present invention defines delay as the amount of time that an access waits for a device to awaken. Delay is only accumulated for a process's first access while sleeping or awakening because Linux prefetches adjacent blocks on each access. Delay may be used to determine power management's impact on system performance.
Policy Selection
Policy selection is performed by the evaluator 24 and is illustrated in
After evaluator 24 asks for a policy estimate for the first eligible policy at block 32, the evaluator 24 computes an optimization metric from the estimate as illustrated at block 34. Evaluator 24 determines whether the currently evaluated policy is better than the previously stored policy (or the null policy as discussed below) as illustrated at block 36. If so, the currently evaluated policy is set as the active policy 22 as illustrated at block 38. If the currently evaluated policy is not better than the previously stored policy, evaluator 24 next determines whether there are any more policies to evaluate as illustrated at block 40. If so, the evaluator 24 asks for a policy estimate on the next eligible policy at block 32 and repeats the cycle discussed above. If no more policies are eligible at block 40, evaluator 24 permits the active policy 22 to control the device 26 as illustrated at block 42. A periodic evaluation of the eligible policies occurs at each device access or after a specific time out.
As discussed above, an active policy 22 for each device is selected by the evaluator 24 after the evaluator receives estimates from all eligible policies. The evaluator 24 selects each active policy 22 by choosing the best estimate for an optimization metric, such as total energy consumption or energy-delay product as illustrated at block 34. If another policy's estimate is better than the currently active policy at block 36, the inferior policy is deactivated and returned to the set of eligible policies. The superior policy is activated at block 38 and assumes control of the device's energy management. Otherwise, the currently active policy 22 remains active. The policy set includes a “null policy” that keeps the device in the highest power state to achieve the best performance. If the null policy produces the best estimate, none of the eligible power management policies can save power for the current workload. Under this condition, the power management function is disabled until the evaluator 24 is triggered again.
The evaluator 24 determines when re-evaluation should take place and performs the evaluation of eligible policies. In an illustrated embodiment, average power is used as the optimization metric at block 34. To minimize average power, the evaluator 24 requests an estimate from each policy and selects the policy with the lowest energy estimate for the device access trace. Since average power is energy consumption over time and the traces record the same amount of time, the two metrics are equivalent.
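The selection step may be sketched as follows: the evaluator requests an estimate from every eligible policy's estimator over the shared access trace and activates the one with the lowest energy (equivalently, the lowest average power, since all estimates cover the same span of time). The function shape is illustrative only.

```python
# Sketch of the evaluator's selection step. `policies` maps a policy name
# to its estimator, which returns (energy, delay) for the given trace.
def select_policy(policies, trace):
    """Return the name of the policy with the lowest energy estimate."""
    best_name, best_energy = None, float("inf")
    for name, estimator in policies.items():
        energy, _delay = estimator(trace)
        if energy < best_energy:        # strictly better estimate wins
            best_name, best_energy = name, energy
    return best_name
```

If the null policy's estimate is the lowest, no eligible policy can save power for the current workload, matching the disabling behavior described above.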
The present system and method may illustratively be implemented in a Linux 2.6.17 kernel to demonstrate HAPPI's ability to select policies at run-time, quantify performance overhead, and provide a reference for future OSPM. Functions are added to maintain policy sets and issue state change requests. Policies, evaluators, and most measurements are implemented as loadable kernel modules that may be inserted and removed at run-time as discussed herein. The only measurement that is not implemented as a loadable module is a device's access history.
The Linux kernel is optimized for performance and exploits disk idleness to perform maintenance operations such as dirty page writeback and swapping. To facilitate power management, the 2.6 kernel's laptop mode option is used, which delays dirty page writeback until the disk services a demand access or the number of dirty pages becomes too large.
Inserting and Removing Policies
The present system and method manages the different policies in the system and ensures that only one policy is active on each device at a time.
Recording Device Accesses and State Transitions
Previous studies assume that each I/O access is a single “impulse-like” event. However, the impulse-like model of an access is insufficient to manage policies' predictions. Accesses should instead be defined by a time span of activity extending from the completion of the first access to the completion of the last access within a filter length. In reality, an I/O access consists of two parts: a top-half and a bottom-half [29]. When it performs a read or write operation, an application uses system calls to the OS to generate requests. The OS passes these requests on to the device driver on behalf of the application. This process is called the top-half. The application may continue executing after issuing write requests but must wait for read requests to complete. The device driver constitutes the bottom-half. The bottom-half interfaces with the device, returns data to the application's memory space, and marks the application as ready to resume execution. This mechanism allows top-half actions to complete quickly, returning control to the application as soon as possible.
Bottom-half tasks are deferred until a convenient time. This mechanism allows the OS to merge adjacent blocks into a single request or enforce priority among accesses. Since the bottom-half waits until a convenient time to execute, the mechanism is referred to as deferred work. Since accesses may be deferred, multiple accesses may be issued to a device consecutively. Simunic et al. [18] observe that policies predict more effectively if a 1000 ms filter is used for disk accesses and a 250 ms filter is used for network accesses. These filters allow multiple deferred accesses to be merged into a single access.
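Such a filter may be sketched as follows: accesses separated by less than the filter length are collapsed into a single access. Timestamps are in seconds, and the function shape is illustrative.

```python
# Sketch of the deferred-access filter: a burst of accesses closer together
# than `filter_len` seconds is merged into one access, keeping the time of
# the last access in the burst.
def merge_accesses(times, filter_len):
    """Collapse bursts of deferred accesses into one access per burst."""
    merged = []
    for t in sorted(times):
        if merged and t - merged[-1] < filter_len:
            merged[-1] = t           # still within the burst: extend it
        else:
            merged.append(t)         # gap wider than the filter: new access
    return merged
```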
Deferred work plays an important role in managing state transitions in Linux. When a state transition is requested, a command is passed to a bottom-half to update the device's power state. The actual state transition may require several seconds to complete and does not notify Linux upon completion. The exact power state of a device during a transition is unknown to Linux because the commands are handled at the device-driver level. Device accesses are managed in device drivers, as well, implying that the status of outstanding requests is also unknown and cannot be used to infer power states. HAPPI could obtain the exact power state of a device by modifying the bottom-half in the device driver. However, drivers constitute 70 percent of Linux's source code [30]. Any solution that requires modifying all device drivers is not scalable. Modifying the subset of drivers for the target machine is not portable. Hence, the present system and method estimates the state transition time using ACPI information and updates the state after the time expires.
Maintaining Access History
Policies require knowledge of device accesses to predict idleness and provide estimates for policy selection. The method for measuring device accesses directly affects HAPPI's ability to select the proper policy for different workloads. We describe above how a filter merges deferred accesses into a single access. When a request passes through the filter, HAPPI records the access in a circular buffer. A circular buffer is illustratively used rather than a dynamically-allocated list to reduce the time spent in memory allocation and release and to limit the amount of memory consumed by HAPPI. However, other types of storage may be used. After HAPPI records the access, the active policy and all measurements are notified of the event. Since all policies require information about device accesses, these functions are statically compiled into the kernel. Access histories are the only components of HAPPI that cannot be loaded or removed at run-time.
The system and method of the present invention determines the circular buffer's length experimentally because the proper buffer length depends on workloads.
The present system and method targets interactive workloads, common to desktop environments. An 8-entry buffer is illustratively used because this buffer quickly discards history when workloads change but maintains sufficient history to select policies accurately.
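The fixed-length access history may be sketched with a circular buffer; here Python's deque with a maxlen stands in for the kernel's statically-sized buffer, and the class name is hypothetical.

```python
from collections import deque

# Sketch of the bounded access history: when the buffer is full, recording
# a new access discards the oldest one, so memory use stays fixed and old
# history is shed quickly when the workload changes.
class AccessHistory:
    def __init__(self, length=8):
        self.buf = deque(maxlen=length)   # overwrites the oldest entry when full

    def record(self, time, size):
        """Store the time and size of an access, as described above."""
        self.buf.append((time, size))

    def trace(self):
        """Return the recent accesses, oldest to newest, for estimators."""
        return list(self.buf)
```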
Advanced Measurements
The present system and method provides an access history for each device to facilitate policy selection. However, some policies require more complex data than access history, such as request state transition probability matrices [10]. Advanced measurements can be directly computed from the history of recent accesses. Since such information is not required by all policies, HAPPI does not provide the information directly. HAPPI provides the minimum common requirements for policies. This design is based upon the end-to-end argument of system design [31], providing the minimum common requirements to avoid unnecessary overhead. Although it does not directly provide these complex measurements, HAPPI provides an interface for measurements to be added as loadable kernel modules. A new measurement registers a callback function pointer with HAPPI that returns the measurement, and requests events in the same manner as policies. If a policy requires additional measurements, the policy calls the “happi_request_measurement” function with an identifier for the measurement. HAPPI returns a function pointer that the policy can use to retrieve the measurement data.
The system and method of the present invention implements measurements as separate kernel modules because several policies may require the same measurement. By separating the measurement from the policies, the measurement is computed once for all the policies in the system. Since measurements are always needed, they receive all requested events, whereas inactive policies do not respond to events. If policies were individually responsible for generating measurements, their measurements would only consider the time when the policy has been active. Thus, policies would consider different time spans in their estimator functions. Implementing measurement as separate modules also allows measurements to be improved independently of policies.
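The registration interface may be sketched as follows. Only the happi_request_measurement name appears in the specification; the registration function name and the dictionary-based registry are assumptions standing in for the kernel-module mechanism.

```python
# Sketch of the measurement interface: a measurement module registers a
# callback under an identifier, and a policy later requests that callback
# by identifier to retrieve the measurement data.
_measurements = {}

def register_measurement(identifier, callback):
    """Called by a measurement module when it is loaded (name assumed)."""
    _measurements[identifier] = callback

def happi_request_measurement(identifier):
    """Return the callback a policy uses to retrieve the measurement,
    or None if no such measurement module is loaded."""
    return _measurements.get(identifier)
```

Because the registry is shared, a measurement is computed once and served to every policy that requests it, as described above.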
Evaluating and Changing Policies
The system and method of the present invention automatically chooses the best policy for each device for the current workload and allows power policies to change at run-time, whereas existing power management implementations require a system reboot. HAPPI's evaluator is responsible for selecting the active policy. The evaluator is a loadable kernel module, allowing the system administrator to select an evaluator that optimizes for specific power management goals, for example, to minimize energy consumption under performance constraints. Since the evaluator is a loadable module, the administrator may change evaluators without rebooting if power management goals change. The administrator inserts the module using the “insmod” command. From this point onward, the evaluator selects power policies automatically. When a policy is inserted into the kernel using “insmod”, the evaluator is notified that a new policy is present and re-evaluates all policies. After the best policy is selected for each device, HAPPI enters the steady-state operation described above.
If the evaluator 24 changes the active policy 22, the old policy must relinquish control of the device's power states and the new policy must acquire control.
Acquiring the spin lock prevents HAPPI from interrupting the old policy if it is currently issuing a command to the device. Once the spin lock is acquired, the old policy is no longer capable of controlling the device. The old policy's remove function is called to delete any pending timers and force the policy to stop controlling the device. After the old policy has successfully stopped controlling the device, HAPPI enables the events for the new policy and calls the new policy's “initialize” function. The new policy uses this function to update any stale data structures and activate its timers. At this point, HAPPI enables interrupts and releases the device's spin lock, allowing the new policy to become active. The performance loss for disabling interrupts and acquiring locks is negligible.
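The handoff sequence may be sketched as follows. The remove and initialize hook names follow the text, while the lock object stands in for the device's spin lock and interrupt disabling; the function shape is an assumption.

```python
import threading

# Sketch of the policy handoff: under the device lock, the old policy's
# remove hook runs (deleting pending timers and stopping its control of
# the device), then the new policy's initialize hook runs (refreshing
# stale data structures and arming its timers).
device_lock = threading.Lock()      # stands in for the spin lock / IRQ disable

def switch_policy(old_policy, new_policy):
    with device_lock:               # old policy cannot be mid-command here
        if old_policy is not None:
            old_policy.remove()     # stop the old policy's control of the device
        new_policy.initialize()     # new policy refreshes state and arms timers
    return new_policy               # the new active policy for the device
```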
Replaced policies may elect to save or discard their current predictions. If history is saved, the information may be used when selected in the future or in future estimates. In one illustrative embodiment, previous history is discarded when a policy is replaced in favor of a different policy. A policy is replaced because its estimate indicates that it is incapable of saving as much energy as another eligible policy for the current workload. Replacement implies that a policy's idleness prediction is poor. Hence, discarding previous history resets the policy's predictions to an initial value when providing another estimate and often allows the policy to revise its prediction much more quickly than by saving history.
Changing policies indicates that (a) the old policy is mispredicting or (b) the new policy can exploit additional idleness. HAPPI must evaluate policies frequently enough to detect these conditions.
Performance Overhead
We use “oprofile” [32] to quantify the computational overhead for automatic policy selection. HAPPI consists of two types of overhead beyond the traditional single-policy power management approach: recording access history and policy estimation. Recording access history is unnecessary in single-policy systems because the policy responds to accesses immediately. Estimation is required by HAPPI to determine the best policy from a policy set.
To compute the performance overhead from HAPPI, we run the benchmark described below and add the execution times of the history and estimation functions. A summary of profiling results for HAPPI's default configuration is shown in Table I. This configuration uses an 8-entry history buffer, a 20 second evaluation period, and five policies. Profiling indicates that 0.265 percent of all execution time is HAPPI overhead. Of this overhead, 0.155 percent is spent recording access history and 0.041 percent of execution time is spent evaluating policies. The cumulative execution time of all HAPPI components, including policies, policy selection, and state changes, is 0.299 percent. Hence, automatic policy selection causes little decrease in system performance, implying that it is practical for a variety of systems, including high-performance computers.
When five policies are eligible, the total overhead is less than 0.3 percent. These results indicate that HAPPI is capable of supporting many policies with acceptable overhead. The overhead to record access history is independent of the number of policies. The evaluation function overhead is proportional to the number of policies in the system and their complexity. Since evaluation occurs infrequently (every 20 seconds), estimation's impact on performance is small. The performance overhead from policies is bounded by the complexity of the most computationally intensive policy and is independent of the number of eligible policies.
The system and method of the present invention allows an existing policy to be removed and a new policy to be added without rebooting the system. Hence, HAPPI can be used as a framework for iteratively improving policies. All the examples provided herein may be performed without rebooting the machine. This is important because policies may require many modifications (i.e., tuning) to achieve energy savings. Two policies, exponential averages [16] and nonstationary Markov models [10], are illustrated, and iterative improvements are made to each policy.
One illustrated example managed three devices including an IBM DeskStar 3.5″ disk (HDD), a Samsung CD-ROM drive (CD-ROM), and a Linksys NC100 PCI network card (NIC). The parameters for the devices were determined by experimental measurement using a National Instruments data acquisition card (NI-DAQ). A PCI extender card was used to measure energy consumption for the NIC.
Table II lists the information required by the ACPI specification for each device. The active state is the state where the device can serve requests. The sleep state is a reduced power state in which requests cannot be served. Changing between states incurs the energy and wakeup delay shown in Table II. For reference, the break-even time of each device is included.
To illustrate the present system and method's ability to track changes in workloads and selected policies, applications were executed that provide a wide range of activities for HAPPI to manage. The activity level of each device for each workload is indicated in Table III. The workloads include:
Workload 1: Web browsing+buffered media playback from CD-ROM.
Workload 2: Download video and buffered media playback from disk.
Workload 3: CVS checkout from remote repository.
Workload 4: E-mail synchronization+sequential access from CD-ROM.
Workload 5: Kernel compile.
Accuracy of Estimator Models
Accurate power models are required by estimators to determine the correct power policies for each device. This section describes a series of experiments that indicate how accurate estimators are in practice, compared to the hardware they model. We run a sample workload on the hardware and compare the estimates at each time interval to the hardware measurements. The relative error between estimators is more important than the absolute error of individual policies because HAPPI needs only to choose the best policy from the eligible policies. We observe differences of 14 percent, 20 percent, and 13 percent between policies for the HDD, CD-ROM, and NIC, respectively. The exponential average policy exhibits a higher percent error than the other policies for the CD-ROM because the CD-ROM automatically enters a low-power mode after long periods of idleness. This state cannot be controlled or disabled by the OS. However, we note that the exponential average policy is very unlikely to be selected in these circumstances, as the other policies are able to exploit the idleness more effectively.
The accuracy of these estimators is dependent upon the power model. Many papers [33]-[38] have studied power models. In HAPPI, power models may also be inserted as loadable modules. This mechanism allows power models to be improved independently of policies. Simple state-based power models are illustratively used for the estimators, with power consumption for our devices determined through physical measurement. These measurements should be available through ACPI, but most I/O hardware devices do not fully implement the ACPI specification yet. The present system and method simplifies the implementation of policies and provides a motivation for hardware manufacturers to fully implement the ACPI specification.
Exponential Average Policy
The exponential average policy [16] predicts a device's idleness and makes decisions to shut down a device immediately following each access. The exponential average policy is abbreviated herein as “EXP.” This policy illustratively uses the recursive relationship I[n+1] = α·i[n] + (1−α)·I[n] to predict the idleness after the current access, I[n+1], from the previous prediction I[n] and the previous actual idle length i[n]. The parameter α is a tunable parameter (0 ≤ α ≤ 1) that determines how heavily the most recent idle length is weighted. The authors in [16] suggest α=0.5.
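For purposes of illustration, the EXP update described above may be sketched as follows. The class name, interface, and the break-even shutdown test are illustrative assumptions rather than the actual HAPPI implementation; only the recursive prediction formula comes from [16].

```python
# Illustrative sketch of the EXP predictor; names and the shutdown
# test against the break-even time are assumptions for this example.

class ExponentialAverage:
    """Predicts the next idle length as a weighted recursive average."""

    def __init__(self, alpha=0.5, breakeven=2.0):
        self.alpha = alpha          # weight of the most recent idle length
        self.breakeven = breakeven  # device break-even time (seconds)
        self.prediction = 0.0       # I[n], the current idleness prediction

    def on_access_end(self, actual_idle):
        """Update I[n+1] = alpha * i[n] + (1 - alpha) * I[n], where
        actual_idle is the length of the idle period just observed."""
        self.prediction = (self.alpha * actual_idle
                           + (1 - self.alpha) * self.prediction)

    def should_shut_down(self):
        """Shut down only when the predicted idleness exceeds the
        break-even time; otherwise sleeping would waste energy."""
        return self.prediction > self.breakeven
```

In use, the predictor is updated after every access and consulted immediately afterward, mirroring EXP's shutdown-on-access behavior.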
Accesses must pass through a filter to record deferred accesses properly as discussed above. If the policy is implemented exactly as described in [16], EXP exhibits poor performance and energy savings because the policy does not account for deferred accesses. EXP makes decisions immediately following an access, but the policy cannot ensure additional deferred accesses will not occur until the filter length has expired. The original version of EXP is referred to herein as “EXP-unfiltered.” In an illustrated embodiment of the present system, EXP is modified to delay its decision until the filter length has expired before making state transition requests. This modification improves the likelihood that a burst of deferred accesses is completed before shutting down a device. This new version of the policy is referred to herein as “EXP-filtered.”
The present system and method uses HAPPI to compare the two policies.
The illustrative embodiment of
The present system and method may be used to tune various policy parameters for the target hardware. Tuning is important because it allows the policy to achieve better energy savings on each device. The main parameter of EXP is the exponential weight α. The policy in [16] suggests α=0.5. Values for α of 0.25, 0.5, and 0.75 were considered to determine the best α for each device. Table V indicates that very little difference exists between different α values. Hence, designers should not spend too much effort tuning α's value, since changes have a negligible impact on energy savings. The present system and method allows us to simultaneously compare the effects of different α values and conclude that the difference is negligible. Hence, the present system and method can help designers decide where to focus efforts for energy savings.
D. Nonstationary Markovian Policy
The nonstationary Markovian policy [10] models device accesses using Markov chains. This policy is abbreviated herein as “NSMARKOV.” At fixed periods, called time slices, NSMARKOV computes a state transition probability matrix for the device. This matrix contains the probability that a request occurred in each power state and is implemented as a measurement in HAPPI. At each time slice, NSMARKOV uses the matrix measurement to index into a lookup table that specifies the probability of issuing each power transition command. NSMARKOV also uses preemptive wakeup, where the device may be awakened before an access to improve performance.
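The per-time-slice decision can be sketched as follows. The single-valued measurement, the table indexing, and the command names are simplifying assumptions for this example; the actual policy in [10] computes a full transition probability matrix.

```python
import random

# Illustrative sketch of NSMARKOV's randomized per-time-slice decision;
# the measurement, table index, and command names are assumptions, not
# the actual matrix computation from [10].

def nsmarkov_decide(observed_idle_fraction, table, rng=random.random):
    """At each time slice, map the measured request statistics to a
    shutdown probability and issue a randomized power command."""
    # Quantize the measurement into a lookup-table index (simplified).
    index = min(int(observed_idle_fraction * len(table)), len(table) - 1)
    shutdown_probability = table[index]
    return "sleep" if rng() < shutdown_probability else "stay_awake"
```

The randomized command is what distinguishes NSMARKOV from deterministic timeout policies: the lookup table encodes how aggressively to shut down for each observed access pattern.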
1) Preemptive Wakeup: NSMARKOV described in [10] may awaken a device before an access occurs. This mechanism provides statistical guarantees for performance. The authors of [10] demonstrate similar energy savings to other policies for a laptop disk and a desktop disk. The system and method of the present invention determines if these conclusions are valid for different devices. The policy with preemptive wakeup is referred to herein as NSMARKOV-preempt and the policy without preemptive wakeup as NSMARKOV-no-preempt.
The energy measurements in Table VI support this claim. The NSMARKOV-preempt policy consumes 40 percent and 79 percent more energy than the NSMARKOV-no-preempt policy for the HDD and CD-ROM, respectively. NSMARKOV-preempt consumes 5 percent less energy than NSMARKOV-no-preempt for the NIC. Closer inspection of the experiment reveals that NSMARKOV-preempt's performance improvements reduce the overall run-time of the experiment by 6 percent. Hence, its energy consumption is lower than NSMARKOV-no-preempt's. The system and method of the present invention compares policies automatically and chooses the best policy for the current workload and hardware. A system may include both preemptive and nonpreemptive policies. The system and method of the present invention selects the most effective policy based on the workload. In this example, only energy savings are compared. The evaluator 24 can also consider performance when selecting policies and may select NSMARKOV-preempt due to its improved performance.
2) Tuning Decision Period: NSMARKOV makes decisions at periodic intervals called time slices. The time slice length is important because the length affects the expected time between device state changes. Different access patterns and power parameters may require different time slices to reduce energy consumption. The system and method of the present invention assists the process of selecting a proper time slice for each device. FIGS. 11 (a) and 11 (b) show the estimates for 1-second, 3-second, and 5-second time slices and the selected time slices for each device. Table VII indicates that the HDD saves more energy with the 3-second policy than the 1-second and 5-second policies. The system and method of the present invention confirms this result by selecting the 3-second policy most frequently.
The CD-ROM selects a 3-second time slice because CD-ROM accesses tend to be bursty. Since the CD-ROM is a read-only device, it is only accessed on demand reads. The accesses cease when the application finishes reading files, creating bursty behavior. A 3-second time slice exploits this behavior by shutting down shortly after bursts. The 1-second time slice is too short and occasionally mispredicts bursts. The policy selection varies during Workload 4. In this workload, idleness varies more widely and decisions should become more conservative to avoid wasting energy.
For the NIC, a 5-second time slice saves more energy because accesses are more frequent and less predictable than the other devices. Smaller time slices shut down more aggressively and mispredict frequently under the NIC's workloads. Little difference is observed between estimates because little energy penalty results from misprediction when long idleness is considered. The difference is more obvious during Workload 3 (point C) because the history length is much shorter. Shorter time slices mispredict the bursts, resulting in much higher estimates. A similar instance is observed at the start of Workload 4 (point D). Table VII validates the present system's selection, indicating that the 5-second time slice saves 18 percent and 11 percent more energy with respect to NULL than the 1-second and 3-second time slices, respectively.
Since the present system and method selects the best policy among all eligible policies, it is easy to determine the values for the policy parameters. In fact, the same policy can be loaded into HAPPI multiple times with different parameters. HAPPI selects the variant with better energy savings, hence removing manual parameter tuning altogether. The policy designer need only specify a set of reasonable values and insert all the policies into HAPPI.
Selecting the Best Policies Using HAPPI
The system and method of the present invention may also be used to select the best policy for a given workload. The same evaluation mechanism discussed above is used to select the best policy from a set of distinct policies because HAPPI makes no distinction between the same policies with different parameters and completely different policies. Five power management policies are illustratively considered including the null policy (NULL), 2-competitive timeout (2-COMP) [15], adaptive timeout (ADAPT) [39], exponential averages (EXP) [16], and the nonstationary Markovian policy (NSMARKOV) [10]. In ADAPT, the policy uses the breakeven time as its initial value and changes by 10 percent of the break-even time on each access. EXP-filtered and NSMARKOV-no-preempt are used in an illustrated embodiment. Any other desired policies may also be used. However, each policy is tuned individually to improve readability of figures. The present system is capable of selecting the correct policy for different workloads. In this embodiment, distinct policies, rather than different parameters of the same policy are compared.
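The selection step itself reduces to choosing the policy whose estimator reports the lowest average power for the recorded access history. The dictionary-based interface below is an illustrative assumption; HAPPI's evaluator operates on loadable kernel modules, not Python objects.

```python
# Minimal sketch of automatic policy selection as described: each
# eligible policy reports an estimated average power for the current
# request pattern, and the evaluator activates the lowest estimate.
# The mapping interface is an illustrative assumption.

def select_best_policy(estimates):
    """estimates maps policy name -> estimated average power (watts);
    return the name of the policy to activate for this device."""
    return min(estimates, key=estimates.get)
```

For example, given estimates for the five illustrated policies on one device, the evaluator would activate whichever name carries the minimum value and deactivate the previously active policy.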
We begin by observing the estimates for the HDD.
The CD-ROM exhibits a very different workload from the HDD. It was determined above that a 3-second time slice saves more energy than longer periods because CD-ROM accesses are very bursty. The beginning of Workload 1 exhibits this behavior and is indicated at point D in
The NIC experiences bursty accesses, as well. However, the NIC's accesses are often followed by more bursty accesses during spans K and L. As described above, NSMARKOV predicts these accesses well using a 5-second time slice. EXP, ADAPT, and 2-COMP mispredict frequently, as indicated by sharp spikes in their estimates. NSMARKOV uses statistical information about the workload to become more conservative in its shutdowns. Table VIII indicates that HAPPI selects the proper policy.
This illustrated embodiment compares several distinct policies simultaneously on different devices and provides insight into policies' properties that make them effective in commodity OSs. Several opportunities exist to save energy for the HDD. However, the workloads frequently change before some policies can adapt to the new workloads. Two properties of the HDD access trace indicate that the ADAPT and EXP policies are unlikely to achieve significant energy savings beyond NSMARKOV. First, accesses do not arrive quickly enough to adapt to idle workloads because Linux's laptop_mode clusters accesses together. Second, when accesses arrive quickly, insufficient idleness exists to save significant energy. Hence, ADAPT and EXP are unlikely to save more energy than NSMARKOV for HDD workloads.
In contrast, for the CD-ROM and NIC, accesses arrive in bursts for both devices and allow many opportunities to save energy if bursts can be predicted accurately. However, we observe that NSMARKOV's probabilistic models detect bursts more accurately than ADAPT and EXP, which are heavily weighted by recent history. In the illustrated embodiment, NSMARKOV is the best policy among the five eligible policies. The present system allows experiments to be performed easily, even for advanced policies.
Even though only five policies are illustrated herein, it is understood that many new policies may be added due to HAPPI's low overhead. Users may perform experiments with their innovative policies on real machines easily. The simple interface of the present invention encourages the development of sophisticated policies that can save more energy.
The illustrated embodiment considers policies that control devices independently. Many policies [7], [24]-[28] have been designed to control multiple devices simultaneously. The present system and method provides a mechanism that may be adapted to choose between multiple independent policies or a single policy that controls multiple devices.
Many policies rely on application-directed power management [40]-[42]. Application programs issue power commands intelligently based on applications' future access patterns. Although we have a different goal than application-directed power management, HAPPI does not preclude the use of application-directed power management. Policies for application-directed power management can be implemented as kernel modules and export interfaces to applications. These policies provide estimates to HAPPI to determine if application-level adaptation can provide more energy savings than other policies in the system. No studies consider a mix of adaptive and nonadaptive applications. HAPPI provides a mechanism to compare application-directed policies and allows comparison of application-directed policies with unmodified applications.
Source code for HAPPI and the policies included herein is available for download at http://engineering.purdue.edu/AOSEM.
The system and method of the present invention provide an improved architecture that allows policies to be compared and selected automatically at run-time. Policy configurations are heavily dependent on a device's power parameters and workload. Therefore, policies should be tuned for specific platforms for best performance. The system and method of the present invention simplifies this configuration process by automatically selecting the proper policy for each device.
Workload Adaptation with Energy Accounting in a Multi-Process Environment
The following listed references are expressly incorporated by reference herein. Throughout the specification, these references are referred to by citing to the numbers in the brackets [#].
Further details of an illustrated embodiment of the present invention related to energy accounting in both runtime policy selection systems and workload adaptation systems are included in Exhibit C, entitled “Power Management with Energy Accounting in a Multi-Process Environment,” of U.S. Provisional Application Ser. No. 60/779,248, filed Mar. 3, 2006, which is expressly incorporated by reference herein. In addition, References [1]-[32] listed in Exhibit C of U.S. Provisional Application Ser. No. 60/779,248 are all expressly incorporated by reference herein.
Dynamic power management (DPM) has been extensively studied in recent years. One approach for DPM is to adjust workloads, such as rescheduling or removing requests, as a way to trade off energy consumption and quality of service. Since adjusting workloads often requires understanding the internal context and mechanisms of applications, some studies allow applications themselves, instead of the operating system (OS), to perform the adjustment. These studies focus on a single application or process. However, when multiple concurrent processes share the same hardware component, adjusting one process may not save energy. In another embodiment of the present invention, a system and method is provided that instruments the OS to provide across-process information to individual processes for better workload adjustment. The present system performs “energy accounting” in the OS to analyze how different processes share energy consumption and assigns energy responsibility to individual processes based on how they affect power management. The assignment is used to guide individual processes to adjust workloads. The illustrated method is implemented in Linux for evaluation. The examples show that: (a) our energy accountant can accurately estimate the potential amounts of energy savings for workload adjustment, and (b) guided by our energy accountant, workload adjustment by applications can achieve better energy savings and efficiency.
Many techniques [1A] have been proposed in the last several years to reduce energy consumption in computer systems. Among these techniques, dynamic power management (DPM) has been widely studied. DPM saves energy by shutting down hardware components when they are idle. Since shutting down and waking up a component consume energy, only long idle periods can justify such overhead and obtain energy savings. Most studies on DPM focus on improving power management policies to predict the lengths of future idle periods more accurately [4A], [8A], [9A], [11A], [15A], [17A]. Even though improving the power manager can effectively reduce the energy during long idle periods, a workload without long idle periods provides no opportunity for the power manager to save energy. To resolve this, the workload needs to be adjusted to create long idle periods. The studies in the literature present two types of workload adjustment: (a) clustering (also called “rescheduling”) programs' requests [2A], [12A], [16A], [18A] and (b) removing requests [6A], [7A], [20A] to tradeoff quality of service for energy savings.
In terms of which entity performs the workload adjustment, these studies can be classified into two approaches: (a) centralized adjustment by the operating system (OS) and (b) individual adjustments by applications themselves. In the first approach, applications inform the OS of the release time and the deadline of each request, and the OS reschedules the requests based on their time constraints [12A], [13A]. This approach can handle multiple concurrent processes. However, the OS has limited understanding of the internal context and mechanisms of applications, and it is often more effective to allow applications themselves to perform the adjustment. For one example, a video streaming application can lower its resolution and request less data from the server so the workload on the network card is reduced. For another example, a data-processing program may prefetch needed data based on the program's internal context to cluster the read requests to the storage device. Previous studies [2A], [6A], [7A] have demonstrated the effectiveness of workload adjustment by applications for both clustering and removal (also called “reduction”).
The limitation of application-performed adjustment is that previous studies focus on a single application or process. In a multi-process environment, energy reduction may be affected by all concurrent processes.
In
In the system and method of the present invention, the OS provides information to individual processes such that each process can consider other concurrent processes for better workload adjustment.
The present system and method (1) determines how much energy can be saved by adjusting an individual process in a multi-process environment and (2) determines how such information can be used at runtime to improve workload adjustment by the individual process for better energy savings and efficiency. The system uses energy accounting by the OS to analyze energy sharing among multiple processes and the opportunities for energy savings. Energy accounting is performed by the OS because the OS can observe the requests from multiple processes. The OS also determines when to shut down hardware components to exploit the idleness between requests. The present system and method analyzes how different processes share the energy consumption of the hardware components and estimates the potential energy reduction from adjusting individual processes.
Examples are presented on workload clustering and reduction, respectively. These examples show illustrated embodiments of how to provide the accounting information to individual processes at runtime to guide their workload adjustments. For example, if clustering the requests of the current process can save little energy, the clustering can be stopped to save the energy consumed by buffer memory [2A]. The present system and method accurately reports the potential amounts of energy savings for clustering and removing requests. The method illustrated guides runtime workload adjustment to save more energy and achieve better energy efficiency.
Previous studies have considered adjusting workloads for power management. One approach is centralized adjustment by OS. Lu et al. [12A] order and cluster the tasks for multiple devices to create long idle periods for power management. Weissel et al. [18A] propose to assign timeouts to file operations so they can be clustered within the time constraints. Rong et al. [16A] divide power management into system and component levels and propose clustering requests by modeling them as stochastic processes. Zeng et al. [20A] assign an energy budget to each process and the process is suspended when its budget is consumed. Another approach is application-performed adjustment. Cai et al. [2A] use buffers to cluster accesses to a device for data streaming applications. Flinn et al. [6A] reduce the quality of service, such as the frame size and the resolution for multimedia applications, when battery energy is scarce. These studies consider only a single application or process. The system and method of the present invention considers multiple processes and instruments OS to provide across-process information to individual processes for better energy savings and efficiency.
There have been several studies profiling processes' energy responsibilities. PowerScope [7A] uses a multimeter to measure the whole computer's power consumption and correlates the measurements to programs by sampling the program counter. Their study provides information about procedural level energy consumption. Chang et al. [3A] conduct a similar measurement with a special hardware called Energy Counter. Energy Counter reports when a predefined amount of energy is consumed. ECOSystem [20A] models the energy consumption of different components individually and assigns energy to the processes by monitoring their usage of individual components. ECOSystem controls processes' energy consumption using operating system (OS) resource allocation. Neugebauer et al. [14A] perform similar energy assignment in a system called Nemesis OS providing quality of service guarantees. None of these studies examine the relationship between processes' energy responsibilities and the potential energy savings by adjusting the processes' requests. Moreover, they do not examine energy sharing in a multi-process environment and the effects of workload adjustment. Hence, these studies are insufficient for estimating the energy savings of workload adaptation in a multi-process environment.
The system and method of the present invention (a) estimates the energy savings from workload adjustment when concurrent processes are considered, and (b) provides runtime adaptation method to use the estimation to guide workload adjustment.
Energy Accounting
The present system and method uses energy accounting to integrate the power management by OS and the workload adjustment by applications in a multi-process system 100, as shown in
In the energy accounting analysis, it is first assumed that there are three power states: busy (serving requests from processes), sleeping (requests have to wait for the component to wake up), and idle (not serving requests but ready to serve without delay). The component consumes power in busy and idle states and consumes no power in the sleeping state. Power management intends to reduce unnecessary power consumption during idleness. The component wakes up if it changes from the sleeping state to the busy or the idle state. The component is shut down if it enters the sleeping state. The component's break-even time (tbe) is defined as the minimum duration of an idle period during which shutting down the component can save energy; namely, the energy saved in the sleeping state can compensate the switching energy for shutdown and wakeup [1A].
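Under the three-state model just described (zero power in the sleeping state), the break-even time can be sketched as follows. The function signature is an illustrative assumption; the principle, that the idle-state energy over the period must exceed the switching energy and the period must cover the state-change delays, follows the definition above.

```python
# Sketch of the break-even calculation implied by the definition above,
# assuming this section's simplified model (zero sleeping-state power).

def break_even_time(idle_power, shutdown_energy, wakeup_energy,
                    shutdown_delay, wakeup_delay):
    """Minimum idle length for which shutting down saves energy: the
    energy spent idling for that long must exceed the shutdown plus
    wakeup energy, and the period must also fit both state-change
    delays."""
    energy_based = (shutdown_energy + wakeup_energy) / idle_power
    delay_based = shutdown_delay + wakeup_delay
    return max(energy_based, delay_based)
```

A device with a large switching energy relative to its idle power thus has a long break-even time, which is why only long idle periods justify shutdown.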
Energy Responsibility and Sharing
Energy responsibility is divided between the power manager 104 and the user processes so the processes' energy assignments are independent of specific power management policies. Energy responsibility is then divided among the processes based on how they affect the effectiveness of dynamic power management. The assignments are used to estimate potential energy savings from changing the workload.
Energy consumption that can be reduced by improving the shutdown accuracy is assigned to the power manager 104 and the remaining energy is assigned to the user processes. The energy assigned to a process can be reduced by adjusting the process. For example, if a hardware component serves only a single request as shown in
Any additional energy can be reduced by performing wakeup and shutdown immediately before and after the service. The energy ew+ea+ed is assigned to the process because this energy can be reduced only by removing the request. When multiple requests access a component, the necessary energy consumption is calculated based on the component's break-even time. The break-even time (tbe) is the minimum duration of an idle period during which the energy saved in the sleeping state can compensate the state-change energy (ed+ew) [1A]. If the idle period is longer than tbe as shown in
Symbols used herein illustratively have the meaning and units shown in Table I.
When a component is used by multiple processes, these processes may share the responsibility of energy consumption. The following example illustrates energy sharing among processes.
Example: Two processes use the same component, shown as r1 and r2 in
To calculate energy sharing, we extend the concept from
where ρ(t) is the power at time t. The value of tw is calculated backward from the service, i.e., at the moment of the service, t=0. In
Two processes do not share energy if their tw and td do not overlap, as shown in
This approach can be extended to three or more processes by calculating their sharing periods. If their periods overlap, they equally share the energy during the overlapped interval. This method can be applied to handle the situation when multiple processes use the same component at the same time. For example, a full-duplex network card can transmit and receive packets for different processes simultaneously. From the OS's viewpoint, the service time (ta) of these processes overlaps. The energy assigned to these processes is calculated using the overlap of the service time together with the forward and backward sharing periods.
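The overlap test above can be sketched for two processes as follows. Reducing each request to a single (start, end) sharing interval on a common timeline, and assuming constant power over the overlap, are illustrative simplifications of the tw/td construction.

```python
# Hedged sketch of energy sharing between two processes; each request
# is reduced to a (start, end) sharing interval on a common timeline,
# a simplification of the forward/backward tw and td periods.

def shared_interval(interval_a, interval_b):
    """Return the overlap of two sharing periods, or None if the
    processes do not share energy."""
    start = max(interval_a[0], interval_b[0])
    end = min(interval_a[1], interval_b[1])
    return (start, end) if start < end else None

def split_shared_energy(power, interval_a, interval_b):
    """One process's share of the energy consumed during the
    overlapped interval, split equally between the two processes
    (constant power over the overlap is assumed)."""
    overlap = shared_interval(interval_a, interval_b)
    if overlap is None:
        return 0.0
    return power * (overlap[1] - overlap[0]) / 2.0
```

Extending to three or more processes amounts to intersecting all the intervals and dividing the overlapped energy equally among the processes whose periods cover it.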
Estimation of Energy Reduction
As explained herein, the shared energy cannot be reduced by removing the requests from only one of the sharing processes. Let Ep be the total responsible energy of a process and Eh be the portion of the process' responsible energy shared with other processes. Then the potential energy savings from removing the process' requests are Sr=Ep−Eh.
Clustering obtains the maximum energy savings when the process' requests are all clustered together because this can create the longest idle period. This is equivalent to two steps: removing the process' requests first and then adding the cluster of requests back. Consequently, the cluster's energy consumption can be subtracted from Sr to obtain the potential energy savings (Sc) from clustering the process' requests. Let Ea denote the sum of all requests' ea in the cluster; then the energy consumption of the cluster is ew+Ea+ed and Sc=Sr−(ew+Ea+ed).
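The two savings estimates just defined can be written directly from their formulas. Ep, Eh, and Ea are taken as already-computed inputs; the function names are illustrative, but the arithmetic mirrors the text.

```python
# Sketch of the savings estimates defined above; symbols mirror the
# text: Ep is a process' total responsible energy, Eh the portion
# shared with other processes, Ea the summed service energy of the
# clustered requests, ew/ed the wakeup/shutdown energy.

def removal_savings(E_p, E_h):
    """S_r = E_p - E_h: removing the process' requests recovers its
    responsible energy except the portion shared with other processes."""
    return E_p - E_h

def clustering_savings(E_p, E_h, e_w, E_a, e_d):
    """S_c = S_r - (e_w + E_a + e_d): clustering is equivalent to
    removal followed by adding back one cluster of the requests."""
    return removal_savings(E_p, E_h) - (e_w + E_a + e_d)
```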
The potential energy savings indicate the possible energy reduction by future workload adjustment. The current energy savings Su, namely, the energy savings that have already been obtained, are then calculated. This value is used at runtime to determine whether the workload adjustment that has been performed is beneficial, as further explained below. Su is equal to the total reducible energy under perfect power management excluding the responsible energy of the actual power manager. The idle period between the two requests in
Effect of Expedition
The above analysis assumes that adjusting one process' requests does not affect the serving times of other processes' requests. This assumption should be reexamined because the completion time of the remaining processes may be expedited when a process' requests are removed or clustered. This is illustrated in
On the other hand,
Combining the two cases, the additional energy savings due to expedition is within the range [−ea,1, ea,1]. If process 1 has multiple requests that are immediately followed by other processes' requests, we use E′a to denote the total service energy of such requests of process 1. The total additional energy savings due to expedition after removing process 1 are then within the range [−E′a, E′a].
Multiple Sleeping States
The energy accounting rules may be extended to consider multiple sleeping states. A component cannot serve requests in any sleeping state and encounters switching delay and energy for entering a sleeping state and returning to the active state. Multiple sleeping states provide more energy saving opportunities than a single sleeping state. If another sleeping state is available with a shorter break-even time, the component can be shut down to save energy for a short idle period. We use s1, s2, . . . , sn as the n sleeping states. Without loss of generality, we assume that these states are ordered by decreasing power consumption. The component consumes the most power in s1 and the least power in sn. State sj is a deeper sleeping state than si if 1≤i<j≤n. A deeper sleeping state has larger wakeup and shutdown energy; otherwise, the shallower sleeping states should not be used. The terms ew,si, ed,si, Tw,si, and Td,si are used to denote si's wakeup energy, shutdown energy, wakeup delay, and shutdown delay, respectively.
With multiple sleeping states, the power manager's responsibility cannot be determined by simply comparing the length of an idle period with the component's break-even time. The component has multiple break-even times, one for each sleeping state.
We next determine the minimum length of an idle period for which entering s2 saves more energy than entering s1. Let t be the length of an idle period. The two states achieve the same energy savings if ed,1+ew,1+ps,1(t−Td,1−Tw,1)=ed,2+ew,2+ps,2(t−Td,2−Tw,2).
The solution t of this equation is the minimum duration of an idle period for which s2 should be used. Notice that this threshold time is different from s2's break-even time, because the break-even time tbe is defined between a sleeping state and the idle state, not between two sleeping states. Similarly, when there are more than two sleeping states, such a threshold time can be calculated for every sleeping state si (i≧2) by comparing its energy consumption with that of all the other sleeping states. Let ti be the threshold time for state si; the component should then enter sleeping state si if the length of the idle period is within the range [ti, ti+1].
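The selection rule above can be sketched in code. The function name and the representation of the thresholds as a sorted list are illustrative assumptions, not from the specification; t1 is taken to be s1's break-even time and the later ti are the threshold times described above.

```python
# Sketch: choosing a sleeping state from a predicted idle-period length,
# given per-state threshold times t1 <= t2 <= ... <= tn (t1 is s1's
# break-even time; deeper states have larger thresholds).

def choose_sleeping_state(idle_length, thresholds):
    """Return the index (1-based) of the deepest state si whose threshold
    ti the idle period meets, or 0 if the component should stay active."""
    state = 0
    for i, t_i in enumerate(thresholds, start=1):
        if idle_length >= t_i:
            state = i  # idle period falls in [ti, t(i+1))
        else:
            break
    return state
```

With the Microdrive thresholds discussed later in this disclosure (0.65 s and 1.87 s), an idle period of 1.0 s would select s1 and a period of 2.0 s would select s2.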
If there is only a single request, the component should be kept in the deepest sleeping state before and after serving the request in order to consume the minimum energy. Based on this principle, we use the ew and ed of the deepest sleeping state to calculate the sharing periods tw and td for each request when computing energy sharing. Then, the same procedure described above is used to estimate the energy reduction.
Workload Adaptation
As discussed above, after assigning each process its energy responsibility, the potential energy savings may be estimated for adjustments of that process. Request removal and clustering for adaptation are first considered. Energy accounting can be performed at runtime so that a process can adapt at runtime, by either request removal or request clustering, for better energy savings and efficiency. Specifically, energy responsibility is periodically calculated and assigned to each process, and the process is informed of the estimated energy savings Sc or Sr. The estimate from the previous period is assumed to be usable for the following period; this assumption is adopted by many adaptation methods. We focus on how one process should adjust its workload in a multi-process environment and assume the other processes do not adjust their workloads simultaneously.
1) Request Removal: We consider a method that allows a process to suspend for a period of time such that its requests are “removed” from that period. For example, when battery energy is scarce, a low-priority program may save its current progress and suspend as a tradeoff for energy savings. The program resumes later when energy becomes plentiful (e.g., by recharging the battery).
The third and fourth rows in
2) Request Clustering: We consider a method that uses a buffer to cluster the requests to a hardware component. For example, a video streaming program allocates additional memory to prefetch more frames from the server, and the network card is used to fetch frames only when the buffered frames have been consumed.
The six vertical arrows in
We can determine the size of the buffer for clustering as follows. Let T be the length of the period, B be the total number of bytes processed by the requests during the period, and W be the power of each page (4096 bytes) of the memory. We illustratively allocate x pages of memory buffer for prefetching. After the x pages of data are consumed, the system wakes up the hardware component to refill the buffer. After the buffer is refilled, the component returns to the sleep state to save energy. The average number of wakeups per period of T is therefore B/(4096x).
Each wakeup incurs the shutdown and wakeup energy ed+ew, so the average energy overhead incurred per period of T is (B/(4096x))(ed+ew).
The buffer energy per period of T is xWT. We should allocate a buffer to cluster requests if the following criterion is satisfied: Sc−(B/(4096x))(ed+ew)−xWT>0.
The left-hand side of the above inequality is the net potential energy savings per period of T. By calculating the derivative with respect to x, we find that the maximum of the left-hand side is Sc−2√(B(ed+ew)WT/4096), attained at x=√(B(ed+ew)/(4096WT)).
By including this result, our adaptation decision is quantified as follows. If Sc>2√(B(ed+ew)WT/4096), we allocate a buffer of size x=√(B(ed+ew)/(4096WT)) pages for clustering. If Sc≦2√(B(ed+ew)WT/4096), we deallocate the buffer if it has been allocated, because clustering in this case saves no energy.
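The buffer-sizing analysis can be sketched in code. The cost model here (B/(4096x) wakeups per period of T, each costing an overhead energy e_o, plus buffer power xW) is our reading of the analysis above; the function names are illustrative assumptions.

```python
import math

# Sketch of the buffer-sizing decision, under the assumed cost model:
# net savings per period f(x) = Sc - (B/(4096*x))*e_o - x*W*T,
# where e_o is the per-wakeup overhead energy (shutdown + wakeup).

def optimal_pages(B, e_o, W, T):
    """Buffer size x* (pages) maximizing net savings; setting df/dx = 0
    gives x* = sqrt(B*e_o / (4096*W*T))."""
    return math.sqrt(B * e_o / (4096.0 * W * T))

def should_cluster(Sc, B, e_o, W, T):
    """Allocate the buffer only if the maximum net savings,
    Sc - 2*sqrt(B*e_o*W*T/4096), is positive."""
    return Sc > 2.0 * math.sqrt(B * e_o * W * T / 4096.0)
```

The closed form can be sanity-checked by confirming that perturbing x* in either direction only reduces the net savings.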
An illustrated system and method of the present invention is implemented in Linux to discover the opportunities for energy reduction. The present system's energy accountant can accurately estimate the energy savings of workload adjustment, and the accounting information can guide workload adjustment at runtime to save more energy and achieve better energy efficiency. The energy efficiency is defined as the ratio of the amount of work to the energy consumption.
An illustrative embodiment of the present system and method was implemented on an experimental board called the Integrated Development Platform (IDP) by Accelent Systems running Linux 2.4.18.
We implemented a Linux kernel module to perform energy accounting. The input to this module is the starting and ending times of requests or idle periods of a hardware component 122. The timing information is obtained by inserting the kernel function “do_gettimeofday” into the process scheduler for the CPU or the device drivers for the I/O components. For the CPU, the request of a process is the duration between the time the process is switched in and the time it is switched out. For the other components, the request's duration is the time between when the request starts to execute and when it completes. If several consecutive requests are from the same process, they are merged into one request. With the timing information of requests, the accountant reconstructs the relationship between processes and their energy consumption for estimating the potential energy reduction. The accountant module provides three APIs, “get_sr (pid, cname)”, “get_sc (pid, cname)”, and “get_su (pid, cname)”, to provide the estimations of potential energy savings by removal, potential energy savings by clustering, and current energy savings for a process on a component, respectively. The process's pid is obtained by calling the Linux function “getpid( )”. The component's “cname” is the device name defined in Linux, e.g., “/dev/hda” for the disk.
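The merging of consecutive requests can be sketched as follows. This is an illustrative user-space sketch, not the kernel module's actual code; the (pid, start, end) tuple representation is an assumption.

```python
# Sketch (illustrative, not the kernel module itself) of merging runs of
# consecutive requests issued by the same process, as the accountant does
# with the collected timing information. Each request is (pid, start, end).

def merge_requests(requests):
    """Collapse consecutive requests from the same pid into one request
    spanning from the first start time to the last end time."""
    merged = []
    for pid, start, end in requests:
        if merged and merged[-1][0] == pid:
            # Same process as the previous request: extend its duration.
            prev_pid, prev_start, _ = merged[-1]
            merged[-1] = (prev_pid, prev_start, end)
        else:
            merged.append((pid, start, end))
    return merged
```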
Table II shows the parameters of the four illustrated hardware components in our example: the IBM Microdrive, the Netgear full-duplex network card, the Orinoco wireless network card, and the Intel XScale processor. All values are obtained from experiments. Our experiments do not set the processor to a sleeping state, so we do not report the processor's values of Td, Tw, ed, and ew. The break-even time tbe is the idle-period length at which sleeping and remaining idle consume the same energy, i.e., pi·tbe=ed+ew+ps(tbe−Td−Tw), where pi is the component's idle power; this gives tbe=(ed+ew−ps(Td+Tw))/(pi−ps).
The Microdrive has two sleeping states, s1 and s2. If the Microdrive's idle time is shorter than sleeping state s1's break-even time (0.65 seconds), the Microdrive should not sleep. If the idle time is longer than s2's break-even time (1.05 seconds), the Microdrive may enter s1 or s2. As explained above, to determine which state to choose, the threshold beyond which entering s2 saves more energy is calculated.
Let t be the length of idleness. The energy consumed by entering s1 is ed,1+ew,1+ps,1(t−Td,1−Tw,1)=0.124+0.207+0.24(t−0.159−0.273). The energy consumed by entering s2 is ed,2+ew,2+ps,2(t−Td,2−Tw,2)=0.135+0.475+0.066(t−0.160−0.716). The threshold is the value of t at which the energy is the same in either state: 0.124+0.207+0.24(t−0.159−0.273)=0.135+0.475+0.066(t−0.160−0.716), which gives t=1.87. Therefore, the Microdrive enters s2 only if the idle period is longer than 1.87 seconds (not s2's break-even time, 1.05 seconds). If the idle time is between 0.65 seconds and 1.87 seconds, the Microdrive enters s1.
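The threshold arithmetic can be checked numerically. This sketch simply re-solves the linear equation using the measured Microdrive values quoted above; the helper function name is illustrative.

```python
# Checking the Microdrive threshold: find t such that the energy of
# entering s1 equals the energy of entering s2, using the measured values.

def sleep_energy(ed, ew, ps, Td, Tw, t):
    """Energy of shutting down, sleeping, and waking up over an idle
    period of length t (the switching delays are deducted from t)."""
    return ed + ew + ps * (t - Td - Tw)

# The equality is linear in t, so it can be solved directly:
# ed1+ew1+ps1*(t-Td1-Tw1) = ed2+ew2+ps2*(t-Td2-Tw2)
num = (0.135 + 0.475 - 0.066 * (0.160 + 0.716)) - (0.124 + 0.207 - 0.24 * (0.159 + 0.273))
t = num / (0.24 - 0.066)  # approximately 1.87 seconds
```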
The Measured Parameters of the IBM Microdrive, the Netgear Full-Duplex Network Card, the Orinoco Wireless Network Card, and the XScale Processor (PXA250). The Microdrive has Two Sleeping States, Shown as “s1/s2”. The Wireless Card has Two Operational Modes: Transmission (tx) and Reception (rx).
The application programs used in one illustrated example include: “madplay”: an audio player, “xmms”: an audio streaming program, “mpegplayer”: an MPEG video player, “gzip”: a compression tool, “scp”: a secure file transferring utility, “httperf”: a program retrieving web pages. These programs have different workload characteristics on different components to demonstrate that the present system and method is applicable in different scenarios.
The programs chosen for different illustrative embodiments are based on two considerations: (a) Offline experiments are used to show that the energy accountant accurately reports the potential energy savings and we use simpler workloads for easier explanation of the details of the workloads. (b) Online experiments are used to show that the energy accountant handles more complex workloads to improve energy reduction. We use up to five programs running concurrently. Several components are used in illustrated embodiments to demonstrate that the present system and method is applicable to different components. A two-competitive time-out shutdown policy [10A] is used for each component, i.e., the timeout value of each component is set to be its break-even time.
Accuracy of Estimation
1) Clustering Requests: In this illustrated embodiment, the energy accountant is used to predict the energy savings by clustering for two programs, “xmms” and “scp”, on the Netgear network card. The two programs run concurrently. Program “xmms” retrieves data from the server periodically and stores the data in a buffer of 400 KB. When the amount of data in the buffer drops below 40 KB, the program refills the buffer again. When the buffer is full, “xmms” stops using the network card. Program “scp” has no buffering. This embodiment keeps the average bit rate of both programs at 50 Kbps. The purpose is to show that, even at the same average data rate, the energy responsibilities of the two programs can be significantly different if they have different degrees of burstiness in their requests. This embodiment evaluates the accuracy of Sc for the network card. The use of memory power to determine the optimal buffer size is evaluated below.
FIGS. 23 (a)-(f) show the potential and reported energy savings by clustering. In FIGS. 23 (a) and (b), “xmms” uses a 400 KB buffer while “scp” has no buffering.
2) Process Removal: This illustrated embodiment uses three programs, “gzip”, “scp”, and “httperf”, running concurrently. The energy accountant estimates the range of energy savings from the Microdrive, the wireless network card, and the XScale processor for removing one of the processes. FIGS. 24 (a)-(c) show the estimation and measurement results of the three components. These figures show the energy savings of the three components. The numbers 1, 2, and 3 in FIGS. 24 (a)-(c) indicate which process is removed: 1—gzip, 2—scp, and 3—httperf. If the estimated energy savings is a range, it is shown as a vertical line over the white bar, and the white bar represents the middle value of the range.
The measured data shows that the actual energy savings are close to the middle value of the estimation range. Since “httperf” does not use the Microdrive, the estimated energy savings is zero. However, the measurement shows that there are small energy savings (2.3%) on the Microdrive if “httperf” is removed. The reason is that removing “httperf” expedites the execution of the other two processes on the processor, which further expedites the accesses of those two processes on the Microdrive.
Similarly, removing “gzip” results in small energy savings (2%) on the wireless network card even though “gzip” does not use the network.
Runtime Workload Adjustment
1) Adaptive Clustering: As discussed above, clustering a process's requests may sometimes save little energy. In that case, the memory buffer should be deallocated so that (a) the buffer can be released for other programs, (b) the released memory may be turned off to save energy, and (c) the performance degradation due to clustering is avoided. Another illustrated embodiment evaluates only the energy savings, assuming the unused memory can be turned off to save power as suggested in [5A].
In this embodiment, “scp” is always running as the background process to upload data files from the Microdrive to a remote server. We choose “scp” because it has no stringent timing constraints. We perform clustering for “scp” on the Microdrive: a memory buffer is allocated to prefetch data from the Microdrive. The memory consumes 5×10−5 W for every page of 4 KB; the power is calculated using the SDRAM datasheet from the Micron website. We modified the program “scp” such that it periodically inquires from our energy accountant about the potential energy savings Sc and the current energy savings Su. The period is chosen as 10 seconds. A sensitivity analysis of this parameter is illustrated below.
The present method for clustering is described above. To test its effectiveness under different degrees of concurrency, the other programs (madplay, xmms, mpegplayer, gzip, and httperf) are occasionally selected to execute concurrently with “scp”. The degree of concurrency indicates how many concurrent user processes are running. When the degree of concurrency is one, only “scp” is running. When the degree is higher, the other five programs are randomly selected to execute. For example, when the degree of concurrency is three, two other programs execute concurrently with “scp”. We divide the whole duration of the experiment into 300-second intervals and randomly determine a degree of concurrency for each interval. Five examples are provided with increasing average and maximum degrees of concurrency. A 0.65-second timeout is used to shut down the Microdrive.
The present method is compared with a method (called clustering) that allocates the memory buffer based on the requests of only an individual process [2A]. The clustering method does not use the accounting information Sc and Su to adaptively deallocate and re-allocate the buffer for “scp”.
2) Dynamically Suspending Processes: In this embodiment, a process is suspended only when the suspension can save a significant amount of energy. We use an Orinoco wireless card to transfer data and measure the number of data bytes transmitted. Similar to the workload used for dynamic clustering, “scp” is used as the background process. It is assumed that “scp” is a low-priority process and that it can be suspended if at least 5% energy can be saved. In this embodiment, the energy accountant periodically (every 10 seconds) calculates the current energy savings (Su) and the potential energy savings (Sr) from removing or suspending the requests of “scp”. We use the mid-value of the estimation range of Sr. If Sr is larger than 5%, “scp” is suspended. If “scp” has been suspended and the current energy savings Su is less than 5%, “scp” is resumed. Occasionally, other programs (madplay, xmms, mpegplayer, gzip, httperf) are selected to execute concurrently with “scp”.
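The per-period suspend/resume rule can be sketched as follows. The 5% threshold and the use of the mid-value of Sr come from the text; the function name and boolean-state representation are illustrative, and savings are expressed as fractions.

```python
# Sketch of the adaptive suspension decision evaluated once per period
# (every 10 seconds in the embodiment). sr_mid is the mid-value of the
# estimated range of Sr; su is the current energy savings Su.

def suspension_decision(suspended, sr_mid, su):
    """Return True if "scp" should be suspended for the next period."""
    if not suspended:
        # Suspend only if removal would save at least 5% energy.
        return sr_mid > 0.05
    # Already suspended: resume once the obtained savings Su fall below 5%.
    return su >= 0.05
```

Using Su to decide when to resume gives the rule hysteresis: the process resumes only when the suspension is no longer paying off.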
FIGS. 26 (a) and (b) show the energy savings and efficiency for removing or adaptively suspending “scp”. The efficiency is measured as the number of bytes transferred by all programs for every Joule. The efficiency is normalized to the original workload (with the degree of concurrency 1/1/1) as 100%. When the degree of concurrency is one and the requests are removed, over 92% energy can be saved as shown in
3) Hybrid Workload Adjustment: In this embodiment, hybrid workload adjustment is performed by combining adaptive clustering and adaptive suspension. The motivation is that clustering can finish more work than suspension, so clustering is chosen when its potential energy savings are comparable to those of suspension. When Sr is less than 5% (note that Sc<Sr), we perform neither clustering nor suspension. If Sr is larger than 5%, we consider two cases: if Sc (excluding the buffer energy) is within 5% of Sr, we perform clustering; otherwise, we perform suspension. The experiment is performed on the Microdrive. The energy savings and efficiency of hybrid adjustment are compared with those of adaptive clustering and adaptive suspension.
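The hybrid rule can be sketched as a three-way decision. Here "within 5%" is read as a difference of at most five percentage points, which is our interpretation; the function name is illustrative and savings are fractions.

```python
# Sketch of the hybrid adjustment rule. sr_mid is the mid-value of Sr;
# sc excludes the buffer energy, as in the text.

def hybrid_decision(sr_mid, sc):
    """Return 'none', 'cluster', or 'suspend' for the next period."""
    if sr_mid < 0.05:
        return 'none'      # too little to gain either way (Sc < Sr)
    if sr_mid - sc <= 0.05:
        return 'cluster'   # clustering saves nearly as much as suspension
    return 'suspend'       # suspension saves substantially more
```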
FIGS. 27 (a) and (b) show the results. Adaptive suspension obtains the largest energy savings, but its energy efficiency is as much as 100% lower than the other two methods. On the other hand, adaptive clustering obtains the best energy efficiency, but its energy savings are as much as 40% lower than the other two methods. Hybrid adjustment combines the advantages of the other two methods: it saves energy comparable to adaptive suspension and achieves energy efficiency comparable to adaptive clustering. The reason is that hybrid adjustment performs clustering when its potential energy savings are close to those of suspension and thus completes more requests than adaptive suspension.
The time overheads of the three methods are shown in
In all the illustrated runtime embodiments, an adaptation period of 10 seconds is used. The adaptation period should be small in order to catch the runtime change of workloads in time. However, it should not be too small because the instantaneous workload variation may not reflect the future workload characteristics.
The present system and method illustratively assigns energy responsibilities to individual processes and estimates how much energy can be saved when a process clusters or removes requests, taking other concurrent processes into account. Each process can utilize such energy accounting information to improve its workload adjustment for better energy savings and efficiency. Energy savings and efficiency can be affected by the presence of other concurrent processes, and the illustrative methods are especially effective when the degree of concurrency is high. An OS can be instrumental in providing cross-process information to individual processes for better workload adjustment. A coordination framework that allows multiple processes to adjust their workloads simultaneously may also be provided using the features of the present system and method.
While this invention has been described as having exemplary designs or embodiments, the present invention may be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains.
Although the invention has been described in detail with reference to certain illustrated embodiments, variations and modifications exist within the scope and spirit of the present invention as described and defined in the following claims.
This application claims the benefit of U.S. Provisional Application Ser. No. 60/779,248, filed Mar. 3, 2006, which is expressly incorporated by reference herein.
This invention was partially funded with government support under grant award number 0347466 awarded by National Science Foundation (NSF). The Government may have certain rights in portions of the invention.