Scheduling jobs to run on a single machine is a fundamental optimization problem, as is scheduling jobs to run on unrelated machines. Typically in such scheduling problems the jobs arrive “online” and over time. In order to complete a job, the job needs to be assigned a certain amount of processing, referred to as its processing volume.
Traditionally, given a set of jobs, devices were run at their fastest possible speed, with the goal of minimizing the average flow time, where the flow time of a job (sometimes referred to as its response time) is the duration between its release and its completion. A standard objective is minimizing a (weighted) sum of flow times.
However, the amount of energy consumed by the processor or processors has become an important consideration, because of the high energy cost (e.g., of a datacenter), the wear on components, and possibly the battery life on mobile devices. A machine can run at many different speeds, with the tradeoff that higher speeds process jobs faster but consume more energy. The power (the rate at which energy is consumed) of a processor is a given function of the speed, e.g., the cube of the speed. Thus, if there is time to complete a job at a slower speed, running at the fastest speed is not desirable.
Scheduling jobs in a way that saves energy yet completes the jobs in a desired time is a question of knowing which job to schedule next (which may change as more important jobs arrive) and at what speed to run the machine. This is a complex problem that heretofore did not have a good solution.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards determining a speed and job using a clairvoyant algorithm (in which job volume data is known) to estimate a speed for a non-clairvoyant algorithm in which the job's volume is not known in advance. One or more aspects are directed towards scheduling a job based upon an energy and time objective, in which the job has an unknown volume. A starting speed for the job is computed based upon clairvoyant simulation information obtained from running at least part of at least one job. The job is run at the starting speed; weight-related information obtained while running the job is used to change the job's running speed.
In one or more aspects, a job scheduler is coupled to a job executer, in which the job scheduler is configured to input jobs and schedule the jobs for execution by the job executer. The job scheduler includes a non-clairvoyant algorithm configured to determine a speed for a job having an unknown volume based upon a simulation performed by a clairvoyant algorithm. The job scheduler provides the speed and a job to the job executer for execution.
One or more aspects are directed towards selecting a job having an unknown volume based upon a highest rounded density first and a queuing order. An estimated speed for the job is computed based upon running a clairvoyant algorithm simulation using any information available from running other jobs. The job is run at a speed that is based upon the estimated speed.
The present invention is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
Various aspects of the technology described herein are generally directed towards a job scheduling solution including an algorithm that decides which job to schedule next and at what speed to run the processor based on a history of jobs run so far. In a single machine version of the problem, each job has an importance, referred to as a density, which is known. However, the volume of a given job is not known beforehand. A non-clairvoyant algorithm may use a clairvoyant algorithm to determine a speed to run the job, which may change as the weight changes because of partial job execution. In the unrelated machines version of the problem, each job can have a different volume and a different weight for each machine.
It should be understood that any of the examples herein are non-limiting. As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computing, energy saving and job scheduling in general.
Most existing solutions to the problem of which job to run and at what speed are in a “clairvoyant” setting, where the algorithm knows in advance how much processing volume is needed for a job as soon as the job is released. A more difficult problem and often a more realistic one is where the processing volume is known only when the job is completed, i.e., in a “non-clairvoyant” setting.
In many instances, only the density (corresponding to the relative importance of a job) is available, not the volume or weight (density=weight/volume). For example, consider a server that gets requests for service in several queues where each queue has a different importance. The requests in each queue are ordered in FIFO order according to the arrival time, but the server may not know anything else other than that the requests need to be run as jobs.
Described herein is using the weights of partially or fully completed jobs to determine the speed for a job that is executing, including the weight of any part of the executing job itself that is known via partial completion. Indeed, with only densities known, setting the speed is difficult; in fact, with only the density known, the problem is non-trivial even for a single job. The optimal speed may vary greatly with the processing volume of the job; therefore, the algorithm has to continuously adapt as it learns more about the volume of the job. Furthermore, with multiple jobs, the order of job selection matters as well, because the choice of the job affects the information the algorithm obtains, which in turn affects the speed. FIFO (“first-in first-out”) ordering is used in one implementation; however, there is still a conflict between the FIFO rule and the HDF (“highest density first”) rule.
Described herein are constant-competitive algorithms for non-clairvoyant scheduling where the goal function is energy plus weighted fractional flow time, (where competitive refers to comparison against the schedule that would have been run had the volume knowledge gained in hindsight after the execution of the jobs been known beforehand). When a job is released, the system knows its density (but not its volume); the technology described herein considers any power function of the form P(s)=s^α.
In a clairvoyant setting, an optimal offline solution to the problem has an intrinsic dependence on the job volumes. For large volumes, the optimal speed starts high and gradually decreases over time as jobs are processed.
In the non-clairvoyant setting, not knowing the volumes is therefore problematic. If the machine is run too slowly, the flow time may be too large when the total job volume turns out to be large; however, running the jobs too fast wastes energy when the volume is small. Described herein is using clairvoyant-based estimation to determine a speed in a non-clairvoyant setting, and then adapting the clairvoyant-based estimation, and thus the speed, as more information becomes available.
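To illustrate the tradeoff, consider a single job of volume V and density ρ with P(s)=s^3, restricted for simplicity to a constant speed s (an illustrative calculation, not the speed-setting rule described herein):

E = (V/s)·P(s) = V·s^2,   F = ρ·∫_{t=0}^{V/s} (V − s·t) dt = ρ·V^2/(2s),

so the total cost C(s) = V·s^2 + ρ·V^2/(2s) is minimized at s* = (ρ·V/4)^{1/3}. Even in this restricted setting the best constant speed grows with the cube root of the volume, so a scheduler that never learns the volume cannot commit to a single speed without risking either excessive energy or excessive flow time.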
As described herein, in one implementation the incoming jobs 102 may be rounded (quantized) by their densities into a multiplicative grid, i.e., powers of some constant, and queued into buckets accordingly. This is shown in
As described herein, the jobs are removed from the rounded queues 108 in FIFO order according to a highest “bucket” density first scheme. There is thus a hybrid approach between pure FIFO and pure HDF rules. A general reason for this is that running jobs in FIFO order provides information that makes the non-clairvoyant algorithm operate more efficiently, yet still allows higher density jobs to generally run before lower density jobs, that is, before those in “lower” buckets.
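The rounding and selection policy may be sketched as follows; this is a minimal illustrative sketch in which the constant GAMMA, the Job fields, and the data-structure choices are assumptions rather than the claimed implementation:

import math
from collections import deque

GAMMA = 2.0  # assumed rounding constant; densities are quantized to powers of GAMMA

def bucket_index(density, gamma=GAMMA):
    # Map a density onto the multiplicative grid gamma^k (the "rounded" density).
    return math.floor(math.log(density, gamma))

class RoundedQueues:
    def __init__(self):
        self.buckets = {}  # bucket index -> FIFO deque of jobs

    def enqueue(self, job):
        # Corresponds to the job density rounder: queue the job by its rounded density.
        self.buckets.setdefault(bucket_index(job.density), deque()).append(job)

    def select(self):
        # Highest rounded density ("bucket") first; FIFO within the chosen bucket.
        if not self.buckets:
            return None
        k = max(self.buckets)
        job = self.buckets[k].popleft()
        if not self.buckets[k]:
            del self.buckets[k]
        return job

Higher-density buckets are always served first, yet within a bucket the earlier-released job runs first, which is the hybrid FIFO/HDF behavior described above.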
Once a job is selected, a non-clairvoyant algorithm 110 uses a clairvoyant algorithm 112 to determine a speed for running that job, based upon past results (clairvoyant information) 114 corresponding to earlier job execution. The speed and the job are sent to a job executor 116 to run the selected job. As will be understood, the speed is changed as more information is obtained, and the job that is being run may be preempted by a higher density job (e.g., in a higher density bucket).
When the system selects a job to execute, the system wants to know the volumes of the jobs released earlier to simulate the clairvoyant algorithm on the selected jobs. For the uniform density case, this does not conflict with the HDF order of the clairvoyant algorithm because all the densities are the same. In this case, the system gets a strong relation between the clairvoyant and the non-clairvoyant algorithms; there is a measure-preserving map from the timeline of one to the other so that under this map, the speeds of the two algorithms are the same. This implies that the energy consumed is the same for the two algorithms; bounding the flow time requires more work. The system then continuously modifies the measure-preserving map between the two algorithms to maintain the required property.
As can be seen, the area represented by the remaining weight (e.g., the integral) corresponds to the flow time, while the processed weight area represents the power. The non-clairvoyant algorithm starts based upon the clairvoyant algorithm, e.g., essentially at zero processor speed because no volume information exists yet. Note that in actuality, some non-zero speed is used so that some volume gets processed and can be further used as information. As the known weight increases, more information is available, whereby the speed is increased as needed to complete the job.
A more complicated approach is needed for the batch-processing case. Here the system assumes that the jobs are released at the same time; therefore, any order is FIFO and the algorithm schedules them in HDF order without any conflict. Once again, a local argument suffices: as time goes on, the change in the energy consumed by the clairvoyant algorithm due to the change in the intermediate instance can be related to the change in the energy and flow time of the non-clairvoyant algorithm. Because the total energy and flow time of the clairvoyant algorithm are the same, this gives the required competitive ratio.
Most of the difficulty in the general case comes from the conflict between the FIFO and the HDF orders. While the system wants to process the jobs released earlier first in order to learn their volumes, the jobs with the higher densities incur a bigger cost so they have to be scheduled first. As set forth above, one implementation adopts a hybrid approach via the job density rounder 106 and rounded queues 108; the system rounds the densities to a multiplicative grid, i.e., powers of some constant. Jobs with equal densities (after the rounding, if rounding is used) are processed in FIFO order, while jobs of higher densities preempt ones with lower densities. The non-clairvoyant algorithm is competitive against the clairvoyant algorithm.
However, other difficulties arise. First, even if only two jobs of unequal density are released over time, if the system sets the speed of the non-clairvoyant algorithm so that its completion time matches with that of the clairvoyant algorithm, then the flow time of the non-clairvoyant algorithm cannot be locally bounded against that of the clairvoyant algorithm. In other words, there are situations where, on transforming the instance by adding infinitesimal weight to the job being processed currently by the non-clairvoyant algorithm, the flow-time of the clairvoyant algorithm increases by lower order terms compared to the increase in the non-clairvoyant algorithm.
Further, consider that a job j was preempted by job j′ in the non-clairvoyant algorithm. The local competitive analysis fails exactly when the machine resumes processing j after having completed the processing of j′. Note that if the non-clairvoyant algorithm matches the completion time of the clairvoyant algorithm, then this resumption happens at the same time in both algorithms.
To overcome this difficulty, in one implementation, the system deliberately speeds up the non-clairvoyant algorithm (e.g., by a constant factor) so that its completion time is earlier than that of the clairvoyant algorithm, thereby eliminating the scenario where both algorithms resume processing a job at the same time. In fact, the local competitive analysis can be restored if on transforming an instance, the increase in the remaining weight for the clairvoyant algorithm at the current time is at least a constant fraction of the weight added in the transformation. However, this property does not hold in general. Instead there is a weaker property along the same lines, and an amortized analysis is used to add a global component to the local competitive analysis. In particular, the system shows that while in certain situations, the increase in flow time of the clairvoyant algorithm is negligible compared to that of the non-clairvoyant algorithm, there are other situations where the increase is larger than what the system needs for local competitiveness. This suggests the use of a potential function that stores the additional flow-time of the clairvoyant algorithm for later use in the competitive analysis.
To summarize, the online problem of scheduling a single machine to minimize the flow-time plus energy is as follows. There is a single machine that can run at any non-negative speed, and there are jobs that need certain amounts of processing. Running the machine at a higher speed processes jobs faster but consumes more energy, as given by a power function P: R+→R+ that is monotonically non-decreasing and convex, with P(0)=0. The general problem is to process the jobs in a way that minimizes the sum of the total energy consumed and the total weighted flow-time of the jobs (which measures how long the jobs wait). Note that the power function is predefined and is not considered part of an instance of the problem.
Input: the input is a set of jobs J. For each job j ∈ J, its release time r[j], volume V[j], and density ρ[j] are given. Let the weight of job j be W[j]=ρ[j]·V[j].
Output: the output, for each time t ∈ [0, ∞), is the job to be scheduled at time t, denoted by j(t), and the speed of the machine s(t). For brevity, s is simply written when the dependence on t is clear from the context.
Constraints: a job can only be scheduled after its release time. For each job j the total computation time allocated needs to be equal to its volume:
∫_{t ∈ [r[j], ∞) : j(t)=j} s(t) dt = V[j].
Objectives: the total energy consumed is simply the integral of the power function over time.
E = ∫_{t=0}^{∞} P(s(t)) dt.
The fractional flow-time (for a given job j) is:
F[j] = ρ[j]·∫_{t ∈ [r[j], ∞) : j(t)=j} (t − r[j]) s(t) dt,
whereby
F[j] = ρ[j]·∫_{t=r[j]}^{∞} {circumflex over (V)}[j](t) dt,
where the remaining volume of job j at time t is
{circumflex over (V)}[j](t) = V[j] − ∫_{t′ ∈ [r[j], t] : j(t′)=j} s(t′) dt′.
The problem is to minimize the sum of the energy and the sum of the flow-times of all the jobs, which is: E + Σ_{j ∈ J} F[j].
The online clairvoyant version of the same problem differs in that the details of job j are given only at time r[j]; the algorithm makes its decisions at any time without knowing which jobs will be released in the future. In the online non-clairvoyant version of the problem, upon the release of job j at time r[j], only the density is given; the volume is not given. At any future point of time, the algorithm only knows whether job j has been completed and, from the amount of the job processed so far, a lower bound on its volume.
Turning to an algorithm for the online clairvoyant version of the problem, referred to as Algorithm c, the algorithm sets the machine speed as follows.
The speed at time t is such that
P(s(t)) = ŵ(t),
where ŵ(t) denotes the total remaining fractional weight of the active jobs at time t.
For Algorithm c, the total energy is equal to the total flow-time. This is because the total flow time is ∫_t ŵ(t) dt, which, by the choice of speed, equals ∫_t P(s(t)) dt, the total energy.
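A minimal single-machine simulation of this clairvoyant rule is sketched below, under the assumption that the power is kept equal to the total remaining fractional weight of the active jobs, with P(s)=s^alpha and highest density first; the discretization step, field names, and horizon are illustrative assumptions rather than the claimed implementation:

def clairvoyant_speed(total_remaining_weight, alpha=3.0):
    # P(s) = s**alpha is set equal to the remaining fractional weight, so
    # s = weight**(1/alpha); the speed is zero once nothing remains.
    return total_remaining_weight ** (1.0 / alpha) if total_remaining_weight > 0 else 0.0

def simulate_clairvoyant(jobs, alpha=3.0, dt=1e-3, horizon=1e3):
    # jobs: list of dicts with 'release', 'volume' and 'density' (all known in advance).
    remaining = [j['volume'] for j in jobs]
    t = energy = flow = 0.0
    while t < horizon and any(v > 1e-9 for v in remaining):
        active = [i for i, j in enumerate(jobs) if j['release'] <= t and remaining[i] > 1e-9]
        if active:
            weight = sum(jobs[i]['density'] * remaining[i] for i in active)
            s = clairvoyant_speed(weight, alpha)
            i = max(active, key=lambda k: jobs[k]['density'])  # highest density first
            remaining[i] = max(0.0, remaining[i] - s * dt)
            energy += (s ** alpha) * dt   # energy is the integral of P(s(t))
            flow += weight * dt           # fractional flow time is the integral of w-hat(t)
        t += dt
    return energy, flow

Because the two integrands are kept equal at every instant, the accumulated energy and accumulated fractional flow time track each other, which is the energy-equals-flow-time property noted above.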
With respect to the uniform density case, i.e., ρ[j]=1 for all j, described herein is an algorithm for the online non-clairvoyant version of the problem, referred to herein as the non-clairvoyant algorithm. Let ŵ(t) be the remaining weight of the active jobs in the clairvoyant algorithm as simulated on the volumes known so far; the non-clairvoyant algorithm sets its speed such that
P(s) = ŵ(t).
The system considers power functions of the form P(s)=s^α for some α>1, under which a constant competitive ratio may be obtained for the non-clairvoyant algorithm.
Turning to a non-clairvoyant algorithm for jobs of non-uniform density, the system needs to specify, for every time t, the job selected for processing at time t, and the speed at which the selected job is processed. As set forth above, in the non-clairvoyant version, the algorithm only has the densities of the jobs that have been released until time t, the volume/weight of the jobs that have been completed until time t, the set of active jobs, and a lower bound on the volume/weight of every active job given by the volume/weight of the job processed by the non-clairvoyant algorithm until time t. As in the case of uniform densities, the non-clairvoyant algorithm is closely related to the clairvoyant algorithm (algorithm c).
The non-clairvoyant algorithm may be described with reference to the following example steps.
Step 604 uses the associated density to round the job into one of the rounded queues, (e.g., quantize the job into a corresponding bucket). Note that step 604 may dequeue a job from a priority queue representative of the associated density and queue the job into the rounded queue.
Step 606 represents waiting for the next job.
If a job is running, step 706 represents determining whether any job is queued that can preempt the running job. If so, a job is selected at step 708 from the highest priority (e.g., rounded) queue based upon FIFO order. If not, step 706 branches to step 712 to adjust the speed of the currently running job based upon the change in weight that has occurred since the speed was last computed.
As part of determining the speed for a dequeued job, step 710 represents evaluating whether the job is the first one, e.g., there is no weight yet that may be used as information for computing the speed. If so, at step 712 the speed is set to (zero plus) ε as described above. Otherwise step 714 sets the speed based upon the clairvoyant simulation.
Step 716 represents increasing the speed by a constant factor to compensate for some of the problems described above. Step 718 sends the job and speed to the job executer, (or at least any changed speed for a running job that is not preempted). Any preemption and speed changes may be handled by the job executer.
Step 720 updates the information that is used by the non-clairvoyant algorithm to simulate the clairvoyant setting. Thus, a job that is completed in whole or preempted may be handled as in other systems; however, its results are used for future simulations. Moreover, a job that continues running has its weight factored into future simulations.
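The flow of steps 706 through 720 may be sketched as follows; the bootstrap speed EPSILON, the constant factor SPEEDUP_C, and the helper calls are illustrative assumptions rather than a definitive implementation:

EPSILON = 1e-3   # assumed small non-zero bootstrap speed when no history exists yet
SPEEDUP_C = 2.0  # assumed constant speed-up factor applied at step 716

def scheduling_step(rounded_queues, running_job, history, clairvoyant_estimate):
    # Step 706: decide whether a queued job preempts the running job.
    candidate = rounded_queues.peek_highest()   # assumed helper: head of the highest bucket
    if running_job is None or (candidate is not None
                               and candidate.rounded_density > running_job.rounded_density):
        job = rounded_queues.select()            # step 708: FIFO within the highest bucket
    else:
        job = running_job                        # step 706 -> 712: keep the running job

    # Steps 710-714: a first job with no history runs at (zero plus) epsilon; otherwise
    # the speed comes from simulating the clairvoyant algorithm on what is known so far.
    speed = EPSILON if not history else clairvoyant_estimate(history, job)

    speed *= SPEEDUP_C                           # step 716: constant-factor speed-up
    return job, speed                            # step 718: hand job and speed to the executer

Step 720 then folds the volume processed during this step back into the history, so the next clairvoyant simulation runs on strictly more information.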
Another aspect is directed towards the online scheduling of jobs on unrelated machines with dynamic speed scaling to minimize the sum of energy and weighted flow time. Note that as with the above-described single machine approach, preemption/resumption is allowed, but migration of a job from one machine to another is not.
Described herein is one example algorithm with an almost optimal competitive ratio for arbitrary power functions; known prior results do not handle arbitrary power functions for unrelated machines. For power functions of the form f(s)=s^α for some constant α>1, an improved competitive ratio is obtained, along with a matching lower bound.
The algorithm has to schedule the jobs on one or more machines so as to complete them as soon as possible. A standard objective is (a weighted) sum of flow times; each job can have a different volume and a different weight for each machine.
In one implementation, the objective has two components, energy and flow-time. Recall that f is the power function, which gives the power consumption as a function of the speed. Power is the rate at which energy is consumed; therefore, the energy consumed is the integral of power over time. The energy consumed by machine i is:
E_i = ∫_{t=0}^{∞} f(s_it) dt.
The fractional flow-time is an aggregated measure of the waiting time of a job. Suppose job j is scheduled on machine i. Let {circumflex over (v)}j(t) be the remaining volume of job j at time t, i.e., {circumflex over (v)}j(t) = vij − ∫_{t′ ∈ [rj, t]} sijt′ dt′.
The fractional flow-time of job j is then defined to be: Fj = ρij·∫_{t=rj}^{∞} {circumflex over (v)}j(t) dt.
The objective is to minimize the total energy consumed by the machines and the sum of the flow-times of all the jobs, weighted by their densities: Σ_i Ei + Σ_j Fj.
In the online version of the problem the details of job j are given only at time rj. The algorithm has to make decisions at time t without knowing anything about the jobs released in the future.
The algorithms exemplified herein are based on a convex programming relaxation of the problem and its dual, which are as follows. The dual convex program is obtained using Fenchel duality. (In particular f* is the Fenchel conjugate of f.)
The variables sijt denote the speed at which job j is scheduled on machine i at time t; sit=Σj sijt is the total speed of machine i at time t. The first summation in the objective function corresponds to the fractional flow-time: sijt dt units of job j are processed between t and t+dt, having waited for a duration of t−rj, resulting in ρij(t−rj)sijt dt amount of fractional flow-time. The second summation is the total energy consumed. The third summation is used because the convex program allows a job to be split among many machines and have different parts run in parallel. This sometimes allows the convex program to have a much lower objective than the optimal solution to the problem.
How the third term fixes this problem is explained below. Constraint (1) defines sit. For each job j, constraint (2) enforces that the schedule completes job j. Hence, the primal program is a valid relaxation of the scheduling problem, and the first two terms in the objective capture the fractional flow time and energy cost of the given schedule.
The first two terms in the primal objective are not enough to give a good lower bound for the cost of the optimal schedule, hence the third term. Note that the system does not enforce that all of job j must be processed on the same machine, therefore both job migrations and parallel processing of the same job on multiple machines are allowed. Consider an instance with only one job released at time 0 and a large number of machines. The optimal solution to the convex program schedules the job simultaneously on all the machines and the total cost with respect to the first two terms will tend to zero as the number of machines tends to infinity. The optimal algorithm has to schedule the job on a single machine and hence pays a fixed non-zero cost. Without the third term, the convex program fails to provide a good lower bound on the cost of the optimal solution.
Consider a modified instance where there are multiple copies of each machine (as many as the number of jobs); the cost of the optimal solution to this instance is only lower. In this modified instance, without loss of generality, no two jobs are ever scheduled on the same machine. It can be shown that if job j is scheduled on a copy of machine i by itself, then the optimal cost (energy+flow-time) due to job j is
Now, still allowing a job to be split among different machines, an
fraction of job j is scheduled on machine i. Thus
is a lower bound on the cost of scheduling job j. This implies that the optimum of the convex program with an additional factor of two is a lower bound on the cost of the optimal solution.
Turning to optimal scheduling for a single job, a simpler convex program and its dual are obtained for the problem of scheduling a single job on a single machine. The third term in the objective is dropped because that term deals with non-integral assignment of jobs to machines. Because there is only one job, it may be assumed that rj=0.
The conjugate function f* is defined as f*(β) = sup_{s≥0} (βs − f(s)).
The conjugate function is also convex and monotonically non-decreasing. If the function is strictly convex, then so is the conjugate function. One property of the conjugate function is the notion of a complementary pair; β and s are said to be a complementary pair if any one of the following conditions hold. (It can be shown that if one of them holds, then so do the others.)
1. f′(s)=β.
2. f*′(β)=s.
3. f(s)+f*(β)=sβ.
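For the polynomial power functions considered herein, f(s)=s^α has the closed-form conjugate f*(β)=(α−1)(β/α)^{α/(α−1)}; the short check below (illustrative only, not part of the claimed method) verifies the three conditions numerically:

def f(s, alpha=3.0):
    return s ** alpha

def f_conj(beta, alpha=3.0):
    # Fenchel conjugate of f(s)=s**alpha on s>=0.
    return (alpha - 1.0) * (beta / alpha) ** (alpha / (alpha - 1.0))

alpha, s = 3.0, 2.0
beta = alpha * s ** (alpha - 1.0)                   # condition 1: f'(s) = beta
s_back = (beta / alpha) ** (1.0 / (alpha - 1.0))    # condition 2: f*'(beta) = s
assert abs(s_back - s) < 1e-9
assert abs(f(s, alpha) + f_conj(beta, alpha) - s * beta) < 1e-9  # condition 3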
The optimal solutions to these programs are characterized by the (generalized) complementary slackness or KKT conditions. These are:
The first condition implies that for the entire duration that the machine is running (with non-zero speed), the quantity ρijt+βit remains the same, since it must always equal αj. In other words, βit decreases linearly with time, at the rate ρij.
The main result is that the optimum solution has a closed form expression where sit and βit are set as a function of the remaining weight of the job at time t, which is denoted by ŵit. (More generally, ŵit denotes the total remaining weight of all the jobs on machine i.) Also the remaining volume at time t is denoted {circumflex over (v)}it.
Turning to optimal scheduling for a single machine, that is, where there are multiple jobs to be scheduled on a single machine, corresponding convex programs are obtained. Again assume that rj=0 for all jobs j.
The complementary slackness conditions for this pair of programs are basically as before. To begin with,
As before, this implies that βit decreases at rate ρij whenever job j is scheduled, but now there is a choice of jobs to schedule. The above complementary slackness condition implies that job j needs to be scheduled when the term ρijt+βit attains its minimum. The first part, ρijt, always increases at rate ρij, while the second part, βit decreases at rate ρij(t) where j(t) is the job scheduled at time t. So if ρij<ρij(t) then ρijt+βit is decreasing and vice-versa, if ρij>ρij(t) then ρijt+βit is increasing. This implies that the “highest density first” (HDF) rule is optimal, i.e., schedule the jobs in the decreasing order of the density. For any job j, ρijt+βit first decreases when higher density jobs are scheduled, then remains constant as job j is scheduled and then increases as lower density jobs are scheduled. Given the choice of jobs scheduled, the choice of speed is very similar to the single-job case.
Described herein is a primal-dual algorithm referred to as conservative-greedy. The basic idea of the algorithm is that, given the choice of job assignments to machines, the algorithm schedules the jobs as if no other jobs will be released in the future. That is, it schedules the jobs as per the optimal schedule for the current set of jobs, as described above. The choice of job assignments to machines is made via a natural primal-dual method, the one dictated by the complementary slackness conditions. (A more aggressive algorithm is described below; if a job is released in the future, then it is better to run faster than what the current optimal solution suggests. To the contrary, if there is no future job, then running faster is sub-optimal. The aggressive algorithm balances these tradeoffs.)
With respect to the conservative-greedy algorithm, at any point, given the jobs already released and their assignment to machines, the algorithm picks the optimal scheduling on each machine, assuming no future jobs are released. This also gives dual solutions, in particular the variables βit for all i and t in the future. When a new job j is released, its assignment to a machine is naturally driven by the following dual constraints and the corresponding complementary slackness conditions. For all i, t
For a given machine i, the right hand side (RHS) is minimized (over all t) at t*i, where t*i is the first time job j is scheduled on i given the HDF rule. This is because the third term above is independent of t. The algorithm minimizes over all i as well, by assigning job j to the machine i that minimizes the RHS of the inequality above with t=t*i. It sets the dual αj so that the corresponding constraint is tight, and then updates the schedule and the βit's for machine i. Note that as more jobs are added, the βit's can only increase, thus preserving dual feasibility. The conservative greedy online scheduling algorithm for minimizing fractional flow time plus energy with arbitrary power functions is shown below:
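At a high level, the assignment step may be sketched as follows; the helper callables (for t*i and the dual constraint's right-hand side) are named for illustration and stand in for the quantities described above rather than a definitive implementation:

def assign_job_conservative(job, machines, first_hdf_slot, dual_rhs):
    # first_hdf_slot(machine, job): the first time t*_i at which the new job would be
    #   processed on that machine under HDF, assuming no future arrivals (assumed helper).
    # dual_rhs(machine, job, t): right-hand side of the dual constraint at time t
    #   (assumed helper; it is minimized over t at t = t*_i because the third term
    #   does not depend on t).
    best_machine, best_rhs = None, float('inf')
    for m in machines:
        t_star = first_hdf_slot(m, job)
        rhs = dual_rhs(m, job, t_star)
        if rhs < best_rhs:
            best_machine, best_rhs = m, rhs
    # The dual alpha_j is set to make the chosen constraint tight; the chosen machine's
    # schedule and its beta_it values are then recomputed assuming no future jobs arrive.
    return best_machine, best_rhs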
Such a conservative approach already achieves a meaningful competitive ratio for arbitrary power functions and a near optimal competitive ratio for polynomial power functions. An alternate algorithm with essentially the same analysis is the following: assign job j to machine i for which the increase in the total cost is the minimum. The dual αj needs to be set as done currently, so there may be a disconnect between which machine the job is assigned to and which machine dictates the dual solution.
In the conservative greedy algorithm, the speed is scaled to the conservative extreme: the speed is optimal assuming no future jobs will arrive. However, in an online instance there may be future jobs, and some of these future jobs will be effectively delayed by the current job. Therefore, a good online algorithm needs to take this into account when choosing the speed. Described herein are algorithms with different degrees of aggressiveness in terms of speed scaling.
Given any parameter C≧1, one C-aggressive greedy algorithm for minimizing weighted fractional flow time plus energy (with arbitrary power functions) is:
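One way such an algorithm may be realized (an assumed sketch, not the precise claimed rule) is to retain the conservative job assignment but run faster than the conservative optimum by an amount governed by C:

def c_aggressive_speed(remaining_weight, C, conservative_speed_fn):
    # conservative_speed_fn(w): the speed that is optimal for remaining weight w on this
    # machine assuming no future jobs arrive (assumed helper derived from f as above).
    # C = 1 recovers the conservative greedy algorithm; larger C hedges against future
    # arrivals by finishing the current work earlier at some extra energy cost.
    return C * conservative_speed_fn(remaining_weight)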
Turning to the problem of online scheduling for minimizing weighted (integral) flow-time plus energy, the problem for weighted integral flow time has the same input, output, and constraints as the fractional flow time version. The only difference is the objective. Formally defining the weighted integral flow time of an instance given a schedule: let At denote the set of jobs that have been released before or at time t but have not been completed according to the schedule until time t, i.e.,
At = { j : rj ≤ t and ∫_{t′ ∈ [rj, t]} sijt′ dt′ < vij }.
The weighted flow time is defined as: F = ∫_{t=0}^{∞} Σ_{j ∈ At} wij dt, where i is the machine to which job j is assigned.
Thus, the main difference is that when a job is partially completed, the entire weight of the job will contribute to the weighted integral flow time, while only the incomplete fraction will contribute to the weighted fractional flow time.
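As a concrete comparison (illustrative numbers only): a job of weight wj = 4 that is half finished at time t contributes, per unit time at t, wj = 4 to the weighted integral flow time (because j ∈ At) but only ρj·{circumflex over (v)}j(t) = wj/2 = 2 to the weighted fractional flow time.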
With respect to the convex programming relaxation and the dual, similar to the fractional case, the primal-dual analysis may be used via the following convex program for the problem of minimizing integral flow time plus energy, together with its dual program:
Here the same notation is used as in the fractional case and thus additional details are omitted for brevity. The only change is the third term in the primal program (and the corresponding part in the dual). This is because, conditioned on being allocated to machine i, the optimal cost for job j in a single-job instance with respect to integral flow time plus energy is vij·(f*)^{−1}(wij). Hence, the share of the optimal single-job cost for the fraction of job j processed on machine i from t to t+dt is (f*)^{−1}(wij)·sijt dt.
Similar to the fractional case, a conservative greedy algorithm is considered, which uses the optimal speed scaling assuming there are no future jobs, along with a more general family of C-aggressive greedy algorithms. The main difference compared to the fractional case is that the job selection rule on a single machine is no longer HDF. Instead, the algorithms combine the job assignment rule and the job selection rule by maintaining a processing queue for each machine. The machines process the jobs in their queues in order. When a new job arrives, the algorithm inserts the new job into a position in one of the processing queues according to the dual variables. A formal description of the conservative greedy online scheduling algorithm for minimizing weighted integral flow time plus energy with arbitrary power functions is:
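A minimal sketch of the per-machine processing queues follows; the insertion-position and speed helpers stand in for the dual-variable rules described above and are assumptions, not the claimed procedure:

def process_machine(queue, speed_fn, dt):
    # The machine works through its queue strictly in order (the selection rule is no
    # longer HDF).  speed_fn(queue) is an assumed helper returning the current speed,
    # e.g., the optimal speed for the queued jobs assuming no future arrivals.
    if not queue:
        return 0.0
    s = speed_fn(queue)
    queue[0].remaining -= s * dt
    if queue[0].remaining <= 0:
        queue.pop(0)                 # the front job is complete
    return s

def insert_new_job(queues, job, choose_position):
    # choose_position(queues, job) -> (machine index, position in that machine's queue),
    # selected according to the dual variables (assumed helper).
    i, pos = choose_position(queues, job)
    queues[i].insert(pos, job)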
A C-aggressive greedy online scheduling algorithm for minimizing weighted integral flow time plus energy with arbitrary power functions is:
As described herein, as the jobs are dequeued from the input, one of the above online algorithms 808 (e.g., the fractional conservative greedy algorithm, fractional C-aggressive greedy algorithm, integral conservative greedy algorithm and/or integral C-aggressive greedy algorithm) queues the jobs for each machine in a per-machine queue 8101-810n, where n is greater than or equal to one. The machines 8121-812n then execute the jobs.
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to: personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
With reference to
The computer 910 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 910 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can accessed by the computer 910. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of the any of the above may also be included within the scope of computer-readable media.
The system memory 930 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 931 and random access memory (RAM) 932. A basic input/output system 933 (BIOS), containing the basic routines that help to transfer information between elements within computer 910, such as during start-up, is typically stored in ROM 931. RAM 932 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 920. By way of example, and not limitation,
The computer 910 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media, described above and illustrated in
The computer 910 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 980. The remote computer 980 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 910, although only a memory storage device 981 has been illustrated in
When used in a LAN networking environment, the computer 910 is connected to the LAN 971 through a network interface or adapter 970. When used in a WAN networking environment, the computer 910 typically includes a modem 972 or other means for establishing communications over the WAN 973, such as the Internet. The modem 972, which may be internal or external, may be connected to the system bus 921 via the user input interface 960 or other appropriate mechanism. A wireless networking component 974 such as comprising an interface and antenna may be coupled through a suitable device such as an access point or peer computer to a WAN or LAN. In a networked environment, program modules depicted relative to the computer 910, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
An auxiliary subsystem 999 (e.g., for auxiliary display of content) may be connected via the user interface 960 to allow data such as program content, system status and event notifications to be provided to the user, even if the main portions of the computer system are in a low power state. The auxiliary subsystem 999 may be connected to the modem 972 and/or network interface 970 to allow communication between these systems while the main processing unit 920 is in a low power state.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.