Scheduling computer processing jobs that have stages and precedence constraints among the stages

Information

  • Patent Grant
  • Patent Number: 8,281,313
  • Date Filed: Thursday, September 29, 2005
  • Date Issued: Tuesday, October 2, 2012
Abstract
An embodiment of a method of scheduling computer processing begins with a first step of receiving job properties for a plurality of jobs to be processed in a multi-processor computing environment. At least some of the jobs each comprise a plurality of stages, one or more tasks for each stage, and precedence constraints among the stages. The method continues with a second step of determining a schedule for processing at least a subset of the plurality of jobs on processors within the multi-processor computing environment from a solution of a mathematical program that provides a near maximal completion reward. The schedule comprises a sequence of tasks for each processor. In a third step, the computer processing jobs are processed on the processors according to the sequence of tasks for each processor.
Description
FIELD OF THE INVENTION

The present invention relates to the field of computing. More particularly, the present invention relates to the field of computing where computer processing jobs are scheduled for execution.


BACKGROUND OF THE INVENTION

Scheduling is a basic research problem in both computer science and operations research. The space of problems is vast. A subset of this problem space is non-preemptive multi-processor scheduling without processor-sharing. Generally, techniques for solving non-preemptive multi-processor scheduling problems are based upon an objective function, which a scheduling tool seeks to optimize. Such objective functions include the completion time of the last job to finish (i.e., the makespan) and the mean completion time. Jobs may be made up of one or more tasks.


In many cases, task dispatching decisions are made manually by human operators. For example, a human operator may assign ordinal priorities (e.g., most important, medium importance, least important) to a set of tasks, and as resources become available, tasks are selected from an assignable pool of tasks according to their ordinal priority. This approach does not scale; it is labor intensive and error prone, and it often results in undesirable dispatching sequences (e.g., low utilization, uneven load, violated assignment constraints, and violated precedence constraints).


Automated dispatchers are based on fixed dispatching rules such as FIFO (first-in, first-out), round robin, lowest utilization first, and fair share. As a result, automated dispatching sequences are inflexible. In some cases, automated dispatching rules can be changed by a human operator while a system is in operation. This allows for improved performance but requires human intervention.


SUMMARY OF THE INVENTION

The present invention comprises a method of scheduling computer processing jobs. According to an embodiment, the method begins with a first step of receiving job properties for a plurality of jobs to be processed in a multi-processor computing environment. At least some of the jobs each comprise a plurality of stages, one or more tasks for each stage, and precedence constraints among the stages. The method continues with a second step of determining a schedule for processing at least a subset of the plurality of jobs on processors within the multi-processor computing environment from a solution of a mathematical program that provides a near maximal completion reward. The schedule comprises a sequence of tasks for each processor. In a third step, the computer processing jobs are processed on the processors according to the sequence of tasks for each processor.


These and other aspects of the present invention are described in more detail herein.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is described with respect to particular exemplary embodiments thereof and reference is accordingly made to the drawings in which:



FIG. 1 illustrates, as a task chart, a set of multi-stage computer processing jobs processed according to an embodiment of a method of scheduling computer processing jobs in accordance with the present invention;



FIG. 2 schematically illustrates a multi-processor computing environment that processes computer processing jobs according to a method of scheduling computer processing jobs in accordance with an embodiment of the present invention; and



FIG. 3 illustrates an embodiment of a method of scheduling computer processing jobs of the present invention as a flow chart.





DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT

The present invention comprises a method of scheduling computer processing jobs in a multi-processor computing environment. The computer processing jobs comprise multi-task jobs each having a plurality of stages with one or more tasks per stage and precedence constraints among the stages. Each stage includes computational tasks that may be executed in parallel. All tasks in a stage must complete before any tasks in the next stage may begin execution. In other words, the tasks in a later stage are subject to a precedence constraint that requires the preceding stage's tasks finish processing before any of the tasks in the later stage may begin processing. There are no precedence constraints between tasks of different computer processing jobs.
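As an illustration only (the class and field names below are hypothetical and not part of the patent), the job structure described above can be captured in a small data model in which each job is an ordered list of stages and each stage is a bag of parallelizable tasks:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Stage:
    """A stage groups tasks that may execute in parallel with one another."""
    tasks: List[str]  # task identifiers

@dataclass
class Job:
    """A job is an ordered list of stages; no task of stage g+1 may start
    until every task of stage g has completed."""
    name: str
    stages: List[Stage]
    due_time: float   # D_j
    reward: float     # R_j

# Hypothetical jobs mirroring FIG. 1: job 1 has three stages and six tasks,
# job 2 has two stages and six tasks.
job1 = Job("j1", [Stage(["t1", "t2"]), Stage(["t3", "t4", "t5"]), Stage(["t6"])],
           due_time=10.0, reward=5.0)
job2 = Job("j2", [Stage(["t1", "t2", "t3"]), Stage(["t4", "t5", "t6"])],
           due_time=12.0, reward=3.0)
```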


An embodiment of a set of multi-stage computer processing jobs that may be processed according to embodiments of the method of the present invention is illustrated as a task chart in FIG. 1. The set of multi-stage jobs 100 includes first and second jobs, 102 and 104. The first job 102 (also indicated by j=1) includes three stages, indicated by gε{1, 2, 3}, and six tasks, indicated by iε{1, 2, . . . , 6}. The second job 104 (also indicated by j=2) includes two stages, indicated by gε{1, 2}, and six tasks, indicated by iε{1, 2, . . . , 6}. The first and second jobs, 102 and 104, may be characterized by first and second critical path lengths, 106 and 108, which are the times required to process the first and second jobs, 102 and 104, respectively, if an unlimited number of processors are available.


The problem of scheduling the multi-stage computer processing jobs may be described more formally as follows. The multi-stage computer processing jobs comprise a set of jobs jεJ. Job j contains a set of stages gεG(j). The set of tasks i in stage g of job j is denoted S(g,j). Stages encode precedence constraints among tasks within a job. No task in stage g+1 may begin until all tasks in stage g have completed. Stages represent a special case of “series-parallel” precedence constraints or interval-based precedence constraints. Precedence constraints do not exist among tasks of different jobs.


A job completes when all of its tasks have completed, and a reward is accrued if the job completes by its due time Dj. In any schedule of all of the jobs jεJ, each job j has a completion time Cj and a completion reward Rj. The goal is to sequence tasks onto processors in such a way that the final schedule maximizes the aggregate reward RΣ. The aggregate reward RΣ may be given by:

$$R_{\Sigma} = \sum_{j=1}^{J} U_{D(j)}(C_j)$$

where UD(j)(Cj) is utility for the completion time Cj of job j and may be given by UD(j)(Cj)=Rj if Cj≦Dj and UD(j)(Cj)=0 otherwise. (Note that the due time Dj appears as the due time D(j) in places where the due time is a subscript.) Utility may be expressed as a dollar value or as points on an arbitrary scale or in some other units. The definition of UD(j)(Cj) may be extended to allow a lesser positive reward for jobs that complete after the due time.
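As a non-authoritative illustration of this step-function utility, the short helper below computes UD(j)(Cj) and the aggregate reward for a hypothetical schedule; all numbers are made up.

```python
def utility(completion_time: float, due_time: float, reward: float) -> float:
    """U_D(j)(C_j): the full completion reward if the job finishes by its
    due time, and zero otherwise."""
    # The text notes the definition may be extended to return a lesser
    # positive reward for late jobs instead of zero.
    return reward if completion_time <= due_time else 0.0

def aggregate_reward(schedule) -> float:
    """R_sigma: sum of per-job utilities over (C_j, D_j, R_j) triples."""
    return sum(utility(c, d, r) for c, d, r in schedule)

# Hypothetical schedule: the first job meets its due time, the second misses it.
print(aggregate_reward([(8.0, 10.0, 5.0), (14.0, 12.0, 3.0)]))  # prints 5.0
```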


The problem of scheduling the multi-stage computer processing jobs with arbitrary completion rewards and arbitrary task execution times is NP-hard to approximate within any constant factor. A particular problem is NP-hard if another problem known to be NP-complete can be reduced to the particular problem in polynomial time. With unit completion rewards and unit task execution times, the problem of scheduling the computer processing jobs is NP-complete. The terms “NP-hard” or “NP-complete” mean that an exact solution can only be obtained within a feasible time period, if at all, for a small problem size. Providing more computing power only slightly increases the problem size for which one can expect to find the exact solution if such a solution exists at all. The terms “NP-hard” and “NP-complete” come from complexity theory, which is well known in the computer science and scheduling disciplines.


In an embodiment, the multi-stage computer processing jobs comprise a batch of animation processing jobs. For example, the batch of animation processing jobs may be brief excerpts of a computer-animated film that is in production. Typically, each of the brief excerpts is processed in a series of stages which must be processed in a particular order. For example, the series of stages may begin with simulation of physical movement followed by model baking, then frame rendering, and concluding with film clip assembly.


Other computer processing jobs have a similar multi-stage structure in which stages include tasks that may be executed in parallel and later stages are subject to precedence constraints that require tasks in earlier stages to complete processing before the tasks of the later stages may begin processing. Examples include protein sequence matching, certain classes of fast Fourier transformations, petroleum exploration workloads, and distributed data processing.


An embodiment of a multi-processor computing environment that employs a method of scheduling computer processing jobs of the present invention is schematically illustrated in FIG. 2. The multi-processor computing environment 200 comprises a scheduler 202 and a plurality of clusters 205, which are coupled together by a network 208. Each cluster 205 comprises a plurality of servers 204 and storage 206. In an embodiment, the scheduler 202 comprises a processor 210 and memory 212. In an embodiment, each server 204 comprises a processor 210 and memory 212. In another embodiment, one or more servers 204 further comprise one or more additional processors. In an alternative embodiment, the storage 206 of each of the clusters 205 is a portion of a storage pool that is coupled to the servers 204 by a SAN (storage area network). More generally, a multi-processor computing environment that employs a method of scheduling computer processing jobs of the present invention includes a plurality of processors, memory, and storage. In this general multi-processor computing environment, one or more of the processors acts as a scheduler. Alternatively, the scheduler is located separately from the multi-processor computing environment. The scheduler is a computing entity that determines a schedule as part of a method of scheduling computer processing jobs of the present invention.


An embodiment of a method of scheduling computer processing jobs in a multi-processor computing environment of the present invention is illustrated as a flow chart in FIG. 3. The method 300 begins with a first step 302 of receiving job properties for a plurality of jobs to be processed in the multi-processor computing environment. At least some of the jobs each comprise a plurality of stages, one or more tasks for each stage, and precedence constraints among the stages. The method continues with a second step 304 of determining a schedule for processing at least a subset of the plurality of jobs from a solution of a mathematical program that sequences tasks to processors within the multi-processor computing environment and that provides a near maximal completion reward. The schedule determined in the second step includes a sequence of tasks for each of the processors within the multi-processor computing environment.


In an embodiment, the mathematical program is a Mixed Integer Program (MIP), which may be solved using a commercially available solver such as CPLEX. Alternatively, the mathematical program is an integer program, which also may be solved using a commercially available solver. The MIP and an appropriate solver guarantee an optimal solution of the formulated MIP if run to termination. Beyond some large real-world problem size, the MIP and the appropriate solver provide a feasible solution for the formulated MIP and a conservative estimate of a lower bound of an optimal solution. In an embodiment, the MIP includes decision variables, state variables, input and derived parameters, constraints, and an objective. The decision variables include task-to-processor assignment decision variables, job selection decision variables, job-to-processor assignment decision variables, and stage sequence decision variables. The state variables are derived from the decision variables; they are intermediate results that improve the formulated MIP. The state variables include stage-at-processor completion time variables and job completion time variables. The constraints include task precedence constraints, job completion time constraints, job selection constraints, processor constraints, and sequence constraints. In an embodiment, the objective comprises maximizing a sum of completion rewards for jobs completed by the due time.
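As a hedged sketch of how such a formulation might be assembled in practice, the fragment below uses the open-source PuLP modeling library and its bundled CBC solver as a stand-in for CPLEX; the job data and variable names are hypothetical. Only the job selection decision variables and the reward-maximizing objective are shown here; the constraints are illustrated in later sketches.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus

# Hypothetical job data: completion reward R_j for each candidate job.
rewards = {"j1": 5.0, "j2": 3.0, "j3": 4.0}

prob = LpProblem("multi_stage_scheduling", LpMaximize)

# Job selection decision variables z_j (1 if job j is selected, 0 otherwise).
z = LpVariable.dicts("z", rewards.keys(), cat="Binary")

# Objective: maximize the sum of completion rewards for selected jobs.
prob += lpSum(rewards[j] * z[j] for j in rewards)

# Task-to-processor, precedence, fair share, and sequencing constraints
# would be added here; without them every job is trivially selected.
prob.solve()
print(LpStatus[prob.status], {j: z[j].value() for j in rewards})
```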


An embodiment of a MIP of the present invention is described in more detail as follows. The MIP employs the indices and sets listed in Table 1.










TABLE 1

Indices and sets    Description
ψ ε Ψ               Set of users, user groups, or a combination thereof
j ε J(ψ)            Set of jobs from user or user group ψ
k ε K               Set of clusters of processors
p ε P(k)            Set of processors at cluster k
k(p)                Denotes the corresponding cluster k for processor p
i ε S(g,j)          Set of tasks i of stage g of job j
g ε G(j)            Set of stages g for job j
l = l(j)            Last stage l of job j


Input and derived parameters for the MIP are listed in Table 2. The completion reward Rj of job j has some value if job j is completed by the due time. Otherwise, it is zero. Alternatively, the completion reward Rj may be reduced by a penalty if job j is not completed by the due time. A total processing time available W between a release time and a due time may be given by:






$$W = H \sum_{k \in K} \lvert P(k) \rvert$$
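For example, with hypothetical values of H = 8 hours and two clusters holding 4 and 6 processors respectively, the total processing time available would be:

$$W = H \sum_{k \in K} \lvert P(k) \rvert = 8 \times (4 + 6) = 80 \text{ processor-hours}$$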

TABLE 2

Parameters    Description
Dj            Time available between release time (or present time) and due time for job j
Rj            Completion reward for job j if it is finished on time
tg,jp         Average processing time at processor p for stage g of job j
ng,j          Number of tasks of stage g of job j
H             Maximum time available in a processing period
fψ            Percentage of total processing time allocated to user or group of users ψ over a period of time, where $\sum_{\psi \in \Psi} f_{\psi} = 1$
W             Total processing time available between release time and due time
τψ            Consumption of time by user or user group ψ


The decision variables include task-to-processor assignment decision variables, job selection decision variables, job-to-cluster assignment decision variables, job-to-processor assignment decision variables, and stage sequence decision variables. The task-to-processor assignment decision variables indicate the quantity of tasks of stage g of job j that are assigned to processor p and may be given by:

$$x_{g,j}^{p} \geq 0$$


The job selection decision variables indicate the jobs that are processed in a processing period and may be given by:







$$z_j = \begin{cases} 1 & \text{if job } j \text{ is selected} \\ 0 & \text{otherwise} \end{cases}$$

Each of the job-to-cluster assignment decision variables indicates an assignment of a job to a cluster of processors and may be given by:







$$y_{j,k} = \begin{cases} 1 & \text{if cluster } k \text{ processes all tasks of job } j \\ 0 & \text{otherwise} \end{cases}$$


Each of the job-to-processor assignment decision variables indicates an assignment of at least one task of a job to a processor and may be given by:







$$\delta_{j,p} = \begin{cases} 1 & \text{if processor } p \text{ processes one or more tasks of job } j \\ 0 & \text{otherwise} \end{cases}$$

The stage sequence decision variables indicate stage and job sequences and may be given by:







$$v_{(g,j),(g',j')} = \begin{cases} 1 & \text{if tasks in } (g',j') \text{ follow tasks in } (g,j) \text{ when assigned to processor } p \\ 0 & \text{otherwise} \end{cases}$$

The stage sequence decision variables cover stage sequences within a single job (i.e., stage g and stage g′ are part of a single job indicated as both job j and job j′) and stage sequences between two jobs (i.e., stage g is part of a first job j and stage g′ is part of a second job j′).
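A hedged sketch of how these decision variables might be declared with PuLP follows; the index sets are small hypothetical stand-ins for J, G(j), K, and P(k), and the variable names are illustrative only.

```python
from itertools import product
from pulp import LpVariable

# Hypothetical index sets standing in for J, G(j), K, and P(k).
jobs = ["j1", "j2"]
stages = {"j1": [1, 2, 3], "j2": [1, 2]}
clusters = {"k1": ["p1", "p2"], "k2": ["p3"]}
procs = [p for ps in clusters.values() for p in ps]
stage_keys = [(g, j) for j in jobs for g in stages[j]]

# x[(g, j, p)]: number of tasks of stage g of job j assigned to processor p.
x = LpVariable.dicts("x", [(g, j, p) for (g, j) in stage_keys for p in procs],
                     lowBound=0, cat="Integer")
# z[j]: job j is selected for the processing period.
z = LpVariable.dicts("z", jobs, cat="Binary")
# y[(j, k)]: cluster k processes all tasks of job j.
y = LpVariable.dicts("y", [(j, k) for j in jobs for k in clusters], cat="Binary")
# delta[(j, p)]: processor p processes at least one task of job j.
delta = LpVariable.dicts("delta", [(j, p) for j in jobs for p in procs], cat="Binary")
# v[((g, j), (g2, j2))]: tasks of (g2, j2) follow tasks of (g, j) on a shared processor.
v = LpVariable.dicts("v", [(a, b) for a, b in product(stage_keys, repeat=2) if a != b],
                     cat="Binary")
```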


The state variables include stage-at-processor completion time variables, stage-at-cluster completion time variables, and job completion time variables. The stage-at-processor completion time variables cg,jp provide the completion time of stage g of job j at processor p.


The stage-at-cluster completion time variables Cg,jk provide a completion time of stage g of job j at cluster k and may be given by:

$$C_{g,j}^{k} = \max\{\, c_{g,j}^{p} : p \in P(k) \,\}$$


The job completion time variables CJj provide the completion time of job j and may be given by:

$$CJ_j = \max\{\, C_{l(j),j}^{k} : k \in K \,\}$$


The constraints comprise task precedence constraints, job completion time constraints, job selection constraints, fair share constraints, affinity constraints, cluster constraints, processor constraints, and sequence constraints.


The task precedence constraints ensure that tasks at a particular stage cannot start processing until a previous stage completes processing. The task precedence constraints may be given by:

$$c_{g,j}^{p} \geq c_{g',j'}^{p} + t_{g,j}^{p}\, x_{g,j}^{p} - B\left(1 - v_{(g',j'),(g,j)}\right)$$
$$c_{g,j}^{p} \geq C_{g-1,j}^{k(p)} + t_{g,j}^{p} \quad \forall\, g \geq 2$$
$$C_{g,j}^{k(p)} \geq c_{g,j}^{p}$$

where B is a sufficiently large constant to ensure correct operation of these constraints when stage g′ of job j′ precedes stage g of job j.
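The role of the big-M constant B can be seen in a small PuLP fragment; this is a hedged sketch with hypothetical single-stage data and variable names, not the patent's full formulation.

```python
from pulp import LpProblem, LpMinimize, LpVariable

prob = LpProblem("precedence_fragment", LpMinimize)

B = 10_000   # big-M constant; larger than any feasible completion time
t = 3.0      # hypothetical average task processing time t_{g,j}^p

# Completion times of stage (g', j') and stage (g, j) on processor p.
c_prev = LpVariable("c_prev", lowBound=0)
c_curr = LpVariable("c_curr", lowBound=0)
x_curr = LpVariable("x_curr", lowBound=0, cat="Integer")  # tasks of (g, j) on p
v_follow = LpVariable("v_follow", cat="Binary")           # 1 if (g, j) follows (g', j')

# c_{g,j}^p >= c_{g',j'}^p + t_{g,j}^p x_{g,j}^p - B(1 - v_{(g',j'),(g,j)}).
# When v_follow = 1 the constraint is active; when v_follow = 0 the -B term
# relaxes it so no ordering is imposed between the two stages on this processor.
prob += c_curr >= c_prev + t * x_curr - B * (1 - v_follow)
```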


The job completion time constraints ensure that the completion time of job j equals or exceeds the completion time of the last stage l(j) of job j on any cluster. The job completion time constraints may be given by:

$$CJ_j \geq C_{l(j),j}^{k} \quad \forall\, k \in K$$


The job selection constraints assure that jobs are completed by the due time and may be given by:

$$z_j \leq D_j - CJ_j + 1$$

If the right-hand side of a particular job selection constraint is zero or negative (e.g., the job finishes 1 sec. late), the constraint cannot hold with zj equal to one. This forces the job selection decision variable zj to take a value of zero; the unselected job's tasks need not be scheduled, so the job completion time CJj falls to zero and the particular job selection constraint is satisfied.
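With hypothetical numbers, say Dj = 100 and an achievable completion time CJj = 130, selecting the job would require

$$z_j \leq D_j - CJ_j + 1 = 100 - 130 + 1 = -29,$$

which a binary zj cannot satisfy; the solver therefore leaves the job unselected (zj = 0, CJj = 0), and the constraint reduces to 0 ≤ 101.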


The fair share constraints may assure that at least some jobs for each user or user group ψ are processed. Alternatively, the fair share constraints may limit the number of jobs that are processed for each user or user group ψ. Or, the fair share constraints may assure that at least some jobs for each user or user group ψ are processed while limiting the number of jobs that are processed for each user or user group ψ. The fair share constraints may be implemented as hard constraints, either as a hard fraction or a range of fractions, or as soft constraints. The fair share constraints implemented as hard fractions may be given by:

















$$\sum_{g \in G(j),\; j \in J(\psi)} \;\; \sum_{p \in P(k),\; k \in K} t_{g,j}^{p}\, x_{g,j}^{p} \;\leq\; f_{\psi} W - \tau_{\psi}$$

The fair share constraints implemented as soft constraints may allow violation of either the hard fractions or the range of fractions while imposing penalty terms, which reduce the completion reward. Fair share constraints that assign different fractions to different users or user groups may be referred to as weighted fair share constraints, as opposed to un-weighted fair share constraints that provide equal fractions to users or user groups.
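A hedged PuLP fragment for one user group's hard-fraction fair share constraint is shown below; the group data, times, and names are hypothetical.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

prob = LpProblem("fair_share_fragment", LpMaximize)

# Hypothetical data for one user group psi.
f_psi, W, tau_psi = 0.4, 80.0, 10.0      # share f_psi, total time W, consumed time tau_psi
t = {("g1", "j1", "p1"): 2.0, ("g1", "j1", "p2"): 2.5}   # t_{g,j}^p for the group's stages
x = LpVariable.dicts("x", t.keys(), lowBound=0, cat="Integer")

# Hard-fraction fair share: the group's assigned processing time may not
# exceed its remaining allocation f_psi * W - tau_psi.
prob += lpSum(t[key] * x[key] for key in t) <= f_psi * W - tau_psi
```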


The affinity constraints ensure that all tasks of job j are assigned to no more than one of the clusters of processors and may be given by:










$$\sum_{k \in K} y_{j,k} = z_j$$

The cluster constraints ensure that all of the tasks of stage g of job j are processed by the cluster k to which job j is assigned and may be given by:










$$\sum_{p \in P(k)} x_{g,j}^{p} = y_{j,k}\, n_{g,j}$$

The processor constraints tie the task-to-processor assignment decision variables to the job-to-processor assignment decision variables, so that δj,p is set exactly when one or more tasks of job j are assigned to processor p. The processor constraints may be given by:







$$x_{g,j}^{p} \leq n_{g,j}\, \delta_{j,p}$$
$$\delta_{j,p} \leq \sum_{g \in G(j)} x_{g,j}^{p}$$

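The affinity, cluster, and processor constraints all couple the assignment variables to one another; the hedged PuLP fragment below shows the three couplings for a single hypothetical job with one stage spread across two clusters (names and numbers are illustrative only).

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

prob = LpProblem("linking_fragment", LpMaximize)

clusters = {"k1": ["p1", "p2"], "k2": ["p3"]}
procs = [p for ps in clusters.values() for p in ps]
n_g_j = 6   # hypothetical number of tasks n_{g,j} in the job's single stage

x = LpVariable.dicts("x", procs, lowBound=0, cat="Integer")   # x_{g,j}^p
y = LpVariable.dicts("y", clusters.keys(), cat="Binary")      # y_{j,k}
delta = LpVariable.dicts("delta", procs, cat="Binary")        # delta_{j,p}
z = LpVariable("z_j", cat="Binary")                           # z_j

# Affinity: the job is assigned to exactly one cluster if and only if it is selected.
prob += lpSum(y[k] for k in clusters) == z
# Cluster: all n_{g,j} tasks of the stage go to the chosen cluster's processors.
for k, ps in clusters.items():
    prob += lpSum(x[p] for p in ps) == y[k] * n_g_j
# Processor: delta_{j,p} is 1 exactly when processor p receives at least one task.
for p in procs:
    prob += x[p] <= n_g_j * delta[p]
    prob += delta[p] <= x[p]   # single-stage case of delta_{j,p} <= sum over stages
```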
The sequence constraints ensure a consistent ordering of jobs and stages. The sequence constraints may be given by:

$$v_{(g,j),(g',j')} + v_{(g',j'),(g,j)} \geq \delta_{j,p} + \delta_{j',p} - 1$$
$$v_{(g,j),(g',j')} + v_{(g',j'),(g,j)} \leq 1$$


In an embodiment, the objective seeks to maximize the completion reward and may be given by:






$$\text{Maximize} \quad \sum_{j \in J(\psi)} R_j\, z_j$$

In an alternative embodiment, the objective seeks to maximize the completion reward and to minimize penalties. For example, such penalties may be for not completing jobs by the due time or for violating fair share constraints.
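A hedged sketch of such a penalized objective, using a hypothetical slack variable for a soft fair share violation and a hypothetical penalty weight, might look like this in PuLP:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

prob = LpProblem("penalized_objective", LpMaximize)

rewards = {"j1": 5.0, "j2": 3.0}                       # hypothetical R_j
z = LpVariable.dicts("z", rewards.keys(), cat="Binary")
s_fair = LpVariable("fair_share_slack", lowBound=0)    # violation of a soft fair share constraint
rho = 2.0                                              # hypothetical penalty weight

# Maximize completion rewards minus a penalty for violating the soft constraint;
# the slack s_fair would appear on the right-hand side of the relaxed constraint.
prob += lpSum(rewards[j] * z[j] for j in rewards) - rho * s_fair
```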


The method 300 (FIG. 3) concludes with a third step 306 of processing the computer processing jobs on the processors within the multi-processor computing environment according to the sequence of tasks for each processor.


The foregoing detailed description of the present invention is provided for the purposes of illustration and is not intended to be exhaustive or to limit the invention to the embodiments disclosed. Accordingly, the scope of the present invention is defined by the appended claims.

Claims
  • 1. A method of scheduling a plurality of computer processing jobs comprising: determining, by executing instructions on a scheduling processor, a schedule for processing the plurality of computer processing jobs on processors within a multi-processor computing environment, wherein each job has a plurality of stages, at least one task for each stage, precedence constraints among the stages, and a job completion due time, wherein the precedence constraints include a particular precedence constraint between a first of the stages and a second of the stages, where each of the first and second stages has plural tasks, wherein a completion reward accrues for a job if the job completes by the corresponding job completion due time, and wherein the determined schedule assigns sequences of tasks to the processors in a manner that maximizes a sum of the completion rewards for the plurality of computer processing jobs; and processing the plurality of computer processing jobs on the processors within the multi-processor computing environment according to the determined schedule.
  • 2. The method of claim 1 wherein the instructions executed by the scheduling processor comprise a mixed integer program.
  • 3. The method of claim 2 wherein the mixed integer program comprises decision variables, state variables, input and derived parameters, constraints, and an objective.
  • 4. The method of claim 3 wherein the decision variables comprise task-to-processor assignment decision variables, job selection decision variables, job-to-processor assignment decision variables, and stage sequence decision variables.
  • 5. The method of claim 3 wherein the state variables comprise stage-at-processor completion time variables and job completion time variables.
  • 6. The method of claim 5 wherein the state variables further comprise stage-at-cluster completion time variables.
  • 7. The method of claim 3 wherein the constraints of the mixed integer program comprise tasks precedence constraints, job completion time constraints, job selection constraints, processor constraints, and sequence constraints.
  • 8. The method of claim 3 wherein the objective comprises minimizing penalties.
  • 9. The method of claim 8 wherein the penalties comprise fair share violation penalties.
  • 10. The method of claim 8 wherein the penalties comprise due time violation penalties.
  • 11. The method of claim 1, wherein the determining comprises finding a solution to a program that includes solving for decision variables of the program, wherein the decision variables include task-to-processor assignment variables, where each of the task-to-processor assignment variables represents a number of tasks assigned to a corresponding one of the processors.
  • 12. The method of claim 11, wherein the decision variables further include job-to-processor assignment variables each indicating an assignment of at least one task of a corresponding one of the jobs to a corresponding one of the processors.
  • 13. A non-transitory computer readable medium comprising computer code that upon execution cause a computer to: determine a schedule for processing a plurality of computer processing jobs on processors within a multi-processor computing environment, wherein each job has a plurality of stages, at least one task for each stage, precedence constraints among the stages, and a job completion due time, wherein the precedence constraints include a particular precedence constraint between a first of the stages and a second of the stages, where each of the first and second stages has plural tasks, wherein a completion reward accrues for a job if the job completes by the corresponding job completion due time, and wherein the determined schedule assigns sequences of tasks to the processors in a manner that maximizes a sum of the completion rewards for the plurality of computer processing jobs; and process the plurality of computer processing jobs on the processors within the multi-processor computing environment according to the determined schedule.
  • 14. The computer readable medium of claim 13, wherein the determining comprises finding a solution to a program that includes solving for decision variables of the program, wherein the decision variables include task-to-processor assignment variables, where each of the task-to-processor assignment variables represents a number of tasks assigned to a corresponding one of the processors.
  • 15. The computer readable medium of claim 14, wherein the decision variables further include job-to-processor assignment variables each indicating an assignment of at least one task of a corresponding one of the jobs to a corresponding one of the processors.