The present invention relates to methods and apparatus for accelerating and dynamically scaling-out and/or scaling-in the infrastructure capacity in response to an application load demand change in a distributed cloud computing environment.
Cloud computing is an emerging technology that is becoming increasingly popular. This is primarily due to its elastic nature: users can acquire and release resources on demand, and pay only for the resources they need, in the form of virtual machines (VMs). Elasticity allows users to acquire and release resources dynamically according to changing demands, but deciding on the right amount of resources is a difficult task since cloud applications face large and fluctuating loads. In some particular and predictable situations, such as sporting or daily events, resources can be provisioned in advance through capacity planning techniques. Unfortunately, most events are largely unplanned and characterized by unpredictable spikes in network traffic load.
The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which:
FIGS. 12a and 12b are graphical illustrations of dynamic scaling provisioning of execution resources over time for a particular application class in accordance with a further embodiment of the present invention.
In one embodiment, a method is described. The method includes: monitoring workloads of a plurality of application classes, each of the application classes describing services provided by one or more applications in a multi-tiered system and including a plurality of instantiated execution resources; estimating, for each of the application classes, a number of execution resources able to handle the monitored workloads, to simultaneously maintain a multi-tiered system response time below a determined value and minimize a cost per execution resource; and dynamically adjusting the plurality of instantiated execution resources for each of the application classes based on the estimated number of execution resources.
Existing cloud infrastructures provide auto-scaling capabilities allowing a cloud infrastructure to scale its processing capacity up or down according to the infrastructure workload. Usually, this scaling capability is associated with the load balancing capability. The load balancer distributes the incoming requests across a group of machines (e.g. virtual and/or physical machines) implementing a particular application or a set of applications. Typically, an auto-scaler receives incoming load information from the load balancer as well as further information, such as, for example but not limited to, the CPU load of the running machines. The auto-scaler then combines the received information to manage the creation of new machines (scale-out) or the destruction of existing running machines (scale-in). This solution is well suited to situations where a given group of machines is connected to a load balancer and executes one or more applications.
However, real deployed systems are typically made of more than one application or group of applications, thus resulting in different groups of machines. The scaling feature described hereinabove still works individually for each group of machines (i.e. implementing a particular application or group of applications) but does not “understand” the global system structure and the dependencies between the different groups of machines. In a situation where one group of applications (A) is interacting with another group of applications (B), any load increase on (A) automatically causes a load increase on (B). However, since (A) and (B) are different groups of applications, they are typically load balanced and scaled independently. Consequently, (B) reacts to the load increase on (A) too late, causing some performance degradation on (A) even though (A) was scaled out on time. To solve this issue, (B) is over-provisioned most of the time so that input load increases on (A) do not instantly saturate (B).
The present invention, in embodiments thereof, relates to methods and apparatus for accelerating and dynamically scaling-out and/or scaling-in the infrastructure capacity in response to an application load demand change in a distributed cloud computing environment. An auto-scaler, independent from the hosting cloud infrastructure, which knows the functional dependencies between the different application groups, is provided. This knowledge of the system structure, combined with traditional activity monitoring from load balancers and running machines, allows implementation of an intelligent scaling-out/in mechanism whereby the cost can be minimized (by reducing over-provisioning) and system performance can be maintained. Also, having a single scaling control point enables anticipation of peak hours, so that each application group can be independently scaled-out in advance in a coherent manner. The present invention, in embodiments thereof, further provides an auto-adaptive capability and is therefore able to react differently to different usage patterns while maintaining global system performance.
The meanings of certain abbreviations and symbols used herein are given in Table 1.
Reference is now made to
The multi-tiered system 100 includes clients 130 accessing applications distributed on a plurality of tiers (T1, . . . , TM) through a communications network 120 (such as, for example but not limited to, a wide area network, the Internet, etc.). In such a multi-tiered system 100, a front end tier T1 receives the different client requests and then either processes them or forwards them to a subsequent tier Ti. As shown in
The multi-tiered system 100 of
Applications providing similar services or parts of a same service may be packaged into a same application class. An application class is typically a logical description of a deployment and describes the services provided by one or more applications. A physical realization of an application class, represented as a group of VMs—launched, for example but not limited to, from a single Amazon Machine Image (AMI)—is a cluster. Application class and application cluster will be used interchangeably hereinafter in the present specification for the sake of simplicity.
In an embodiment of the present invention, an auto-scaler 110 is provided. The auto-scaler 110 comprises a workload monitor module 111 that monitors the incoming client requests (or incoming traffic) arriving at each tier and/or each application class (AC(11), AC(12), . . . , AC(M3), AC(M4), AC(M5)), typically on a continuous basis. Additionally, the workload monitor module 111 gathers performance metrics such as for example, but not limited to, arrival requests rate, CPU utilization, disk access, network interface access or memory usage, etc. The auto-scaler 110 also comprises a controller module 112 (also referred to hereinafter as a real-time controller module), which receives the gathered performance metrics from the workload monitor module 111. The controller module 112 is able to determine an optimal number of VMs for each application class (AC(11), AC(12), . . . , AC(M3), AC(M4), AC(M5)) of the multi-tiered system 100 that maximizes the net revenues according to metrics described in the Service Level Agreement (SLA) contract. In other words, the controller module 112 provides estimates of future service requests workloads using the performance metrics received from the workload monitor module 111 and performs dynamic and optimal VM allocation and provisioning for each application class (AC(11), AC(12), . . . , AC(M3), AC(M4), AC(M5)) accordingly, typically on a continuous basis. The SLA contract typically specifies that the global response time be lower than a maximum acceptable level. Therefore, the auto-scaler 110 may ensure that the global response time is independent from the multi-tiered system load changes or at least remains within an acceptable performance degradation level. In such a situation, the controller module 112 of the auto-scaler 110 is able to dynamically adjust the number of VMs allocated for each application class (AC(11), AC(12), . . . , AC(M3), AC(M4), AC(M5)) to simultaneously maintain the global response time quasi-constant and minimize the monetary cost of the VMs instantiated per application class.
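By way of illustration only, the following Python sketch outlines how the monitor/estimate/adjust loop described above might be organized; the module and method names (WorkloadMonitor, sample_all, scale_out, etc.) are hypothetical and not taken from the original disclosure.

import time

class AutoScaler:
    """Monitors per-application-class workloads and adjusts VM counts."""

    def __init__(self, monitor, controller, cloud_api, period_s=1.0):
        self.monitor = monitor        # gathers arrival rates, CPU, memory usage, etc.
        self.controller = controller  # estimates the optimal VM count per class
        self.cloud = cloud_api        # instantiates (scale-out) / destroys (scale-in) VMs
        self.period_s = period_s      # sampling period

    def run_forever(self):
        while True:
            # One monitoring/estimation/provisioning cycle per application class.
            for ac_id, metrics in self.monitor.sample_all().items():
                target = self.controller.estimate_vm_count(ac_id, metrics)
                current = self.cloud.instance_count(ac_id)
                if target > current:
                    self.cloud.scale_out(ac_id, target - current)
                elif target < current:
                    self.cloud.scale_in(ac_id, current - target)
            time.sleep(self.period_s)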
Reference is now made to
The multi-tiered system 100 of
The nested system model 200 is also based on the following assumptions:
In embodiments of the present invention, the dynamic scaling is performed on a per application class AC(k) basis. A single group of homogeneous VMs running the same software within an application class AC(k) is physically represented by a cluster. Multiple clusters may implement an application class AC(k) at one time. Therefore, a cluster aggregator (not shown) may be provided for identifying and registering all the machines to aggregate from clusters. For a given application, the cluster aggregator identifies all the clusters, and all instances for all clusters. The aggregated incoming workload is typically transmitted to the auto-scaler 110 which estimates the number of VMs on this basis.
Reference is now made to
As explained hereinabove in relation to
where $\lambda_j^a$ is the mean Poisson arrival rate and the random service time is exponentially distributed with a mean value of $1/\mu_j$.
When a request arrives at the j-th VM, it can trigger zero, one or more subsequent requests that may be passed to the next tier.
The request arriving at $VM_j$ may not trigger any further request (i.e. zero requests) when the requested resource is found in the cache. The probability that the request leaves the multi-tiered system because of a cache hit is $(1 - p_j)$. Therefore, the average rate of requests leaving the system because of a cache hit is $(1 - p_j)\cdot\lambda_j^a$.
The request arriving at $VM_j$ may trigger one or more subsequent requests that may be passed to the next tier when a cache miss is detected. In such a situation, with probability $p_j$, the request is passed to the next tier for processing. Also, a multiplicity factor denoted by $m_j$ is introduced to cover the cases where multiple subsequent requests are generated because of a cache miss, where $m_j$ is a strictly positive, time-dependent, discrete random variable.
In this generic model, it is assumed that both the requests arrival rate and the cache hit rate have Poisson arrival distributions. However, this is not the case for the requests departure rate—being in turn the request arrival rate for the subsequent tier—because of the time dependency of the requests multiplicity factor mj.
The request service rate $\mu_j$ depends on the average consumption of computing resources of a heterogeneous cluster of servers and also on the computational capacity requested by the application class AC(k). Namely, for the generic $VM_j$, these two dependencies can be computed as follows:
$$\mu_j(k) = \hat{\mu}_j \cdot C(k)$$
where $\hat{\mu}_j$ is a constant vector representing the average requests service rate that depends only on the physical characteristics of the VMs, and $C(k)$ is a dimensionless parameter representing the computational capacity (CPU) of a VM normalized by a defined capacity unit specific to the application class AC(k).
Furthermore, the request cache hit rate is not taken into account in this model because the optimization is directed to the system workload, i.e. the requests that have not abandoned the system because of a cache hit. When a request leaves the system, it no longer contributes to the system workload and thus can be removed from the system. Therefore, the average departure rate $\lambda_j^d$ from $VM_j$ can be computed as follows:
$$\lambda_j^d = p_j \cdot m_j \cdot \lambda_j^a$$
Reference is now made to
After having defined a model for the VM, it is also useful to model an application class comprising a plurality of VMs.
Also, the requests departure rates of the VMs within a same application class AC(k) are typically aggregated to obtain the requests departure rate $\lambda_i^d(k)$ leaving an application class AC(k):
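The aggregation expression itself is not reproduced above; a plausible reconstruction, assuming the per-VM departure rates simply sum over the $n_i(k)$ VMs instantiated for the class, is:
$$\lambda_i^d(k) = \sum_{j=1}^{n_i(k)} \lambda_j^d(k)$$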
It will be remembered that the average departure rate $\lambda_j^d(k)$ from the j-th execution resource $VM_j$ of the application class AC(k) is:
$$\lambda_j^d(k) = p_j \cdot m_j \cdot \lambda_j^a(k)$$
Therefore, the requests departure rate $\lambda_i^d(k)$ of a particular application class AC(k) is given by:
$$\lambda_i^d(k) = \gamma_i(k) \cdot \lambda_i^a(k)$$
with
$$\gamma_i(k) \cong \hat{p} \cdot E[m_j]$$
where $\hat{p}$ represents the cache miss probability $p_j$ for all VMs instantiated for the application class AC(k), and $E[m_j]$ is the average value of the multiplicity factors $m_j$.
Then, the average response time $R_i(k)$ for the application class AC(k) belonging to the i-th multiclass tier $T_i$ can be computed applying Little's law to the $n_i(k)$ parallel VMs shown in
$$R_i(k) \cdot \lambda_i^a(k) = x_i(k)$$
where $x_i(k)$ represents the average number of items waiting for service for application class AC(k) and is equal to the sum of all items waiting in the $n_i(k)$ queues. By introducing the system utilization $\rho_i(k)$—i.e. the average proportion of time during which the $n_i(k)$ VMs for the application class AC(k) are occupied—$x_i(k)$ is given by:
where the system utilization, in a stationary process, is given by:
Therefore, the average response time $R_i(k)$ for the application class AC(k) belonging to the i-th multiclass tier $T_i$ becomes:
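The displayed expressions are missing from the text as reproduced here. A plausible reconstruction, assuming each of the $n_i(k)$ VMs behaves as an independent M/M/1 queue receiving an equal share $\lambda_i^a(k)/n_i(k)$ of the class workload, is:
$$\rho_i(k) = \frac{\lambda_i^a(k)}{n_i(k)\cdot\mu_i(k)}, \qquad x_i(k) = n_i(k)\cdot\frac{\rho_i(k)}{1-\rho_i(k)}$$
so that, by Little's law,
$$R_i(k) = \frac{x_i(k)}{\lambda_i^a(k)} = \frac{1}{\mu_i(k)\,(1-\rho_i(k))}$$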
Reference is now made to
The requests arrival rate $\Lambda_i^a$ (respectively the requests departure rate $\Lambda_i^d$) for the multiclass tier $T_i$ typically corresponds to the aggregated requests arrival rates (respectively the aggregated requests departure rates) of the different application classes AC(k) belonging to the multiclass tier $T_i$ and is therefore given by:
In other words, $\Lambda_i^a$ is the offered traffic, i.e. the rate at which requests are queued (arrive at the multiclass tier $T_i$), and $\Lambda_i^d$ is the throughput, i.e. the rate at which requests are served (depart from the multiclass tier $T_i$).
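The aggregation sums themselves are not reproduced above; a plausible reconstruction, consistent with the definitions in the surrounding text, is:
$$\Lambda_i^a = \sum_{k \in T_i} \lambda_i^a(k), \qquad \Lambda_i^d = \sum_{k \in T_i} \lambda_i^d(k)$$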
Also, according to this model, the aggregated departure rate for the multiclass tier $T_i$ is the aggregated arrival rate for the multiclass tier $T_{i+1}$ (in other words, the input workload for the multiclass tier $T_{i+1}$ is the aggregated throughput from the preceding multiclass tier $T_i$):
$$\Lambda_i^d = \Lambda_{i+1}^a$$
Reference is now made to
The arrival and departure rates are monitored on a per application class basis and the dynamic scaling is also performed on a per application class basis.
The average response time $E[R_i]$ for the multiclass tier $T_i$ can be computed applying Little's law to the multiclass tier border. For the sake of clarity and conciseness, it is assumed that the aggregated arrival rate can be written as $\Lambda_i$ instead of $\Lambda_i^a$ (i.e. dropping the superscript index (a)) and that the arrival rate for the application class AC(k) can be written as $\lambda_i(k)$ instead of $\lambda_i^a(k)$. Therefore, the average response time $E[R_i]$ for the multiclass tier $T_i$ is given by:
In a situation where the average response times for each multiclass tier E[Ri] are independent and additive, the end-to-end average response time E[R] is found aggregating the average response time for each multiclass tier across the multi-tiered system and is given by:
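The two expressions are missing from the text as reproduced here. A plausible reconstruction, applying Little's law at the tier border and recalling that $x_i(k) = \lambda_i(k)\cdot R_i(k)$, is:
$$E[R_i] = \frac{1}{\Lambda_i}\sum_{k \in T_i} x_i(k) = \sum_{k \in T_i} \frac{\lambda_i(k)}{\Lambda_i}\cdot R_i(k), \qquad E[R] = \sum_{i=1}^{M} E[R_i]$$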
Those skilled in the art will appreciate that the following equation holds, the equivalence being verified only in the case where one application class AC(k) belongs to exactly one tier (i.e. the tier coincides with the application class):
In embodiments of the present invention, the auto-scaling problem is described as an economic metaphor in which the service provisioning problem is a Nash game characterized by a collection of tiers providing application classes (the players) making local and selfish decisions in response to local observed network properties.
In game theory, a Nash equilibrium describes a state in which each player's strategy is optimal given the strategies of all the other players; a Nash equilibrium exists when no player can profit from a unilateral deviation. Indeed, each application class can be seen as a competing “firm” positioned in a geographic region (a tier) producing a unique homogeneous product (the VMs) for a unique global market (the multi-tiered system). The market is therefore constituted of identical firms competing on a continuous time horizon. In this context, no individual player has an incentive to change its own production level if the other players do not change their production levels as well. Finally, the challenge of such a multi-agent system is to derive a globally acceptable behaviour (i.e. social welfare) through the design of individual agent control algorithms, so as to reach a global equilibrium subject to the maximum allowed global response time specified in the SLA contract, i.e. subject to a stationary local net flow. Indeed, when the input load increases, more requests are queued and the response time increases as well. However, the auto-scaler typically increases the number of VMs instantiated so that the local flow remains constant.
The economic metaphor for the auto-scaling game can be applied as follows:
The risk that an application class becomes a system bottleneck is implicitly mitigated. This is typically a consequence of the feed-forward model explained hereinabove in which the aggregated throughput from a tier is the offered load for the tier that follows in the chain.
Going further with the economic metaphor, a payoff function (in game theory, a payoff function is a mathematical function describing the award given to a single player at the outcome of a game) can be defined for the multi-tiered system model 600. The payoff function Φi(k) for an application class AC(k) belonging to the i-th tier Ti, expressed in arbitrary monetary unit, may be modelled as the difference between the gross product value and the cost of production, namely:
$$\Phi_i(k) = \text{Gross\_Product\_Value}(k) - \text{Cost\_of\_Product}(k)$$
The unitary gross product value, denoted by $P_i(k)$, is the gross revenue for a single execution resource. The cost function, denoted by $C_i(k)$, is the cost of producing the $n_i(k)$ execution resources instantiated to provide the services requested by the application class AC(k) at a unit cost $\beta_i(k)$. Therefore, the payoff function is given by:
$$\Phi_i(k) = P_i(k)\cdot n_i(k) - C_i(k) \qquad \forall k \in T_i$$
where
$$C_i(k) = \beta_i(k)\cdot n_i(k) \qquad \forall i \in (1; M)$$
Also, it is common practice in distributed network systems to employ the Network Power Metric (NPM) because it is a scalable metric that typically captures key system performance indicators such as the requests arrival rate (offered traffic), the requests departure rate (throughput), local response times, global response time, etc., and identifies the operating point tradeoffs where the distributed system delivers the best performance. The NPM is defined as the ratio of the system throughput to the system average response time for the point of interest, which can be, for example, the local application class AC(k) within the i-th multiclass tier $T_i$. Applied to the multi-tiered system model 600, the NPM is defined by:
Those skilled in the art will appreciate that the average system throughput and the average system response time are measured at homogeneous points to represent consistent network power estimations. In particular, the ratio of the aggregated throughput from a tier to the average response time of the same tier may be used as a suitable solution to capture correctly the network power of the tier. The NPM is directly proportional to the system throughput for fixed response time and inversely proportional to the response time for fixed throughput. Therefore, the NPM is maximized at the operating point where there is exactly one entity waiting to be served (the Kleinrock condition).
In a typical scenario where the requests arrival rate increases, the requests departure rate increases as long as there is a limited number of cache hits. However, the response time typically increases since the number of requests waiting to be served in the queue increases thereby leading to a situation where the system may eventually saturate and any further incoming requests are lost. This results in a system bottleneck. System bottlenecks can generally be avoided by reducing the service time either by increasing the CPU rate (system speed-up) thereby increasing the number of requests served per time unit or by adding more execution resources in parallel (system scale-out) thereby reducing the number of requests entering the VMs. Embodiments of the present invention relate to system scale-out and not system speed-up. Indeed, when the input requests per second increases, the multi-tiered system 600 typically reacts by increasing the number of VMs under control of the controller module. However, the multi-tiered system capacity, in terms of CPU rate, still remains the same. In the description that follows, and without loss of generality, it is assumed that the service rate for all the execution resources is equal to 1 or, in other words, that the generic VM can process one request per time unit. Indeed, the input workload will typically always be normalized to the actual service rate that captures the dependency on the application class and on the physical cluster for which the execution resources are provided. Those skilled in the art will appreciate that the unitary gross product value (market price) Pi(k) is modelled as the network power per execution resource:
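The NPM and market-price expressions do not appear in the text as reproduced here. A plausible reconstruction from the definitions above (throughput over response time, and network power per execution resource) is:
$$NPM_i(k) = \frac{\lambda_i^d(k)}{R_i(k)}, \qquad P_i(k) = \frac{NPM_i(k)}{n_i(k)}$$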
The production model described in the latter equation assumes that each player (application class) is engaged in an economic game at a chosen level of production $n_i(k)$. Indeed, the production of one application class (the local response time) also depends on the global response time induced by the other application classes participating in the game. However, increasing the number of execution units in order to reduce the input load of each execution resource typically increases the cost and reduces the unitary gross product value (inverse market demand).
Finally, three different situations may appear depending on the throughput and the offered traffic at a specific tier:
According to the description given hereinabove, the payoff function for an application class AC(k) belonging to the i-th tier Ti is given by:
$$\Phi_i(k) = NPM_i(k) - \beta_i(k)\cdot n_i(k) \qquad \forall k \in T_i,\ \forall i = 1,\dots,M$$
thereby leading to,
The scaling game for the multi-tiered system 600 may then be described as the simultaneous solution of the control problems described by the latter equation, which gives the optimal number of VMs to be instantiated for a particular application class AC(k) belonging to the i-th tier $T_i$. The solutions are subjected to the following coupled constraints:
$\forall i: i = 1,\dots,M$. This is the queue stability condition that must be met to avoid the situation where an execution resource belonging to the application class AC(k) saturates, thereby becoming a system bottleneck;
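The constraint expressions themselves are not reproduced above; a plausible reconstruction, assuming the queue stability and SLA response-time requirements described in the surrounding text, is:
$$\lambda_i^a(k) < n_i(k)\cdot\mu_i(k) \qquad \forall k \in T_i,\ \forall i = 1,\dots,M$$
$$\sum_{i=1}^{M} E[R_i] \leq R_{max}$$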
The dynamic scaling problem defined hereinabove as a particular Nash game is symmetric because all application classes participating in the game have the same model, and play the same game with the same rules on the same market. The goal of the controller module is to optimize the payoff function for each application class AC(k), adapting the number of execution resources over time to the input requests rate so as to reach a social welfare where the global response time is below a maximum agreed level (SLA). For the sake of simplicity, but without limiting the generality of the invention, the calculations hereinbelow apply to a single generic application class AC(k) where the explicit dependency on the tier (i.e. the subscript i) has been removed. The results found for the generic application class AC(k) will then be extended to the multiclass/multi-tiered system.
First, it is preferable to rewrite the payoff function to show explicitly the dependency on time:
Also, the dependency on the i-th tier is now removed:
Reference is now made to
The local response time $R_k(t)$ per application class AC(k) can therefore be computed using Little's law:
$$R_k(t)\cdot\lambda_k^a(t) = x_k(t)$$
Initially, the control variable nk(t) depends only on the time. In other words, the strategy adopted by each player (application class) is described as an open-loop strategy. An open-loop strategy is a strategy which computes its input into a system using only the current state and its model of the system. For a network of nk(t) M/M/1 queues in parallel (VMs)—using the flow conservation principle and assuming that there are no losses—the rate of change of the average number of entities queued per application class AC(k) is computed as the difference between the input flow (i.e. the offered requests rate) and the output flow (i.e. the average number of entities waiting in the queue at steady state). The state variable xk(t) shown in
$$\dot{x}_k(t) = \text{Offered\_Flow}(t) - \text{Output\_Flow}(t)$$
or,
The goal of the k-th player (application class AC(k)) is to maximize its own payoff function Φk(t) to reach the global welfare, i.e. an acceptable global response time lower than the maximum value Rmax defined in the SLA. The optimal number of execution units (VMs) can be computed using the Pontryagin's Maximum Principle (PMP). Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamic system from one state to another, especially in the presence of constraints for the state or input controls. The corresponding Hamiltonian Hk for the single node AC(k) optimal control is defined as follows:
and shall satisfy the following relation for the optimal values (denoted by an asterisk) of the control variable $n_k^*$ and of the state variable $x_k^*$:
$$H_k(x_k^*, n_k^*; t) \geq H_k(x_k^*, n_k; t)$$
i.e. the payoff function is maximised by $n_k^*$ for all the possible values of the control variable $n_k$. The variable $\xi_k(t)$ is the co-state variable and $\tilde{\mu}_k$ is the service rate per application class that depends on the application class AC(k) and on the cluster capacity. This condition is equivalent to saying that the offered request rate is normalized to the actual service rate and may therefore be written as follows:
The same argument can be used for the departure rate that may be normalized to the actual service rate and therefore written as follows:
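The normalized quantities themselves are not reproduced above; a plausible reconstruction from the verbal definitions (arrival and departure rates normalized to the actual service rate) is:
$$\tilde{\lambda}_k^a(t) = \frac{\lambda_k^a(t)}{\tilde{\mu}_k}, \qquad \tilde{\lambda}_k^d(t) = \frac{\lambda_k^d(t)}{\tilde{\mu}_k}$$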
And finally, we shall define an equivalent normalized load as follows:
The equivalent normalized load $\rho_k$ takes into account the cache miss probability $\hat{p}$ and the average request multiplicity factor $E[m_j]$ already defined in another embodiment of the present invention. In particular, the following relation holds:
Assuming
$$\beta_k(t) = (\tilde{\mu}_k\cdot w_k)^2 = \text{constant}$$
then, the payoff function Φk(t) is given by:
Furthermore, using the PMP technique to find the necessary condition for optimality, from the equation defining the Hamiltonian, it is possible to derive the stationary condition for the control variable:
and, the optimal co-state trajectory:
Those skilled in the art will appreciate that these last two equations, coupled to the one defining the state trajectory $\dot{x}_k(t)$, typically solve the single node generic application class AC(k) optimization problem.
Also, it is common practice to consider the special case where the co-state trajectory is stationary, i.e. $\dot{\xi}_k = 0$. In such a case, the last two equations can be solved with algebraic methods that are readily applicable to the real-time implementation of an optimal controller. In particular, the following two conditions at the equilibrium point are found:
This condition typically corresponds to the M/M/1 queue stability constraint, which is always satisfied because $w_k < 1$.
This condition typically corresponds to the end-to-end response time $R_k(t)$, which shall always remain lower than the maximum level $R_{max}$ defined in the SLA contract. Indeed, by direct substitution of the parameter definitions we have:
Therefore, the value of the unitary cost wk shall satisfy the following relation:
Furthermore, the optimal condition that holds between the state variable $x_k^*(t)$ and the control variable $n_k^*(t)$ is given by:
This equation typically defines a feedback proportional controller since the control variable nk(t) is proportional to the state variable xk(t). Under this condition of optimality, the state variable trajectory is specified by the following optimal equilibrium:
$$\dot{x}_k(t) = \rho_k(t) - (1 - w_k)\cdot x_k(t)$$
while the control state trajectory is specified by:
It will be appreciated by those skilled in the art that the last equation corresponds to a low pass filter, assuming that $\rho_k(t)$ is the input signal and $n_k(t)$ is the output signal of a linear and memoryless system. The Laplace transform $\mathcal{L}\{\cdot\}$, with complex parameter $p$, applied to the last equation is as follows:
Then, by applying the bilinear transform to the Laplace transform, the corresponding equivalent Z-transform transfer function H[z] is found to be:
where b and c are constant parameters that depend only on the configuration parameter $w_k$.
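The Laplace- and Z-domain expressions are not reproduced above. A plausible reconstruction, assuming the first-order low-pass behaviour $H(p) = 1/(p + a)$ with $a = 1 - w_k$ implied by the optimal state equation, and the bilinear substitution $p \to 2F_s\cdot\frac{1 - z^{-1}}{1 + z^{-1}}$, is:
$$H[z] = \frac{b\,(1 + z^{-1})}{1 + c\,z^{-1}}, \qquad b = \frac{1}{2F_s + a}, \qquad c = \frac{a - 2F_s}{a + 2F_s}$$
which is consistent with the recursion $u_1[n] = -c\cdot u_1[n-1] + b\cdot u[n] + b\cdot u[n-1]$ given hereinbelow (with the sampling frequency $F_s$ fixed system-wide).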
Anti-transforming the previous equation, the real-time controller (i.e. the controller module of the auto-scaler) in the discrete domain for the single node application class AC(k) optimization problem may be implemented as follows:
$$u_1[n] = -c\cdot u_1[n-1] + b\cdot u[n] + b\cdot u[n-1]$$
where $u_1[n]$ is the output discrete signal for the n-th sample, i.e. the discrete mapping of the continuous signal $n_k(t)$; $u[n]$ is the input discrete signal for the n-th sample, i.e. the discrete mapping of the continuous signal $\rho_k(t)$; and b and c are constant parameters that depend only on the configuration parameter $w_k$.
Going further in this electrical analogy and assuming that the input signal is ρk(t) and the output signal nk(t) is the voltage across the RC cell (Resistor-Capacitor cell), it is found that the RC time constant is also the local response time:
Therefore, the cutoff frequency fc for such a low pass filter is given by:
Also, in the electrical circuit analogy described hereinabove, it is assumed that the input load (input requests load) varies slowly compared with the low pass filter rise time $t_{rise}$:
Applying this electrical analogy to the overall multi-tiered system leads to a model in which the global multi-tiered system may therefore be seen as a circuit of equivalent RC cells each of them contributing to the global delay and the global response time.
Reference is now made to
In an embodiment of the present invention, the strategy adopted by each application class AC(k) forming part of the multi-tiered system is an open-loop strategy. An open-loop strategy is a strategy which computes its input into a system using only the current state and its model of the system. In this strategy, the control variable $n_k(t)$ depends only on the time and not on state parameters (such as CPU, I/O, memory usage, etc.). The open-loop strategy described hereinabove for a single node generic application class AC(k) is typically extended to the other application classes participating in the provisioning game in such a way that the net flow in each queue, for any new VM instantiated, does not overflow even in situations where the offered load changes. The offered request rate is sampled every $1/F_s$ seconds to provide a sample u[n] used by the real-time controller 812 to estimate the optimal number of execution resources y[n] required. The optimal controller configuration ensures that the optimal stationary condition is reached for all players (the application classes) participating in the game before a new sample is acquired, and that the global response time is kept below the maximum acceptable value $R_{max}$ specified in the SLA. In particular, for a multiclass system the total delay can be estimated as the sum of the response times of the application classes participating in the game, by direct application of Little's law, as follows:
where $x_1, x_2, \dots, x_K$ are the state variables for the K application classes, representing the number of items waiting in the queue. Extending to the multiclass system the same argument already used in the case of a single application class AC(k), the maximum acceptable response time will be partitioned as follows:
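The delay and partitioning expressions are missing from the text as reproduced here. A plausible reconstruction, applying Little's law per application class and partitioning $R_{max}$ through the weights $\pi_k$ described hereinbelow, is:
$$E[R] = \sum_{k=1}^{K} \frac{x_k}{\lambda_k} \leq R_{max}, \qquad R_{max,k} = \pi_k\cdot R_{max}, \qquad \sum_{k=1}^{K} \pi_k = 1$$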
A system operator typically specifies the configuration parameter wk depending on two empirical parameters αk and πk.
Having a single control point enables the real-time controller 812 to adapt the offered request rate depending on the service rate, so that the scaling rate is controlled by a suitable selection of the parameters $\alpha_k$ and $\pi_k$. The parameter $\alpha_k$ may be roughly estimated using $\tilde{\mu}_k$, i.e. the service rate per application class. However, depending on the offered traffic history at the input of each tier, it may become possible to make the multi-tiered system adaptable to an increase of the incoming traffic arriving at a specific tier. The parameter $\pi_k$ provides a simple configuration method to partition the requested global response time among the application classes. Different methods may be applied to achieve this result. One of them may be to assign an equal contribution to the application classes, in which case $\pi_k = 1/K$ where K is the total number of application classes. On the other hand, an operator may decide to assign an uneven partition per application class. In the more general case of a multiclass and multi-tiered system where a tier can provide several application classes, the offered load per application class will be substituted by the aggregated offered load $\Lambda_k^a$, but the argument remains the same. In brief, the strategy is to apply an auto-scaling rule per single application class by specifying the parameter $w_k$ (which in turn depends on $\alpha_k$ and $\pi_k$), dynamically adapting the provisioning rate estimated by the auto-scaling logic to the actual aggregated offered load.
Furthermore, in the particular case where each tier includes a single application class AC(k), the global response time is the sum of the local response times. In the more realistic case where each tier includes a plurality of application classes, the response time of the individual application class AC(k) is proportional to the number of requests waiting in the queue and changes dynamically as the offered load increases/decreases. However, the global response time, being computed as the weighted mean of the response times of each application class included in a tier, is lower than the sum of the local response times taken into account for the optimization. The SLA contract typically requires that the average global response time across multiple tiers be lower than a maximum allowed value. If this limit is applied to the sum of the local response times across multiple application classes, the result is that the real-time controller instantiates more execution units than required. This limitation may be useful since it anticipates the greater number of execution units required when unpredictable burst loads arrive, at the cost of a minimal over-provisioning controllable via the configuration parameter $w_k$ computed via the two parameters $\alpha_k$ and $\pi_k$.
In this embodiment, the strategy adopted by each application class AC(k) forming part of the multi-tiered system is an open-loop strategy. The real-time controller module 812 is configured accordingly and therefore seeks to optimize a local payoff function to reach a global welfare (or optimal equilibrium condition) and minimize a provisioning cost on a per application class AC(k) basis over time, typically on a continuous basis. Those skilled in the art will appreciate that although the dynamic scaling is described as being performed on a continuous time basis, this dynamic scaling may also be performed in a similar manner at regular/pre-defined time intervals. The global welfare—i.e. an approximately constant global response time—is reached by performing execution resources (VMs) allocation and provisioning for each application class AC(k) under control of the real-time controller module 812 on the basis of a set of observable parameters (incoming client requests or incoming traffic arriving at each tier and/or each application class) received from the workload monitor module. To do so, the real-time controller module 812 is provided with sub-modules including a dynamic scaling estimator sub-module 813, a scheduled provisioning sub-module 817 and a decimator sub-module 818 as shown in
The dynamic scaling estimator sub-module 813 can be modeled as an Infinite-Impulse-Response (IIR) low pass filter typically including a raw provisioning estimator sub-component 914 as well as first and second moving average sub-components 915-916. The dynamic scaling estimator sub-module 813 typically provides, for each application class AC(k), an estimation of the number of VMs that are to be instantiated depending on the normalized input load and may be implemented as a digital filter in its transposed direct form II as indicated in
For the sake of clarity, the definition of the normalized offered load is rewritten below:
where $\lambda_k$ is the incoming aggregated request rate for a particular application class AC(k), and $\tilde{\mu}_k$ is the service rate for an application class AC(k), provided as a configuration parameter denoted max_nb_requests_per_sec. In practice, this parameter can be estimated by monitoring the application resident times, either through offline experiments measuring the average service rate or through rule-of-thumb assumptions observing the time spent by an application on a tier and on the subsequent tier that processed the request. Also, this parameter is typically set lower than the critical service rate to prevent under-provisioning when unpredicted incoming bursts appear. The lower the value of the max_nb_requests_per_sec parameter relative to the critical service rate, the higher the provisioning rate, at the cost of minimal over-provisioning. This parameter depends on the pair (Application class id, Cluster id), assuming that the cluster referenced is actually the physical cluster. In particular, the max_nb_requests_per_sec parameter depends on the average consumption of computing resources (i.e. the physical characteristics of the cluster on which the application class is implemented) and on the computational capacity requested by the application class. This parameter is typically set between 50 and 80% of the critical service rate. However, other strategies are possible, such as subtracting a fixed constant value from the critical service rate.
Also, the raw estimation $u_1[n]$ of the number of VMs required per application class is output from the raw provisioning estimator sub-component 914 (step 1014) and is computed with the following filter:
$$u_1[n] = -c\cdot u_1[n-1] + b\cdot u[n] + b\cdot u[n-1]$$
where $u[n]$ is the n-th sample of the input time series representing the normalized equivalent load $\rho_k(t)$; b and c are coefficients depending on the filter configuration and the sampling frequency $F_s$; and n is the sample index of the time series considered. Also, we have:
As it is apparent from the equations described hereinabove, the estimated number of VMs per application class AC(k) depends on a plurality of configuration parameters including the sampling frequency Fs and the unitary cost per execution resource wk. Although all the configuration parameters are defined on per application class basis, the sampling frequency Fs is unique to the system and assigned to comply with Nyquist's stability condition defined as follows:
The parameter wk is the unitary cost per execution resource instantiated and actually controls the scaling rate. When wk is close to 1, the cost for scaling execution resources is the highest, disabling de facto the dynamic auto-scaling functionality. On the contrary, a value close to 0 corresponds to the lowest cost. In such a situation, the dynamic auto-scaling typically over-provisions execution resources. A suggested value of 0.5 is an acceptable tradeoff between these two extremes.
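As a minimal sketch only, the raw provisioning estimator described above might be implemented as follows in Python; the b and c coefficients are computed from the bilinear-transform reconstruction given earlier, which is an assumption rather than part of the original disclosure, and the class and variable names are illustrative.

class RawProvisioningEstimator:
    """First-order IIR low-pass filter: u1[n] = -c*u1[n-1] + b*(u[n] + u[n-1])."""

    def __init__(self, w_k: float, fs: float):
        a = 1.0 - w_k                             # filter pole location, set by the unit cost w_k
        self.b = 1.0 / (2.0 * fs + a)             # assumed coefficient (bilinear transform)
        self.c = (a - 2.0 * fs) / (a + 2.0 * fs)  # assumed coefficient; fs must satisfy Nyquist
        self.prev_u = 0.0                         # u[n-1], previous normalized load sample
        self.prev_u1 = 0.0                        # u1[n-1], previous raw estimate

    def step(self, u: float) -> float:
        """u is the normalized offered load sample, e.g. lambda_k / mu_k."""
        u1 = -self.c * self.prev_u1 + self.b * (u + self.prev_u)
        self.prev_u, self.prev_u1 = u, u1
        return u1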
Then, the raw estimates—the number $u_1[n]$ of VMs to be instantiated for each application class—are sent to the moving average sub-components 915-916. For each application class AC(k), the raw estimation $u_1[n]$ varies with the input load. However, booting up a VM incurs a delay that can be greater than the notification interval for a new instance. Therefore, to reduce the dependency on the input, a two-stage exponential moving average filter (D-EWMA, comprising EWMA1 and EWMA2) is used. It will be appreciated by those skilled in the art that the invention is not limited to a two-stage exponential moving average filter and that any other suitable multiple-stage filter may be implemented. The output from this double exponentially-weighted moving average filter 915-916 is an estimation y[n] that is less sensitive to input load fluctuations.
The D-EWMA filter comprises two EWMA filters (EWMA1 915 and EWMA2 916) coupled in tandem, each of them having the following recursive form:
$$u_2[n] = e_1\cdot u_2[n-1] + (1 - e_1)\cdot u_1[n]$$
$$y[n] = e_2\cdot y[n-1] + (1 - e_2)\cdot u_2[n]$$
The coefficients e1 and e2 are the constant smoothing factors provided as the configuration parameters for the first and second EWMA filter sub-components 915-916.
The received estimation of execution resources $u_1[n]$ depends on the sampled input request rate $u[n]$ and is fed into the D-EWMA filter at sampling time $n/F_s$. A first exponentially-weighted moving average sub-component 915 filters the estimation $u_1[n]$ with weight $e_1$ to provide a resultant $u_2[n]$ (step 1015). Then, the second exponentially-weighted moving average sub-component 916 receives $u_2[n]$ and filters this estimation with weight $e_2$, thereby producing y[n] as output. The y[n] output has a floating point representation and is typically rounded up to the next integer (ceiling operation) prior to being sent to the following sub-modules 817 and 818 of the real-time controller 812 (step 1016).
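A minimal Python sketch of the two-stage D-EWMA smoothing and ceiling operation, directly following the recursions above (class and variable names are illustrative):

import math

class DoubleEwma:
    """Two EWMA stages in tandem; e1 and e2 are the constant smoothing factors."""

    def __init__(self, e1: float, e2: float):
        self.e1, self.e2 = e1, e2
        self.u2 = 0.0   # output of the first stage (EWMA1)
        self.y = 0.0    # output of the second stage (EWMA2)

    def step(self, u1: float) -> int:
        self.u2 = self.e1 * self.u2 + (1.0 - self.e1) * u1      # EWMA1
        self.y = self.e2 * self.y + (1.0 - self.e2) * self.u2   # EWMA2
        return math.ceil(self.y)  # round up to the next integer VM count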
After passing through the D-EWMA sub-components 915-916, the estimates are sent to the scheduled provisioning sub-module 817, which typically implements a manual auto-scaling logic working with the real-time controller 812 to notify a predetermined and scheduled number of execution resources when the value y[n] resulting from the dynamic scaling estimator sub-module 813 is lower than expected for a specified time of the day, based on known historical load changes (step 1017). This sub-module 817 compares the estimates to a minimum number of VMs to be instantiated, provided as the configuration parameter min_nb_instances. In a case where the estimates are lower than the min_nb_instances parameter, the scheduled provisioning sub-module 817 corrects the estimates and sets them to min_nb_instances. If the estimates are higher than min_nb_instances, then the estimates are not corrected. The configuration parameter min_nb_instances can be provided in different ways. For example, but without limiting the generality of the invention, an operator may enter, for each application class AC(k), a time range as well as a minimum number of execution resources to be instantiated for the specified time range. A time range is defined by a pair of epoch_start and epoch_end values, with a granularity that is a tradeoff between accuracy and flexibility, to avoid the situation where the scheduled provisioning sub-module 817 prevents the dynamic scaling. Additionally and/or alternatively, an operator can also enter a default value specifying the default minimum number of VMs to be instantiated for a particular application class AC(k). This default value can be used as the min_nb_instances configuration parameter for the application class AC(k) outside of the specified time range and/or when a time range was not specified. At configuration time, the operator may enter, for a particular application class AC(k), the triplet (epoch_start, epoch_end, nbScheduled) to associate a minimum number of execution resources with a specified time range.
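A minimal sketch of the scheduled provisioning clamp, assuming a list of (epoch_start, epoch_end, nbScheduled) triplets as described above (function names are illustrative):

import time

def scheduled_minimum(schedule, default_min, now=None):
    """Return the min_nb_instances value applicable at the current time.

    schedule: list of (epoch_start, epoch_end, nb_scheduled) triplets.
    default_min: minimum VM count used outside any configured time range.
    """
    now = time.time() if now is None else now
    for epoch_start, epoch_end, nb_scheduled in schedule:
        if epoch_start <= now < epoch_end:
            return nb_scheduled
    return default_min

def apply_scheduled_provisioning(y, schedule, default_min):
    """Correct the estimate y[n] upwards if it falls below the configured minimum."""
    return max(y, scheduled_minimum(schedule, default_min))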
The output y′[n] from the scheduled provisioning sub-module 817 is then sent to a decimator sub-module 818 (step 1018). As explained hereinabove, the real-time controller module 812 seeks to optimize a local payoff function to reach a global welfare (or equilibrium condition) and minimize a provisioning cost on a per application class AC(k) basis dynamically. The real-time controller module 812 receives observable parameters (incoming client requests or incoming traffic arriving at each tier and/or each application class) from the monitor module, and the dynamic scaling estimator sub-module 813 therefore provides raw estimates y[n] for each application class AC(k) every $1/F_s$ seconds. However, the auto-scaler typically notifies the multi-tiered system to update execution resources allocation and provisioning only when a change between two successive estimates provided by the real-time controller module 812 is actually detected. Therefore, the auto-scaler does not overflow the multi-tiered system with useless and potentially dangerous execution resources allocation and provisioning requests when the estimates did not change. To do so, a decimator sub-module 818 is provided for decorrelating the internal filter estimator sampling rate $F_s$ from the samples notification rate. In particular, if the number of execution resources provisioned does not change, the number of execution resources to be provisioned is notified at regular intervals, e.g. once every scalingDecimationFactor (provided as a configuration parameter) samples, i.e. every scalingDecimationFactor/$F_s$ seconds. Otherwise, if a change is detected between two successive estimates, the auto-scaler immediately notifies the multi-tiered system. In such a situation, the decimator sub-module counter is restarted.
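A minimal Python sketch of the decimator logic described above (names are illustrative): changed estimates are forwarded immediately, while unchanged ones are re-notified only once every scalingDecimationFactor samples.

class Decimator:
    """Decorrelates the internal sampling rate Fs from the notification rate."""

    def __init__(self, decimation_factor: int):
        self.factor = decimation_factor
        self.last = None   # last notified estimate
        self.count = 0     # samples seen since the last notification

    def step(self, y_prime: int):
        """Return y_prime if it should be notified now, else None."""
        self.count += 1
        if y_prime != self.last or self.count >= self.factor:
            self.last = y_prime
            self.count = 0   # restart the decimation counter
            return y_prime
        return None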
Those skilled in the art will appreciate that although portions of the auto-scaler 110 (such as the workload monitor module 111 and the controller module 112) and embodiments in the drawings (
Table 2 describes the different configuration parameters explained hereinabove:
An example of configuration parameters set for different application classes is given in the script below:
In this example, some configuration parameters are set for two different application classes entitled “coretvsearchapp” and “coronaclimcs”. The default configuration parameters are then set such as the:
Then, additional configuration parameters are associated to the description of the application classes and some of them may override the default values. Therefore, for both application classes defined in the example, the unitary cost per VM is set according to the configuration parameters provided in the application class descriptions.
Reference is now made to
As shown in
In another embodiment of the present invention, the strategy adopted by each application class AC(k) forming part of the multi-tiered system is a hybrid strategy comprising an open-loop strategy associated with a feedback strategy. In a feedback strategy, the control variable depends on the time and also on a state variable. Therefore, a dependency of the control variable on an observable state parameter x(t), such as CPU, I/O, memory usage, etc., is introduced and the control variable is denoted by $n_k(t; x(t))$. Combining these strategies helps remedy some limitations of the open-loop strategy, such as the implicit over-provisioning or the latency to reach steady conditions and stable operations. In the open-loop strategy, $w_k$ is a constant configuration parameter that, among other things, determines the cut-off frequency of the low pass filter. In a feedback strategy, the low pass filter cut-off frequency is typically controlled by a parametric design. Also, in the electrical analogy, the feedback strategy is equivalent to changing the RC cell into variable components that depend on external state variables.
In a further embodiment of the present invention, the strategy adopted by each application class AC(k) forming part of the multi-tiered system is another hybrid strategy comprising an open-loop strategy associated with a scheduled provisioning strategy. A scheduled provisioning strategy ensures that a proper number of VMs, assigned statically by an operator, is instantiated at a scheduled time of the day. Indeed, in some particular or predictable situations such as sporting events (e.g. the Olympic Games or a football match) or daily events (e.g. the 8.00 pm evening news), the number of execution resources required is known in advance through capacity planning techniques. In these situations, the provisioning rate change does not follow a daily pattern and the operator may therefore decide to set the default min_nb_instances configuration parameter to a higher value. This is a heuristic design that typically works alongside the dynamic auto-scaling and avoids having to anticipate high input loads to support rare but known events.
It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable sub-combination.
Also, it will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather the scope of the invention is defined by the appended claims and equivalents thereof.