The present invention relates generally to a method for managing physical resources, and in particular to a method and associated system for generating a free server pool for enabling a physical resource management process.
Performing system management is typically an inaccurate process that offers little flexibility. Maintaining elements of a system is a complicated process that may be time consuming and may require a large amount of resources. Accordingly, there exists a need in the art to overcome at least some of the deficiencies and limitations described herein above.
A first aspect of the invention provides a method comprising: generating, by a computer processor of a computing system, a physical server pool defining a dedicated group of physical servers associated with a user; monitoring, by the computer processor, resources of the physical server pool and additional resources of additional physical server pools defining additional groups of physical servers associated with additional users, wherein each physical server pool of the additional physical server pools is associated with a different user of the additional users; consuming, by the computer processor, monitored data retrieved during the monitoring; first determining, by the computer processor based on the monitored data, that a utilization rate of the additional physical server pools is less than a specified threshold value; selecting, by the computer processor based on the first determining, a group of physical servers of the additional physical server pools for providing to a logical free server pool; migrating, by the computer processor, the group of physical servers to the free server pool; determining, by the computer processor, that the physical server pool requires an additional server; rating, by the computer processor, servers within the free server pool based on a calculated chance for required usage within an associated physical server pool of the additional physical server pools; and allocating, by the computer processor based on results of the rating, a first physical server of the free server pool to the physical server pool requesting a physical server.
A second aspect of the invention provides a computing system comprising a computer processor coupled to a computer-readable memory unit, the memory unit comprising instructions that when executed by the computer processor implement a method comprising: generating, by the computer processor, a physical server pool defining a dedicated group of physical servers associated with a user; monitoring, by the computer processor, resources of the physical server pool and additional resources of additional physical server pools defining additional groups of physical servers associated with additional users, wherein each physical server pool of the additional physical server pools is associated with a different user of the additional users; consuming, by the computer processor, monitored data retrieved during the monitoring; first determining, by the computer processor based on the monitored data, that a utilization rate of the additional physical server pools is less than a specified threshold value; selecting, by the computer processor based on the first determining, a group of physical servers of the additional physical server pools for providing to a logical free server pool; migrating, by the computer processor, the group of physical servers to the free server pool; determining, by the computer processor, that the physical server pool requires an additional server; rating, by the computer processor, servers within the free server pool based on a calculated chance for required usage within an associated physical server pool of the additional physical server pools; and allocating, by the computer processor based on results of the rating, a first physical server of the free server pool to the physical server pool requesting a physical server.
A third aspect of the invention provides a computer program product, comprising a computer readable hardware storage device storing a computer readable program code, the computer readable program code comprising an algorithm that when executed by a computer processor of a computer system implements a method, the method comprising: generating, by the computer processor, a physical server pool defining a dedicated group of physical servers associated with a user; monitoring, by the computer processor, resources of the physical server pool and additional resources of additional physical server pools defining additional groups of physical servers associated with additional users, wherein each physical server pool of the additional physical server pools is associated with a different user of the additional users; consuming, by the computer processor, monitored data retrieved during the monitoring; first determining, by the computer processor based on the monitored data, that a utilization rate of the additional physical server pools is less than a specified threshold value; selecting, by the computer processor based on the first determining, a group of physical servers of the additional physical server pools for providing to a logical free server pool; migrating, by the computer processor, the group of physical servers to the free server pool; determining, by the computer processor, that the physical server pool requires an additional server; rating, by the computer processor, servers within the free server pool based on a calculated chance for required usage within an associated physical server pool of the additional physical server pools; and allocating, by the computer processor based on results of the rating, a first physical server of the free server pool to the physical server pool requesting a physical server.
The present invention advantageously provides a simple method and associated system capable of performing system management.
System 100 performs the following processes:
1. Physical server management within a virtualized environment that includes shared storage (on a SAN) and a LAN.
2. Scrubbing hosts and a hypervisor before allocating them to a new server pool, for example, performing automatic vLAN removal and extension based on the customer selected for a server.
3. A lazy approach that defers a scrubbing process with respect to free servers until they are needed by a server pool. For example, rating free servers based on: a prediction of use within an original server pool, a number of free servers within a same server pool, a variability of load specified for a workload during a recent past time period, etc., in order to help minimize thrashing.
4. Accounting for resources such as, inter alia, memory and network communication between any two VMs, in order to derive a method for densely packing VMs on hosts. A dense packing process is crucial for maintaining low response latency by leveraging TCP/IP within a memory structure.
5. Modeling a cost of migration and providing a method to perform the reconfiguration using a low cost approach. The migration process may include generating a rating/score for an approach dependent on a size of RAM and a rate of change of RAM.
6. System 100 allows physical servers to include different numbers of CPUs.
Notations associated with processes performed by maintenance system 104 are defined as follows:
1. $n$ = number of VMs in a given server pool.
2. $m$ = number of physical servers in a given server pool.
3. $\#c_j$ = number of CPUs on a given physical server $j$.
4. $cu_i$ and $mu_i$ comprise (respectively) the physical CPU and memory utilization (or demands) of the $i$th VM process.
5. $f_{pq}$ comprises a network flow (bits/sec) between VM $p$ and VM $q$ of a customer. Note that:
A. $f_{pq} = f_{qp}$, for each $(p, q)$.
B. $f_{ii}$ = a network flow with components other than VMs such as, inter alia, any components outside of a customer premise within a cloud.
6. $x_{ij} = 1$ if the $i$th VM is on the $j$th host; otherwise it is 0.
The following description comprises a process associated with flows on links due to placement of VMs on servers. If a VM p and a VM q communicate with each other and are placed on a server r and a server s respectively, then all communication links connecting physical servers r and s will carry communication traffic between the two VMs. Therefore, an assumption is made that there is a single primary path between any two servers. For example, if servers r and s are connected to a switch, then a path comprises the switch and the cable connections from the two servers r and s to the switch. If communication is within a server, then the link l corresponds to a software switch in a hypervisor. A link l may comprise a switch, a router, an actual physical link between a server and a switch, a physical link between two switches, a physical link between a switch and a router, a physical link between two routers, etc. Therefore, it is necessary to provide constraints for each of the communication links on the path between any two physical servers as follows:
Let $\mathrm{Link}(l, r, s)$ be 1 if the $l$th link is used for communication between servers $r$ and $s$, and 0 otherwise. The flow contribution on link $l$ if VM $p$ and VM $q$ are situated on servers $r$ and $s$ respectively comprises $\mathrm{Link}(l, r, s)\, f_{pq}\, y_{pqrs}$, where $(x_{pr} + x_{qs})/2 \geq y_{pqrs} \geq (x_{pr} + x_{qs})/2 - 1/2$ and $y_{pqrs} \in \{0, 1\}$ (i.e., $y_{pqrs}$ is 1 if $p$ is hosted on $r$ and $q$ is hosted on $s$, otherwise 0). Therefore, the total flow on link $l$ due to the placement of all the VMs on servers comprises: $\sum_s \sum_{r>s} \sum_p \sum_{q \geq p} \mathrm{Link}(l, r, s)\, f_{pq}\, y_{pqrs}$.
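Purely as an illustration, the following Python sketch computes the total flow on a link from a placement matrix, deriving each $y_{pqrs}$ as the product $x_{pr} x_{qs}$ (the integer solution of the linearized bounds above). All names (total_link_flow, placement, flows, link_used) are hypothetical and not part of the described system:

```python
# Minimal sketch of the link-flow computation described above.
# placement[p][r] == 1 iff VM p is on server r; flows[p][q] = f_pq;
# link_used(l, r, s) plays the role of Link(l, r, s).

def total_link_flow(l, placement, flows, link_used):
    """Total flow on link l: sum over server pairs (r, s), r > s,
    and VM pairs (p, q), q >= p, of Link(l, r, s) * f_pq * y_pqrs."""
    n_vms = len(placement)
    n_srv = len(placement[0])
    total = 0.0
    for s in range(n_srv):
        for r in range(s + 1, n_srv):            # r > s
            if not link_used(l, r, s):           # Link(l, r, s)
                continue
            for p in range(n_vms):
                for q in range(p, n_vms):        # q >= p
                    # y_pqrs = 1 iff p is hosted on r and q on s
                    y = placement[p][r] * placement[q][s]
                    total += flows[p][q] * y
    return total
```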
Optimization detector 104b enables a process for determining when to free a physical server. The process is described as follows:
1. At regular intervals, optimization detector 104b determines the sums of CPU, memory, and link (NIC flow) utilizations, defined respectively as follows: $\sum_j \sum_i cu_i x_{ij}$, $\sum_j \sum_i mu_i x_{ij}$, and $\sum_s \sum_{r>s} \sum_p \sum_{q \geq p} \mathrm{Link}(l, r, s)\, f_{pq}\, y_{pqrs}$. If any of the following inequalities is true, then an optimization process to densely pack VMs on existing hosts and free one or more hosts is executed (a sketch of this check follows the list below):
1. $\sum_j \#c_j \cdot 100\% - \sum_j \sum_i cu_i x_{ij} \geq k_c \cdot 100\%$, where $k_c \geq 1$.
2. $m \cdot 100\% - \sum_j \sum_i mu_i x_{ij} \geq k_m \cdot 100\%$, where $k_m \geq 1$.
3. $m \cdot T_l - \sum_s \sum_{r>s} \sum_p \sum_{q \geq p} \mathrm{Link}(l, r, s)\, f_{pq}\, y_{pqrs} \geq k_l \cdot T_l$, where $T_l$ is the total capacity of link $l$ and $k_l \geq 1$ is the number of $T_l$-sized capacity drops required to initiate the optimization to densely pack the VMs. $k_c$, $k_m$, and $k_l$ comprise predefined constants.
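A minimal sketch of this trigger check, assuming the aggregate sums above have already been computed (all names are illustrative assumptions):

```python
# Hypothetical trigger check for the dense-packing optimization.
# cpu_capacity = sum_j #c_j * 100; mem_capacity = m * 100;
# link_capacity[l] = m * T_l; *_used values are the corresponding sums.

def should_optimize(cpu_capacity, cpu_used, k_c,
                    mem_capacity, mem_used, k_m,
                    link_capacity, link_used, k_l, link_unit):
    """Return True if any headroom inequality from the text holds."""
    if cpu_capacity - cpu_used >= k_c * 100:
        return True
    if mem_capacity - mem_used >= k_m * 100:
        return True
    for l, cap in link_capacity.items():
        # k_l capacity drops of size T_l on link l
        if cap - link_used[l] >= k_l * link_unit[l]:
            return True
    return False
```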
Dense packing finder module 104a executes a consolidation method, identifies free servers in customer server pools, and computes a rating for safely using a free server for another customer. Additionally, dense packing finder module 104a determines that once upper thresholds for resource usage by a workload are violated within a measurement interval, system 100 raises a demand for adding a free server to support the workload if a server pool comprises a free server. System 100 obtains a new server from FSP 114 for a given customer request for a new physical server as follows:
Create a sorted list of free servers ordered in descending order of a rating given to each free server, where the rating depends on:
A. A likelihood that the server will not be re-requested in its original server pool in the next measurement interval.
B. A goodness rating of the server with respect to the target server pool.
System 100 then performs the following steps:
1. Select the topmost free server in the sorted list.
2. Perform a server deactivation process.
3. Scrub a server if a target server pool is different from a source server pool.
4. Extend a vLAN(s) for customers of the server.
5. Register the server with the target server pool.
6. Zone storage pools to the server.
7. Perform a server activation process.
System 100 enables a process for densely packing VMs to free servers; therefore, an optimization problem is solved. The objective function of the optimization problem represents a cost that penalizes a configuration comprising usage of excessive servers. To capture this, $z_j$ is defined to be 1 if at least one VM is hosted on physical server $j$, and 0 otherwise. A cost function is defined as $\sum_j \#c_j z_j$, where the multiplication by $\#c_j$ penalizes selecting a server with a higher number of CPUs relative to selecting another server with a lower number of CPUs. $z_j$ is expressed in terms of the decision variables $x_{ij}$ as follows: $\sum_i x_{ij} \geq z_j \geq x_{ij}$, for all $i$ and $j$. When there is no VM on host $j$, $x_{ij} = 0$ for all $i$ and therefore $\sum_i x_{ij} \geq z_j$ drives $z_j$ to 0, whereas if there is even one VM on host $j$ then there will exist some $i$ for which $x_{ij} = 1$ and therefore $z_j \geq x_{ij}$ will drive $z_j$ to 1.
A process for calculating capacity constraints per host j is described as follows:
For each host $j$, upper bounds are defined. The upper bounds are not to be exceeded by any valid configuration that the optimization search detects. A CPU utilization value is constrained as follows: the sum of the CPU utilizations of the VMs on host $j$ should be upper bounded by $CU_j$, and therefore $\sum_i cu_i x_{ij} \leq CU_j \leq \#c_j \cdot 100\%$. A memory utilization value is constrained as follows: the sum of the memory utilizations of the VMs on host $j$ should be upper bounded by $MU_j$, and therefore $\sum_i mu_i x_{ij} \leq MU_j$. A link utilization value is constrained as follows: the total link $l$ utilization of the VMs on host $j$ should be upper bounded by $NU_{lj} T_l$, and therefore $\sum_s \sum_{r>s} \sum_p \sum_{q \geq p} \mathrm{Link}(l, r, s)\, f_{pq}\, y_{pqrs} \leq NU_{lj} T_l$, where $NU_{lj}$ comprises a percentage.
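A minimal sketch of the per-host feasibility check implied by these bounds (names are illustrative; the per-host link flows are assumed precomputed, e.g., with the earlier flow sketch):

```python
# Hypothetical per-host feasibility check for the capacity constraints.
# cu, mu: per-VM CPU and memory utilizations; x[i][j] is the placement.

def host_within_bounds(j, x, cu, mu, cpu_cap_j, mem_cap_j,
                       link_flow_j, link_cap_j):
    """True iff host j satisfies the CPU, memory, and link upper bounds."""
    cpu = sum(cu[i] for i in range(len(cu)) if x[i][j] == 1)
    mem = sum(mu[i] for i in range(len(mu)) if x[i][j] == 1)
    if cpu > cpu_cap_j or mem > mem_cap_j:       # CU_j and MU_j bounds
        return False
    # link_flow_j[l] must not exceed NU_lj * T_l for every link l
    return all(link_flow_j[l] <= link_cap_j[l] for l in link_flow_j)
```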
Integrity constraints are defined as follows:
1. $\sum_j x_{ij} = 1$: Each VM $i$ must be hosted on exactly one physical server.
2. $\sum_j \sum_i x_{ij} = n$: There are $n$ VMs hosted on at most $m$ physical servers.
3. $\sum_j z_j \leq m - 1$: At most $m$ physical servers are selected after dense packing. Since consolidation is the objective, the right-hand side of the inequality equals $m - 1$, as at least one server must be freed.
Co-location constraints are calculated within a workload. Co-location constraints comprise constraints on which hosts the VMs of a workload may be placed on. If a VM of a workload may not be placed on a particular host $j$, then the respective $x_{ij}$ variable is fixed to 0. If two VMs (e.g., VMs 1 and 3) may not be co-hosted on host $j$, then the following anti-colocation constraint is added: $\sum_{i \in \{1,3\}} x_{ij} \leq 1$.
An overall scenario for performing a dense pack process is described as follows:
$\sum_j \#c_j z_j$ is minimized subject to the following threshold constraints, integrity constraints, and $z$, $x$, and $y$ constraints (a solver sketch follows the constraint listing below):
Threshold Constraints
1. For all servers' CPU: $\sum_i cu_i x_{ij} \leq CU_j$
2. For all servers' memory: $\sum_i mu_i x_{ij} \leq MU_j$
3. For all links $l$: $\sum_s \sum_{r>s} \sum_p \sum_{q \geq p} \mathrm{Link}(l, r, s)\, f_{pq}\, y_{pqrs} \leq NU_{lj} T_l$
Integrity Constraints
1. $\sum_j x_{ij} = 1$, for all $i$
2. $\sum_j \sum_i x_{ij} = n$
3. $\sum_j z_j \leq m - 1$
z, x, and y Constraints
1. $\sum_i x_{ij} \geq z_j \geq x_{ij}$, for all $i$ and $j$
2. $(x_{pr} + x_{qs})/2 \geq y_{pqrs} \geq (x_{pr} + x_{qs})/2 - 1/2$ and $y_{pqrs} \in \{0, 1\}$
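A minimal sketch of this integer program, assembled here with the PuLP modeling library (an assumed choice; the disclosure does not name a solver). The $y$/link constraints are omitted for brevity, and the second integrity constraint is implied by the first:

```python
# Hypothetical dense-packing ILP, assembled with PuLP (assumed solver choice).
import pulp

def dense_pack(n, m, cu, mu, CU, MU, c):
    """Minimize sum_j #c_j * z_j subject to the constraints in the text.
    cu[i], mu[i]: VM demands; CU[j], MU[j]: host bounds; c[j]: #CPUs."""
    prob = pulp.LpProblem("dense_pack", pulp.LpMinimize)
    x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(m)]
         for i in range(n)]
    z = [pulp.LpVariable(f"z_{j}", cat="Binary") for j in range(m)]

    prob += pulp.lpSum(c[j] * z[j] for j in range(m))        # objective

    for i in range(n):                                       # integrity 1
        prob += pulp.lpSum(x[i][j] for j in range(m)) == 1
    prob += pulp.lpSum(z[j] for j in range(m)) <= m - 1      # integrity 3

    for j in range(m):                                       # thresholds
        prob += pulp.lpSum(cu[i] * x[i][j] for i in range(n)) <= CU[j]
        prob += pulp.lpSum(mu[i] * x[i][j] for i in range(n)) <= MU[j]
        # z/x linking: sum_i x_ij >= z_j >= x_ij
        prob += pulp.lpSum(x[i][j] for i in range(n)) >= z[j]
        for i in range(n):
            prob += z[j] >= x[i][j]

    prob.solve()
    return [[int(x[i][j].value()) for j in range(m)] for i in range(n)]
```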
An existing configuration (i.e., a current placement of VMs on hosts) is transformed by the dense packing optimization process. The cost of the transformation depends on the business criticality $C_i$ of a VM $i$ to be migrated, expressed in terms of the loss to business if the VM goes down. If $R_i$ comprises the overall probability of failure when the VM is migrated, then the expected cost comprises $C_i \cdot R_i$. Additionally, the cost of the transformation depends on the size of the memory state of the VM (i.e., the bigger the memory, the more time the migration will take, with a potentially higher probability of failure due to software related transient errors). A memory state comprises a memory size of the VM; a normalized memory size is denoted as $Mem_i$. The cost of the transformation is additionally proportional to the rate of change of memory for a VM, which is directly proportional to the write-rate of the VM. Let the write-rate of VM $i$ be $WR_i$; this number is normalized by dividing by the sum of the write-rates of all VMs. Therefore, the cost of the transformation is defined as: $Cost_i := \max(aC_iR_i,\, bMem_i,\, cWR_i) + aC_iR_i \cdot bMem_i + aC_iR_i \cdot cWR_i + cWR_i \cdot bMem_i + aC_iR_i \cdot bMem_i \cdot cWR_i$, where $a$, $b$, and $c$ are user-defined constants in the $[0, 1]$ interval that give relative importance to their associated terms' contributions to the overall cost (a sketch of this computation follows the list below). The aforementioned transformation process results in one or more physical servers being freed up such that the physical servers are removed from a server pool (e.g., server pools 108a . . . 108m) and added to a free server pool (e.g., free server pool 114). When a physical server is freed up, a service provider may stop charging a tenant for that server. The following key cleansing operations are performed when a physical server is selected for a customer from a free server pool belonging to another customer:
1. Decommissioning the server for the previous owner.
2. Unregistering the physical server from the server pool to which it belonged.
3. Removing vLAN extensions made to software switches in the physical servers.
4. Unzoning storage pools from the server.
5. Performing a service activation process for the new customer.
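Returning to the migration cost model above, a minimal sketch of the per-VM cost computation (function and parameter names are illustrative):

```python
# Hypothetical computation of Cost_i from the migration cost model above.

def migration_cost(C_i, R_i, mem_i, wr_i, a, b, c):
    """Cost_i = max(aC_iR_i, bMem_i, cWR_i) plus all pairwise products
    and the triple product of the three weighted terms; a, b, c in [0, 1]."""
    t1 = a * C_i * R_i      # expected business loss term
    t2 = b * mem_i          # normalized memory-size term
    t3 = c * wr_i           # normalized write-rate term
    return (max(t1, t2, t3)
            + t1 * t2 + t1 * t3 + t2 * t3
            + t1 * t2 * t3)
```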
The following factors 1 and 2 are associated with a heuristic for estimating a likelihood of choosing a free server from the free server pool when a customer makes a demand for a free server:
Factor 1
Factor 1 describes a number of free servers in an original server pool. If the number of free servers is high, then the likelihood of a server from that pool being demanded back in its original pool is determined to be small. Factor 1 is determined as follows:
1. Let $FS_{kj} = 1$ if the $j$th server is in the $k$th server pool, otherwise 0.
2. Define the likelihood of the $j$th server not being re-requested in the next interval within its own server pool as $LFS_j := \sum_t FS_{k't} / \sum_k \sum_t FS_{kt}$, where $k'$ comprises the server pool to which the $j$th server belongs.
Essentially, $LFS_j$ is the fraction: (number of free servers in the pool of free server $j$) / (total number of free servers). $LFS_j$ defines the likelihood that a selected server is not re-requested in the next interval, and it increases as the number of free servers in the original physical server pool from which the server was freed increases.
Factor 2
Factor 2 describes demand in the next interval T (i.e., T is a measurement interval for the SLO). Demand estimation depends on a combination of history and domain knowledge, for example: recent history used to predict demand (i.e., if in recent history the demand from a customer is low, the same is expected in the next interval T), time of day from past history (i.e., seasonality), and domain knowledge of expected demand in the interval T. Various approaches from the literature exist to predict demand, and any of these may be used in the invention. Factor 2 is determined as follows: let $LD_j$ comprise the likelihood that the $j$th server is not re-requested for the next interval within its own server pool, where $LD_j := 1 - D_{k'} / \sum_t D_t$. That is, $LD_j$ comprises one minus the ratio of the demand $D_{k'}$ for server pool $k'$, to which server $j$ belongs, to the total demand across all the server pools.
An overall likelihood $R_j$ for obtaining a free server $j$ is defined as $R_j = \min(LFS_j, LD_j) + LFS_j \cdot LD_j$. The overall likelihood for obtaining a free server comprises at least the minimum of the two likelihood components $LFS_j$ and $LD_j$; a finer gradation amongst the servers is further achieved by the product of the two components.
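A minimal sketch combining both factors into the overall likelihood $R_j$ (container names are illustrative assumptions):

```python
# Hypothetical computation of LFS_j, LD_j, and the overall likelihood R_j.

def overall_likelihood(free_by_pool, demand_by_pool, pool_of_server, j):
    """free_by_pool[k]: number of free servers in pool k;
    demand_by_pool[k]: predicted demand for pool k in the next interval T."""
    k = pool_of_server[j]                        # pool k' of free server j
    total_free = sum(free_by_pool.values())
    lfs = free_by_pool[k] / total_free           # Factor 1
    total_demand = sum(demand_by_pool.values())
    ld = 1.0 - demand_by_pool[k] / total_demand  # Factor 2
    return min(lfs, ld) + lfs * ld               # R_j
```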
A goodness rating of a free server is motivated by the fact that if a free server is included in a server pool and workload components are placed on the free server, this might lead to increased traffic on the vLAN. The following heuristic is used to compute a goodness rating:
For each VM $p$ in a target server pool's workload that may potentially be hosted on the chosen free server $t$:
1. Initialize $\#n = 1$; $goodness_t = 0$.
2. For each VM $q$ for which $f_{pq} > 0$:
A. $currentFlow := \sum_l \sum_r \sum_s \mathrm{Link}(l, r, s)\, f_{pq}\, y_{pqrs}$
B. $targetFlow := \sum_l \sum_s \mathrm{Link}(l, t, s)\, f_{pq}\, y_{pqts}$ (note that VM $p$ is placed on the new server $t$)
C. $goodness_t := (\#n - 1) \cdot goodness_t / \#n + (currentFlow / targetFlow) / \#n$; increment $\#n$.
3. Normalize $goodness_t$ across all free servers $t$.
4. Output the normalized goodness rating.
An overall rating comprises a weighted average of the overall likelihood $R_j$, where the weight comprises the goodness rating.
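A minimal sketch of the goodness heuristic under one reading of the loop structure above, followed by the goodness-weighted overall rating (helpers such as current_flow and target_flow are assumed; normalization across all free servers is omitted for brevity):

```python
# Hypothetical goodness rating for candidate free server t, following the
# running-average heuristic above, then weighted by the likelihood R_j.

def goodness_rating(target_vms, flows, current_flow, target_flow, t):
    """flows[p][q] = f_pq; current_flow/target_flow return the summed
    link flows for the pair (p, q) with p in place vs. p placed on t."""
    n = 1
    goodness = 0.0
    for p in target_vms:
        for q in target_vms:
            if flows[p][q] <= 0:
                continue
            ratio = current_flow(p, q) / target_flow(p, q, t)
            goodness = (n - 1) * goodness / n + ratio / n   # running mean
            n += 1
    return goodness

def overall_rating(goodness_t, r_j):
    """Overall rating: likelihood R_j weighted by the goodness rating."""
    return goodness_t * r_j
```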
Still yet, any of the components of the present invention could be created, integrated, hosted, maintained, deployed, managed, serviced, etc. by a service supplier who offers to generate a free server pool for enabling a physical resource management process. Thus the present invention discloses a process for deploying, creating, integrating, hosting, maintaining, and/or integrating computing infrastructure, including integrating computer-readable code into the computer system 90, wherein the code in combination with the computer system 90 is capable of performing a method for generating a free server pool for enabling a physical resource management process. In another embodiment, the invention provides a business method that performs the process steps of the invention on a subscription, advertising, and/or fee basis. That is, a service supplier, such as a Solution Integrator, could offer to generate a free server pool for enabling a physical resource management process. In this case, the service supplier can create, maintain, support, etc. a computer infrastructure that performs the process steps of the invention for one or more customers. In return, the service supplier can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service supplier can receive payment from the sale of advertising content to one or more third parties.
While embodiments of the present invention have been described herein for purposes of illustration, many modifications and changes will become apparent to those skilled in the art. Accordingly, the appended claims are intended to encompass all such modifications and changes as fall within the true spirit and scope of this invention.