Virtual machines can be provided in a computer to enhance flexibility and utilization. A virtual machine typically refers to some arrangement of components (software and/or hardware) for virtualizing or emulating an actual computer, where the virtual machine can include an operating system and software applications. Virtual machines can allow different operating systems to be deployed on the same computer, such that applications written for different operating systems can be executed in different virtual machines (that contain corresponding operating systems) in the same computer. Moreover, the operating system of a virtual machine can be different from the host operating system that may be running on the computer on which the virtual machine is deployed.
In addition, a greater level of isolation is provided between or among applications running in different virtual machines. In some cases, virtual machines also allow multiple applications to more efficiently share common resources (processing resources, input/output or I/O resources, and storage resources) of the computer.
Referring to
As non-limiting examples, the system 10 may be an application server farm, a cloud server farm, a storage server farm (or storage area network), a web server farm, a switch, a router farm, and so forth. Although three physical machines 20 are depicted in
As non-limiting examples, each of the physical machines 20 may be a computer (an application server, a storage server, a web server, etc., for example), a communications module (a switch, a router, etc.), and/or another type of machine. In general, the term “physical machine” indicates that the machine is an actual machine, which is made up of hardware and software (i.e., machine executable instructions). Moreover, although each of the physical machines 20 as depicted in
Each physical machine 20 provides a platform for the installation of one or multiple virtual machines. In this manner, a given physical machine 20 may host, or contain, one or multiple virtual machines (such as, for example, virtual machines 40, which are depicted in
A virtual machine refers to some partition or segment (made of software and/or hardware) of the physical machine 20, which is provided to virtualize, or emulate, a physical machine. From the perspective of a user, a virtual machine has the same appearance as a physical machine. As an example, a particular virtual machine may include one or more software applications, an operating system and one or more device drivers.
The operating systems that are part of the corresponding virtual machines within a physical machine 20 may be different types of operating systems or different versions of an operating system. This allows software applications designed for different operating systems to execute on the same physical machine 20.
The virtual machines within a physical machine 20 are designed to share the physical resources of the physical machine 20. As a more specific example, exemplary physical machine 20-1 includes hardware 30, which, in turn, includes one or more central processing units (CPUs) 32, a memory 34 (a system memory, for example) and possibly other hardware components, such as a network interface, a display driver, and so forth. It is noted that these components are listed as mere examples, as the hardware 30 may include other and/or different physical components, such as a storage area network (SAN) interface, as a non-limiting example. The other physical machines 20 (such as the physical machine 20-2 and the physical machine 20-N, for example, which are also depicted in
Using the physical machine 20-1 as an example, in addition to the hardware 30, the physical machine 20-1 contains other software components (i.e., components formed at least in part by machine executable instructions), such as the virtual machines 40 and an operating system 50. The physical machine 20-1 further includes a set of machine executable instructions that form a “scheduler 60,” which determines a virtual machine placement, as further described herein. It is noted that the physical machine 20-1 may contain other software components that are not depicted in
Similar to the physical machine 20-1, the other physical machines 20-2 . . . 20-N of the system 10 may contain similar hardware 66 and machine executable instructions 64, in accordance with example implementations.
Each virtual machine 40 is associated with a particular hardware container, called a “bin” herein. In this regard, the bins represent partitions (overlapping and/or non-overlapping partitions, depending on the particular implementation) of the hardware that contains, or hosts, the virtual machines 40. For the example of
As a more specific example,
Multiple virtual machines 40 may be associated with performing a certain job; and in the performance of a given job, different pairs of the virtual machines 40 communicate with each other. Each such pair may have an associated minimum communication bandwidth, or traffic, to support its inter-communication. Techniques and systems are disclosed herein for determining the placement, or distribution, of the virtual machines 40 among the bins 100 such that all inter-virtual machine traffic is accommodated, while the number of virtual machines 40 assigned to a particular bin 100 is constrained to be less than the total size of the bin 100. Such a placement permits the efficient use of a minimum number of bins (i.e., a minimum number of physical machines and switches, for example) in a data center (for example) to accommodate a given load of virtual machines with certain communication requirements, thereby allowing the remaining bins (i.e., the remaining physical machines, switch ports, switches, and so forth) to accommodate more jobs or be turned off to conserve power.
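The two constraints described above (bin size and inter-bin communication capacity) can be checked with the following sketch. The names here (bin_of, bin_size, link_cap, com) are illustrative assumptions and are not taken from the disclosure:

```python
from collections import defaultdict

def placement_feasible(bin_of, bin_size, link_cap, com):
    """Check that a placement honors both the bin-size constraint and
    the inter-bin communication-capacity constraint."""
    count = defaultdict(int)    # virtual machines hosted per bin
    load = defaultdict(float)   # aggregate traffic per inter-bin link
    for vm, b in bin_of.items():
        count[b] += 1
    for (i, k), bw in com.items():
        a, b = bin_of[i], bin_of[k]
        if a != b:              # traffic within a bin consumes no link capacity
            load[tuple(sorted((a, b)))] += bw
    return (all(count[b] <= bin_size[b] for b in count)
            and all(load[e] <= link_cap[e] for e in load))
```

For example, a placement that keeps a heavily-communicating pair in the same bin passes even when the inter-bin link could not carry that pair's traffic.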
As a more specific example, using the techniques and systems that are disclosed herein, a virtual network of virtual machines may be mapped onto a physical network of physical machines in a manner that maintains a guaranteed bandwidth between the physical machines, as specified by a service level agreement (SLA). For this example, the physical network of physical machines may be a cloud network. The bin in this case refers to a physical machine, and the size of the physical machine refers to the maximum number of virtual machines that may be simultaneously hosted on the machine. This maximum number may be selected by a system administrator and may depend on a number of factors, such as available memory, the number of processing cores of the physical machine, and so forth.
Another application involving the mapping, or placement, of a virtual network of virtual machines onto a physical network of physical machines is network testbed mapping. In this application, a virtual network of virtual machines is mapped onto a physical network of physical machines, while maintaining guaranteed bandwidth on the links. The network testbed facility may be used to run network experiments such as, for example, testing the performance properties of new network protocols. Fidelity of the mapping of the virtual network to the physical network may be relatively important for purposes of establishing experimental validity and the reliability of the results.
As further described herein, the systems and techniques that are disclosed herein may be used to further map a virtual network of virtual machines onto a physical network of physical machines, which implement a cloud service, while conserving the amount of consumed power in the physical network.
In accordance with an example implementation, for purposes of determining an optimum configuration for placing a group of the virtual machines 40 (i.e., determining a configuration for distributing the virtual machines 40 among the bins 100), initially, the virtual machines 40 may be placed into the bins at random, or by another technique (such as the Eigenvector method, for example). Using this initial candidate configuration as a starting point, one or multiple alternate candidate configurations are evaluated for purposes of determining a particular final configuration for placing the virtual machines 40, taking into account the communication capacities among the bins 100, the bin sizes, power consumption goals, and so forth.
More specifically, a given candidate configuration for placing virtual machines among available bins is evaluated, pursuant to the techniques and systems disclosed herein, by evaluating the gain, or benefit (called “benefit(i,a,b)” below), of moving a given virtual machine i from its current bin a to another bin b:
benefit(i,a,b) = Σ_{k∈neighbors(i)} (cap(b,bin(k)) − cap(a,bin(k))) · com(i,k),  Eq. 1
where “k” is an index over the virtual machines that communicate with virtual machine i; “cap(b,bin(k))” represents the communications capacity between bin b (i.e., the new bin) and bin(k), the bin that contains virtual machine k; “cap(a,bin(k))” represents the communications capacity between bin a (i.e., the current bin) and bin(k); and “com(i,k)” represents the communication requirement for communications between virtual machine i and virtual machine k.
In Eq. 1, com(i,k) captures the desired communication bandwidth between virtual machines i and k. It is noted that cap(a,a)=∞ (in practice, a sufficiently large number such that cap(a,a)−cap(a,b) is large and positive for each b≠a). Hence, in mathematical terms, cap(b,bin(k))−cap(a,bin(k)) is positive if there is greater communications capacity between b and bin(k) than between a and bin(k), i.e., if the available physical capacity between i and k increases. The value of this gain depends on how much capacity is desired between the virtual machines i and k, which is captured in the function com(i,k): multiplying by this term weights the value of the proposed move from the perspective of this pair of virtual machines.
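Eq. 1 can be expressed directly as a short sketch. The container and helper names (cap_matrix, bin_of, neighbors, com) are illustrative assumptions, and BIG stands in for the “sufficiently large number” used for cap(a,a):

```python
BIG = 10**9  # stands in for cap(a, a) = infinity, per the note above

def cap(cap_matrix, x, y):
    """Communications capacity between bins x and y; a bin's capacity
    to itself is treated as effectively infinite."""
    return BIG if x == y else cap_matrix[(x, y)]

def benefit(i, a, b, neighbors, bin_of, cap_matrix, com):
    """Gain (Eq. 1) of moving virtual machine i from bin a to bin b."""
    return sum(
        (cap(cap_matrix, b, bin_of[k]) - cap(cap_matrix, a, bin_of[k])) * com[(i, k)]
        for k in neighbors[i]
    )
```

As expected from the discussion above, moving a virtual machine into the bin of a heavily-communicating neighbor yields a large positive benefit, and moving it away yields the corresponding negative value.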
In accordance with an exemplary implementation, a technique 120 that is set forth in
The cost of the current candidate configuration is then determined, pursuant to block 126. More specifically, in accordance with an exemplary implementation, a benefit of the move described in block 124, such as the benefit determined from Eq. 1, is determined and subtracted from a total cost associated with the previous candidate configuration to determine the cost of the current candidate configuration. This cost, in turn, is compared to previously determined costs associated with other candidate configurations to determine whether the current cost is the best cost. If all of the candidate configurations have been evaluated, then the candidate configuration that has the lowest cost is determined or identified, pursuant to block 130. Otherwise, if more candidate configurations may be determined, the technique 120 includes repeating block 124 (see decision block 128) to derive at least one other candidate configuration.
As a more specific and non-limiting example, a technique 200 of
Using the derived benefits stored in the array(s), the technique 200 creates (block 206) a move array. In general, the move array sets forth the “best” next virtual machine move, considering the virtual machines in a particular bin. In this manner, if a virtual machine move is being contemplated for a given bin, the move array sets forth the best virtual machine move from the bin that results in the greatest benefit (as determined from Eq. 1, for example).
In accordance with an example implementation, the technique 200 creates a particular new candidate configuration from a prior candidate configuration by making a single virtual machine move from the bin in which the virtual machine currently resides into a target bin. Thus, using an existing candidate configuration, a single virtual machine is moved from one of the bins into another bin to create the next candidate configuration. Moreover, in accordance with an example implementation, the technique 200 moves a given virtual machine into a target bin at most once and selects the virtual machine for the next move from the target bin of the previous move.
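The move array described above can be sketched as follows. Here benefit_fn stands in for an Eq. 1 style gain function, movable records the moved-at-most-once rule, and all names are illustrative assumptions:

```python
def build_move_array(bins, vms_in_bin, movable, benefit_fn):
    """For each bin, record the single best move (vm, target bin, gain)
    among the movable virtual machines that the bin currently hosts."""
    move = {}
    for a in bins:
        best = None
        for i in vms_in_bin[a]:
            if not movable[i]:
                continue  # a virtual machine is moved at most once
            for b in bins:
                if b == a:
                    continue
                g = benefit_fn(i, a, b)
                if best is None or g > best[2]:
                    best = (i, b, g)
        move[a] = best  # None when the bin has no movable virtual machine
    return move
```

When a move is contemplated from a given bin, a single lookup in this array then yields the move with the greatest benefit.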
Turning now to the more specific details, the technique 200 determines (decision block 208) whether another move is to be performed. In general, another move may be performed if: (1) the movement of a previously-unmoved virtual machine is the best move; and (2) the move may be made into a bin that has sufficient capacity.
If another move is to be made, the next best virtual machine move is selected (block 212) from the current bin (i.e., the previous target bin) and a determination is made (decision block 214) whether the target bin is at the maximum capacity or the virtual machine is not moveable. If another move may be made, then the selected virtual machine is moved, pursuant to block 216, and the cost of the resulting current candidate configuration is determined, pursuant to block 218. Referring to
Next, pursuant to the technique 200, the benefits are updated, pursuant to block 224. In this manner, due to the move, the technique 200 includes re-determining the benefits for the virtual machines that communicate with the moved virtual machine. Consequently, the technique 200 includes updating (block 226) the move array.
If another move is not to be made (block 208), then the technique 200 includes returning (block 210) the best cost and the best configuration.
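Blocks 208 through 218 can be condensed into the following sketch, which chains single-machine moves from target bin to target bin while tracking the lowest-cost configuration seen. Here best_move_fn stands in for a lookup into the move array, and all names are illustrative assumptions:

```python
def search_moves(bin_of, capacity, occupancy, start_bin, initial_cost, best_move_fn):
    """Chain single-VM moves, updating the cost incrementally and
    remembering the best configuration encountered."""
    cost = initial_cost
    best_cost, best_config = cost, dict(bin_of)
    moved = set()
    current_bin = start_bin
    while True:
        mv = best_move_fn(current_bin, moved)   # (vm, target, gain) or None
        if mv is None:
            break                               # no movable VM remains (block 208)
        vm, target, gain = mv
        if occupancy[target] >= capacity[target]:
            break                               # target bin is at capacity (block 214)
        occupancy[bin_of[vm]] -= 1              # perform the move (block 216)
        bin_of[vm] = target
        occupancy[target] += 1
        moved.add(vm)
        cost -= gain                            # incremental cost update (block 218)
        if cost < best_cost:
            best_cost, best_config = cost, dict(bin_of)
        current_bin = target                    # next move comes from the target bin
    return best_cost, best_config               # block 210
```

Note that the cost of each new candidate configuration is obtained by subtracting the benefit of the move from the previous cost, as in block 126 of technique 120.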
In accordance with example implementations, the techniques 120 and/or 200 may be used in connection with an online processing center in which new jobs are mapped into an existing assignment as the jobs enter into the system. As non-limiting examples, the system may be a cloud system or a network testbed, which is, in general, continuously available, as jobs stream in and exit the system. The techniques 120 and/or 200 may be used for purposes of mapping jobs into the system upon entry, thereby reducing the capacity of communication links and available bin sizes as directed by the returned configuration. On exit, these capacities and sizes are restored.
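The online bookkeeping described above can be sketched as follows, with class and attribute names that are illustrative assumptions: on entry, a job's placement consumes link capacities and bin slots; on exit, they are restored.

```python
class OnlineCenter:
    """Track remaining link bandwidth and bin slots as jobs enter and exit."""

    def __init__(self, link_cap, bin_slots):
        self.link_cap = link_cap      # {(a, b): remaining bandwidth}
        self.bin_slots = bin_slots    # {bin: remaining VM slots}

    def _apply(self, placement, demands, sign):
        # placement: {vm: bin}; demands: {(i, k): bandwidth}
        for vm, b in placement.items():
            self.bin_slots[b] += sign         # consume or restore a slot
        for (i, k), bw in demands.items():
            a, b = placement[i], placement[k]
            if a != b:                        # intra-bin traffic uses no link
                self.link_cap[tuple(sorted((a, b)))] += sign * bw

    def admit(self, placement, demands):
        self._apply(placement, demands, -1)   # job enters: reduce capacities

    def release(self, placement, demands):
        self._apply(placement, demands, +1)   # job exits: restore capacities
```

A release with the same placement and demands exactly undoes the corresponding admit, which matches the restore-on-exit behavior described above.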
As a more specific example,
As another non-limiting example, the scheduler 60 may perform a technique 350 in connection with
The techniques 120 and 200 minimize the consumed communication bandwidth subject to a capacity (bin size) constraint. In accordance with further implementations, the techniques 120 and/or 200 may be inverted for purposes of either minimizing the maximum number of virtual machines that are packed into a bin subject to a communications constraint or minimizing the number of bins that are used in the packing subject to a communications constraint. The latter objective may be of particular interest when computation costs are dominant, such as, for example, in a power minimization application. More specifically, for power minimization, minimizing the number of bins, in turn, minimizes such factors as the number of physical machines that are employed, the number of switches or switch ports that are employed, and so forth. Minimizing the maximum number of items packed into a bin may be of particular interest when new jobs are expected to consume resources uniformly across a cluster of physical machines and additional capacities are expected to be consumed across the cluster as new jobs are added.
Referring to
Minimizing the number of used bins subject to a communications constraint involves two issues. The first issue concerns selecting the right subset of bins: assuming that the virtual machines are to be packed into m bins, a decision is made regarding which m of the n available bins should be used. The second issue involves selecting the best move overall, not simply the best move away from the bin that just received a virtual machine.
For the first issue, the best m of n subset is problem independent if the bins have unit weights and interconnections between the bins are uniform. For this case, in which all subsets of m bins are identical, the scheduler 60 may apply a technique 420 that is depicted in
When the bins do not have equal sizes and inter-communication capacities, the scheduler 60 may perform a technique 450 that is depicted in
While a limited number of examples have been disclosed herein, those skilled in the art, having the benefit of this disclosure, will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2012/035810 | 4/30/2012 | WO | 00 | 10/22/2014 |