1. Background and Relevant Art
Conventional computer systems are now commonly used for a wide range of objectives, whether for productivity, entertainment, and so forth. One reason for this is that not only do computer systems tend to add efficiency through task automation, but computer systems can also be easily configured and reconfigured over time for such tasks. For example, if a user finds that one or more application programs are running too slowly, it can be a relatively straightforward matter for the user to add more memory (e.g., RAM), add or swap out one or more processors (e.g., a CPU, GPU, etc.), add to or improve the current storage, or even add or replace other peripheral devices that may be used to share or handle the workload. Similarly, it can be relatively straightforward for the user to install or upgrade various application programs on the computer, including the operating system. This tends to be true, at least in theory, even on a large, enterprise scale.
In practice, however, the mere ability to add or upgrade physical and/or software components for any given computer system is often daunting, particularly on a large scale. For example, although upgrading the amount of memory tends to be fairly simple for an individual computer system, upgrading storage, peripheral devices, or even processors for several different computer systems often involves some accompanying software reconfiguration or reinstallation to account for the changes. Thus, if a company's technical staff were to determine that the present computer system resources in a department (or in a server farm) were inadequate for any reason, the technical staff might be more inclined to either add entirely new physical computer systems, or completely replace existing physical systems, instead of adding individual component system parts.
Replacing or adding new physical systems, however, comes with another set of costs, and cannot typically occur instantaneously. For example, one or more of the technical staff may in some cases need to spend hours physically lifting and moving the computer systems into position, connecting each of the various wires to the computer systems, and loading various installation and application program media thereon. The technical staff may also need to perform a number of manual configurations on each computer system to ensure that the new computer systems can communicate with other systems on the network, and that the new computer systems can function at least as well for a given end-user as the prior computer systems.
Recent developments in virtual machine (“VM”) technology have improved or remediated many of these types of constraints with physical computer system upgrades. In short, a virtual machine comprises a set of files that operate as an additional, unique computer system within the confines and resource limitations of a physical host computer system. As with any conventional physical computer system, a virtual machine comprises an operating system and various user-based files that can be created and modified, and comprises a unique name or identifier by which the virtual computer system can be found or otherwise communicate on a network. Virtual machines, however, differ from conventional physical systems since virtual machines typically comprise a set of files that are used within a well-defined boundary inside another physical host computer system. In particular, there can be several different virtual machines installed on a single physical host, and the users of each virtual machine can use each different virtual machine as though it were a separate and distinct physical computer system.
A primary difference from physical systems, however, is that the resources allocated to and used by a virtual machine can be assigned and allocated electronically. For example, an administrator can use a user interface to assign and provide a virtual machine with access to one or more physical host CPUs, as well as access to one or more storage addresses and memory addresses. Specifically, the administrator might divide the resources of a physical host having 4 GB of RAM and 2 CPUs so that each of two different virtual machines is assigned 1 CPU and 2 GB of RAM. An end-user of either of the virtual machines in this particular example might thus believe he or she is using a unique computer system that has 1 CPU and 2 GB of RAM.
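The delegation described above can be sketched in a few lines of Python. This is a minimal illustration under assumed names (`Host`, `assign` are hypothetical, not an interface described in the text), showing a 2-CPU, 4 GB host split between two virtual machines as in the example:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    cpus: int
    ram_gb: int
    allocations: dict = field(default_factory=dict)  # vm name -> (cpus, ram_gb)

    def free_cpus(self) -> int:
        return self.cpus - sum(c for c, _ in self.allocations.values())

    def free_ram_gb(self) -> int:
        return self.ram_gb - sum(r for _, r in self.allocations.values())

    def assign(self, vm_name: str, cpus: int, ram_gb: int) -> None:
        # Refuse any assignment that would oversubscribe the physical host.
        if cpus > self.free_cpus() or ram_gb > self.free_ram_gb():
            raise ValueError("insufficient host resources")
        self.allocations[vm_name] = (cpus, ram_gb)

# The example from the text: a 2-CPU, 4 GB host divided between two VMs.
host = Host(cpus=2, ram_gb=4)
host.assign("vm1", cpus=1, ram_gb=2)
host.assign("vm2", cpus=1, ram_gb=2)
print(host.free_cpus(), host.free_ram_gb())  # → 0 0
```

Because the assignment is purely electronic, changing either allocation later is a matter of updating the same bookkeeping, which is the property the following paragraphs build on.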
In view of the foregoing, one will appreciate that adding new virtual machines, or improving the resources of virtual machines, can also be done through various electronic communication means. That is, a system administrator can add new virtual machines within a department (e.g., for a new employee), or to the same physical host system to share various processing tasks (e.g., on a web server with several incoming and outgoing communications) by executing a request to copy a set of files to a given physical host. The system administrator might even use a user interface from a remote location to set up the virtual machine configurations, including reconfiguring the virtual machines when they are operating inefficiently. For example, the administrator might use a user interface to electronically reassign more CPUs and/or memory/storage resources to virtual machines that the administrator identifies as running too slowly.
Thus, the ability to add, remove, and reconfigure virtual machines can provide a number of advantages when comparing similar tasks with physical systems. Notwithstanding these advantages, however, there are still a number of difficulties when deploying and configuring virtual machines that can be addressed. Many of these difficulties relate to the amount and type of information that can be provided to an administrator pursuant to identifying and configuring operations in the first instance. For example, conventional virtual machine monitoring systems can be configured to indicate the extent of host resource utilization, such as the extent to which one or more virtual machines on the host are taxing the various physical host CPUs and/or memory. Conventional monitoring software might even be configured to send one or more alerts through a given user interface to indicate some default resource utilizations at the host.
In some cases, the monitoring software might even provide one or more automated load balancing functions, which includes automatically redistributing various network-based send/receive functions among various virtual machine servers. Similarly, some conventional monitoring software may have one or more automated configurations for reassigning processors and/or memory resources among the virtual machines as part of the load balancing function. Unfortunately, however, such alerts and automated reconfigurations tend to be minimal in nature, and tend to be of limited use in highly customized environments. As a result, a system administrator often has to perform a number of additional, manual operations if a preferred solution involves introduction of a new machine, or movement of an existing virtual machine to another host.
Furthermore, the alerts themselves tend to be fairly limited in nature, and often require a degree of analysis and application by the system administrator in order to determine the particular cause of the alert. For example, conventional monitoring software ordinarily monitors only physical host operations/metrics, not virtual machine operations, much less application program performance within the virtual machines. As a result, the administrator can usually only infer from the default alerts regarding host resource utilization that the cause of poor performance of some particular application program might have something to do with virtual machine performance.
Accordingly, there are a number of difficulties with virtual machine management and deployment that can be addressed.
Implementations of the present invention overcome one or more problems in the art with systems, methods, and computer program products configured to automatically monitor and reallocate physical host resources among virtual machines in order to optimize performance. In particular, implementations of the present invention provide a widely extensible system in which a system administrator can set up customized alerts for a customized use environment. Furthermore, these customized alerts can be based not only on specific physical host metrics, but also on specific indications of virtual machine performance and application program performance, and even on other sources of relevant information (e.g., room temperature). In addition, implementations of the present invention allow the administrator to implement customized reallocation solutions, which can be used to optimize performance not only of virtual machines, but also of application programs operating therein.
For example, a method of automatically optimizing performance of an application program by the allocation of physical host resources among one or more virtual machines can involve identifying one or more changes in performance of one or more application programs running on one or more virtual machines at a physical host. The method can also involve identifying one or more resource allocations of physical host resources for each of the one or more virtual machines. In addition, the method can involve automatically determining a new resource allocation of physical host resources for each of the virtual machines based on the change in application performance. Furthermore, the method can involve automatically implementing the new resource allocations for the virtual machines, wherein performance of the one or more application programs is optimized.
In addition to the foregoing, an additional or alternative method of automatically managing physical host resource allocations among one or more virtual machines based on information from an end-user can involve receiving one or more end-user configurations regarding allocation of physical host resources by one or more hosted virtual machines. The method can also involve receiving one or more messages regarding performance metrics related to the one or more virtual machines and of the physical host. In addition, the method can involve automatically determining that the one or more virtual machines are operating at a suboptimal level defined by the received one or more end-user configurations. Furthermore, the method can involve automatically reallocating physical host resources for the one or more of the virtual machines based on the received end-user configurations. As such, the one or more virtual machines use physical host resources at an optimal level defined by the received end-user configurations.
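The second method above can be sketched as a single policy-driven pass. This is a hedged illustration only: the configuration format, the per-VM threshold names, and the fixed step size are all assumptions, since the text does not fix a concrete representation for the end-user configurations or metric messages:

```python
def reallocate(configs, metrics, allocations):
    """Return updated {vm: ram_gb} allocations.

    configs:     {vm: {"mem_high": pct, "step_gb": int}} -- end-user configurations
    metrics:     {vm: {"mem_used_pct": pct}}             -- reported performance metrics
    allocations: {vm: ram_gb}                            -- current host allocations
    """
    new_alloc = dict(allocations)
    for vm, policy in configs.items():
        # Suboptimal operation, as defined by the received end-user configuration.
        if metrics[vm]["mem_used_pct"] > policy["mem_high"]:
            new_alloc[vm] = allocations[vm] + policy["step_gb"]
    return new_alloc

configs = {"vm1": {"mem_high": 90, "step_gb": 1},
           "vm2": {"mem_high": 90, "step_gb": 1}}
metrics = {"vm1": {"mem_used_pct": 96}, "vm2": {"mem_used_pct": 40}}
result = reallocate(configs, metrics, {"vm1": 2, "vm2": 2})
print(result)  # → {'vm1': 3, 'vm2': 2}
```

Only the VM operating outside its user-defined bounds is adjusted; the others keep their allocations, which matches the claim that reallocation is driven by the received configurations rather than a fixed default.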
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Implementations of the present invention extend to systems, methods, and computer program products configured to automatically monitor and reallocate physical host resources among virtual machines in order to optimize performance. In particular, implementations of the present invention provide a widely extensible system in which a system administrator can set up customized alerts for a customized use environment. Furthermore, these customized alerts can be based not only on specific physical host metrics, but also on specific indications of virtual machine performance and application program performance, and even on other sources of relevant information (e.g., room temperature). In addition, implementations of the present invention allow the administrator to implement customized reallocation solutions, which can be used to optimize performance not only of virtual machines, but also of application programs operating therein.
To these and other ends, implementations of the present invention include the use of a framework that a user can easily extend and/or otherwise customize to create their own rules. Such rules, in turn, can be used for various, customized alerting functions, and to ensure efficient allocation and configuration of a virtualized environment. In one implementation, for example, the components and modules described herein can thus provide for automatic (and manual) recognition of issues within virtualized environments, as well as solutions thereto. Furthermore, users can customize the policies for these various components and modules, whereby the components and modules take different action depending on the hardware or software that is involved in the given issue.
In addition, and as will be understood more fully herein, implementations of the present invention further provide automated solutions for fixing issues, and/or for recommending more efficient environment configurations for virtualized environments. Such features can be turned “on” or “off.” When enabled, the customized rules allow the monitoring service to identify the resources for a user-specified condition. Once any of the conditions arise, the monitoring service can then provide an alert (or “tip”) that can then be presented to the user. Depending on the configuration that the user has specified in the rules, these alerts or tips can be configured to automatically implement the related resolution, and/or can require user initiation of the recovery process. In at least one implementation, an application-specific solution means that a solution for a virtual machine that is running a mail server can be different than a solution for a virtual machine that is running a database server.
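The rule framework described above can be sketched as follows. The `Rule` fields and the mail-server/database thresholds are hypothetical stand-ins; the point is only the pairing of a user-specified condition with a resolution that is either applied automatically or surfaced as a tip for user-initiated recovery:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # user-specified condition over metrics
    resolution: Callable[[], str]      # the related resolution
    auto: bool                         # True: implement automatically; False: tip only

def evaluate(rules, metrics):
    """Check each rule; auto rules act, manual rules produce alerts (tips)."""
    alerts, actions = [], []
    for rule in rules:
        if rule.condition(metrics):
            if rule.auto:
                actions.append(rule.resolution())
            else:
                alerts.append(f"tip: {rule.name}")
    return alerts, actions

# Application-specific rules: a mail-server VM and a database VM get
# different conditions and different resolutions.
rules = [
    Rule("mail server memory", lambda m: m["mem_pct"] > 90,
         lambda: "add 1 GB to mail VM", auto=True),
    Rule("database disk I/O", lambda m: m["disk_queue"] > 8,
         lambda: "move database VM", auto=False),
]
alerts, actions = evaluate(rules, {"mem_pct": 95, "disk_queue": 12})
```

Here the memory rule resolves itself while the I/O rule only raises a tip, illustrating the per-rule choice between automatic implementation and user-initiated recovery.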
In addition, and as previously mentioned, such customizations can also extend to specific hardware configurations that are identified and determined by the end-user (e.g., system administrator). In one implementation, for example, an end-user can customize an alert so that when the number of transactions handled by certain resources reaches some critical point, the monitoring service can deploy a virtual machine that runs a web server with the necessary applications inside. Accordingly, implementations of the present invention allow users and administrators to solve issues proactively, or reactively as needed, by using information about the specific hardware and software that is running, and even about various environmental factors in which the hardware and software are running, even in highly customized environments.
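The transaction-threshold example above can be sketched as a simple check-and-deploy hook. The critical point and the `deploy_web_server_vm` callback are assumed for illustration; in practice the deployment would copy the VM files and web-server applications to the chosen host:

```python
CRITICAL_TRANSACTIONS = 10_000  # assumed critical point, set by the end-user

deployed = []

def deploy_web_server_vm(host):
    # Stand-in for copying the VM's files, with the necessary web-server
    # applications inside, to the given physical host.
    deployed.append(host)
    return f"web-server VM deployed on {host}"

def check_transactions(host, transactions_per_min):
    """Deploy proactively once the user-defined critical point is reached."""
    if transactions_per_min >= CRITICAL_TRANSACTIONS:
        return deploy_web_server_vm(host)
    return None

check_transactions("host-130", 4_000)   # below the critical point: no action
check_transactions("host-130", 12_000)  # at/above the critical point: deploy
```

Because the threshold is end-user-defined, the same hook serves proactive capacity planning in one environment and reactive overload handling in another.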
Referring now to the figures,
In addition,
Of course, one will appreciate that this particular configuration is not meant to be limiting in any way. That is, one will appreciate that host 130 can further comprise various storage resources, whether accessible locally or over a network, as well as various other peripheral components for storage and processing. Furthermore, implementations of the present invention are equally applicable to physical hosts that comprise more or fewer than the illustrated resources. Still further, there can be more than one physical host that is hosting one or more still additional virtual machines in this particular environment. Only one physical host, however, is shown herein for purposes of convenience in illustration.
In any event, and as previously mentioned,
Thus, one will appreciate that at least one “trigger” for reallocating resources can be the memory requirements of any given virtual machine and/or corresponding application program operating therein, particularly considered in the context of other virtual machines and applications at host 130. Along these lines,
As a preliminary matter, the figures illustrate VM monitoring service 110 as a single component, such as a single application program. One will appreciate, however, that monitoring service 110 can comprise several different application components that are distributed across multiple different physical servers. In addition, the functions of monitoring various metric information, receiving and processing end-user policy information, and implementing policies on the various physical hosts can be performed by any of the various monitoring service 110 components at different locations. Accordingly, the present figures illustrate a single service component for handling these functions by way of convenience in explanation.
In any event, this particular example of
By contrast,
In addition, one will appreciate that there can be many additional types of metric information beyond those specifically described above. As understood herein, many of these metrics can be heavily end-user customized based on the user's knowledge of a particular physical or virtual operating environment. For example, the end-user may have particular knowledge about the propensity of a particular room where a set of servers are used to rise in temperature. The end-user could then configure the metric messages 125, 127 to report various temperature counter information, as well. In other cases, the end-user could direct such information from some other third-party counter that monitors environmental factors and reports directly to the monitoring service 110. Thus, not only can the metric information reported to monitoring service 110 vary widely, but the monitoring service 110 can also be configured to receive and monitor relevant information from a wide variety of different sources, which information could ultimately implicate performance of the virtual machines 140 and/or physical hosts 130.
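The extensible metric sources described above can be sketched as a small registry that the monitoring service polls. The source names and the register/poll interface are hypothetical; the point is that a third-party environmental counter plugs in exactly like a host or VM counter:

```python
sources = {}

def register_source(name, read_fn):
    """Register any metric provider, including third-party environment counters."""
    sources[name] = read_fn

def poll_all():
    # The monitoring service gathers one reading from every registered source,
    # whatever kind of counter stands behind it.
    return {name: read() for name, read in sources.items()}

# Built-in host and VM counters (fixed readings stand in for live queries).
register_source("host_cpu_pct", lambda: 72.0)
register_source("vm1_mem_pct", lambda: 64.0)
# End-user-added environmental counter for the server room's temperature.
register_source("room_temp_c", lambda: 31.5)

readings = poll_all()
```

A rule could then key off `room_temp_c` just as easily as off a CPU counter, which is what lets environmental factors implicate virtual machine placement decisions.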
In any event,
For example,
As a result, when determination module 120 detects (e.g., comparing metrics 125b with configuration policy 115) that these particularly defined conditions are met, determination module 120 automatically reallocates the memory and processing resources in accordance with message 200. For example,
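The determination step above can be sketched as a comparison of reported metrics against the configuration policy, followed by the reallocation carried in the resulting instruction message. All field names here are hypothetical; the sketch only shows the shape of the check-then-shift logic:

```python
def determine(policy, metrics, allocations):
    """Return new allocations if the policy condition is met, else None."""
    vm = policy["vm"]
    # Compare the reported metrics against the configuration policy.
    if metrics[vm]["cpu_pct"] < policy["cpu_low"]:
        # Condition met: shift the policy-defined amount to the busier VM.
        new_alloc = {k: dict(v) for k, v in allocations.items()}
        new_alloc[vm]["ram_gb"] -= policy["shift_gb"]
        new_alloc[policy["to_vm"]]["ram_gb"] += policy["shift_gb"]
        return new_alloc
    return None

policy = {"vm": "vm140a", "to_vm": "vm140b", "cpu_low": 10, "shift_gb": 1}
allocations = {"vm140a": {"ram_gb": 2}, "vm140b": {"ram_gb": 2}}
result = determine(policy, {"vm140a": {"cpu_pct": 4}}, allocations)
# result: vm140a gives up 1 GB, vm140b gains 1 GB
```

When the condition is not met the function returns `None` and the existing allocations stand, mirroring the behavior of a determination module that only acts once its defined conditions arise.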
Accordingly,
Simply reallocating resources for existing virtual machines, however, is only one way to optimize resource utilization by virtual machines, and accompanying application performance therein. In some cases, for example, it may be preferable to reallocate resources by adding a new virtual machine, whether on host 130, or on some other physical host system (not shown), or even moving an existing virtual machine to another host. For example,
For example,
In either case, the load needed to run Application 155 would then be shared by two different virtual machines. Again, as previously stated with
In particular,
Of course, one will appreciate that instructions 230 could further include some additional reallocations of memory resources 107 and processing resources 113 among all the previously existing virtual machines 140a and 140b. For example, in addition to adding new virtual machine 140c, monitoring service 110 could include instructions to drop/add, or otherwise alter the resource allocations 143a and/or 143b for virtual machines 140a and 140b. Monitoring service 110 could send such instructions regardless of whether adding new virtual machine 140c to host 130 or to another physical host (not shown).
In any event, and as with the solution provided by instructions 210, the solution provided by instructions 230 results in a significant decrease in memory and CPU usage for virtual machine 140b, since the workload used by Application 155 is now shared over two different virtual machines. Specifically,
Of course, one will appreciate that there can still be several other ways that monitoring service 110 reallocates resources. For example, monitoring service 110 can be configured to iteratively adjust resource allocations over some specified period. In particular with respect to
The monitoring service 110 might then reallocate the resources of both virtual machine 140a and 140b (again) on a recurring, iterative basis in conjunction with some continuously received metrics (e.g., 125) to achieve an appropriate balance in resources. For example, the monitoring service 110 could automatically downwardly adjust the memory and processing assignments for virtual machine 140a, while simultaneously and continuously upwardly adjusting the memory and processing resources of virtual machine 140b. If the monitoring service 110 could not achieve a balance, the monitoring service might then move virtual machine 140b to another physical host, or provide yet another alert (e.g., as defined by the user) that indicates that the automated solution was only partly effective (or ineffective altogether). In such a case, rather than automatically move the virtual machine 140b, monitoring service 110 could provide a number of potential recommendations, including that the user request a move of the virtual machine 140b to another physical host.
Along similar lines, monitoring service 110 can be configured by the end-user to continuously adjust resource assignments downwardly on a periodic basis any time the monitoring service identifies that a virtual machine 140 is rarely using its resource allocations. In addition, the monitoring service 110 can continually maintain a report of such activities across a large farm of physical hosts 130, which can allow the monitoring service 110 to readily identify where new virtual machines can be created, as needed, and/or where virtual machines can be moved (or where application programs can be assigned/shared). Again, since each of these solutions can be provided on a highly configurable and automated basis, such solutions can save a great deal of effort and time for a given administrator, particularly in an enterprise environment.
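The periodic downward adjustment just described can be sketched as follows. The 20% utilization threshold, the 1 GB step, and the 1 GB floor are assumptions for illustration:

```python
def downscale_idle(allocations, usage_history, threshold_pct=20, step_gb=1, floor_gb=1):
    """Shrink any VM whose average utilization over the period is under threshold."""
    for vm, samples in usage_history.items():
        avg = sum(samples) / len(samples)
        # Only reclaim memory from VMs that rarely use their allocation,
        # and never shrink an allocation below the floor.
        if avg < threshold_pct and allocations[vm] - step_gb >= floor_gb:
            allocations[vm] -= step_gb
    return allocations

allocations = {"vm1": 4, "vm2": 2}
history = {"vm1": [5, 10, 8], "vm2": [60, 75, 80]}  # vm1 rarely uses its share
allocations = downscale_idle(allocations, history)
# vm1 is stepped down by 1 GB; vm2, which is busy, is left alone
```

Run once per period across a farm, the reclaimed headroom is exactly what lets the service identify where new virtual machines can be created or moved.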
One will appreciate, therefore, that the components and mechanisms described with respect to
In addition to the foregoing, implementations of the present invention can also be described in terms of flow charts comprising one or more acts in a method for accomplishing a particular result. For example,
For example,
In addition,
Furthermore,
In addition to the foregoing,
In addition,
Furthermore,
Accordingly, implementations of the present invention provide a number of components, modules, and mechanisms for ensuring that virtual machines, and corresponding application programs executing therein, can continue to operate at an efficient level with minimal or no human interaction. Specifically, implementations of the present invention provide an end-user (e.g., an administrator) with an ability to tailor resource utilization to specific configurations of virtual machines. In addition, implementations of the present invention provide the end-user with the ability to receive customized alerts for specific, end-user identified operations of the virtual machines and application programs. These and other features, therefore, provide the end-user with the added ability to automatically implement complex resource allocations without otherwise having to take such conventional steps of physically/manually adding, removing, or updating various hardware and software-based resources.
The embodiments of the present invention may comprise a special purpose or general-purpose computer including various computer hardware, as discussed in greater detail below. Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.