Cloud computing is a form of network-accessible computing that provides shared computer processing resources and data to computers and other devices on demand over the Internet. Cloud computing enables the on-demand access to a shared pool of configurable computing resources, such as computer networks, servers, storage, applications, and services. The resources can be rapidly provisioned and released to a user with reduced management effort relative to the maintenance of local resources by the user. In some implementations, cloud computing and storage enables users, including enterprises, to store and process their data in third-party data centers that may be located far from the user, including distances that range from within a same city to across the world. The reliability of cloud computing is enhanced by the use of multiple redundant sites, where multiple copies of the same applications/services may be dispersed around different data centers (or other cloud computing sites), which enables safety in the form of disaster recovery when some cloud computing resources are damaged or otherwise fail. Each instance of the applications/services may implement and/or manage a set of focused and distinct features or functions on the corresponding server set including virtual machines.
Cloud applications and platforms usually have some notion of fault isolation in them by segregating resources into logical divisions. Each logical division may include a corresponding number and variety of resources, and may be duplicated at multiple sites. Such resources, such as servers, switches, and other computing devices that run software and/or firmware, may need to be periodically updated with the latest software/firmware. Conventionally, installing the latest software/firmware on resources requires shutting down a server and any virtual machines running on the server. After the server has been updated, the server and the virtual machines running on the server are rebooted. During this process, the user of the virtual machines can experience sixty minutes of downtime.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Methods, systems, and computer program products are provided for increasing virtual machine availability during server updates. A first resource set is designated to include one or more servers needing an update. A first set of virtual machines running in a live manner on the one or more servers is migrated from the first resource set to a second resource set to convert the first resource set to an empty resource set. The first set of virtual machines is migrated to continue running in a live manner on the second resource set. The update is performed on the one or more servers of the empty resource set to create an updated empty resource set.
Further features and advantages of the invention, as well as the structure and operation of various embodiments, are described in detail below with reference to the accompanying drawings. It is noted that the embodiments are not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein.
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present application and, together with the description, further serve to explain the principles of the embodiments and to enable a person skilled in the pertinent art to make and use the embodiments.
The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
The present specification and accompanying drawings disclose one or more embodiments that incorporate the features of the present invention. The scope of the present invention is not limited to the disclosed embodiments. The disclosed embodiments merely exemplify the present invention, and modified versions of the disclosed embodiments are also encompassed by the present invention. Embodiments of the present invention are defined by the claims appended hereto.
References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Furthermore, it should be understood that spatial descriptions (e.g., “above,” “below,” “up,” “left,” “right,” “down,” “top,” “bottom,” “vertical,” “horizontal,” etc.) used herein are for purposes of illustration only, and that practical implementations of the structures described herein can be spatially arranged in any orientation or manner.
In the discussion, unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended.
Numerous exemplary embodiments are described as follows. It is noted that any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, embodiments disclosed in any section/subsection may be combined with any other embodiments described in the same section/subsection and/or a different section/subsection in any manner.
Cloud computing is a form of network-accessible computing that provides shared computer processing resources and data in a network-accessible resource (e.g., server) infrastructure to computers and other devices on demand over the Internet. Cloud computing enables the on-demand access to a shared pool of configurable computing resources, such as computer networks, servers, storage, applications, and services, which can be rapidly provisioned and released to a user with reduced management effort relative to the maintenance of local resources by the user.
A cloud supporting service is defined herein as the service that manages the network-accessible server infrastructure. Examples of such a supporting service include Microsoft® Azure®, Amazon Web Services™, Google Cloud Platform™, IBM® Smart Cloud, etc. The supporting service may be configured to build, deploy, and manage applications and services on the corresponding set of servers. For example, a virtual machine (VM) is software that executes in at least one processor circuit of a computing device and is configured to emulate a computer system, being based on a computer architecture and providing functionality of a physical computer. An operating system (OS) may run on top of a virtual machine that, in turn, executes applications, and a hypervisor may be present on the computing device that creates and runs virtual machines, using native execution to share and manage hardware, thereby allowing multiple environments, which are isolated from one another, to exist on the same physical machine.
Cloud applications and platforms usually have some notion of fault isolation in them by segregating resources into logical divisions. Each logical division may include a corresponding number and variety of resources (e.g., servers, operating systems, virtual machines, health monitors, network switches, applications, storage devices, etc.), and may be duplicated at multiple sites. Such resources, such as servers, switches, and other computing devices that run software and/or firmware, may need to be periodically updated with the latest software/firmware. Conventionally, installing the latest software/firmware on resources requires shutting down a server and any virtual machines running on the server. After the server has been updated, the server and the virtual machines running on the server are rebooted. During this process, the users of the virtual machines experience downtime that can last minutes, hours, or even longer. This can be very inconvenient to the users, which may include enterprises (e.g., businesses) that rely on the resources to be running to perform computing functions (e.g., providing access to documents, databases, communications applications, marketing applications, websites, etc.).
As follows, example embodiments are described that are directed to techniques for increasing resource availability during server updates, including the availability of resources such as virtual machines. For instance.
Resource sets 110 and 112 may form a network-accessible server set, such as a cloud computing server network. For example, each of resource sets 110 and 112 may comprise a group or collection of servers (e.g., computing devices) that are each accessible by a network such as the Internet (e.g., in a “cloud-based” embodiment) to store, manage, and process data. For example, as shown in
In accordance with such an embodiment, each of resource sets 110 and 112 may be configured to service a particular geographical region. For example, resource set 110 may be configured to service the northeastern region of the United States, and resource set 112 may be configured to service the southwestern region of the United States. It is noted that the network-accessible server set may include any number of resource sets, and each resource set may service any number of geographical regions worldwide.
Note that the variable “N” is appended to various reference numerals identifying illustrated components to indicate that the number of such components is variable, for example, with any value of 2 and greater. Note that for each distinct component/reference numeral, the variable “N” has a corresponding value, which may be different for the value of “N” for other components/reference numerals. The value of “N” for any particular component/reference numeral may be less than 10, in the 10s, in the hundreds, in the thousands, or even greater, depending on the particular implementation.
Each of server(s) 114, 116, 118, 120 may be configured to execute one or more services (including microservices), applications, and/or supporting services. As shown in
Computing devices 102 includes the computing devices of users (e.g., individual users, family users, enterprise users, governmental users, etc.) that access network-accessible resource sets 110 and 112 for cloud computing resources through network 108. Computing devices 102 may include any number of computing devices, including tens, hundreds, thousands, millions, or even greater numbers of computing devices. Computing devices of computing devices 102 may each be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., a Microsoft® Surface® device, a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer such as an Apple iPad™, a netbook, etc.), a mobile phone, a wearable computing device, or other type of mobile device, or a stationary computing device such as a desktop computer or PC (personal computer), or a server. Computing devices 102 may each interface with servers of server(s) 114, 116, 118, 120 through application programming interfaces (APIs) and/or by other mechanisms. Note that any number of program interfaces may be present.
Resource update engine 106 performs management functions for resource sets 110 and 112 including managing updates. Resource update engine 106 is configured to increase virtual machine availability of virtual machines 122A-122N, 124A-124N, 126A-126N, 128A-128N, etc., operating within resource sets 110 and 112 during updates. For instance, resource update engine 106 may designate one or more servers of server(s) 114, 116, 118, 120 as a first resource set for an update, and accordingly may migrate virtual machines (e.g., one or more of 122A-122N, 124A-124N, 126A-126N, and 128A-128N) running on the designated server(s) in a live manner to a second resource set to convert the servers of the first resource set to an empty resource set, and such that the migrated virtual machines run in a live manner on the second resource set. Migrating a virtual machine in a live manner may include moving the memory, session state, storage, network connectivity, and/or any other necessary attributes of the running virtual machine from the first server set to the second server set without substantial perceived downtime, including limiting the downtime to a couple of seconds or less. In this manner, a user of an application executing on a live running virtual machine suffers no significant loss of application/virtual machine functionality, and in fact may perceive no downtime at all. Resource update engine 106 is configured to perform the update (e.g., a software and/or firmware update) on the server(s) of the emptied resource set to create an updated empty resource set. Resource update engine 106 may then designate virtual machines for invoking on and/or moving to the updated empty resource set.
Accordingly, embodiments enable increased virtual machine availability during server updates in the network-accessible server infrastructure. Resource update engine 106 may increase virtual machine availability during server updates in various ways. For instance,
Flowchart 200 begins with step 202. In step 202, a first resource set is designated to include one or more servers needing an update. For example, with reference to
In step 204, the first set of virtual machines running on the one or more servers in a live manner is migrated to a second resource set to convert the first resource set to an empty resource set, and such that the first set of virtual machines runs in a live manner on the second resource set. For instance, with reference to
As described above, resource update engine 106 is configured to perform “live migration” of virtual machines such that users of the virtual machines suffer no substantial downtime (e.g., downtime in terms of a couple of seconds or less) in their live use of the virtual machines. In an embodiment, resource update engine 106 is configured to perform the “live migration” such that the virtual machines are migrated from the first resource set to a second resource set that has already been updated. In this manner, virtual machines may be migrated to resources that run newer software, while the first resource set is emptied and made available for updating.
Note that in one embodiment, the second resource set is empty of virtual machines prior to migrating the first set of virtual machines running on the server(s) in a live manner to the second resource set to run in a live manner on the second resource set. In this instance, the migrated virtual machines have access to all capacity (e.g., processing, storage, etc.) of the second resource set. In another embodiment, the second resource set contains at least one running virtual machine prior to migrating the first set of virtual machines running on the one or more servers in a live manner to the second resource set to run in a live manner on the second resource set. In this alternative, the migrated virtual machines share capacity of the second resource set (e.g., memory, processing cores, etc.) with the already present virtual machine(s) (e.g., to co-run in a live manner with the running virtual machine(s)).
Referring back to flowchart 200 in
Note that resource update engine 106 may be configured in various ways to perform its functions, including performing flowchart 200 of
Resource designator 308 may be configured to perform step 202 of flowchart 200, including being configured to designate a first server set that includes one or more servers needing an update. Resource designator 308 may designate such a set of servers one server at a time, may search for a block of related servers needing update, and/or may designate the servers for the first server set in any manner. Resource designator 308 may access information regarding the operational state of a server (e.g., a record stored in storage device 322 of virtual machines running on the server) and/or may obtain the state via a request directly to the server, and may use this information to determine whether the server is a candidate for update. For instance, as shown in
Storage device 322 may include hardware media such as a hard disk associated with a hard disk drive, a removable magnetic disk, a removable optical disk, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media.
Live resource migrator 310 may be configured to perform step 204 of flowchart 200, including being configured to empty the servers designated by resource designator 308 of live virtual machines so that the designated servers may be updated. As shown in
To migrate a virtual machine from source server to a destination server in a live manner, live resource migrator 310 may move the memory, session state, storage, network connectivity, and/or any other necessary attributes of the running virtual machine from source server to the destination server without substantial perceived downtime, including limiting the virtual machine downtime to a couple of seconds or less.
Live resource migrator 310 may implement one or more live migration techniques to migrate virtual machines. In one embodiment, to migrate a virtual machine in a live manner, live resource migrator 310 may copy all the memory pages associated with the virtual machine from the source server to the destination server while the virtual machine still runs on the source server, stop execution of the virtual machine on the source server, copy any dirty pages (changed memory pages since originally copying the memory pages), and then restart execution of the virtual machine on the destination server. In another embodiment, live resource migrator 310 may pause execution of the virtual machine on the source server, transfer the execution state (e.g., CPU state, registers, non-pageable memory, etc.) to the destination server, and then restart execution of the virtual machine on the destination server, while concurrently pushing the remaining memory pages from the source server to the destination server. Other techniques may be used for live migration by live resource migrator 310, as would be known to persons skilled in the relevant art(s).
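The pre-copy technique described above can be pictured with a toy simulation. The names below (`precopy_migrate`, `read_page`, `run_workload`) are invented for illustration only; an actual hypervisor copies pages over a network and tracks dirty pages in hardware, but the two-pass structure is the same:

```python
def precopy_migrate(read_page, page_ids, run_workload):
    """Toy model of pre-copy live migration (illustrative only)."""
    # Pass 1: copy every memory page while the VM still runs on the source.
    dest_pages = {p: read_page(p) for p in page_ids}

    # The VM keeps executing during pass 1, so some pages become dirty;
    # run_workload() models that execution and reports the dirtied pages.
    dirty = run_workload()

    # Stop-and-copy: pause the VM and re-copy only the dirty pages. The
    # perceived downtime covers just this (much smaller) second pass.
    for p in dirty:
        dest_pages[p] = read_page(p)

    # Execution then restarts on the destination server.
    return dest_pages
```

Because the second pass touches only the pages dirtied during the first pass, the pause before restarting on the destination stays short, which is what keeps the migration "live."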
Resource updater 312 of
By updating software/firmware on servers after migrating virtual machines in a live manner from the servers, the users of the virtual machines suffer no substantial downtime, while the servers being updated can be shut down, rebooted, restarted, etc., as needed for the update without affecting the virtual machine users.
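The overall designate/migrate/update sequence can be sketched as follows. This is a minimal sketch, not the described implementation: the `Server` class and function names are hypothetical, and live migration itself is abstracted to a simple list transfer between resource sets.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """Hypothetical stand-in for a server in a resource set."""
    name: str
    firmware: str
    vms: list = field(default_factory=list)

def perform_update(servers, target_firmware, destination):
    """Sketch of the three-step flow: designate, live-migrate, update."""
    # Designate a first resource set of servers needing the update.
    first_set = [s for s in servers if s.firmware != target_firmware]

    # Live-migrate each designated server's VMs to the second resource
    # set, converting the first resource set to an empty resource set.
    for server in first_set:
        destination.vms.extend(server.vms)
        server.vms = []

    # Perform the update on the now-empty servers, creating an updated
    # empty resource set that can host virtual machines again.
    for server in first_set:
        server.firmware = target_firmware
    return first_set
```

Because the servers are emptied before they are updated, they can be shut down and rebooted as needed without affecting the migrated virtual machines.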
Note that resource update engine 106 of
Configure logic 402 and classify logic 404 may be configured to designate a first resource set to include one or more servers needing an update, as indicated in step 202 of flowchart 200 (
Select/action logic 406 may be configured to migrate from the resource set (designated by configure logic 402 and classify logic 404) a set of virtual machines running on the servers in a live manner to a second resource set (e.g., as in step 204 of
In other embodiments, resource update engine 106 may be configured in other ways, as would be apparent to persons skilled in the relevant art(s) from the teachings herein.
After live migration and updating of servers is performed (e.g., according to the embodiments of
Flowchart 500 includes step 502. In step 502, a second set of virtual machines running in the live manner is migrated to the updated empty resource set. For example, with reference to
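Step 502 implies a repeating cycle: each freshly updated, emptied resource set can serve as the migration destination for the next stale set's virtual machines, so updates roll across the fleet. The sketch below assumes resource sets are plain dictionaries, migration is abstracted to a list transfer, and one already-updated empty set seeds the cycle; none of these details are prescribed by the embodiments.

```python
def rolling_update(resource_sets, target_version):
    """Roll an update across resource sets, wave by wave (illustrative)."""
    # Seed: find an already-updated resource set that is empty of VMs.
    destination = next(rs for rs in resource_sets
                       if rs["version"] == target_version and not rs["vms"])
    for rs in resource_sets:
        if rs["version"] == target_version:
            continue
        # Live-migrate the stale set's VMs onto the updated destination.
        destination["vms"].extend(rs["vms"])
        rs["vms"] = []
        # Update the emptied set; it becomes the next wave's destination.
        rs["version"] = target_version
        destination = rs
```

After the loop, every resource set runs the target version, and the last set updated is left empty, ready to receive the next round of migrations.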
Updates of software and/or firmware may be performed by resource updater 312 on further types of resources after the live migration by live resource migrator 310. For instance,
Flowchart 600 includes step 602. In step 602, a network switch associated with the first resource set is updated after all virtual machines running in the live manner on the first resource set are migrated. For example, as shown in
With reference to
Note that the selecting of servers (step 202 of
For instance,
Flowchart 700 includes step 702. In step 702, a server for the first resource set is selected based on an amount of time one or more virtual machines have been running in the live manner on the server. For example, with reference to
Flowchart 800 includes step 802. In step 802, a server for the first resource set is selected based on a version of at least one of software or firmware operating on the server. For example, with reference to
Flowchart 900 includes step 902. In step 902, a server for the first resource set is selected based on a number of virtual machines running in the live manner on the server. For example, with reference to
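Taken together, flowcharts 700, 800, and 900 suggest selecting servers for the first resource set by software/firmware version, virtual machine uptime, and virtual machine count. A hedged sketch follows; the particular combination (filter stale versions, then prefer the lowest VM count per step 902, with uptime as a tie-breaker) is an invented illustration, not an ordering the embodiments prescribe.

```python
def select_update_candidate(servers, current_version):
    """Pick one server for the first resource set (illustrative heuristic)."""
    # Flowchart 800: only servers with stale software/firmware qualify.
    stale = [s for s in servers if s["version"] != current_version]
    if not stale:
        return None
    # Flowchart 900: prefer the fewest live VMs (least migration work);
    # flowchart 700: break ties by how long the VMs have been running.
    return min(stale, key=lambda s: (len(s["vms"]), s["uptime_hours"]))
```

A real resource designator could weight these criteria differently, or consult health-monitor records such as those described for storage device 322.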
Note that as described with respect to
For instance.
Flowchart 1000 includes step 1002. In step 1002, the first set of virtual machines running in a live manner is migrated from the first resource set to the second resource set that is empty of virtual machines. For example, as described above with respect to
Resource sets 1102 and 1104 each include one or more servers (not shown in
Flowchart 1200 includes step 1202. In step 1202, the first set of virtual machines running in a live manner is migrated from the first resource set to the second resource set that already includes at least one virtual machine running in a live manner. For example, with reference to
Resource sets 1302 and 1304 each include one or more servers (not shown in
Note that in embodiments, the migration of virtual machines by live resource migrator 310 may be performed as a form of “defragmentation.”
In one example, virtual machines may be migrated from servers containing relatively fewer numbers of virtual machines to servers already running greater numbers of virtual machines in an effort to consolidate virtual machines to a smallest number of servers able to accommodate the virtual machines while minimizing the number of virtual machines migrated. In this manner, a greatest number of servers are emptied for update, while concentrating any disruption due to the migration to the fewest virtual machines. Such defragmentation may be performed over any number of servers.
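One way such defragmentation might be implemented is a greedy pass that pairs the least-loaded servers (donors) with the most-loaded servers that still have capacity (receivers). The dictionary-based server model and the greedy strategy below are illustrative assumptions; the text does not prescribe a specific algorithm.

```python
def consolidate(servers):
    """Greedy defragmentation sketch: empty as many servers as possible."""
    # Fewest-VM servers donate; most-VM servers receive (within capacity).
    by_load = sorted(servers, key=lambda s: len(s["vms"]))
    lo, hi = 0, len(by_load) - 1
    while lo < hi:
        donor, receiver = by_load[lo], by_load[hi]
        free = receiver["capacity"] - len(receiver["vms"])
        if free == 0:
            hi -= 1          # receiver is full; try the next-busiest server
            continue
        # Live-migrate as many of the donor's VMs as the receiver can hold.
        moved, donor["vms"] = donor["vms"][:free], donor["vms"][free:]
        receiver["vms"].extend(moved)
        if not donor["vms"]:
            lo += 1          # donor emptied; move on to the next donor
    # Emptied servers are now candidates for the first resource set.
    return [s for s in servers if not s["vms"]]
```

Because donors are processed in order of ascending load, the fewest virtual machines are migrated while the greatest number of servers are freed for update, matching the goal described above.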
Computing device(s) 102, resource update engine 106, resource sets 110 and 112, servers 114, 116, 118, and 120, network switch 130, network switch 132, resource designator 308, live resource migrator 310, resource updater 312, configure logic 402, classify logic 404, select/action logic 406, evaluate logic 408, terminate logic 410, suspend logic 412, resource sets 1102 and 1104, resource sets 1302 and 1304, flowchart 200, flowchart 500, flowchart 600, flowchart 700, flowchart 800, flowchart 900, flowchart 1000, and flowchart 1200 may be implemented in hardware, or hardware combined with software and/or firmware. For example, resource update engine 106, resource designator 308, live resource migrator 310, resource updater 312, configure logic 402, classify logic 404, select/action logic 406, evaluate logic 408, terminate logic 410, suspend logic 412, flowchart 200, flowchart 500, flowchart 600, flowchart 700, flowchart 800, flowchart 900, flowchart 1000, and flowchart 1200 may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer readable storage medium. Alternatively, resource update engine 106, resource designator 308, live resource migrator 310, resource updater 312, configure logic 402, classify logic 404, select/action logic 406, evaluate logic 408, terminate logic 410, suspend logic 412, flowchart 200, flowchart 500, flowchart 600, flowchart 700, flowchart 800, flowchart 900, flowchart 1000, and/or flowchart 1200 may be implemented as hardware logic/electrical circuitry.
For instance, in an embodiment, one or more, in any combination, of resource update engine 106, resource designator 308, live resource migrator 310, resource updater 312, configure logic 402, classify logic 404, select/action logic 406, evaluate logic 408, terminate logic 410, suspend logic 412, flowchart 200, flowchart 500, flowchart 600, flowchart 700, flowchart 800, flowchart 900, flowchart 1000, and flowchart 1200 may be implemented together in a SoC. The SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits, and may optionally execute received program code and/or include embedded firmware to perform functions.
As shown in
Computing device 1400 also has one or more of the following drives: a hard disk drive 1414 for reading from and writing to a hard disk, a magnetic disk drive 1416 for reading from or writing to a removable magnetic disk 1418, and an optical disk drive 1420 for reading from or writing to a removable optical disk 1422 such as a CD ROM, DVD ROM, or other optical media. Hard disk drive 1414, magnetic disk drive 1416, and optical disk drive 1420 are connected to bus 1406 by a hard disk drive interface 1424, a magnetic disk drive interface 1426, and an optical drive interface 1428, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of hardware-based computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, RAMs, ROMs, and other hardware storage media.
A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include operating system 1430, one or more application programs 1432, other programs 1434, and program data 1436. Application programs 1432 or other programs 1434 may include, for example, computer program logic (e.g., computer program code or instructions) for implementing resource update engine 106, resource designator 308, live resource migrator 310, resource updater 312, flowchart 200, flowchart 500, flowchart 600, flowchart 700, flowchart 800, flowchart 900, flowchart 1000, and/or flowchart 1200 (including any suitable step of the flowcharts), and/or further embodiments described herein.
A user may enter commands and information into the computing device 1400 through input devices such as keyboard 1438 and pointing device 1440. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, a touch screen and/or touch pad, a voice recognition system to receive voice input, a gesture recognition system to receive gesture input, or the like. These and other input devices are often connected to processor circuit 1402 through a serial port interface 1442 that is coupled to bus 1406, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB).
A display screen 1444 is also connected to bus 1406 via an interface, such as a video adapter 1446. Display screen 1444 may be external to, or incorporated in computing device 1400. Display screen 1444 may display information, as well as being a user interface for receiving user commands and/or other information (e.g., by touch, finger gestures, virtual keyboard, etc.). In addition to display screen 1444, computing device 1400 may include other peripheral output devices (not shown) such as speakers and printers.
Computing device 1400 is connected to a network 1448 (e.g., the Internet) through an adaptor or network interface 1450, a modem 1452, or other means for establishing communications over the network. Modem 1452, which may be internal or external, may be connected to bus 1406 via serial port interface 1442, as shown in
As used herein, the terms “computer program medium,” “computer-readable medium,” and “computer-readable storage medium” are used to refer to physical hardware media such as the hard disk associated with hard disk drive 1414, removable magnetic disk 1418, removable optical disk 1422, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media).
Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. Embodiments are also directed to such communication media that are separate and non-overlapping with embodiments directed to computer-readable storage media.
As noted above, computer programs and modules (including application programs 1432 and other programs 1434) may be stored on the hard disk, magnetic disk, optical disk, ROM, RAM, or other hardware storage medium. Such computer programs may also be received via network interface 1450, serial port interface 1442, or any other interface type. Such computer programs, when executed or loaded by an application, enable computing device 1400 to implement features of embodiments discussed herein. Accordingly, such computer programs represent controllers of the computing device 1400.
Embodiments are also directed to computer program products comprising computer code or instructions stored on any computer-readable medium. Such computer program products include hard disk drives, optical disk drives, memory device packages, portable memory sticks, memory cards, and other types of physical storage hardware.
In an embodiment, a method for increasing virtual machine availability during server updates comprises: designating a first resource set to include one or more servers needing an update; migrating from the first resource set a first set of virtual machines running on the one or more servers in a live manner to a second resource set to convert the first resource set to an empty resource set, and such that the first set of virtual machines runs in a live manner on the second resource set; and performing the update on the one or more servers of the empty resource set to create an updated empty resource set.
In an embodiment, the method further comprises: migrating a second set of virtual machines running in the live manner to the updated empty resource set.
In an embodiment, the method further comprises: updating a network switch associated with the first resource set after all virtual machines running in the live manner on the first resource set are migrated from the first resource set.
In an embodiment, the designating comprises: selecting a server for the first resource set based on an amount of time one or more virtual machines have been running in the live manner on the server.
In an embodiment, the designating comprises: selecting a server for the first resource set based on a version of at least one of software or firmware operating on the server.
In an embodiment, the designating comprises: selecting a server for the first resource set based on a number of virtual machines running in the live manner on the server.
In an embodiment, the selecting comprises: selecting the server as having a lowest number of virtual machines running in the live manner of a plurality of servers.
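The selection criterion of the preceding embodiment, choosing the server with the fewest live virtual machines so that the least migration work is required, may be illustrated as below. The function name and the `(name, live_vm_count)` pair representation are hypothetical, chosen only for this sketch.

```python
from typing import List, Tuple

def select_server(servers: List[Tuple[str, int]]) -> str:
    """Return the name of the server hosting the lowest number of
    live virtual machines among a plurality of servers.

    servers: list of (server_name, live_vm_count) pairs.
    """
    # min() over the live-VM count implements the "lowest number
    # of virtual machines running in the live manner" criterion.
    return min(servers, key=lambda s: s[1])[0]
```

In practice this criterion could be combined with the others described above (time the VMs have been running, or the software/firmware version on the server) as a weighted ranking.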
In an embodiment, the migrating comprises: migrating the first set of virtual machines running in a live manner from the first resource set to the second resource set that is empty of virtual machines.
In an embodiment, the migrating comprises: migrating the first set of virtual machines running in a live manner from the first resource set to the second resource set that already includes at least one virtual machine running in a live manner.
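The two migration variants above differ only in the state of the destination: the second resource set may be empty of virtual machines or may already host live virtual machines. What matters is spare capacity, not emptiness. A hedged sketch of such a destination check, with an assumed dictionary shape (`capacity`, `live_vms`) that is illustrative only:

```python
from typing import List, Dict, Optional

def pick_destination(candidates: List[Dict], vms_to_place: int) -> Optional[str]:
    """Return the name of the first candidate resource set with enough
    spare VM slots to receive the migrating VMs, or None if none fits.

    Each candidate: {"name": str, "capacity": int, "live_vms": int}.
    An empty set (live_vms == 0) and a partially occupied set are both
    acceptable destinations, provided the spare capacity suffices.
    """
    for dest in candidates:
        spare = dest["capacity"] - dest["live_vms"]
        if spare >= vms_to_place:
            return dest["name"]
    return None
```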
In another embodiment, a system comprises: a resource update engine configured to increase virtual machine availability during server updates, comprising: a resource designator configured to designate a first resource set to include one or more servers needing an update; a live resource migrator configured to migrate from the first resource set a first set of virtual machines running on the one or more servers in a live manner to a second resource set to convert the first resource set to an empty resource set, and such that the first set of virtual machines runs in a live manner on the second resource set; and a resource updater configured to perform the update on the one or more servers of the empty resource set to create an updated empty resource set.
In an embodiment, the live resource migrator is further configured to migrate a second set of virtual machines running in the live manner to the updated empty resource set.
In an embodiment, the resource updater is further configured to update a network switch associated with the first resource set after all virtual machines running in the live manner on the first resource set are migrated from the first resource set.
In an embodiment, the resource designator is further configured to select a server for the first resource set based on an amount of time one or more virtual machines have been running in the live manner on the server.
In an embodiment, the resource designator is further configured to select a server for the first resource set based on a version of at least one of software or firmware operating on the server.
In an embodiment, the resource designator is further configured to select a server for the first resource set based on a number of virtual machines running in the live manner on the server.
In an embodiment, the resource designator is further configured to select the server as having a lowest number of virtual machines running in the live manner of a plurality of servers.
In an embodiment, the second resource set is empty of virtual machines prior to migrating the first set of virtual machines running on the one or more servers in a live manner to the second resource set to run in a live manner on the second resource set.
In an embodiment, the second resource set contains at least one running virtual machine prior to migrating the first set of virtual machines running on the one or more servers in a live manner to the second resource set to run in a live manner on the second resource set.
In another embodiment, a computer-readable storage medium having program instructions recorded thereon that, when executed by at least one processing circuit, perform a method on a first computing device for increasing virtual machine availability during server updates, the method comprising: designating a first resource set to include one or more servers needing an update; migrating from the first resource set a first set of virtual machines running on the one or more servers in a live manner to a second resource set to convert the first resource set to an empty resource set, and such that the first set of virtual machines runs in a live manner on the second resource set; and performing the update on the one or more servers of the empty resource set to create an updated empty resource set.
In an embodiment, the method further comprises: migrating a second set of virtual machines running in the live manner to the updated empty resource set.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims. Accordingly, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application claims the benefit of U.S. Provisional Application No. 62/503,840, filed on May 9, 2017, titled “Increasing Virtual Machine Availability During Server Updates,” which is incorporated by reference herein in its entirety.
Number | Date | Country
---|---|---
62/503,840 | May 2017 | US