Examples described herein pertain to distributed and cloud computing systems. In particular, examples of hypervisor agnostic customization of virtual machines are described.
A virtual machine, or “VM,” generally refers to a software-based implementation of a machine in a virtualized computing environment, in which the hardware resources of a real computer (e.g., CPU, memory, etc.) are virtualized or transformed into underlying support for the virtual machine, which can run its own operating system and applications on the underlying physical resources just as a physical computer does.
Virtualization generally works by inserting a thin layer of software directly on the computer hardware or on a host operating system. This layer of software contains a virtual machine monitor or “hypervisor” that allocates hardware resources dynamically and transparently. Many different types of hypervisors exist, such as ESX(i), Hyper-V, XenServer, etc. Typically, each hypervisor has its own unique application programming interface (API) through which a user can interact with the physical resources. For example, a user can provide a command through the particular API of the hypervisor executing on the computer to create a new VM instance in the virtualized computing environment. The user may specify certain properties of the new VM through the API, such as the operating system of the VM.
Multiple operating systems can run concurrently on a single physical computer and share hardware resources with each other as provisioned by the hypervisor. Because a virtual machine encapsulates an entire machine, including CPU, memory, operating system, and network devices, it is compatible with most standard operating systems, applications, and device drivers. Most modern implementations allow several operating systems and applications to safely run at the same time on a single computing node, with each operating system having access to the resources it needs when it needs them.
In many traditional virtualized computing environments, a virtual machine launched in the computing environment may be automatically provisioned or customized at boot time with the help of VM customization tools, such as Cloud-init (for Linux VMs) or Sysprep (for Windows VMs). The boot image of the VM typically has the customization tool pre-installed, and the customization tool runs when the VM is powered on. The customization tool can discover the user-specified configuration, which is then applied to the VM. That configuration can be supplied through a disk image file, such as an ISO image file attached to the VM, prepared as specified by the discovery protocol of the customization tool.
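As one concrete illustration, a Linux VM using Cloud-init can be customized through its "NoCloud" discovery protocol, which looks for an attached ISO 9660 volume labeled cidata containing user-data and meta-data files. The following minimal sketch prepares such a seed image; the VM settings shown and the use of the genisoimage tool are illustrative assumptions rather than part of the examples described herein.

```python
# Minimal sketch: packaging user-specified configuration into a seed ISO
# that Cloud-init's NoCloud datasource discovers at first boot. The
# "cidata" volume label and the user-data/meta-data file names are part
# of Cloud-init's discovery protocol; the settings below are placeholders.
import subprocess
import tempfile
from pathlib import Path

USER_DATA = """\
#cloud-config
hostname: example-vm
users:
  - name: demo
    ssh_authorized_keys:
      - ssh-rsa AAAA...  # placeholder public key
"""

META_DATA = """\
instance-id: example-vm-001
local-hostname: example-vm
"""

def build_seed_iso(out_path: str) -> None:
    """Write user-data/meta-data and pack them into an ISO image labeled
    'cidata'; attaching this image to the VM before power-on lets the
    customization tool discover and apply the configuration."""
    with tempfile.TemporaryDirectory() as tmp:
        user_data = Path(tmp) / "user-data"
        meta_data = Path(tmp) / "meta-data"
        user_data.write_text(USER_DATA)
        meta_data.write_text(META_DATA)
        subprocess.run(
            ["genisoimage", "-output", out_path, "-volid", "cidata",
             "-joliet", "-rock", str(user_data), str(meta_data)],
            check=True,
        )

build_seed_iso("seed.iso")
```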
Examples of systems are described herein. An example system may include a computing node configured to execute a hypervisor and a hypervisor independent interface software layer configured to execute on the computing node. The interface software layer is configured to determine configuration information and an operating system for a virtual machine to be created, receive an instruction to create the virtual machine through the hypervisor independent interface software layer, convert the instruction to create the virtual machine into a hypervisor specific command, create a virtual machine instance responsive to the hypervisor specific command, generate an image file by accessing a customization tool library from a plurality of customization tool libraries based, at least in part, on the configuration information and the operating system for the virtual machine, attach the image file to the virtual machine instance, and power on the virtual machine instance.
Examples of methods are described herein. An example method may include determining configuration information and an operating system for a virtual machine to be created, receiving an instruction to create the virtual machine through a hypervisor independent interface software layer, converting the instruction to create the virtual machine into a hypervisor specific command, creating a virtual machine instance responsive to the hypervisor specific command, generating an image file by accessing a customization tool library from a plurality of customization tool libraries based, at least in part, on the configuration information and the operating system for the virtual machine, attaching the image file to the virtual machine instance, and powering on the virtual machine instance.
Another example method comprises providing configuration information for a virtual machine instance to a hypervisor agnostic interface software layer and providing an instruction to create the virtual machine instance through the hypervisor agnostic interface software layer. The hypervisor agnostic interface software layer is configured to determine an operating system for the virtual machine instance, convert the instruction to create the virtual machine instance into a hypervisor specific command, create the virtual machine instance responsive to the hypervisor specific command, generate an image file by accessing a customization tool library from a plurality of customization tool libraries based, at least in part, on the configuration information and the operating system for the virtual machine instance to be created, attach the image file to the virtual machine instance, and power on the virtual machine instance.
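For illustration only, the example methods above might be sketched as follows; every name here (VmSpec, interface_layer, tool_libraries, and their methods) is a hypothetical stand-in, since the description does not prescribe a particular implementation.

```python
# Hypothetical sketch of the example methods: create a VM through a
# hypervisor agnostic layer, then customize it with an attached image file.
from dataclasses import dataclass

@dataclass
class VmSpec:
    name: str
    os_type: str   # e.g., "linux" or "windows"
    config: dict   # user-specified configuration information

def create_customized_vm(spec: VmSpec, interface_layer, tool_libraries):
    # Convert the hypervisor agnostic "create" instruction into a command
    # formatted for whichever hypervisor this computing node executes.
    command = interface_layer.to_hypervisor_command("create_vm", spec)
    vm = interface_layer.execute(command)

    # Select the customization tool library matching the guest operating
    # system and have it render the configuration into an image file.
    library = tool_libraries[spec.os_type]
    image_file = library.generate_image(spec.config)

    vm.attach(image_file)  # the tool inside the guest discovers this image
    vm.power_on()          # customization is applied on first boot
    return vm
```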
Another example method comprises determining a type of a hypervisor configured to execute on a computing node, receiving a command having a first format through a hypervisor agnostic interface software layer, determining a hypervisor abstraction library associated with the type of hypervisor, wherein the hypervisor abstraction library is selected from a plurality of hypervisor abstraction libraries, converting the command having the first format to a command having a second format based, at least in part, on the hypervisor abstraction library, and providing the command having the second format to the hypervisor.
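Again for illustration only, this translation method might be sketched as follows, with detect_hypervisor_type, convert, and submit as hypothetical names.

```python
# Hypothetical sketch: select the abstraction library matching the node's
# hypervisor type and convert a first-format (uniform) command into a
# second-format (hypervisor specific) command.
def handle_command(command_first_format, node, abstraction_libraries):
    hypervisor_type = node.detect_hypervisor_type()   # e.g., "esxi", "hyperv"
    library = abstraction_libraries[hypervisor_type]  # one of a plurality
    command_second_format = library.convert(command_first_format)
    node.hypervisor.submit(command_second_format)     # provide to hypervisor
```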
Certain details are set forth below to provide a sufficient understanding of embodiments of the invention. However, it will be clear to one skilled in the art that embodiments of the invention may be practiced without one or more of these particular details. In some instances, wireless communication components, circuits, control signals, timing protocols, computing system components, and software operations have not been shown in detail in order to avoid unnecessarily obscuring the described embodiments of the invention.
Typical methods for customizing VMs may suffer from several limitations. Limitations are discussed herein by way of example and to facilitate appreciation for the technology described herein. It is to be understood that the examples described herein may not address all, or even any, limitations of conventional systems. One limitation may be that creation of new VMs typically requires usage of hypervisor specific APIs. Therefore, if a user or process wishes to create a new virtual machine instance, the user or process typically needs specific knowledge of the hypervisor that is managing the virtualization environment. Each time a new hypervisor is introduced to the virtualized environment, a new API typically needs to be learned to enable creation of new VMs. Moreover, provisioning a VM with an image file typically requires the user creating the VM to generate the image file in a specific manner in accordance with the operating system in which the VM will operate. There is therefore a need for a mechanism to abstract the creation of VMs into a hypervisor agnostic environment, while maintaining and automating the benefits of creating customized VMs based on user specifications.
The storage 160 may include local storage 122A, 122B, cloud storage 126, and networked storage 128. The local storage 122A may include, for example, one or more solid state drives (SSD) 125A and one or more hard disk drives (HDD) 127A. Similarly, local storage 122B may include SSD 125B and HDD 127B. Local storages 122A, 122B may be directly coupled to, included in, and/or accessible by a respective computing node 100A, 100B without communicating via the network 140. Cloud storage 126 may include one or more storage servers that may be located remotely from the computing nodes 100A, 100B and accessed via the network 140. The cloud storage 126 may generally include any type of storage device, such as HDDs, SSDs, or optical drives. Networked storage 128 may include one or more storage devices coupled to and accessed via the network 140. The networked storage 128 may generally include any type of storage device, such as HDDs, SSDs, or optical drives. In various embodiments, the networked storage 128 may be a storage area network (SAN).
The computing node 100A is a computing device for hosting VMs in the distributed computing system of
The computing node 100A is configured to execute a hypervisor 130, a controller VM 110A, and one or more user VMs, such as user VMs 102A, 102B. The user VMs 102A, 102B are virtual machine instances executing on the computing node 100A. The user VMs 102A, 102B may share a virtualized pool of physical computing resources, such as physical processors and storage (e.g., storage 160). The user VMs 102A, 102B may each have their own operating system, such as Windows or Linux. The user VMs 102A, 102B may also be customized upon instantiation, for example, by loading certain software, drivers, network permissions, etc. onto the user VMs 102A, 102B when they are powered on (e.g., when they are launched in the distributed computing system).
The hypervisor 130 may be any type of hypervisor. For example, the hypervisor 130 may be ESX, ESX(i), Hyper-V, KVM, or any other type of hypervisor. The hypervisor 130 manages the allocation of physical resources (such as storage 160 and physical processors) to VMs (e.g., user VMs 102A, 102B and controller VM 110A) and performs various VM related operations, such as creating new VMs and cloning existing VMs. Each type of hypervisor may have a hypervisor-specific API through which commands to perform various operations may be communicated to the particular type of hypervisor. The commands may be formatted in a manner specified by the hypervisor-specific API for that type of hypervisor. For example, commands may utilize a syntax and/or attributes specified by the hypervisor-specific API.
The controller VM 110A includes a hypervisor independent interface software layer that provides a uniform API through which hypervisor commands may be provided. Throughout this disclosure, the terms “hypervisor independent” and “hypervisor agnostic” are used interchangeably and generally refer to the notion that the interface through which a user or VM interacts with the hypervisor does not depend on the particular type of hypervisor being used. For example, the API that is invoked to create a new VM instance appears the same to a user regardless of which hypervisor the particular computing node is executing (e.g., an ESX(i) hypervisor or a Hyper-V hypervisor). The controller VM 110A may receive a command through its uniform interface (e.g., a hypervisor agnostic API) and convert the received command into the format of the hypervisor specific API used by the hypervisor 130.
The computing node 100B may include user VMs 102C, 102D, a controller VM 110B, and a hypervisor 132. The user VMs 102C, 102D, the controller VM 110B, and the hypervisor 132 may be implemented similarly to the analogous components described above with respect to the computing node 100A. For example, the user VMs 102C and 102D may be implemented as described above with respect to the user VMs 102A and 102B. The controller VM 110B may be implemented as described above with respect to the controller VM 110A. The hypervisor 132 may be implemented as described above with respect to the hypervisor 130. In the embodiment of
The controller VMs 110A, 110B may communicate with one another via the network 140. By linking the controller VMs 110A, 110B together via the network 140, a distributed network of computing nodes 100A, 100B, each of which may execute a different hypervisor, can be created. The ability to link computing nodes executing different hypervisors may improve on typical distributed computing systems, in which communication among computing nodes is limited to those nodes executing the same hypervisor. For example, computing nodes running ESX(i) may only communicate with other computing nodes running ESX(i). The controller VMs 110A, 110B may reduce or remove this limitation by providing a hypervisor agnostic interface software layer that can communicate with multiple (e.g., all) hypervisors in the distributed computing system.
With reference to
Returning again to
In operation 206, the controller VM converts the received instruction to initialize the create/clone VM operation into a hypervisor specific command.
In operation 506, the controller VM 110 queries a hypervisor abstraction library. Referring to
In operation 508, the controller VM 110 generates a hypervisor specific command. The controller VM 110 may receive the results of the query submitted to the hypervisor abstraction libraries 418 in operation 506 and convert the format of the hypervisor agnostic command received in operation 504 to a hypervisor specific command based on the results of the query. For example, the controller VM 110 may reformat the command into the hypervisor specific API of the hypervisor 130. In operation 510, the controller VM 110 provides the hypervisor specific command to the hypervisor 130. In response to receiving the hypervisor specific command, the hypervisor 130 may perform the command. The method of
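For illustration, operations 506-510 might look like the following sketch, assuming a hypothetical abstraction library that maps each uniform command name to a hypervisor specific template; the template strings are invented and do not reproduce any real hypervisor's API.

```python
# Hypothetical abstraction library: uniform command name -> hypervisor
# specific command template. Real libraries would encode the syntax and
# attributes of ESX(i), Hyper-V, KVM, etc.
HYPERVISOR_ABSTRACTION_LIBRARY = {
    "create_vm": "hv.vm.create --name {name} --mem {memory_mb}",
    "clone_vm":  "hv.vm.clone --source {source} --name {name}",
}

def to_hypervisor_specific(agnostic_command: str, args: dict) -> str:
    # Operation 506: query the abstraction library for this command.
    template = HYPERVISOR_ABSTRACTION_LIBRARY[agnostic_command]
    # Operation 508: reformat the command into the hypervisor specific syntax.
    return template.format(**args)

# Operation 510: the result would then be provided to the hypervisor.
print(to_hypervisor_specific("create_vm", {"name": "vm1", "memory_mb": 2048}))
```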
In operation 208, the controller VM creates an image file. The image file may contain the configuration information for the new VM instance. The image file may be, for example, an ISO file, an XML file, or any other type of file that is discoverable and readable by the new VM instance to apply one or more customizable settings.
Referring again to
In operation 306, the controller VM generates the image file based on the customization tool identified in operation 304 and an associated customization tool library. Referring to
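A sketch of this selection, with hypothetical writer helpers, follows; only the pairing of Cloud-init with Linux and Sysprep with Windows comes from the description above.

```python
# Hypothetical customization tool libraries, keyed by guest OS.
def write_cloud_init_seed_iso(config: dict) -> str:
    # Would render config as #cloud-config user-data and pack a 'cidata'
    # seed ISO, as in the earlier NoCloud sketch.
    return "seed.iso"

def write_sysprep_unattend_image(config: dict) -> str:
    # Would render config as a Sysprep answer file (unattend.xml) and
    # package it on an attachable image.
    return "unattend.iso"

CUSTOMIZATION_TOOL_LIBRARIES = {
    "linux": write_cloud_init_seed_iso,
    "windows": write_sysprep_unattend_image,
}

def generate_customization_image(os_type: str, config: dict) -> str:
    try:
        library = CUSTOMIZATION_TOOL_LIBRARIES[os_type]  # operation 304
    except KeyError:
        raise ValueError(f"no customization tool library for {os_type!r}")
    return library(config)                               # operation 306
```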
Referring again to
The computing node 600 includes a communications fabric 602, which provides communications between one or more computer processors 604, a memory 606, a local storage 608, a communications unit 610, and an input/output (I/O) interface(s) 612. The communications fabric 602 can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, the communications fabric 602 can be implemented with one or more buses.
The memory 606 and the local storage 608 are computer-readable storage media. In this embodiment, the memory 606 includes random access memory (RAM) 614 and cache memory 616. In general, the memory 606 can include any suitable volatile or non-volatile computer-readable storage media. The local storage 608 may be implemented as described above with respect to local storage 122A, 122B. In this embodiment, the local storage 608 includes an SSD 622 and an HDD 624, which may be implemented as described above with respect to SSD 125A, 125B and HDD 127A, 127B, respectively.
Various computer instructions, programs, files, images, etc. may be stored in local storage 608 for execution by one or more of the respective computer processors 604 via one or more memories of memory 606. In some examples, local storage 608 includes the magnetic hard disk drive 624. Alternatively, or in addition to a magnetic hard disk drive, local storage 608 can include the solid state drive 622, a semiconductor storage device, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, or any other computer-readable storage medium that is capable of storing program instructions or digital information.
The media used by local storage 608 may also be removable. For example, a removable hard drive may be used for local storage 608. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer-readable storage medium that is also part of local storage 608.
Communications unit 610, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 610 includes one or more network interface cards. Communications unit 610 may provide communications through the use of either or both physical and wireless communications links.
I/O interface(s) 612 allows for input and output of data with other devices that may be connected to computing node 600. For example, I/O interface(s) 612 may provide a connection to external devices 618 such as a keyboard, a keypad, a touch screen, and/or some other suitable input device. External devices 618 can also include portable computer-readable storage media such as, for example, thumb drives, portable optical or magnetic disks, and memory cards. Software and data used to practice embodiments of the present invention can be stored on such portable computer-readable storage media and can be loaded onto local storage 608 via I/O interface(s) 612. I/O interface(s) 612 also connects to a display 620.
Display 620 provides a mechanism to display data to a user and may be, for example, a computer monitor.
The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
Those of ordinary skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software executed by a processor, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Skilled artisans may implement the described functionality in varying ways for each particular application and may include additional operational steps or remove described operational steps, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure as set forth in the claims.