This disclosure pertains generally to computing device virtualization, and more specifically to providing optimized quality of service to prioritized virtual machines and applications based on the varying quality of shared computing resources.
Clusters are groups of computers that use redundant computing resources in order to provide continued service when individual system components fail. More specifically, clusters eliminate single points of failure by providing multiple servers, multiple network connections, redundant data storage, etc. Clustering systems are often combined with storage management products that provide additional useful features, such as journaling file systems, logical volume management, multipath input/output (I/O) functionality, etc. For example, some storage management products such as Veritas Volume Manager and Dynamic Multipathing support multipathed storage devices, in which a virtual disk device is made available to initiators of I/O, wherein multiple physical paths exist between the virtual disk and the underlying physical storage.
In a high-availability clustering system, the failure of a server (or of a specific computing resource used thereby such as a network adapter, storage device, etc.) is detected, and the application that was being run on the failed server is automatically restarted on another computing system. This process is called “failover.” The high availability clustering system can also detect the failure of the application itself, and failover the application to another node. In effect, the high availability clustering system monitors applications, the servers the applications run on, and the resources used by the applications, to ensure that the applications remain highly available. Clusters can be used to provide applications to customers according to service level agreements guaranteeing varying levels of availability.
Virtualization of computing devices can be employed in high availability clustering and in other contexts. One or more virtual machines (VMs or guests) can be instantiated at a software level on physical computers (host computers or hosts), such that each VM runs its own operating system instance. Just as software applications, including server applications such as databases, enterprise management solutions and e-commerce websites, can be run on physical computers, so too can these applications be run on virtual machines. A high availability cluster of VMs can be built, in which the applications being monitored by the high availability clustering system run on and are failed over between VMs, as opposed to physical servers.
In some virtualization scenarios, a software component often called a hypervisor can act as an interface between the guests and the host operating system for some or all of the functions of the guests. In other virtualization implementations, there is no underlying host operating system running on the physical, host computer. In those situations, the hypervisor acts as an interface between the guests and the hardware of the host computer, in effect functioning as the host operating system, on top of which the guests run. Even where a host operating system is present, the hypervisor sometimes interfaces directly with the hardware for certain services. In some virtualization scenarios, the host itself is in the form of a guest (i.e., a virtual host) running on another host.
A hypervisor receives requests for resources from VMs, and allocates shared resources such as CPU, memory, I/O bandwidth, I/O channels, storage, performance boosting cache, replication links, etc. In a storage management environment, multipathed storage can also be shared between VMs or hosts. Although conventional hypervisors can allocate different shares of the resources to different VMs, conventional hypervisors treat all available resources of a given type (e.g., CPU, memory and I/O channels) as being similar and operating in essentially the same way. This limits the extent to which varying quality of service can be provided to different VMs and applications based on their priority or the underlying service level agreements with customers.
It would be desirable to address this issue.
Quality of service is provided to prioritized VMs or other applications on a computer, based on the varied quality of different shared computing resources. Each VM or application has a priority, which can indicate the quality of service it is to be provided with relative to other VMs or applications. Shared computing resources are accessible by multiple VMs or applications. Shared computing resources can be shared among multiple VMs or applications running on a single computer, for example to facilitate virtualization. A quality rating is assigned to each shared computing resource. In some embodiments, a quality rating comprises a single quantification of the overall quality of a specific shared computing resource. Assigned quality ratings can also quantify a plurality of qualitative factors concerning specific types of shared computing resources, or specific instances of shared computing resources. Shared computing resources can be periodically evaluated in order to determine current quality ratings based on their current status. The current quality ratings are then assigned to the shared computing resources.
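The periodic evaluation and rating assignment described above can be sketched as follows. This is a minimal illustration only: the resource attributes, scoring weights, and scale are assumptions introduced for the example, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SharedResource:
    name: str
    bandwidth_mbps: float  # hypothetical measured attribute
    latency_ms: float      # hypothetical measured attribute

def evaluate_quality(res: SharedResource) -> float:
    """Collapse several qualitative factors into one overall rating (0-100).

    The normalization and 50/50 weighting are illustrative assumptions; an
    implementation could equally keep separate per-factor ratings.
    """
    bw_score = min(res.bandwidth_mbps / 10.0, 100.0)    # 1000 Mbps -> 100
    lat_score = max(0.0, 100.0 - res.latency_ms * 10)   # 0 ms -> 100
    return 0.5 * bw_score + 0.5 * lat_score

def assign_ratings(resources):
    """Periodic pass: compute current ratings and assign them to resources."""
    return {res.name: evaluate_quality(res) for res in resources}

pool = [SharedResource("path-A", bandwidth_mbps=800, latency_ms=2),
        SharedResource("path-B", bandwidth_mbps=200, latency_ms=9)]
ratings = assign_ratings(pool)
```

Re-running `assign_ratings` on a schedule yields current ratings that track each resource's changing status, as described above.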
Requests for shared computing resources made by specific VMs or applications are received. For example, the received requests can be in the form of requests for shared computing resources made by specific VMs to a hypervisor, for example to access a virtual disk. For each received request, the priority of the requesting application is identified. Identifying the priority of a requesting application can further comprise identifying the specific application that made the request for shared computing resources, for example from a tag in the request itself. Where the received request is in the form of an I/O operation, the requesting application can be identified by the targeted LUN. In response to each received request, a specific shared computing resource is assigned to the specific requesting application. This assignment is made based on the priority of the requesting application and the quality rating of the shared computing resource, thereby providing quality of service to the requesting application corresponding to its priority. In some embodiments, information documenting usage of shared computing resources by applications over time is logged for future reference.
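The request-handling flow above can be illustrated with a short sketch: a request carries a tag identifying its originator, the handler looks up that originator's priority, and a resource is assigned based on the priority and the resource ratings. All names, tables, and the two-level priority scheme are illustrative assumptions.

```python
PRIORITIES = {"vm-1": "high", "vm-2": "low"}   # assumed per-VM priority table
RATINGS = {"lun-fast": 90, "lun-slow": 40}     # assumed per-resource ratings

def handle_request(request: dict) -> str:
    """Identify the requester from its tag, look up its priority, and
    assign a shared resource whose rating corresponds to that priority."""
    vm = request["tag"]                  # originator identified from the tag
    priority = PRIORITIES[vm]
    # Resources ordered best-rated first; high priority gets the best.
    ordered = sorted(RATINGS, key=RATINGS.get, reverse=True)
    return ordered[0] if priority == "high" else ordered[-1]
```

A call such as `handle_request({"tag": "vm-1"})` would return the best-rated resource for the high-priority VM.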
In one embodiment, received requests for shared computing resources comprise requests made by specific applications to initiate I/O operations targeting a specific storage device. In this case, the shared computing resources are in the form of a plurality of queues for accessing the specific storage device. Each queue is configured to access the specific storage device with a different level of priority. In this embodiment, requesting applications are assigned a specific one of the queues for processing the I/O operation.
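The per-device queue arrangement above can be sketched as a set of queues, one per priority level, with each I/O routed to the queue matching its application's priority. The three-level scheme and all names are illustrative assumptions.

```python
from collections import deque

# One storage device exposed through three queues, each configured to
# access the device with a different priority level (an assumed example).
queues = {"high": deque(), "medium": deque(), "low": deque()}

def submit_io(io: str, app_priority: str) -> None:
    """Route an I/O to the queue matching its application's priority."""
    queues[app_priority].append(io)

submit_io("write blk 42", "high")
submit_io("read blk 7", "low")
```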
In another embodiment, received requests for shared computing resources are in the form of requests made by specific applications to a multipathing component, in order to access a multipathed storage device. In this embodiment, the shared computing resources can comprise a plurality of queues for accessing a specific one of multiples paths to physical storage, wherein assigning quality ratings further comprises assigning a specific level of priority to each queue. The shared computing resources can instead further comprise the plurality of paths to physical storage, in which case a quality rating is assigned to a specific path as a quantification of its quality.
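Both multipathing variants above can be sketched briefly: in the first, each path carries its own priority queues; in the second, each path itself carries a quality rating and high-priority I/O is steered to the best-rated path. Path names, ratings, and the selection rule are assumptions for illustration.

```python
# Variant 1: per-path priority queues, each queue assigned a priority level.
path_queues = {
    "path-0": {"high": [], "low": []},
    "path-1": {"high": [], "low": []},
}

# Variant 2: a quality rating per physical path (e.g., assumed throughput).
path_ratings = {"path-0": 95, "path-1": 60}

def pick_path(app_priority: str) -> str:
    """Send high-priority I/O down the best-rated path to physical storage."""
    ranked = sorted(path_ratings, key=path_ratings.get, reverse=True)
    return ranked[0] if app_priority == "high" else ranked[-1]
```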
The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
The Figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Many different networking technologies can be used to provide connectivity from each of client computer systems 103A-N to network 107. Some examples include: LAN, WAN and various wireless technologies. Client systems 103A-N are able to access applications and/or data on server 105A or 105N using, for example, a web browser or other client software (not shown). This enables client systems 103A-N to run applications from an application server 105 and/or to access data hosted by a storage server 105 or one of storage devices 160A(1)-(N), 160B(1)-(N), 180(1)-(N) or intelligent storage array 190.
Although
Other components (not illustrated) may be connected in a similar manner (e.g., document scanners, digital cameras, printers, etc.). Conversely, all of the components illustrated in
The bus 212 allows data communication between the processor 214 and system memory 217, which, as noted above, may include ROM and/or flash memory as well as RAM. The RAM is typically the main memory into which the operating system and application programs are loaded. The ROM and/or flash memory can contain, among other code, the Basic Input/Output System (BIOS) which controls certain basic hardware operations. Application programs can be stored on a local computer readable medium (e.g., hard disk 244, optical disk 242), loaded into system memory 217, and executed by the processor 214. Application programs can also be loaded into system memory 217 from a remote location (e.g., a remotely located computer system 210), for example via the network interface 248 or modem 247. In
The storage interface 234 is coupled to one or more hard disks 244 (and/or other standard storage media). The hard disk(s) 244 may be a part of computer system 210, or may be physically separate and accessed through other interface systems.
The network interface 248 and/or modem 247 can be directly or indirectly communicatively coupled to a network 107 such as the Internet. Such coupling can be wired or wireless.
As illustrated in
In one embodiment, the virtualization environment 311 is in the form of software provided by VMware, Inc. In this case, the hypervisor 307 is in the form of VMware's hardware-level hypervisor VMware ESX 307. It is to be understood that the name of VMware's hardware-level hypervisor 307 can change between product releases (for example, it used to be called ESX Server and in the future could be called something else). In a VMware-based virtualization environment 311, the supporting software suite can be VMware vSphere, a VMware cloud-enabled virtualization software package which runs on top of ESX. It is to be understood that the name of VMware's cloud-enabled virtualization software package can also change between product releases. It is to be further understood that although VMware virtualization environments 311 are discussed herein, other embodiments can be implemented in the context of other virtualization environments 311 that provide similar functionality and features. For example, in other embodiments virtualization environments such as Microsoft's Hyper-V are used.
Note that although the shared storage 309 utilized by the cluster is illustrated and described in conjunction with
As explained in greater detail below in conjunction with
Turning to
It is to be understood that the quality of a shared computing resource can be a function of its programmatic configuration, instead of or in addition to the characteristics of any underlying hardware. For example, in one embodiment described in greater detail below in conjunction with
As the term is used herein, a quality rating 503 is a quantification of the quality of a shared computing resource 315. Different internal formats can be used to represent quality ratings 503 in different embodiments (e.g., numbers on a scale, alphanumeric descriptors, percentages, etc.). In some embodiments, quality ratings 503 quantify different qualitative factors for different types of resources 315 (e.g., capacity for storage devices, bandwidth for I/O channels). In some embodiments, multiple qualitative factors are quantified for individual resources 315 by a single quality rating 503 (e.g., capacity, bandwidth and latency, represented by, for example, separate fields in a quality rating object). In other embodiments, a quality rating 503 is in the form of a single quantification of a resource's overall quality.
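The multi-factor rating format described above, with separate fields per qualitative factor, might be represented as follows. The field names, scales, and the averaging formula for the single-number form are assumptions made for the sketch, not prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class QualityRating:
    """A rating quantifying several factors for one resource, using a
    separate field per factor (field names and scales are assumptions)."""
    capacity_gb: float
    bandwidth_mbps: float
    latency_ms: float

    def overall(self) -> float:
        """Alternative single-quantification form: an assumed simple
        average of normalized per-factor scores on a 0-100 scale."""
        return (min(self.capacity_gb / 10, 100)
                + min(self.bandwidth_mbps / 10, 100)
                + max(0, 100 - self.latency_ms)) / 3
```

Other internal formats (numbers on a scale, alphanumeric descriptors, percentages) would substitute different field types under the same idea.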
The different VMs 305 have different assigned priorities 507, which can be based on the corresponding service level agreements of the applications 313 running thereon, or on other factors that determine the VM's priority 507 relative to that of the other VMs 305 running on the same host 210. In one embodiment, the priorities 507 are assigned to the VMs 305, and applications 313 are run on VMs 305 with priorities 507 corresponding to the level of service to be provided to the specific application 313. In another embodiment, the priorities 507 are assigned to the applications 313 themselves, and each VM 305 takes its priority 507 from that of the application 313 that runs thereon. As described in greater detail below in conjunction with
A request receiving module 509 of the quality of service manager 101 receives requests 511 made to the hypervisor 307 for computing resources 315. Recall that the quality of service manager 101 runs at the hypervisor 307 level. Therefore, the request receiving module 509 can receive the requests 511 of interest made to the hypervisor 307 by intercepting or otherwise filtering calls made to the hypervisor 307, and identifying those that request shared computing resources 315.
A priority identifying module 513 of the quality of service manager 101 identifies the priority 507 of the VM 305 (or application 313) that made the request 511 for the shared resource 315. In one embodiment, requests 511 for shared resources are tagged with an identifier of the originator (e.g., the ID of the VM 305 that made the request 511). In this case, the priority identifying module 513 identifies the originator of the request 511 from the tag, and retrieves the corresponding priority 507, e.g., from the global data structure. In other embodiments, the priority identifying module 513 identifies the originator of the request 511 (and hence is able to look up and retrieve its priority 507) in other ways. For example, where the request 511 is in the form of an attempt to access shared storage media 309, the priority identifying module 513 can identify the originator of the request 511 by determining the LUN 401 on which the attempted I/O operation is occurring.
In response to requests 511 for shared computing resources 315, a resource assigning module 515 of the quality of service manager 101 assigns specific shared resources 315 of the type requested from the pool 505, based on the priority 507 of the requester (i.e., the VM 305 or application 313) and the quality rating 503 of the resource 315. In one embodiment, this process can comprise assigning the resources 315 with higher quality ratings 503 to service requests 511 made by components with higher priorities 507. In other embodiments, more specific levels of granularity are used to make the assignments. For example, priorities can indicate specific factors of importance such as reliability, speed, bandwidth, etc., and shared resources 315 having varying quality ratings 503 concerning these specific factors can be assigned to VMs 305 with corresponding factor-specific priorities 507. The exact levels of granularity to use for both quality ratings 503 and priorities 507, both generally and concerning specific factors, can vary between embodiments as desired. Likewise, the exact logic to use to assign resources 315 to requesters based on the mapping between quality ratings 503 and priorities 507 can vary between embodiments. By assigning shared computing resources 315 to VMs 305 and applications 313 based on the correspondence between quality ratings 503 and priorities 507, the quality of service manager 101 can provide quality of service to specific components in line with their associated specific priorities 507, as determined by service level agreement or otherwise. This makes more judicious use of the common pool 505 of shared resources 315.
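One simple assignment policy of the kind mentioned above, in which higher-rated resources service higher-priority requesters, can be sketched by sorting both sides and pairing them off. The pairing rule and all inputs are illustrative assumptions; as noted, real assignment logic can be arbitrarily more granular.

```python
def assign(requests, ratings):
    """Pair best-rated resources with highest-priority requesters.

    requests: {requester: numeric priority}
    ratings:  {resource: numeric quality rating}
    Returns a {requester: resource} mapping (an assumed, simple policy).
    """
    requesters = sorted(requests, key=requests.get, reverse=True)
    resources = sorted(ratings, key=ratings.get, reverse=True)
    return dict(zip(requesters, resources))

pairing = assign({"vm-a": 3, "vm-b": 1},
                 {"ssd-pool": 90, "hdd-pool": 50})
```

Here the higher-priority `vm-a` receives the higher-rated `ssd-pool`, in line with the correspondence between priorities and quality ratings described above.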
In some embodiments, a logging module 517 of the quality of service manager 101 monitors the usage of shared computing resources 315 by specific VMs 305 (or applications 313) over time, and writes corresponding information to a log 519 for future reference. The logging module 517 can log information of varying levels of detail in different embodiments as desired. For example, the log 519 can document basic audit/statistical resource usage information, or the actual values utilized in accessing specific resources 315, such as the offsets of I/O sent to a given path, storage device, replication link etc., e.g., at the level of a file change log. The logged information can be used, for example, to compute incremental data updates (e.g., incremental data sent on a given pipe) and provide it to an off-host processing service (not illustrated), such as an incremental backup system or the like.
Drawing attention back to
In the use case being described, the priority identifying module 513 determines the VM 305 from which a given I/O request 511 originated (and hence its priority 507) by determining to or from which underlying LUN 401 the request 511 is directed. Recall that in this use case the shared storage 309 is in the form of a VMDK virtual disk. Thus, to determine the target LUN 401, the quality of service manager 101 creates a mapping 317 of the VMDK disk blocks affected by the I/O operation and the corresponding offsets in the set of LUNs 401 underlying the VMDK 309. This mapping 317 indicates the target LUN 401, and hence the originating VM 305 and its priority 507.
Different methodologies can be used to determine the VMDK disk 309 to LUN 401 mapping 317 in different implementations of this use case. For example, in one implementation VMware web-services APIs are used to determine the set of LUNs 401 which are part of a given VMware datastore. The storage mapping is determined using a given VMware command with a specific command line option (currently “vmkfstools −t0” although the command and calling parameter(s) could change in future versions of VMware products). This command outputs the mapping for VMDK blocks to offsets in a set of universally unique identifiers (UUIDs). Note that these UUIDs do not directly correlate to actual storage LUNs 401, but are stored in individual storage LUNs 401 beginning at a fixed offset (currently offset 00100080 in VMware Virtual Machine File System 4.1). By reading data at this offset on devices which are part of the given VMware datastore (as determined via the web-service APIs as described above), it is determined which LUNs 401 have which UUIDs. Because the quality of service manager 101 has the mapping for VMDK blocks to offsets in UUIDs as returned by the “vmkfstools −t0” command, and has determined which LUN 401 has which UUID, the quality of service manager 101 can now construct a mapping 317 of VMDK blocks to LUNs 401. Note that in other implementations, this mapping 317 is obtained in other ways, for example by using certain VMware APIs where available. In any case, the mapping 317 indicates which blocks of VMDK data reside in which offset of a given LUN 401. Based on this mapping 317, a multipathing component (e.g., VxDMP in a VMware ESX environment) can determine which VM 305 a given I/O packet is coming from or going to, and hence which VM 305 made the request 511. Multipathing is discussed in more detail below in conjunction with
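The join performed in the mapping construction above can be sketched abstractly. The two inputs stand in for (1) the block-to-UUID-offset map reported by "vmkfstools −t0" and (2) the UUID read from each LUN at its fixed on-disk offset; the simplified data shapes here are assumptions, not the actual command output format.

```python
def build_vmdk_to_lun_map(block_to_uuid, lun_uuids):
    """Join the two inputs into a VMDK-block-to-LUN mapping.

    block_to_uuid: {vmdk_block: (uuid, offset)} - assumed shape of the
                   mapping reported by "vmkfstools -t0".
    lun_uuids:     {lun_id: uuid} - assumed result of reading each LUN's
                   UUID at its fixed on-disk offset.
    Returns {vmdk_block: (lun_id, offset)}.
    """
    uuid_to_lun = {uuid: lun for lun, uuid in lun_uuids.items()}
    return {blk: (uuid_to_lun[uuid], off)
            for blk, (uuid, off) in block_to_uuid.items()}

mapping = build_vmdk_to_lun_map(
    {0: ("uuid-x", 4096), 1: ("uuid-y", 0)},
    {"lun-1": "uuid-x", "lun-2": "uuid-y"},
)
```

The resulting map indicates which blocks of VMDK data reside at which offset of a given LUN, which is what lets a multipathing component attribute an I/O packet to its originating VM.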
Turning now to
As illustrated in
In one use case illustrated in
The priority identifying module 513 identifies the application 313 from which the I/O originated (for example, from a tag in the I/O request 511 or the LUN 401 on which the I/O operation is occurring), and hence identifies the application's priority 507. The resource assigning module 515 inserts the I/O in the appropriate priority queue 607, based on the priority 507 of the application 313. For example, in the illustrated three queue 607 embodiment, I/Os originating from applications 313 with a priority 507 of high are inserted in the high priority queue 607, those from medium priority applications 313 in the medium priority queue 607, and those from low priority applications 313 in the low priority queue 607.
The quality of service manager 101 sends I/Os down each HBA 235 from its priority queues 607 based on their respective priorities. In other words, the quality of service manager 101 selects the most I/Os from the high priority queue 607, fewer I/Os from the medium priority queue 607, and the fewest from the low priority queue 607. The specific proportion of I/Os to select from each queue 607 can be determined based on relative priority, or can be set by an administrator or other user. The exact proportion to use is a variable design parameter, and different proportions can be used in different embodiments as desired. This servicing of I/O operations in proportion to the priority 507 of their originating applications 313 provides a higher level of service to higher priority applications 313.
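The proportional servicing described above can be sketched as a weighted drain over the priority queues. The 3:2:1 weights are an assumed example; as stated, the actual proportion is a configurable design parameter.

```python
from collections import deque

# Assumed per-cycle drain weights; the disclosure leaves these configurable.
WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def dispatch_cycle(queues):
    """Pop up to WEIGHTS[p] I/Os from each queue, highest priority first,
    so that higher-priority queues are serviced proportionally more."""
    sent = []
    for prio in ("high", "medium", "low"):
        q = queues[prio]
        for _ in range(min(WEIGHTS[prio], len(q))):
            sent.append(q.popleft())
    return sent

qs = {"high": deque(["h1", "h2", "h3", "h4"]),
      "medium": deque(["m1", "m2"]),
      "low": deque(["l1", "l2"])}
batch = dispatch_cycle(qs)
```

Repeating `dispatch_cycle` each scheduling round services I/O in proportion to the priority of the originating applications.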
In another embodiment of the use case illustrated in
To apply the use case of
As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the portions, modules, agents, managers, components, functions, procedures, actions, layers, features, attributes, methodologies, data structures and other aspects are not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions and/or formats. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or limiting to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain relevant principles and their practical applications, to thereby enable others skilled in the art to best utilize various embodiments with or without various modifications as may be suited to the particular use contemplated.
Number | Date | Country | |
---|---|---|---|
20140173113 A1 | Jun 2014 | US |