1. Field of the Invention
The present invention relates to application reservation and delivery, and more specifically, to an application reservation and delivery system with capacity pooling which defines application configurations in an application library, which pools computer resource assets, which lists computer resource assets, and which manages computer resources to deploy requested application configurations during a reserved future time period.
2. Description of the Related Art
Modeling capacity to satisfy the needs of on-demand application delivery is challenging, especially in the face of application configurations with heterogeneous resource requirements. For example, an application requiring 2 small virtual machines and an application requiring 1 large virtual machine might end up consuming effectively the same capacity from a virtualization host. At delivery time, however, the resources required for each of the 2 applications need to be accounted for and configured differently. Prior application delivery systems were limited to hosts preconfigured for specific application configurations, did not accurately account for or track computing assets, and had limited scheduling capability.
The benefits, features, and advantages of the present invention will become better understood with regard to the following description and accompanying drawings.
The following description is presented to enable one of ordinary skill in the art to make and use the present invention as provided within the context of a particular application and its requirements. Various modifications to the preferred embodiment will, however, be apparent to one skilled in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described herein, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed.
A virtual machine monitor (VMM) 603 is installed as an application of the host OS 613, which effectively forms a software abstraction layer between the physical host system 609 and one or more VMs 601. As shown, a number "N" of VMs are depicted as VM1, VM2, . . . , VMN. It is appreciated that any number (one or more) of VMs may be defined for any given physical server depending upon its performance and capabilities. Each VM 601 is implemented as a layered stack of functions in a similar manner to the host OS stack 105. In particular, each VM 601 includes a VM OS (not shown) interfacing VM filter drivers (not shown), which interface a VM kernel (not shown), which further interfaces VM kernel mode drivers (not shown). The VM kernel mode drivers further interface virtual hardware (not shown) at the bottom of the virtual stack of each VM 601. The VMs collectively share the resources of the underlying physical host system 609. A physical server or host system supporting virtualization is referred to herein as a virtualization host.
An on-demand application delivery system 100 according to an embodiment of the present invention is supported by or otherwise linked to the data center system 800. The on-demand application delivery system 100 may be implemented as a file system, a relational database, links to computing resources, etc. Various alternative embodiments of the system 100 are shown as systems 100A, 100B, 100C or 100D within the data center system 800, although it is appreciated that the various embodiments illustrated are exemplary only and other embodiments are possible. In one embodiment, the on-demand application delivery system 100 is implemented as a separate virtualized application configuration 100A implemented using one or more virtual machines or virtual servers supported by an underlying virtualization host manager (VHM) 811 coupled to the network 803. In another embodiment illustrated, the on-demand application delivery system 100 is implemented as a set of applications operating on a selected one of the physical servers 805, such as an on-demand application delivery system 100B shown residing on the physical server PS1. In another embodiment illustrated, the on-demand application delivery system 100 is implemented as a set of applications operating on multiple ones of the physical servers 805, such as an on-demand application delivery system 100C shown spanning the physical servers PS1 and PS2. In yet another embodiment illustrated, the on-demand application delivery system 100 is implemented as a virtualized application configuration including one or more virtual machines or virtual servers within the virtual server cloud 803, such as an on-demand application delivery system 100D shown within the virtual server cloud 803. In any of the embodiments of the on-demand application delivery system 100, the remote end users submit application requests via the intermediate network 809, which are intercepted and handled by the on-demand application delivery system 100.
The on-demand application delivery system 100 reserves the resources for the application configuration and deploys the application at the appropriate time in accordance with that described herein.
Administrators also contribute different computing resources, such as virtualization hosts, physically-provisioned servers, network addresses, etc., to the data center system 800. The computing resources are described within a logical resource pool 109, which divides the resources into resource categories, including processing power 111, memory 113, storage 115 and networking 117. The processing power 111 represents the number and capacity of virtualization hosts, the number of virtual machines that each virtualization host supports, and the number of physical servers if included. It is noted that physical servers are treated as separate entities in which each may be provisioned as a single application server within an application configuration. The memory 113 describes the total amount of available volatile memory provided on the virtualization hosts for supporting software and applications, such as random access memory (RAM) and the like. The storage 115 represents the amount of non-volatile disk drive storage provided in the system including total storage on each virtualization host and the shared storage 807. The networking resources 117 include available network resources, such as, for example, specific media access control (MAC) addresses, internet protocol (IP) network addresses, etc.
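The four resource categories described above can be sketched as a simple data structure. This is a minimal illustration only; the class and field names are hypothetical and do not come from the disclosed system.

```python
# Hypothetical sketch of a logical resource pool divided into the four
# categories described above: processing power, memory, storage, networking.
from dataclasses import dataclass, field

@dataclass
class LogicalResourcePool:
    processing: dict = field(default_factory=dict)  # host -> list of VM ids
    memory_gb: dict = field(default_factory=dict)   # host -> pooled RAM in GB
    storage_gb: float = 0.0                         # shared non-volatile storage
    networking: set = field(default_factory=set)    # contributed MAC/IP addresses

# Administrators contribute resources to the pool.
pool = LogicalResourcePool()
pool.processing["VH1"] = ["VM1", "VM2"]             # virtualization host VH1
pool.memory_gb["VH1"] = 10.0                        # VH1 contributes 10 GB RAM
pool.storage_gb = 500.0
pool.networking.update({"192.168.0.10", "00:1A:2B:3C:4D:5E"})
```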
Users request a particular application configuration as needed, shown as application requests 121. In one embodiment, users select from among predefined application configurations within the application library 101. In another embodiment, the user may request a new or different application configuration which is defined and constructed from the resource pool and stored within the application library 101. The on-demand application delivery system 100 includes a resource manager 119 interfacing the application library 101 and the logical resource pool 109. The resource manager 119 receives each application request 121 for a particular application configuration, matches the application configuration with those provided in the application library 101 to identify the application configuration's resource requirements, and compares the application configuration's resource requirements against the available pooled computing resources provided in the logical resource pool 109. If sufficient resources are available in the logical resource pool 109 to meet the application request 121 either immediately or at a future time requested, the resource manager 119 reserves those resources for the time requested, enabling the chosen application configuration to be deployed to the reserved resources and made available to the requesting user at the appropriate time. When an application configuration is to be deployed, the resource manager 119 cooperates with a deployment manager 123, which accesses the logical resource pool 109 and deploys the application configuration as a deployed application 125. In one embodiment, the resource manager 119 is involved in deployment of each requested application configuration. The resource manager 119 also tracks deployed and pooled resources, and schedules computer resource assets for future application requests.
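The request-handling flow described above (match against the library, compare against the pool, then reserve) can be sketched as follows. All names and the dictionary layout are hypothetical, standing in for the resource manager's internal bookkeeping.

```python
# Minimal sketch of the resource manager's request flow: look up the
# requested configuration in the application library, compare its
# requirements against the pool, and reserve by decrementing the pool.
def handle_request(name, library, pool):
    """Return the reserved resources for the named configuration, or None."""
    requirements = library.get(name)  # match request against application library
    if requirements is None:
        return None
    # compare requirements against the available pooled resources
    if all(pool.get(r, 0) >= amount for r, amount in requirements.items()):
        for r, amount in requirements.items():
            pool[r] -= amount         # reserve the resources
        return dict(requirements)
    return None                       # insufficient resources available

library = {"web-app": {"vm": 2, "ram_gb": 2}}
pool = {"vm": 4, "ram_gb": 10}
reserved = handle_request("web-app", library, pool)
# pool now holds {"vm": 2, "ram_gb": 8}
```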
Alternatively, if any one of the physical servers 805 is configured according to the server configuration of server 201 or 203, or if a pair of physical servers are configured according to both servers 201 and 203, then those physical servers may be deployed as the application configuration 107.
An application configuration further provides the networking specification which defines the relationship between the server configurations.
The physical servers 805 may also contribute to the logical resource pool 109 in an individual capacity. Physical servers generally do not support concurrency and are not decomposed into their constituent parts. In particular, each “raw” physical server is preconfigured according to a particular hardware configuration. In other words, a physical server typically has a fixed hardware configuration, but can be provisioned to serve as any number of software configurations utilizing a number of different physical provisioning tools readily available to someone skilled in the art. Furthermore, it can only be allocated for use as an indivisible entity.
As shown in table 409, each virtualization host 403 contributes 10 GB of RAM. The physical servers 401 PS1 and PS2 have preconfigured hardware configurations and are contributed without further granularity. Each MAC and IP address is listed separately in table 409. Table 409 also lists multiple virtual machines, shown as VM1, VM2, VM3, VM4, etc., supported by the virtualization hosts VH1-VH3. Although each virtualization host is shown supporting two virtual machines for purposes of illustration, it is understood that each virtualization host may contribute any number of virtual machines to the logical resource pool 400.
When consuming 1 GB of RAM on VH1, it is sufficient to account for the memory usage by decrementing the 10 GB of available RAM by 1 GB, leaving 9 GB still available. It is not necessary to specifically define which exact bytes of RAM are to be used out of the total 10 GB available, since the operating system of each virtualization host manages RAM allocation on a per-process basis. However, when consuming 1 virtual machine on VH1, the system accounts for that specific virtual machine individually in order to track exactly which virtual machine is in use. As such, each virtual machine is represented as a unique asset, while the entire sum of pooled RAM from a virtualization host is aggregated into one asset. In a similar manner to pooled virtual machines, other computing resources are indivisible and require dedicated accounting of their availability, including physically-provisioned servers and network addresses, which are modeled as discrete assets.
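The two accounting styles described above can be illustrated side by side: pooled RAM is one aggregated asset decremented by amount, while each virtual machine is a discrete asset tracked individually. The class and variable names are hypothetical.

```python
# Sketch of per-host asset accounting: aggregated RAM vs. discrete VMs.
class HostAssets:
    def __init__(self, ram_gb, vm_ids):
        self.ram_gb = ram_gb         # pooled RAM aggregated into one asset
        self.free_vms = set(vm_ids)  # each VM is a unique, indivisible asset
        self.used_vms = set()

    def consume_ram(self, gb):
        if gb > self.ram_gb:
            raise ValueError("insufficient pooled RAM")
        self.ram_gb -= gb            # no need to track which exact bytes

    def consume_vm(self, vm_id):
        self.free_vms.remove(vm_id)  # track exactly which VM is in use
        self.used_vms.add(vm_id)

vh1 = HostAssets(10, ["VM1", "VM2"])
vh1.consume_ram(1)       # decrement: 9 GB of the 10 GB remain available
vh1.consume_vm("VM1")    # VM1 is now individually accounted as in use
```

Physically-provisioned servers and network addresses would be modeled like the VMs here: as discrete assets whose availability is accounted individually.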
In one embodiment, an “asset” object within the pooling model includes the concept of one asset source explicitly depending upon another asset source, referred to as source dependency. This allows logically related computing capacity to be decomposed into different asset sources, yet still be used or allocated together. One example of this relationship is that the assets contributed by a VM may depend upon assets contributed by the corresponding virtualization host. As shown, the virtual machines sourced by VM1 and VM2 have a source dependency of VH1, the virtual machines sourced by VM3 and VM4 have a source dependency of VH2, etc. If there is no source dependency, table 409 lists the source dependency as “NONE”. The resource manager 119 has access to this dependency relationship in order to ensure that RAM is reserved from the same virtualization host as is depended upon by the reserved VM. In other words, without this relationship, the resource manager 119 might very well allocate RAM from VH1 and a particular VM residing on VH2, which could cause an invalid hardware configuration.
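The source-dependency relationship described above can be sketched as follows: when a VM asset is reserved, its backing RAM is drawn from the virtualization host the VM depends on, never from an unrelated host. The table layout and function name are illustrative, mirroring the VH1/VH2/NONE dependencies in table 409.

```python
# Sketch of dependency-aware allocation: RAM must be reserved from the same
# virtualization host that the reserved VM depends upon.
assets = {
    "VM1": {"type": "vm", "source_dependency": "VH1"},
    "VM3": {"type": "vm", "source_dependency": "VH2"},
    "VH1": {"type": "ram", "amount_gb": 10, "source_dependency": None},
    "VH2": {"type": "ram", "amount_gb": 10, "source_dependency": None},
}

def reserve_vm_with_ram(vm_id, ram_gb):
    """Reserve a VM together with RAM from the host it depends on."""
    host = assets[vm_id]["source_dependency"]   # follow the dependency link
    if assets[host]["amount_gb"] < ram_gb:
        raise ValueError("host backing this VM lacks sufficient RAM")
    assets[host]["amount_gb"] -= ram_gb         # RAM taken from the same host
    return vm_id, host

vm, host = reserve_vm_with_ram("VM3", 2)
# RAM is drawn from VH2 (the host VM3 resides on), never from VH1
```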
A system according to one embodiment performs the task of accounting for pooled asset usage. The table 409 lists all the assets in a pool without regard for tracking the consumption of those assets. A separate persistence model is used by the resource manager 119 to track asset usage. When including time in the usage tracking model, however, the capacity pooling system serves as a resource scheduling system, supporting the ability for users to schedule the delivery of an application for a specific duration some time in the future.
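When time is added to the usage-tracking model as described above, a reservation amounts to claiming an asset for a future interval, and a new request succeeds only if its interval does not overlap an existing reservation. The sketch below illustrates that idea with hypothetical names; it is not the persistence model itself.

```python
# Sketch of time-aware asset scheduling: a discrete asset may back a future
# reservation only if the requested interval is free.
def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

class ScheduledAsset:
    def __init__(self):
        self.reservations = []             # list of (start, end) intervals

    def reserve(self, start, end):
        for s, e in self.reservations:
            if overlaps(start, end, s, e):
                return False               # asset busy during requested window
        self.reservations.append((start, end))
        return True

vm1 = ScheduledAsset()
vm1.reserve(10, 20)         # e.g. a 10-hour window some time in the future
ok = vm1.reserve(15, 25)    # overlapping request is rejected
ok2 = vm1.reserve(20, 30)   # back-to-back request succeeds
```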
An on-demand application delivery system with capacity pooling according to one embodiment includes a logical resource pool, an application library, a resource manager, and a deployment manager. The logical resource pool includes computer resource assets which includes asset type, amount, and asset source in which each computer resource asset is decomposed to a specified level of granularity. The application library includes application configurations, each including at least one server configuration which includes computer resource asset requirements. The resource manager tracks availability of the computer resource assets, receives requests for application configurations, compares each requested application configuration with available computer resource assets, and reserves resources for each requested application configuration. The deployment manager deploys each requested application configuration using the reserved resources.
A data center system according to one embodiment includes a shared network, one or more virtualization hosts coupled to the shared network, and an on-demand application delivery system. Each virtualization host supports at least one virtual machine. The on-demand application delivery system includes a logical resource pool, an application library, a resource manager, and a deployment manager as previously described. The on-demand application delivery system may be supported by one of the virtualization hosts, by a dedicated virtualization host, by one or more physical servers, etc.
A method of on-demand application delivery with capacity pooling according to one embodiment includes providing computer resource assets including asset type, amount, and asset source, decomposing each computer resource asset to a specified level of granularity, defining application configurations, each including at least one server configuration which includes computer resource asset requirements, tracking availability of the computer resource assets, receiving a request for an application configuration, comparing the requested application configuration with available computer resource assets, and reserving resources for the requested application configuration, and deploying the requested application configuration using the reserved resources.
Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions and variations are possible and contemplated. Those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention.
This application claims the benefit of U.S. Provisional Application Ser. No. 60/788,129 filed on Mar. 31, 2006 which is incorporated herein by reference for all intents and purposes.
Number | Name | Date | Kind |
---|---|---|---|
4912628 | Briggs | Mar 1990 | A |
5062037 | Shorter et al. | Oct 1991 | A |
5201049 | Shorter | Apr 1993 | A |
5611050 | Theimer et al. | Mar 1997 | A |
5757924 | Friedman et al. | May 1998 | A |
5802290 | Casselman | Sep 1998 | A |
5805824 | Kappe | Sep 1998 | A |
5917997 | Bell et al. | Jun 1999 | A |
5996026 | Onodera et al. | Nov 1999 | A |
5999518 | Nattkemper et al. | Dec 1999 | A |
6003050 | Silver et al. | Dec 1999 | A |
6006264 | Colby et al. | Dec 1999 | A |
6023724 | Bhatia et al. | Feb 2000 | A |
6038566 | Tsai | Mar 2000 | A |
6041347 | Harsham et al. | Mar 2000 | A |
6067545 | Wolff | May 2000 | A |
6069894 | Holender et al. | May 2000 | A |
6075938 | Bugnion et al. | Jun 2000 | A |
6092178 | Jindal et al. | Jul 2000 | A |
6104699 | Holender et al. | Aug 2000 | A |
6118784 | Tsuchiya et al. | Sep 2000 | A |
6130892 | Short et al. | Oct 2000 | A |
6185601 | Wolff | Feb 2001 | B1 |
6192417 | Block et al. | Feb 2001 | B1 |
6247057 | Barrera, III | Jun 2001 | B1 |
6256637 | Venkatesh et al. | Jul 2001 | B1 |
6263358 | Lee et al. | Jul 2001 | B1 |
6272523 | Factor | Aug 2001 | B1 |
6272537 | Kekic et al. | Aug 2001 | B1 |
6282602 | Blumenau et al. | Aug 2001 | B1 |
6347328 | Harper et al. | Feb 2002 | B1 |
6370560 | Robertazzi et al. | Apr 2002 | B1 |
6453426 | Gamache et al. | Sep 2002 | B1 |
6496847 | Bugnion et al. | Dec 2002 | B1 |
6535511 | Rao | Mar 2003 | B1 |
6553401 | Carter et al. | Apr 2003 | B1 |
6567839 | Borkenhagen et al. | May 2003 | B1 |
6571283 | Smorodinsky | May 2003 | B1 |
6607545 | Kammerer et al. | Aug 2003 | B2 |
6609213 | Nguyen et al. | Aug 2003 | B1 |
6625705 | Yanai et al. | Sep 2003 | B2 |
6633916 | Kauffman | Oct 2003 | B2 |
6640239 | Gidwani | Oct 2003 | B1 |
6665304 | Beck et al. | Dec 2003 | B2 |
6745303 | Watanabe | Jun 2004 | B2 |
6760775 | Anerousis et al. | Jul 2004 | B1 |
6865613 | Millet et al. | Mar 2005 | B1 |
6880002 | Hirschfeld et al. | Apr 2005 | B2 |
6931003 | Anderson | Aug 2005 | B2 |
6985479 | Leung et al. | Jan 2006 | B2 |
6985485 | Tsuchiya et al. | Jan 2006 | B2 |
6985937 | Keshav et al. | Jan 2006 | B1 |
6990666 | Hirschfeld et al. | Jan 2006 | B2 |
7020720 | Donahue et al. | Mar 2006 | B1 |
7043665 | Kern et al. | May 2006 | B2 |
7065589 | Yamagami | Jun 2006 | B2 |
7076560 | Lango et al. | Jul 2006 | B1 |
7139841 | Somasundaram et al. | Nov 2006 | B1 |
7154891 | Callon | Dec 2006 | B1 |
7200622 | Nakatani et al. | Apr 2007 | B2 |
7215669 | Rao | May 2007 | B1 |
7219161 | Fagundo et al. | May 2007 | B1 |
7222172 | Arakawa et al. | May 2007 | B2 |
7234075 | Sankaran et al. | Jun 2007 | B2 |
7280557 | Biswas et al. | Oct 2007 | B1 |
7287186 | McCrory et al. | Oct 2007 | B2 |
7356679 | Le et al. | Apr 2008 | B1 |
7421505 | Berg | Sep 2008 | B2 |
7574496 | McCrory et al. | Aug 2009 | B2 |
20020065864 | Hartsell et al. | May 2002 | A1 |
20020103889 | Markson et al. | Aug 2002 | A1 |
20020129082 | Baskey et al. | Sep 2002 | A1 |
20020152310 | Jain et al. | Oct 2002 | A1 |
20020159447 | Carey et al. | Oct 2002 | A1 |
20020184642 | Lude et al. | Dec 2002 | A1 |
20020194251 | Richter et al. | Dec 2002 | A1 |
20030005104 | Deboer et al. | Jan 2003 | A1 |
20030005166 | Seidman | Jan 2003 | A1 |
20030018927 | Gadir et al. | Jan 2003 | A1 |
20030023774 | Gladstone et al. | Jan 2003 | A1 |
20030105829 | Hayward | Jun 2003 | A1 |
20030188233 | Lubbers et al. | Oct 2003 | A1 |
20040044778 | Alkhatib et al. | Mar 2004 | A1 |
20040052216 | Roh | Mar 2004 | A1 |
20040078467 | Grosner et al. | Apr 2004 | A1 |
20040186905 | Young et al. | Sep 2004 | A1 |
20050013280 | Buddhikot et al. | Jan 2005 | A1 |
20050044220 | Madhavan | Feb 2005 | A1 |
20050228835 | Roa | Oct 2005 | A1 |
20050229175 | McCrory et al. | Oct 2005 | A1 |
20050240668 | Rolia et al. | Oct 2005 | A1 |
20050240964 | Barrett | Oct 2005 | A1 |
20050246436 | Day et al. | Nov 2005 | A1 |
20050249199 | Albert et al. | Nov 2005 | A1 |
20060013209 | Somasundaram | Jan 2006 | A1 |
20060136490 | Aggarwal et al. | Jun 2006 | A1 |
20060282892 | Jonnala et al. | Dec 2006 | A1 |
20060288251 | Jackson | Dec 2006 | A1 |
20070005769 | Ammerlaan et al. | Jan 2007 | A1 |
20070088721 | Srivastava | Apr 2007 | A1 |
20070180453 | Burr et al. | Aug 2007 | A1 |
Number | Date | Country | |
---|---|---|---|
60788129 | Mar 2006 | US |