The invention relates generally to systems and methods for brokering optimized resource supply costs in a host cloud-based network using predictive workloads, and more particularly, to platforms and techniques for generating sets of predictively re-assigned workloads that can leverage better operating costs or other factors for cloud operators whose users demonstrate reduced resource demand and/or other consumption behavior at offpeak or other times.
The advent of cloud-based computing architectures has opened new possibilities for the rapid and scalable deployment of virtual Web stores, media outlets, social networking sites, and many other on-line sites or services. In general, a cloud-based architecture deploys a set of hosted resources such as processors, operating systems, software and other components that can be combined together to form virtual machines. A user or customer can request the instantiation of a virtual machine or set of machines from those resources from a central server or cloud management system to perform intended tasks, services, or applications. For example, a user may wish to set up and instantiate a virtual server from the cloud to create a storefront to market products or services on a temporary basis, for instance, to sell tickets to or merchandise for an upcoming sports or musical performance. The user can subscribe to the set of resources needed to build and run the set of instantiated virtual machines on a comparatively short-term basis, such as hours or days, for their intended application.
Typically, when a user utilizes a cloud, the user must track the software applications executed in the cloud and/or processes instantiated in the cloud. For example, the user must track the cloud processes to ensure that the correct cloud processes have been instantiated, that the cloud processes are functioning properly and/or efficiently, that the cloud is providing sufficient resources to the cloud processes, and so forth. Due in part to the user's requirements and overall usage of the cloud, the user may have many applications and/or processes instantiated in a cloud at any given instant, and the user's deployment of virtual machines, software, and other resources can change dynamically over time. In cases, the user may also utilize multiple independent clouds to support the user's cloud deployment. That user may further instantiate and use multiple applications or other software or services inside or across multiple of those cloud boundaries, and those resources may be used or consumed by multiple or differing end-user groups in those different cloud networks.
In terms of the procurement of processor, memory, storage, and/or other resources required to support one or more sets of users, the cloud management system of a set of host clouds may locate and install or provide those resources on a subscription, marketplace, and/or other basis. For users whose resource demands demonstrate a time-dependent pattern, such as a relaxation of resource consumption during overnight, offpeak, and other hours or periods, it could be advantageous to time or predict the need to procure resources based on known consumption behavior. In aspects, a cloud operator could scale down or otherwise adjust the resources procured from a set of cloud resource servers if service level needs are known or could be predicted in advance, permitting potentially lower cost, reduced storage needs, decreased failover or rollover capacity, and/or otherwise altered configurations of the resources needed to support users in the operator's set of host clouds. It may be desirable to provide systems and methods for brokering optimized resource supply costs in a host cloud-based network using predictive workloads, in which workloads of users can be predictively re-assigned or shifted to leverage advantageous cloud support conditions in varying resource servers, based on estimated support requirements or levels in predetermined time periods.
Embodiments described herein can be implemented in or supported by a cloud network architecture. As used herein, a “cloud” can comprise a collection of hardware, software, services, and/or resources that can be invoked to instantiate a virtual machine, process, or other resource for a limited or defined duration. As shown for example in
In embodiments, the entire set of resource servers 108 and/or other hardware or software resources used to support one or more clouds 102, along with the set of instantiated virtual machines, can be managed by a cloud management system 104. The cloud management system 104 can comprise a dedicated or centralized server and/or other software, hardware, services, and network tools that communicate via network 106, such as the Internet or other public or private network, with all servers in set of resource servers 108 to manage the cloud 102 and its operation. To instantiate a new or updated set of virtual machines, a user can transmit an instantiation request to the cloud management system 104 for the particular type of virtual machine they wish to invoke for their intended application. A user can for instance make a request to instantiate a set of virtual machines configured for email, messaging or other applications from the cloud 102. The virtual machines can be instantiated as virtual client machines, virtual appliance machines consisting of special-purpose or dedicated-task machines as understood in the art, and/or as other virtual machines or entities. The request to invoke and instantiate the desired complement of virtual machines can be received and processed by the cloud management system 104, which identifies the type of virtual machine, process, or other resource being requested in that platform's associated cloud. The cloud management system 104 can then identify the collection of hardware, software, service, and/or other resources necessary to instantiate that complement of virtual machines or other resources. In embodiments, the set of instantiated virtual machines or other resources can, for example, and as noted, comprise virtual transaction servers used to support Web storefronts, Web pages, and/or other transaction sites.
In embodiments, the user's instantiation request can specify a variety of parameters defining the operation of the set of virtual machines to be invoked. The instantiation request, for example, can specify a defined period of time for which the instantiated collection of machines, services, or processes is needed. The period of time can be, for example, an hour, a day, a month, or other interval of time. In embodiments, the user's instantiation request can specify the instantiation of a set of virtual machines or processes on a task basis, rather than for a predetermined amount or interval of time. For instance, a user could request a set of virtual provisioning servers and other resources until a target software update is completed on a population of corporate or other machines. The user's instantiation request can in further regards specify other parameters that define the configuration and operation of the set of virtual machines or other instantiated resources. For example, the request can specify a specific minimum or maximum amount of processing power or input/output (I/O) throughput that the user wishes to be available to each instance of the virtual machine or other resource. In embodiments, the requesting user can for instance specify a service level agreement (SLA) acceptable for their desired set of applications or services. Other parameters and settings can be used to instantiate and operate a set of virtual machines, software, and other resources in the host clouds. One skilled in the art will realize that the user's request can likewise include combinations of the foregoing exemplary parameters, and others. It may be noted that “user” herein can include a network-level user or subscriber to cloud-based networks, such as a corporation, government entity, educational institution, and/or other entity, including individual users and groups of users.
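For purposes of illustration only, the request parameters described above can be sketched as a simple data structure. The field names and default values below are illustrative assumptions and do not form part of the foregoing description; they merely show how a time-based or task-based instantiation request carrying processing, I/O, and SLA parameters might be represented.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of an instantiation request; all field names are
# illustrative assumptions, not part of the original disclosure.
@dataclass
class InstantiationRequest:
    machine_type: str                     # e.g. "email", "messaging"
    count: int                            # number of virtual machines requested
    duration_hours: Optional[int] = None  # time-based request (hour, day, month)
    task: Optional[str] = None            # or task-based (until task completes)
    min_cpu_ghz: float = 1.0              # minimum processing power per instance
    max_io_mbps: Optional[float] = None   # optional I/O throughput ceiling
    sla: str = "best-effort"              # acceptable service level agreement

# A time-based request for a set of email virtual machines:
req = InstantiationRequest(machine_type="email", count=4,
                           duration_hours=24, sla="99.9%")
print(req.count, req.sla)
```

A task-based request would instead leave `duration_hours` unset and populate `task`, reflecting the software-update example above.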
When the request to instantiate a set of virtual machines or other resources has been received and the necessary resources to build those machines or resources have been identified, the cloud management system 104 can communicate with one or more sets of resource servers 108 to locate resources to supply the required components. Generally, the cloud management system 104 can select servers from the diverse set of resource servers 108 to assemble the various components needed to build the requested set of virtual machines, services, or other resources. It may be noted that in some embodiments, permanent storage, such as optical storage or hard disk arrays, may or may not be included or located within the set of resource servers 108 available to the cloud management system 104, since the set of instantiated virtual machines or other resources may be intended to operate on a purely transient or temporary basis. In embodiments, other hardware, software or other resources not strictly located or hosted in one or more clouds 102 can be accessed and leveraged as needed. For example, other software or services that are provided outside of one or more clouds 102 acting as hosts, and are instead hosted by third parties outside the boundaries of those clouds, can be invoked by in-cloud virtual machines or users. For further example, other non-cloud hardware and/or storage services can be utilized as an extension to the one or more clouds 102 acting as hosts or native clouds, for instance, on an on-demand, subscribed, or event-triggered basis.
With the resource requirements identified for building a network of virtual machines, the cloud management system 104 can extract and build the set of virtual machines or other resources on a dynamic, on-demand basis. For example, one set of resource servers 108 may respond to an instantiation request for a given quantity of processor cycles with an offer to deliver that computational power immediately and guaranteed for the next hour or day. A further set of resource servers 108 can offer to immediately supply communication bandwidth, for example on a guaranteed minimum or best-efforts basis, for instance over a defined window of time. In other embodiments, the set of virtual machines or other resources can be built on a batch basis, or at a particular future time. For example, a set of resource servers 108 may respond to a request for instantiation of virtual machines at a programmed time with an offer to deliver the specified quantity of processor cycles within a specific amount of time, such as the next 12 hours. Other timing and resource configurations are possible.
After interrogating and receiving resource commitments from the set of resource servers 108, the cloud management system 104 can select a group of servers in the set of resource servers 108 that match or best match the instantiation request for each component needed to build the user's requested virtual machine, service, or other resource. The cloud management system 104 for the one or more clouds 102 acting as the destination for the virtual machines can then coordinate the integration of the identified group of servers from the set of resource servers 108, to build and launch the requested set of virtual machines or other resources. The cloud management system 104 can track the identified group of servers selected from the set of resource servers 108, or other distributed resources that are dynamically or temporarily combined, to produce and manage the requested virtual machine population, services, or other cloud-based resources.
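The match-or-best-match selection described above can be sketched, for illustration only, as choosing, for each required component, the committed offer that satisfies the requirement at the lowest cost. The offer fields and the cost-minimizing rule below are assumptions introduced for this sketch and are not the claimed selection method.

```python
# Illustrative best-match selection over resource-server offers. The offer
# dictionary fields ("server", "component", "capacity", "cost") and the
# lowest-cost scoring rule are assumptions for this sketch only.
def select_servers(requirements, offers):
    """requirements: {component: amount needed};
    offers: list of committed offers from interrogated resource servers."""
    selection = {}
    for component, amount in requirements.items():
        candidates = [o for o in offers
                      if o["component"] == component and o["capacity"] >= amount]
        if not candidates:
            raise LookupError(f"no resource server offers {component}")
        # pick the satisfying offer with the lowest cost
        selection[component] = min(candidates, key=lambda o: o["cost"])["server"]
    return selection

offers = [
    {"server": "10.0.0.1", "component": "cpu", "capacity": 16, "cost": 8},
    {"server": "10.0.0.2", "component": "cpu", "capacity": 32, "cost": 6},
    {"server": "10.0.0.3", "component": "bandwidth", "capacity": 100, "cost": 2},
]
print(select_servers({"cpu": 16, "bandwidth": 50}, offers))
# {'cpu': '10.0.0.2', 'bandwidth': '10.0.0.3'}
```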
In embodiments, the cloud management system 104 can generate a resource aggregation table or other record that identifies the various sets of resource servers in set of resource servers 108 that will be used to supply the components of the set of instantiated virtual machines, services, or processes. The selected sets of resource servers can be identified by unique identifiers such as, for instance, Internet protocol (IP) addresses or other addresses. In aspects, different sets of servers in set of resource servers 108 can be selected to deliver different resources to different users and/or for different applications. The cloud management system 104 can register the finalized group of servers in the set of resource servers 108 contributing to or otherwise supporting the set of instantiated machines, services, or processes.
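A resource aggregation table of the kind described above can be sketched, purely for illustration, as a record mapping each component of an instantiated set of virtual machines to the unique identifiers (here, IP addresses) of the servers supplying it. The layout and key names are assumptions for this sketch.

```python
# Minimal illustrative sketch of a resource aggregation table; the layout
# and key names are assumptions, not the patented record format.
resource_aggregation_table = {
    "vm-set-001": {
        "cpu":       ["10.0.0.2"],
        "memory":    ["10.0.0.5", "10.0.0.6"],
        "bandwidth": ["10.0.0.3"],
        "storage":   [],  # may be empty for purely transient machines
    },
}

def registered_servers(table, vm_set):
    """Return the finalized group of servers supporting one VM set."""
    return sorted({ip for ips in table[vm_set].values() for ip in ips})

print(registered_servers(resource_aggregation_table, "vm-set-001"))
# ['10.0.0.2', '10.0.0.3', '10.0.0.5', '10.0.0.6']
```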
The cloud management system 104 can then set up and launch the initiation process to instantiate the virtual machines, processes, services, and/or other resources to be hosted and delivered from the one or more clouds 102. The cloud management system 104 can for instance transmit an instantiation command or instruction to the registered group of servers in the set of resource servers 108. The cloud management system 104 can receive a confirmation message back from each registered server in set of resource servers 108 indicating a status or state regarding the provisioning of their respective resources. Various registered resource servers may confirm, for example, the availability of a dedicated amount of processor cycles, amounts of electronic memory, communications bandwidth, services, and/or applications or other software prepared to be served and delivered.
As shown for example in
In embodiments, the cloud management system 104 can further store, track and manage each user's identity and associated set of rights or entitlements to software, hardware, and other resources. Each user that operates a virtual machine or service in the set of virtual machines in the cloud can have specific rights and resources assigned and made available to them, with associated access rights and security provisions. The cloud management system 104 can track and configure specific actions that each user can perform, such as the ability to provision a set of virtual machines with software applications or other resources, configure a set of virtual machines to desired specifications, submit jobs to the set of virtual machines or other host, manage other users of the set of instantiated virtual machines 116 or other resources, and/or other privileges, entitlements, or actions. The cloud management system 104 associated with the virtual machine(s) of each user can further generate records of the usage of instantiated virtual machines to permit tracking, billing, and auditing of the resources and services consumed by the user or set of users. In aspects of the present teachings, the tracking of usage activity for one or more user (including network level user and/or end-user) can be abstracted from any one cloud to which that user is registered, and made available from an external or independent usage tracking service capable of tracking software and other usage across an arbitrary collection of clouds, as described herein. In embodiments, the cloud management system 104 of an associated cloud can for example meter the usage and/or duration of the set of instantiated virtual machines 116, to generate subscription and/or billing records for a user that has launched those machines. In aspects, tracking records can in addition or instead be generated by an internal service operating within a given cloud. 
Other subscription, billing, entitlement and/or value arrangements are possible.
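The metering of usage and/or duration to generate subscription and billing records, as described above, can be sketched for illustration as follows. The hourly-rate model and record fields are assumptions introduced for this sketch; actual subscription, billing, and entitlement arrangements can vary as noted.

```python
# Illustrative metering of instantiated virtual machine usage into billing
# records; the flat hourly rate and record fields are assumptions only.
def billing_records(usage_events, hourly_rate):
    """usage_events: list of (user, vm_id, hours_used) tuples."""
    records = []
    for user, vm_id, hours in usage_events:
        records.append({"user": user, "vm": vm_id, "hours": hours,
                        "charge": round(hours * hourly_rate, 2)})
    return records

events = [("acme-corp", "vm-01", 5), ("acme-corp", "vm-02", 2.5)]
print(billing_records(events, hourly_rate=0.12))
```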
The cloud management system 104 can configure each virtual machine in set of instantiated virtual machines 116 to be made available to users via one or more networks 106, such as the Internet or other public or private networks. Those users can for instance access the set of instantiated virtual machines 116 via a browser interface, via an application server such as a JAVA™ server, via an application programming interface (API), and/or other interface or mechanism. Each instantiated virtual machine in set of instantiated virtual machines 116 can likewise communicate with its associated cloud management system 104 and the registered servers in set of resource servers 108 via a standard Web application programming interface (API), or via other calls, protocols, and/or interfaces. The set of instantiated virtual machines 116 can likewise communicate with each other, as well as other sites, servers, locations, and resources available via the Internet or other public or private networks, whether within a given cloud in one or more clouds 102, or between those or other clouds.
It may be noted that while a browser interface or other front-end can be used to view and operate the set of instantiated virtual machines 116 from a client or terminal, the processing, memory, communications, storage, and other hardware as well as software resources required to be combined to build the virtual machines or other resources are all hosted remotely in the one or more clouds 102. In embodiments, the set of virtual machines 116 or other services, machines, or resources may not depend in any degree on or require the user's own on-premise hardware or other resources. In embodiments, a user can therefore request and instantiate a set of virtual machines or other resources on a purely off-premise basis, for instance to build and launch a virtual storefront, messaging site, and/or any other application. Likewise, one or more clouds 102 can also be formed in whole or part from resources hosted or maintained by the users of those clouds, themselves.
Because the cloud management system 104 in one regard specifies, builds, operates and manages the set of instantiated virtual machines 116 on a logical or virtual level, the user can request and receive different sets of virtual machines and other resources on a real-time or near real-time basis, without a need to specify, install, or configure any particular hardware. The user's set of instantiated virtual machines 116, processes, services, and/or other resources can in one regard therefore be scaled up or down immediately or virtually immediately on an on-demand basis, if desired. In embodiments, the set of resource servers 108 that are accessed by the cloud management system 104 to support the set of instantiated virtual machines 116 or processes can change or be substituted, over time. The type and operating characteristics of the set of instantiated virtual machines 116 can nevertheless remain constant or virtually constant, since instances are assembled from a collection of abstracted resources that can be selected and maintained from diverse sources based on uniform specifications. Conversely, the users of the set of instantiated virtual machines 116 can also change or update the resource or operational specifications of those machines at any time. The cloud management system 104 and/or other logic can then adapt the allocated resources for that population of virtual machines or other entities, on a dynamic basis.
In terms of network management of the set of instantiated virtual machines 116 that have been successfully configured and instantiated, the one or more cloud management systems 104 associated with those machines can perform various network management tasks including security, maintenance, and metering for billing or subscription purposes. The cloud management system 104 of one or more clouds 102 can, for example, install, initiate, suspend, or terminate instances of applications or appliances on individual machines. The cloud management system 104 can similarly monitor one or more operating virtual machines to detect any virus or other rogue process on individual machines, and for instance terminate an application identified as infected, or a virtual machine detected to have entered a fault state. The cloud management system 104 can likewise manage the set of instantiated virtual machines 116 or other resources on a network-wide or other collective basis, for instance, to push the delivery of a software upgrade to all active virtual machines or subsets of machines. Other network management processes can be carried out by cloud management system 104 and/or other associated logic.
In embodiments, more than one set of virtual machines can be instantiated in a given cloud at the same time, at overlapping times, and/or at successive times or intervals. The cloud management system 104 can, in such implementations, build, launch and manage multiple sets of virtual machines as part of the set of instantiated virtual machines 116 based on the same or different underlying set of resource servers 108, with populations of different virtual machines such as may be requested by the same or different users. The cloud management system 104 can institute and enforce security protocols in one or more clouds 102 hosting one or more sets of virtual machines. Each of the individual sets or subsets of virtual machines in the set of instantiated virtual machines 116 can be hosted in a respective partition or sub-cloud of the resources of the main cloud 102. The cloud management system 104 of one or more clouds 102 can for example deploy services specific to isolated or defined sub-clouds, or isolate individual workloads/processes within the cloud to a specific sub-cloud or other sub-domain or partition of the one or more clouds 102 acting as host. The subdivision of one or more clouds 102 into distinct transient sub-clouds, sub-components, or other subsets which have assured security and isolation features can assist in establishing a multiple user or multi-tenant cloud arrangement. In a multiple-user scenario, each of the multiple users can use the cloud platform as a common utility while retaining the assurance that their information is secure from other users of the same one or more clouds 102. In further embodiments, sub-clouds can nevertheless be configured to share resources, if desired.
In embodiments, and as also shown in
In the foregoing and other embodiments, the user making an instantiation request or otherwise accessing or utilizing the cloud network can be a person, customer, subscriber, administrator, corporation, organization, government, and/or other entity. In embodiments, the user can be or include another virtual machine, application, service and/or process. In further embodiments, multiple users or entities can share the use of a set of virtual machines or other resources.
In aspects, the cloud management system 104 and/or other logic or service that manages, configures, and tracks cloud activity can be configured to track and identify aggregate usage patterns of one or more users, and relate those patterns to the available set or sets of resource servers 108 to generate a predictive re-balancing or re-distribution of the servers and resources that will be required to support differing workloads over different time periods. In those regards,
In aspects, the consumption of resources in the set of host clouds 142, the assignment of user workloads to specific support servers in the set of support servers 108, generation of related billing events, and other workload and subscription-related activities can be tracked and managed by a workload management module 140, which can be hosted in the cloud management system 104 and/or in other locations, resources, or services. According to aspects, the workload management module 140 can communicate with the set of resource servers 108 including hardware support servers, and/or other resource providers, such as the vendors of software such as operating systems, applications, utilities, and/or other programs, services, and/or related resources. The cloud management system 104 can maintain part or all of the terms, conditions, limits, criteria, stipulations, and/or other parameters of the user's subscription to one or more resources hosted or provisioned in the set of host clouds 142, and for instance reflected in the set of subscription parameters 148. In embodiments, the relationship between the user premise 144 when present and the set of host clouds 142 can be configured to operate on a rollover or failover basis, for instance, to provide instances of virtual machines for the user when the installed hardware and associated resources of the user premise 144 are insufficient to support immediate processing, throughput, and/or other demands. In aspects, the user can operate virtual machines, virtual appliances, and/or other entities in the set of host clouds 142, and each host cloud in the set of host clouds 142 can capture and store a set of local usage data 152 reflecting those operations.
The set of local usage data 152 can record the consumption or use of resources in a local host cloud in the set of host clouds 142, such as the number of instances of software including operating systems and applications, processor resources, memory resources, communications resources, storage resources, and/or other elements or resources. The cloud management system 104, workload management module 140, and/or other logic or service can periodically receive the set of local usage data 152 and/or updates to that information from one or more host clouds in the set of host clouds 142. The receipt of the set of local usage data 152 or any portion of the set of local usage data 152 can be performed in aspects on a pull or demand basis, where the workload management module 140 and/or other logic can issue commands or instructions to one or more host clouds in the set of host clouds 142, and receive that data back from the interrogated cloud or clouds. In aspects, the set of local usage data 152 can be transmitted to the workload management module 140 on a push basis, for instance, on a predetermined, event-triggered, and/or other basis initiated by one or more of the host clouds in set of host clouds 142, themselves. Other channels, schedules, and techniques for the collection of the set of local usage data 152 from any one or more of the set of host clouds 142 can be used.
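The pull or demand-basis collection described above can be sketched, for illustration only, as the workload management module interrogating each host cloud and gathering its local usage records. The polling interface and record fields below are assumptions for this sketch; push-basis delivery would instead be initiated by the host clouds themselves.

```python
# Illustrative pull-basis collection of local usage data from each host
# cloud; the report_usage() interface and record fields are assumptions.
def pull_local_usage(host_clouds):
    """Interrogate each host cloud and gather its local usage records."""
    collected = []
    for cloud in host_clouds:
        collected.extend(cloud.report_usage())  # demand-basis interrogation
    return collected

class StubHostCloud:
    """Stand-in for one host cloud in the set of host clouds (illustrative)."""
    def __init__(self, name, records):
        self.name, self.records = name, records
    def report_usage(self):
        # tag each local record with the reporting cloud's identity
        return [dict(r, cloud=self.name) for r in self.records]

clouds = [StubHostCloud("east", [{"hour": 15, "cpu": 12}]),
          StubHostCloud("west", [{"hour": 15, "cpu": 7}])]
data = pull_local_usage(clouds)
print(len(data), data[0]["cloud"])
```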
After receipt of the set of local usage data 152, any portion or component of the set of local usage data 152, and/or updates to the same, the workload management module 140 can collect and aggregate the set of local usage data 152 from the various host clouds and organize that data in a set of aggregate usage history data 146. The set of aggregate usage history data 146 can reflect recent and/or accumulated usage consumption by the subject user(s) in all of the set of host clouds 142, over comparatively short-term periods or intervals such as minutes, one or more hours, one day, a number of days, a week, and/or over other periods. In aspects, the workload management module 140 can collect the set of local usage data 152 regardless of whether those clouds are configured to communicate with each other. In aspects, the set of aggregate usage history data 146 can present to the workload management module 140 and/or other logic the combined resource consumption by the user across the user premise 144 and/or all operating virtual machines or entities in the set of host clouds 142, on an hour-by-hour and/or other relatively short-term basis. In aspects, cloud management system 104, the workload management module 140 and/or other logic or service can operate on the set of aggregate usage history data 146 to generate a set of predictively re-assigned workloads 162 that can be advantageously shifted or re-distributed to different support servers in the set of support servers 108 over different time periods, to offer the cloud operator or other entity greater flexibility in procurement costs, service levels, and/or other factors or conditions.
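The hour-by-hour aggregation described above can be sketched, for illustration, as folding per-cloud usage records into a combined history keyed by hour. The record keys and resource names below are assumptions introduced for this sketch.

```python
from collections import defaultdict

# Illustrative aggregation of per-cloud local usage records into an
# hour-by-hour combined history; record keys are assumptions only.
def aggregate_usage(local_records):
    history = defaultdict(lambda: defaultdict(int))
    for rec in local_records:
        for resource, amount in rec["usage"].items():
            history[rec["hour"]][resource] += amount  # sum across host clouds
    return {hour: dict(res) for hour, res in history.items()}

records = [
    {"cloud": "east", "hour": 3,  "usage": {"cpu": 4,  "memory_gb": 8}},
    {"cloud": "west", "hour": 3,  "usage": {"cpu": 2,  "memory_gb": 4}},
    {"cloud": "east", "hour": 15, "usage": {"cpu": 12, "memory_gb": 32}},
]
print(aggregate_usage(records))
# {3: {'cpu': 6, 'memory_gb': 12}, 15: {'cpu': 12, 'memory_gb': 32}}
```

The aggregated history illustrates the offpeak pattern noted earlier: consumption at hour 3 (overnight) is a fraction of consumption at hour 15.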
More particularly, and as for instance shown in
In aspects according to the present teachings, the cloud management system 104, workload management module 140, and/or other logic or service can examine the set of operating workloads 166 and generate a predictive mapping or transition of those workloads to a different set or subset of the set of support servers 108, for one or more workload assignment periods 164 in the future. As shown for instance in
In 612, the cloud management system 104, workload management module 140, and/or other logic or service can generate a set of predictively re-assigned workloads 162 based on, for instance, the set of predictive workloads 172, the aggregate resource usage history 146, the one or more workload assignment periods 164, information relating to the availability, pricing, service levels, and/or other parameters related to the resources provided by the set of resource servers 108, and/or other data or information. In aspects, the set of predictively re-assigned workloads 162 can include, for instance, a specification or indication to move the provisioning of processor and memory resources to a second operator of a group of servers in the set of resource servers from 7 p.m. to 8 p.m. from a first operator who provided or is scheduled to provide the same resources from 3 p.m. to 4 p.m. In aspects, the shift of the selected providers in or from the set of resource servers 108 can be made based on business rules or logic, such as to select a new resource host in those servers if the cost of those resources is at least 10% less than an original source during one or more workload assignment periods 164, while still providing assurance of at least 90% of required service levels with options for re-clouding or rollover of any needed shortfall. Other logic, functions, and/or rules can be used to identify or generate the set of predictively re-assigned workloads 162.
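The business rule just described can be sketched directly: re-assign a workload to a candidate resource host only if its cost is at least 10% below the original source for the assignment period while still assuring at least 90% of the required service level. The 10% and 90% thresholds come from the text above; the function signature and cost units are illustrative assumptions.

```python
# Direct sketch of the example business rule: at least 10% cheaper AND at
# least 90% of the required service level assured. The thresholds are from
# the description; the parameter names and units are assumptions.
def should_reassign(original_cost, candidate_cost,
                    candidate_service_level, required_service_level):
    cheaper_enough = candidate_cost <= 0.90 * original_cost
    service_ok = candidate_service_level >= 0.90 * required_service_level
    return cheaper_enough and service_ok

# A candidate 15% cheaper that assures 95% of the required level qualifies:
print(should_reassign(100.0, 85.0, 0.95, 1.0))   # True
# A candidate only 5% cheaper does not:
print(should_reassign(100.0, 95.0, 0.95, 1.0))   # False
```

Any shortfall below the full required service level would, per the description, be covered by options for re-clouding or rollover.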
In 614, the cloud management system 104, workload management module 140, and/or other logic or service can migrate and/or distribute one or more, or all, of the set of predictively re-assigned workloads 162 to one or more selected servers in the set of resource servers 108 for the one or more workload assignment periods 164. In aspects, the cloud management system 104, workload management module 140, and/or other logic or service can initiate that migration, shift, and/or re-distribution of the user's set of operating workloads 166 and/or other executing applications, services, and/or processes to a new or different set or subset of the set of resource servers 108 using configuration commands transmitted via one or more networks 106 and/or other network management channels, connections, and/or operations. In 616, the cloud management system 104, workload management module 140, and/or other logic or service can capture the metered or other subscription or support costs, and/or other factors related to operating the set of predictively re-assigned workloads 162 in the selected set or subset of support servers 108 during the one or more workload assignment periods 164. In 618, the cloud management system 104, workload management module 140, and/or other logic or service can capture and/or record additional usage data from the set of predictively re-assigned workloads 162, and store that data to the set of aggregate usage history 146, as appropriate. In 620, the cloud management system 104, workload management module 140, and/or other logic or service can return the set of predictively re-assigned workloads 162 to the original set of resource servers 108 from which the re-assigned workloads were migrated, for instance after the expiration of the one or more workload assignment periods 164. In 622, as understood by persons skilled in the art, processing can repeat, return to a prior processing point, jump to a further processing point, or end.
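The migrate-and-return cycle of 614 through 620 can be sketched, for illustration only, as shifting a workload to the re-assigned servers for the duration of an assignment period and returning it to the original servers on expiration. The scheduler interface and log format below are assumptions for this sketch, not the disclosed control flow.

```python
# Illustrative migrate-and-return cycle for one workload assignment period;
# the migrate() callback and log format are assumptions for this sketch.
def run_assignment_period(workload, original_servers, reassigned_servers,
                          migrate, period_hours, log):
    migrate(workload, reassigned_servers)           # 614: shift for the period
    log.append(("migrated", workload, tuple(reassigned_servers), period_hours))
    # 616/618: metered costs and additional usage data captured here
    migrate(workload, original_servers)             # 620: return on expiration
    log.append(("returned", workload, tuple(original_servers)))

log = []
run_assignment_period("wl-7", ["10.0.0.1"], ["10.0.1.9"],
                      migrate=lambda wl, servers: None,
                      period_hours=1, log=log)
print([entry[0] for entry in log])   # ['migrated', 'returned']
```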
The foregoing description is illustrative, and variations in configuration and implementation may occur to persons skilled in the art. For example, while embodiments have been described in which the cloud management system 104 for a particular cloud resides in a single server or platform, in embodiments the cloud management system 104 and associated logic can be distributed among multiple servers, services, or systems. Similarly, while embodiments have been described in which one group of servers within a set of resource servers 108 can provide one component to build a requested set of virtual machines, in embodiments, one group of resource servers can deliver multiple components to populate the requested set of instantiated virtual machines 116, and/or other machines, entities, services, or resources. Other resources described as singular or integrated can in embodiments be plural or distributed, and resources described as multiple or distributed can in embodiments be combined. The scope of the invention is accordingly intended to be limited only by the following claims.
Number | Name | Date | Kind |
---|---|---|---|
6021402 | Takriti | Feb 2000 | A |
6463457 | Armentrout et al. | Oct 2002 | B1 |
6988087 | Kanai et al. | Jan 2006 | B2 |
7313796 | Hamilton et al. | Dec 2007 | B2 |
7439937 | Ben-Shachar et al. | Oct 2008 | B2 |
7529785 | Spertus et al. | May 2009 | B1 |
7546462 | Upton | Jun 2009 | B2 |
7596620 | Colton et al. | Sep 2009 | B1 |
8104041 | Belady et al. | Jan 2012 | B2 |
8214461 | Graupner et al. | Jul 2012 | B1 |
8413155 | Jackson | Apr 2013 | B2 |
8464255 | Nathuji | Jun 2013 | B2 |
8560677 | VanGilder et al. | Oct 2013 | B2 |
20010039497 | Hubbard | Nov 2001 | A1 |
20020069276 | Hino et al. | Jun 2002 | A1 |
20020120744 | Chellis et al. | Aug 2002 | A1 |
20020165819 | McKnight et al. | Nov 2002 | A1 |
20030037258 | Koren | Feb 2003 | A1 |
20030105810 | McCrory et al. | Jun 2003 | A1 |
20030110252 | Yang-Huffman | Jun 2003 | A1 |
20030135609 | Carlson et al. | Jul 2003 | A1 |
20040162902 | Davis | Aug 2004 | A1 |
20040210591 | Hirschfeld et al. | Oct 2004 | A1 |
20040210627 | Kroening | Oct 2004 | A1 |
20040268347 | Knauerhase et al. | Dec 2004 | A1 |
20050131898 | Fatula | Jun 2005 | A1 |
20050144060 | Chen et al. | Jun 2005 | A1 |
20050182727 | Robert et al. | Aug 2005 | A1 |
20050283784 | Suzuki | Dec 2005 | A1 |
20050289540 | Nguyen et al. | Dec 2005 | A1 |
20060075042 | Wang et al. | Apr 2006 | A1 |
20060085530 | Garrett | Apr 2006 | A1 |
20060085824 | Bruck et al. | Apr 2006 | A1 |
20060130144 | Wernicke | Jun 2006 | A1 |
20060177058 | Sarwono et al. | Aug 2006 | A1 |
20060224436 | Matsumoto et al. | Oct 2006 | A1 |
20070011291 | Mi et al. | Jan 2007 | A1 |
20070028001 | Phillips et al. | Feb 2007 | A1 |
20070226715 | Kimura et al. | Sep 2007 | A1 |
20070283282 | Bonfiglio et al. | Dec 2007 | A1 |
20070294676 | Mellor et al. | Dec 2007 | A1 |
20080080396 | Meijer et al. | Apr 2008 | A1 |
20080080718 | Meijer et al. | Apr 2008 | A1 |
20080082538 | Meijer et al. | Apr 2008 | A1 |
20080082601 | Meijer et al. | Apr 2008 | A1 |
20080083025 | Meijer et al. | Apr 2008 | A1 |
20080083040 | Dani et al. | Apr 2008 | A1 |
20080086727 | Lam et al. | Apr 2008 | A1 |
20080091613 | Gates et al. | Apr 2008 | A1 |
20080104608 | Hyser et al. | May 2008 | A1 |
20080215796 | Lam et al. | Sep 2008 | A1 |
20080240150 | Dias et al. | Oct 2008 | A1 |
20090012885 | Cahn | Jan 2009 | A1 |
20090025006 | Waldspurger | Jan 2009 | A1 |
20090037496 | Chong et al. | Feb 2009 | A1 |
20090070771 | Yuyitung et al. | Mar 2009 | A1 |
20090089078 | Bursey | Apr 2009 | A1 |
20090099940 | Frederick et al. | Apr 2009 | A1 |
20090132695 | Surtani et al. | May 2009 | A1 |
20090177514 | Hudis et al. | Jul 2009 | A1 |
20090210527 | Kawato | Aug 2009 | A1 |
20090210875 | Bolles et al. | Aug 2009 | A1 |
20090217267 | Gebhart et al. | Aug 2009 | A1 |
20090222805 | Faus et al. | Sep 2009 | A1 |
20090228950 | Reed et al. | Sep 2009 | A1 |
20090248693 | Sagar et al. | Oct 2009 | A1 |
20090249287 | Patrick | Oct 2009 | A1 |
20090260007 | Beaty et al. | Oct 2009 | A1 |
20090265707 | Goodman et al. | Oct 2009 | A1 |
20090271324 | Jandhyala | Oct 2009 | A1 |
20090276771 | Nickolov et al. | Nov 2009 | A1 |
20090287691 | Sundaresan et al. | Nov 2009 | A1 |
20090293056 | Ferris | Nov 2009 | A1 |
20090299905 | Mestha et al. | Dec 2009 | A1 |
20090299920 | Ferris et al. | Dec 2009 | A1 |
20090300057 | Friedman | Dec 2009 | A1 |
20090300149 | Ferris et al. | Dec 2009 | A1 |
20090300151 | Friedman et al. | Dec 2009 | A1 |
20090300152 | Ferris | Dec 2009 | A1 |
20090300169 | Sagar et al. | Dec 2009 | A1 |
20090300210 | Ferris | Dec 2009 | A1 |
20090300423 | Ferris | Dec 2009 | A1 |
20090300607 | Ferris et al. | Dec 2009 | A1 |
20090300608 | Ferris et al. | Dec 2009 | A1 |
20090300635 | Ferris | Dec 2009 | A1 |
20090300641 | Friedman et al. | Dec 2009 | A1 |
20090300719 | Ferris | Dec 2009 | A1 |
20100004965 | Eisen | Jan 2010 | A1 |
20100042720 | Stienhans et al. | Feb 2010 | A1 |
20100050172 | Ferris | Feb 2010 | A1 |
20100057831 | Williamson | Mar 2010 | A1 |
20100058347 | Smith et al. | Mar 2010 | A1 |
20100131324 | Ferris | May 2010 | A1 |
20100131590 | Coleman et al. | May 2010 | A1 |
20100131624 | Ferris | May 2010 | A1 |
20100131649 | Ferris | May 2010 | A1 |
20100131948 | Ferris | May 2010 | A1 |
20100131949 | Ferris | May 2010 | A1 |
20100132016 | Ferris | May 2010 | A1 |
20100169477 | Stienhans et al. | Jul 2010 | A1 |
20100211669 | Dalgas et al. | Aug 2010 | A1 |
20100217850 | Ferris | Aug 2010 | A1 |
20100217864 | Ferris | Aug 2010 | A1 |
20100217865 | Ferris | Aug 2010 | A1 |
20100220622 | Wei | Sep 2010 | A1 |
20100299366 | Stienhans et al. | Nov 2010 | A1 |
20100306354 | DeHaan et al. | Dec 2010 | A1 |
20100306377 | DeHaan et al. | Dec 2010 | A1 |
20100306379 | Ferris | Dec 2010 | A1 |
20100306566 | DeHaan et al. | Dec 2010 | A1 |
20100306765 | DeHaan | Dec 2010 | A1 |
20100306767 | DeHaan | Dec 2010 | A1 |
20110016214 | Jackson | Jan 2011 | A1 |
20110055034 | Ferris et al. | Mar 2011 | A1 |
20110055377 | DeHaan | Mar 2011 | A1 |
20110055378 | Ferris et al. | Mar 2011 | A1 |
20110055396 | DeHaan | Mar 2011 | A1 |
20110055398 | DeHaan et al. | Mar 2011 | A1 |
20110099403 | Miyata et al. | Apr 2011 | A1 |
20110131335 | Spaltro et al. | Jun 2011 | A1 |
20110138384 | Bozek | Jun 2011 | A1 |
20110145392 | Dawson et al. | Jun 2011 | A1 |
20110167469 | Letca et al. | Jul 2011 | A1 |
20110213508 | Mandagere et al. | Sep 2011 | A1 |
20110239010 | Jain | Sep 2011 | A1 |
20110289329 | Bose et al. | Nov 2011 | A1 |
20110302078 | Failing | Dec 2011 | A1 |
20120023223 | Branch et al. | Jan 2012 | A1 |
20120054345 | Sahu et al. | Mar 2012 | A1 |
20120254640 | Agarwala et al. | Oct 2012 | A1 |
20120296852 | Gmach et al. | Nov 2012 | A1 |
20120310765 | Masters | Dec 2012 | A1 |
20130159596 | Van De Ven et al. | Jun 2013 | A1 |
Entry |
---|
“rBuilder and the rPath Appliance Platform”, 2007 rPath, Inc., www.rpath.com, 3 pgs. |
White Paper—“Best Practices for Building Virtual Appliances”, 2008 rPath, Inc., www.rpath.com, 6 pgs. |
DeHaan et al., “Systems and Methods for Secure Distributed Storage”, U.S. Appl. No. 12/610,081, filed Oct. 30, 2009. |
Ferris et al., “Methods and Systems for Monitoring Cloud Computing Environments”, U.S. Appl. No. 12/627,764, filed Nov. 30, 2009. |
Ferris et al., “Methods and Systems for Detecting Events in Cloud Computing Environments and Performing Actions Upon Occurrence of the Events”, U.S. Appl. No. 12/627,646, filed Nov. 30, 2009. |
Ferris et al., “Methods and Systems for Verifying Software License Compliance in Cloud Computing Environments”, U.S. Appl. No. 12/627,643, filed Nov. 30, 2009. |
Ferris et al., “Systems and Methods for Service Aggregation Using Graduated Service Levels in a Cloud Network”, U.S. Appl. No. 12/628,112, filed Nov. 30, 2009. |
Ferris et al., “Methods and Systems for Generating a Software License Knowledge Base for Verifying Software License Compliance in Cloud Computing Environments”, U.S. Appl. No. 12/628,156, filed Nov. 30, 2009. |
Ferris et al., “Methods and Systems for Converting Standard Software Licenses for Use in Cloud Computing Environments”, U.S. Appl. No. 12/714,099, filed Feb. 26, 2010. |
Ferris et al., “Systems and Methods for Managing a Software Subscription in a Cloud Network”, U.S. Appl. No. 12/714,096, filed Feb. 26, 2010. |
Ferris et al., “Methods and Systems for Providing Deployment Architectures in Cloud Computing Environments”, U.S. Appl. No. 12/714,427, filed Feb. 26, 2010. |
Ferris et al., “Methods and Systems for Matching Resource Requests with Cloud Computing Environments”, U.S. Appl. No. 12/714,113, filed Feb. 26, 2010. |
Ferris et al., “Systems and Methods for Generating Cross-Cloud Computing Appliances”, U.S. Appl. No. 12/714,315, filed Feb. 26, 2010. |
Ferris et al., “Systems and Methods for Cloud-Based Brokerage Exchange of Software Entitlements”, U.S. Appl. No. 12/714,302, filed Feb. 26, 2010. |
Ferris et al., “Methods and Systems for Offering Additional License Terms During Conversion of Standard Software Licenses for Use in Cloud Computing Environments”, U.S. Appl. No. 12/714,065, filed Feb. 26, 2010. |
Ferris et al., “Systems and Methods for or a Usage Manager for Cross-Cloud Appliances”, U.S. Appl. No. 12/714,334, filed Feb. 26, 2010. |
Ferris et al., “Systems and Methods for Delivery of User-Controlled Resources in Cloud Environments Via a Resource Specification Language Wrapper”, U.S. Appl. No. 12/790,294, filed May 28, 2010. |
Ferris et al., “Systems and Methods for Managing Multi-Level Service Level Agreements in Cloud-Based Networks”, U.S. Appl. No. 12/789,660, filed May 28, 2010. |
Ferris et al., “Methods and Systems for Generating Cross-Mapping of Vendor Software in a Cloud Computing Environment”, U.S. Appl. No. 12/790,527, filed May 28, 2010. |
Ferris et al., “Methods and Systems for Cloud Deployment Analysis Featuring Relative Cloud Resource Importance”, U.S. Appl. No. 12/790,366, filed May 28, 2010. |
Ferris et al., “Systems and Methods for Generating Customized Build Options for Cloud Deployment Matching Usage Profile Against Cloud Infrastructure Options”, U.S. Appl. No. 12/789,701, filed May 28, 2010. |
Ferris et al., “Systems and Methods for Exporting Usage History Data as Input to a Management Platform of a Target Cloud-Based Network”, U.S. Appl. No. 12/790,415, filed May 28, 2010. |
Ferris et al., “Systems and Methods for Cross-Vendor Mapping Service in Cloud Networks”, U.S. Appl. No. 12/790,162, filed May 28, 2010. |
Ferris et al., “Systems and Methods for Cross-Cloud Vendor Mapping Service in a Dynamic Cloud Marketplace”, U.S. Appl. No. 12/790,229, filed May 28, 2010. |
Ferris et al., “Systems and Methods for Aggregate Monitoring of Utilization Data for Vendor Products in Cloud Networks”, U.S. Appl. No. 12/790,039, filed May 28, 2010. |
Ferris et al., “Systems and Methods for Combinatorial Optimization of Multiple Resources Across a Set of Cloud-Based Networks”, U.S. Appl. No. 12/953,718, filed Nov. 24, 2010. |
Ferris et al., “Systems and Methods for Matching a Usage History to a New Cloud”, U.S. Appl. No. 12/953,757, filed Nov. 24, 2010. |
Ferris et al., “Systems and Methods for Identifying Usage Histories for Producing Optimized Cloud Utilization”, U.S. Appl. No. 12/952,930, filed Nov. 23, 2010. |
Ferris et al., “Systems and Methods for Identifying Service Dependencies in a Cloud Deployment”, U.S. Appl. No. 12/952,857, filed Nov. 23, 2010. |
Ferris et al., “Systems and Methods for Migrating Subscribed Services in a Cloud Deployment”, U.S. Appl. No. 12/955,277, filed Nov. 29, 2010. |
Ferris et al., “Systems and Methods for Migrating Subscribed Services from a Set of Clouds to a Second Set of Clouds”, U.S. Appl. No. 12/957,281, filed Nov. 30, 2010. |
Morgan, “Systems and Methods for Generating Multi-Cloud Incremental Billing Capture and Administration”, U.S. Appl. No. 12/954,323, filed Nov. 24, 2010. |
Morgan, “Systems and Methods for Aggregating Marginal Subscription Offsets in Set of Multiple Host Clouds”, U.S. Appl. No. 12/954,400, filed Nov. 24, 2010. |
Morgan, “Systems and Methods for Generating Dynamically Configurable Subscription Parameters for Temporary Migration of Predictive User Workloads in Cloud Network”, U.S. Appl. No. 12/954,378, filed Nov. 24, 2010. |
Morgan, “Systems and Methods for Managing Subscribed Resource Limits in Cloud Network Using Variable or Instantaneous Consumption Tracking Periods”, U.S. Appl. No. 12/964,352, filed Nov. 24, 2010. |
Ferris et al., “Systems and Methods for Migrating Software Modules into One or More Clouds”, U.S. Appl. No. 12/952,701, filed Nov. 23, 2010. |
Ferris et al., “Systems and Methods for Reclassifying Virtual Machines to Target Virtual Machines or Appliances Based on Code Analysis in a Cloud Environment”, U.S. Appl. No. 12/957,267, filed Nov. 30, 2010. |
Morgan, “Systems and Methods for Generating Optimized Resource Consumption Periods for Multiple Users on Combined Basis”, U.S. Appl. No. 13/037,359, filed Mar. 1, 2011. |
Morgan, “Systems and Methods for Metering Cloud Resource Consumption Using Multiple Hierarchical Subscription Periods”, U.S. Appl. No. 13/037,360, filed Mar. 1, 2011. |
Morgan, “Systems and Methods for Generating Marketplace Brokerage Exchange of Excess Subscribed Resources Using Dynamic Subscription Periods”, U.S. Appl. No. 13/037,351, filed Feb. 28, 2011. |
Morgan, “Systems and Methods for Detecting Resource Consumption Events Over Sliding Intervals in Cloud-Based Network”, U.S. Appl. No. 13/149,235, filed May 31, 2011. |
Morgan, “Systems and Methods for Triggering Workload Movement Based on Policy Stack Having Multiple Selectable Inputs”, U.S. Appl. No. 13/149,418, filed May 31, 2011. |
Morgan, “Systems and Methods for Cloud Deployment Engine for Selective Workload Migration or Federation Based on Workload Conditions”, U.S. Appl. No. 13/117,937, filed May 27, 2011. |
Morgan, “Systems and Methods for Tracking Cloud Installation Information Using Cloud-Aware Kernel of Operating System”, U.S. Appl. No. 13/149,750, filed May 31, 2011. |
Morgan, “Systems and Methods for Introspective Application Reporting to Facilitate Virtual Machine Movement Between Cloud Hosts”, U.S. Appl. No. 13/118,009, filed May 27, 2011. |
Morgan, “Systems and Methods for Self-Moving Operating System Installation in Cloud-Based Network”, U.S. Appl. No. 13/149,877, filed May 27, 2011. |
Healey, Matt, White Paper—“Virtualizing Support”, Mar. 2008, IDC, 9 pages. |
USPTO, Non-Final Office Action, U.S. Appl. No. 12/954,378, mail date Dec. 11, 2013, 39 pages. |
USPTO, Final Office Action, U.S. Appl. No. 12/954,378, mail date Mar. 20, 2014, 21 pages. |
USPTO, Advisory Action, U.S. Appl. No. 12/954,378, mail date May 2, 2014, 3 pages. |
USPTO, Non-Final Office Action, U.S. Appl. No. 12/954,378, mail date Jul. 15, 2014, 26 pages. |
USPTO, Final Office Action, U.S. Appl. No. 12/954,378, mail date Dec. 31, 2014, 25 pages. |
USPTO, Advisory Action, U.S. Appl. No. 12/954,378, mail date Mar. 12, 2015, 6 pages. |
USPTO, Non-Final Office Action, U.S. Appl. No. 12/954,378, mail date May 29, 2015, 26 pages. |
USPTO, Final Office Action, U.S. Appl. No. 12/954,378, mail date Sep. 22, 2015, 30 pages. |
USPTO, Advisory Action, U.S. Appl. No. 12/954,378, mail date Nov. 27, 2015, 3 pages. |
Number | Date | Country | |
---|---|---|---|
20120137002 A1 | May 2012 | US |