The present teachings relate to systems and methods for power management in a managed network having hardware-based and virtual resources, and more particularly to platforms and techniques for managing the power consumption patterns of a network having both installed hardware machines and mainframe-based or other virtual machines.
In the network management field, a number of platforms and tools exist to allow a systems administrator to manage the configuration and operation of a managed network, generally including servers, hosts, clients, targets, databases, and/or other devices or resources. In some cases, a managed network can contain a mix of both hardware-installed machines and a set of virtual machines, managed via one network management platform or other tool.
Network power consumption is one feature of operation that network management platforms have evolved to monitor and manage. Some network management platforms can receive power usage data as part of the collection of operating parameters used to monitor, configure, and optimize the operation of the network under management. Today's network management tools do not, however, allow the integration of power management options in the case where the managed network contains not just hardware-installed or hardware-implemented machines, but also virtual machines whose power consumption footprint may differ from the power usage of hardware devices. It may be desirable to provide methods and systems capable of managing not just installed hardware resources, but also virtual machines or other resources incorporated in a managed network environment.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present teachings and together with the description, serve to explain the principles of the present teachings. In the figures:
Embodiments of the present teachings relate to systems and methods for power management in a managed network having hardware-based and virtual resources. More particularly, embodiments relate to platforms and techniques for integrating the management of mainframe-based, cloud-based, and/or other virtual machines or resources in the power management scheme for a managed network. According to embodiments in one regard, a network management platform can incorporate a power management engine which communicates with the hardware-installed resources of the managed network, as well as the mainframe or other host of the set of virtual machines populating the managed network. The power management engine can, in various regards, control the power-on, power-off, power-cycling, and other power management operations of resources under its control. In these regards, it may be noted that different types of hardware can require multiple stages for powering on, powering off, or undergoing other power operations. For instance, certain clients or other machines may initiate a power-on sequence, then require or accept a wake-on-local area network (LAN) signal, or insert a delay period, before actually turning that device on. The power management engine can in one regard permit a consistent management interface or protocol for various assets in a managed network, including hardware-based and virtual machines, regardless of power-on, power-off, or other power cycling sequences. According to embodiments, any of a range of systems can therefore have their power operations controlled via connection to a remote power management engine, including managed virtual systems, z/VM™ guests operating under mainframe control, or other fencing agents such as, for example, DRAC™, iLO™, BladeCenter™, or others.
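The consistent management interface described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the platform's actual API: the agent classes, stage names, and `power_on_all` function are hypothetical.

```python
from abc import ABC, abstractmethod


class PowerAgent(ABC):
    """Uniform power-control interface for one managed asset, whether
    hardware-implemented or virtual, hiding its power-on staging."""

    @abstractmethod
    def power_on(self):
        """Return the ordered stages executed to power the asset on."""


class HardwareClientAgent(PowerAgent):
    """A client that initiates power-on, then requires a wake-on-LAN
    signal and a delay period before the device is actually on."""

    def power_on(self):
        return ["initiate-power-on", "wake-on-lan", "delay", "device-on"]


class VirtualGuestAgent(PowerAgent):
    """A z/VM-style guest: a single start request to its host platform."""

    def power_on(self):
        return ["request-guest-start", "guest-running"]


def power_on_all(agents):
    """The engine drives every asset through the same interface,
    regardless of how many stages each power sequence requires."""
    return {name: agent.power_on() for name, agent in agents.items()}
```

For example, `power_on_all({"client-1": HardwareClientAgent(), "vm-1": VirtualGuestAgent()})` returns each asset's stage list through one uniform call, which is the point of the consistent interface.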
These and other embodiments described herein provide a network administrator with an ability to integrate a set of virtual mainframe-based machines, cloud-based machines, and/or other resources into a managed network environment having power management capabilities.
Reference will now be made in detail to exemplary embodiments of the present teachings, which are illustrated in the accompanying drawings. Where possible the same reference numbers will be used throughout the drawings to refer to the same or like parts.
While secure channel 148 is illustratively shown as one channel to managed network 116 or devices therein, it will be understood that in embodiments, secure channel 148 can comprise multiple channels or connections. In embodiments, secure channel 148 can be replaced by a non-secure channel or connection. In general, network management platform 102 can communicate with the managed network 116 and its constituent machines and resources, which can for instance comprise personal computers, servers, network-enabled devices, virtual machines, and/or other devices, and manage the security of those machines under the supervision of network management platform 102.
The network management platform 102 can host a set of engines, logic, and/or other resources to interrogate managed network 116 and manage the servers, hosts, clients, targets, services, and/or other resources of managed network 116. Network management platform 102 can communicate with associated network store 104 to store network-related management data. In embodiments, managed network 116 can comprise a set of hardware-implemented machines including, as illustrated, a set of hosts 112, set of targets 180, data stores, and/or other hardware resources. In embodiments, managed network 116 can likewise include an installed or instantiated set of virtual machines 166, in addition to hardware-implemented machines.
In embodiments as shown, set of virtual machines 166 can comprise a set of virtual machines instantiated under the supervision of a virtual machine operating platform 162, such as a hypervisor or virtualized operating system or platform. In embodiments, virtual machine operating platform 162 can be hosted in and/or run by a mainframe platform 160. In embodiments, mainframe platform 160 can comprise a processor, memory, storage, and/or other resources installed on a comparatively large scale, such as the System z10™ or other mainframe platforms available from IBM Corp. or other vendors.
In embodiments, virtual machine operating platform 162 can operate to build, configure, and instantiate the set of virtual machines 166 from the resources of mainframe platform 160. In embodiments, the set of virtual machines 166 can be virtualized from the hardware resources of mainframe platform 160. According to various embodiments, resources of mainframe platform 160 used to support set of virtual machines 166 can be allocated to partitions on a one-to-one mapping with the underlying physical hardware, without sharing resources among partitions. According to embodiments, those hardware resources can be managed by software, firmware, and/or other logic such as virtual machine operating platform 162. In embodiments, the underlying hardware resources can be shared between partitions, if desired.
According to embodiments, resources of mainframe platform 160 can be managed by virtual machine operating platform 162 and/or other software or logical layers, combined into shared resource pools, and allocated to users of the set of virtual machines 166 as logical resources, separating the presentation of the resources from the supporting physical hardware. According to various embodiments, virtual machine operating platform 162 can include software and logic components including a hypervisor, or a set of software or logic that virtualizes the underlying hardware environment of mainframe platform 160. In embodiments, virtual machine operating platform 162 can comprise a virtual machine-only operating system, supporting an operating environment on each virtual machine in set of virtual machines 166. According to embodiments, the virtual machine or other guest systems in set of virtual machines 166 can access, instantiate, and operate with or on virtual components including processors, memory, storage, I/O devices, network connections, and/or other hardware, software, data, and/or other resources. According to embodiments, operating systems and associated applications can execute in the set of virtual machines 166 as if the virtual machine or other guest system was executing on underlying physical hardware or other resources. In embodiments, different virtual machines in set of virtual machines 166 can host or execute the same or different operating systems and/or software applications. 
In embodiments, set of virtual machines 166 can be generated from the processor, memory, and/or other resources of mainframe platform 160 on a time-shared or time-sliced basis, so that users of individual virtual machines populating the set of virtual machines 166 can access or receive all or some portion of the resources of mainframe platform 160 every predetermined time period, such as a 1 millisecond interval, a 500 millisecond interval, or other greater or lesser, regular or irregular interval. It may be noted that while embodiments are illustrated in which a set of virtual machines 166 are instantiated and managed using a mainframe platform 160, in embodiments, virtual machines can in addition or instead be instantiated and/or accessed via a cloud computing environment, such as those described in co-pending application U.S. Ser. No. 12/128,768 filed May 29, 2008, entitled “Systems and Methods for Identification and Management of Cloud-Based Virtual Machines,” assigned or under obligation of assignment to the same entity as the present application, which application is incorporated by reference herein.
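The time-sliced allocation above can be illustrated with a simple round-robin schedule; the function below is a hypothetical sketch of the described behavior, not the mainframe's actual scheduler.

```python
from itertools import cycle


def allocate_slices(vm_ids, total_ms, slice_ms):
    """Hand the mainframe's resources to each virtual machine in turn
    for one fixed-length slice, repeating until total_ms has elapsed.
    Returns a list of (start_time_ms, vm_id) pairs."""
    schedule = []
    turns = cycle(vm_ids)  # round-robin over the guests
    elapsed = 0
    while elapsed < total_ms:
        schedule.append((elapsed, next(turns)))
        elapsed += slice_ms
    return schedule
```

For instance, `allocate_slices(["vm-1", "vm-2"], total_ms=4, slice_ms=1)` alternates the two guests in 1 ms slices, matching the regular-interval case described above; irregular intervals would need a per-guest slice length instead.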
In embodiments, in terms of acquisition of set of virtual machines 166 into managed network 116, network management platform 102 can host or access a pre-boot management tool 170 that acts to register, monitor, and track the constituent machines and services in managed network 116 during a pre-boot phase of operations of those machines. In embodiments, pre-boot management tool 170 can be or include a PXE-based or PXE-compatible application, logic, or other resources that operate to interrogate the complete complement of both hardware-implemented and virtual machines installed in managed network 116. In embodiments, hardware-implemented machines such as, for example, set of hosts 112, set of targets 180, and/or other hardware-implemented resources such as other services, clients, databases, or other devices can be interrogated by pre-boot management tool 170 during a start-up, pre-boot, or other initiation phase of operation. In embodiments, as noted, pre-boot management tool 170 can detect the initial connection or power-on of a hardware-implemented machine to managed network 116. In embodiments, that initial connection or power-on can be detected via the detection of a media access control (MAC) address encoded in a local area network (LAN) card, or other identifier and/or other device or connection. In embodiments, the attachment or power-on of a hardware address or other hardware-based identifier can be detected by pre-boot management tool 170, and used to initiate pre-boot processing of that device or devices. In embodiments, pre-boot management tool 170 can communicate with detected hardware devices to issue a set of pre-boot commands 168 to that device or devices. Set of pre-boot commands 168 can include commands and/or other data to control the operation of the subject device prior to loading an operating system or other software.
Set of pre-boot commands 168 can include commands and/or other data to, for example, configure network connections, services, and/or software of the subject machine or device, authenticate or validate the use or operation of the subject machine or device, or perform other operations. According to embodiments, pre-boot management tool 170 can cause the subject machine or device to boot into an installed or native operating system of the device, once pre-boot operations are completed.
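The detection-and-command flow above can be sketched roughly as follows. The registry layout, command names, and `handle_power_on` function are hypothetical illustrations of the described behavior, not the tool's real interface.

```python
# Hypothetical stand-ins for set of pre-boot commands 168.
PRE_BOOT_COMMANDS = [
    "configure-network",
    "configure-services",
    "authenticate-device",
]


def handle_power_on(identifier, registry):
    """Register a machine the first time its MAC address (or other
    identifier) appears on the network, issue the pre-boot commands,
    and mark the machine ready to boot its native operating system."""
    record = registry.setdefault(identifier.lower(), {})
    record["state"] = "pre-boot"
    commands = list(PRE_BOOT_COMMANDS)  # commands run before any OS loads
    record["state"] = "boot-native-os"  # hand off to the installed OS
    return commands
```

A real PXE exchange involves DHCP and TFTP traffic rather than a direct function call; the sketch only shows the register-then-command ordering the text describes.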
In embodiments, managed network 116 can likewise manage set of virtual machines 166 during pre-boot operations, despite the absence of hardware MAC addresses or other hardware-based identifiers. According to embodiments, pre-boot management tool 170 can interact with pre-boot translation engine 164 to communicate with mainframe platform 160 and/or virtual machine operating platform 162 to access, identify, and control pre-boot or pre-instantiation operations of set of virtual machines 166. In embodiments, pre-boot translation engine 164 can be hosted in mainframe platform 160, as shown. In embodiments, pre-boot translation engine 164 can be hosted in other locations or resources, including, for instance, network management platform 102. According to embodiments, pre-boot translation engine 164 can be configured in or with, or support execution of, scripts in a language such as ReXX™ (Restructured Extended Executor) supported by IBM Corp., or other languages or protocols. In embodiments, pre-boot translation engine 164 can pass data including set of pre-boot commands 168 back and forth between pre-boot management tool 170 and set of virtual machines 166 via mainframe platform 160 and/or virtual machine operating platform 162. In embodiments, pre-boot management tool 170 can thereby detect, configure, and manage set of virtual machines 166 to control the pre-boot and subsequent operations of those resources, without a requirement for hardware identifiers and/or other hardware attributes.
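One way such a translation layer might bridge the missing-hardware-identifier gap is to derive a stable pseudo-identifier per guest and tag each command with the guest it targets. Both functions below are assumptions for illustration only (the engine described above is ReXX-based and mainframe-hosted, not Python).

```python
import hashlib


def pseudo_identifier(vm_name):
    """Derive a stable, MAC-like identifier for a virtual machine that
    has no hardware LAN card. The 02 prefix marks the address as
    locally administered rather than vendor-assigned."""
    digest = hashlib.sha256(vm_name.encode("utf-8")).hexdigest()[:10]
    octets = [digest[i:i + 2] for i in range(0, 10, 2)]
    return ":".join(["02"] + octets)


def translate_commands(commands, vm_name):
    """Tag each pre-boot command with its target guest so the
    mainframe-side layer can route it to the right virtual machine."""
    return [{"guest": vm_name, "command": c} for c in commands]
```

Because the identifier is derived deterministically from the guest name, the pre-boot management tool can treat a virtual machine much like a hardware machine with a fixed MAC address across restarts.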
As likewise shown in
In embodiments, set of power management commands 188 can contain other commands and/or data, for instance, to sequence the starting and stopping of individual machines required for certain operations, for example, to bring a server machine down before employing that machine for other purposes, or shutting down virtual machines running on a subject host to permit redeployment of that host for other purposes. For those and other purposes, it may be noted that power management engine 182 can be configured to control both the hardware-implemented machines of managed network 116, such as set of hosts 112 and set of targets 180, as well as regulate the power cycling operations and usage of set of virtual machines 166. In embodiments, set of power management commands 188 can contain instructions to sequence the power-on, power-off, and/or other power cycling operations between set of virtual machines 166 and hardware-based machines or resources, as necessary. Other power usage management and balancing operations can be conducted.
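The host-redeployment case above can be sketched as an ordered command list: the virtual machines running on the host are powered off first, then the host itself. The function and step names are hypothetical.

```python
def sequence_host_shutdown(host_id, vms_by_host):
    """Power off every virtual machine running on a host before
    powering off the host itself, so the host can be redeployed.
    Returns the ordered (operation, target) steps to execute."""
    steps = [("power-off-vm", vm) for vm in vms_by_host.get(host_id, [])]
    steps.append(("power-off-host", host_id))
    return steps
```

The ordering matters: issuing the host power-off first would terminate the guests abruptly instead of cycling them down cleanly.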
As, for example, more particularly shown in
According to embodiments in one regard, and as also shown in
In various embodiments, power management engine 182 can, for instance, access power management settings 184 to determine operating parameters and policies for the power sequencing of managed network 116. Power management settings 184 can comprise fields and other data governing the power usage parameters of managed network 116 and its constituent nodes and other parts. In embodiments, power management settings 184 can specify, for instance, that one or more identified machines or services be shut down or idled before applying power to another set of machines. For further instance, power management settings 184 can specify that set of hosts 112, set of targets 180, and/or other hardware-implemented devices be powered down after 6:00 pm. Other power management schedules, sequences, criteria, or logic can be built into power management settings 184.
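A schedule rule like the 6:00 pm example might be evaluated along these lines; the settings key, machine records, and function are hypothetical stand-ins for fields in power management settings 184.

```python
from datetime import time


def machines_to_power_down(now, settings, machines):
    """Apply a simple schedule policy: at or after the configured
    cutoff time, every hardware-implemented machine should be
    powered down. Returns the ids of machines to shut off."""
    cutoff = settings.get("hardware_off_after", time(18, 0))  # 6:00 pm default
    if now >= cutoff:
        return [m["id"] for m in machines if m["kind"] == "hardware"]
    return []
```

Virtual machines are deliberately excluded here; in practice a fuller policy engine would carry separate rules for virtual guests and for the sequencing constraints described above.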
In embodiments, as noted, power management engine 182 can transmit one or more power management commands 188 in response to or based on power status data 186, power management settings 184, and/or other information associated with the power usage of managed network 116. In embodiments, power management commands 188 can be communicated directly to hardware-implemented resources, such as set of hosts 112, set of targets 180, associated storage, and/or other hardware. In embodiments, power management commands 188 can also be communicated to virtual operating platform 162 to implement power management policies reflected in power management settings 184 in set of virtual machines 166. Because in one regard power management engine 182 can access and control both hardware-based and virtual machines and resources of managed network 116, power management can be effectively integrated on a network-wide basis.
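The two delivery paths above, direct to hardware versus via the virtual machine operating platform, could look like this minimal dispatcher; the function and record fields are hypothetical.

```python
def route_power_command(command, target):
    """Send a power command directly to a hardware machine, or via
    the virtual machine operating platform when the target is a
    virtual machine. Returns a routing record for the transport."""
    if target["kind"] == "virtual":
        return {"via": "virtual-operating-platform",
                "guest": target["id"], "command": command}
    return {"via": "direct", "device": target["id"], "command": command}
```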
It may be noted that while power management engine 182 is illustrated as a separate module hosted in network management platform 102, in embodiments, power management engine 182 can be hosted or located in other resources, or be combined with other logic or resources, including, for instance, pre-boot management tool 170 or other resources. Further, it may be noted that while embodiments are here illustrated and described in which a virtual operating platform 162 can be leveraged to access virtual machines and manage overall power operations, in embodiments, different virtual platforms and management configurations can be used.
As also shown in
The foregoing description is illustrative, and variations in configuration and implementation may occur to persons skilled in the art. For example, while embodiments have been described in which power management engine 182 is hosted in network management platform 102, in embodiments, power management logic can be hosted in one or multiple other local or remote locations or resources, such as local or remote servers. For further example, while embodiments have been described in which hardware-implemented machines are identified via a MAC address on a LAN card and set of virtual machines 166 are identified via a pseudo or temporary version of the same address, in embodiments, other types of addresses or identifiers for both hardware and virtual machines can be used. For further example, while embodiments have been described in which managed network 116 incorporates one set of virtual machines 166 which are instantiated via one mainframe platform 160, in embodiments, managed network 116 can incorporate more than one set of virtual machines. In embodiments, one mainframe platform can instantiate and manage more than one set of virtual machines. In embodiments, multiple mainframe computers or platforms can each instantiate and manage separate sets of virtual machines. In embodiments, in addition to or instead of mainframe-based virtual machines, one or more sets of virtual machines instantiated in or from a cloud computing environment can be incorporated in managed network 116. Other resources described as singular or integrated can in embodiments be plural or distributed, and resources described as multiple or distributed can in embodiments be combined. The scope of the present teachings is accordingly intended to be limited only by the following claims.
Number | Date | Country | |
---|---|---|---|
20100306566 A1 | Dec 2010 | US |