This disclosure relates generally to cloud computing and, more particularly, to systems, apparatus, articles of manufacture, and methods for schedule-based lifecycle management of a virtual computing environment.
Virtualizing computer systems provides benefits such as the ability to execute multiple computer systems on a single hardware computer, replicating computer systems, moving computer systems among multiple hardware computers, and so forth. “Infrastructure-as-a-Service” (also commonly referred to as “IaaS”) generally describes a suite of technologies provided by a service provider as an integrated solution to allow for elastic creation of a virtualized, networked, and pooled computing platform (sometimes referred to as a “cloud computing platform”). Enterprises may use IaaS as a business-internal organizational cloud computing platform (sometimes referred to as a “private cloud”) that gives an application developer access to infrastructure resources, such as virtualized servers, storage, and network resources. By providing ready access to the hardware resources required to run an application, the cloud computing platform enables developers to build, deploy, and manage the lifecycle of a web application (or any other type of networked application) at a greater scale and at a faster pace than ever before.
Cloud computing environments may be composed of many processing units (e.g., servers, computing resources, etc.). The processing units may be installed in standardized frames, known as racks, which provide efficient use of floor space by allowing the processing units to be stacked vertically. The racks may additionally include other components of a cloud computing environment such as storage devices, network devices (e.g., routers, switches, etc.), etc.
In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. The figures are not to scale.
Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc., are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.
As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
As used herein, “processor circuitry” is defined to include (i) one or more special purpose electrical circuits structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmable with instructions to perform specific operations and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of processor circuitry include programmable microprocessors, Field Programmable Gate Arrays (FPGAs) that may instantiate instructions, Central Processor Units (CPUs), Graphics Processor Units (GPUs), Digital Signal Processors (DSPs), XPUs, or microcontrollers and integrated circuits such as Application Specific Integrated Circuits (ASICs). For example, an XPU may be implemented by a heterogeneous computing system including multiple types of processor circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more DSPs, etc., and/or a combination thereof) and application programming interface(s) (API(s)) that may assign computing task(s) to whichever one(s) of the multiple types of processor circuitry is/are best suited to execute the computing task(s).
Cloud computing is based on the deployment of many physical resources across a network, virtualizing the physical resources into virtual resources, and provisioning the virtual resources in software defined data centers (SDDCs) for use across cloud computing services and applications. Examples disclosed herein can be used to manage network resources in SDDCs to improve performance and efficiencies of network communications between different virtual and/or physical resources of the SDDCs.
Examples disclosed herein can be used in connection with different types of SDDCs. In some examples, techniques disclosed herein are useful for managing network resources that are provided in SDDCs based on Hyper-Converged Infrastructure (HCI). In some examples, HCI combines a virtualization platform such as a hypervisor, virtualized software-defined storage, and virtualized networking in an SDDC deployment. An SDDC manager can provide automation of workflows for lifecycle management and operations of a self-contained private cloud instance. Such an instance may span multiple racks of servers connected via a leaf-spine network topology and connect to the rest of the enterprise network for north-south connectivity via well-defined points of attachment. The leaf-spine network topology is a two-layer data center topology including leaf switches (e.g., switches to which servers, load balancers, edge routers, storage resources, etc., connect) and spine switches (e.g., switches to which leaf switches connect, etc.). In such a topology, the spine switches form a backbone of a network, where every leaf switch is interconnected with each and every spine switch.
Examples disclosed herein can be used with one or more different types of virtualization environments. Three example types of virtualization environments are: full virtualization, paravirtualization, and operating system (OS) virtualization. Full virtualization, as used herein, is a virtualization environment in which hardware resources are managed by a hypervisor to provide virtual hardware resources to a virtual machine (VM). In a full virtualization environment, the VMs do not have access to the underlying hardware resources. In a typical full virtualization environment, a host OS with an embedded hypervisor (e.g., a VMWARE® ESXI® hypervisor, etc.) is installed on the server hardware. VMs including virtual hardware resources are then deployed on the hypervisor. A guest OS is installed in the VM. The hypervisor manages the association between the hardware resources of the server hardware and the virtual resources allocated to the VMs (e.g., associating physical random-access memory (RAM) with virtual RAM, etc.). Typically, in full virtualization, the VM and the guest OS have no visibility and/or access to the hardware resources of the underlying server. Additionally, in full virtualization, a full guest OS is typically installed in the VM while a host OS is installed on the server hardware. Example full virtualization environments include the VMWARE® ESX® hypervisor, the VMWARE® ESXi® hypervisor, the Microsoft HYPER-V® hypervisor, and Kernel-based Virtual Machine (KVM).
Paravirtualization, as used herein, is a virtualization environment in which hardware resources are managed by a hypervisor to provide virtual hardware resources to a VM, and guest OSs are also allowed to access some or all of the underlying hardware resources of the server (e.g., without accessing an intermediate virtual hardware resource, etc.). In a typical paravirtualization system, a host OS (e.g., a Linux-based OS, etc.) is installed on the server hardware. A hypervisor (e.g., the XEN® hypervisor, etc.) executes on the host OS. VMs including virtual hardware resources are then deployed on the hypervisor. The hypervisor manages the association between the hardware resources of the server hardware and the virtual resources allocated to the VMs (e.g., associating RAM with virtual RAM, etc.). In paravirtualization, the guest OS installed in the VM is also configured to have direct access to some or all of the hardware resources of the server. For example, the guest OS can be precompiled with special drivers that allow the guest OS to access the hardware resources without passing through a virtual hardware layer. For example, a guest OS can be precompiled with drivers that allow the guest OS to access a sound card installed in the server hardware. Directly accessing the hardware (e.g., without accessing the virtual hardware resources of the VM, etc.) can be more efficient, can allow for performance of operations that are not supported by the VM and/or the hypervisor, etc.
OS virtualization is also referred to herein as container virtualization. As used herein, OS virtualization refers to a system in which processes are isolated in an OS. In a typical OS virtualization system, a host OS is installed on the server hardware. Alternatively, the host OS can be installed in a VM of a full virtualization environment or a paravirtualization environment. The host OS of an OS virtualization system is configured (e.g., utilizing a customized kernel, etc.) to provide isolation and resource management for processes that execute within the host OS (e.g., applications that execute on the host OS, etc.). The isolated environment in which a process executes is known as a container. Thus, a process executes within a container that isolates the process from other processes executing on the host OS. Accordingly, OS virtualization provides isolation and resource management capabilities without the resource overhead utilized by a full virtualization environment or a paravirtualization environment. Example OS virtualization environments include Linux Containers (LXC and LXD), the DOCKER™ container platform, the OPENVZ™ container platform, etc.
In some examples, a data center (or pool of linked data centers) can include multiple different virtualization environments. For example, a data center can include hardware resources that are managed by a full virtualization environment, a paravirtualization environment, an OS virtualization environment, etc., and/or any combination(s) thereof. In such a data center, a workload can be deployed to any of the virtualization environments. In some examples, techniques disclosed herein monitor both physical and virtual infrastructure and provide visibility into the virtual infrastructure (e.g., VMs, virtual storage, virtual or virtualized networks and their control/management counterparts, etc.) and the physical infrastructure (e.g., servers, physical storage, network switches, etc.).
Examples disclosed herein can be employed with HCI-based SDDCs deployed using virtual server rack systems. A virtual server rack system can be managed using a set of tools that is accessible to all modules of the virtual server rack system. Virtual server rack systems can be configured in many different sizes. Some systems are as small as four hosts, and other systems are as big as tens of racks. Multi-rack deployments can include Top-of-the-Rack (ToR) switches (e.g., leaf switches, etc.) and spine switches connected using a Leaf-Spine architecture. A virtual server rack system also includes software-defined data storage (e.g., storage area network (SAN), VMWARE® VIRTUAL SAN™, etc.) distributed across multiple hosts for redundancy and virtualized networking software (e.g., VMWARE NSX™ etc.).
A drawback of some virtual server rack systems is that different hardware components located therein can be procured from different equipment vendors, and each equipment vendor can have its own independent OS installed on its hardware. For example, physical hardware resources include white label equipment such as white label servers, white label network switches, white label external storage arrays, and white label disaggregated rack architecture systems (e.g., Intel's Rack Scale Architecture (RSA), etc.). White label equipment is computing equipment that is unbranded and sold by manufacturers to system integrators that install customized software, and possibly other hardware, on the white label equipment to build computing/network systems that meet specifications of end users or customers. The white labeling, or unbranding by original manufacturers, of such equipment enables third-party system integrators to market their end-user integrated systems using the third-party system integrators' branding.
In some examples, virtual server rack systems additionally manage non-white label equipment such as original equipment manufacturer (OEM) equipment. Such OEM equipment includes OEM servers such as HEWLETT-PACKARD® (HP®) servers and LENOVO® servers, and OEM switches such as switches from ARISTA NETWORKS™, and/or any other OEM servers, switches, or equipment. In any case, each equipment vendor can have its own independent OS installed on its hardware. For example, ToR switches and spine switches can have OSs from vendors like CISCO® and ARISTA NETWORKS, while storage and compute components may be managed by a different OS. Each OS actively manages its hardware at the resource level, but there is no entity across all resources of the virtual server rack system that makes system-level runtime decisions based on the state of the virtual server rack system. For example, if a hard disk malfunctions, storage software has to reconfigure existing data onto the remaining disks. This reconfiguration can require additional network bandwidth, which may not be released until the reconfiguration is complete.
Examples disclosed herein provide HCI-based SDDCs with system-level governing features that can actively monitor and manage different hardware and software components of a virtual server rack system even when such different hardware and software components execute different OSs. As described in connection with
When starting up a cloud computing environment or adding resources to an already established cloud computing environment, data center operators struggle to offer cost-effective services while making resources of the infrastructure (e.g., storage hardware, computing hardware, and networking hardware) work together to achieve simplified installation/operation and optimize the resources for improved performance. Prior techniques for establishing and maintaining data centers to provide cloud computing services often require customers to understand details and configurations of hardware resources to establish workload domains in which to execute customer services. As used herein, the term “workload domain” refers to virtual hardware policies or subsets of virtual resources of a VM mapped to physical hardware resources to execute a user application. For example, a workload domain can include one or more virtual resources, or portion(s) thereof, that can be utilized to execute a user application. In some examples, a workload domain can include a first VM including a first quantity of virtualized hardware resources (e.g., virtualized central processing units (CPUs), memories, mass storage discs or devices, security devices, hardware accelerators, switches, gateways, network interface cards (NICs), etc.), a second VM including a second quantity of virtualized hardware resources, etc., and/or any combination(s) thereof.
In some disclosed examples, data center operators have hundreds or thousands of resources (e.g., physical hardware resources such as servers or portion(s) thereof, virtualized hardware resources such as virtualized server racks, virtualized servers, etc., or portion(s) thereof, etc.) under management in their organizations. Such data center operators can start up, deploy, and/or maintain a cloud computing environment via different stages of operations. For example, data center operators can design a cloud computing environment in a design stage, which can be implemented via Day 0 operations, such as identifying the resources and/or requirements needed to start up the cloud computing environment. In some examples, the data center operators can deploy the cloud computing environment in a deploy or deployment stage, which can be implemented via Day 1 operations, such as installing, setting up, and/or configuring physical hardware resources (e.g., installing physical server racks, connecting power and/or network cables, etc.) and/or software resources (e.g., OS, applications, drivers, services, libraries, VMs, containers, etc.). In some examples, the data center operators can maintain the cloud computing environment in a maintenance stage, which can be implemented by Day 2 operations, such as prognostic health monitoring of resources (e.g., predicting or anticipating failures to be mitigated during scheduled maintenance), installing upgrades, updating systems, etc.
Managing Day 2 operations can be challenging as cloud computing environments are scaled to hundreds or thousands of resources. With such a substantial number of resources to manage, data center operators may have difficulty visualizing the performance of their systems and integrating updates. In some instances, data center operators may tediously carry out Day 2 operations resource-by-resource or cloud provider-by-cloud provider (if the data center operators have a heterogeneous cloud computing deployment, such as a deployment using two or more different cloud providers). In some instances, data center operators may have to carry out Day 2 operations on a regular or periodic basis, which can be substantially time consuming and inefficient.
Examples disclosed herein include schedule-based lifecycle management of virtualized environments. For example, the lifecycle of an application to be executed and/or instantiated by virtual resource(s) of a virtualized environment can include the configuration of the application, the provisioning and/or allocation of the virtual resource(s) to a workload domain to execute the application, the execution of the application, and/or the decommissioning or termination of the application (and/or, more generally, the workload domain), which can include releasing the virtual resource(s) from the application and back to a virtual resource pool.
In some disclosed examples, a lifecycle management controller can generate a schedule associated with virtual resource(s) that can be periodically checked to determine whether action(s) or operation(s) is/are to be performed or carried out in connection with the virtual resource(s). In some disclosed examples, the schedule can be implemented by a rules engine that evaluates rule(s) of the schedule to find matching virtual resource(s) to which operation(s) specified by the rule(s) is/are to be performed. For example, the lifecycle management controller can generate a schedule to include a rule (e.g., a schedule rule) that can be applied to a virtual resource, such as a VM. In some disclosed examples, the lifecycle management controller can specify the rule to be applicable to a virtual resource that matches a specific project, owner, set of tags (e.g., developer or user generated tags, data tags, metadata, metadata tags, etc.), and/or a value of a utilization parameter.
By way of example, the lifecycle management controller can determine that a time period has elapsed after which a schedule is to be evaluated. In some disclosed examples, the lifecycle management controller can determine that the schedule includes a rule to be enforced and/or otherwise be applicable to virtual resource(s) that is/are included in a first project, have a first owner, explicitly match one or more tags, and have a CPU utilization value of less than 30% (e.g., a CPU utilization value of less than 30% for a specific time period, such as the previous 24 hours, the previous 48 hours, etc.). In some disclosed examples, the lifecycle management controller can determine that the rule is applicable to a VM in a first workload domain.
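By way of illustration, the rule matching described above can be sketched as a simple predicate over a virtual resource's attributes. The following Python sketch is illustrative only: the VirtualResource and Rule types and their field names (project, owner, tags, cpu_utilization) are assumptions chosen to mirror the criteria in this example, not a data model prescribed by this disclosure.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VirtualResource:
        name: str
        project: str
        owner: str
        tags: frozenset[str]
        cpu_utilization: float  # percent, averaged over the evaluation window

    @dataclass(frozen=True)
    class Rule:
        project: str
        owner: str
        tags: frozenset[str]
        cpu_threshold: float  # rule applies when utilization is below this value

    def rule_matches(rule: Rule, resource: VirtualResource) -> bool:
        # A resource matches only if every criterion of the rule is satisfied.
        return (resource.project == rule.project
                and resource.owner == rule.owner
                and rule.tags <= resource.tags  # all rule tags must be present
                and resource.cpu_utilization < rule.cpu_threshold)

    # Example mirroring the text: first project, first owner, explicit tags,
    # and a CPU utilization value of less than 30%.
    rule = Rule("project-1", "owner-1", frozenset({"dev"}), 30.0)
    vm = VirtualResource("vm-1", "project-1", "owner-1", frozenset({"dev", "eu"}), 12.5)
    assert rule_matches(rule, vm)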
Advantageously, the lifecycle management controller can execute one or more actions (e.g., schedule actions), operations (e.g., schedule operations), etc., associated with the VM to effectuate schedule-based lifecycle management of a virtual environment (e.g., a virtual computing environment). For example, the lifecycle management controller can resize (e.g., upsize, downsize, etc.) the VM based on whether the VM is underutilized or overutilized. For example, the lifecycle management controller can upsize a VM by adding resources (e.g., compute, network or networking, storage, etc., resources) to the VM or downsize a VM by removing resources (e.g., compute, network or networking, storage, etc., resources) from the VM. In some disclosed examples, the lifecycle management controller can power on or off the VM. In some disclosed examples, the lifecycle management controller can create snapshots of the VM to achieve improved failure recovery or backup recovery features.
Advantageously, the example lifecycle management controller can enforce rule(s) on virtual resource(s) based on value(s) of parameter(s) associated with the virtual resource(s). For example, the parameter can be an availability parameter (e.g., a parameter representative of availability), a performance parameter (e.g., a parameter representative of performance), a capacity parameter (e.g., a parameter representative of capacity), a utilization parameter (e.g., a parameter representative of utilization), or any other type of parameter.
As used herein, availability refers to the level of redundancy required to provide continuous operation expected for a workload domain. For example, a value of an availability parameter can be 0 (zero) to represent no availability, which can correspond to a virtual resource having no backup or failover resources in case of failure of the virtual resource. In some disclosed examples, a value of an availability parameter can be 1 (one) to represent low or medium availability, which can correspond to a virtual resource having at least one backup or failover resource (e.g., at least one idle or non-used VM to which the failed VM may fail over) in case of failure of the virtual resource. In some disclosed examples, a value of an availability parameter can be 2 (two) to represent high availability, which can correspond to a virtual resource having at least two backup or failover resources (e.g., at least two idle or non-used VMs to which the failed VM may fail over) in case of failure of the virtual resource.
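A minimal sketch of the availability encoding described above, assuming the integer values 0, 1, and 2 map directly to the number of failover resources held in reserve (the enum and helper names are hypothetical):

    from enum import IntEnum

    class Availability(IntEnum):
        NONE = 0    # no backup or failover resources
        MEDIUM = 1  # at least one idle failover resource
        HIGH = 2    # at least two idle failover resources

    def required_failover_count(availability: Availability) -> int:
        # Under this encoding, the parameter value doubles as the minimum
        # number of idle resources that must be held in reserve.
        return int(availability)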
As used herein, performance refers to the CPU operating speeds (e.g., CPU gigahertz (GHz)), memory (e.g., gigabytes (GB) of random access memory (RAM)), mass storage (e.g., GB hard disk drive (HDD), GB solid state drive (SSD), etc.), and power capabilities of a workload domain. As used herein, capacity refers to the aggregate number of resources (e.g., aggregate storage, aggregate CPU, aggregate hardware accelerators (e.g., field programmable gate arrays (FPGAs), graphics processing units (GPUs)), etc.) across all servers associated with a cluster and/or a workload domain. In some disclosed examples, resources are computing or electronic devices with set amounts of storage, memory, CPUs, etc., and/or any combination(s) thereof. In some disclosed examples, resources are individual devices (e.g., hard drives, processors, memory chips, etc.).
As used herein, utilization refers to a usage of a virtual resource, or portion(s) thereof. For example, the utilization can be a compute or processing utilization (e.g., 20% of the processing power of a virtualized CPU is utilized, 60% of the processing power of a hardware accelerator such as a GPU is utilized, etc.), a storage utilization (e.g., 40% of the storage capacity of a virtualized SSD is utilized), a memory utilization (e.g., 35% of a virtualized memory is utilized), a network utilization (e.g., 80% of the bandwidth, throughput, etc., of a virtualized switch, gateway, etc., is utilized), etc., and/or any combination(s) thereof.
The public cloud 104 of the illustrated example includes a first example cloud provider 108 (identified by CLOUD PROVIDER A), a second example cloud provider 110 (identified by CLOUD PROVIDER B), and a third example cloud provider 112 (identified by CLOUD PROVIDER C). For example, each of the cloud providers 108, 110, 112 can be associated with a different cloud computing entity.
The cloud providers 108, 110, 112 of the illustrated example have physical hardware resources (e.g., servers) in example geographical regions 114, 116. The geographical regions 114, 116 of the illustrated example can be further broken down, divided, partitioned, etc., into example subregions 118, 120. For example, the first cloud provider 108 of the illustrated example has physical servers in an example geographical region 114 (identified by EU-WEST-1 (REGION)), which is partitioned into a first example subregion 118 (identified by EU-WEST-1A (AVAILABILITY ZONE)) and a second example subregion 120 (identified by EU-WEST-1B (AVAILABILITY ZONE)). The subregions 118, 120 of the illustrated example are availability zones. For example, the availability zones can be logical data centers in the subregions 118, 120. The logical data centers can be available for use by an end customer to execute application(s), service(s), workload(s), etc. In some examples, each availability zone in a region can have redundant and separate power, networking, and connectivity to reduce the likelihood of two availability zones failing simultaneously. In the illustrated example, the geographical region 114 is a western region of the European Union and the first and second subregions 118, 120 are respective portions of the western region of the European Union.
The subregions 118, 120 of the illustrated example can include physical servers, or portion(s) thereof, that can be used to execute and/or instantiate example virtual resources 122, 124, 126 (identified by CUSTOMER VIRTUAL MACHINE). The virtual resources 122, 124, 126 of the illustrated example include a first example virtual resource 122, a second example virtual resource 124, and a third example virtual resource 126. The virtual resources 122, 124, 126 are virtual machines (VMs). For example, the virtual resources 122, 124, 126 can be virtualizations of physical hardware resources that can be assembled, compiled, and/or otherwise organized into VMs. Additionally and/or alternatively, one(s) of the virtual resources 122, 124, 126 may be containers.
The private cloud 106 of the illustrated example is an on-premises customer environment associated with an enterprise. For example, enterprises can use Infrastructure-as-a-Service (IaaS) as a business-internal organizational cloud computing platform (sometimes referred to as a “private cloud”) that gives an application developer access to infrastructure resources, such as virtualized servers, storage, and network resources.
The private cloud 106 of the illustrated example includes a first example datacenter 128 (identified by DATACENTER 1 (REGION)), a second example datacenter 130 (identified by DATACENTER 2 (REGION)), and a third example datacenter 132 (identified by DATACENTER 3 (REGION)). In some examples, the datacenters 128, 130, 132 are logical data centers that correspond to respective ones of the cloud providers 108, 110, 112. For example, the first datacenter 128 can be a logical data center that corresponds to a first virtualized environment hosted and/or instantiated by the first cloud provider 108.
The datacenters 128, 130, 132 of the illustrated example can include one or more example clusters 134. In this example, the cluster 134 of the third datacenter 132 (identified by CLUSTER 3.1 (AVAILABILITY ZONE)) can instantiate an availability zone. For example, the cluster 134 of the third datacenter 132 can have redundant and separate power, networking, and connectivity from a different availability zone of the private cloud 106 to reduce the likelihood of two availability zones failing simultaneously. In the illustrated example, the cluster 134 instantiates a fourth example virtual resource 136 (identified by CUSTOMER VIRTUAL MACHINE). The fourth virtual resource 136 of the illustrated example is a VM. Additionally and/or alternatively, the fourth virtual resource 136 may be a container.
In some examples, the datacenters 128, 130, 132 can be managed by server management software, such as vCenter Server by VMware, Inc. For example, the server management software can be executed and/or instantiated by a virtual resource, such as a VM, to design, deploy, and/or maintain a cloud computing deployment, such as one(s) of the virtual resources 122, 124, 126 hosted by one(s) of the cloud providers 108, 110, 112.
In some examples, the server management software can be implemented by the lifecycle management controller 102, which is executed and/or instantiated by the fourth virtual resource 136. For example, the lifecycle management controller 102 can be implemented by hardware, software, and/or firmware that executes and/or instantiates server management software. In some examples, the lifecycle management controller 102 can execute and/or instantiate server management software to enable a user (e.g., a developer, information technology (IT) personnel, etc., of an enterprise that manages the private cloud 106) to manage virtual infrastructure hosted by one(s) of the cloud providers 108, 110, 112 from one or more locations (e.g., one or more centralized locations, satellite or remote locations, etc.). For example, the lifecycle management controller 102 can design, deploy, and/or maintain (e.g., manage) one(s) of the virtual resources 122, 124, 126.
The lifecycle management controller 102 of the illustrated example includes an example adapters host service 138 to interface with the private cloud 106 and/or one(s) of the cloud providers 108, 110, 112 of the public cloud 104. For example, the adapters host service 138 can be implemented by application programming interface(s) (API(s)). The adapters host service 138 of the illustrated example includes a first example adapter 140 (identified by CLOUD PROVIDER A ADAPTER), a second example adapter 142 (identified by CLOUD PROVIDER B ADAPTER), a third example adapter 144 (identified by CLOUD PROVIDER C ADAPTER), and a fourth example adapter 146 (identified by PRIVATE CLOUD ADAPTER). In some examples, the lifecycle management controller 102 can execute and/or instantiate the first adapter 140 to interface with the first cloud provider 108. In some examples, the lifecycle management controller 102 can execute and/or instantiate the second adapter 142 to interface with the second cloud provider 110. In some examples, the lifecycle management controller 102 can execute and/or instantiate the third adapter 144 to interface with the third cloud provider 112. In some examples, the lifecycle management controller 102 can execute and/or instantiate the fourth adapter 146 to interface with the private cloud 106.
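One plausible way to structure the per-provider adapters described above is a common interface with one implementation per cloud provider; the class and method names below are assumptions for illustration, not the API of the disclosed adapters host service.

    from abc import ABC, abstractmethod

    class CloudAdapter(ABC):
        # Common interface the adapters host service can program against.
        @abstractmethod
        def list_resources(self) -> list[str]:
            ...

        @abstractmethod
        def perform_action(self, resource_id: str, action: str) -> bool:
            # Returns True when the provider acknowledges the action.
            ...

    class CloudProviderAAdapter(CloudAdapter):
        def list_resources(self) -> list[str]:
            # A real adapter would call cloud provider A's API here.
            return ["vm-122"]

        def perform_action(self, resource_id: str, action: str) -> bool:
            # A real adapter would translate the generic action into a
            # provider-specific API call and await the acknowledgment.
            return True

    # The adapters host service can hold one adapter per provider and route
    # each request to the matching adapter.
    adapters: dict[str, CloudAdapter] = {"CLOUD PROVIDER A": CloudProviderAAdapter()}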
The lifecycle management controller 102 of the illustrated example includes an example schedules service 148 to generate schedules (e.g., cloud computing schedules, virtual resource schedules, Day 0 schedules, Day 1 schedules, Day 2 schedules, etc.) that can be used to design, deploy, and/or maintain a virtual resource in a virtualized environment. For example, the schedules service 148 can be executed and/or instantiated periodically or aperiodically to analyze whether an action (e.g., a schedule action) or operation (e.g., a schedule operation) is to be performed or carried out in connection with one(s) of the virtual resources 122, 124, 126.
The lifecycle management controller 102 of the illustrated example includes an example rules service 150 to inspect, analyze, and/or evaluate rule(s) of a schedule to identify one(s) of the virtual resources 122, 124, 126 to which action(s)/operation(s) is/are to be applied. For example, the rules service 150 can be executed and/or instantiated to determine whether a schedule rule applies to one(s) of the virtual resources 122, 124, 126. By way of example, a schedule can include a rule that is applicable to and/or otherwise corresponds to a virtual resource hosted by the first cloud provider 108 that has a compute utilization greater than a 30% threshold. The example rules service 150 can be executed and/or instantiated to identify all or a portion of the virtual resources hosted by the first cloud provider 108. The example rules service 150 can be executed and/or instantiated to obtain utilization data associated with the virtual resources hosted by the first cloud provider 108. The example rules service 150 can be executed and/or instantiated to identify the first virtual resource 122 after a determination that the first virtual resource 122 has a compute utilization of 50%, which is greater than the threshold of 30% and thereby satisfies the threshold. The example rules service 150 can be executed and/or instantiated to determine that one or more actions/operations are to be carried out in connection with the first virtual resource 122 after a determination that the rule applies to the first virtual resource 122. Example actions/operations can include transferring portion(s) of a workload from the first virtual resource 122 to reduce the compute utilization below the threshold, allocating additional virtual resources to the first virtual resource 122 (e.g., instantiating another VM or container, adding an increased quantity of compute resources, etc.), etc., and/or any combination(s) thereof.
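The filtering step in this example (identify the resources whose utilization data exceeds a threshold, then flag them for action) reduces to a short sketch; the data shapes here are hypothetical:

    def find_overutilized(utilization: dict[str, float], threshold: float) -> list[str]:
        # Keep the identifiers of resources whose compute utilization
        # (in percent) exceeds the threshold and thereby satisfies the rule.
        return [rid for rid, value in utilization.items() if value > threshold]

    # Example from the text: a 30% threshold, with virtual resource 122 at 50%.
    utilization = {"virtual-resource-122": 50.0, "virtual-resource-124": 10.0}
    print(find_overutilized(utilization, 30.0))  # ['virtual-resource-122']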
The lifecycle management controller 102 of the illustrated example includes an example metrics service 152 to obtain metrics, parameters, etc., representative of virtual resource utilization. In some examples, the metrics service 152 can request a virtual resource to provide utilization data, such as compute utilization data, storage utilization data, network utilization data, etc. For example, the metrics service 152 can determine that the first virtual resource 122 is overutilized based on a determination that a compute utilization of 80% of the first virtual resource 122 is greater than a utilization threshold of 50%. In some examples, the metrics service 152 can determine that the first virtual resource 122 is underutilized based on a determination that a compute utilization of 15% of the first virtual resource 122 is less than a utilization threshold of 40%.
The lifecycle management controller 102 of the illustrated example includes an example provisioning service 154 to configure, instantiate, and/or deploy virtual resources, such as one(s) of the virtual resources 122, 124, 126, in a virtualized environment, such as the public cloud 104. In some examples, the provisioning service 154 can be executed and/or instantiated to commission (e.g., instantiate, startup, power or turn on, allocate, etc.) or decommission (e.g., shutdown, power or turn off, deallocate, etc.) a virtual resource after an evaluation of a rule. For example, the rules service 150 can determine that the first virtual resource 122 is to be upsized by adding virtual resource(s), such as virtualized CPU(s), to the first virtual resource 122.
The LMC circuitry 200 of
The LMC circuitry 200 of the illustrated example of
In some examples, the adapters host service 138 of
The LMC circuitry 200 of the illustrated example includes the interface circuitry 210 to obtain and/or transmit data. In some examples, the interface circuitry 210 is instantiated by processor circuitry executing interface instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the interface circuitry 210 obtains data representative of a request. For example, the request can be a call for a creation of a schedule, which can include schedule data fields for enforcement of a rule, such as one(s) of the rules 274. In some examples, the schedule can be the schedule 272 stored in the datastore 270. By way of example, the interface circuitry 210 can obtain a request from a user via a graphical user interface (GUI) or human machine interface (HMI) of a computing or electronic system. The user can issue the request for the schedule 272 to check (e.g., aperiodically check, periodically check, etc.) whether one or more virtual resources managed by the user are to undergo a specified action or operation. In some examples, the schedule 272 can include the rule 274, which can be a condition, a circumstance, etc., that, when satisfied, triggered, and/or otherwise met, can cause the action/operation to be undertaken in connection with one(s) of the one or more virtual resources.
In some examples, the interface circuitry 210 obtains a request for utilization data for virtual resources of a cloud provider associated with the schedule 272. For example, the interface circuitry 210 can obtain a request for utilization data associated with the first virtual resource 122 hosted by the first cloud provider 108. In some examples, a hypervisor managing the first virtual resource 122, and/or, more generally, the first cloud provider 108, can collect and/or otherwise obtain utilization data associated with the first virtual resource 122. For example, the hypervisor can obtain compute utilization data, memory utilization data, storage utilization data, network utilization data, etc., associated with the first virtual resource 122. The hypervisor, and/or, more generally, the first cloud provider 108, can provide, deliver, and/or otherwise transmit the utilization data to the interface circuitry 210.
In some examples, the interface circuitry 210 obtains utilization data (e.g., utilization parameters such as the parameters 276) associated with a virtual resource. For example, the interface circuitry 210 can obtain utilization data from a virtual resource, such as one(s) of the virtual resources 122, 124, 126 hosted by one(s) of the cloud providers 108, 110, 112. In some examples, the interface circuitry 210 can store the utilization data in the datastore 270 as the parameters 276. In some examples, the interface circuitry 210 can determine that the utilization data includes one or more utilization parameters, such as the parameters 276, associated with one(s) of the virtual resources 122, 124, 126. For example, the interface circuitry 210 can receive utilization data including a compute utilization parameter, a memory utilization parameter, a storage utilization parameter, a network utilization parameter, etc., associated with the first virtual resource 122. For example, the interface circuitry 210 can store the compute utilization parameter, the memory utilization parameter, the storage utilization parameter, and/or the network utilization parameter in the datastore 270 as the parameters 276.
The LMC circuitry 200 of the illustrated example includes the schedule generation circuitry 220 to generate a schedule associated with managing a virtual resource in a virtualized environment. For example, the schedule generation circuitry 220 can generate one or more schedules, such as the schedule 272, to perform lifecycle management of virtual resources as disclosed herein. In some examples, the schedule generation circuitry 220 is instantiated by processor circuitry executing schedule generation instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the schedule generation circuitry 220 can generate the schedule 272 to include one or more data fields, which can be referred to herein as schedule data fields. For example, the schedule generation circuitry 220 can configure one of the schedule data fields with a name of a cloud provider (e.g., a name, description, or identifier of one of the cloud providers 108, 110, 112 of
In some examples, the schedule generation circuitry 220 can configure one of the schedule data fields with tags. For example, the tags can be implemented by data, such as metadata, that can associate alphanumerical-based descriptions to the schedule 272. In some examples, the schedule generation circuitry 220 can configure one of the schedule data fields with a type of operation to be executed in response to enforcement of the rule 274. For example, the type of operation can be a power off operation, a power on operation, a downsize operation, an upsize operation, a migration operation (e.g., migrating a workload or application from a first virtual resource to a second virtual resource), a snapshot operation, etc., and/or any combination(s) thereof. In some examples, the schedule generation circuitry 220 can configure one of the schedule data fields with threshold(s) (e.g., utilization threshold(s)) associated with triggering of the rule 274. In some examples, the schedule generation circuitry 220 can configure other one(s) of the schedule data fields with any other data, parameter(s), etc.
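Collecting the schedule data fields enumerated above into a single structure might look like the following sketch; the field names and defaults are illustrative assumptions rather than the schema of the schedule 272.

    from dataclasses import dataclass

    @dataclass
    class Schedule:
        cloud_provider: str            # e.g., a name or identifier of the provider
        tags: dict[str, str]           # metadata tags attached to the schedule
        operation: str                 # e.g., "power_off", "upsize", "snapshot"
        thresholds: dict[str, float]   # e.g., {"cpu_utilization": 30.0}
        cron: str = "0 * * * *"        # when the schedule is to be evaluated
        last_run_time: str | None = None  # updated after each evaluation
        status: str | None = None         # result of the last evaluation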
In some examples, the schedule generation circuitry 220 generates the schedule 272, which can include a rule, such as one(s) of the rules 274, to trigger an operation associated with a virtual resource of a virtualized environment when the rule is invoked. For example, the schedule generation circuitry 220 can generate the schedule 272 to manage the design, deployment, and/or maintenance of the first virtual resource 122 of
In some examples, after the schedule 272 has been inspected, analyzed, and/or otherwise evaluated, the schedule generation circuitry 220 can update the schedule 272 based on a last run time (e.g., a time at which the schedule 272 was last inspected, analyzed, evaluated, etc.) and/or status. For example, the status can include a result of the schedule evaluation, such as whether an action/operation is to be performed, which rule(s) is/are invoked, which virtual resource(s) is/are affected, etc., and/or any combination(s) thereof.
In some examples, the schedule generation circuitry 220 can generate the schedule 272 to include one or more cron expressions. For example, the schedule 272 can be implemented by a cron schedule, a cron job schedule, etc. As used herein, a cron expression is a string data format (e.g., a unix-cron string format), which can include one or more fields in a line. In some examples, a cron expression can be implemented by a string format of (* . . . *) where each “*” represents a data field. Alternatively, the cron expression may have any number of data fields. In some examples, the schedule generation circuitry 220 can generate the schedule 272 to include a cron expression that has 5 data fields, which can be represented by a cron expression of (* * * * *). For example, the first data field can be a data value representative of a minute in a range of 0-59, the second data field can be a data value representative of an hour in a range of 0-23, the third data field can be a data value representative of a day of the month in a range of 1-31, the fourth data field can be a data value representative of a month in a range of 1-12 (or JANUARY to DECEMBER), and the fifth data field can be a data value representative of a day of the week in a range of 0-6 (or SUNDAY to SATURDAY). In some examples, the schedule generation circuitry 220 can generate the schedule 272 to include the cron expression with the first through fifth data fields to represent when the schedule 272 is to be evaluated.
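As a brief illustration of the five-field expression described above, the following sketch uses the third-party croniter package (an assumption for illustration, not a component of the disclosed examples) to compute when a schedule is next due:

    from datetime import datetime
    from croniter import croniter  # third-party: pip install croniter

    # Five fields: minute, hour, day of the month, month, day of the week.
    expression = "30 2 * * 0"  # 02:30 every Sunday

    itr = croniter(expression, datetime(2024, 1, 1))
    print(itr.get_next(datetime))  # next time the schedule is to be evaluated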
The LMC circuitry 200 of the illustrated example includes the schedule evaluation circuitry 230 to evaluate a schedule, such as the schedule 272, to determine whether rule(s) is/are triggered. In some examples, the schedule evaluation circuitry 230 is instantiated by processor circuitry executing schedule evaluation instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the schedule evaluation circuitry 230 determines whether it is time to check the schedule 272. For example, the schedule evaluation circuitry 230 can determine whether a timer associated with the schedule 272 has elapsed, expired, etc., to check the schedule. In some examples, the schedule evaluation circuitry 230 selects a schedule of interest to process. For example, assume that the private cloud 106 manages 15 schedules associated with the first cloud provider 108, 20 schedules associated with the second cloud provider 110, and 30 schedules associated with the third cloud provider 112. In some examples, the schedule evaluation circuitry 230 can select a first one of the 15 schedules associated with the first cloud provider 108 to evaluate. In some examples, the schedule evaluation circuitry 230 can select another schedule of interest to process, such as a second one of the 15 schedules or a first one of the 20 schedules associated with the second cloud provider 110. In some examples, the schedule evaluation circuitry 230 determines whether to monitor (e.g., continue to monitor, iteratively monitor, etc.) a virtual resource based on a schedule associated with the virtual resource.
The LMC circuitry 200 of the illustrated example includes the resource identification circuitry 240 to identify a virtual resource. In some examples, the resource identification circuitry 240 is instantiated by processor circuitry executing resource identification instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the resource identification circuitry 240 can identify that one(s) of virtual resources correspond to a schedule, such as the schedule 272. For example, the resource identification circuitry 240 can determine that the schedule 272 includes a rule, such as one of the rules 274, that is applicable to at least one of the first virtual resource 122, the second virtual resource 124, or the third virtual resource 126 of
In some examples, the resource identification circuitry 240 can identify a virtual resource corresponding to a cloud provider. For example, the resource identification circuitry 240 can determine that the schedule 272 includes a schedule data field that identifies the first cloud provider 108. In some examples, the resource identification circuitry 240 can identify virtual resources hosted by the first cloud provider 108, such as the first virtual resource 122, that correspond to the first cloud provider 108. In some examples, the resource identification circuitry 240 can identify the virtual resources as corresponding to the schedule 272 and/or the first cloud provider 108 based on a determination that the schedule data field of the schedule 272 identifies the first cloud provider 108.
The LMC circuitry 200 of the illustrated example includes the rule evaluation circuitry 250 to evaluate whether a schedule rule, such as one of the rules 274, is to be triggered and/or otherwise invoked. In some examples, the rule evaluation circuitry 250 is instantiated by processor circuitry executing rule evaluation instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the rule evaluation circuitry 250 identifies one(s) of virtual resources whose utilization data satisfies utilization threshold(s). For example, the rule evaluation circuitry 250 can select a virtual resource, such as the first virtual resource 122 of
In some examples, the rule evaluation circuitry 250 can determine that the first virtual resource 122 has a compute utilization of 40% and a storage utilization of 85%. For example, the rule evaluation circuitry 250 can determine whether the first virtual resource 122 has a utilization parameter that satisfies a threshold specified by a schedule rule, such as one of the rules 274. In some examples, the rule evaluation circuitry 250 can determine that the compute utilization of 40% is below a compute utilization threshold of 50% and thereby determine that the first virtual resource 122 is underutilized with respect to compute utilization. In some examples, the rule evaluation circuitry 250 can determine that the storage utilization of 85% is above a storage utilization threshold of 70% and thereby determine that the first virtual resource 122 is overutilized with respect to storage utilization.
In some examples, the rule evaluation circuitry 250 can determine whether to create a snapshot of a virtual resource based on a schedule rule. For example, the rule evaluation circuitry 250 can determine that the schedule 272 includes a rule that, when triggered, can cause a snapshot of an applicable virtual resource to be captured. In some examples, the snapshot can be a backup of a virtual resource, such as storing a copy of the virtual resource, or portion(s) thereof. For example, the backup can be used to recover the virtual resource if the virtual resource has failed. In some examples, the backup of a first virtual resource can be used to failover the first virtual resource to a second virtual resource if the first virtual resource is executing a high availability application or workload. In some examples, the rule evaluation circuitry 250 can cause the snapshot to be stored in the datastore 270 as one(s) of the snapshots 278.
The LMC circuitry 200 of the illustrated example includes the operation execution circuitry 260 to execute an operation associated with a virtual resource based on a schedule rule, such as one of the rules 274. In some examples, the operation execution circuitry 260 is instantiated by processor circuitry executing operation execution instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the operation execution circuitry 260 executes an operation after a determination that a value of a utilization parameter of a virtual resource satisfies a threshold. For example, the operation execution circuitry 260 can execute an operation on the first virtual resource 122 after a determination that the first virtual resource 122 has a compute utilization of 10% that is less than a compute utilization threshold of 40%.
In some examples, the operation execution circuitry 260 can execute an action (e.g., a schedule action) or operation (e.g., a schedule operation) such as a resize operation. For example, the operation execution circuitry 260 can resize the first virtual resource 122 by upsizing the first virtual resource 122 or downsizing the first virtual resource 122. In some examples, the operation execution circuitry 260 can upsize the first virtual resource 122 by adding resources (e.g., compute, network or networking, storage, etc., resources) to the first virtual resource 122. In some examples, the operation execution circuitry 260 can downsize the first virtual resource 122 by removing resources (e.g., compute, network or networking, storage, etc., resources) from the first virtual resource 122.
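The resize decision can be summarized as a comparison against utilization thresholds, as in the following sketch (the threshold values echo the examples in this disclosure and are otherwise arbitrary):

    def resize_decision(cpu_utilization: float,
                        low: float = 40.0, high: float = 80.0) -> str:
        # Downsize an underutilized resource, upsize an overutilized one,
        # and otherwise leave the resource unchanged.
        if cpu_utilization < low:
            return "downsize"
        if cpu_utilization > high:
            return "upsize"
        return "none"

    print(resize_decision(10.0))  # 'downsize' -- matches the 10% < 40% example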
In some examples, the operation execution circuitry 260 can execute an action (e.g., a schedule action) or operation (e.g., a schedule operation) such as a power on or off operation. For example, the operation execution circuitry 260 can power off the first virtual resource 122 in response to a determination that the first virtual resource 122 invoked a rule, such as one of the rules 274, that specifies a virtual resource to be powered off if the rule is triggered. In some examples, the operation execution circuitry 260 can power on the first virtual resource 122 in response to a determination that the first virtual resource 122 invoked a rule that specifies a virtual resource to be powered on if the rule is triggered.
In some examples, the operation execution circuitry 260 can execute an action (e.g., a schedule action) or operation (e.g., a schedule operation) such as a snapshot operation. For example, the operation execution circuitry 260 can create snapshots of the first virtual resource 122 to achieve improved failure recovery of the first virtual resource 122 or backup recovery features associated with the first virtual resource 122. In some examples, the operation execution circuitry 260 can store the snapshots in the datastore 270 as the snapshots 278. In some examples, the operation execution circuitry 260 can execute the snapshot operation by storing at least one of configuration data or workload data associated with the first virtual resource 122 in the datastore 270 as the snapshots 278 or as any other data.
In some examples, the configuration data can include a type of the first virtual resource 122, such as a VM, a container, a switch (e.g., a network switch), a gateway (e.g., a network gateway), a router (e.g., a network router), a load balancer, etc. In some examples, the configuration data can include a type and/or version of operating system (OS) installed on the first virtual resource 122. In some examples, the configuration data can include network configuration data, such as an Internet Protocol (IP) address, an IP port, a media access control (MAC) address, etc., of the first virtual resource 122. In some examples, the configuration data can be data representative of an availability parameter, a performance parameter, a capacity parameter, a utilization parameter, etc., associated with the first virtual resource 122. For example, the configuration data can include a number of CPU GHz, a number of RAM GB, a number of mass storage GB, etc., associated with the first virtual resource 122.
In some examples, the workload data can include a type of a workload, such as a machine learning workload, a data routing workload, a computationally-intensive workload, a vector processing workload, etc. In some examples, the workload data can include a description of a workload, such as a name and/or type of application or service being executed. In some examples, the workload data can include a progress of a workload, such as data representative of what portion(s) of the workload is/are complete and/or what portion(s) of the workload is/are to be processed or completed.
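The kinds of configuration data and workload data enumerated above might be bundled into a snapshot record along the following lines; the keys and values are illustrative assumptions:

    import json

    snapshot = {
        "configuration": {
            "resource_type": "VM",
            "os": "example guest OS, version 1.0",
            "network": {"ip": "10.0.0.5", "port": 443, "mac": "00:1B:44:11:3A:B7"},
            "capacity": {"cpu_ghz": 2.4, "ram_gb": 16, "storage_gb": 256},
        },
        "workload": {
            "type": "machine learning",
            "description": "inference service",
            "progress": {"completed": 0.6, "remaining": 0.4},
        },
    }

    # Serialize the record, e.g., for storage in the datastore 270.
    serialized = json.dumps(snapshot)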
In some examples, the operation execution circuitry 260 can execute an action (e.g., a schedule action) or operation (e.g., a schedule operation) such as a migration operation. For example, the operation execution circuitry 260 can assign the first virtual resource 122 from a first workload domain to a second workload domain based on a determination that the first virtual resource 122 is underutilized and/or the second workload domain needs additional resources. In some examples, the operation execution circuitry 260 can migrate and/or otherwise cause a transfer of a workload, or portion(s) thereof, from the first virtual resource 122 to a different virtual resource hosted by the first cloud provider 108. In some examples, the operation execution circuitry 260 can migrate and/or otherwise cause a transfer of a workload, or portion(s) thereof, from the first virtual resource 122 to a different virtual resource hosted by a different cloud provider, such as the second virtual resource 124 hosted by the second cloud provider 110.
The LMC circuitry 200 of the illustrated example includes the datastore 270 to record data. In some examples, the datastore 270 is instantiated by processor circuitry executing datastore instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the LMC circuitry 200 includes means for obtaining data. For example, the means for obtaining can obtain configuration data, workload data, utilization data, etc. For example, the means for obtaining may be implemented by the interface circuitry 210. In some examples, the interface circuitry 210 may be instantiated by processor circuitry such as the example processor circuitry 1412 of
In some examples, the LMC circuitry 200 includes means for generating a schedule. For example, the means for generating may be implemented by the schedule generation circuitry 220. In some examples, the schedule generation circuitry 220 may be instantiated by processor circuitry such as the example processor circuitry 1412 of
In some examples, the LMC circuitry 200 includes means for evaluating a schedule. For example, the means for evaluating a schedule may be implemented by the schedule evaluation circuitry 230. In some examples, the schedule evaluation circuitry 230 may be instantiated by processor circuitry such as the example processor circuitry 1412 of
In some examples, the LMC circuitry 200 includes means for identifying a resource (e.g., a virtual resource). For example, the means for identifying may be implemented by the resource identification circuitry 240. In some examples, the resource identification circuitry 240 may be instantiated by processor circuitry such as the example processor circuitry 1412 of
In some examples, the LMC circuitry 200 includes means for evaluating a rule. For example, the means for evaluating a rule may be implemented by the rule evaluation circuitry 250. In some examples, the rule evaluation circuitry 250 may be instantiated by processor circuitry such as the example processor circuitry 1412 of
In some examples, the LMC circuitry 200 includes means for executing an action or operation. For example, the means for executing an action or operation may be implemented by the operation execution circuitry 260. In some examples, the operation execution circuitry 260 may be instantiated by processor circuitry such as the example processor circuitry 1412 of
In some examples, the LMC circuitry 200 includes means for storing data. For example, the means for storing data may be implemented by the datastore 270. In some examples, the datastore 270 may be instantiated by processor circuitry such as the example processor circuitry 1412 of
While an example manner of implementing the lifecycle management controller 102 of
In the first workflow 300, example operations 304, 306, 308, 310, 312, 314, 316 are to be executed for each schedule for which it is time to perform an action. For example, the schedules service 148 can check every 5 seconds whether the cron expression in the schedule 272 indicates that the schedule 272 is to be evaluated. In some examples, the schedules service 148 can determine that a timestamp represented by the cron expression has been met or surpassed since the last time the schedule 272 has been checked.
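By way of illustration only, the following Python sketch shows one way such a periodic check could be structured. The is_due() and evaluate() callables are hypothetical stand-ins for the cron evaluation and the schedule processing described above.

import time
from datetime import datetime

def poll_schedules(schedules, is_due, evaluate, interval_seconds=5):
    # Check every interval (e.g., every 5 seconds) whether each schedule's
    # cron expression indicates that the schedule is to be evaluated.
    last_checked = datetime.now()
    while True:
        now = datetime.now()
        for schedule in schedules:
            # is_due() reports whether a timestamp represented by the cron
            # expression was met or surpassed since the last check.
            if is_due(schedule, last_checked, now):
                evaluate(schedule)
        last_checked = now
        time.sleep(interval_seconds)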
During a second example operation 304, the schedules service 148 gets resources (e.g., virtual resources) based on the rules 274 of the provided schedule. During a third example operation 306, the rules service 150 gets all resources with a given owner, project, and tags specified by the rules 274 of the schedule 272. In some examples, if the rules 274 of the schedule 272 do not include specified criteria, such as the owner, project, tags, etc., then the corresponding one(s) of the rules 274 is/are bypassed during evaluation.
During a fourth example operation 308, the provisioning service 154 returns found resources. In some examples, the datastore 270 can store relevant data to the requested resources. During a fifth example operation 310, the rules service 150 can obtain metrics (e.g., values of compute utilization parameters, storage utilization parameters, etc.), such as the parameters 276, for the given resources. During a sixth example operation 312, the metrics service 152 returns the requested data. During a seventh example operation 314, the rules service 150 can filter and/or otherwise identify one(s) of the found resources based on the requested data. For example, the rules service 150 can identify the first virtual resource 122 of
In the first workflow 300, example operations 318, 320, 322, 324, 326, 328 are to be executed for each matched resource. During a ninth example operation 318, the schedules service 148 causes one or more schedule actions, operations, etc., to be performed on the resource. For example, the schedule 272 can include a schedule action of turning off a matched virtual resource if the matched virtual resource has a compute utilization that falls beneath a compute utilization threshold. During a tenth example operation 320, the provisioning service 154 causes the action to be performed on the resource. During an eleventh example operation 322, the adapters host service 138 causes the action to be performed on the resource. For example, the first adapter 140 can instruct the first cloud provider 108 to carry out the schedule action on the first virtual resource 122. During a twelfth example operation 324, the cloud providers 108, 110, 112 transmit an acknowledgment that the schedule action is successful to the adapters host service 138. During a thirteenth example operation 326, the adapters host service 138 transmits the acknowledgment to the provisioning service 154. During a fourteenth example operation 328, the provisioning service 154 transmits the acknowledgment to the schedules service 148. During a fifteenth example operation 330, the schedules service 148 can update the schedule with a last run time and/or status (e.g., a status of success based on the received acknowledgement).
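By way of illustration only, the following Python sketch shows one possible shape of the per-resource action loop of operations 318-330. The perform_action() callable is a hypothetical stand-in for the chain of the provisioning service 154, the adapters host service 138, and the cloud providers 108, 110, 112, and returns True when the provider acknowledges that the schedule action succeeded.

from datetime import datetime

def run_schedule_actions(schedule, matched_resources, perform_action):
    # Cause the schedule action(s) to be performed on each matched resource
    # and propagate the acknowledgments back up the chain.
    status = "success"
    for resource in matched_resources:
        if not perform_action(schedule.action, resource):
            status = "failure"
    # Update the schedule with a last run time and status.
    schedule.last_run_time = datetime.now()
    schedule.status = status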
During a first example operation 404 of the second workflow 400, the user interface 402 causes a creation of a schedule via the schedules service 148. For example, a user can interact with the user interface 402 to create a schedule, such as the schedule 272 of
During a first example operation 502 of the third workflow 500, the metrics service 152 can periodically (e.g., every X minutes, where X is configurable) send requests to one(s) of the cloud providers 108, 110, 112 and/or the private cloud 106 for metrics associated with one(s) of the virtual resources 122, 124, 126, 136. For example, the metrics service 152 can send 4 distinct requests (in parallel) to the adapters 140, 142, 144, 146 encapsulated by the adapters host service 138. During the first operation 502, the metrics service 152 can initiate the obtaining of the latest metrics for resources managed by a given one of the cloud providers 108, 110, 112 and/or the private cloud 106.
During a second example operation 504, the adapters host service 138 can obtain and/or otherwise identify the resources (e.g., the virtual resources 122, 124, 126, 136) for a given cloud provider type (e.g., the first cloud provider 108, the second cloud provider 110, the third cloud provider 112, the private cloud 106, etc.). During a third example operation 506, the provisioning service 154 can return identification(s) of the resources. For example, the provisioning service 154 can provide to the adapters host service 138 an identification of the first virtual resource 122 as being associated with the first cloud provider 108.
During a fourth example operation 508, the adapters host service 138 can request the metrics for the resources identified by the provisioning service 154. During a fifth example operation 510, the cloud providers 108, 110, 112 can return and/or otherwise output the metrics to the adapters host service 138. For example, the adapters host service 138 can request the parameters 276 associated with all or some of the virtual resources hosted by the first cloud provider 108. In some examples, the first cloud provider 108 can provide the parameters 276 associated with all or some of the requested virtual resources, such as the first virtual resource 122. During a sixth example operation 512, the adapters host service 138 can provide the metrics to the metrics service 152, which can present them to a user of the private cloud 106, the rules service 150 for evaluation, etc., and/or any combination(s) thereof.
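By way of illustration only, the following Python sketch shows one way the parallel metric requests of operations 502-512 could be issued. The adapter and provisioning objects and their methods are hypothetical stand-ins for the adapters 140, 142, 144, 146 and the provisioning service 154.

from concurrent.futures import ThreadPoolExecutor

def collect_metrics(adapters, provisioning):
    # One distinct request (in parallel) per adapter encapsulated by the
    # adapters host service.
    def fetch(adapter):
        # Identify the resources for the given cloud provider type, then
        # request the metrics for the identified resources.
        resources = provisioning.resources_for(adapter.provider)
        return adapter.provider, adapter.metrics_for(resources)

    with ThreadPoolExecutor(max_workers=max(len(adapters), 1)) as pool:
        return dict(pool.map(fetch, adapters))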
The first schedule 802 of the illustrated example of
In the illustrated example, the operation schedule is implemented by a cron expression of “0 30 19 * * *.” In the six-field cron format of “seconds minutes hours day-of-month month day-of-week,” the 0 represents the second (e.g., 0 in a range of 0-59 seconds), the 30 represents the minute (e.g., 30 in a range of 0-59 minutes), and the 19 represents the hour (e.g., 19 in a 24-hour format). The remaining fields of the cron expression are represented by “*” to indicate that any value matches, such as any day of the month, any month, or any day of the week. For example, the cron expression of “0 30 19 * * *” in the illustrated example can represent that the snapshot operation is to be performed every day at 19:30:00 (24-hour time format of hours:minutes:seconds (hh:mm:ss)).
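By way of illustration only, the following Python sketch shows a minimal matcher for the six-field, seconds-first cron format described above. It handles only numeric fields and the “*” wildcard, which suffices for the illustrated expressions.

from datetime import datetime

def cron_matches(expression: str, when: datetime) -> bool:
    # Fields: seconds minutes hours day-of-month month day-of-week,
    # where "*" matches any value.
    fields = expression.split()
    actual = [when.second, when.minute, when.hour,
              when.day, when.month, when.isoweekday() % 7]
    return all(f == "*" or int(f) == value
               for f, value in zip(fields, actual))

# "0 30 19 * * *" matches every day at 19:30:00.
assert cron_matches("0 30 19 * * *", datetime(2024, 5, 1, 19, 30, 0))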
The second schedule 804 of the illustrated example of
In the illustrated example, the operation schedule is implemented by a cron expression of “*,” where “*” indicates that the downsize operation is to be performed whenever one or more of the rules 274 are triggered. For example, the downsize operation can be performed when a utilization parameter of an applicable virtual resource falls below a utilization threshold.
Flowcharts representative of example machine readable instructions, which may be executed to configure processor circuitry to implement the LMC circuitry 200 of
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., as portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and/or stored on separate computing devices, wherein the parts when decrypted, decompressed, and/or combined form a set of machine executable instructions that implement one or more operations that may together form a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the machine readable instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example operations of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc., may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, or (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B.
As used herein, singular references (e.g., “a,” “an,” “first,” “second,” etc.) do not exclude a plurality. The term “a” or “an” object, as used herein, refers to one or more of that object. The terms “a” (or “an”), “one or more,” and “at least one” are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., the same entity or object. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
At block 904, the example LMC circuitry 200 identifies the virtual resource after determining that the rule corresponds to the virtual resource. For example, the resource identification circuitry 240 (
At block 906, the example LMC circuitry 200 executes the operation after determining that a value of a utilization parameter of the virtual resource satisfies a threshold. For example, the operation execution circuitry 260 (
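By way of illustration only, the following Python sketch shows one possible realization of blocks 904 and 906. The rule methods corresponds_to() and satisfied_by(), and the get_value() and execute() callables, are hypothetical.

def evaluate_rule(rule, resources, get_value, execute):
    # Block 904: identify the virtual resource(s) to which the rule
    # corresponds (e.g., by owner, project, and/or tags).
    matched = [r for r in resources if rule.corresponds_to(r)]
    # Block 906: execute the operation after determining that a value of
    # the utilization parameter satisfies the rule's threshold.
    for resource in matched:
        if rule.satisfied_by(get_value(resource, rule.parameter)):
            execute(rule.operation, resource)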
At block 1004, the example LMC circuitry 200 determines whether a timer has elapsed to check the schedules. For example, the schedule evaluation circuitry 230 (
If, at block 1004, the example LMC circuitry 200 determines that a timer has not elapsed to check the schedules, control proceeds to block 1020. Otherwise, control proceeds to block 1006.
At block 1006, the example LMC circuitry 200 selects a schedule of interest to process. For example, the schedule evaluation circuitry 230 can select the second schedule 804 of
At block 1008, the example LMC circuitry 200 identifies one(s) of the virtual resources corresponding to the schedule. For example, the resource identification circuitry 240 (
At block 1010, the example LMC circuitry 200 obtains utilization data associated with the one(s) of the virtual resources. For example, the interface circuitry 210 (
At block 1012, the example LMC circuitry 200 identifies one(s) of the virtual resources whose utilization data satisfies utilization threshold(s). For example, the rule evaluation circuitry 250 (
At block 1014, the example LMC circuitry 200 performs schedule action(s) on the identified one(s) of the virtual resources. For example, the operation execution circuitry 260 (
At block 1016, the example LMC circuitry 200 updates the schedule with a last run time and status. For example, the schedule evaluation circuitry 230 can update the second schedule 804 with data, such as a timestamp corresponding to the instant schedule evaluation and/or a status, such as an execution of the downsize operation, a success status, etc.
At block 1018, the example LMC circuitry 200 determines whether to select another schedule of interest to process. For example, the schedule evaluation circuitry 230 can determine to select the first schedule 802 to process.
If, at block 1018, the example LMC circuitry 200 determines to select another schedule of interest to process, control returns to block 1006. Otherwise, control proceeds to block 1020.
At block 1020, the example LMC circuitry 200 determines whether to continue monitoring the virtual resources. For example, the schedule evaluation circuitry 230 can determine whether to evaluate (e.g., iteratively evaluate) one(s) of the schedules 802, 804 to perform lifecycle management associated with the virtual resources 122, 124, 126 of
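By way of illustration only, the following Python sketch ties blocks 1004-1020 together as a monitoring loop. The callables passed in are hypothetical stand-ins for the circuitry described above.

import time
from datetime import datetime

def monitor(schedules, timer_elapsed, identify, get_utilization,
            satisfies, perform, keep_monitoring):
    while keep_monitoring():                                # block 1020
        if not timer_elapsed():                             # block 1004
            time.sleep(1)
            continue
        for schedule in schedules:                          # blocks 1006, 1018
            resources = identify(schedule)                  # block 1008
            data = {r: get_utilization(r) for r in resources}   # block 1010
            matched = [r for r, v in data.items()
                       if satisfies(schedule, v)]           # block 1012
            for resource in matched:
                perform(schedule, resource)                 # block 1014
            schedule.last_run_time = datetime.now()         # block 1016
            schedule.status = "success"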
At block 1104, the example LMC circuitry 200 configures one of the schedule data fields with a name of a cloud provider associated with a virtual resource. For example, the schedule generation circuitry 220 (
At block 1106, the example LMC circuitry 200 configures one of the schedule data fields with a time zone. For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with a time zone associated with at least one of the first cloud provider 108 or the private cloud 106 of
At block 1108, the example LMC circuitry 200 configures one of the schedule data fields with a first timestamp at which to start enforcement of the rule. For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with a first timestamp at which to start enforcement of one or more of the rules 274 on the first virtual resource 122.
At block 1110, the example LMC circuitry 200 configures one of the schedule data fields with a second timestamp at which to end enforcement of the rule. For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with a second timestamp at which to end enforcement of the one or more of the rules 274 on the first virtual resource 122.
At block 1112, the example LMC circuitry 200 configures one of the schedule data fields with a project name. For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with a project name associated with deployment of the private cloud 106 and/or the first virtual resource 122.
At block 1114, the example LMC circuitry 200 configures one of the schedule data fields with tags. For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with one or more tags.
At block 1116, the example LMC circuitry 200 configures one of the schedule data fields with a type of operation to be executed in response to enforcement of the rule. For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with a type of operation, such as a snapshot operation or resize operation, to be executed in response to enforcement of the rule on the first virtual resource 122.
At block 1118, the example LMC circuitry 200 configures one of the schedule data fields with threshold(s) associated with triggering of the rule. For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with a threshold, such as a compute utilization threshold, associated with triggering of the one or more of the rules 274.
At block 1120, the example LMC circuitry 200 configures other one(s) of the schedule data fields with other parameter(s). For example, the schedule generation circuitry 220 can set a value of one of the schedule data fields 602 with any other value, data, etc., to support evaluation of the schedules 802, 804. After configuring the other one(s) of the schedule data fields with other parameter(s) at block 1120, the example machine readable instructions and/or the example operations 1100 of
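By way of illustration only, the following Python sketch shows one possible record of the schedule data fields 602 configured at blocks 1104-1120. The field names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ScheduleFields:
    cloud_provider: str                # block 1104: name of the cloud provider
    time_zone: str                     # block 1106: e.g., "UTC"
    start_time: datetime               # block 1108: start enforcement of the rule
    end_time: datetime                 # block 1110: end enforcement of the rule
    project: str                       # block 1112: project name
    tags: list = field(default_factory=list)        # block 1114: one or more tags
    operation: str = "snapshot"        # block 1116: e.g., "snapshot", "resize"
    thresholds: dict = field(default_factory=dict)  # block 1118: trigger threshold(s)
    extra: dict = field(default_factory=dict)       # block 1120: other parameter(s)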
If, at block 1202, the example LMC circuitry 200 determines that a timer has not elapsed to check a schedule, control proceeds to block 1222. Otherwise, control proceeds to block 1204.
At block 1204, the example LMC circuitry 200 obtains a request for utilization data for virtual resources of a cloud provider associated with the schedule. For example, the interface circuitry 210 (
At block 1206, the example LMC circuitry 200 identifies the virtual resources corresponding to the cloud provider. For example, the resource identification circuitry 240 (
At block 1208, the example LMC circuitry 200 obtains utilization parameters for the virtual resources. For example, the interface circuitry 210 can obtain utilization parameters for the first virtual resource 122, which can include a value of a compute utilization parameter, a storage utilization parameter, a memory utilization parameter, etc.
At block 1210, the example LMC circuitry 200 selects a virtual resource. For example, the rule evaluation circuitry 250 (
At block 1212, the example LMC circuitry 200 determines whether the virtual resource has a utilization parameter that satisfies a threshold specified by a schedule rule. For example, the rule evaluation circuitry 250 can determine whether the first virtual resource 122 has a value of a utilization parameter, such as a compute utilization parameter, that satisfies a threshold specified by a schedule rule of the at least one of the first schedule 802 or the second schedule 804.
If, at block 1212, the example LMC circuitry 200 determines that the virtual resource does not have a utilization parameter that satisfies a threshold specified by a schedule rule, control proceeds to block 1216. Otherwise, control proceeds to block 1214.
At block 1214, the example LMC circuitry 200 at least one of powers on, powers off, or resizes the virtual resource. For example, after a determination that a value of a storage utilization of the first virtual resource 122 exceeds a storage utilization threshold, the operation execution circuitry 260 (
At block 1216, the example LMC circuitry 200 determines whether to create a snapshot of the virtual resource based on a schedule rule. For example, the rule evaluation circuitry 250 can determine whether at least one of the first schedule 802 or the second schedule 804 includes a rule that, when triggered, causes a snapshot of the first virtual resource 122 to be captured. If, at block 1216, the example LMC circuitry 200 determines not to create a snapshot of the virtual resource based on a schedule rule, control proceeds to block 1220. Otherwise, control proceeds to block 1218.
At block 1218, the example LMC circuitry 200 stores at least one of configuration data or workload data associated with the virtual resource to capture a snapshot of the virtual resource. For example, after a determination to capture a snapshot of the first virtual resource 122, the operation execution circuitry 260 can store at least one of configuration data or workload data associated with the first virtual resource 122 in the datastore 270 (
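By way of illustration only, the following Python sketch shows one way a snapshot could be captured at block 1218 by persisting configuration data and workload data. The get_config() and get_workload() callables are hypothetical.

def capture_snapshot(datastore, resource, get_config, get_workload):
    # Store at least one of configuration data or workload data associated
    # with the virtual resource so that the resource can later be restored
    # or recreated from the stored state.
    snapshot = {
        "configuration": get_config(resource),   # e.g., OS, network, capacity
        "workload": get_workload(resource),      # e.g., progress of execution
    }
    datastore[resource.id] = snapshot
    return snapshot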
At block 1220, the example LMC circuitry 200 determines whether to select another virtual resource. For example, the rule evaluation circuitry 250 can determine whether there is another virtual resource hosted by the first cloud provider 108 that is associated with at least one of the first schedule 802 or the second schedule 804.
If, at block 1220, the example LMC circuitry 200 determines to select another virtual resource, control returns to block 1210. Otherwise, control proceeds to block 1222.
At block 1222, the example LMC circuitry 200 determines whether to continue monitoring the virtual resources based on the schedule. For example, the schedule evaluation circuitry 230 can determine whether to continue evaluating at least one of the first schedule 802 or the second schedule 804.
If, at block 1222, the example LMC circuitry 200 determines to continue monitoring the virtual resources based on the schedule, control returns to block 1202. Otherwise, the example machine readable instructions and/or the example operations 1200 of
If, at block 1302, the example LMC circuitry 200 determines that a value of a utilization parameter of a virtual resource is not below a threshold, control proceeds to block 1308. Otherwise, control proceeds to block 1304.
At block 1304, the example LMC circuitry 200 determines that the virtual resource is underutilized. For example, the rule evaluation circuitry 250 can determine that a network resource of the first virtual resource 122 is underutilized based on a determination that a value of the network utilization parameter is below and/or meets a network utilization threshold.
At block 1306, the example LMC circuitry 200 at least one of turns off the virtual resource or assigns the virtual resource to a different workload domain. For example, after a determination that the first virtual resource 122 is underutilized, the operation execution circuitry 260 (
At block 1308, the example LMC circuitry 200 determines whether a value of a utilization parameter of a virtual resource is above a threshold. For example, the rule evaluation circuitry 250 can determine whether a value of a network utilization parameter (e.g., a value of 20% utilized, 50% utilized, etc.) for the first virtual resource 122 of
If, at block 1308, the example LMC circuitry 200 determines that a value of a utilization parameter of a virtual resource is not above a threshold, the example machine readable instructions and/or the example operations 1300 of
If, at block 1308, the example LMC circuitry 200 determines that a value of a utilization parameter of a virtual resource is above a threshold, then, at block 1310, the LMC circuitry 200 determines that the virtual resource is overutilized. For example, the rule evaluation circuitry 250 can determine that a network resource of the first virtual resource 122 is overutilized based on a determination that a value of the network utilization parameter is above and/or meets a network utilization threshold.
At block 1312, the example LMC circuitry 200 at least one of transfers a portion of a workload of the virtual resource to a different virtual resource or adds a quantity of resources to the virtual resource. For example, after a determination that the first virtual resource 122 is overutilized, the operation execution circuitry 260 can determine that a workload, or portion(s) thereof, can be transferred from the first virtual resource 122 to a different virtual resource to reduce the utilization of the first virtual resource 122. In some examples, after a determination that the first virtual resource 122 is overutilized, the operation execution circuitry 260 can determine to add resources (e.g., virtualizations of hardware resources, virtual resources, etc.) to the first virtual resource 122 to reduce the utilization of the first virtual resource 122. For example, the operation execution circuitry 260 can add a virtualized gateway, switch, router, etc., to the first virtual resource 122 to distribute a workload executed by the first virtual resource 122 to reduce the utilization of the first virtual resource 122. After at least one of a transfer of a portion of a workload of the virtual resource to a different virtual resource or an addition of a quantity of resources to the virtual resource at block 1312, the example machine readable instructions and/or the example operations 1300 of
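By way of illustration only, the following Python sketch summarizes the classification of blocks 1302-1312. The actions object and its methods are hypothetical stand-ins for the operation execution circuitry 260.

def remediate(resource, value, low_threshold, high_threshold, actions):
    if value <= low_threshold:
        # Underutilized: turn off the virtual resource or assign it to a
        # different workload domain that needs additional resources.
        actions.turn_off_or_reassign(resource)
    elif value > high_threshold:
        # Overutilized: transfer a portion of the workload to a different
        # virtual resource or add a quantity of resources to this one.
        actions.transfer_or_upsize(resource)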
The processor platform 1400 of the illustrated example includes processor circuitry 1412. The processor circuitry 1412 of the illustrated example is hardware. For example, the processor circuitry 1412 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry 1412 may be implemented by one or more semiconductor-based (e.g., silicon-based) devices. In this example, the processor circuitry 1412 implements the schedule generation circuitry 220 (identified by SCHEDULE GEN CIRCUITRY), the schedule evaluation circuitry 230 (identified by SCHEDULE EVAL CIRCUITRY), the resource identification circuitry 240 (identified by RESOURCE ID CIRCUITRY), the rule evaluation circuitry 250 (identified by RULE EVAL CIRCUITRY), and the operation execution circuitry 260 (identified by OPERATION EXE CIRCUITRY) of
The processor circuitry 1412 of the illustrated example includes a local memory 1413 (e.g., a cache, registers, etc.). The processor circuitry 1412 of the illustrated example is in communication with a main memory including a volatile memory 1414 and a non-volatile memory 1416 by a bus 1418. In this example, the bus 1418 implements the bus 280 of
The processor platform 1400 of the illustrated example also includes interface circuitry 1420. In this example, the interface circuitry 1420 implements the interface circuitry 210 of
In the illustrated example, one or more input devices 1422 are connected to the interface circuitry 1420. The input device(s) 1422 permit(s) a user to enter data and/or commands into the processor circuitry 1412. The input device(s) 1422 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.
One or more output devices 1424 are also connected to the interface circuitry 1420 of the illustrated example. The output devices 1424 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuitry 1420 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU.
The interface circuitry 1420 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network 1426. The communication can be by, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, an optical connection, etc.
The processor platform 1400 of the illustrated example also includes one or more mass storage devices 1428 to store software and/or data. Examples of such mass storage devices 1428 include magnetic storage devices, optical storage devices, floppy disk drives, HDDs, CDs, Blu-ray disk drives, redundant array of independent disks (RAID) systems, solid state storage devices such as flash memory devices, and DVD drives. In this example, the one or more mass storage devices 1428 implement the datastore 270 of
The machine executable instructions 1432, which may be implemented by the machine readable instructions of
The cores 1502 may communicate by a first example bus 1504. In some examples, the first bus 1504 may implement a communication bus to effectuate communication associated with one(s) of the cores 1502. For example, the first bus 1504 may implement at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the first bus 1504 may implement any other type of computing or electrical bus. The cores 1502 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 1506. The cores 1502 may output data, instructions, and/or signals to the one or more external devices by the interface circuitry 1506. Although the cores 1502 of this example include example local memory 1520 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 1500 also includes example shared memory 1510 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 1510. The local memory 1520 of each of the cores 1502 and the shared memory 1510 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory 1414, 1416 of
Each core 1502 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 1502 includes control unit circuitry 1514, arithmetic and logic (AL) circuitry (sometimes referred to as an ALU) 1516, a plurality of registers 1518, the L1 cache 1520, and a second example bus 1522. Other structures may be present. For example, each core 1502 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry 1514 includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core 1502. The AL circuitry 1516 includes semiconductor-based circuits structured to perform one or more mathematic and/or logic operations on the data within the corresponding core 1502. The AL circuitry 1516 of some examples performs integer based operations. In other examples, the AL circuitry 1516 also performs floating point operations. In yet other examples, the AL circuitry 1516 may include first AL circuitry that performs integer based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry 1516 may be referred to as an Arithmetic Logic Unit (ALU). The registers 1518 are semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry 1516 of the corresponding core 1502. For example, the registers 1518 may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 1518 may be arranged in a bank as shown in
Each core 1502 and/or, more generally, the microprocessor 1500 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor 1500 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The processor circuitry may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry and/or in one or more separate packages from the processor circuitry.
More specifically, in contrast to the microprocessor 1500 of
In the example of
The interconnections 1610 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using a hardware description language (HDL)) to activate or deactivate one or more connections between one or more of the logic gate circuitry 1608 to program desired logic circuits.
The storage circuitry 1612 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 1612 may be implemented by registers or the like. In the illustrated example, the storage circuitry 1612 is distributed amongst the logic gate circuitry 1608 to facilitate access and increase execution speed.
The example FPGA circuitry 1600 of
Although
In some examples, the processor circuitry 1412 of
From the foregoing, it will be appreciated that example systems, methods, apparatus, and articles of manufacture have been disclosed for schedule-based lifecycle management. Disclosed systems, methods, apparatus, and articles of manufacture improve the efficiency of using a computing device by periodically evaluating schedules to effectuate Day 0, Day 1, and/or Day 2 operations to reduce the time needed to design, deploy, and/or maintain a virtualized environment. Disclosed systems, methods, apparatus, and articles of manufacture utilize schedule-based lifecycle management to reduce and/or eliminate downtime of a virtualized environment, which can result in additional workloads being completed. Disclosed systems, methods, apparatus, and articles of manufacture are accordingly directed to one or more improvement(s) in the operation of a machine such as a computer or other electronic and/or mechanical device.
Example methods, apparatus, systems, and articles of manufacture for schedule-based lifecycle management are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes an apparatus for lifecycle management in a virtualized environment, the apparatus comprising at least one memory, machine readable instructions, and processor circuitry to at least one of execute or instantiate the machine readable instructions to at least generate a schedule including a rule, the rule to trigger an operation associated with a virtual resource of the virtualized environment, identify the virtual resource after a first determination that the rule corresponds to the virtual resource, and execute the operation after a second determination that a value of a utilization parameter of the virtual resource satisfies a threshold.
Example 2 includes the apparatus of example 1, wherein the processor circuitry is to configure a first data field of the schedule with a name of a cloud provider associated with the virtual resource, configure a second data field of the schedule with a first timestamp at which to start enforcement of the rule, configure a third data field of the schedule with a second timestamp at which to end enforcement of the rule, configure a fourth data field with the operation to be executed after the triggering of the rule, and generate the schedule based on at least one of the first data field, the second data field, the third data field, or the fourth data field.
Example 3 includes the apparatus of example 1, wherein the operation is a snapshot operation, and the processor circuitry is to obtain configuration data associated with a configuration of the virtual resource, obtain workload data associated with a progress of execution of a workload by the virtual resource, and store the configuration data and the workload data in a datastore to capture a snapshot of the virtual resource.
Example 4 includes the apparatus of example 1, wherein the virtual resource is in a first workload domain, the operation is a downsize operation, the value of the utilization parameter satisfies the threshold based on the value being less than the threshold, and the processor circuitry is to determine that the virtual resource is underutilized based on the value being less than the threshold, and at least one of turn off the virtual resource or assign the virtual resource to a second workload domain to execute a workload.
Example 5 includes the apparatus of example 1, wherein the virtual resource is a first virtual resource, the first virtual resource represents a first quantity of hardware resources, the operation is an upsize operation, the value of the utilization parameter satisfies the threshold based on the value being greater than the threshold, and the processor circuitry is to determine that the first virtual resource is overutilized based on the value being greater than the threshold, and at least one of transfer a portion of a workload of the first virtual resource to a second virtual resource or add a second quantity of hardware resources to the first virtual resource.
Example 6 includes the apparatus of example 1, wherein the virtual resource is powered off at a first time, and the processor circuitry is to turn on the virtual resource to execute the operation at a second time after the first time.
Example 7 includes the apparatus of example 1, wherein the utilization parameter is a compute utilization, a memory utilization, or a storage utilization.
Example 8 includes at least one non-transitory machine readable storage medium comprising instructions that, when executed, cause processor circuitry to at least generate a schedule including a rule, the rule to trigger an operation associated with a virtual resource of a virtualized environment, identify the virtual resource after a first determination that the rule corresponds to the virtual resource, and execute the operation after a second determination that a value of a utilization parameter of the virtual resource satisfies a threshold.
Example 9 includes the at least one non-transitory machine readable storage medium of example 8, wherein the instructions, when executed, cause the processor circuitry to configure a first data field of the schedule with a name of a cloud provider associated with the virtual resource, configure a second data field of the schedule with a first timestamp at which to start enforcement of the rule, configure a third data field of the schedule with a second timestamp at which to end enforcement of the rule, configure a fourth data field with the operation to be executed after the triggering of the rule, and generate the schedule based on at least one of the first data field, the second data field, the third data field, or the fourth data field.
Example 10 includes the at least one non-transitory machine readable storage medium of example 8, wherein the operation is a snapshot operation, and the instructions, when executed, cause the processor circuitry to obtain configuration data associated with a configuration of the virtual resource, obtain workload data associated with a progress of execution of a workload by the virtual resource, and store the configuration data and the workload data in a datastore to capture a snapshot of the virtual resource.
Example 11 includes the at least one non-transitory machine readable storage medium of example 8, wherein the virtual resource is in a first workload domain, the operation is a downsize operation, the value of the utilization parameter satisfies the threshold based on the value being less than the threshold, and the instructions, when executed, cause the processor circuitry to determine that the virtual resource is underutilized based on the value being less than the threshold, and at least one of turn off the virtual resource or assign the virtual resource to a second workload domain to execute a workload.
Example 12 includes the at least one non-transitory machine readable storage medium of example 8, wherein the virtual resource is a first virtual resource, the first virtual resource represents a first quantity of hardware resources, the operation is an upsize operation, the value of the utilization parameter satisfies the threshold based on the value being greater than the threshold, and the instructions, when executed, cause the processor circuitry to determine that the first virtual resource is overutilized based on the value being greater than the threshold, and at least one of transfer a portion of a workload of the first virtual resource to a second virtual resource or add a second quantity of hardware resources to the first virtual resource.
Example 13 includes the at least one non-transitory machine readable storage medium of example 8, wherein the virtual resource is powered off at a first time, and the instructions, when executed, cause the processor circuitry to turn on the virtual resource to execute the operation at a second time after the first time.
Example 14 includes the at least one non-transitory machine readable storage medium of example 8, wherein the utilization parameter is a compute utilization, a memory utilization, or a storage utilization.
Example 15 includes a method for lifecycle management in a virtualized environment, the method comprising generating a schedule including a rule, the rule to trigger an operation associated with a virtual resource of the virtualized environment, identifying the virtual resource after a first determination that the rule corresponds to the virtual resource, and executing the operation after a second determination that a value of a utilization parameter of the virtual resource satisfies a threshold.
Example 16 includes the method of example 15, further including configuring a first data field of the schedule with a name of a cloud provider associated with the virtual resource, configuring a second data field of the schedule with a first timestamp at which to start enforcement of the rule, configuring a third data field of the schedule with a second timestamp at which to end enforcement of the rule, configuring a fourth data field with the operation to be executed after the triggering of the rule, and generating the schedule based on at least one of the first data field, the second data field, the third data field, or the fourth data field.
Example 17 includes the method of example 15, wherein the operation is a snapshot operation, and the method further including obtaining configuration data associated with a configuration of the virtual resource, obtaining workload data associated with a progress of execution of a workload by the virtual resource, and storing the configuration data and the workload data in a datastore to capture a snapshot of the virtual resource.
Example 18 includes the method of example 15, wherein the virtual resource is in a first workload domain, the operation is a downsize operation, the value of the utilization parameter satisfies the threshold based on the value being less than the threshold, and the method further including determining that the virtual resource is underutilized based on the value being less than the threshold, and at least one of turning off the virtual resource or assigning the virtual resource to a second workload domain to execute a workload.
Example 19 includes the method of example 15, wherein the virtual resource is a first virtual resource, the first virtual resource represents a first quantity of hardware resources, the operation is an upsize operation, the value of the utilization parameter satisfies the threshold based on the value being greater than the threshold, and the method further including determining that the first virtual resource is overutilized based on the value being greater than the threshold, and at least one of transferring a portion of a workload of the first virtual resource to a second virtual resource or adding a second quantity of hardware resources to the first virtual resource.
Example 20 includes the method of example 15, wherein the virtual resource is powered off at a first time, and the method further including turning on the virtual resource to execute the operation at a second time after the first time.
Example 21 includes the method of example 15, wherein the utilization parameter is a compute utilization, a memory utilization, or a storage utilization.
The following claims are hereby incorporated into this Detailed Description by this reference. Although certain example systems, methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all systems, methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.