Benefit is claimed under 35 U.S.C. 119(a)-(d) to Foreign Application Serial No. 202341004116 filed in India entitled “SELECTIVE CONFIGURATION IN A SOFTWARE-DEFINED DATA CENTER FOR APPLIANCE DESIRED STATE”, on Jan. 20, 2023, by VMware, Inc., which is herein incorporated in its entirety by reference for all purposes.
In a software-defined data center (SDDC), virtual infrastructure, which includes virtual machines (VMs) and virtualized storage and networking resources, is provisioned from hardware infrastructure that includes a plurality of host computers (hereinafter also referred to simply as “hosts”), storage devices, and networking devices. The provisioning of the virtual infrastructure is carried out by SDDC management software that is deployed on management appliances, such as a VMware vCenter Server® appliance and a VMware NSX® appliance, from VMware, Inc. The SDDC management software communicates with virtualization software (e.g., a hypervisor) installed in the hosts to manage the virtual infrastructure.
It has become common for multiple SDDCs to be deployed across multiple clusters of hosts. Each cluster is a group of hosts that are managed together by the management software to provide cluster-level functions, such as load balancing across the cluster through VM migration between the hosts, distributed power management, dynamic VM placement according to affinity and anti-affinity rules, and high availability (HA). The management software also manages a shared storage device to provision storage resources for the cluster from the shared storage device, and a software-defined network through which the VMs communicate with each other. For some customers, their SDDCs are deployed across different geographical regions, and may even be deployed in a hybrid manner, e.g., on-premise, in a public cloud, and/or as a service. “SDDCs deployed on-premise” means that the SDDCs are provisioned in a private data center that is controlled by a particular organization. “SDDCs deployed in a public cloud” means that SDDCs of a particular organization are provisioned in a public data center along with SDDCs of other organizations. “SDDCs deployed as a service” means that the SDDCs are provided to the organization as a service on a subscription basis. As a result, the organization does not have to carry out management operations on the SDDC, such as configuration, upgrading, and patching, and the availability of the SDDCs is provided according to the service level agreement of the subscription.
As described in U.S. patent application Ser. No. 17/665,602, filed on Feb. 7, 2022, the entire contents of which are incorporated by reference herein, the desired state of the SDDC, which includes configuration of services running in management appliances of the SDDC, may be defined in a declarative document, and the SDDC is deployed or upgraded according to the desired state defined in the declarative document. In addition, if drift from the desired state is detected, the SDDC is remediated according to the desired state defined in the declarative document. The desired state can include that of a virtualization management server configured to manage a cluster of hosts, the virtualization layers thereon, and the VMs executing therein. The complete configuration of a virtualization management server can be large and complex, including many managed objects and properties thereof. It is desirable to allow for selective configuration of a virtualization management server. For example, there could be several administrators and each of them can manage different parts of the configuration of the virtualization management server. Objects, properties, etc. of a virtualization management server configuration, however, can have various inter-dependencies, which makes selective configuration non-trivial. For example, selectively managing the configuration without accounting for dependencies can result in incorrect configuration and failure to achieve the desired state.
One or more embodiments provide a method of managing a configuration of a virtualization management server in a software-defined data center (SDDC), the virtualization management server managing a cluster of hosts and a virtualization layer executing therein. The method includes: generating, by a service executing in the SDDC, a profile that includes a managed configuration exclusive of an unmanaged configuration, a union of the managed configuration and the unmanaged configuration being a configuration of the virtualization management server; validating, by the service, that the managed configuration in the profile does not include dependencies with the unmanaged configuration; and applying, by the service, the profile to the virtualization management server.
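The generate/validate/apply flow of this method can be sketched in code. The sketch below is illustrative only: the `Profile` and `ProfileService` classes, the dict-based configuration model, and the object names are assumptions for exposition, not the actual VI profile service API.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    managed: dict        # object name -> properties (managed configuration only)
    dependencies: dict   # object name -> set of object names it depends on

class ValidationError(Exception):
    pass

class ProfileService:
    def __init__(self, full_configuration: dict, dependencies: dict):
        # The full configuration is the union of the managed and unmanaged parts.
        self.full = full_configuration
        self.deps = dependencies

    def generate(self, managed_keys: set) -> Profile:
        # The profile holds the managed configuration exclusive of the
        # unmanaged configuration.
        managed = {k: v for k, v in self.full.items() if k in managed_keys}
        return Profile(managed=managed, dependencies=self.deps)

    def validate(self, profile: Profile) -> None:
        managed = set(profile.managed)
        unmanaged = set(self.full) - managed
        # Reject any dependency that crosses the managed/unmanaged boundary.
        for obj, targets in profile.dependencies.items():
            if obj in managed and targets & unmanaged:
                raise ValidationError(f"managed {obj} depends on unmanaged objects")
            if obj in unmanaged and targets & managed:
                raise ValidationError(f"unmanaged {obj} depends on managed objects")

    def apply(self, profile: Profile, target: dict) -> dict:
        # Validate first; only the managed portion is ever written.
        self.validate(profile)
        target.update(profile.managed)
        return target
```

A profile over an independent object (here, a hypothetical "ntp" object) validates and applies cleanly, while a profile over an object that something unmanaged depends on is rejected.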
Further embodiments include a non-transitory computer-readable storage medium comprising instructions that cause a computer system to carry out the above method, as well as a computer system configured to carry out the above method.
In one or more embodiments, a cloud platform delivers various services (referred to herein as “cloud services”) to the SDDCs through agents of the cloud services that are running in an appliance (referred to herein as an “agent platform appliance”). The cloud platform is a computing platform that hosts containers or virtual machines corresponding to the cloud services that are delivered from the cloud platform. The agent platform appliance is deployed in the same customer environment as the management appliances of the SDDCs.
In the embodiments described herein, the cloud platform is provisioned in a public cloud and the agent platform appliance is provisioned as a virtual machine, and the two are connected over a public network, such as the Internet. In addition, the agent platform appliance and the management appliances are connected to each other over a private physical network, e.g., a local area network. Examples of cloud services that are delivered include an SDDC configuration service, an SDDC upgrade service, an SDDC monitoring service, an SDDC inventory service, and a message broker service. Each of these cloud services has a corresponding agent deployed on the agent platform appliance. All communication between the cloud services and the management software of the SDDCs is carried out through the respective agents of the cloud services.
As described in U.S. patent application Ser. No. 17/665,602, the desired state of SDDCs of a particular organization is managed by the SDDC configuration service running in the cloud platform (e.g., configuration service 110 depicted in
A plurality of SDDCs is depicted in
The VIM appliances in each customer environment communicate with an agent platform (AP) appliance, which hosts agents (not shown in
As used herein, a “customer environment” means one or more private data centers managed by the customer, which is commonly referred to as “on-prem,” a private cloud managed by the customer, a public cloud managed for the customer by another organization, or any combination of these. In addition, the SDDCs of any one customer may be deployed in a hybrid manner, e.g., on-premise, in a public cloud, or as a service, and across different geographical regions.
In the embodiments described herein, each of the agent platform appliances and the management appliances is a VM instantiated on one or more physical host computers (not shown in
Virtual infrastructure (VI) profile service 201 is the component in management appliance 51A that manages the configuration of services running in management appliance 51A according to a desired state. For example, VI profile service 201 is a system service of management appliance 51A. In another embodiment (not shown), VI profile service 201 can be a separate appliance (e.g., software running in a separate VM) or execute in a separate container from management appliance 51A. These services are referred to hereinafter as “managed services” and the desired state of these services is defined in a desired state document (depicted in
VI profile service 201 exposes various APIs that are invoked by configuration agent 140 and the managed services. The APIs include: a get-current-state API 211 that is invoked by configuration agent 140 to get the current state of SDDC 41; an apply API 212 that is invoked by configuration agent 140 to apply the desired state of SDDC 41, as defined in a desired state document, to SDDC 41; a scan API 213 that is invoked by configuration agent 140 to compute drift in the current state of SDDC 41 from the desired state of SDDC 41; a streaming API 215 that provides an interface by which configuration agent 140 receives streaming updates (including any drift detected in the current state of SDDC 41 from the desired state of SDDC 41) from VI profile service 201; and a notification API 216 that is invoked by any of the managed services to notify VI profile service 201 of a change in the configuration thereof. In the embodiments described herein, each of the managed services maintains the state of its configuration, detects any change to that configuration, and, upon detecting such a change, notifies VI profile service 201 through notification API 216 using a notification technique such as long-poll, HTTP SSE (Server Sent Events), HTTP/2 streaming, or webhooks. In addition, instead of streaming API 215, VI profile service 201 may implement long-poll, HTTP SSE, HTTP/2 streaming, or webhooks to notify configuration agent 140 of the updates, including any drift detected in the current state of SDDC 41 from the desired state of SDDC 41.
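The API surface described above can be modeled with a minimal in-process sketch. This is an assumption-laden illustration: the real APIs are remote interfaces with notification transports such as long-poll or webhooks, whereas here subscription is a plain callback and state is a flat dict.

```python
class VIProfileService:
    """Toy model of the get-current-state, apply, scan, streaming, and
    notification APIs; names and data model are illustrative only."""

    def __init__(self, desired_state: dict):
        self.desired_state = dict(desired_state)
        self.current_state = dict(desired_state)
        self.subscribers = []  # streaming-API consumers, e.g. the config agent

    def get_current_state(self) -> dict:
        # get-current-state API: report the current state of the SDDC
        return dict(self.current_state)

    def apply(self, desired_state: dict) -> None:
        # apply API: drive the SDDC to the desired state
        self.desired_state = dict(desired_state)
        self.current_state = dict(desired_state)

    def scan(self) -> dict:
        # scan API: compute drift of the current state from the desired state
        return {k: v for k, v in self.current_state.items()
                if self.desired_state.get(k) != v}

    def subscribe(self, callback) -> None:
        # streaming API: register to receive drift updates
        self.subscribers.append(callback)

    def notify(self, service: str, new_config) -> None:
        # notification API: a managed service reports a configuration change;
        # any resulting drift is pushed to streaming subscribers
        self.current_state[service] = new_config
        drift = self.scan()
        if drift:
            for cb in self.subscribers:
                cb(drift)
```

In this model, a managed service calling `notify` with a changed value causes the drift to be computed and streamed to the agent, while a subsequent `apply` of a matching desired state clears the drift.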
VI profile service 201 includes a plug-in orchestrator 230 that refers to a plug-in registry 231 that contains information about each of the plug-ins including: (1) process IDs of the plug-in and the corresponding service; (2) whether or not the corresponding service is enabled for proactive drift detection, passive drift detection, or both; and (3) parameters for proactive drift detection and/or passive drift detection.
Parameters for proactive drift detection specify whether or not a queue is to be set up for each of the managed services that are enabled for proactive drift detection. These queues are depicted in
Parameters for passive drift detection include a polling interval (or alternatively, minimum gap between polling) for each of the managed services that are enabled for passive drift detection. For passive drift detection, plug-in orchestrator 230 relies on drift poller 232 to provide a periodic trigger for drift computation. Drift poller 232 maintains a separate polling interval (or alternatively, minimum gap between polling) for each of the managed services that are enabled for passive drift detection.
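The per-service polling described above can be sketched as follows. The class and its interface are hypothetical; the point is only that each managed service enabled for passive drift detection carries its own polling interval, and drift computation is triggered only for services whose interval has elapsed.

```python
import time

class DriftPoller:
    """Illustrative per-service passive-drift trigger; names are assumptions."""

    def __init__(self, intervals: dict, clock=time.monotonic):
        self.intervals = intervals  # service name -> seconds between polls
        self.clock = clock
        # No service has been polled yet, so every service is initially due.
        self.last_poll = {s: float("-inf") for s in intervals}

    def due(self, now=None):
        """Return the services whose polling interval has elapsed, and mark
        them as polled at time `now`."""
        now = self.clock() if now is None else now
        ready = [s for s, interval in self.intervals.items()
                 if now - self.last_poll[s] >= interval]
        for s in ready:
            self.last_poll[s] = now
        return ready
```

An orchestrator loop would call `due()` on each tick and compute drift only for the returned services, so a service with a 30-second interval is not re-scanned every time a 10-second service is.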
In the embodiment illustrated in
Software 324 of each host 320 provides a virtualization layer, referred to herein as a hypervisor 350, which directly executes on hardware platform 322. In an embodiment, there is no intervening software, such as a host operating system (OS), between hypervisor 350 and hardware platform 322. Thus, hypervisor 350 is a Type-1 hypervisor (also known as a “bare-metal” hypervisor). As a result, the virtualization layer in host cluster 318 (collectively hypervisors 350) is a bare-metal virtualization layer executing directly on host hardware platforms. Hypervisor 350 abstracts processor, memory, storage, and network resources of hardware platform 322 to provide a virtual machine execution space within which multiple virtual machines (VM) 340 may be concurrently instantiated and executed. VMs 340 can execute software deployed by users (e.g., user software 342), as well as system software 344 deployed by management/control planes to provide support (e.g., virtualization management server 316).
Virtualization management server 316 is a physical or virtual server that manages hosts 320 and the hypervisors therein (e.g., a VIM appliance). Virtualization management server 316 installs agent(s) in hypervisor 350 to add a host 320 as a managed entity. Virtualization management server 316 can logically group hosts 320 into host cluster 318 to provide cluster-level functions to hosts 320, such as VM migration between hosts 320 (e.g., for load balancing), distributed power management, dynamic VM placement according to affinity and anti-affinity rules, and high availability. The number of hosts 320 in host cluster 318 may be one or many. Virtualization management server 316 can manage more than one host cluster 318. While only one virtualization management server 316 is shown, virtualized computing system 300 can include multiple virtualization management servers each managing one or more host clusters. Virtualization management server 316 includes database(s) 317 that store a configuration 319. Virtualization management server 316 can include profiles 318 managed by VI profile service 201, as discussed further below.
A selective configuration that a user wants to manage through VI profile service 201 is referred to herein as a managed configuration. The portion of the configuration other than the managed configuration (i.e., the portion of the configuration the user does not want to manage through VI profile service 201) is referred to as the unmanaged configuration. Configuration 319, also referred to as the comprehensive configuration, is the union of the managed configuration and the unmanaged configuration.
One technique for generating a profile 318 is as follows. The profile includes only a managed configuration. Any changes in the unmanaged configuration will not result in a drift of profile 318 from its desired state (profile drift), which is the expected behavior. The managed configuration can be any object or property supported by virtualization management server 316. However, the objects/properties in the unmanaged configuration may have dependencies on objects/properties in the managed configuration of the profile, so VI profile service 201 cannot guarantee the correctness of the profile when a user applies it. This technique therefore does not deliver system resilience.
Another technique for generating a profile 318 is as follows. The profile includes the comprehensive configuration. When the profile is created, the user selects a managed configuration and the objects/properties of the unmanaged configuration are populated in the profile from the current running state. The system guarantees the profile correctness and the comprehensive configuration is always entirely passed to the plugins to validate/apply the managed configuration. Any changes in the unmanaged configuration will result in a configuration drift, since the profile includes the state of the unmanaged configuration at the time of creation. Race conditions between existing imperative APIs and VI profile service 201 may result in the unmanaged configuration being unintentionally overwritten by VI profile service 201 when applying the profile. This can result in an incorrect configuration.
Another technique for generating a profile 318 is as follows. The profile includes the comprehensive configuration. The user does not want to distinguish between the managed configuration and the unmanaged configuration. The user manages the entire configuration (configuration 319) through either VI profile service 201 or using existing imperative APIs provided by virtualization management server 316 (external to VI profile service 201). The downside of this approach is that a user cannot choose a subset of the configuration to manage in the profile through VI profile service 201 but must instead always be confronted with managing the entire configuration. Each user will see the same configuration despite being concerned with only a subset thereof.
In embodiments, a technique for generating a profile 318 is as follows. The profile includes only the managed configuration. The managed configuration includes only independent objects/properties. In other words, none of the objects/properties in the unmanaged configuration have dependencies on the objects/properties in the managed configuration. VI profile service 201 guarantees profile correctness. A drawback of this approach is that each plugin must expect partial input in some cases (e.g., some objects/properties used as parametric input to the plugin may be in the unmanaged configuration and not present in the profile). However, the plugin interface can be configured to expect that a user may omit optional arguments (e.g., the missing objects/properties in the unmanaged configuration can be treated as optional arguments for the plugin).
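The "partial input as optional arguments" idea can be illustrated with a hypothetical plugin. The function name, property names, and fallback-to-running-state behavior below are all assumptions for exposition, not an actual plugin interface.

```python
def apply_network_plugin(profile: dict, running_state: dict) -> dict:
    """Sketch of a plugin that tolerates partial input: properties that fall
    in the unmanaged configuration are absent from the profile, so they are
    treated as optional arguments and left at their running-state values."""
    result = dict(running_state)
    for key in ("dns_servers", "ntp_servers", "proxy"):
        if key in profile:
            # Managed property: apply the desired value from the profile.
            result[key] = profile[key]
        # else: unmanaged property, keep the running-state value untouched.
    return result
```

Applying a profile that manages only DNS leaves the unmanaged NTP and proxy settings exactly as they were, which is the behavior the embodiment requires of each plugin.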
Consider a case where an object (object-4) in unmanaged configuration 480 depends on an object (object-1) in profile-1. VI profile service 201 indicates such a profile as invalid because the profile must include any objects that depend on another object in the profile. The reason for this is that an instance in object-4 (instance-1) may depend on an instance in object-1 that is not defined in the profile (e.g., instance-4). In that case, instance-4 in object-1 is removed when profile-1 is applied, but VI profile service 201 may not notify the user since, from its point of view, the system is in a consistent state.
Consider a case where an object in profile-2 depends on an object (e.g., object-4) in unmanaged configuration 480. VI profile service 201 indicates such a profile as invalid because the profile must include any objects on which another object in the profile depends. The reason for this is that if the user removes object-3: instance-1, that removal could invalidate some instance(s) in object-4. However, VI profile service 201 may not notify the user because, from its viewpoint, the system is in a consistent state.
Consider a case where instances of an object are split between two profiles. VI profile service 201 indicates such profiles as invalid because a profile cannot partially manage an object. The reason is that when such a profile is applied, the system does not know how to treat the remaining instances (leave them or remove them), since those instances are not part of the profile.
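The three validity rules above can be sketched as one validator. The data model is an assumption: `deps` maps an object to the objects it depends on, and the instance maps record which instances of each object exist versus which the profile covers.

```python
def validate_profile(profile_objects: set, profile_instances: dict,
                     all_objects: set, all_instances: dict, deps: dict):
    """Illustrative check of the three invalid-profile cases; returns a list
    of human-readable errors (empty list means the profile is valid)."""
    errors = []
    unmanaged = all_objects - profile_objects
    for obj, targets in deps.items():
        # Rule 1: an unmanaged object must not depend on a managed object.
        if obj in unmanaged and targets & profile_objects:
            errors.append(f"unmanaged {obj} depends on managed objects")
        # Rule 2: a managed object must not depend on an unmanaged object.
        if obj in profile_objects and targets & unmanaged:
            errors.append(f"managed {obj} depends on unmanaged objects")
    # Rule 3: a profile cannot partially manage an object's instances.
    for obj in profile_objects:
        if profile_instances.get(obj, set()) != all_instances.get(obj, set()):
            errors.append(f"{obj} is only partially managed")
    return errors
```

A profile managing only object-1 while unmanaged object-4 depends on it trips rule 1; a profile listing only some instances of object-1 trips rule 3.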
At step 510, VI profile service 201 validates the profile. VI profile service 201 ensures the profile is correct based on the rules described for steps 504-508. At step 512, the user applies the profile to a VIM appliance. At step 514, VI profile service 201 in the VIM appliance sends the profile to its plugins for configuration thereof.
The embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. For example, these operations may require physical manipulation of physical quantities. Usually, though not necessarily, these quantities may take the form of electrical or magnetic signals, where the quantities or representations of the quantities can be stored, transferred, combined, compared, or otherwise manipulated. Such manipulations are often referred to in terms such as producing, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments may be useful machine operations.
One or more embodiments of the invention also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for required purposes, or the apparatus may be a general-purpose computer selectively activated or configured by a computer program stored in the computer. Various general-purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, etc.
One or more embodiments of the present invention may be implemented as one or more computer programs or as one or more computer program modules embodied in computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology that embodies computer programs in a manner that enables a computer to read the programs. Examples of computer readable media are hard drives, NAS systems, read-only memory (ROM), RAM, compact disks (CDs), digital versatile disks (DVDs), magnetic tapes, and other optical and non-optical data storage devices. A computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Although one or more embodiments of the present invention have been described in some detail for clarity of understanding, certain changes may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation unless explicitly stated in the claims.
Virtualization systems in accordance with the various embodiments may be implemented as hosted embodiments, non-hosted embodiments, or as embodiments that blur distinctions between the two. Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data.
Many variations, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest OS that perform virtualization functions.
Plural instances may be provided for components, operations, or structures described herein as a single instance. Boundaries between components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention. In general, structures and functionalities presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionalities presented as a single component may be implemented as separate components. These and other variations, additions, and improvements may fall within the scope of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
202341004116 | Jan 2023 | IN | national |