A data center is a facility that houses servers, data storage devices, and/or other associated components such as backup power supplies, redundant data communications connections, environmental controls such as air conditioning and/or fire suppression, and/or various security systems. A data center may be maintained by an information technology (IT) service provider. An enterprise may purchase data storage and/or data processing services from the provider in order to run applications that handle the enterprise's core business and operational data. The applications may be proprietary and used exclusively by the enterprise or made available through a network for anyone to access and use.
Virtual computing instances (VCIs), such as virtual machines and containers, have been introduced to lower data center capital investment in facilities and operational expenses and reduce energy consumption. A VCI is a software implementation of a computer that executes application software analogously to a physical computer. VCIs have the advantage of not being bound to physical resources, which allows VCIs to be moved around and scaled to meet changing demands of an enterprise without affecting the use of the enterprise's applications. In a software defined data center, storage resources may be allocated to VCIs in various ways, such as through network attached storage (NAS), a storage area network (SAN) such as fiber channel and/or Internet small computer system interface (iSCSI), a virtual SAN, and/or raw device mappings, among others.
The term “virtual computing instance” (VCI) refers generally to an isolated user space instance, which can be executed within a virtualized environment. Other technologies aside from hardware virtualization can provide isolated user space instances, also referred to as data compute nodes (which may be referred to herein simply as “nodes”). Data compute nodes may include non-virtualized physical hosts, VCIs, containers that run on top of a host operating system without a hypervisor or separate operating system, and/or hypervisor kernel network interface modules, among others. Hypervisor kernel network interface modules are non-VCI data compute nodes that include a network stack with a hypervisor kernel network interface and receive/transmit threads.
VCIs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VCI) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. The host operating system can use namespaces to isolate the containers from each other and therefore can provide operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VCI segregation that may be offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers may be more lightweight than VCIs.
While the specification refers generally to VCIs, the examples given could be any type of data compute node, including physical hosts, VCIs, non-VCI containers, and hypervisor kernel network interface modules. Embodiments of the present disclosure can include combinations of different types of data compute nodes.
Services, as used herein, refers to services provided by a container orchestration system (e.g., nodes, pods, containers, namespaces, etc.). Particular instances of services may be referred to herein as “service instances.” “Types of service instances” or “service instance types” may alternatively be referred to generally as “services.” An example of a service instance may be a particular container, “container 8j809fsjag,” of the service instance type “container.” A container orchestration system can manage multiple applications with shared services between the applications. A container orchestration system can be responsible for application deployment, scaling, and management, such as maintenance and updates of the applications and/or services. One example of a container orchestration system is Kubernetes; however, embodiments of the present disclosure are not so limited. The container orchestration system can manage a container cluster (sometimes referred to herein simply as “cluster”).
An application service, as referred to herein, is a service running in and managed by a container orchestration system (e.g., a container orchestration system runtime). An application service may be deployed with a deployment chart (e.g., a Helm chart). An application service can have a lifecycle defined by a container orchestration system runtime. An application service can be considered to be a microservice in a product based on microservices.
A platform service, as referred to herein, is a service that is part of a platform on which the container orchestration system has been deployed. A platform service runs outside of the runtime of the container orchestration system. In some embodiments, a platform service is a Linux service. A platform service has a lifecycle defined by the operating system and serves cross-cutting concerns to multiple application services. A log management service that deals with rotating, compressing, and/or deleting service logs is an example of a platform service.
In a microservices-based product deployed on-premises, it may be common to have a set of platform services running outside of the Kubernetes runtime to provide functionality to the application services running inside the Kubernetes runtime. In such a setup, the configuration of a platform service depends on the current set of application services deployed and whatever requirements those application services may have. This relationship leads to a dependency between the application services and the platform services. This dependency can cause challenges because platform services may not be aware of the nature, the number, or the current versions of the various application services. In addition, the lifecycles of platform services and application services are different and disconnected. As a result, it may be difficult to set the correct configuration of the platform services because the configuration may depend on the application services that are going to be deployed. In previous approaches, for instance, it may be prohibitively difficult to upgrade the two types of services independently because of their strong coupling.
Embodiments of the present disclosure can dynamically configure platform services based on configuration bundled as part of application service deployment charts. In some embodiments, for instance, platform configurations are defined as Kubernetes ConfigMaps and are distributed together with an application service in a deployment chart. Once the ConfigMaps are deployed in Kubernetes as part of starting an application service, they can be collected by a dedicated platform service (referred to herein as an “applicator”) that can update the configurations of any related platform services. An applicator can be a script in some embodiments and can be associated with a platform service on a one-to-one basis, for instance. As described further below, ConfigMaps are versioned and provide the platform versions to which they are applicable. The applicator can filter the available configurations and apply the latest configuration applicable to the current platform version.
Multiple ConfigMap configurations for the same platform service and the same application service can be deployed in the same Kubernetes runtime to accommodate different platform service versions. Accordingly, each ConfigMap is provided a unique name. In some embodiments, for instance, ConfigMap names can include three portions: a first portion that includes an identifier (e.g., a name) of the application service, a second portion that includes an identifier (e.g., a number) of a current version of the application service, and a third portion that includes an identifier (e.g., a name) of the platform service. In an example, “provisioning-service-logrotate-conf-8.8.0” is a name of a ConfigMap that identifies the application service (provisioning-service), the current version of the application service (8.8.0), and the platform service (logrotate) the configuration is associated with (e.g., intended for). ConfigMaps can be bundled with the application service deployment chart and can be deployed in the Kubernetes runtime.
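By way of illustration only, the three-portion naming convention could be realized with a helper such as the following sketch (Python is used here purely for illustration; the literal "-conf-" separator and the argument order are assumptions inferred from the single example name above, not part of the disclosure):

```python
def configmap_name(app_service: str, app_version: str, platform_service: str) -> str:
    """Assemble a ConfigMap name from the three portions described above.

    Example (matches the name discussed in the text):
        configmap_name("provisioning-service", "8.8.0", "logrotate")
        -> "provisioning-service-logrotate-conf-8.8.0"
    """
    return f"{app_service}-{platform_service}-conf-{app_version}"
```

In such a scheme, placing the version after a fixed "-conf-" marker keeps the name unambiguous even when the application service identifier itself contains hyphens.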
Embodiments herein include an annotation referred to as “minimal-platform-version” in each ConfigMap that defines the minimal version of the platform service to which the configuration in the ConfigMap is applicable. If, for example, a platform service is upgraded to a newer version, it may no longer support some of the configurations it previously supported. In such an instance, an updated application service configuration may be needed for that version of the platform service.
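For illustration, a ConfigMap of this kind, as it might be bundled in the application service's deployment chart, could look like the following sketch (shown as a Python dictionary mirroring the YAML manifest; the annotation key follows the minimal-platform-version convention described above, while the version values and the logrotate data payload are purely hypothetical):

```python
# A minimal sketch of one such ConfigMap, expressed as the Python dictionary
# equivalent of the YAML manifest bundled in the application service's chart.
provisioning_logrotate_configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {
        "name": "provisioning-service-logrotate-conf-8.8.0",
        "annotations": {
            # Oldest version of the logrotate platform service to which this
            # configuration is applicable (illustrative value only).
            "minimal-platform-version": "2.0.0",
        },
    },
    "data": {
        # Hypothetical logrotate stanza for the provisioning service's logs.
        "provisioning-service.conf": (
            "/var/log/provisioning-service/*.log {\n"
            "    daily\n"
            "    rotate 7\n"
            "    compress\n"
            "}\n"
        ),
    },
}
```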
When the application services have been deployed in the Kubernetes runtime along with their configurations for the platform services they use, the applicator can select the appropriate configurations and apply them to the platform service(s). The applicator can receive a mapping between the platform services to be configured and the ConfigMaps available to them. As previously discussed, multiple ConfigMaps may be available for each platform service. The applicator can select the ConfigMap with the most recent version in the minimal-platform-version annotation that is also older than or equal to the current platform version. Stated differently, embodiments herein can determine a minimal-platform-version for each ConfigMap that corresponds to a particular platform service, discard any ConfigMap having a minimal-platform-version that exceeds the current version of the platform service, and select, from the remaining ConfigMaps, the ConfigMap having the largest minimal-platform-version. The applicator can parse the configurations from the selected ConfigMap and apply them to the corresponding platform services (e.g., by writing data from the ConfigMap(s) to configuration file(s) of the platform service(s)).
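A minimal sketch of this selection logic is shown below, assuming ConfigMaps are represented as dictionaries like the example above and using the third-party "packaging" library for version comparison; the applicator itself is described as a script and could be implemented in other ways:

```python
from packaging.version import Version  # assumed dependency for version comparison

def select_configmap(configmaps, current_platform_version):
    """Select the ConfigMap whose minimal-platform-version is the largest
    version that does not exceed the current platform service version."""
    current = Version(current_platform_version)
    applicable = []
    for cm in configmaps:
        minimal = Version(cm["metadata"]["annotations"]["minimal-platform-version"])
        if minimal <= current:  # discard ConfigMaps requiring a newer platform
            applicable.append((minimal, cm))
    if not applicable:
        return None  # no configuration is applicable to this platform version
    # Pick the applicable ConfigMap with the largest minimal-platform-version.
    return max(applicable, key=lambda pair: pair[0])[1]

# Hypothetical usage: "2.3.1" stands in for the current version of the logrotate
# platform service; the list would normally be gathered from the Kubernetes API.
# selected = select_configmap([provisioning_logrotate_configmap], "2.3.1")
```

The data entries of the selected ConfigMap would then be written to the corresponding platform service's configuration file(s), as described above.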
Embodiments of the present disclosure do not require a platform update when a service is updated, even if the service needs to update service-related configuration of shared platform services. Updates to the platform service configurations are delivered with the updated application service that utilizes them. Embodiments herein also automatically deploy and/or delete configurations as part of deploying/deleting application services (e.g., via Helm). In contrast with previous approaches discussed above, the determination of whether to update the platform can be independent of the determination of whether to update the Kubernetes services.
As used herein, the singular forms “a”, “an”, and “the” include singular and plural referents unless the content clearly dictates otherwise. Furthermore, the word “may” is used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, mean “including, but not limited to.” The term “coupled” means directly or indirectly connected.
The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 228 may reference element “28” in FIG. 2.
The hosts 102 can incorporate a hypervisor 114 that can execute a number of virtual computing instances 116-1, 116-2, . . . , 116-N (referred to generally herein as “VCIs 116”). The VCIs can be provisioned with processing resources 104 and/or memory resources 106 and can communicate via the network interface 108. The processing resources 104 and the memory resources 106 provisioned to the VCIs can be local and/or remote to the hosts 102. For example, in a software defined data center, the VCIs 116 can be provisioned with resources that are generally available to the software defined data center and not tied to any particular hardware device. By way of example, the memory resources 106 can include volatile and/or non-volatile memory available to the VCIs 116. The VCIs 116 can be moved to different hosts (not specifically illustrated), such that a different hypervisor 114 manages the VCIs 116.
In the example illustrated in
The platform 224 can include a number of platform services 230-1, . . . , 230-N (referred to generally herein as “platform services 230”). As discussed herein, an applicator 234 on the platform 224 can receive the ConfigMaps 228 from the application services 226 and use the ConfigMaps 228 to configure the platform services 230.
The number of engines can include a combination of hardware and program instructions that is configured to perform a number of functions described herein. The program instructions (e.g., software, firmware, etc.) can be stored in a memory resource (e.g., machine-readable medium) and/or implemented as hard-wired program (e.g., logic). Hard-wired program instructions (e.g., logic) can be considered as both program instructions and hardware.
In some embodiments, the request engine 446 can include a combination of hardware and program instructions that is configured to receive a request to configure a platform service associated with a container orchestration system. In some embodiments, the ConfigMap engine 448 can include a combination of hardware and program instructions that is configured to collect a plurality of ConfigMaps from a deployment chart of an application service managed by the container orchestration system, wherein each of the plurality of ConfigMaps includes platform service configuration data associated with a different version of the platform service. In some embodiments, the selection engine 450 can include a combination of hardware and program instructions that is configured to select one of the plurality of ConfigMaps based on a current version of the platform service. In some embodiments, the configuration engine 452 can include a combination of hardware and program instructions that is configured to configure the platform service using the selected ConfigMap.
Memory resources 510 can be non-transitory and can include volatile and/or non-volatile memory. Volatile memory can include memory that depends upon power to store information, such as various types of dynamic random access memory (DRAM) among others. Non-volatile memory can include memory that does not depend upon power to store information. Examples of non-volatile memory can include solid state media such as flash memory, electrically erasable programmable read-only memory (EEPROM), phase change memory (PCM), 3D cross-point, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, magnetic memory, optical memory, and/or a solid state drive (SSD), etc., as well as other types of machine-readable media.
The processing resources 508 can be coupled to the memory resources 510 via a communication path 556. The communication path 556 can be local or remote to the machine 554. Examples of a local communication path 556 can include an electronic bus internal to a machine, where the memory resources 510 are in communication with the processing resources 508 via the electronic bus. Examples of such electronic buses can include Industry Standard Architecture (ISA), Peripheral Component Interconnect (PCI), Advanced Technology Attachment (ATA), Small Computer System Interface (SCSI), Universal Serial Bus (USB), among other types of electronic buses and variants thereof. The communication path 556 can be such that the memory resources 510 are remote from the processing resources 508, such as in a network connection between the memory resources 510 and the processing resources 508. That is, the communication path 556 can be a network connection. Examples of such a network connection can include a local area network (LAN), wide area network (WAN), personal area network (PAN), and the Internet, among others.
As shown in
Each of the number of modules 546, 548, 550, 552 can include program instructions and/or a combination of hardware and program instructions that, when executed by a processing resource 508, can function as a corresponding engine as described with respect to FIG. 4.
The machine 554 can include a request module 546, which can include instructions to receive a request to configure a platform service associated with a container orchestration system. The machine 554 can include a ConfigMap module 548, which can include instructions to collect a plurality of ConfigMaps from a deployment chart of an application service managed by the container orchestration system, wherein each of the plurality of ConfigMaps includes platform service configuration data associated with a different version of the platform service. The machine 554 can include a selection module 550, which can include instructions to select one of the plurality of ConfigMaps based on a current version of the platform service. The machine 554 can include a configuration module 552, which can include instructions to configure the platform service using the selected ConfigMap.
Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of this disclosure.
The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Various advantages of the present disclosure have been described herein, but embodiments may provide some, all, or none of such advantages, or may provide other advantages.
In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.