Implementing affinity and anti-affinity with KUBERNETES

Information

  • Patent Grant
  • Patent Number
    11,456,914
  • Date Filed
    Wednesday, October 7, 2020
  • Date Issued
    Tuesday, September 27, 2022
Abstract
A KUBERNETES installation processes a script and invokes a scheduling agent in response to encountering an instruction to create a pod. The scheduling agent is an agent of an orchestrator and performs tasks such as identifying a selected node, creating multiple interface objects with multiple IP addresses, and creating storage volumes in coordination with the orchestrator. Upon creation, the pod may call a CNI that is an agent of the orchestrator in order to configure the pod to use the multiple interface objects. The pod may call a CSI that is an agent of the orchestrator in order to bind a storage volume to the pod. The scheduling agent may coordinate with the orchestrator to implement affinity and anti-affinity rules for placement of pods and storage volumes. The script may also be transformed by the orchestrator in order to insert instructions implementing affinity and anti-affinity rules.
Description
RELATED APPLICATIONS

This application is related to U.S. application Ser. No. 17/065,317 filed Oct. 7, 2020, which is incorporated herein by reference for all purposes.


BACKGROUND
Field of the Invention

This invention relates to implementing containerized applications using an orchestration platform.


Background of the Invention

A multi-role application may include many objects providing different roles of the application. These objects may be applications implementing services, storage volumes, databases, web servers, and the like. One environment that facilitates deployment of such applications is KUBERNETES, which was originally developed by GOOGLE.


It would be an advancement in the art to facilitate the deployment and management of multi-role applications, including those orchestrated using KUBERNETES.





BRIEF DESCRIPTION OF THE DRAWINGS

In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:



FIG. 1 is a schematic block diagram of a network environment for implementing methods in accordance with an embodiment of the present invention;



FIG. 2 is a schematic block diagram illustrating components for implementing multiple interfaces for a pod in accordance with an embodiment of the present invention;



FIGS. 3A and 3B are process flow diagrams of a method for implementing multiple interfaces for a pod in accordance with an embodiment of the present invention;



FIG. 4 is a process flow diagram of a method for generating augmented script files for implementing constraints in accordance with an embodiment of the present invention;



FIG. 5 is a process flow diagram of a method for restoring the IP address of a replacement pod in accordance with an embodiment of the present invention;



FIG. 6 is a process flow diagram of a method for binding a storage volume to a container in accordance with an embodiment of the present invention;



FIG. 7 is a process flow diagram of a method for implementing CPU exclusivity in accordance with an embodiment of the present invention; and



FIG. 8 is a schematic block diagram of an example computing device suitable for implementing methods in accordance with embodiments of the invention.





DETAILED DESCRIPTION

Referring to FIG. 1, the methods disclosed herein may be performed using the illustrated network environment 100. The network environment 100 includes a storage manager 102 that coordinates the creation of snapshots of storage volumes and maintains records of where snapshots are stored within the network environment 100. In particular, the storage manager 102 may be connected by way of a network 104 to one or more storage nodes 106, each storage node having one or more storage devices 108, e.g. hard disk drives, flash memory, or other persistent or transitory memory. The network 104 may be a local area network (LAN), wide area network (WAN), or any other type of network including wired, wireless, fiber optic, or any other type of network connections.


One or more compute nodes 110 are also coupled to the network 104 and host user applications that generate read and write requests with respect to storage volumes managed by the storage manager 102 and stored within the storage devices 108 of the storage nodes 106.


The methods disclosed herein ascribe certain functions to the storage manager 102, storage nodes 106, and compute nodes 110. The methods disclosed herein are particularly useful for large scale deployments including large amounts of data distributed over many storage nodes 106 and accessed by many compute nodes 110. However, the methods disclosed herein may also be implemented using a single computer implementing the functions ascribed herein to some or all of the storage manager 102, storage nodes 106, and compute nodes 110.


The creation of storage volumes on the storage nodes 106 and the instantiation of applications or applications executing within containers on the compute nodes 110 may be invoked by an orchestrator 112. The orchestrator may ingest a manifest defining a bundled application and invoke creation of storage volumes on storage nodes 106 and creation of containers and applications on the compute nodes according to the manifest.


In some embodiments, storage volumes and/or containers and applications may be instantiated on a cloud computing platform 114. In particular, the cloud computing platform 114 may be coupled to the network 104 and include cloud computing resources 116 and storage resources 118. The storage resources 118 may include various types of storage including object storage 120 in which data is stored as unstructured data and which is generally less expensive and has higher latency. The storage resources may include file system storage 122 that is implemented as a virtual disk in which data is stored in a structured format, such as within a hierarchical file system or a log-structured storage scheme.


The cloud computing platform 114 and corresponding resources 116, 118 may be implemented using any cloud computing platform known in the art such as AMAZON WEB SERVICES (AWS), MICROSOFT AZURE, GOOGLE CLOUD, or the like.


In some embodiments, the orchestrator 112 may operate in cooperation with a KUBERNETES installation 124. KUBERNETES provides a deployment automation platform that can instantiate containers, instantiate applications in containers, monitor operation of containers, replace failed containers and application instances executed by the failed container, scale out or scale in a number of containers and a number of instances of a particular application according to loading.
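
For orientation only, the following is a minimal sketch of the kind of declarative KUBERNETES object behind such automation, here scaling a deployment out or in according to CPU loading; the names (web, web-hpa) and numeric values are hypothetical and not taken from the disclosure:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # hypothetical autoscaler name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical deployment whose replica count is scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out or in to hold average CPU near 70%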


In some embodiments, the orchestrator 112 creates containers and application instances according to a bundled application manifest by invoking the functionality of the KUBERNETES installation 124. The orchestrator 112 may extend the functionality of the KUBERNETES installation 124 in order to implement complex user requirements and to operate in conjunction with virtualized storage volumes implemented using the storage manager 102.


Referring to FIG. 2, the illustrated architecture 200 includes components for extending the capability of KUBERNETES to implement containers with multiple network interfaces of multiple different types.


The KUBERNETES installation 124 may include a KUBERNETES master 202, which is an application executing in the network environment 100 and which controls the instantiation of a pod 204 on a compute node 110. A pod 204 is an executable executing on the compute node 110 that acts as an agent of the KUBERNETES master 202 to instantiate and manage containers 206 executing on the node 110. Each container 206 executes an application instance 208. The pod 204 may also function to manage storage resources and network resources (e.g., internet protocol (IP) addresses) used by the containers 206 on the node 110. In particular, containers 206 may access storage and network communication by way of the pod 204.


User customization of a pod 204 may be provided by means of plugins called by the pod 204 to perform storage and networking management functions. One of these plugins may include a container storage interface (CSI) 210 that manages the mounting of storage to the pod and accessing of storage by the containers 206. Another plugin may include a container networking interface (CNI) 212 that manages networking functions. In some embodiments, one or both of the CSI 210 and CNI 212 are agents of the orchestrator 112 and coordinate with the orchestrator 112 when performing their functions. Accordingly, the orchestrator 112 may instruct the CSI 210 and/or CNI 212 to implement storage and networking as specified by a bundled application manifest 214.


In some embodiments, the KUBERNETES master 202 may invoke a scheduling agent 216 when invoking the creation of pods 204 and containers 206. In some embodiments, the scheduling agent 216 is also an agent of the orchestrator 112 and may coordinate with the orchestrator 112 to implement instantiation of containers 206 and application instances 208 according to the bundled application manifest 214.


The KUBERNETES master 202 may invoke the creation of pods 204, containers 206, and application instances 208 by interpreting a script file 218. For example, the script file 218 may be a YAML (YAML Ain't Markup Language) file or a HELM chart according to the KUBERNETES protocol. The KUBERNETES master 202 may also implement individual operators input to the KUBERNETES master 202 by a user, the orchestrator 112, or other entity.
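
A minimal sketch of such a script file 218, assuming the standard KUBERNETES pod specification; the pod name, container name, and image reference are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod                         # pod 204 (hypothetical name)
spec:
  containers:
    - name: app                             # container 206
      image: registry.example.com/app:1.0   # application instance 208 (hypothetical image)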


Referring to FIGS. 3A and 3B, while still referring to FIG. 2, the method 300 may be used to implement multiple network interfaces 220 for a same pod 204. The multiple interfaces 220 may be implemented with respect to a multi-network plugin 222, such as MULTUS from Intel. Each interface 220 may have its own IP address and may be implemented using a network interface executable that is different from the network interface executable of another interface 220 of the same pod 204. Examples of network interface executables may include FLANNEL, CALICO, CANAL, WEAVE, CILIUM, KUBE-ROUTER, OVS, and SRIOV.


The method 300 may include processing 302 a script 218 by the KUBERNETES master 202. As part of processing the script 218, the KUBERNETES master 202 may be programmed to make 304 a function call to the scheduling agent 216. For example, each time the KUBERNETES master 202 encounters a command to create a pod 204 or a container 206 for a pod, the master 202 may be programmed to call 304 the scheduling agent 216 to obtain parameters relating to the creation of the pod 204 or container 206.
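
The disclosure does not fix the mechanism by which the KUBERNETES master 202 is caused to call 304 the scheduling agent 216; one standard KUBERNETES facility that routes scheduling of a particular pod to an external component is the schedulerName field, shown below purely as an assumed illustration (the scheduler name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  schedulerName: orchestrator-scheduling-agent   # hypothetical name under which the scheduling agent 216 is registered
  containers:
    - name: app
      image: registry.example.com/app:1.0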


The method 300 may include the scheduling agent 216 reading 306 the script and identifying annotations in the script. As noted above, the scheduling agent 216 may be an agent of the orchestrator 112. Accordingly, actions ascribed herein to the scheduling agent 216 may also be performed by the orchestrator 112. For example, the scheduling agent 216 may pass parameters from the call to the scheduling agent 216 to the orchestrator 112, e.g. an identifier of the pod and/or container to be created, the script 218 being processed, or other information.


The method 300 may include selecting 308 a compute node 110 for the pod 204 and/or container 206 specified in the call. The selection of step 308 may include selecting the node 110 according to constraints that are specified in one or both of (a) the annotations read at step 306 and (b) a bundled application manifest 214.


Examples of constraints may include a constraint that the pod 204 and/or container 206 have a specified degree of affinity with respect to another pod 204 and/or container 206. As used herein a “degree of affinity” may require instantiation on a common node 110, a common server rack, a common data center, or some other maximum permissible distance, network latency, or other parameter describing proximity. The constraint may be with respect to a storage volume: the pod 204 and/or container 206 may have a required degree of affinity to a storage volume.


Another example constraint may be a “degree of anti-affinity” with respect to another pod 204, container 206, or storage volume. As used herein “degree of anti-affinity” may be a prohibition of colocation on a common node, a common server rack, a common data center, or within some other minimum permissible distance, network latency, or other parameter describing proximity.
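
For comparison, stock KUBERNETES expresses similar placement constraints with podAffinity and podAntiAffinity terms keyed to a topology domain; the sketch below is only illustrative, and the labels and the rack-level topology key are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  affinity:
    podAffinity:                   # affinity: schedule on the same node as pods labeled app=b
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: b
          topologyKey: kubernetes.io/hostname
    podAntiAffinity:               # anti-affinity: never share a rack with pods labeled app=c
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: c
          topologyKey: topology.example.com/rack   # hypothetical rack-level topology key
  containers:
    - name: a
      image: registry.example.com/a:1.0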


Another example constraint may be based on NUMA (non-uniform memory access) awareness. In some applications, a server may have more than one system bus and multiple processors (e.g., central processing units (CPU)) that may be on a common motherboard. Each of these processors may have its own memory (e.g., random access memory (RAM)) but also, at times, access the memory of another CPU, e.g., a remote memory. Accordingly, the access time of memory access requests will be non-uniform. A NUMA-aware constraint may require that two containers 206 be on a common node, or even have a specified degree of proximity on the common node, in order to reduce non-uniformity and latency of memory access requests to remote memory by each container.
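
In stock KUBERNETES, this kind of NUMA alignment is typically requested on the node through the kubelet rather than in the pod specification; the fragment below is a hedged illustration of that idea (a kubelet configuration enabling the static CPU manager and single-NUMA-node topology policy) and is not a mechanism described in the disclosure:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static                 # allow exclusive CPU assignment to containers
topologyManagerPolicy: single-numa-node  # require CPU and device allocations to align on one NUMA node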


Selecting 308 the node 110 for a pod 204 and/or container 206 may be performed in parallel with the selection of the nodes 110 for other pods 204 and/or containers 206 and storage volumes. This may be referred to as “planning” in which nodes 110 are assigned to multiple pods 204, containers 206, and storage volumes in such a way that constraints of each pod 204, container 206, and storage volume with respect to other pods 204, containers 206 and/or storage volumes are met.


In some embodiments, planning has already been performed prior to execution of the method 300 such that selecting 308 includes retrieving an identifier of the node 110 for a pod 204 and/or container 206 from the previously-generated plan.


The method 300 may further include detecting 310 a requirement for multiple network interfaces for the pod 204 and/or container 206. The requirement may specify some or all of a number of the multiple interfaces, IP addresses for the multiple interfaces, and a type of plugin to use for each interface (FLANNEL, CALICO, CANAL, WEAVE, CILIUM, KUBE-ROUTER, OVS, SRIOV, etc.). For example, certain types of computing activities may be more suited for a particular type of plugin and a single pod 204 and/or container may be performing multiple types of computing activities such that a developer prefers that multiple different types of interfaces be used, each with its own IP address.
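
For reference, MULTUS conventionally attaches additional interfaces through NetworkAttachmentDefinition objects named in a pod annotation; the sketch below assumes that convention (the attachment name, subnet, and image are hypothetical) and is distinct from the patent's approach of routing the requirement through the scheduling agent 216 and the manifest 214:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net                            # hypothetical secondary interface definition
spec:
  config: '{ "cniVersion": "0.3.1", "type": "sriov", "ipam": { "type": "host-local", "subnet": "10.10.0.0/24" } }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-if-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net   # second interface in addition to the default (e.g., FLANNEL or CALICO)
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0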


The requirement for multiple interfaces may be included in annotations of the script processed at step 302. The requirement for multiple interfaces may also be included in a separate bundled application manifest 214. Accordingly, the scheduling agent 216 may access the manifest 214 to obtain the requirement for multiple interfaces for a particular pod identifier or container identifier referenced in the manifest 214 in response to a call from the KUBERNETES master 202 referencing that identifier.


The scheduling agent 216 may then create 312 the multiple interfaces according to the requirement identified at step 310. This may include creating 312 interface objects implementing the specified interface type on the node selected at step 308. The objects may each be instances of an interface type as specified in the requirement.


The creating 312 of interface objects may include configuring and combining multiple interface objects to implement a desired functionality. For example, as illustrated by the configuration sketch following this list, the manifest 214 may instruct the scheduling agent 216 to:

    • define multiple network gateways;
    • configure chaining of multiple plugins;
    • create a bonded interface;
    • configure source-based routing;
    • configure one network plugin as a slave of a master network plugin;
    • configure a virtual interface as executing in kernel space or user space; and
    • associate virtual functions with ports of a virtual interface.
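
A hedged configuration sketch for one of these items, chaining of multiple plugins, expressed as a CNI plugin list embedded in a NetworkAttachmentDefinition; the names, parent interface, subnet, and sysctl value are hypothetical, and the manifest 214 could encode the same intent in any form the orchestrator 112 understands:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: chained-net                  # hypothetical
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "chained-net",
      "plugins": [
        { "type": "macvlan", "master": "eth1",
          "ipam": { "type": "host-local", "subnet": "192.168.50.0/24" } },
        { "type": "tuning", "sysctl": { "net.core.somaxconn": "4096" } }
      ]
    }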


The nodes selected by the scheduling agent 216 for implementing a network interface may be selected to implement NUMA awareness (e.g., a required degree of proximity or separation relative to other components created according to the manifest 214, such as to avoid a single point of failure).


The method 300 may include acquiring 314 an IP address for each interface object as specified in the requirement, assigning the IP addresses to the node selected at step 308, and assigning each IP address to one of the interface objects.


The method 300 may continue as shown in FIG. 3B. The scheduling agent 216 may return 316 the selected node to the KUBERNETES master 202, e.g., an identifier (IP address, media access control (MAC) address, or some other identifier) of the selected node.


The KUBERNETES master 202 may then invoke creation 318 of the pod and/or container on the selected node. Upon instantiation of the pod or a container in a pod, the pod may call 320 the CNI 212 that it is programmed with. As noted above, the CNI 212 may be an agent of the orchestrator 112. Actions ascribed herein to the CNI 212 may be performed by the CNI 212 or performed by the orchestrator 112 in response to a request or notification from the CNI 212.


The CNI 212 may obtain 322 network configuration information, such as from the orchestrator 112 or a database hosted or generated by the orchestrator 112. For example, references to the interface objects created at step 312 may be added to the database and associated with an identifier of the pod 204 instantiated at step 318 or for which a container was instantiated at step 318. The request may include an identifier of the pod 204 that invoked the CNI 212 to make the request. Accordingly, the CNI 212 or orchestrator 112 may request the references to the interface objects at step 322.


The CNI 212 may then configure 324 the pod 204 to use the multiple interfaces defined by the interface objects and the IP addresses associated with the multiple interfaces. This may include configuring a particular container 206 managed by the pod 204 to use a particular interface object for network communication. Thereafter, the CNI 212 may manage network communication by the containers 206 of the pod 204. Traffic from containers may be routed with a source address selected from the IP addresses associated with the multiple interfaces and processed using the interface object associated with that source address. Accordingly, a developer may configure a container 206 managed by a pod 204 to access any of the multiple interfaces suited for the computing activity being performed by the container 206.



FIG. 4 illustrates a method 400 providing an alternative approach for implementing affinity and anti-affinity constraints for pods 204 and containers 206 instantiated by the KUBERNETES master 202.


The method 400 may include processing 402 a script 218, such as a YAML script or a HELM chart. The script 218 may include annotations specifying affinity and/or anti-affinity constraints. The annotations may be in the form of plain human language statements that are flagged or marked as annotations, as opposed to machine-interpretable instructions for the KUBERNETES master 202. For example, the annotations may include plain human language statements such as "container A must be on the same node as container B," "container A must not be on the same node as container B," "container A must be on the same server rack as container B," or "container A must not be on the same server rack as container B." Similar statements may be made with respect to same data center affinity or anti-affinity, same city affinity or anti-affinity, or any other proximity requirement or prohibition.
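
A minimal sketch of how such plain-language statements might appear in the script 218, assuming the standard KUBERNETES metadata/annotations structure; the annotation key and the statements themselves are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  annotations:
    orchestrator.example.com/placement: >-   # hypothetical key recognized by the orchestrator 112
      container A must be on the same node as container B;
      container A must not be on the same server rack as container C
spec:
  containers:
    - name: container-a
      image: registry.example.com/a:1.0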


The method 400 may further include the KUBERNETES master 202 calling 404 a "mutation webhook." The mutation webhook may be an executable, specified in the script 218 or in a configuration of the KUBERNETES master 202, that is executed before the operations specified in the script 218 are executed by the KUBERNETES master 202. In particular, the mutation webhook may be an executable that is provided the script file 218 and preprocesses the script file 218. In the method 400, the mutation webhook may reference the orchestrator 112. Alternatively, the functions ascribed to the orchestrator 112 in the description of the method 400 may be performed by another executable that is an agent of the orchestrator 112 or another executable that functions as a preprocessor.


The orchestrator 112 detects 406 the annotations in the script 218. For example, annotations may be formatted according to an annotation format of KUBERNETES such that the orchestrator 112 can identify them:

"metadata": {
 "annotations": {
  "[PLAIN HUMAN LANGUAGE STATEMENT]"
 }
}.

The orchestrator 112 may then interpret 408 the annotations. Interpreting 408 the annotations may include using a natural language processing (NLP) algorithm to determine identifiers of the entities (containers, storage volumes, pods) referenced in the plain human language statement, a proximity (same node, same rack, same data center, same city, etc.), and a type (affinity or anti-affinity). The orchestrator 112 may then generate 410 a set of KUBERNETES instructions that will invoke creation of the referenced entities with the required affinity or anti-affinity. For example, these instructions may include logic that records a node selected by KUBERNETES for a first entity in the annotation and then uses a nodeSelector function of KUBERNETES to select a node for the second entity that meets the affinity or anti-affinity constraint. The generated KUBERNETES instructions may be inserted into the script 218 in place of the annotation from which they were derived. In particular, each annotation in the script 218 may be replaced with a set of KUBERNETES instructions that are derived for that annotation according to steps 408 and 410.
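
As a hedged example of the kind of replacement the transformation might produce for "container A must be on the same node as container B," assuming KUBERNETES has already placed container B's pod and that the recorded node name is node-17 (both hypothetical), the inserted fragment could pin container A's pod with nodeSelector:

# fragment inserted into the specification of container A's pod
spec:
  nodeSelector:
    kubernetes.io/hostname: node-17   # hypothetical node recorded for container B's pod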


The KUBERNETES master 202 may then execute 412 the script 218 as modified according to step 410. This may include instantiating pods and containers satisfying the affinity and anti-affinity constraints. Executing 412 the script 218 may further include invoking the orchestrator 112 to create storage volumes for pods and containers according to the bundled application manifest 214 with the pods and containers also satisfying any affinity and anti-affinity constraints specified in the annotations.



FIG. 5 illustrates a method 500 for setting the IP address of a replacement pod 204 to be the same as that of the failed pod 204 it replaces. KUBERNETES provides a mechanism for monitoring the performance of pods 204 and automatically replacing failed pods 204. However, KUBERNETES does not provide a mechanism for maintaining the IP address of the replacement pod 204 to be the same as that of the failed pod 204.


The method 500 may include the KUBERNETES master 202 detecting 502 failure of a pod 204 having a pod identifier. In response, the KUBERNETES master 202 may call 504 the scheduling agent 216 and pass it the pod identifier. In response, the scheduling agent 216 may retrieve 506 a node assigned to that pod identifier from a database. For example, the orchestrator 112 may create entries in the database that include, for each pod identifier, an identifier of the node 110 selected for that pod identifier when the pod 204 was instantiated. Accordingly, for the pod identifier in the call of step 504, the scheduling agent 216 may return to the KUBERNETES master 202 an identifier of the node associated with that pod identifier in the database.


The KUBERNETES master 202 may then invoke 508 creation of the replacement pod 204 on the selected node 110 referenced by the node identifier returned at step 506. Upon creation on the selected node 110, the replacement pod 204 may call 510 a CNI 212 that the pod 204 was previously programmed to call. As noted above, the CNI 212 may be an agent of the orchestrator 112 and may respond to the call by obtaining 512 an IP address from the orchestrator 112 or directly from the database. For example, the database may store each IP address assigned to a pod 204 in association with the identifier of the pod 204 to which the IP address was assigned. This may include IP addresses assigned according to the method 300. The CNI 212 may then configure 514 the replacement pod 204 to use the IP address obtained at step 512.
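
Stock KUBERNETES has no portable field for requesting a specific pod IP address, so the following is only a hedged illustration of how a restored address might be conveyed to a CNI through a pod annotation; the annotation key and address are hypothetical, and in the patent the CNI 212 instead obtains the address from the orchestrator 112 or its database:

apiVersion: v1
kind: Pod
metadata:
  name: replacement-pod
  annotations:
    cni.example.com/requested-ip: "10.244.3.17"   # hypothetical: IP previously assigned to the failed pod 204
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0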


Referring to FIG. 6, the illustrated method 600 may be used to enable KUBERNETES to operate with virtualized storage managed by the storage manager 102.


The method 600 may include processing 602 a script 218 by the KUBERNETES master 202, which includes reading 604 an instruction to create a pod 204 and mount a storage volume to the pod 204. In response to reading the instruction, the KUBERNETES master 202 may call 606 the scheduling agent 216, the call including a pod identifier and possibly an identifier of the storage volume to be bound to the pod 204. As noted above, the scheduling agent 216 may be an agent of the orchestrator 112. Accordingly, steps of the method 600 ascribed to the scheduling agent 216 may also be performed by the orchestrator 112 in response to a notification from the scheduling agent 216.


In response to the call, the scheduling agent 216 may detect 608 a storage requirement for the pod identifier. This may include detecting a reference to the storage volume in the call, e.g. the specifications of the storage volume (e.g., size) may be included in the script 218 and passed to the scheduling agent with the call at step 606. The specification of the storage volume may also be included in the bundled application manifest 214. For example, a storage volume specification in the bundled application manifest 214 may be any of (a) associated with the pod identifier in the manifest 214, (b) associated with a container identifier specified in the bundled application manifest 214 that is to be managed by the pod 204 referenced by the pod identifier, or (c) associated with an application instance specified in the bundled application manifest 214 to be executed within a container managed by the pod 204 referenced by the pod identifier.


In response to detecting the storage requirement, the scheduling agent 216 may instruct 610 the KUBERNETES master 202 to delay binding of the storage volume to the pod referenced by the pod identifier ("lazy binding"). Accordingly, the KUBERNETES master 202 may create 614 a pod 204 according to the pod specification without binding a storage volume to it. The scheduling agent 216 may further instruct the storage manager 102 to create 612 a storage volume according to a specification of the storage volume. The storage volume may be mounted to the node on which the pod is to be created. For example, creating 614 the pod 204 may include obtaining an identifier of a node 110 on which the pod 204 is to be instantiated from the scheduling agent 216 as for other embodiments disclosed herein. The storage volume may then be mounted to that node. The storage volume may be implemented on a storage device 108 local to the node 110 hosting the pod 204 created at step 614 (e.g., a hybrid node that is both a compute and storage node) or may be implemented on a storage device 108 of another node 106, 110.
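
The delayed ("lazy") binding described here parallels the standard KUBERNETES WaitForFirstConsumer volume binding mode, sketched below with hypothetical names; in the patent the delay is instead requested by the scheduling agent 216 so that the storage manager 102 can create and mount the volume before the CSI 210 binds it:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: orchestrator-managed             # hypothetical class backed by the CSI 210
provisioner: csi.example.com             # hypothetical CSI driver name
volumeBindingMode: WaitForFirstConsumer  # delay binding until the pod's node is known
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: orchestrator-managed
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi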


Following creation at step 614, the pod 204 may call 616 a CSI 210 that it is configured to call upon instantiation, the CSI 210 being an agent of the orchestrator 112 as described above. As for other embodiments disclosed herein, actions ascribed to the CSI 210 may be performed by the orchestrator 112 in response to a notification from the CSI 210.


In response to the call from step 616, the CSI 210 may bind 618 the storage volume created at step 612 to the pod created at step 614. As noted above, the storage volume may previously have been mounted to the node on which the pod 204 was created such that the locally mounted storage volume may then be bound by the CSI 210 to the pod 204 created at step 614.



FIG. 7 illustrates a method 700 for controlling assignment of containers 206 to CPUs of a node 110. The method 700 may be used to ensure the performance and stability of containers executing critical application instances 208. The method 700 may also be used for NUMA awareness, e.g., making sure different application instances 208 are executed by the same CPU or a same pool of CPUs. In the illustrated implementation, a pod 204 may be configured to call a container runtime 702 in order to instantiate the containers specified for the pod 204 in the bundled application manifest 214. In the illustrated embodiment, the container runtime 702 is an agent of the orchestrator 112 and may coordinate with the orchestrator 112 as described below.


The method 700 may include the KUBERNETES master 202 processing 704 a script 218 and reading 706 specifications for a pod 204 and one or more containers 206 to be instantiated and managed by the pod. In response to reading 706 the specifications, the KUBERNETES master 202 may invoke creation 708 of a pod 204. The KUBERNETES master 202 may further instruct the pod 204 created at step 708 to create one or more containers 206 (the number being specified in the specifications) and configure the pod 204 to use the container runtime 702. Accordingly, upon creation, the pod 204 may call 710 the container runtime 702 and instruct the container runtime 702 to create one or more containers 206 as instructed by the KUBERNETES master 202.


The container runtime 702 receives the call and, in response, requests 712 from the orchestrator 112 a CPU requirement for the one or more containers to be instantiated. For example, the specifications for the pod 204 and one or more containers may include container identifiers. The container identifiers may be passed to the container runtime 702, which passes the container identifiers to the orchestrator 112. The orchestrator 112 may look up CPU requirements for the one or more container identifiers in the bundled application manifest 214 and return 714 the CPU requirements to the container runtime 702. Examples of CPU requirements may include any of the following: (a) container A must be the only container executed by the CPU to which it is assigned, (b) container A may share a CPU with no more than N other containers, where N is an integer greater than or equal to 1, or (c) containers A, B, and C (or any number of containers) must be the only containers assigned to at least N CPUs, where N is an integer greater than or equal to 1.
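
For comparison, stock KUBERNETES grants a container exclusive CPUs when the kubelet's static CPU manager policy is enabled and the pod is in the Guaranteed QoS class with integer CPU requests; the sketch below (hypothetical names and sizes) shows that standard mechanism, which is distinct from the patent's container-runtime-based assignment:

apiVersion: v1
kind: Pod
metadata:
  name: exclusive-cpu-pod
spec:
  containers:
    - name: container-a
      image: registry.example.com/a:1.0
      resources:
        requests:
          cpu: "2"        # integer CPU request
          memory: "4Gi"
        limits:
          cpu: "2"        # equal request and limit -> Guaranteed QoS; two exclusive CPUs with the static policy
          memory: "4Gi"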


The container runtime 702 receives the CPU requirement and selects 716 one or more CPUs for each container identifier according to the requirement. For example, if a single exclusive CPU is required, a previously unassigned CPU may be selected. Where a first CPU is being used by a previously-instantiated container that does not require exclusivity, the previously-instantiated container may be assigned to a second CPU that is being used by another previously-instantiated container. The first CPU may then be assigned to the container requiring an exclusive CPU. In a like manner, a group of CPUs may be assigned to a group of containers requiring exclusive CPUs. Where a container does not require exclusivity, a CPU that is assigned to a previously-instantiated container may be assigned to the container. Where a container requires no more than a maximum number of containers be assigned to the same CPU as the container, a CPU may be selected such that the number of previously-instantiated containers assigned to that CPU is less than the maximum number of containers.


The container runtime 702 assigns 718 the one or more containers to the CPUs selected at step 716 for the one or more containers. The container runtime 702 may then invoke instantiation of the one or more containers pinned to the CPUs to which they are assigned. This may include calling a container engine such as DOCKER, LINUX containers (LXC), or another container engine. For example, the container runtime 702 may call a DOCKER daemon with instructions to instantiate 720 the one or more containers pinned to the CPUs selected for them at step 716.



FIG. 8 is a block diagram illustrating an example computing device 800. Computing device 800 may be used to perform various procedures, such as those discussed herein. The storage manager 102, storage nodes 106, compute nodes 110, and hybrid nodes may have some or all of the attributes of the computing device 800.


Computing device 800 includes one or more processor(s) 802, one or more memory device(s) 804, one or more interface(s) 806, one or more mass storage device(s) 808, one or more Input/output (I/O) device(s) 810, and a display device 830 all of which are coupled to a bus 812. Processor(s) 802 include one or more processors or controllers that execute instructions stored in memory device(s) 804 and/or mass storage device(s) 808. Processor(s) 802 may also include various types of computer-readable media, such as cache memory.


Memory device(s) 804 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 814) and/or nonvolatile memory (e.g., read-only memory (ROM) 816). Memory device(s) 804 may also include rewritable ROM, such as Flash memory.


Mass storage device(s) 808 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in FIG. 8, a particular mass storage device is a hard disk drive 824. Various drives may also be included in mass storage device(s) 808 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 808 include removable media 826 and/or non-removable media.


I/O device(s) 810 include various devices that allow data and/or other information to be input to or retrieved from computing device 800. Example I/O device(s) 810 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.


Display device 830 includes any type of device capable of displaying information to one or more users of computing device 800. Examples of display device 830 include a monitor, display terminal, video projection device, and the like.


Interface(s) 806 include various interfaces that allow computing device 800 to interact with other systems, devices, or computing environments. Example interface(s) 806 include any number of different network interfaces 820, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface 818 and peripheral device interface 822. The interface(s) 806 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, etc.), keyboards, and the like.


Bus 812 allows processor(s) 802, memory device(s) 804, interface(s) 806, mass storage device(s) 808, I/O device(s) 810, and display device 830 to communicate with one another, as well as other devices or components coupled to bus 812. Bus 812 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.


For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 800, and are executed by processor(s) 802. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.


In the above disclosure, reference has been made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.


Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.


Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.


An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.


Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.


Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, an in-dash vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.


Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function.


It should be noted that the sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s).


At least some embodiments of the disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.


While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure.

Claims
  • 1. A method comprising: processing, using a first orchestration platform executing in a network environment including one or more computer systems connected by a network, a script, the script including a specification for a pod; in response to the specification for the pod, calling, by the first orchestration platform, a second orchestration platform executing in the network environment; instructing, by the second orchestration platform, the first orchestration platform to create the pod on a selected node, the selected node being one of the one or more computer systems meeting one or more rules, the one or more rules including one or both of affinity rules and anti-affinity rules; and creating, by the first orchestration platform, the pod on the selected node; wherein calling, by the first orchestration platform, the second orchestration platform comprises calling a mutation agent that is an agent of the second orchestration platform, the mutation agent programmed to transform the script; and wherein instructing the first orchestration platform to create the pod on the selected node comprises: transforming, by the mutation agent, the script by inserting instructions effective to cause the first orchestration platform to create the pod on the selected node according to the one or more rules.
  • 2. The method of claim 1, wherein calling, by the first orchestration platform, the second orchestration platform comprises: calling a scheduling agent that is an agent of the second orchestration platform.
  • 3. The method of claim 2, wherein instructing the first orchestration platform to create the pod on the selected node comprises: obtaining, by the scheduling agent, an identifier of the selected node from the second orchestration platform; and returning, by the scheduling agent, the identifier of the selected node to the first orchestration platform.
  • 4. The method of claim 1, wherein the one or more rules either require or forbid placement on a same node, in a same server rack, or in a same data center.
  • 5. The method of claim 1, further comprising: calling, by the pod, a container runtime on the selected node, the container runtime being an agent of the second orchestration platform; obtaining, by the container runtime, from the second orchestration platform, a processor requirement for a container; and creating, by the container runtime, a container on the selected node pinned to a processor of the selected node according to the processor requirement.
  • 6. The method of claim 5, wherein the processor requirement requires the processor be exclusive to the container.
  • 7. The method of claim 5, wherein the processor requirement requires the processor be exclusive to a group of containers including the container.
  • 8. The method of claim 5, further comprising: calling, by the pod, a storage plugin, the storage plugin being an agent of the second orchestration platform; coordinating, by the storage plugin, with the second orchestration platform, to configure the pod to access a storage volume mounted to the selected node.
  • 9. A system comprising: a network environment including one or more computer systems connected by a network, each computer system of the one or more computer systems including one or more processing devices and one or more memory devices connected to the one or more processing devices; wherein the one or more computer systems are programmed to execute a first orchestration platform and a second orchestration platform; wherein the first orchestration platform is programmed to: process a script, the script including a specification for a pod; and call the second orchestration platform in response to detecting the specification for the pod; wherein the first orchestration platform is programmed to: instruct the second orchestration platform to create the pod on a selected node, the selected node being one of the one or more computer systems meeting one or more rules, the one or more rules including one or both of affinity rules and anti-affinity rules; and wherein the second orchestration platform is programmed to create the pod on the selected node, wherein the first orchestration platform is programmed to call a mutation agent that is an agent of the second orchestration platform, the mutation agent programmed to transform the script; and wherein the mutation agent is programmed to transform the script by inserting instructions effective to cause the first orchestration platform to create the pod on the selected node according to the one or more rules.
  • 10. The system of claim 9, wherein the first orchestration platform is programmed to call the second orchestration platform by: calling a scheduling agent that is an agent of the second orchestration platform.
  • 11. The system of claim 10, the scheduling agent is programmed to: obtain an identifier of the selected node from the second orchestration platform; and return the identifier of the selected node to the first orchestration platform.
  • 12. The system of claim 9, wherein the one or more rules either require or forbid placement on a same node, in a same server rack, or in a same data center.
  • 13. The system of claim 9, wherein: the pod is programmed to call a container runtime on the selected node, the container runtime being an agent of the second orchestration platform; and the container runtime is programmed to: obtain from the second orchestration platform, a processor requirement for a container; and create a container on the selected node pinned to a processor of the selected node according to the processor requirement.
  • 14. The system of claim 13, wherein the processor requirement requires the processor be exclusive to the container.
  • 15. The system of claim 13, wherein the processor requirement requires the processor be exclusive to a group of containers including the container.
  • 16. The system of claim 13, wherein: the pod is programmed to call a storage plugin, the storage plugin being an agent of the second orchestration platform; the storage plugin is programmed to coordinate with the second orchestration platform to configure the pod to access a storage volume mounted to the selected node.
Related Publications (1)
Number Date Country
20220109605 A1 Apr 2022 US