Applications today are deployed onto a combination of virtual machines (VMs), containers, application services, and more. For deploying such applications, the container orchestration platform Kubernetes® has gained popularity among application developers. Kubernetes provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. It offers flexibility in application development and provides several useful tools for scaling.
In a Kubernetes system, containers are grouped into a logical unit called a “pod.” Containers in the same pod share the same resources and network, and maintain a degree of isolation from containers in other pods. The pods are distributed across nodes of the Kubernetes system and an image cache is provided on each node to speed up pod deployment. However, when an instance of the same pod is deployed across multiple nodes, and none of the image caches of the nodes have the images of containers that are in the pod, the network can become saturated during the deployment.
In addition, the image caches in a Kubernetes system are opaque to the user. Without a view into which images are cached on which nodes, it is not possible to know how quickly pods can be deployed on a node. Thus, the deployment time for a pod becomes non-deterministic because some nodes may have the images cached and some nodes may not. As a result, it can be difficult to make appropriate scheduling decisions.
Over time, cached images may also be duplicated across nodes. Because image binaries are generally large, the disk space they consume can become very large, e.g., N times their size when they are cached on N nodes. Accordingly, pre-seeding the images in the image cache of each node, which has been employed to alleviate the network saturation and scheduling problems noted above, is far from ideal because it duplicates the images in every cache and wastes disk space.
One or more embodiments provide a global cache for container images in a clustered container host system in which containers are executed within VMs. According to one embodiment, the clustered container host system includes a shared storage device for the container hosts, and the global cache for the container images is allocated in the shared storage device. With this configuration, a container can be deployed in a VM running in one host using a container image that has been cached as a result of deploying the same container image in a VM running in another host.
A method of managing container images according to one embodiment includes the steps of: in connection with deploying a container in a first VM running in a first host, creating a virtual disk in the shared storage device, storing an image of the container in the virtual disk, mounting the virtual disk to the first VM, and updating a metadata cache to associate the image of the container to the virtual disk; and in connection with deploying the container in a second VM running in a second host, checking the metadata cache to determine that the image of the container is stored in the virtual disk, and mounting the virtual disk to the second VM.
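Before turning to the details, the two-phase behavior of this method can be illustrated with a short sketch. The Python below is a minimal, hypothetical model: the metadata cache is a dictionary, and create_virtual_disk, store_image, and mount are illustrative stand-ins for the shared-storage and hypervisor operations, not a real API.

```python
# Minimal sketch of the two-phase caching method; every name here is an
# illustrative stub, not a real hypervisor or storage API.

metadata_cache = {}  # image identifier -> virtual disk handle

def create_virtual_disk(image_id):
    # Stand-in for allocating a virtual disk in the shared storage device.
    return f"vmdk-for-{image_id}"

def store_image(disk, image_id):
    print(f"downloading {image_id} into {disk}")

def mount(disk, vm):
    print(f"mounting {disk} to {vm}")

def deploy_container(image_id, vm):
    disk = metadata_cache.get(image_id)
    if disk is None:
        # First deployment anywhere in the cluster: create and populate a disk,
        # then record it in the metadata cache.
        disk = create_virtual_disk(image_id)
        store_image(disk, image_id)
        metadata_cache[image_id] = disk
    # Later deployments, even in VMs on other hosts, reuse the same disk.
    mount(disk, vm)

deploy_container("registry.example.com/app:1.0", "VM on host 10A")  # miss: disk created
deploy_container("registry.example.com/app:1.0", "VM on host 10B")  # hit: disk reused
```

Because the second call finds the image in the cache, no second download occurs, which is the source of the deterministic spin-up times described below.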
Further embodiments include a non-transitory computer-readable storage medium comprising instructions that cause a computer system to carry out the above methods, as well as a computer system configured to carry out the above methods.
In the embodiment illustrated herein, VM management server 20 is a physical or virtual server that cooperates with hypervisors installed in hosts 10A, 10B, 10C to provision VMs from the hardware resources of hosts 10A, 10B, 10C, and to provision virtual disks for the VMs in the shared storage. The unit by which VM management server 20 manages the hosts is a cluster. A cluster may include any number of hosts; in the embodiment illustrated herein, the cluster includes three hosts.
In the embodiments, a group of containers executes within a VM to provide isolation from groups of containers running in different VMs. In each VM, a container engine (not shown) runs on top of the VM's guest operating system (not shown) to provide the execution environment for the containers.
In the embodiments illustrated herein, metadata cache 110 is a database 111 comprising a plurality of relational database tables. Two such tables, referred to herein as Table 1 and Table 2, are described below.
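As the description below makes clear, Table 1 associates the URI of a container image with its chain ID, and Table 2 associates a chain ID with a pointer to the virtual disk that holds the image. The sqlite3 snippet below is a rough model of such a database; the table and column names are illustrative assumptions.

```python
import sqlite3

# Metadata cache 110 modeled as a small relational database; the table and
# column names below are assumptions for illustration.
conn = sqlite3.connect(":memory:")

# Table 1: associates the URI of a container image with its chain ID.
conn.execute("""CREATE TABLE table1 (
    image_uri TEXT PRIMARY KEY,
    chain_id  TEXT NOT NULL)""")

# Table 2: associates a chain ID with a pointer to the virtual disk (VMDK)
# in global cache 120 that holds the image contents.
conn.execute("""CREATE TABLE table2 (
    chain_id     TEXT PRIMARY KEY,
    vmdk_pointer TEXT)""")  # NULL until a fetcher VM records the disk pointer

conn.commit()
```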
Container images are registered with image registry 130, which manages a plurality of container repositories, one of which is referred to herein as image repository 135.
Upon receiving the request to spin up a new container, VM management server 20 sends a request for an image of the new container to the resolver VMs. The image request includes the URI of the new container image and the credentials of the application administrator. The resolver VMs then carry out the function of resolving the image request, which includes: (1) authenticating the credentials with image registry 130, (2) acquiring a chain ID of the new container image from image registry 130 and determining whether the container image corresponding to this chain ID is cached in global cache 120, i.e., whether the chain ID is present in metadata cache 110, and (3) acquiring a size of the new container image from image registry 130. If the image is not cached, one of the resolver VMs updates metadata cache 110 to add to Table 1 an entry that associates the URI of the new container image with the chain ID and to add to Table 2 an entry for the chain ID, and also sends a request to fetch the new container image to the fetcher VMs. The request to fetch includes the URI of the new container image, the credentials of the application administrator, the chain ID of the new container image, and the size of the new container image, the latter two of which were acquired from image registry 130.
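A hypothetical sketch of the resolving function follows; the registry interface and the fetch-request transport are illustrative stand-ins, not interfaces of any actual registry or management server. Items (1) through (3) above are marked in the comments.

```python
# Illustrative sketch of the resolving function; the registry interface and
# fetch-request transport are hypothetical stubs, not a documented API.

table1 = {}  # Table 1: image URI -> chain ID
table2 = {}  # Table 2: chain ID -> VMDK pointer (None until fetched)

def resolve(uri, credentials, registry, send_fetch_request):
    registry.authenticate(credentials)          # (1) authenticate with image registry 130
    chain_id = registry.get_chain_id(uri)       # (2) acquire the chain ID
    size = registry.get_size(uri)               # (3) acquire the image size
    if chain_id not in table2:                  # cache miss: chain ID absent from metadata cache
        table1[uri] = chain_id
        table2[chain_id] = None                 # placeholder until a fetcher VM adds the disk pointer
        send_fetch_request(uri, credentials, chain_id, size)
    return chain_id
```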
In the embodiments described herein, the authentication part of the resolving function is carried out when a new container is spun up within a VM. In other embodiments, the authentication part of the resolving function may also be carried out each time a container is run.
The fetcher VMs carry out the function of fetching in response to the request to fetch the new container image by calling an API of VM management server 20 to create a new virtual disk (also referred to herein as a VMDK), the parameters of the API including a size corresponding to the size of the container image acquired from image registry 130. In response to these API calls, one of the fetcher VMs receives a pointer to the new virtual disk, updates the entry in Table 2 of metadata cache 110 corresponding to the chain ID to add the pointer to the new virtual disk, and sends a fetch request to image registry 130, the fetch request including the URI of the new container image and the credentials of the application administrator. In response, image registry 130 retrieves the contents of the new container image from image repository 135 and transmits them to the fetcher VM, which stores them in the new virtual disk.
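The fetching function can be sketched in the same style; vm_server and registry below are hypothetical stand-ins for the APIs of VM management server 20 and image registry 130, and the table2 dictionary models the corresponding metadata cache table.

```python
# Illustrative sketch of the fetching function; vm_server and registry are
# hypothetical stand-ins, not the actual APIs of the components described.

def fetch(uri, credentials, chain_id, size, vm_server, registry, table2):
    # Ask the VM management server to create a virtual disk sized to the image.
    disk_pointer = vm_server.create_virtual_disk(size=size)
    # Record the disk pointer in the Table 2 entry for this chain ID.
    table2[chain_id] = disk_pointer
    # Fetch the image contents from the registry and store them in the disk.
    contents = registry.fetch_image(uri, credentials)
    vm_server.write_to_disk(disk_pointer, contents)
    return disk_pointer
```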
After creating the new virtual disk, VM management server 20 instructs the hypervisor supporting the VM in which container Cn is to be spun up to reconfigure the VM to mount the new virtual disk. Once the VM is reconfigured in this manner, container Cn can be executed within the VM according to the contents of its container image stored in the new virtual disk.
In some embodiments, the container engine that supports execution of containers in VMs employs an overlay file system. An image of a container executed in such an environment consists of a plurality of layers and these layers need to be mounted on top of each other in the proper order by the overlay file system for execution of the container. Accordingly, when these layers are fetched from image registry 130 and stored in a virtual disk, the fetcher VM, based on information acquired from image registry 130 during the fetching, creates metadata that describes how and in what order the layers should be mounted by the overlay file system, and stores this metadata in the virtual disk for later consumption during mounting of the layers.
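As one illustration, the layer-ordering metadata could be as simple as an ordered manifest stored in the virtual disk alongside the layers. The JSON layout below is an assumption for illustration, not the actual on-disk format.

```python
import json

# Hypothetical layer-order manifest; overlay file systems stack layers
# bottom-up, so the order in which layers are listed matters.
layer_manifest = {
    "layers": [               # bottom-most layer first
        "sha256:1111...",     # base operating system layer
        "sha256:2222...",     # language runtime layer
        "sha256:3333...",     # application layer
    ],
}

def write_manifest(disk_path, manifest):
    # Stored in the virtual disk with the layers, for consumption by the
    # overlay file system when the layers are mounted later.
    with open(f"{disk_path}/layer-manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
```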
In the embodiments, the function of resolving and the function of fetching are carried out in a distributed manner. As such, all of the resolver VMs in the cluster of hosts managed by VM management server 20 carry out the function of resolving and all of the fetcher VMs in the cluster of hosts managed by VM management server 20 carry out the function of fetching. Although multiple resolver VMs are carrying out the same resolving function, the process described herein ensures that only one resolver VM completes the resolving function. In the case of a cache miss, the resolver VM that is the first to access metadata cache 110 to determine the cache miss will have a lock on Table 2 and will update Table 2 to include the chain ID in response to the cache miss. Consequently, all subsequent accesses to metadata cache 110 to determine a cache hit or miss on the chain ID will result in a cache hit and will not cause a further updating of Table 2. In the case of a cache hit, multiple resolver VMs will call an API of VM management server 20 to mount a virtual disk corresponding to the cache hit, but VM management server 20 will process only the first of these API calls and ignore the rest. Likewise, for fetching, multiple fetcher VMs will call an API of VM management server 20 to create a new virtual disk, but VM management server 20 will process only the first one of the API calls and ignore the rest.
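The only-one-completes behavior on a cache miss can be modeled with a uniqueness constraint on the chain ID: whichever resolver VM inserts the chain ID first wins, and later inserts become no-ops that read as cache hits. The sqlite3 fragment below is a minimal sketch of that idea, reusing the hypothetical Table 2 layout sketched earlier.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table2 (chain_id TEXT PRIMARY KEY, vmdk_pointer TEXT)")

def try_claim(chain_id):
    # INSERT OR IGNORE succeeds for exactly one caller per chain ID; later
    # callers see rowcount == 0 and treat the lookup as a cache hit.
    cur = conn.execute(
        "INSERT OR IGNORE INTO table2 (chain_id) VALUES (?)", (chain_id,))
    conn.commit()
    return cur.rowcount == 1

print(try_claim("sha256:abc"))  # True: this resolver VM handles the cache miss
print(try_claim("sha256:abc"))  # False: the chain ID was already claimed
```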
At step S1, VM management server 20 sends a request for the new container image to the resolver VMs in the cluster of hosts managed by VM management server 20. The image request includes the URI of the new container image and the credentials of the application administrator. At step S2, each of the resolver VMs sends the URI and the credentials to image registry 130. If image registry 130 is able to authenticate the credentials at step S3, image registry 130 at step S4 sends the chain ID (which is generated by hashing the contents of the new container image) and the size of the new container image to each resolver VM.
Each resolver VM at step S5 searches metadata cache 110, in particular Table 2, to determine whether the chain ID of the new container image acquired from image registry 130 is present in metadata cache 110. If it is not present, a cache miss is determined and steps S6 to S11 are carried out. If it is present, a cache hit is determined and steps S12 and S13 are carried out.
At step S6, the resolver VM (e.g., the first resolver VM that determined the absence of the chain ID in metadata cache 110) updates metadata cache 110 to add to Table 1 an entry that associates the URI of the new container image with the chain ID and to add to Table 2 an entry for the chain ID. At step S7, the resolver VM sends a request to fetch the new container image to the fetcher VMs in the cluster of hosts managed by VM management server 20. The request to fetch includes the URI of the new container image, the credentials of the application administrator, the chain ID of the new container image, and the size of the new container image, the latter two of which were acquired from image registry 130.
Each of the fetcher VMs carries out the function of fetching in response to the request to fetch the new container image. At step S8, the fetcher VMs each call an API of VM management server 20 for creating a new virtual disk of the requested size and thereafter mounting the new virtual disk to the VM in which the new container is to be spun up. VM management server 20 responds to only the first one of these API calls by: (1) sending back a pointer to the new virtual disk to that fetcher VM, and (2) instructing the hypervisor supporting the VM (in which the new container is to be spun up) to reconfigure the VM to mount the new virtual disk (step S9). After responding to the first one of these API calls, VM management server 20 ignores the rest. Upon receiving the pointer to the new virtual disk, the fetcher VM at step S10 updates metadata cache 110 with the pointer and also communicates with image registry 130 to fetch the new container image. Upon fetching the contents of the new container image, the fetcher VM at step S11 stores them in the new virtual disk. After completion of step S11, the new container is ready to be loaded into the memory of the VM and executed.
At step S12, which is carried out if the chain ID of the new container image acquired from image registry 130 is present in metadata cache 110, each resolver VM determines the pointer to the virtual disk that is associated with the chain ID in Table 2 of metadata cache 110, and calls an API of VM management server 20 for mounting the virtual disk that is located at the determined pointer to the VM in which the new container is to be spun up. VM management server 20 responds to only the first one of these API calls and ignores the rest. Upon receiving the first of these API calls, VM management server 20 at step S13, instructs the hypervisor supporting the VM to reconfigure the VM to mount the virtual disk that is located at the determined pointer. After completion of step S13, the new container is ready to be loaded into the memory of the VM and executed.
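As a rough illustration of this cache-hit path, the sketch below models steps S12 and S13 as a lookup in Table 2 followed by a mount call; the deduplication of duplicate API calls by VM management server 20 is modeled with a simple set of already-processed requests, and all names are hypothetical.

```python
# Sketch of the cache-hit path (steps S12 and S13); all names are illustrative.

table2 = {"sha256:abc": "/shared-storage/cache/abc.vmdk"}
processed_calls = set()  # VM management server ignores duplicate mount calls

def mount_api(vm, disk_pointer):
    if (vm, disk_pointer) in processed_calls:
        return  # duplicate call from another resolver VM: ignored
    processed_calls.add((vm, disk_pointer))
    print(f"instructing hypervisor to mount {disk_pointer} to {vm}")

def on_cache_hit(chain_id, vm):
    disk_pointer = table2[chain_id]  # S12: look up the disk pointer in Table 2
    mount_api(vm, disk_pointer)      # S13: only the first call has any effect

on_cache_hit("sha256:abc", "new container's VM")  # mounts the cached disk
on_cache_hit("sha256:abc", "new container's VM")  # ignored as a duplicate
```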
The method described below is carried out to evict container images from global cache 120 when space in the cache needs to be freed up.
The process loops through steps 410 and 412 if it is determined at step 414 that not all of the VMDKs stored in global cache 120 have been analyzed. If all of them have been analyzed, step 416 is executed, where the VMDK that costs the least to replace is deleted, e.g., by calling an API of VM management server 20 to delete the VMDK. After step 416, it is determined at step 418 whether sufficient space has been freed up in global cache 120. If not, the process returns to step 416, where the VMDK having the next lowest cost to replace is deleted. If sufficient space has been freed up, the process ends.
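A minimal sketch of this eviction loop follows. The passage above does not define the cost-to-replace metric, so the sketch assumes a simple placeholder (recent use count weighted against image size); the actual policy could differ.

```python
# Sketch of freeing space in global cache 120; the cost model is an assumption.

def cost_to_replace(vmdk):
    # Placeholder policy: large, rarely used images are cheapest to evict.
    return vmdk["recent_uses"] / max(vmdk["size"], 1)

def free_space(vmdks, space_needed, delete_vmdk):
    # Steps 410-414: analyze every VMDK stored in the global cache.
    ranked = sorted(vmdks, key=cost_to_replace)  # lowest replacement cost first
    freed = 0
    # Steps 416-418: delete lowest-cost VMDKs until enough space is freed.
    for vmdk in ranked:
        if freed >= space_needed:
            break
        delete_vmdk(vmdk)  # e.g., an API call to the VM management server
        freed += vmdk["size"]

free_space(
    [{"name": "a.vmdk", "size": 500, "recent_uses": 1},
     {"name": "b.vmdk", "size": 200, "recent_uses": 9}],
    space_needed=400,
    delete_vmdk=lambda v: print("deleting", v["name"]),
)
```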
Embodiments provide a global cache, which in comparison to per-node caching employed in conventional implementations, reduces the spin-up time for a container, provides better estimates on how long it will take to spin up a container, and eliminates redundant storing of the same container images.
The various embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. For example, these operations may require physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals, where they or representations of them are capable of being stored, transferred, combined, compared, or otherwise manipulated. Further, such manipulations are often referred to in terms such as producing, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments of the invention may be useful machine operations. In addition, one or more embodiments of the invention also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for specific required purposes, or it may be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The various embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
One or more embodiments of the present invention may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. Examples of a computer readable medium include a hard drive, NAS, read-only memory (ROM), RAM (e.g., flash memory device), Compact Disk (e.g., CD-ROM, CD-R, or CD-RW), Digital Versatile Disk (DVD), magnetic tape, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Although one or more embodiments of the present invention have been described in some detail for clarity of understanding, it will be apparent that certain changes and modifications may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.
Virtualization systems in accordance with the various embodiments may be implemented as hosted embodiments, as non-hosted embodiments, or as embodiments that blur distinctions between the two; all are envisioned. Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data.
Many variations, modifications, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest operating system that perform virtualization functions. Plural instances may be provided for components, operations, or structures described herein as a single instance. Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention. In general, structures and functionalities presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionalities presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the appended claims.