This disclosure relates generally to container-based computing and, more particularly, to enabling efficient loading of a container image.
Containers are virtual structures used for execution of an isolated instance of an application within a host virtualization environment. Containers are used to facilitate operating system virtualization, thereby abstracting (e.g., isolating) an application from the operating system. As a result, an application executing in a first container is isolated from another application (perhaps even a copy of the same application) that is executed outside of the container (e.g., at the host operating system or in another container).
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc. are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.
Operating System (OS) level virtualization enables execution of multiple isolated user space instances on a computing system. Such instances are commonly referred to as containers, but may additionally or alternatively be referred to as Zones, virtual private servers, partitions, virtual environments (VEs), virtual kernels, jails, etc. Such containers appear to function as a complete computer system from the perspective of the application(s) executed inside of the container. However, while an application executed on a traditional computer operating system can access resources of that computer (e.g., connected devices, file systems, network shares, etc.), when the application is executed within a container, the application can only access those resources associated with the container. In other words, the application does not have access to resources outside of the container (e.g., resources in another container, resources of the host operating system, etc.) other than those resources outside of the container that the application is specifically allowed to access.
To that end, containers are useful from a resource management standpoint (e.g., resources used by containerized components are isolated for use only by those components that are part of the same container) and/or from a security standpoint (e.g., access to containerized files or components can be restricted). Likewise, containers are useful for achieving lightweight, reproducible application deployments. In this manner, containers are frequently used in cloud computing environments to enable resources to be allocated, removed, and/or re-allocated based on demand.
Prior to execution of a container within a host environment, the container is stored as a container image that specifies components of the container including, for example, libraries, binaries, and/or other files needed for execution of the container.
A container is different from a container image, even from a storage perspective, in that the container image represents the base from which a container starts. A container adds, on top of the image, a “thin” layer that is both readable and writable. Any change that a container attempts to make to the content of a file in the image triggers a copy-on-write mechanism, which instead creates a copy of that file from the image in the thin layer; the copy is then used by the container for both reading and writing purposes. In this manner, a container image or, more generally, a container, is different from a virtual machine image (or other data structure), where changes to a virtual machine image made by an application executed within the virtual machine will cause writes to the image itself.
A container image can include one or more container layers, which can be considered the fundamental units of container storage. Each layer adds a different set of files and folders to the overall image. The container layers of an image are considered read-only and are often reused as building blocks for one or more containers, which can share them (e.g., two containers with the same operating system will share the layers that represent that OS). When a container is to be created, a Container Runtime Engine (e.g., an entity creating and managing the container during its lifespan) will have a handler responsible for ensuring that all layers necessary to create the container exist on disk, pulling them from a remote storage over the network if unavailable, and creating a last, thin, container-specific layer to which the container can both read and write. Such an approach is called copy-on-write, and prevents the container from writing to files in the read-only layers. Therefore, when a determination is made to execute the container in the host environment, the Container Runtime Engine pulls all missing layers from a container registry (e.g., via a network), creates a small copy-on-write layer, mounts all the layers in the new container (the mount operation does not implicate any read, write, or execute operation), and loads or runs whatever binary or program the container was designated to run from the final image. These steps of creating the container's storage are usually referred to as the pull phase, the create phase (including mounting), and the run phase. These steps are the main cause of any overhead when launching a container and should be mitigated.
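For illustration purposes only, the pull, create, and run phases described above may be sketched as follows. The `Registry` class and `launch` function are hypothetical names introduced for this example and do not correspond to any particular container runtime's API.

```python
# Illustrative sketch only: `Registry` and `launch` are hypothetical names and
# do not correspond to any real container runtime API.

class Registry:
    """Stands in for a remote container registry reachable over a network."""
    def __init__(self, layers):
        self._layers = layers  # layer id -> {path: content}

    def pull(self, layer_id):
        return self._layers[layer_id]

def launch(image_layers, local_layers, registry):
    # Pull phase: ensure every read-only layer of the image exists locally.
    for layer_id in image_layers:
        if layer_id not in local_layers:
            local_layers[layer_id] = registry.pull(layer_id)

    # Create phase: stack the read-only layers and add a thin, writable,
    # container-specific layer on top.
    thin_layer = {}

    def read(path):
        if path in thin_layer:                   # the container's own copy wins
            return thin_layer[path]
        for layer_id in reversed(image_layers):  # search read-only layers, top-down
            if path in local_layers[layer_id]:
                return local_layers[layer_id][path]
        raise FileNotFoundError(path)

    def write(path, data):
        thin_layer[path] = data                  # copy-on-write: image layers stay intact

    # The run phase would execute the container's designated binary; here the
    # mounted view is simply returned.
    return read, write

registry = Registry({"os": {"/bin/sh": "shell"}, "app": {"/app/main": "app"}})
local = {"os": {"/bin/sh": "shell"}}             # the "app" layer is missing locally
read, write = launch(["os", "app"], local, registry)
write("/bin/sh", "patched")                      # the write lands in the thin layer only
```

After the write, reads of `/bin/sh` resolve to the thin layer's copy while the read-only "os" layer is unchanged, mirroring the copy-on-write behavior described above.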
In Function-as-a-Service (FaaS) and Container-as-a-Service (CaaS) environments, container start-up time has a major impact on the Quality-of-Service (QoS), as seen from an end user perspective. A cold start time is the amount of time it takes from the command to launch a container, through the three steps of preparation (pull, create, run), until the container completes execution for the first time on a machine. Depending on how many layers are missing and how long it takes to create and mount all the container layers, the impact to function performance can be significant. This cold-start time can also be influenced by the need, during run time, to process large or numerous files for the first time, by high latency for file operations, etc. High latencies and volatile runtimes cannot be tolerated for some use-cases (e.g., real-time processing, streaming, sub-second functions). Moreover, for workloads that are deemed to be high priority, start-up time (and variance of that time) is ideally reduced and/or minimized. Example approaches disclosed herein enable a reduction in container start-up times, particularly in FaaS and CaaS environments. Also, most hosts of FaaS/CaaS services wish for predictable runtimes with low variation, as long cold-starts can either result in higher bills for the customer (if the time is billed) or losses for the provider (if the cold-start is not billed). Moreover, examples disclosed herein offer differentiated services based on a priority and/or importance of a particular container.
Existing approaches attempt to minimize cold start time by keeping the container warm. That is, the image for the container is kept in DRAM, by using a RAM-disk. However, continuously storing container images in DRAM is a costly solution, especially if those containers are not actively used.
Some other existing approaches attempt to overcome delays associated with downloading a full image to disk in order to start a container and, instead, offer lazy loading of the image. In such prior approaches, the container can be started once critical portions of the image are downloaded onto the storage, while other layers are downloaded after the container start and are pulled from the registry during execution.
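For illustration purposes only, such lazy loading may be sketched as follows. The split of an image into "critical" and "deferred" layers, and the `pull` callback, are assumptions introduced for this example.

```python
# Illustrative sketch only: the critical/deferred split and the `pull`
# callback are hypothetical and stand in for a lazy-loading container runtime.

def lazy_start(critical_layers, deferred_layers, local, pull):
    # Download only the critical portions of the image before start.
    for layer in critical_layers:
        if layer not in local:
            local[layer] = pull(layer)
    available_at_start = sorted(local)  # the container may start at this point
    # The remaining layers are pulled from the registry during execution.
    for layer in deferred_layers:
        if layer not in local:
            local[layer] = pull(layer)
    return available_at_start

local_storage = {}
pull_order = []

def pull(layer):
    pull_order.append(layer)
    return f"<{layer} contents>"

started_with = lazy_start(["base"], ["locale", "docs"], local_storage, pull)
```

In this sketch the container becomes startable once only the "base" layer is present, while "locale" and "docs" arrive afterward.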
As explained herein, use of a tiered storage topology (e.g., based on Intel® 3D XPoint) can further improve cold start times, increase a density of functions deployed in the system, and offer differentiated services (e.g., improved loading times based on workload priority). In this manner, hierarchical storage topologies (e.g., based on Intel® 3D XPoint) are used to offer differentiated services and reduced container cold-start time in FaaS/CaaS deployments. In some examples, Intel® Resource Director Technology (Intel® RDT) is used to further extend service differentiation and improve performance predictability (e.g., container run time). In this manner, example approaches disclosed herein extend the concept of image lazy loading in the context of a tiered storage system, with the added benefit of using various tiers of the tiered storage system to provide varying levels of performance and/or predictability.
In the illustrated example of
The second example image 120 of
By using the landmark 150, layers of the image prior to the landmark 150 can be pre-fetched for loading into memory. In some examples, the pre-fetching of the layers prior to the landmark 150 is performed using a single operation. In the illustrated example of
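For illustration purposes only, the landmark concept may be sketched as follows. The `LANDMARK` sentinel and the layer names are hypothetical; they stand in for the landmark 150 separating the layers to be pre-fetched from the remaining layers of the image.

```python
# Illustrative sketch only: LANDMARK and the layer names are hypothetical
# stand-ins for the landmark 150 described above.

LANDMARK = object()

def split_at_landmark(ordered_layers):
    """Split an ordered layer list into (pre-landmark, post-landmark) sets."""
    idx = ordered_layers.index(LANDMARK)
    return ordered_layers[:idx], ordered_layers[idx + 1:]

def prefetch(layers, fetch_batch):
    """Load all pre-landmark layers using a single (batched) operation."""
    return fetch_batch(layers)  # one call for the whole set, not one per layer

image = ["base-os", "runtime", LANDMARK, "locale-data", "debug-symbols"]
first_set, second_set = split_at_landmark(image)
prefetched = prefetch(first_set, lambda ls: {l: f"<{l}>" for l in ls})
```

Here the layers before the landmark are fetched together in a single operation, while the post-landmark layers are left for later loading.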
In the illustrated example of
While in the illustrated example of
The example first tier storage device 220, the example second tier storage device 225, the example third tier storage device 230, and the example Nth tier storage device 235 of the illustrated example of
In the illustrated example of
The example container compressor 250 of the illustrated example of
The example prioritizer 270 of the illustrated example of
The example container controller 275 of the illustrated example of
The example container loader 280 of the illustrated example of
The example container executor 290 of the illustrated example of
The example container registry 295 of the illustrated example of
The example network 296 of the illustrated example of
While in the illustrated example of
Moreover, while in the illustrated example of
While an example manner of implementing the container layer manager 205 of
Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the container layer manager 205 of
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement one or more functions that may together form a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, Go, etc.
As mentioned above, the example processes of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. 
Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
As used herein, singular references (e.g., “a,” “an,” “first,” “second,” etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more,” and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
The example prioritizer 270 determines a priority level to which the container is to be prioritized. (Block 510). In examples disclosed herein, the priority of the container is determined based on an indication included in the request to execute the container. Such indication may be provided by, for example, a user that submitted the request for execution of the container. However, the priority of the container may be determined based on any other criteria including, for example, a history of the execution of the container, a type of operation performed by the container, etc. In some examples, the priority may correspond to a desired latency for execution of the container, a desired Quality of Service (QoS), etc.
The example container controller 275 identifies a first expected location of a first set of layers of the container image, based on the priority level identified by the prioritizer 270. (Block 515). In some examples, the container controller 275 consults the priority mapping table (e.g., the mapping table 300 of
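For illustration purposes only, such a priority mapping table may be sketched as follows. The tier names and table contents are assumptions introduced for this example; they stand in for the mapping consulted by the container controller 275.

```python
# Illustrative sketch only: the tier names and table contents are hypothetical
# stand-ins for the priority mapping table consulted by the container controller.

PRIORITY_MAP = {
    # priority level: (expected tier for the first set of layers,
    #                  expected tier for the second set of layers)
    "high":   ("tier-1 (fastest)", "tier-2"),
    "medium": ("tier-2",           "tier-3"),
    "low":    ("tier-3",           "tier-4 (slowest)"),
}

def expected_locations(priority_level):
    """Translate a container's priority level into expected storage locations."""
    return PRIORITY_MAP[priority_level]

first_location, second_location = expected_locations("high")
```

A higher-priority container thus resolves to faster storage tiers for both its first and second sets of layers.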
The example container loader 280 determines whether the first set of layers is present at the first expected location. (Block 520). If the example container loader 280 determines that the first set of layers is present at the first expected location (e.g., Block 520 returns a result of YES), the example container loader 280 mounts the first set of layers from the first expected location. (Block 525). In this manner, execution can begin using the first set of layers from a selected tier of storage device, while the remaining layers are loaded (or retrieved).
If the example container loader 280 determines that the first set of layers is not present at the first expected location (e.g., Block 520 returns a result of NO), the example container loader 280 determines if the first set of layers of the image is present on another storage device of the container runtime engine 201. (Block 530). If the first set of layers of the image is not present on another storage device of the container runtime engine 201, (e.g., Block 530 returns a result of NO), the example container controller 275 pulls the first set of layers from the container registry to the first expected location. (Block 535). The example container loader 280 then mounts the first set of layers from the first expected location. (Block 525).
In some examples, a first set of layers for a container might not already be present at the first expected location (e.g., block 520 returns a result of NO), but may be present in a different storage device of the container runtime engine 201 (e.g., block 530 returns a result of YES). Such a situation may, in some examples, be the result of a change in a priority level of a container. Such a change may be made to increase the priority level of the container (e.g., resulting in the layers being loaded from a faster storage device), or alternatively may be made to decrease the priority level of the container (e.g., resulting in the layers being loaded from a slower storage device).
If the example container loader 280 determines that the first set of layers of the image, while not present in the first expected location, is present in another location of the container runtime engine 201 (e.g., Block 530 returns a result of YES), the example container loader 280 mounts the first set of layers from the current location. (Block 540). In such an example, using the first set of layers from the current location not only results in improved loading performance as compared to pulling the first set of layers from the container registry 295, but also conserves network bandwidth that would have otherwise been consumed by pulling the first set of layers from the container registry 295.
After mounting the first set of layers from their current location (Block 540), the example container loader 280 moves and/or copies the first set of layers to the first expected location. (Block 545). Such a moving/copying operation is performed in the background, and ensures that, upon subsequent requests for execution of the container, the first set of layers will be present in the first expected location. In some examples, after movement/copying of the first set of layers to the first expected location, the first set of layers is re-mounted from the first expected location. In some alternative examples, the first set of layers is first moved to the first expected location and mounted from there (e.g., instead of mounting from the current location while the first set of layers is moved/copied to the first expected location).
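For illustration purposes only, the location-resolution logic of the preceding paragraphs (blocks 520 through 545) may be sketched as follows. All names are hypothetical, and the migration to the expected tier is performed inline here rather than in the background, for simplicity.

```python
# Illustrative sketch only of blocks 520-545: mount from the expected tier when
# possible; otherwise mount from whichever local tier currently holds the layers
# and copy them to the expected tier (inline here, not in the background);
# otherwise pull from the registry into the expected tier first.

def resolve_and_mount(layers_id, expected_tier, tiers, registry_pull):
    """Return (tier the layers were mounted from, whether a migration occurred)."""
    if layers_id in tiers[expected_tier]:        # block 520 -> YES
        return expected_tier, False              # block 525: mount in place
    for tier, contents in tiers.items():         # block 530: search other tiers
        if layers_id in contents:
            # Block 540: mount from the current location; block 545: move/copy
            # the layers to the expected location for subsequent requests.
            tiers[expected_tier][layers_id] = contents[layers_id]
            return tier, True
    # Block 530 -> NO; block 535: pull from the registry, then mount (block 525).
    tiers[expected_tier][layers_id] = registry_pull(layers_id)
    return expected_tier, False

tiers = {"tier-1": {}, "tier-2": {"first-set": "<layers>"}}
mounted_from, migrated = resolve_and_mount(
    "first-set", "tier-1", tiers, lambda i: "<pulled>")
```

On the first request the layers are mounted from "tier-2" and copied to "tier-1"; a subsequent request for the same layers then mounts directly from the expected location, as described for the second request in the paragraphs below.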
Upon mounting of the first set of layers (e.g., from the first expected location at block 525 or from the current location at block 540), the example container executor 290 begins execution of the container. (Block 550). In some examples, the causing of the execution of the container by the container executor 290 includes initializing a thin read/write layer. In this manner, because the first set of layers is mounted, and may be mounted from a first storage device implemented using a memory technology that offers improved performance as compared to a second storage device (e.g., where a second set of layers of the image may be stored), important components of the container can be more quickly mounted for execution, while components of the container of lesser importance can be mounted and/or otherwise made available for execution in a delayed fashion. Such an approach reduces the cold start time of the execution of the container.
The example container controller 275 identifies a second expected location of a second set of layers of the container image, based on the priority level identified by the prioritizer 270 at Block 515. (Block 555). In some examples, the identification of the second expected location of the second set of layers may be performed at a same time as the identification of the first expected location of the first set of layers. In other words, having identified the priority level of the container, the example controller 275 may determine each of the locations in which layers of the container image are expected to be stored.
The example container loader 280 determines whether the second set of layers is present at the second expected location. (Block 560). If the example container loader 280 determines that the second set of layers is present at the second expected location (e.g., Block 560 returns a result of YES), the example container loader 280 mounts the second set of layers from the second expected location. (Block 565). In this manner, the second set of layers can be accessed as part of execution of the container. While in the illustrated example of
If the example container loader 280 determines that the second set of layers is not present at the second expected location (e.g., Block 560 returns a result of NO), the example container loader 280 determines if the second set of layers of the image is present on another storage device of the container runtime engine 201. (Block 570). If the second set of layers of the image is not present on another storage device of the container runtime engine 201, (e.g., Block 570 returns a result of NO), the example container controller 275 pulls the second set of layers from the container registry to the second expected location. (Block 575). The example container loader 280 then mounts the second set of layers from the second expected location. (Block 565).
In some examples, a second set of layers for a container might not already be present at the second expected location (e.g., block 560 returns a result of NO), but may be present in a different storage device of the container runtime engine 201 (e.g., block 570 returns a result of YES). Such a situation may, in some examples, be the result of a change in a priority level of a container. Such a change may be made to increase the priority level of the container (e.g., resulting in the layers being loaded from a faster storage device), or alternatively may be made to decrease the priority level of the container (e.g., resulting in the layers being loaded from a slower storage device).
If the example container loader 280 determines that the second set of layers of the image, while not present in the second expected location, is present in another location of the container runtime engine 201 (e.g., Block 570 returns a result of YES), the example container loader 280 mounts the second set of layers from the current location. (Block 580). In such an example, using the second set of layers from the current location not only results in improved loading performance as compared to pulling the second set of layers from the container registry 295, but also conserves network bandwidth that would have otherwise been consumed by pulling the second set of layers from the container registry 295.
After mounting the second set of layers from their current location (Block 580), the example container loader 280 moves and/or copies the second set of layers to the second expected location. (Block 585). Such a moving/copying operation is performed in the background, and ensures that, upon subsequent requests for execution of the container, the second set of layers will be present in the second expected location. In some examples, after movement/copying of the second set of layers to the second expected location, the second set of layers is re-mounted from the second expected location. In some alternative examples, the second set of layers is first moved to the second expected location and mounted from there (e.g., instead of mounting from the current location while the second set of layers is moved/copied to the second expected location).
Upon mounting of the second set of layers (e.g., from the second expected location at block 565 or from the current location at block 580), the example container executor 290 continues execution of the container, using the layers in the second set of layers, which were mounted in a delayed fashion, as necessary. As noted above, such an approach reduces the cold start time of the execution of the container. The example process 500 of
If, for example, a subsequent (e.g., second) request for execution of the container were received after an initial request that had changed the priority level of the container (e.g., causing movement of the first set of layers and/or the second set of layers to new expected locations, as described in blocks 540, 545, 580, 585), the execution of the container in the context of the second request would result in mounting of the layers from the expected locations (e.g., blocks 525, 565). As a result, the time to begin execution responsive to the second request is improved as compared to the time to begin execution responsive to the first request, as movement of the first and/or second sets of layers to their new expected locations is no longer necessary. Thus, while a change in the priority of the container that is indicated via a request to execute the container might not affect the loading and/or execution of the container in response to that request, subsequent executions of the container may benefit from the adjusted priority level.
While in the illustrated example of
The processor platform 600 of the illustrated example includes a processor 612. The processor 612 of the illustrated example is hardware. For example, the processor 612 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example container compressor 250, the example prioritizer 270, the example container controller 275, the example container loader 280, and the example container executor 290.
The processor 612 of the illustrated example includes a local memory 613 (e.g., a cache). The processor 612 of the illustrated example is in communication with a main memory including a volatile memory 614 and a non-volatile memory 616 via a bus 618. The volatile memory 614 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 616 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 614, 616 is controlled by a memory controller.
The processor platform 600 of the illustrated example also includes an interface circuit 620. The interface circuit 620 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 622 are connected to the interface circuit 620. The input device(s) 622 permit(s) a user to enter data and/or commands into the processor 612. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
One or more output devices 624 are also connected to the interface circuit 620 of the illustrated example. The output devices 624 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 620 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor.
The interface circuit 620 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 626. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
The processor platform 600 of the illustrated example also includes one or more mass storage devices 628 for storing software and/or data. Examples of such mass storage devices 628 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
The machine executable instructions 632 of
A block diagram illustrating an example software distribution platform 705 to distribute software such as the example computer readable instructions 632 of
From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that enable efficient loading of a container into operational memory for execution. Example approaches disclosed herein enable portions of an image of a container to be stored in separate memories having different performance characteristics and, as a result, enable loading of sections of a container image that are needed for immediate execution of a container in a prioritized manner.
Such prioritization enables improved loading and/or execution times for such containers. The disclosed methods, apparatus and articles of manufacture improve the efficiency of using a computing device by enabling images to be loaded into operational memory for execution in a more efficient manner. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer.
Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
Example methods, apparatus, systems, and articles of manufacture for loading of a container image are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes an apparatus for managing a container image, the apparatus comprising a prioritizer to determine a priority level at which a container is to be executed, a container controller to determine a first expected location for a first set of layers of the container, the container controller to determine a second expected location for a second set of layers of the container, the first expected location and the second expected location determined based on the determined priority level, the second set of layers separated from the first set of layers in an image by a landmark, a container loader to mount the first set of layers from the first expected location, and a container executor to initiate execution of the container based on the mounted first set of layers.
Example 2 includes the apparatus of example 1, further including a container compressor to build the image of the container using a compression format that includes the landmark.
Example 3 includes the apparatus of example 1, wherein the container controller is to pull the first set of layers from a container registry to the first expected location, the container controller to pull the second set of layers from the container registry to the second expected location.
Example 4 includes the apparatus of example 1, wherein the container executor is to trigger execution of the container based on the first set of layers prior to the container loader having mounted the second set of layers.
Example 5 includes the apparatus of example 1, wherein the first expected location identifies a first storage device.
Example 6 includes the apparatus of example 5, wherein the first storage device is implemented using dynamic random access memory.
Example 7 includes the apparatus of example 5, wherein the second expected location identifies a second storage device different from the first storage device.
Example 8 includes the apparatus of example 7, wherein the second storage device is implemented by a persistent memory.
Example 9 includes at least one non-transitory computer-readable medium comprising instructions that, when executed, cause at least one processor to at least determine a priority level at which a container is to be executed, determine a first expected location for a first set of layers of the container, determine a second expected location for a second set of layers of the container, the first expected location and the second expected location determined based on the determined priority level, the second set of layers separated from the first set of layers in an image by a landmark, mount the first set of layers from the first expected location, and initiate execution of the container based on the first set of layers.
Example 10 includes the at least one non-transitory computer-readable medium of example 9, wherein the instructions, when executed, cause the at least one processor to build the image of the container using a compression format that includes a landmark.
Example 11 includes the at least one non-transitory computer-readable medium of example 9, wherein the instructions, when executed, cause the at least one processor to pull the first set of layers from a container registry to the first expected location responsive to a first determination that the first set of layers are not present in the first expected location, and pull the second set of layers from the container registry to the second expected location responsive to a second determination that the second set of layers are not present in the second expected location.
Example 12 includes the at least one non-transitory computer-readable medium of example 9, wherein the instructions, when executed, cause the at least one processor to mount the second set of layers from the second expected location after the initiation of the execution of the container based on the first set of layers.
Example 13 includes the at least one non-transitory computer-readable medium of example 9, wherein the first expected location identifies a first storage device.
Example 14 includes the at least one non-transitory computer-readable medium of example 13, wherein the first storage device is implemented using dynamic random access memory.
Example 15 includes the at least one non-transitory computer-readable medium of example 13, wherein the second expected location identifies a second storage device different from the first storage device.
Example 16 includes the at least one non-transitory computer-readable medium of example 15, wherein the second storage device is a persistent memory.
Example 17 includes a method for managing a container image, the method comprising determining a priority level at which a container is to be executed, determining a first expected location for a first set of layers of the container, determining, by executing an instruction with a processor, a second expected location for a second set of layers of the container, the first expected location and the second expected location determined based on the determined priority level, mounting the first set of layers from the first expected location, and initiating execution of the container based on the first set of layers.
Example 18 includes the method of example 17, further including building the image of the container using a compression format that includes a landmark.
Example 19 includes the method of example 17, further including pulling the first set of layers from a container registry to the first expected location responsive to a first determination that the first set of layers are not present in the first expected location, and pulling the second set of layers from the container registry to the second expected location responsive to a second determination that the second set of layers are not present in the second expected location.
Example 20 includes the method of example 17, further including mounting the second set of layers from the second expected location after the initiation of the execution of the container based on the first set of layers.
Example 21 includes an apparatus for managing a container image, the apparatus comprising means for prioritizing to determine a priority level at which a container is to be executed, means for controlling to determine a first expected location for a first set of layers of the container, the means for controlling to determine a second expected location for a second set of layers of the container, the first expected location and the second expected location determined based on the determined priority level, the second set of layers separated from the first set of layers in an image by a landmark, means for loading to mount the first set of layers from the first expected location, and means for executing to initiate execution of the container based on the mounted first set of layers.
Example 22 includes the apparatus of example 21, further including means for compressing to build the image of the container using a compression format that includes the landmark.
Example 23 includes the apparatus of example 21, wherein the means for controlling is to pull the first set of layers from a container registry to the first expected location, the means for controlling to pull the second set of layers from the container registry to the second expected location.
Example 24 includes the apparatus of example 21, wherein the means for executing is to trigger execution of the container based on the first set of layers prior to the means for loading having mounted the second set of layers.
Example 25 includes the apparatus of example 21, wherein the first expected location identifies a first storage device.
Example 26 includes the apparatus of example 25, wherein the first storage device is implemented using dynamic random access memory.
Example 27 includes the apparatus of example 25, wherein the second expected location identifies a second storage device different from the first storage device.
Example 28 includes the apparatus of example 27, wherein the second storage device is implemented by a persistent memory.
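The conditional registry pull recited in examples 11 and 19 can be sketched briefly. The dictionary-based `registry` and `ensure_layers` helper below are illustrative assumptions (real registries expose a network API, not a dict): a layer is fetched from the registry only responsive to a determination that it is not already present at its expected location.

```python
# Hypothetical sketch of the conditional pull in examples 11 and 19: fetch
# only the layers of a set that are missing from the expected location.
def ensure_layers(expected_location, layer_set, registry):
    """Pull any layers of layer_set missing from expected_location."""
    missing = [layer for layer in layer_set if layer not in expected_location]
    for layer in missing:
        expected_location[layer] = registry[layer]  # pull from the registry
    return missing


registry = {"base": b"...", "app": b"...", "docs": b"..."}
dram = {"base": b"..."}  # first expected location (fast tier), "base" cached
pulled = ensure_layers(dram, ["base", "app"], registry)
```

Here only `app` is pulled, since `base` is already present at the first expected location; the same check would be repeated for the second set of layers against the second expected location.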
The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.