The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
A cloud-based software distribution platform may provide users with cloud-based access to applications running remotely on the platform. The cloud-based software distribution platform may allow a user to use his or her own device to connect to the platform and access applications as if they were running on the user's device. The platform may further allow the user to run applications regardless of the type or operating system ("OS") of the user's device and regardless of the intended operating environment of the application. For example, the user may use a mobile device to run applications designed for a desktop computing environment. Even if the application cannot natively run on the user's device, the platform may provide cloud-based access to the application.
The cloud-based software distribution platform may provide such inter-device application access via nested containers and/or virtual machines ("VMs"). For example, the platform may run one or more containers as base virtualization environments, each of which may host a VM as a nested virtualization environment. A container may provide an isolated application environment that virtualizes at least an OS of the base host machine by sharing a system or OS kernel with the host machine. A virtual machine may provide an isolated application environment that virtualizes hardware as well as an OS. Although a VM may be more resource-intensive than a container, a VM may virtualize hardware and/or an OS different from the base host machine.
In order to scale cloud-based access to applications, the platform may utilize several containers, with a virtual machine running in each container. The use of containers may facilitate scaling of independent virtualized environments, whereas the use of virtual machines may facilitate running applications designed for different application environments. However, network management of the various virtualization environments (e.g., containers and/or VMs) may require assigning network addresses (e.g., internet protocol ("IP") addresses) to each virtualization environment. A conventional dynamic host configuration protocol ("DHCP") addressing scheme may assign unique network addresses to each virtualization environment without accounting for the nested architecture of the platform. Thus, a lookup table or similar additional network address management mechanism may be required to manage the network addresses and correlate VMs to their corresponding containers. Enforcing network policies or logging and investigating network behavior may then incur the additional overhead of consulting the lookup table.
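By way of illustration only, the following Python sketch approximates the bookkeeping that such a conventional scheme may impose; the address pool, the vm_to_container table, and the assign_pair function are hypothetical names assumed for this sketch and are not taken from any particular DHCP implementation.

    # Hypothetical sketch of a conventional, unstructured assignment: the
    # correlation between a VM and its container exists only in an auxiliary
    # lookup table, which every policy or audit query must then consult.
    import ipaddress
    import itertools

    # A flat address pool handed out in arrival order (illustrative only).
    _pool = (ipaddress.IPv4Address("10.0.0.1") + i for i in itertools.count())

    vm_to_container = {}  # the extra state this disclosure seeks to avoid

    def assign_pair():
        """Assign structurally unrelated addresses to a container and its VM."""
        container_ip = next(_pool)
        vm_ip = next(_pool)
        vm_to_container[vm_ip] = container_ip  # correlation recorded manually
        return container_ip, vm_ip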
The present disclosure is generally directed to dynamic container network management. As will be explained in greater detail below, embodiments of the present disclosure may use an addressing scheme for assigning IP addresses to base virtualization environments and their corresponding nested virtualization environments. The addressing scheme may provide IP addresses that, by themselves, correlate each base virtualization environment with its nested virtualization environment. Thus, the addressing scheme described herein may not require a lookup table to determine which IP address corresponds to which base or nested virtualization environment. The systems and methods described herein may improve the functioning of a computer by providing more efficient network management that may obviate the overhead associated with using a lookup table. In addition, the systems and methods described herein may improve the field of network management by providing an efficient network addressing scheme for nested virtualization environments.
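As a minimal sketch of this idea, and assuming two illustrative /16 address ranges (CONTAINER_NET, VM_NET, and the function names are hypothetical assumptions of this sketch rather than parameters prescribed by the disclosure), a nested environment's address may be computed from its base environment's address and vice versa, so that the correlation is carried by the addresses themselves:

    # Minimal sketch, under assumed address ranges: the VM address reuses
    # the container address's host bits inside a separate range, so either
    # address can be recovered from the other without any lookup table.
    import ipaddress

    CONTAINER_NET = ipaddress.ip_network("10.1.0.0/16")  # assumed base range
    VM_NET = ipaddress.ip_network("10.2.0.0/16")         # assumed nested range

    def vm_ip_for(container_ip: ipaddress.IPv4Address) -> ipaddress.IPv4Address:
        """Derive the nested VM's address from its container's address."""
        offset = int(container_ip) - int(CONTAINER_NET.network_address)
        return ipaddress.IPv4Address(int(VM_NET.network_address) + offset)

    def container_ip_for(vm_ip: ipaddress.IPv4Address) -> ipaddress.IPv4Address:
        """Invert the mapping: recover the container's address from the VM's."""
        offset = int(vm_ip) - int(VM_NET.network_address)
        return ipaddress.IPv4Address(int(CONTAINER_NET.network_address) + offset)

    # Round trip: either address recovers the other.
    assert container_ip_for(vm_ip_for(ipaddress.IPv4Address("10.1.0.5"))) \
        == ipaddress.IPv4Address("10.1.0.5")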
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The following will provide, with reference to FIGS. 1-5, detailed descriptions of dynamic container network management.
As illustrated in FIG. 1, at step 102 one or more of the systems described herein may identify a base virtualization environment on a cloud-based software distribution host.
In some embodiments, the term "virtualization environment" may refer to an isolated application environment that may virtualize at least some aspects of the application environment such that an application may interface with the virtualized aspects as if running in the application's native environment. Examples of virtualization environments include, without limitation, containers and VMs. In some embodiments, the term "container" may refer to an isolated application environment that virtualizes at least an OS of the base host machine by sharing a system or OS kernel with the host machine. For example, if the base host machine runs Windows (or another desktop OS), the container may also run Windows (or another desktop OS) by sharing the OS kernel such that the container may not require a complete set of OS binaries and libraries. In some embodiments, the term "virtual machine" may refer to an isolated application environment that virtualizes hardware as well as an OS. Because a VM may virtualize hardware, an OS for the VM may not be restricted by the base host machine OS. For example, even if the base host machine is running Windows (or another desktop OS), a VM on the base host machine may be configured to run Android (or another mobile OS) by emulating mobile device hardware. In other examples, other combinations of OSes may be used.
In some embodiments, the cloud-based software distribution host may host software applications for cloud-based access. Conventionally, software applications, particularly games, are often developed for a specific OS and require porting to run on other OSes. However, the cloud-based software distribution host described herein (also referred to herein as the cloud-based software distribution platform) may provide cloud-based access to games designed for a particular OS on a device running an OS that is otherwise incompatible with the games. For example, the platform may host a desktop game and allow a mobile device (or other device running an OS that is not supported by the game) to interact with an instance of the desktop game as if running on the mobile device. Similarly, the platform may host a mobile game and allow a desktop computer (or other device running an OS that is not supported by the game) to interact with an instance of the mobile game as if running on the desktop computer. Although the examples herein refer to games and OS incompatibility, in other examples the software applications may correspond to any software application that is not supported by or is otherwise incompatible with another computing device, whether due to OS, hardware, or other constraints.
The systems described herein may perform step 102 in a variety of ways.
In certain embodiments, one or more of modules 202 in FIG. 2 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks.
As illustrated in FIG. 2, example system 200 may include one or more memory devices, such as memory 240. Memory 240 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, memory 240 may store, load, and/or maintain one or more of modules 202.
As illustrated in FIG. 2, example system 200 may also include one or more physical processors, such as physical processor 230. Physical processor 230 generally represents any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, physical processor 230 may execute one or more of modules 202 to facilitate dynamic container network management.
As illustrated in FIG. 2, example system 200 may also include one or more additional elements 220, such as container 222, container IP address 224, virtual machine (VM) 226, and VM IP address 228.
Example system 200 in FIG. 2 may be implemented in a variety of ways. For example, all or a portion of example system 200 may represent portions of example system 300 in FIG. 3.
Server 306 may represent or include one or more servers capable of hosting a cloud-based software distribution platform. Server 306 may provide cloud-based access to software applications running in nested virtualization environments. Server 306 may include a physical processor 230, which may include one or more processors, memory 240, which may store modules 202, and one or more of additional elements 220.
Computing device 302 may be communicatively coupled to server 306 through network 304. Network 304 may represent any type or form of communication network, such as the Internet, and may include one or more physical connections, such as a local area network (LAN) connection, and/or wireless connections, such as a wireless wide area network (WWAN) connection.
Returning to FIG. 1, in one example of step 102, the base virtualization environment may have been previously initiated and may require an IP address, if not previously assigned, or may require a new IP address, for instance due to changes to the network topology. Identifying module 206 may identify container 222 as requiring assignment of an IP address.
In some examples, identifying the base virtualization environment may include initiating the base virtualization environment. For example, virtualization module 204, which may correspond to a virtualization environment management system such as a hypervisor or other virtualization or container management software, may initiate container 222. As part of initiating container 222, identifying module 206 may identify container 222 as requiring assignment of an IP address.
As illustrated in FIG. 4, computing device 402, which may correspond to an instance of computing device 302, may access application 420 via network 404. Computing device 403, which may correspond to another instance of computing device 302, may access application 422 via network 404.
Returning to FIG. 1, at step 104 one or more of the systems described herein may assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment.
In some embodiments, the term "IP address" may refer to a numerical label assigned to a device on a network for identification and location addressing. Examples of IP addresses include, without limitation, static IP addresses, which are fixed and may remain the same each time a system connects to a network, and dynamic IP addresses, which may be reassigned as needed for a network topology.
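As a brief illustration of this numerical-label character (the example address below is arbitrary), Python's standard ipaddress module exposes both the 32-bit integer form and the four-octet form of an IPv4 address:

    # An IPv4 address is a 32-bit integer with a dotted human-readable form.
    import ipaddress

    addr = ipaddress.IPv4Address("10.1.3.7")
    print(int(addr))          # 167838471 -- the underlying 32-bit value
    print(addr.packed.hex())  # '0a010307' -- the same value as four octets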
The systems described herein may perform step 104 in a variety of ways. In one example, addressing module 208 may assign container IP address 224 to container 222 based on the addressing scheme described herein.
As illustrated in FIG. 5, IP address 500 may include network identifier 510, subnet identifier 512, and host identifier 514. Similarly, IP address 502 may include network identifier 520, subnet identifier 522, and host identifier 524.
As will be described further herein, the addressing scheme may correlate IP address 500 with IP address 502 based on one or more of network identifiers 510 and 520, subnet identifiers 512 and 522, and host identifiers 514 and 524.
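For illustration, the following sketch splits a 32-bit IPv4 address into three such fields using an assumed 8-bit network, 8-bit subnet, and 16-bit host layout; the field widths are hypothetical, as the disclosure does not fix them:

    # Hypothetical 8/8/16 split into the network, subnet, and host
    # identifiers discussed above (actual field widths may differ).
    import ipaddress

    def split_fields(addr: ipaddress.IPv4Address):
        value = int(addr)
        network_id = (value >> 24) & 0xFF  # e.g., network identifier 510/520
        subnet_id = (value >> 16) & 0xFF   # e.g., subnet identifier 512/522
        host_id = value & 0xFFFF           # e.g., host identifier 514/524
        return network_id, subnet_id, host_id

    print(split_fields(ipaddress.IPv4Address("10.2.0.17")))  # (10, 2, 17)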
Turning back to FIG. 1, at step 106 one or more of the systems described herein may identify a nested virtualization environment running in the base virtualization environment. The cloud-based software distribution host may serve an application running in the nested virtualization environment. For example, identifying module 206 may identify VM 226 running in container 222.
Host 406 may serve application 420 via VM 430 and may also serve application 422 via VM 432. As further illustrated in FIG. 4, VM 430 may run in container 440 and VM 432 may run in container 442.
The systems described herein may perform step 106 in a variety of ways. In one example, the nested virtualization environment may have been previously initiated and may require an IP address, if not previously assigned, or may require a new IP address, for instance due to changes to the network topology. Identifying module 206 may identify VM 226 as requiring assignment of an IP address.
In some examples, identifying the nested virtualization environment may include initiating the nested virtualization environment. For example, virtualization module 204 may initiate VM 226. As part of initiating VM 226, identifying module 206 may identify VM 226 as requiring assignment of an IP address.
At step 108 one or more of the systems described herein may assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address. The addressing scheme may correlate the second IP address to the first IP address. For example, addressing module 208 may assign VM IP address 228 to VM 226.
The systems described herein may perform step 108 in a variety of ways. In one example, the addressing scheme may involve using the first IP address to assign the second IP address. Addressing module 208 may use container IP address 224 to determine a value for VM IP address 228. The value for container IP address 224 may be directly used for assigning the value for VM IP address 228. For example, all or a subset of container IP address 224 may directly identify VM 226. In other examples, the value for container IP address 224 may indirectly identify VM 226. For example, all or a subset of container IP address 224 may be transformed (e.g., with a hash or similar function) to identify VM 226.
Alternatively, addressing module 208 may use VM IP address 228 to determine a value for container IP address 224. For example, all or a subset of VM IP address 228 may directly or indirectly identify container 222.
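The indirect, transformed variant may be sketched as follows, with hypothetical parameters (SHA-256 as the transformation and a /16 nested range are assumptions of this sketch): the nested environment's host bits are derived from a hash of the base environment's address, so a container/VM pairing may be verified by recomputation rather than lookup, although hash collisions would need handling in practice:

    # Hypothetical sketch of an indirect (hashed) correlation: the VM's
    # host bits are derived from a hash of the container's address.
    import hashlib
    import ipaddress

    VM_NET = ipaddress.ip_network("10.2.0.0/16")  # assumed nested range

    def vm_ip_for_hashed(container_ip: ipaddress.IPv4Address) -> ipaddress.IPv4Address:
        digest = hashlib.sha256(container_ip.packed).digest()
        host_bits = int.from_bytes(digest[:2], "big")  # 16 host bits in a /16
        return ipaddress.IPv4Address(int(VM_NET.network_address) + host_bits)

    def is_pair(container_ip, vm_ip) -> bool:
        """Verify a container/VM pairing by recomputing the hash."""
        return vm_ip_for_hashed(container_ip) == vm_ip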
As illustrated in FIG. 5, the addressing scheme may correlate corresponding portions of IP address 500 and IP address 502. For example, host identifier 524 of IP address 502 may directly or indirectly identify the base virtualization environment assigned IP address 500.
In some examples, the addressing scheme may reserve a separate subnetwork address range for IP addresses of nested virtualization environments to distinguish them from base virtualization environments. For example, VM 430 and VM 432 in FIG. 4 may be assigned IP addresses from a reserved subnetwork address range that is separate from the range used for containers 440 and 442.
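Under the same illustrative /16 ranges assumed in the earlier sketches (hypothetical, for illustration only), membership in the reserved subnetwork range alone may classify an address as belonging to a base or a nested virtualization environment:

    # Classify an address as base or nested purely from its subnetwork.
    import ipaddress

    CONTAINER_NET = ipaddress.ip_network("10.1.0.0/16")  # assumed base range
    VM_NET = ipaddress.ip_network("10.2.0.0/16")         # assumed reserved range

    def classify(addr: ipaddress.IPv4Address) -> str:
        if addr in VM_NET:
            return "nested (VM)"
        if addr in CONTAINER_NET:
            return "base (container)"
        return "external"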
A subset or portion of the second IP address may identify the base virtualization environment. Alternatively or additionally, a subset or portion of the second IP address may identify the nested virtualization environment. Thus, using the addressing scheme, the first IP address may directly correlate to the second IP address. Advantageously, the addressing scheme may forego a lookup table for correlating the first IP address with the second IP address.
By using the addressing scheme described herein, host 406 may more efficiently perform network management functions for containers 440 and 442 and VMs 430 and 432. For example, host 406 may independently filter network traffic for the base virtualization environments (e.g., containers 440 and 442) and network traffic for the nested virtualization environments (e.g., VMs 430 and 432). Rather than using a lookup table to determine whether a particular IP address corresponds to a container or a VM, a subset of the particular IP address may distinguish between a container and a VM. Because host 406 may distinguish between base virtualization environments and nested virtualization environments using the IP addresses themselves, host 406 may efficiently apply a first filter protocol to base virtualization environments and independently apply a second filter protocol to nested virtualization environments. In addition, host 406 may independently enforce different network policies for base virtualization environments and nested virtualization environments. For example, host 406 may enforce a first network policy for the base virtualization environments and a second network policy for the nested virtualization environments. Additionally, tracing of network behavior may be simplified because a subset of a particular IP address may identify a nested container/VM pair without requiring a lookup table to identify such pairs.
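For illustration, the following sketch applies independent filter rules selected by subnetwork membership alone; the port rules and address ranges are hypothetical assumptions of this sketch rather than policies taken from the disclosure:

    # Hypothetical per-class filtering: the source address alone selects
    # the container-side or VM-side rule set, with no lookup table.
    import ipaddress

    CONTAINER_NET = ipaddress.ip_network("10.1.0.0/16")  # assumed base range
    VM_NET = ipaddress.ip_network("10.2.0.0/16")         # assumed nested range

    def allow_packet(src_ip: str, dst_port: int) -> bool:
        """Return True if the packet passes the (illustrative) filter."""
        addr = ipaddress.ip_address(src_ip)
        if addr in VM_NET:
            return dst_port in (80, 443)      # second filter protocol (VMs)
        if addr in CONTAINER_NET:
            return dst_port in (22, 80, 443)  # first filter protocol (containers)
        return False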
The systems and methods described herein provide dynamic container network management via an addressing scheme that correlates virtual machines to their corresponding containers. A cloud application architecture may run virtual machines on top of an existing container platform for running instances of particular hosting environments. Although the container platform may assign IP addresses for each container, each virtual machine may require its own IP address to facilitate access to external services. A conventional DHCP scheme may assign IP addresses to virtual machines in a way that may not account for the cloud application architecture such that a lookup table may be needed to determine which virtual machine address corresponds to which container address. Thus, enforcing network policies or logging and investigating network behavior may require using the lookup table. The systems and methods described herein may provide an addressing scheme that may simplify correlation between containers and virtual machines without requiring the lookup table. For example, virtual machine addresses may be assigned to a separate subnetwork address range to facilitate independent filtering of container program traffic and virtual machine traffic. In addition, the addressing scheme may allow determining the address of a container from the address of the corresponding virtual machine and vice versa.
Example 1: A computer-implemented method may include: (i) identifying a base virtualization environment on a cloud-based software distribution host, (ii) assigning, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identifying a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprises an isolated application environment that virtualizes at least an operating system (OS), and (iv) assigning, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
Example 2: The method of Example 1, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
Example 3: The method of Example 1 or 2, wherein a portion of the second IP address identifies the base virtualization environment.
Example 4: The method of Example 1, 2, or 3, wherein a portion of the second IP address identifies the nested virtualization environment.
Example 5: The method of any of Examples 1-4, wherein the addressing scheme directly correlates the first IP address to the second IP address.
Example 6: The method of any of Examples 1-5, wherein the second IP address includes a subnetwork address based on a separate subnetwork address range reserved by the addressing scheme for IP addresses of nested virtualization environments.
Example 7: The method of any of Examples 1-6, further comprising applying a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
Example 8: The method of any of Examples 1-7, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
Example 9: The method of any of Examples 1-8, wherein the base virtualization environment corresponds to a container that shares an OS kernel with the cloud-based software distribution host and the nested virtualization environment corresponds to a virtual machine (VM).
Example 10: The method of Example 9, wherein the VM corresponds to a mobile OS environment, the application corresponds to a mobile game, and the cloud-based software distribution host provides cloud-based access to an instance of the mobile game.
Example 11: A system may include: at least one physical processor, physical memory comprising computer-executable instructions that, when executed by the physical processor, may cause the physical processor to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprises an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
Example 12: The system of Example 11, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
Example 13: The system of Example 11 or 12, wherein a portion of the second IP address identifies the base virtualization environment, or a portion of the second IP address identifies the nested virtualization environment.
Example 14: The system of Example 11, 12, or 13, wherein the addressing scheme directly correlates the first IP address with the second IP address.
Example 15: The system of any of Examples 11-14, further comprising instructions that, when executed by the physical processor, cause the physical processor to: apply a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
Example 16: The system of any of Examples 11-15, further comprising instructions that, when executed by the physical processor, cause the physical processor to: enforce a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
Example 17: A non-transitory computer-readable medium that may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprises an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
Example 18: The non-transitory computer-readable medium of Example 17, wherein a portion of the second IP address identifies the base virtualization environment, or a portion of the second IP address identifies the nested virtualization environment.
Example 19: The non-transitory computer-readable medium of Example 17 or 18, wherein the addressing scheme directly correlates the first IP address to the second IP address.
Example 20: The non-transitory computer-readable medium of Example 17, 18, or 19, further comprising instructions that, when executed by the at least one processor of the computing device, may cause the computing device to: enforce a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive network address data to be transformed, transform the network address data, use the result of the transformation to assign network addresses, and store the result of the transformation to manage network addresses. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”
This application claims the benefit of U.S. Provisional Application No. 63/105,320, filed 25 Oct. 2020, and U.S. Provisional Application No. 63/194,821, filed 28 May 2021, the disclosures of each of which are incorporated, in their entirety, by this reference. Co-pending U.S. application Ser. No. 17/506,640, filed 20 Oct. 2021, is incorporated, in its entirety, by this reference.