SERVICE NETWORK APPROACH FOR DYNAMIC CONTAINER NETWORK MANAGEMENT

Information

  • Patent Application
  • 20220129296
  • Publication Number
    20220129296
  • Date Filed
    October 21, 2021
  • Date Published
    April 28, 2022
Abstract
The disclosed computer-implemented method may include identifying a base virtualization environment on a cloud-based software distribution host. The method may also include assigning, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment. The method may further include identifying a nested virtualization environment running in the base virtualization environment. The cloud-based software distribution host may serve an application running in the nested virtualization environment. Each of the base and nested virtualization environments may include an isolated application environment that virtualizes at least an operating system. The method may additionally include assigning, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address. The addressing scheme correlates the second IP address to the first IP address. Various other methods, systems, and computer-readable media are also disclosed.
Description
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.



FIG. 1 is a flow diagram of an exemplary method for dynamic container network management.



FIG. 2 is a block diagram of an exemplary system for dynamic container network management.



FIG. 3 is a block diagram of an exemplary network for dynamic container network management.



FIG. 4 is a block diagram of an exemplary cloud-based application platform.



FIG. 5 is a block diagram of exemplary network addresses for dynamic container network management.


Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.







DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

A cloud-based software distribution platform may provide users with cloud-based access to applications running remotely on the platform. The cloud-based software distribution platform may allow a user to use his or her own device to connect with the platform and access applications as if running on the user's device. The platform may further allow the user to run applications regardless of a type or operating system (“OS”) of the user's device as well as an intended operating environment of the application. For example, the user may use a mobile device to run applications designed for a desktop computing environment. Even if the application may not natively be run on the user's device, the platform may provide cloud-based access to the application.


The cloud-based software distribution platform may provide such inter-device application access via nested containers and/or virtual machines (“VM”). For example, the platform may run one or more containers as base virtualization environments, each of which may host a VM as nested virtualization environments. A container may provide an isolated application environment that virtualizes at least an OS of the base host machine by sharing a system or OS kernel with the host machine. A virtual machine may provide an isolated application environment that virtualizes hardware as well as an OS. Although a VM may be more resource-intensive than a container, a VM may virtualize hardware and/or an OS different from the base host machine.


In order to scale cloud-based access to applications, the platform may utilize several containers, with a virtual machine running in each container. The use of containers may facilitate scaling of independent virtualized environments, whereas the use of virtual machines may facilitate running applications designed for different application environments. However, network management of the various virtualization environments (e.g., containers and/or VMs) may require assigning network addresses (e.g., internet protocol (“IP”) addresses) to each virtualization environment. A conventional dynamic host configuration protocol (“DHCP”) addressing scheme may assign unique network addresses to each virtualization environment without accounting for the nested architecture of the platform. Thus, a lookup table or other similar additional network address management may be required to manage the network addresses and correlate VMs to their corresponding containers. Enforcing network policies or logging and investigating network behavior may therefore incur additional overhead from consulting the lookup table.


The present disclosure is generally directed to dynamic container network management. As will be explained in greater detail below, embodiments of the present disclosure may use an addressing scheme for assigning IP addresses to base virtualization environments and their corresponding nested virtualization environments. The addressing scheme may provide for IP addresses that may correlate, using the IP addresses themselves, the base virtualization environment with the nested virtualization environment. Thus, the addressing scheme described herein may not require using a lookup table to determine which IP address corresponds to which base or nested virtualization environment. The systems and methods described herein may improve the functioning of a computer by providing more efficient network management that may obviate the overhead associated with using a lookup table. In addition, the systems and methods described herein may improve the field of network management by providing an efficient network addressing scheme for nested virtualization environments.
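As a rough illustration of such a scheme (the layout below, with the third octet acting as the subnet identifier and the fourth octet as a shared host identifier, is an assumption made for the sketch rather than a required embodiment), a container address and its nested VM address may be derived from one another without any lookup table:

```python
import ipaddress

# Minimal sketch, assuming a 10.0.0.0/16 platform network in which the third
# octet is the subnet identifier (containers on subnet 1, VMs on subnet 2) and
# the fourth octet is a host identifier shared by each container/VM pair.
CONTAINER_SUBNET = 1
VM_SUBNET = 2

def vm_address_for(container_ip: str) -> ipaddress.IPv4Address:
    """Derive the nested VM's address by swapping the subnet octet while
    keeping the host octet, so the pair is encoded in the addresses themselves."""
    octets = str(ipaddress.IPv4Address(container_ip)).split(".")
    octets[2] = str(VM_SUBNET)
    return ipaddress.IPv4Address(".".join(octets))

def container_address_for(vm_ip: str) -> ipaddress.IPv4Address:
    """Inverse mapping: recover the container's address from the VM's address."""
    octets = str(ipaddress.IPv4Address(vm_ip)).split(".")
    octets[2] = str(CONTAINER_SUBNET)
    return ipaddress.IPv4Address(".".join(octets))

print(vm_address_for("10.0.1.37"))         # 10.0.2.37
print(container_address_for("10.0.2.37"))  # 10.0.1.37
```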


Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.


The following will provide, with reference to FIGS. 1-5, detailed descriptions of dynamic container network management. FIG. 1 illustrates a method for dynamic container network management. FIG. 2 illustrates a system for performing the methods described herein. FIG. 3 illustrates a network environment for dynamic container network management. FIG. 4 illustrates a cloud-based software distribution platform. FIG. 5 illustrates an exemplary addressing scheme.



FIG. 1 is a flow diagram of an exemplary computer-implemented method 100 for dynamic container network management. The steps shown in FIG. 1 may be performed by any suitable computer-executable code and/or computing system, including the system(s) illustrated in FIGS. 2 and/or 3. In one example, each of the steps shown in FIG. 1 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below.


As illustrated in FIG. 1, at step 102 one or more of the systems described herein may identify a base virtualization environment on a cloud-based software distribution host. The virtualization environment may be an isolated application environment that virtualizes at least an OS. For example, identifying module 206 may identify container 222 (e.g., the base virtualization environment).


In some embodiments, the term “virtualization environment” may refer to an isolated application environment that may virtualize at least some aspects of the application environment such that an application may interface with the virtualized aspects as if running on the application's native environment. Examples of virtualization environments include, without limitation, containers and VMs. In some embodiments, the term “container” may refer to an isolated application environment that virtualizes at least an OS of the base host machine by sharing a system or OS kernel with the host machine. For example, if the base host machine runs Windows (or other desktop OS), the container may also run Windows (or other desktop OS) by sharing the OS kernel such that the container may not require a complete set of OS binaries and libraries. In some embodiments, the term “virtual machine” may refer to an isolated application environment that virtualizes hardware as well as an OS. Because a VM may virtualize hardware, an OS for the VM may not be restricted by the base host machine OS. For example, even if the base host machine is running Windows (or another desktop OS), a VM on the base host machine may be configured to run Android (or other mobile OS) by emulating mobile device hardware. In other examples, other combinations of OSes may be used.


In some embodiments, the cloud-based software distribution host may host software applications for cloud-based access. Conventionally, software applications, particularly games, are often developed for a specific OS and require porting to run on other OSes. However, the cloud-based software distribution host described herein (also referred to as the cloud-based software distribution platform herein) may provide cloud-based access to games designed for a particular OS on a device running an otherwise incompatible OS for the games. For example, the platform may host a desktop game and allow a mobile device (or other device running an OS that is not supported by the game) to interact with an instance of the desktop game as if running on the mobile device. Similarly, the platform may host a mobile game and allow a desktop computer (or other device running an OS that is not supported by the game) to interact with an instance of the mobile game as if running on the desktop computer. Although the examples herein refer to games as well as OS incompatibility, in other examples the software applications may correspond to any software application that may not be supported or is otherwise incompatible with another computing device, including but not limited to OS, hardware, etc.


Various systems described herein may perform step 102. FIG. 2 is a block diagram of an example system 200 for dynamic container network management. As illustrated in this figure, example system 200 may include one or more modules 202 for performing one or more tasks. As will be explained in greater detail herein, modules 202 may include a virtualization module 204, an identifying module 206, an addressing module 208, and a networking module 210. Although illustrated as separate elements, one or more of modules 202 in FIG. 2 may represent portions of a single module or application.


In certain embodiments, one or more of modules 202 in FIG. 2 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, and as will be described in greater detail below, one or more of modules 202 may represent modules stored and configured to run on one or more computing devices, such as the devices illustrated in FIG. 3 (e.g., computing device 302 and/or server 306). One or more of modules 202 in FIG. 2 may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.


As illustrated in FIG. 2, example system 200 may also include one or more memory devices, such as memory 240. Memory 240 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, memory 240 may store, load, and/or maintain one or more of modules 202. Examples of memory 240 include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, and/or any other suitable storage memory.


As illustrated in FIG. 2, example system 200 may also include one or more physical processors, such as physical processor 230. Physical processor 230 generally represents any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, physical processor 230 may access and/or modify one or more of modules 202 stored in memory 240. Additionally or alternatively, physical processor 230 may execute one or more of modules 202 to facilitate dynamic container network management. Examples of physical processor 230 include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, and/or any other suitable physical processor.


As illustrated in FIG. 2, example system 200 may also include one or more additional elements 220, such as a container 222, a container IP address 224, a virtual machine 226, and a VM IP address 228. Container 222, container IP address 224, VM 226, and/or VM IP address 228 may be stored on and/or executed from a local storage device, such as memory 240, or may be accessed remotely. Container 222 may represent a base virtualization environment, as will be explained further below. Container IP address 224 may represent a network address assigned to container 222 according to an addressing scheme described herein. VM 226 may represent a nested virtualization environment running in container 222. VM IP address 228 may represent a network address assigned to VM 226 according to the addressing scheme, as will be explained further below.


Example system 200 in FIG. 2 may be implemented in a variety of ways. For example, all or a portion of example system 200 may represent portions of example network environment 300 in FIG. 3.



FIG. 3 illustrates an exemplary network environment 300 implementing aspects of the present disclosure. The network environment 300 includes computing device 302, a network 304, and server 306. Computing device 302 may be a client device or user device, such as a mobile device, a desktop computer, laptop computer, tablet device, smartphone, or other computing device. Computing device 302 may include a physical processor 230, which may be one or more processors, and a memory 240, which may store data such as one or more of additional elements 220 and/or modules 202.


Server 306 may represent or include one or more servers capable of hosting a cloud-based software distribution platform. Server 306 may provide cloud-based access to software applications running in nested virtualization environments. Server 306 may include a physical processor 230, which may include one or more processors, memory 240, which may store modules 202, and one or more of additional elements 220.


Computing device 302 may be communicatively coupled to server 306 through network 304. Network 304 may represent any type or form of communication network, such as the Internet, and may comprise one or more physical connections, such as a LAN, and/or wireless connections, such as a WAN.


Returning to FIG. 1, the systems described herein may perform step 102 in a variety of ways. In one example, the base virtualization environment may correspond to a container (e.g., container 222) that shares an OS kernel with the cloud-based software distribution host and, as will be described further below, the nested virtualization environment may correspond to a virtual machine (e.g., VM 226) running in the container. The base virtualization environment may have been previously initiated and may require an IP address, if not previously assigned, or may require a new IP address, for instance due to changes to a network topology or changes to the nested virtualization environment. Identifying module 206 may identify container 222 as requiring assignment of an IP address.


In some examples, identifying the base virtualization environment may include initiating the base virtualization environment. For example, virtualization module 204, which may correspond to a virtualization environment management system such as a hypervisor or other virtualization or container management software, may initiate container 222. As part of initiating container 222, identifying module 206 may identify container 222 as requiring assignment of an IP address.



FIG. 4 illustrates an exemplary cloud-based software distribution platform 400. The platform 400 may include a host 406, a network 404 (which may correspond to network 304), and computing devices 402 and 403. Host 406, which may correspond to server 306, may include containers 440 and 442, which may respectively include a virtual machine 430 and a virtual machine 432. VM 430 may run an application 420 and VM 432 may run an application 422. Host 406 may utilize nested virtualization environments (e.g., VM 430 running in container 440 and VM 432 running in container 442) to more efficiently manage virtualization environments. For instance, as VMs are initiated and/or closed, the nested virtualization may facilitate management of virtualization environments for various types of VMs as well as more efficient scaling of the number of VMs running concurrently. Certain aspects that may be global across certain VMs may be better managed via containers.


Computing device 402, which may correspond to an instance of computing device 302, may access application 420 via network 404. Computing device 403, which may correspond to an instance of computing device 302, may access application 422 via network 404.


Returning to FIG. 1, at step 104 one or more of the systems described herein may assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment. For example, addressing module 208 may assign container IP address 224 to container 222.


In some embodiments, the term “IP address” may refer to a numerical label assigned to a device on a network for identification and location addressing. Examples of IP addresses include, without limitation, static IP addresses that are fixed and may remain the same each time a system connects to a network and dynamic IP addresses that may be reassigned as needed for a network topology.


The systems described herein may perform step 104 in a variety of ways. In one example, addressing module 208 may assign container IP address 224 to container 222 based on the addressing scheme described herein. FIG. 5 illustrates a first IP address 500, which may correspond to container IP address 224, and a second IP address 502, which may correspond to VM IP address 228.


As illustrated in FIG. 5, IP address 500 may include a network identifier 510, a subnet identifier 512, and a host identifier 514. IP address 502 may include a network identifier 520, a subnet identifier 522, and a host identifier 524. In some embodiments, the term “network identifier” may refer to a network number or routing prefix for routing network traffic to associated routers. In some embodiments, the term “subnet identifier” may refer to an identifier for a subnetwork of a particular network (e.g., a network identified by the network identifier). In some embodiments, the term “host identifier” may refer to an identifier for a particular host device on the subnetwork. This host device may correspond to a physical host device as well as virtual host devices, such as a container, VM, etc.
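For a concrete sense of how an address might decompose into these three identifiers (the octet boundaries below are assumptions for illustration; an actual scheme may place the boundaries anywhere), see the following sketch:

```python
import ipaddress

# Minimal sketch: treat the first two octets as the network identifier, the
# third octet as the subnet identifier, and the fourth octet as the host
# identifier. The split is illustrative, not mandated by the disclosure.
def split_identifiers(ip: str):
    octets = [int(o) for o in str(ipaddress.IPv4Address(ip)).split(".")]
    network_id = (octets[0], octets[1])  # routing prefix for the platform network
    subnet_id = octets[2]                # subnetwork within the platform network
    host_id = octets[3]                  # particular (physical or virtual) host
    return network_id, subnet_id, host_id

print(split_identifiers("10.0.1.37"))  # ((10, 0), 1, 37)
```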


As will be described further herein, the addressing scheme may correlate IP address 500 with IP address 502 based on one or more of network identifiers 510 and 520, subnet identifiers 512 and 522, and host identifiers 514 and 524.


Turning back to FIG. 1, at step 106 one or more of the systems described herein may identify a nested virtualization environment running in the base virtualization environment. The cloud-based software distribution host may serve an application running in the nested virtualization environment. For example, identifying module 206 may identify VM 226 running in container 222. In another example, identifying module 206, as part of host 406, may identify VM 430 running in container 440, and/or VM 432 running in container 442.


Host 406 may serve application 420 via VM 430 and may also serve application 422 via VM 432. As further illustrated in FIG. 4, computing device 402 may access and virtually run application 420 served by host 406. In some examples, application 420 may be an application that is not configured to run in a native application environment of computing device 402. For example, application 420 may be a mobile app for running on a mobile device OS, and computing device 402 may be a desktop computer or a mobile device with an OS incompatible with application 420. However, as seen in FIG. 4, host 406 may run VM 430 capable of running application 420. Host 406 may provide computing device 402 with cloud-based access to application 420, for instance, by receiving inputs (e.g., user inputs, device information, commands, etc.) from computing device 402, converting the inputs for use with application 420, and providing outputs (e.g., graphical outputs, commands, etc.) from application 420 to computing device 402. Thus, a user of computing device 402 may use application 420 as if running on computing device 402 even if much of the processing for application 420 is performed on host 406. Similarly, host 406 may provide computing device 403 with cloud-based access to application 422.
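The relay pattern just described (receive, translate, inject, return) might loosely resemble the following sketch; every function name and value here is a hypothetical stand-in, not an API of the platform:

```python
from typing import Callable, Iterable

# Hypothetical host-side relay loop: forward client inputs into the VM-hosted
# application and stream its outputs back to the client device.
def relay_session(
    client_events: Iterable[dict],
    translate_input: Callable[[dict], dict],
    inject_into_vm: Callable[[dict], None],
    capture_vm_output: Callable[[], bytes],
    send_to_client: Callable[[bytes], None],
) -> None:
    for event in client_events:                  # e.g., touch, key, or gamepad input
        inject_into_vm(translate_input(event))   # convert to the VM OS's input format
        send_to_client(capture_vm_output())      # e.g., a rendered frame or audio chunk

# Toy usage with in-memory stand-ins:
outputs = []
relay_session(
    client_events=[{"type": "tap", "x": 10, "y": 20}],
    translate_input=lambda e: {"motion_event": e},
    inject_into_vm=lambda native: None,
    capture_vm_output=lambda: b"frame-bytes",
    send_to_client=outputs.append,
)
print(outputs)  # [b'frame-bytes']
```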


The systems described herein may perform step 106 in a variety of ways. In one example, the nested virtualization environment may have been previously initiated and may require an IP address, if not previously assigned, or may require a new IP address, for instance due to changes to the network topology. Identifying module 206 may identify VM 226 as requiring assignment of an IP address.


In some examples, identifying the nested virtualization environment may include initiating the nested virtualization environment. For example, virtualization module 204 may initiate VM 226. As part of initiating VM 226, identifying module 206 may identify VM 226 as requiring assignment of an IP address.


At step 108 one or more of the systems described herein may assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address. The addressing scheme may correlate the second IP address to the first IP address. For example, addressing module 208 may assign VM IP address 228 to VM 226.


The systems described herein may perform step 108 in a variety of ways. In one example, the addressing scheme may involve using the first IP address to assign the second IP address. Addressing module 208 may use container IP address 224 to determine a value for VM IP address 228. The value for container IP address 224 may be directly used for assigning the value for VM IP address 228. For example, all or a subset of container IP address 224 may directly identify VM 226. In other examples, the value for container IP address 224 may indirectly identify VM 226. For example, all or a subset of container IP address 224 may be transformed (e.g., with a hash or similar function) to identify VM 226.


Alternatively, addressing module 208 may use VM IP address 228 to determine a value for container IP address 224. For example, all or a subset of VM IP address 228 may directly or indirectly identify container 222.
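Where the correlation is indirect (e.g., derived with a hash as suggested above), the derivation might look like the following sketch; the reserved range and hashing choice are assumptions for illustration:

```python
import hashlib
import ipaddress

# Hypothetical /24 range reserved for VM addresses; the VM's host identifier is
# derived by hashing the container's address rather than copying it directly.
VM_RANGE = ipaddress.IPv4Network("10.0.2.0/24")

def vm_address_from_hash(container_ip: str) -> ipaddress.IPv4Address:
    digest = hashlib.sha256(container_ip.encode()).digest()
    host_id = 1 + digest[0] % 254   # fold the hash into a usable host field
    return VM_RANGE[host_id]        # index into the reserved VM range

print(vm_address_from_hash("10.0.1.37"))
```

Note that a hash-based mapping remains lookup-table-free in the forward direction, but correlating in reverse would mean recomputing the hash for candidate container addresses (or choosing an invertible transform instead).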


As illustrated in FIG. 5, a subset of first IP address 500 (which may correspond to container IP address 224 or VM IP address 228) may correlate to second IP address 502 (which may correspond to VM IP address 228 or container IP address 224). For example, a subset of first IP address 500 (e.g., subnet identifier 512 and/or host identifier 514) may correspond to at least a portion of second IP address 502.


In some examples, the addressing scheme may reserve a separate subnetwork address range for IP addresses of nested virtualization environments to distinguish them from base virtualization environments. For example, VM 430 and VM 432 in FIG. 4 may be assigned to a subnetwork address range separate from that of container 440 and container 442. The subnet identifier may indicate whether the IP address corresponds to a virtual machine or a container. In such examples, the host identifier may match for matching containers and VMs. For instance, if first IP address 500 is assigned to container 440 and second IP address 502 is assigned to VM 430, then subnet identifier 512 may indicate that first IP address 500 corresponds to a container and subnet identifier 522 may indicate that second IP address 502 corresponds to a VM. Host identifier 514 may match (e.g., be the same as or otherwise complement) host identifier 524 to indicate a nested pair of virtualization environments.
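A minimal sketch of this reserved-range variant, assuming two hypothetical /24 subnets and a last-octet host identifier, might check for a nested pair as follows:

```python
import ipaddress

# Assumed reserved ranges: base environments (containers) on 10.0.1.0/24 and
# nested environments (VMs) on 10.0.2.0/24. The concrete subnets are illustrative.
CONTAINER_RANGE = ipaddress.IPv4Network("10.0.1.0/24")
VM_RANGE = ipaddress.IPv4Network("10.0.2.0/24")

def is_nested_pair(container_ip: str, vm_ip: str) -> bool:
    """True when the addresses sit in their respective reserved subnets and
    share the same host identifier (the last octet, in this sketch)."""
    c, v = ipaddress.IPv4Address(container_ip), ipaddress.IPv4Address(vm_ip)
    return (c in CONTAINER_RANGE and v in VM_RANGE
            and str(c).split(".")[-1] == str(v).split(".")[-1])

print(is_nested_pair("10.0.1.37", "10.0.2.37"))  # True: a container/VM pair
print(is_nested_pair("10.0.1.37", "10.0.2.38"))  # False: host identifiers differ
```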


A subset or portion of the second IP address may identify the base virtualization environment. Alternatively or additionally, a subset or portion of the second IP address may identify the nested virtualization environment. Thus, using the addressing scheme, the first IP address may directly correlate to the second IP address. Advantageously, the addressing scheme may forego a lookup table for correlating the first IP address with the second IP address.


By using the addressing scheme described herein, host 406 may more efficiently perform network management functions for containers 440 and 442 and VMs 430 and 432. For example, host 406 may independently filter network traffic for the base virtualization environments (e.g., containers 440 and 442) and network traffic for the nested virtualization environments (e.g., VMs 430 and 432). Rather than using a lookup table to determine whether a particular IP address corresponds to a container or a VM, a subset of the particular IP address may distinguish between a container and a VM. Because host 406 may distinguish between base virtualization environments and nested virtualization environments using the IP addresses themselves, host 406 may efficiently apply a first filter protocol to base virtualization environments and independently apply a second filter protocol to nested virtualization environments. In addition, host 406 may independently enforce different network policies for base virtualization environments and nested virtualization environments. For example, host 406 may enforce a first network policy for the base virtualization environments and a second network policy for the nested virtualization environments. Additionally, tracing of network behavior may be simplified because a subset of a particular IP address may identify a nested container/VM pair without requiring a lookup table to identify such pairs.
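One way such address-keyed filtering might be expressed (the ranges and policy contents below are hypothetical and only illustrate keying a policy off the subnet identifier) is sketched here:

```python
import ipaddress

# Assumed reserved ranges, matching the earlier sketches.
CONTAINER_RANGE = ipaddress.IPv4Network("10.0.1.0/24")
VM_RANGE = ipaddress.IPv4Network("10.0.2.0/24")

# Illustrative policy contents: a first policy for base environments and a
# second, different policy for nested environments.
FILTER_POLICIES = {
    "container": {"allow_external_egress": False, "log_traffic": True},
    "vm": {"allow_external_egress": True, "log_traffic": False},
}

def policy_for(ip: str) -> dict:
    """Pick a filter policy directly from the address; no lookup table is consulted."""
    addr = ipaddress.IPv4Address(ip)
    if addr in CONTAINER_RANGE:
        return FILTER_POLICIES["container"]
    if addr in VM_RANGE:
        return FILTER_POLICIES["vm"]
    raise ValueError(f"{ip} is outside the platform's reserved ranges")

print(policy_for("10.0.2.37"))  # selects the nested-environment (VM) policy
```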


The systems and methods described herein provide dynamic container network management via an addressing scheme that correlates virtual machines to their corresponding containers. A cloud application architecture may run virtual machines on top of an existing container platform for running instances of particular hosting environments. Although the container platform may assign IP addresses for each container, each virtual machine may require its own IP address to facilitate access to external services. A conventional DHCP scheme may assign IP addresses to virtual machines in a way that may not account for the cloud application architecture such that a lookup table may be needed to determine which virtual machine address corresponds to which container address. Thus, enforcing network policies or logging and investigating network behavior may require using the lookup table. The systems and methods described herein may provide an addressing scheme that may simplify correlation between containers and virtual machines without requiring the lookup table. For example, virtual machine addresses may be assigned to a separate subnetwork address range to facilitate independent filtering of container program traffic and virtual machine traffic. In addition, the addressing scheme may allow determining the address of a container from the address of the corresponding virtual machine and vice versa.


EXAMPLE EMBODIMENTS

Example 1: A computer-implemented method may include: (i) identifying a base virtualization environment on a cloud-based software distribution host, (ii) assigning, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identifying a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assigning, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.


Example 2: The method of Example 1, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.


Example 3: The method of Example 1 or 2, wherein a portion of the second IP address identifies the base virtualization environment.


Example 4: The method of Example 1, 2, or 3, wherein a portion of the second IP address identifies the nested virtualization environment.


Example 5: The method of any of Examples 1-4, wherein the addressing scheme directly correlates the first IP address to the second IP address.


Example 6: The method of any of Examples 1-5, wherein the second IP address includes a subnetwork address based on a separate subnetwork address range reserved by the addressing scheme for IP addresses of nested virtualization environments.


Example 7: The method of any of Examples 1-6, further comprising applying a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.


Example 8: The method of any of Examples 1-7, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.


Example 9: The method of any of Examples 1-8, wherein the base virtualization environment corresponds to a container that shares an OS kernel with the cloud-based software distribution host and the nested virtualization environment corresponds to a virtual machine (VM).


Example 10: The method of any of Examples 1-9, wherein the VM corresponds to a mobile OS environment, the application corresponds to a mobile game, and the cloud-based software distribution host provides cloud-based access to an instance of the mobile game.


Example 11: A system may include: at least one physical processor, physical memory comprising computer-executable instructions that, when executed by the physical processor, may cause the physical processor to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.


Example 12: The system of Example 11, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.


Example 13: The system of Example 11 or 12, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.


Example 14: The system of Example 11, 12, or 13, wherein the addressing scheme directly correlates the first IP address with the second IP address.


Example 15: The system of any of Examples 11-14, further comprising instructions that, when executed by the physical processor, cause the physical processor to: apply a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.


Example 16: The system of any of Examples 11-15, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.


Example 17: A non-transitory computer-readable medium that may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.


Example 18: The non-transitory computer-readable medium of Example 17, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.


Example 19: The non-transitory computer-readable medium of Example 17 or 18, wherein the addressing scheme directly correlates the first IP address to the second IP address.


Example 20: The non-transitory computer-readable medium of Example 17, 18, or 19, further comprising instructions that, when executed by the at least one processor of the computing device, may cause the computing device to: enforce a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.


As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.


In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.


In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.


Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.


In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive network address data to be transformed, transform the network address data, use the result of the transformation to assign network addresses, and store the result of the transformation to manage network addresses. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.


In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.


The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.


The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.


Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”

Claims
  • 1. A computer-implemented method comprising: identifying a base virtualization environment on a cloud-based software distribution host;assigning, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment;identifying a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment; andeach of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS); andassigning, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
  • 2. The method of claim 1, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
  • 3. The method of claim 1, wherein a portion of the second IP address identifies the base virtualization environment.
  • 4. The method of claim 1, wherein a portion of the second IP address identifies the nested virtualization environment.
  • 5. The method of claim 1, wherein the addressing scheme directly correlates the first IP address to the second IP address.
  • 6. The method of claim 1, wherein the second IP address includes a subnetwork address based on a separate subnetwork address range reserved by the addressing scheme for IP addresses of nested virtualization environments.
  • 7. The method of claim 1, further comprising applying a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
  • 8. The method of claim 1, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
  • 9. The method of claim 1, wherein the base virtualization environment corresponds to a container that shares an OS kernel with the cloud-based software distribution host and the nested virtualization environment corresponds to a virtual machine (VM).
  • 10. The method of claim 1, wherein the VM corresponds to a mobile OS environment, the application corresponds to a mobile game, and the cloud-based software distribution host provides cloud-based access to an instance of the mobile game.
  • 11. A system comprising: at least one physical processor;physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: identify a base virtualization environment on a cloud-based software distribution host;assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment;identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment; andeach of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS); andassign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
  • 12. The system of claim 11, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
  • 13. The system of claim 11, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.
  • 14. The system of claim 11, wherein the addressing scheme directly correlates the first IP address to the second IP address.
  • 15. The system of claim 11, further comprising instructions that, when executed by the physical processor, cause the physical processor to: apply a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
  • 16. The system of claim 11, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
  • 17. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: identify a base virtualization environment on a cloud-based software distribution host;assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment;identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment; andeach of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS); andassign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
  • 18. The non-transitory computer-readable medium of claim 17, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.
  • 19. The non-transitory computer-readable medium of claim 17, wherein the addressing scheme directly correlates the first IP address to the second IP address.
  • 20. The non-transitory computer-readable medium of claim 17, further comprising instructions that, when executed by the at least one processor of the computing device, cause the computing device to: enforce a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
CROSS REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 63/105,320, filed 25 Oct. 2020, and U.S. Provisional Application No. 63/194,821, filed 28 May 2021, the disclosures of each of which are incorporated, in their entirety, by this reference. Co-pending U.S. application Ser. No. 17/506,640, filed 20 Oct. 2021, is incorporated, in its entirety, by this reference.

Provisional Applications (2)
Number Date Country
63194821 May 2021 US
63105320 Oct 2020 US