Datacenters providing cloud computing services typically include routers, switches, bridges, and other physical network devices that interconnect a large number of servers, network storage devices, and other types of physical computing devices via wired or wireless network links. The individual servers can host one or more virtual machines or other types of virtualized components accessible to cloud computing clients. The virtual machines can exchange messages such as emails via virtual networks in accordance with one or more network protocols supported by the physical network devices.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Cloud computing can utilize multiple virtual machines on one or more servers to accommodate computation, communications, or other types of cloud service requests from users. However, virtual machines can incur a significant amount of overhead. For example, each virtual machine needs a corresponding guest operating system, virtual memory, and applications, all of which can amount to tens of gigabytes in size. In contrast, containers (e.g., Docker containers) are software packages that each contain a piece of software in a complete filesystem with everything the piece of software needs to run, such as code, runtime, system tools, and system libraries. Containers running on a single server or virtual machine can all share the same operating system kernel and can make efficient use of system and/or virtual memory.
In certain computing systems, containers on a host (e.g., a server or a virtual machine) are assigned network addresses in an isolated name space (e.g., the 172.16.0.0 address space) typically specific to the host. Multiple containers on a host can be connected together via a bridge. Network connectivity outside the host can be provided by network address translation (“NAT”) to a network address of the host (e.g., 10.0.0.1). Such an arrangement can limit the functionalities of the containers on a host. For example, network addresses assigned to containers are not routable outside the host to other containers on other hosts or to the Internet. In another example, containers may not expose arbitrary service endpoints. For instance, two containers on a host may not both expose service endpoints on port 80 because only one container can be mapped to that port on the host's network address. Containers can also be limited from running applications that dynamically negotiate a port, such as passive FTP, remote procedure call, or session initiation protocol applications.
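As an editorial illustration of this limitation, the following Python sketch models a host's NAT port-mapping table; the class and names are hypothetical and not part of the disclosed technology. Because all containers share the host's single routable address, only one container can claim a given host port:

```python
# Illustrative model of the conventional NAT arrangement: containers hold
# host-local addresses, and exposing a service requires claiming a port on
# the host's single routable address.
class NatHost:
    def __init__(self, host_ip: str):
        self.host_ip = host_ip  # the host's routable address, e.g., 10.0.0.1
        self.port_map: dict[int, tuple[str, int]] = {}  # host port -> (container IP, port)

    def expose(self, container_ip: str, container_port: int, host_port: int) -> None:
        # Only one container can be mapped to a given host port.
        if host_port in self.port_map:
            raise ValueError(f"host port {host_port} already mapped to "
                             f"{self.port_map[host_port]}")
        self.port_map[host_port] = (container_ip, container_port)

host = NatHost("10.0.0.1")
host.expose("172.16.0.2", 80, 80)  # first container exposes port 80
host.expose("172.16.0.3", 80, 80)  # second container fails: host port 80 is taken
```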
Several embodiments of the disclosed technology are directed to providing routable network addresses (e.g., IP addresses) to containers on a host. The routable network addresses can allow connections between containers on different hosts without network address translation and allow access to network services such as load balancing, route selection, etc. In certain implementations, containers are individually assigned an IP address from a virtual network (“vnet”), which can be a tenant vnet or a default vnet created on behalf of the tenant. Network traffic to/from the containers can be delivered using the assigned IP addresses directly rather than utilizing network address translation to the IP address of the host. The IP address of the host can be another address from the same or a different vnet than that associated with the containers. In certain embodiments, multiple containers on a host can all be in one vnet. In other embodiments, multiple containers on a host can belong to multiple virtual networks.
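A minimal sketch of this addressing scheme follows, assuming a tenant vnet with the CIDR block 10.1.0.0/16; the allocator class and its names are hypothetical illustrations rather than the disclosed implementation. Each container, like the host itself, draws its own routable address directly from the vnet:

```python
import ipaddress

class VnetAllocator:
    """Hands out distinct addresses from a virtual network's address block."""

    def __init__(self, cidr: str):
        self._hosts = ipaddress.ip_network(cidr).hosts()

    def assign(self) -> str:
        return str(next(self._hosts))

vnet = VnetAllocator("10.1.0.0/16")
host_ip = vnet.assign()                             # the host's own vnet address
container_ips = [vnet.assign() for _ in range(3)]   # one routable IP per container
# Traffic is delivered to these addresses directly; no NAT to host_ip occurs.
```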
Several embodiments of the disclosed technology can enable flexible and efficient implementation of networking for containers in cloud-based computing systems. For example, a container can be assigned an IP address from a vnet, and thus the IP address is routable and has full connectivity on all ports within the vnet. The container can thus be connected to another container on the same or a different host at the IP level on any suitable port. Further, the assigned IP addresses are visible to the host. As such, the containers can have access to all the software defined networking (“SDN”) capabilities currently available to virtual machines without requiring any change to the existing SDN infrastructure. Example SDN capabilities can include access control lists, routes, load balancing, on-premises connectivity, etc.
Certain embodiments of systems, devices, components, modules, routines, data structures, and processes for network virtualization of containers in datacenters or other suitable computing systems are described below. In the following description, specific details of components are included to provide a thorough understanding of certain embodiments of the disclosed technology. A person skilled in the relevant art will also understand that the technology can have additional embodiments. The technology can also be practiced without several of the details of the embodiments described below.
As used herein, the term “computing system” generally refers to an interconnected computer network having a plurality of network nodes that connect a plurality of servers or hosts to one another or to external networks (e.g., the Internet). The term “network node” generally refers to a physical network device. Example network nodes include routers, switches, hubs, bridges, load balancers, security gateways, or firewalls. A “host” generally refers to a physical computing device configured to implement, for instance, one or more virtual machines or other suitable virtualized components. For example, a host can include a server having a hypervisor configured to support one or more virtual machines or other suitable types of virtual components.
A computer network can be conceptually divided into an overlay network implemented over an underlay network. An “overlay network” generally refers to an abstracted network implemented over and operating on top of an underlay network. The underlay network can include multiple physical network nodes interconnected with one another. An overlay network can include one or more virtual networks. A “virtual network” generally refers to an abstraction of a portion of the underlay network in the overlay network. A virtual network can include one or more virtual end points referred to as “tenant sites” individually used by a user or “tenant” to access the virtual network and associated computing, storage, or other suitable resources. A tenant site can host one or more tenant end points (“TEPs”), for example, virtual machines. The virtual networks can interconnect multiple TEPs on different hosts. Virtual network nodes in the overlay network can be connected to one another by virtual links individually corresponding to one or more network routes along one or more physical network nodes in the underlay network.
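To make the relationships among these terms concrete, the following sketch models them as simple data types; the type names and fields are hypothetical editorial illustrations, not structures from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class TenantEndPoint:
    name: str                     # e.g., a virtual machine

@dataclass
class TenantSite:
    host: str                     # the physical host carrying this site
    teps: list[TenantEndPoint] = field(default_factory=list)

@dataclass
class VirtualNetwork:
    tenant: str
    sites: list[TenantSite] = field(default_factory=list)   # can span hosts

@dataclass
class UnderlayNetwork:
    nodes: list[str]              # physical routers, switches, etc.
    overlay: list[VirtualNetwork] = field(default_factory=list)

# A virtual network interconnecting TEPs located on two different hosts:
vnet = VirtualNetwork("tenant-a", [
    TenantSite("host-1", [TenantEndPoint("vm-1")]),
    TenantSite("host-2", [TenantEndPoint("vm-2")]),
])
underlay = UnderlayNetwork(["router-1", "switch-1", "switch-2"], [vnet])
```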
Also used herein, the term “container” generally refers to a software package that contains a piece of software (e.g., an application) in a complete filesystem having code (e.g., executable instructions), a runtime environment, system tools, system libraries, or other suitable components sufficient to execute the piece of software. Containers running on a single server or virtual machine can all share the same operating system kernel and can make efficient use of system or virtual memory. A container can have similar resource isolation and allocation benefits as a virtual machine. However, a different architectural approach allows a container to be much more portable and efficient than a virtual machine. For example, a virtual machine typically includes one or more applications, the necessary binaries and libraries of the applications, and an entire operating system. In contrast, a container can include an application and all of its dependencies but shares an operating system kernel with other containers on the same host. As such, containers can be more resource efficient and flexible than virtual machines. One example container is a Docker container provided by Docker, Inc. of San Francisco, Calif.
In conventional computing systems, containers on a host (e.g., a server or a virtual machine) are assigned network addresses in an isolated name space (e.g., the 172.16.0.0 address space) typically specific to the host. Multiple containers on a host can be connected together via a bridge. Network connectivity outside the host typically utilizes network address translation to a network address of the host (e.g., 10.0.0.1). Such an arrangement can limit the functionalities of the containers. For example, network addresses assigned to containers are not routable outside the host. In another example, containers may not expose arbitrary service endpoints. For instance, two containers on a host may not both expose service endpoints on port 80 because only one container can be mapped to that port on the host's network address. Containers can also be limited from running passive FTP, remote procedure call, or session initiation protocol applications that dynamically negotiate a port. Several embodiments of the disclosed technology can enable flexible and efficient implementation of networking for containers in computing systems via network virtualization, as described in more detail below.
The hosts 106 can individually be configured to provide computing, storage, and/or other suitable cloud computing services to the tenants 101. For example, as described in more detail below, each of the hosts 106 can initiate and maintain one or more virtual machines 144 upon requests from the tenants 101.
In accordance with several embodiments of the disclosed technology, the cloud controller 126 can be configured to manage instantiation of containers 145 on the hosts 106 or virtual machines 144. In certain embodiments, the cloud controller 126 can include a standalone server, desktop computer, laptop computer, or other suitable types of computing device operatively coupled to the underlay network 108. In other embodiments, the cloud controller 126 can include one of the hosts 106. In further embodiments, the cloud controller 126 can be implemented as one or more network services executing on and provided by, for example, one or more of the hosts 106 or another server (not shown). Example components of the cloud controller 126 are described in more detail below.
The first and second hosts 106a and 106b can individually contain instructions in the memory 134 that, when executed by the processors 132, cause the individual processors 132 to provide a hypervisor 140 (identified individually as first and second hypervisors 140a and 140b) and a status agent 141 (identified individually as first and second status agents 141a and 141b). Even though the hypervisor 140 and the status agent 141 are shown as separate components, in other embodiments, the status agent 141 can be a part of the hypervisor 140 or an operating system (not shown) executing on the corresponding host 106. In further embodiments, the status agent 141 can be a standalone application.
The hypervisors 140 can individually be configured to generate, monitor, terminate, and/or otherwise manage one or more virtual machines 144 organized into tenant sites 142. For example, the first hypervisor 140a on the first host 106a and the second hypervisor 140b on the second host 106b can each manage virtual machines 144 organized into corresponding tenant sites 142.
The virtual machines 144 on the virtual networks 146 can communicate with one another via the underlay network 108 even though the virtual machines 144 may be located on different hosts 106.
Components within a system can take different forms. As one example, a system comprising a first component, a second component, and a third component can, without limitation, encompass a system in which the first component is a property in source code, the second component is a binary compiled library, and the third component is a thread created at runtime. A computer program, procedure, or process may be compiled into object, intermediate, or machine code and presented for execution by one or more processors of a personal computer, a network server, a laptop computer, a smartphone, and/or other suitable computing devices.
Equally, components may include hardware circuitry. A person of ordinary skill in the art would recognize that hardware may be considered fossilized software, and software may be considered liquefied hardware. As just one example, software instructions in a component may be burned to a Programmable Logic Array circuit, or may be designed as a hardware circuit with appropriate integrated circuits. Equally, hardware may be emulated by software. Various implementations of source, intermediate, and/or object code and associated data may be stored in a computer memory that includes read-only memory, random-access memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other suitable computer readable storage media excluding propagated signals.
As shown in the figures, the cloud controller 126 can include a compute controller 127 and a network controller 129 operatively coupled to one another.
The compute controller 127 can be configured to determine a computation and/or processing demand for a user request 150 to instantiate a container 145. For example, the compute controller 127 can be configured to determine one or more of processing speed, memory usage, storage usage, or other suitable demands based on a size (e.g., application size) and/or execution characteristics (e.g., low latency, high latency, etc.) of the requested container 145. The compute controller 127 can also be configured to select a host 106 and/or a virtual machine 144 that can accommodate the requested instantiation of the container 145 based on the determined demands and the operational profiles, current workloads, or other suitable parameters of the available hosts 106 and/or virtual machines 144. In certain embodiments, the compute controller 127 can be a fabric controller such as a Microsoft Azure® controller or a portion thereof. In other embodiments, the compute controller 127 can include other suitable types of suitably configured controllers.
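The placement decision might look like the following sketch; the scoring heuristic and field names are editorial assumptions for illustration, not the compute controller's actual algorithm:

```python
# Pick a host that satisfies the determined demands, preferring the
# least-loaded candidate (one of many possible selection policies).
def select_host(demand: dict, hosts: list[dict]) -> dict:
    def fits(h: dict) -> bool:
        return (h["free_cpu"] >= demand["cpu"]
                and h["free_memory_gb"] >= demand["memory_gb"]
                and h["free_storage_gb"] >= demand["storage_gb"])

    candidates = [h for h in hosts if fits(h)]
    if not candidates:
        raise RuntimeError("no host can accommodate the requested container")
    return min(candidates, key=lambda h: h["current_load"])

hosts = [
    {"name": "host-106a", "free_cpu": 8, "free_memory_gb": 64,
     "free_storage_gb": 500, "current_load": 0.7},
    {"name": "host-106b", "free_cpu": 4, "free_memory_gb": 32,
     "free_storage_gb": 200, "current_load": 0.2},
]
chosen = select_host({"cpu": 2, "memory_gb": 8, "storage_gb": 50}, hosts)
print(chosen["name"])   # host-106b: both hosts fit, but it is less loaded
```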
The network controller 129 can be configured to determine network configurations for the requested instantiation of the container 145. As shown in the figures, the network controller 129 can include a policy component 153 configured to generate network policies for requested containers 145.
The policy component 153 can be configured to generate a network policy 156 for the requested container 145 based on the received response 154. For example, the network policy 156 can include an SDN policy that specifies, inter alia, settings for network route determination, load balancing, and/or other parameters. The network policy 156 can be applied in a virtual network node (e.g., a virtual switch) based on an assigned IP address of a container 145. The network policy 156 can also be applied in a generally consistent fashion for both containers 145 and virtual machines 144, thereby enabling seamless connectivity between containers 145 and virtual machines 144. In certain embodiments, the network policy 156 can be specified consistently for both containers 145 and virtual machines 144 using a network namespace object, which is an identifier for a container 145 or a virtual machine 144 that maps to the IP address of that container 145 or virtual machine 144. A network namespace can contain its own network resources such as network interfaces, routing tables, etc. In other embodiments, the network policy 156 can be specified differently for containers 145 and virtual machines 144 to create resource separation or for other suitable purposes.
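The following sketch illustrates the network namespace object idea under stated assumptions: a single identifier type maps either workload kind to its IP address, so one policy table covers containers and virtual machines alike. The types and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkNamespace:
    workload_id: str    # identifies a container 145 or a virtual machine 144
    ip_address: str     # the workload's vnet address

# One policy table, keyed by namespace, applies uniformly to both kinds.
policies: dict[NetworkNamespace, dict] = {}

vm_ns = NetworkNamespace("vm-144a", "10.1.0.4")
container_ns = NetworkNamespace("container-145a", "10.1.0.5")

policies[vm_ns] = {"acl": ["allow 10.1.0.0/16"], "load_balance": False}
policies[container_ns] = {"acl": ["allow 10.1.0.0/16"], "load_balance": True}
# A virtual switch could consult this table by namespace to apply the same
# routing, ACL, and load-balancing treatment to either workload type.
```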
In operation, the tenant 101 transmits the request 150 to instantiate a container 145 to the cloud controller 126 via, for example, the underlay network 108. In response, the compute controller 127 determines the computation and/or processing demands of the request 150 and selects a host 106 and/or a virtual machine 144 to accommodate the requested container 145, as described above.
Once the compute controller 127 selects the host 106 and/or the virtual machine 144, the network controller 129 transmits a query 152 to the host 106 and/or the virtual machine 144 based on, for example, an IP address of the host 106 and/or the virtual machine 144. In response to the query 152, the network agent 147 on the host 106 can provide a response 154 to the network controller 129. The response 154 can include data representing various parameters related to network operations on the host 106 and/or the virtual machine 144, such as load balancing, routing configurations, virtual network configurations (e.g., vnet identifiers), and/or other suitable types of network parameters.
The network controller 129 can then configure a network policy 156 for the requested container 145 based on the received response 154. For example, the network policy 156 can include an SDN policy that specifies, inter alia, settings for network route determination, load balancing, and/or other parameters. The network controller 129 can then instruct the host 106 and/or the virtual machine 144 to instantiate the requested container 145 based on the configured network policy 156. In certain embodiments, the network controller 129 can assign certain network settings 157 to the instantiated container 145 according to, for example, the DHCP protocol and the determined network policy 156. In other embodiments, the container engine 143 can obtain the network settings 157 by, for instance, requesting the network policy 156 from the network controller 129. The network settings 157 can include, for instance, an IP address in a virtual network 146 and/or other suitable network parameters.
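Condensing the foregoing sequence, a self-contained sketch of the control flow follows: request, placement, agent query, policy generation, settings assignment, and instantiation. Every function below is a hypothetical stand-in for the components described above (compute controller 127, network controller 129, network agent 147, and container engine 143), not their actual interfaces:

```python
import itertools

_ip_pool = (f"10.1.0.{n}" for n in itertools.count(4))   # toy vnet allocator

def select_host(request: dict) -> dict:                  # compute controller 127
    return {"name": "host-106a",
            "agent": {"vnet": "tenant-vnet", "gateway": "10.1.0.1"}}

def query_agent(host: dict) -> dict:                     # query 152 -> response 154
    return host["agent"]

def build_policy(response: dict) -> dict:                # network policy 156
    return {"vnet": response["vnet"], "gateway": response["gateway"],
            "acl": ["allow 10.1.0.0/16"]}

def instantiate_container(request: dict) -> dict:        # overall flow
    host = select_host(request)
    policy = build_policy(query_agent(host))
    settings = {"ip_address": next(_ip_pool),            # network settings 157
                "gateway": policy["gateway"],
                "acl": policy["acl"]}
    return {"host": host["name"], "image": request["image"], "settings": settings}

print(instantiate_container({"image": "web-app"}))
```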
The container engine 143 can then instantiate the requested container 145 based on the received network settings 157. In accordance with one aspect of the disclosed technology, the assigned IP address of the container 145 can include a routable vnet address exposed to virtual machines 144 and to other containers 145 on the same or different hosts 106 or virtual machines 144. For example, first and second container hosts 162a and 162b can individually host one or more containers 145 whose assigned IP addresses are routable within the same virtual network 146.
In further embodiments, the first and second container hosts 162a and 162b can individually host containers 145 that belong to different virtual networks 146. For example, a container 145 on the first container host 162a can belong to one virtual network 146 while another container 145 on the same container host 162a belongs to a different virtual network 146.
Several embodiments of the disclosed technology can enable flexible and efficient implementation of networking for containers 145 in computing systems. For example, a container 145 can be assigned an IP address from a virtual network 146, and thus the IP address is routable and has full connectivity on all ports within the virtual network 146. One container (e.g., the container 145a) can thus be connected to another container (e.g., the container 145b) on the same or a different host at the IP level on any suitable port.
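As a counterpart to the NAT sketch above, the following illustrative model (again with hypothetical names) shows why full port connectivity follows from per-container addresses: bindings are keyed by the pair (IP address, port), so two containers can both expose port 80 without colliding:

```python
# Service endpoints keyed by (IP address, port) rather than by host port.
bindings: set[tuple[str, int]] = set()

def expose(container_ip: str, port: int) -> None:
    endpoint = (container_ip, port)
    if endpoint in bindings:
        raise ValueError(f"{container_ip}:{port} is already bound")
    bindings.add(endpoint)

expose("10.1.0.5", 80)   # a first container exposes port 80
expose("10.1.0.6", 80)   # a second container also exposes port 80: no conflict
```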
Several embodiments of the disclosed technology can allow the containers 145 to communicate with each other bi-directionally when the containers 145 are hosted on the same or different virtual machines 144. For example, a container 145 on one virtual machine 144 can initiate a connection to a container 145 on a different virtual machine 144, and vice versa.
A process 200 for instantiating a container can include receiving a user request for instantiating a container and selecting a host 106 and/or a virtual machine 144 to accommodate the requested container at initial stages.
The process 200 can then include configuring network settings for the requested container at stage 206. In certain embodiments, the network settings can be configured according to a network policy 156 generated by the network controller 129, as described above.
Depending on the desired configuration, the processor 304 can be of any type including but not limited to a microprocessor (µP), a microcontroller (µC), a digital signal processor (DSP), or any combination thereof. The processor 304 can include one or more levels of caching, such as a level-one cache 310 and a level-two cache 312, a processor core 314, and registers 316. An example processor core 314 can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 318 can also be used with the processor 304, or in some implementations the memory controller 318 can be an internal part of the processor 304.
Depending on the desired configuration, the system memory 306 can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 306 can include an operating system 320, one or more applications 322, and program data 324.
The computing device 300 can have additional features or functionality, and additional interfaces to facilitate communications between basic configuration 302 and any other devices and interfaces. For example, a bus/interface controller 330 can be used to facilitate communications between the basic configuration 302 and one or more data storage devices 332 via a storage interface bus 334. The data storage devices 332 can be removable storage devices 336, non-removable storage devices 338, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. The term “computer readable storage media” or “computer readable storage device” excludes propagated signals and communication media.
The system memory 306, removable storage devices 336, and non-removable storage devices 338 are examples of computer readable storage media. Computer readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media which can be used to store the desired information and which can be accessed by the computing device 300. Any such computer readable storage media can be a part of the computing device 300. The term “computer readable storage medium” excludes propagated signals and communication media.
The computing device 300 can also include an interface bus 340 for facilitating communication from various interface devices (e.g., output devices 342, peripheral interfaces 344, and communication devices 346) to the basic configuration 302 via bus/interface controller 330. Example output devices 342 include a graphics processing unit 348 and an audio processing unit 350, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 352. Example peripheral interfaces 344 include a serial interface controller 354 or a parallel interface controller 356, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 358. An example communication device 346 includes a network controller 360, which can be arranged to facilitate communications with one or more other computing devices 362 over a network communication link via one or more communication ports 364.
The network communication link can be one example of a communication media. Communication media can typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and can include any information delivery media. A “modulated data signal” can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein can include both storage media and communication media.
The computing device 300 can be implemented as a portion of a small-form-factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions. The computing device 300 can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
Specific embodiments of the technology have been described above for purposes of illustration. However, various modifications can be made without deviating from the foregoing disclosure. In addition, many of the elements of one embodiment can be combined with other embodiments in addition to or in lieu of the elements of the other embodiments. Accordingly, the technology is not limited except as by the appended claims.
This application is a non-provisional of and claims priority to U.S. Provisional Application No. 62/309,933, filed on Mar. 17, 2016, the disclosure of which is incorporated herein in its entirety.