Hardware-isolated virtualization environments (HIVEs) have seen increasing use for reasons such as security, administrative convenience, portability, and maximizing utilization of hardware assets. HIVEs are provided by virtualization environments or virtualization layers such as type-1 and type-2 hypervisors, kernel-based virtualization modules, and the like. Examples of HIVEs include virtual machines (VMs) and containers. However, the distinction between types of HIVEs has blurred, and there are many architectures for providing isolated access to virtualized hardware. For convenience, the term “hypervisor” will be used herein to refer to any architecture or virtualization model that virtualizes hardware access for HIVEs such as VMs and containers. Virtual machine managers (VMMs), container engines, and kernel-based virtualization modules are some examples of hypervisors.
Most hypervisors provide their HIVEs with virtualized access to the networking resources of the host on which they execute. Guest software executing in a HIVE is presented with a virtual network interface card (vNIC). The vNIC is backed by a physical NIC (pNIC). The virtualization models implemented by prior hypervisors have used a bifurcated network stack state, where there is one network stack and state in the HIVE and a separate network stack and state on the host. The host network hardware, stack, and state are fully opaque to the guest software in a HIVE. The primary network functionality available to the guest software has been external connectivity. The networking hardware and software components involved in providing that connectivity have been hidden from the HIVE and its guest software. Moreover, much of the information about the external network that is available at the host is unavailable in the HIVE. In sum, previous hypervisors have not provided the fidelity and network visibility that many applications require to perform their full functionality from within a HIVE. As observed only by the inventors and explained below, this opacity can affect the network performance, security and policy behavior, cost implications, and network functionality of many types of applications when they run in a HIVE.
Regarding network performance, because prior virtualization models have provided mainly network connectivity, the networking information needed for many applications to perform in a network-cognizant manner has not been available when executing within a HIVE. Telecommunication applications for video or voice calls are usually designed to query for network interfaces and their properties and may adjust their behavior based on the presence or absence of a media type (e.g., a WiFi (Wireless Fidelity) or mobile broadband NIC). For these types of applications to be able to perform their full functionality, the HIVE would need a representation of all the media types that are present on the host. Many applications will adjust their behavior, and may display additional user interface information, if they detect that their network traffic is being routed over a costed network (i.e., when data usage fees may apply). Some applications look specifically for cellular interfaces; their code invokes system-provided interfaces that expose a cost flag, and they hard-code different policies for connections over a cellular media type. Some synchronization engines and background transfer engines of operating systems may look specifically to the available media type to determine what type of updates to download, when to download them, how much bandwidth to consume, and so forth. In addition, in many cases hiding the host stack from the HIVE implies more layers of indirection and a longer data path, which degrades performance.
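The following is a minimal sketch, in Python, of the kind of media-type- and cost-aware logic such applications implement. The Interface record, its field names, and the sample data are hypothetical stand-ins for platform interface-enumeration APIs, not any specific operating system interface:

    from dataclasses import dataclass

    @dataclass
    class Interface:
        name: str
        media_type: str    # e.g., "ethernet", "wifi", "mobile_broadband"
        is_connected: bool
        is_costed: bool    # True when data usage fees may apply

    def choose_call_quality(interfaces):
        """Pick media quality from the media types visible to the application."""
        media = {i.media_type for i in interfaces if i.is_connected}
        if "wifi" in media:
            return "high"      # assume flat-rate bandwidth
        if "mobile_broadband" in media:
            return "low"       # costed cellular link: conserve data
        return "medium"

    # What the host sees vs. what a HIVE with one synthetic vNIC sees.
    host_view = [Interface("wlan0", "wifi", True, False),
                 Interface("wwan0", "mobile_broadband", True, True)]
    hive_view = [Interface("veth0", "ethernet", True, False)]

    print(choose_call_quality(host_view))              # "high": WiFi is visible
    print(choose_call_quality(hive_view))              # "medium": media types hidden
    print(any(i.is_costed for i in hive_view))         # False: costed link invisible

As the last two lines illustrate, the same application logic chooses differently inside the HIVE because the host's media types and cost flags are not represented there.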
With respect to the security and policy behavior of guest software or applications running within a HIVE, some applications have specific requirements to use cost-free interfaces, or may need to use a specific Mobile Operator (MO) interface. However, cost is usually exposed at the interface granularity, so if only a single generic interface is exposed in a HIVE, then one of these two types of applications will be broken at any given time. Consider that VPNs may support split tunnels where, per policy, some traffic must be routed over a VPN interface and some traffic may need to be routed over a non-VPN interface. Without sufficient interface information within a HIVE, the software cannot implement the policy. There may be policies that force specific applications to bind to VPN interfaces. If there is only a single interface in the HIVE, an application will not know where to bind, and, if it binds to the single interface inside the HIVE, it will not have enough information to bind again to the VPN interface in the host. Moreover, the HIVE may also be running applications that do not use the VPN, and hence the VPN interface cannot simply be excluded from the HIVE. Another security consideration is that host interfaces that applications running in a HIVE should not use can simply not be connected to the HIVE, in which case the interface does not exist from the HIVE's perspective.
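A minimal sketch of such split-tunnel binding follows, using illustrative addresses for a VPN interface and a non-VPN interface. In a HIVE that exposes only one generic interface, the VPN source address below would not exist, so the bind would fail:

    import socket

    VPN_SRC = "10.8.0.5"       # hypothetical address of the VPN interface
    DIRECT_SRC = "192.0.2.10"  # hypothetical address of the non-VPN interface

    def connect_via(source_ip, host, port):
        """Pin a TCP connection to one interface by binding its source address."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((source_ip, 0))   # requires knowing that this interface exists
        s.connect((host, port))
        return s

    # Per split-tunnel policy: corporate traffic over the VPN, the rest direct.
    corp_conn = connect_via(VPN_SRC, "intranet.corp.example", 443)
    other_conn = connect_via(DIRECT_SRC, "www.example.com", 443)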
Another consideration is that a guest operating system may have a connection manager with policies to direct traffic over on-demand cellular interfaces, for instance. These interfaces might not even exist before a request is received by the connection manager, which may add a reference or create an interface. A connection manager might also include an application programming interface (API) which can be used by applications. However, functions of the API might have media-specific parameters or filters which cannot be used by guest software without knowledge of the available interfaces. To make full use of a connection manager's API, the HIVE would need to know which interfaces are connected in order to return the appropriate interface or IP (Internet Protocol) address to use, which has not previously been possible.
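The sketch below illustrates the problem with a hypothetical connection-manager class; the names and semantics are illustrative only, as real connection managers expose comparable request/filter APIs but details vary by platform:

    class ConnectionManager:
        def __init__(self, visible_media_types):
            self.visible = visible_media_types

        def request_connection(self, media_filter):
            """Honor a media-specific filter, e.g. 'cellular'."""
            if media_filter not in self.visible:
                # On-demand media can be brought up on request, but only if
                # the manager knows such an interface exists at all.
                raise LookupError("no " + media_filter + " interface visible")
            return media_filter + "-connection"   # placeholder for a real handle

    host_cm = ConnectionManager({"ethernet", "wifi", "cellular"})
    hive_cm = ConnectionManager({"ethernet"})     # single generic vNIC

    print(host_cm.request_connection("cellular")) # succeeds on the host
    try:
        hive_cm.request_connection("cellular")    # fails inside the HIVE
    except LookupError as e:
        print("HIVE:", e)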
Application traffic is not the only traffic that may be affected by network opacity within a HIVE. A significant portion of the traffic in a HIVE can be generated by system components on behalf of applications. For example, a DNS (Domain Name System) system service may send DNS queries on all interfaces. Each interface can potentially receive a different answer, and applications may need to see these differences. This is typical in multi-homed scenarios. However, if a HIVE has only a single interface, then the DNS service will send a single query, return only one answer, and fail to provide the per-interface responses that applications may need. The same problem occurs with Dynamic Host Configuration Protocol (DHCP) services.
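The sketch below illustrates multi-homed resolution using the open-source dnspython library: the same query is issued from each interface's source address to that interface's DNS server, and the answers may differ. All addresses and names are illustrative placeholders:

    import dns.exception
    import dns.message
    import dns.query   # dnspython (pip install dnspython)

    # (source IP, DNS server) per interface; all addresses are illustrative.
    INTERFACES = {
        "corp-ethernet": ("192.0.2.10", "192.0.2.53"),
        "public-wifi":   ("198.51.100.7", "198.51.100.53"),
    }

    query = dns.message.make_query("fileserver.corp.example", "A")
    for name, (src, server) in INTERFACES.items():
        try:
            reply = dns.query.udp(query, server, source=src, timeout=2.0)
            print(name, [rrset.to_text() for rrset in reply.answer])
        except (OSError, dns.exception.DNSException) as exc:
            print(name, "query failed:", exc)  # e.g., source address absent

With a single interface in the HIVE, the loop above collapses to one source address and one answer, losing the per-interface differences.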
Regarding network functionality, many applications embed port numbers or IP addresses in their packets, which breaks when the packets traverse the Network Address Translation (NAT) found in many virtualization stacks. Because virtualization models use NAT artificially, these applications cannot function properly. Moreover, NAT-ing increases load on critical enterprise gateway infrastructure. When a NAT sits between peers, direct peer-to-peer connectivity fails, and many applications fall back to NAT traversal technologies that use an Internet rendezvous server. When a NAT point is traversed, the NAT point that identifies the device is often an external corporate NAT, which can increase the load on the corporation's NAT device.
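The following sketch illustrates the embedded-address failure mode: the application writes its own pre-NAT address into the application payload (as SIP-style protocols do), and NAT rewrites only the packet headers, leaving a stale, unroutable address in the payload. The peer address is an illustrative placeholder:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("203.0.113.9", 5060))        # illustrative SIP-style peer
    local_ip, local_port = s.getsockname()  # inside a HIVE: a private, NAT'd IP

    # NAT rewrites the IP/UDP headers but not this payload, so the peer will
    # attempt to contact an address that is unreachable from its side.
    s.send(("CONTACT %s:%d" % (local_ip, local_port)).encode())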
Furthermore, many virtualization models have an internal network, which can cause IP address conflicts. If the virtualization component uses a complete internal network behind a NAT service inside the host, then IP address assignment usually must comply with IPv4. Hence, there is a risk of IP address conflicts with the on-link network. Many applications need to see the on-link network to work properly, for instance to perform discovery. But when a complete internal network is used inside the host, the on-link network cannot be seen, which can impair the ability to multicast and broadcast. Consequently, devices cannot be discovered on the network. This may make it impossible to use IP cameras, network-attached storage, networked appliances, and other IP devices. Also, by the time traffic arrives at the host stack, the application ID, slots, and other information relevant to these client features has already been lost.
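For instance, SSDP-style discovery of on-link devices depends on sharing the physical broadcast domain; from behind a HIVE-internal NAT network, the multicast probe in the sketch below never reaches the real on-link segment, so nothing answers:

    import socket

    SSDP_GROUP, SSDP_PORT = "239.255.255.250", 1900
    probe = ("M-SEARCH * HTTP/1.1\r\n"
             "HOST: 239.255.255.250:1900\r\n"
             "MAN: \"ssdp:discover\"\r\n"
             "MX: 2\r\nST: ssdp:all\r\n\r\n").encode()

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    s.settimeout(2.0)
    s.sendto(probe, (SSDP_GROUP, SSDP_PORT))
    try:
        data, peer = s.recvfrom(4096)  # answered only on the real on-link network
        print("discovered device at", peer)
    except socket.timeout:
        print("no devices discovered")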
There are other network functionalities that can be impaired when running within a HIVE, for example Wake-on-LAN functionality, low-power modes, and roaming support. Network statistics within the HIVE may also poorly reflect the networking reality beyond the HIVE.
The preceding problems, appreciated only by the inventors, are potentially resolved by embodiments described below.
To summarize, with prior hypervisors and virtualization models, the artificial network that a HIVE sees has significantly different characteristics than the real networks that the host sees. Therefore, features coded in a guest operating system or application that depend on the characteristics of the network are likely to malfunction or break, which affects the experience and expectations of users.
The following summary is included only to introduce some concepts discussed in the Detailed Description below. This summary is not comprehensive and is not intended to delineate the scope of the claimed subject matter, which is set forth by the claims presented at the end.
Embodiments described herein relate to providing hardware-isolated virtualization environments (HIVEs) with network information. The HIVEs are managed by a hypervisor that virtualizes access to one or more physical network interface cards (NICs) of the host. Each HIVE has a virtual NIC backed by a physical NIC. Network traffic of the HIVEs flows through the physical NIC to a physical network. Traits of the physical NIC may be projected to the virtual NICs. For example, a media-type property of the virtual NICs (exposed to guest software in the HIVEs) may be set to mirror the media type of the physical NIC. A private subnet connects the virtual NICs with the physical NICs, possibly through a network address translation (NAT) component and virtual NICs of the host.
Many of the attendant features will be explained below with reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein like reference numerals are used to designate like parts in the accompanying description.
In the example architecture shown in
The vmNIC 112 is a generic virtual device that only attaches to the virtual subnet and is addressed accordingly. From the perspective of the guest software 104, the vmNIC 112 is completely synthetic. Its properties are not determined by any of the properties of the pNICs 114. If a pNIC is removed, the vmNIC 112 might not change. If a pNIC is replaced with a new pNIC of a different media type, the vmNIC 112 is unaffected and the networking behavior and state of the HIVE and guest software will not change (although performance may be affected). Within the HIVE, at the IP layer and at the application layer, the network is generally a virtual construct that, aside from connectivity and performance, does not reflect properties of the network 108, the pNICs 114, and other non-virtualized elements that enable the connectivity for the HIVE.
The vmNICs 120 need not actually emulate or behave in any way that depends on the pNICs that they correspond to. Furthermore, the design shown in
To reiterate, in some embodiments, the vmNICs in the HIVEs will advertise the same media type and physical media type as the “parent” pNIC in the host they are associated with. As noted, these vmNICs may actually send and receive Ethernet frames. Layer-2 and/or layer-3 notifications and route changes are propagated from each pNIC on the host, through the vNICs 124 and vSwitch 122 to the corresponding vmNICs inside the HIVEs, where they are visible to the guest software. Client or guest operating system APIs (as the case may be) for networking may be made virtualization-aware so that any calls made to modify WiFi or cellular state, for example, can gracefully fail and provide valid returns, and any calls to read WiFi or cellular vmNIC state, for instance, will correctly reflect the state that exists on the host side.
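A minimal sketch of such a virtualization-aware API surface follows, with all names illustrative: reads reflect state mirrored from the host pNIC, while writes that the guest cannot apply to physical hardware fail gracefully with a valid return rather than crashing callers:

    class WifiVmNicApi:
        def __init__(self, mirrored_state):
            # State propagated from the host pNIC via the vSwitch/vNIC path.
            self._state = mirrored_state

        def get_signal_strength(self):
            # Read path: correctly reflects the state on the host side.
            return self._state["signal_strength"]

        def set_radio_enabled(self, enabled):
            # Write path: the guest cannot toggle the physical radio, so
            # return a well-formed "not supported" status instead of failing.
            return {"status": "not_supported_in_hive", "requested": enabled}

    api = WifiVmNicApi({"media_type": "wifi", "signal_strength": 0.82})
    print(api.get_signal_strength())     # 0.82, mirrored from the host pNIC
    print(api.set_radio_enabled(False))  # graceful failure with a valid return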
Mirroring pNIC properties to vmNIC properties may occur when configuring a HIVE or when a HIVE is operating.
Properties that may be projected from pNICs to vmNICs may also include wake slots and others. In some embodiments, the same IP address, same MAC address, network routes, WiFi signal strength, broadcast domain, subnet, etc. may be projected to a HIVE, but into a separate kernel (if the HIVE hosts a guest operating system). As noted above, host mirroring logic may also include mirroring the addition of a new pNIC on the host. In that case, a new vmNIC is added to the HIVE (or HIVEs), with one or more properties reflecting properties of the new pNIC.
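The following is a sketch of the host-side mirroring logic for pNIC hot-add described above; the Hive and Vmnic types and the list of projected properties are illustrative assumptions, not any specific hypervisor API:

    PROJECTED = ("media_type", "physical_media_type", "mac_address",
                 "ip_address", "routes", "wake_slots")

    class Vmnic:
        def __init__(self):
            self.properties = {}

    class Hive:
        def __init__(self, name):
            self.name, self.vmnics = name, []

        def add_mirrored_vmnic(self, pnic_properties):
            # Create a vmNIC whose properties project those of the new pNIC.
            vmnic = Vmnic()
            for prop in PROJECTED:
                if prop in pnic_properties:
                    vmnic.properties[prop] = pnic_properties[prop]
            self.vmnics.append(vmnic)   # now visible to guest software

    def on_pnic_added(pnic_properties, hives):
        """Host mirroring logic: a new pNIC yields a new vmNIC in each HIVE."""
        for hive in hives:
            hive.add_mirrored_vmnic(pnic_properties)

    hives = [Hive("hive-a"), Hive("hive-b")]
    on_pnic_added({"media_type": "wifi", "physical_media_type": "802.11",
                   "mac_address": "00:11:22:33:44:55"}, hives)
    print(hives[0].vmnics[0].properties)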
To be clear, the techniques described above differ from single root input/output virtualization (SR-IOV), which does not provide information in a way that allows an application to understand the information and tune its performance in a network-cognizant manner.
The computing device 100 may have one or more displays 322, a camera (not shown), a network interface 324 (or several), as well as storage hardware 326 and processing hardware 328, which may be a combination of any one or more of: central processing units, graphics processing units, analog-to-digital converters, bus chips, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), or complex programmable logic devices (CPLDs). The storage hardware 326 may be any combination of magnetic storage, static memory, volatile memory, non-volatile memory, optically or magnetically readable matter, etc. The term “storage”, as used herein, does not refer to signals or energy per se, but rather refers to physical apparatuses and states of matter. The hardware elements of the computing device 100 may cooperate in ways well understood in the art of machine computing. In addition, input devices may be integrated with or in communication with the computing device 100. The computing device 100 may have any form factor or may be used in any type of encompassing device. The computing device 100 may be in the form of a handheld device such as a smartphone, a tablet computer, a gaming device, a server, a rack-mounted or backplaned computer-on-a-board, a system-on-a-chip, or others.
Embodiments and features discussed above can be realized in the form of information stored in volatile or non-volatile computer- or device-readable storage hardware. This is deemed to include at least hardware such as optical storage (e.g., compact-disk read-only memory (CD-ROM)), magnetic media, flash read-only memory (ROM), or any means of storing digital information in a manner readily available to the processing hardware 328. The stored information can be in the form of machine-executable instructions (e.g., compiled executable binary code), source code, bytecode, or any other information that can be used to enable or configure computing devices to perform the various embodiments discussed above. This is also considered to include at least volatile memory such as random-access memory (RAM) and/or virtual memory storing information such as central processing unit (CPU) instructions during execution of a program carrying out an embodiment, as well as non-volatile media storing information that allows a program or executable to be loaded and executed. The embodiments and features can be performed on any type of computing device, including portable devices, workstations, servers, mobile wireless devices, and so on.