Private Ethernet overlay networks over a shared Ethernet in a virtual environment

Information

  • Patent Grant
  • Patent Number
    11,838,395
  • Date Filed
    Saturday, March 13, 2021
  • Date Issued
    Tuesday, December 5, 2023
  • Inventors
    • Dalal; Anupam (Mountain View, CA, US)
  • Examiners
    • Sciacca; Scott M
  • Agents
    • ADELI LLP
Abstract
A system for private networking within a virtual infrastructure is presented. The system includes a virtual machine (VM) in a first host, the VM being associated with a first virtual network interface card (VNIC), a second VM in a second host, the second VM being associated with a second VNIC, the first and second VNICs being members of a fenced group of computers that have exclusive direct access to a private virtual network, wherein VNICs outside the fenced group do not have direct access to packets on the private virtual network, a filter in the first host that encapsulates a packet sent on the private virtual network from the first VNIC, the encapsulation adding to the packet a new header and a fence identifier for the fenced group, and a second filter in the second host that de-encapsulates the packet to extract the new header and the fence identifier.
Description
1. Field of the Invention

The present invention relates to methods, systems, and computer programs for deploying fenced groups of Virtual Machines (VMs) in a virtual infrastructure, and more particularly, to methods, systems, and computer programs for private networking among fenced groups of VMs executing in multiple hosts of the virtual infrastructure.


2. Description of the Related Art

Virtualization of computer resources generally involves abstracting computer hardware, which essentially isolates operating systems and applications from underlying hardware. Hardware is therefore shared among multiple operating systems and applications wherein each operating system and its corresponding applications are isolated in corresponding VMs and wherein each VM is a complete execution environment. As a result, hardware can be more efficiently utilized.


The virtualization of computer resources sometimes requires the virtualization of networking resources. Creating a private network in a virtual infrastructure means that a set of virtual machines has exclusive access to that private network. However, virtual machines can be located in multiple hosts that may be connected to different physical networks. Trying to impose a private network on a distributed environment encompassing multiple physical networks is a complex problem. Further, sending a broadcast message in a private network presents two problems. First, the broadcast may be received by hosts which do not host any VMs in the private network, thus reducing the scalability of the entire distributed system. Second, if hosts are not located on adjacent layer 2 networks, the broadcast may not reach all hosts with VMs in the private network.


Virtual Local Area Networks (VLANs) are sometimes used to implement distributed networks for a set of computing resources that are not connected to one physical network. A VLAN is a group of hosts that communicate as if the group were attached to the same broadcast domain, regardless of physical location. A VLAN has the same attributes as a physical Local Area Network (LAN), but the VLAN allows end stations to be grouped together even if the end stations are not located on the same network switch. Network reconfiguration can be done through software instead of by physically relocating devices. Routers in VLAN topologies provide broadcast filtering, security, address summarization, and traffic flow management. However, VLANs only offer encapsulation and, by definition, switches may not bridge traffic between VLANs, as doing so would violate the integrity of the VLAN broadcast domain. Further, VLANs are not easily programmable by a centralized virtual infrastructure manager.


Virtual labs, such as VMware's vCenter Lab Manager™ from the assignee of the present patent application, enable application development and test teams to quickly create and deploy complex multi-tier system and network configurations on demand. Testing engineers can set up, capture, and reset virtual machine configurations for demonstration environments in seconds. In addition, hands-on labs can be quickly configured and deployed for lab testing, hands-on training classes, etc.


The creation of virtual lab environments requires flexible tools to assist in the creation and management of computer networks. For example, if a test engineer decides to perform different tests simultaneously on one sample environment, the test engineer must deploy the sample environment multiple times. The multiple deployments must coexist in the virtual infrastructure. However, these environments often have network configurations that, when deployed multiple times, would cause network routing problems, such as the creation of VMs with duplicate Internet Protocol (IP) addresses, an impermissible network scenario for the proper operation of the VMs and of the virtual lab environments.


Existing solutions require that VMs within the same private environment execute on the same host, using virtual switches in that host. However, the single-host implementation has drawbacks, such as a limit on the number of VMs that can be deployed on a single host, the inability to move VMs to different hosts for load balancing, vulnerability to unexpected host shutdowns, etc.


SUMMARY

Methods, systems, and computer programs for implementing private networking within a virtual infrastructure are presented. It should be appreciated that the present invention can be implemented in numerous ways, such as a process, an apparatus, a system, a device or a method on a computer readable medium. Several inventive embodiments of the present invention are described below.


In one embodiment, a method includes an operation for sending a packet on a private virtual network from a first virtual machine (VM) in a first host to a second VM. The first and second VMs are members of a fenced group of computers that have exclusive direct access to the private virtual network, where VMs outside the fenced group do not have direct access to the packets that travel on the private virtual network. Further, the method includes encapsulating the packet at the first host to include a new header as well as a fence identifier for the fenced group. The packet is received at a host where the second VM is executing and the packet is de-encapsulated to extract the new header and the fence identifier. Additionally, the method includes an operation for delivering the de-encapsulated packet to the second VM after validating that the destination address in the packet and the fence identifier correspond to the destination address and the fence identifier, respectively, of the second VM.


In another embodiment, a computer program embedded in a non-transitory computer-readable storage medium, when executed by one or more processors, for implementing private networking within a virtual infrastructure, includes program instructions for sending a packet on a private virtual network from a first VM in a first host to a second VM. The first and second VMs are members of a fenced group of computers that have exclusive direct access to the private virtual network, where VMs outside the fenced group do not have direct access to packets on the private virtual network. Further, the computer program includes program instructions for encapsulating the packet at the first host to include a new header and a fence identifier for the fenced group, and for receiving the packet at a host where the second VM is executing. Further yet, the computer program includes program instructions for de-encapsulating the packet to extract the new header and the fence identifier, and program instructions for delivering the de-encapsulated packet to the second VM after validating that a destination address in the packet and the fence identifier correspond to the second VM.


In yet another embodiment, a system for private networking within a virtual infrastructure includes a first VM and a first filter in a first host, in addition to a second VM and a second filter in a second host. The first and second VMs are members of a fenced group of computers that have exclusive direct access to a private virtual network, where VMs outside the fenced group do not have direct access to packets on the private virtual network. The first filter encapsulates a packet sent on a private virtual network from the first VM, by adding to the packet a new header and a fence identifier for the fenced group. The second filter de-encapsulates the packet to extract the new header and the fence identifier, and the second filter delivers the de-encapsulated packet to the second VM after validating that a destination address in the packet and the fence identifier correspond to the second VM.


Other aspects of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:



FIG. 1 includes an architectural diagram of an embodiment of a virtual infrastructure system.



FIG. 2 depicts one embodiment of the host architecture for instantiating Virtual Machines (VM) with multiple Virtual Network Interface Cards (VNIC).



FIG. 3 illustrates the deployment of multiple VM configurations, according to one embodiment.



FIG. 4 illustrates the use of packet filters within the host, according to one embodiment.



FIG. 5 shows one embodiment for Ethernet frame encapsulation.



FIG. 6 provides a detailed illustration of the encapsulated packet, in accordance with one embodiment of the invention.



FIG. 7 illustrates the flow of a broadcast packet sent within the private virtual network, according to one embodiment.



FIG. 8 illustrates the flow of the response packet to the broadcast, according to one embodiment.



FIG. 9A illustrates the flow of a packet travelling between VMs in the same host, according to one embodiment.



FIG. 9B illustrates the flow of an Internet Protocol (IP) packet, according to one embodiment.



FIG. 10 illustrates the update of bridge tables when a VM migrates to a different host, according to one embodiment.



FIG. 11 shows the structure of a Maximum Transmission Unit (MTU) configuration table, according to one embodiment.



FIG. 12 shows one embodiment of an active-ports table.



FIG. 13 shows an embodiment of a bridge table.



FIG. 14 shows the process flow of a method for private networking within a virtual infrastructure, in accordance with one embodiment of the invention.





DETAILED DESCRIPTION

The following embodiments describe methods and apparatus for implementing private networking within a virtual infrastructure. Embodiments of the invention use Media Access Control (MAC) encapsulation of Ethernet packets. The hosts that include Virtual Machines (VM) from fenced groups of machines implement distributed switching with learning for unicast delivery. As a result, VMs are allowed to migrate to other hosts to enable resource management and High Availability (HA). Further, the private network implementation is transparent to the guest operating system (GOS) in the VMs and provides an added level of privacy.


With a host-spanning private network (HSPN), VMs can be placed on any host where the private network is implemented. The HSPN may span hosts in a cluster or clusters in a datacenter, allowing large groups of VMs to communicate over the private network. Additionally, VMs may move between hosts since VMs maintain private network connectivity. A VM can also be powered-on in a different host after failover and still retain network connectivity. Further, VMs get their own isolated private layer 2 connectivity without the need to obtain Virtual Local Area Network (VLAN) ID resources or even set up VLANs. Creating an HSPN is therefore simpler because there is no dependency on the network administrator. The HSPN can be deployed on either a VLAN or an Ethernet segment.


It should be appreciated that some embodiments of the invention are described below using Ethernet, Internet Protocol (IP), and Transmission Control Protocol (TCP) protocols. Other embodiments may utilize different protocols, such as an Open Systems Interconnection (OSI) network stack, and the same principles described herein apply. The embodiments described below should therefore not be interpreted to be exclusive or limiting, but rather exemplary or illustrative.


In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be obvious, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.



FIG. 1 includes an architectural diagram of an embodiment of a virtual infrastructure system. Virtual infrastructure 102 includes one or more virtual infrastructure servers 104 that manage a plurality of hosts 106. Virtual machines 108 are instantiated in hosts 106, and the multiple hosts share a plurality of resources within the virtual infrastructure, such as shared storage 110. A configuration is a core element of a virtual lab and is composed of virtual machines and virtual lab networks. Virtual lab users can group, deploy, save, share, and monitor multi-machine configurations. Configurations reside in the library or in user workspaces, in which case they are referred to as workspace configurations.


Many applications run on more than one machine, and grouping the machines in one configuration makes the applications more convenient to manage. For example, in a classic client-server application, the database server may run on one machine, the application server on another machine, and the client on a third machine. All these machines would be configured to run with each other. Other servers may execute related applications, such as LDAP servers, Domain Name servers, domain controllers, etc. Virtual lab server allows the grouping of these dependent machines into a configuration, which can be checked in and out of the library. When a configuration is checked out, all the dependent machines configured to work with each other are activated at the same time. Library configurations can also store the running state of machines so the deployment of machines that are already running is faster.


Virtual lab networks, also referred to herein as enclosed local networks, can be categorized as private networks and shared networks. Private networks in a configuration are those networks available exclusively to VMs in the configuration, that is, only VMs in the configuration can have a Network Interface Controller (NIC) or VNIC connected directly to a switch or virtual switch (VSwitch) for the private network. Access to data on a private network is restricted to members of the configuration, that is, the private network is isolated from other entities outside the configuration. In one embodiment, a private network in the configuration can be connected to a physical network to provide external connectivity to the VMs in the private network. Private networks in a configuration are also referred to herein as Configuration Local Networks (CLN) or virtual networks. Shared networks, also referred to herein as shared physical networks or physical networks, are available to all VMs in the virtual infrastructure, which means that a configuration including a shared network will enable VMs in the shared network to communicate with other VMs in the virtual infrastructure connected, directly or indirectly, to the shared network. In one embodiment, a shared network is part of a Virtual Local Area Network (VLAN).


Deploying a configuration causes the VMs and networks in the configuration to be instantiated in the virtual infrastructure. Instantiating the VMs includes registering the VMs in the virtual infrastructure and powering-on the VMs. When an individual VM from a configuration is deployed, virtual lab deploys all shared networks and CLNs associated with the configuration using the network connectivity options in the configuration. Undeploying a configuration de-instantiates the VMs in the configuration from the virtual infrastructure. De-instantiating VMs includes powering off or suspending the VMs and un-registering the VMs from the virtual infrastructure. The state of the deployment can be saved in storage or discarded. Saving the memory state helps debugging memory-specific issues and makes VMs in the configuration ready for deployment and use almost instantly.


Virtual lab server 112 manages and deploys virtual machine configurations in a collection of hosts 106. It should be appreciated that not all hosts 106 need to be part of the scope of virtual lab server 112, although in one embodiment, all the hosts are within the scope of virtual lab server 112. Virtual lab server 112 manages hosts 106 by communicating with virtual infrastructure server 104, and by using virtual lab server agents installed on those hosts. In one embodiment, virtual lab server 112 communicates with virtual infrastructure server 104 via an Application Programming Interface (API), for example, to request the instantiation of VMs and networks.


Although virtual lab server 112 is used to perform some management tasks on hosts 106, the continuous presence of virtual lab server 112 is not required for the normal operation of deployed VMs, which can continue to run even if virtual lab server 112 becomes unreachable, for example because of a network failure. One or more users 116 interface with virtual lab server 112 and virtual infrastructure 102 via a computer interface, which in one embodiment is provided via a web browser.



FIG. 2 depicts one embodiment of the host architecture for instantiating VMs with multiple Virtual Network Interface Cards (VNIC). In one embodiment, VMkernel 204, also referred to as the virtual infrastructure layer, manages the assignment of VMs 206 in host 202. VM 206 includes Guest Operating System (GOS) 208 and multiple VNICs 210. Each VNIC 210 is connected to a VSwitch 212 that provides network switch functionality for the corresponding virtual network interfaces. VSwitches 212 are connected to a physical NIC device in the host to provide access to physical network 216. Each of the VNICs and VSwitches is independent; thus, a VM can connect to several virtual networks via several VNICs that connect to one or more physical NIC devices 214. In another embodiment, each VSwitch 212 is connected to a different physical NIC device, thus each VSwitch 212 provides connectivity to a different physical network. In the sample configuration illustrated in FIG. 2, VSwitch 212 provides switching for virtual networks "Network 1" (VNIC1) and "Network 4" (VNIC4). VSwitch 212 assigns a set of ports to "Network 1" and a different set of ports to "Network 4," where each set of ports supports Media Access Control (MAC) addressing for the corresponding virtual network. Thus, packets from "Network 1" coexist with packets from "Network 4" on the same transmission media.


The virtual computer system supports VM 206. As in conventional computer systems, both system hardware 220 and system software are included. The system hardware 220 includes one or more processors (CPUs) 222, which may be a single processor, or two or more cooperating processors in a known multiprocessor arrangement. The system hardware also includes system memory 226, one or more disks 228, and some form of Memory Management Unit (MMU) 224. The system memory is typically some form of high-speed RAM (random access memory), whereas the disk is typically a non-volatile, mass storage device. As is well understood in the field of computer engineering, the system hardware also includes, or is connected to, conventional registers, interrupt handling circuitry, a clock, etc., which, for the sake of simplicity, are not shown in the figure.


The system software includes VMKernel 204, which has drivers for controlling and communicating with various devices 230, NICs 214, and disk 228. In VM 206, the physical system components of a “real” computer are emulated in software, that is, they are virtualized. Thus, VM 206 will typically include virtualized guest OS 208 and virtualized system hardware (not shown), which in turn includes one or more virtual CPUs, virtual system memory, one or more virtual disks, one or more virtual devices, etc., all of which are implemented in software to emulate the corresponding components of an actual computer.


The guest OS 208 may, but need not, simply be a copy of a conventional, commodity OS. The interface between VM 206 and the underlying host hardware 220 is responsible for executing VM related instructions and for transferring data to and from the actual physical memory 226, the processor(s) 222, the disk(s) 228, and other devices.



FIG. 3 illustrates the deployment of multiple VM configurations, according to one embodiment. Configuration 302, which includes VMs A, B, and C, is deployed a first time resulting in deployment 304. When a configuration of machines is copied, the system performs the copying, also referred to as cloning, in a short amount of time, taking a fraction of the disk space a normal copy would take. This is referred to as linked clones. For example, when a virtual lab server VM with an 80 GB disk is cloned, the 80 GB are not copied. Instead, a 16 MB file called a linked clone is created, which points to the 80 GB disk and acts like a new instance of the disk.


Another feature of virtual lab server is the ability to use multiple copies of VMs simultaneously, without modifying them. When machines are copied using traditional techniques, the original and the copy cannot be used simultaneously due to duplicate IP addresses, MAC addresses, and security IDs (in the case of Windows). Virtual lab server provides a networking technology called “fencing” that allows multiple unchanged copies of virtual lab server VMs to be run simultaneously on the same network without conflict, while still allowing the VMs to access network resources and be accessed remotely.



FIG. 3 illustrates the process of deploying fenced VMs. The first deployment 304 can use the IP and Ethernet addresses in configuration 302 and be directly connected to the network without any conflicts (of course assuming no other VMs in the network have the same addresses). Deployment 306 is created after cloning the first deployment 304. However, deployment 306 cannot be connected directly to the network because there would be duplicate addresses on the network.


Deployment 308 is deployed in fenced mode, including private networking module 310, which performs, among other things, filtering and encapsulation of network packets before sending the packets on the physical network. This way, there is no duplication of addresses in the physical network.



FIG. 4 illustrates the use of packet filters within host 402, according to one embodiment. Host 402 includes several VMs. Each VM is associated with a VNIC 404. In general, it is assumed that each VM has only one network connection, which means one VNIC. However, it is possible for a VM to have multiple network connections, where each connection is associated with a different VNIC. Principles of the invention can also be applied when a VM has multiple VNICs, because the important consideration is that each VNIC be associated with one layer 2 and one layer 3 address. Therefore, it would be more precise to refer to VNICs instead of VMs, but for ease of description VMs are used in the description of some embodiments. However, it is understood that if a VM has more than one VNIC, then each VNIC would be considered separately, each potentially belonging to a separate private network.


A Distributed Virtual (DV) Filter 408 is associated with VNIC 404 and performs filtering and encapsulation of packets originating in VNIC 404 before the packets reach distributed vSwitch 410. On the receiving side, DV Filter 408 performs filtering and de-encapsulation (stripping) when needed. Distributed vSwitch 410 is connected to one or more physical NICs (PNIC) 412 that connect host 402 to physical network 414.


The use of a DV Filter enables the implementation of the cross-host private virtual network. The DV Filter is compatible with VLANs and other overlay solutions because the encapsulation performed by DV Filter 408 is transparent to switches and routers on the network. More details on the operation of DV Filter 408 are given below in reference to FIGS. 7-13.



FIG. 5 shows one embodiment for Ethernet frame encapsulation. A packet sent from a VM in the private virtual network includes original Ethernet header 506 and original payload 508. The DV Filter adds a new header and new data to the original packet. The encapsulation Ethernet header 502 has the standard fields of an Ethernet header. More details on the composition of encapsulation Ethernet header 502 are given below in reference to FIG. 6. The DV Filter also adds fence protocol data 504 to the data field of the new Ethernet packet, in front of the original packet. In other words, the payload for the new packet includes fence protocol data 504, original Ethernet header 506, and original payload 508.
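By way of illustration only, the framing of FIG. 5 can be sketched in a few lines of Python. The field order follows the figure (outer header, then fence protocol data, then the original frame); the fence Ethernet type value and the 4-byte width of the fence protocol data are assumptions made for this sketch, not values disclosed above.

```python
import struct

FENCE_ETHERTYPE = 0x88FF  # hypothetical value; the text says only that an IEEE-assigned type is used

def encapsulate(original_frame: bytes, outer_src: bytes, outer_dst: bytes,
                fence_data: bytes) -> bytes:
    """Wrap an original Ethernet frame as in FIG. 5:
    [outer dst | outer src | type] + [fence protocol data] + [original header and payload]."""
    outer_header = outer_dst + outer_src + struct.pack("!H", FENCE_ETHERTYPE)
    return outer_header + fence_data + original_frame

def de_encapsulate(frame: bytes, fence_len: int = 4) -> tuple[bytes, bytes]:
    """Strip the 14-byte outer header, returning (fence protocol data, original frame).
    The 4-byte default fence-data length is an assumption, not a value from the text."""
    return frame[14:14 + fence_len], frame[14 + fence_len:]
```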


Since the new encapsulated packet includes additional bytes, it is possible that the resulting encapsulated packet exceeds the Maximum Transmission Unit (MTU) of layer 2. In this case fragmentation is required. The encapsulated packet is fragmented into two different packets, transmitted separately, and the DV Filter at the receiving host recreates the encapsulated packet by combining the contents of the two fragments before de-encapsulation. In one embodiment, fragmentation is avoided by increasing the uplink MTU. In another embodiment, the VM is configured by the user with an MTU that is smaller than the MTU on the network, such that encapsulation can be performed on all packets without fragmentation.
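A minimal sketch of this fragmentation step, under the same assumptions as above (a 14-byte outer header, 4 bytes of fence protocol data, and a hypothetical encoding of the 2-bit fragment type):

```python
def fragment(original_frame: bytes, outer_header: bytes, make_fence_data, mtu: int) -> list:
    """Split a frame so every encapsulated piece fits the MTU (illustrative sketch).
    make_fence_data(frag_type, seq) must return the per-fragment fence protocol bytes."""
    overhead = len(outer_header) + 4              # assumed 4-byte fence protocol data
    max_piece = mtu - overhead
    pieces = [original_frame[i:i + max_piece]
              for i in range(0, len(original_frame), max_piece)] or [b""]
    last = len(pieces) - 1
    frames = []
    for seq, piece in enumerate(pieces):
        # assumed 2-bit fragment-type encoding: 0 = whole, 1 = more fragments follow, 2 = last
        frag_type = 0 if last == 0 else (1 if seq < last else 2)
        frames.append(outer_header + make_fence_data(frag_type, seq) + piece)
    return frames
```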


Because of the MAC-in-MAC Ethernet frame encapsulation, the traffic of the private virtual network is isolated from other traffic, in the sense that the Ethernet headers of the private network packets are "hidden" from view. Also, the private network packets terminate in hosts that implement the private networking, allowing an additional level of control and security. Switches and routers on the network do not see or have to deal with this encapsulation because they only see a standard Ethernet header, which is processed the same as any standard Ethernet header. As a result, no network infrastructure or additional resources are required to implement private networking, there are no MAC addressing collisions, and VLANs are interoperable with the private virtual network scheme. Also, a large number of private networks is possible (e.g., 16 million or more) per VLAN.



FIG. 6 provides a detailed illustration of the encapsulated packet, in accordance with one embodiment of the invention. As with any standard Ethernet header, the encapsulating header includes a destination address, a source address, and a type/length (T/L) field. The source and destination addresses are formed by joining together a fence Organizationally-Unique-Identifier (OUI) (24 bits), an installation identifier ("Install ID") (8 bits), and a host identifier (16 bits). An OUI is a 24-bit number that is purchased from the Institute of Electrical and Electronics Engineers, Incorporated (IEEE) Registration Authority. This identifier uniquely identifies a vendor, manufacturer, or other organization globally and effectively reserves a block of each possible type of derivative identifier (such as MAC addresses, group addresses, Subnetwork Access Protocol protocol identifiers, etc.) for the exclusive use of the assignee.


The fence OUI is a dedicated OUI reserved for private virtual networking. Therefore, there will not be address collisions on the network because nodes that are not part of the private networking scheme will not use the reserved fence OUI. The destination address in the encapsulating header can also be a broadcast address, and all the hosts in the network will receive this packet.


The virtual lab server installation ID is unique on a LAN segment and is managed by virtual lab server 112 (FIG. 1). The fence identifier uniquely identifies a private network within the virtual lab server. Fence IDs can be recycled over time. Further, the T/L field in the encapsulating header includes the fence Ethernet type which is an IEEE assigned number (in this case assigned to VMware, the assignee of the present application) that identifies the protocol carried by the Ethernet frame. More specifically, the protocol identified is the Fence protocol, i.e., the protocol to perform MAC-in-MAC framing. The Ethernet type is used to distinguish one protocol from another.


The fence protocol data includes a version ID of the private network implementation or protocol (2 bits), a fragment type (2 bits), a fragment sequence number, and a fence identifier. The fragment type and sequence number indicate if the original packet has been fragmented, and if so, which fragment number corresponds to the packet. The fence identifier indicates a value assigned to the private virtual network. In one embodiment, this field is 24 bits, which allows for more than 16 million different private networks per real LAN.
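A sketch of this bit layout follows. The 2-bit version, 2-bit fragment type, 24-bit fence ID, and the 24/8/16-bit address split are as stated above; the 4-bit width assumed for the fragment sequence number is a guess, since the description does not give it:

```python
def fence_mac(oui: int, install_id: int, host_id: int) -> bytes:
    """Compose a 48-bit encapsulating MAC per FIG. 6:
    24-bit fence OUI | 8-bit installation ID | 16-bit host ID."""
    return oui.to_bytes(3, "big") + install_id.to_bytes(1, "big") + host_id.to_bytes(2, "big")

def fence_protocol_data(version: int, frag_type: int, frag_seq: int, fence_id: int) -> bytes:
    """Pack version (2 bits), fragment type (2 bits), fragment sequence number
    (assumed 4 bits) and the 24-bit fence ID into 4 bytes."""
    first_byte = ((version & 0x3) << 6) | ((frag_type & 0x3) << 4) | (frag_seq & 0xF)
    return bytes([first_byte]) + fence_id.to_bytes(3, "big")
```

With the example values that appear later in FIG. 13, fence_mac(0x0013F5, 0x3E, 0x02C2) yields the outer address 00:13:f5:3e:02:c2.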


It should be appreciated that the embodiments illustrated in FIGS. 5 and 6 are exemplary data fields for encapsulating network packets. Other embodiments may utilize different fields, or may arrange the data in varying manners. The embodiments illustrated in FIGS. 5 and 6 should therefore not be interpreted to be exclusive or limiting, but rather exemplary or illustrative.



FIG. 7 illustrates the flow of a broadcast packet sent within the private virtual network, according to one embodiment. FIG. 7 illustrates some of the events that take place after VM A 720 is initialized. During network initialization, VM A 720 sends a broadcast Address Resolution Protocol (ARP) packet 702 to be delivered to all nodes that have layer-2 connectivity on the private network associated with Fence 1. Fence 1 includes VM A 720 in Host 1 714 and VM B 742 in Host 2 716. It should be noted that VM B 744, also in Host 2 716, is a clone of VM B 742 and is connected to a different private network from the one connecting VM A 720 and VM B 742.


Packet 702 is a standard ARP broadcast packet including VM A's address as the source address. VM A 720 sends the message through port 722, which is associated with Fence 1. DV Filter 724 receives packet 704, associated with Fence 1, and adds the encapsulating header, as described above in reference to FIGS. 5 and 6, to create encapsulated packet 706. The destination address of the encapsulating header is also an Ethernet broadcast address. DV Filter 724 sends packet 706 to distributed vSwitch 726 for transmittal over the network via physical NIC 728.


Host 2 716 receives packet 706 (referred to as packet 708) and the Distributed vSwitch forwards packet 708 to the DV Filters for all VNICs, since it is a broadcast packet. DV Filter 734, associated with VM B 742, examines the source address. It determines that packet 708 is a private virtual network packet because of the unique fence OUI. This packet comes from Host 1 because the source address includes Host 1's ID, and it is originally from VM A because VM A's Ethernet address is in the original Ethernet header. Since DV Filter 734 did not have an entry for VM A in that private network, an entry is added to bridge table 746 mapping VM A to Host 1. More details on the structure of bridge table 746 are given below in reference to FIG. 13.


DV Filter 734 de-encapsulates the packet by stripping the encapsulating header and added data to create packet 710, which is associated with Fence 1 as indicated in the Fence ID of the fence protocol data. The DV Filter then checks for ports associated with Fence 1 and with the destination address of packet 710, which matches every node since it is a broadcast address. Since VM B 742 is associated with Fence 1 738, packet 712 is delivered to VM B 742. On the other hand, VM B 744 will not get delivery of the packet or frame because the DV Filter for VM B 744 (not shown) will detect that the frame is for Fence 1 nodes and will drop the frame, because VM B 744 does not belong to Fence 1; it belongs to Fence 2.
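The receive-side behavior just described (recognize fence traffic, learn the sender's host, enforce the fence ID, then deliver) can be sketched as follows. The port objects and their members are hypothetical stand-ins for VNIC ports, and the byte offsets rely on the same assumed 4-byte fence protocol data as the earlier sketches:

```python
FENCE_OUI = bytes.fromhex("0013f5")   # the OUI that appears in the FIG. 13 example
BROADCAST = b"\xff" * 6

def on_receive(frame: bytes, bridge_table: dict, ports: list) -> None:
    """Receive-side DV Filter steps of FIG. 7 (sketch)."""
    outer_src = frame[6:12]
    if outer_src[:3] != FENCE_OUI:
        return                                        # not private-network traffic
    fence_data, original = frame[14:18], frame[18:]   # assumes 4-byte fence data
    fence_id = int.from_bytes(fence_data[1:4], "big")
    bridge_table[(fence_id, original[6:12])] = outer_src   # learn inner MAC -> host MAC
    inner_dst = original[:6]
    for port in ports:
        if port.fence_id == fence_id and inner_dst in (port.mac, BROADCAST):
            port.deliver(original)        # ports on other fences never see the frame
```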


It should be noted that this mechanism provides an added level of security by assuring that the fence is isolated. Packets that have no Fence ID will be dropped and will never make it inside the fence.



FIG. 8 illustrates the flow of the response packet to the broadcast, according to one embodiment. VM B 742 replies to packet 712 with packet 802 addressed to VM A 720. DV Filter 734 receives packet 804, associated with Fence 1 because VM B's port is associated with Fence 1. DV Filter 734 checks bridge table 746 and finds an entry for VM A indicating that VM A is executing in Host 1. DV Filter proceeds to create new Ethernet packet 806 by encapsulating packet 804. The addresses in the encapsulation header are created according to the process described in reference to FIG. 6. For example, the destination Ethernet address is constructed by combining the fence OUI (24 bits), the installation identifier (8 bits), and the number associated with Host 1 (16 bits). The fence ID for Fence 1 is added after the header and before the original packet, as previously described.


After packet 806 is unicast via the physical network, Host 1 receives packet 808, which is processed in similar fashion as described in reference to FIG. 7, except that the destination address is not a broadcast address. DV Filter 724 determines that the packet is from VM B in Fence 1. Since there is not an entry for VM B in bridge table 814, a new entry for VM B is added to bridge table 814 indicating that VM B is executing in Host 2. Additionally, DV Filter 724 proceeds to strip packet 808 to restore original packet 802 sent by VM B 742, by taking out the added header and the additional payload ahead of the original packet. This results in packet 810, which is associated with Fence 1 because the payload in packet 808 indicates that the packet is for a Fence 1 node. Since VM A's port 722 is associated with Fence 1 and matches the Ethernet destination address, packet 812 is successfully delivered to VM A 720.
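The matching egress path (look up the destination VM's host in the bridge table, then encapsulate) might look like the sketch below. Falling back to an outer broadcast when the bridge table has no entry is an assumption consistent with ordinary learning-bridge behavior, not something stated above:

```python
BROADCAST = b"\xff" * 6

def on_send(original: bytes, fence_id: int, host_mac: bytes,
            bridge_table: dict, fence_data: bytes) -> bytes:
    """Egress DV Filter path of FIG. 8 (sketch)."""
    inner_dst = original[:6]
    outer_dst = (BROADCAST if inner_dst == BROADCAST
                 else bridge_table.get((fence_id, inner_dst), BROADCAST))
    ethertype = b"\x88\xff"              # hypothetical fence Ethernet type
    return outer_dst + host_mac + ethertype + fence_data + original
```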



FIG. 9A illustrates the flow of a packet travelling between VMs in the same host, according to one embodiment. Packet 902 is sent from VM A 920 with a destination address of VM C 928, with both VMs executing in the same host. The process is similar to the one previously described in FIGS. 7 and 8, except that the packet does not travel over the physical network and is "turned around" by Distributed VSwitch 926. Thus, packet 902 is sent to VNIC 930, which in turn sends packet 904 to DV Filter 922.


It should be noted that although packets are described herein as travelling (sent and received) among the different entities of the chain of communication, it is not necessary to actually transfer the whole packet from one module to the next. For example, a pointer to the message may be passed between VNIC 930 and the DV Filter without having to actually make a copy of the packet.


DV filter for VM A 922 checks bridge table 924 and determines that the destination VM C is executing in Host 1. The corresponding encapsulation is performed to create packet 906 which is forwarded to distributed vSwitch 926 via output leaf 932. VSwitch 926 determines that the destination address of packet 906 is for a VM inside the host and “turns the packet around” by forwarding packet 908 to the DV Filter for VM C (not shown) via input leaf 934. The DV Filter for VM C strips the headers and, after checking the destination address and the Fence ID, delivers the packet to VM C's port in VNIC 930.



FIG. 9B illustrates the flow of an Internet Protocol (IP) packet, according to one embodiment. FIG. 9B illustrates sending an IP packet from VM A 970 in Host 1 to VM B 972 in Host 2. The process is similar to the one described in FIG. 7, except that there is not a broadcast address but instead a unicast address, and the bridge tables in the DV filters already have the pertinent entries as VMs A and B have been running for a period of time.


Thus, encapsulated packet 956, leaving DV Filter 974, includes source and destination addresses associated with the IDs of Hosts 1 and 2, respectively. When DV Filter 976 for VM B receives packet 958, it does not create a new entry in the bridge table because the entry for VM A already exists. Packet 958 is forwarded to VM B via the distributed switch and the VNIC port, as previously described.


It should be noted that packet 952 is an Ethernet frame and that the scenario described in FIG. 9B is for VMs that are executing in hosts with layer 2 connectivity. If the destination VM were in a host executing in a different LAN segment (i.e., a different data link layer segment), then MAC-in-MAC encapsulation would not work because the packet would be sent to a router in the network, which may not be aware of the private networking scheme for fencing and would not operate properly, as the IP header is not where the router would expect it. In this case other fencing solutions for hosts on different networks can be combined with embodiments of the invention. Solutions for internetwork fencing are described in U.S. patent application Ser. No. 12/571,224, filed Sep. 30, 2009, and entitled "PRIVATE ALLOCATED NETWORKS OVER SHARED COMMUNICATIONS INFRASTRUCTURE", which is incorporated herein by reference. Also, a VLAN network can be used to provide layer-2 connectivity to hosts in different networks.



FIG. 10 illustrates the update of bridge tables when a VM migrates to a different host, according to one embodiment. When VM A 156 moves from Host 1 150 to Host 3 152, VM A 156 sends a Reverse Address Resolution Protocol (RARP) packet 160. RARP is a computer networking protocol used by a host computer to request its IP address from an administrative host, when the host computer has available its Link Layer or hardware address, such as a MAC address.


Since packet 160 is a broadcast packet, packet 160 will reach all nodes in the same private network as VM A 156. When packet 166 is received by DV Filter 172 in Host 2 154, DV Filter 172 detects that the message is from VM A in Host 3. Since the bridge table entry for VM A has Host 1 as the host for VM A, and the new packet indicates that VM A is now executing in Host 3, the entry for VM A in bridge table 174 is updated to reflect this change. The packet is then delivered to VM B 158 because VM B is part of the private network in Fence 1 and this is a broadcast packet.
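Because the bridge table is keyed by the inner MAC address, migration needs no special case in a sketch like the ones above: learning simply overwrites the stale mapping. The hypothetical helper below only restates that point in code.

```python
def learn(bridge_table: dict, fence_id: int, inner_src: bytes, outer_src: bytes) -> None:
    """Insert or refresh a mapping in one step. After VM A's RARP broadcast arrives
    encapsulated with Host 3's outer source address, the entry that previously
    pointed at Host 1 now points at Host 3."""
    bridge_table[(fence_id, inner_src)] = outer_src
```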



FIG. 11 shows the structure of an MTU configuration table, according to one embodiment. The MTU configuration table is used to store the MTU for each network. Thus, each entry (shown as a column in the table) includes a LAN identifier and an MTU for the corresponding LAN. When encapsulation results in a packet that is bigger than the MTU for that network, the packet has to be fragmented. Each fragment is sent separately to the destination host with a different fragment sequence number. The DV Filter at the destination combines the fragments to reconstruct the original encapsulated packet.


As previously described, one way to avoid fragmentation is to reduce the MTU in the network configuration of the VM. For example, if the MTU of a network is 1,500, the network can be configured in the VM as having an MTU of 1,336, reserving 144 bits for the encapsulation by the DV Filter.



FIG. 12 shows one embodiment of an active-ports table. The active-ports table has one entry (in each row) for each active VNIC and includes an OPI field, a LAN ID, and the MTU. The OPI includes the virtual lab server parameters "installation ID" and "fence ID". The installation ID identifies a particular implementation of a fenced configuration, and different clones will have different fence IDs. The fence ID identifies the fence associated with the VNIC. The LAN ID is an internal identifier of the underlying network that the private network (fence) overlays. Different fences may share the underlying LAN. The MTU indicates the maximum transmission unit on the network.



FIG. 13 shows an embodiment of a bridge table. As previously described, the bridge table resides in the DV Filter and is used to keep the addresses of the destination hosts where the VMs of the private network are executing. The table is organized by VNIC, also referred to as a port, each port being associated with the VNIC of a VM. The example shown in FIG. 13 includes entries for 3 ports, 0x4100b9f869e0, 0x4100b9f86d40, and 0x4100b9f86f30. Port 0x4100b9f869e0 has no entries in the bridge table yet, and the other two ports have 4 entries each. Each of these entries includes an inner MAC address, an outer MAC address, a "used" flag, an "age" value, and a "seen" flag.


The inner MAC address corresponds to the Ethernet address of another VM in the same private network. The outer MAC address corresponds to the Ethernet address of the host that the VM is on, and is the address that would be added in an encapsulating header to send a message to the corresponding VM. Of course, the address may be constructed as described in reference to FIG. 6. For example, the entry in bridge table 746 of FIG. 7 holds the inner MAC address of VM A and the outer MAC address for Host 1. The used flag indicates if the entry is being used, the age value indicates if the entry has been updated in a predetermined period of time, and the seen flag indicates if the entry has been used recently.
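The three tables of FIGS. 11-13 can be summarized as plain records. The definitions below are an illustrative sketch, with field names mirroring the descriptions above and types chosen by assumption:

```python
from dataclasses import dataclass

@dataclass
class MtuEntry:            # FIG. 11: one entry per LAN
    lan_id: int
    mtu: int

@dataclass
class ActivePort:          # FIG. 12: one row per active VNIC
    install_id: int        # part of the OPI ("3e" in the example below)
    fence_id: int          # part of the OPI ("0000fb" in the example below)
    lan_id: int            # underlying LAN that the fence overlays
    mtu: int

@dataclass
class BridgeEntry:         # FIG. 13: one row per learned peer VM
    inner_mac: bytes       # Ethernet address of a peer VM on the same private network
    outer_mac: bytes       # address of the host that peer VM runs on
    used: bool = False     # entry is in use
    age: int = 0           # tracks whether the entry was updated within the aging period
    seen: bool = False     # entry was used recently
```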


The tables in FIGS. 11-13 are interrelated. For example, the second entry in the active-ports table of FIG. 12 is for port 0x4100b9f86d40. The OPI is "3e,0000fb", which means that the installation ID is 3e and the fence ID is 0000fb. In the bridge table of FIG. 13, it can be observed that the outer MAC addresses for port 0x4100b9f86d40 have the same OUI (00:13:f5) and the same installation ID (3e). The remainder of each outer MAC address corresponds to the host IDs of different hosts (02:c2, 02:e2, 03:02, and 02:f2).
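This decomposition is mechanical; the sketch below splits the example outer MAC from FIG. 13 into its three components:

```python
def parse_outer_mac(mac: bytes) -> tuple:
    """Split an outer MAC into (OUI, installation ID, host ID) per FIG. 6."""
    return int.from_bytes(mac[:3], "big"), mac[3], int.from_bytes(mac[4:6], "big")

# The FIG. 13 example value 00:13:f5:3e:02:c2 decomposes as expected:
assert parse_outer_mac(bytes.fromhex("0013f53e02c2")) == (0x0013F5, 0x3E, 0x02C2)
```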



FIG. 14 shows the process flow of a method for private networking within a virtual infrastructure, in accordance with one embodiment of the invention. The process includes operation 1402 for sending a packet on a private virtual network from a first VM in a first host. The first VM and a second VM are members of a fenced group of computers that have exclusive direct access to the private virtual network, such that VMs outside the fenced group do not have direct access to packets on the private virtual network. From operation 1402, the method flows to operation 1404 for encapsulating the packet at the first host to include a new header and a fence identifier for the fenced group. See for example DV filter 724 of FIG. 7.


The packet is received at a host where the second VM is executing, in operation 1406, and the method continues in operation 1408 for de-encapsulating the packet to extract the new header and the fence identifier. In operation 1410, the de-encapsulated packet is delivered to the second VM after validating that the destination address in the packet and the fence identifier correspond to the address of the second VM and the fence identifier of the second VM.


Embodiments of the present invention may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a network.


With the above embodiments in mind, it should be understood that the invention can employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Any of the operations described herein that form part of the invention are useful machine operations. The invention also relates to a device or an apparatus for performing these operations. The apparatus may be specially constructed for the required purpose, such as a special purpose computer. When defined as a special purpose computer, the computer can also perform other processing, program execution or routines that are not part of the special purpose, while still being capable of operating for the special purpose. Alternatively, the operations may be processed by a general purpose computer selectively activated or configured by one or more computer programs stored in the computer memory or cache, or obtained over a network. When data is obtained over a network, the data may be processed by other computers on the network, e.g., a cloud of computing resources.


The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can include computer readable tangible media distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.


Although the method operations were described in a specific order, it should be understood that other housekeeping operations may be performed in between operations, or operations may be adjusted so that they occur at slightly different times, or may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing, as long as the processing of the overlay operations is performed in the desired way.


Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims
  • 1. A method of defining overlay-network encapsulation headers to establish a private virtual network (PVN) over a shared physical network, the method comprising: at a first filter defined on a first host computer: receiving an Ethernet packet sent by a first machine executing on the first host computer, the packet addressed to a second machine executing on a second host computer; performing layer-2 (L2) encapsulation on the Ethernet packet by encapsulating the Ethernet packet with (i) fence protocol data that includes a particular PVN identifier specifying that the packet is associated with the PVN and (ii) an Ethernet overlay-network L2 encapsulation header that allows the packet to be forwarded on the PVN to the second machine; and sending the L2 encapsulated packet to the second machine over the physical network, wherein a second filter executes on the second host computer to form a distributed virtual filter with the first filter that adds and removes encapsulating headers with the particular PVN identifier to allow the first and second machines to exchange packets associated with the particular PVN.
  • 2. The method of claim 1, wherein the first machine is a first virtual machine (VM) executing on the first host computer and the second machine is a second VM executing on the second host computer.
  • 3. The method of claim 2, wherein each VM has an associated virtual network interface card (VNIC) and the first filter is associated with a VNIC of the first VM.
  • 4. The method of claim 2, wherein the first filter is a module that executes on the first host computer outside of the first VM and that processes packets sent by the first VM.
  • 5. The method of claim 1, wherein receiving the Ethernet packet sent by the first machine comprises obtaining the packet as the packet passes along an egress data path from the first machine to a physical network interface card (PNIC) of the first host computer.
  • 6. The method of claim 5, wherein: the packet is obtained before the packet is processed by a software switch executing on the first host computer; sending the L2 encapsulated packet to the second machine comprises sending the L2 encapsulated packet to the software switch for forwarding to the PNIC to send the L2 encapsulated packet to the physical network; and the physical network forwards the L2 encapsulated packet to the second host computer, which removes the overlay-network L2 encapsulation header, uses a destination address in an original header of the packet to identify the second machine, and passes the packet to the second machine.
  • 7. The method of claim 1, wherein the first filter comprises a bridge table that stores addresses of destination hosts where machines of the private virtual network execute.
  • 8. The method of claim 1, wherein performing the L2 encapsulation comprises, when the size of the encapsulated packet exceeds a maximum-transmission unit (MTU) for the network, fragmenting the packet into at least two packets and encapsulating each of the two packets with (i) respective fence protocol data and (ii) the overlay-network L2 encapsulation header before sending the at least two encapsulated packets over the physical network.
  • 9. The method of claim 8, wherein the respective fence protocol data for each of the at least two packets comprises (i) a 2-bit field to indicate that the packet has been fragmented and (ii) a respective fragment sequence number that indicates a fragment number corresponding to the respective packet.
  • 10. A non-transitory machine readable medium storing a first filter for defining overlay-network encapsulation headers to establish a private virtual network (PVN) over a shared physical network, the first filter for execution by at least one hardware processing unit of a first host computer, the first filter comprising sets of instructions for: receiving an Ethernet packet sent by a first machine executing on the first host computer, the packet addressed to a second machine executing on a second host computer; performing layer-2 (L2) encapsulation on the Ethernet packet by encapsulating the Ethernet packet with (i) fence protocol data that includes a particular PVN identifier specifying that the packet is associated with the PVN and (ii) an Ethernet overlay-network L2 encapsulation header that allows the packet to be forwarded on the PVN to the second machine; and sending the L2 encapsulated packet to the second machine over the physical network, wherein a second filter executes on the second host computer to form a distributed virtual filter with the first filter that adds and removes encapsulating headers with the particular PVN identifier to allow the first and second machines to exchange packets associated with the particular PVN.
  • 11. The non-transitory machine readable medium of claim 10, wherein the first machine is a first virtual machine (VM) executing on the first host computer and the second machine is a second VM executing on the second host computer.
  • 12. The non-transitory machine readable medium of claim 11, wherein each VM has an associated virtual network interface card (VNIC) and the first filter is associated with a VNIC of the first VM.
  • 13. The non-transitory machine readable medium of claim 11, wherein the first filter executes on the first host computer outside of the first VM and processes packets sent by the first VM.
  • 14. The non-transitory machine readable medium of claim 10, wherein the set of instructions for receiving the Ethernet packet sent by the first machine comprises a set of instructions for obtaining the packet as the packet passes along an egress data path from the first machine to a physical network interface card (PNIC) of the first host computer.
  • 15. The non-transitory machine readable medium of claim 14, wherein: the packet is obtained before the packet is processed by a software switch executing on the first host computer; the set of instructions for sending the L2 encapsulated packet to the second machine comprises a set of instructions for sending the L2 encapsulated packet to the software switch for forwarding to the PNIC to send the L2 encapsulated packet to the physical network; and the physical network forwards the L2 encapsulated packet to the second host computer, which removes the fence protocol data and the overlay-network L2 encapsulation header, uses a destination address in an original header of the packet to identify the second machine, and passes the packet to the second machine.
  • 16. The non-transitory machine readable medium of claim 10, wherein the first filter uses a bridge table that stores addresses of destination hosts where machines of the private virtual network execute.
  • 17. The non-transitory machine readable medium of claim 10, wherein the set of instructions for performing the L2 encapsulation comprises sets of instructions for: fragmenting the packet into at least two packets when the size of the encapsulated packet exceeds a maximum-transmission unit (MTU) for the network; and encapsulating each of the two packets with (i) respective fence protocol data and (ii) the overlay-network L2 encapsulation header before sending the at least two encapsulated packets over the physical network.
  • 18. The non-transitory machine readable medium of claim 17, wherein the respective fence protocol data for each of the at least two packets comprises (i) a 2-bit field to indicate that the packet has been fragmented and (ii) a respective fragment sequence number that indicates a fragment number corresponding to the packet.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is related by subject matter to U.S. patent application Ser. No. 12/510,072, filed Jul. 27, 2009, and entitled “AUTOMATED NETWORK CONFIGURATION OF VIRTUAL MACHINES IN A VIRTUAL LAB ENVIRONMENT”, now issued as U.S. Pat. No. 8,924,524; U.S. patent application Ser. No. 12/510,135, filed Jul. 27, 2009, and entitled “MANAGEMENT AND IMPLEMENTATION OF ENCLOSED LOCAL NETWORKS IN A VIRTUAL LAB”, now issued as U.S. Pat. No. 8,838,756; U.S. patent application Ser. No. 12/571,224, filed Sep. 30, 2009, and entitled “PRIVATE ALLOCATED NETWORKS OVER SHARED COMMUNICATIONS INFRASTRUCTURE”, now issued as U.S. Pat. No. 8,619,771; and U.S. patent application Ser. No. 11/381,119, filed May 1, 2006, and entitled “VIRTUAL NETWORK IN SERVER FARM”, now issued as U.S. Pat. No. 7,802,000, all of which are incorporated herein by reference.

8838756 Dalal et al. Sep 2014 B2
8850060 Beloussov et al. Sep 2014 B1
8868698 Millington et al. Oct 2014 B2
8874425 Cohen et al. Oct 2014 B2
8880659 Mower et al. Nov 2014 B2
8892706 Dalal Nov 2014 B1
8924524 Dalal et al. Dec 2014 B2
9037689 Khandekar et al. May 2015 B2
9038062 Fitzgerald et al. May 2015 B2
9076342 Brueckner et al. Jul 2015 B2
9086901 Gebhart et al. Jul 2015 B2
9106540 Cohn et al. Aug 2015 B2
9900410 Dalal Feb 2018 B2
20010043614 Viswanadham et al. Nov 2001 A1
20020093952 Gonda Jul 2002 A1
20020194369 Rawlins et al. Dec 2002 A1
20030002468 Khalil et al. Jan 2003 A1
20030041170 Suzuki Feb 2003 A1
20030058850 Rangarajan et al. Mar 2003 A1
20030088697 Matsuhira May 2003 A1
20030120822 Langrind et al. Jun 2003 A1
20040073659 Rajsic et al. Apr 2004 A1
20040081203 Sodder Apr 2004 A1
20040098505 Clemmensen May 2004 A1
20040240453 Ikeda et al. Dec 2004 A1
20040249973 Alkhatib et al. Dec 2004 A1
20040267866 Carollo et al. Dec 2004 A1
20040267897 Hill et al. Dec 2004 A1
20050018669 Arndt et al. Jan 2005 A1
20050027881 Figueira et al. Feb 2005 A1
20050053079 Havala Mar 2005 A1
20050071446 Graham et al. Mar 2005 A1
20050083953 May Apr 2005 A1
20050120160 Plouffe et al. Jun 2005 A1
20050182853 Lewites et al. Aug 2005 A1
20050220096 Friskney et al. Oct 2005 A1
20060002370 Rabie et al. Jan 2006 A1
20060026225 Canali et al. Feb 2006 A1
20060029056 Perera et al. Feb 2006 A1
20060174087 Hashimoto et al. Aug 2006 A1
20060187908 Shimozono et al. Aug 2006 A1
20060193266 Siddha et al. Aug 2006 A1
20060221961 Basso et al. Oct 2006 A1
20060245438 Sajassi et al. Nov 2006 A1
20060291388 Amdahl et al. Dec 2006 A1
20070050520 Riley et al. Mar 2007 A1
20070055789 Claise et al. Mar 2007 A1
20070064673 Bhandaru et al. Mar 2007 A1
20070130366 O'Connell et al. Jun 2007 A1
20070156919 Potti et al. Jul 2007 A1
20070195794 Fujita et al. Aug 2007 A1
20070234302 Suzuki et al. Oct 2007 A1
20070260721 Bose et al. Nov 2007 A1
20070280243 Wray et al. Dec 2007 A1
20070286137 Narasimhan et al. Dec 2007 A1
20070297428 Bose et al. Dec 2007 A1
20080002579 Lindholm et al. Jan 2008 A1
20080002683 Droux et al. Jan 2008 A1
20080028401 Geisinger et al. Jan 2008 A1
20080040477 Johnson et al. Feb 2008 A1
20080043756 Droux et al. Feb 2008 A1
20080049621 McGuire et al. Feb 2008 A1
20080059556 Greenspan et al. Mar 2008 A1
20080059811 Sahita et al. Mar 2008 A1
20080071900 Hecker et al. Mar 2008 A1
20080086726 Griffith et al. Apr 2008 A1
20080159301 Heer Jul 2008 A1
20080162922 Swartz Jul 2008 A1
20080163207 Reumann et al. Jul 2008 A1
20080198858 Townsley et al. Aug 2008 A1
20080209415 Riel et al. Aug 2008 A1
20080215705 Liu et al. Sep 2008 A1
20080235690 Ang et al. Sep 2008 A1
20080244579 Muller et al. Oct 2008 A1
20080310421 Teisberg et al. Dec 2008 A1
20090113021 Andersson et al. Apr 2009 A1
20090113109 Nelson et al. Apr 2009 A1
20090116490 Charpentier et al. May 2009 A1
20090141729 Fan Jun 2009 A1
20090150527 Tripathi et al. Jun 2009 A1
20090254990 McGee et al. Oct 2009 A1
20090292858 Lambeth et al. Nov 2009 A1
20100040063 Srinivasan et al. Feb 2010 A1
20100058051 Imai Mar 2010 A1
20100107162 Edwards et al. Apr 2010 A1
20100115101 Lain et al. May 2010 A1
20100125667 Soundararajan May 2010 A1
20100131636 Suri et al. May 2010 A1
20100138830 Astete et al. Jun 2010 A1
20100154051 Bauer Jun 2010 A1
20100169880 Haviv et al. Jul 2010 A1
20100180275 Neogi et al. Jul 2010 A1
20100191881 Tauter et al. Jul 2010 A1
20100223610 DeHaan et al. Sep 2010 A1
20100235831 Dittmer et al. Sep 2010 A1
20100254385 Sharma et al. Oct 2010 A1
20100257263 Casado et al. Oct 2010 A1
20100281478 Sauls et al. Nov 2010 A1
20100306408 Greenberg et al. Dec 2010 A1
20100306773 Lee et al. Dec 2010 A1
20100329265 Lapuh et al. Dec 2010 A1
20100333189 Droux et al. Dec 2010 A1
20110022694 Dalal et al. Jan 2011 A1
20110022695 Dalal et al. Jan 2011 A1
20110023031 Bonola et al. Jan 2011 A1
20110035494 Pandey et al. Feb 2011 A1
20110075664 Lambeth et al. Mar 2011 A1
20110194567 Shen Aug 2011 A1
20110208873 Droux et al. Aug 2011 A1
20110299537 Saraiya et al. Dec 2011 A1
20120005521 Droux et al. Jan 2012 A1
20140317059 Lad et al. Oct 2014 A1
20150071301 Dalal Mar 2015 A1
20150334012 Butler et al. Nov 2015 A1
20180248986 Dalal Aug 2018 A1
Foreign Referenced Citations (4)
Number Date Country
2004145684 May 2004 JP
03058584 Jul 2003 WO
2008098147 Aug 2008 WO
WO-2010041996 Apr 2010 WO
Non-Patent Literature Citations (5)
Entry
Author Unknown, “Introduction to VMware Infrastructure: ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5,” Revision Dec. 13, 2007, pp. 1-46, VMware, Inc., Palo Alto, California, USA.
Author Unknown, “iSCSI SAN Configuration Guide: ESX Server 3.5, ESX Server 3i version 3.5,” VirtualCenter 2.5, Nov. 2007, 134 pages, Revision: Nov. 29, 2007, VMware, Inc., Palo Alto, California, USA.
Author Unknown, “Cisco VN-Link: Virtualization-Aware Networking,” Mar. 2009, 10 pages, Cisco Systems, Inc.
Author Unknown, “Virtual Machine Mobility Planning Guide,” Oct. 2007, 33 pages, Revision Oct. 18, 2007, VMware, Inc., Palo Alto, CA.
Pollack, Melvin H., “Office Action”, U.S. Appl. No. 14/585,235, dated Oct. 19, 2016, 12 pages.
Related Publications (1)
Number Date Country
20210227057 A1 Jul 2021 US
Divisions (1)
Number Date Country
Parent 12819438 Jun 2010 US
Child 14546787 US
Continuations (2)
Number Date Country
Parent 15898613 Feb 2018 US
Child 17200801 US
Parent 14546787 Nov 2014 US
Child 15898613 US