Assignment of unique physical network addresses for logical network addresses

Information

  • Patent Grant
  • 11595345
  • Patent Number
    11,595,345
  • Date Filed
    Tuesday, May 5, 2020
  • Date Issued
    Tuesday, February 28, 2023
Abstract
Some embodiments provide a method for a network controller that manages multiple logical networks implemented by multiple managed forwarding elements (MFEs) operating on multiple host machines. The method receives a notification from a particular MFE that an interface corresponding to a logical port of a logical forwarding element has connected to the particular MFE and has a particular logical network address. The method assigns a unique physical network address to the interface. Each of multiple interfaces connected to the particular MFE is assigned a different physical network address. The method provides the assigned unique physical network address to the particular MFE for the particular MFE to convert data messages sent from the particular logical network address to have the unique physical network address.
Description
BACKGROUND

Network virtualization plays a crucial role in the operation of datacenters, with two different approaches generally used to achieve network virtualization. In the overlay approach, the physical datacenter network is used as a packet carrier, and the network functionalities (of the logical networks) are separated and implemented in the upper overlaying layer. A common technique is to embed logical L2 (data link layer) packets in physical L3 (network layer) packets. In the underlay approach, the physical network devices (e.g., switches, routers) are programmed based on the logical network model, so that the physical datacenter network acts as both the packet carrier and logical network provider.


Using overlays provides flexibility, but the network stack includes five layers rather than three for IP networking. The extra protocol layers consume extra physical network bandwidth, which means less bandwidth is available for the actual payload. In addition, the packet encapsulation and resulting fragmentation and checksum calculation use extra CPU cycles, which otherwise would be available for guest workloads (e.g., virtual machines). As such, other techniques for network virtualization would be useful.


BRIEF SUMMARY

Some embodiments provide a method for implementing multiple logical networks in a physical network without using encapsulation, and without the physical network being required to perform logical network services and processes. Instead, some embodiments map each logical network address to a unique physical network address, and use address replacement on logical network packets rather than encapsulation.


In some embodiments, a network controller (or cluster of network controllers) maintains a pool of available physical network addresses, and handles requests from managed forwarding elements (MFEs) to assign unique physical addresses to logical network addresses for interfaces that connect to the MFEs. For example, when an interface (e.g., a virtual network interface controller (VNIC)) of a virtual machine (VM) or other data compute node (DCN) corresponding to a logical port of a logical forwarding element attaches to an MFE, that interface is assigned a logical network address. This assignment may be accomplished via dynamic host configuration protocol (DHCP), static assignment, or other pre-configuration. The MFE notifies the network controller of the new logical network address.


The network controller receives this notification with the logical network address and assigns a unique physical network address for the interface (i.e., mapped to the logical network address). The network controller provides this physical network address to the requesting MFE, and stores the mapping between logical network address (and interface) and physical network address. In some embodiments, based on its network topology information, the network controller distributes the mapping to other MFEs that could potentially be sending packets to (or receiving packets from) the logical network address, and would thus need the physical mapping. In other embodiments, when a different MFE receives a first packet sent to the logical network address (e.g., from one of its local DCNs) or from the physical network address, that MFE sends a request to the controller for the mapping. The controller notifies the MFE regarding the mapping so that the MFE can use the mapping to process packets, as described further below.


In some embodiments, each physical network address is not just unique within a particular logical network, but is unique among all logical interfaces for all logical networks implemented within the physical network. That is, while logical address spaces may overlap between separate logical networks (i.e., the same subnet and/or IP address could be used in multiple logical networks), the physical network uses a single network address space. In a typical datacenter, this physical address space is allocated privately (i.e., does not need to be used or known outside of the datacenter), and thus the available address space is fairly large.


To process a packet at the source MFE (i.e., the MFE that sends the packet onto the physical network, which is often the MFE that first receives the packet from its source DCN), the source MFE first performs logical network processing. This processing may include logically forwarding the packet through one or more logical forwarding elements (e.g., a logical switch, a logical router, and another logical switch), performing logical ACL and distributed firewall checks, etc. If the packet is routed, the time to live and logical MAC address may be changed.


Once this logical processing is complete, a typical overlay network would encapsulate the packet based on its destination address being mapped to a physical tunnel endpoint address. However, in some embodiments, the MFE determines whether the packet is eligible for address replacement instead of encapsulation. In some embodiments, only unicast packets sent between logical network addresses are eligible for address replacement. That is, multicast/broadcast packets, and packets sent to (or received from) a destination outside of the logical network are not eligible for address replacement. Assuming that the packet is eligible (and the MFE has the mapping information for the source and destination addresses), the source MFE replaces the logical source and destination network (e.g., IP) addresses in the packet with the unique physical addresses to which they are mapped. Some embodiments also modify the source and destination data link (e.g., MAC) addresses with those that would be used for an encapsulated packet (e.g., a source MAC corresponding to the physical interface of the MFE and a destination MAC corresponding to the physical network next hop).


In addition, a logical interface might send a packet that could cause the physical network routers to perform various unwanted actions when using address replacement (e.g., an ICMP packet). Whereas an encapsulated packet would have this information hidden in the inner header (being encapsulated with, e.g., a TCP or UDP packet), with address replacement the physical network would see this protocol and potentially act upon it. Thus, for certain protocols, the source MFE replaces the protocol header field value with an unused or reserved protocol value that (i) would not cause the physical network to take any unwanted action and (ii) should not be used within the logical network.


The packet is then processed through the physical network as normal. Once the packet reaches the destination MFE, additional processing is required to handle the non-encapsulated packet. The destination MFE maps the protocol field value to its original value, if needed (i.e., if the protocol value is one of the unused or reserved values to which a different value was mapped at the source MFE). The physical network addresses are also replaced with the logical network addresses based on the mappings stored by the MFE. To determine the logical data link addresses, some embodiments use the network topology. If the source and destination network addresses are on the same logical switch, then the data link addresses will be those of the corresponding logical interfaces. However, if the source network address is on a different logical switch from the destination, then the data link address of the logical router interface that connects to the logical switch will be the source data link address. Once the data link layer address is also replaced, the MFE can perform any additional required logical processing and deliver the packet to the destination interface.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, and Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description, and Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 conceptually illustrates a network controller and its communication with an MFE to provide the MFE with a physical IP address for a newly connected interface.



FIG. 2 conceptually illustrates a process of some embodiments for assigning a physical IP address to map to a logical IP address.



FIG. 3 conceptually illustrates a process of some embodiments for releasing an assigned physical IP address when a logical interface is moved or released.



FIG. 4 conceptually illustrates a set of MFEs that implement at least one logical network within a datacenter network of some embodiments, and the difference in physical network traffic between two logical network endpoints (e.g., VMs) and physical network traffic between a logical network endpoint and an external network.



FIG. 5 conceptually illustrates a process of some embodiments for replacing logical IP addresses with physical IP addresses.



FIG. 6 conceptually illustrates a process of some embodiments for replacing physical IP addresses with logical IP addresses before delivering a packet to an interface.



FIG. 7 conceptually illustrates a logical network and the logical to physical IP address mappings assigned for the endpoints of that network.



FIGS. 8 and 9 illustrate examples of packets sent through the physical implementation of that logical network.



FIG. 10 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Some embodiments provide a method for implementing multiple logical networks in a physical network without using encapsulation, and without the physical network being required to perform logical network services and processes. Instead, some embodiments map each logical network address to a unique physical network address, and use address replacement on logical network packets rather than encapsulation.


In some embodiments, a network controller (or cluster of network controllers) maintains a pool of available physical network addresses, and handles requests from managed forwarding elements (MFEs) to assign unique physical addresses to logical network addresses for interfaces that connect to the MFEs. For example, when an interface (e.g., a virtual network interface controller (VNIC)) of a virtual machine (VM) or other data compute node (DCN) corresponding to a logical port of a logical forwarding element attaches to an MFE, that interface is assigned a logical network address.



FIG. 1 conceptually illustrates such a network controller 100 and its communication with an MFE 105 to provide the MFE 105 with a physical IP address for a newly connected interface. It should be understood that while a single central network controller 100 is shown, in some embodiments a cluster of such controllers operates to communicate with numerous MFEs on numerous host machines.


As shown, the MFE 105 operates on a host 110, and at least one DCN (in this case a VM 115) attaches to the MFE 105. The MFE 105, in some embodiments, is a virtual switch or other software forwarding element that operates in the virtualization software (e.g., hypervisor) of the host machine 110, and which is configured by a network control system that includes the network controller 100. In some embodiments, a local controller operates on the host machine 110 (e.g., also within the virtualization software). This local controller receives configuration data from the network controller 100 and translates the configuration data from the network controller 100 for the MFE 105. In some such embodiments, the communication between the MFE 105 and the controller 100 is sent through the local controller.


The VM 115 attaches to the MFE 105 via a VNIC or similar interface. When a VNIC attaches to the network, it will be assigned a logical network address. In the subsequent discussion, Internet Protocol (IP) addresses will be used, but it should be understood that these addresses could be other types of network layer addresses in different embodiments. A logical IP address is the address that the VNIC uses to send/receive traffic on a logical network. As described further below, multiple distinct logical networks may be implemented within a single physical datacenter network, with each logical network having its own address space (which can overlap with the address spaces of other logical networks). The MFEs implement the logical networks based on the configuration data received from the network controllers.


The assignment of an IP address may be accomplished via dynamic host configuration protocol (DHCP), static assignment, other pre-configuration of the IP, etc. When the MFE 105 identifies the logical IP address of a new interface (by intercepting a DHCP packet, receiving the information from the VNIC, processing a packet from the VNIC, etc.), the MFE 105 notifies the network controller 100 of the new logical network address and interface, so that the network controller 100 can assign a unique physical IP address for the interface (i.e., mapped to the logical network address).



FIG. 2 conceptually illustrates a process 200 of some embodiments for assigning a physical IP address to map to a logical IP address. The process 200 is performed by a network controller (e.g., the controller 100) in response to receiving a request from an MFE (e.g., the MFE 105) for a physical IP address.


As shown, the process 200 begins by receiving (at 205) a new logical IP address and a corresponding interface from an MFE. Because the logical IP address is not necessarily exclusive to the logical network, an additional identifier is required for the mapping. Some embodiments use a unique VNIC identifier or a unique logical port identifier. FIG. 1 illustrates that the MFE 105 sends a message 120 to the network controller 100 with the interface and logical IP address of the VNIC by which the VM 115 connects to the MFE 105. As mentioned, the MFE may have become aware of this after a DHCP request, when a first packet is sent by the VM 115, etc. In some embodiments, the message 120 from the MFE only needs to identify the presence of the logical interface on the host 110, as the controller 100 already has the corresponding logical IP address that has been assigned to the interface.


The process 200, in response to this request, assigns (at 210) an available unique physical IP address to the logical IP address/interface combination. In some embodiments, each physical network address is not just unique within a particular logical network, but is unique among all logical interfaces for all logical networks implemented within the physical network. That is, while logical address spaces may overlap between separate logical networks (i.e., the same subnet and/or IP address could be used in multiple logical networks), the physical network uses a single network address space. In a typical datacenter, this physical address space is allocated privately (i.e., does not need to be used or known outside of the datacenter), and thus the available address space is fairly large. In some embodiments, the datacenter may use both IPv4 and IPv6 addresses. In such embodiments, these addresses are allocated separately. That is, when a logical IPv4 address is sent to the controller 100, the controller 100 allocates a unique physical IPv4 address, and when a logical IPv6 address is sent to the controller 100, the controller 100 allocates a unique physical IPv6 address.


The process 200 then provides (at 215) the assigned unique physical IP address to the requesting MFE. As shown in FIG. 1, the network controller 100 sends a message 125 with the assigned physical IP address to the MFE 105. As noted, in some embodiments this message is sent to a local controller on the host 110, which in turn provides the data to the MFE 105. The MFE 105 stores this mapping, and uses the mapping to process packets sent to and from the VM 115, as described in more detail below. In some embodiments, the MFE sends a gratuitous ARP packet to notify the physical network of the new IP address.


The process 200 also stores (at 220) the mapping of logical IP address and interface to the physical IP address. As shown in FIG. 1, the network controller 100 stores a physical to logical network address mapping table 130, as well as a pool of available IP addresses 135 and a waiting pool of IP addresses 140. The network controller 100 stores this mapping table (which, in some embodiments, also identifies the host machine for each logical IP address and interface combination) in order to distribute the mappings to other MFEs that need the data. In some embodiments, based on its network topology information, the network controller distributes the mapping to other MFEs that could potentially be sending packets to (or receiving packets from) the logical network address, and would thus need the physical mapping. In other embodiments, when a different MFE receives a first packet sent to the logical network address (e.g., from one of its local DCNs) or from the physical network address, that MFE sends a request to the controller 100 for the mapping. The controller 100 notifies the MFE regarding the mapping so that the MFE can use the mapping to process packets, as described further below.


As noted, the network controller 100 also includes a pool 135 of available physical IP addresses and a waiting pool 140 of physical IP addresses. The physical IP addresses, as described above, are unique within a datacenter (or other privately-allocated physical network). Thus, the available physical IP addresses pool 135 lists all of the IP addresses available to be used for mapping—i.e., the physical IP addresses that are not currently mapped to a logical IP address of an operating interface. Once the network controller 100 assigns a particular physical IP address to an interface, the controller 100 stores this mapping in the table 130 and removes the physical IP address from the pool 135 of available IPs.
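

For illustration only, the following Python sketch models the controller-side bookkeeping described above for process 200: separate pools of available physical addresses for IPv4 and IPv6, a mapping table keyed by logical IP address and interface, and an assignment step that removes the chosen address from the pool. The class, function, and interface names and the pool contents are assumptions made for the example, not details of any described embodiment.

```python
import ipaddress


class PhysicalIpAllocator:
    """Minimal sketch of the controller-side assignment in process 200."""

    def __init__(self, available_v4, available_v6):
        # Pools of unique physical IPs not currently in use (pool 135),
        # kept separately for IPv4 and IPv6 as described above.
        self.available = {4: set(available_v4), 6: set(available_v6)}
        # (logical IP, interface) -> physical IP (mapping table 130).
        self.mappings = {}

    def assign(self, logical_ip, interface_id):
        """Assign a free physical IP of the same family as the logical IP."""
        key = (logical_ip, interface_id)
        if key in self.mappings:
            return self.mappings[key]  # interface already has a mapping
        version = ipaddress.ip_address(logical_ip).version
        physical_ip = sorted(self.available[version])[0]
        self.available[version].remove(physical_ip)
        self.mappings[key] = physical_ip
        return physical_ip


# Hypothetical request from an MFE for an interface with logical IP 10.1.1.5.
allocator = PhysicalIpAllocator({"192.168.1.10", "192.168.1.11"}, set())
print(allocator.assign("10.1.1.5", "vnic-1"))  # -> 192.168.1.10
```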



FIG. 3 conceptually illustrates a process 300 of some embodiments for releasing an assigned physical IP address when a logical interface is moved or released. The process 300 is performed by a network controller (e.g., the controller 100) in response to receiving a notification from an MFE (e.g., the MFE 105) that a logical interface is no longer in use.


As shown, the process 300 begins by receiving (at 305) from an MFE (or a local controller operating on a host with an MFE) a notification that an interface with a logical IP address is no longer present on the MFE. If a VM is migrated to a different host, some embodiments release the physical IP and reassign a new one; other embodiments keep the same logical IP to physical IP mapping. Other circumstances that could cause a logical IP address to no longer be present on an MFE are the removal of that interface from its logical network (i.e., by an administrator changing the logical network configuration), or the logical IP is changed (e.g., also by a change to the logical network configuration).


In response, the process 300 places (at 310) the physical IP address corresponding to the released logical IP address in a waiting pool for a threshold period of time. As indicated, the network controller 100 includes a waiting pool 140 for physical IP addresses. The waiting pool 140 is used to ensure that a physical IP address is not reallocated too quickly after being released, giving the network time to flush packets that may still be sent to the previous interface to which the physical IP address was mapped.


Thus, the process determines (at 315) whether the period of time has expired. If not, the process continues to evaluate this until the period of time expires. It should be understood that the process 300 (as well as the other processes described herein) is a conceptual process, and that some embodiments do not perform continuous checks for each physical IP address in the waiting pool 140. Instead, some embodiments use an event-driven process that simply waits and then takes action upon the waiting period expiring. Once the period of time has expired, the process 300 moves (at 320) the physical address from the waiting pool into the pool of available physical IP addresses. That is, the network controller 100 moves the IP address from the waiting pool 140 to the available IP address pool 135.
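

As a continuation of the previous sketch, the release and reclamation behavior of process 300 can be pictured as follows. The waiting period, function names, and the plain-dictionary representation of the pools are illustrative assumptions; the description above does not prescribe a particular quarantine duration or data structure.

```python
import ipaddress
import time

WAIT_SECONDS = 60  # hypothetical quarantine period; not fixed by the description


def release(mappings, waiting_pool, logical_ip, interface_id):
    """Move the physical IP of a released interface into the waiting pool 140."""
    physical_ip = mappings.pop((logical_ip, interface_id))
    waiting_pool[physical_ip] = time.monotonic()


def reclaim_expired(waiting_pool, available_pools):
    """Return quarantined addresses whose timer expired to the available pool 135.

    Shown as a periodic sweep for simplicity; as noted above, an implementation
    may instead act on a per-address timer event.
    """
    now = time.monotonic()
    for physical_ip, released_at in list(waiting_pool.items()):
        if now - released_at >= WAIT_SECONDS:
            del waiting_pool[physical_ip]
            version = ipaddress.ip_address(physical_ip).version
            available_pools[version].add(physical_ip)


# Hypothetical usage with the mapping and pools from the assignment sketch above.
mappings = {("10.1.1.5", "vnic-1"): "192.168.1.10"}
waiting, available = {}, {4: set(), 6: set()}
release(mappings, waiting, "10.1.1.5", "vnic-1")
reclaim_expired(waiting, available)  # nothing moves until WAIT_SECONDS elapse
```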


The above description relates to the network controller operations to assign and manage the logical to physical IP address mappings. Once these mappings are assigned, packets are sent between MFEs without encapsulation (at least for certain packets that meet certain criteria). FIG. 4 conceptually illustrates a set of MFEs that implement at least one logical network within a datacenter network 400 of some embodiments. Specifically, this figure illustrates the difference in physical network traffic between two logical network endpoints (e.g., VMs) and physical network traffic between a logical network endpoint and an external network.


As shown, the datacenter 400 includes two host machines 405 and 410 that host VMs, which belong to the same logical network (they may attach to the same logical switch or different logical switches). The VMs 415 and 420 connect to MFEs 425 and 430, respectively, which operate on the host machines 405 and 410 to implement the logical network. In addition, the logical network to which the VMs 415 and 420 belong includes a connection (e.g., a logical router connection) to an external network 435. This connection is implemented by a gateway 440 operating on a third host machine 445. In some embodiments, the gateway 440 is a separate component of a logical router, and may be implemented in a VM or other DCN on the host 445, in the datapath of the host 445, etc.


When the VM 420 (or the VM 415) sends traffic to the external network 435 or receives traffic from this external network, the traffic between the gateway 440 and the MFE 430 is encapsulated with the physical IP addresses. As shown by the packet 450, this traffic includes inner IP and Ethernet headers as well as outer (encapsulation) IP and Ethernet headers. For the sake of simplicity, the other inner and outer protocols (e.g., transport protocols) are not shown here. Because the external IP address will not have a mapping to a unique IP address, if the MFE or gateway were to replace this IP in the packet (e.g., with the IP address of a PNIC of the host 445), the receiving MFE/gateway would not be able to map this back to the correct IP address. Instead, encapsulation is used for this communication between logical network endpoints and the external network in order to preserve these addresses.


On the other hand, when the VM 415 sends a packet to the VM 420 (or vice versa), the MFE 425 performs address replacement to replace the logical IP (and logical MAC) addresses with physical IP and MAC addresses, as indicated by the packet 455. This packet 455 has fewer headers and thus more room for payload without fragmentation if the network is constrained by a maximum transmission size. Address replacement is available for the packet 455 because the traffic is unicast communication between two logical network endpoints that have one-to-one mappings with physical IP addresses. In some embodiments, the MFEs do not use address replacement for multicast/broadcast communications, because the packets are sent to multiple physical destinations. However, in other embodiments, at least some multicast/broadcast packets are replicated into unicast packets by the MFE (e.g., a separate unicast packet for each destination, each packet having a different destination address), and these unicast packets can be sent onto the physical network using address replacement rather than encapsulation.



FIGS. 5 and 6 describe processes performed by a source MFE (i.e., the first-hop MFE for a packet) and a destination MFE (the recipient of such a packet via the physical network) to perform address replacement on a packet. These processes assume that the MFEs performing the respective processes have the logical IP to physical IP mapping information, and do not need to request this information from a network controller in order to process the packet.


The processes of FIGS. 5 and 6 will be described in part by reference to FIGS. 7-9. FIG. 7 conceptually illustrates a logical network 700 and the logical to physical IP address mappings assigned for the endpoints of that network, while FIGS. 8 and 9 illustrate examples of packets sent through the physical implementation of that logical network. The logical network 700 includes two logical switches 705 and 710 that are logically connected by a logical router 715. Two VMs (VM1 and VM2) connect to the first logical switch 705 and two VMs (VM3 and VM4) connect to the second logical switch 710. Each of these logical interfaces has a MAC address (MAC A, MAC B, MAC C, and MAC D). In addition, the logical router downlinks (interfaces to the logical switches) have their own logical MAC addresses (MAC E and MAC F).


The logical to physical IP address mapping table 720 is information that would be stored by a network controller (or network controller cluster), as well as the MFEs that implement the logical network. As shown in this table, the VMs are implemented on three hosts, and thus the three MFEs operating on these hosts would store the information in the mapping table 720. VM1 and VM3 are implemented on a first host, with VM2 on a second host and VM4 on a third host. The first logical switch 705 is assigned a subnet 10.1.1.0/24, and the logical IP addresses of the two VMs on this subnet are 10.1.1.5 and 10.1.1.6. Similarly, the second logical switch 710 is assigned a subnet 10.2.1.0/24, and the logical IP addresses of the two VMs on this subnet are 10.2.1.5 and 10.2.1.6. According to the mapping table 720, each of these logical interfaces maps to a unique physical IP address. While this example shows only a single logical network, if other logical networks were implemented on the hosts (or even on some of the hosts), those hosts would also map the logical IP addresses of the additional logical networks to unique physical IP addresses. A single host could, for example, have numerous mappings for the logical IP address 10.1.1.5, to different physical IP addresses for different interfaces of different logical networks.
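

Purely as an illustration, the mapping table 720 might be represented on an MFE as a dictionary keyed by logical IP address and interface, since a logical IP alone is not unique across logical networks. The interface names and all physical addresses other than 192.168.1.10 (which FIG. 8, discussed below, associates with VM1) are hypothetical placeholders.

```python
# Sketch of mapping table 720 as an MFE might store it.  Keys combine the
# logical IP with an interface identifier, because the same logical IP can
# appear in multiple logical networks.
MAPPING_TABLE = {
    ("10.1.1.5", "vnic-vm1"): "192.168.1.10",
    ("10.1.1.6", "vnic-vm2"): "192.168.1.11",  # placeholder value
    ("10.2.1.5", "vnic-vm3"): "192.168.1.12",  # placeholder value
    ("10.2.1.6", "vnic-vm4"): "192.168.1.13",  # placeholder value
}
```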



FIG. 5 conceptually illustrates a process 500 of some embodiments for replacing logical IP addresses with physical IP addresses. In some embodiments, the source MFE for a packet (i.e., the MFE to which the source interface for the packet connects) performs this process 500 on the packet upon receiving the packet (e.g., from a VNIC).


As shown, the process 500 begins by receiving (at 505) a packet from an interface with a logical IP address. The packet, as sent, will have logical source and destination IP addresses as well as logical source and destination MAC addresses. The source addresses are those of the interface from which the packet was received (e.g., the VNIC or similar interface) by the MFE. The destination IP address is the address of the ultimate destination for the packet, while the MAC address is either that of the destination (if the destination is on the same logical switch) or of the local logical gateway (if the packet requires logical routing).



FIGS. 8 and 9 illustrate examples of such packets as they are sent through the physical network. In FIG. 8, VM1 sends a packet 800 to VM2 (on the same logical switch, but operating in a different physical host machine). The packet 800, as sent to an MFE 805, has a source IP address of 10.1.1.5, a destination IP address of 10.1.1.6, a source MAC address of MAC A, and a destination MAC address of MAC B. In addition, the protocol field of the IP header has the value 17 (for User Datagram Protocol (UDP)). In FIG. 9, VM1 sends a packet 900 to VM4 (on a different logical switch and operating in a different physical host machine). The packet 900, as sent to the MFE 805, has a source IP address of 10.1.1.5, a destination IP address of 10.2.1.6, a source MAC address of MAC A, and a destination MAC address of MAC E (corresponding to the default gateway for VM1). In addition, the protocol field of the IP header has the value 1 (for Internet Control Message Protocol (ICMP)).


Returning to FIG. 5, the process 500 performs (at 510) logical processing on the received packet. That is, the MFE processes the packet through the logical network, which may include application of ACL and firewall (e.g., distributed firewall) rules, network address translation (NAT) processing, distributed load balancing, etc. The logical processing also includes logical switching and/or routing. If logical routing is required (e.g., for the packet 900 of FIG. 9), the logical MAC address is modified and the time to live (TTL) is decremented for the packet.


After logical processing is completed, the process 500 determines (at 515) whether the packet is eligible for address replacement. In some embodiments, only unicast packets sent between logical network addresses are eligible for address replacement. That is, multicast/broadcast packets, and packets sent to (or received from) a destination outside of the logical network are not eligible for address replacement. Because the logical IP addresses are no longer in the packet at all when address replacement is used, some embodiments only use the technique when there is a 1:1 mapping between the logical IP addresses being replaced and the physical IP addresses that replace them.


In the case of broadcast/multicast, the MFEs of some embodiments do not use address replacement because the packets are sent to multiple physical destinations. However, in other embodiments, at least some multicast/broadcast traffic is replicated into multiple unicast packets by the MFE, and these unicast packets can be sent onto the physical network using address replacement rather than encapsulation. For packets sent to/from the external network, using address replacement would require assigning unique physical IP addresses for every external IP address that communicated with the logical network(s). Given the large number of such IP addresses and the typically transient nature of such communication, there is likely to be less value in assigning local physical IP addresses for them.
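

The eligibility test of operation 515 can be sketched as follows; the packet field names and table layout are illustrative assumptions rather than a description of any particular MFE implementation.

```python
def eligible_for_address_replacement(packet, mapping_table):
    """Sketch of the eligibility check (operation 515 in process 500).

    Only unicast packets between two logical network endpoints whose logical
    IPs have one-to-one physical mappings are rewritten; multicast/broadcast
    packets and packets to or from external addresses fall back to
    encapsulation.
    """
    if packet["is_broadcast"] or packet["is_multicast"]:
        return False
    src_mapped = (packet["src_ip"], packet["src_interface"]) in mapping_table
    dst_mapped = (packet["dst_ip"], packet["dst_interface"]) in mapping_table
    return src_mapped and dst_mapped  # external endpoints have no mapping
```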


If the packet is not eligible for address replacement (e.g., the packet is a multi-recipient packet, or is addressed to or received from an external IP address that is not a logical network endpoint), the process 500 encapsulates (at 520) the packet. For the encapsulation headers, some embodiments use tunnel endpoint IP addresses that are on the physical network but separate from the unique physical IP addresses used for address replacement. The process 500 then proceeds to 550, described below.


On the other hand, when the packet is eligible for address replacement, the process identifies (at 525) the unique physical IP addresses for the source and destination logical IP addresses and interfaces. The source MFE identifies the logical IP addresses based on the data in the packet header fields, and the source interface based on the interface from which the packet is received. The destination logical interface is identified by the MFE during the logical processing operations (e.g., during logical forwarding).


The MFE consults its IP address mapping table to identify the physical IP addresses. In some embodiments, if the MFE does not have a unique physical IP address stored for the destination logical IP address and interface (or the source, if this is the initial packet from the source interface), the MFE sends a message to the network controller requesting the unique physical IP address. In some embodiments (not shown in this process), rather than wait for the controller, the first packet (or first several packets) are encapsulated rather than sent using address replacement, until the MFE receives the corresponding physical IP address from the network controller.


Assuming that the physical IP addresses are identified, however, the process 500 replaces (at 530) the logical IP addresses in the packet with the identified unique physical IP addresses. In addition, the process modifies (at 532) the time to live (TTL) field of the packet to account for the number of physical network hops the packet will traverse (each of which will decrement the TTL field). In some embodiments, the TTL field should only be decremented by logical processing (for each logical router that processes the packet). The physical datacenter network will often be stable with respect to the number of physical hops between two physical endpoints (when a logical network interface is migrated, this could change the number of physical network hops, but the interface will be assigned a new unique physical network address at this point). Some embodiments use probe messages or other techniques to determine the number of hops to each possible destination physical IP address, and store this information in the mapping tables (e.g., as another column in the table 720).


The process 500 also replaces (at 535) the logical MAC addresses with physical network MAC addresses. The source MAC is that of the physical interface to which the source physical IP address corresponds, while the destination MAC is that of the local gateway (unless the destination physical interface is on the same physical switch as the source physical interface).
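

One possible shape of the rewrite performed at operations 525-535, including the TTL padding discussed above, is sketched below. The field names, the hop_counts structure, and the MAC parameters are assumptions made for this example; in particular, hop_counts stands in for per-destination values an MFE might learn from probe messages.

```python
def rewrite_for_physical_network(packet, mapping_table, hop_counts,
                                 pnic_mac, next_hop_mac):
    """Sketch of operations 525-535: swap in physical IPs, pad the TTL,
    and set physical MAC addresses."""
    src_phys = mapping_table[(packet["src_ip"], packet["src_interface"])]
    dst_phys = mapping_table[(packet["dst_ip"], packet["dst_interface"])]
    packet["src_ip"], packet["dst_ip"] = src_phys, dst_phys

    # Each physical hop decrements the TTL, so add the expected hop count up
    # front so that only logical routers effectively decrement it end to end.
    packet["ttl"] += hop_counts[dst_phys]

    # Source MAC of the physical interface owning the source physical IP;
    # destination MAC of the physical next hop (the local gateway, unless the
    # destination is on the same physical switch).
    packet["src_mac"] = pnic_mac
    packet["dst_mac"] = next_hop_mac
    return packet
```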



FIG. 8 illustrates that the packet sent by the source MFE 805 has physical source and destination IP addresses that have replaced the logical addresses. The source and destination IP addresses are replaced with the unique physical IP addresses shown in the mapping table 720 as corresponding to 10.1.1.5 (VM1) and 10.1.1.6 (VM2). For the physical MAC addresses, the source MAC (PMAC1) is that of the PNIC to which the 192.168.1.10 address corresponds, while the destination MAC (PMAC2) is that of the local default gateway. FIG. 9 illustrates similar address replacement of the source and destination IP and MAC addresses for the packet 900. The same source physical IP address is used, while the destination IP address corresponding to 10.2.1.6 (VM4) is used. In this case, the same physical MAC addresses are used as for the first packet, because the packet is again sent to the local default gateway on the physical network.


In addition to replacing the logical addresses with physical addresses, the process 500 also determines (at 540) whether the protocol field of the IP header matches one of a set of pre-specified values. When the protocol field does match one of these pre-specified values, the process replaces (at 545) the protocol field value with a replacement value. A logical interface (i.e., the DCN to which the logical interface belongs) might send a packet that could cause the physical network routers to perform various unwanted actions when using address replacement (e.g., an ICMP packet). Whereas an encapsulated packet would have this information hidden in the inner header (being encapsulated with, e.g., a TCP or UDP packet), with address replacement the physical network would see this protocol and potentially act upon it. Thus, for certain protocols, the source MFE replaces the protocol header field value with an unused or reserved protocol value that (i) would not cause the physical network to take any unwanted action and (ii) should not be used within the logical network.


For example, the packet 800 of FIG. 8 has the protocol field value 17, which corresponds to UDP. As UDP packets will be forwarded normally by the routers of the physical network, this protocol field value is not modified by the MFE 805. On the other hand, the packet 900 of FIG. 9 has the protocol field value 1, which corresponds to ICMP. ICMP packets may be acted upon by the physical routers in ways that are not desired, so the MFE 805 replaces this with the value 143, which is a reserved value that will be ignored by the physical network routers.
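

The protocol-field handling on both ends can be pictured as a small lookup in each direction. The 1 (ICMP) to 143 pairing follows the FIG. 9 example above; the dictionary names and any additional entries an implementation might add are assumptions of this sketch.

```python
# Protocol numbers the physical routers might act upon, mapped to unused or
# reserved values they will ignore.  Entries beyond the FIG. 9 example are an
# operator's choice.
PROTOCOL_MASK = {1: 143}
PROTOCOL_UNMASK = {v: k for k, v in PROTOCOL_MASK.items()}


def mask_protocol(packet):
    """Source MFE, operations 540-545: hide sensitive protocol values."""
    packet["ip_proto"] = PROTOCOL_MASK.get(packet["ip_proto"], packet["ip_proto"])
    return packet


def unmask_protocol(packet):
    """Destination MFE, operations 620-625: restore the original value."""
    packet["ip_proto"] = PROTOCOL_UNMASK.get(packet["ip_proto"], packet["ip_proto"])
    return packet
```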


Finally, whether the packet is encapsulated or has address replacement performed, the process transmits (at 550) the packet to the physical network (i.e., the physical datacenter network 810). The packet is then processed through the physical network as normal, during which the physical MAC addresses may be modified.



FIG. 6 conceptually illustrates a process 600 of some embodiments for replacing physical IP addresses with logical IP addresses before delivering a packet to an interface. In some embodiments, the destination MFE for a packet (i.e., the MFE to which the destination interface for the packet connects) performs the process 600 on the packet upon receiving the packet from the physical datacenter network.


As shown, the process 600 begins by receiving (at 605) a logical network packet with physical IP addresses. The packet, as received, will have physical IP addresses that may correspond to logical interfaces or that may be tunnel endpoint addresses in an encapsulation header. These physical IP addresses, in some embodiments, are the IP addresses either added as encapsulation headers or replaced in the packet by the source MFE (e.g., using a process such as that shown in FIG. 5). In FIG. 8, the packet 800 has the same source and destination physical IP addresses when received by the destination MFE 815 as when sent by the source MFE 805, though different physical MAC addresses owing to the routing through the physical datacenter network 810. The same is true in the example shown in FIG. 9.


Thus, the process 600 determines (at 610) whether the packet is encapsulated. In some embodiments, the IP addresses will be different for encapsulated packets as compared to non-encapsulated packets. Specifically, if the source and destination IP addresses correspond to tunnel endpoints of the source and destination MFEs, then the packet is encapsulated. On the other hand, if the source and destination IP addresses are unique physical IP addresses in the logical to physical IP address mapping table of the MFE, then the packet was sent using address replacement. If the packet is encapsulated, the process decapsulates (at 615) the packet and proceeds to 645, described below. It should be noted that, in some embodiments, the MFE performs additional processing for packets sent to an IP address associated with neither a VTEP nor a unique physical IP address that maps to a logical IP address. For example, management traffic or other types of traffic may be received and processed by the MFE in some embodiments.
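

The classification at operation 610 amounts to checking which address space the destination IP falls into; a sketch follows, with illustrative field names and return labels.

```python
def classify_incoming(packet, local_vtep_ips, physical_to_logical):
    """Sketch of operation 610: decide whether an incoming packet is
    encapsulated, address-replaced, or something else (e.g., management
    traffic handled by a separate path)."""
    if packet["dst_ip"] in local_vtep_ips:
        return "encapsulated"      # decapsulate (operation 615), then continue
    if packet["dst_ip"] in physical_to_logical:
        return "address_replaced"  # reverse the replacement (operations 620-640)
    return "other"
```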


If the packet is not encapsulated (i.e., because address replacement was performed on the packet at the source MFE), the process 600 essentially performs the opposite operations of those in FIG. 5. The process 600 determines (at 620) whether the protocol field matches one of a set of pre-specified mapped values. This identifies whether the protocol field value is one of the reserved or unused values to which a particular protocol field value (e.g., ICMP) is mapped. If this is the case, the process replaces (at 625) the protocol field value with the original value. For example, in FIG. 9, the MFE 905 maps the value 143 (a reserved value) back to the original value of 1 (for ICMP).


The process 600 identifies (at 630) the logical IP address and interface for the source and destination physical IP addresses. As noted, each physical IP address maps not just to a logical IP address but also to a logical interface. While the source interface is not necessarily critical for the destination MFE (although it could be, depending on the processing required), the destination interface is important in terms of delivering the packet to the appropriate interface.


Based on the information identified from the physical IP addresses, the process 600 replaces (at 635) the physical IP addresses in the packet with the identified logical IP addresses. These should be the logical IP addresses that were in the packet prior to address replacement by the source MFE. In addition, the process replaces (at 640) the physical MAC addresses with logical MAC addresses based on the logical network topology. If the source and destination interfaces are on the same logical switch, then the MAC addresses will be those that correspond to these interfaces. However, if the source interface is on a different logical switch from the destination interface, then the MAC address of the logical router interface that connects to the destination logical switch will be the source MAC address.
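

The reverse replacement of operations 630-640, including the topology-based choice of logical MAC addresses, might look like the following sketch. The physical_to_logical table and the topology helper (switch_of, mac_of, router_downlink_mac) are hypothetical names introduced only for this illustration.

```python
def restore_logical_addresses(packet, physical_to_logical, topology):
    """Sketch of operations 630-640: restore logical IPs and pick logical MACs.

    physical_to_logical maps a physical IP back to (logical IP, interface);
    topology exposes each interface's logical switch and MAC, plus the MAC of
    the logical router downlink on a given switch.
    """
    src_lip, src_if = physical_to_logical[packet["src_ip"]]
    dst_lip, dst_if = physical_to_logical[packet["dst_ip"]]
    packet["src_ip"], packet["dst_ip"] = src_lip, dst_lip

    dst_switch = topology.switch_of(dst_if)
    if topology.switch_of(src_if) == dst_switch:
        # Same logical switch: both MACs are those of the interfaces (FIG. 8).
        packet["src_mac"] = topology.mac_of(src_if)
    else:
        # Different logical switches: the source MAC is that of the router
        # downlink on the destination's logical switch (MAC F in FIG. 9).
        packet["src_mac"] = topology.router_downlink_mac(dst_switch)
    packet["dst_mac"] = topology.mac_of(dst_if)
    return packet
```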


In FIG. 8, the source and destination IP addresses are converted back into 10.1.1.5 and 10.1.1.6, respectively, by the MFE 815. Similarly, because the source and destination interfaces (VM1 and VM2) are on the same logical switch 705, both the source and destination logical MAC addresses are those that correspond to the interfaces (i.e., the same as when the packet was sent to the MFE 805). However, in FIG. 9, the source logical MAC address in the packet 900 as sent from the MFE 905 to the destination VM4 is MAC F, the address of the logical router interface that connects to the logical switch 710. In addition, the destination logical MAC address for the packet is MAC D, the MAC address of the destination VM4. The MFE 905 identifies that the source interface is on a different logical switch 705 based on the network topology, and performs this MAC address replacement.


Having completed the reverse address replacement (or having decapsulated the packet), the process 600 performs (at 645) any additional logical processing, such as applying egress ACL rules, additional distributed firewall rules, etc. The process then delivers (at 650) the packet to the identified destination interface.


Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.


In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.



FIG. 10 conceptually illustrates an electronic system 1000 with which some embodiments of the invention are implemented. The electronic system 1000 can be used to execute any of the control, virtualization, or operating system applications described above. The electronic system 1000 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 1000 includes a bus 1005, processing unit(s) 1010, a system memory 1025, a read-only memory 1030, a permanent storage device 1035, input devices 1040, and output devices 1045.


The bus 1005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1000. For instance, the bus 1005 communicatively connects the processing unit(s) 1010 with the read-only memory 1030, the system memory 1025, and the permanent storage device 1035.


From these various memory units, the processing unit(s) 1010 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.


The read-only-memory (ROM) 1030 stores static data and instructions that are needed by the processing unit(s) 1010 and other modules of the electronic system. The permanent storage device 1035, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1000 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1035.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1035, the system memory 1025 is a read-and-write memory device. However, unlike storage device 1035, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1025, the permanent storage device 1035, and/or the read-only memory 1030. From these various memory units, the processing unit(s) 1010 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.


The bus 1005 also connects to the input and output devices 1040 and 1045. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 1040 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1045 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.


Finally, as shown in FIG. 10, bus 1005 also couples electronic system 1000 to a network 1065 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 1000 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.


This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.


VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.


A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.


It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including FIGS. 2, 3, 5, and 6) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A method of forwarding a packet associated with a logical network implemented over a physical network, the method comprising: determining whether a destination network address of a packet received from a first logical network data compute node (DCN) is a unicast address of a second logical network DCN or is an address of an endpoint outside of the logical network; and based on a determination that the destination network address is a unicast address of the second logical network DCN, (i) replacing logical network addresses in the packet, including the destination network address, with corresponding physical network addresses and (ii) forwarding the packet with the physical network addresses through the physical network without encapsulating the packet, wherein when the destination network address is an address of an endpoint outside of the logical network, the packet is encapsulated with an encapsulation header that identifies a gateway providing a connection to an external network as a physical network destination of the packet and the encapsulated packet is forwarded through the physical network.
  • 2. The method of claim 1, wherein the packet is also encapsulated when the destination network address is a logical network multicast or broadcast address.
  • 3. The method of claim 1, wherein replacing the logical network addresses in the packet comprises: replacing the unicast destination network address of the second logical network DCN with a first physical network address corresponding to the unicast destination network address; and replacing a source network address of the packet with a second physical network address corresponding to the source network address, wherein the source network address of the packet is a network address of the first logical network DCN.
  • 4. The method of claim 3, wherein: the method is performed by a managed forwarding element (MFE); the packet is received from the first logical network DCN via a virtual interface of the first logical network DCN operating on a same host computer as the MFE; and the virtual interface is assigned the source network address.
  • 5. The method of claim 1 further comprising, when the destination network address is a unicast address of the second logical network DCN: determining that a protocol header field value of the packet corresponds to a protocol that causes a physical network forwarding element to take a particular action in response to receiving the packet; and replacing the protocol header field value of the packet with a different value.
  • 6. The method of claim 5, wherein the protocol header field value is a value in a layer 3 header field that specifies a particular layer 4 protocol for the packet, wherein the different value is a reserved value that does not correspond to any specific layer 4 protocol.
  • 7. The method of claim 1 further comprising, when the destination network address is a unicast address of the second logical network DCN: determining a number of physical network hops that will process the packet; and adding the number to a time to live (TTL) field value of the packet such that the TTL field value at a destination for the packet will be equal to the TTL value prior to adding the number to the TTL field value.
  • 8. The method of claim 1, wherein the packet is a first packet, the destination network address is a first logical network address, and a first physical network address corresponds to the first logical network address, the method further comprising: receiving, from the physical network, a second packet having the first physical network address as a source address; and replacing, in the second packet, the first physical network address with the corresponding first logical network address.
  • 9. The method of claim 1 further comprising performing logical network processing on the packet prior to either replacing the logical network addresses in the packet or encapsulating the packet.
  • 10. The method of claim 1, wherein (i) the logical network addresses, including the destination network address, that are replaced in the packet, and (ii) the corresponding physical network addresses, are IP addresses.
  • 11. The method of claim 1 further comprising: receiving a second packet from the first logical network DCN; determining that a destination network address of the second packet is a broadcast or multicast address; based on the determination that the second packet is a broadcast or multicast address: generating a plurality of packets having a source address of the first logical network DCN and different unicast destination network addresses; and for each generated packet, (i) replacing logical network addresses in the generated packet with corresponding physical network addresses and (ii) forwarding the generated packet with the physical network addresses through the physical network without encapsulating the generated packet.
  • 12. A non-transitory machine readable medium storing a program which when executed by at least one processing unit forwards a packet associated with a logical network implemented over a physical network, the program comprising sets of instructions for: determining whether a destination network address of a packet received from a first logical network data compute node (DCN) is a unicast address of a second logical network DCN or is an address of an endpoint outside of the logical network; based on a determination that the destination network address is a unicast address of the second logical network DCN, (i) replacing logical network addresses in the packet, including the destination network address, with corresponding physical network addresses and (ii) forwarding the packet with the physical network addresses through the physical network without encapsulating the packet; and based on a determination that the destination network address is an address of an endpoint outside of the logical network, encapsulating the packet with an encapsulation header that identifies a gateway providing a connection to an external network as a physical network destination of the packet and forwarding the encapsulated packet through the physical network.
  • 13. The non-transitory machine readable medium of claim 12, wherein the packet is also encapsulated when the destination network address is a multicast or broadcast address.
  • 14. The non-transitory machine readable medium of claim 12, wherein the set of instructions for replacing the logical network addresses in the packet comprises sets of instructions for: replacing the unicast destination network address of the second logical network DCN with a first physical network address corresponding to the destination network address; and replacing a source network address of the packet with a second physical network address corresponding to the source network address, wherein the source network address of the packet is a network address of the first logical network DCN.
  • 15. The non-transitory machine readable medium of claim 14, wherein: the at least one processing unit is a processing unit of a host computer; the packet is received from the first logical network DCN via a virtual interface of the first logical network DCN operating on the same host computer; and the virtual interface is assigned the source network address.
  • 16. The non-transitory machine readable medium of claim 12, wherein the program further comprises sets of instructions for, when the destination network address is a unicast address of the second logical network DCN: determining that a protocol header field value of the packet corresponds to a protocol that causes a physical network forwarding element to take a particular action in response to receiving the packet; and replacing the protocol header field value of the packet with a different value.
  • 17. The non-transitory machine readable medium of claim 16, wherein the protocol header field value is a value in a layer 3 header field that specifies a particular layer 4 protocol for the packet, wherein the different value is a reserved value that does not correspond to any specific layer 4 protocol.
  • 18. The non-transitory machine readable medium of claim 12, wherein the program further comprises sets of instructions for, when the destination network address is a unicast address of the second logical network DCN: determining a number of physical network hops that will process the packet; and adding the number to a time to live (TTL) field value of the packet such that the TTL field value at a destination for the packet will be equal to the TTL value prior to adding the number to the TTL field value.
  • 19. The non-transitory machine readable medium of claim 12, wherein the packet is a first packet, the destination network address is a first logical network address, and a first physical network address corresponds to the first logical network address, the program further comprising sets of instructions for: receiving, from the physical network, a second packet having the first physical network address as a source address; and replacing, in the second packet, the first physical network address with the corresponding logical network address.
  • 20. The non-transitory machine readable medium of claim 12, wherein the program further comprises a set of instructions for performing logical network processing on the packet prior to either replacing the logical network addresses in the packet or encapsulating the packet.
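
The claims above describe the packet-handling behavior in prose. As a purely illustrative aid, and not the patented implementation, the following minimal Python sketch shows one way the address-replacement forwarding of claims 1, 3, 7, and 8 could be modeled: a unicast logical destination has its logical source and destination addresses replaced with the assigned physical addresses and its TTL pre-adjusted for the physical hops, traffic addressed outside the logical network is instead marked for encapsulation toward a gateway, and packets received from the physical network have the reverse mapping applied. All names in the sketch (AddressReplacementForwarder, encap_dst, physical_hop_count, the example IP addresses) are hypothetical.

# Hypothetical sketch of address-replacement forwarding; not the patented code.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    ttl: int
    payload: bytes = b""
    encap_dst: Optional[str] = None  # set only when the packet is encapsulated

@dataclass
class AddressReplacementForwarder:
    logical_to_physical: Dict[str, str]   # unique physical IP assigned per logical IP
    physical_to_logical: Dict[str, str]   # reverse mapping for received packets
    gateway_physical_ip: str              # gateway for endpoints outside the logical network
    physical_hop_count: int = 0           # hops that will decrement TTL in the underlay

    def forward(self, pkt: Packet) -> Packet:
        if pkt.dst_ip in self.logical_to_physical:
            # Unicast to another logical-network DCN: replace both logical addresses
            # with their assigned physical addresses and send without encapsulation.
            pkt.src_ip = self.logical_to_physical.get(pkt.src_ip, pkt.src_ip)
            pkt.dst_ip = self.logical_to_physical[pkt.dst_ip]
            # Pre-add the physical hop count so the TTL seen at the destination
            # equals the TTL the logical network expects (claim 7).
            pkt.ttl += self.physical_hop_count
        else:
            # Destination is outside the logical network: keep the logical addresses
            # and encapsulate toward the gateway (reduced here to a placeholder field).
            pkt.encap_dst = self.gateway_physical_ip
        return pkt

    def receive(self, pkt: Packet) -> Packet:
        # Reverse mapping on ingress (claim 8): restore the logical addresses so the
        # destination DCN only ever sees logical-network addresses.
        pkt.src_ip = self.physical_to_logical.get(pkt.src_ip, pkt.src_ip)
        pkt.dst_ip = self.physical_to_logical.get(pkt.dst_ip, pkt.dst_ip)
        return pkt

# Example use (all addresses hypothetical):
fwd = AddressReplacementForwarder(
    logical_to_physical={"10.1.0.2": "172.16.4.21", "10.1.0.3": "172.16.8.35"},
    physical_to_logical={"172.16.4.21": "10.1.0.2", "172.16.8.35": "10.1.0.3"},
    gateway_physical_ip="172.16.0.1",
    physical_hop_count=3,
)
out = fwd.forward(Packet(src_ip="10.1.0.2", dst_ip="10.1.0.3", ttl=64))
# out.src_ip == "172.16.4.21", out.dst_ip == "172.16.8.35", out.ttl == 67, no encapsulation

A real MFE would of course rewrite actual IP headers and would obtain the logical-to-physical mappings from the network controller; the dictionaries above merely stand in for that controller-provided state.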
CLAIM OF BENEFIT TO PRIOR APPLICATIONS

This application is a continuation application of U.S. patent application Ser. No. 15/640,376, filed Jun. 30, 2017, now published as U.S. Patent Publication 2019/0007364. U.S. patent application Ser. No. 15/640,376, now published as U.S. Patent Publication 2019/0007364, is hereby incorporated by reference.

US Referenced Citations (305)
Number Name Date Kind
5504921 Dev et al. Apr 1996 A
5550816 Hardwick et al. Aug 1996 A
5729685 Chatwani et al. Mar 1998 A
5751967 Raab et al. May 1998 A
5923854 Bell et al. Jul 1999 A
6104699 Holender et al. Aug 2000 A
6111876 Frantz et al. Aug 2000 A
6151324 Belser et al. Nov 2000 A
6151329 Berrada et al. Nov 2000 A
6219699 McCloghrie et al. Apr 2001 B1
6456624 Eccles et al. Sep 2002 B1
6512745 Abe et al. Jan 2003 B1
6539432 Taguchi et al. Mar 2003 B1
6680934 Cain Jan 2004 B1
6765921 Stacey et al. Jul 2004 B1
6785843 McRae et al. Aug 2004 B1
6941487 Balakrishnan et al. Sep 2005 B1
6963585 Pennec et al. Nov 2005 B1
6999454 Crump Feb 2006 B1
7046630 Abe et al. May 2006 B2
7120728 Krakirian et al. Oct 2006 B2
7146431 Hipp et al. Dec 2006 B2
7197572 Matters et al. Mar 2007 B2
7200144 Terrell et al. Apr 2007 B2
7203944 Rietschote et al. Apr 2007 B1
7209439 Rawlins et al. Apr 2007 B2
7260102 Mehrvar et al. Aug 2007 B2
7260648 Tingley et al. Aug 2007 B2
7263700 Bacon et al. Aug 2007 B1
7277453 Chin et al. Oct 2007 B2
7283473 Arndt et al. Oct 2007 B2
7339929 Zelig et al. Mar 2008 B2
7366182 O'neill Apr 2008 B2
7391771 Orava et al. Jun 2008 B2
7450498 Golia et al. Nov 2008 B2
7450598 Chen et al. Nov 2008 B2
7463579 Lapuh et al. Dec 2008 B2
7467198 Goodman et al. Dec 2008 B2
7478173 Delco Jan 2009 B1
7483370 Dayal et al. Jan 2009 B1
7512744 Banga et al. Mar 2009 B2
7554995 Short et al. Jun 2009 B2
7555002 Arndt et al. Jun 2009 B2
7577722 Khandekar et al. Aug 2009 B1
7606260 Oguchi et al. Oct 2009 B2
7633909 Jones et al. Dec 2009 B1
7634608 Droux et al. Dec 2009 B2
7640298 Berg Dec 2009 B2
7643482 Droux et al. Jan 2010 B2
7643488 Khanna et al. Jan 2010 B2
7649851 Takashige et al. Jan 2010 B2
7660324 Oguchi et al. Feb 2010 B2
7710874 Balakrishnan et al. May 2010 B2
7715416 Srinivasan et al. May 2010 B2
7716667 Van Rietschote et al. May 2010 B2
7725559 Landis et al. May 2010 B2
7752635 Lewites Jul 2010 B2
7761259 Seymour Jul 2010 B1
7764599 Doi et al. Jul 2010 B2
8001214 Jan Aug 2010 B2
7792987 Vohra et al. Sep 2010 B1
7797507 Tago Sep 2010 B2
7801128 Hoole et al. Sep 2010 B2
7802000 Huang et al. Sep 2010 B1
7814228 Caronni et al. Oct 2010 B2
7814541 Manvi Oct 2010 B1
7818452 Matthews et al. Oct 2010 B2
7826482 Minei et al. Nov 2010 B1
7839847 Nadeau et al. Nov 2010 B2
7840701 Hsu et al. Nov 2010 B2
7843906 Chidambaram et al. Nov 2010 B1
7853714 Moberg et al. Dec 2010 B1
7865893 Omelyanchuk et al. Jan 2011 B1
7865908 Garg et al. Jan 2011 B2
7885276 Lin Feb 2011 B1
7895642 Larson et al. Feb 2011 B1
7936770 Frattura et al. May 2011 B1
7941812 Sekar May 2011 B2
7948986 Ghosh et al. May 2011 B1
7958506 Mann et al. Jun 2011 B2
7983257 Chavan et al. Jul 2011 B2
7983266 Srinivasan et al. Jul 2011 B2
7984108 Landis et al. Jul 2011 B2
7987432 Grechishkin et al. Jul 2011 B1
7995483 Bayar et al. Aug 2011 B1
8001269 Satapati et al. Aug 2011 B1
8005013 Teisberg et al. Aug 2011 B2
8018873 Kompella Sep 2011 B1
8019837 Kannan et al. Sep 2011 B2
8027354 Portolani et al. Sep 2011 B1
8031606 Memon et al. Oct 2011 B2
8031633 Bueno et al. Oct 2011 B2
8036127 Droux et al. Oct 2011 B2
8051180 Mazzaferri et al. Nov 2011 B2
8054832 Shukla et al. Nov 2011 B1
8055789 Richardson et al. Nov 2011 B2
8060875 Lambeth Nov 2011 B1
8065714 Budko et al. Nov 2011 B2
8068602 Bluman et al. Nov 2011 B1
RE43051 Newman et al. Dec 2011 E
8074218 Eilam et al. Dec 2011 B2
8108855 Dias et al. Jan 2012 B2
8127291 Pike et al. Feb 2012 B2
8135815 Mayer et al. Mar 2012 B2
8146148 Cheriton Mar 2012 B2
8149737 Metke et al. Apr 2012 B2
8155028 Abu-Hamdeh et al. Apr 2012 B2
8166201 Richardson et al. Apr 2012 B2
8166205 Farinacci et al. Apr 2012 B2
8171485 Muller May 2012 B2
8190769 Shukla et al. May 2012 B1
8194674 Pagel et al. Jun 2012 B1
8199750 Schultz et al. Jun 2012 B1
8200752 Choudhary et al. Jun 2012 B2
8201180 Briscoe et al. Jun 2012 B2
8209684 Kannan et al. Jun 2012 B2
8214193 Chawla et al. Jul 2012 B2
8223668 Allan et al. Jul 2012 B2
8248967 Nagy et al. Aug 2012 B2
8265075 Pandey Sep 2012 B2
8281067 Stolowitz Oct 2012 B2
8281363 Hernacki et al. Oct 2012 B1
8286174 Schmidt et al. Oct 2012 B1
8289975 Suganthi et al. Oct 2012 B2
8339959 Moisand et al. Dec 2012 B1
8339994 Gnanasekaran et al. Dec 2012 B2
8345650 Foxworthy et al. Jan 2013 B2
8346891 Safari et al. Jan 2013 B2
8351418 Zhao et al. Jan 2013 B2
8352608 Keagy et al. Jan 2013 B1
8359377 Mcguire Jan 2013 B2
8370481 Wilson et al. Feb 2013 B2
8370834 Edwards et al. Feb 2013 B2
8370835 Dittmer Feb 2013 B2
8374183 Alkhatib et al. Feb 2013 B2
8386642 Elzur Feb 2013 B2
8396946 Brandwine Mar 2013 B1
8401024 Christensen et al. Mar 2013 B2
8407366 Alkhatib et al. Mar 2013 B2
8429279 Veits Apr 2013 B2
8473594 Astete et al. Jun 2013 B2
8515015 Maffre et al. Aug 2013 B2
8538919 Nielsen et al. Sep 2013 B1
8549281 Samovskiy et al. Oct 2013 B2
8565118 Shukla et al. Oct 2013 B2
8611351 Gooch et al. Dec 2013 B2
8619771 Lambeth et al. Dec 2013 B2
8625603 Ramakrishnan et al. Jan 2014 B1
8627313 Edwards et al. Jan 2014 B2
8644188 Brandwine et al. Feb 2014 B1
8650299 Huang et al. Feb 2014 B1
8656386 Baimetov et al. Feb 2014 B1
8683004 Bauer Mar 2014 B2
8683464 Rozee et al. Mar 2014 B2
8706764 Sivasubramanian et al. Apr 2014 B2
8725898 Vincent May 2014 B1
8776050 Plouffe et al. Jul 2014 B2
8798056 Ganga Aug 2014 B2
8799431 Pabari Aug 2014 B2
8819561 Gupta et al. Aug 2014 B2
8838743 Lewites et al. Sep 2014 B2
8838756 Dalal et al. Sep 2014 B2
8850060 Beloussov et al. Sep 2014 B1
8868608 Friedman et al. Oct 2014 B2
8874425 Cohen et al. Oct 2014 B2
8880659 Mower et al. Nov 2014 B2
8892706 Dalal Nov 2014 B1
8924524 Dalal et al. Dec 2014 B2
8953441 Nakil et al. Feb 2015 B2
9014184 Iwata et al. Apr 2015 B2
9021092 Silva et al. Apr 2015 B2
9037689 Khandekar et al. May 2015 B2
9038062 Fitzgerald et al. May 2015 B2
9076342 Brueckner et al. Jul 2015 B2
9086901 Gebhart et al. Jul 2015 B2
9106540 Cohn et al. Aug 2015 B2
9172615 Samovskiy et al. Oct 2015 B2
9178850 Lain et al. Nov 2015 B2
9749149 Mazarick Jun 2017 B2
9697032 Dalal et al. Jul 2017 B2
9819649 Larson et al. Nov 2017 B2
9952892 Dalal et al. Apr 2018 B2
10637800 Wang et al. Apr 2020 B2
10681000 Wang et al. Jun 2020 B2
20010043614 Viswanadham et al. Nov 2001 A1
20020093952 Gonda Jul 2002 A1
20020194369 Rawlins et al. Dec 2002 A1
20030041170 Suzuki Feb 2003 A1
20030058850 Rangarajan et al. Mar 2003 A1
20030120822 Langrind et al. Jun 2003 A1
20040073659 Rajsic et al. Apr 2004 A1
20040098505 Clemmensen May 2004 A1
20040249973 Alkhatib et al. Dec 2004 A1
20040267866 Carollo et al. Dec 2004 A1
20040267897 Hill et al. Dec 2004 A1
20050018669 Arndt et al. Jan 2005 A1
20050027881 Figueira et al. Feb 2005 A1
20050053079 Havala Mar 2005 A1
20050071446 Graham et al. Mar 2005 A1
20050083953 May Apr 2005 A1
20050120160 Plouffe et al. Jun 2005 A1
20050182853 Lewites et al. Aug 2005 A1
20050220096 Friskney et al. Oct 2005 A1
20060002370 Rabie et al. Jan 2006 A1
20060026225 Canali et al. Feb 2006 A1
20060029056 Perera et al. Feb 2006 A1
20060031407 Dispensa et al. Feb 2006 A1
20060174087 Hashimoto et al. Aug 2006 A1
20060187908 Shimozono et al. Aug 2006 A1
20060193266 Siddha et al. Aug 2006 A1
20060221961 Basso et al. Oct 2006 A1
20060245438 Sajassi et al. Nov 2006 A1
20060291388 Amdahl et al. Dec 2006 A1
20070050520 Riley et al. Mar 2007 A1
20070055789 Claise et al. Mar 2007 A1
20070064673 Bhandaru et al. Mar 2007 A1
20070064704 Balay et al. Mar 2007 A1
20070130366 O'Connell et al. Jun 2007 A1
20070156919 Potti et al. Jul 2007 A1
20070195794 Fujita et al. Aug 2007 A1
20070234302 Suzuki et al. Oct 2007 A1
20070260721 Bose et al. Nov 2007 A1
20070280243 Wray et al. Dec 2007 A1
20070286137 Narasimhan et al. Dec 2007 A1
20070297428 Bose et al. Dec 2007 A1
20080002579 Lindholm et al. Jan 2008 A1
20080002683 Droux et al. Jan 2008 A1
20080028401 Geisinger et al. Jan 2008 A1
20080040477 Johnson et al. Feb 2008 A1
20080043756 Droux et al. Feb 2008 A1
20080049621 McGuire et al. Feb 2008 A1
20080059556 Greenspan et al. Mar 2008 A1
20080071900 Hecker et al. Mar 2008 A1
20080086726 Griffith et al. Apr 2008 A1
20080159301 Heer Jul 2008 A1
20080163207 Reumann et al. Jul 2008 A1
20080198858 Townsley et al. Aug 2008 A1
20080209415 Riel et al. Aug 2008 A1
20080215705 Liu et al. Sep 2008 A1
20080235690 Ang et al. Sep 2008 A1
20080244579 Muller et al. Oct 2008 A1
20090113021 Andersson et al. Apr 2009 A1
20090141729 Fan Jun 2009 A1
20090150527 Tripathi et al. Jun 2009 A1
20090199291 Hayasaka et al. Aug 2009 A1
20090254990 McGee et al. Oct 2009 A1
20090292858 Lambeth et al. Nov 2009 A1
20100040063 Srinivasan et al. Feb 2010 A1
20100107162 Edwards et al. Apr 2010 A1
20100115080 Kageyama May 2010 A1
20100115101 Lain et al. May 2010 A1
20100115606 Samovskiy et al. May 2010 A1
20100125667 Soundararajan May 2010 A1
20100131636 Suri et al. May 2010 A1
20100138830 Astete et al. Jun 2010 A1
20100154051 Bauer Jun 2010 A1
20100169880 Haviv et al. Jul 2010 A1
20100180275 Neogi et al. Jul 2010 A1
20100191881 Tauter et al. Jul 2010 A1
20100214949 Smith et al. Aug 2010 A1
20100223610 DeHaan et al. Sep 2010 A1
20100235831 Dittmer et al. Sep 2010 A1
20100254385 Sharma et al. Oct 2010 A1
20100257263 Casado et al. Oct 2010 A1
20100275199 Smith et al. Oct 2010 A1
20100281478 Sauls et al. Nov 2010 A1
20100306408 Greenberg et al. Dec 2010 A1
20100306773 Lee et al. Dec 2010 A1
20100329265 Lapuh et al. Dec 2010 A1
20100333189 Droux et al. Dec 2010 A1
20110022694 Dalal et al. Jan 2011 A1
20110022695 Dalal et al. Jan 2011 A1
20110023031 Bonola et al. Jan 2011 A1
20110026537 Kolhi et al. Feb 2011 A1
20110035494 Pandey et al. Feb 2011 A1
20110075664 Lambeth et al. Mar 2011 A1
20110110377 Alkhatib et al. May 2011 A1
20110194567 Shen Aug 2011 A1
20110208873 Droux et al. Aug 2011 A1
20110299537 Saraiya et al. Dec 2011 A1
20120005521 Droux et al. Jan 2012 A1
20120110188 Biljon et al. May 2012 A1
20130151661 Koponen et al. Jun 2013 A1
20130239198 Niemi et al. Sep 2013 A1
20130322436 Wijnands Dec 2013 A1
20140052877 Mao Feb 2014 A1
20140192804 Ghanwani et al. Jul 2014 A1
20140195666 Dumitriu Jul 2014 A1
20140317059 Lad et al. Oct 2014 A1
20150106489 Duggirala Apr 2015 A1
20150195137 Kashyap et al. Jul 2015 A1
20150271303 Neginhal et al. Sep 2015 A1
20150281060 Xiao Oct 2015 A1
20150281171 Xiao Oct 2015 A1
20150301846 Dalal et al. Oct 2015 A1
20150312054 Barabash et al. Oct 2015 A1
20150334012 Butler et al. Nov 2015 A1
20170149664 Ganga May 2017 A1
20170170988 Mazarick Jun 2017 A1
20170272316 Johnson et al. Sep 2017 A1
20170272400 Bansal et al. Sep 2017 A1
20170300354 Dalal et al. Oct 2017 A1
20180063193 Chandrashekhar et al. Mar 2018 A1
20190007342 Wang et al. Jan 2019 A1
20190007364 Wang et al. Jan 2019 A1
Foreign Referenced Citations (7)
Number Date Country
3643052 Apr 2020 EP
2004145684 May 2004 JP
03058584 Jul 2003 WO
2008098147 Aug 2008 WO
2015147943 Oct 2015 WO
WO-2015147943 Oct 2015 WO
2019006042 Jan 2019 WO
Non-Patent Literature Citations (6)
Entry
Author Unknown, “Introduction to VMware Infrastructure: ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5,” Revision Dec. 13, 2007, pp. 1-46, VMware, Inc., Palo Alto, California, USA.
Author Unknown, “iSCSI SAN Configuration Guide: ESX Server 3.5, ESX Server 3i version 3.5,” VirtualCenter 2.5, Nov. 2007, 134 pages, Revision: Nov. 29, 2007, VMware, Inc., Palo Alto, California, USA.
Author Unknown, “Cisco VN-Link: Virtualization-Aware Networking,” Mar. 2009, 10 pages, Cisco Systems, Inc.
Author Unknown, “Virtual Machine Mobility Planning Guide,” Oct. 2007, 33 pages, Revision Oct. 18, 2007, VMware, Inc., Palo Alto, CA.
Author Unknown, “VMware Infrastructure 3 Primer: ESX Server 3.5, ESX Server 3i version 3.5,” VirtualCenter 2.5, Nov. 2007, 22 pages, Revision: Nov. 29, 2007, VMware, Inc., Palo Alto, California, USA.
International Search Report and Written Opinion of commonly owned International Patent Application PCT/US2018/039873, dated Nov. 9, 2018, 16 pages, International Searching Authority.
Related Publications (1)
Number Date Country
20200267113 A1 Aug 2020 US
Continuations (1)
Number Date Country
Parent 15640376 Jun 2017 US
Child 16867488 US