Multicast is group communication in computer networking where data transmission is addressed to a group of destination computers simultaneously. Multicast can be one-to-many or many-to-many distribution. Group communication makes it possible for the source to efficiently send to the group in a single transmission. Copies are automatically created in network elements such as routers, switches, and cellular network base stations, but only to network segments that currently contain members of the group. Multicast protocols such as Internet Group Management Protocol (IGMP) and Protocol Independent Multicast (PIM) are used to set up the forwarding state in routers based on the information exchanged about the senders and the receivers of multicast traffic.
Some embodiments of the invention provide a method for offloading multicast replication from multiple tiers of edge nodes implemented by multiple host machines to a physical switch. Each of the multiple host machines implements a provider edge node and a tenant edge node. One host machine among the multiple host machines receives a packet having an overlay multicast group identifier. The host machine maps the overlay multicast group identifier to an underlay multicast group identifier. The host machine encapsulates the packet with an encapsulation header that includes the underlay multicast group identifier to create an encapsulated packet. The host machine forwards the encapsulated packet to a physical switch of a first network segment to which the host machine connects. The physical switch forwards copies of the encapsulated packet to tenant edge nodes at one or more ports that are determined to be interested in the underlay multicast group identifier.
The packet may be received from a tenant network, in which case the host machine hosts a tenant edge node that serves data traffic to and from that tenant network. Each tenant edge node serves data traffic, including multicast traffic, to and from a tenant network by performing gateway functions. The packet may also be received from an external network, in which case a particular provider edge node serves data traffic, including multicast traffic, to and from the external network by performing gateway functions. The particular provider edge node actively serves data traffic to and from the external network, while other provider edge nodes implemented by other host machines are standing by and not actively serving data traffic to and from the external network.
In some embodiments, a network controller (e.g., an SDN controller) sends, to each tenant edge and each provider edge, multicast grouping information associating an overlay multicast group identifier with (i) a corresponding underlay multicast group identifier and (ii) a list of VTEPs that are interested in the multicast group. In some embodiments, the network controller generates the multicast grouping information based on multicast reports (e.g., IGMP reports) that associate VTEPs with overlay multicast group identifiers. In some embodiments, the list of VTEPs that are interested in a multicast group sent to a particular tenant edge node distinguishes (i) VTEPs connected to the same network segment as the particular tenant edge node from (ii) VTEPs connected to a different network segment than the particular tenant edge node.
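Purely for illustration, the multicast grouping information described above could be represented as sketched below. This is a minimal sketch; the class name, field names, and the split into local and remote VTEP lists are assumptions made for this example rather than part of any specific embodiment.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class MulticastGroupEntry:
        """One entry of the multicast grouping information pushed by the controller."""
        overlay_group: str    # overlay multicast group IP, e.g. "237.1.1.1"
        underlay_group: str   # corresponding underlay multicast group IP, e.g. "240.2.2.2"
        local_vteps: List[str] = field(default_factory=list)   # interested VTEPs on the same segment
        remote_vteps: List[str] = field(default_factory=list)  # interested VTEPs on other segments

    # The information sent to one edge node, keyed by overlay group identifier
    # (as seen from an edge on the first network segment in the example).
    grouping_info: Dict[str, MulticastGroupEntry] = {
        "237.1.1.1": MulticastGroupEntry("237.1.1.1", "240.2.2.2",
                                         local_vteps=["VTEP-C", "VTEP-D"],
                                         remote_vteps=["VTEP-E"]),
    }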
A tenant edge node receiving a copy of the encapsulated packet may decapsulate the packet to remove the underlay multicast group identifier and forward the decapsulated packet to a tenant network by multicast based on the overlay multicast group identifier. The particular provider edge node may also receive a copy of the encapsulated packet from the physical switch and forward a decapsulated copy of the packet to the external network without the underlay multicast group identifier. In some embodiments, when the packet is received from the external network, the host machine that implements the particular provider edge node receives the packet, maps the overlay multicast group identifier to the underlay multicast group identifier, encapsulates the packet with an encapsulation header that includes the underlay multicast group identifier, and forwards the encapsulated packet to the physical switch of the first network segment.
In some embodiments, the host machine may use the multicast grouping information sent to the host machine to identify any VTEPs in other segments that are also interested in the multicast group. If there is a VTEP in another network segment interested in the multicast group, the host machine identifies a VTEP at a second network segment having a tenant edge node that is interested in the underlay multicast group identifier. The host machine forwards the packet to the identified VTEP by unicast.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matter is not to be limited by the illustrative details in the Summary, the Detailed Description, and the Drawings.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
In a software defined network (SDN) environment, a provider level (Tier-0 or T0) edge logical router device acts as the gateway between a physical network (e.g., a wide area network or WAN) and virtual or overlay networks. In a multi-tenant topology, a tenant level (Tier-1 or T1) dedicated edge device can be configured to be the gateway for a given tenant. For traffic originated in an overlay network, a T1 edge routes data packets to a T0 edge to connect to the physical network. Similarly, WAN traffic from the physical network reaches the T0 edge gateway and then gets routed to T1 edge gateways. An edge transport node (TN) can host one or more T0 and/or T1 routers, and there can be multiple such edge TNs in a cluster. (An edge can be referred to as an edge node, an edge router, an edge device, an edge gateway, an edge TN, etc.)
As illustrated, an SDN environment is a network 100 that provides connectivity to several tenant networks 101-107 (tenants A through G). The network 100 also provides connectivity to an external physical network that is a wide area network (WAN) 109. The network 100 may include physical network components provided by one or more datacenters as underlay.
The network 100 includes a cluster of host machines 111-117 that implement a first tier of edge routers 121-127 and a second tier of edge routers 131-137. The first tier of edge routers 121-127 are provider edge routers (also referred to as T0 edges) shared by different tenants of a datacenter. The first-tier provider edge routers also perform gateway functions for traffic to and from the WAN 109 in active/standby mode. The second tier of routers 131-137 are tenant edge routers (or T1 edges or T1 TNs) for tenant networks 101-107, each T1 edge performing gateway functions for traffic to and from a tenant network. The T0 provider edges and the T1 tenant edges together enable traffic between the WAN 109 and the tenant networks 101-107 (North-South traffic), as well as traffic among the different tenant networks (East-West traffic).
Each host machine is addressable by a virtual tunnel endpoint (VTEP) address, as traffic to and from the different host machines is conducted through tunnels. In the example, the host machines 111-114 are interconnected by a physical L2 switch 141 (Top of Rack or ToR switch), while the host machines 115-117 are interconnected by a different L2 switch 142. In other words, in the physical underlay, the host machines 111-114 belong to one network segment and the host machines 115-117 belong to a different network segment.
When edge devices are configured as multicast routers, IP multicast traffic is routed from the physical network (e.g., the WAN 109) to the virtual network (e.g., tenant networks 101-107) and vice versa. When edges run in a multi-tiered architecture, inter-tier multicast traffic is routed by one centralized T0 router. This is because only one router is allowed to be the multicast querier for each network segment (L2 segment or IP subnet) according to multicast protocols such as Protocol-Independent Multicast (PIM) or Internet Group Management Protocol (IGMP). Thus, one edge gateway (e.g., T0 edge 121) that supports IP multicast routing encapsulates and replicates the routed multicast packets to all edge virtual tunnel endpoints (VTEPs) that have receivers for the corresponding multicast group in an overlay domain, and sends another copy toward the PIM core for receivers in the physical network (e.g., the WAN 109).
In the illustrated example, the active T0 edge node 121 receives multicast traffic for a multicast group having an identifier of 237.1.1.1, and by multicast inquiry the T0 edge 121 knows that tenant networks C, D, and E have receivers that are interested in the multicast group 237.1.1.1.
Having the one T0 edge 121 centrally replicate multicast traffic for multiple different T1 TNs degrades multicast routing throughput and latency, because the throughput of a multicast traffic flow is limited by the total number of replications that the one centralized T0 edge performs. The more T1 edge TNs there are to receive multicast traffic, the more replications the one centralized T0 edge 121 has to perform, and the more likely it is to saturate the downlink bandwidth of the centralized T0 edge. Relying on the one T0 edge 121 to handle traffic for all T1 TNs also makes the multicast replication scheme difficult to scale for additional tenant networks. For example, if 2 Gbps of multicast source traffic is to be replicated to 5 different T1 edges, then the 10G uplinks of the T0 edge will be saturated by this one multicast flow and traffic from other sources cannot be processed. In that instance, the one centralized T0 edge can only accommodate up to 5 T1 edge TNs at one time.
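The saturation example in the preceding paragraph amounts to a simple back-of-the-envelope calculation, restated below as a small sketch; the numbers are taken directly from the example above.

    # Back-of-the-envelope restatement of the saturation example above.
    source_rate_gbps = 2        # multicast source traffic entering the T0 edge
    num_t1_receivers = 5        # T1 edge TNs that each need a replicated copy
    uplink_capacity_gbps = 10   # uplink capacity of the centralized T0 edge

    replication_load_gbps = source_rate_gbps * num_t1_receivers
    print(f"{replication_load_gbps} Gbps of {uplink_capacity_gbps} Gbps used")
    # -> 10 Gbps of 10 Gbps used: a single flow saturates the centralized T0 edge.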
Some embodiments of the invention provide a method for scaling the multicast replications to a larger number of T1 edge TNs using Top-of-Rack (ToR) L2 multicast, without reducing routing throughput or worsening forwarding latency. Specifically, the number of multicast replications at the active centralized T0 edge (e.g., 121) is reduced by offloading multicast replication to L2 ToR switches (e.g., ToRs 141 and 142) using underlay multicast group IPs. Routing throughput is further improved by leveraging underlay multicast replication using underlay multicast group IPs and a VTEP list that are synchronized by an SDN controller. Doing so allows a larger number of parallel flows, which results in higher throughput and enables a larger number of tenants to participate in multicast routing. Reducing the number of replications at source edge TNs also reduces routing latency. In some embodiments, multicast routing protocols are not used at T1 edges, which keeps the T1 edges as light-weight forwarding planes. Multicast routing throughput can be scaled out (horizontal scaling) by deploying more TNs with T1 edges.
At operations labeled (2), the host machine 111 maps the overlay multicast group identifier 237.1.1.1 to an underlay multicast group identifier 240.2.2.2. The host machine 111 encapsulates the packet 210 with an encapsulation header that includes the underlay multicast group identifier 240.2.2.2 to create an encapsulated packet 212. The host machine 111 forwards the encapsulated packet to the physical switch (L2-ToR) 141 (which is the L2 switch of the network segment that includes host machines 111-114).
At operations labeled (3), the physical switch 141 forwards copies of the encapsulated packet 212 to host machines 113 and 114 (having tenant edge nodes 133 and 134) at one or more ports of the switch that are determined (by IGMP snooping) to be interested in (or have receivers for) the underlay multicast group identifier 240.2.2.2. (The multicast traffic is replicated to ports that correspond to tenant C and tenant D.)
At operations labeled (4), the T1 tenant edge node 133 decapsulates the packet 212 into the packet 210 and forwards the decapsulated packet 210 to tenant network 103 (for tenant C). Likewise, the T1 tenant edge node 134 decapsulates the packet 212 into the packet 210 and forwards the decapsulated packet 210 to tenant network 104 (for tenant D).
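A highly simplified sketch of the map-encapsulate-forward steps labeled (2) and (3) is shown below. The class and helper names (OverlayPacket, EncapsulatedPacket, offload_to_tor, send_to_tor) are hypothetical placeholders for illustration only, not part of any particular datapath implementation.

    from dataclasses import dataclass

    @dataclass
    class OverlayPacket:
        dst_ip: str        # overlay multicast group, e.g. "237.1.1.1"
        payload: bytes

    @dataclass
    class EncapsulatedPacket:
        outer_dst_ip: str  # underlay multicast group, e.g. "240.2.2.2"
        inner: OverlayPacket

    def offload_to_tor(packet, overlay_to_underlay, send_to_tor):
        """Map the overlay group to its underlay group, encapsulate once, and let
        the ToR switch replicate the frame to every interested VTEP port."""
        underlay_group = overlay_to_underlay[packet.dst_ip]   # step (2): mapping
        outer = EncapsulatedPacket(underlay_group, packet)    # step (2): encapsulation
        send_to_tor(outer)                                    # steps (2)-(3): one copy to the ToR

    # Example with the mapping used in the description above.
    offload_to_tor(OverlayPacket("237.1.1.1", b"app data"),
                   {"237.1.1.1": "240.2.2.2"},
                   send_to_tor=lambda p: print("to ToR 141:", p.outer_dst_ip))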
At operations labeled (2), the host machine 112 maps the overlay multicast group identifier 237.1.1.1 to the underlay multicast group identifier 240.2.2.2. The host machine 112 encapsulates the packet 220 with an encapsulation header that includes the underlay multicast group identifier 240.2.2.2 to create an encapsulated packet 222. The host machine 112 forwards the encapsulated packet 222 to the physical switch (L2-ToR) 141.
At operations labeled (3), the physical switch 141 forwards copies of the encapsulated packet 222 to host machines 113 and 114 (having tenant edge nodes 133 and 134) at one or more ports of the switch that are determined (by IGMP snooping) to be interested in (or have receivers for) the underlay multicast group identifier 240.2.2.2. (The multicast traffic is replicated to ports that correspond to tenant C and tenant D.) The physical switch 141 also forwards a copy of the encapsulated packet 222 to the host machine 111 having the active T0 edge 121 to be forwarded to the WAN 109.
At operations labeled (4), the T1 tenant edge node 133 decapsulates the packet 222 into the packet 220 (to remove the underlay multicast group identifier) and forwards the decapsulated packet 220 to tenant network 103 (for tenant C). Likewise, the T1 tenant edge node 134 decapsulates the packet 222 into the packet 220 and then forwards the decapsulated packet 220 to tenant network 104 (for tenant D). Also, the T0 active edge 121 decapsulates the packet 222 and forwards the decapsulated packet 220 to the WAN 109.
In some embodiments, a controller of the SDN network (or SDN controller) associates or maps each overlay multicast group identifier with a corresponding underlay multicast group identifier. The underlay multicast group identifier is one that is predetermined to be available in the underlay domain.
The SDN controller also sends multicast group information to each T1 (tenant) edge and each T0 (provider) edge. The multicast group information may include a mapping that associates each overlay multicast group identifier with its corresponding underlay multicast group identifier. For each multicast group, the multicast group information may also include a list of VTEPs that are interested in the multicast group. In some embodiments, the list of VTEPs interested in the multicast group is identified based on multicast reports (e.g., IGMP reports) associating VTEPs with overlay multicast group identifiers.
In some embodiments, the list of VTEPs in the multicast information sent to a particular tenant edge node distinguishes (i) VTEPs connected to the same network segment as the particular tenant edge node from (ii) VTEPs connected to a different network segment than the particular tenant edge node. Based on the multicast group information, the T1 tenant edge may send the packet to receivers in the same segment by encapsulating the underlay multicast group identifier (to rely on the ToR switch to perform multicast replication), or directly send the packet to receivers in a different segment without encapsulating the underlay multicast group identifier.
The controller 300 then maps each overlay multicast group identifier to an underlay multicast group identifier. Thus, the overlay multicast group identifier or address 237.1.1.1 is mapped to an underlay multicast group identifier 240.2.2.2, the overlay multicast group identifier or address 238.3.3.3 is mapped to an underlay multicast group identifier 241.4.4.4, etc. The underlay multicast group identifiers are chosen from IP address ranges that are available for use in the underlay domain. Based on the received multicast reports and the multicast group identifier mapping, the controller 300 generates and sends multicast group information to the VTEPs of the host machines that host the tenant edges and the provider edges. Each host machine, and the T0/T1 edges hosted by the host machines, in turn uses the multicast group information to map multicast group identifiers and to identify which VTEPs are interested in which multicast group.
In the figure, multicast group information 321-327 is sent to host machines 111-117 (VTEPs A through G), respectively. The multicast group information sent to each host machine associates each overlay multicast group identifier with (i) its corresponding underlay multicast group identifier and (ii) a list of VTEPs that are interested in the multicast group. Thus, in the multicast group information 321-327, the overlay multicast group identifier 237.1.1.1 is associated with the underlay multicast group identifier 240.2.2.2 and a list that includes VTEPs C, D, and E, and the overlay multicast group identifier 238.3.3.3 is associated with the underlay multicast group identifier 241.4.4.4 and a list that includes VTEPs A, B, D, F, G, etc.
In some embodiments, the multicast information sent to a host machine does not list the VTEP of that host machine as one of the VTEPs interested in any of the multicast groups. As illustrated in the figure, the multicast group information 321 sent to VTEP-A does not list VTEP-A as one of the VTEPs interested in the multicast group 238.3.3.3 (so "A" in information 321 appears darkened), while the multicast group information 322 sent to VTEP-B does not list VTEP-B as one of the VTEPs interested in the multicast group 238.3.3.3 (so "B" in information 322 appears darkened). Thus, multicast traffic from the tenant A network will not be replicated by the tenant edge in VTEP-A for multicast group 238.3.3.3, even if tenant A has a receiver for the multicast group 238.3.3.3.
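As a rough illustration of how per-VTEP multicast group information could be derived from received multicast reports, including the self-exclusion just described, consider the sketch below. The report format and the sequential allocation of underlay group addresses from a reserved range are assumptions made only for this example.

    from collections import defaultdict
    from ipaddress import ip_address

    def build_grouping_info(reports, underlay_pool_start="240.2.2.2"):
        """Aggregate (vtep, overlay_group) reports into per-VTEP grouping info.

        Each overlay group is assigned the next address from a range assumed to be
        reserved in the underlay domain.  The receiving VTEP is excluded from its
        own interest lists.
        """
        members = defaultdict(set)                  # overlay group -> interested VTEPs
        for vtep, overlay_group in reports:
            members[overlay_group].add(vtep)

        next_addr = ip_address(underlay_pool_start)
        overlay_to_underlay = {}
        for overlay_group in sorted(members):
            overlay_to_underlay[overlay_group] = str(next_addr)
            next_addr += 1

        all_vteps = {v for vteps in members.values() for v in vteps}
        return {
            vtep: {og: {"underlay": overlay_to_underlay[og],
                        "vteps": sorted(members[og] - {vtep})}
                   for og in members}
            for vtep in all_vteps
        }

    info = build_grouping_info([("VTEP-C", "237.1.1.1"), ("VTEP-D", "237.1.1.1"),
                                ("VTEP-E", "237.1.1.1"), ("VTEP-A", "238.3.3.3")])
    print(info["VTEP-C"]["237.1.1.1"])  # {'underlay': '240.2.2.2', 'vteps': ['VTEP-D', 'VTEP-E']}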
In some embodiments, the list of VTEPs of multicast groups sent to a particular host machine or tenant edge node distinguishes VTEPs connected to the same network segment as the particular tenant edge node or host machine from VTEPs connected to a different network segment than the particular tenant edge node or host machine.
In some embodiments, a T1 (tenant) edge node receiving the multicast group information would identify a VTEP at a different network segment having a tenant edge node that is interested in the underlay multicast group identifier. The T1 edge node then forwards the packet to the identified VTEP by direct unicast rather than by using the ToR switch.
At operations labeled (2), the host machine 113 maps the overlay multicast group identifier 238.3.3.3 to the underlay multicast group identifier 241.4.4.4 (by using the multicast information 323). The host machine 113 (at the T1 edge 133) encapsulates the packet 410 with an encapsulation header that includes the underlay multicast group identifier 241.4.4.4 to create an encapsulated packet 412. The host machine 113 forwards the encapsulated packet 412 to the physical switch (L2-ToR) 141.
At operations labeled (3), the physical switch 141 forwards copies of the encapsulated packet 412 to host machines 111, 112, and 114 (having T1 edges 131, 132, 134) at one or more ports of the switch 141 that are determined to be interested in the underlay multicast group identifier 241.4.4.4. (The multicast traffic is replicated to ports that correspond to VTEPs A, B, and D.) The physical switch 141 also forwards a copy of the encapsulated packet 412 to the host machine 111 having the active T0 edge 121 to be forwarded to the WAN 109. The T1 tenant edge nodes 131, 132, and 134 and the active T0 edge 121 in turn decapsulate the packets to remove the underlay multicast group identifier and forward the decapsulated packets to their respective networks (including the WAN 109).
(In some embodiments, the physical switch 141 learns which ports or which VTEPs are interested in the underlay multicast group 241.4.4.4 by IGMP snooping. Specifically, a VTEP may initiate an IGMP join on the underlay multicast group 241.4.4.4 toward the physical switch 141 when its TN has an overlay receiver learned from its attached tenant network and the SDN controller 300 has sent the underlay multicast mapping for the corresponding overlay multicast group. The physical switch 141 learns the VTEP ports based on the IGMP report messages on the VLANs (a.k.a. transport VLANs) that the VTEPs belong to. If a TN does not have any tenant that is interested in 241.4.4.4, then the ToR will not forward the packets to that particular TN.)
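One plausible way for a transport node to trigger the IGMP join described in the preceding parenthetical is to join the underlay group through the host's network stack, as sketched below; the function name and the example addresses are assumptions, and a real datapath may signal membership differently.

    import socket
    import struct

    def join_underlay_group(underlay_group, vtep_ip):
        """Join the underlay multicast group so the kernel emits an IGMP report.

        The ToR switch snoops this report on the transport VLAN and learns that
        the port behind `vtep_ip` is interested in `underlay_group`.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        mreq = struct.pack("4s4s",
                           socket.inet_aton(underlay_group),
                           socket.inet_aton(vtep_ip))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return sock  # keep the socket open to remain joined to the group

    # e.g., once the controller pushes the 238.3.3.3 -> 241.4.4.4 mapping and a
    # local overlay receiver exists (the VTEP address here is illustrative):
    # membership = join_underlay_group("241.4.4.4", "10.0.0.113")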
At operations labeled (4), the T1 edge 133 (at VTEP-C) forwards a copy of the multicast packet 410 to the T1 edge 136 (at VTEP-F) by unicast. The T1 edge 133 performs this operation because the multicast group information 323 sent to the host machine 113 indicates that VTEPs F and G are also interested in the multicast group 238.3.3.3 (241.4.4.4), and that VTEPs F and G are in a different network segment than VTEP-C. The host machine 113 therefore sends the packet 410 to the T1 edge 136. In some embodiments, the packet 410 is sent using an overlay tunnel from VTEP-C to VTEP-F, though the packet physically traverses the L2 switches 141 and 142 and the L3 router 150.
At operations labeled (5), the T1 edge 136 (at VTEP-F, or the host machine 116) maps the overlay multicast group identifier 238.3.3.3 to the underlay multicast group identifier 241.4.4.4 (by using the multicast information 326). The host machine 116 (at the T1 edge 136) encapsulates the packet 410 with an encapsulation header that includes the underlay multicast group identifier 241.4.4.4 to create an encapsulated packet 412. The host machine 116 forwards the encapsulated packet 412 to the physical switch (L2-ToR) 142.
At operations labeled (6), the physical switch 142 forwards copies of the encapsulated packet 412 to the host machine 117 (having T1 edge 137) at ports of the switch that are determined to be interested in the underlay multicast group identifier 241.4.4.4. (The multicast traffic is replicated to ports that correspond to VTEP-G). The T1 tenant edge 137 in turn decapsulates the packet to remove the underlay multicast group identifier and forwards the decapsulated packet 410 to tenant network G 107 with overlay multicast group identifier 238.3.3.3.
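Taken together, the local-segment offload and the cross-segment unicast in the example above can be summarized by the sketch below. The entry object is assumed to carry the per-segment VTEP lists shown in the earlier data-structure sketch, and the encapsulate, send_to_tor, and send_unicast callables are hypothetical placeholders for an implementation's datapath.

    def replicate_with_offload(packet, entry, encapsulate, send_to_tor, send_unicast):
        """Replicate an overlay multicast packet per the two-step scheme above.

        One copy, encapsulated with the underlay multicast group as the outer
        destination, is handed to the local ToR switch, which replicates it to
        every interested VTEP port on this segment.  Interested VTEPs on other
        segments each receive a unicast copy and repeat the same offload toward
        their own ToR switch.
        """
        if entry.local_vteps:                    # any receivers behind the local ToR?
            send_to_tor(encapsulate(packet, outer_dst=entry.underlay_group))
        for remote_vtep in entry.remote_vteps:   # e.g., VTEP-F in the example above
            send_unicast(remote_vtep, packet)    # overlay tunnel, no underlay group needed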
For some embodiments, the drawings conceptually illustrate a process 500 for offloading multicast replication from edge nodes to a physical switch. The process 500 may be performed by a host machine that implements a provider edge node and/or a tenant edge node.
In some embodiments, the process 500 starts when the host machine receives (at 510) a packet having an overlay multicast group identifier. The packet may be received from a tenant network, in which case the host machine hosts a tenant edge node that serves data traffic to and from that tenant network. Each tenant edge node serves data traffic, including multicast traffic, to and from a tenant network by performing gateway functions. The packet may also be received from an external network (e.g., a WAN), in which case a particular provider edge node serves data traffic, including multicast traffic, to and from the external network by performing gateway functions. The particular provider edge node actively serves data traffic to and from the external network, while other provider edge nodes implemented by other host machines are standing by and not actively serving data traffic to and from the external network.
The host machine maps (at 520) the overlay multicast group identifier to an underlay multicast group identifier according to mapping information provided by a network controller. In some embodiments, a network controller (e.g., an SDN controller) sends, to each tenant edge and each provider edge, multicast grouping information associating an overlay multicast group identifier with (i) a corresponding underlay multicast group identifier and (ii) a list of VTEPs that are interested in the multicast group. In some embodiments, the network controller generates the multicast grouping information based on multicast reports (e.g., IGMP reports) that associate VTEPs with overlay multicast group identifiers. In some embodiments, the list of VTEPs that are interested in a multicast group sent to a particular tenant edge node distinguishes (i) VTEPs connected to the same network segment as the particular tenant edge node from (ii) VTEPs connected to a different network segment than the particular tenant edge node.
The host machine encapsulates (at 530) the packet with an encapsulation header that includes the underlay multicast group identifier to create an encapsulated packet.
The host machine forwards (at 540) the encapsulated packet to a physical switch of the first network segment. The physical switch then forwards copies of the encapsulated packet to tenant edge nodes at one or more ports that are determined (e.g., by IGMP snooping) to be interested in (or have receivers for) the underlay multicast group identifier. A tenant edge node receiving a copy of the encapsulated packet may decapsulate the packet to remove the underlay multicast group identifier and forward the decapsulated packet to a tenant network by multicast based on the overlay multicast group identifier. The particular provider edge node may also receive a copy of the encapsulated packet from the physical switch and forward a decapsulated copy of the packet to the external network without the underlay multicast group identifier. In some embodiments, when the packet is received from the external network, the host machine that implements the particular provider edge node receives the packet, maps the overlay multicast group identifier to the underlay multicast group identifier, encapsulates the packet with an encapsulation header that includes the underlay multicast group identifier, and forwards the encapsulated packet to the physical switch of the first network segment.
The host machine then determines (at 550) whether any VTEP in another network segment is interested in the multicast group. The host machine may use the multicast grouping information sent to the host machine to identify any VTEPs in other segments that are also interested in the multicast group. If there is a VTEP in another network segment interested in the multicast group, the process proceeds to 560. If no other network segment has a VTEP that is interested in the multicast group, the process 500 ends.
At 560, the host machine identifies a VTEP at a second network segment having a tenant edge node that is interested in the underlay multicast group identifier. The host machine forwards (at 570) the packet to the identified VTEP by unicast. The process 500 then ends.
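Purely as an illustration, the operations 510-570 of the process 500 might be organized as in the following sketch; the grouping_info mapping and the encapsulate, send_to_tor, and send_unicast callables are hypothetical stand-ins for whatever mapping table and datapath a host machine actually uses.

    def process_500(packet, grouping_info, encapsulate, send_to_tor, send_unicast):
        """Rough sketch of process 500 as performed by a host machine."""
        entry = grouping_info.get(packet.dst_ip)       # 510/520: receive packet, map overlay group
        if entry is None:
            return                                     # no known receivers for this group
        outer = encapsulate(packet, outer_dst=entry.underlay_group)  # 530: add underlay group
        send_to_tor(outer)                             # 540: ToR replicates to interested local VTEPs
        for vtep in entry.remote_vteps:                # 550/560: VTEPs in other network segments
            send_unicast(vtep, packet)                 # 570: forward to the identified VTEP by unicast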
As mentioned, provider edges (T0 edge nodes) and tenant edges (T1 edge nodes) may be implemented by host machines that are running virtualization software, serving as virtual network forwarding engines. Such a virtual network forwarding engine is also known as a managed forwarding element (MFE) or a hypervisor. Virtualization software allows a computing device to host a set of virtual machines (VMs) or data compute nodes (DCNs) as well as to perform packet-forwarding operations (including L2 switching and L3 routing operations). These computing devices are therefore also referred to as host machines. The packet forwarding operations of the virtualization software are managed and controlled by a set of central controllers, and therefore the virtualization software is also referred to as a managed software forwarding element (MSFE) in some embodiments. In some embodiments, the MSFE performs its packet forwarding operations for one or more logical forwarding elements as the virtualization software of the host machine operates local instantiations of the logical forwarding elements as physical forwarding elements. Some of these physical forwarding elements are managed physical routing elements (MPREs) for performing L3 routing operations for a logical routing element (LRE), while others are managed physical switching elements (MPSEs) for performing L2 switching operations for a logical switching element (LSE).
As illustrated, the computing device 600 has access to a physical network 690 through a physical NIC (PNIC) 695. The host machine 600 also runs the virtualization software 605 and hosts VMs 611-614. The virtualization software 605 serves as the interface between the hosted VMs 611-614 and the physical NIC 695 (as well as other physical resources, such as processors and memory). Each of the VMs 611-614 includes a virtual NIC (VNIC) for accessing the network through the virtualization software 605. Each VNIC in a VM 611-614 is responsible for exchanging packets between the VM 611-614 and the virtualization software 605. In some embodiments, the VNICs are software abstractions of physical NICs implemented by virtual NIC emulators.
The virtualization software 605 manages the operations of the VMs 611-614, and includes several components for managing the access of the VMs 611-614 to the physical network 690 (by implementing the logical networks to which the VMs connect, in some embodiments). As illustrated, the virtualization software 605 includes several components, including a MPSE 620, a set of MPREs 630, a controller agent 640, a network data storage 645, a VTEP 650, and a set of uplink pipelines 670.
The VTEP (virtual tunnel endpoint) 650 allows the host machine 600 to serve as a tunnel endpoint for logical network traffic (e.g., VXLAN traffic). VXLAN is an overlay network encapsulation protocol. An overlay network created by VXLAN encapsulation is sometimes referred to as a VXLAN network, or simply VXLAN. When a VM 611-614 on the host machine 600 sends a data packet (e.g., an Ethernet frame) to another VM in the same VXLAN network but on a different host (e.g., other machines 680), the VTEP 650 will encapsulate the data packet using the VXLAN network's VNI and network addresses of the VTEP 650, before sending the packet to the physical network 690. The packet is tunneled through the physical network (i.e., the encapsulation renders the underlying packet transparent to the intervening network elements) to the destination host. The VTEP at the destination host decapsulates the packet and forwards only the original inner data packet to the destination VM. In some embodiments, the VTEP module serves only as a controller interface for VXLAN encapsulation, while the encapsulation and decapsulation of VXLAN packets is accomplished at the uplink module 670.
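For orientation, the fixed 8-byte VXLAN header that carries the VNI can be sketched as follows (per the RFC 7348 layout); the outer Ethernet, IP, and UDP headers that a VTEP also prepends are omitted, and the example VNI is arbitrary.

    import struct

    VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for the outer UDP header

    def vxlan_header(vni):
        """Build the 8-byte VXLAN header: I flag set, 24-bit VNI, reserved bits zero."""
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        flags_word = 0x08 << 24   # first 32 bits: flags (I bit) plus reserved bits
        vni_word = vni << 8       # second 32 bits: VNI in the upper 24 bits, last byte reserved
        return struct.pack("!II", flags_word, vni_word)

    inner_frame = b"\x00" * 64                       # placeholder for the inner Ethernet frame
    udp_payload = vxlan_header(5001) + inner_frame   # carried in an outer UDP datagram to port 4789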
The controller agent 640 receives control plane messages from a controller 660 (e.g., a CCP node) or a cluster of controllers. In some embodiments, these control plane messages include configuration data for configuring the various components of the virtualization software 605 (such as the MPSE 620 and the MPREs 630) and/or the virtual machines 611-614.
In some embodiments, the controller agent 640 receives the multicast group information from the SDN controller and uses the multicast group information to map multicast group identifiers from overlay to underlay and to identify which VTEPs are interested in which multicast group. The controller agent 640 also uses the multicast group information to distinguish VTEPs that are in the same network segment as the current host machine from VTEPs that are not in the same network segment. Based on the received multicast group information, the host machine 600 encapsulates multicast packets with the underlay multicast group identifier and sends multicast packets to VTEPs in different network segments by unicast.
The network data storage 645 in some embodiments stores some of the data that are used and produced by the logical forwarding elements of the host machine 600 (logical forwarding elements such as the MPSE 620 and the MPRE 630). Such stored data in some embodiments include forwarding tables and routing tables, connection mappings, as well as packet traffic statistics. These stored data are accessible by the controller agent 640 in some embodiments and delivered to another computing device (e.g., the SDN controller 300).
The MPSE 620 delivers network data to and from the physical NIC 695, which interfaces the physical network 690. The MPSE 620 also includes a number of virtual ports (vPorts) that communicatively interconnect the physical NIC 695 with the VMs 611-614, the MPREs 630, and the controller agent 640. Each virtual port is associated with a unique L2 MAC address, in some embodiments. The MPSE 620 performs L2 link layer packet forwarding between any two network elements that are connected to its virtual ports. The MPSE 620 also performs L2 link layer packet forwarding between any network element connected to any one of its virtual ports and a reachable L2 network element on the physical network 690 (e.g., another VM running on another host). In some embodiments, a MPSE is a local instantiation of a logical switching element (LSE) that operates across the different host machines and can perform L2 packet switching between VMs on a same host machine or on different host machines. In some embodiments, the MPSE performs the switching function of several LSEs according to the configuration of those logical switches.
The MPREs 630 perform L3 routing on data packets received from a virtual port on the MPSE 620. In some embodiments, this routing operation entails resolving a L3 IP address to a next-hop L2 MAC address and a next-hop VNI (i.e., the VNI of the next-hop's L2 segment). Each routed data packet is then sent back to the MPSE 620 to be forwarded to its destination according to the resolved L2 MAC address. This destination can be another VM connected to a virtual port on the MPSE 620, or a reachable L2 network element on the physical network 690 (e.g., another VM running on another host, a physical non-virtualized machine, etc.).
As mentioned, in some embodiments, a MPRE is a local instantiation of a logical routing element (LRE) that operates across the different host machines and can perform L3 packet forwarding between VMs on a same host machine or on different host machines. In some embodiments, a host machine may have multiple MPREs connected to a single MPSE, where each MPRE in the host machine implements a different LRE. MPREs and MPSEs are referred to as “physical” routing/switching elements in order to distinguish from “logical” routing/switching elements, even though MPREs and MPSEs are implemented in software in some embodiments. In some embodiments, a MPRE is referred to as a “software router” and a MPSE is referred to as a “software switch”. In some embodiments, LREs and LSEs are collectively referred to as logical forwarding elements (LFEs), while MPREs and MPSEs are collectively referred to as managed physical forwarding elements (MPFEs). Some of the logical resources (LRs) mentioned throughout this document are LREs or LSEs that have corresponding local MPREs or a local MPSE running in each host machine.
In some embodiments, the MPRE 630 includes one or more logical interfaces (LIFs) that each serve as an interface to a particular segment (L2 segment or VXLAN) of the network. In some embodiments, each LIF is addressable by its own IP address and serves as a default gateway or ARP proxy for network nodes (e.g., VMs) of its particular segment of the network. In some embodiments, all of the MPREs in the different host machines are addressable by a same “virtual” MAC address (or vMAC), while each MPRE is also assigned a “physical” MAC address (or pMAC) in order to indicate in which host machine the MPRE operates.
The uplink module 670 relays data between the MPSE 620 and the physical NIC 695. The uplink module 670 includes an egress chain and an ingress chain that each perform a number of operations. Some of these operations are pre-processing and/or post-processing operations for the MPRE 630.
As illustrated, the MPSE 620 and the MPRE 630 make it possible for data packets to be forwarded amongst the VMs 611-614 without being sent through the external physical network 690 (so long as the VMs connect to the same logical network, as different tenants' VMs will be isolated from each other). Specifically, the MPSE 620 performs the functions of the local logical switches by using the VNIs of the various L2 segments (i.e., their corresponding L2 logical switches) of the various logical networks. Likewise, the MPREs 630 perform the function of the logical routers by using the VNIs of those various L2 segments. Since each L2 segment/L2 switch has its own unique VNI, the host machine 600 (and its virtualization software 605) is able to direct packets of different logical networks to their correct destinations and effectively segregate traffic of different logical networks from each other.
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 705 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 700. For instance, the bus 705 communicatively connects the processing unit(s) 710 with the read-only memory 730, the system memory 720, and the permanent storage device 735.
From these various memory units, the processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) 710 may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM) 730 stores static data and instructions that are needed by the processing unit(s) 710 and other modules of the computer system 700. The permanent storage device 735, on the other hand, is a read-and-write memory device. This device 735 is a non-volatile memory unit that stores instructions and data even when the computer system 700 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 735.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device 735. Like the permanent storage device 735, the system memory 720 is a read-and-write memory device. However, unlike storage device 735, the system memory 720 is a volatile read-and-write memory, such as a random-access memory. The system memory 720 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 720, the permanent storage device 735, and/or the read-only memory 730. From these various memory units, the processing unit(s) 710 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 705 also connects to the input and output devices 740 and 745. The input devices 740 enable the user to communicate information and select commands to the computer system 700. The input devices 740 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 745 display images generated by the computer system 700. The output devices 745 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices 740 and 745.
Finally, the bus 705 also couples the computer system 700 to a network through a network adapter, allowing the computer system 700 to be a part of a network of computers.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Several embodiments described above include various pieces of data in the overlay encapsulation headers. One of ordinary skill will realize that other embodiments might not use the encapsulation headers to relay all of this data.
Also, several figures conceptually illustrate processes of some embodiments of the invention. In other embodiments, the specific operations of these processes may not be performed in the exact order shown and described in these figures. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
This application is a continuation application of U.S. patent application Ser. No. 17/367,347, filed Jul. 3, 2021, now published as U.S. Patent Publication 2023/0006922. U.S. patent application Ser. No. 17/367,347, now published as U.S. Patent Publication 2023/0006922, is hereby incorporated by reference.
Publication: US 2023/0370367 A1, published Nov. 2023 (US).
Related U.S. Application Data: parent application Ser. No. 17/367,347, filed Jul. 2021 (US); child application Ser. No. 18/226,777 (US).