Using route type to determine routing protocol behavior

Information

  • Patent Grant
  • Patent Number
    10,931,560
  • Date Filed
    Thursday, February 14, 2019
  • Date Issued
    Tuesday, February 23, 2021
Abstract
Some embodiments provide a method for implementing a logical network. Based on logical network configuration data, the method identifies a route for a set of network addresses to add to a routing table of the logical router, and also identifies a route type for the identified route. The method determines whether to include the identified route as a route for the logical router to advertise based on the route type of the identified route. The method distributes a routing table comprising the identified route to a computing device that implements the logical router, where the computing device advertises the identified route when the route type is specified for advertisement.
Description
RELATED APPLICATIONS

Benefit is claimed under 35 U.S.C. 119(a)-(d) to Foreign Application Serial No. 201841044147 filed in India entitled “USING ROUTE TYPE TO DETERMINE ROUTING PROTOCOL BEHAVIOR”, on Nov. 23, 2018, by VMware, Inc., which is herein incorporated in its entirety by reference for all purposes.


BACKGROUND

In a datacenter, the network administrator will often need to maintain a list of network prefixes that should be advertised to peers outside of the network (e.g., a list of public IP addresses). However, cloud environments may have fast-changing network topology, such that maintaining such a list of prefixes is difficult. This is exacerbated in a multi-tenant environment, with multiple tenant networks having various different prefixes that need to be advertised. Thus, techniques for more easily maintaining the list of network prefixes to be advertised are needed.


BRIEF SUMMARY

Some embodiments of the invention provide a method for determining the routes that a logical router advertises to external routers based at least partly on route type. In some embodiments, each route in a routing table of the logical router is tagged with a route type (e.g., connected routes, routes associated with specific service types, etc.), and the logical router is configured to advertise specific types of routes. Thus, when new routes are added to the routing table of the logical router, the method determines whether to advertise the route based on the route type rather than a route-specific configuration.


In some embodiments, the routing table is a routing table for a centralized routing component of a logical router that includes a distributed routing component and one or more centralized routing components. The centralized routing components of some embodiments each execute on a single computing device, while the distributed routing component is implemented on numerous computing devices in a datacenter (or multiple datacenters). These numerous computing devices implementing the distributed routing component may include host computers that host data compute nodes (DCNs) such as virtual machines (VMs) that are the endpoints of the logical network as well as the computing devices that implement the centralized logical routers. The centralized routing components interface with external networks (e.g., public networks) as well as centralized routing components in other datacenters, in some embodiments, while the distributed routing component interfaces with the internal logical network.


In some cases, this internal logical network includes numerous other logical routers that connect to the distributed routing component, and which may belong to different datacenter tenants. For instance, the logical router with centralized routing components that interfaces with external networks might be a provider logical router managed by the datacenter administrator, while the other logical routers are tenant logical routers managed by different datacenter tenants. The routes advertised (or potentially advertised) by the centralized routing components may be routes configured specifically for the provider logical router (e.g., static routes, connected routes based on logical switches directly connected to the provider logical router, etc.) or routes that these other tenant logical routers advertise to the provider logical router.


When a logical network configuration changes (e.g., adding new logical switches, configuration of network address translation (NAT) rules, static route configuration, load balancer or other service configurations, receipt of advertised routes from a tenant logical router due to these types of changes, etc.), a network management and control system (e.g., a centralized management and/or control plane) updates the routing tables for the centralized routing components. In some embodiments, when a new route is added, the route is tagged with a route type based on the source of the route. These route types may include, e.g., connected downlink (for logical switches connected to tenant logical routers), connected uplink (for the subnets via which the centralized routing component directly connects to external peers, etc.), NAT address, load balancer address, etc.


To determine whether to advertise a given route, the provider logical router is configured to make decisions on the basis of route type. That is, if all connected downlink routes are to be advertised, then any new route with a connected downlink tag will be advertised without requiring additional administrator intervention. Similarly, if the configuration specifies that connected uplink routes are not to be advertised, then new routes with the connected uplink tag will not be advertised. In different embodiments, the decision as to whether to advertise a route may be made by the network management and control system (e.g., by tagging a route for advertisement or not when distributing the routing table configuration to the computing device implementing a centralized routing component) or by the computing device itself. In the latter case, the management and control system distributes the routes with their route type tags and also distributes the advertisement decision for each route type.


In addition to determining whether to advertise a route based on route type, some embodiments also include other factors. For example, some embodiments allow differentiation based on the source of the route (e.g., from which tenant logical router the route was learned). Thus, for example, the provider logical router could be configured to advertise NAT routes from a first tenant logical router but not from a second tenant logical router. In addition, decisions can be made based on the peer router to which the routes are advertised in some embodiments. For instance, the administrator may want certain types of routes to be advertised to external routers in a public network but not to centralized routing components in other datacenters, or vice versa.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all of the inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, the Detailed Description, and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 conceptually illustrates a management/control plane view of a logical network with two tiers of logical routers.



FIG. 2 illustrates a physical implementation of the logical network in FIG. 1.



FIG. 3 illustrates a process performed in some embodiments by the management/control plane to assign tags to routes specifying the route type.



FIG. 4 conceptually illustrates an example of advertising routes based on route type and/or route source.



FIG. 5 illustrates a portion of the prefix list used in configuring the logical network.



FIG. 6 illustrates a portion of the rules table used to make decisions about advertising routes.



FIG. 7 illustrates a process performed by the network management and control system in some embodiments for determining route advertisements.



FIG. 8 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

Some embodiments of the invention provide a method for determining the routes that a logical router advertises to external routers based at least partly on route type. In some embodiments, each route in a routing table of the logical router is tagged with a route type (e.g., connected routes, routes associated with specific service types, etc.), and the logical router is configured to advertise specific types of routes. Thus, when new routes are added to the routing table of the logical router, the method determines whether to advertise the route based on the route type rather than a route-specific configuration.
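As a concrete, non-limiting illustration of this idea, the following Python sketch tags each route with a type and keys the advertisement decision off that tag rather than off per-route configuration. The route-type names follow the examples above; the class, variable, and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str       # e.g., "192.168.1.0/24"
    route_type: str   # e.g., "connected_downlink", "nat", "lb_vip"

# Per-type advertisement policy, configured once for the logical router.
ADVERTISE_TYPES = {"connected_downlink", "lb_vip"}

def should_advertise(route: Route) -> bool:
    # The decision keys off the type tag, so newly added routes of an
    # already-configured type need no per-route administrator intervention.
    return route.route_type in ADVERTISE_TYPES

print(should_advertise(Route("192.168.1.0/24", "connected_downlink")))  # True
```

Under such a scheme, adding a new connected-downlink route requires no new configuration; the route inherits the per-type policy.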


In some embodiments, the routing table is a routing table for a centralized routing component of a logical router. The logical router includes a distributed routing component and one or more centralized routing components. The centralized routing components of some embodiments each execute on a single computing device, while the distributed routing component is implemented on numerous computing devices in a datacenter (or multiple datacenters). These numerous computing devices implementing the distributed routing component may include host computers that host data compute nodes (DCNs) such as virtual machines (VMs) that are the endpoints of the logical network, as well as the computing devices that implement the centralized logical routers. The centralized routing components interface with external networks (e.g., public networks) as well as centralized routing components in other datacenters, in some embodiments, while the distributed routing component interfaces with the internal logical network.


In some cases, this internal logical network includes numerous other logical routers that connect to the distributed routing component, and which may belong to different datacenter tenants. For instance, the logical router with centralized routing components that interfaces with external networks might be a provider logical router managed by the datacenter administrator, while the other logical routers are tenant logical routers managed by different datacenter tenants. The routes advertised (or potentially advertised) by the centralized routing components may be routes configured specifically for the provider logical router (e.g., static routes, connected routes based on logical switches directly connected to the provider logical router, etc.) or routes that these other tenant logical routers advertise to the provider logical router.



FIG. 1 conceptually illustrates a network management and control system view of a logical network 100 with two tiers of logical routers. As shown, the logical network 100 includes a provider logical router 105 (PLR) and several tenant logical routers 110-120 (TLRs). The first tenant logical router 110 has two logical switches 125 and 130 attached, with one or more data compute nodes (not shown) coupling to each of the logical switches. For simplicity, only the logical switches attached to the first TLR 110 are shown, although the other TLRs 115-120 would also typically have logical switches attached (to which other data compute nodes could couple).


In some embodiments, any number of TLRs may be attached to a PLR such as the PLR 105. Some datacenters may have only a single PLR to which all TLRs implemented in the datacenter attach, whereas other datacenters may have numerous PLRs. For instance, a large datacenter may want to use different PLR policies for different tenants or groups of tenants. Alternatively, the datacenter may have too many different tenants to attach all of the TLRs to a single PLR (because, e.g., the routing table for the PLR might get too big). Part of the routing table for a PLR includes routes for all of the logical switch domains of its TLRs, so attaching numerous TLRs to a PLR creates several routes for each TLR just based on the subnets attached to the TLR. The PLR 105, as shown in the figure, provides a connection to the external physical network 135; some embodiments only allow the PLR to provide such a connection, so that the datacenter provider can manage this connection. Each of the separate TLRs 110-120, though part of the logical network 100, is configured independently (although a single tenant could have multiple TLRs if it so chooses).


The PLR in FIG. 1 has centralized routing components 140 and 142 (also referred to as service routers, or SRs), a distributed routing component (DR) 150, and a transit logical switch 155, created by the network management and control system of some embodiments based on the configuration of the PLR. That is, in some embodiments a user (e.g., a network/datacenter administrator) provides a configuration of the PLR as a router with certain interfaces, and the network management and control system defines the internal components of the PLR and interfaces between these components as shown. The DR 150 includes a southbound interface for each of the TLRs 110-120, and a single northbound interface to the transit logical switch 155 (and through this to the SRs). The SRs 140-142 each include a single southbound interface to the transit logical switch 155 (used to communicate with the DR 150, as well as each other in certain situations).


Each SR 140 and 142 also corresponds to one or more uplink ports of the PLR 105 for connecting to the external network 135 in some embodiments. Each of the SRs in this example has a single north-facing interface, though in other embodiments a single SR can implement more than one uplink interface. The SRs of some embodiments are responsible for delivering services that are not implemented in a distributed fashion (e.g., some stateful services). Even if there are no stateful services configured on the logical router 105, some embodiments use centralized routing components to centralize management of the connection(s) to the external network 135.


In some embodiments, the network management and control system generates separate routing information bases (RIBs) for each of the router constructs 140-150. Essentially, the network management and control system treats each of the router constructs 140-150 as a separate logical router with separate interfaces and a separate routing table.



FIG. 2 illustrates a physical implementation of the logical network 100. As shown, the data compute nodes (such as VMs 215) which couple to the logical switches 125 and 130 actually execute on host machines 205. The MFEs 210 that operate on these host machines 205 in some embodiments are virtual switches (e.g., Open vSwitch (OVS), ESX) that operate within the hypervisors or other virtualization software on the host machines. These MFEs 210 perform first-hop switching and routing to implement the logical switches 125 and 130, the TLRs 110-120, and the PLR 105, for packets sent by the VMs 215 of the logical network 100. The MFEs 210 (or a subset of them) also may implement logical switches (and distributed logical routers) for other logical networks if the other logical networks have VMs that reside on the host machines 205 as well.



FIG. 2 also illustrates the network management and control system for the logical network 100. The configuration of the MFEs 210 is controlled by a management and central control plane cluster 225 (MP/CCP), which calculates and distributes configuration data to each MFE. In some embodiments, each host machine 205 also hosts a local control plane agent 230 (LCP) which receives the configuration data from the MP/CCP 225, converts the data into a format useable by the local MFE 210, if needed, and distributes the converted data to the local MFE 210. The MP/CCP 225 generates the configuration data based on the specification of the logical network 100 (e.g., based on administrator-entered network configuration). The MP/CCP 225 may also include a prefix list 250 of routes for advertisement, with additional information like tags that indicate the type of route and the source of the route. An example of such a prefix list is illustrated in FIG. 5 and explained in further detail below.


The centralized routing components 140 and 142 each operate on a different gateway machine 240. Unlike the DR, these SRs are centralized to the gateway machines and not distributed. The gateway machines 240 are host machines similar to the machines 205 in some embodiments, hosting centralized routing components rather than user VMs (in other embodiments, host machines may host both centralized routing components as well as user VMs). In some embodiments, the gateway machines 240 each include an MFE 210 as well as the centralized routing components 140-142, in order for the MFE to handle logical switching as well as routing for the logical routers. As an example, packets sent from the external network 135 may be routed by the SR routing table on one of the gateway machines and then subsequently switched and routed (according to the DR routing table) by the MFE on the same gateway. In other embodiments, the gateway machine executes a single datapath (e.g., a DPDK-based datapath) that implements the SR as well as the DR and other distributed logical forwarding elements. The gateway machines 240 may also include an LCP 230 to receive configuration data from the MP/CCP 225.


The SRs 140 and 142 may be implemented in a namespace, a virtual machine, as a VRF, etc., in different embodiments. The SRs may operate in an active-active or active-standby mode in some embodiments, depending on whether any stateful services (e.g., firewalls) are configured on the logical router. When stateful services are configured, some embodiments require only a single active SR. In some embodiments, the active and standby service routers are provided with the same configuration, but the MFEs 210 are configured to send packets via a tunnel to the active SR (or to the MFE on the gateway machine with the active SR). Only if the tunnel is down will the MFE send packets to the standby gateway.


In order for VMs in the logical network 100 to receive southbound data message traffic, the SRs 140 and 142 of some embodiments (or routing protocol applications executing on the gateways alongside the SRs) advertise routes to their network peers. Route advertisement in logical networks is explained in further detail in U.S. Pat. Nos. 10,075,363, 10,038,628, and 9,590,901, which are incorporated herein by reference. However, not all routes should be advertised to all neighbors. Some network prefixes, such as those for private subnets, should only be redistributed to selected internal peers and not to the Internet, for example. Routes may also be specific to certain tenants and should not be available to other tenants. A NAT IP block or a load balancer virtual IP address might only be advertised to the Internet, whereas a private network could be advertised to a peer in a remote site connected over a VPN or not at all. These advertisement decisions are dependent on the type of the route and the source of the route in some embodiments.


In some embodiments, these advertisement decisions are controlled using tags that indicate the type and source of route. FIG. 3 illustrates a process 300 of some embodiments for assigning these tags. In some embodiments, the process 300 is performed by the network management and control system (e.g., by a management plane and/or central control plane). When a logical network 100 configuration changes, the network management and control system updates the routing tables for the centralized routing components. Examples of configuration changes or updates include configuration of ports and types of ports, configuration of network address translation (NAT) rules, static route configuration, load balancer or other service configurations, DNS services, VPN endpoints, adding new logical switches, receipt of advertised routes from a tenant logical router due to these types of changes, etc.


As shown, the process 300 begins by receiving (at 305) a configuration update that includes a route. This may involve the direct configuration of a route by an administrator or a configuration update that indirectly results in the creation (or deletion) of a route, such as the connection of a logical switch or creation of a new NAT IP address.


After receiving the route, the process 300 determines (at 310) the type of the route. Some examples of route types may include, e.g., connected downlink (for logical switches connected to tenant logical routers), connected uplink (for the subnets via which the centralized routing component directly connects to external peers, etc.), NAT address, load balancer address, services like DNS and IPSec, etc. The routes may include routes advertised from tier 1 routers (i.e., the TLRs 110-120) as well as routes directly configured for the centralized routing components. In some embodiments, the route types may be manually pre-defined by a user.
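A hedged sketch of this classification step (operation 310) follows. The route-type tags mirror the examples in this paragraph, while the configuration-update kind strings used as keys are assumptions, not part of the described embodiments.

```python
def determine_route_type(update_kind: str) -> str:
    # Map the kind of configuration change that produced the route
    # to a route-type tag (type names follow the text; keys are assumed).
    mapping = {
        "tlr_logical_switch": "connected_downlink",  # switch behind a TLR
        "sr_uplink_subnet": "connected_uplink",      # SR's direct external subnet
        "nat_rule": "nat",
        "load_balancer_vip": "lb_vip",
        "dns_service": "dns",
        "ipsec_service": "ipsec",
    }
    return mapping.get(update_kind, "static")        # fall back to static
```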


Once the type of route is determined, the process 300 tags the route (at 315) with the determined route type. Tags may also be removed from a route or modified for a route as necessary. All advertised prefixes are tagged and tracked separately with different sets of route types. Automatic tracking of network prefixes helps the administrator write simple redistribution rules and BGP filters, for example. In some embodiments, a prefix may also be tagged with a tag indicating that the prefix is not to be advertised.


In some embodiments, the routes, their associated route types (i.e., tags), and their source are stored as a prefix list 250 in the MP/CCP 225. The process 300 updates (at 320) the prefix list with the tagged route. A logical router may host multiple virtual routing and forwarding (VRF) contexts, in which case a separate prefix list is maintained for each VRF. In some embodiments, the updated prefix list is also distributed to the local agents 230. The process 300 then ends.
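The per-VRF prefix list might be represented as sketched below. The entry fields (prefix, type tag, and source) come from the description above; the concrete data shapes and names are assumptions.

```python
from collections import defaultdict

# One prefix list per VRF context hosted by the logical router.
prefix_lists: dict[str, list[dict]] = defaultdict(list)

def update_prefix_list(vrf: str, prefix: str, route_type: str, source: str) -> None:
    # Each entry tracks the prefix together with its type tag and its source.
    prefix_lists[vrf].append({"prefix": prefix, "type": route_type, "source": source})

update_prefix_list("default", "192.168.1.0/24", "connected_downlink", "TLR1")
```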



FIG. 4 conceptually illustrates an example of advertising routes based on route type and/or route source. In this example, a first SR 405 of a logical router executes on a gateway machine 410 in a datacenter 415 (Datacenter A). FIG. 5 illustrates a portion of the prefix list used in configuring the logical network at Datacenter A, and FIG. 6 illustrates a portion of the associated rules table used by SR1 405 in making decisions about advertising routes. In some embodiments, the rules or filters in the rules database 600 may be simple expressions based on the tags, such as “deny:all and allow:connected_downlinks.” Some of these rules may also specify applicable peer targets, such as “external”, “logical”, or specific router interfaces. In other embodiments, the prefix list 500 can be referred to in route maps (not shown). With route maps, multiple actions can be applied such as set community, set Autonomous System (AS) path, etc.
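One plausible reading of such tag-based rule expressions is sketched below, assuming a simple grammar in which later clauses override earlier ones (so “deny:all” acts as a default). The grammar itself is an assumption; the text does not specify one.

```python
def evaluate_rule(expression: str, route_type: str) -> bool:
    # Later clauses override earlier ones, so "deny:all" sets the default.
    verdict = False
    for clause in expression.split(" and "):
        action, target = clause.split(":")
        if target in ("all", route_type):
            verdict = (action == "allow")
    return verdict

assert evaluate_rule("deny:all and allow:connected_downlinks", "connected_downlinks")
assert not evaluate_rule("deny:all and allow:connected_downlinks", "nat")
```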


In the illustrated example of FIG. 4, the first SR 405 peers with an external router 420 in an external network. In addition, this first SR 405 also peers with a second SR 425 executing on a gateway host 430 in a different datacenter 435 (Datacenter B). In some embodiments, the gateways 410 and 430 may implement the same logical network spanning host machines 440-455 in both datacenters, or different logical networks (or logical network segments) that only communicate through the gateways.


In the example, the first SR 405 advertises several routes as specified in the prefix list 500 of FIG. 5. The first route 505 indicates a private subnet 192.168.1.0/24 with a route type of “connected downlink”, associated with a tenant logical router TLR1. Since this subnet is private, this route should be advertised to the second SR 425 in Datacenter B 435, but not to the external router R1 420. In order to properly advertise this route and other routes of the same classification, a rule 605 can be specified in the rules table 600 in the MP/CCP 225 which specifies that routes with “connected downlink” type should only be advertised to logical peers. SR1 405 is then accordingly configured by the MP/CCP 225, so that it advertises the route to SR2 425, but not to R1 420. Accordingly, SR1 sends a route advertisement 460 containing the route 505 to SR2 425.


A second route 510 indicates a load balancer service which directs traffic between multiple VMs running copies of a web application. The load balancer route prefix is for a single IP address, 172.16.1.0/32, indicating that all traffic bound for the web application must go through the load balancer and be distributed between the various VMs running the web application. Accordingly, this route is listed in the second entry 510 of the prefix list 500 with a route type of “load balancer virtual IP address” and associated with a tenant logical router TLR2. Since the web application is public, the route to the load balancer should be advertised to the external router 420. In order to properly advertise this route, a rule 610 can be specified in the rules table 600 in the network management and control system that specifies routes with “load balancer virtual IP” type should be advertised to external peers. The first SR 405 is then accordingly configured by the network management and control system, so that it advertises the route to the external router 420. Accordingly, the first SR 405 sends a route advertisement message 465 for the route 510 to the external router 420.


If, for example, the tenant decides to create a new subnet, then the administrator would specify the new subnet by providing configuration data to the network management and control system. The addition of the subnet would be detected and tagged with “connected downlink” type according to the process 300 described above and added to the prefix list 500 as a new entry 515. The updated prefix list 500 would then be used by the network management and control system to generate new configuration data, which would then update the configuration of the gateways 410 and 430 in Datacenters A 415 and B 435. This new subnet would then be advertised to logical peers but not to the external router, according to the same rule used to determine advertisement for the route 505, even though the route 515 is for a different subnet and has a different source.
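The FIG. 4-6 example can be recreated compactly under the sketches above. The rules encoding, the peer classes, and the prefix chosen for the new route 515 (the text does not give one) are hypothetical.

```python
# Peer classes per rule: 605 (connected downlink -> logical peers only)
# and 610 (load-balancer VIP -> external peers only).
RULES = {
    "connected_downlink": {"logical"},
    "lb_vip": {"external"},
}

routes = [
    {"prefix": "192.168.1.0/24", "type": "connected_downlink", "source": "TLR1"},  # route 505
    {"prefix": "172.16.1.0/32",  "type": "lb_vip",             "source": "TLR2"},  # route 510
    {"prefix": "192.168.2.0/24", "type": "connected_downlink", "source": "TLR1"},  # route 515 (prefix assumed)
]
peers = {"SR2": "logical", "R1": "external"}

for route in routes:
    for peer, peer_class in peers.items():
        if peer_class in RULES.get(route["type"], set()):
            print(f"advertise {route['prefix']} to {peer}")
# Routes 505 and 515 go only to SR2; route 510 goes only to R1.
```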


In some embodiments, routes are advertised by a routing protocol control plane element executing on the host machines 205 and the gateway machines 240. For example, a Border Gateway Protocol (BGP) control plane element would handle processing of incoming and outgoing advertisement messages 460 and 465.


To determine whether to advertise a given route, in some embodiments the gateway machines 240 are configured by the network management and control system 225 to make decisions on the basis of route type. That is, if all connected downlink routes are to be advertised, then any new route with a connected downlink tag will be provided to the SR routing table and the gateway will make the determination to advertise the new route without requiring additional administrator intervention. In these embodiments, the network management and control system distributes the routes with their route type tags and also distributes the advertisement decision for each route type. In other embodiments, the decision as to whether to advertise a route is instead made by the network management and control system itself (e.g., by tagging a route for advertisement or not when distributing the routing table configuration to the computing device implementing an SR).


In addition to determining whether to advertise a route based on route type, some embodiments also include other factors. For example, some embodiments allow differentiation based on the source of the route (e.g., from which tenant logical router the route was learned). As noted above, some embodiments also store the source for each prefix in the prefix list 250. Thus, for example, the provider logical router could be configured to advertise NAT routes from a first tenant logical router but not from a second tenant logical router. In addition, decisions can be made based on the peer router to which the routes are advertised in some embodiments. For instance, the administrator may want certain types of routes to be advertised to external routers in a public network but not to centralized routing components in other datacenters, or vice versa. These rules for advertising based on type and source are defined and stored in the rules storage database 235.
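A minimal sketch of such combined type/source/peer rules follows, assuming a simple tuple-keyed table with default-deny semantics; the encoding is an assumption, not the described rules storage database 235.

```python
# (route_type, source, peer_class) -> advertise? Unlisted combinations are denied.
ADVERTISE_RULES = {
    ("nat", "TLR1", "external"): True,   # advertise TLR1's NAT routes externally
    ("nat", "TLR2", "external"): False,  # but suppress TLR2's
}

def advertise(route_type: str, source: str, peer_class: str) -> bool:
    return ADVERTISE_RULES.get((route_type, source, peer_class), False)
```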



FIG. 7 illustrates a process 700 of some embodiments for determining whether a centralized routing component will advertise a set of routes. In some embodiments, the process 700 is performed by the network management and control system in order to configure an SR. In other embodiments, a similar process is performed by the SR or a module operating on a gateway host machine with an SR without involving the central manager or controller.


The process 700 begins (at 705) by selecting a route intended for the SR routing table. The SR's routing table is populated with routes based on the definition of the logical network in the network management and control system. The routes in the routing table therefore depend on the logical and physical network topologies as well as any additional configuration data that the network management and control system receives (e.g., static route configuration, configuring specific services with IP addresses, etc.).


The process 700 then determines (at 710) whether the route is indicated for potential advertisement. In some embodiments, the process checks the selected route against the prefix list (e.g., the prefix list 250) and the rules database (e.g., the rules database 235) to determine if the route may potentially be advertised. If the selected route is not for advertisement irrespective of its route type (e.g., a static route configured for an SR that is only for internal use), then the route is marked (at 712) as not for advertisement in a local configuration database. This database is used by the network management and control system to generate the configuration data for configuring the gateway device on which the SR resides. The process then proceeds to 715, which is described below.


If the selected route is determined to be conditionally advertised, then the process 700 determines (at 725) the route type and source (e.g., from the prefix list). The process 700 then selects (at 730) one of the SR's peer routers. As described above, these routers with which the SR peers (e.g., using BGP and/or OSPF) may include external routers as well as other SRs in the logical network topology (e.g., in other datacenters).


The process 700 then determines (at 735) whether the route type and/or route source are specified for advertisement to the selected peer. In some embodiments, this determination is based on rules or filters defined by an administrator via the network management and control system, and stored in the rules database while defining the logical network or updating its configuration. Some route types are always advertised to certain peers regardless of their source, whereas other route types may be advertised to one peer but not another. In some cases, the route type may be advertised to one peer but not another peer based on the route source, such as a private subnet which is only advertised to internal logical routers but not to external networks.


If the selected route is specified for advertisement to the selected peer, then the process 700 marks (at 740) the route for advertisement in the local configuration database. This database is used by the network management and control system to generate the configuration data for configuring the gateway device on which the SR resides. The process 700 then determines (at 745) whether any additional peers remain to be evaluated for advertisement of the selected route. If any peers remain, the process returns to 730 to select the next peer router and determine whether the SR will advertise the route to that peer.


Once the process has evaluated whether to advertise the route to each of the peers of the SR, the process 700 determines (at 715) whether there are any additional routes in the SR routing table. If there are additional routes, the process 700 returns to 705 to select the next route, until all routes have been evaluated. Once all of the routes have been evaluated, the process configures (at 720) the gateway device that implements the SR, using the information in the local configuration database. The process 700 then ends.
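A hedged sketch of the overall loop of process 700 follows, assuming simple dictionary shapes for the routing table, prefix list, and local configuration database (none of which are specified in this form by the text); the numbered comments refer to the operations described above.

```python
def build_config(routing_table, prefix_list, rule, peers):
    """Mark each SR route with the peers it should be advertised to."""
    config_db = []
    for route in routing_table:                       # operations 705/715
        entry = prefix_list.get(route["prefix"])
        if entry is None:                             # 710/712: internal-only route
            config_db.append({"prefix": route["prefix"], "advertise_to": []})
            continue
        targets = [
            peer for peer in peers                    # 730/745: evaluate every peer
            if rule(entry["type"], entry["source"], peer)  # 735: type/source check
        ]
        config_db.append({"prefix": route["prefix"], "advertise_to": targets})  # 740
    return config_db  # drives gateway configuration at 720
```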


Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.


In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.



FIG. 8 conceptually illustrates an electronic system 800 with which some embodiments of the invention are implemented. The electronic system 800 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 800 includes a bus 805, processing unit(s) 810, a system memory 825, a read-only memory 830, a permanent storage device 835, input devices 840, and output devices 845.


The bus 805 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 800. For instance, the bus 805 communicatively connects the processing unit(s) 810 with the read-only memory 830, the system memory 825, and the permanent storage device 835.


From these various memory units, the processing unit(s) 810 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.


The read-only-memory (ROM) 830 stores static data and instructions that are needed by the processing unit(s) 810 and other modules of the electronic system. The permanent storage device 835, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 800 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 835.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 835, the system memory 825 is a read-and-write memory device. However, unlike storage device 835, the system memory is a volatile read-and-write memory, such as random-access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 825, the permanent storage device 835, and/or the read-only memory 830. From these various memory units, the processing unit(s) 810 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.


The bus 805 also connects to the input and output devices 840 and 845. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 840 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 845 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.


Finally, bus 805 also couples electronic system 800 to a network 865 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 800 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.


This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.


VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system isolates the containers for different tenants and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.


A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESX hypervisor of VMware Inc.


One of ordinary skill in the art will recognize that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.


In addition, as used in this document, the term data packet, packet, data message, or message refers to a collection of bits in a particular format sent across a network. It should be understood that the term data packet, packet, data message, or message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. While the description above refers to data packets, packets, data messages, or messages, it should be understood that the invention should not be limited to any specific format or type of data message.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, at least one figure conceptually illustrates a process. The specific operations of this process may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A method for implementing a logical router in a logical network, the method comprising: based on logical network configuration data, identifying (i) a route for a set of network addresses to add to a routing table of the logical router and (ii) a route type for the identified route, the logical router having a set of peer routers to which the logical router advertises routes; based on the route type of the identified route, determining separately for each particular peer router of the set of peer routers whether to include the identified route as a route for the logical router to advertise to the particular peer router; and distributing a routing table comprising the identified route to a computing device that implements the logical router, wherein the computing device advertises the identified route to a peer router when the route type is specified for advertisement to the peer router, wherein the computing device advertises the route to a first peer router of the logical router but does not advertise the route to a second peer router of the logical router.
  • 2. The method of claim 1, wherein the computing device does not advertise the identified route when the route type is not specified for advertisement.
  • 3. The method of claim 1, wherein the route type comprises a connected uplink, a connected downlink, and a set of different network service types.
  • 4. The method of claim 3, wherein the set of network service types comprises at least one of network address translation (NAT), load balancing, Internet Protocol security (IPSec), and Dynamic Host Configuration Protocol (DHCP).
  • 5. The method of claim 1, wherein the set of peer routers comprises at least one physical router external to the logical network.
  • 6. The method of claim 1, wherein the logical router is a first logical router implemented by a first computing device in a first datacenter, wherein the set of peer routers comprises a second logical router implemented by a second computing device in a second datacenter.
  • 7. The method of claim 1, wherein the logical router comprises a distributed routing component and a set of centralized routing components, wherein the distributed routing table is a routing table for one of the centralized routing components.
  • 8. The method of claim 7, wherein the distributed routing component is implemented by a plurality of computing devices including the computing device.
  • 9. The method of claim 7, wherein each of the centralized routing components is implemented by a different computing device.
  • 10. A method for implementing a first logical router in a logical network, the method comprising: determining that a route is advertised to the first logical router by a second logical router that connects to the first logical router; identifying (i) the route for a set of network addresses to add to a routing table of the first logical router and (ii) a route type for the route; determining whether to include the route as a route for the first logical router to advertise based on the route type of the route and the second logical router; and when the determination is made that the route has to be advertised, distributing a routing table comprising the route to a computing device that implements the first logical router in order for the computing device to advertise the route.
  • 11. The method of claim 10, wherein the first logical router advertises routes of a particular route type that are advertised to the first logical router from the second logical router and does not advertise routes of the particular route type that are advertised to the first logical router from a third logical router.
  • 12. The method of claim 11, wherein the first logical router is configured by an administrator of a datacenter and provides a connection to external networks for a plurality of tenant logical networks, wherein the second and third logical routers are configured by different datacenter tenants and connect different tenant logical networks to the first logical router.
  • 13. A non-transitory machine-readable medium storing a program executable by at least one processing unit, the program for implementing a logical router in a logical network, the program comprising sets of instructions for: based on logical network configuration data, identifying (i) a route for a set of network addresses to add to a routing table of the logical router and (ii) a route type for the identified route, the logical router having a set of peer routers to which the logical router advertises routes; based on the route type of the identified route, determining separately for each particular peer router of the set of peer routers whether to include the identified route as a route for the logical router to advertise to the particular peer router; and distributing a routing table comprising the identified route to a computing device that implements the logical router, wherein the computing device advertises the identified route to a peer router when the route type is specified for advertisement to the peer router, wherein the computing device advertises the route to a first peer router of the logical router but does not advertise the route to a second peer router of the logical router.
  • 14. The non-transitory machine-readable medium of claim 13, wherein the computing device does not advertise the identified route when the route type is not specified for advertisement.
  • 15. The non-transitory machine-readable medium of claim 13, wherein the first logical router comprises a distributed routing component and a set of centralized routing components, wherein the distributed routing table is a routing table for one of the centralized routing components, wherein the distributed routing component is implemented by a plurality of computing devices including the computing device, wherein each of the centralized routing components is implemented by a different computing device.
  • 16. A non-transitory machine-readable medium storing a program executable by at least one processing unit, the program for implementing a first logical router in a logical network, the program comprising sets of instructions for: determining that a route is advertised to the first logical router by a second logical router that connects to the first logical router; identifying (i) the route for a set of network addresses to add to a routing table of the first logical router and (ii) a route type for the route; determining whether to include the route as a route for the first logical router to advertise based on the route type of the route and the second logical router; and when the determination is made that the route has to be advertised, distributing a routing table comprising the route to a computing device that implements the first logical router in order for the computing device to advertise the route.
  • 17. The non-transitory machine-readable medium of claim 16, wherein the first logical router advertises routes of a particular route type that are advertised to the first logical router from a second logical router and does not advertise routes of the particular route type that are advertised to the first logical router from a third logical router.
Priority Claims (1)
Number Date Country Kind
201841044147 Nov 2018 IN national
US Referenced Citations (340)
Number Name Date Kind
5504921 Dev et al. Apr 1996 A
5550816 Hardwick et al. Aug 1996 A
5751967 Raab et al. May 1998 A
6006275 Picazo et al. Dec 1999 A
6104699 Holender et al. Aug 2000 A
6219699 McCloghrie et al. Apr 2001 B1
6359909 Ito et al. Mar 2002 B1
6456624 Eccles et al. Sep 2002 B1
6512745 Abe et al. Jan 2003 B1
6539432 Taguchi et al. Mar 2003 B1
6680934 Cain Jan 2004 B1
6785843 McRae et al. Aug 2004 B1
6914907 Bhardwaj et al. Jul 2005 B1
6941487 Balakrishnan et al. Sep 2005 B1
6950428 Horst et al. Sep 2005 B1
6963585 Pennec et al. Nov 2005 B1
6999454 Crump Feb 2006 B1
7046630 Abe et al. May 2006 B2
7107356 Baxter et al. Sep 2006 B2
7197572 Matters et al. Mar 2007 B2
7200144 Terrell et al. Apr 2007 B2
7209439 Rawlins et al. Apr 2007 B2
7260648 Tingley et al. Aug 2007 B2
7283473 Arndt et al. Oct 2007 B2
7342916 Das et al. Mar 2008 B2
7391771 Orava et al. Jun 2008 B2
7447197 Terrell et al. Nov 2008 B2
7450598 Chen et al. Nov 2008 B2
7463579 Lapuh et al. Dec 2008 B2
7478173 Delco Jan 2009 B1
7483411 Weinstein et al. Jan 2009 B2
7555002 Arndt et al. Jun 2009 B2
7606260 Oguchi et al. Oct 2009 B2
7630358 Lakhani et al. Dec 2009 B1
7643488 Khanna et al. Jan 2010 B2
7649851 Takashige et al. Jan 2010 B2
7653747 Lucco et al. Jan 2010 B2
7710874 Balakrishnan et al. May 2010 B2
7742459 Kwan et al. Jun 2010 B2
7764599 Doi et al. Jul 2010 B2
7778268 Khan et al. Aug 2010 B2
7792097 Wood et al. Sep 2010 B1
7792987 Vohra et al. Sep 2010 B1
7802000 Huang et al. Sep 2010 B1
7818452 Matthews et al. Oct 2010 B2
7826482 Minei et al. Nov 2010 B1
7839847 Nadeau et al. Nov 2010 B2
7885276 Lin Feb 2011 B1
7936770 Frattura et al. May 2011 B1
7937438 Miller et al. May 2011 B1
7948986 Ghosh et al. May 2011 B1
7953865 Miller et al. May 2011 B1
7987506 Khalid et al. Jul 2011 B1
7991859 Miller et al. Aug 2011 B1
7995483 Bayar et al. Aug 2011 B1
8027260 Venugopal et al. Sep 2011 B2
8027354 Portolani et al. Sep 2011 B1
8031633 Bueno et al. Oct 2011 B2
8046456 Miller et al. Oct 2011 B1
8054832 Shukla et al. Nov 2011 B1
8055789 Richardson et al. Nov 2011 B2
8060875 Lambeth Nov 2011 B1
8131852 Miller et al. Mar 2012 B1
8149737 Metke et al. Apr 2012 B2
8155028 Abu-Hamdeh et al. Apr 2012 B2
8166201 Richardson et al. Apr 2012 B2
8194674 Pagel et al. Jun 2012 B1
8199750 Schultz et al. Jun 2012 B1
8223668 Allan et al. Jul 2012 B2
8224931 Brandwine et al. Jul 2012 B1
8224971 Miller et al. Jul 2012 B1
8239572 Brandwine et al. Aug 2012 B1
8259571 Raphel et al. Sep 2012 B1
8265075 Pandey Sep 2012 B2
8281067 Stolowitz Oct 2012 B2
8312129 Miller et al. Nov 2012 B1
8339959 Moisand et al. Dec 2012 B1
8339994 Gnanasekaran et al. Dec 2012 B2
8345650 Foxworthy et al. Jan 2013 B2
8351418 Zhao et al. Jan 2013 B2
8370834 Edwards et al. Feb 2013 B2
8416709 Marshall et al. Apr 2013 B1
8456984 Ranganathan et al. Jun 2013 B2
8504718 Wang et al. Aug 2013 B2
8559324 Brandwine et al. Oct 2013 B1
8565108 Marshall et al. Oct 2013 B1
8600908 Lin et al. Dec 2013 B2
8611351 Gooch et al. Dec 2013 B2
8612627 Brandwine Dec 2013 B1
8625594 Safrai et al. Jan 2014 B2
8625603 Ramakrishnan et al. Jan 2014 B1
8625616 Vobbilisetty et al. Jan 2014 B2
8627313 Edwards et al. Jan 2014 B2
8644188 Brandwine et al. Feb 2014 B1
8660129 Brendel et al. Feb 2014 B1
8705513 Merwe et al. Apr 2014 B2
8724456 Hong May 2014 B1
8745177 Kazerani et al. Jun 2014 B1
8958298 Zhang et al. Feb 2015 B2
9021066 Singh et al. Apr 2015 B1
9032095 Traina et al. May 2015 B1
9059999 Koponen et al. Jun 2015 B2
9137052 Koponen et al. Sep 2015 B2
9313129 Ganichev et al. Apr 2016 B2
9419855 Ganichev et al. Aug 2016 B2
9485149 Traina et al. Nov 2016 B1
9503321 Neginhal et al. Nov 2016 B2
9559980 Li et al. Jan 2017 B2
9647883 Neginhal et al. May 2017 B2
9749214 Han Aug 2017 B2
9787605 Zhang et al. Oct 2017 B2
10057157 Goliya et al. Aug 2018 B2
10075363 Goliya et al. Sep 2018 B2
10079779 Zhang et al. Sep 2018 B2
10095535 Dubey et al. Oct 2018 B2
10110431 Ganichev et al. Oct 2018 B2
10129142 Goliya et al. Nov 2018 B2
10129180 Zhang et al. Nov 2018 B2
10153973 Dubey Dec 2018 B2
10230629 Masurekar et al. Mar 2019 B2
10270687 Mithyantha Apr 2019 B2
10341236 Boutros et al. Jul 2019 B2
10382321 Boyapati et al. Aug 2019 B1
10411955 Neginhal et al. Sep 2019 B2
10454758 Boutros et al. Oct 2019 B2
10601700 Goliya et al. Mar 2020 B2
10623322 Nallamothu Apr 2020 B1
10700996 Zhang et al. Jun 2020 B2
20010043614 Viswanadham et al. Nov 2001 A1
20020067725 Oguchi et al. Jun 2002 A1
20020093952 Gonda Jul 2002 A1
20020194369 Rawlins et al. Dec 2002 A1
20030041170 Suzuki Feb 2003 A1
20030058850 Rangarajan et al. Mar 2003 A1
20030067924 Choe et al. Apr 2003 A1
20030069972 Yoshimura et al. Apr 2003 A1
20040013120 Shen Jan 2004 A1
20040073659 Rajsic et al. Apr 2004 A1
20040098505 Clemmensen May 2004 A1
20040267866 Carollo et al. Dec 2004 A1
20050018669 Arndt et al. Jan 2005 A1
20050027881 Figueira et al. Feb 2005 A1
20050053079 Havala Mar 2005 A1
20050083953 May Apr 2005 A1
20050120160 Plouffe et al. Jun 2005 A1
20050132044 Guingo et al. Jun 2005 A1
20060002370 Rabie et al. Jan 2006 A1
20060018253 Windisch et al. Jan 2006 A1
20060026225 Canali et al. Feb 2006 A1
20060029056 Perera et al. Feb 2006 A1
20060056412 Page Mar 2006 A1
20060059253 Goodman et al. Mar 2006 A1
20060092940 Ansari et al. May 2006 A1
20060092976 Lakshman et al. May 2006 A1
20060174087 Hashimoto et al. Aug 2006 A1
20060187908 Shimozono et al. Aug 2006 A1
20060193266 Siddha et al. Aug 2006 A1
20060291387 Kimura et al. Dec 2006 A1
20060291388 Amdahl et al. Dec 2006 A1
20070043860 Pabari Feb 2007 A1
20070064673 Bhandaru et al. Mar 2007 A1
20070140128 Klinker et al. Jun 2007 A1
20070156919 Potti et al. Jul 2007 A1
20070165515 Vasseur Jul 2007 A1
20070201357 Smethurst et al. Aug 2007 A1
20070206591 Doviak et al. Sep 2007 A1
20070297428 Bose et al. Dec 2007 A1
20080002579 Lindholm et al. Jan 2008 A1
20080002683 Droux et al. Jan 2008 A1
20080013474 Nagarajan et al. Jan 2008 A1
20080049621 McGuire et al. Feb 2008 A1
20080049646 Lu Feb 2008 A1
20080059556 Greenspan et al. Mar 2008 A1
20080071900 Hecker et al. Mar 2008 A1
20080086726 Griffith et al. Apr 2008 A1
20080151893 Nordmark et al. Jun 2008 A1
20080159301 Heer Jul 2008 A1
20080189769 Casado et al. Aug 2008 A1
20080225853 Melman et al. Sep 2008 A1
20080240122 Richardson et al. Oct 2008 A1
20080253366 Zuk et al. Oct 2008 A1
20080253396 Olderdissen Oct 2008 A1
20080291910 Tadimeti et al. Nov 2008 A1
20090031041 Clemmensen Jan 2009 A1
20090043823 Iftode et al. Feb 2009 A1
20090064305 Stiekes et al. Mar 2009 A1
20090083445 Ganga Mar 2009 A1
20090092137 Haigh et al. Apr 2009 A1
20090122710 Bar-Tor et al. May 2009 A1
20090150527 Tripathi et al. Jun 2009 A1
20090161547 Riddle et al. Jun 2009 A1
20090249470 Litvin et al. Oct 2009 A1
20090249473 Cohn Oct 2009 A1
20090279536 Unbehagen et al. Nov 2009 A1
20090292858 Lambeth et al. Nov 2009 A1
20090300210 Ferris Dec 2009 A1
20090303880 Maltz et al. Dec 2009 A1
20100002722 Porat et al. Jan 2010 A1
20100046531 Louati et al. Feb 2010 A1
20100107162 Edwards et al. Apr 2010 A1
20100115101 Lain et al. May 2010 A1
20100131636 Suri et al. May 2010 A1
20100153554 Anschutz et al. Jun 2010 A1
20100153701 Shenoy et al. Jun 2010 A1
20100162036 Linden et al. Jun 2010 A1
20100165877 Shukla et al. Jul 2010 A1
20100169467 Shukla et al. Jul 2010 A1
20100192225 Ma et al. Jul 2010 A1
20100205479 Akutsu et al. Aug 2010 A1
20100214949 Smith et al. Aug 2010 A1
20100275199 Smith et al. Oct 2010 A1
20100290485 Martini et al. Nov 2010 A1
20100318609 Lahiri et al. Dec 2010 A1
20100322255 Hao et al. Dec 2010 A1
20110016215 Wang Jan 2011 A1
20110022695 Dalal et al. Jan 2011 A1
20110026537 Kolhi et al. Feb 2011 A1
20110032830 Merwe et al. Feb 2011 A1
20110032843 Papp et al. Feb 2011 A1
20110075664 Lambeth et al. Mar 2011 A1
20110075674 Li et al. Mar 2011 A1
20110085557 Gnanasekaran et al. Apr 2011 A1
20110085559 Chung et al. Apr 2011 A1
20110103259 Aybay et al. May 2011 A1
20110119748 Edwards et al. May 2011 A1
20110134931 Merwe et al. Jun 2011 A1
20110142053 Merwe et al. Jun 2011 A1
20110149964 Judge et al. Jun 2011 A1
20110149965 Judge et al. Jun 2011 A1
20110194567 Shen Aug 2011 A1
20110205931 Zhou et al. Aug 2011 A1
20110261825 Ichino Oct 2011 A1
20110283017 Alkhatib et al. Nov 2011 A1
20110299534 Koganti et al. Dec 2011 A1
20110310899 Alkhatib et al. Dec 2011 A1
20110317703 Dunbar et al. Dec 2011 A1
20120014386 Xiong et al. Jan 2012 A1
20120014387 Dunbar et al. Jan 2012 A1
20120131643 Cheriton May 2012 A1
20120155467 Appenzeller Jun 2012 A1
20120182992 Cowart et al. Jul 2012 A1
20120236734 Sampath et al. Sep 2012 A1
20130007740 Kikuchi et al. Jan 2013 A1
20130044636 Koponen et al. Feb 2013 A1
20130044641 Koponen et al. Feb 2013 A1
20130051399 Zhang et al. Feb 2013 A1
20130058225 Casado et al. Mar 2013 A1
20130058229 Casado et al. Mar 2013 A1
20130058335 Koponen et al. Mar 2013 A1
20130058350 Fulton Mar 2013 A1
20130058353 Koponen et al. Mar 2013 A1
20130060940 Koponen et al. Mar 2013 A1
20130094350 Mandal et al. Apr 2013 A1
20130103817 Koponen et al. Apr 2013 A1
20130103818 Koponen et al. Apr 2013 A1
20130132536 Zhang et al. May 2013 A1
20130142048 Gross, IV et al. Jun 2013 A1
20130148541 Zhang et al. Jun 2013 A1
20130148542 Zhang et al. Jun 2013 A1
20130148543 Koponen et al. Jun 2013 A1
20130148656 Zhang et al. Jun 2013 A1
20130151661 Koponen et al. Jun 2013 A1
20130151676 Thakkar et al. Jun 2013 A1
20130208621 Manghirmalani et al. Aug 2013 A1
20130212148 Koponen et al. Aug 2013 A1
20130223444 Liljenstolpe et al. Aug 2013 A1
20130230047 Subrahmaniam et al. Sep 2013 A1
20130266007 Kumbhare et al. Oct 2013 A1
20130266015 Qu et al. Oct 2013 A1
20130266019 Qu et al. Oct 2013 A1
20130268799 Mestery et al. Oct 2013 A1
20130329548 Nakil et al. Dec 2013 A1
20130332602 Nakil et al. Dec 2013 A1
20130332619 Xie et al. Dec 2013 A1
20130339544 Mithyantha Dec 2013 A1
20140003434 Assarpour et al. Jan 2014 A1
20140016501 Kamath et al. Jan 2014 A1
20140059226 Messerli et al. Feb 2014 A1
20140146817 Zhang May 2014 A1
20140173093 Rabeela et al. Jun 2014 A1
20140195666 Dumitriu et al. Jul 2014 A1
20140229945 Barkai et al. Aug 2014 A1
20140241247 Kempf et al. Aug 2014 A1
20140269299 Koornstra Sep 2014 A1
20140328350 Hao et al. Nov 2014 A1
20140372582 Ghanwani et al. Dec 2014 A1
20140376550 Khan et al. Dec 2014 A1
20150016300 Devireddy et al. Jan 2015 A1
20150063360 Thakkar et al. Mar 2015 A1
20150063364 Thakkar et al. Mar 2015 A1
20150089082 Patwardhan et al. Mar 2015 A1
20150092594 Zhang et al. Apr 2015 A1
20150103838 Zhang et al. Apr 2015 A1
20150188770 Naiksatam et al. Jul 2015 A1
20150222550 Anand Aug 2015 A1
20150263897 Ganichev et al. Sep 2015 A1
20150263946 Tubaltsev et al. Sep 2015 A1
20150263952 Ganichev et al. Sep 2015 A1
20150271011 Neginhal et al. Sep 2015 A1
20150271303 Neginhal et al. Sep 2015 A1
20150299880 Jorge et al. Oct 2015 A1
20160105471 Nunes et al. Apr 2016 A1
20160119229 Zhou Apr 2016 A1
20160182287 Chiba et al. Jun 2016 A1
20160191374 Singh et al. Jun 2016 A1
20160226700 Zhang et al. Aug 2016 A1
20160226754 Zhang et al. Aug 2016 A1
20160226762 Zhang et al. Aug 2016 A1
20160261493 Li Sep 2016 A1
20160294612 Ravinoothala et al. Oct 2016 A1
20160344586 Ganichev et al. Nov 2016 A1
20170005923 Babakian Jan 2017 A1
20170048129 Masurekar et al. Feb 2017 A1
20170048130 Goliya et al. Feb 2017 A1
20170063632 Goliya et al. Mar 2017 A1
20170063633 Goliya et al. Mar 2017 A1
20170064717 Filsfils et al. Mar 2017 A1
20170070425 Mithyantha Mar 2017 A1
20170126497 Dubey et al. May 2017 A1
20170180154 Duong Jun 2017 A1
20170230241 Neginhal et al. Aug 2017 A1
20170317919 Fernando et al. Nov 2017 A1
20180006943 Dubey Jan 2018 A1
20180062914 Boutros et al. Mar 2018 A1
20180097734 Boutros et al. Apr 2018 A1
20180367442 Goliya et al. Dec 2018 A1
20190018701 Dubey et al. Jan 2019 A1
20190020580 Boutros Jan 2019 A1
20190020600 Zhang et al. Jan 2019 A1
20190109780 Nagarkar Apr 2019 A1
20190124004 Dubey Apr 2019 A1
20190190885 Krug Jun 2019 A1
20190199625 Masurekar et al. Jun 2019 A1
20190245783 Mithyantha Aug 2019 A1
20190281133 Tomkins Sep 2019 A1
20190312812 Boutros et al. Oct 2019 A1
20190334767 Neginhal et al. Oct 2019 A1
20200021483 Boutros et al. Jan 2020 A1
20200186468 Basavaraj et al. Jun 2020 A1
20200195607 Wang et al. Jun 2020 A1
Foreign Referenced Citations (29)
Number Date Country
1442987 Sep 2003 CN
1714548 Dec 2005 CN
103890751 Jun 2014 CN
103947164 Jul 2014 CN
104335553 Feb 2015 CN
1653688 May 2006 EP
2838244 Feb 2015 EP
3013006 Apr 2016 EP
2000244567 Sep 2000 JP
2003069609 Mar 2003 JP
2003124976 Apr 2003 JP
2003318949 Nov 2003 JP
2011139299 Jul 2011 JP
2011228864 Nov 2011 JP
2014534789 Dec 2014 JP
1020110099579 Sep 2011 KR
2005112390 Nov 2005 WO
2008095010 Aug 2008 WO
2013020126 Feb 2013 WO
2013026049 Feb 2013 WO
2013055697 Apr 2013 WO
2013081962 Jun 2013 WO
2013143611 Oct 2013 WO
2013184846 Dec 2013 WO
2015015787 Feb 2015 WO
2015142404 Sep 2015 WO
2016123550 Aug 2016 WO
2017027073 Feb 2017 WO
2018044746 Mar 2018 WO
Non-Patent Literature Citations (25)
Agarwal, Sugam, et al., “Traffic Engineering in Software Defined Networks,” 2013 Proceedings IEEE INFOCOM, Apr. 14, 2013, 10 pages, Bell Labs, Alcatel-Lucent, Holmdel, NJ, USA.
Aggarwal, R., et al., “Data Center Mobility based on E-VPN, BGP/MPLS IP VPN, IP Routing and NHRP,” draft-raggarwa-data-center-mobility-05.txt, Jun. 10, 2013, 24 pages, Internet Engineering Task Force, IETF, Geneva, Switzerland.
Author Unknown, “VMware® NSX Network Virtualization Design Guide,” Month Unknown 2013, 32 pages, Item No. VMW-NSX-NTWK-VIRT-DESN-GUIDE-V2-101, VMware, Inc., Palo Alto, CA, USA.
Ballani, Hitesh, et al., “Making Routers Last Longer with ViAggre,” NSDI '09: 6th USENIX Symposium on Networked Systems Design and Implementation, Apr. 2009, 14 pages, USENIX Association.
Caesar, Matthew, et al., "Design and Implementation of a Routing Control Platform," NSDI '05: 2nd Symposium on Networked Systems Design & Implementation, Apr. 2005, 14 pages, USENIX Association.
Dumitriu, Dan Mihai, et al., U.S. Appl. No. 61/514,990, filed Aug. 4, 2011.
Fernando, Rex, et al., “Service Chaining using Virtual Networks with BGP,” Internet Engineering Task Force, IETF, Jul. 7, 2015, 32 pages, Internet Society (ISOC), Geneva, Switzerland, available at https://tools.ietf.org/html/draft-fm-bess-service-chaining-01.
Handley, Mark, et al., “Designing Extensible IP Router Software,” Proc. of NSDI, May 2005, 14 pages.
Koponen, Teemu, et al., “Network Virtualization in Multi-tenant Datacenters,” Technical Report TR-2013-001E, Aug. 2013, 22 pages, VMware, Inc., Palo Alto, CA, USA.
Lakshminarayanan, Karthik, et al., “Routing as a Service,” Report No. UCB/CSD-04-1327, Month Unknown 2004, 16 pages, Computer Science Division (EECS), University of California—Berkeley, Berkeley, California.
Lowe, Scott, “Learning NSX, Part 14: Using Logical Routing,” Scott's Weblog: The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view, Jun. 20, 2014, 8 pages, available at https://blog.scottlowe.org/2014/06/20/learning-nsx-part-14-using-logical-routing/.
Maltz, David A., et al., "Routing Design in Operational Networks: A Look from the Inside," SIGCOMM '04, Aug. 30-Sep. 3, 2004, 14 pages, ACM, Portland, Oregon, USA.
Non-published commonly owned U.S. Appl. No. 16/210,410, filed Dec. 5, 2018, 29 pages, VMware, Inc.
Non-published commonly owned U.S. Appl. No. 16/218,433, filed Dec. 12, 2018, 27 pages, VMware, Inc.
Rosen, E., “Applicability Statement for BGP/MPLS IP Virtual Private Networks (VPNs),” RFC 4365, Feb. 2006, 32 pages, The Internet Society.
Sajassi, Ali, et al., "Integrated Routing and Bridging in EVPN draft-sajassi-l2vpn-evpn-inter-subnet-forwarding-04," Jul. 4, 2014, 24 pages.
Shenker, Scott, et al., “The Future of Networking, and the Past of Protocols,” Dec. 2, 2011, 30 pages, USA.
Wang, Anjing, et al., “Network Virtualization: Technologies, Perspectives, and Frontiers,” Journal of Lightwave Technology, Feb. 15, 2013, 15 pages, IEEE.
Wang, Yi, et al., “Virtual Routers on the Move: Live Router Migration as a Network-Management Primitive,” SIGCOMM '08, Aug. 17-22, 2008, 12 pages, ACM, Seattle, Washington, USA.
Non-published commonly owned U.S. Appl. No. 16/823,050, filed Mar. 18, 2020, 79 pages, Nicira, Inc.
Author Unknown, “Cisco Border Gateway Protocol Control Plane for Virtual Extensible LAN,” White Paper, Jan. 23, 2015, 6 pages, Cisco Systems, Inc.
Author Unknown, “Cisco Data Center Spine-and-Leaf Architecture: Design Overview,” White Paper, Apr. 15, 2016, 27 pages, Cisco Systems, Inc.
Moreno, Victor, “VXLAN Deployment Models—A Practical Perspective,” Cisco Live 2015 Melbourne, Mar. 6, 2015, 72 pages, BRKDCT-2404, Cisco Systems, Inc.
Non-published commonly owned U.S. Appl. No. 16/581,118, filed Sep. 24, 2019, 36 pages, Nicira, Inc.
Non-published commonly owned U.S. Appl. No. 16/868,524, filed May 6, 2020, 105 pages.
Related Publications (1)
Number Date Country
20200169496 A1 May 2020 US